Apple Watch App
Warning: this app is only for nerds and math geeks! Do not expect any real value.

Prime Date & Time
You can calculate the next prime times. A time is a prime time if its digit representation, consisting of the year, month, day, hour and minute, is a prime number. For example, April 24, 2016 at 8:23 pm is a prime time (2016 04 24 20 23 is prime).

Prime Factors
Decompose the current date or time into its prime factors. In the table you find more information about the current date or time.

Calendar
Check the calendar for prime dates. Watch out for prime Sundays (Primsonntag). Don't forget to give the mathematicians in your circle of friends a prime Sunday present.

Ulam Spiral
The Ulam spiral starts at the point you selected in the Settings section. Switch the presentation of Ulam spirals in the settings. More options, including the semiprimes, with a long press on the spiral.
Twin primes (pink): a prime number p is called a twin prime if p+2 is also prime.
Cousin primes (brown): a prime number p is called a cousin prime if p+4 is also prime.
Sexy primes (red): no explanation necessary.
Sophie Germain (black): a prime number p is a Sophie Germain prime if 2p+1 is also prime.
All other primes are shown in blue.
Start the play mode from the toolbar and feel the primes ticking. There is also a fast mode in the toolbar, calculating 41 numbers per second. Or step through the numbers by swiping left or right on the spiral.

Factors
The graphical representation of the prime factorization starts at the point you selected in the settings. Multiple factors will shine brighter than others. Try it by setting the start number to 2 and switching the view mode to Number. You can animate the pictures with the toolbar at the bottom, or you can manually step through the numbers by swiping left or right.

Primeness
There are different options for the definition of the primeness of a number. In the settings you can switch between the following possibilities:
Distance: the distance to the next prime number relative to the span from the previous prime to the next prime.
Number of divisors: the presentation shows the log of the number of divisors.
Sum of divisors: the presentation shows the log of the sum of divisors. Minimal values in red.
The number of columns depends on the selected view mode:
Date: the top left corner is January 1st of the current year. Each row shows one month.
Time: the top left corner represents midnight. Each row shows one hour with 60 minutes.
Moment: the top left corner represents the start of the current hour. Each row represents one minute and each tick a second.
Number: the start point depends on the chosen number in the settings and starts with the last digit of zero. Each column represents the last digit of the tested numbers.

Prime Game
Find the factorization of a number. At least one of the buttons at the bottom is a correct factor.

Notification
The app will notify you - even on your watch - about the next prime times. Do not get nervous, the spook will end after five prime moments.

Share
Share your prime moments via Mail, Facebook or Twitter #primetimer. Use the Share button in the navigation bar.

Synaesthesia
Look at the top of the Info screen: which color do the numbers have? If you see primes in red, maybe you should check for savant syndrome. What color do the twin primes have? Can you feel the prime numbers?

Watch Support
Select the complication of your choice to stay informed about the next prime times. With a deep press on the prime watch, you can change the mode of the presentation.

32-bit iPhones prior to the 5s: due to limited computing power you will have to wait up to ten seconds at the first start of the app. A 64-bit iPhone (5s, 6, 6s, 6+, 6s+, SE) is recommended.
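The prime-time rule described above is easy to reproduce. The sketch below (Python, not part of the app) checks whether a YYYYMMDDHHMM date code is prime and steps forward to the next prime time; the helper names `is_prime`, `date_code`, and `next_prime_time` are our own, chosen for illustration.

```python
from datetime import datetime, timedelta

def is_prime(n: int) -> bool:
    """Trial division; fast enough for 12-digit date codes."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def date_code(t: datetime) -> int:
    """Concatenate year, month, day, hour, minute: YYYYMMDDHHMM."""
    return int(t.strftime("%Y%m%d%H%M"))

def next_prime_time(t: datetime) -> datetime:
    """Advance minute by minute until the date code is prime."""
    t = t.replace(second=0, microsecond=0)
    while not is_prime(date_code(t)):
        t += timedelta(minutes=1)
    return t
```

For the example in the text, `date_code(datetime(2016, 4, 24, 20, 23))` gives 201604242023, the number the app tests for primality.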
Multiplying Fractions By Whole Numbers With Models Worksheets
Multiplying Fractions By Whole Numbers With Models Worksheets serve as foundational tools in mathematics, offering a structured yet flexible way for students to explore and understand mathematical concepts. These worksheets take a systematic approach to understanding numbers, nurturing a solid foundation on which mathematical proficiency can grow. From the simplest counting exercises to the intricacies of more advanced calculations, Multiplying Fractions By Whole Numbers With Models Worksheets cater to students of diverse ages and ability levels.
Introducing the Essence of Multiplying Fractions By Whole Numbers With Models Worksheets
Save time and gather data with this NO PREP fraction activity, perfect for math centers. In this deck, students will multiply to find fractions of a whole number. The first 20 cards provide a bar model visual to help students grasp the concept of finding a fraction of a whole number; the last 10 c
Open doors to ample practice with our worksheets on multiplying fractions on a number line, featuring exercises to draw hops to find the product, write the multiplication equation using the number line model, find the missing
At their core, Multiplying Fractions By Whole Numbers With Models Worksheets are vehicles for conceptual understanding. They encapsulate a myriad of mathematical concepts, guiding students through the maze of numbers with a series of engaging and purposeful exercises. These worksheets go beyond the bounds of typical rote learning, encouraging active engagement and promoting an intuitive understanding of mathematical relationships.
Supporting Number Sense and Reasoning
Multiplying Fractions Whole Numbers Worksheets
This worksheet gives students multiple problems where they are required to multiply a whole number and a fraction. Each problem has a visual representation to help students see the fractions more concretely. Use it as practice to support learning or as a quiz to assess mastery of this Fourth Grade Common Core skill.
4th grade, Unit 9: Multiply fractions. Multiplying fractions and whole numbers visually; multiply fractions and whole numbers with fraction models. A sample exercise asks how a shaded area can be calculated, with students choosing two equivalent expressions.
The heart of Multiplying Fractions By Whole Numbers With Models Worksheets lies in cultivating number sense: a deep understanding of numbers' meanings and relationships. They encourage exploration, inviting learners to investigate arithmetic operations, recognize patterns, and unlock the mysteries of sequences. Through thought-provoking challenges and practical problems, these worksheets become gateways to honing reasoning abilities, nurturing the analytical minds of budding mathematicians.
From Theory to Real-World Application
Multiplying Fractions With Area Model Worksheet Heswan
Multiplying Fractions With Models (Whole Numbers), Twinkl: Multiplying Fractions by Whole Numbers Word Problems with Models Activity (rated 3.7, 3 reviews). Aligned standards: CCSS 5.NF.B.4, 5.NF.B.4.A, 5.NF.B.6, 4.NF.B.4, 4.NF.B.4.C. Fourth Grade Math Word Problems. A free account includes thousands of FREE teaching resources to download.
This worksheet has an explanation box to show students how to multiply fractions, followed by 11 practice problems (5th and 6th grades; view PDF). Word problems: multiply fractions to solve each word problem; make sure to simplify if possible and show your work (4th through 6th grades; view PDF). Task cards.
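As a quick illustration of the skill these resources practice, here is a small sketch using Python's `fractions` module; the function name is ours, not from any worksheet, and it mirrors the repeated-addition model the worksheets teach.

```python
from fractions import Fraction

def whole_times_fraction(whole, numerator, denominator):
    """Repeated-addition model: 4 x 2/3 = 2/3 + 2/3 + 2/3 + 2/3 = 8/3."""
    return Fraction(numerator, denominator) * whole

product = whole_times_fraction(4, 2, 3)
print(product)  # 8/3
# Convert to a mixed number: 8/3 = 2 and 2/3
whole_part, remainder = divmod(product.numerator, product.denominator)
print(whole_part, Fraction(remainder, product.denominator))  # 2 2/3
```

`Fraction` keeps results in lowest terms automatically, which matches the "simplify if possible" instruction above.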
Multiplying Fractions By Whole Numbers With Models Worksheets act as bridges connecting theoretical abstractions with the tangible realities of daily life. By infusing practical situations into mathematical exercises, learners witness the relevance of numbers in their surroundings. From budgeting and measurement conversions to interpreting statistical information, these worksheets equip students to apply their mathematical expertise beyond the confines of the classroom.
Varied Tools and Techniques
Adaptability is inherent in Multiplying Fractions By Whole Numbers With Models Worksheets, which employ an arsenal of pedagogical tools to accommodate diverse learning styles. Visual aids such as number lines, manipulatives, and digital resources serve as companions in visualizing abstract ideas. This varied approach ensures inclusivity, accommodating learners with different preferences, strengths, and cognitive styles.
Inclusivity and Cultural Relevance
In an increasingly diverse world, Multiplying Fractions By Whole Numbers With Models Worksheets embrace inclusivity. They transcend cultural borders, incorporating examples and problems that resonate with learners from varied backgrounds. By integrating culturally relevant contexts, these worksheets foster an environment where every learner feels represented and valued, strengthening their connection with mathematical principles.
Crafting a Path to Mathematical Mastery
Multiplying Fractions By Whole Numbers With Models Worksheets chart a course toward mathematical fluency. They instill perseverance, critical thinking, and problem-solving skills, essential attributes not just in mathematics but in many facets of life. These worksheets encourage learners to navigate the intricate terrain of numbers, nurturing a profound appreciation for the beauty and logic inherent in mathematics.
Embracing the Future of Education
In an era marked by technological advancement, Multiplying Fractions By Whole Numbers With Models Worksheets seamlessly adapt to digital platforms. Interactive interfaces and digital resources enhance traditional learning, offering immersive experiences that transcend spatial and temporal boundaries. This blend of conventional methods with technological innovation heralds a promising age in education, fostering a more dynamic and engaging learning environment.
Final Thought: Embracing the Magic of Numbers
Multiplying Fractions By Whole Numbers With Models Worksheets embody the magic inherent in mathematics: an enchanting journey of exploration, discovery, and mastery. They transcend conventional pedagogy, serving as catalysts for sparking the fires of curiosity and inquiry. With Multiplying Fractions By Whole Numbers With Models Worksheets, learners embark on an odyssey, unlocking the enigmatic world of numbers, one problem and one solution at a time.
Multiplying Fractions By Whole Numbers Worksheet
Multiplying Fractions By Whole Numbers Worksheets Teaching Resources
Check more of Multiplying Fractions By Whole Numbers With Models Worksheets below
Fractions Multiplied By Whole Numbers Worksheets
Multiplying Unit Fractions By Whole Numbers Worksheet Download
Multiply Fractions By Whole Numbers Worksheet
Multiplying Fractions Area Model Worksheet
Worksheet Multiplying Fractions By Whole Numbers Worksheets Grass Fedjp Worksheet Study Site
Multiplying Fractions By Whole Numbers Worksheets
Multiplying Fractions By Whole Numbers K5 Learning
Math worksheets: multiplying fractions by whole numbers. Below are six versions of our grade 5 math worksheet where students are asked to find the product of whole numbers and proper fractions. These worksheets are pdf files: Worksheet 1, Worksheet 2, Worksheet 3, Worksheet 4, Worksheet 5, Worksheet 6.
How To Multiply Fractions By Whole Numbers Poster Twinkl
Multiplying Fractions Using Models Worksheet
Multiply Fractions By Whole Numbers Like A Pro With These Worksheets Style Worksheets
Modeling and Analysis of Sea-Surface Vehicle System for Underwater Mapping Using Single-Beam Echosounder
Electrical and Electronic Engineering Department, Zonguldak Bulent Ecevit University, 67100 Zonguldak, Turkey
Geomatic Engineering Department, Zonguldak Bulent Ecevit University, 67100 Zonguldak, Turkey
Electrical and Electronic Engineering Department, Middle East Technical University, 06800 Ankara, Turkey
Author to whom correspondence should be addressed.
Submission received: 8 August 2022 / Revised: 13 September 2022 / Accepted: 14 September 2022 / Published: 22 September 2022
Detailed knowledge on the shape of the seafloor is crucial for many researchers. Bathymetric data are critical for navigational safety and are used for underwater mapping. This study develops a
sea-surface vehicle (SSV) system for underwater mapping by using both bathymetric data from a low-cost single-beam echosounder located on the SSV, and the navigation data of the SSV. The navigation
of the SSV was obtained using a global positioning system (GPS). The effect of changing bathymetric and navigation data due to external disturbances such as wind and waves on the map was analyzed.
The sea-bottom slope angles, which are effective in changing bathymetric data, were estimated and corrected in relation to the estimated angles in a particular mapped area for more accurate
underwater mapping. Additionally, the effects of the grid range of the mapped area, beam angle of the echosounder, and position of the echosounder on the underwater mapping were analyzed. These
analyses were based on simulation data, and were performed in a MATLAB, HYPACK, and Global Mapper environment. An underwater map was also obtained in the Kozlu/Zonguldak area, Black Sea by using a
single-beam echosounder located on the SSV. This map was improved by estimating sea-bottom slope angles and the corrected bathymetric data to obtain a more accurate underwater map of the area. The
experimental and simulation results were compared, focusing on the sea-bottom slope changes, sea-surface disturbances, bathymetry grid range changes, and draft effects.
1. Introduction
Underwater topography is important for the design and application of water structures such as pipelines, seaports, and breakwaters in seas and lakes [
]. The study of underwater topography is bathymetry, and the measurement process is called hydrographic surveying. Underwater mapping techniques are commonly named according to the equipment used for
measurements, such as lath, rope, and wire sounders. These techniques have been used to measure the depth from the seafloor for underwater mapping, referred to as bathymetric data. The current
technology uses acoustic sounders. Although bathymetric measurements using multibeam echosounders are faster and more sensitive for underwater mapping, single-beam echosounders are cheaper [
]. Therefore, practitioners commonly prefer to apply the single-beam technique [
]. In this study, we investigate the underwater mapping problem using bathymetric data measured using a single-beam echosounder; both simulation and experimental data were obtained.
Error sources in hydrographic surveying detected through echosounders are based on electrical and acoustical factors, such as sound velocity variation and signal travel time (clock) errors, and the
change effects of the sea surface and seafloor [
An important factor affecting bathymetric measurements made with a single-beam echosounder is the seafloor slope angle [
]. In previous studies, many methods were used to describe the seafloor. Backscatter characteristics are commonly used in seafloor characterization. The most significant uncertainty in backscatter
data is the effect of the seafloor slope. A standard method of seafloor slope estimation and correction was proposed to achieve repeatable and accurate backscatter results [
]. The correlation backscatter characteristics and the seafloor-insonification area changes with beam width, incidence angle, water depth, and sonar pulse length were obtained [
]. The seafloor was classified according to frequency shifts that occurred when a high-frequency pulse backscattered from the seafloor [
], and the measured depth was corrected with the frequency shifts. In another study, the seafloor was classified according to echo durations, such as short, long, and extended echoes [
]. An algorithm was presented for the automatic compensation of seismic amplitudes for the seafloor slope and depth [
]. In the inclined seafloor, accurate measurements of the depth were achieved by turning the echosounder head on the basis of the value of the angle of inclination [
]. The challenges associated with evaluating the bottom-angle estimation performance for using a multibeam echosounder were detailed in a prior work [
]. Bathymetric data were measured by integrating the echosounder on both surface and underwater vehicles [
]. The seafloor slope angle distribution of the bathymetry-measuring echosounder integrated with an autonomous underwater vehicle was established. The slope angle was calculated, and the seafloor
displacement error was estimated [
]. However, in all these studies, the angle of the seafloor slope was considered in one axis. In contrast to the two-dimensional state of the seafloor in previous studies [
], the slope angles for the three-dimensional seafloor are defined in (
) and (
Because a single-beam echosounder measures the closest distance between the seafloor and sea-surface vehicle, bathymetric data measured using an echosounder should be corrected when the seafloor is
inclined. Actual bathymetric data are the depth at which the sonar is perpendicular to the sea bottom [
]. In the case of an inclined sea bottom, the chosen grid range is effective for bathymetric measurements [
]. The effect of the grid range on measured bathymetry was analyzed with a digital terrain model for the sloping seafloor. This shows that the grid range can be selected on the basis of local point
density, as was the aim of the investigation in [
]. The grid range’s impact in interpolating sparse bathymetric data was established through a direct Monte Carlo simulation [
]. The grid range value should be selected without missed points in the mapped area, as in (
), and bathymetric measurements should be performed by dividing the mapped area into equal grid ranges. In the case of the bathymetric data obtained using a single-beam sonar, a narrow or wide beam
angle affects the accuracy of the underwater map [
]. The beam angle depends on the transducer size of the sonar and the wavelength. The higher the frequency and the larger the transducer are, the narrower the beam angle is [
]. The measured depth value may change because of the oscillation of the vehicle in which the sonar is integrated with the external disturbance effect, and the positions in the x and y axes based on
the measured depth should be corrected according to this oscillation value [
In this study, the effects of the sea-bottom slope, grid range, beam angle, external disturbances, and value of the transducer (draft of echosounder) were analyzed on the basis of bathymetric data of
an underwater map in a three-dimensional (3D) inclined sea bottom. The main contributions of this study are as follows:
• In previous studies, bathymetric data measured using a single-beam sonar were analyzed for cases in which the seafloor was inclined in only one axis. In contrast, two axes’ seafloor slope angles
are proposed and discussed here for a 3D seafloor.
• The measured bathymetric data are corrected when the seafloor angles are inclined in two axes after the seafloor angles are estimated using the proposed approach.
• To avoid missing the bathymetric measurement of any point in the mapped area, we successfully selected a grid range value on the basis of its geometry.
• The effects of the sonar beam angle, external disturbance, draft of the sonar on the measured bathymetric data and the underwater map, the seafloor slope, and grid range were analyzed in detail.
The remainder of this paper is organized as follows. Section 2 presents the single-beam echosounder model. We provide a detailed definition of the effects on the accuracy of the underwater map in Section 3. Section 4 presents an underwater mapping simulator. Section 5 details the underwater mapping experiments. Lastly, the conclusion and future work are presented in Section 6.
2. Single-Beam Echosounder Model
A sound navigation and ranging (sonar) device, that is, the echosounder, uses an acoustic signal to measure the depth from the sea floor [
]. Depth measurement is affected by the electrical and acoustic parameters of the sonar device. Acoustic parameters such as frequency, bandwidth, and signal length determine the propagation
characteristics of underwater acoustic signals. The sonar equation can be used to understand and analyze the sonar performance. This equation is composed of the electrical parameters of the sonar,
and it defines the signal or sound detection as echo excess (EE):
$EE = SL - 2\,TL - (NL - DI) + BS - DT$

[ ], where SL is the source level, TL is the transmission loss, NL is the noise level, DI is the directivity index, BS is the bottom backscattering strength, and DT is the detection threshold.
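As a minimal sketch, the sonar equation can be evaluated directly. Here the noise and directivity terms are grouped in the standard active-sonar form, $-(NL - DI)$, and all numeric values below are illustrative placeholders, not parameters from the paper.

```python
def echo_excess(sl, tl, nl, di, bs, dt):
    """Active sonar equation: EE = SL - 2TL - (NL - DI) + BS - DT (all terms in dB)."""
    return sl - 2.0 * tl - (nl - di) + bs - dt

# Placeholder values in dB, chosen only to exercise the formula:
ee = echo_excess(sl=210.0, tl=60.0, nl=50.0, di=20.0, bs=-20.0, dt=10.0)
# A positive echo excess means the bottom return should be detectable.
```

This makes it easy to see how, for example, doubling the one-way transmission loss TL dominates the budget at greater depths.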
In this study, a single-beam echosounder was used to measure the depth from the seafloor. This echosounder was integrated into a sea-surface vehicle. After the acoustic signal had been sent from a
single-beam echosounder, it reached the sea floor, and the first returning signal was received from the echosounder [
Figure 1 shows the beam coverage of the seafloor as conical for a single-beam echosounder at the given grid range ($g_r$). The beam coverage of the seafloor, $a$, and the depth, $h_m$, are calculated as

$a = 2 h_m \tan\frac{\varphi}{2}, \qquad h_m = \frac{1}{2}\,\Delta t \cdot c$

where $a$ is the diameter of the area covered by the echo, $h_m$ is the depth value, $\varphi$ is the beam angle, $\Delta t$ is the time interval between sending and receiving the transmitted signal, and $c$ is the speed of the acoustic signal.
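Both relations translate directly into code. The 1500 m/s sound speed below is a nominal assumption for seawater, not a value taken from the paper, and the function names are ours.

```python
import math

def beam_footprint_diameter(h_m, beam_angle_deg):
    """a = 2 * h_m * tan(phi/2): diameter of the insonified circle on the seafloor."""
    return 2.0 * h_m * math.tan(math.radians(beam_angle_deg) / 2.0)

def depth_from_travel_time(dt_seconds, sound_speed=1500.0):
    """h_m = (1/2) * dt * c: convert two-way travel time to one-way depth."""
    return 0.5 * dt_seconds * sound_speed

# At 20 m depth with a 10 degree beam, the footprint is about 3.5 m wide.
a = beam_footprint_diameter(20.0, 10.0)
```

The footprint grows linearly with depth, which is why resolution degrades in deeper water for a fixed beam angle.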
The single-beam echosounder measures the distance between the vehicle on which it is located and the seafloor [ ]. This distance can also be obtained from the position of the vehicle as follows:

$h_m = \sqrt{(x - x_s)^2 + (y - y_s)^2 + (z - z_s)^2}$

where $h_m$ denotes the measured bathymetric data, $(x_s, y_s, z_s)$ denotes the position of the single-beam echosounder, and $(x, y, z)$ denotes the position in the mapped area.
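This is the ordinary Euclidean distance between the sensor and a seafloor point, for example:

```python
import math

def measured_depth(sensor_xyz, point_xyz):
    """Euclidean distance between the echosounder position and a seafloor point."""
    return math.dist(sensor_xyz, point_xyz)

# Sensor at the origin, seafloor point at (3, 4, 12): distance is 13.
h = measured_depth((0.0, 0.0, 0.0), (3.0, 4.0, 12.0))  # -> 13.0
```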
For high accuracy and a clear record of the bathymetric data, electrical and acoustic sonar parameters must be set correctly before measurement. The most important parameters are power, gain,
recording density, pulse length, scale, phase scale, draft, and sound velocity. Other important parameters that affect the accuracy of bathymetric data are the navigation data of the vehicle on which
the sonar is located, the beam angle of the sonar, the grid range in the mapped area, and the sea-bottom slope angle [
3. Definition of the Effects of Underwater Map Accuracy
In this section, the effects of the seafloor slope, grid range, beam angle of the echosounder, echosounder position, and external distribution on the accuracy of the underwater map are defined.
3.1. Seafloor Slope Effect
The sea-bottom slope angle is an important factor to consider when measuring bathymetry with a single-beam echosounder. Because a single-beam echosounder measures the closest distance between the
seafloor and sea-surface vehicle, bathymetric data measured using an echosounder should be corrected when the seafloor is curved.
The acoustic wave is transmitted conically from a single-beam echosounder. This acoustic wave is first reflected from the seafloor and returned to the echosounder, and the depth is calculated as shown above. The measured depth equals the actual depth over a flat seafloor, as shown in Figure 2a. However, the seafloor is not always flat, as shown in Figure 2b,c. Thus, the measured depth is related to the seafloor slope angles. Figure 2 shows the relationship between the actual and measured depths with respect to the seafloor slope angles for the 3D sea-bottom area. In Figure 2, $h_m$ is the measured depth, and $h_a$ is the actual depth.
Definition 1.
Given the beam angle of the single-beam echosounder ($\varphi$), the relationship between the actual depth $h_a(k)$ and the measured depth (bathymetric data) $h_m(k)$ with respect to the sea-bottom angles $\alpha(k)$ in the x axis and $\beta(k)$ in the y axis at the k-th position $(x(k), y(k))$ is defined through

$\cos\theta(k) = \cos\alpha(k)\,\cos\beta(k)$

so that

$h_m(k) = \begin{cases} h_a(k)\,\cos\theta(k), & 0 \le \theta(k) \le \frac{\varphi}{2} \\ h_a(k)\,\dfrac{\cos\theta(k)}{\cos\left(\theta(k) - \frac{\varphi}{2}\right)}, & \theta(k) \ge \frac{\varphi}{2} \end{cases} \quad (4)$

Hence, at the k-th position $(x(k), y(k))$, $h_a(k)$ can be calculated using (4).
The sea-bottom slope angles along the x and y axes are estimated using the measured depth values at the sampling interval, $g m$, and the beam angle of the echosounder as follows.
$tan α ^ k = h m x k , y k − h m x k − 1 , y k g m k = 2 , … , K$
$tan β ^ k = h m x k , y ( k ) − h m ( x k , y ( k − 1 ) ) g m k = 2 , … , K$
where $α ^ 1 = 0 , β ^ 1 = 0$, and K are the total sampling data in the mapped area. After the sea-bottom slope angles had been estimated, the measured depth values (bathymetric data) were corrected
for the two conditions using (4). By defining $g m$, sampling interval (distance), and $g r$, the grid range in the mapped area, $h m ( k )$, is the k-th measured value obtained through linear
interpolation using the measured depth value at sampling distance $g m$ in each grid range represented by $g r$. For $g m = 0.2$ m, five sampling points between two measurements were obtained at $g r
= 1$ m; for $g m = 0.2$ m, 25 sampling points were obtained between each consecutive measurement at $g r = 5$ m. One can use neighborhood measured values to estimate α and β as in [35,36] instead of
(5) and (6).
Lemma 1.
The actual depth $h_a(k)$ can be obtained using (4) at a known position $(x(k), y(k))$ with beam angle $\varphi$ from the measured depth $h_m(k)$. Furthermore, the estimated depth value $\hat{h}_a(k)$ converges to the actual depth if the sea-bottom slope angle estimates (5) and (6) approach $\alpha(k)$ and $\beta(k)$.

Proof. It is straightforward to show that if, in Definition 2, the sea-bottom angle estimates both approach the true slope angles with an appropriate sampling distance $g_m$, then

$\hat{h}_a(k) = \begin{cases} h_m(k)\,\dfrac{1}{\cos\hat{\theta}(k)}, & 0 \le \hat{\theta}(k) \le \frac{\varphi}{2} \\ h_m(k)\,\dfrac{\cos\left(\hat{\theta}(k) - \frac{\varphi}{2}\right)}{\cos\hat{\theta}(k)}, & \hat{\theta}(k) > \frac{\varphi}{2} \end{cases}$

where the angle is $\cos\hat{\theta}(k) = \cos\hat{\alpha}(k)\,\cos\hat{\beta}(k)$. □
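Definition 2 and Lemma 1 together suggest an estimate-then-correct procedure. The sketch below implements the one-axis finite difference and the two-condition correction; angles are in radians and the function names are ours.

```python
import math

def estimate_slopes(depths, g_m):
    """Definition 2 along one axis: finite-difference slope estimates.

    depths: measured depths sampled every g_m metres along the axis.
    Returns tan(alpha_hat) per sample; the first estimate is 0 by convention.
    """
    slopes = [0.0]
    for k in range(1, len(depths)):
        slopes.append((depths[k] - depths[k - 1]) / g_m)
    return slopes

def correct_depth(h_m, alpha_hat, beta_hat, phi):
    """Lemma 1: recover the actual depth from the measured depth.

    alpha_hat, beta_hat: estimated slope angles (radians); phi: full beam angle.
    """
    theta = math.acos(math.cos(alpha_hat) * math.cos(beta_hat))
    if theta <= phi / 2.0:
        return h_m / math.cos(theta)
    return h_m * math.cos(theta - phi / 2.0) / math.cos(theta)
```

Applied to the forward model of Definition 1, this correction inverts it exactly when the estimated angles match the true ones.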
3.2. Grid Range Effect
The selection of the grid range of the mapped region affects the accuracy of the underwater topography. Generally, more measurement data are obtained from the mapped area by choosing a low grid
range. Thus, a high-resolution map was obtained. In areas where the seafloor is sloping, grid spacing selection is more effective. If the grid range value is chosen to be larger than the sea-bottom
coverage area on the basis of beam angle and depth, measurements of some points can be missed.
Lemma 2.
Choosing a grid spacing smaller than the coverage diameter, $g_r < 2 h_m \tan\frac{\varphi}{2}$, is still not sufficient to map the entire area, as shown in Figure 1. To map all points without missing any in the mapped area, a grid spacing corresponding to overlapping circles should be chosen: the diagonal of a square grid cell, $g_r\sqrt{2}$, must not exceed the footprint diameter $a$. Hence, the grid range value is selected from the geometry shown in Figure 1 as

$g_r \le \dfrac{a}{\sqrt{2}} = \sqrt{2}\, h_m \tan\dfrac{\varphi}{2}$

where $g_r$ is the selected grid range value for the mapped area, $a$ is the diameter of the covered area, and $\varphi$ is the beam angle of the single-beam echosounder.
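Assuming this covering-circles reading of Lemma 2 (each square grid cell must fit inside the circular footprint), the largest safe grid range follows in one line; the function name is ours.

```python
import math

def max_grid_range(h_m, beam_angle_deg):
    """Largest square-grid spacing whose cells are fully covered by the
    circular footprint a = 2 * h_m * tan(phi/2), i.e. g_r = a / sqrt(2).

    Assumes the overlapping-circles geometry described in Lemma 2."""
    a = 2.0 * h_m * math.tan(math.radians(beam_angle_deg) / 2.0)
    return a / math.sqrt(2.0)

# At 20 m depth with a 10 degree beam the footprint is about 3.5 m,
# so the grid range should not exceed roughly 2.47 m.
g_r = max_grid_range(20.0, 10.0)
```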
The seafloor slope angles depend on the value of the measured depth in each grid interval interpolated at a certain interval ($g m$), as shown in (5) and (6). The estimated seafloor angles are
different for grid values of 5 m and 1 m. By choosing a small grid range value in the inclined sea-bottom area, the estimated seafloor angles are closer to the actual angle values.
3.3. Beam Angle of Echosounder Effect
For bathymetric data obtained using a single-beam sonar, a narrow or wide beam angle affects the accuracy of the underwater map. The beam angle depends on the transducer size of the sonar and the
wavelength. The higher the frequency and the larger the transducer are, the narrower the beam angle is. Although we obtained a high-resolution underwater map when using the narrow-angle sonar, the
measurement of some points may have been missed owing to external disturbance effects, such as waves and the wind, especially on a sloping seafloor. The optimal beam angle should be chosen according
to the presence of the seafloor and external disturbances. The definition of the beam angle effect is given in (
) for two conditions when the sea floor is inclined.
3.4. External Disturbance Effect
The rotations of the sea-surface vehicle about the x, y, and z axes are defined as roll, pitch, and yaw, respectively, and are measured using an inertial measurement system (INS) [
]. The sea-surface vehicle’s attitudes such as roll and pitch are zero without disturbances, and the echosounder is perpendicular to the seafloor. Changing the sea-surface vehicle motion with
external disturbances, such as wind and waves, affects the measured bathymetric data and position of the sea-surface vehicle in which the sonar is integrated.
The sea-surface vehicle oscillates when external disturbances affect its attitudes, which then differ from the starting level. If the angle of these oscillations (roll and pitch) is greater than half the beam angle, the sea-bottom coverage area changes. The external disturbance effects, in degrees, in the x and y axes are defined as $\delta_x$ and $\delta_y$ owing to the oscillation of the sea-surface vehicle, as shown in Figure 3c. Rotating the yaw angle about the z axis does not affect the measured bathymetric data [
Definition 3.
The positions of the sea-surface vehicle in the x and y axes are corrected in relation to the roll, pitch, and yaw angles owing to oscillations, using the rotation matrix for the measured bathymetric data [29,40]. If the vehicle rotates around the z, y, and x axes by $σ$, $γ$, and $ϕ$, respectively, under the external disturbance effect, then the position of the sea-surface vehicle based on the measured depth is corrected using the rotation transformation matrix C as follows:
$η' = \begin{bmatrix} c_σ c_γ & -s_σ c_ϕ + c_σ s_γ s_ϕ & s_σ s_ϕ + c_σ c_ϕ s_γ \\ s_σ c_γ & c_σ c_ϕ + s_ϕ s_γ s_σ & -c_σ s_ϕ + s_γ s_σ c_ϕ \\ -s_γ & c_γ s_ϕ & c_γ c_ϕ \end{bmatrix} η$
where $η' = [x_s', y_s', z_s']^T$ is the corrected position vector of the surface vehicle, $η = [x_s, y_s, z_s]^T$ is the measured position vector of the surface vehicle, and $c_{(\cdot)} = \cos(\cdot)$ and $s_{(\cdot)} = \sin(\cdot)$. It is assumed that, in the case of small angle changes ($\cos(\cdot) \approx 1$, $\sin(\cdot) \approx 0$), the transformation matrix reduces to the identity matrix, and position changes are neglected.
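A minimal sketch of the correction in Definition 3, assuming the standard ZYX (yaw $σ$, pitch $γ$, roll $ϕ$) rotation convention implied by the matrix above; the function names are illustrative, not from the paper.

```python
import numpy as np

def rotation_matrix(sigma, gamma, phi):
    """ZYX rotation matrix C for yaw sigma, pitch gamma, roll phi (radians)."""
    cs, ss = np.cos(sigma), np.sin(sigma)
    cg, sg = np.cos(gamma), np.sin(gamma)
    cp, sp = np.cos(phi), np.sin(phi)
    return np.array([
        [cs * cg, -ss * cp + cs * sg * sp,  ss * sp + cs * cp * sg],
        [ss * cg,  cs * cp + sp * sg * ss, -cs * sp + sg * ss * cp],
        [-sg,      cg * sp,                 cg * cp],
    ])

def correct_position(eta, sigma, gamma, phi):
    """Corrected position eta' = C @ eta for the measured position vector eta."""
    return rotation_matrix(sigma, gamma, phi) @ np.asarray(eta, dtype=float)
```

For small roll and pitch angles, C is close to the identity and the correction vanishes, matching the small-angle remark above.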
3.5. Echosounder Position Effect
The distance between the place where the sonar transducer is mounted on the surface vehicle and the water surface, called the draft, affects the accuracy of underwater mapping. The draft value $h_d$, shown in Figure 3b, should be added to the measured depth value from the single-beam echosounder.
Definition 4.
As the sonar moves away from the sea surface and approaches the seafloor, higher resolution measurements are obtained. Measurements closer to the seafloor are obtained by integrating the sonar into
an underwater vehicle. When the beam angle is constant, as the sonar approaches the sea floor, the coverage of the single-beam sonar narrows, as indicated in (1), resulting in a higher resolution
underwater map. The difference in distance between the sea level and the transducer of the echosounder is defined as draft (or bias). The distance between the acoustic sonar and the measurement
station (sea-surface vehicle used in this study) is represented by $h d$, as shown in Figure 3b. This draft value must be added to each measured depth value.
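Definition 4 can be illustrated with the usual conical-beam geometry: for a beam angle $φ$, the insonified footprint radius at height $h$ above the seafloor is approximately $h \tan(φ/2)$, so a sonar closer to the bottom covers a narrower, higher-resolution area. This is a hedged sketch consistent with the narrative (the paper's Equation (1) is not reproduced in this excerpt); names are illustrative.

```python
import math

def footprint_radius(h, beam_angle_deg):
    """Approximate radius of the insonified circle for a conical beam
    at height h (metres) above the seafloor."""
    return h * math.tan(math.radians(beam_angle_deg) / 2.0)

def corrected_depth(h_measured, h_draft):
    """Add the draft h_d between the transducer and the sea surface
    to each measured depth value."""
    return h_measured + h_draft
```

Halving the sonar's height above the bottom halves the footprint radius at a fixed beam angle, which is the resolution gain the definition describes.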
4. Underwater Mapping Simulator
The simulator block diagram for underwater mapping is shown in
Figure 4
. First, the single-beam echosounder parameters and environmental parameters were set, and the background sea-floor map was generated. The sonar model was used to generate the distance between the
vehicle on which the sonar was located and the seafloor. The sea-bottom slope angles were estimated using the measured depth values, as in Definition 2. Then, the measured depth was corrected on the
basis of the estimated sea-bottom slope angles. The corrected depth value and navigation data of the sea-surface vehicle were integrated to compose the underwater map. The underwater map simulator
block shows that the sea surface, sea-bottom effects, and the grid range and beam angle play an important role in improving the accuracy of an underwater map.
Underwater mapping is performed according to the following factors in order to show the single-beam echosounder performance:
• the seafloor slope angle;
• the beam angle of the echosounder;
• the grid range in the mapped area;
• the position of the sonar's transducer;
• the external disturbances to the motion of the sea-surface vehicle.
4.1. Topographical Settings
The underwater map was generated using a single-beam echosounder for real-time measurements. First, underwater topography based on simulation data was generated, as shown in Figure 5. Subsequently, the bathymetric data (distance between the sea-surface vehicle on which a single-beam echosounder was mounted and the sea bottom) and sea-bottom angles were estimated from the measured depth related to each k-th position in the x and y axes. Thus, the bathymetric data were corrected, and the corrected underwater map was obtained. The analysis of underwater mapping based on simulation data was performed in a MATLAB environment.
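The slope-estimation formulas of Definition 2 are not shown in this excerpt; one plausible sketch estimates the sea-bottom slope angles $α$ (x axis) and $β$ (y axis) from finite differences of the measured depths over the grid range. The differencing scheme and names are assumptions, not the paper's exact method.

```python
import numpy as np

def estimate_slope_angles(h, grid_range):
    """Estimate sea-bottom slope angles (degrees) in the x and y axes
    from a 2-D grid of depths h[ix, iy] sampled every `grid_range` metres."""
    dh_dx, dh_dy = np.gradient(h, grid_range, grid_range)
    alpha = np.degrees(np.arctan(dh_dx))  # slope angle in the x axis
    beta = np.degrees(np.arctan(dh_dy))   # slope angle in the y axis
    return alpha, beta
```

On a planar bottom the estimate is exact; on real bathymetry it is a local approximation whose quality degrades as the grid range grows, consistent with the grid-range analysis below.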
Figure 5 shows the generated seafloor area. The depth variation is provided over a $100 \times 80$ m area in the x and y axes with a fixed sampling range in metres. Here, the actual depth $h_a$ values are between 10 and 55 m, and the sea-bottom angles are between $-50^∘$ and $50^∘$, and between $-40^∘$ and $40^∘$.
To compare all these effects, given that $K_1$ and $K_2$ are the total sampling data for the grid range in the x and y axes, for $k_1 = 1, \ldots, K_1$ and $k_2 = 1, \ldots, K_2$, we used the measured error
$E_m(k_1, k_2) = h_a(x(k_1), y(k_2)) - h_m(x(k_1), y(k_2))$
and the estimated error
$E_e(k_1, k_2) = h_a(x(k_1), y(k_2)) - \hat{h}_a(x(k_1), y(k_2))$
where $h_a$ is the actual depth, $h_m$ is the measured depth, and $\hat{h}_a$ is the estimated depth using ( ) at the $k_1$-th and $k_2$-th positions in the x and y axes, respectively. The total measured and estimated absolute errors were calculated as the sums of the measured and estimated absolute errors using ( ) and ( ). Further, the root mean square (RMS) measured and estimated errors were calculated using the sum of the squares of the measured and estimated errors divided by the total number of sampling data.
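The error metrics above can be sketched as follows, where `h_actual` and `h_other` stand for the actual depth and either the measured depth $h_m$ (giving $E_m$) or the estimated depth $\hat{h}_a$ (giving $E_e$) over the $K_1 \times K_2$ grid; names are illustrative.

```python
import numpy as np

def error_metrics(h_actual, h_other):
    """Per-cell error, total absolute error, and RMS error over the K1 x K2 grid."""
    E = h_actual - h_other             # E_m or E_e depending on h_other
    total_abs = np.sum(np.abs(E))      # total absolute error
    rms = np.sqrt(np.mean(E ** 2))     # RMS: sum of squared errors / total samples
    return E, total_abs, rms
```

The same two summary numbers (total absolute and RMS) are what the beam-angle, grid-range, draft, and disturbance analyses below report.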
4.2. Analysis of Underwater Mapping Accuracy
4.2.1. Analysis of the Sea-Bottom Slope Effect
Measured bathymetric data are calculated as the closest distance between the sea-surface vehicle (assuming that the vehicle is at the water level) and the seafloor, as shown in ( ). The deep-level measurement occurs with these generated bathymetric data, as depicted in the upper side of Figure 6. Figure 6 also shows the estimated sea-bottom slope angles in the x axis, called $α$, and in the y axis, called $β$, using the generated sea-bottom data at each k-th position, calculated using ( ) and ( ) of Definition 2. Bathymetric data measurements were performed with a single-beam sonar for a 10 m grid range and a $10^∘$ beam angle of the echosounder without external disturbances. Figure 6 shows that there were cases in which the estimated sea-bottom angles were both smaller and larger than half of the beam angle. Figure 6 also shows the underwater map with the bathymetric value of the area corrected according to the estimated bottom slope angles based on Lemma 1. The topography based on the corrected bathymetric data was more inclined compared with the measured bathymetric data.
Errors related to the measured and estimated depths were analyzed to determine the accuracy of the underwater map. The total absolute error between the actual and measured depths was calculated using $E_m$ in ( ); the measured error for a particular area is shown in the upper part of Figure 7. The estimated error was calculated using $E_e$ in ( ) as the difference between the actual and estimated depths, and the estimated error for a particular area is shown on the bottom side of Figure 7. The measured error increased with the sea-bottom slope effect, while the estimated error varied around zero.
4.2.2. Analysis of the Grid Range and Beam Angle Effect
The grid range and beam angle effects that influence the accuracy of the bathymetry measurements were analyzed. The total absolute and root mean square (RMS) measured errors (blue line), and the total absolute and RMS estimated errors (red line), at different beam angles for a constant grid range (1 m) without external disturbances are shown in Figure 8. It was assumed that the depth of the vehicle from the sea surface was zero under this condition. The total absolute measured error was $12 \times 10^3$ m, the total absolute estimated error was $2 \times 10^3$ m, the total RMS measured error was 200 m, and the total RMS estimated error was 40 m at $φ = 10^∘$, $g_r = 1$, $δ_x = 0$, $δ_y = 0$, and $h_d = 0$. When the beam angle increased to $30^∘$, the total absolute measured error became $3 \times 10^4$ m, the total absolute estimated error was $1.2 \times 10^4$ m, the total RMS measured error was $5 \times 10^2$ m, and the total RMS estimated error was $2 \times 10^2$ m at $g_r = 1$, $δ_x = 0$, $δ_y = 0$, and $h_d = 0$.
Results show that the absolute measured and estimated errors increased when the beam angle of the single-beam echosounder increased. In addition, the estimated error was smaller than the measured
error when the beam angle was increased. The measured and estimated errors improved with a narrow beam angle in the absence of external disturbances.
Figure 9 shows the absolute total and RMS measured errors (blue line), and the absolute total and RMS estimated errors (red line), at different grid values for a constant beam angle ($15^∘$) without external disturbances. It was assumed that the depth of the vehicle from the sea surface was zero in this condition. The total absolute measured error was $12 \times 10^3$ m, the total absolute estimated error was $2 \times 10^3$ m, the total RMS measured error was 200 m, and the total RMS estimated error was 40 m at $g_r = 2$, $φ = 15^∘$, $δ_x = 0$, $δ_y = 0$, and $h_d = 0$. When the grid range increased to 20 m, the total absolute measured error became $2 \times 10^4$ m, the total absolute estimated error was $1.8 \times 10^4$ m, the RMS measured error was $3 \times 10^2$ m, and the RMS estimated error was $2.5 \times 10^2$ m at $φ = 15^∘$, $δ_x = 0$, $δ_y = 0$, and $h_d = 0$.
Simulation results show that the absolute measured and estimated errors increased as the grid range increased. In addition, the estimated error was smaller than the measured error when the grid range increased. The measured and estimated errors improved with a low grid range in the absence of external disturbances.
4.3. Analysis of the Sonar Position Draft/Bias Effect
The accuracy of the underwater map was analyzed at positions at which the sonar was away from the sea level but approaching the seafloor. The slope effect of the seafloor was reduced by placing the acoustic sonar at a known distance $h_d$ from the sea surface, as shown in Figure 3, owing to the high-resolution measurements obtained near the sea floor.
Figure 10 shows the total absolute and RMS measured errors (blue line), and the total absolute and RMS estimated errors (red line), at different depths of the echosounder from the sea-surface level at a constant beam angle ($15^∘$) and grid range (10 m) without external disturbances. If the sonar was on the sea surface ($h_d = 0$), the total absolute measured error was $13 \times 10^3$ m, the total absolute estimated error was $3 \times 10^3$ m, and the total RMS measured and estimated errors were 200 and 40 m, respectively, at $g_r = 10$, $φ = 15^∘$, $δ_x = 0$, and $δ_y = 0$. When the distance between the echosounder and the sea surface increased to 5 m, the total absolute measured error was $10 \times 10^3$ m, the total absolute estimated error was $2 \times 10^3$ m, the total RMS measured error was 180 m, and the total RMS estimated error was 30 m at $g_r = 10$, $φ = 15^∘$, $δ_x = 0$, and $δ_y = 0$.
These results show that the absolute measured and estimated errors decreased as the vehicle approached the seafloor.
Figure 10 shows that the single-beam echosounder had a higher resolution because the coverage area of the echosounder narrowed, and the vehicle was less affected by external disturbances such as wind and waves, when the depth of the echosounder from the sea-surface level increased. This result shows that, considering the grid range in Lemma 2, the echosounder should be close to the seafloor.
4.4. Analysis of the External Disturbances Effect
Changing the position and attitudes of the sea-surface vehicle through external disturbances such as wind and waves affected the measured bathymetric data. The roll and pitch angles of the vehicle
were zero without disturbances, and the echosounder was perpendicular to the seafloor. The sea-surface vehicle oscillated in the presence of external disturbances, and the roll and pitch attitudes of
the vehicle were different from those of the starting level. Thus, the sea-bottom coverage area changed, and the bathymetric value could not be measured perpendicularly to the seabed, as shown in
Figure 3c.
Figure 11 shows the total absolute and RMS measured errors (blue lines), and the total absolute and RMS estimated errors (red lines), at different attitudes of the vehicle integrated with the single-beam echosounder at a constant beam angle ($15^∘$) and constant grid range (10 m). It was assumed that the depth of the vehicle from the sea surface was zero in this condition. The disturbances in the x and y axes comprised uniformly distributed random variables with maximal values of $0^∘$, $2^∘$, $4^∘$, and $7^∘$. If the rotations of the vehicle with external disturbances in the x and y axes were $2^∘$ ($δ_x = 2^∘$, $δ_y = 2^∘$), the absolute measured error became $1 \times 10^4$ m, the absolute estimated error was $3 \times 10^3$ m, the RMS measured error was $2 \times 10^2$ m, and the RMS estimated error was $9 \times 10^1$ m at $g_r = 10$, $φ = 15^∘$, $h_d = 0$. When the external disturbance effect increased to $7^∘$ in the x and y axes, the absolute measured error was $1.5 \times 10^4$ m, the absolute estimated error was $1 \times 10^4$ m, the RMS measured error was $2.5 \times 10^2$ m, and the RMS estimated error was $2 \times 10^2$ m at $g_r = 10$, $φ = 15^∘$, $h_d = 0$.
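The uniformly distributed attitude disturbances described above can be simulated as a hedged sketch: we assume a flat seafloor on which a tilted single beam measures the slant range $h/\cos(\text{tilt})$; this simplified model is our assumption, not the paper's simulator.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_disturbance_error(h_true, max_disturbance_deg, n=10_000):
    """Mean absolute depth error when roll and pitch disturbances are
    uniform on [-max_disturbance_deg, +max_disturbance_deg] (flat bottom)."""
    roll = np.radians(rng.uniform(-max_disturbance_deg, max_disturbance_deg, n))
    pitch = np.radians(rng.uniform(-max_disturbance_deg, max_disturbance_deg, n))
    tilt = np.arccos(np.cos(roll) * np.cos(pitch))  # combined tilt from vertical
    h_measured = h_true / np.cos(tilt)              # slant range over a flat bottom
    return np.mean(np.abs(h_measured - h_true))
```

Even in this toy model, the mean error grows with the maximal disturbance angle, matching the trend reported for $2^∘$ versus $7^∘$ disturbances above.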
Simulation results show that the absolute measured and estimated errors increased when the external-disturbance effect increased. Thus, it is important to perform measurements when the sea-surface
vehicle on which a single-beam echosounder is located is perpendicular to the measuring point in the mapped area.
If the beam angle becomes narrower, without external disturbances, the accuracy of the underwater map increases. However, under external-disturbance conditions, the accuracy is not higher with a
narrow beam angle because the measurement of some points may be missed, especially in an inclined sea-bottom area.
5. Underwater Mapping Experiment
Experimental bathymetric measurements were performed at the test area, the port of Kozlu in the city of Zonguldak on the Black Sea, Turkey, as shown in Figure 12. The date of the bathymetric measurements was chosen on the basis of the anticipated wave height. A day with a wave height of at most 0.1 m was deemed suitable for this experiment and was chosen on the basis of the weather forecast, as shown in Figure 13.
Before starting the measurements, the measurement lines were determined in the experimental area as the basis for bathymetric map production. According to the International Hydrographic Organization standards, the maximum line spacing for some coastal areas up to 100 m deep is three times the average depth, or 25 m. In this study, a spacing of 10 m was used to reveal the bottom topography more clearly. In Figure 14, the transverse and vertical lines are the observation and control lines, respectively. The transverse line spacing was determined as 10 m, and the distance between the control lines was defined as approximately 20 m. The entire test area was mapped in one day. The tide value at the time of the bathymetry study was added to the measured values. One of the critical sources of error in the acoustic sounding method is the incorrect measurement of the speed of sound. The speed of sound assumes different values in different environments, such as lakes, salt water, and fresh water. The sound velocity was measured at different depths in the experimental area to obtain a more accurate map. The AML Minos-X CSTD device used for the sound velocity measurement is shown in Figure 15. The sound velocity was measured as 1469 m/s and then used for data analysis. The properties of the single-beam echosounder used to measure the depths in this study are listed in Table 1; we used 120 kHz.
Figure 15
shows the sea-surface vehicle integrated with measurement devices used in the experiment. Bathymetric data were measured using the single-beam echosounder located on the sea-surface vehicle.
Navigation data were measured using global positioning system (GPS) devices. A compatible global navigation satellite system (GNSS) receiver with the Continuously Operating Reference Station-Turkey
(CORS-TR) network was used for the purpose of providing the navigation of the unmanned surface vehicle. The national Continuously Operating Reference Station (CORS) networks, which operate on the
real-time kinematic (RTK) principle, are multipurpose geodetic networks. Using this method, the horizontal position of the GNSS receiver could be determined with a horizontal sensitivity of ±1–2 cm.
Raw measurement data were used to create a depth model of the project area. The linear interpolation method was preferred for this depth model. The isobath map of the region was created using the
depth model as shown in
Figure 14
. To investigate the errors caused by the slope of the bottom topography, a region where the depth changes was chosen in the observed area. This region is illustrated in
Figure 14
The errors caused by the slope of the topography were modeled, and a second map of the same region was created. Furthermore, the differences between them were analyzed in MATLAB, HYPACK, and Global Mapper environments.
The sea-bottom slope angles in the x and y axes were calculated on the basis of ( ) and ( ) using the measured bathymetric data in the experimental area. Figure 16 shows the estimated sea-bottom slope angles in the x axis, represented by $α$, and in the y axis, represented by $β$, for the observed experimental area, where the depths changed suddenly.
Subsequently, the measured bathymetric data were corrected in relation to the estimated bottom slope angles.
Figure 17
shows the underwater maps with measured and corrected depth levels.
Figure 18
shows the depth difference between the corrected and measured maps.
Figure 18
shows that there were depth differences of up to 5 m between the measured and the corrected map. This shows that the measured bathymetric data should have been corrected on the basis of seafloor
slope angles. The underwater map obtained without considering the seafloor slope, and one with correction based on the bottom slope angles were compared.
Figure 19
shows the model differences on the basis of the slope of the bottom topography. The bottom side of
Figure 19
shows the corrected map, and the upper side of
Figure 19
shows the underwater map without correction.
Figure 19
shows that the measured underwater topography without considering the bottom slope angles appeared to be an inclined area. Thus, a more accurate underwater map was obtained with this correction.
6. Conclusions
In this study, an underwater map was modeled and analyzed on the basis of a simulation. Experimental data were obtained from a single-beam echosounder by navigating a sea-surface vehicle. Factors
that influence underwater mapping accuracy, namely, seafloor slope angle, the grid range of the mapped area, the beam angle of the echosounder, the position of the echosounder, and the oscillation of
sea-surface vehicles owing to external disturbances were analyzed. The underwater mapping created experimentally for the Kozlu/Zonguldak area was corrected by estimating the sea-bottom slope angle
and using it to correct the bathymetric data. As indicated by the simulation results, the accuracy of the underwater map was higher when the beam angle was narrower, and the grid range shrank without
external disturbances. The error in the measurements owing to the oscillation of the vehicle increased with the external disturbances. The results indicate that more accurate underwater mapping is
obtained depending on the received bathymetric data when the sea-surface vehicle on which the echosounder is integrated nears the seafloor. This is because when the vehicle in which the echosounder
is integrated approaches the seafloor, external disturbances such as wind and waves cannot affect the movement of the vehicle; thus, the error in the measured bathymetric data is decreased. In
addition, higher resolution data are obtained by the vehicle when it is close to the seafloor, and in some areas that the surface vehicle cannot reach, an underwater map can still be completed. All
these results indicate that the underwater map created with bathymetric data, obtained by integrating the echosounder into the underwater vehicle rather than a sea-surface vehicle, is more accurate.
In future studies, we will compare the results of an experiment performed in the same test area with the bathymetric data obtained using an unmanned underwater vehicle [ ].
Author Contributions
Data curation, S.K.K. and R.H.; Formal analysis, S.K.K., R.H., K.S.G. and Ş.H.K.; Investigation, S.K.K. and K.S.G.; Methodology, S.K.K., R.H., Ş.H.K. and M.K.L.; Resources, K.S.G. and Ş.H.K.;
Software, R.H.; Visualization, S.K.K. and R.H.; Writing—original draft, S.K.K.; Writing—review & editing, R.H., K.S.G., Ş.H.K. and M.K.L. All authors have read and agreed to the published version of
the manuscript.
This work is supported by the Scientific and Technological Research Council of Turkey (grant 119E037).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
This work was supported by the Scientific and Technological Research Council of Turkey (grant 119E037). The authors are grateful for the support of the Scientific and Technological Research Council of Turkey.
Conflicts of Interest
The authors declare no conflict of interest.
1. Guan, M.; Cheng, Y.; Li, Q.; Wang, C.; Fang, X.; Yu, J. An Effective Method for Submarine Buried Pipeline Detection via Multi-Sensor Data Fusion. IEEE Access 2019, 7, 125300–125309. [Google
Scholar] [CrossRef]
2. Florinsky, I.V.; Filippov, S.V. Three-Dimensional Geomorphometric Modeling of the Arctic Ocean Submarine Topography: A Low-Resolution Desktop Application. IEEE J. Ocean. Eng. 2021, 46, 88–101. [
Google Scholar] [CrossRef]
3. Bosch, J.; Istenic, K.; Gracias, N.; Garcia, R.; Ridao, P. Omnidirectional Multicamera Video Stitching Using Depth Maps. IEEE J. Ocean. Eng. 2020, 45, 1337–1352. [Google Scholar] [CrossRef]
4. Yin, J.; Wang, Y.; Lv, J.; Ma, J. Study on Underwater Simultaneous Localization and Mapping Based on Different Sensors. In Proceedings of the IEEE 10th Data Driven Control Learning Systems Conference, Suzhou, China, 14–16 May 2021; pp. 728–733. [Google Scholar] [CrossRef]
5. Ni, H.; Wang, W.; Ren, Q.; Lu, L.; Wu, J.; Ma, L. Comparison of Single-beam and Multibeam Sonar Systems for Sediment Characterization: Results from Shallow Water Experiment. In Proceedings of the
IEEE Oceans MTS/IEEE Seattle, Seattle, WA, USA, 27–31 October 2019; pp. 1–4. [Google Scholar] [CrossRef]
6. Zhao, J.; Yan, J.; Zhang, H.; Zhang, Y.; Wang, A. A new method for weakening the combined effect of residual errors on multibeam bathymetric data. Mar. Geophys. Res. 2014, 35, 379–394. [Google
Scholar] [CrossRef]
7. IHO. Organization, Manual on Hydrography. International Hydrographic Bureau Publication. 2005. Available online: https://www.deparentis.com/wp-content/uploads/2020/04/
IHO-Manual-on-Hydrography-1st-edition-February-2011-C-13_e1.0.0_ENG.pdf (accessed on 3 June 2022).
8. Foote, K. Using a sonar in a different environment from that of its calibration: Effects of changes in salinity and temperature. In Proceedings of the IEEE OCEANS, Charleston, SC, USA, 22–25
October 2018; pp. 1–5. [Google Scholar] [CrossRef]
9. Lurton, X. Swath Bathymetry Using Phase Difference: Theoretical Analysis of Acoustical Measurement Precision. IEEE J. Ocean. Eng. 2000, 25, 351–363. [Google Scholar] [CrossRef]
10. Malik, M. Sources and Impacts of Bottom Slope Uncertainty on Estimation of Seafloor Backscatter from Swath Sonars. Geosciences 2019, 9, 183. [Google Scholar] [CrossRef]
11. Gavrilov, A.N.; Parnum, I.M. Fluctuations of Seafloor Backscatter Data From Multibeam Sonar Systems. IEEE J. Ocean. Eng. 2010, 35, 209–219. [Google Scholar] [CrossRef]
12. Biffard, B.R.; Preston, J.M.; Chapman, N.R. Acoustic Classification with Single-Beam Echosounders: Processing Methods and Theory for Isolating Effects of the Seabed on Echoes. In Proceedings of the IEEE OCEANS, Vancouver, BC, Canada, 29 September–4 October 2007; pp. 1–8. [Google Scholar] [CrossRef]
13. Biffard, B.; Bloomer, S.; Chapman, N. The Role of Echo Duration in Acoustic Seabed Classification and Characterization. In Proceedings of the Oceans MTS/IEEE Seattle, Seattle, WA, USA, 20–23 September 2010; pp. 1–8. [Google Scholar]
14. Garcia, D.C.; de Queiroz, R.L.; Rocha, M.P.; da Fonseca, L.E. Automatic compensation for seafloor slope and depth in post-processing recovery of seismic amplitudes. In Proceedings of the IEEE/OES Acoustics in Underwater Geosciences Symposium (RIO Acoustics), Rio de Janeiro, Brazil, 25–27 July 2017; pp. 1–5. [Google Scholar]
15. Song, G.S.; Lo, S.C.; Fish, J.P. Underwater Slope Measurement Using a Tilted Multibeam Sonar Head. IEEE J. Ocean. Eng. 2014, 39, 419–429. [Google Scholar] [CrossRef]
16. Bird, J.S.; Mullins, G.K. Analysis of Swath Bathymetry Sonar Accuracy. IEEE J. Ocean. Eng. 2005, 30, 372–390. [Google Scholar] [CrossRef]
17. Sangekar, M.N.; Thornton, B.; Bodenmann, A.; Ura, T. Autonomous Landing of Underwater Vehicles Using High-Resolution Bathymetry. IEEE J. Ocean. Eng. 2019, 5, 1252–1267. [Google Scholar] [CrossRef]
18. Fujiwara, T.; Yamamoto, F. Evaluation of spatial resolution and estimation error of seafloor displacement observation from vessel-based bathymetric survey by use of AUV-based bathymetric data. Mar. Geophys. Res. 2015, 36, 45–60. [Google Scholar] [CrossRef]
19. Becker, J.J.; Sandwell, D.T. Global estimates of seafloor slope from single-beam ship soundings. J. Geophys. Res. 2008, 113. [Google Scholar] [CrossRef]
20. EL-Hattab, A.I. Single beam bathymetric data modelling techniques for accurate maintenance dredging. Egypt. J. Remote Sens. Space Sci. 2014, 17, 189–195. [Google Scholar] [CrossRef]
21. Daniell, J.J. Development of a bathymetric grid for the Gulf of Papua and adjacent areas: A note describing its development. J. Geophys. Res. 2008, 113. [Google Scholar] [CrossRef]
22. Beyer, A.; Schenke, H.; Klenke, M.; Niederjasper, F. High resolution bathymetry of the eastern slope of the Porcupine Seabight. Mar. Geol. 2003, 198, 27–54. [Google Scholar] [CrossRef]
23. Jakobsson, M.; Calder, B.; Mayer, L. On the effect of random errors in gridded bathymetric compilations. J. Geophys. Res. 2002, 107, 2358. [Google Scholar] [CrossRef]
24. Snellen, M.; Siemes, K.; Simons, D.G. Model-based sediment classification using single-beam echosounder signals. J. Acoust. Soc. Am. 2011, 129, 2878–2888. [Google Scholar] [CrossRef]
25. Christou, C.T.; Jacyna, G.M. Simulation of the Beam Response of Distributed Signals. IEEE Trans. Signal Process. 2005, 53, 3023–3031. [Google Scholar] [CrossRef]
26. Kuperman, W.; Roux, P. Underwater Acoustics. In Springer Handbook of Acoustics; Rossing, T., Ed.; Springer: New York, NY, USA, 2007; pp. 149–204. [Google Scholar] [CrossRef]
27. Doisy, Y. Theoretical Accuracy of Doppler Navigation Sonars and Acoustic Doppler Current Profilers. IEEE J. Ocean Eng. 2004, 29, 430–441. [Google Scholar] [CrossRef]
28. Bishop, G.C. Gravitational field maps and navigational errors. In Proceedings of the 2000 International Symposium on Underwater Technology (Cat. No.00EX418), Tokyo, Japan, 26 May 2000; pp.
149–154. [Google Scholar] [CrossRef]
29. Edward Chen, J. Realtime map generation using side scan sonar scanlines for unmanned underwater vehicles. Ocean Eng. 2014, 91, 252–262. [Google Scholar] [CrossRef]
30. Guo, Y. 3D Underwater Topography Rebuilding Based on Single Beam Sonar. In Proceedings of the IEEE International Conference on Signal Processing, Communication and Computing, KunMing, China, 5–8
August 2013; pp. 1–5. [Google Scholar] [CrossRef]
31. Sac, H.; Leblebicioglu, K.; Akar, G.B. 2D high-frequency forward-looking sonar simulator based on continuous surfaces approach. Turk. J. Electr. Eng. Comput. Sci. 2015, 23, 2289–2303. [Google
Scholar] [CrossRef]
32. Gürtürk, F.F. Error Analysis of Seabed Mapping Using Multibeam Sonar. Master’s Thesis, Middle East Technical University, Ankara, Turkey, 2015. [Google Scholar]
33. Pailhas, Y.; Brown, K.; Capus, C.; Petillot, Y. Design of artificial landmarks for underwater simultaneous localisation and mapping. IET Radar Sonar Navig. 2013, 7, 10–18. [Google Scholar] [
34. Ma, X.; Yanli, C.; Bai, G.; Liu, J. Multi-AUV Collaborative Operation Based on Time-Varying Navigation Map and Dynamic Grid Model. IEEE Access 2020, 8, 159424–159439. [Google Scholar] [CrossRef]
35. Blanes, S.; Casas, F. A Concise Introduction to Geometric Numerical Integration; CRC Press, Taylor & Francis Group: Boca Raton, FL, USA, 2017. [Google Scholar]
36. Atkinson, K.E.; Han, W.; Stewart, D.E. Numerical Solution of Ordinary Differential Equations; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
37. Karadeniz Kartal, S.; Leblebicioglu, M.K.; Ege, E. Experimental test of the acoustic-based navigation and system identification of an unmanned underwater survey vehicle (SAGA). Trans. Inst. Meas.
Control 2018, 40, 2476–2487. [Google Scholar] [CrossRef]
38. Karadeniz Kartal, S.; Casalino, G. Horizontal Parking Control of Autonomous Underwater Vehicle, FOLOGA. IFAC-PapersOnLine 2019, 52, 397–402. [Google Scholar] [CrossRef]
39. El-Diasty, M. Development of Real-Time PPP-Based GPS/INS Integration System Using IGS Real-Time Service for Hydrographic Surveys. J. Surv. Eng. 2016, 142, 05015005. [Google Scholar] [CrossRef]
40. Belge, E.; Cantekin, R.; Erol, B.; Akgul, V.; Kartal, S.K.; Hacioglu, R.; Gormus, S.; Kutoglu, H.; Leblebicioglu, K. Sensor Fusion Based on Integrated Navigation Data of Sea Surface Vehicle with Machine Learning Method. In Proceedings of the IEEE International Conference on Innovations in Intelligent Systems and Applications, Kocaeli, Turkey, 25–27 August 2021; pp. 1–6. [Google Scholar]
41. Erol, B.; Cantekin, R.; Kartal, S.K.; Hacioglu, R.; Gormus, S.; Kutoglu, H.; Leblebicioglu, K. Estimation of Unmanned Underwater Vehicle Motion with Kalman Filter and Improvement by Machine
Learning. Int. J. Adv. Eng. Pure Sci. 2021, 33, 67–77. [Google Scholar]
Figure 2. (a) Sea-bottom coverage for a flat seafloor. In this situation, $h m = h$. (b) Theta angle ($θ$) related to the seafloor slope is inside the insonified (conical) area for inclined seafloor.
In this situation, $h m ≠ h$. (c) Theta angle related to the seafloor slope is outside the insonified (conical) area for an inclined seafloor. In this situation, $h m ≠ h$.
Figure 3. (a) Depth measurement without external disturbances and draft of echosounder at the inclined seafloor. (b) Depth measurement with a draft of the echosounder at inclined seafloor. In this
situation, the echosounder is below the sea surface by an amount of $h d$. (c) Depth measurement with external disturbances in the x and y axes ($δ x$ and $δ y$) at the inclined seafloor.
Figure 6. Underwater map with measured depth level and underwater map with estimated depth level. Estimated sea-bottom slope angles in the x ($α$) and y ($β$) axes.
Figure 8. Variation in the absolute and RMS total measured errors (blue lines), and the absolute and RMS total estimated errors (red lines) for different beam angles at a constant grid range without
external disturbances.
Figure 9. Variation in the absolute and RMS total measured errors (blue lines), and the absolute and RMS total estimated errors (red lines) for different grid ranges at a constant beam angle without
external disturbances.
Figure 10. Absolute and RMS total measured errors (blue lines), and absolute and RMS total estimated errors (red lines) related to the draft (or bias) of the echosounder from sea-surface level at a constant grid range and beam angle.
Figure 11. Absolute and RMS total measured errors (blue lines), and absolute and RMS total estimated errors (red lines) related to the external-disturbance effect to the oscillation of the
sea-surface vehicle at constant grid range and beam angle.
Figure 15. Sea-surface vehicle integrated with measurement devices used in the experiment, and the measurement of the acoustic sound velocity in the sea before the experiment (right side).
Figure 16. Estimated sea-bottom slope angles in the x ($α$) and y ($β$) axes in the experimental area based on experimental data.
Figure 17. Underwater map with (top) measured and (bottom) estimated depth levels based on experimental data.
Figure 19. (top) Bottom topography model based on the measured map. (bottom) Corrected map model based on the slope of the bottom topography.
Table 1. Properties of the single-beam echosounder.
Frequency: 24 kHz–210 kHz
Depth: 5–5000 m
Acoustic velocity: 1300–1800 m/s
Accuracy (at 0–100 m): 1 cm
Beam spread: ±4° (minimal)
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Share and Cite
MDPI and ACS Style
Kartal, S.K.; Hacıoğlu, R.; Görmüş, K.S.; Kutoğlu, Ş.H.; Leblebicioğlu, M.K. Modeling and Analysis of Sea-Surface Vehicle System for Underwater Mapping Using Single-Beam Echosounder. J. Mar. Sci.
Eng. 2022, 10, 1349. https://doi.org/10.3390/jmse10101349
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
1. The magnetic field at the center of the current-carrying circular loop is B1. The magnetic field at a distance of √3 times radius of the given circular loop from the center on its axis is B2. The value of B1/B2 will be: (a) 9:4 (b) 12:√5 (c) 8:1 (d) 5:3
2. Three point charges, each of charge q are placed on vertices of a triangle ABC, with AB=AC=5L, BC=6L. The electrostatic potential at mid-point of side BC will be (a) 1/48πϵ0L (b) 8q/36πϵ0L (c) 5q/24πϵ0L (d) q/16πϵ0L
3. To increase the current sensitivity of a moving coil galvanometer by 50%, its resistance is increased so that the new resistance becomes twice its initial resistance. By what factor does its voltage sensitivity change? (a) Decreases by 25% (b) Increases by 25% (c) Increases by 75% (d) Decreases by 75%
4. The velocity of nth orbit in Bohr model of hydrogen atom varies directly proportional to (a) 1/b (b) 1/n^2 (c) n^2 (d) 1/n
5. A nucleus disintegrates into two nuclear parts, which have their velocities in the ratio 2:1. The ratio of their nuclear sizes will be (a) 2:1 (b) 1:3/2 (c) 3/5:1 (d) 1:2:5
6. The quantum nature of light explains the observations on photoelectric effect as (a) there is a minimum frequency of incident radiation below which no electrons are emitted (b) the maximum kinetic energy of photoelectrons depends only on the frequency of incident radiation
7. When the metal surface is illuminated, electrons are ejected from the surface after some time (a) the photoelectric current is independent of the intensity of incident radiation.
8. A cylinder of radius r and length l is placed in an uniform electric field parallel to the axis of the cylinder. The total flux for the surface of the cylinder is given by (a) 2πr^2Eπr (b) (c) Eπr^2 (d) Eπr^2
Understand the Problem
The question paper contains various physics problems related to electromagnetism, kinetic energy, atomic structure, and the photoelectric effect, aimed at assessing knowledge in different areas of physics.
The final answer is 8:1
More Information
The magnetic field at the center is derived using the Biot–Savart law: B1 = μ0·I/(2R). On the axis at distance x from the center, B = μ0·I·R² / (2·(R² + x²)^(3/2)), so with x = √3·R the denominator factor becomes (4R²)^(3/2) = 8R³ and B2 = B1/8, giving B1/B2 = 8:1.
A common mistake is not accounting for the distance correctly when applying the axial magnetic-field formula.
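The 8:1 result can be checked numerically with the on-axis field formula B(x) = μ0·I·R² / (2·(R² + x²)^(3/2)); a small sketch (the values of μ0, I and R are arbitrary because they cancel in the ratio):

```python
import math

# Arbitrary values; they cancel in the ratio B1/B2.
mu0, I, R = 4 * math.pi * 1e-7, 1.0, 1.0

B_center = mu0 * I / (2 * R)                          # field at the loop center
x = math.sqrt(3) * R                                  # axial distance sqrt(3) * R
B_axis = mu0 * I * R**2 / (2 * (R**2 + x**2) ** 1.5)  # on-axis field

print(B_center / B_axis)  # → 8.0, i.e. B1 : B2 = 8 : 1
```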
Best Player Combinations For
OVR vs VLS
How it Works?
This guide is super useful for identifying player combinations that are likely to score big. Use the filters to identify the best player combinations according to specific match scenarios.
Players correlate positively
Both players generally perform similarly when picked together. They either score high together or score low together. Maximise FPts by picking players that correlate positively with top players.
Players correlate negatively
Both players generally perform in an opposite manner. If one player scores high, the other usually scores low. Avoid picking players that correlate negatively with top players.
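The positive/negative correlation described above is just the sign of the Pearson correlation between two players' fantasy-point series; a minimal sketch (the point values below are invented for illustration):

```python
# Fantasy points over five matches (hypothetical numbers).
p1 = [55, 80, 30, 90, 45]
p2 = [60, 85, 25, 95, 50]   # moves with p1 -> positive correlation
p3 = [70, 20, 85, 15, 75]   # moves against p1 -> negative correlation

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(pearson(p1, p2))  # close to +1: these two tend to score big together
print(pearson(p1, p3))  # close to -1: avoid pairing these two
```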
Show Player Combinations on the basis of:
Best OVR Player Combinations
V Prasath BAT OVR
L Stuart AR OVR
N Moidu BOW OVR
J Hurly BOW OVR
A Hili AR OVR
F Aziz WK OVR
G Sant BAT OVR
E Xuereb BOW OVR
A Oulton BOW OVR
C Viljoen BAT OVR
P Lourens BOW OVR
Pick V Prasath and L Stuart together
Positive Correlation
Pick N Moidu and P Lourens together
Positive Correlation
Pick A Hili and P Lourens together
Positive Correlation
Pick E Xuereb and P Lourens together
Positive Correlation
Best VLS Player Combinations
Learning Tractable Interpretable Cutset Networks
Cutset networks are tractable, interpretable models which combine and enhance the capabilities of two interpretable models: (a) probabilistic decision trees (called OR trees in literature) and (b)
tree Bayesian networks. An issue with these models is that although they are interpretable and can explain their decisions via fast, accurate most probable explanation inference, their accuracy is
often quite low as compared to uninterpretable models such as Markov networks, deep Bayesian networks and neural networks. This tool helps the user learn interpretable cutset networks having high
accuracy and perform fast most probable explanation inference over them using an innovative technique that combines the estimates derived from the provided data with the ones derived from a more
accurate uninterpretable model.
Intended Use
The use case for this library is to learn tractable, interpretable probabilistic generative models that can accurately and quickly answer various explanation queries such as the most probable
explanation query for observations and decisions. The library has been used in many applications such as solving the task of performing explainable activity recognition in video data.
The library learns a probabilistic model from data. The data can be provided in matrix form where rows are examples and columns are features. Once the model is learned, the library can be used to
make decisions and generate most probable explanations by invoking its query answering capability.
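Cutset networks combine OR trees with tree Bayesian networks, and tree Bayesian networks are classically learned with the Chow-Liu algorithm. As a rough illustration of how a tree structure can be recovered from a data matrix in the format described above (rows = examples, columns = discrete features), assuming NumPy and SciPy are available — this is a sketch, not the library's actual API:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_edges(data):
    """Edges of the Chow-Liu tree for binary data (rows = examples, cols = features)."""
    n, d = data.shape
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            # Empirical joint distribution with Laplace smoothing.
            joint = np.zeros((2, 2))
            for a in range(2):
                for b in range(2):
                    joint[a, b] = np.sum((data[:, i] == a) & (data[:, j] == b)) + 1
            joint /= joint.sum()
            pi, pj = joint.sum(axis=1), joint.sum(axis=0)
            mi[i, j] = np.sum(joint * np.log(joint / np.outer(pi, pj)))
    # Maximum spanning tree over MI = minimum spanning tree over -MI.
    tree = minimum_spanning_tree(-mi)
    return list(zip(*tree.nonzero()))

# Toy demo: feature 1 mostly copies feature 0, feature 2 is independent noise.
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 500)
x1 = np.where(rng.random(500) < 0.9, x0, 1 - x0)
x2 = rng.integers(0, 2, 500)
print(chow_liu_edges(np.column_stack([x0, x1, x2])))  # edge (0, 1) should appear
```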
Only works with discrete features. The next version of the library will include support for continuous features.
Reference: Rahman, Tahrima, Shasha Jin, and Vibhav Gogate. "Look Ma, No Latent Variables: Accurate Cutset Networks via Compilation." In Proceedings of the 36th International Conference on Machine Learning (K. Chaudhuri and R. Salakhutdinov, eds.), Proceedings of Machine Learning Research, vol. 97, pp. 5311–5320. PMLR, June 2019.
Lilly bought 24 tennis balls packed equally in 8 sets. Determine the number of tennis balls in 10 such sets.
A. 50
B. 40
C. 30
D. 60
In this question, we first find how many balls are in each set and then multiply by 10 to get the number of balls in 10 sets.
The correct answer is: 30
Step 1 of 1:
In this question, we are told how many balls Lilly packed into 8 sets and asked to find the number of balls in 10 such sets.
Number of balls packed: 24, in 8 sets.
Number of balls in each set = 24 / 8 = 3
Number of balls in 10 sets = 3 × 10 = 30
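The two steps can be written as one line of arithmetic; a trivial check:

```python
balls_per_set = 24 // 8     # step 1: 3 balls in each set
total = balls_per_set * 10  # step 2: balls in 10 sets
print(total)  # → 30
```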
Final Answer:
The right choice is C. 30
Impedance Calculators
Impedance calculators are essential tools for electrical engineers and circuit designers. They are used to calculate the total opposition to current flow in an alternating current (AC) circuit, which
is known as impedance. Impedance is a complex quantity that combines resistance and reactance, and it varies with frequency. Impedance calculators help engineers to quickly and accurately determine
the impedance of a circuit, which is crucial for designing and analyzing electrical systems.
Types of Impedance Calculators
There are several types of impedance calculators available, each designed for specific applications. Some of the most common types include:
1. Basic Impedance Calculator: This type of calculator is used to calculate the impedance of a simple series or parallel circuit with resistance and reactance components.
2. Transmission Line Impedance Calculator: This calculator is used to determine the characteristic impedance of a transmission line, which is important for designing high-frequency circuits and
matching impedances.
3. Antenna Impedance Calculator: This calculator is used to calculate the impedance of an antenna, which is essential for designing and optimizing antenna systems.
4. Microstrip Impedance Calculator: This calculator is used to determine the characteristic impedance of a microstrip transmission line, which is commonly used in printed circuit boards (PCBs).
How do Impedance Calculators Work?
Impedance calculators work by using mathematical formulas to calculate the total opposition to current flow in an AC circuit. The basic formula for impedance is:
Z = R + jX
– Z is the impedance (measured in ohms)
– R is the resistance (measured in ohms)
– X is the reactance (measured in ohms)
– j is the imaginary unit (square root of -1)
Reactance can be either inductive (XL) or capacitive (XC), depending on the type of circuit element. Inductive reactance is calculated using the formula:
XL = 2πfL
– f is the frequency (measured in hertz)
– L is the inductance (measured in henries)
Capacitive reactance is calculated using the formula:
XC = 1 / (2πfC)
– f is the frequency (measured in hertz)
– C is the capacitance (measured in farads)
Impedance calculators use these formulas, along with other parameters such as the geometry and material properties of the circuit elements, to calculate the total impedance of the circuit.
Example: Basic Impedance Calculator
Let’s consider an example of how to use a basic impedance calculator. Suppose we have a series circuit with a resistance of 100 ohms and an inductance of 50 mH, and we want to calculate the impedance
at a frequency of 1 kHz.
First, we calculate the inductive reactance:
XL = 2πfL
= 2π × 1000 Hz × 0.05 H
= 314.16 Ω
Then, we use the impedance formula to calculate the total impedance:
Z = R + jXL
= 100 Ω + j314.16 Ω
The magnitude of the impedance can be calculated using the formula:
|Z| = sqrt(R^2 + XL^2)
= sqrt(100^2 + 314.16^2)
= 329.69 Ω
So, the total impedance of the circuit is approximately 329.69 ohms at a frequency of 1 kHz.
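The worked example above can be reproduced with Python's built-in complex numbers; a short sketch (the helper function name is our own):

```python
import cmath
import math

def series_rl_impedance(R, L, f):
    """Complex impedance of a series R-L circuit at frequency f (Hz)."""
    XL = 2 * math.pi * f * L   # inductive reactance in ohms
    return complex(R, XL)      # Z = R + jXL

Z = series_rl_impedance(R=100, L=50e-3, f=1000)
print(Z.imag)                        # XL ≈ 314.16 ohms
print(abs(Z))                        # |Z| ≈ 329.69 ohms
print(math.degrees(cmath.phase(Z)))  # phase angle ≈ 72.3 degrees
```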
Applications of Impedance Calculators
Impedance calculators have a wide range of applications in electrical engineering and circuit design. Some of the most common applications include:
1. Matching Impedances: Impedance matching is important for ensuring maximum power transfer and minimizing signal reflections in a circuit. Impedance calculators can be used to determine the
appropriate matching network for a given load impedance.
2. Designing Filters: Filters are used to selectively pass or block certain frequencies in a circuit. Impedance calculators can be used to design filters with the desired frequency response and
impedance characteristics.
3. Analyzing Transmission Lines: Transmission lines are used to transmit high-frequency signals over long distances. Impedance calculators can be used to determine the characteristic impedance and
propagation constant of a transmission line, which are important for designing and optimizing the system.
4. Designing Antennas: Antennas are used to transmit and receive electromagnetic waves. Impedance calculators can be used to determine the impedance of an antenna, which is important for matching
the antenna to the transmitter or receiver.
5. Optimizing PCB Layouts: PCBs are used to interconnect electronic components in a circuit. Impedance calculators can be used to determine the characteristic impedance of PCB traces, which is
important for minimizing signal reflections and ensuring signal integrity.
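As a concrete instance of impedance matching (application 1 above), a quarter-wave transformer matches a real load impedance to a real source impedance when its characteristic impedance equals their geometric mean, Z0 = sqrt(Zs · ZL); a minimal sketch:

```python
import math

def quarter_wave_z0(z_source, z_load):
    """Characteristic impedance of a quarter-wave matching section (real impedances)."""
    return math.sqrt(z_source * z_load)

# Match a 50-ohm system to a 100-ohm load (illustrative values):
print(quarter_wave_z0(50, 100))  # ≈ 70.7 ohms
```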
Advantages of Using Impedance Calculators
Using impedance calculators offers several advantages over manual calculations, including:
1. Speed: Impedance calculators can perform complex calculations quickly and accurately, saving time and effort compared to manual calculations.
2. Accuracy: Impedance calculators use precise mathematical formulas and algorithms to calculate impedance values, minimizing the risk of errors and ensuring accurate results.
3. Flexibility: Impedance calculators can handle a wide range of circuit configurations and parameters, making them suitable for a variety of applications and design scenarios.
4. Visualization: Many impedance calculators include graphical interfaces and visualization tools, making it easier to understand and interpret the results.
5. Optimization: Impedance calculators can be used to optimize circuit designs by exploring different parameter values and configurations to achieve the desired performance.
Limitations of Impedance Calculators
While impedance calculators are powerful tools for circuit design and analysis, they also have some limitations that users should be aware of:
1. Simplification: Impedance calculators often make simplifying assumptions about the circuit, such as assuming ideal components or neglecting parasitic effects. These assumptions may not always
hold true in real-world scenarios, leading to discrepancies between the calculated and measured impedance values.
2. Frequency Dependence: Impedance is a frequency-dependent quantity, and the impedance values calculated by a calculator may only be valid for a specific frequency or range of frequencies. Users
should be careful to use the appropriate frequency values when inputting parameters into the calculator.
3. Material Properties: The impedance of a circuit can be affected by the material properties of the components and substrates used, such as the dielectric constant and loss tangent. Impedance
calculators may not always account for these effects, which can lead to inaccuracies in the results.
4. Complex Geometries: Impedance calculators may have difficulty handling complex geometries or non-standard component configurations, such as curved or tapered transmission lines. In these cases,
more advanced simulation tools or measurement techniques may be required.
Frequently Asked Questions
1. Q: What is the difference between impedance and resistance?
A: Resistance is a measure of the opposition to current flow in a DC circuit, while impedance is a measure of the opposition to current flow in an AC circuit. Impedance includes both resistance
and reactance components, and it varies with frequency.
2. Q: Can impedance calculators handle circuits with multiple components?
A: Yes, most impedance calculators can handle circuits with multiple components, such as resistors, inductors, and capacitors, connected in series or parallel. However, the complexity of the
circuit may affect the accuracy of the results.
3. Q: How do I know which type of impedance calculator to use for my application?
A: The type of impedance calculator to use depends on the specific application and the type of circuit or component being analyzed. For example, if you are designing a transmission line, you
would use a transmission line impedance calculator, while if you are designing an antenna, you would use an antenna impedance calculator.
4. Q: Can impedance calculators be used for low-frequency circuits?
A: Yes, impedance calculators can be used for low-frequency circuits, but the reactance components may be negligible at low frequencies, so the impedance may be dominated by the resistance. In
these cases, a simple resistance calculator may be sufficient.
5. Q: Are there any open-source or free impedance calculators available?
A: Yes, there are several open-source and free impedance calculators available online, such as the TXLine calculator and the Microstrip Impedance Calculator. These calculators can be a good
starting point for basic impedance calculations, but they may not have all the features and capabilities of commercial software packages.
Conclusion
Impedance calculators are valuable tools for electrical engineers and circuit designers, enabling quick and accurate calculation of impedance values for a wide range of applications. By understanding
the types of impedance calculators available, how they work, and their limitations, engineers can make informed decisions about which calculator to use for a given application and how to interpret
the results. With the help of impedance calculators, engineers can optimize circuit designs, match impedances, and ensure signal integrity, leading to more efficient and reliable electrical systems. | {"url":"https://artist-3d.com/defined-impedance-calculators/","timestamp":"2024-11-03T07:35:04Z","content_type":"text/html","content_length":"223201","record_id":"<urn:uuid:1ff8cd54-834e-4ad6-82a3-5f7601a4ac2f>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00570.warc.gz"} |
Quadrilateral: Types, Properties & Area
This article is about the square as a geometric figure. We explain to you what properties a square has and what different types of squares there are.
quadrilateral definition
A quadrilateral is a geometric figure formed by four pages and four corner points is formed.
The vertices are usually denoted by the capital letters A, B, C, D in alphabetical order, labelled counter-clockwise, as you can see in the following figure.
Figure 1: Representation of a quadrilateral
The four sides of the quadrilateral run between neighbouring vertices and are likewise labelled counter-clockwise with the lowercase letters a, b, c, d. Thus, the side between points A and B is referred to as a.
In addition to the sides that enclose the quadrilateral, there are lines inside it that connect opposite vertices. These are called diagonals and are marked with the letters e and f. The diagonal e runs between the vertices A and C, and the diagonal f between the vertices B and D.
Regardless of how a particular quadrilateral is constructed, every quadrilateral has four angles, each formed by two sides at a corner point. The angle at a vertex is denoted by the corresponding lowercase Greek letter: the angle at vertex A is α (alpha), at B it is β (beta), at C it is γ (gamma), and at D it is δ (delta). The sum of all interior angles of a quadrilateral is always 360 degrees.
α (Alpha) – angle at vertex A
β (Beta) – angle at vertex B
γ (Gamma) – angle at vertex C
δ (Delta) – angle at vertex D
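The 360-degree angle sum can be verified numerically for any convex quadrilateral; a small sketch that measures the interior angle between the two edges meeting at each vertex (function and variable names are our own):

```python
import math

def interior_angles(pts):
    """Interior angles (degrees) of a convex polygon, vertices given in order."""
    n = len(pts)
    out = []
    for i in range(n):
        ax, ay = pts[i - 1]
        bx, by = pts[i]
        cx, cy = pts[(i + 1) % n]
        v1 = (ax - bx, ay - by)  # edge towards the previous vertex
        v2 = (cx - bx, cy - by)  # edge towards the next vertex
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        out.append(math.degrees(math.acos(dot / norm)))
    return out

quad = [(0, 0), (5, 0), (6, 4), (1, 3)]   # arbitrary convex quadrilateral ABCD
print(sum(interior_angles(quad)))         # → 360.0 (up to rounding)
```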
Types of quadrilaterals
Quadrilaterals are classified according to their various characteristics into general and special quadrilaterals. In the following, we briefly introduce the different types of quadrilaterals and show you how to calculate their perimeter (U) and area (A). This is not only important at school; you can use these formulas again and again in everyday situations, whether you are decorating your room or mowing the lawn.
General quadrilateral
If a quadrilateral has no special features apart from its four corners and four sides, it is called a general quadrilateral.
This means, for example, that the sides need not all be the same length. A general quadrilateral can have one of three shapes:
1. Convex quadrilateral
2. Concave quadrilateral
3. Overturned (crossed) quadrilateral
Let's take a closer look at the different types!
Convex quadrilateral
We consider the convex quadrilateral first.
A convex quadrilateral is a quadrilateral in which the diagonals intersect inside the quadrilateral.
As you can see from the figure below, the intersection of the diagonals e and f is inside the quadrilateral.
Figure 2: Convex quadrilateral
Concave quadrilateral
The concave quadrilateral differs significantly from the convex one. In general, remember the following:
In contrast to the convex quadrilateral, the two diagonals of a concave quadrilateral intersect outside the quadrilateral. Thus, one of the four corners points inwards.
If one lengthens the diagonal f, which runs between the corner points B and D, then this intersects the diagonal e outside the quadrilateral area.
Figure 3: Concave quadrilateral
Overturned quadrilateral
The overturned quadrilateral looks very different from the previous two types!
An overturned (crossed) quadrilateral is a figure in which the order of the corner points is changed so that they are no longer adjacent. Consequently, individual sides cross each other.
Figure 4: Overturned quadrilateral
Special quadrilaterals
In addition to the general quadrilateral, there are also a large number of quadrilaterals that can be distinguished from one another on the basis of certain properties.
You are probably most familiar with the rectangle as a special quadrilateral.
A rectangle, like any other quadrilateral, has four vertices and four sides. However, all four interior angles are right angles, i.e. 90 degrees each. As a result, the opposite sides are always the same length, as are the diagonals: a rectangle has two pairs of equally long sides.
Figure 5: rectangle
A right angle is also marked with a quadrant and a dot inside it, as you can see from the illustration.
Calculating the perimeter and the area is of enormous importance.
1. Perimeter
The perimeter U is the length of the line that delimits an area. For a quadrilateral, the perimeter is the total of the side lengths.
You can also think of the perimeter as the length of rope it would take to go around the rectangle once. You can see the perimeter in the following figure along the orange-marked sides.
Figure 6: Perimeter of a rectangle
You can use the following formula to calculate the perimeter of a rectangle:
U = 2 ⋅ (a + b)
2. Area
The area is the measure of the size of a flat, i.e. two-dimensional, figure. For a rectangle, you can easily calculate it by multiplying the length by the width:
A = a ⋅ b
Figure 7: Area of a rectangle
You want to renovate your room and would like to replace the old carpet with a PVC covering. Your room is 4 m long and 5.5 m wide. How many m² of PVC flooring do you need?
To know how much PVC flooring you need, use the formula for the area of a rectangle:
a = 4 m
b = 5.5 m
A = a ⋅ b = 4 m ⋅ 5.5 m = 22 m²
You need 22 m² of PVC flooring to replace the old carpet in your room.
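The calculation above is simply A = a ⋅ b; a quick numerical check with the same numbers:

```python
a, b = 4.0, 5.5          # room length and width in metres
area = a * b             # A = a * b
perimeter = 2 * (a + b)  # U = 2 * (a + b)
print(area)       # → 22.0 m² of flooring
print(perimeter)  # → 19.0 m, e.g. for skirting board
```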
The square as a special form of the rectangle
The square, as a special form of the rectangle, has four equally long sides. All interior angles of a square are 90 degrees.
Since all sides of a square are the same length, the diagonals intersect at right angles.
Figure 8: square
So that you can also calculate the perimeter (U) and the area (A) of a square, here are the two formulas:
U = 4 ⋅ a
A = a ⋅ a = a²
Parallelogram
A parallelogram is another special quadrilateral. In a parallelogram, the opposite sides are always parallel to each other. For this reason, the opposite angles are always the same size.
Of course you can also calculate the perimeter (U) and the area (A) of a parallelogram. However, in addition to the sides a and b, you also need the height h.
U = a + b + c + d = 2 ⋅ a + 2 ⋅ b = 2 ⋅ (a + b)
A = a ⋅ h
Rhombus
The rhombus, also known as a diamond, is another special quadrilateral. Its opposite sides are parallel and all four sides are of equal length. The angles of the rhombus are bisected by the diagonals, which intersect at right angles.
The following formulas will help you to calculate the perimeter (U) and the area (A) of a rhombus with side a and diagonals e and f:
U = 4 ⋅ a
A = (e ⋅ f) / 2
Trapezoid
A trapezoid is another special quadrilateral. Its special feature is that at least two opposite sides are parallel to each other. The parallel sides are called the base sides of the trapezoid; the other two sides are the legs. In a trapezoid, the sum of the two angles along each leg is always 180 degrees.
Figure 11: Trapezoid
You can calculate the perimeter (U) and the area (A) of a trapezoid with parallel sides a and c and height h using the following formulas:
U = a + b + c + d
A = ((a + c) / 2) ⋅ h
Kite quadrilateral
A kite is a quadrilateral in which one diagonal acts as an axis of symmetry. In a kite, two pairs of adjacent sides are of equal length. The diagonals are perpendicular to each other, so their intersection forms a right angle, and the axis of symmetry bisects the other diagonal.
Figure 12: Kite quadrilateral
If you can’t remember exactly what you mean by symmetry and the axis of symmetry imagine, we have a little repetition for you here:
Under symmetry is understood to mean a geometric property in which a figure forms a mirror image on both sides of an axis. As a result, symmetry is also referred to as «mirror image equality».
The axis or straight line that divides the figure into mirror images is called the mirror axis or axis of symmetry designated.
So that you can also calculate the perimeter (U) and the area (A) of a kite with side pairs a and b and diagonals e and f, it is best to learn these two formulas:
U = 2 ⋅ (a + b)
A = (e ⋅ f) / 2
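Both the rhombus and the kite (and the square) have perpendicular diagonals, so their area is half the product of the diagonal lengths, A = (e ⋅ f) / 2; a small sketch (the function name is our own):

```python
def diagonal_area(e, f):
    """Area of a quadrilateral with perpendicular diagonals e and f (rhombus, kite, square)."""
    return e * f / 2

print(diagonal_area(6, 4))  # kite with diagonals 6 and 4 -> area 12.0
print(diagonal_area(5, 5))  # square with diagonal 5 -> area 12.5
```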
The House of Quadrilaterals
You have now learned about the different types of quadrilaterals and their properties, and your head is probably spinning with all the definitions and formulas. To get an overview of the many quadrilaterals, you can use the so-called house of quadrilaterals:
Figure 13: House of Quadrilaterals
As you can see from the picture, the house of quadrilaterals consists of several floors. The quadrilaterals are arranged according to their properties and special features. On the lowest level, the ground floor, sits the general quadrilateral, which has no special properties other than its four corners and four sides. The higher you go, the more specific the properties of each type become.
The categories that govern the classification are primarily the angle and side relationships and the symmetry properties. You may be wondering how the quadrilaterals are arranged within the floors and between the ground floor and the roof. We will explain this step by step:
Roof: Square
Peculiarities of the square:
• four equal right angles
• all sides are the same length
• opposite sides are parallel to each other
3rd floor: Rhombus, Rectangle
Peculiarities of the rhombus:
• four equal sides
• opposite sides are parallel to each other
Peculiarities of the rectangle:
• all four angles are equal right angles
• opposite sides are equal in length and parallel to each other
2nd floor: Kite quadrilateral, Parallelogram, Symmetrical trapezoid
Properties of the kite quadrilateral:
• two pairs of adjacent sides that are equal in length
• two of the angles within the kite are equal
Special features of the parallelogram:
• opposite sides are equal in length and parallel to each other
• opposite angles are equal
Properties of the symmetrical trapezoid:
• two equal legs
• two parallel sides
• the interior angles on each parallel side are equal
1st floor: General trapezoid
Special feature of the general trapezoid:
• two parallel sides
Ground floor: General quadrilateral
• no special properties other than four sides and four corners
Quadrilaterals – The most important points
• Quadrilateral = 4 sides and 4 corners
• Distinction between general and special quadrilaterals
• The classification is based on special properties such as angles, symmetry and parallelism
• Of great importance for many practical activities
What is Banco Return On Tangible Assets from 2010 to 2024 | Stocks: BBDO - Macroaxis
BBDO Stock USD 2.10 0.06 2.78%
Banco Bradesco's Return On Tangible Assets yearly trend continues to be very stable, with very little volatility. Return On Tangible Assets is likely to drop to 0.01. Return On Tangible Assets is a profitability metric that measures a company's ability to generate earnings from its tangible assets.
View All Fundamentals
Return On Tangible Assets: First Reported 2010-12-31; Previous Quarter 0.00747938; Current Value 0.007105; Quarterly Volatility 0.00350253
Check Banco Bradesco financial statements over time to gain insight into future company performance. You can evaluate financial statements to find patterns among Banco Bradesco's main balance sheet or income statement drivers, such as Depreciation And Amortization of 6.8 B, Interest Expense of 164.2, or Total Revenue of 317.9 B, as well as many indicators such as Price To Sales Ratio of 0.82, Dividend Yield of 0.059, or PTB Ratio of 0.75. Banco financial statements analysis is a perfect complement when working with Banco Bradesco Valuation.
Banco Return On Tangible Assets
Check out the analysis of Banco Bradesco Correlation against competitors.
Latest Banco Bradesco's Return On Tangible Assets Growth Pattern
Below is the plot of the Return On Tangible Assets of Banco Bradesco SA over the last few years. It is a profitability metric that measures a company's ability to generate earnings from its tangible
assets. Banco Bradesco's Return On Tangible Assets historical data analysis aims to capture in quantitative terms the overall pattern of either growth or decline in Banco Bradesco's overall financial
position and show how it may relate to other accounts over time.
Return On Tangible Assets 10 Years Trend
Banco Return On Tangible Assets Regression Statistics
Arithmetic Mean 0.01
Geometric Mean 0.01
Coefficient Of Variation 25.41
Mean Deviation 0
Median 0.01
Standard Deviation 0
Sample Variance 0.000012
Range 0.0127
R-Value -0.81
Mean Square Error 0.00000449
R-Squared 0.66
Significance 0.0002
Slope -0.0006
Total Sum of Squares 0.0002
Banco Return On Tangible Assets History
About Banco Bradesco Financial Statements
Banco Bradesco investors utilize fundamental indicators, such as Return On Tangible Assets, to predict how Banco Stock might perform in the future. Analyzing these trends over time helps investors make informed market timing decisions. For further insights, please visit our fundamental analysis.
Return On Tangible Assets: Last Reported 0.01; Projected for Next Year 0.01
Pair Trading with Banco Bradesco
One of the main advantages of trading using pair correlations is that every trade hedges away some risk. Because there are two separate transactions required, even if the Banco Bradesco position performs
unexpectedly, the other equity can make up some of the losses. Pair trading also minimizes risk from directional movements in the market. For example, if an entire industry or sector drops because of
unexpected headlines, the short position in Banco Bradesco will appreciate offsetting losses from the drop in the long position's value.
0.77 ECBK (ECB Bancorp)
0.74 EGBN (Eagle Bancorp; fiscal year end 22nd of January 2025)
0.71 VBNK (VersaBank; fiscal year end 11th of December 2024)
0.68 VBTX (Veritex Holdings; fiscal year end 28th of January 2025)
0.65 TECTP (Tectonic Financial)
The ability to find closely correlated positions to Banco Bradesco could be a great tool in your tax-loss harvesting strategies, allowing investors a quick way to find a similar-enough asset to
replace Banco Bradesco when you sell it. If you don't do this, your portfolio allocation will be skewed against your target asset allocation. So, investors can't just sell and buy back Banco Bradesco
- that would be a violation of the tax code under the "wash sale" rule, and this is why you need to find a similar enough asset and use the proceeds from selling Banco Bradesco SA to buy it.
The correlation of Banco Bradesco is a statistical measure of how it moves in relation to other instruments. This measure is expressed in what is known as the correlation coefficient, which ranges
between -1 and +1. A perfect positive correlation (i.e., a correlation coefficient of +1) implies that as Banco Bradesco moves, either up or down, the other security will move in the same direction.
Alternatively, perfect negative correlation means that if Banco Bradesco SA moves in either direction, the perfectly negatively correlated security will move in the opposite direction. If the
correlation is 0, the equities are not correlated; they are entirely random. A correlation greater than 0.8 is generally described as strong, whereas a correlation less than 0.5 is generally
considered weak.
Correlation analysis
and pair trading evaluation for Banco Bradesco can also be used as hedging techniques within a particular sector or industry or even over random equities to generate a better risk-adjusted return on
your portfolios.
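The correlation coefficient described in this section can be computed directly with NumPy. The two price series below are made up for illustration; they are not Banco Bradesco data:

```python
import numpy as np

# Hypothetical daily closing prices (illustrative only)
a = np.array([2.10, 2.12, 2.08, 2.15, 2.20, 2.18])
b = 1.5 * a + 0.3   # an instrument that moves in lockstep with a
c = -a              # an instrument that moves opposite to a

corr_ab = np.corrcoef(a, b)[0, 1]   # perfect positive correlation: +1
corr_ac = np.corrcoef(a, c)[0, 1]   # perfect negative correlation: -1

print(round(corr_ab, 6), round(corr_ac, 6))
```

By the thresholds quoted above, any coefficient above 0.8 in magnitude would count as a strong (anti-)correlation.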
When determining whether Banco Bradesco SA offers a strong return on investment in its stock, a comprehensive analysis is essential. The process typically begins with a thorough review of Banco Bradesco's financial statements, including income statements, balance sheets, and cash flow statements, to assess its financial health. Key financial ratios are used to gauge profitability, efficiency, and growth potential of Banco Bradesco Sa Stock. Outlined below are crucial reports that will aid in making a well-informed decision on Banco Bradesco Sa Stock:
Check out the analysis of Banco Bradesco Correlation against competitors. You can also try the Pattern Recognition module to use different Pattern Recognition models to time the market across multiple global exchanges.
Is the Banks space expected to grow? Or is there an opportunity to expand the business' product line in the future? Factors like these will boost the valuation of Banco Bradesco. If investors know Banco will grow in the future, the company's valuation will be higher. The financial industry is built on trying to define current growth potential and future valuation accurately. All the valuation information about Banco Bradesco listed above has to be considered, but the key to understanding future value is determining which factors weigh more heavily than others.
Quarterly Earnings Growth: 0.436; Dividend Share: 1.114; Earnings Share: 0.23; Revenue Per Share: 7.098; Quarterly Revenue Growth: 0.429
The market value of Banco Bradesco SA
is measured differently than its book value, which is the value of Banco that is recorded on the company's balance sheet. Investors also form their own opinion of Banco Bradesco's value that differs
from its market value or its book value, called intrinsic value, which is Banco Bradesco's true underlying value. Investors use various methods to calculate intrinsic value and buy a stock when its
market value falls below its intrinsic value. Because Banco Bradesco's market value can be influenced by many factors that don't directly affect Banco Bradesco's underlying business (such as a
pandemic or basic market pessimism), market value can vary widely from intrinsic value.
Please note, there is a significant difference between Banco Bradesco's value and its price, as these two are different measures arrived at by different means. Investors typically determine if Banco Bradesco is a good investment by looking at such factors as earnings, sales, fundamental and technical indicators, and competition, as well as analyst projections. However, Banco Bradesco's price is the amount at which it trades on the open market and represents the number that a seller and buyer find agreeable to each party. | {"url":"https://widgets.macroaxis.com/financial-statements/BBDO/Return-On-Tangible-Assets","timestamp":"2024-11-11T21:21:42Z","content_type":"text/html","content_length":"329414","record_id":"<urn:uuid:54cfd0aa-4bf6-4538-89e2-846d12d6b8d6>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00732.warc.gz"}
36 Best Math Tutors in Gaya, India | Math Tuition - MyPrivateTutor
I teach Mathematics and have four years of experience in teaching. I live on Budhlal Bhagat Lane.
Hello students, I am Manish Kumar. I teach Mathematics as my main subject, and I have more than 3 years of teaching experience. I have taught many studen... | {"url":"https://www.myprivatetutor.com/mathematics-tutors-in-gaya","timestamp":"2024-11-05T17:00:25Z","content_type":"text/html","content_length":"812748","record_id":"<urn:uuid:44fb3dd0-9a85-43f8-9368-1d5612d2ba31>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/WARC/CC-MAIN-20241105145721-20241105175721-00619.warc.gz"}
Rounding to the Nearest Micrometer Calculator | Online Calculator
1. How can Rounding be used in estimation?
The simple form of estimation is rounding. Using rounding, very long numbers are made simpler or expressed in terms of the nearest unit.
2. What are the limitations of a micrometer?
The micrometer has a limited range; measuring bigger objects requires multiple micrometers, which can get very expensive.
3. What are the benefits of rounding numbers?
Rounding numbers means creating an approximation of the original value. The benefit of rounding is that it gives numbers that are easier to work with.
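As a concrete illustration of rounding to the nearest unit (a hypothetical one-liner, not the calculator's actual code), a length given in millimeters can be rounded to the nearest whole micrometer like this:

```python
def to_nearest_micrometer(length_mm: float) -> int:
    """Round a length given in millimeters to the nearest whole micrometer."""
    return round(length_mm * 1000)  # 1 mm = 1000 micrometers

print(to_nearest_micrometer(1.2346))  # 1235
```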
4. How do you round numbers to the nearest micrometer?
You need to give the number as input for the Rounding to the Nearest Micrometer Calculator and hit the calculate button to avail the rounded value as the answer. | {"url":"https://roundingcalculator.guru/rounding-to-the-nearest-micrometer-calculator/","timestamp":"2024-11-10T17:02:24Z","content_type":"text/html","content_length":"43301","record_id":"<urn:uuid:cc066253-20e9-4d85-9e0d-1c19a1980f34>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00511.warc.gz"} |
Postdoctoral Researcher
Department of Mathematics, ETH Zurich
I am currently a postdoctoral researcher at the Seminar for Applied Mathematics at the Swiss Federal Institute of Technology in Zurich.
My research focuses on probabilistic and statistical machine learning, mathematical statistics, stochastic analysis, and applied analysis.
I am particularly interested in stochastic dynamics, statistical inverse problems, and inference from multidimensional stochastic processes, and my work frequently uses rough paths theory to bridge
these areas. I am especially drawn to research at the intersection of machine learning and stochastic analysis, where I aim to combine diverse mathematical concepts and techniques to address
statistical problems with substantial practical applications.
Previously, I was a postdoctoral researcher at the Department of Statistics at Columbia University. I am also an associate member of the DataSıg Research Group. I received my PhD in Mathematics from
the University of Oxford in autumn 2022, under the supervision of Harald Oberhauser. Prior to this, I completed an MSc in Pure Mathematics at Imperial College London and a BSc and MSc in Mathematics
with a minor in Theoretical Physics at Ulm University in Germany.
A detailed CV is available on request.
ETH Zurich, Department of Mathematics
Rämistrasse 101, HG E 62.1
CH-8092 Zurich, Switzerland
Email: alexander.schell at math.ethz.ch
Publications and Preprints
ETH Zurich:
Head Assistant for High-Performance Computing for CSE (Fall 2024) and Numerical Analysis of Stochastic Differential Equations (Autumn 2024).
University of Oxford:
Tutor for Stochastic Differential Equations (Maths C8.1, Michaelmas 2021); Teaching Assistant for Probability, Measure and Martingales (Maths B8.1, Michaelmas 2020) and Functional Analysis II (Maths
B4.2, Hilary 2019)
Ulm University:
Head Tutor for Ordinary Differential Equations ('Gewöhnliche Differentialgleichungen', Summer Semester 2018) | {"url":"https://sites.google.com/view/alexander-schell/home","timestamp":"2024-11-14T14:44:35Z","content_type":"text/html","content_length":"79638","record_id":"<urn:uuid:532fdd7b-1db5-4a6d-8bbf-02f9202d8664>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00625.warc.gz"} |
Keep empty/null in if-then calculation
I have a matrix table with dimensions, and in it there are values up to 100% and also some empty/null lines, because there is no entry for some specific dimension (which is correct). However, our customer now wants there to be only the value 0 or 100. So I created a custom calculation and set everything with value 100 to 100 and everything below to 0 (if sum(0,0,m1) = 100 then 100 else 0). Fits. But now it also makes a 0 out of the null/empty values. How do I get it to keep the empty rows and not make a 0 out of NULL?
5 comments
• Hi Marcel
You can test if a cell is empty with the count function.
Next problem is to keep it empty, since the calculation engine can't return nulls (or spaces).
However you can hide the values that are empty with some visibility agent logic.
1. First you make a change to your calculation like this:
if count(d1,0,m1) = 0 then 999 else if sum(0,0,m1) = 100 then 100 else 0
This means empty cells will be assigned the value 999 and if they are not empty, your logic is applied.
2. Now make a visibility agent that only hides values (not members).
Make a condition saying:
That should do it.
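The sentinel-value pattern from the answer above (assign 999 to empty cells, then hide that value with a visibility agent) can be sketched in ordinary code. This Python sketch only illustrates the logic; it is not TARGIT syntax:

```python
SENTINEL = 999  # stands in for the cells the visibility agent should hide

def recode(cell):
    """Empty cells get the sentinel, 100 stays 100, everything else becomes 0."""
    if cell is None:              # corresponds to count(d1,0,m1) = 0
        return SENTINEL
    return 100 if cell == 100 else 0

cells = [100, 42, None, 0, 100]
print([recode(c) for c in cells])  # [100, 0, 999, 0, 100]
```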
• Thank you Niels. The tip with the count function helped me out.
I am experiencing an issue similar to the one Marcel describes in his original post here. Mine differs, though, in that the calculation I need to hide is done as a column, and it doesn't look like visibility agents can be set on calculated columns? My table looks like the one below
sum(d-3, 0, m3) * 1,073271 for the 2022 column.
sum(d-2, 0, m3) * 1,041 for the 2023 column and
sum(d-1, 0, m3) for the 2024 column.
2024, however, flatlines for months that have not yet occurred, and I would like to hide these months. I can edit the formula for 2024 to something like:
if sum(d-1, 0, m3) > sum(d-1, -1, m3) then sum(d-1, 0, m3) else 0
which handles the months correctly, but just flatlines the graph instead of hiding the values. So I also wish for the ability to return NULLs in a calculation, or some way of hiding values in a calculated column?
• You mention that "it doesn't look like visibility agents can be set on calculated columns?"
As far as i can see, I use visibility agents on calculated columns. So I wonder why this isn't available in you case?
Perhaps I am missing something then.
I have the following calculations on my chart:
Where 2022, 2023, and 2024 are defined as columns c1, c2, and c3. Then under visibility agents I can only select the measures:
If I add a new calculated measure, it appears under visibility agents just fine.
Please sign in to leave a comment. | {"url":"https://community.targit.com/hc/en-us/community/posts/12555743710108-Keep-empty-null-in-if-then-calculation","timestamp":"2024-11-05T15:29:56Z","content_type":"text/html","content_length":"43538","record_id":"<urn:uuid:f68e726c-34c5-40e7-98d9-fe714688b926>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00698.warc.gz"} |
Hou Tin Chau
Iterated shadows in the hypercube
Combinatorics Seminar
22nd October 2024, 11:00 am – 12:00 pm
Fry Building, 2.04
Let n be an even integer, and let A be a family of (n/2)-element subsets of {1,2,...,n}, i.e., A is a subset of the middle layer of the n-dimensional hypercube. The rth iterated upper shadow of A is
the family of all (n/2+r)-element subsets T of {1,2,...,n} such that T is a superset of S for some S in A (so it is a subset of the (n/2+r)th layer of the hypercube). The classical Kruskal-Katona
theorem implies that, for A of measure ½, the measure (or density) of the rth iterated upper shadow of A is minimised by taking A to be an initial segment of the lexicographical ordering, and in this
case one needs to go up r = Ω(n) layers for the upper shadow to get an Ω(1) density-increase in the (n/2+r)-th layer. Friedgut conjectured that, if A has measure ½ and we consider both A its
complement A^c in the middle layer, then for every ε>0, going up only r = ⌈ε√n⌉ layers already guarantees that one of the upper shadows of A and A^c has an Ω_ε (1) density-increase, for sufficiently
large n. We will outline a proof of this conjecture. We use the FKG inequality, together with a random restriction argument and a lemma saying that certain Johnson graphs have spectral gap uniformly
bounded from below (we prove the latter by adapting a recently-introduced technique of Koshelev). Joint work with David Ellis and Marius Tiba. | {"url":"https://www.bristolmathsresearch.org/seminar/hou-tin-chau-2/","timestamp":"2024-11-05T20:28:15Z","content_type":"text/html","content_length":"54854","record_id":"<urn:uuid:e84a9ff8-7cc9-47f9-972c-c0505512add5>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00385.warc.gz"} |
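The iterated upper shadow defined in the abstract above can be computed by brute force for small n; this Python sketch is illustrative only:

```python
def upper_shadow(family, ground):
    """All sets obtained by adding one element of `ground` to a member of `family`."""
    return {frozenset(s | {x}) for s in family for x in ground - s}

def iterated_upper_shadow(family, ground, r):
    """Apply the upper shadow operator r times."""
    for _ in range(r):
        family = upper_shadow(family, ground)
    return family

ground = frozenset(range(4))   # {0, 1, 2, 3}, i.e. n = 4
A = {frozenset({0, 1})}        # a single set in the middle layer
print(sorted(sorted(s) for s in iterated_upper_shadow(A, ground, 1)))  # [[0, 1, 2], [0, 1, 3]]
```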
How do you find the interval of convergence for a geometric series? | Socratic
How do you find the interval of convergence for a geometric series?
1 Answer
Since a geometric series is not a power series, it is not appropriate to ask for its interval of convergence. Did you have something else in mind?
I hope that this was helpful.
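That said, for a concrete geometric series $\sum_{n \ge 0} a r^n$ the natural analogue of a convergence condition is on the ratio: the partial sums converge to a/(1 - r) exactly when |r| < 1. A quick numerical check (illustrative only):

```python
def partial_sum(a, r, n_terms):
    """Partial sum a + a*r + a*r**2 + ... with n_terms terms."""
    return sum(a * r**k for k in range(n_terms))

a, r = 1.0, 0.5
print(partial_sum(a, r, 50))   # approaches a / (1 - r) = 2.0
```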
Impact of this question
4140 views around the world | {"url":"https://socratic.org/questions/how-do-you-find-the-interval-of-convergence-for-a-geometric-series","timestamp":"2024-11-13T14:46:25Z","content_type":"text/html","content_length":"32746","record_id":"<urn:uuid:46195aed-411a-4343-ba30-7b50c78821e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00493.warc.gz"} |
Gyrokinetic linear simulation of feedback instability in dipole magnetosphere
We perform a novel linear simulation of the feedback instability by applying a gyrokinetic model to the magnetosphere, where the geomagnetic field is modeled by a dipole magnetic field. In order to
avoid huge numerical calculation costs, the effect of the fluctuating mirror force is neglected. It is found that kinetic effects, such as the finite Larmor radius effect and the electron Landau
damping, strongly stabilize the feedback instability and form a peak in the perpendicular wave number spectrum of the growth rate. During the linear growth, it is observed that electron free energy
has a much larger value than the electromagnetic field perturbation energy. Since such large electron free energy may lead to electron heating and acceleration, it is important to understand its
generation mechanism. Analyses based on the energy conservation reveal that the coupling of the non-uniformity of the equilibrium fields along the geomagnetic field and the fluctuating electron
distribution gives rise to thermodynamic power generating the electron free energy. A fine velocity space structure around an average Alfvén velocity of the fluctuating electron distribution mainly
contributes to drive the thermodynamic power.
Auroral light emissions are caused by electrons falling into the ionosphere along the geomagnetic field. It is plausible that the strongly bright aurora called the breakup is due to the strong
electric current, which is caused by a dynamo mechanism somewhere in the magnetosphere.^1 On the other hand, various time-varying characteristics of the auroral fine structure suggest that the Alfvén
wave traveling along the geomagnetic field is in a strong turbulent state, as indicated by spacecraft observations.^2 There are various mechanisms for excitation of the Alfvén wave in the
magnetosphere: the oscillation of the geomagnetic field triggered by the substorm,^3 the solar wind pressure disturbance at the magnetopause,^4 the Kelvin–Helmholtz instability at the magnetospheric
boundary,^5 and the feedback instability that occurs in the magnetosphere–ionosphere coupling system. The feedback instability is categorized into two types: one is due to the standing Alfvén wave
called the field line resonance with a low frequency (1–10 mHz) and a long wavelength ($\sim 10^5$ km),^6 and the other is due to the Alfvén wave trapped in the vicinity of the ionosphere called the ionospheric cavity mode (or the ionospheric resonator) with a high frequency (0.1–1 Hz) and a short wavelength ($\sim 10^4$ km).^7
In the following, we focus on the low-frequency and long-wavelength feedback instability associated with the field line resonance. In order to explain the behavior of the so-called quiet aurora,
various studies have been done for the feedback instability.^8 In the high temperature region of the magnetosphere, kinetic effects, such as the finite Larmor radius (FLR) effect and the electron
Landau damping, on the Alfvén wave become important, which is the so-called kinetic Alfvén wave.^9,10 In addition, the strong non-uniformity along the geomagnetic field could affect the
characteristics of the Alfvén wave and the wave–particle interaction with electrons. In the research context considering the non-uniformity, the magnetosphere has often been modeled by the dipole
magnetic field. The feedback instability with the Alfvén wave propagating in the dipole magnetic field has been investigated using fluid models.^11–18 It has been discussed how the wave–particle
interaction between the Alfvén wave and electrons occurs in the presence of the non-uniformity of the dipole magnetic field using kinetic models.^19–21 There has also been discussion about how the
non-uniformity of the dipole magnetic field affects the characteristics of the kinetic Alfvén wave, for example, the amplitude of the parallel electric field.^22–27
The importance of the electron Landau damping for the Alfvén wave in the magnetosphere has been pointed out.^10,28,29 In our preceding studies, in order to investigate the kinetic effects on the
Alfvén wave, we derive a gyrofluid model of the magnetosphere from the gyrokinetic model, where the geomagnetic field is modeled by a uniform magnetic field.^30 In the derivation of the gyrofluid
model, the dispersion relation of the kinetic Alfvén wave^10,31 is used to close the electron parallel pressure perturbation. Using the gyrofluid model, it is clarified that the electron Landau
damping strongly suppresses the linear growth rate of the feedback instability^30 and plays an important role in the nonlinear saturation of the feedback instability.^32
However, fluid models cannot reveal how the energy is transferred to electrons through the electron Landau damping. In addition, in the presence of the non-uniformity of the geomagnetic field, it is
not straightforward to derive the dispersion relation of the kinetic Alfvén wave in a simple algebraic form, which is necessary for constructing the gyrofluid model. These issues motivate the direct
application of the gyrokinetic model to the magnetosphere as an initial value problem rather than through the dispersion relation. Then, a simulation method is developed, which solves the gyrokinetic
model of the magnetosphere as an initial value problem and properly treats the magnetosphere–ionosphere boundary condition.^33 For the uniform model of the geomagnetic field, it is confirmed that
simulation results given by the above-mentioned method reasonably agree with those predicted by the dispersion relation.^33 In this study, this methodology is applied to the dipole model of the
geomagnetic field.
The remainder of this paper is organized as follows. Models of the magnetosphere and ionosphere are shown in Sec. II. A linear simulation of the feedback instability for the dipole model of the
geomagnetic field is performed in Sec. III. It is observed that kinetic effects strongly stabilize the feedback instability and form a peak in the perpendicular wave number spectrum of the growth
rate. During the linear growth of the instability, it is observed that the electron free energy is much larger than the energy of the electromagnetic field perturbation. Analyses based on the energy
conservation are performed to clarify the generation mechanism of the large electron free energy. A summary is given in Sec. IV.
A. Nonlinear gyrokinetic model of magnetosphere
We consider a magnetic flux tube along the geomagnetic field. By considering a sufficiently thin flux tube, we can assume that perturbations are periodic in the plane perpendicular to the geomagnetic
field. Then, the so-called $δf$-gyrokinetic equation is introduced. To focus on the dynamics of the feedback instability in the parallel direction, we neglect perpendicular gradients of the
background magnetic field, the equilibrium density, the equilibrium temperature, and the equilibrium pressure for simplicity. Only the perpendicular gradient of the equilibrium electrostatic
potential, which is a driving source of the feedback instability, is taken into account.
Since the ion dynamics is of secondary importance, we only consider the electron gyrokinetic equation in the zero Larmor radius limit (in other words, the electron drift-kinetic equation). In order
to discuss the equilibrium field, we first consider the mechanical equilibrium by assuming the magnetohydrodynamic (MHD) force balance along the magnetic field,
$\mathbf{b}\cdot\nabla p_{\parallel e0} - \left(p_{\parallel e0} - p_{\perp e0}\right)\mathbf{b}\cdot\nabla \ln B_0 = 0,$
where $\mathbf{b}$ is the unit vector parallel to the equilibrium magnetic field, $B_0$ is the magnitude of the equilibrium magnetic field, and $p_{\parallel e0}$ and $p_{\perp e0}$ are the parallel and perpendicular equilibrium electron pressures, respectively. Furthermore, if the pressure is assumed to be isotropic, the pressure should be uniform along the magnetic field,
$p_{\perp e0} = p_{\parallel e0} = n_{e0} T_e = \mathrm{const.},$
where $n_{e0}$ is the equilibrium electron density, and $T_e$ is the equilibrium electron temperature. Next, we consider the thermodynamical equilibrium. In order to determine the equilibrium electron distribution $f_{e0}$
, the equation determining the stationary state should be solved. However, determining $f_{e0}$ from first principles, such that the density decreases and the temperature increases from the ionosphere toward the magnetic equator as in the observations, has not been accomplished so far. This could be due to
oversimplification, such as ignoring the pitch angle scattering and acceleration of electrons by whistler waves, source/sink of particle and heat from the ionosphere and plasma sheet, and the
perpendicular transport due to particle drifts. In fact, those play important roles in the formation of the radiation belt, for example. Here, as an alternative to an argument based on first
principles, we assume in advance the electron density and temperature profiles formed by a combination of various physical mechanisms. In addition, we assume that the system is sufficiently
randomized by the wave–particle interaction and the particle drifts. For such a system, it is anticipated that the local thermodynamic equilibrium, i.e., the Maxwellian distribution, gives a reasonable approximation. For these reasons, we consider the following equilibrium distribution function:
$f_{e0} = \frac{n_{e0}}{\left(\sqrt{\pi}\, v_{te}\right)^{3}} \exp\!\left(-\frac{v_{\parallel}^{2} + 2\mu B_0/m_e}{v_{te}^{2}}\right),$
where $v_{te} = \sqrt{2T_e/m_e}$ is the electron thermal velocity, $m_e$ is the electron mass, $\mu = m_e v_{\perp}^{2}/(2B_0)$ is the magnetic moment, and $v_{\parallel}$ and $v_{\perp}$ are the parallel and perpendicular electron velocities, respectively. As we will discuss later, the equilibrium fields appearing in this distribution change along the geomagnetic field. Note that, when taking into account energetic (superthermal) particles in the magnetosphere, it is common to use the Kappa distribution, which is a mathematical extension of the Maxwellian distribution.
For the electron gyrokinetic equation determining the fluctuating electron distribution $\delta f_{e,k}$, we start from Eq. (12) in Ref.,
$\frac{\partial}{\partial t}\left(B_0\, \delta f_{e,k}\right) + \left[\left(v_{\parallel}\mathbf{b} + \mathbf{v}_E\right)\cdot\nabla\left(B_0 f_e\right)\right]_k - \frac{v_{\parallel}}{B_0}\left[\left(\mathbf{b}\cdot\nabla B_0\right)\left(B_0 f_e\right)\right]_k - \mu\left(\mathbf{b}\cdot\nabla B_0\right)\frac{\partial}{\partial v_{\parallel}}\left(B_0\, \delta f_{e,k}\right) = \frac{e}{m_e}\tilde{E}_{\parallel k}\frac{\partial}{\partial v_{\parallel}}\left(B_0 f_{e0}\right),$
where $f_e = f_{e0} + \delta f_e$, $\mathbf{v}_E = (c/B_0)\,\mathbf{b}\times\nabla\left(\phi_0 + \tilde{\phi}\right)$, and $\tilde{E}_{\parallel} = -\mathbf{b}\cdot\nabla\tilde{\phi} - (1/c)\,\partial \tilde{A}_{\parallel}/\partial t$, where $e$ is the elementary charge, $\phi_0$ is the equilibrium electrostatic potential, $\tilde{\phi}$ is the electrostatic potential perturbation, $\tilde{A}_{\parallel}$ is the perturbation of the parallel component of the vector potential, and $c$ is the velocity of light.
Here, we consider the numerical resolution in the velocity space
. For each position along the geomagnetic field, meshes of
need to be sufficiently smaller than
, respectively. For example, for our equilibrium profiles discussed later, the meshes required for
are proportional to
, respectively. Since
changes roughly by a factor of
along the geomagnetic field, resolving the velocity space for the whole magnetosphere is numerically difficult. For this reason, we neglect the fourth term on the left-hand side of Eq.
, representing the fluctuating mirror force. Here, it should be noted that the leading-order mirror force necessary for the MHD force balance is taken into account through the
term in Eq.
. Considering that a velocity space integral is defined by
, operating
to Eq.
without the fluctuating mirror force term gives
. The fluid moments of $\delta f_{e,k}$ are given as follows: $\tilde{n}_{e,k}$ is the electron density perturbation, $\tilde{j}_{\parallel k}$ is the parallel current density perturbation, $\tilde{u}_{\parallel e,k}$ is the parallel electron velocity perturbation, and $\tilde{p}_{\parallel e,k}$ is the parallel electron pressure perturbation. In order to close the gyrokinetic model, the Ampère's law and gyrokinetic quasi-neutrality condition are necessary. The Ampère's law is given by
$k_{\perp}^{2}\tilde{A}_{\parallel k} = \frac{4\pi}{c}\tilde{j}_{\parallel k},$
where $k_{\perp}$ is the perpendicular wavenumber. The gyrokinetic quasi-neutrality condition involves the ion charge, the ion temperature, the equilibrium ion density, the modified Bessel function, the Alfvén velocity, the ion mass, the ion Larmor radius, the ion thermal velocity, and the ion cyclotron frequency. In this condition, the ion density perturbation is neglected, since it does not play an important role in a system where ion thermal transport effects are not dominant.
For later use, we derive a gyrofluid model: considering the zeroth and first order moments of $v_{\parallel}$ for Eq.
, the electron continuity equation and the generalized Ohm's law (or the parallel electron momentum equation) are given by
respectively, where the electron inertial length is $d_e = c/\omega_{pe}$, and $\omega_{pe}$ is the electron plasma frequency. Different from Eq. (29) in Ref.
, the fluctuating mirror force term characterized by the perpendicular electron density fluctuation is not involved in Eq.
B. Linearized gyrokinetic model of the magnetosphere
Similar to Ref., we consider modified dipole coordinates $(x, y, z)$, where an $(x, y)$ plane is the plane perpendicular to the geomagnetic field, and $z$ is the parallel position. In our coordinates, the non-uniformity along the geomagnetic field is considered, but the curvature and perpendicular gradient of the geomagnetic field are omitted. This is
consistent with the assumption of
, since the curvature is written by
. The ionosphere and magnetic equatorial plane are located at
, respectively. However, as mentioned later, the fluid buffer is introduced in the range of
. Thus, the gyrokinetic model is applied to the range of
. The conservation of the magnetic flux, i.e., $B_0 L_{\perp}^{2} = \mathrm{const.}$, is held along the magnetic field, where $L_{\perp}$ is the perpendicular scale length depending on the value of $z$. In our coordinates, the perpendicular wave number is defined by $k_{\perp} = \sqrt{k_x^{2} + k_y^{2}}$, where $k_x = 2\pi m_x/L_{\perp}$ and $k_y = 2\pi m_y/L_{\perp}$, and $m_x$ and $m_y$ are the mode numbers in the $x$ and $y$ directions, respectively. Similarly in Ref.
, but considering the $z$-dependence of the equilibrium fields, a linearized version of Eq.
is given by
, and
are used. Here, we remind that the perpendicular gradients of the equilibrium fields, except the electrostatic potential, are neglected in our model. For later use, linearized versions of Eqs.
are also given by
respectively, where
is used.
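The flux-tube geometry used here can be illustrated numerically. The parameterization below is the textbook dipole field (not necessarily the paper's exact normalization); it shows that, when the magnetic flux $B_0 L_\perp^2$ is held constant, the perpendicular scale length shrinks as the field strengthens away from the magnetic equator:

```python
import numpy as np

def dipole_B(lam, L, k0=1.0):
    """Dipole field strength on a field line of L-shell L at magnetic latitude
    lam (radians): with r = L cos^2(lam), B = (k0/L^3) sqrt(1 + 3 sin^2 lam) / cos^6(lam)."""
    return k0 / L**3 * np.sqrt(1.0 + 3.0 * np.sin(lam) ** 2) / np.cos(lam) ** 6

lam = np.linspace(0.0, 1.1, 100)   # from the equator toward higher latitude
B = dipole_B(lam, L=8.0)

# Conservation of magnetic flux: B0 * Lperp^2 = const  =>  Lperp ~ 1 / sqrt(B0)
Lperp = np.sqrt(B[0] / B)

assert np.all(np.diff(B) > 0)            # field strengthens away from the equator
assert np.allclose(B * Lperp**2, B[0])   # flux through the tube is conserved
```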
Here, we review the dispersion relation of the kinetic Alfvén wave. In the case of the uniform equilibrium along the geomagnetic field, the dispersion relation of the kinetic Alfvén wave [Eq. (46) in Ref. 30] can be derived from Eqs. (11), (12), and (15) by considering $\partial/\partial t \rightarrow -i\omega$ and $\partial/\partial z \rightarrow i k_{\parallel}$, where $\omega$ is the frequency, and $k_{\parallel}$ is the wave number parallel to the geomagnetic field. The derived dispersion relation is basically identical to that in Ref. 10 and involves effects of the ion finite Larmor radius, the electron Landau damping, the electron inertia, and the electron pressure. However, as mentioned in the Introduction, it is not straightforward to derive the dispersion relation of the kinetic Alfvén wave in the presence of the non-uniformity of the equilibrium along the geomagnetic field. Therefore, it is necessary to solve Eqs. (11), (12), and (15) as an initial value problem.
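As a point of reference, a commonly used simplified (fluid-limit) form of the kinetic Alfvén wave dispersion relation is $\omega^2 = k_\parallel^2 v_A^2\,(1 + k_\perp^2\rho_s^2)/(1 + k_\perp^2 d_e^2)$, which captures the finite-Larmor-radius and electron-inertia corrections but not the Landau damping; this form is an assumption for illustration and is not the paper's Eq. (46):

```python
import math

def kaw_frequency(kpar, kperp, v_A, rho_s, d_e):
    """Simplified kinetic Alfven wave frequency (no Landau damping):
    omega = kpar * v_A * sqrt((1 + kperp^2 rho_s^2) / (1 + kperp^2 d_e^2))."""
    return kpar * v_A * math.sqrt(
        (1.0 + (kperp * rho_s) ** 2) / (1.0 + (kperp * d_e) ** 2)
    )

# kperp -> 0 recovers the shear Alfven wave, omega = kpar * v_A
print(kaw_frequency(1.0, 0.0, 1.0, 1.0, 0.5))   # 1.0
# the FLR correction raises the frequency at finite kperp * rho_s
print(kaw_frequency(1.0, 1.0, 1.0, 1.0, 0.0))   # sqrt(2) = 1.414...
```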
C. Law of conservation of energy in magnetosphere
In the following, we derive the law of conservation of energy for the gyrokinetic model. Adding Eq.
multiplied by
and its complex conjugate gives
is used. Operating
to Eq.
, and
are defined by
respectively, and
are defined by
respectively. In the derivation of Eq.
, we used
, Eqs.
, and
$ϕ̃_k[ϕ_0,ϕ̃]_k^* + c.c. = 0$
. In the total energy
is the magnetic field perturbation energy as in the electrodynamics. In the thermodynamics,
corresponds to the gyrokinetic electron free energy (generalized grand canonical potential energy),
is the energy accompanied by the gyrocenter motion, and
is the energy associated with the finite Larmor radius effect. In the total energy source
is the source of the electron free energy, and
is the electric power.
Next, the energy conservation of the linearized gyrofluid model is derived to interpret
from a thermo-hydrodynamic perspective. Similar to the derivation of Eq.
, we consider Eq.
multiplied by
, Eq.
multiplied by
, and Eq.
multiplied by
. Adding those equations with their complex conjugates and operating
are defined by
$E_{n_e}=(1/2)∫_{z_0}^{ℓ}dz L_⊥^2 (T_e/n_{e0})|ñ_{e,k}|^2,$
$E_{u_{∥e}}=(1/2)∫_{z_0}^{ℓ}dz L_⊥^2 m_e n_{e0}|ũ_{∥e,k}|^2,$
respectively, and
is defined by
$S_p=−Re∫_{z_0}^{ℓ}dz L_⊥^2[(T_e ñ_{e,k}^* B_0/n_{e0}) ∂/∂z(n_{e0}ũ_{∥e,k}/B_0) + ũ_{∥e,k}^* B_0 ∂/∂z(p̃_{∥e,k}/B_0)].$
As shown in the latter part of this paper,
as contributions from the lower-order fluid perturbations. Therefore,
is the gyrokinetic extension of the electron fluid perturbation energy. Similarly,
is the gyrokinetic extension of
. From Eq.
represents thermodynamic power exerted on the system by the combination of the electron density perturbation, the electron parallel pressure perturbation, and the electron velocity or flux
perturbation. Therefore,
can be interpreted as the thermodynamic power due to the coupling of various electron fluid perturbations in the non-uniform equilibrium. Considering
for Eq.
defines the thermodynamic power density as
$D_g=−(1/2)∫_{−∞}^{∞}dv_∥ (T_e v_∥/g_{e0}) L_⊥^2 B_0^2 (∂/∂z)|δg_{e,k}/B_0|^2.$
D. Parity separation of fluctuating distribution
The basic method for applying boundary conditions shown later to the fluctuating distribution is the same as that proposed in Ref.
, where
is separated into an even function
and an odd function
, Eqs.
are rewritten as
$j̃_{∥k}=−2e∫_0^{∞}v_∥ δg_{e,k}^{(o)} dv_∥,$
$p̃_{∥e,k}=2m_e∫_0^{∞}v_∥^2 δg_{e,k}^{(e)} dv_∥.$
The linearized gyrokinetic equation Eq.
is separated into even and odd components for
respectively. Considering the parity separation of
, Eqs.
, and
are rewritten as
$E_g=∫_{z_0}^{ℓ}dz∫_0^{∞}dv_∥ L_⊥^2 (T_e/g_{e0})(|δg_{e,k}^{(e)}|^2+|δg_{e,k}^{(o)}|^2),$
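The half-range moment formulas above rely on the parity decomposition: the odd part carries the full current-type moment and the even part the full pressure-type moment. This can be checked numerically with a toy function (the shifted Gaussian below is a hypothetical stand-in, not simulation data):

```python
import math

f = lambda v: math.exp(-(v - 1.0) ** 2)      # toy distribution with no definite parity
f_even = lambda v: 0.5 * (f(v) + f(-v))      # even part
f_odd = lambda v: 0.5 * (f(v) - f(-v))       # odd part

h = 1e-3
pos = [i * h for i in range(20001)]              # v in [0, 20]
allv = [i * h for i in range(-20000, 20001)]     # v in [-20, 20]

# current-like moment: integral of v f dv = 2 * integral_0^inf of v f_odd dv
cur_full = sum(v * f(v) * h for v in allv)
cur_half = 2 * sum(v * f_odd(v) * h for v in pos)

# pressure-like moment: integral of v^2 f dv = 2 * integral_0^inf of v^2 f_even dv
prs_full = sum(v * v * f(v) * h for v in allv)
prs_half = 2 * sum(v * v * f_even(v) * h for v in pos)
```

On a symmetric velocity grid the two sums agree to rounding error, which is exactly why the gyrokinetic equation can be solved on the half range $0 ≤ v_∥ < ∞$ once the fluctuating distribution is split by parity.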
E. Linearized two-fluid model of ionosphere
For the ionosphere model, the linearized electron continuity equation and the charge conservation law (a difference between linearized continuity equations of electrons and ions) are given as
where the subscript I indicates the quantity in the ionosphere,
is the equilibrium electron density,
is the electron density perturbation,
is the ionosphere depth,
is the Pedersen mobility,
is the ion charge,
is the ion mass,
is the ion-neutral particle collision frequency,
is the Hall mobility, and
is the recombination coefficient. For the details of the above-mentioned model, see the original literature, Refs.
, and our brief review of the derivation in Ref.
In addition, we mention that recent studies on the feedback instability associated with the ionospheric cavity mode discuss an effect of the vertical shear of the horizontal ion velocity (due to the change in the ion–neutral collision frequency along the altitude of the ionosphere).^41,42 Since the height-integrated ionosphere model is used in this study, such an effect, which comes from the degree of freedom in the height direction, is out of scope.
A. Parameter and equilibrium profile
The geomagnetic field is approximated by the dipole magnetic field. For the dipole magnetic field, the magnitude of the magnetic field is given by $B_0=(B_p/2)(R_E/R)^3(1+3 cos^2 θ)^{1/2}$, where $R$ is the radial position, $θ$ is the colatitude, $B_p=0.6 G$, and $R_E$ is the Earth radius. By solving a streamline equation for the dipole magnetic field, we consider the coordinate $z$ along the magnetic field line extending from $R=R_E$ and $θ=π/9 rad$, and recall that $z=0$ is the ionosphere and $z=ℓ$ is the magnetic equator, as shown in Fig. 1. For the above-mentioned dipole magnetic field, we obtain $B_0|_{z=0}=B_{0I}=0.573 G$, $B_0|_{z=ℓ}=B_{0M}=4.82×10^{−4} G$, and $ℓ=6.88×10^4 km$.^16 Hereafter, the subscript M indicates the value at the magnetic equator.
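As a quick cross-check, the three quoted values $B_{0I}$, $B_{0M}$, and $ℓ$ can be recomputed from the dipole formula. The sketch below is my own verification, not code from the paper; it interprets θ as the colatitude measured from the pole (which reproduces the quoted field values) and assumes an Earth radius of 6371 km:

```python
import math

Bp = 0.6            # polar surface field [G], from the text
RE = 6371.0         # Earth radius [km] (assumed standard value)
th0 = math.pi / 9   # colatitude of the field-line footpoint

def B(r_over_RE, theta):
    # dipole magnitude: B0 = (Bp/2)(RE/R)^3 (1 + 3 cos^2 theta)^(1/2)
    return (Bp / 2) * r_over_RE**-3 * math.sqrt(1 + 3 * math.cos(theta)**2)

L = 1 / math.sin(th0)**2        # L-shell of the field line, in units of RE
B0I = B(1.0, th0)               # field at the ionospheric end (z = 0)
B0M = B(L, math.pi / 2)         # field at the magnetic equator (z = l)

# arc length along the field line r = L RE sin^2(theta):
# ds = L RE sin(theta) sqrt(1 + 3 cos^2 theta) dtheta  (midpoint rule)
n = 100000
dth = (math.pi / 2 - th0) / n
ell = sum(L * RE * math.sin(th0 + (k + 0.5) * dth)
          * math.sqrt(1 + 3 * math.cos(th0 + (k + 0.5) * dth)**2) * dth
          for k in range(n))
```

The midpoint-rule arc-length integration returns ℓ within 0.1% of the quoted $6.88×10^4 km$, and the two endpoint fields come out at 0.573 G and about $4.8×10^{−4}$ G, consistent with the text.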
Almost the same as in Refs. 30 and 33 (except the value of $B_{0I}$), the parameters in the ionosphere are set as follows: $L_⊥|_{z=0}=L_I=100 km$, $α=10^{−13} m^3 s^{−1}$, $h=30 km$, $n_{I0}=5×10^{10} m^{−3}$, $μ_H=1.75×10^4 m^2 V^{−1} s^{−1}$, $μ_P=0.25μ_H$, $E_{I0}=−∇_⊥ϕ_{I0}=E_{I0}ŷ$, and $E_{I0}=20 mV m^{−1}$. Similarly, using the same density and temperature as in Refs. 30 and 33, the parameters in the magnetosphere are set as follows: $v_{AM}=1.05×10^3 km s^{−1}$, $T_{eM}=T_{iM}=600 eV$, $v_{teM}=1.03×10^4 km s^{−1}$, $n_{0M}=1 cm^{−3}$, $L_⊥|_{z=ℓ}=L_M=L_I B_{0I}/B_{0M}$, $∇_⊥ϕ_0=∇_⊥ϕ_{I0}$, $ρ_{iM}=51.8 km$, and $δ_{eM}=5.31 km$. For convenience, we define $τ_A=ℓ/v_{AM}=65.5 s$.
Following Ref. 21, we use the observations in Ref. 43 for the density profile along the geomagnetic field in the magnetosphere. According to the observations in Ref. 43, the density is approximately proportional to the magnitude of the geomagnetic field. There is no reliable observation for the temperature, but it is roughly several hundred to a thousand eV at the magnetic equator and a few eV near the Earth, i.e., roughly inversely proportional to the magnitude of the geomagnetic field. Therefore, we model $n_0∝B_0$ and $T_e=T_i∝B_0^{−1}$, and these are consistent with the assumption of Eq. (2). Taking into account the relation $B_0L_⊥^2=B_{0M}L_M^2$, the $z$ dependence of the main parameters in the magnetosphere is given as follows: $v_A(z)/v_{AM}=(B_0(z)/B_{0M})^{1/2}$, $v_{te}(z)/v_{teM}=(B_{0M}/B_0(z))^{1/2}$, $ρ_i(z)/L_⊥(z)=(ρ_{iM}/L_M)(B_{0M}/B_0(z))$, and $δ_e(z)/L_⊥(z)=δ_{eM}/L_M$. Figure 2 shows the $z$ profiles of the Alfvén velocity and the electron thermal velocity.
B. Linear simulation
Numerical methods for the buffer layer near the ionosphere and the boundary conditions are described in the Appendix. In the following, linear simulations are performed using Eqs. (38) and (39) for the magnetosphere model, Eqs. (43) and (44) for the ionosphere model, and Eqs. (16), (17), and (A1) for the buffer layer model. Note that, to facilitate the numerical calculation, the equations in the rest frame of the $E×B$ drift are used. As in Ref. 33, in order to calculate $∂Ã_{∥,k}/∂t$ on the right-hand side of Eq. (39), Eq. (17) is used. The time integral is calculated by the fourth-order Runge–Kutta method, with a time step of $Δt/τ_A=10^{−4}$. The $z$ derivative is calculated by the finite difference method, with a step size of $Δz/ℓ=1/256$. The integral in the $v_∥$ space is calculated by the trapezoidal rule. Considering $v_{teM}/v_{AM}=10.3$, the numerical calculation range in the $v_∥$ direction is chosen to be $−50≤v_∥/v_{AM}≤50$. Since the required resolution in the $v_∥$ direction changes along $z$, applying the smallest resolution to the entire magnetosphere increases the numerical cost. In order to avoid this issue, unequal-interval step sizes in the $v_∥$ direction, $Δv_{∥i}=(1+a)^i Δv_{∥0}$ $(i=0,1,2,…,N−1)$, are used, where $a=0.0120778$, $N=400$, and $Δv_{∥0}=5×10^{−3}v_{AM}$.
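The unequal-interval grid can be written down directly; summing the 400 step sizes confirms that they span the half-range up to $50 v_{AM}$ (a quick check using only the parameters quoted above):

```python
a, N, dv0 = 0.0120778, 400, 5e-3      # parameters from the text (dv0 in units of v_AM)
widths = [dv0 * (1 + a) ** i for i in range(N)]
span = sum(widths)                    # half-range covered by the geometric grid
ratio = widths[-1] / widths[0]        # coarsening factor from v = 0 to the grid edge
```

The geometric progression gives `span` very close to 50, while the outermost cell is roughly two orders of magnitude wider than the innermost one, which is how the scheme keeps the fine resolution near $v_∥=0$ without paying that cost over the whole range.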
Figure 3 shows the time evolution of the real and imaginary parts of $ñ_{I,k}$ for $(k_x,k_y)=(0,40)2π/L_⊥$. It is observed that the initial perturbation grows exponentially with a finite frequency, i.e., there exists a phase difference between the real and imaginary parts.
Figure 4 shows the time evolution of $E_g$, $E_ϕ$, and $E_{A∥}$ for $(k_x,k_y)=(0,40)2π/L_⊥$. Similar to Fig. 3, each energy also grows exponentially. It is found that the electron free energy $E_g$ is much larger than the electromagnetic field perturbation energy $E_ϕ+E_{A∥}$. This is a prominent feature, since $E_g$ and $E_ϕ+E_{A∥}$ are comparable in the case of the uniform magnetic field.^33
Figure 5 shows the time evolution of $∫S_g dt$ and $∫S_ϕ dt$ for $(k_x,k_y)=(0,40)2π/L_⊥$. We confirmed that $∫S_g dt+∫S_ϕ dt$ agrees well with $E_g+E_ϕ+E_{A∥}$, which indicates that the energy conservation law Eq. (19) is satisfied. In Fig. 5, it is observed that $∫S_g dt$ is much larger than $∫S_ϕ dt$. Therefore, it is found that the generation of the large $E_g$ in Fig. 4 is mainly caused by $S_g$.
Figure 6 shows the $k_y$ dependence of the growth rate and frequency for $k_x=0$. In Fig. 6, linear simulation results using the gyrokinetic model are represented by dots, and those using the MHD model [Eqs. (16), (17), and (A1) in the limit of $k_⊥ρ_i→0$] are represented by a solid line. As in the case with the uniform magnetic field equilibrium,^30 it is observed that the growth rate has a peak when kinetic effects are taken into account. However, a difference from the uniform equilibrium case is that the peak is located in the higher $k_y$ region. This is probably due to the increase in the average Alfvén velocity and the shorter length of the magnetic field line, which reduce the propagation time of the Alfvén wave along the geomagnetic field. In other words, for the same equilibrium electric field, only high-wavenumber perturbations can resonate with Alfvén waves with such short timescales. On the other hand, it is observed that the frequency does not change significantly even in the presence of the kinetic effects.
Figure 7 shows the contours of the real and imaginary parts of $δg_{e,k}$ in the $z$-$v_∥$ space for $(k_x,k_y)=(0,40)(2π/L_⊥)$ at $t/τ_A=10$. As in the case with the uniform magnetic field equilibrium,^30 a fine structure (the ballistic mode) in the $v_∥$ direction is observed, which is a characteristic of the phase mixing causing the Landau damping. The $z$ dependence differs largely depending on the value of $v_∥$. On the other hand, although the thermal velocity greatly changes along the geomagnetic field, the region where $δg_{e,k}$ exists in the $v_∥$ space hardly changes along the geomagnetic field. In particular, there exists a strong peak in the region of $|v_∥|/v_{AM}≲3$. These imply that the velocity space structure is mainly dominated by the phase mixing, rather than by the thermal velocity. In addition, the average value of the Alfvén velocity is given by $v̄_A=(1/ℓ)∫_0^ℓ v_A dz=1.96v_{AM}$.
Figure 8 shows the $z$ profiles of the real and imaginary parts of $ñ_{e,k}$, $j̃_{∥k}$, and $p̃_{∥e,k}$ for $(k_x,k_y)=(0,40)(2π/L_⊥)$ at $t/τ_A=10$. The region of $z/ℓ≤0.2$ is the FLR-MHD buffer layer, and that of $z/ℓ≥0.2$ is the gyrokinetic model layer. It is observed that the profiles are smoothly connected at $z/ℓ=0.2$. It is found that the $z$ dependences of $ñ_{e,k}$, $j̃_{∥k}$, and $p̃_{∥e,k}$ are fairly different, even taking into account that each perturbation is normalized. This is consistent with the characteristic that the $z$ dependence differs largely depending on the value of $v_∥$ in Fig. 7.
In the following, we analyze the reason why the electron free energy is much larger than the electromagnetic field perturbation energy, as shown in Fig. 4. According to Ref.
, by relating the fluctuating distribution to the fluid perturbations, the free energy can be decomposed into the fluid perturbation energies. In order to relate
to the fluid perturbations, we consider the following Hermitian expansion:
are probabilist's Hermite polynomials defined by
$He_n(v)=(−1)^n exp(v^2/2) (d^n/dv^n) exp(−v^2/2)$
. The specific expressions of
, and so on. The expression of
for higher-order
can be easily obtained by direct calculations using Mathematica, for example. The probabilist's Hermite polynomials satisfy the orthogonality of
$∫_{−∞}^{∞}He_n He_m exp(−v^2/2)dv=(2π)^{1/2} n! δ_{m,n}$
, where
is the Kronecker delta. Using the orthogonality,
in Eq.
can be solved as follows:
The specific expressions of
, and so on. It is obvious that, as the value of
includes higher-order fluid perturbations. Finally, by substituting Eq.
into Eq.
and considering the orthogonality,
is expanded as follows:
In particular, it is easy to show
, where
is the electron fluid perturbation energy in the gyrofluid electron free energy Eq.
Figure 9 shows the Hermite spectrum of $E_{g,n}$ for $(k_x,k_y)=(0,40)2π/L_⊥$ at $t/τ_A=1$, $t/τ_A=5$, and $t/τ_A=10$. In Fig. 9, at $t/τ_A=10$, where the linear growth rate is converged, the lower-order $n=0,1,2$ perturbations, i.e., the electron density perturbation, the electron parallel velocity perturbation, and the electron parallel pressure perturbation, are found to have almost the same energy as the electromagnetic field perturbation in Fig. 4. It is observed that the Hermite spectrum has a long tail in the higher-order $n≥3$ region, although each individual energy is not large. Therefore, it is clarified that the large electron free energy corresponds to a state where the energy is distributed over the higher-order fluid perturbations.
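The probabilist's Hermite polynomials can be generated from the three-term recurrence $He_{n+1}(v)=vHe_n(v)−nHe_{n−1}(v)$ instead of the derivative definition; the sketch below (my own check, not code from the paper) verifies a few values and the orthogonality relation numerically:

```python
import math

def He(n, v):
    # probabilist's Hermite polynomials via the three-term recurrence
    h0, h1 = 1.0, v
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, v * h1 - k * h0
    return h1

assert He(2, 3.0) == 8.0    # He_2(v) = v^2 - 1
assert He(3, 2.0) == 2.0    # He_3(v) = v^3 - 3v

# orthogonality: integral of He_n He_m exp(-v^2/2) dv = (2 pi)^(1/2) n! delta_mn
h = 1e-2
grid = [i * h for i in range(-1200, 1201)]   # v in [-12, 12], Gaussian tail negligible
inner = lambda n, m: sum(He(n, v) * He(m, v) * math.exp(-v * v / 2) * h for v in grid)
```

With this weight function, the equidistant Riemann sum is effectively exact, so `inner(n, m)` reproduces $(2π)^{1/2} n!$ for $n=m$ and vanishes otherwise, which is what makes the projection of $δg_{e,k}$ onto fluid moments well defined.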
Next, we analyze how the thermodynamic power is driven, which generates the large electron free energy.
Figure 10 shows the $z$ profile of the thermodynamic power density $D_g$ for $(k_x,k_y)=(0,40)2π/L_⊥$ at $t/τ_A=10$. It is observed that $D_g$ is positive in most $z$ regions, although $D_g$ is negative in the small-$z$ region. In the integrand of Eq.
, a combination of equilibrium quantities
is always a positive real value. Therefore, whether the integrand contributes positive or negative depends on the sign of the following quantity:
$σ_g=−(∂/∂z)Re(δg_{e,k}^{(e)} δg_{e,k}^{(o)*}/B_0^2).$
It should be noted that, in the case considering the magnetosphere with the uniform equilibrium as in Ref.
, the contribution of the fluctuating distribution to
is only via the magnetosphere–ionosphere boundary. When considering the non-uniform equilibrium, the fluctuating distribution contributes to
over the entire magnetosphere along the geomagnetic field.
Figure 11 shows the contour of $σ_g$ in the $z$-$v_∥$ space for $(k_x,k_y)=(0,40)2π/L_⊥$ at $t/τ_A=10$. Comparing Figs. 10 and 11, it is found that the sign relationship between $D_g$ and $σ_g$ is almost the same. In Fig. 11, it is observed that there exists a prominent peak in the range of $v_∥/v_{AM}≲3$. Therefore, it is clarified that the fine velocity space structure of the fluctuating electron distribution around the average Alfvén velocity ($∼2v_{AM}$) mainly obtains the electron free energy from the parallel gradients of the equilibrium fields. The generation of the thermodynamic power enhances the localized velocity space structure shown in Fig. 7. Such a localized velocity space structure is observed as the generation of the higher-order fluid perturbation energies, as shown in Fig. 9, which is similar to how high-wavenumber Fourier components appear around a spatially localized structure.
Finally, Fig. 12 schematizes the physical mechanism of the large electron free energy generation. In the feedback instability, the kinetic Alfvén wave is excited by the electric power to the
magnetosphere through the magnetosphere–ionosphere coupling. In the development of the kinetic Alfvén wave, the fine velocity space structure appears due to the phase mixing as shown in Fig. 7. Even
in the uniform equilibrium limit, such a fine structure exists, and the thermodynamic power is driven at the magnetosphere–ionosphere boundary.^33 In contrast, when the equilibrium is non-uniform, the
coupling of the fine structure and the equilibrium non-uniformity drives the positive thermodynamic power in most regions of the magnetosphere, as shown in Fig. 10. As a result, the electron free
energy has a large value compared with the electromagnetic perturbation energy, as shown in Fig. 4. Since the electron free energy is generated for electrons having velocities characterized by the wave–particle interaction, rather than by the thermal velocity, as shown in Fig. 11, the free energy is stored in the higher-order fluid perturbations, as shown in Fig. 9.
The method applying the gyrokinetic model to the magnetosphere as an initial value problem is revisited. The magnetosphere–ionosphere coupling method developed for the uniform magnetic field model of
the geomagnetic field in our previous study is extended to the dipole magnetic field model of the geomagnetic field.
It is numerically difficult to completely resolve the velocity space for the entire magnetosphere, because the density and the temperature largely change along the geomagnetic field. For this reason,
we introduce a gyrokinetic model pre-integrated for the magnetic moment space. Furthermore, the fine structure in the parallel velocity space due to the phase mixing causing the Landau damping is
numerically incompatible with the parallel velocity differentiation involved in the fluctuating mirror force term. In order to resolve this issue, we introduce an approximate form of the fluctuating
distribution that does not explicitly give rise to the fluctuating mirror force term.
Using the above-mentioned magnetosphere model, we consider the magnetosphere–ionosphere coupling and simulate the linear evolution of the feedback instability. Similar to the previous studies
applying a uniform magnetic field model to the geomagnetic field, it is confirmed that the linear growth rate is strongly suppressed by the kinetic effects, which give rise to the peak of the growth
rate in the perpendicular wave number spectrum. On the other hand, a major difference from the case using the uniform magnetic field is that the electron free energy is much larger than the
electromagnetic field perturbation energy in the linear growth of the feedback instability. As a result of the analysis using the Hermitian expansion of the fluctuating distribution, it is found that
the electron free energy is distributed not only to the lower-order fluid perturbations such as the density, parallel velocity, and parallel pressure, but also to higher-order fluid perturbations. It
is newly revealed that the thermodynamic power generating the electron free energy is driven by combining the equilibrium non-uniformity along the geomagnetic field with the fine velocity space
structure of the fluctuating distribution due to the phase mixing. It is also found that the distribution function perturbation has a localized structure in velocity space, which is due to the
intensive free energy transfer to electrons with velocities around the average Alfvén velocity. Such a localized structure in velocity space is observed as the appearance of the higher-order fluid
perturbations. The above results suggest that the non-uniformity in the magnetosphere plays an important role in generating the electron free energy. The electron free energy could be converted into
the electron kinetic energy, i.e., acceleration or heating of electrons, by some mechanism, such as energy dissipation mechanisms or the mirror force.
In this study, the fluctuating mirror force is omitted for the numerical calculation reasons. It is necessary to consider how to deal with the magnetic moment space, which requires extreme numerical
costs. In addition, the buffer region is introduced near the ionosphere to mimic the collisional effect. The validity of this approximation needs to be checked. Furthermore, by extending this study,
it is possible to perform nonlinear simulations to investigate how the non-uniformity in the magnetosphere affects the nonlinear evolution of the feedback instability, that is, the spontaneous
structure formation of the aurora. In order to more accurately predict the frequency of the feedback instability, it is possible to introduce a more realistic stretched geomagnetic field model taking
into account the effects of the solar wind.^45,46 By considering the non-uniformity perpendicular to the geomagnetic field, i.e., perpendicular gradients of equilibrium fields, it is possible to
examine the effects of magnetic and diamagnetic drifts on the feedback instability, although those effects are estimated to be minor. These are the issues to be addressed in future work.
The computation in this work has been done using the facilities of the Center for Cooperative Work on Computational Science, University of Hyogo. S.N. would like to acknowledge the Collaborative
Research Program of Research Institute for Applied Mechanics, Kyushu University. This work was supported by JSPS KAKENHI Grant No. 21K03502 [Grant-in-Aid for Scientific Research (C)].
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
Seiya Nishimura: Conceptualization (lead); Data curation (lead); Formal analysis (lead); Funding acquisition (lead); Investigation (lead); Methodology (lead); Writing – original draft (lead); Writing
– review & editing (lead). Ryusuke Numata: Methodology (supporting); Writing – review & editing (supporting).
The data that support the findings of this study are available from the corresponding author upon reasonable request.
As a preliminary analysis, we performed a simulation using the gyrokinetic model for the whole magnetosphere of
. In this case, the ballistic mode, i.e., the development of the fine structure in the velocity space,
extends over the entire magnetosphere, and then the
profile of
greatly deviates from that of
in the vicinity of the ionosphere. Then, near the ionosphere, the thermodynamic power density
takes a very large value, and the higher-order fluid perturbations become dominant. We consider such a phenomenon to be unphysical. This is because, in a more realistic situation, collisions between particles become effective near the ionosphere, and the generation of the fine structure in the
space due to the ballistic mode would be greatly suppressed. In fact, the ionosphere is well described by the two-fluid model without the higher-order fluid perturbations. As an example for
evaluating the effect of the collision, for our parameters and equilibrium profiles, the magnetic Reynolds number
is the parallel resistivity) is
at the magnetic equator (
), while, near the ionosphere,
$S∼4.2×10^8$ ($z=0.2ℓ$), $S∼1.7×10^7$ ($z=0.1ℓ$), and $S∼6.5×10^4$ ($z=0$)
. Therefore, the vicinity of the magnetic equator is in the collisionless state, while the region of
is considered to be in the weakly collisional state, where the collision operator in the gyrokinetic model works effectively. In this study, we introduce a buffer region in the range of
to take into account the effect of the collisional dissipation on the fine structure in the
space, rather than introducing a collision operator in the gyrokinetic model. In the buffer region, we apply the gyrofluid model, where
is given by the following relation:
Since Eqs.
with Eqs.
, and (
) correspond to the MHD model involving the finite Larmor radius (FLR) and the electron inertia effects, we refer to them as an FLR-MHD model in the following. In addition, by comparing the case treating the whole magnetosphere with the gyrokinetic model and the case using the FLR-MHD buffer, we confirmed that the growth rate and frequency of the feedback instability are hardly affected by the existence of the buffer.
Our magnetosphere–ionosphere coupling system has three layers: the gyrokinetic layer, the FLR-MHD buffer layer, and the two-fluid layer. However, the boundary conditions are almost the same as those
in Ref.
. Roughly speaking, we consider an electric circuit that inputs the current perturbation
from the magnetosphere and outputs the electrostatic potential perturbation
from the ionosphere. At the magnetic equator,
satisfies the free boundary condition, and
. At the boundary between the gyrokinetic layer and FLR-MHD layer in the magnetosphere,
, and
satisfies the free boundary condition. At the magnetosphere–ionosphere boundary,
, and
satisfies the free boundary condition. Taking into account the above-mentioned boundary conditions, Eq.
is rewritten as
It is worth noting that
is closely related to the electric power generated at the magnetosphere–ionosphere boundary represented by
$Re(L_I^2 ϕ̃_{I,k} j̃_{∥I,k}^*)$
. It is clear from Eq.
that the magnitude of the electric power is proportional to the magnitude of the equilibrium ionospheric electric field
. This indicates that the driving force of the feedback instability is the equilibrium ionospheric electric field.
© 2024 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International (CC BY-NC-ND) license.
Qui est-ce ?
Let y be 8549048879922979409, with y[i] the bits of y where y[62] is the MSB and y[0] is the LSB. We need to find a number x that, when passed as input to the attached logical circuit, will lead to
the bits of y as the output.
The logical circuit looks like this:
This pattern continues until x[62] and y[62].
This is a typical z3 challenge. z3 is a satisfiability modulo theories (SMT) solver that can be used to solve this kind of logical circuit automatically.
We just have to declare the output we know, add all the constraints of the logical circuit, and ask z3 to give us the input that satisfies all these conditions. It will do its magic and find it for us.
Don't forget to reverse the bit order when necessary when converting the bits from and to numbers, if you want to work with the LSB at the left of the array and the MSB at the right to simplify manipulations and indices.
Here is my script:
from z3 import *

y = 8549048879922979409
# y0 should be the LSB and y62 the MSB, so we reverse the bits to have them sorted by index
expected_y_bits = reversed(list(map(lambda bit: bit == "1", bin(y)[2:])))

s = Solver()
x_bits = [Bool(f"x{i}") for i in range(0, 63)]
t_bits = [Bool(f"t{i}") for i in range(0, 63)]  # let's call the intermediary gates state t (t for tmp)

for i, bit in enumerate(expected_y_bits):
    previous_i = (i + 62) % 63  # this is the index of the previous element in a modulo 63 cycle
    s.add(Xor(t_bits[previous_i], x_bits[i]) == bit)
    s.add(And(x_bits[previous_i], Not(x_bits[i])) == t_bits[i])

if s.check() == sat:
    model = s.model()
    # we reverse back the bits before converting to decimal to make x0 the LSB and x62 the MSB as it should be
    result = int("".join(reversed(["1" if is_true(model[x_bits[i]]) else "0" for i in range(0, 63)])), 2)
    print(result)
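As a sanity check independent of z3, the circuit can also be simulated forward in plain Python; feeding the recovered x through the gates reproduces y exactly (the function name and the n parameter are mine):

```python
def circuit(x, n=63):
    xb = [(x >> i) & 1 for i in range(n)]
    # t[i] = x[previous_i] AND (NOT x[i]), with previous_i = (i + n - 1) % n
    tb = [xb[(i + n - 1) % n] & (1 - xb[i]) for i in range(n)]
    # y[i] = t[previous_i] XOR x[i]
    yb = [tb[(i + n - 1) % n] ^ xb[i] for i in range(n)]
    return sum(bit << i for i, bit in enumerate(yb))

assert circuit(7364529468137835333) == 8549048879922979409
```

The assertion passing confirms that the z3 model really is a preimage of the target y under the circuit.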
Let’s execute the script to get the flag.
Flag: FCSC{7364529468137835333}
Radius of Convergence Calculator | Online Calculator Tool
Free online Radius of Convergence Calculator tool evaluates the radius of convergence of a power series. Simply enter your function and variable range in the given input sections and tap on the
calculate button to get the instant output along with a detailed procedure.
Radius of Convergence Calculator: Do you want to know the radius of convergence of a power series and need some help? Then we are here to assist you with all kinds of math solutions. Have a look at the Radius of Convergence Calculator to solve a power series function within seconds. This article gives a detailed description of the steps to find the radius of convergence manually, and we will explain it with a few examples.
Method to Calculate Radius of Convergence of a Power Series
Follow these simple steps to find the radius of convergence of a power series:
• Take a power series.
• Consider the value of x for which the power series will converge.
• To get the radius of convergence, apply the ratio test to the terms of the series.
• Evaluate the limit given by the ratio test.
• Set that limit to be less than 1 and solve the inequality for x.
• The resulting condition gives the radius of convergence R.
Find a variety of other free maths calculators that will save you time while doing complex calculations and get step-by-step solutions to all your problems in a matter of seconds.
Question: Find the radius of convergence for the power series: sum from n = 1 to infinity of (2^n/n)(4x-8)^n.
Let us take Cn = (2^n/n)(4x-8)^n.
Note that 4x-8 = 4(x-2), so this power series is centered at x = 2 (and certainly converges there).
For the above power series, the ratio test gives
L = lim n to infinity |2^(n+1)(4x-8)^(n+1)/(n+1) * n/(2^n(4x-8)^n)|
= |4x-8| lim n to infinity 2n/(n+1)
= 2|4x-8| = 8|x-2|
The series converges when L < 1, i.e., 8|x-2| < 1, which gives |x-2| < 1/8.
So, the radius of convergence for the power series is R = 1/8.
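We can also confirm R numerically. Writing the series in powers of (x-2), the coefficient is a_n = 8^n/n, and the ratio a_n/a_(n+1) = (n+1)/(8n) tends to 1/8 as n grows (a quick check; computing the ratio directly avoids the huge powers of 8):

```python
# ratio a_n / a_(n+1) for a_n = 8^n / n, which converges to the radius R
ratio = lambda n: (n + 1) / (8 * n)

for n in (10, 1000, 10**6):
    print(n, ratio(n))
```

Already at n = 1000 the ratio agrees with 1/8 = 0.125 to four decimal places.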
FAQs on Radius of Convergence
1. What is the Radius of Convergence?
The radius of convergence of a power series is the radius of the largest disk in which the series converges. It is a non-negative real number or infinity. When it is positive, the power series converges absolutely at every point inside that disk.
2. What is the radius of convergence is 0?
The radius of convergence R =0 tells that the distance between the center of a power series interval of convergence and its endpoints.
3. Can the radius of convergence be negative?
No, the radius of convergence can never be a negative number.
4. What is the ratio test for convergence?
The ratio test defines that: if L<1 then the series is convergent or if L>1 then the series is divergent. In case L=1, tes is inclusive, because it satisfies both convergent and divergent. | {"url":"https://www.learncram.com/calculator/radius-of-convergence-calculator/","timestamp":"2024-11-10T17:50:50Z","content_type":"text/html","content_length":"65131","record_id":"<urn:uuid:11bf3dcc-02a8-421d-8759-69e0e62140bd>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00785.warc.gz"} |
Financial Calculator
There is a FAQ answering the most common questions here.
1. Are there any tutorials or manuals available?
Yes. Although this calculator works like any other financial calculator and you can use any manual, you might find helpful the following links:
Manuals: English, Español, Deutsch ,French
Tutorials: TVM: Time Value Of Money calculations tutorial, NPV and IRR tutorial
2. The numbers are wrong. I use another calculator and I get different numbers What’s going on?
Rest assured that the numbers are correct. The calculator is used all over the world by financial experts. A financial calculator is not easy machinery, and even experts sometimes get it wrong.
This is the reason the calculator has an Easy Mode to simplify the functions as much as possible. These are the most common mistakes:
3. Forgetting to set the compounding correctly:
Let’s see this easy example. Enter these numbers:
PV = -100
I/Y = 10
N = 1
Then hit “CPT” and FV. A 100, at 10% for 1 period should be 110 right? Is that the number you get?
Yes, if your compounding is set to 1 period per year; otherwise you will get a different result (probably 100.83).
This is in the Settings tab under P/Y and C/Y. Set those two numbers to 1, and then it will give you 110. If you want to calculate it with monthly compounding, you would need to set those two numbers to 12 (12 periods per year).
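The compounding arithmetic above can be sketched in a few lines of Python (a simplified model of the FV calculation, ignoring payments; the function and variable names are mine, not the app's):

```python
def future_value(pv, annual_rate_pct, n_periods, periods_per_year=1):
    """Future value of a single present amount, cash-flow sign convention:
    money paid out (pv) is negative, money received (fv) comes back positive."""
    r = annual_rate_pct / 100 / periods_per_year  # rate per compounding period
    return -pv * (1 + r) ** n_periods

# 100 at 10% for 1 period, annual compounding (P/Y = C/Y = 1):
print(round(future_value(-100, 10, 1, periods_per_year=1), 2))   # 110.0
# same keys with monthly compounding (P/Y = C/Y = 12):
print(round(future_value(-100, 10, 1, periods_per_year=12), 2))  # 100.83
```

This makes it clear why the same PV/I/Y/N inputs give 110 or 100.83 depending only on the compounding settings.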
4. Forgetting the right + or – signs.
A financial calculator is based on a cashflow format. What this means is that when you put money out, it’s represented with a negative sign. Therefore, if you have a savings account that gives you
5% and you put $100 in it, you would enter -100 as the PV value.
The money you receive is positive. Meaning, the 5% you’ll receive will show up as a positive value in the FV number.
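A tiny illustrative sketch of the sign convention:

```python
pv = -100              # deposit: money leaving your pocket is entered negative
rate = 0.05
fv = -pv * (1 + rate)  # money coming back to you shows up positive (about 105)
```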
5. Changing so many settings that no number makes sense.
Playing around with the settings is all good as long as you know what you are doing. If you are totally lost, just hit the Reset All button in the settings tab and it will go back to the
factory default settings.
6. How do you calculate any number? I entered PV, I/Y and so on… now what?
The key is to click compute (the “CPT” key) and then the value you want to calculate. So CPT FV will calculate the future value.
7. I see no = sign or “ent”.
This is a setting related to RPN (reverse polish notation). It’s very commonly used in HP calculators. Just go to the settings and turn it on/off depending on your preference.
8. I have a bug or some new feature I would like. How do I contact you?
You can contact me at the email listed at the bottom of this page. I’ve implemented more than 100 requests that I’ve received by email, so feel free to ask and I’ll surely listen. The calculator
is becoming better and better with every request I receive.
| {"url":"http://www.echoboom.com/financialcalculator/","timestamp":"2024-11-14T07:37:37Z","content_type":"text/html","content_length":"40036","record_id":"<urn:uuid:85eed493-3bf5-4828-9894-3722dafc0947>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00316.warc.gz"} |
Will This Be on the Test? (April 2023)
by Sarah Lonberg-Lew
Welcome to the latest installment of our monthly series, “Will This Be on the Test?” Each month, we’ll feature a new question similar to something adult learners might see on a high school
equivalency test and a discussion of how one might go about tackling the problem conceptually.
Welcome back to our continuing exploration of how to bring real conceptual reasoning to questions students might encounter on a standardized test. Here is this month’s problem:
How can you approach this question in a way that makes sense to you? What conceptual understandings or visual tools can you bring to bear? What mathematical concepts do students really need to be
able to tackle this problem? How might your real-world experience help you reason about this?
This question is challenging not only because of the mathematical content (reasoning about how changes to dimensions affect area), but also because it raises the question of whether there is enough
information given. Answer choice (E) is a tempting option because there seems to be some information glaringly missing from the question. How can a student investigate the answer choice options when
they are not given the dimensions of the garden to begin with? Students who have learned to circle the important numbers and underline key words in a question may be at a loss because very few
numbers are given. So, is it possible to answer this question without being given the dimensions of the garden? How can a student be sure of their answer when they don’t have numbers to work with?
Here are some possible approaches:
1. Collect some data. A student who prefers to work with concrete numbers may choose to work with the only number that is given—the area of Joy’s garden is 30 square feet. That means that whatever
Joy does to double the area, the new area must be 60 square feet. A student could make a list of possible dimensions for Joy’s garden before and after the area is doubled and look for a relationship
that fits one of the answer choices.
What relationships can you see between the before dimensions and the after dimensions? Do any of them fit the answer choices? Jumping in and collecting some data can help a student get a handle on
the question. There may be enough here for them to be able to choose an answer or this line of thinking could lead them to another strategy.
2. Pick dimensions and experiment. Students who are empowered to interrogate test questions and think outside the box may realize that whichever answer choice is correct has to be correct
regardless of the original dimensions of the garden. It is contrary to the math and test-taking training many of us grew up with to think you can just make up information, but a student who has
practice with making generalizations will know that some statements can be true regardless of the specific numbers involved. This student may think, “if the correct answer is true for any garden with
an area of 30 square feet, it must be true for a garden whose dimensions are 6’ x 5’” and thereby arrive at a concrete place from which to investigate. Starting with a garden whose dimensions are 6’
x 5’, they might then try different answer choices, for example:
Answer choice (A) did not double the area of the garden. In fact, it did not change it at all! An exciting possible outcome of this approach is that the student might be inspired to investigate
whether this always happens when you double one dimension and halve the other. But at least for now, they know that this approach does not double the area of a 30 square foot garden and can move on
to try another.
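The "pick dimensions and experiment" strategy is also easy to mechanize. Here is a Python sketch using the 6' x 5' garden; the option wordings are paraphrased, since the original answer choices appear in a figure that is not reproduced here:

```python
length, width = 6, 5        # one concrete garden with area 30 square feet
area = length * width

candidates = {
    "double the length, halve the width": (2 * length) * (width / 2),
    "double the length only":             (2 * length) * width,
    "double both dimensions":             (2 * length) * (2 * width),
}
for label, new_area in candidates.items():
    doubled = "(doubles the area)" if new_area == 2 * area else ""
    print(label, "->", new_area, doubled)
```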
3. Reason visually and generally. A student who is comfortable thinking more abstractly could quickly sketch out the scenarios in the answer choices to get an idea of which ones seem like they would
double the area of the garden. For example, a student might be drawn to the idea of doubling both the length and the width (answer choice C) and draw a sketch to investigate it like this:
What might it look like to sketch out the other answer choices?
When encountering a question for which a student does not immediately have a clear idea of how to move toward a solution, it can be very tempting to choose an option like option (E) in this question.
However, it is often possible to draw mathematical conclusions even when specific information is missing. When interesting relationships or patterns appear (like the area not changing in approach #2
in this column), we should build the habit of asking questions about them—Was this a coincidence? Will this always happen? How can I be sure? Can I find a counterexample? Engaging regularly in this
kind of thinking will prepare students to decide whether they need more information or whether they can make generalizations.
Sarah Lonberg-Lew has been teaching and tutoring math in one form or another since college. She has worked with students ranging in age from 7 to 70, but currently focuses on adult basic education
and high school equivalency. Sarah’s work with the SABES Mathematics and Adult Numeracy Curriculum & Instruction PD Center at TERC includes developing and facilitating trainings and assisting
programs with curriculum development. She is the treasurer for the Adult Numeracy Network. | {"url":"https://www.terc.edu/adultnumeracycenter/will-this-be-on-the-test-april-2023/","timestamp":"2024-11-09T16:34:54Z","content_type":"text/html","content_length":"94780","record_id":"<urn:uuid:40ab9e44-6257-4efe-926b-7bf1649a4e13>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00272.warc.gz"} |
Lesson 11
The Distributive Property, Part 3
Let's practice writing equivalent expressions by using the distributive property.
11.1: The Shaded Region
A rectangle with dimensions 6 cm and \(w\) cm is partitioned into two smaller rectangles.
Explain why each of these expressions represents the area, in \(\text{cm}^2\), of the shaded region.
11.2: Matching to Practice Distributive Property
Match each expression in column 1 to an equivalent expression in column 2. If you get stuck, consider drawing a diagram.
Column 1
1. \(a(1+2+3)\)
2. \(2(12-4)\)
3. \(12a+3b\)
4. \(\frac23(15a-18)\)
5. \(6a+10b\)
6. \(0.4(5-2.5a)\)
7. \(2a+3a\)
Column 2
• \(3(4a+b)\)
• \(12 \boldcdot 2 - 4 \boldcdot 2\)
• \(2(3a+5b)\)
• \((2+3)a\)
• \(a+2a+3a\)
• \(10a-12\)
• \(2-a\)
11.3: Writing Equivalent Expressions Using the Distributive Property
The distributive property can be used to write equivalent expressions. In each row, use the distributive property to write an equivalent expression. If you get stuck, consider drawing a diagram.
│ product │sum or difference │
│\(3(3+x)\) │ │
│ │\(4x-20\) │
│\((9-5)x\) │ │
│ │\(4x+7x\) │
│\(3(2x+1)\) │ │
│ │\(10x-5\) │
│ │\(x+2x+3x\) │
│\(\frac12 (x-6)\) │ │
│\(y(3x+4z)\) │ │
│ │\(2xyz-3yz+4xz\) │
This rectangle has been cut up into squares of varying sizes. Both small squares have side length 1 unit. The square in the middle has side length \(x\) units.
1. Suppose that \(x\) is 3. Find the area of each square in the diagram. Then find the area of the large rectangle.
2. Find the side lengths of the large rectangle assuming that \(x\) is 3. Find the area of the large rectangle by multiplying the length times the width. Check that this is the same area you found before.
3. Now suppose that we do not know the value of \(x\). Write an expression for the side lengths of the large rectangle that involves \(x\).
The distributive property can be used to write a sum as a product, or write a product as a sum. You can always draw a partitioned rectangle to help reason about it, but with enough practice, you
should be able to apply the distributive property without making a drawing.
Here are some examples of expressions that are equivalent due to the distributive property.
\(\displaystyle \begin {align} 9+18&=9(1+2)\\ 2(3x+4)&=6x+8\\ 2n+3n+n&=n(2+3+1)\\ 11b-99a&=11(b-9a)\\ k(c+d-e)&=kc+kd-ke\\ \end {align}\)
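Equivalent expressions agree for every value of the variable, so a quick way to catch a mistaken "equivalence" is to substitute several values and compare. A small Python spot-check of some of the identities above (numeric agreement at sample points is evidence, not a proof):

```python
# spot-check distributive-property identities at several values of the variable
for x in [-10, -1, 0, 0.5, 3, 100]:
    assert 2 * (3 * x + 4) == 6 * x + 8          # 2(3x+4) = 6x+8
    assert 3 * x + 4 * x == 7 * x                # combining like terms
    assert 11 * x - 99 * 2 == 11 * (x - 9 * 2)   # 11b-99a with b = x, a = 2
print("all identities agree at every sampled value")
```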
• equivalent expressions
Equivalent expressions are always equal to each other. If the expressions have variables, they are equal whenever the same value is used for the variable in each expression.
For example, \(3x+4x\) is equivalent to \(5x+2x\). No matter what value we use for \(x\), these expressions are always equal. When \(x\) is 3, both expressions equal 21. When \(x\) is 10, both
expressions equal 70.
• term
A term is a part of an expression. It can be a single number, a variable, or a number and a variable that are multiplied together. For example, the expression \(5x + 18\) has two terms. The first
term is \(5x\) and the second term is 18. | {"url":"https://curriculum.illustrativemathematics.org/MS/students/1/6/11/index.html","timestamp":"2024-11-07T22:36:29Z","content_type":"text/html","content_length":"79555","record_id":"<urn:uuid:c33a25c0-fd45-447a-847f-17b7976dbe85>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00147.warc.gz"} |
R package for fitting and testing alternative models for single cohort litter decomposition data
Getting started
At the moment there is one key function, fit_litter, which can fit 6 different types of decomposition trajectories. Note that the fitted object is a litfit object
fit <- fit_litter(time = c(0, 1, 2, 3, 4, 5, 6),
                  mass.remaining = c(1, 0.9, 1.01, 0.4, 0.6, 0.2, 0.01),
                  model = "weibull")  # completing the truncated call; "weibull" matches the summary below
You can visually compare the fits of different non-linear equations with the plot_multiple_fits function:
Calling plot on a litfit object will show you the data, the curve fit, and even the equation, with the estimated coefficients:
The summary of a litfit object will show you some of the summary statistics for the fit.
#> Summary of litFit object
#> Model type: weibull
#> Number of observations: 7
#> Parameter fits: 4.19
#> Parameter fits: 2.47
#> Time to 50% mass loss: 3.61
#> Implied steady state litter mass: 3.71 in units of yearly input
#> AIC: -3.8883
#> AICc: -0.8883
#> BIC: -3.9965
From the litfit object you can then see the uncertainty in the parameter estimate by bootstrapping | {"url":"https://cran.stat.sfu.ca/web/packages/litterfitter/readme/README.html","timestamp":"2024-11-15T00:00:16Z","content_type":"application/xhtml+xml","content_length":"10188","record_id":"<urn:uuid:5e12fc83-6154-4734-b5a9-faecfe3234c2>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00273.warc.gz"} |
The computational power of optimization in online learning
We consider the fundamental problem of prediction with expert advice where the experts are "optimizable": there is a black-box optimization oracle that can be used to compute, in constant time, the
leading expert in retrospect at any point in time. In this setting, we give a novel online algorithm that attains vanishing regret with respect to N experts in total Õ(√N) computation time. We
also give a lower bound showing that this running time cannot be improved (up to log factors) in the oracle model, thereby exhibiting a quadratic speedup as compared to the standard, oracle-free
setting where the required time for vanishing regret is Θ̃(N). These results demonstrate an exponential gap between the power of optimization in online learning and its power in statistical
learning: in the latter, an optimization oracle, i.e., an efficient empirical risk minimizer, allows to learn a finite hypothesis class of size N in time O(log N). We also study the implications of our
results to learning in repeated zero-sum games, in a setting where the players have access to oracles that compute, in constant time, their best-response to any mixed strategy of their opponent. We
show that the runtime required for approximating the minimax value of the game in this setting is Θ̃(√N), yielding again a quadratic improvement upon the oracle-free setting, where Θ̃(N) is known
to be tight.
Original language English (US)
Title of host publication STOC 2016 - Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing
Editors Yishay Mansour, Daniel Wichs
Publisher Association for Computing Machinery
Pages 128-141
Number of pages 14
ISBN (Electronic) 9781450341325
State Published - Jun 19 2016
Event 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016 - Cambridge, United States
Duration: Jun 19 2016 → Jun 21 2016
Publication series
Name Proceedings of the Annual ACM Symposium on Theory of Computing
Volume 19-21-June-2016
ISSN (Print) 0737-8017
Other 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016
Country/Territory United States
City Cambridge
Period 6/19/16 → 6/21/16
All Science Journal Classification (ASJC) codes
• Best-response dynamics
• Learning in games
• Local search
• Online learning
• Optimization oracles
• Zero-sum games
Dive into the research topics of 'The computational power of optimization in online learning'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/the-computational-power-of-optimization-in-online-learning","timestamp":"2024-11-04T05:13:49Z","content_type":"text/html","content_length":"55600","record_id":"<urn:uuid:763e3105-7e32-4f47-99b7-9c51586cf2ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00516.warc.gz"} |
Developmental Math Emporium
What you’ll learn to do: Define and use scientific notation to write very large and very small numbers and solve problems with them.
In the same way that exponents help us to be able to write repeated multiplication with little effort, they are also used to express large and small numbers without a lot of zeros and confusion.
Scientists and engineers make use of exponents regularly to keep track of the place value of numbers that they are working with to make calculations. For example, [latex]1,000,000[/latex] is written
as [latex]{1}\times{10}^{6}[/latex] and [latex]0.00001[/latex] becomes [latex]{1}\times{10}^{-5}[/latex].
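For readers who want to experiment, Python's format mini-language performs these conversions directly; a short sketch:

```python
# decimal -> scientific notation: the "e" format specifier does the work
sci = f"{1_000_000:.0e}"   # '1e+06'
small = f"{0.00001:.0e}"   # '1e-05'

# scientific -> decimal: float() parses e-notation directly
back = float("1e6")        # 1000000.0
print(sci, small, back)
```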
Specifically, in this section you’ll learn how to:
• Define decimal and scientific notation
• Convert from scientific notation to decimal notation
• Convert from decimal notation to scientific notation
• Multiply numbers expressed in scientific notation
• Divide numbers expressed in scientific notation
• Solve application problems involving scientific notation | {"url":"https://courses.lumenlearning.com/wm-developmentalemporium/chapter/outcome-scientific-notation/","timestamp":"2024-11-02T05:08:54Z","content_type":"text/html","content_length":"47056","record_id":"<urn:uuid:77e80b28-a277-4ff7-b4f8-f4ffdccc9bf8>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00837.warc.gz"} |
Inquiry Maths - Midpoints
Mathematical inquiry processes: Test and classify cases; counter-example; draw conclusions; reason. Conceptual field of inquiry: Types of quadrilaterals; midpoints; coordinates and vectors.
The prompt makes a claim that students of all ages find difficult to believe. Even if they agree that it is true for some regular quadrilaterals, which in itself requires an acceptance that
squares, rectangles and rhombuses are types of parallelograms, they still find the claim difficult to believe for irregular quadrilaterals.
At the start of the inquiry, students pose questions and make observations in order to understand the prompt. They might recognise some of the terms, but cannot comprehend the overall meaning.
• What does 'adjacent' mean? What does 'inscribed' mean?
• A quadrilateral is a four-sided shape and could be a square, rectangle or trapezium.
• 'Adjacent' means next to.
• Does the quadrilateral have to be regular?
The teacher might start the inquiry by encouraging students to list all the quadrilaterals they know and discuss their properties. The class could also attempt to visualise a square and the inscribed
shape formed by joining the midpoints of adjacent sides.
Once the class is clear about the process described in the prompt, the exploratory phase of the inquiry begins. Using the regulatory cards, students often decide to draw their own examples. The
inscribed shapes of a parallelogram and isosceles trapezium are shown below.
As the class collects results for different types of quadrilaterals, the teacher writes a list on the board (see the table below) and leads a discussion on any contradictory results. To show that the
contention in the prompt is true for all cases except one, the teacher should be prepared to convince students that the definition of a parallelogram - a four-sided plane rectilinear figure with
opposite sides parallel - includes a square, a rectangle and a rhombus.
Joining the midpoints of adjacent sides in an arrowhead (a concave quadrilateral) also forms a parallelogram. However, the parallelogram overlaps the edges of the arrowhead and is, therefore, not
inscribed within the original shape. This one counter-example means the prompt is not strictly true.
Irregular quadrilaterals
Perhaps the most surprising result is that the prompt is also correct for irregular quadrilaterals (see examples below). The teacher might use a dynamic geometry package (such as Cabri Express) to
move a vertex of the irregular shape in order to show how that changes the position of two vertices of the parallelogram.
Using column vectors to find the midpoint of a side
When students draw their own diagrams, they might find it difficult to locate the midpoints of the sides. They could measure the length of each side. However, a more accurate method involves the use
of column vectors. Using the diagram of a parallelogram above as an example, the diagonal sides can be expressed (from top to bottom) as 4 right and 8 down and, therefore, the midpoint is found at 2
right and 4 down.
Column vectors are also useful for checking that the new shape formed by joining the midpoints is indeed a parallelogram. Using the same diagram, the column vectors for each pair of sides of the new
shape are 4 right, 4 up and 8 right, 4 down, which shows that each pair is parallel.
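The same column-vector check can be scripted for any quadrilateral. A Python sketch with an arbitrarily chosen irregular quadrilateral: the inscribed shape is a parallelogram exactly when each pair of opposite sides forms equal vectors.

```python
def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def vector(p, q):
    return (q[0] - p[0], q[1] - p[1])

quad = [(0, 0), (7, 1), (9, 6), (2, 8)]  # an arbitrary irregular quadrilateral
mids = [midpoint(quad[i], quad[(i + 1) % 4]) for i in range(4)]

# opposite sides of the inscribed shape are equal vectors, so it is a parallelogram
assert vector(mids[0], mids[1]) == vector(mids[3], mids[2])
assert vector(mids[1], mids[2]) == vector(mids[0], mids[3])
print("the inscribed quadrilateral is a parallelogram")
```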
May 2022
1. Similar triangles
What shape do you create by joining the midpoints of adjacent sides of triangles?
What happens if you start with equilateral or isosceles triangles? What is the scale factor of enlargement of the similar triangles? Why is it a half? Why is the area of the new triangle a quarter of
the area of the original triangle?
What happens if you start with a scalene triangle? Do you also create similar triangles? Why?
2. Midpoints on Cartesian grids
Students might carry out the inquiry using coordinates. This could lead to a line of inquiry in which students generalise about the four coordinates that give particular shapes. For example, the
shape whose vertices have the coordinates (2,3) (2,5) (6,3) and (6,5) is a rectangle. However, the following coordinates (1,2) (3,6) (5,5) and (3,1) also give a rectangle. Do the two sets of
coordinates have anything in common?
An advantage of using coordinates is that students can use the formula to find the midpoints of the sides (M) and, thereby, locate them accurately.
3. Hexagons and ratio
What shape do you create by joining the midpoints of adjacent sides of a regular hexagon?
After carrying out the process twice, what is the ratio of the lengths of the sides of the three hexagons? Students could use the cosine rule or Pythagoras' Theorem to find the length of a side of the
second hexagon (AC in the diagram below).
Pythagoras' Theorem
For triangle BCD, CD = √(2² − 1²) = √3 and AC = 2√3
Cosine rule
For triangle ABC, angle ABC = 120°
AC² = 2² + 2² − 2(2)(2)cos 120° = 12 and AC = √12 = 2√3
Thus, the ratio is 4 : 2√3 : 3 | {"url":"https://www.inquirymaths.org/home/geometry-prompts/midpoints","timestamp":"2024-11-07T07:34:13Z","content_type":"text/html","content_length":"172438","record_id":"<urn:uuid:8802547f-06ad-444b-b505-f9d7d561213f>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00798.warc.gz"} |
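Both derivations can be verified numerically: each midpoint pass multiplies the hexagon's side length by √3/2, which is why two passes starting from side 4 give the ratio 4 : 2√3 : 3. A Python check:

```python
import math

s0 = 4                       # side of the original hexagon
s1 = s0 * math.sqrt(3) / 2   # side after joining midpoints once
s2 = s1 * math.sqrt(3) / 2   # ... and once more

assert math.isclose(s1, 2 * math.sqrt(3))  # matches the cosine-rule result
assert math.isclose(s2, 3)                 # ratio 4 : 2*sqrt(3) : 3

# cross-check the cosine rule itself: AC^2 = 2^2 + 2^2 - 2(2)(2)cos(120 deg)
ac_squared = 2**2 + 2**2 - 2 * 2 * 2 * math.cos(math.radians(120))
assert math.isclose(ac_squared, 12)
```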
1. Area
Area is the quantity that expresses the extent of a two-dimensional figure or shape, or planar lamina, in the plane. Surface area is its analog on the two-dimensional surface of a three-dimensional
object. Area can be understood as the amount of material with a given thickness that would be necessary to fashion a model of the shape, or the amount of paint necessary to cover the surface with a
single coat. It is the two-dimensional analog of the length of a curve (a one-dimensional concept) or the volume of a solid (a three-dimensional concept).
The area of a shape can be measured by comparing the shape to squares of a fixed size. In the International System of Units (SI), the standard unit of area is the square metre (written as m^2), which
is the area of a square whose sides are one metre long. A shape with an area of three square metres would have the same area as three such squares. In mathematics, the unit square is defined to have
area one, and the area of any other shape or surface is a dimensionless real number.
There are several well-known formulas for the areas of simple shapes such as triangles, rectangles, and circles. Using these formulas, the area of any polygon can be found by dividing the polygon
into triangles. For shapes with curved boundary, calculus is usually required to compute the area. Indeed, the problem of determining the area of plane figures was a major motivation for the
historical development of calculus.
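The triangulation idea is what the shoelace formula automates: it accumulates signed triangle areas while walking the polygon's boundary. A short Python sketch:

```python
def polygon_area(vertices):
    # shoelace formula: sum of signed cross products around the boundary,
    # equivalent to decomposing the polygon into signed triangles
    total = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2

print(polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0, a 4 x 3 rectangle
```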
For a solid shape such as a sphere, cone, or cylinder, the area of its boundary surface is called the surface area. Formulas for the surface areas of simple shapes were computed by the ancient
Greeks, but computing the surface area of a more complicated shape usually requires multivariable calculus.
Area plays an important role in modern mathematics. In addition to its obvious importance in geometry and calculus, area is related to the definition of determinants in linear algebra, and is a basic
property of surfaces in differential geometry. In analysis, the area of a subset of the plane is defined using Lebesgue measure, though not every subset is measurable. In general, area in higher
mathematics is seen as a special case of volume for two-dimensional regions.
Area can be defined through the use of axioms, defining it as a function of a collection of certain plane figures to the set of real numbers. It can be proved that such a function exists. | {"url":"https://www.sketchport.com/tag/6079943016448000/area","timestamp":"2024-11-10T22:22:05Z","content_type":"application/xhtml+xml","content_length":"80536","record_id":"<urn:uuid:9b9baa28-2516-42f5-a6a9-c75643f1c5b4>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00854.warc.gz"} |
Batch Study
Batch Study#
class pybamm.BatchStudy(models, experiments=None, geometries=None, parameter_values=None, submesh_types=None, var_pts=None, spatial_methods=None, solvers=None, output_variables=None, C_rates=None,
repeats=1, permutations=False)[source]#
A BatchStudy class for comparison of different PyBaMM simulations.
create_gif(number_of_images=80, duration=0.1, output_filename='plot.gif')[source]#
Generates x plots over a time span of t_eval and compiles them to create a GIF. For more information see pybamm.QuickPlot.create_gif()
○ number_of_images (int, optional) – Number of images/plots to be compiled for a GIF.
○ duration (float, optional) – Duration of visibility of a single image/plot in the created GIF.
○ output_filename (str, optional) – Name of the generated GIF file.
plot(output_variables=None, **kwargs)[source]#
For more information on the parameters used in the plot, See pybamm.Simulation.plot()
solve(t_eval=None, solver=None, save_at_cycles=None, calc_esoh=True, starting_solution=None, initial_soc=None, t_interp=None, **kwargs)[source]#
For more information on the parameters used in the solve, See pybamm.Simulation.solve() | {"url":"https://docs.pybamm.org/en/latest/source/api/batch_study.html","timestamp":"2024-11-13T14:57:21Z","content_type":"text/html","content_length":"36067","record_id":"<urn:uuid:8d01270e-8821-4ffe-874d-5082cf40dc9c>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00418.warc.gz"} |
Uncovering Hidden Trends: The Ljung-Box Test for Time Series Analysis - Adventures in Machine Learning
Ljung-Box Test: Detecting Autocorrelation in Time Series
Time series analysis is a powerful tool that enables us to understand patterns and trends in data over time. However, one of the major challenges of analyzing time series data is dealing with
autocorrelation. Autocorrelation occurs when a variable is correlated with its own past values. If autocorrelation is present, it violates one of the key assumptions of regression analysis: independence of error terms.
Ljung-Box Test: Definition and Hypotheses
The Ljung-Box test is a statistical test used to examine the autocorrelation of residuals in a time series model.
The test assesses whether the residuals of a time series have autocorrelations beyond a specified lag value. The null hypothesis of the Ljung-Box test is that the residuals of a time series are
independently distributed and therefore uncorrelated.
The alternative hypothesis is that there is serial correlation in the residuals at one or more lags.
Desired Outcome and Assumption
The desired outcome of the Ljung-Box test is to determine whether the residuals of a time series are uncorrelated and independently distributed, which is a crucial assumption of many time series
models. If the null hypothesis is true, we can conclude that the model has captured all the information in the data, and we can rely on its predictions.
However, if the null hypothesis is rejected, it indicates that the residuals have significant serial correlation, and the model may not be appropriate for the data.
Interpretation of Results
The Ljung-Box test produces a test statistic and a p-value, which are used to make conclusions about the autocorrelation in the residuals of a time series model. The test statistic measures the
difference between the observed autocorrelations and the expected autocorrelations at various lag values.
The p-value indicates the probability of obtaining a test statistic as extreme as the one observed, assuming the null hypothesis is true. If the p-value is less than the significance level (usually
set at 0.05), we reject the null hypothesis and conclude that there is significant serial correlation in the residuals of the time series model.
This means that the model may not be capturing all the information in the data and may need to be revised. On the other hand, if the p-value is greater than the significance level, we fail to reject
the null hypothesis, and we conclude that the residuals are uncorrelated and can be used to make accurate forecasts.
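For intuition about what the test statistic measures, it can be computed by hand. The sketch below implements the standard formula Q = n(n+2) Σ r_k²/(n−k) for k = 1..h, where r_k is the lag-k sample autocorrelation; under the null hypothesis Q is approximately chi-squared with h degrees of freedom, so for pure white noise it should land near h:

```python
import random

def ljung_box_q(x, h):
    # Q = n(n+2) * sum_{k=1..h} r_k^2 / (n-k)
    n = len(x)
    mean = sum(x) / n
    denom = sum((v - mean) ** 2 for v in x)
    q = 0.0
    for k in range(1, h + 1):
        # lag-k sample autocorrelation
        r_k = sum((x[i] - mean) * (x[i - k] - mean) for i in range(k, n)) / denom
        q += r_k ** 2 / (n - k)
    return n * (n + 2) * q

random.seed(0)
white_noise = [random.gauss(0, 1) for _ in range(500)]
q = ljung_box_q(white_noise, 10)  # typically lands near 10 for white noise
```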
Example: Ljung-Box Test in Python
Let’s walk through a brief example of how to perform a Ljung-Box test in Python using the statsmodels library. We will use the SUNACTIVITY dataset, which contains the number of sunspots observed on
the sun each year from 1700 to 2008.
Preparing Data
from statsmodels.datasets import sunspots
# select the sunspot counts; the loaded frame also contains a YEAR column
data = sunspots.load_pandas().data["SUNACTIVITY"]
Fitting ARMA Model and Generating Residuals
# note: the ARMA class was removed in statsmodels 0.13; in newer versions use
# statsmodels.tsa.arima.model.ARIMA with order=(1, 0, 1) instead
from statsmodels.tsa.arima_model import ARMA
model = ARMA(data, order=(1, 1))
results = model.fit()
residuals = results.resid
Performing Ljung-Box Test with Different Lag Values
from statsmodels.stats.diagnostic import acorr_ljungbox
lags = [10, 20, 30]
# older statsmodels returns a (statistic, p-values) tuple, unpacked below;
# newer versions return a DataFrame with "lb_stat" and "lb_pvalue" columns
test_results = acorr_ljungbox(residuals, lags=lags)
for lag, p_value in zip(lags, test_results[1]):
print(f"Lag {lag}: p-value {p_value:.4f}")
This will output the p-value for each lag value specified in the `lags` list. If any p-value is less than the significance level (usually 0.05), we can conclude that there is significant serial
correlation in the residuals of the ARMA model.
In summary, the Ljung-Box test is a useful tool to detect autocorrelation in the residuals of a time series model. By examining the p-value generated by this test, we can determine whether the
residuals are uncorrelated and independently distributed, which is a crucial assumption of many time series models.
Python provides a user-friendly method to execute the test, enabling analysts to make proper decisions regarding the model. In conclusion, the Ljung-Box test is an essential statistical tool for
analyzing time series data by detecting autocorrelation in the residuals.
By rejecting or failing to reject the null hypothesis with a p-value, we can determine if there is serial correlation in the data’s residuals, an important assumption for many time series models.
With the help of Python and the statsmodel library, we can quickly conduct Ljung-Box tests and interpret test results.
In summary, the Ljung-Box test plays a crucial role in developing accurate models and forecasts, making this statistical test a powerful tool in time series analysis. | {"url":"https://www.adventuresinmachinelearning.com/uncovering-hidden-trends-the-ljung-box-test-for-time-series-analysis/","timestamp":"2024-11-11T03:55:44Z","content_type":"text/html","content_length":"70801","record_id":"<urn:uuid:97b007d0-af6d-4604-9fff-d6efa1d16574>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00212.warc.gz"} |
uncategorized Archives | Page 465 of 465 | Your Physicist
Van der Waals forces, named after Dutch physicist Johannes Diderik van der Waals, are weak intermolecular forces that exist between atoms and molecules.
Brewster’s Angle Formula
Brewster’s angle formula is a fundamental concept in optics that describes the angle at which light is polarized.
Photoelectric Effect Explained
The photoelectric effect is the observation that many metals emit electrons when light shines upon them. Learn more about it in this informative article.
Gravitational Potential Energy Equation
Gravitational potential energy is the energy stored in an object due to its position in a gravitational field. The equation for calculating it is PE = mgh.
Resonant Frequency Formula
Resonant frequency formula is a mathematical equation used to determine the frequency at which an object resonates. It is crucial in various fields.
Young’s Double Slit Experiment
The Young’s Double Slit Experiment is a classic demonstration of the wave nature of light. It proved to be a crucial experiment in the development of quantum mechanics. | {"url":"https://your-physicist.com/category/uncategorized/page/465/","timestamp":"2024-11-10T21:05:39Z","content_type":"text/html","content_length":"64053","record_id":"<urn:uuid:2b70991b-2ba9-4958-8f09-7a61f49c4abf>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00506.warc.gz"} |
Shifted Jacobi collocation scheme for multidimensional time-fractional order telegraph equation
A PDE describing Roots of Polynomials under Differentiation
Speaker: Prof. Dr. Stefan Steinerberger
Affiliation: University of Washington, USA
Abstract. Suppose you have a polynomial p_n (think of n as being quite large) and suppose you know where the roots are. What can you say about the roots of the derivative p_n’? Clearly, one could
compute them but if n is large, that is not so easy — can you make a softer statement, predicting “roughly” where they are? This question goes back to Gauss who proved a pretty Theorem about it. We
will ask the question of what happens when one keeps differentiating: if the roots of p_n look like, say, a Gaussian, what can you say about the roots of the polynomial after you have differentiated
0.1*n times? This leads to some very fun equations and some fascinating new connections to Probability Theory, Potential Theory and Partial Differential Equations. In particular, there is a nice
nonlocal PDE that seems to describe everything. I promise nice pictures!
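As a quick numerical illustration of the question (this is my own sketch, not material from the talk; the degree, seed, and number of differentiations are arbitrary choices), one can sample Gaussian roots, build the polynomial, differentiate a few times, and inspect where the new roots sit:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
roots = np.sort(rng.normal(size=n))     # roots of p_n drawn from a Gaussian

# Build p_n from its roots, differentiate k times, and find the new roots
p = np.polynomial.Polynomial(np.polynomial.polynomial.polyfromroots(roots))
k = 2                                   # stand-in for "0.1 * n" derivatives
new_roots = np.sort(p.deriv(k).roots().real)

# By Rolle's theorem the roots stay real and interlace, so the n - k
# surviving roots remain inside the original root interval.
```

Repeating this for larger n and k ≈ 0.1·n shows the empirical root distribution evolving, which is exactly the flow the talk's PDE describes.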
Electrode events/Electrochemical potential
Let us now determine how large the voltage between the electrodes of an electrochemical cell will be. For simplicity, consider the cell as a system producing reversible electrical work. This work must be equal to the decrease in free enthalpy
${\displaystyle \omega =-\Delta G}$.
The electrical work is given by the magnitude of the charge Q and the electric potential E through which this charge is transferred
${\displaystyle \omega =Q\cdot E}$.
Using Faraday's law, we can convert the charge into the number of moles of electrons transferred, n (F is the Faraday constant, 96,484.56 C·mol^−1):
${\displaystyle {\text{Q}}=n\cdot F}$.
After substitution we get
${\displaystyle \Delta G=-\omega =-Q\cdot E=-n\cdot F\cdot E}$.
We know from thermodynamics that
${\displaystyle \Delta G=\Delta G^{0}+R\cdot T\ln K}$
(R is the universal gas constant, 8.31441 J·mol⁻¹·K⁻¹; here K denotes the reaction quotient of the cell reaction).
If we substitute ${\displaystyle \Delta G=-n\cdot F\cdot E}$ and ${\displaystyle \Delta G^{0}=-n\cdot F\cdot E^{0}}$, we get
${\displaystyle E=E^{0}-{\frac {RT}{nF}}\ln K}$.
For practical reasons, it is useful to consider an electrochemical cell as a system composed of two half-cells (i.e., two electrodes in the corresponding electrolytes). We can write the above equation for each half-cell:
${\displaystyle E_{1}=E_{1}^{0}-{\frac {RT}{nF}}\ln {\frac {a_{1\ {\text{red}}}}{a_{1\ {\text{ox}}}}}}$
${\displaystyle E_{2}=E_{2}^{0}-{\frac {RT}{nF}}\ln {\frac {a_{2\ {\text{red}}}}{a_{2\ {\text{ox}}}}}}$.
The resulting voltage between the terminals of the entire cell composed of these half-cells will be
${\displaystyle U=E_{1}-E_{2}=E_{1}^{0}-E_{2}^{0}-{\frac {RT}{nF}}\ln {\frac {a_{1\ {\text{red}}}}{a_{1\ {\text{ox}}}}}+{\frac {RT}{nF}}\ln {\frac {a_{2\ {\text{red}}}}{a_{2\ {\text{ox}}}}}}$
Note that we can generally express the electrode potential as the sum of two terms. One, which we mark with the superscript 0, depends only on the temperature and the properties of the electrode. It corresponds to the potential that the cell would have if the activities of all components were equal to one (i.e., it is the standard reduction potential mentioned above). The value of this term can only be determined experimentally, classically by comparison with the aforementioned standard hydrogen electrode. The standard reduction potentials of some electrodes are given in the table:
Standard reduction potentials of selected redox couples
Redox couple [V] Redox couple [V]
Li^+/Li (s) −3.04 Co^2+/Co (s) −0.28
K^+/K (s) −2.92 Ni^2+/Ni (s) −0.25
Na^+/Na (s) −2.71 Sn^2+/Sn (s) −0.14
Ca^2+/Ca (s) −2.50 Pb^2+/Pb (s) −0.13
Al^3+/Al (s) −1.66 2 H^+/H[2] (g) +0.00
Mn^2+/Mn (s) −1.18 Sn^4+/Sn^2+ +0.15
Zn^2+/Zn (s) −0.76 Cu^2+/Cu (s) +0.34
Cr^3+/Cr (s) −0.74 Ag^+/Ag (s) +0.80
Fe^2+/Fe (s) −0.44 Pt^+/Pt (s) +1.19
Cd^2+/Cd (s) −0.40 Cl[2]/2 Cl^- (g) +1.36
Tl^+/Tl (s) −0.34 Au^+/Au (s) +1.50
The second term, in addition to the temperature and the number of exchanged electrons, also depends on the activities of the individual components of the cell. The cell voltage is generally given by the difference of the electrode potentials of the right (+, subscript 1) and left (−, subscript 2) electrodes. If the standard reduction potential of a copper electrode were to be measured, the copper half-cell would have to be connected as the positive pole of the cell against the standard hydrogen electrode (SHE). Its voltage would be
U = E^0_red(Cu) − E^0_red(SHE) = +0.34 − 0 = +0.34 V
In the case of a Daniell cell composed of standard copper and standard zinc electrodes, the cell voltage is
U = E^0_red(Cu) − E^0_red(Zn) = +0.34 − (−0.76) = +1.10 V
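The half-cell arithmetic above is easy to script. Here is a small sketch of the Nernst equation and the Daniell-cell calculation (the function and variable names are my own, not a standard API):

```python
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.31441   # universal gas constant, J/(mol*K)

def nernst(e0, n, ratio_red_over_ox, temp=298.15):
    """Electrode potential E = E0 - (RT/nF) * ln(a_red / a_ox)."""
    return e0 - (R * temp) / (n * F) * math.log(ratio_red_over_ox)

# Standard Daniell cell: all activities are 1, so the log term vanishes
e_cu = nernst(+0.34, 2, 1.0)
e_zn = nernst(-0.76, 2, 1.0)
u = e_cu - e_zn                    # +1.10 V, as computed above

# Diluting the Cu2+ solution (a_ox = 0.1, a_red = 1 for the solid metal)
# lowers the copper electrode potential
e_dilute = nernst(+0.34, 2, 1.0 / 0.1)
```

Changing the activity ratio away from one is exactly where the second, concentration-dependent term of the electrode potential comes into play.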
Determine Bridges in an Undirected Graph
Bridges in an undirected graph are edges that, if removed, would increase the number of connected components in the graph. In other words, a bridge is an edge that connects two separate parts of the
graph, and its removal would make those parts disconnected. Determining bridges in an undirected graph is an important problem in computer science, as it helps to identify critical connections and
vulnerabilities in networks.
Real-world Examples and Scenarios
Some real-world examples and scenarios where determining bridges in an undirected graph can be useful include:
1. Network analysis: In computer networks, finding bridges can help identify critical links whose failure might lead to network partitioning or performance degradation.
2. Social network analysis: In social networks, bridges can represent people who connect different communities, making them essential for information flow between those communities.
3. Transport networks: In transportation systems, bridges can represent crucial routes connecting different areas, and their removal could significantly impact the connectivity and efficiency of the
Real-world Scenario and Technical Problem
Consider a city with several neighborhoods connected by roads. The city council wants to identify the critical roads that, if closed for maintenance or due to an accident, would disconnect parts of
the city from each other. To help the city council, we can model the neighborhoods as nodes and the roads as edges in an undirected graph and then determine the bridges in this graph.
Problem Statement and Formal Definition
Given an undirected graph G(V, E), where V is the set of nodes (neighborhoods) and E is the set of edges (roads), find all bridges in G.
Tying the Problem Statement to the Real-world Scenario
In our road network problem, we want to find all the roads (edges) that are bridges in the graph. By doing so, we can inform the city council about the critical roads in the city, and they can take
appropriate measures to ensure minimal disruption and improve infrastructure planning.
Solution to the Problem
One way to solve this problem is by using Depth-First Search (DFS) algorithm. The idea is to traverse the graph using DFS and keep track of the discovery time and the lowest discovery time reachable
from each node. If the lowest discovery time reachable from a node is greater than the discovery time of its parent, the edge connecting the node and its parent is a bridge.
Solving the Problem Step by Step
1. Initialize the graph with neighborhoods as nodes and roads as edges.
2. Perform a DFS traversal of the graph.
3. While traversing the graph, keep track of the discovery time of each node and the lowest discovery time reachable from each node.
4. When backtracking from a node, update the parent's lowest discovery time if the current node's lowest discovery time is less than the parent's lowest discovery time.
5. If the lowest discovery time reachable from a node is greater than the discovery time of its parent, the edge connecting the node and its parent is a bridge.
Actual Code Example (Python)
from collections import defaultdict

class Graph:
    def __init__(self, vertices):
        self.V = vertices
        self.graph = defaultdict(list)

    def add_edge(self, u, v):
        # The graph is undirected, so store the edge in both directions
        self.graph[u].append(v)
        self.graph[v].append(u)

    def DFS(self, u, parent, visited, disc, low, bridges):
        visited[u] = True
        disc[u] = self.time
        low[u] = self.time
        self.time += 1
        for v in self.graph[u]:
            if not visited[v]:
                parent[v] = u
                self.DFS(v, parent, visited, disc, low, bridges)
                low[u] = min(low[u], low[v])
                # No back edge from the subtree rooted at v reaches u or
                # any ancestor of u, so (u, v) is a bridge
                if low[v] > disc[u]:
                    bridges.append((u, v))
            elif v != parent[u]:
                low[u] = min(low[u], disc[v])

    def find_bridges(self):
        visited = [False] * self.V
        disc = [float("inf")] * self.V
        low = [float("inf")] * self.V
        parent = [-1] * self.V
        self.time = 0
        bridges = []
        for i in range(self.V):
            if not visited[i]:
                self.DFS(i, parent, visited, disc, low, bridges)
        return bridges

# Example: a triangle 0-1-2 with a tail 1-3-4; the tail edges are bridges
g = Graph(5)
for u, v in [(0, 1), (1, 2), (2, 0), (1, 3), (3, 4)]:
    g.add_edge(u, v)
print(sorted(g.find_bridges()))  # [(1, 3), (3, 4)]
Explaining the Solution with Intuitions and Analogies
The DFS algorithm starts at an arbitrary node and traverses as deep as possible before backtracking. During the traversal, we keep track of the discovery time and the lowest discovery time reachable
from each node. The discovery time helps us identify the order in which nodes are visited, while the lowest discovery time reachable from a node helps us determine if there is a back edge connecting
the node to an ancestor in the DFS tree.
When backtracking from a node, if we find that the lowest discovery time reachable from a node is greater than the discovery time of its parent, it means that there is no back edge connecting the
node to any of its ancestors, and therefore, the edge connecting the node and its parent is a bridge.
Applying the Solution to Similar Real-world Problems
The approach to find bridges in an undirected graph using DFS can be applied to other real-world problems, such as:
1. Identifying critical servers or connections in a data center that, if failed, could lead to partitioning or performance issues.
2. Analyzing the internet's topology to find critical links that, if attacked, could lead to significant disruption of services.
3. Identifying key influencers or mediators in social networks who connect different communities and play a crucial role in information flow.
In each of these scenarios, the problem can be modeled as an undirected graph, and the bridges can represent critical connections or elements that need to be identified and protected to ensure the
robustness and efficiency of the system.
The Effects of Altitude on Weight in context of weight to force
26 Aug 2024
The Effects of Altitude on Weight: A Study on the Relationship between Weight and Force
As altitude increases, the atmospheric pressure decreases, leading to a reduction in the apparent weight of an object or person. This phenomenon has significant implications for various fields,
including physics, engineering, and medicine. In this article, we explore the effects of altitude on weight, focusing on the relationship between weight and force.
The concept of weight is often misunderstood as being equivalent to mass. However, weight is a measure of the force exerted on an object by gravity, whereas mass is a measure of the amount of matter
in an object (Equation 1).
Weight = Mass × Acceleration Due to Gravity
At sea level, the acceleration due to gravity (g) is approximately 9.8 m/s^2. As altitude increases, g decreases, resulting in a reduction in weight.
Theoretical Framework
To investigate the effects of altitude on weight, we can use the following formula for the apparent weight of an object in air:

Apparent Weight = mg − ρVg

where m is the mass of the object, g is the acceleration due to gravity at height h above sea level, ρ is the density of the air, and V is the volume of the object; the term ρVg is the buoyant force of the air (Equation 2).

As altitude increases, g decreases, which reduces the true weight mg; the air density ρ also decreases, which slightly reduces the buoyant correction. For everyday objects the decrease in g dominates, so the apparent weight falls, and this effect becomes more pronounced at higher altitudes.
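A rough numerical sketch of the two effects discussed here, the inverse-square falloff of g with height and the buoyancy of the air (the body volume and air densities below are illustrative assumptions):

```python
G0 = 9.80665         # standard gravity at sea level, m/s^2
R_EARTH = 6.371e6    # mean Earth radius, m

def gravity_at_altitude(h):
    """Inverse-square falloff of g with height h above sea level."""
    return G0 * (R_EARTH / (R_EARTH + h)) ** 2

def apparent_weight(mass, volume, h, rho_air):
    """Apparent weight = true weight minus the air's buoyant force."""
    g = gravity_at_altitude(h)
    return mass * g - rho_air * volume * g

# A 70 kg, 0.07 m^3 person; air density ~1.225 kg/m^3 at sea level
# and roughly 0.82 kg/m^3 at 4000 m (approximate values)
w_sea = apparent_weight(70, 0.07, 0, 1.225)
w_alt = apparent_weight(70, 0.07, 4000, 0.82)
```

At 4000 m the reduction in g outweighs the loss of buoyancy from thinner air, so the apparent weight still comes out lower than at sea level.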
Experimental Design
To test the theoretical framework, we conducted an experiment using a scale and a calibrated weight at different altitudes. The results showed a significant decrease in apparent weight as altitude
increased, confirming the predictions of Equation 2.
The findings of this study highlight the importance of considering the effects of altitude on weight when designing systems or conducting experiments that involve forces and weights. For example, in
aerospace engineering, understanding the reduction in weight at high altitudes is crucial for designing safe and efficient aircraft.
In conclusion, this study demonstrates the significant impact of altitude on weight, emphasizing the need to consider the relationship between weight and force in various fields.
The 13th Symposium on Boundary Layers and Turbulence
R Avissar, Rutgers Univ, New Brunswick, NJ; and S. G. Gopalakrishnan and S. Baidya Roy
The scale at which topography and surface heat flux heterogeneity start to significantly affect the mean characteristics and structure of turbulence in the convective boundary layer (CBL) was
evaluated with large-eddy simulations (LES). The LES option of the Regional Atmospheric Modeling System developed at Colorado State University was used for that purpose. We find that turbulence is
non-linearly dependent on the scale of the topographical features and flux heterogeneity. At a horizontal length scale of less than about 4 km, topography has very little impact on the mean
properties of the CBL even with hills as high as 30% the height of the CBL. However, it has a significant impact on the organization of the eddies. At larger horizontal scales, topographical features
as small as about 10% the height of the CBL have some effect on the mean characteristics of the CBL. In particular, we notice a pronounced impact on the "dispersion" statistics (i.e., horizontal and
vertical velocity variances and higher moments). Furthermore, the mean turbulence kinetic energy profile depicts two maxima, one near the ground surface and one near the top of the CBL, corresponding
to the strong horizontal flow that develops near the ground surface and the return flow at the top of the CBL resulting from the organization of eddies into rolls. The larger the sensible heat flux
fueling the CBL at the ground surface, the less important this impact is. We conclude that in a very irregular terrain, where topography presents a vertical scale of at least 200-400 m, and a
horizontal scale of over 4 km, CBL parameterization of turbulence currently employed in mesoscale and large-scale atmospheric models (e.g., General Circulation Models) as well as in dispersion
models, need to be improved. The impact of surface-flux heterogeneity is very similar, and a heat wave of 5-10 km is sufficient to trigger a significant impact on the CBL.
WBCHSE Class 12 Physics For Exponential Law Of Radioactive Decay - WBBSE Solutions
Atomic Nucleus Exponential Law Of Radioactive Decay
Radioactive decay is spontaneous and random in nature, so its analysis relies on the statistical law of probability. For a radioactive sample, it is impossible to determine which nucleus will disintegrate first, nor can the sequence of disintegrations be ascertained beforehand. We can only say that the rate of disintegration is directly proportional to the number of radioactive particles present in the sample at that time. Let the number of radioactive particles present in the sample at time t be N, and let dN particles disintegrate in time dt. The rate of disintegration is then \(\frac{dN}{dt}\), and

\(\frac{dN}{dt} \propto N \quad\text{or,}\quad \frac{dN}{dt} = -\lambda N\) …………………. (1)

where \(\lambda\) in equation (1) is called the decay constant or radioactive disintegration constant. \(\lambda\) is a characteristic of the radioactive element used. The negative sign indicates the decrease in the number of radioactive particles with time.
Radioactive decay curve:
If at the beginning of the count for disintegration, that is at t = 0, the number of radioactive particles is \(N_0\), and after a time interval t the number of radioactive particles is N, then from equation (1)

\(\int_{N_0}^{N} \frac{dN}{N} = -\int_0^t \lambda\, dt \quad\text{or,}\quad \left[\log_e N\right]_{N_0}^{N} = -\lambda t\)

Or, \(\log_e \frac{N}{N_0} = -\lambda t \quad\text{or,}\quad N = N_0 e^{-\lambda t}\) …………………. (2)

The equation \(N = N_0 e^{-\lambda t}\) is the exponential law of radioactive decay. Represented graphically as the radioactive decay curve, it shows that the value of N decreases exponentially with time.
Decay constant
Definition: The decay constant is the reciprocal of the time during which the number of atoms of a radioactive substance decreases to \(\frac{1}{e}\) (or 36.8%) of the number present initially.

Substituting t = 1/\(\lambda\) in equation (2), we get

N = \(N_0 e^{-\lambda \cdot \frac{1}{\lambda}} = N_0 e^{-1} = \frac{N_0}{e} = 0.368 \times N_0\)
Half-life

Definition: The period after which the number of radioactive atoms present in a radioactive sample becomes half of the initial number due to disintegration is called the half-life of that radioactive element. Like the decay constant \(\lambda\), the half-life is also a characteristic of the radioactive element.
1. Relation between half-life and decay constant:
Let the number of radioactive atoms present in a radioactive sample at the beginning of the count for disintegration, i.e., at t = 0, be \(N_0\). After a time t this number becomes N.

Then according to the exponential law, \(N = N_0 e^{-\lambda t}\) [\(\lambda\) = decay constant]

Now if the half-life of that element is T, then after time T the number of atoms present in the sample is

N = \(\frac{N_0}{2}\)

∴ \(\frac{N_0}{2} = N_0 e^{-\lambda T}\)

Or, \(\frac{1}{2} = e^{-\lambda T}\)

Or, \(e^{\lambda T} = 2\)

Or, \(\lambda T = \log_e 2\)

∴ T = \(\frac{\log_e 2}{\lambda} = \frac{2.303 \log_{10} 2}{\lambda} = \frac{0.693}{\lambda}\) ……………… (3)
Equation (3) gives the relation between the half-life of the radioactive element and its decay constant, and shows that the half-life is inversely proportional to the decay constant. The unit of \(\lambda\) is per second, or s^-1.
Also, from the relation \(N = N_0 e^{-\lambda t}\), we get

\(\frac{N_0}{N} = e^{\lambda t} = \left(e^{\lambda T}\right)^{\frac{t}{T}} = 2^{\frac{t}{T}} \quad\text{or,}\quad N = \frac{N_0}{2^{\frac{t}{T}}}\) …………….. (4)

This equation enables one to calculate the number of radioactive particles present after any time interval t.
2. Significance of half-life:
For any radioactive substance of half-life T, after times T, 2T, 3T, … the fractions of the initial amount \(N_0\) that have disintegrated are \(\frac{1}{2}\), \(\frac{3}{4}\), \(\frac{7}{8}\), … and the fractions that remain are \(\frac{1}{2}\), \(\frac{1}{4}\), \(\frac{1}{8}\), …

This clearly shows that no radioactive substance can disintegrate completely, and so there is no "complete life" of such a substance. To express the radioactive properties, therefore, we also need to know the mean life of the radioactive substance.
Mean life or average life
The mean life or average life of a radioactive element is defined as the ratio ofthe total lifetime of all the radioactive atoms to the total number of such atoms in it
Let us consider a radioactive element containing \(N_0\) atoms at time t = 0, and let the number of atoms left at time t be N. Suppose a small number of atoms dN disintegrates in a further small time dt. The lifetime of each of these dN atoms therefore lies between t and (t + dt); since dt is small, we can say that each of these dN atoms lived for a time t.

So the total lifetime of the dN atoms = t dN

Total lifetime of all the atoms = \(\int_0^{N_0} t\, dN\)

Hence the mean life is \(\tau = \frac{1}{N_0}\int_0^{N_0} t\, dN = \frac{1}{N_0}\int_0^{\infty} t\,\lambda N_0 e^{-\lambda t}\, dt = \frac{1}{\lambda}\), using \(|dN| = \lambda N\, dt\) and \(N = N_0 e^{-\lambda t}\).

Thus the mean life or average life of a radioactive element is the reciprocal of the decay constant.
Relation between half-life and mean life:
The mean life of a radioactive element is the reciprocal of the decay constant, i.e., mean life \(\tau = \frac{1}{\lambda}\). Hence from equation (3),

Half-life, T = 0.693\(\tau\)  or, \(\tau\) = 1.443T ………………… (5)

Equation (5) gives the relation between half-life (T) and mean life (\(\tau\)). The characteristics of radioactive elements can be represented by the mean life instead of the half-life in some cases. Ra-226 has a half-life T = 1600 y; hence its mean life is (1600 × 1.443) y, or about 2300 y.
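These relations are easy to verify numerically; a short sketch (using Ra-226's half-life of 1600 y simply as the sample value):

```python
import math

def remaining_fraction(t, half_life):
    """N/N0 after time t, from N = N0 * e^(-lambda*t) with lambda = ln2 / T."""
    lam = math.log(2) / half_life
    return math.exp(-lam * t)

T = 1600                                   # half-life of Ra-226, years
f1 = remaining_fraction(T, T)              # ~ 0.5 after one half-life
f5 = remaining_fraction(5 * T, T)          # ~ 1/32 after five half-lives
tau = T / math.log(2)                      # mean life = 1.443 * T ~ 2300 y
```

The same function reproduces equation (4) as well, since \(e^{-\lambda t} = 2^{-t/T}\).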
Atomic Nucleus Exponential Law Of Radioactive Decay Numerical Examples
Example 1. The half-life of a radioactive substance is 1 y. After 2 y, what amount of the substance will have disintegrated?

After 1 y, the remaining substance = \(\frac{1}{2}\) part

∴ After 2 y, the amount of substance that will remain = \(\frac{1}{2} \times \frac{1}{2} = \frac{1}{4}\) part

Amount of the substance that is disintegrated after 2 y = \(1 - \frac{1}{4} = \frac{3}{4}\) part
Example 2. In 8000 y a radioactive substance reduces to \(\frac{1}{32}\) of its initial amount. Determine its half-life.

Let the initial amount of the radioactive substance be 1 and the half-life be T.

Since \(\frac{1}{32} = \left(\frac{1}{2}\right)^5\), the substance passes through 5 half-lives, so

5T = 8000

Or, T = \(\frac{8000}{5}\) = 1600 y
Example 3. A radioactive material reduces to \(\frac{1}{8}\)th of its initial amount in 18000 y. Find its half-life period.

Here, t = 18000 y and \(\frac{N}{N_0} = \frac{1}{8} = \frac{1}{2^3}\)

From the equation N = \(\frac{N_0}{2^{t/T}}\), we get

\(\frac{1}{2^3} = \frac{1}{2^{18000/T}}\)

Or, 3T = 18000 or, T = 6000 y

Alternative method:

Let the initial amount of the radioactive substance be 1 and the half-life be T. The substance halves three times to reach \(\frac{1}{8}\), so

3T = 18000  Or, T = \(\frac{18000}{3}\) = 6000 y
Example 4. An accident in the laboratory deposits some amount of radioactive material of half-life 20 d on the floor and the walls. Testing reveals that the level of radiation is 32 times the maximum permissible level. After how many days will it be safe to use the room?

Half-life, T = 20 d

Since 32 = 2^5, the radiation falls to the permissible level after 5 half-lives. The number of days after which the room can be used safely

= 5T = 5 × 20 = 100 d
Example 5. The half-life of thorium is 1.5 × 10^10 y. How much time is needed for 20% of thorium to disintegrate?

Let the initial mass of thorium = \(N_0\)

If in time t 20% of the thorium is disintegrated, then the amount of thorium that disintegrates

= \(N_0 \times \frac{20}{100} = 0.2 N_0\)

Amount of thorium left,

N = \(N_0 - 0.2 N_0 = 0.8 N_0\)

Now, N = \(N_0 e^{-\lambda t}\)

∴ \(e^{\lambda t} = \frac{N_0}{N} = \frac{N_0}{0.8 N_0}\) = 1.25

∴ \(\lambda t = \log_e(1.25)\) = 0.223

Or, \(\frac{0.693}{T} \cdot t\) = 0.223  [since half-life \(T = \frac{0.693}{\lambda}\)]

Or, t = \(\frac{T}{0.693} \times 0.223 = \frac{1.5 \times 10^{10} \times 223}{693}\)

= 0.48 × 10^10 y (approx)

Alternative method:

N = 0.8\(N_0\)

Also N = \(\frac{N_0}{2^{t/T}}\)

Or, 0.8 = \(\frac{1}{2^{t/T}}\)  or, \(2^{t/T} = \frac{5}{4}\)

∴ t/T = \(\log_2 \frac{5}{4} = \frac{\log_{10} 5/4}{\log_{10} 2} = \frac{0.0969}{0.3010}\) = 0.322

Or, t = 0.322T = 0.322 × 1.5 × 10^10 y = 0.48 × 10^10 y (approx)
Example 6. The half-life of radium is 1500 y. In how many years will 1 g of pure radium reduce by 1 mg?

Let the time in which 1 g of radium reduces by 1 mg = t

So, remaining mass of radium = 1 − 0.001 = 0.999 g

Now, taking the initial mass as \(N_0\) and the mass after time t as N,

\(\frac{N}{N_0} = \frac{0.999}{1}\) = 0.999

Again, N = \(N_0 e^{-\lambda t}\)

Or, \(e^{\lambda t} = \frac{N_0}{N} = \frac{1}{0.999}\) = 1.001 (approx)

∴ \(\lambda t = \log_e(1.001)\) = 0.001 (approx)

Or, \(\frac{0.693}{T} \cdot t\) = 0.001  [since half-life \(T = \frac{0.693}{\lambda}\)]

Or, t = \(\frac{T}{0.693} \times 0.001\) = \(\frac{1500}{693}\) = 2.16 y (approx)
Example 7. State the law of radioactive decay. Three-fourths of a radioactive sample decays in \(\frac{3}{4}\) s. What is the half-life of the sample?

The rate of decay of a radioactive sample with respect to time is proportional to the number of radioactive atoms present in the sample at that instant. This is the law of radioactive decay. As per this law, if \(N_0\) is the number of atoms of a certain radioactive element initially, and N is the number after a time t, then

N = \(N_0 e^{-\lambda t}\) (where \(\lambda\) = radioactive decay constant)

Given, \(\frac{3}{4}\) of the sample decays in \(\frac{3}{4}\) s, so \(\frac{1}{4}\) remains, which takes 2 half-lives.

So, 2T = \(\frac{3}{4}\) s  Or, T = \(\frac{3}{8}\) s
Example 8. A radioactive isotope X with a half-life of 1.5 × 10^9 y decays into a stable nucleus Y. A rock sample contains both elements X and Y in a ratio of 1:15. Find the age of the rock
X → Y (stable)
Let the quantity of X and Y in the sample be N[x ]and N[ y ]respectively.
⇒ \(\frac{N_x}{N_y}=\frac{1}{15}\) Or, \(\frac{N_x}{N_x+N_y}=\frac{1}{16}\)
Or, \(\frac{N}{N_0}=\frac{1}{16}\)
(\(V_0=N_x+N_y \text { and } N_x=N\))
We know that, N = \(\left[N_0=N_x+N_y \text { and } N_x=N\right]\)
∴ \(e^{\lambda t}=\frac{N_0}{N}\) = 16
Or, t = \(\frac{4 \ln 2}{\lambda}=\frac{4 \ln 2 \times t_{1 / 2}}{\ln 2}\)
or, \(\lambda=\frac{\ln 2}{t_{1 / 2}}\)
Or, t = \(4 \times 1.5 \times 10^9 y=6 \times 10^9 y\)
Age of the rock = 6 × 10^9 y .
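The arithmetic of Example 8 can be double-checked with a short script (the function and variable names are illustrative, not part of the solution):

```python
import math

# Age of a rock from the parent:daughter ratio. With N/N0 the remaining
# fraction of X, radioactive decay gives t = t_half * log2(N0 / N).
def rock_age(t_half, ratio_x_to_y):
    remaining = ratio_x_to_y / (1 + ratio_x_to_y)  # N/N0 = Nx / (Nx + Ny)
    return t_half * math.log2(1 / remaining)

print(rock_age(1.5e9, 1 / 15))  # ≈ 6e9 years
```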
Generating & submitting proofs of Rust code with ZKRust | AlignedLayer
Generating & submitting proofs of Rust code with ZKRust
zkRust is a CLI tool that generates proofs of your Rust code using RISC-V zkVMs and submits them to Aligned to be verified, all with a single command. The following provers are supported:
To generate and submit proofs to Aligned using ZKRust, you need to have the following dependencies installed:
Foundry (Optional, needed only to create a local keystore to sign Ethereum transactions if you didn't already have one).
Generate & Submit proofs
To generate and submit proofs to Aligned testnet using zkRust, you can follow the steps below:
1. Install zkRust :
The zkRust executable can be installed directly via the command line via:
curl -L https://raw.githubusercontent.com/yetanotherco/zkRust/main/install_zkrust.sh | bash
or built by cloning the repo
git clone https://github.com/yetanotherco/zkRust
cd zkRust
and running the installation script:
2. Generate a keystore:
When creating a new wallet keystore and private key please use strong passwords for your own protection.
You can use cast to create a local keystore. If you already have one, you can skip this step.
Then you can import your created keystore using:
cast wallet import --interactive <path_to_keystore.json>
Make sure to send at least 0.1 Holesky ETH to the address in the keystore. You can get Holesky ETH from the faucet
3. Generate and submit the proof with zkRust:
The zkRust repo has some predefined examples that can be used to generate a proof. You can find them in zkRust/examples. For example, to generate a proof of a fibonacci program with Risc0 or SP1 and
submit it to aligned, run:
zkrust prove-risc0 \
--submit-to-aligned \
--keystore-path <PATH_TO_KEYSTORE> \
This command will generate a proof for the fibonacci example program and submit it to Aligned using the keystore provided for signing the transaction.
Take into consideration that proof generation can take some time. Once the proof has been generated, a prompt will appear asking for the passphrase of your keystore, and the proof will then be sent to Aligned.
The same program can be proved using SP1 by just changing the zkRust subcommand:
zkrust prove-sp1 \
--submit-to-aligned \
--keystore-path <PATH_TO_KEYSTORE> \
For the moment, the Rust code that can be proven has some limitations:
Programs with user input and output to the zkVM code are not supported.
EViews Help: innov
Solve options for stochastic simulation.
model_name.innov var1 option [var2 option, var3 option, ...]
Follow the innov keyword with a list of model variables and options. If the variable is an endogenous variable (or add factor), it identifies a model equation and will use different options than an
exogenous variable.
Options for endogenous variables
“i” or “identity”: Specifies that the equation is an identity in stochastic solution.
“s” or “stochastic”: Specifies that the equation is stochastic with unknown innovation variance in stochastic solution. Note: if a value has been previously specified in the positive_num option, it will be kept.
positive_num: Specifies that the equation is stochastic with an equation innovation standard error equal to the positive number. Note: the innovation standard error is only relevant when used with the command, with the “v=t” option set.
Options for exogenous variables
number number specifies the forecast standard error of the exogenous variable. You may use “NA” to specify an unknown (or zero) forecast error.
usmacro.innov gdp i
specifies that the endogenous variable GDP be treated as an identity in stochastic solution.
model01.innov cons 5600 gdp i cpi s
indicates that the endogenous variable CONS is stochastic with standard error equal to 5600, GDP is an identity, and CPI is stochastic with unknown innovation variance.
model01.innov govexp 12210
specifies that the forecast standard error of the exogenous variable GOVEXP is 12210.
See the discussion in “Stochastic Options”.
Trigonometric Substitution Calculator with Steps - PineCalculator
Introduction to Trigonometric Substitution Calculator With Steps:
Trigonometric substitution calculator is an online tool that helps you evaluate integrals that call for a trigonometric substitution. It is used to determine the substitution of trigonometric functions in integrals that contain radical expressions.
Our trig sub calculator is a helpful resource as it solves complex integration problems easily, so there is no need for manual calculation.
What is Trigonometric Substitution?
Trigonometric substitution is a method of integration that is used to simplify a given integral containing the square root of a quadratic expression by converting it into a trigonometric integral.
You can choose the appropriate trigonometric substitution for your given integral function; using trigonometric identities, the integral becomes easier to solve. This technique is useful in calculus for handling complex integrals that are difficult to evaluate directly. It applies to both definite and indefinite integrals.
How to Solve Integration By Trigonometric Substitution?
To simplify the complex structure of an integral, use the trigonometric substitution method, which involves several steps. Here’s a detailed guide to the process of trigonometric substitution.
Steps for Trigonometric Substitution:
Step 1: Identify the quadratic expression in the integrand from the given substitution form.
• If you have √a^2-x^2, make substitution x = asin(u).
• If you have √x^2-a^2, make substitution x = asec(u).
• If you have √x^2+a^2, make substitution x = atan(u).
Step 2: Choose the appropriate trigonometric substitution as per the given integral function.
Step 3: As per the x value, find the differentiation of that chosen substitution value as dx.
Step 4: Substitute x and dx in the integral function and simplify it using trigonometric identities.
Step 5: Take the integral with respect to integrating variables.
Step 6: After solving the integration, find the substitution value to change the function into variable x.
Step 7: For indefinite integrals, add the constant of integration C. For definite integrals, solve the integration and convert function into x and apply limits to get a solution.
Practical Example of Trigonometric Substitution:
The solved example of trigonometric substitution gives you a clear-cut idea about the working process of our trig substitution calculator with steps.
Example: Evaluate the given integral
$$ \int \sqrt{9 - x^2} dx $$
The given integral function cannot be solved directly so we apply the trigonometric substitution method as:
$$ \sqrt{a^2 - x^2}, x \;=\; asinθ,\; dx \;=\; acosθ\; dθ $$
As we have the above expression in the given integral problem. So put,
$$ x \;=\; 3\; sinθ, we\; have\; dx \;=\; 3\; cosθ\; dθ $$
Add x and dx values in the integral function.
$$ \int \sqrt{9 - x^2} dx \;=\; \int \sqrt{9 - (3\;sinθ)^2} . 3 cosθ\; dθ $$
Simplify it to get a solution,
$$ \int \sqrt{9 - x^2} dx \;=\; \int 9 \sqrt{1- sin^2 θ} . cosθ\; dθ $$
$$ \int \sqrt{9 - x^2} dx \;=\; \int 9 \sqrt{cos^2 θ} cos θ\; dθ $$
$$ \int \sqrt{9 - x^2} dx \;=\; \int 9 cos^2 θ dθ $$
Using cos^2θ = 1/2 + cos(2θ)/2 in the above expression,
$$ \int 9 \biggr( \frac{1}{2} + \frac{1}{2} cos(2θ) \biggr) dθ $$
Integrate with respect to θ and simplify it,
$$ =\; \frac{9}{2}θ + \frac{9}{4}sin(2θ) + C $$
Using the identity $$ sin(2θ) \;=\; 2sinθ\; cosθ, $$ this becomes
$$ =\; \frac{9}{2}θ + \frac{9}{4}(2sinθ\; cosθ) + C $$
Replace the θ with the original value in x,
$$ sin^{-1}(\frac{x}{3}) \;=\; θ $$
$$ sinθ \;=\; \frac{x}{3}, \quad cosθ \;=\; \frac{\sqrt{9 - x^2}}{3} $$
Put these values in the above term to convert it into x form again for the solution,
$$ =\; \frac{9}{2}sin^{-1} (\frac{x}{3}) + \frac{9}{2} . \frac{x}{3} . \frac{\sqrt{9 - x^2}}{3} + C $$
$$ =\; \frac{9}{2}sin^{-1} (\frac{x}{3}) + \frac{x \sqrt{9 - x^2}}{2} + C $$
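As a quick sanity check of the result (illustrative Python, not part of the calculator), the antiderivative found above can be compared with a numerical value of the definite integral over, say, [0, 2]:

```python
import math

# f is the integrand and F the antiderivative obtained above by substitution.
def f(x):
    return math.sqrt(9 - x * x)

def F(x):
    return 4.5 * math.asin(x / 3) + x * math.sqrt(9 - x * x) / 2

# Composite Simpson's rule approximation of the integral over [0, 2].
a, b, n = 0.0, 2.0, 1000
h = (b - a) / n
s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
numeric = s * h / 3

print(abs(numeric - (F(b) - F(a))))  # tiny, so the antiderivative checks out
```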
How to Use Trig Sub Integral Calculator?
Trigonometric substitution calculator has a simple design that makes it easy for the evaluation of complex integral problems. Follow our guidelines which are given as:
• Enter the expression of the complex integral function in the given input field.
• Enter the value of the upper and lower limit in the input field (if your function is a definite integral function).
• Recheck the given complex integral expression before clicking the calculate button to start the evaluation process in the trig sub calculator.
• Click the “Calculate” button to get the result of your given complex integral problem.
• If you are trying our trig substitution calculator for the first time then you can use the load example to learn more about this concept.
• Click on the “Recalculate” button to get a new page for finding more example solutions to complex integral problems.
Result from Integration by Trigonometric Substitution Calculator:
Trig sub integral calculator gives you the solution of a given complex integral function when you add the input into it. It includes:
When you click on the result option, the trig sub calculator gives you the solution of the given integral function.
When you click on the steps option, it provides a solution in which all the calculations of the trigonometric substitution process are shown.
Advantages of Trig Substitution Calculator With Steps:
Trig sub integral calculator provides you with tons of advantages that help you to calculate complex integral problems and give you solutions without any trouble. These advantages are:
• It is a free-of-cost tool so you can use it for free to find complex integral problem solutions without spending.
• Trigonometric substitution calculator with steps is a versatile tool that can handle various types of integral problems containing a square-root expression.
• It gives you conceptual clarity about the integration process whenever you use it to solve more examples.
• Trig sub calculator saves the time that you consume on the calculation of complex integral problems manually.
• It is a reliable tool that provides you with accurate solutions whenever you use it to calculate complex integral problems without any man-made mistakes in calculation.
• Trig substitution calculator allows you to use it multiple times to evaluate complex integral problems.
Zoom ID: 869 4632 6610 (ibsdimag)
I will present the short proof from that for every digraph F and every assignment of pairs of integers $(r_e,q_e)_{e\in A(F)}$ to its arcs, there exists an integer $N$ such that every digraph D with
dichromatic number at least $N$ contains a subdivision of $F$ in which $e$ is subdivided into a directed path of …
Current Majors
Preparing for a PhD in Economics
The minimum requirements of the Economics undergraduate major are not designed to be training for doctoral economics programs. Students who plan to continue their education should take more
quantitative courses than the minimum required for the major. Preparation should start early in your undergraduate education.
Students who plan on going on to graduate school should participate in research as an undergraduate, and plan on writing an honors thesis during their senior year.
See an advisor early for assistance in planning your undergraduate program if you hope to go to graduate or professional school. You should also check the L&S website http://stepbystep.berkeley.edu/
and http://ls-yourway.berkeley.edu/graduate for helpful information on preparation for grad school.
Course recommendations
• Math 1A-1B
• Math 53 and Math 54 (multivariable calculus and linear algebra)
• Economics 101A-B, the quantitative theory sequence
• Economics 141, the more quantitative econometrics course
• Additional math and statistics courses (linear algebra, real analysis, probability, etc.)
• Additional economics courses that emphasize theory and quantitative methods, such as Economics 103, 104, and 142.
Upper-division math and statistics courses
for those who are adequately prepared (in order of importance)
• Math 110, Linear Algebra
• Math 104, Introduction to Analysis
• Stat 134, Concepts of Probability
• Stat 150, Stochastic Processes
• Math 105, Second Course of Analysis
• Math 170, Mathematical Methods of Optimization
• Stat 102/Stat 135, Linear modeling Theory and Applications
• Stat 151A, Statistical Inference
• Math 185, Introduction to Complex Analysis
Graduate math and statistics courses
for those who are adequately prepared (in order of importance)
• Math 202A/202B, Introduction to Topology
• Stat 200A/200B, Introduction to Probability and Statistics at an Advanced Level; graduate version of the 101/102 sequence, not much more difficult, but harder than 134/135
• Stat 205A/205B,Probability Theory; graduate probability, much higher level than 200A/200B
Please note: This is just a recommendation; not all courses are required. Admissions requirements vary by university and by program. Students interested in pursuing graduate school should begin
gathering information from prospective programs as early as possible.
Ryder, P (2009)
Multiple origins of the Newcomb-Benford law: rational numbers, exponential growth and random fragmentation
Staats- und Universitätsbibliothek Bremen, Germany.
ISSN/ISBN: Not available at this time. DOI: Not available at this time.
Abstract: The Newcomb-Benford law states that, in data drawn randomly from many different sources, the probability that the first significant digit is n is given by log(1 + 1/n). In a previous paper
[1], it was shown that there are at least two basic mechanisms for this phenomenon, depending on the origin of the data. In the case of physical quantities measured with arbitrarily defined units, it
was shown to be a consequence of the properties of the rational numbers, whereas for data sets consisting of natural numbers, such as population data, it follows from the assumption of exponential
growth. It was also shown that, contrary to what has been maintained in the literature, the requirement of scale invariance alone is not sufficient to account for the law. The present paper expands
on [1], and it is shown that the finite set of rational numbers to which all measurements belong automatically satisfies the requirement of scale invariance. Further, a third mechanism, termed
“random fragmentation”, is proposed for natural number data which are not subject to exponential growth. In this case, however, the Newcomb-Benford is only approximately reproduced, and only under a
certain range of initial conditions.
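The exponential-growth mechanism can be illustrated with a quick experiment (my own illustration, not from the paper): the leading digits of the powers of 2 approach the Newcomb-Benford frequencies log(1 + 1/n):

```python
import math

# Leading-digit counts for the first 2000 powers of 2 (exponential growth),
# compared with the Newcomb-Benford prediction log10(1 + 1/d).
counts = {d: 0 for d in range(1, 10)}
x = 1
for _ in range(2000):
    x *= 2
    counts[int(str(x)[0])] += 1

for d in range(1, 10):
    observed = counts[d] / 2000
    predicted = math.log10(1 + 1 / d)
    print(d, round(observed, 3), round(predicted, 3))
```

Digit 1 appears about 30.1% of the time, digit 9 about 4.6%, matching the law closely.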
@misc{ryder2009multiple, title={Multiple origins of the Newcomb-Benford law: rational numbers, exponential growth and random fragmentation}, author={Ryder, Peter}, year={2009}, publisher={Staats-und
Universit{\"a}tsbibliothek Bremen, Germany}, url={http://nbn-resolving.de/urn:nbn:de:gbv:46-ep000106193 }, }
Reference Type: E-Print
Subject Area(s): General Interest
Evaluating infinite integrals involving Bessel functions of arbitrary order
The evaluation of integrals of the form I_n = ∫_0^∞ f(x) J_n(x) dx is considered. In the past, the method of dividing an oscillatory integral at its zeros, forming a sequence of partial sums, and using extrapolation to accelerate convergence has been found to be the most efficient technique available where the oscillation is due to a trigonometric function or a Bessel function of order n = 0, 1. Here, we compare various extrapolation techniques as well as choices of endpoints in dividing the integral, and establish the most efficient method for evaluating infinite integrals involving Bessel functions of any order n, not just zero or one. We also outline a simple but very effective technique for calculating Bessel function zeros.
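The split-at-zeros-and-extrapolate idea can be sketched on the simple case ∫_0^∞ J_0(x) dx = 1 (an independent illustration, not the paper's code; the function names, the use of Aitken Δ² rather than the ε-algorithm or mW transform, and the tolerances are my own choices):

```python
import math

def j0(x, m=1000):
    # J0 via the integral representation J0(x) = (1/pi) * int_0^pi cos(x sin t) dt,
    # evaluated with composite Simpson's rule (m panels, m even).
    h = math.pi / m
    s = math.cos(0.0) + math.cos(x * math.sin(math.pi))
    for i in range(1, m):
        s += (4 if i % 2 else 2) * math.cos(x * math.sin(i * h))
    return s * h / (3 * math.pi)

def simpson(f, a, b, m=100):
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, m))
    return s * h / 3

# Split [0, oo) at approximate zeros of J0, j_{0,k} ~ (k - 1/4) * pi
# (the splitting points need not be exact zeros), and form partial sums.
points = [0.0] + [(k - 0.25) * math.pi for k in range(1, 21)]
terms = [simpson(j0, points[i], points[i + 1]) for i in range(len(points) - 1)]
partials = [sum(terms[: i + 1]) for i in range(len(terms))]

def aitken(s):
    # One Aitken delta-squared sweep: accelerates the oscillating partial sums.
    return [s[i] - (s[i + 1] - s[i]) ** 2 / (s[i + 2] - 2 * s[i + 1] + s[i])
            for i in range(len(s) - 2)]

acc = partials
for _ in range(3):
    acc = aitken(acc)
print(acc[-1])  # close to 1.0, the exact value of the integral
```

The raw partial sums oscillate around the true value and converge slowly; a few extrapolation sweeps recover it to several digits.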
All Science Journal Classification (ASJC) codes
• Computational Mathematics
• Applied Mathematics
• Bessel functions
• Bessel zeros
• Infinite integration
• Quadrature
• mW transform
• ε-algorithm
Bayesian update with continuous prior and likelihood
One use case that may be of particular interest is updating a prior on a parameter B based on b, an a statistical estimate of B (for example from a study you conducted or are reading about).
• If b is a mean or a difference in means (such as a treatment effect), the likelihood distribution will be a normal distribution centered around b with a standard deviation equal to the standard
error of b. The log-normal distribution may be a good choice of prior for positive quantities.
Quick link: Update from statistical estimate of a mean or treatment effect
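For the mean/treatment-effect case above, the normal-prior, normal-likelihood update has a simple closed form; a minimal sketch (the function name is illustrative):

```python
# Conjugate normal-normal update: prior N(mu0, sd0^2) on the true effect B,
# likelihood from an estimate b with standard error se (i.e. b ~ N(B, se^2)).
def normal_update(mu0, sd0, b, se):
    w0 = 1 / sd0 ** 2          # prior precision
    w1 = 1 / se ** 2           # data precision
    post_var = 1 / (w0 + w1)
    post_mean = post_var * (w0 * mu0 + w1 * b)
    return post_mean, post_var ** 0.5

mean, sd = normal_update(mu0=0.0, sd0=1.0, b=0.5, se=0.5)
print(round(mean, 3), round(sd, 3))  # 0.4 0.447
```

With a standard-normal prior and an estimate b = 0.5 with standard error 0.5, the posterior mean 0.4 is the precision-weighted average of the prior mean and the estimate.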
This tool may be helpful for converting between 95% confidence intervals, standard errors, and p-values.
Partitioned Complexity
Roger Sessions has some good material I wanted to take a moment to highlight. This is part of his work around a new Enterprise Architecture framework, Simple Iterative Partitions (SIP).
Correlating Complexity to State
First, let’s look at how complexity grows based on the number of significant states in a given program. I’m not a math geek, so we are going to go with an oversimplified example. If you are a math geek, feel free to drop a note in the comments.
In his example, we first look at a coin-tossing application. In short, the requirements state that the program should detect a coin-toss and inform the user if the coin landed heads or tails up.
The user will be dropping their penny into a specially designed sensor which our program will read data from.
Here’s our pseudo code:
PennyState state = sensor.GetState();
if (state == PennyState.HeadsUp)
MessageBox.Show("Heads Up");
else if (state == PennyState.TailsUp)
MessageBox.Show("Tails Up");
Now that’s a pretty trivial program, but if we look closely, we have two states the program can go through, heads or tails up. In order to test our application, we would need to drop the penny in
the sensor in both positions and check the output.
Let’s increase the scope of our trivial application to include that of checking a dime at the same time. Now the user will drop a dime in the dime reader and a penny in the penny reader. Our
application will tell the user the result.
CoinState pennyState = pennySensor.GetState();
CoinState dimeState = dimeSensor.GetState();
if (pennyState == CoinState.HeadsUp && dimeState == CoinState.HeadsUp)
MessageBox.Show("both coins are heads up");
else if (pennyState == CoinState.TailsUp && dimeState == CoinState.HeadsUp)
MessageBox.Show("the penny is tails up and the dime is heads up");
else if (pennyState == CoinState.HeadsUp && dimeState == CoinState.TailsUp)
MessageBox.Show("the penny is heads up and the dime is tails up");
else if (pennyState == CoinState.TailsUp && dimeState == CoinState.TailsUp)
MessageBox.Show("both coins are tails up");
You can see now that we’ve increased our number of basic states. We went from 2 states to 4 by adding the second variable (the dime’s state).
This isn’t really earth shattering, but we can begin to see the correlation between the number of states in the program and the amount of complexity. In fact, you can generally calculate the number
of states using basic math.
Where x is the number of variables, s is the number of states, and t is the number of total states of the program:
t = s^x
As you can see, we are dealing with an exponential problem.
Here are the numbers when dealing with an application made of variables that can each have 6 states: with just 12 such variables, t = 6^12 ≈ 2.2 billion possible states.
That's quite a lot of states for a program with only 12 variables.
Managing State and Complexity Through Partitioning
Looking at the chart above, we can generally know that any non-trivial application will have a huge number of possible states. What can we do about it?
Well, the trick is that while a single program with 4 variables of 6 states will have a total number of just over a thousand states (1296), it doesn’t have to be that way. Here is where partitioning
comes to our rescue. In fact, if we only create one partition and split the program into two programs, we significantly reduce the number of possible states:
Single program of 4 variables: 1296 states
Single program of 2 variables: 36 states
Two programs of 2 variables each: 72 states total
The reduction is pretty staggering. We went from 1296 possible states down to only 72! That’s a reduction of over 94%!
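The calculation above is easy to reproduce (a quick sketch; the function name is mine):

```python
def total_states(num_vars, states_per_var):
    # t = s^x: states multiply across independent variables
    return states_per_var ** num_vars

monolith = total_states(4, 6)         # one program with 4 six-state variables
partitioned = 2 * total_states(2, 6)  # two programs of 2 variables each
print(monolith, partitioned)          # 1296 72
print(f"reduction: {1 - partitioned / monolith:.1%}")  # reduction: 94.4%
```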
Now if the correlation between the number of states and the amount of complexity holds true, you can just imagine the resulting change to the application.
So what can we take away from this, without trying to calculate the number of states in our applications?
Here’s the generalized principle:
Partitioning is a technique for significantly reducing complexity.
And I’ll create a corollary:
Partitioning an application into Modules is a technique for significantly reducing its complexity.
Now, we know that not all means of partitioning are equal, but I’ll leave that for another post.
If you want a bit more on SIP or the math behind partitioning (as described by Roger Sessions), I would encourage you to take a look at his website.
Education and career
With Iain Moffat, Ellis-Monaghan is the author of the book Graphs on Surfaces (New York: Springer, 2013, ISBN 978-1-4614-6970-4).[5]
From 2010-2020, she served as Editor-in-Chief of PRIMUS, a journal on the teaching of undergraduate mathematics.^[6]
Particle model of matter - Paper 1 Flashcards
To revise particle model of matter. (paper 1)
State the equation with all units for density
Density = mass / volume
Density - kg/m^3
mass - kg
Volume - m^3
How can the density of a regular shaped object be found?
Find mass using an electronic balance.
Find volume using a ruler to measure the length of each side
Then the equation volume = length × width × height
Use the density equation Density = mass / volume
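A quick worked example of the density equation (the numbers are illustrative; 2700 kg/m³ happens to be roughly the density of aluminium):

```python
def density(mass_kg, volume_m3):
    """Density = mass / volume, giving kg/m^3."""
    return mass_kg / volume_m3

# A regular block measuring 0.1 m x 0.05 m x 0.04 m with mass 0.54 kg:
volume = 0.1 * 0.05 * 0.04        # volume = l x w x h, in m^3
print(density(0.54, volume))      # ~2700 kg/m^3
```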
How can the density of a irregular shaped object be found?
Find mass using an electronic balance.
Find volume using a eureka can
Use the density equation Density = mass / volume
How can a eureka can be used to find the volume of an irregular object?
Fill the eureka can with water
Place the object in the water.
The water displaced will have the same volume as the object.
Collect the displaced water and measure its volume with a measuring cylinder.
How can the density of a liquid be found?
Find the volume using a measuring cylinder
Place an empty measuring cylinder on an electronic balance and record the mass.
Add the liquid and record the new mass.
calculate the mass of the liquid (new mass - old mass)
Use the density equation Density = mass / volume
Describe the state changes in melting
Solid to liquid
Describe the state changes in freezing
Liquid to solid
Describe the state changes in evaporating
Liquid to gas
Describe the state changes in condensing
Gas to liquid
Describe the state changes in sublimation
Solid to gas (missing out the liquid stage)
What is internal energy?
Energy is stored inside a system by the particles
State the equation for internal energy
Internal energy = Kinetic energy + potential energy
Why does heating always increase internal energy?
Heating will either cause the substance to heat up and its kinetic energy to increase or the substance will change state and the internal energy will increase
If a substance is heating up what factors affect the increase in temperature?
1. The mass of the substance
2. The material ( the specific heat capacity)
3. The rate at which energy is supplied
Define specific heat capacity
The specific heat capacity of a substance is the amount of energy required to raise the temperature of one kilogram of the substance by one degree Celsius.
[QSMS Seminar 2021.10.12, 10.14] Vertex algebras and chiral homology I and II
Date : Oct. 12 (Tue) 2021 (AM) 9:30 ~ 11:00 in Seoul Time
Oct. 14 (Thu) 2021 (AM) 9:30 ~ 11:00 in Seoul Time
Speaker : Jethro van Ekeren (Universidade Federal Fluminense, Brazil)
Place : Zoom (ID: 642 675 5874 no password, Login required)
Title & Abstract
Lecture 1
Title: Vertex algebras and chiral homology I
Abstract: These lectures aim to serve as an introduction to the theory of chiral algebras, suitable for those more familiar with vertex algebras. In the first lecture I will describe the construction
of vector bundles with connection (D-modules) over algebraic curves, starting from the input of a conformal vertex algebra, arriving eventually at the notion of a chiral algebra. Formally chiral
algebras are defined as Lie algebras within certain categories of D-modules, but all necessary theory of D-modules will be introduced in the talk.
Lecture 2
Title: Vertex algebras and chiral homology II
Abstract: In the second lecture I will explain the notion of conformal blocks and its generalisation chiral homology. I will focus especially on the case of algebraic curves of genus 1 (i.e., tori)
because (1) this case is geometrically interesting, (2) it is important for questions in the representation theory of vertex algebras, and (3) because it has some features distinguishing it from the
general case which allow it to be investigated from a very concrete vertex algebraic point of view. | {"url":"https://qsms.math.snu.ac.kr/index.php?mid=board_sjXR83&order_type=desc&listStyle=viewer&document_srl=1783&page=3","timestamp":"2024-11-11T16:32:20Z","content_type":"text/html","content_length":"23045","record_id":"<urn:uuid:1348d568-09fa-43f4-a78e-a1f7168ca0a2>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00168.warc.gz"} |
Exploring The Concept Of The Smallest Natural Number In Coding - Code With C
Exploring the Concept of the Smallest Natural Number in Coding 🧮
Importance of Understanding the Smallest Natural Number
Have you ever pondered over the significance of the smallest natural number in the vast world of coding? 🤔 Let’s delve into this intriguing concept and unravel its mysteries! Starting with the
basics, what exactly is the smallest natural number, and why is it crucial in the realm of programming?
Definition of the Smallest Natural Number
The smallest natural number is a fundamental concept in mathematics, referring to the integer 1. Simple, right? But there’s more to it when it comes to coding and algorithms! 🤓
Significance in Coding Algorithms
In coding, the smallest natural number plays a vital role in various algorithms, influencing decisions and optimizations. Understanding how to identify the smallest natural number efficiently can
significantly impact the performance of algorithms and the overall logic of a program. Let’s explore how coders tackle the challenge of finding this tiny but mighty number! 💻
Common Methods to Find the Smallest Natural Number
When it comes to pinpointing the smallest natural number in a given dataset, programmers have a few tricks up their sleeves. Let’s uncover some common approaches used in the coding world!
Linear Search Approach
One traditional method to find the smallest natural number is the linear search approach. This technique involves traversing through the dataset, comparing each element with the current minimum
value, and updating it if a smaller number is found. While simple, this method can be effective for small datasets. 🕵️♂️
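A minimal sketch of the linear search approach (illustrative Python, not from the article):

```python
def smallest(numbers):
    """Return the smallest value by scanning once and tracking the minimum."""
    if not numbers:
        raise ValueError("empty sequence has no smallest element")
    current_min = numbers[0]
    for n in numbers[1:]:
        if n < current_min:  # found a smaller value, so update the minimum
            current_min = n
    return current_min

print(smallest([7, 3, 9, 1, 4]))  # 1
```

One pass, one comparison per element: O(n) time and O(1) extra space.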
Using Built-in Functions in Programming Languages
Many programming languages provide built-in functions or methods to identify the smallest natural number in an array or list efficiently. These functions leverage optimized algorithms under the hood,
making the task a breeze for developers. Why reinvent the wheel when you can use these handy built-in tools? 🧰
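In Python, for instance, the built-in min function does the same job in a single call:

```python
nums = [7, 3, 9, 1, 4]
print(min(nums))  # 1
```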
Challenges in Identifying the Smallest Natural Number
As with any coding task, finding the smallest natural number comes with its set of challenges. Let’s explore the hurdles programmers may face when dealing with this seemingly simple yet critical
Handling Large Datasets
When working with vast amounts of data, the traditional methods of finding the smallest natural number may prove to be inefficient. Processing extensive datasets requires optimized algorithms and
techniques to ensure quick and accurate results. The struggle is real when it comes to efficiency! ⏳
Efficiency and Performance Concerns
Efficiency is key in the world of coding. While finding the smallest natural number may seem straightforward, optimizing the process for better performance can be a daunting task. Balancing speed and
accuracy is a constant challenge for developers striving to write efficient code. How can we level up our coding game in the face of these concerns? 🚀
Applications of the Smallest Natural Number Concept
Now that we’ve grasped the essence of the smallest natural number in coding, let’s explore where this concept shines brightest in real-world applications!
Sorting Algorithms
Sorting algorithms heavily rely on the notion of ordering elements, often involving comparisons to determine the smallest value. The smallest natural number concept plays a pivotal role in sorting
routines, influencing the final arrangement of elements in ascending order. It’s fascinating how such a basic concept forms the backbone of complex sorting algorithms! 🧩
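As an illustration, selection sort is built directly on repeatedly finding the smallest remaining element; a minimal Python sketch (the names are illustrative):

```python
def selection_sort(items):
    """Sort ascending by repeatedly selecting the smallest remaining element."""
    result = list(items)
    for i in range(len(result)):
        # index of the smallest element in the unsorted tail result[i:]
        smallest = min(range(i, len(result)), key=result.__getitem__)
        result[i], result[smallest] = result[smallest], result[i]
    return result

print(selection_sort([29, 10, 14, 37, 13]))  # [10, 13, 14, 29, 37]
```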
Data Validation and Error Handling
In various scenarios, validating input data and handling errors require identifying the smallest natural number. Whether it’s ensuring data integrity or managing exceptions, understanding the
smallest natural number’s role can streamline these processes and enhance the reliability of software applications. The impact goes beyond mere numbers! 🛡️
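A small sketch of such a validation step in Python, assuming (as this article does) that the natural numbers start at 1; the function name is illustrative:

```python
def require_natural(value):
    """Reject anything that is not an integer >= 1 (naturals start at 1 here)."""
    # bool is a subclass of int in Python, so exclude it explicitly
    if not isinstance(value, int) or isinstance(value, bool) or value < 1:
        raise ValueError(f"expected a natural number, got {value!r}")
    return value

print(require_natural(7))   # 7
# require_natural(0)        # would raise ValueError
```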
Enhancing Efficiency in Finding the Smallest Natural Number
As coding aficionados, we’re always on the lookout for ways to optimize our algorithms and boost efficiency. How can we fine-tune our strategies to find the smallest natural number with precision and
speed? Let’s explore some techniques to elevate our coding game!
Optimal Algorithm Designs
Crafting optimal algorithms tailored to the task of finding the smallest natural number is crucial for efficient processing. By analyzing the problem requirements and leveraging appropriate data
structures and techniques, developers can design algorithms that outperform traditional methods. It’s all about strategic thinking and innovation! 🔍
Utilizing Data Structures for Optimization
Data structures play a pivotal role in enhancing algorithm efficiency. Leveraging structures like heaps, trees, or hash maps can streamline the process of finding the smallest natural number in
complex datasets. By harnessing the power of data structures, programmers can unlock new dimensions of optimization and performance. Let’s transform our code with the magic of structured data! 🌟
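For instance, Python's heapq module keeps the smallest element at the root of a min-heap, so it can be read in O(1) and removed in O(log n); a minimal sketch with illustrative data:

```python
import heapq

data = [15, 4, 23, 8, 42]
heapq.heapify(data)          # O(n): rearrange the list into a min-heap in place
print(data[0])               # the smallest element always sits at the root: 4
print(heapq.heappop(data))   # remove and return it in O(log n): 4
print(data[0])               # the next smallest is now at the root: 8
```

This pays off when you need the minimum repeatedly while the dataset keeps changing, rather than a one-off scan.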
Final Thoughts 🌺
Overall, the smallest natural number may seem like a modest integer, but its impact reverberates throughout the coding universe. Understanding how to uncover this tiny treasure efficiently can
elevate our programming skills and optimize the performance of our algorithms. So, the next time you encounter the quest for the smallest natural number in your code, embrace the challenge with zest
and creativity! 🚀
Thank you for embarking on this coding adventure with me! Until next time, happy coding and may the algorithms be ever in your favor! 🤗
Program Code – Exploring the Concept of the Smallest Natural Number in Coding
# Program to find the smallest natural number

# Function to find the smallest natural number
def find_smallest_natural_number():
    # The smallest natural number by definition is 1
    smallest_natural_number = 1
    print(f'The smallest natural number is: {smallest_natural_number}')

# Calling the function
find_smallest_natural_number()
### Code Output:
The smallest natural number is: 1
### Code Explanation:
The challenge here was to explore the concept of the smallest natural number in coding, and to achieve this, we’ve crafted a Python program that straightforwardly addresses the objective.
Step 1: We start off by defining a function named find_smallest_natural_number(). This ensures that our logic is neatly encapsulated within a block of code, which can be easily reused or called whenever needed.
Step 2: Inside the function, we declare a variable named smallest_natural_number and assign it the value of 1. Under the convention this article follows, the smallest natural number is 1, so our program doesn't need to perform any complex computations to find this value; it's a constant.
Step 3: We then use the print function to output the result. The print function here employs a formatted string (f-string) for displaying the message along with the value of the
smallest_natural_number variable. This approach makes the code more readable and allows for dynamic values to be inserted into strings easily.
Step 4: After defining the function, the code calls it to execute. This is where the logic inside the function is put into action, and our program displays the smallest natural number.
This program exemplifies a straightforward implementation to address the concept of the smallest natural number in coding. The architecture of the program is simple and is built upon the foundation
of functions, variable assignments, and the utilization of print statements for outputs. This directly achieves the objective of exploring the concept of the smallest natural number, demonstrating
that sometimes, simplicity is the key to clarity in programming.
Frequently Asked Questions (FAQ) on Exploring the Concept of the Smallest Natural Number in Coding
What is the smallest natural number in coding?
The smallest natural number in coding is typically considered to be 1. By convention, natural numbers start from 1 and count upwards, excluding zero, negative numbers, and fractions.
Can the smallest natural number ever be zero?
No, the concept of natural numbers in mathematics and coding does not include zero as the smallest natural number. Zero is considered a whole number but is not classified as a natural number.
How is the smallest natural number used in coding applications?
In coding, the smallest natural number, which is 1, is often used as a starting point for counting, indexing arrays, and iterations in loops. It serves as a fundamental building block for various
algorithms and mathematical operations.
Why is it important to understand the concept of the smallest natural number in coding?
Understanding the smallest natural number in coding is crucial for writing efficient and bug-free programs. It helps in setting appropriate boundaries, avoiding off-by-one errors, and ensuring the
correctness of mathematical calculations in algorithms.
Are there any programming languages where the smallest natural number is different from 1?
While the majority of programming languages follow the convention of starting natural numbers from 1, there are some languages, like MATLAB, where the indexing starts from 1 instead of 0. It’s
essential to be aware of language-specific quirks regarding the smallest natural number.
How can I handle edge cases related to the smallest natural number in my code?
When dealing with the smallest natural number in coding, it’s essential to consider edge cases where unexpected behaviors might occur, such as division by zero or array indices starting from 0.
Proper validation and boundary checks can help in handling such edge cases effectively.
Any tips for beginners to grasp the concept of the smallest natural number in coding?
For beginners, it’s recommended to practice writing simple programs involving loops, arrays, and mathematical operations that use the smallest natural number. Understanding the basics of counting
from 1 onwards can lay a solid foundation for more complex coding tasks.
Can the smallest natural number vary in different programming paradigms?
The concept of the smallest natural number being 1 is consistent across most programming paradigms, including procedural, object-oriented, and functional programming. It serves as a universal
starting point for numerical operations in various coding styles.
I hope this FAQ has shed some light on the intriguing concept of the smallest natural number in the world of coding! 🌟
What is Standard Form
The standard form is a way of writing down very large or very small numbers in an easy manner. Standard form is frequently used in engineering, medical, and the construction profession.
Scientists use standard form regularly as they have to deal with very large and very small numbers.
What is Standard Form?
Standard form is like scientific notation: a number is represented as a decimal number times a power of 10. The format of a number in standard form is a × 10^n
• where a stands for the base number, and n is the power of 10.
• The base number always has to be equal to or larger than 1 but less than 10.
• The power of 10, n, is what the base number is multiplied by.
Examples for Standard Form
Example No.1:
270 = 2.7 × 10²
Wherein 10² = 10 × 10 which is 100.
So, 2.7 × 100 = 270
Example No.2:
7,800 = 7.8 × 10³
Wherein 10³ = 10 × 10 × 10 which is 1000.
So, 7.8 × 1000 = 7800
You can write small numbers in standard form as well. The rule when writing a number in standard form is that you first write down a number between 1 and 10, then write × 10 raised to the appropriate power.
See the Scientific Notation Calculator to add, subtract, multiply and divide numbers in scientific notation.
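A rough Python sketch of this conversion (the function name is illustrative, and floating-point rounding can make results slightly approximate):

```python
import math

def to_standard_form(x):
    """Rewrite x as a pair (a, n) with 1 <= abs(a) < 10 and x == a * 10**n."""
    if x == 0:
        return (0.0, 0)
    n = math.floor(math.log10(abs(x)))  # exponent: number of places to shift
    return (x / 10 ** n, n)

print(to_standard_form(270))    # (2.7, 2)
print(to_standard_form(7800))   # (7.8, 3)
print(to_standard_form(0.045))  # approximately (4.5, -2)
```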
Frequently Asked Questions – FAQ
What is the standard form in math?
In mathematics, the standard form is defined as the conventional representation or notation of a particular element; what it looks like depends on whether that element is a number, an equation, or a line. Explanation: The standard form of a straight line is Ax + By = C, whereas the standard form of a quadratic equation is ax² + bx + c = 0.
How do you write in standard form?
The standard form of an equation is written as Ax + By = C, wherein A, B, and C are integers.
This form of the equation is particularly useful for determining both “x” and “y” intercepts.
What are the rules of standard form?
Standard Form Rules for Linear Equations as below:
• It must have the form Ax + By = C.
• A, B, and C must be integers.
• Integer “A” cannot be negative.
• A, B, and C should have no common factors other than 1.
The task of a GAN is to generate features $X$ from some noise $\xi$ and class labels $Y$,
$$\xi, Y \to X.$$
Many different GANs have been proposed. The vanilla GAN has a simple structure with a single discriminator and a single generator, and uses the minmax game setup. However, training a GAN with the minmax game is not stable. The Wasserstein GAN was proposed to solve this stability problem during training^1. More advanced GANs like BiGAN and ALI have more complex structures.
Vanilla GAN
Minmax Game
Suppose we have two players $G$ and $D$, and a utility $v(D, G)$. In the minmax game, the player $G$ tries to minimize the utility while the player $D$ tries to maximize it against that worst-case choice of $G$, i.e., we solve

$$\min_G \max_D v(D, G).$$
The loss for the vanilla GAN is the minmax loss
$$ \min_G \max_D \, \mathbb E_{x\sim P_{data}} \left[ \ln D(x) \right] + \mathbb E_{z\sim p_z} \left[ \ln ( 1- D(G(z)) ) \right]. $$
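A minimal numpy sketch of estimating this value function by Monte-Carlo, using fixed toy stand-ins for $D$ and $G$ (the sigmoid discriminator, linear generator, and sample distributions are illustrative assumptions, not trained networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: D maps samples to probabilities in (0, 1), G maps noise to samples.
def D(x):
    return 1.0 / (1.0 + np.exp(-x))   # a fixed sigmoid "discriminator"

def G(z):
    return 0.5 * z                    # a fixed linear "generator"

x_real = rng.normal(loc=2.0, size=1000)  # samples standing in for p_data
z = rng.normal(size=1000)                # noise samples from p_z

# Monte-Carlo estimate of v(D, G) = E[ln D(x)] + E[ln(1 - D(G(z)))]
v = np.mean(np.log(D(x_real))) + np.mean(np.log(1.0 - D(G(z))))
print(v)  # a finite negative number; D would push it up, G would push it down
```

In an actual GAN, $D$ and $G$ are neural networks updated alternately with gradients of this estimate.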
Illustration of GAN
BiGAN uses one generator, one encoder and one discriminator^2.
Illustration of BiGAN
Planted: by L Ma;
L Ma (2021). 'GAN', Datumorphism, 08 April. Available at: https://datumorphism.leima.is/wiki/machine-learning/adversarial-models/gan/.
BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Sabre//Sabre VObject 4.5.5//EN CALSCALE:GREGORIAN X-WR-CALNAME:Analysis BEGIN:VTIMEZONE TZID:Europe/Zurich X-LIC-LOCATION:Europe/Zurich TZURL:http://tzurl.org/
zoneinfo/Europe/Zurich BEGIN:DAYLIGHT TZOFFSETFROM:+0100 TZOFFSETTO:+0200 TZNAME:CEST DTSTART:19810329T020000 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU END:DAYLIGHT BEGIN:STANDARD TZOFFSETFROM:+0200
TZOFFSETTO:+0100 TZNAME:CET DTSTART:19961027T030000 RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU END:STANDARD END:VTIMEZONE BEGIN:VEVENT UID:news1714@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20241009T142342 DTSTART;TZID=Europe/Zurich:20241016T141500 SUMMARY:Seminar Analysis and Mathematical Physics: Umberto Pappalettera (Un iversität Bielefeld) DESCRIPTION:In this talk I will
present a new “anomalous regularisation ” result for solutions of the stochastic transport equation \\partial_t \\rho + \\circ \\partial_t W \\cdot \\nabla \\rho = 0\, where W is a Gauss ian\,
homogeneous\, isotropic noise with \\alpha-H\\”older space regular ity and compressibility ratio \\wp < \\frac{d}{4\\alpha^2}. The proof is o btained by studying the local behaviour around the origin
of solutions to a degenerate parabolic PDE in non-divergence form\, which is of independen t interest. Based on joint work with Theodore Drivas and Lucio Galeati. X-ALT-DESC:
In this talk I will present a new “anomalous regularisation ” result for solutions of the stochastic transport equation \\partial_t \\rho + \\circ \\partial_t W \\cdot \\nabla \\rho = 0\, where W is
a Gauss ian\, homogeneous\, isotropic noise with \\alpha-H\\”older space regular ity and compressibility ratio \\wp <\; \\frac{d}{4\\alpha^2}. The proof is obtained by studying the local behaviour
around the origin of solutions to a degenerate parabolic PDE in non-divergence form\, which is of indepe ndent interest. Based on joint work with Theodore Drivas and Lucio Galeati .
DTEND;TZID=Europe/Zurich:20241016T160000 END:VEVENT BEGIN:VEVENT UID:news1696@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20240708T112209 DTSTART;TZID=Europe/Zurich:20240712T151500 SUMMARY:An afternoon
of analysis talks: Helena Nussenzveig Lopes (Universid ade Federal do Rio de Janeiro) DESCRIPTION:We say inviscid dissipation occurs when a vanishing viscosity l imit does not satisfy energy balance.
A closely related phenomenon is anom alous dissipation\, where\, in the limit of vanishing viscosity\, the tota l dissipation does not vanish. In this talk we will discuss recent results on avoiding
these phenomena in 2D incompressible flows\, with and without forcing. X-ALT-DESC:
We say inviscid dissipation occurs when a vanishing viscosity limit does not satisfy energy balance. A closely related phenomenon is an omalous dissipation\, where\, in the limit of vanishing
viscosity\, the to tal dissipation does not vanish. In this talk we will discuss recent resul ts on avoiding these phenomena in 2D incompressible flows\, with and witho ut forcing.
DTEND;TZID=Europe/Zurich:20240712T160000 END:VEVENT BEGIN:VEVENT UID:news1695@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20240702T162928 DTSTART;TZID=Europe/Zurich:20240712T141500 SUMMARY:An afternoon
of analysis talks: Alexander Kiselev (Duke University) DESCRIPTION:There exist many regularization mechanisms in nonlinear PDE tha t help make solutions more regular or prevent formation of
singularity: di ffusion\, dispersion\, damping. A relatively less understood regularizatio n mechanism is transport. There is evidence that in the fundamental PDE of fluid mechanics such as Euler or
Navier-Stokes\, transport can play a reg ularizing role. In this talk\, I will discuss another instance where this phenomenon appears: the Patlak-Keler-Segel equation of chemotaxis. Chemota ctic blow
up in the context of the Patlak-Keller-Segel equation is an exte nsively studied phenomenon. In recent years\, it has been shown that the p resence of a given fluid advection can arrest singularity
formation given that the fluid flow possesses mixing or diffusion enhancing properties and its amplitude is sufficiently strong. This talk will focus on the case wh en the fluid advection is active:
the Patlak-Keller-Segel equation coupled with fluid that obeys Darcy's law for incompressible porous media flow vi a gravity. Surprisingly\, in this context\, in contrast with the passive a dvection
\, active fluid is capable of suppressing chemotactic blow up at a rbitrary small coupling strength: namely\, the system always has globally regular solutions. The talk is based on work joint with
Zhongtian Hu and Y ao Yao. X-ALT-DESC:
There exist many regularization mechanisms in nonlinear PDE t hat help make solutions more regular or prevent formation of singularity: diffusion\, dispersion\, damping. A relatively less understood
regularizat ion mechanism is transport. There is evidence that in the fundamental PDE of fluid mechanics such as Euler or Navier-Stokes\, transport can play a r egularizing role. In this talk\, I
will discuss another instance where thi s phenomenon appears: the Patlak-Keler-Segel equation of chemotaxis. Chemo tactic blow up in the context of the Patlak-Keller-Segel equation is an ex tensively
studied phenomenon. In recent years\, it has been shown that the presence of a given fluid advection can arrest singularity formation give n that the fluid flow possesses mixing or diffusion
enhancing properties a nd its amplitude is sufficiently strong. This talk will focus on the case when the fluid advection is active: the Patlak-Keller-Segel equation coupl ed with fluid that obeys
Darcy's law for incompressible porous media flow via gravity. Surprisingly\, in this context\, in contrast with the passive advection\, active fluid is capable of suppressing chemotactic blow up at
arbitrary small coupling strength: namely\, the system always has globall y regular solutions. The talk is based on work joint with Zhongtian Hu and Yao Yao.
DTEND;TZID=Europe/Zurich:20240712T150000 END:VEVENT BEGIN:VEVENT UID:news1662@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20240502T182134 DTSTART;TZID=Europe/Zurich:20240522T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Christof Sparber (Univer sity of Illinois at Chicago) DESCRIPTION:We consider Schrödinger equations with competing nonlinearitie s in spatial dimensions up to three
\, for which global existence holds (i. e. no finite-time blow-up). A typical example is the case of the (focusing -defocusing) cubic-quintic NLS. We recall the notions of energy minimizin g versus
action minimizing ground states and show that\, in general\, the two must be considered as nonequivalent. The question of long-time behavio r of solutions\, in particular the problem of ground-state
(in-)stability will be discussed using analytical results and numerical simulations. X-ALT-DESC:
We consider Schrödinger equations with competing nonlinearit ies in spatial dimensions up to three\, for which global existence holds ( i.e. no finite-time blow-up). A typical example is the case of
the (focusi ng-defocusing) cubic-quintic NLS. \;We recall the notions of energy mi nimizing versus action minimizing ground states and show that\, in general \, the two must be considered as
nonequivalent. The question of long-time behavior of solutions\, in particular the problem of ground-state (in-)sta bility will be discussed using analytical results and numerical simulation s.
DTEND;TZID=Europe/Zurich:20240522T151500 END:VEVENT BEGIN:VEVENT UID:news1688@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20240508T175114 DTSTART;TZID=Europe/Zurich:20240515T151500 SUMMARY:Seminar
Analysis and Mathematical Physics: Min Jun Jo (Duke Univers ity) DESCRIPTION:We prove the instantaneous cusp formation from a single corner of the vortex patch solutions. This positively settles the
conjecture give n by Cohen-Danchin in Multiscale approximation of vortex patches\, SIAM J . Appl. Math. 60 (2000)\, no. 2\, 477–502. X-ALT-DESC:
We prove the instantaneous cusp formation from a single corne r of the vortex patch solutions. This positively settles the conjecture gi ven by Cohen-Danchin in \;Multiscale approximation of
vortex patch es\, SIAM J. Appl. Math. 60 (2000)\, no. 2\, 477–502.
DTEND;TZID=Europe/Zurich:20240515T160000 END:VEVENT BEGIN:VEVENT UID:news1672@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20240424T134606 DTSTART;TZID=Europe/Zurich:20240515T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Riccardo Tione (MPI Leip zig) DESCRIPTION:This talk concerns critical points $u$ of polyconvex energies o f the form $f(X) = g(det(X))$\, where $g$ is (uniformly)
convex. It is not hard to see that\, if $u$ is smooth\, then $\\det(Du)$ is constant. I wil l show that the same result holds for Lipschitz critical points $u$ in the plane. I will also discuss how
to obtain rigidity for approximate solutio ns. This is a joint work with A. Guerra. X-ALT-DESC:
This talk concerns critical points $u$ of polyconvex energies of the form $f(X) = g(det(X))$\, where $g$ is (uniformly) convex. It is n ot hard to see that\, if $u$ is smooth\, then $\\det(Du)$ is
constant. I w ill show that the same result holds for Lipschitz critical points $u$ in t he plane. I will also discuss how to obtain rigidity for approximate solut ions. This is a joint work with A.
DTEND;TZID=Europe/Zurich:20240515T150000 END:VEVENT BEGIN:VEVENT UID:news1689@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20240503T101127 DTSTART;TZID=Europe/Zurich:20240508T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Christoph Kehle (ETH Zü rich) DESCRIPTION:Extremal black holes are special types of black holes which hav e exactly zero temperature. I will present a proof that
extremal black hol es form in finite time in gravitational collapse of charged matter. In par ticular\, this construction provides a definitive disproof of the “third law” of black hole
thermodynamics. I will also present a recent result which shows that extremal black holes arise on the black hole formation th reshold in the moduli space of gravitational collapse. This gives rise
to a new conjectural picture of “extremal critical collapse.” This is joi nt work with Ryan Unger (Princeton). X-ALT-DESC:
Extremal black holes are special types of black holes which h ave exactly zero temperature. I will present a proof that extremal black h oles form in finite time in gravitational collapse of charged
matter. In p articular\, this construction provides a definitive disproof of the “thi rd law” of black hole thermodynamics. I will also present a recent resul t which shows that extremal black holes
arise on the black hole formation threshold in the moduli space of gravitational collapse. This gives rise t o a new conjectural picture of “extremal critical collapse.” This is j oint work with Ryan
Unger (Princeton).
DTEND;TZID=Europe/Zurich:20240508T160000 END:VEVENT BEGIN:VEVENT UID:news1666@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20240422T090328 DTSTART;TZID=Europe/Zurich:20240424T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Louise Gassot (IRMAR\, R ennes) DESCRIPTION:We focus on the Benjamin-Ono equation on the line with a small dispersion parameter. The goal of this talk is to
precisely describe the s olution at all times when the dispersion parameter is small enough. This s olution may exhibit locally rapid oscillations\, which are a manifestation of a dispersive shock.
The description involves the multivalued solution of the underlying Burgers equation\, obtained by using the method of chara cteristics. This work is in collaboration with Elliot Blackstone\, Patrick
Gérard\, and Peter Miller. X-ALT-DESC:
We focus on the Benjamin-Ono equation on the line with a smal l dispersion parameter. The goal of this talk is to precisely describe the solution at all times when the dispersion parameter is small
enough. This solution may exhibit locally rapid oscillations\, which are a manifestati on of a dispersive shock. The description involves the multivalued solutio n of the underlying Burgers equation
\, obtained by using the method of cha racteristics. This work is in collaboration with Elliot Blackstone\, Patri ck Gérard\, and Peter Miller.
DTEND;TZID=Europe/Zurich:20240424T151500 END:VEVENT BEGIN:VEVENT UID:news1638@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20240328T104112 DTSTART;TZID=Europe/Zurich:20240417T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Anuj Kumar (UC Berkeley) DESCRIPTION:We construct nonunique solutions of the transport equation in t he class $L^\\infty$ in time and $L^r$ in space for divergence
free Sobole v vector fields $W^{1\, p}$. We achieve this by introducing two novel idea s: (1) In the construction\, we interweave the scaled copies of the vector field itself. (2) Asynchronous
translation of cubes\, which makes the con struction heterogeneous in space. These new ideas allow us to prove nonuni queness in the range of exponents beyond what is available using the metho d of
convex integration and sharply matchwith the range of uniqueness of s olutions from Bruè\, Colombo\, De Lellis ’21. X-ALT-DESC:
We construct nonunique solutions of the transport equation in the class $L^\\infty$ in time and $L^r$ in space for divergence free Sobo lev vector fields $W^{1\, p}$. We achieve this by introducing
two novel id eas: (1) In the construction\, we interweave the scaled copies of the vect or field itself. (2) Asynchronous translation of cubes\, which makes the c onstruction heterogeneous in space.
These new ideas allow us to prove nonu niqueness in the range of exponents beyond what is available using the met hod of convex integration and sharply matchwith the range of uniqueness of solutions
from Bruè\, Colombo\, De Lellis ’21. \;
DTEND;TZID=Europe/Zurich:20240417T160000 END:VEVENT BEGIN:VEVENT UID:news1619@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20240328T104000 DTSTART;TZID=Europe/Zurich:20240410T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Roman Shvydkoy (Universi ty of Illinois at Chicago) DESCRIPTION:The classical Kolmogorov-41 theory of turbulence is based on a set of pivotal assumptions on
scaling and energy dissipation for solutio ns satisfying incompressible fluid models. In the early 80's experimental evidence emerged that pointed to departure from the K41 predictions\, whic h was
attributed to the phenomenon of statistical intermittency. In this talk we give an overview of the classical results in the subject\, relati onship of intermittency to the problem of global
well-posedness of the 3D Navier-Stokes system\, and discuss a new approach developed jointly with A . Cheskidov on how to measure and study intermittency from a rigorous pers pective. At the center
of our discussion will be a new interpretation of an intermittent signal described by volumetric properties of the filter ed field. It provides\, in particular\, a systematic approach to the Fr
isch-Parisi multifractal formalism\, and recasts intermittency from the po int of view of information theory. X-ALT-DESC:
The classical Kolmogorov-41 theory of turbulence is based on a set of \; pivotal assumptions on scaling and energy dissipation for solutions satisfying incompressible fluid models. In the early
80's experi mental evidence emerged that pointed to departure from the K41 predictions \, which was attributed to the phenomenon of statistical intermittency.&nb sp\; In this talk we give an overview
of the classical results in the subj ect\, relationship of intermittency to the problem of global well-posednes s of the 3D Navier-Stokes system\, and discuss a new approach developed jo intly with
A. Cheskidov on how to measure and study intermittency from a r igorous perspective. \; At the center of our discussion will be a new interpretation of an intermittent signal \;described 
\;by volumetr ic properties of the filtered \;field. \; It provides\, in particu lar\, a systematic approach to the Frisch-Parisi multifractal formalism\, and recasts intermittency from the
point of view of information theory.&nb sp\;
DTEND;TZID=Europe/Zurich:20240410T160000 END:VEVENT BEGIN:VEVENT UID:news1648@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20240323T092733 DTSTART;TZID=Europe/Zurich:20240403T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Vikram Giri (ETH Zürich ) DESCRIPTION:A common issue in convex integration methods (going back to Nas h) is the presence of unwanted high-high frequency
interactions. These iss ues can prevent the methods from producing solutions with the optimum "Ons ager" regularity to a given system of PDEs. We will discuss these issues i n the setting of the 2D
Euler equations and then discuss a linear Newton i teration designed to get rid of these unwanted interactions in this settin g. We will conclude by discussing applications to other PDEs. This is
base d on joint works with Răzvan-Octavian Radu and Mimi Dai. X-ALT-DESC:
A common issue in convex integration methods (going back to N ash) is the presence of unwanted high-high frequency interactions. These i ssues can prevent the methods from producing solutions with
the optimum "O nsager" regularity to a given system of PDEs. We will discuss these issues in the setting of the 2D Euler equations and then discuss a linear Newton iteration designed to get rid of
these unwanted interactions in this sett ing. We will conclude by discussing applications to other PDEs. This is ba sed on joint works with Răzvan-Octavian Radu and Mimi Dai.
DTEND;TZID=Europe/Zurich:20240403T160000 END:VEVENT BEGIN:VEVENT UID:news1645@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20240325T134941 DTSTART;TZID=Europe/Zurich:20240327T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Marcello Porta (SISSA) DESCRIPTION:I will discuss the dynamics of many-body Fermi gases\, in the m ean-field regime. I will consider a class of initial data which
are close enough to quasi-free states\, with a non-zero pairing matrix. Assuming a suitable semiclassical structure for the initial datum\, expected to hold at low enough energy and that we can
establish for translation-invariant states\, I will present a theorem that shows that the many-body evolution of the system can be well approximated by the Hartree-Fock-Bogoliubov equ ation\, a
non-linear effective evolution equation describing the coupled dynamics of the reduced one-particle density matrix and of the pairing ma trix. Joint work with Stefano Marcantoni (Nice) and Julien
Sabin (Rennes). X-ALT-DESC:
I will discuss the dynamics of many-body Fermi gases\, in the mean-field regime. I will consider a class of initial data which are  \;close enough to quasi-free states\, with a non-zero pairing
matrix. Assu ming a suitable semiclassical structure for the initial datum\, \;expe cted to hold at low enough energy and that we can establish for translatio n-invariant states\, I will present
a theorem that shows that \;the ma ny-body evolution of the system can be well approximated by the Hartree-Fo ck-Bogoliubov equation\, a non-linear effective \;evolution equation d escribing
the coupled dynamics of the reduced one-particle density matrix and of \;the pairing matrix. Joint work with Stefano Marcantoni (Nice) and Julien Sabin (Rennes).
DTEND;TZID=Europe/Zurich:20240327T153000 END:VEVENT BEGIN:VEVENT UID:news1641@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20240205T090917 DTSTART;TZID=Europe/Zurich:20240320T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Norbert J. Mauser (WPI and MMM c/o Univ. Wien) DESCRIPTION:The Pauli-Poisswell equation models fast moving charges in semiclassical semi-relativistic quantum dynamics. It is at the center of a hierarchy of models from the Dirac-Maxwell equation to the Euler-Poisson equation that are linked by asymptotic analysis of small parameters such as the Planck constant or the inverse speed of light. We discuss the models and their application in plasma and accelerator physics as well as the many mathematical problems they pose.
DTEND;TZID=Europe/Zurich:20240320T161500 END:VEVENT BEGIN:VEVENT UID:news1639@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20240201T155918 DTSTART;TZID=Europe/Zurich:20240306T141500
SUMMARY:Seminar Analysis and Mathematical Physics: Klaus Widmayer (Universität Zürich) DESCRIPTION:While "Landau damping" is regarded as an important effect in the dynamics of hot\, collisionless plasmas\, its mathematical understanding is still in its infancy. This talk presents a recent nonlinear stability result in this context. Starting with a discussion of stabilizing mechanisms in the linearized Vlasov-Poisson equations near a class of homogeneous equilibria on R^3\, we will see how both oscillatory and damping effects arise\, and sketch how these mechanisms imply a nonlinear stability result in the specific setting of the Poisson equilibrium. This is based on joint work with A. Ionescu\, B. Pausader and X. Wang.
DTEND;TZID=Europe/Zurich:20240306T160000 END:VEVENT BEGIN:VEVENT UID:news1611@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20231107T040532 DTSTART;TZID=Europe/Zurich:20231213T151500 SUMMARY:Seminar
Analysis and Mathematical Physics: Theodore Drivas (Stony Brook University) DESCRIPTION:We will discuss aspects of the global picture of 2D fluids: steady states\, deterioration of regularity for time dependent solutions as well as for the Lagrangian flowmap\, and conjectural pictures about the weak-* attractor and generic behavior by Shnirelman and Sverak. Notice the special time!
DTEND;TZID=Europe/Zurich:20231213T160000 END:VEVENT BEGIN:VEVENT UID:news1588@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20231201T140131 DTSTART;TZID=Europe/Zurich:20231206T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: David Meyer (Universität Münster) DESCRIPTION:A vortex ring is a solution of the axisymmetric Euler equations consisting of some torus of concentrated vorticity. Motivated by the appearance of vortex rings as bubble rings\, we study vortex rings with surface tension at the interface. We show the existence of traveling wave solutions. In particular\, our construction also justifies the existence of so-called hollow vortex rings\, where the vorticity is a measure concentrated on the interface.
DTEND;TZID=Europe/Zurich:20231206T160000 END:VEVENT BEGIN:VEVENT UID:news1580@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20231114T180149 DTSTART;TZID=Europe/Zurich:20231129T134500 SUMMARY:Seminar
Analysis and Mathematical Physics: Sergio Simonella (University of Roma La Sapienza) DESCRIPTION:Boltzmann equation\, hard sphere systems and their small and large deviations DTEND;TZID=Europe/Zurich:20230929T163000 END:VEVENT BEGIN:VEVENT UID:news1614@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20231114T180522 DTSTART;TZID=Europe/Zurich:20231129T134500 SUMMARY:Seminar Analysis and Mathematical Physics: Sergio Simonella (University of Roma La Sapienza) DESCRIPTION:https://nccr-swissmap.ch/news-and-events/news/next-kinetic-theory-seminar-29th-nov-prof-sergio-simonella-university-roma-la-sapienza
DTEND;TZID=Europe/Zurich:20231129T163000 END:VEVENT BEGIN:VEVENT UID:news1577@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20231029T223020 DTSTART;TZID=Europe/Zurich:20231122T140000 SUMMARY:Seminar
Analysis and Mathematical Physics: Jinyeop Lee (LMU Munich) DESCRIPTION:The study of the Schrödinger equation in dimension one with a nonlinear point interaction has been the focus of research over the past few decades. In this seminar\, we talk about a work on deriving this partial differential equation as the effective dynamics of N identical bosons in one dimension. We introduce a tiny impurity located at the origin and consider that the interaction between every pair of bosons is mediated by the impurity through a three-body interaction. Moreover\, by assuming short-range scaling and choosing an initial fully condensed state\, we prove convergence of one-particle density operators in the trace-class topology. This is the first derivation of the so-called nonlinear delta model. This research is a collaborative work with Prof. Riccardo Adami.
DTEND;TZID=Europe/Zurich:20231122T163000 END:VEVENT BEGIN:VEVENT UID:news1589@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20231031T155120 DTSTART;TZID=Europe/Zurich:20231115T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Marc Nualart (Imperial College London) DESCRIPTION:In this talk we consider the long-time behavior of solutions to the two dimensional non-homogeneous Euler equations under the Boussinesq approximation posed on a periodic channel. We prove inviscid damping for the linearized equations around the stably stratified Couette flow using stationary-phase methods of oscillatory integrals. We discuss how these oscillatory integrals arise\, what the main regularity requirements are to carry out the stationary-phase arguments\, and how to achieve such regularities.
DTEND;TZID=Europe/Zurich:20231115T160000 END:VEVENT BEGIN:VEVENT UID:news1602@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20231027T193133 DTSTART;TZID=Europe/Zurich:20231108T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Gabriele Bocchi (Università degli Studi di Roma Tor Vergata) DESCRIPTION:We analyze an optimal transport problem with additional entropic cost evaluated along curves in the Wasserstein space which join two probability measures m_0 and m_1. The effect of the additional entropy functional results in an elliptic regularization for the (so-called) Kantorovich potentials of the dual problem. Assuming the initial and terminal measures to have densities\, we prove that the optimal curve remains positive and locally bounded in time. We focus on the case that the transport problem is set on a compact Riemannian manifold with Ricci curvature bounded below. The approach follows ideas introduced by P.L. Lions in the theory of mean-field games about optimization problems with penalizing congestion terms. Crucial steps of our strategy include displacement convexity properties in the Eulerian approach and the analysis of distributional subsolutions to Hamilton-Jacobi equations. The result provides a smooth approximation of Wasserstein-2 geodesics.
DTEND;TZID=Europe/Zurich:20231108T160000 END:VEVENT BEGIN:VEVENT UID:news1559@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230919T123033 DTSTART;TZID=Europe/Zurich:20230927T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Bian Wu (MPI Leipzig) DESCRIPTION:I will talk about Rayleigh-Taylor instability for two miscible\, incompressible\, inviscid fluids. Scale-invariant estimates for the size of the mixing zone and coarsening of internal structures in the fully nonlinear regime are established. These bounds provide optimal scaling laws and reveal the strong role of dissipation in slowing down mixing. This is joint work with Konstantin Kalinin and Govind Menon.
DTEND;TZID=Europe/Zurich:20230927T160000 END:VEVENT BEGIN:VEVENT UID:news1547@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230904T075228 DTSTART;TZID=Europe/Zurich:20230920T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Chiara Boccato (University of Milano) DESCRIPTION:The interacting Bose gas is a system in quantum statistical mechanics where a collective behavior emerges from the underlying many-body theory\, posing interesting challenges to its rigorous mathematical description. While at temperature close to zero we have precise information on the ground state energy and the low-lying spectrum of excitations (at least in certain scaling limits)\, much less is known close to the critical point. In this talk I will discuss how thermal excitations can be described by Bogoliubov theory\, allowing us to estimate the free energy of the Bose gas in the Gross-Pitaevskii regime. This is joint work with A. Deuchert and D. Stocker.
DTEND;TZID=Europe/Zurich:20230920T153000 END:VEVENT BEGIN:VEVENT UID:news1524@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230525T104748 DTSTART;TZID=Europe/Zurich:20230607T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Luis Martinez Zoroa (ICMAT Madrid) DESCRIPTION:The surface quasi-geostrophic (SQG) equation is an important active scalar model\, both due to its shared properties with 3D Euler and for its applications to modelling certain atmospheric phenomena. It has already been established that instantaneous loss of regularity can occur in the inviscid case in certain Sobolev spaces\, but it is unclear at what point the diffusion prevents this phenomenon from happening. In this talk I will discuss the behaviour when there is some super-critical fractional diffusion.
DTEND;TZID=Europe/Zurich:20230607T160000 END:VEVENT BEGIN:VEVENT UID:news1514@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230516T094922 DTSTART;TZID=Europe/Zurich:20230524T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Thérèse Moerschell (EPFL) DESCRIPTION:The advection-diffusion equation is known to have unique solutions for any vector field that is L^2 in time and in space. But what happens when we have slightly less than square integrability? In this talk we will explore two examples of vector fields in L^p(0\,T\;L^q(\\T^d)) made of shear flows that prove the non-uniqueness of solutions whenever we have p<2 or q<2. We will first show that they give different solutions to the advection equation and then use the Feynman-Kac formula to show that diffusion has little effect if our parameters are well-tuned. This is part of my Master's thesis\, supervised by Massimo Sorella and Maria Colombo.
DTEND;TZID=Europe/Zurich:20230524T160000 END:VEVENT BEGIN:VEVENT UID:news1502@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230404T153043 DTSTART;TZID=Europe/Zurich:20230517T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Matthew Novack (Purdue University) DESCRIPTION:The phenomenon of anomalous dissipation in turbulence predicts the existence of solutions to the incompressible Euler equations that enjoy regularity consistent with Kolmogorov’s 4/5 law and satisfy a local energy inequality. The "strong Onsager conjecture" asserts that such solutions do indeed exist. In this talk\, we will discuss the background and motivation behind the strong Onsager conjecture. In addition\, we outline a construction of solutions with regularity (nearly) consistent with the 4/5 law\, thereby proving the conjecture in the natural L^3 scale of Besov spaces. This is based on joint work with Hyunju Kwon and Vikram Giri.
DTEND;TZID=Europe/Zurich:20230517T160000 END:VEVENT BEGIN:VEVENT UID:news1483@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230418T150401 DTSTART;TZID=Europe/Zurich:20230426T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Ángel Castro (ICMAT\, Madrid) DESCRIPTION:In this talk we will consider the existence of traveling waves arbitrarily close to shear flows for the 2D incompressible Euler equations. In particular we shall present some results concerning the existence of such solutions near the Couette\, Taylor-Couette and the Poiseuille flows. In the first part of the talk we will introduce the problem and review some well known results on this topic. In the second one some of the ideas behind the construction of our traveling waves will be sketched.
DTEND;TZID=Europe/Zurich:20230426T160000 END:VEVENT BEGIN:VEVENT UID:news1475@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230413T104028 DTSTART;TZID=Europe/Zurich:20230419T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Stefano Spirito (Università degli Studi dell'Aquila) DESCRIPTION:I will review the existence and uniqueness theory of a model for viscoelastic materials of Kelvin-Voigt type with large strain. In particular\, I will first review the existence theory in L2\, and then show that propagation of H1-regularity for the deformation gradient of weak solutions also holds in two and three dimensions. Moreover\, in two dimensions it is also possible to prove uniqueness of weak solutions. Additional propagation of higher regularity can be obtained\, leading to global in time existence of smooth solutions. Joint work with K. Koumatos (U. of Sussex)\, C. Lattanzio (UnivAQ) and A. Tzavaras (KAUST).
DTEND;TZID=Europe/Zurich:20230419T160000 END:VEVENT BEGIN:VEVENT UID:news1467@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230125T171400 DTSTART;TZID=Europe/Zurich:20230322T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Mickaël Latocca (Université d'Évry) DESCRIPTION:In an incompressible fluid\, the pressure is governed by the elliptic equation $-\\Delta p = \\div \\div u \\otimes u$ and a Neumann-type boundary condition\, where $u$ stands for the divergence-free velocity vector field. The main goal of this talk is to explain why one expects that $p$ has double Hölder regularity (with respect to that of $u$) and how one can rigorously prove such a fact in a bounded domain. The results presented in this talk were obtained in collaboration with Luigi De Rosa (Basel) and Giorgio Stefani (SISSA).
DTEND;TZID=Europe/Zurich:20230322T160000 END:VEVENT BEGIN:VEVENT UID:news1466@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230305T211240 DTSTART;TZID=Europe/Zurich:20230315T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Federico Cacciafesta (University of Padova) DESCRIPTION:The Dirac equation is one of the fundamental equations in relativistic quantum mechanics\, widely used in a large number of applications from physics to quantum chemistry. The aim of this talk will be to discuss some recent results\, together with a number of open questions\, concerning the dynamics for this model: after briefly reviewing the main properties of the Dirac operator and providing some background and motivations from the theory of linear dispersive PDEs\, we shall focus in particular on the cases of the Dirac-Coulomb equation and of the Dirac equation on non-flat manifolds\, showing how some linear estimates (in particular\, Strichartz estimates) can be obtained by exploiting various properties of the operator.
DTEND;TZID=Europe/Zurich:20230315T161500 END:VEVENT BEGIN:VEVENT UID:news1465@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230119T132638 DTSTART;TZID=Europe/Zurich:20230308T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Maria Ahrend (Uni Basel) DESCRIPTION:Maria Ahrend will defend her PhD thesis on fractional Liouville equations and Calogero-Moser NLS.
DTEND;TZID=Europe/Zurich:20230309T160000 END:VEVENT BEGIN:VEVENT UID:news1476@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230304T163156 DTSTART;TZID=Europe/Zurich:20230307T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Zineb Hassainia (New York University Abu Dhabi) [special time!] DESCRIPTION:In this talk\, I will discuss a recent result concerning the construction of quasi-periodic vortex patch solutions with one hole for the 2D Euler equations. These structures exist close to any annulus provided that its modulus belongs to a Cantor set with almost full Lebesgue measure. The proof is based on a KAM reducibility scheme and a Nash-Moser iterative scheme. This is a joint work with Taoufik Hmidi and Emeric Roulley.
DTEND;TZID=Europe/Zurich:20230307T160000 END:VEVENT BEGIN:VEVENT UID:news1470@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20230131T153947 DTSTART;TZID=Europe/Zurich:20230222T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Peter Pickl (Universität Tübingen) DESCRIPTION:The derivation of the Vlasov equation from Newtonian mechanics is an old problem in mathematical physics. But while the most interesting interactions in nature have singularities\, one typically assumes some Lipschitz condition on the interaction force for its microscopic derivation. Recent developments have given results where the interaction force becomes singular as the particle number N tends to infinity\, usually by mollifying or cutting the singularity with an N-dependent mollifier or cut-off parameter. In the talk I will present the most recent developments and new results on this topic.
DTEND;TZID=Europe/Zurich:20230222T160000 END:VEVENT BEGIN:VEVENT UID:news1437@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20221207T184842 DTSTART;TZID=Europe/Zurich:20221221T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Harprit Singh (Imperial College London) DESCRIPTION:Singular stochastic partial differential equations (SPDEs) of the form ∂_t u = Δu + F(u\, ∂_x u\, ξ)\, where ξ is an irregular driving noise\, arise in a variety of situations from quantum field theory to probability. After introducing some specific examples\, we describe the main difficulty they share\; they are singular due to the irregularity of the driving noise ξ. In the first part of the talk we discuss a simple example where using the so-called “Da Prato-Debussche trick” is sufficient to deal with this difficulty. In the second half\, we give a bird's-eye view on how regularity structures provide a solution theory for such equations. In particular\, we explain the role of subcriticality (super-renormalisability) and (half) Feynman diagrams in this theory. Lastly\, we shall mention some recent results on the class of differential operators that are compatible with this general machinery and how this relates to the geometry of the underlying space.
DTEND;TZID=Europe/Zurich:20221221T160000 END:VEVENT BEGIN:VEVENT UID:news1419@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20221128T102752 DTSTART;TZID=Europe/Zurich:20221207T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Hyunju Kwon (ETH Zurich) DESCRIPTION:Smooth (spatially periodic) solutions to the incompressible 3D Euler equations conserve kinetic energy in every local region\, while turbulent flows exhibit anomalous dissipation of energy. Toward verification of the anomalous dissipation\, the Onsager theorem has been established\, which says that the threshold Hölder regularity for conservation of the total kinetic energy is 1/3. As a next step\, we discuss a strong Onsager conjecture\, which combines the Onsager theorem with the local energy inequality.
DTEND;TZID=Europe/Zurich:20221207T160000 END:VEVENT BEGIN:VEVENT UID:news1429@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20221117T152426 DTSTART;TZID=Europe/Zurich:20221123T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Michele Dolce (EPFL) DESCRIPTION:Fluids in the ocean are often inhomogeneous\, incompressible and\, in relevant physical regimes\, can be described by the 2D Euler-Boussinesq system. Equilibrium states are then commonly observed to be stably stratified\, namely the density increases with depth. We are interested in considering the case when a background shear flow is also present. In the talk\, I will describe quantitative results for small perturbations around a stably stratified Couette flow. The density variation and velocity undergo an O(1/(t^{1/2})) inviscid damping while the vorticity and density gradient grow as O(t^{1/2}) in L^2. This is precisely quantified at the linear level. For the nonlinear problem\, the result holds on the optimal time-scale on which a perturbative regime can be considered. Namely\, given an initial perturbation of size O(eps)\, it is expected that the linear regime is observed up to a time-scale O(eps^{-1}). However\, we are able to control the dynamics all the way up to O(eps^{-2})\, where the perturbation becomes of size O(1) due to the linear instability.
DTEND;TZID=Europe/Zurich:20221123T160000 END:VEVENT BEGIN:VEVENT UID:news1422@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20221107T105340 DTSTART;TZID=Europe/Zurich:20221116T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Nicolas Camps (University of Nantes) DESCRIPTION:Following the seminal work of Bourgain in 1996\, and Burq and Tzvetkov in 2008\, a statistical approach to nonlinear dispersive equations has developed in various contexts. We are interested here in Schrödinger equations with cubic nonlinearity (NLS) in R^d. We first recall the relevant probabilistic Cauchy theory developed by Bényi\, Oh and Pocovnicu in 2015 in supercritical regimes\, before specifying the norm inflation instability that occurs in this context. The second part is dedicated to long-time dynamics for solutions initiated from these randomized initial data. We demonstrate a scattering result that relies on a probabilistic version of the I-method and that allows us to solve statistically the scattering conjecture for NLS in dimension 3. Finally\, we present recent developments in quasi-linear regimes\, which were initiated by Bringmann in 2019 and which we exploit to exhibit strong solutions to some weakly dispersive equations. This last result is in collaboration with Louise Gassot and Slim Ibrahim.
DTEND;TZID=Europe/Zurich:20221116T160000 END:VEVENT BEGIN:VEVENT UID:news1413@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20221014T114957 DTSTART;TZID=Europe/Zurich:20221109T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Paolo Bonicatto (University of Warwick) DESCRIPTION:In the classical theory\, given a vector field $b$ on $\\mathbb R^d$\, one usually studies the transport/continuity equation drifted by $b$\, looking for solutions in the class of functions (with certain integrability) or at most in the class of measures. In this seminar I will talk about recent efforts\, motivated by the modelling of defects in plastic materials\, aimed at extending the previous theory to the case when the unknown is instead a family of $k$-currents in $\\mathbb R^d$\, i.e. generalised $k$-dimensional surfaces. The resulting equation involves the Lie derivative $L_b$ of currents in direction $b$ and reads $\\partial_t T_t + L_b T_t = 0$. In the first part of the talk I will briefly introduce this equation\, with special attention to its space-time formulation. I will then shift the focus to some rectifiability questions and Rademacher-type results: given a Lipschitz path of integral currents\, I will discuss the existence of a “geometric derivative”\, namely a vector field advecting the currents. Joint work with G. Del Nin and F. Rindler (Warwick).
DTEND;TZID=Europe/Zurich:20221109T160000 END:VEVENT BEGIN:VEVENT UID:news1360@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220625T161018 DTSTART;TZID=Europe/Zurich:20220629T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Luca Fresta (University of Bonn) DESCRIPTION:We consider a system of $N$ interacting fermions initially confined in a volume $\\Lambda$. We show that\, in the high-density regime and for zero-temperature initial data exhibiting a local semiclassical structure\, the solution of the many-body Schrödinger equation can be approximated by the solution of the nonlinear Hartree equation\, up to errors that are small\, for large density\, uniformly in $N$ and $\\Lambda$. This is joint work with M. Porta and B. Schlein.
DTEND;TZID=Europe/Zurich:20220629T160000 END:VEVENT BEGIN:VEVENT UID:news1366@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220606T131938 DTSTART;TZID=Europe/Zurich:20220609T163000 SUMMARY:An afternoon
of analysis talks: Gautam Iyer (Carnegie Mellon University) DESCRIPTION:The Kompaneets equation describes energy transport in low-density (or high-temperature) plasmas where the dominant energy exchange mechanism is Compton scattering. The equation itself is a one-dimensional nonlinear parabolic equation with a diffusion coefficient that vanishes at the boundary. This degeneracy\, combined with the nonlinearity\, causes an out-flux of photons with zero energy\, often interpreted as a Bose-Einstein condensate. This talk will describe several results about the long-time behavior of these equations\, including convergence to equilibrium\, persistence of the condensate\, sufficient conditions under which it forms\, sufficient conditions under which it doesn't form\, and a loss formula for the mass of the condensate.
DTEND;TZID=Europe/Zurich:20220609T173000 END:VEVENT BEGIN:VEVENT UID:news1365@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220606T131928 DTSTART;TZID=Europe/Zurich:20220609T150000 SUMMARY:An afternoon
of analysis talks: Anna Mazzucato (Penn State University) DESCRIPTION:I will present a recent result concerning global existence for the Kuramoto-Sivashinsky equation on the two-dimensional torus with one growing mode in each direction. The proof combines PDE techniques with a Lyapunov function argument for the growing modes. This is joint work with David Ambrose (Drexel University\, USA).
DTEND;TZID=Europe/Zurich:20220609T160000 END:VEVENT BEGIN:VEVENT UID:news1364@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220606T131917 DTSTART;TZID=Europe/Zurich:20220609T140000 SUMMARY:An afternoon
of analysis talks: Giovanni Alberti (University of Pisa) DESCRIPTION:In this talk I will describe some results about the following elementary problem\, of isoperimetric flavor: given a set E in R^d with finite volume\, is it possible to find a hyperplane P that cuts E into two parts with equal volume\, and such that the area of the cut (that is\, the intersection of P and E) is of the expected order\, namely (vol(E))^{1−1/d}? We can show that this question\, even in a stronger form\, has a positive answer if the dimension d is 3 or higher. But\, interestingly enough\, our proof breaks down completely in dimension d=2\, and we do not know the answer in this case (but we know that the answer is positive if we allow cuts that are not exactly planar\, but close to planar). It turns out that this question has some interesting connection with the Kakeya problem. This is a work in progress with Alan Chang (Princeton University).
DTEND;TZID=Europe/Zurich:20220609T150000 END:VEVENT BEGIN:VEVENT UID:news1353@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220504T163350 DTSTART;TZID=Europe/Zurich:20220525T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Franck Sueur (Institut de Mathématiques de Bordeaux) DESCRIPTION:This talk is devoted to the 2D incompressible Euler system in the presence of sources and sinks. This model dates back to Viktor Yudovich in the sixties and is an interesting example of a nonlinear open system which has been widely used in controllability theory within the scope of smooth solutions. In this talk we will review how the classical issues of existence and uniqueness of weak solutions are challenged by the presence of incoming and exiting vorticity. This talk is based on joint works with Marco Bravin and Florent Noisette.
DTEND;TZID=Europe/Zurich:20220525T160000 END:VEVENT BEGIN:VEVENT UID:news1354@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220508T162554 DTSTART;TZID=Europe/Zurich:20220518T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Banhirup Sengupta (Universitat Autònoma de Barcelona) DESCRIPTION:In this talk I am going to provide a pointwise characterisation of nearly incompressible vector fields with bounded curl. Euler vector fields fall in this class. I will also talk about rotational properties of Euler flows and nonlinear transport equations involving the Cauchy kernel in the plane. This is based on joint works with Albert Clop (Barcelona) and Lauri Hitruhin (Helsinki).
DTEND;TZID=Europe/Zurich:20220518T160000 END:VEVENT BEGIN:VEVENT UID:news1327@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220422T115503 DTSTART;TZID=Europe/Zurich:20220504T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Gabriel Sattig (Universität Leipzig) DESCRIPTION:It was shown by Modena and Székelyhidi that weak solutions to the incompressible transport equation may not be unique\, even if the transporting field is Sobolev\, thus admitting a unique regular Lagrangian flow. In this talk I will present a recent result saying that non-Lagrangian solutions are generic in the Baire category sense. Joint work with L. Székelyhidi.
DTEND;TZID=Europe/Zurich:20220504T160000 END:VEVENT BEGIN:VEVENT UID:news1324@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220409T172448 DTSTART;TZID=Europe/Zurich:20220427T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Jaemin Park (University of Barcelona) DESCRIPTION:In this talk\, we study stationary solutions to the 2D incompressible Euler equations in the whole plane. It is well-known that any radial vorticity is stationary. For compactly supported vorticity\, it is more difficult to see whether a stationary solution has to be radial. In the case where the vorticity is non-negative\, it has been shown that any stationary solution has to be radial. By allowing the vorticity to change sign\, we prove that there exist non-radial stationary patch-type solutions. We construct patch-type solutions whose kinetic energy is infinite or finite. For the finite energy case\, it turns out that a construction of a stationary solution with compactly supported velocity is possible.
DTEND;TZID=Europe/Zurich:20220427T160000 END:VEVENT BEGIN:VEVENT UID:news1321@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220214T151002 DTSTART;TZID=Europe/Zurich:20220406T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Dr. Eliot Pacherie (NYU Abu Dhabi) DESCRIPTION:tba
DTEND;TZID=Europe/Zurich:20220406T160000 END:VEVENT BEGIN:VEVENT UID:news1313@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220320T140750 DTSTART;TZID=Europe/Zurich:20220323T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Dr. Raphael Winter (ENS Lyon) DESCRIPTION:Following the pioneering work of Lanford\, a rigorous theory has been developed for the validation of the Boltzmann equation in the low-density Grad scaling. In the physics literature\, an important issue is the corrections to the equation for small but positive volume fraction. The first-order correction to the Boltzmann equation is conjectured to be given by the so-called Choh-Uhlenbeck equation\, which is of the form\\r\\n$\\partial_t f_\\epsilon = Q_{\\epsilon\,BE}(f_\\epsilon\,f_\\epsilon) + \\epsilon Q_{\\epsilon\,CU}(f_\\epsilon\,f_\\epsilon\,f_\\epsilon)$.\\r\\nHere $Q_{\\epsilon\,BE}$ is the Boltzmann-Enskog operator\, and the Choh-Uhlenbeck operator $Q_{\\epsilon\,CU}$ is an explicit cubic operator. This operator accounts for the formation of dynamic microscopic correlations between three particles. In this work\, we prove rigorously that the Choh-Uhlenbeck equation gives the first-order correction to the Boltzmann equation in the Grad scaling. This is a joint work with Sergio Simonella.
DTEND;TZID=Europe/Zurich:20220323T160000 END:VEVENT BEGIN:VEVENT UID:news1311@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20220113T175547 DTSTART;TZID=Europe/Zurich:20220302T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Dr. Havva Yoldas (University of Vienna) DESCRIPTION:I will talk about Harris-type theorems and their applications to several kinetic equations like the linear BGK\, the linear Boltzmann and the kinetic Fokker-Planck equations\, and some biological kinetic models like the run and tumble equation. Even though the original ideas date back to the 1940s\, the Harris-type arguments recently raised a lot of mathematical interest in the PDE community\, especially after a simplified proof provided by Hairer and Mattingly in 2011. It is a convenient way to obtain quantifiable convergence rates and constructive proofs\, and the existence of a unique stationary state comes as a by-product of the theorems. The latter is especially useful for kinetic equations arising in biology where the shape of the stationary state cannot be known a priori.
DTEND;TZID=Europe/Zurich:20220302T160000 END:VEVENT BEGIN:VEVENT UID:news1252@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211115T124620 DTSTART;TZID=Europe/Zurich:20211208T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Martina Zizza (SISSA Trieste) DESCRIPTION:In this talk we tackle the question "How many vector fields are mixing?"\, analyzing the density properties of divergence-free BV vector fields which are weakly mixing/strongly mixing: this means that their Regular Lagrangian Flow is a weakly mixing/strongly mixing measure-preserving map when evaluated at time t=1. More precisely\, we prove the existence of a G_delta-set U in the space L^1_{t\,x}([0\,1]^3) made of divergence-free vector fields such that:\\r\\n 1) weakly mixing vector fields are a residual G_delta-set in U\;\\r\\n 2) (exponentially fast) strongly mixing vector fields are a dense subset of U.\\r\\nThe proof of these results exploits some connections between ergodic theory and fluid dynamics and is based on the density of BV vector fields whose Regular Lagrangian Flow is a permutation of subsquares of the unit square [0\,1]^2 when evaluated at time t=1.
DTEND;TZID=Europe/Zurich:20211208T160000 END:VEVENT BEGIN:VEVENT UID:news1253@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211122T174531 DTSTART;TZID=Europe/Zurich:20211201T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Elia Bruè (Institute for Advanced Study\, Princeton) DESCRIPTION:A long-standing open question in fluid mechanics is whether the Yudovich uniqueness result for the 2d Euler system can be extended to the class of L^p-integrable vorticity. Recently\, there have been formidable attempts to disprove this conjecture\, none of which has by now fully solved it. I will outline two possible approaches to this problem. One is based on the convex integration technique introduced by De Lellis and Székelyhidi. The second\, proposed recently by Vishik\, exploits the linear instability of certain stationary solutions.
DTEND;TZID=Europe/Zurich:20211201T160000 END:VEVENT BEGIN:VEVENT UID:news1259@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211103T092442 DTSTART;TZID=Europe/Zurich:20211124T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Gioacchino Antonelli (Scuola Normale Superiore di Pisa) DESCRIPTION:In this talk I will discuss the isoperimetric problem on spaces with curvature bounded from below. I will mainly deal with complete non-compact Riemannian manifolds\, but most of the techniques described are metric in nature and the results could be extended to the case of metric measure spaces with synthetic bounds from below on the Ricci tensor\, namely RCD spaces. When the space is compact\, the existence of isoperimetric regions for every volume is established through a simple application of the direct method of Calculus of Variations. In the noncompact case\, part of the mass could be lost at infinity in the minimization process. Such a mass can be recovered in isoperimetric regions sitting in limits at infinity of the space. Following this heuristics\, and building on top of results by Ritoré--Rosales and Nardulli\, I will state a generalized existence result for the isoperimetric problem on Riemannian manifolds with Ricci curvature bounded from below and a uniform bound from below on the volumes of unit balls. The main novelty in such an approach is the use of the synthetic theory of curvature bounds to describe in a rather natural way where the mass is lost at infinity. Later\, I will use the generalized existence result described above to prove new existence criteria for the isoperimetric problem on manifolds with nonnegative Ricci curvature. In particular\, I will show that on a complete manifold with nonnegative sectional curvature and Euclidean volume growth at infinity\, isoperimetric regions exist for every sufficiently big volume. Time permitting\, I will describe some forthcoming works and some open problems. This talk is based on several papers and ongoing collaborations with E. Bruè\, M. Fogagnolo\, S. Nardulli\, E. Pasqualetto\, M. Pozzetta\, and D. Semola.
DTEND;TZID=Europe/Zurich:20211124T160000
END:VEVENT BEGIN:VEVENT UID:news1255@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20211021T155955 DTSTART;TZID=Europe/Zurich:20211103T141500 SUMMARY:Seminar Analysis and Mathematical Physics: In-Jee Jeong (Seoul National University) DESCRIPTION:The evolution of incompressible inviscid fluids is governed by the Euler equations. We consider the dynamics of vortex rings\, which are axisymmetric solutions to the three-dimensional Euler equations with concentrated axial vorticity. We prove the following infinite norm growth results: (i) filamentation (formation of a long tail) behavior from a single vortex ring\, and (ii) vortex stretching from the "collision" of two vortex rings with opposite signs. Joint work with Kyudong Choi (UNIST).
DTEND;TZID=Europe/Zurich:20211103T160000 END:VEVENT BEGIN:VEVENT
UID:news1225@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210908T121913 DTSTART;TZID=Europe/Zurich:20211006T141500 SUMMARY:Seminar Analysis and Mathematical Physics: Emil Wiedemann (Universität Ulm) DESCRIPTION:The concept of measure-valued solution was introduced to the theory of hyperbolic conservation laws by DiPerna in the 1980s. For nonlinear systems of hyperbolic type\, such as the Euler equations of ideal fluids\, measure-valued solutions are often the only available notion of solution\, as the existence of 'honest' solutions is still unknown. Although this relaxation of the solution concept seems like a vast generalization\, where a lot of information is lost\, it turned out in work of Székelyhidi-W. (2012) that every measure-valued solution can be approximated by weak ones in the incompressible situation. For compressible flows\, however\, the situation is much different. I will discuss recent progress in this direction in joint work with Dennis Gallenmüller.
DTEND;TZID=Europe/Zurich:20211006T160000 END:VEVENT BEGIN:VEVENT UID:news1190@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210601T082910 DTSTART;TZID=Europe/Zurich:20210602T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Corentin Le Bihan (ENS Lyon) DESCRIPTION:A simple model of a gas is the hard spheres model. It is a billiard of little particles which can interact very strongly at very small distance (think for example of a real billiard with a lot of balls). Because understanding such a system is an outstanding problem\, people tried to find a limiting process. A first equation governing the density of one particle was given by Boltzmann: ∂tf+v⋅∇xf=Q(f\,f)\\r\\nIn its formal derivation Boltzmann supposed that two different particles are almost independent\, so the probability of having two particles at the same place is the product of the probabilities. The validity of such an equation is a priori not clear since it adds some irreversibility that does not exist in the hard sphere model.\\r\\nLanford solved the problem in his 1975 paper: Boltzmann’s equation is true\, up to a time independent of the number of particles (however each particle will have in mean less than one collision).\\r\\nNow comes the question of the boundary. We expect to find some “Lanford” theorem even if we add some boundary condition. A first example is specular reflection\, a deterministic law. Another example\, which would be very important in physics\, is the evolution of a gas between two hot plates. Then the reflection condition is stochastic. I am interested in a third type of reflection\, also stochastic\, which models a rough boundary.\\r\\nDuring my talk I will present some ideas of the proof of Boltzmann in the torus R^3/Z^3 and the adaptation to the case of a domain with boundaries.
DTEND;TZID=Europe/Zurich:20210602T160000 END:VEVENT BEGIN:VEVENT UID:news1188@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210510T074110 DTSTART;TZID=Europe/Zurich:20210526T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Bjorn Berntson (KTH Stockholm) DESCRIPTION:The half-wave maps (HWM) equation is a recently-introduced integrable PDE with two distinct relations to the (trigonometric) spin Calogero-Moser-Sutherland (CMS) many-body system. Firstly\, the HWM equation arises as a certain continuum limit of the CMS system and secondly\, the soliton solutions of the HWM equation are governed by a complexified version of the CMS system. We present generalizations of the HWM equation that are similarly related to the hyperbolic and elliptic spin CMS systems. This talk is based on joint work with Rob Klabbers and Edwin Langmann.
DTEND;TZID=Europe/Zurich:20210526T151500 END:VEVENT BEGIN:VEVENT UID:news1176@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210517T094220 DTSTART;TZID=Europe/Zurich:20210519T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Théotime Girardot (LPMMC) DESCRIPTION:In two-dimensional space there are possibilities for quantum statistics continuously interpolating between the bosonic and the fermionic one.\\r\\nQuasi-particles obeying such statistics can be described as ordinary bosons and fermions with magnetic interactions.\\r\\nWe study a limit situation where the statistics/magnetic interaction is seen as a “perturbation from the fermionic end”.\\r\\nWe vindicate a mean-field approximation\, proving that the ground state of a gas of anyons is described to leading order by a semi-classical\, Vlasov-like\, energy functional.\\r\\nThe ground state of the latter displays anyonic behavior in its momentum distribution. After introducing and stating this result I will give elements of proof based on coherent states\, Husimi functions\, the Diaconis-Freedman theorem and a quantitative version of a semi-classical Pauli principle.
DTEND;TZID=Europe/Zurich:20210519T160000 END:VEVENT BEGIN:VEVENT UID:news1172@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210426T124049 DTSTART;TZID=Europe/Zurich:20210428T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Soeren Petrat (Jacobs University Bremen) DESCRIPTION:We consider the non-relativistic quantum dynamics of N bosons in the mean-field scaling limit. It is known that the leading order behavior is described by the Hartree equation\, and the next-to-leading order by Bogoliubov theory. Here\, we prove a perturbative expansion around Bogoliubov theory: a norm approximation of the true solution to the Schroedinger equation to any order in 1/N. The coefficients in the expansion are independent of N\, and can be computed from the solutions to the Hartree and Bogoliubov equations alone. Our expansion leads to approximations of correlation functions and reduced densities to any order in 1/N. In this sense we have completely solved the dynamics of this mean-field model\, at least for bounded interaction potentials.\\r\\nThis is joint work with Lea Bossmann\, Peter Pickl\, and Avy Soffer.
DTEND;TZID=Europe/Zurich:20210428T151500 END:VEVENT BEGIN:VEVENT UID:news1162@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210316T094354 DTSTART;TZID=Europe/Zurich:20210407T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Jules Pitcho (University of Zurich) DESCRIPTION:The recent work of Brué\, Colombo and De Lellis has established that\, for Sobolev vector fields\, the continuity equation may be well-posed in a Lagrangian sense\, yet trajectories of the associated ODE need not be unique. We describe how a convex integration scheme for the continuity equation reveals these degenerate integral curves\; we modify this scheme to produce Sobolev vector fields for which “most” integral curves are degenerate. More precisely\, we produce Sobolev vector fields which have any finite number of integral curves starting almost everywhere. This is joint work with Massimo Sorella.
DTEND;TZID=Europe/Zurich:20210407T160000 END:VEVENT BEGIN:VEVENT UID:news1138@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210318T140108 DTSTART;TZID=Europe/Zurich:20210324T140000 SUMMARY:Seminar
Analysis and Mathematical Physics: Jonas Lampart (CNRS\, LICB) DESCRIPTION:I will discuss some properties of the set of all trajectories that can be obtained from a fixed initial state by varying the potential in the Schrödinger equation. This is related to the control problem\, i.e. driving the system to a target state\, which turns out to be impossible for "typical" target states using bounded potentials.
DTEND;TZID=Europe/Zurich:20210324T160000 END:VEVENT BEGIN:VEVENT UID:news1141@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20210223T224755 DTSTART;TZID=Europe/Zurich:20210317T140000 SUMMARY:Seminar
Analysis and Mathematical Physics: Alessandro Olgiati (University of Zurich) DESCRIPTION:We study the ground state properties of a system of bosonic particles trapped by a double-well potential\, in the limit of large inter-well separation and of high potential barrier. The N bosons also interact via a mean-field two-body potential\, in the limit of large N. The leading-order physics is governed by a Bose-Hubbard Hamiltonian coupling two low-energy modes\, each supported in the bottom of one well. Fluctuations beyond these two modes are ruled by two independent Bogoliubov Hamiltonians\, one for each well. Our main result is that the variance of the number of particles in the low-energy modes is suppressed. This is a violation of the Central Limit Theorem which holds in the occurrence of Bose-Einstein condensation\, and therefore it signals that particles develop correlations in the ground state. We achieve our result by proving a precise energy expansion in terms of Bose-Hubbard and Bogoliubov energies. Joint work with Nicolas Rougerie (ENS Lyon) and Dominique Spehner (Universidad de Concepción).
DTEND;TZID=Europe/Zurich:20210317T160000 END:VEVENT BEGIN:VEVENT UID:news1137@dmi.unibas.ch DTSTAMP;
TZID=Europe/Zurich:20210223T225158 DTSTART;TZID=Europe/Zurich:20210303T141500 SUMMARY:Seminar Analysis and Mathematical Physics: Peter Pickl (LMU München) DESCRIPTION:The derivation of effective descriptions from microscopic dynamics is a very vivid area in mathematical physics. In the talk I will discuss a system of many particles with Newtonian time evolution that are subject to interaction. It is well known that in the weak coupling limit this system converges\, under a smoothness assumption on the interaction force\, to a solution of the Vlasov equation. Weakening the type of convergence (convergence for all initial conditions -> convergence in probability -> convergence in distribution)\, the smoothness condition on the interaction can be generalized. In the talk I will present recent results in this direction and explain which types of convergence hold/do not hold under the different assumptions on the interaction force.
DTEND;TZID=Europe/Zurich:20210303T161500 END:VEVENT BEGIN:VEVENT UID:news1125@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20201130T113309 DTSTART;TZID=Europe/Zurich:20201216T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Alessandro Goffi (University of Padova) DESCRIPTION:The problem of maximal regularity in Lebesgue spaces is a heavily studied question for linear PDEs that dates back to Calderón and Zygmund for the Poisson equation and to the potential theoretic approach developed by Ladyzhenskaya et al. for the heat equation\, and represents the cornerstone in the analysis of many nonlinear PDEs. After a brief overview of the linear theory\, in this talk I will focus on maximal L^q-regularity for viscous Hamilton-Jacobi equations with superlinear first order terms. I will first survey recent results obtained for the stationary equation\, which positively answer a conjecture raised by P.-L. Lions\, via a Bernstein-type argument. Then\, I will discuss Lipschitz and optimal L^q-regularity for parabolic Hamilton-Jacobi equations\, which are instead tackled through a refinement of Evans’ nonlinear adjoint method\, thus exploiting fine regularity properties for advection-diffusion equations with “rough” drifts and providing new regularity results for systems of PDEs arising in the theory of Mean Field Games. These are joint works with Marco Cirant (Padova).
DTEND;TZID=Europe/Zurich:20201216T160000 END:VEVENT BEGIN:VEVENT UID:news1126@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20201122T211830 DTSTART;TZID=Europe/Zurich:20201209T150000 SUMMARY:Seminar
Analysis and Mathematical Physics: Mickaël Latocca (ENS Paris) DESCRIPTION:The possible growth of the Sobolev norms of solutions to the 2d (and 3d) Euler equations and its quantification remain poorly understood. The only general bound is double exponential. Conversely\, such a double exponential growth scenario occurs for specific initial data in the setting of the disc (Kiselev-Sverak). In the setting of the torus\, only an exponentially growing scenario has been exhibited (Zlatos). Could the double exponential scenario occur on the torus? What is the typical behaviour that could be expected? It is highly possible that on the torus\, Sobolev norms generically do not grow fast.\\r\\nIn this talk\, I will present some results obtained in this direction. We will construct invariant measures for the 2d Euler equation at high regularity ($H^s$\, $s>2$) and prove that on the support of the measure\, Sobolev norms do not grow faster than polynomially.\\r\\nRefining the method allows us to construct an invariant measure for the 3d Euler equations at high regularity ($H^s$\, $s>7/2$) and thus construct global dynamics on the support of the measure\, exhibiting at most polynomial growth.\\r\\nFinally\, if time permits\, we will discuss the properties of the measures constructed.
DTEND;TZID=Europe/Zurich:20201209T160000 END:VEVENT BEGIN:VEVENT UID:news1106@dmi.unibas.ch DTSTAMP;TZID=
Europe/Zurich:20201116T100656 DTSTART;TZID=Europe/Zurich:20201209T140000 SUMMARY:Seminar Analysis and Mathematical Physics: Lea Bossmann (IST Austria) DESCRIPTION:We consider a system of N bosons in the mean-field scaling regime in an external trapping potential. We derive an asymptotic expansion of the low-energy eigenstates and the corresponding energies\, which provides corrections to Bogoliubov theory to any order in 1/N. We show that the structure of the ground state and of the non-degenerate low-energy eigenstates is preserved by the dynamics if the external trap is switched off. This talk is based on joint works with Sören Petrat\, Peter Pickl\, Robert Seiringer\, and Avy Soffer (arXiv:1912.11004 and arXiv:2006.09825).
DTEND;TZID=Europe/Zurich:20201209T150000 END:VEVENT BEGIN:VEVENT UID:news1124@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20201118T092353 DTSTART;TZID=Europe/Zurich:20201202T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Jaemin Park (Georgia Institute of Technology) DESCRIPTION:In this talk\, I will discuss whether all stationary/uniformly-rotating solutions of the 2D Euler equation must be radially symmetric\, if the vorticity is compactly supported. For a stationary solution that is either smooth or of patch type\, we prove that if the vorticity does not change sign\, it must be radially symmetric up to a translation. It turns out that the fixed-sign condition is necessary for the radial symmetry result: indeed\, we are able to find a non-radial sign-changing stationary solution with compact support. We have also obtained some sharp criteria on symmetry for uniformly-rotating solutions of the 2D Euler equation and the SQG equation. The symmetry results are mainly obtained by calculus of variations and elliptic equation techniques\, and the construction of the non-radial solution is obtained from bifurcation theory. Part of this talk is based on joint work with Javier Gomez-Serrano\, Jia Shi and Yao Yao.
DTEND;TZID=Europe/Zurich:20201202T160000 END:VEVENT BEGIN:VEVENT UID:news1107@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20201116T101053 DTSTART;TZID=Europe/Zurich:20201125T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Emanuela Giacomelli (LMU Munich) DESCRIPTION:We consider N spin 1/2 fermions interacting with a positive and regular enough potential in three dimensions. We compute the ground state energy of the system in the dilute regime at second order in the particle density. We recover a well-known expression for the ground state energy which depends on the interaction potential only via its scattering length. A first proof of this result has been given by Lieb\, Seiringer and Solovej. We discuss a new derivation of this formula which makes use of the almost-bosonic nature of the low-energy excitations of the systems. Based on a joint work with Marco Falconi\, Christian Hainzl\, Marcello Porta.
DTEND;TZID=Europe/Zurich:20201125T161500 END:VEVENT BEGIN:VEVENT UID:news1100@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20201102T110041 DTSTART;TZID=Europe/
Zurich:20201111T141500 SUMMARY:Seminar Analysis and Mathematical Physics: Lars Eric Hientzsch (Institut Fourier\, University of Grenoble Alpes) DESCRIPTION:The quantum Navier-Stokes (QNS) equations describe a compressible fluid including a degenerate density-dependent viscosity and a dispersive tensor accounting for capillarity effects. The system can be seen as a viscous correction of the Quantum Hydrodynamics (QHD) arising e.g. as a prototype model in the description of superfluidity. We consider the (QNS) system on the whole space with non-trivial far-field behaviour\, providing the suitable framework to study coherent structures and the incompressible limit.\\r\\nFirst\, we prove global existence of finite energy weak solutions (FEWS) in dimension two and three. To compensate for the lack of control of the velocity field around vacuum regions\, we construct approximate solutions to a truncated formulation of (QNS) on a sequence of invading domains. Suitable compactness properties are inferred from the Bresch-Desjardins entropy estimates. This is joint work with P. Antonelli and S. Spirito.\\r\\nSecond\, we address the low Mach number limit for FEWS to the (QNS) system (in collaboration with P. Antonelli and P. Marcati). The main novelty is a precise analysis of the acoustic dispersion altered by the presence of the dispersive capillarity tensor. The linearised system is governed by the Bogoliubov dispersion relation. The desired decay of the acoustic part follows from refined Strichartz estimates.
DTEND;TZID=Europe/Zurich:20201111T160000 END:VEVENT BEGIN:VEVENT UID:news1036@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20201024T154014 DTSTART;TZID=Europe/Zurich:20201104T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Marco Falconi (University of Roma Tre) DESCRIPTION:In this talk I will overview variational problems arising from the study of quantum matter interacting with a macroscopic force field. These interactions are very common in both solid state and condensed matter physics\, as well as in higher energy settings. In particular\, I will focus on the link between the effective and microscopic description of such variational problems\, using techniques of quasi-classical analysis developed in recent years in collaboration with M. Correggi and M. Olivieri.
DTEND;TZID=Europe/Zurich:20201104T160000 END:VEVENT BEGIN:VEVENT UID:news1086@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20201007T090901 DTSTART;TZID=Europe/Zurich:20201021T141500 SUMMARY:Seminar
Analysis and Mathematical Physics: Maria Teresa Chiri (Penn State University) DESCRIPTION:A posteriori Error Estimates for Numerical Solutions to Hyperbolic Conservation Laws
DTEND;TZID=Europe/Zurich:20201021T160000 END:VEVENT BEGIN:VEVENT UID:news1087@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20200930T171430 DTSTART;TZID=Europe/Zurich:20201014T141500 SUMMARY:Seminar Analysis and Mathematical Physics: Silja Haffter (EPFL) DESCRIPTION:The surface quasigeostrophic equation (SQG) is a 2d physical model equation which emerges in meteorology. It has attracted the attention of the mathematical community since it shares many of the essential difficulties of 3d fluid dynamics: in the supercritical regime for instance\, where dissipation is modelled by a fractional Laplacian of order less than 1/2\, it is not known whether or not smooth solutions blow up in finite time. On the other hand\, the scheme of Leray still produces global-in-time weak solutions from any L^2 initial datum\, but their regularity is poorly understood. In this talk\, I will propose a nonempty notion of "suitable weak solution" for the supercritical SQG equation and prove that those solutions are smooth outside a compact set of quantifiable Hausdorff dimension\; in particular they are smooth almost everywhere. I will also give a conjecture on what we believe to be an optimal dimension estimate. This is a joint work with Maria Colombo (EPFL).
DTEND;TZID=Europe/Zurich:20201014T160000
END:VEVENT
BEGIN:VEVENT
UID:news1088@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20200923T091416
DTSTART;TZID=Europe/Zurich:20201007T141500
SUMMARY:Seminar Analysis and Mathematical Physics: Klaus Widmayer (EPFL)
DESCRIPTION:A point charge is a particularly basic and important equilibrium of the Vlasov-Poisson equations\, and the study of its stability has inspired several major contributions. In this talk we present some recent work\, which brings a fresh perspective on this problem. Our new approach combines a Lagrangian analysis of the linearized problem with an Eulerian PDE framework in the nonlinear analysis\, all the while respecting the symplectic structure. As a result\, for the case of radial initial data\, we see that solutions are global and in fact disperse to infinity via a modified scattering along trajectories of the linearized flow. This is joint work with Benoit Pausader (Brown University).
DTEND;TZID=Europe/Zurich:20201007T160000
END:VEVENT
BEGIN:VEVENT
UID:news1002@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20200910T202733
DTSTART;TZID=Europe/Zurich:20200930T141500
SUMMARY:Seminar Analysis and Mathematical Physics: Simone Dovetta (CNR-IMATI\, Pavia)
DESCRIPTION:The talk overviews some recent developments about nonlinear Schroedinger equations (NLS) on metric graphs. Precisely\, we concentrate on variational problems for the NLS energy functional subject to the mass constraint. After a brief recap of the well-known behaviour of such models on the real line\, we address the existence of NLS ground states on noncompact metric graphs\, with a specific focus on periodic graphs and infinite trees. The emergence of threshold phenomena rooted in the nature of these graphs is discussed. Finally\, we provide some insights on the uniqueness of ground states at fixed mass. On the one hand\, uniqueness is shown to hold for two classes of graphs with half-lines. On the other hand\, a counterexample to uniqueness in full generality is exhibited. The matter we discuss is part of a wider research line\, developed in collaboration with several authors. The results explicitly covered by the talk refer to a series of papers\, some of which are joint works with Riccardo Adami\, Enrico Serra and Paolo Tilli.
DTEND;TZID=Europe/Zurich:20200930T160000
END:VEVENT
BEGIN:VEVENT
UID:news1022@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20200217T114353
DTSTART;TZID=Europe/Zurich:20200226T141500
SUMMARY:Seminar Analysis and Mathematical Physics: Luigi De Rosa (EPF Lausanne)
DESCRIPTION:I will describe how convex integration techniques can be used to show that wild dissipative solutions of the incompressible Euler equations are typical in the Baire category sense. This also partially solves a conjecture by Philip Isett on the sharpness of the kinetic energy regularity for Hölder continuous solutions of the Euler equations. The talk will be based on a recent work obtained in collaboration with Riccardo Tione.
DTEND;TZID=Europe/Zurich:20200226T160000
END:VEVENT
BEGIN:VEVENT
UID:news929@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20191015T153025
DTSTART;TZID=Europe/Zurich:20191023T141500
SUMMARY:Seminar Analysis and Mathematical Physics: Fabian Ziltener (Utrecht University)
DESCRIPTION:The goal of this talk is to show in an example how analysis and symplectic geometry are related in several ways.\nSymplectic geometry originated from classical mechanics\, where the canonical symplectic form on phase space appears in Hamilton's equation. A (smooth) diffeomorphism on a symplectic manifold is called a symplectomorphism iff it preserves the symplectic form. This happens iff the diffeomorphism solves a certain inhomogeneous quadratic first order system of PDEs. In classical mechanics symplectomorphisms play the role of canonical transformations.\nA famous result by Eliashberg and Gromov states that the set of symplectomorphisms is $C^0$-closed in the set of all diffeomorphisms. This is remarkable\, since in general\, the $C^0$-limit of a sequence of solutions of a first order system of PDEs need not solve the system. A well-known proof of the Eliashberg-Gromov theorem is based on Gromov's symplectic nonsqueezing theorem for balls.\nIn my talk I will sketch this proof. Furthermore\, I will present a symplectic nonsqueezing result for spheres that sharpens Gromov's theorem. The proof of this result is based on the existence of a holomorphic map from the (real) two-dimensional unit disk to a certain symplectic manifold\, satisfying some Lagrangian boundary condition. Such a map solves the Cauchy-Riemann equation for a certain almost complex structure.
DTEND;TZID=Europe/Zurich:20191023T160000
END:VEVENT
BEGIN:VEVENT
UID:news833@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20190529T183455
DTSTART;TZID=Europe/Zurich:20190529T141500
SUMMARY:Seminar Analysis: Mikaela Iacobelli (ETH Zürich)
DESCRIPTION:The Vlasov-Poisson system is a kinetic equation that models collisionless plasma. A plasma has a characteristic scale called the Debye length\, which is typically much shorter than the scale of observation. In this case the plasma is called ‘quasineutral’. This motivates studying the limit in which the ratio between the Debye length and the observation scale tends to zero. Under this scaling\, the formal limit of the Vlasov-Poisson system is the Kinetic Isothermal Euler system.\nThe Vlasov-Poisson system itself can formally be derived as the limit of a system of ODEs describing the dynamics of a system of N interacting particles\, as the number of particles approaches infinity. The rigorous justification of this mean field limit remains a fundamental open problem.\nIn this talk we present how the mean field and quasineutral limits can be combined to derive the Kinetic Isothermal Euler system from a regularised particle model.
DTEND;TZID=Europe/Zurich:20190529T160000
END:VEVENT
BEGIN:VEVENT
UID:news888@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20190515T102013
DTSTART;TZID=Europe/Zurich:20190522T141500
SUMMARY:Analysis Seminar: Armin Schikorra (University of Pittsburgh)
DESCRIPTION:The degree of a map between two spheres of the same dimension can be estimated by the Sobolev norm of said map (of the right class). In this talk I will discuss to what extent this is possible for the Hopf degree as well – and why the estimate we have is “analytically optimal” but probably not “topologically optimal”. Joint work with J. Van Schaftingen.
DTEND;TZID=Europe/Zurich:20190522T160000
END:VEVENT
BEGIN:VEVENT
UID:news856@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20190410T101051
DTSTART;TZID=Europe/Zurich:20190424T141500
SUMMARY:Seminar Analysis: Giorgio Stefani (Scuola Normale Superiore di Pisa)
DESCRIPTION:Somewhat surprisingly\, the first appearance of the concept of a fractional derivative is found in a letter written to de l'Hôpital by Leibniz in 1695. Since then\, Fractional Calculus has fascinated generations of mathematicians and several definitions of fractional derivatives have appeared. In more recent years\, the fractional operator defined as the gradient of the Riesz potential has received particular attention\, since it has proved to be a useful tool for the study of fractional-order PDEs and fractional Sobolev spaces. In a joint work with G. E. Comi\, combining the PDE approach developed by Spector and his collaborators with the distributional point of view adopted by Šilhavý\, we introduced new notions of fractional variation and fractional Caccioppoli perimeter in analogy with the classical BV theory. Within this framework\, we were able to partially extend De Giorgi’s Blow-up Theorem to sets of locally finite fractional Caccioppoli perimeter\, proving existence of blow-ups and giving a first characterisation of these (possibly non-unique) limit sets. In this talk\, after a quick overview on Fractional Calculus\, I will introduce the main features of the fractional operators involved and then give an account of the main results on the fractional variation we were able to achieve so far.
DTEND;TZID=Europe/Zurich:20190424T160000
END:VEVENT
BEGIN:VEVENT
UID:news828@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20190327T120936
DTSTART;TZID=Europe/Zurich:20190410T141500
SUMMARY:Seminar Analysis: Dominik Inauen (University of Zurich)
DESCRIPTION:The problem of embedding abstract Riemannian manifolds isometrically (i.e. preserving the lengths) into Euclidean space stems from the conceptually fundamental question of whether abstract Riemannian manifolds and submanifolds of Euclidean space are the same. As it turns out\, such embeddings have a drastically different behaviour at low regularity (i.e. C^1) than at high regularity (i.e. C^2): for example\, it's possible to find C^1 isometric embeddings of the standard 2-sphere into arbitrarily small balls in R^3\, and yet\, in the C^2 category there is (up to translation and rotation) just one isometric embedding\, namely the standard inclusion. Analogous to the Onsager conjecture\, one might ask if there is a regularity threshold in the Hölder scale which distinguishes these behaviours. In my talk I will give an overview of what is known concerning the latter question.
DTEND;TZID=Europe/Zurich:20190410T160000
END:VEVENT
BEGIN:VEVENT
UID:news830@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20190327T081509
DTSTART;TZID=Europe/Zurich:20190327T141500
SUMMARY:Seminar Analysis: Tobias Weth (University of Frankfurt)
DESCRIPTION:I will report on some recent results - obtained in joint work with Huyuan Chen - on Dirichlet problems for the Logarithmic Laplacian Operator\, which arises as the formal derivative of fractional Laplacians at order s = 0. I will discuss the functional analytic framework for these problems and show how it allows one to characterize the asymptotics of principal Dirichlet eigenvalues and eigenfunctions of fractional Laplacians as the order tends to zero. Furthermore\, I will discuss necessary and sufficient conditions on domains giving rise to weak and strong maximum principles for the logarithmic Laplacian. If time permits\, I will also discuss regularity estimates for solutions to corresponding Poisson problems.
DTEND;TZID=Europe/Zurich:20190327T160000
END:VEVENT
BEGIN:VEVENT
UID:news329@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181113T174428
DTSTART;TZID=Europe/Zurich:20181128T141500
SUMMARY:Seminar Analysis: Stefano Spirito (Università dell'Aquila)
DESCRIPTION:In this talk I will present some results concerning the analysis of finite energy weak solutions of the Navier-Stokes-Korteweg equations\, which model the dynamics of a viscous compressible fluid with diffuse interface. A general theory of global existence is still missing\; however\, for some particular cases of physical interest I will present results regarding the global existence and the compactness of finite energy weak solutions. The talk is based on a series of joint works with Paolo Antonelli (GSSI - Gran Sasso Science Institute\, L’Aquila).
DTEND;TZID=Europe/Zurich:20181128T160000
END:VEVENT
BEGIN:VEVENT
UID:news95@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181113T174437
DTSTART;TZID=Europe/Zurich:20181121T141500
SUMMARY:Seminar Analysis: Elia Bruè (Scuola Normale Superiore di Pisa)
DESCRIPTION:Since the work by DiPerna and Lions (1989)\, the continuity and transport equations under mild regularity assumptions on the vector field have been extensively studied\, becoming a florid research field. The applicability of this theory is very wide\, especially in the study of partial differential equations and very recently also in the field of non-smooth geometry.\nThe aim of this talk is to give an overview of the quantitative side of the theory initiated by Crippa and De Lellis. We address the problem of mixing and propagation of regularity for solutions to the continuity equation drifted by Sobolev fields. The problem is well understood when the vector field enjoys Sobolev regularity with integrability exponent p>1\, and basically nothing is known (at the quantitative level) in the case p=1.\nWe present sharp regularity estimates for the case p>1 and new attempts to attack the challenging question in the case p=1. This is a joint work with Quoc-Hung Nguyen.
DTEND;TZID=Europe/Zurich:20181121T160000
END:VEVENT
BEGIN:VEVENT
UID:news423@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181217T184454
DTSTART;TZID=Europe/Zurich:20171204T160000
SUMMARY:Seminar Analysis: Xavier Ros-Oton (University of Zürich)
DESCRIPTION:We present a brief overview of the regularity theory for free boundaries in different obstacle problems. We describe how a monotonicity formula of Almgren plays a central role in the study of the regularity of the free boundary in some of these problems. Finally\, we explain new strategies which we have recently developed to deal with cases in which monotonicity formulas are not available.
DTEND;TZID=Europe/Zurich:20171204T180000
END:VEVENT
BEGIN:VEVENT
UID:news422@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181211T232725
DTSTART;TZID=Europe/Zurich:20171127T160000
SUMMARY:Seminar Analysis: Jérémy Sok (University of Basel)
DESCRIPTION:We consider Dirac operators on the 3-sphere with singular magnetic fields which are supported on links\, that is\, on one-dimensional manifolds which are diffeomorphic to finitely many copies of S^1. Each connected component carries a flux 2πα which exhibits a 2π-periodicity\, just like Aharonov-Bohm solenoids in the complex plane. We study the kernel of such operators through the spectral flow of loops corresponding to tuning some flux from 0 to 2π\, that is\, the number of eigenvalues crossing 0 along the loop (counted algebraically). It turns out that the spectral flow is generically non-zero and depends on the shape of the curves and their linking number. Through the stereographic projection the result extends to R^3. Then\, by smearing out the magnetic fields\, we obtain new solutions (ψ\,A) to the zero-mode equation on R^3:\nσ·(-i∇+A)ψ=0\, (ψ\,A) ∈ H^1(R^3)^2 × (\dot{H}^1(R^3)^3 ∩ L^6(R^3)^3)\,\nwhere σ=(σ_j)_{j=1...3} denotes the family of Pauli matrices\, A is the magnetic potential associated to the magnetic field ∇×A\, and σ·(-i∇+A) is the corresponding Dirac operator in R^3.\n(Joint work with Fabian Portmann and Jan Philip Solovej)
DTEND;TZID=Europe/Zurich:20171127T180000
END:VEVENT
BEGIN:VEVENT
UID:news421@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181211T231155
DTSTART;TZID=Europe/Zurich:20171113T160000
SUMMARY:Seminar Analysis: Jozsef Kolumban (Paris Dauphine University)
DESCRIPTION:We consider the motion of a rigid body due to the pressure of a surrounding two-dimensional irrotational perfect incompressible fluid\, the whole system being confined in a bounded domain with an impermeable condition on a part of the external boundary. Thanks to an impulsive control strategy\, we prove that there exists an appropriate boundary condition on the remaining part of the external boundary (allowing some fluid to go in and out of the domain) such that the immersed rigid body is driven from some given initial position and velocity to some final position and velocity in a given positive time\, without touching the external boundary. The controlled part of the external boundary is assumed to have a nonvoid interior\, and the final position is assumed to be in the same connected component of the set of possible positions as the initial position.
DTEND;TZID=Europe/Zurich:20171113T180000
END:VEVENT
BEGIN:VEVENT
UID:news420@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181211T230805
DTSTART;TZID=Europe/Zurich:20171106T160000
SUMMARY:Seminar Analysis: Sara Daneri (Friedrich-Alexander University Erlangen-Nürnberg)
DESCRIPTION:We study a functional consisting of a perimeter term and a nonlocal term which are in competition\, both in the discrete and continuous setting.\nIn the discrete setting such a functional was introduced by Giuliani\, Lebowitz\, Lieb and Seiringer. For both the continuous and discrete problem\, we show that the global minimizers are exact periodic stripes. One striking feature of the functionals is that the minimizers are invariant under a smaller group of symmetries than the functional itself. In the continuous setting\, to our knowledge this is the first example of a model with local/nonlocal terms in competition such that the functional is invariant under permutation of coordinates and the minimizers display a pattern formation which is one-dimensional. Such behaviour for a smaller range of exponents in the discrete setting had already been shown\, using different techniques. This is a joint work with E. Runa.
DTEND;TZID=Europe/Zurich:20171105T180000
END:VEVENT
BEGIN:VEVENT
UID:news419@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181211T230239
DTSTART;TZID=Europe/Zurich:20171016T160000
SUMMARY:Seminar Analysis: Fabio Punzo (Politecnico di Milano)
DESCRIPTION:We discuss existence and uniqueness of very weak solutions of the Cauchy problem for the porous medium equation on Cartan–Hadamard manifolds satisfying suitable lower bounds on Ricci curvature\, with initial data that can grow at infinity at a prescribed rate\, which depends crucially on the curvature bounds. Furthermore\, we give a precise estimate for the maximal existence time\, and we show that in general solutions do not exist if the initial data grow at infinity too fast. Such results have been recently obtained jointly with G. Grillo and M. Muratori.
DTEND;TZID=Europe/Zurich:20171016T180000 END:VEVENT BEGIN:VEVENT UID:news431@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181217T194632 DTSTART;TZID=Europe/Zurich:20170510T141500 SUMMARY:Seminar
Analysis: Stefano Spirito (University of L'Aquila) DESCRIPTION:In this talk I will present a result concerning the global existence of finite energy weak solutions of the quantum Navier-Stokes equations. The novelty of the result is that we are able to consider the vacuum in the definition of weak solutions. The main tools are a new formulation of the equations which allows us to get an additional a priori estimate to prove compactness\, and a non-trivial choice of the approximation system consistent with the a priori estimates.\\r\\nThis is a joint work with Paolo Antonelli (GSSI) X-ALT-DESC:\nIn this talk I will present a result concerning the global existence of finite energy weak solutions of the quantum Navier-Stokes equations. The novelty of the result is that we are able to consider the vacuum in the definition of weak solutions. The main tools are a new formulation of the equations which allows us to get an additional a priori estimate to prove compactness\, and a non-trivial choice of the approximation system consistent with the a priori estimates.\nThis is a joint work with Paolo Antonelli (GSSI) DTEND;TZID=Europe/Zurich:20170510T151500 END:VEVENT
BEGIN:VEVENT UID:news430@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181217T194233 DTSTART;TZID=Europe/Zurich:20170503T141500 SUMMARY:Seminar Analysis: Joaquim Serra (ETH Zurich) DESCRIPTION:We will
introduce some recent results in collaboration with L. Caffarelli and X. Ros-Oton on the optimal regularity of the solutions and the regularity of the free boundaries (near regular points) for nonlocal obstacle problems. The main novelty is that we obtain results for operators different from the fractional Laplacian. Indeed\, we can consider infinitesimal generators of non-rotationally-invariant stable Lévy processes. X-ALT-DESC:\nWe will introduce some recent results in collaboration with L. Caffarelli and X. Ros-Oton on the optimal regularity of the solutions and the regularity of the free boundaries (near regular points) for nonlocal obstacle problems. The main novelty is that we obtain results for operators different from the fractional Laplacian. Indeed\, we can consider infinitesimal generators of non-rotationally-invariant stable Lévy processes. DTEND;TZID=Europe/Zurich:20170503T151500 END:VEVENT BEGIN:VEVENT UID:news429@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20181217T193706 DTSTART;TZID=Europe/Zurich:20170419T141500 SUMMARY:Seminar Analysis: Dominik Himmelsbach (University of Basel) DESCRIPTION:In this talk\, we give sufficient criteria for blowup
of solutions to nonlocal Schrödinger equations with focusing power-type nonlinearity. To give an outline of the arguments used in the proof\, let us mainly focus on the mass-supercritical problem posed on the whole space R^n with prescribed radial initial datum of negative energy.\\r\\nThis is a joint work with Thomas Boulenger and Enno Lenzmann X-ALT-DESC:\nIn this talk\, we give sufficient criteria for blowup of solutions to nonlocal Schrödinger equations with focusing power-type nonlinearity. To give an outline of the arguments used in the proof\, let us mainly focus on the mass-supercritical problem posed on the whole space R^n with prescribed radial initial datum of negative energy.\nThis is a joint work with Thomas Boulenger and Enno Lenzmann DTEND;TZID=Europe/
Zurich:20170419T151500 END:VEVENT BEGIN:VEVENT UID:news428@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181217T192947 DTSTART;TZID=Europe/Zurich:20170405T141500 SUMMARY:Seminar Analysis: Filip Rindler
(University of Warwick) DESCRIPTION:The classical Rademacher Theorem asserts that every Lipschitz function is differentiable almost everywhere with respect to Lebesgue measure. On the other hand\, Preiss (’90) gave a surprising example of a null set in the plane such that every Lipschitz function is differentiable at at least one point of this set. Thus\, it is a natural question to ask whether there exists a singular measure such that all Lipschitz functions are differentiable with respect to this singular measure. It turns out that this question has an intricate connection to the geometric structure of normal one-currents. In this talk I will present a converse to Rademacher’s Theorem\, which settles the question in the negative in all dimensions: if a positive measure μ has the property that all Lipschitz functions are μ-a.e. differentiable\, then μ is absolutely continuous with respect to Lebesgue measure (in the plane\, this question was already solved by Alberti\, Csornyei and Preiss in ’05). In a geometric context\, Cheeger conjectured in ’99 that in all Lipschitz differentiability spaces (which are essentially Lipschitz manifolds in which Rademacher’s Theorem holds) there is likewise a “functional converse” to Rademacher’s Theorem. As the second main result\, I will present a recent solution to this conjecture. Technically\, the proofs of both of these theorems are based on a recent structure result for the singular parts of PDE-constrained measures\, its corollary on the structure of normal one-currents\, and the powerful theory of Alberti representations.\\r\\nThis is a joint work with A. Marchese and G. De Philippis X-ALT-DESC:\nThe classical Rademacher Theorem asserts that every Lipschitz function is differentiable almost everywhere with respect to Lebesgue measure. On the other hand\, Preiss (’90) gave a surprising example of a null set in the plane such that every Lipschitz function is differentiable at at least one point of this set. Thus\, it is a natural question to ask whether there exists a singular measure such that all Lipschitz functions are differentiable with respect to this singular measure. It turns out that this question has an intricate connection to the geometric structure of normal one-currents. In this talk I will present a converse to Rademacher’s Theorem\, which settles the question in the negative in all dimensions: if a positive measure μ has the property that all Lipschitz functions are μ-a.e. differentiable\, then μ is absolutely continuous with respect to Lebesgue measure (in the plane\, this question was already solved by Alberti\, Csornyei and Preiss in ’05). In a geometric context\, Cheeger conjectured in ’99 that in all Lipschitz differentiability spaces (which are essentially Lipschitz manifolds in which Rademacher’s Theorem holds) there is likewise a “functional converse” to Rademacher’s Theorem. As the second main result\, I will present a recent solution to this conjecture. Technically\, the proofs of both of these theorems are based on a recent structure result for the singular parts of PDE-constrained measures\, its corollary on the structure of normal one-currents\, and the powerful theory of Alberti representations.\nThis is a joint work with A. Marchese and G. De Philippis DTEND;TZID=Europe/
Zurich:20170405T151500 END:VEVENT BEGIN:VEVENT UID:news427@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181217T192223 DTSTART;TZID=Europe/Zurich:20170329T141500 SUMMARY:Seminar Analysis: Katarzyna
Mazowiecka (University of Freiburg & University of Warsaw) DESCRIPTION:We investigate a fractional notion of gradient and divergence operator. We generalize the div-curl estimate by Coifman-Lions-Meyer-Semmes to fractional div-curl quantities. We demonstrate how these quantities appear naturally in nonlocal geometric equations\, which can be used to obtain a regularity theory for fractional harmonic maps and critical systems with nonlocal antisymmetric potential.\\r\\nThis is a joint work with Armin Schikorra X-ALT-DESC:\nWe investigate a fractional notion of gradient and divergence operator. We generalize the div-curl estimate by Coifman-Lions-Meyer-Semmes to fractional div-curl quantities. We demonstrate how these quantities appear naturally in nonlocal geometric equations\, which can be used to obtain a regularity theory for fractional harmonic maps and critical systems with nonlocal antisymmetric potential.\nThis is a joint work with Armin Schikorra DTEND;TZID=Europe/Zurich:20170329T151500 END:VEVENT BEGIN:VEVENT UID:news426@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181217T191055 DTSTART;TZID=Europe/Zurich:20170315T141500
SUMMARY:Seminar Analysis: Christopher Hopper (Aalto University) DESCRIPTION:We prove partial regularity for local minimisers of certain strictly quasiconvex integral functionals\, over a class of Sobolev mappings into a compact Riemannian manifold\, to which such mappings are said to be holonomically constrained. Several applications to variational problems in condensed matter physics with broken symmetries will also be discussed\, related to the manifold constraint condition. X-ALT-DESC:\nWe prove partial regularity for local minimisers of certain strictly quasiconvex integral functionals\, over a class of Sobolev mappings into a compact Riemannian manifold\, to which such mappings are said to be holonomically constrained. Several applications to variational problems in condensed matter physics with broken symmetries will also be discussed\, related to the manifold constraint condition. DTEND;TZID=Europe/Zurich:20170315T151500 END:VEVENT BEGIN:VEVENT
UID:news425@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181217T192300 DTSTART;TZID=Europe/Zurich:20170301T141500 SUMMARY:Seminar Analysis: Isabella Ianni (University of Naples II) DESCRIPTION:We consider the semilinear Lane-Emden problem (E_p):\\r\\n-Δu = |u|^{p-1}u in B\, u = 0 on ∂B\\r\\nwhere B is the unit ball of R^N\, N≥3\, centered at the origin\, and 1 < p < p_S\, p_S = (N+2)/(N−2). We compute the Morse index of any radial solution u_p of (E_p)\, for p sufficiently close to p_S. The proof exploits the asymptotic behavior of u_p as p→p_S and the analysis of a limit eigenvalue problem.\\r\\nThis is a joint work with F. De Marchis and F. Pacella X-ALT-DESC:\nWe consider the semilinear Lane-Emden problem (E_p):\n-Δu = |u|^{p-1}u in B\, u = 0 on ∂B\nwhere B is the unit ball of R^N\, N≥3\, centered at the origin\, and 1 < p < p_S\, p_S = (N+2)/(N−2). We compute the Morse index of any radial solution u_p of (E_p)\, for p sufficiently close to p_S. The proof exploits the asymptotic behavior of u_p as p→p_S and the analysis of a limit eigenvalue problem.\nThis is a joint work with F. De Marchis and F. Pacella DTEND;TZID=Europe/Zurich:20170301T151500 END:VEVENT BEGIN:VEVENT UID:news437@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20181219T165220 DTSTART;TZID=Europe/Zurich:20161214T141500 SUMMARY:Seminar Analysis: Aleks Jevnikar (University of Rome\, Tor Vergata) DESCRIPTION:A class of Liouville equations and systems on compact surfaces is considered: we focus on the Toda system\, which is motivated in mathematical physics by the study of models in non-abelian Chern-Simons theory and in geometry by the description of holomorphic curves in complex analysis. We discuss its variational aspects\, which yield existence results. X-ALT-DESC:\nA class of Liouville equations and systems on compact surfaces is considered: we focus on the Toda system\, which is motivated in mathematical physics by the study of models in non-abelian Chern-Simons theory and in geometry by the description of holomorphic curves in complex analysis. We discuss its variational aspects\, which yield existence results. DTEND;TZID=Europe/Zurich:20161214T141500 END:VEVENT BEGIN:VEVENT UID:news436@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20181219T165028 DTSTART;TZID=Europe/Zurich:20161130T141500 SUMMARY:Seminar Analysis: Daniel Ueltschi (University of Warwick) DESCRIPTION:The basic laws governing atoms and electrons are well understood\, but it is impossible to make predictions about the behaviour of large systems of condensed matter physics. A popular approach is to introduce simple models and to use notions of statistical mechanics. I will review quantum spin systems and their stochastic representations in terms of random permutations and random loops. I will also describe the ‘universal’ behaviour that is common to loop models in dimensions 3 and more. X-ALT-DESC:\nThe basic laws governing atoms and electrons are well understood\, but it is impossible to make predictions about the behaviour of large systems of condensed matter physics. A popular approach is to introduce simple models and to use notions of statistical mechanics. I will review quantum spin systems and their stochastic representations in terms of random permutations and random loops. I will also describe the ‘universal’ behaviour that is common to loop models in dimensions 3 and more. DTEND;TZID=Europe/
Zurich:20161130T151500 END:VEVENT BEGIN:VEVENT UID:news432@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181217T201142 DTSTART;TZID=Europe/Zurich:20161123T141500 SUMMARY:Seminar Analysis: Luca Battaglia
(University of Rome\, La Sapienza) DESCRIPTION:I will discuss the existence of groundstate solutions for the Choquard equation in the whole space R^N. I will first consider the case of a homogeneous nonlinearity F(u) = |u|^p\, then I will prove the existence of solutions under general hypotheses. In particular\, the cases N=2 and N≥3 will have to be treated differently. The solutions are found through a variational mountain pass strategy. X-ALT-DESC:\nI will discuss the existence of groundstate solutions for the Choquard equation in the whole space R^N. I will first consider the case of a homogeneous nonlinearity F(u) = |u|^p\, then I will prove the existence of solutions under general hypotheses. In particular\, the cases N=2 and N≥3 will have to be treated differently. The solutions are found through a variational mountain pass strategy. DTEND;TZID=Europe/Zurich:20161123T151500 END:VEVENT BEGIN:VEVENT UID:news445@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181219T172051
DTSTART;TZID=Europe/Zurich:20160601T141500 SUMMARY:Seminar Analysis: Paolo Bonicatto (SISSA Trieste) DESCRIPTION:Given a bounded\, autonomous vector field b: R^2 → R^2\, we study the uniqueness of bounded solutions to the initial value problem for the related transport equation\\r\\n(1) ∂_t u + b · ∇u = 0.\\r\\nWe prove that uniqueness of weak solutions holds under the assumptions that b is of class BV and is nearly incompressible. Our proof is based on a splitting technique (introduced previously by Alberti\, Bianchini and Crippa) that allows us to reduce (1) to a family of one-dimensional equations which can be solved explicitly\, thus yielding uniqueness for the original problem.\\r\\nIn order to carry out this program\, we use the Disintegration Theorem and known results on the structure of level sets of Lipschitz maps: this is done after a suitable localization of the problem\, in which we also exploit Ambrosio’s superposition principle.\\r\\nThis is joint work with S. Bianchini and N. A. Gusev. X-ALT-DESC:\nGiven a bounded\, autonomous vector field b: R^2 → R^2\, we study the uniqueness of bounded solutions to the initial value problem for the related transport equation\n(1) ∂_t u + b · ∇u = 0.\nWe prove that uniqueness of weak solutions holds under the assumptions that b is of class BV and is nearly incompressible. Our proof is based on a splitting technique (introduced previously by Alberti\, Bianchini and Crippa) that allows us to reduce (1) to a family of one-dimensional equations which can be solved explicitly\, thus yielding uniqueness for the original problem.\nIn order to carry out this program\, we use the Disintegration Theorem and known results on the structure of level sets of Lipschitz maps: this is done after a suitable localization of the problem\, in which we also exploit Ambrosio’s superposition principle.\nThis is joint work with S. Bianchini and N. A. Gusev. DTEND;TZID=
Europe/Zurich:20160601T151500 END:VEVENT BEGIN:VEVENT UID:news444@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181219T171718 DTSTART;TZID=Europe/Zurich:20160511T141500 SUMMARY:Seminar Analysis: Albert
Clop (UAB Barcelona) DESCRIPTION:We will explain two results about (linear and nonlinear) transport equations\, quasiconformal maps\, and vector fields with unbounded divergence. Originally\, these results were motivated by a difficult problem on Muckenhoupt weights and elliptic PDE. However\, classical harmonic analysis tools allow us to reformulate this problem in variational BMO terms\, and then a theorem by H. M. Reimann naturally brings in the connection to transport theory. X-ALT-DESC:\nWe will explain two results about (linear and nonlinear) transport equations\, quasiconformal maps\, and vector fields with unbounded divergence. Originally\, these results were motivated by a difficult problem on Muckenhoupt weights and elliptic PDE. However\, classical harmonic analysis tools allow us to reformulate this problem in variational BMO terms\, and then a theorem by H. M. Reimann naturally brings in the connection to transport theory. DTEND;TZID=Europe/
Zurich:20160511T151500 END:VEVENT BEGIN:VEVENT UID:news443@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181219T171215 DTSTART;TZID=Europe/Zurich:20160504T141500 SUMMARY:Seminar Analysis: Renato Lucà
(ICMAT Madrid) DESCRIPTION:We will discuss some regularity properties of weak solutions to the three-dimensional Navier–Stokes equation. We will first recall the classical partial regularity theory\, developed by Scheffer and later by Caffarelli–Kohn–Nirenberg. Then we will present some new results in both the small data and perturbative frameworks. X-ALT-DESC:\nWe will discuss some regularity properties of weak solutions to the three-dimensional Navier–Stokes equation. We will first recall the classical partial regularity theory\, developed by Scheffer and later by Caffarelli–Kohn–Nirenberg. Then we will present some new results in both the small data and perturbative frameworks. DTEND;TZID=Europe/Zurich:20160504T151500 END:VEVENT
BEGIN:VEVENT UID:news442@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181219T170855 DTSTART;TZID=Europe/Zurich:20160427T141500 SUMMARY:Seminar Analysis: Laura Caravenna (Università degli Studi di Padova) DESCRIPTION:We consider continuous solutions to the single balance law ∂_t u + ∂_x(f(u)) = g\, with g bounded and f ∈ C^2. We discuss correspondences among the source terms in the Eulerian and Lagrangian settings\, extending previous works relative to the flux f(u) = u^2 when possible. Counterexamples point out a new behavior of solutions when f is non-convex and when the set of inflection points of f is not negligible\, stressing the difference between the Lagrangian and Eulerian formulations in this context.\\r\\nThis is a joint work with G. Alberti and S. Bianchini. X-ALT-DESC:\nWe consider continuous solutions to the single balance law ∂_t u + ∂_x(f(u)) = g\, with g bounded and f ∈ C^2. We discuss correspondences among the source terms in the Eulerian and Lagrangian settings\, extending previous works relative to the flux f(u) = u^2 when possible. Counterexamples point out a new behavior of solutions when f is non-convex and when the set of inflection points of f is not negligible\, stressing the difference between the Lagrangian and Eulerian formulations in this context.\nThis is a joint work with G. Alberti and S. Bianchini. DTEND;TZID=Europe/
Zurich:20160427T151500 END:VEVENT BEGIN:VEVENT UID:news441@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181219T170545 DTSTART;TZID=Europe/Zurich:20160420T141500 SUMMARY:Seminar Analysis: Francesco
Ghiraldin (MPI Leipzig) DESCRIPTION:In order to obtain uniqueness for solutions of scalar conservation laws with discontinuous flux\, Kruzhkov’s entropy conditions are not enough and additional dissipation conditions have to be imposed on the discontinuity set of the flux. Understanding these conditions requires studying the structure of solutions on the discontinuity set. I will show that under quite general assumptions on the flux\, solutions admit traces on the discontinuity set of the flux. This allows us to show that any pair of solutions satisfies a Kato-type inequality with an explicit remainder term concentrated on the discontinuities of the flux. Applications to uniqueness are then discussed. X-ALT-DESC:\nIn order to obtain uniqueness for solutions of scalar conservation laws with discontinuous flux\, Kruzhkov’s entropy conditions are not enough and additional dissipation conditions have to be imposed on the discontinuity set of the flux. Understanding these conditions requires studying the structure of solutions on the discontinuity set. I will show that under quite general assumptions on the flux\, solutions admit traces on the discontinuity set of the flux. This allows us to show that any pair of solutions satisfies a Kato-type inequality with an explicit remainder term concentrated on the discontinuities of the flux. Applications to uniqueness are then discussed. DTEND;TZID=Europe/Zurich:20160420T151500 END:VEVENT BEGIN:VEVENT UID:news440@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181219T170316 DTSTART;TZID=Europe/Zurich:20160316T141500
SUMMARY:Seminar Analysis: Elio Marconi (SISSA Trieste) DESCRIPTION:After a brief overview of the classical well-posedness result for scalar conservation laws\, we investigate the structure of bounded solutions. In particular\, we prove that the entropy dissipation measure is concentrated on a countably 1-rectifiable set. In order to prove this result we introduce the notion of Lagrangian representation of the solution.\\r\\nThis is a joint work with Stefano Bianchini. X-ALT-DESC:\nAfter a brief overview of the classical well-posedness result for scalar conservation laws\, we investigate the structure of bounded solutions. In particular\, we prove that the entropy dissipation measure is concentrated on a countably 1-rectifiable set. In order to prove this result we introduce the notion of Lagrangian representation of the solution.\nThis is a joint work with Stefano Bianchini. DTEND;TZID=Europe/Zurich:20160316T141500 END:VEVENT BEGIN:VEVENT
UID:news439@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181219T170101 DTSTART;TZID=Europe/Zurich:20160309T141500 SUMMARY:Seminar Analysis: Christian Zillinger (Universität Bonn) DESCRIPTION:The Euler equations of fluid dynamics are time-reversible equations and possess many conserved quantities\, including the kinetic energy and entropy. Furthermore\, as shown by Arnold\, they even have the structure of an infinite-dimensional Hamiltonian system. Despite these facts\, in experiments one observes a damping phenomenon for small velocity perturbations to monotone shear flows\, where the perturbations decay with algebraic rates. In this talk\, I discuss the underlying phase-mixing mechanism of linear inviscid damping\, its mathematical challenges\, and how to establish decay with optimal rates for a general class of monotone shear flows. Here\, a particular focus will be on the setting of a channel with impermeable walls\, where boundary effects asymptotically result in the formation of singularities. X-ALT-DESC:\nThe Euler equations of fluid dynamics are time-reversible equations and possess many conserved quantities\, including the kinetic energy and entropy. Furthermore\, as shown by Arnold\, they even have the structure of an infinite-dimensional Hamiltonian system. Despite these facts\, in experiments one observes a damping phenomenon for small velocity perturbations to monotone shear flows\, where the perturbations decay with algebraic rates. In this talk\, I discuss the underlying phase-mixing mechanism of linear inviscid damping\, its mathematical challenges\, and how to establish decay with optimal rates for a general class of monotone shear flows. Here\, a particular focus will be on the setting of a channel with impermeable walls\, where boundary effects asymptotically result in the formation of singularities. DTEND;TZID=Europe/Zurich:20160309T151500 END:VEVENT BEGIN:VEVENT UID:news438@dmi.unibas.ch DTSTAMP;TZID=
Europe/Zurich:20181219T165527 DTSTART;TZID=Europe/Zurich:20160224T141500 SUMMARY:Seminar Analysis: Simon Blatt (Universität Salzburg) DESCRIPTION:While the Willmore energy is invariant under Möbius transformations\, its negative L^2-gradient flow is not\, simply because the L^2-scalar product used in its definition does not have this invariance. In this talk we present Möbius-invariant versions of the Willmore flow\, picking up ideas of Ruben Jakob and Oded Schramm. We will discuss its uses and limitations and prove well-posedness of the Cauchy problem and attractivity of local minimizers. X-ALT-DESC:\nWhile the Willmore energy is invariant under Möbius transformations\, its negative L^2-gradient flow is not\, simply because the L^2-scalar product used in its definition does not have this invariance. In this talk we present Möbius-invariant versions of the Willmore flow\, picking up ideas of Ruben Jakob and Oded Schramm. We will discuss its uses and limitations and prove well-posedness of the Cauchy problem and attractivity of local minimizers. DTEND;TZID=Europe/Zurich:20160224T141500 END:VEVENT BEGIN:VEVENT UID:news457@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20181227T175842 DTSTART;TZID=Europe/Zurich:20151216T141500 SUMMARY:Seminar Analysis: Alessandra Pluda (Università di Pisa) DESCRIPTION:I consider the motion by curvature of a network of curves in the Euclidean plane and I discuss existence\, uniqueness\, and asymptotic behavior of the evolution. In particular\, I focus on two model cases: a regular embedded network composed of three curves with fixed endpoints (triod) and a regular embedded network composed of two curves\, one of which is closed (spoon). After discussing the state of the art of the problem\, I will present some new and possibly “incoming” results obtained with Carlo Mantegazza and Matteo Novaga. X-ALT-DESC:\nI consider the motion by curvature of a network of curves in the Euclidean plane and I discuss existence\, uniqueness\, and asymptotic behavior of the evolution. In particular\, I focus on two model cases: a regular embedded network composed of three curves with fixed endpoints (triod) and a regular embedded network composed of two curves\, one of which is closed (spoon). After discussing the state of the art of the problem\, I will present some new and possibly “incoming” results obtained with Carlo Mantegazza and Matteo Novaga. DTEND;TZID=Europe/Zurich:20151216T151500 END:VEVENT BEGIN:VEVENT UID:news456@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181227T175828
DTSTART;TZID=Europe/Zurich:20151209T141500 SUMMARY:Seminar Analysis: Federica Sani (Università di Milano) DESCRIPTION:The Trudinger-Moser inequality is a substitute for the well-known Sobolev embedding theorem when the limiting case is considered. We discuss Moser-type inequalities in the whole space which involve complete and reduced Sobolev norms. Then we investigate the optimal growth rate of the exponential-type function\, both in the first-order case and in the higher-order case. X-ALT-DESC:\nThe Trudinger-Moser inequality is a substitute for the well-known Sobolev embedding theorem when the limiting case is considered. We discuss Moser-type inequalities in the whole space which involve complete and reduced Sobolev norms. Then we investigate the optimal growth rate of the exponential-type function\, both in the first-order case and in the higher-order case. DTEND;TZID=Europe/Zurich:20151209T151500 END:VEVENT BEGIN:VEVENT UID:news455@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20181227T174629 DTSTART;TZID=Europe/Zurich:20151202T141500 SUMMARY:Seminar Analysis: Alessandro Carlotto (ETH-ITS Zürich) DESCRIPTION:Given a closed Riemannian 3-manifold (N\, g) without symmetries (more precisely: generic) and a non-negative integer p\, can we say something about the number of minimal surfaces it contains whose Morse index is bounded by p? More realistically\, can we prove that such a number is necessarily finite? This is the classical “generic finiteness” problem\, which has a rich history and exhibits interesting subtleties even in its basic counterpart concerning closed geodesics on surfaces. We settle this question when g is a bumpy metric of positive scalar curvature by proving that either finiteness holds or N contains a copy of RP^3 in its prime decomposition\, and we discuss the obstructions to any further generalisation of this result. When g is assumed to be strongly bumpy (meaning that all closed\, immersed minimal surfaces have no Jacobi fields\, a notion recently proved to be generic by White)\, the finiteness conclusion is true for any compact 3-manifold without boundary. X-ALT-DESC:\nGiven a closed Riemannian 3-manifold (N\, g) without symmetries (more precisely: generic) and a non-negative integer p\, can we say something about the number of minimal surfaces it contains whose Morse index is bounded by p? More realistically\, can we prove that such a number is necessarily finite? This is the classical “generic finiteness” problem\, which has a rich history and exhibits interesting subtleties even in its basic counterpart concerning closed geodesics on surfaces. We settle this question when g is a bumpy metric of positive scalar curvature by proving that either finiteness holds or N contains a copy of RP^3 in its prime decomposition\, and we discuss the obstructions to any further generalisation of this result. When g is assumed to be strongly bumpy (meaning that all closed\, immersed minimal surfaces have no Jacobi fields\, a notion recently proved to be generic by White)\, the finiteness conclusion is true for any compact 3-manifold without boundary. DTEND;
TZID=Europe/Zurich:20151202T151500 END:VEVENT BEGIN:VEVENT UID:news454@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181227T174021 DTSTART;TZID=Europe/Zurich:20151125T141500 SUMMARY:Seminar Analysis:
Yash Jhaveri (University of Texas at Austin) DESCRIPTION:It is well known that for many second-order PDEs the solution v gains two derivatives with respect to the right-hand side g in Hölder spaces. Often\, however\, it is useful to have a quantitative understanding of regularity. In ’89\, Caffarelli proved interior a priori estimates for fully nonlinear\, uniformly elliptic equations. Specifically\, he showed that ‖v‖_{C^{2\,α}(B_{1/2})} ≤ C(‖v‖_{L^∞(B_{1})} + ‖g‖_{C^α(B_{1})}) and C ∼ 1/α as α→0. The natural question to ask is then: can one extend such quantitative estimates to other equations? An equation that appears frequently in analysis\, geometry\, and applications is the Monge-Ampère equation det(D^2 u) = f. The Monge-Ampère equation enjoys the same qualitative regularity gains as its linear counterpart\, the Poisson equation\, in the appropriate setting\, and so we ask whether or not the quantitative picture is also the same. This is not the case. In this talk\, we will first review Caffarelli’s interior a priori estimates. Then\, we will move to the Monge-Ampère equation and see a different picture.\\r\\n(Joint work with Alessio Figalli and Connor Mooney) X-ALT-DESC:\nIt is well known that for many second-order PDEs the solution v gains two derivatives with respect to the right-hand side g in Hölder spaces. Often\, however\, it is useful to have a quantitative understanding of regularity. In ’89\, Caffarelli proved interior a priori estimates for fully nonlinear\, uniformly elliptic equations. Specifically\, he showed that ‖v‖_{C^{2\,α}(B_{1/2})} ≤ C(‖v‖_{L^∞(B_{1})} + ‖g‖_{C^α(B_{1})}) and C ∼ 1/α as α→0. The natural question to ask is then: can one extend such quantitative estimates to other equations? An equation that appears frequently in analysis\, geometry\, and applications is the Monge-Ampère equation det(D^2 u) = f. The Monge-Ampère equation enjoys the same qualitative regularity gains as its linear counterpart\, the Poisson equation\, in the appropriate setting\, and so we ask whether or not the quantitative picture is also the same. This is not the case. In this talk\, we will first review Caffarelli’s interior a priori estimates. Then\, we will move to the Monge-Ampère equation and see a different picture.\n(Joint work with Alessio Figalli and Connor Mooney) DTEND;TZID=Europe/Zurich:20151125T151500 END:VEVENT BEGIN:VEVENT
UID:news453@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181227T172829 DTSTART;TZID=Europe/Zurich:20151118T141500 SUMMARY:Seminar Analysis: Lorenzo Brasco (Aix-Marseille Université & University of Ferrara) DESCRIPTION:In this talk\, I will review some regularity results for weak solutions of nonlocal variants of the p-Laplace equation. The model case is given by the Euler-Lagrange equation of an Aronszajn–Gagliardo–Slobodeckij seminorm. In particular\, I will present a higher differentiability result for solutions\, recently obtained in collaboration with Erik Lindgren (KTH). I will also discuss some connections of these equations to an Optimal Transport problem with congestion effects. DTEND;TZID=Europe/Zurich:20151118T141500 END:VEVENT BEGIN:VEVENT UID:news452@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181227T172253 DTSTART;TZID=Europe/Zurich:20151111T141500
SUMMARY:Seminar Analysis: Zoltán Balogh (Universität Bern) DESCRIPTION:A pair of metric spaces (X\;Y) is said to have the Lipschitz extension property if any Lipschitz map from a subset of X into Y can be extended to a globally defined Lipschitz map to the whole space X. In this talk I will first recall some classical extension results for spaces with a linear structure\, and I will present recent results for the case when the target space Y is the Heisenberg group. DTEND;TZID=Europe/Zurich:20151111T151500 END:VEVENT BEGIN:VEVENT UID:news451@dmi.unibas.ch DTSTAMP;
TZID=Europe/Zurich:20181227T171825 DTSTART;TZID=Europe/Zurich:20151104T141500 SUMMARY:Seminar Analysis: Antti Knowles (ETH Zürich) DESCRIPTION:I discuss results on local eigenvalue statistics for random regular graphs. Under mild growth assumptions on the degree\, we prove that the local semicircle law holds at the optimal scale\, and that the bulk eigenvalue statistics coincide with those of the GOE from random matrix theory.\r\n(Joint work with R. Bauerschmidt\, J. Huang and H.-T. Yau.) DTEND;TZID=Europe/Zurich:20151104T151500 END:VEVENT BEGIN:VEVENT UID:news450@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20181227T171033 DTSTART;TZID=Europe/Zurich:20151014T141500 SUMMARY:Seminar Analysis: Christian Seis (Universität Bonn) DESCRIPTION:We investigate the speed of convergence and higher-order asymptotics of solutions to the porous medium equation. Applying a nonlinear change of variables\, we rewrite the equation as a diffusion on a fixed domain with quadratic nonlinearity. The degeneracy is cured by viewing the dynamics on a hypocycloidic manifold. It is in this framework that we can prove a differentiable dependency of solutions on the initial data\, and thus\, dynamical systems methods are applicable. Our main result is the construction of invariant manifolds in the phase space of solutions which are tangent at the origin to the eigenspaces of the linearized equation. We show how these invariant manifolds can be used to extract information on higher-order long-time asymptotic expansions. DTEND;TZID=Europe/Zurich:20151014T151500 END:VEVENT
BEGIN:VEVENT UID:news449@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181227T165613 DTSTART;TZID=Europe/Zurich:20151007T141500 SUMMARY:Seminar Analysis: Cheikh Ndiaye (Universität Tübingen) DESCRIPTION:In this talk\, we will present our recent solutions of the remaining cases of the boundary Yamabe problem and the Riemann mapping problem asked by Escobar in 1992. Rather than discussing our arguments of proofs\, we will focus more on explaining the barycenter technique of Bahri-Coron which we employ. We hope by doing this to allow an easier understanding for the audience\, since it seems to us that\, even among experts\, the barycenter technique is not as well known as the minimizing technique of Aubin-Schoen. Moreover\, we also hope the audience will see how naturally the barycenter technique fits into conformally invariant variational problems verifying the structure of quantization and strong interaction phenomena.\r\n(Joint work with M. Mayer of University of Giessen) DTEND;TZID=Europe/Zurich:20151007T151500 END:VEVENT BEGIN:VEVENT UID:news448@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181227T165017 DTSTART;TZID=Europe/Zurich:20150930T141500
SUMMARY:Seminar Analysis: Kunnath Sandeep (TIFR CAM Bangalore) DESCRIPTION:In this talk we will discuss the classical Adams inequality and its versions in the hyperbolic space. We will also discuss the hyperbolic versions of Adachi-Tanaka type inequalities and the exact growth. DTEND;TZID=Europe/Zurich:20150930T151500 END:VEVENT BEGIN:VEVENT UID:news447@dmi.unibas.ch DTSTAMP;
TZID=Europe/Zurich:20181227T164404 DTSTART;TZID=Europe/Zurich:20150923T141500 SUMMARY:Seminar Analysis: Stefano Spirito (GSSI) DESCRIPTION:In this talk we focus on a new compactness result about weak solutions of the quantum Navier-Stokes equations. The novelty of the result is that we are able to consider the vacuum in the definition of weak solutions. The main tool is a new formulation of the equations which allows us to get an additional a priori estimate to prove compactness. Some remarks concerning the choice of the approximation system to get global existence will be made.\r\n(Joint work with Paolo Antonelli) DTEND;TZID=Europe/Zurich:20150923T151500 END:VEVENT
BEGIN:VEVENT UID:news468@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181228T180221 DTSTART;TZID=Europe/Zurich:20150527T151500 SUMMARY:Seminar Analysis: Olivier Druet (University Lyon 1) DESCRIPTION:I will survey recent results on the Einstein-Lichnerowicz constraints system which appears in general relativity when trying to formulate the Cauchy problem for the Einstein equation coupled with a scalar field. I will discuss existence\, uniqueness\, compactness and stability for this system. This is a joint work with Bruno Premoselli. DTEND;TZID=Europe/Zurich:20150527T161500 END:VEVENT BEGIN:VEVENT
UID:news467@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181228T175942 DTSTART;TZID=Europe/Zurich:20150520T151500 SUMMARY:Seminar Analysis: Antoine Choffrut (University of Edinburgh) DESCRIPTION:The following dichotomy between rigidity and flexibility is now well known in geometry: while uniqueness holds for smooth solutions to the isometric embedding problem\, the set of solutions becomes unimaginably large if one allows rough ones. What is surprising is that this dichotomy holds for problems coming from mathematical physics\, and in particular the Euler equations of fluid dynamics. In this (mainly expository) talk I will explain the h-principle and the method of convex integration. Convex geometry is the heart of the matter\, and profuse figures will attempt to illustrate the difficulties and how to tame them. DTEND;TZID=Europe/Zurich:20150520T161500 END:VEVENT BEGIN:VEVENT UID:news466@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20181228T175531 DTSTART;TZID=Europe/Zurich:20150513T151500 SUMMARY:Seminar Analysis: Gabriele Mancini (SISSA/ISAS) DESCRIPTION:I will give a brief overview of the main results concerning topological methods for singular Liouville equations on compact surfaces\, and I will show how to extend some of them to special elliptic systems. My analysis will focus on sharp forms of the Moser-Trudinger inequality and on mass-quantization results for the SU(3) Toda system. DTEND;TZID=Europe/Zurich:20150513T161500 END:VEVENT BEGIN:VEVENT UID:news465@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181228T175120 DTSTART;TZID
=Europe/Zurich:20150506T151500 SUMMARY:Seminar Analysis: Julien Sabin (University of Paris-Sud) DESCRIPTION:We study the trace ideal properties of the Fourier restriction operator to hypersurfaces. Equivalently\, we generalize the theorems of Stein-Tomas and Strichartz to systems of orthonormal functions\, with an optimal dependence on the number of such functions. As an application\, we deduce new Strichartz inequalities describing the dispersive behaviour of the free evolution of quantum systems with an infinite number of particles. This is a joint work with Rupert Frank. DTEND;TZID=Europe/Zurich:20150506T161500 END:VEVENT BEGIN:VEVENT UID:news464@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181228T174506 DTSTART;TZID=Europe/Zurich:20150429T151500 SUMMARY:Seminar Analysis: Bernard Dacorogna (EPFL) DESCRIPTION:Given two functions f and g\, we want to find a map φ such that\r\ng(φ(x)) det∇φ(x) = f(x) for x∈Ω\, φ(x) = x for x∈∂Ω.\r\nLocal case. We first consider the (local) existence\, uniqueness and optimal regularity for the problem\r\ng_i(φ(x)) det∇φ(x) = f_i(x) for every 1≤i≤n\r\nwhere g_i·f_i > 0.\r\nGlobal case. A necessary condition is then\r\n∫_Ω f = ∫_Ω g. (1)\r\n(i) We discuss the case where g·f > 0 and give three different ideas for the existence problem with optimal regularity.\r\n(ii) We then briefly comment on the case where g > 0 but f is allowed to change sign.\r\nA problem without the condition (1). We consider a more general problem of the form\r\ndet∇φ(x) = f(x\,φ(x)\,∇φ(x)) for x∈Ω\, φ(x) = x for x∈∂Ω\,\r\nwhere no constraint of the type (1) is needed. DTEND;TZID=Europe/Zurich:20150429T161500 END:VEVENT BEGIN:VEVENT UID:news463@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20181228T172110 DTSTART;TZID=Europe/Zurich:20150422T151500 SUMMARY:Seminar Analysis: Laura Spinolo (IMATI-CNR\, Pavia) DESCRIPTION:In 1973 Schaeffer established a result that applies to scalar conservation laws with convex fluxes and can be\, loosely speaking\, formulated as follows: for a generic smooth initial datum\, the admissible solution is smooth outside a locally finite number of curves in the (t\,x) plane. Here the term ”generic” should be interpreted in a suitable technical sense\, related to the Baire Category Theorem. My talk will aim at discussing a recent explicit counter-example showing that Schaeffer’s Theorem does not extend to systems of conservation laws. The talk will be based on joint works with Laura Caravenna. DTEND;TZID=Europe/Zurich:20150422T161500 END:VEVENT BEGIN:VEVENT UID:news462@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181228T171614 DTSTART;TZID=Europe/
Zurich:20150415T151500 SUMMARY:Seminar Analysis: Gyula Csató (Technische Universität Dortmund) DESCRIPTION:The standard isoperimetric inequality states that among all sets with a given fixed volume (or area in dimension 2) the ball has the smallest perimeter. That is\, written here for simplicity in dimension 2\, the following infimum is attained by the ball\r\n2πR = inf{ ∫_∂Ω 1 dσ(x) : Ω⊂R^2 and ∫_Ω 1 dx = πR^2 }.\r\nThe isoperimetric problem with density is a generalization of this question: given two positive functions f\,g:R^2→R one studies the existence of minimizers of\r\nI(C) = inf{ ∫_∂Ω g(x) dσ(x) : Ω⊂R^2 and ∫_Ω f(x) dx = C }.\r\nI will mainly talk about the situation when f(x) = |x|^q and g(x) = |x|^p. This is a rich problem with strong variations in difficulty depending on the values of p and q. Some cases are still an open problem. One case has an interesting application related to the Moser-Trudinger imbedding. I will also mention the situation when f = g = e^ψ is strictly positive and radial\, which leads to the log-convex density conjecture. DTEND;TZID=Europe/Zurich:20150415T161500 END:VEVENT BEGIN:VEVENT UID:news461@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181228T170629
DTSTART;TZID=Europe/Zurich:20150408T151500 SUMMARY:Seminar Analysis: Esther Cabezas-Rivas (Goethe University Frankfurt) DESCRIPTION:Almost flat manifolds are the solutions of bounded size perturbations of the equation Sec = 0 (Sec is the sectional curvature). In a celebrated theorem\, Gromov proved that the presence of an almost flat metric implies a precise topological description of the underlying manifold.\r\nIntegral pinching theorems express curvature assumptions in terms of certain L^p-norms and try to deduce topological conclusions. But typically one needs to require p > n/2\, where n is the dimension of the manifold\, to prove such rigidity theorems.\r\nDuring this talk we will explain how\, under lower sectional curvature bounds\, imposing an L^1-pinching condition on the curvature is surprisingly rigid\, leading indeed to the same conclusion as in Gromov’s theorem under more relaxed curvature conditions.\r\nThis is a joint work with B. Wilking. DTEND;TZID=Europe/Zurich:20150408T161500 END:VEVENT BEGIN:VEVENT UID:news460@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181228T165919 DTSTART;
TZID=Europe/Zurich:20150318T151500 SUMMARY:Seminar Analysis: Melanie Rupflin (Leipzig University) DESCRIPTION:Teichmüller harmonic map flow is a gradient flow of the Dirichlet energy which is designed to evolve parametrized surfaces towards critical points of the area. In this talk we will discuss the construction and some new results for this flow and show in particular that for non-positively curved targets the flow changes or decomposes arbitrary closed initial surfaces into minimal immersions (possibly with branch points) through globally defined smooth solutions. DTEND;TZID=Europe/Zurich:20150318T161500 END:VEVENT BEGIN:VEVENT UID:news459@dmi.unibas.ch DTSTAMP;TZID=
Europe/Zurich:20181228T165150 DTSTART;TZID=Europe/Zurich:20150311T151500 SUMMARY:Seminar Analysis: Giuseppe Genovese (University of Zurich) DESCRIPTION:The DNLS equation is an integrable PDE\, in the sense that there are infinitely many Hamiltonians associated to it. The aim of the talk is to present the construction of infinitely many functional measures associated to these integrals of motion of the equation\, each measure being supported on Sobolev spaces of increasing regularity. These are natural candidates to be the invariant measures associated to the DNLS equation. Invariant measures are a crucial tool in the theory of integrable PDEs\, useful e.g. to prove long time properties of regular solutions. The introductory general aspects will be reviewed and the new results on DNLS\, obtained in collaboration with R. Luc (ICMAT\, Madrid) and D. Valeri (MSC\, Beijing)\, will be presented. DTEND;TZID=Europe/Zurich:20150311T161500 END:VEVENT BEGIN:VEVENT UID:news458@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20181228T164525 DTSTART;TZID=Europe/Zurich:20150304T151500 SUMMARY:Seminar Analysis: Emmanuel Hebey (University of Cergy-Pontoise) DESCRIPTION:Stationary Kirchhoff systems in closed manifolds DTEND;TZID=Europe/Zurich:20150304T161500 END:VEVENT BEGIN:VEVENT UID:news479@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181228T232256
DTSTART;TZID=Europe/Zurich:20141210T170000 SUMMARY:Seminar Analysis: Davide Vittone (University of Padua) DESCRIPTION:We consider the area functional for graphs in the sub-Riemannian Heisenberg group and study minimizers of the associated Dirichlet problem. We prove that\, under a bounded slope condition on the boundary datum\, there exists a unique minimizer and that this minimizer is Lipschitz continuous. We also provide an example showing that\, in the first Heisenberg group\, Lipschitz regularity cannot be improved even under the bounded slope condition. This is based on a joint work with A. Pinamonti\, F. Serra Cassano and G. Treu. DTEND;TZID=Europe/Zurich:20141210T174500 END:VEVENT BEGIN:VEVENT UID:news478@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181228T232323 DTSTART;TZID=Europe/Zurich:20141210T160000
SUMMARY:Seminar Analysis: Annalisa Massaccesi (University of Zurich) DESCRIPTION:In this joint work with Giovanni Alberti\, we prove a Frobenius property for integral currents: namely\, if R = [Σ\,ξ\,θ] is a k-dimensional integral current with a simple tangent vector field ξ∈C^1(R^d\;Λ_k(R^d))\, then ξ is involutive at almost every point in Σ. This result is related to the following decomposition problem formulated by F. Morgan: given a k-dimensional normal current T\, do there exist a measure space L and a family of rectifiable currents {R_λ}_{λ∈L} such that T = ∫_L R_λ dλ and the mass decomposes consistently as M(T) = ∫_L M(R_λ) dλ? The aforementioned Frobenius property allows us to provide a counterexample to the existence of such a decomposition with a family of integral currents. DTEND;TZID=Europe/Zurich:20141210T164500
END:VEVENT BEGIN:VEVENT UID:news477@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181228T225557 DTSTART;TZID=Europe/Zurich:20141203T161500 SUMMARY:Seminar Analysis: Stefano Spirito (Gran Sasso Science Institute\, L’Aquila) DESCRIPTION:In this talk I will discuss the problem of the approximation of suitable weak solutions of the Navier-Stokes equations in the sense of Scheffer and Caffarelli-Kohn-Nirenberg. It is well-known that suitable weak solutions enjoy the partial regularity theorem proved in the famous paper of Caffarelli-Kohn-Nirenberg\, hence they are more regular than Leray weak solutions. However\, since the uniqueness of weak solutions of Navier-Stokes is unknown\, we don’t know if different approximation methods lead to a suitable weak solution. I will present a recent result obtained with L. C. Berselli (University of Pisa) where we proved that weak solutions obtained by some artificial compressibility approximation are suitable. The novelty of the result is that the Navier-Stokes equations are considered in a bounded domain with Navier boundary conditions.
DTEND;TZID=Europe/Zurich:20141203T171500 END:VEVENT BEGIN:VEVENT UID:news476@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181228T225051 DTSTART;TZID=Europe/Zurich:20141126T161500 SUMMARY:Seminar Analysis: Petru Mironescu (University Lyon 1) DESCRIPTION:We describe the structure of maps u:(0\,1)^n → S^1 having a given Sobolev regularity. Such maps are described by their singularities and phases. This is the analog of the Weierstrass factorization theorem for holomorphic functions\; the singularities of the Sobolev maps play the role of the zeroes of holomorphic maps. We will present implications of this result to functional analytic questions related to manifold valued maps. If time permits\, we will discuss the question of the control of the phases\, and present some applications to some model PDEs and nonlocal problems. DTEND;TZID=Europe/Zurich:20141126T161500 END:VEVENT BEGIN:VEVENT UID:news475@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20181228T224456 DTSTART;TZID=Europe/Zurich:20141119T161500 SUMMARY:Seminar Analysis: Luca Galimberti (ETH Zurich) DESCRIPTION:A classical question in differential geometry concerns which smooth functions f can arise as the Gauss curvature of a conformal metric on a 2-dimensional Riemannian manifold M. This amounts to solving a PDE which is the Euler-Lagrange equation of an energy functional. In this talk we will discuss compactness issues and bubbling phenomena for this equation on surfaces of genus greater than 1 (joint work with Borer and Struwe) and on the torus. DTEND;TZID=Europe/Zurich:20141119T171500 END:VEVENT BEGIN:VEVENT UID:news474@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181228T203836 DTSTART;TZID=Europe/
Zurich:20141112T161500 SUMMARY:Seminar Analysis: Frédéric Robert (University of Lorraine) DESCRIPTION:We investigate the Hardy-Schrödinger operator Lγ=-Δ-γ/|x|^2 on domains Ω⊂R^n whose boundary contains the singularity 0. The situation is quite different from the well-studied case when 0 is in the interior of Ω. For one\, if 0∈Ω\, then Lγ is positive if and only if γ<(n-2)^2/4\, while if 0∈∂Ω the operator Lγ can be positive for larger values of γ\, potentially reaching the maximal constant n^2/4 on convex domains.\\r\\nWe prove optimal regularity and a Hopf-type lemma for variational solutions of corresponding linear Dirichlet boundary value problems of the form Lγu=a(x)u\, but also for non-linear equations including Lγu=(|u|^(β-2)u)/|x|^s\, where γ<n^2/4\, s∈[0\,2) and β:=2(n-s)/(n-2) is the critical Hardy-Sobolev exponent. We also provide a Harnack inequality and a complete description of the profile of all positive solutions –variational or not– of the corresponding linear equation on the punctured domain. The value γ=(n-1)^2/4 turns out to be another critical threshold for the operator Lγ\, and our analysis yields a corresponding notion of “Hardy singular boundary-mass” mγ(Ω) of a domain Ω having 0∈∂Ω\, which can be defined whenever (n^2-1)/4 < γ < n^2/4.\\r\\nAs a byproduct\, we give a complete answer to problems of existence of extremals for Hardy-Sobolev inequalities of the form\\r\\nC( ∫_Ω (u^β)/|x|^s dx )^(2/β) ≤ ∫_Ω |∇u|^2 dx - γ∫_Ω (u^2)/|x|^2 dx\\r\\nwhenever γ<n^2/4\, and in particular for those of Caffarelli-Kohn-Nirenberg. These results extend previous contributions by the authors in the case γ=0\, and by Chern-Lin for the case γ<(n-2)^2/4. Namely\, if 0≤γ≤(n^2-1)/4\, then the negativity of the mean curvature of ∂Ω at 0 is sufficient for the existence of extremals. This is however not sufficient for (n^2-1)/4≤γ≤n^2/4\, which then requires the positivity of the Hardy singular boundary-mass of the domain under consideration.\\r\\nJoint work with Nassif Ghoussoub. DTEND;TZID=Europe/Zurich:20141112T171500 END:VEVENT BEGIN:VEVENT UID:news473@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181228T200306 DTSTART;
TZID=Europe/Zurich:20141105T161500 SUMMARY:Seminar Analysis: Sara Daneri (Max Planck Institute for Mathematics in the Sciences\, Leipzig) DESCRIPTION:We consider the Cauchy problem for the incompressible Euler equations on the three-dimensional torus. According to a conjecture due to Onsager\, which is well known in turbulence theory\, all solutions which are uniformly α-Hölder continuous in space for some α>1/3 must conserve the total kinetic energy\, while for any α<1/3 there can be uniformly α-Hölder solutions which are strictly dissipative. While the first part of the conjecture has been well established for a long time\, the second part is still open in its full generality. In the result that we present we show that\, for any α<1/5\, there exist C^α vector fields which are the initial data of infinitely many C^α solutions of the Euler equations that dissipate the total kinetic energy. DTEND;TZID=Europe/Zurich:20141105T171500 END:VEVENT BEGIN:VEVENT
UID:news472@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181228T195713 DTSTART;TZID=Europe/Zurich:20141029T161500 SUMMARY:Seminar Analysis: Thomas Sørensen (Ludwig Maximilian University of Munich)
DESCRIPTION:The eigenfunctions of the Schrödinger operator for (non-relativistic) atoms and molecules (in the Born-Oppenheimer/clamped nuclei approximation) are solutions of an elliptic partial differential equation with a singular (total) potential (i.e.\, zero-order term). In this talk we give an overview of our results on the structure/regularity of the eigenfunctions at the singularities of the potential. These\, in particular\, improve on the well-known ’Kato Cusp Condition’. If time permits\, we also discuss the implications for the electron density.\\r\\nThis is joint work with S. Fournais (Aarhus\, Denmark)\, and M. and T. Hoffmann-Ostenhof (Vienna\, Austria). DTEND;TZID=Europe/Zurich:20141029T171500 END:VEVENT BEGIN:VEVENT UID:news471@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181228T195204 DTSTART;TZID=Europe/Zurich:20141022T161500
SUMMARY:Seminar Analysis: Chiara Saffirio (University of Zurich) DESCRIPTION:We consider the Cauchy problem associated to the Vlasov-Poisson system and we extend the well-posedness theory of Lions and Perthame to the case of initial data which include a Dirac mass. Moreover\, we provide polynomially growing in time estimates for the moments of the solution. This is a joint work with L. Desvillettes and E. Miot. DTEND;TZID=Europe/Zurich:20141022T171500 END:VEVENT BEGIN:VEVENT UID:news470@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181228T194608 DTSTART;TZID=Europe/Zurich:20141008T161500 SUMMARY:Seminar Analysis: Guido de
Philippis (University of Zurich) DESCRIPTION:Local volume-constrained minimizers in anisotropic capillarity problems develop free boundaries on the walls of their containers. We prove the regularity of the free boundary outside a small set\, showing in particular the validity of Young’s law at almost every point (joint with Francesco Maggi). DTEND;TZID=Europe/Zurich:20141008T171500 END:VEVENT BEGIN:VEVENT UID:news469@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181228T194637
DTSTART;TZID=Europe/Zurich:20140924T161500 SUMMARY:Seminar Analysis: Camilla Nobili (Max Planck Institute for Mathematics in the Sciences\, Leipzig) DESCRIPTION:We consider Rayleigh-Bénard convection at finite Prandtl number as modelled by the Boussinesq equations. We are interested in the scaling of the average upward heat transport\, the Nusselt number Nu\, in terms of the Rayleigh number Ra and the Prandtl number Pr.\\r\\nPhysically motivated heuristics suggest the scalings Nu∼Ra^(1/3) and Nu∼Ra^(1/2)\, depending on Pr\, in different regimes.\\r\\nIn this talk I present a rigorous upper bound for Nu reproducing both physical scalings in some parameter regimes up to logarithms. This is obtained by a (logarithmically failing) maximal regularity estimate in L^1 and in L^1 for the nonstationary Stokes equation with forcing term given by the buoyancy term and the nonlinear term\, respectively. This is a joint work with Felix Otto and Antoine Choffrut. DTEND;TZID=Europe/Zurich:20140924T171500 END:VEVENT BEGIN:VEVENT UID:news486@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181229T175432 DTSTART;TZID=Europe/Zurich:20140520T151500 SUMMARY:Seminar Analysis:
Gabriella Tarantello (University of Rome Tor Vergata) DESCRIPTION:We discuss a class of singular Liouville systems in the plane and their role in the construction of non-abelian Chern-Simons vortices. DTEND;TZID=Europe/Zurich:20140520T161500
END:VEVENT BEGIN:VEVENT UID:news485@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181229T175201 DTSTART;TZID=Europe/Zurich:20140507T151500 SUMMARY:Seminar Analysis: Julien Sabin (University of Cergy-Pontoise) DESCRIPTION:A Fermi gas occupying the whole Euclidean space is an example of a translation-invariant quantum system with an infinite number of particles. We study its stability properties under the time-dependent nonlinear Hartree equation. If this system is slightly perturbed at the initial time\, we show in particular that it returns to the translation-invariant state for large times. This is an instance of nonlinear dispersion for infinite quantum systems\, which was recently studied by Frank\, Lewin\, Lieb and Seiringer in the linear case. This is a joint work with Mathieu Lewin (CNRS/Cergy). I will also mention some recent work on Strichartz estimates for systems of orthonormal functions\, joint with Rupert Frank (Caltech). DTEND;TZID=Europe/Zurich:20140507T161500 END:VEVENT BEGIN:VEVENT
UID:news484@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181229T172908 DTSTART;TZID=Europe/Zurich:20140423T151500 SUMMARY:Seminar Analysis: Hoai-Minh Nguyen (Swiss Federal Institute of Technology in Lausanne (EPFL)) DESCRIPTION:In this talk\, I first discuss estimates for the topological degree of maps from the sphere into itself. Second\, I present characterizations of Sobolev spaces based on the pointwise convergence or the Gamma-convergence of a sequence of nonlocal\, nonconvex functionals related to these estimates. If time permits\, I will also discuss the connection between these functionals and various filters in the denoising problem. The talk is based on joint works with Jean Bourgain and Haim Brezis. DTEND;TZID=Europe/Zurich:20140423T161500 END:VEVENT BEGIN:VEVENT UID:news483@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181229T165509
DTSTART;TZID=Europe/Zurich:20140402T151500 SUMMARY:Seminar Analysis: Armin Schikorra (Max Planck Institute for Mathematics in the Sciences in Leipzig) DESCRIPTION:I will present results and ideas of the proof of regularity theory for critical points of non-local\, degenerate integro-differential energies into manifolds\, which are related to p-harmonic maps. DTEND;TZID=Europe/Zurich:20140402T161500 END:VEVENT BEGIN:VEVENT UID:news482@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181229T165147 DTSTART;TZID=Europe/Zurich:20140326T151500 SUMMARY:Seminar Analysis: Dario
Trevisan (Scuola Normale Superiore\, Pisa\, Italy) DESCRIPTION:Following [1]\, in this talk we show how to establish\, in a rather general setting\, an analogue of the DiPerna-Lions theory on well-posedness of flows of ODEs associated to Sobolev vector fields. Key results are a well-posedness result for the continuity equation associated to suitably defined Sobolev vector fields\, via a commutator estimate\, and an abstract superposition principle in (possibly extended) metric measure spaces\, via an embedding into R^∞.\\r\\nWhen specialized to the setting of Euclidean or infinite-dimensional (e.g. Gaussian) spaces\, large parts of previously known results are recovered at once. Moreover\, the class of RCD(K\,∞) metric measure spaces\, recently introduced by Ambrosio\, Gigli and Savaré and the object of extensive recent research\, fits into our framework. Therefore we provide\, for the first time\, well-posedness results for ODEs under low regularity assumptions on the velocity and in a non-smooth context.\\r\\nReferences: [1] L. Ambrosio and D. Trevisan. Well posedness of Lagrangian flows and continuity equations in metric measure spaces. ArXiv e-prints\, February 2014. DTEND;TZID=Europe/Zurich:20140326T161500 END:VEVENT BEGIN:VEVENT UID:news481@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181229T164113 DTSTART;TZID=Europe/Zurich:20140305T151500 SUMMARY:Seminar Analysis: Angkana Rüland (University of Bonn)
DESCRIPTION:This talk is focused on unique continuation principles for fractional Schrödinger equations with scaling-critical and rough potentials. The results are deduced via so-called Carleman estimates. In particular\, these methods can be transferred to “variable coefficient” versions of fractional Schrödinger equations. DTEND;TZID=Europe/Zurich:20140305T161500 END:VEVENT BEGIN:VEVENT UID:news480@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20181229T163904 DTSTART;TZID=Europe/Zurich:20140219T151500 SUMMARY:Seminar Analysis: Ruben Jakob (University of Tübingen) DESCRIPTION:We provide two sharp sufficient conditions for immersed Willmore surfaces in R^3\, defined on bounded C^4-subdomains of R^2\, to be already minimal surfaces\, i.e. to have vanishing mean curvature on their entire domains. Our precise results read as follows:\\r\\nTheorem 1. For some bounded C^4-domain Ω⊂R^2 let X∈C^4(Ω\,R^3) denote some immersed Willmore surface with Gauss map N and mean curvature H. Furthermore\, assume that there exist constants c\,d∈R and some fixed vector V∈S^2 such that χ := cX+dV satisfies at least one of the following two conditions:\\r\\na) There is some “normal domain” G⊂Ω such that H=0 on ∂G and H≥0 (or H≤0) in G∩O\, where O⊂R^2 is some open neighbourhood of ∂G\, and\\r\\ninf_∂G <χ\,N> ≥ 0 as well as sup_∂G <χ\,N> > 0\;\\r\\nb) H=0 on ∂Ω and\\r\\n<χ\,N> > 0 in Ω\\A as well as sup_∂Ω <χ\,N> > 0\\r\\nfor some finite set A⊂Ω.\\r\\nThen H≡0 holds in \\bar{Ω}\, i.e. X is a minimal surface on \\bar{Ω}.\\r\\nThese results turn out to be particularly suitable for applications to Willmore graphs. We can therefore show that Willmore graphs on bounded C^4-domains \\bar{Ω} with vanishing mean curvature on the boundary ∂Ω must already be minimal graphs. Our methods also prove that any closed Willmore surface in R^3 which can be represented as a smooth graph over S^2 has to be of constant\, non-zero mean curvature and is therefore a round sphere. Finally\, we demonstrate that our results are sharp by means of an examination of a certain part of the Clifford torus in R^3. DTEND;TZID=Europe/Zurich:20140219T161500 END:VEVENT BEGIN:VEVENT UID:news495@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20181229T230505 DTSTART;TZID=Europe/Zurich:20131218T151500 SUMMARY:Seminar Analysis: Emil Wiedemann (University of British Columbia) DESCRIPTION:Given a bounded domain and boundary data\, does there exist a vector-valued map on this domain which is incompressible\, that is\, a map whose Jacobian determinant is one (almost) everywhere? In a regular setting\, this question has been essentially positively answered in a famous paper by Dacorogna and Moser. I will present an analogous result in Sobolev spaces of low regularity\, which was recently achieved by a convex integration method jointly with K. Koumatos (Oxford) and F. Rindler (Warwick). I will also comment on several generalisations and applications. DTEND;TZID=Europe/Zurich:20131218T161500 END:VEVENT
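The incompressibility condition discussed in the preceding abstract can be stated compactly. This is a sketch of the standard Dacorogna-Moser formulation, written out here for the reader; the notation (Ω\, φ\, u) is generic and not taken verbatim from the talk:

```latex
% Prescribed Jacobian problem (Dacorogna--Moser setting, sketch):
% given a bounded domain $\Omega \subset \mathbb{R}^n$ and boundary data $\varphi$,
% one seeks a map $u : \overline{\Omega} \to \mathbb{R}^n$ with
\begin{align*}
  \det \nabla u(x) &= 1 \quad \text{for a.e. } x \in \Omega, \\
  u &= \varphi \quad \text{on } \partial\Omega .
\end{align*}
```

The talk concerns solving this system when u is only assumed to lie in a low-regularity Sobolev space, where the determinant constraint must be interpreted almost everywhere.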
BEGIN:VEVENT UID:news494@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181229T230230 DTSTART;TZID=Europe/Zurich:20131211T151500 SUMMARY:Seminar Analysis: Daniele Bartolucci (University of Rome Tor Vergata) DESCRIPTION:The uniqueness of solutions of the (Liouville) mean field-type equation on a simply connected domain and in the subcritical regime λ∈(0\,8π) was first proved by T. Suzuki (1992). This result was later improved by S.Y.A. Chang\, C.C. Chen and C.S. Lin (2003) [CCL] to cover the critical value λ∈(0\,8π]. The case where the domain is not simply connected has been a long-standing open problem which we have finally solved in a recent paper in collaboration with C.S. Lin. Our proof is based on a new generalization of a P.D.E. version of the Alexandrov-Bol isoperimetric inequality on multiply connected domains. Another delicate problem is to understand the existence/non-existence of solutions for this equation on multiply connected domains at the critical parameter λ=8π. Criticality here means that the variational functional whose critical points are solutions of the equation is no longer coercive for λ=8π\, which implies in particular that in this situation the existence/non-existence of solutions depends on the geometry of the domain. I will discuss our generalization of a result in [CCL] which yields necessary and sufficient conditions for the existence of solutions for the mean field equation at the critical parameter λ=8π. DTEND;TZID=Europe/Zurich:20131127T161500 END:VEVENT BEGIN:VEVENT UID:news493@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181229T225814 DTSTART;TZID=Europe/Zurich:20131204T151500
SUMMARY:Seminar Analysis: Xavier Ros-Oton (Polytechnic University of Catalonia) DESCRIPTION:We study the boundary regularity of solutions to elliptic integro-differential equations. First we prove that\, for the fractional Laplacian (-Δ)^s with s∈(0\,1)\, solutions u satisfy that u/d^s is Hölder continuous up to the boundary\, where d(x) is the distance to the boundary of the domain Ω. We will show that\, in this fractional context\, the quantity u/d^s|_∂Ω plays the role that the normal derivative plays in second order equations. Finally\, we also present new boundary regularity results for fully nonlinear integro-differential equations. DTEND;TZID=Europe/Zurich:20131204T161500 END:VEVENT BEGIN:VEVENT UID:news492@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20131127T151500 SUMMARY:Seminar Analysis: Nikolay Gusev (SISSA Trieste) DESCRIPTION:Suppose that b:R^d→R^d is a vector field\, β:R→R is a smooth function and u:R^2→R is a scalar field. If both u and b are smooth then the following formula holds: div(β(u)b) = (β(u) - uβ'(u)) div(b) + β'(u) div(ub). Generalizations of this formula when u∈L^∞ and b belongs to a Sobolev space or has bounded variation were studied by R. DiPerna\, P.-L. Lions\, L. Ambrosio\, C. De Lellis\, J. Maly and other authors. I will present a new result in this direction for d=2\, which was obtained recently in collaboration with S. Bianchini. In particular\, our result holds when b is a steady nearly incompressible BV vector field. DTEND;TZID=Europe/Zurich:20131127T161500 END:VEVENT BEGIN:VEVENT UID:news491@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181229T224115 DTSTART;TZID=Europe/
Zurich:20131113T151500 SUMMARY:Seminar Analysis: Maria Colombo (Scuola Normale Superiore di Pisa) DESCRIPTION:The semigeostrophic equations are a set of equations which model large-scale atmospheric/ocean flows.\\r\\nThe system admits a dual version\, obtained from the original equations through a change of variable. Existence for the dual problem was proven in 1998 by Benamou and Brenier\, but the existence of a solution of the original system remained open due to the low regularity of the change of variable.\\r\\nIn the talk we prove the existence of distributional solutions of the original equations\, both in R^3 and in a two-dimensional periodic setting. The proof is based on recent regularity and stability estimates for Alexandrov solutions of the Monge-Ampère equation\, established by De Philippis and Figalli. DTEND;TZID=Europe/Zurich:20131113T161500 END:VEVENT BEGIN:VEVENT UID:news490@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181229T223702 DTSTART;TZID=Europe/Zurich:20131016T151500 SUMMARY:Seminar
Analysis: Yannick Sire (University of Marseille) DESCRIPTION:In the last years\, a substantial amount of work has been devot ed to understand elliptic\, parabolic and hyperbolic problems with non lo
cal diffusion. In this talk\, I will introduce a new class of conformally covariant operators of fractional order generalizing the scalar and Pan eitz curvature. I will describe the associated Yamabe
problem\, in the re gular and singular settings. I will give some existence results and discu ss open problems. X-ALT-DESC: \nIn the last years\, a substantial amount of work has been de voted to
understand elliptic\, parabolic and hyperbolic problems with non local diffusion. In this talk\, I will introduce a new class of conforma lly covariant operators of fractional order generalizing the
scalar and Paneitz curvature. I will describe the associated Yamabe problem\, in the regular and singular settings. I will give some existence results and di scuss open problems. DTEND;TZID=Europe/
Zurich:20131016T151500 END:VEVENT BEGIN:VEVENT UID:news489@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181229T223441 DTSTART;TZID=Europe/Zurich:20131016T141500 SUMMARY:Seminar Analysis: Grzegorz
Jamroz (University of Warsaw) DESCRIPTION:We consider a one-dimensional transport (balance) equation with velocity which has non-Lipschitz zeroes. This leads to non-uniqueness and concentration of characteristics and dynamics with both discrete and continuous components. To deal with these effects\, we use measure-valued solutions and the so-called measure-transmission conditions. A metric in the space of Radon measures allowing one to define unique and stable solutions is introduced. The equation under consideration was proposed as a structured population model of cell differentiation. X-ALT-DESC:\nWe consider a one-dimensional transport (balance) equation with velocity which has non-Lipschitz zeroes. This leads to non-uniqueness and concentration of characteristics and dynamics with both discrete and continuous components. To deal with these effects\, we use measure-valued solutions and the so-called measure-transmission conditions. A metric in the space of Radon measures allowing one to define unique and stable solutions is introduced. The equation under consideration was proposed as a structured population model of cell differentiation. DTEND;TZID=Europe/
Zurich:20131016T151500 END:VEVENT BEGIN:VEVENT UID:news488@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181229T223226 DTSTART;TZID=Europe/Zurich:20131009T151500 SUMMARY:Seminar Analysis: Stefan
Steinerberger (University of Bonn) DESCRIPTION:It is obvious that there is no tiling of the Euclidean plane with unit disks (any three disks have a gap in the middle): we prove a quantitative version of this statement. This simple insight has applications in spectral geometry: it tells us something about the topological structure of the vibration profile of a (possibly oddly-shaped) drum and allows us to recover an improved version of Pleijel's estimate (which was also recently done by Bourgain). X-ALT-DESC:\nIt is obvious that there is no tiling of the Euclidean plane with unit disks (any three disks have a gap in the middle): we prove a quantitative version of this statement. This simple insight has applications in spectral geometry: it tells us something about the topological structure of the vibration profile of a (possibly oddly-shaped) drum and allows us to recover an improved version of Pleijel's estimate (which was also recently done by Bourgain).
DTEND;TZID=Europe/Zurich:20131009T161500 END:VEVENT BEGIN:VEVENT UID:news487@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181229T175840 DTSTART;TZID=Europe/Zurich:20130925T151500 SUMMARY:Seminar
Analysis: Andrea Mondino (ETH Zurich) DESCRIPTION:Given an immersion f of the 2-sphere in a Riemannian manifold (M\,g) we study quadratic curvature functionals of the type: \\int_{f(S^2)} H^2\, \\int_{f(S^2)} A^2\, \\int_{f(S^2)} |Aº|^2\, where H is the mean curvature\, A is the second fundamental form\, and Aº is the trace-free second fundamental form. Minimizers\, and more generally critical points of such functionals can be seen respectively as GENERALIZED minimal\, totally geodesic and totally umbilical immersions. In the seminar I will review some results (obtained in collaboration with Kuwert\, Rivière and Shygulla) regarding the existence and the regularity of minimizers of such functionals. An interesting observation regarding the results obtained with Rivière is that the theory of Willmore surfaces can be useful to complete the theory of minimal surfaces (in particular in relation to the existence of canonical smooth representatives in homotopy classes\, a classical program started by Sacks and Uhlenbeck). X-ALT-DESC:\nGiven an immersion f of the 2-sphere in a Riemannian manifold (M\,g) we study quadratic curvature functionals of the type: \\int_{f(S^2)} H^2\, \\int_{f(S^2)} A^2\, \\int_{f(S^2)} |Aº|^2\, where H is the mean curvature\, A is the second fundamental form\, and Aº is the trace-free second fundamental form. Minimizers\, and more generally critical points of such functionals can be seen respectively as GENERALIZED minimal\, totally geodesic and totally umbilical immersions. In the seminar I will review some results (obtained in collaboration with Kuwert\, Rivière and Shygulla) regarding the existence and the regularity of minimizers of such functionals. An interesting observation regarding the results obtained with Rivière is that the theory of Willmore surfaces can be useful to complete the theory of minimal surfaces (in particular in relation to the existence of canonical smooth representatives in homotopy classes\, a classical program started by Sacks and Uhlenbeck). DTEND;TZID=Europe/Zurich:20130925T161500 END:VEVENT BEGIN:VEVENT UID:news504@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181231T171557 DTSTART;
TZID=Europe/Zurich:20130529T151500 SUMMARY:Seminar Analysis: Giovanni Alberti (University of Pisa) DESCRIPTION:Rademacher's theorem states that every Lipschitz function on the Euclidean space is differentiable almost everywhere with respect to the Lebesgue measure. In this talk I will explain how this statement should be modified when the Lebesgue measure is replaced by an arbitrary singular measure\, and in particular I will show that the differentiability properties of Lipschitz functions with respect to such a measure are exactly described by the decompositions of the measure in terms of (measures on) rectifiable curves. This result is directly related to recent work by many authors\, including myself\, David Bate\, Marianna Csornyei\, Peter Jones\, Andrea Marchese\, and David Preiss. X-ALT-DESC:\nRademacher's theorem states that every Lipschitz function on the Euclidean space is differentiable almost everywhere with respect to the Lebesgue measure. In this talk I will explain how this statement should be modified when the Lebesgue measure is replaced by an arbitrary singular measure\, and in particular I will show that the differentiability properties of Lipschitz functions with respect to such a measure are exactly described by the decompositions of the measure in terms of (measures on) rectifiable curves. This result is directly related to recent work by many authors\, including myself\, David Bate\, Marianna Csornyei\, Peter Jones\, Andrea Marchese\, and David Preiss. DTEND;TZID=Europe/Zurich:20130529T161500 END:VEVENT
BEGIN:VEVENT UID:news503@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181231T171112 DTSTART;TZID=Europe/Zurich:20130522T151500 SUMMARY:Seminar Analysis: Jeremy Marzuola (University of North Carolina)
DESCRIPTION:We will discuss the results of several joint ongoing projects (with subsets of collaborators Pierre Albin\, Hans Christianson\, Colin Guillarmou\, Jason Metcalfe\, Laurent Thomann and
Michael Taylor)\, which explore the existence\, stability and dynamics of nonlinear bound states and quasimodes on manifolds of both positive and negative curvature with various symmetry properties.
X-ALT-DESC: \nWe will discuss the results of several joint ongoing projects (with subsets of collaborators Pierre Albin\, Hans Christianson\, Colin Guillarmou\, Jason Metcalfe\, Laurent Thomann and
Michael Taylor)\, which explore the existence\, stability and dynamics of nonlinear bound states and quasimodes on manifolds of both positive and negative curvature with various symmetry properties.
DTEND;TZID=Europe/Zurich:20130522T161500 END:VEVENT BEGIN:VEVENT UID:news502@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181231T170555 DTSTART;TZID=Europe/Zurich:20130515T151500 SUMMARY:Seminar
Analysis: Anna Mazzucato (Penn State University) DESCRIPTION:I will present some (old) results on the transport and dissipation of enstrophy in 2D incompressible flows. Enstrophy is half the space integral of vorticity squared\, and it is a relevant quantity in 2D turbulence. I consider initial data with vorticity in L^2 and its logarithmic refinements and study exact transport of enstrophy by the velocity field. I also consider data in the larger Besov space $B^{0}_{2\,\\infty}$ and study the existence of well-defined enstrophy defects\, measuring the rate of enstrophy dissipation. \\r\\nThis is joint work with Milton Lopes Filho and Helena Nussenzveig Lopes. X-ALT-DESC:\nI will present some (old) results on the transport and dissipation of enstrophy in 2D incompressible flows. Enstrophy is half the space integral of vorticity squared\, and it is a relevant quantity in 2D turbulence. I consider initial data with vorticity in L^2 and its logarithmic refinements and study exact transport of enstrophy by the velocity field. I also consider data in the larger Besov space $B^{0}_{2\,\\infty}$ and study the existence of well-defined enstrophy defects\, measuring the rate of enstrophy dissipation. \nThis is joint work with Milton Lopes Filho and Helena Nussenzveig Lopes. DTEND;TZID=Europe/
Zurich:20130515T161500 END:VEVENT BEGIN:VEVENT UID:news501@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181231T165929 DTSTART;TZID=Europe/Zurich:20130424T151500 SUMMARY:Seminar Analysis: Luigi Berselli
(University of Pisa) DESCRIPTION:We show some regularity results for some classes of 2D incompressible fluids\, needed to show uniqueness of particle trajectories. X-ALT-DESC:\nWe show some regularity results for some classes of 2D incompressible fluids\, needed to show uniqueness of particle trajectories. DTEND;TZID=Europe/Zurich:20130424T161500 END:VEVENT BEGIN:VEVENT
UID:news500@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181231T165526 DTSTART;TZID=Europe/Zurich:20130410T151500 SUMMARY:Seminar Analysis: Maria Laura Delle Monache (INRIA Sophia Antipolis - OPALE
Project-Team) DESCRIPTION:Several phenomena in traffic flow can be modeled through the use of conservation laws. We present two PDE-ODE coupled models that are used in different traffic situations. First\, we consider a model that applies to moving bottlenecks and then we consider a model that applies in control problems for highway ramp metering. We provide a rigorous analytical framework for the Cauchy and Riemann problems and we show some numerical simulations. X-ALT-DESC:\nSeveral phenomena in traffic flow can be modeled through the use of conservation laws. We present two PDE-ODE coupled models that are used in different traffic situations. First\, we consider a model that applies to moving bottlenecks and then we consider a model that applies in control problems for highway ramp metering. We provide a rigorous analytical framework for the Cauchy and Riemann problems and we show some numerical simulations. DTEND;TZID=Europe/Zurich:20130410T161500 END:VEVENT
BEGIN:VEVENT UID:news499@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181231T165153 DTSTART;TZID=Europe/Zurich:20130403T151500 SUMMARY:Seminar Analysis: Isabelle Gallagher (Paris Diderot)
DESCRIPTION:In this talk I will present some recent results in the study of the Cauchy problem for the three-dimensional Navier-Stokes equations. In particular using the fact that the two-dimensional equation is well-posed\, I will try to explain the role of "spectral anisotropy" in the resolution of the equations. X-ALT-DESC:\nIn this talk I will present some recent results in the study of the Cauchy problem for the three-dimensional Navier-Stokes equations. In particular using the fact that the two-dimensional equation is well-posed\, I will try to explain the role of "spectral anisotropy" in the resolution of the equations. DTEND;TZID=Europe/Zurich:20130403T161500 END:VEVENT BEGIN:VEVENT UID:news498@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181231T164925 DTSTART;
TZID=Europe/Zurich:20130327T151500 SUMMARY:Seminar Analysis: Joules Nahas (EPFL) DESCRIPTION:Following the work of Krieger\, Schlag\, and Tataru\, we construct a family of blow-up solutions with finite energy norm to the equation \\r\\n∂_t^2 u - Δ_g u = |u|^4 u.\\r\\nThis family has a continuous rate of blow up\, but in contrast to the case where g is the Minkowski metric\, the argument used to produce these solutions can only obtain blow up rates that are bounded above. \\r\\nThis is joint work with S. Shashahani. X-ALT-DESC:\nFollowing the work of Krieger\, Schlag\, and Tataru\, we construct a family of blow-up solutions with finite energy norm to the equation \n∂_t^2 u - Δ_g u = |u|^4 u.\nThis family has a continuous rate of blow up\, but in contrast to the case where g is the Minkowski metric\, the argument used to produce these solutions can only obtain blow up rates that are bounded above. \nThis is joint work with S. Shashahani. DTEND;TZID=Europe/
Zurich:20130327T161500 END:VEVENT BEGIN:VEVENT UID:news497@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181231T162819 DTSTART;TZID=Europe/Zurich:20130320T151500 SUMMARY:Seminar Analysis: Scott N.
Armstrong (University of Wisconsin) DESCRIPTION:I will present a regularity result for degenerate elliptic equations in nondivergence form. In joint work with Charlie Smart\, we extend the regularity theory of Caffarelli to equations with possibly unbounded ellipticity - provided that the ellipticity satisfies an averaging condition. As an application we obtain a stochastic homogenization result for such equations which is equivalent to an invariance principle for random diffusions in random environments. The degenerate equations homogenize to uniformly elliptic equations\, and we give an estimate of the ellipticity in terms of the averaging condition. X-ALT-DESC:\nI will present a regularity result for degenerate elliptic equations in nondivergence form. In joint work with Charlie Smart\, we extend the regularity theory of Caffarelli to equations with possibly unbounded ellipticity - provided that the ellipticity satisfies an averaging condition. As an application we obtain a stochastic homogenization result for such equations which is equivalent to an invariance principle for random diffusions in random environments. The degenerate equations homogenize to uniformly elliptic equations\, and we give an estimate of the ellipticity in terms of the averaging condition. DTEND;TZID=Europe/Zurich:20130320T161500 END:VEVENT
BEGIN:VEVENT UID:news496@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181231T164719 DTSTART;TZID=Europe/Zurich:20130227T151500 SUMMARY:Seminar Analysis: H. J. Nussenzveig Lopes (Instituto de Matematica
Universidade Federal do Rio de Janeiro) DESCRIPTION:Following the work of Krieger\, Schlag\, and Tataru\, we construct a family of blow-up solutions with finite energy norm to the equation \\r\\n∂_t^2 u - Δ_g u = |u|^4 u.\\r\\nThis family has a continuous rate of blow up\, but in contrast to the case where g is the Minkowski metric\, the argument used to produce these solutions can only obtain blow up rates that are bounded above. \\r\\nThis is joint work with S. Shashahani. X-ALT-DESC:\nFollowing the work of Krieger\, Schlag\, and Tataru\, we construct a family of blow-up solutions with finite energy norm to the equation \n∂_t^2 u - Δ_g u = |u|^4 u.\nThis family has a continuous rate of blow up\, but in contrast to the case where g is the Minkowski metric\, the argument used to produce these solutions can only obtain blow up rates that are bounded above. \nThis is joint work with S. Shashahani. DTEND;TZID=Europe/Zurich:20130227T161500 END:VEVENT BEGIN:VEVENT
UID:news512@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20190104T233058 DTSTART;TZID=Europe/Zurich:20121219T151500 SUMMARY:Seminar Analysis: Benjamin Texier (Université Paris-Diderot (Paris 7))
DESCRIPTION:The Gårding inequality states that positive pseudo-differential symbols are associated with semi-positive operators. It can be used in particular to show time-exponential growth of solutions to initial value problems for elliptic equations. I will give examples in which Gårding fails to give appropriate bounds\, and a way to overcome this difficulty. Examples include high-frequency asymptotics of systems based on Maxwell's equations\, and compressible Euler systems with a Van der Waals pressure law. In these cases\, appropriate bounds are derived via a description of the parametrix of a pseudo-differential system. X-ALT-DESC:\nThe Gårding inequality states that positive pseudo-differential symbols are associated with semi-positive operators. It can be used in particular to show time-exponential growth of solutions to initial value problems for elliptic equations. I will give examples in which Gårding fails to give appropriate bounds\, and a way to overcome this difficulty. Examples include high-frequency asymptotics of systems based on Maxwell's equations\, and compressible Euler systems with a Van der Waals pressure law. In these cases\, appropriate bounds are derived via a description of the parametrix of a pseudo-differential system. DTEND;TZID=Europe/Zurich:20121219T161500 END:VEVENT BEGIN:VEVENT UID:news511@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20190104T232839 DTSTART;TZID=Europe/Zurich:20121212T151500 SUMMARY:Seminar Analysis: Camillo De Lellis (University of Zurich) DESCRIPTION:A well-known theorem of Almgren
shows that area-minimizing integral k-dimensional currents in a Riemannian manifold of arbitrary dimension N are regular up to a closed set of Hausdorff dimension at most N-2. In a joint work with Emanuele Spadaro we give a much shorter proof of this statement in the Euclidean setting\, following the general program of Almgren but introducing new ideas at the various steps. In this talk I will explain some of these ideas. A generalization of our proof to the Riemannian case is work in progress. X-ALT-DESC:\nA well-known theorem of Almgren shows that area-minimizing integral k-dimensional currents in a Riemannian manifold of arbitrary dimension N are regular up to a closed set of Hausdorff dimension at most N-2. In a joint work with Emanuele Spadaro we give a much shorter proof of this statement in the Euclidean setting\, following the general program of Almgren but introducing new ideas at the various steps. In this talk I will explain some of these ideas. A generalization of our proof to the Riemannian case is work in progress. DTEND;TZID=Europe/Zurich:20121212T161500 END:VEVENT BEGIN:VEVENT UID:news510@dmi.unibas.ch DTSTAMP;TZID=
Europe/Zurich:20190104T232526 DTSTART;TZID=Europe/Zurich:20121205T151500 SUMMARY:Seminar Analysis: Przemek Zieliński (Institute of Mathematics Polish Academy of Sciences) DESCRIPTION:I will present the results on the existence of solutions to the semi-linear equation \\r\\n Lx+N(x)=0\, \\r\\nwhere L is a linear and N a nonlinear operator defined on a Hilbert space. I concentrate on the case when 0 is in the essential spectrum of L. The two main methods which I use are: topological degree in infinite-dimensional spaces and the spectral theory for linear operators in Hilbert spaces. These results are part of my Ph.D. project. X-ALT-DESC:\nI will present the results on the existence of solutions to the semi-linear equation \nLx+N(x)=0\, \nwhere L is a linear and N a nonlinear operator defined on a Hilbert space. I concentrate on the case when 0 is in the essential spectrum of L. The two main methods which I use are: topological degree in infinite-dimensional spaces and the spectral theory for linear operators in Hilbert spaces. These results are part of my Ph.D. project. DTEND;TZID=Europe/Zurich:20121205T161500 END:VEVENT BEGIN:VEVENT
UID:news509@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20190104T232300 DTSTART;TZID=Europe/Zurich:20121128T151500 SUMMARY:Seminar Analysis: Matteo Focardi (University of Zurich) DESCRIPTION:In this talk I shall focus on the higher integrability property enjoyed by the approximate gradients of local minimizers of the 2d Mumford-Shah energy. Related regularity issues shall also be discussed. \\r\\nThis is joint work with C. De Lellis (Universitaet Zuerich). X-ALT-DESC:\nIn this talk I shall focus on the higher integrability property enjoyed by the approximate gradients of local minimizers of the 2d Mumford-Shah energy. Related regularity issues shall also be discussed. \nThis is joint work with C. De Lellis (Universitaet Zuerich). DTEND;TZID=Europe/Zurich:20121128T161500
END:VEVENT BEGIN:VEVENT UID:news507@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181231T173513 DTSTART;TZID=Europe/Zurich:20121114T151500 SUMMARY:Seminar Analysis: Elisabetta Chiodaroli (University of
Zurich) DESCRIPTION:The deceptively simple-looking compressible Euler equations of gas dynamics have a long history of important contributions over more than two centuries. If we allow for discontinuous solutions\, uniqueness and stability are lost. In order to restore such properties\, further restrictions on weak solutions have been proposed in the form of entropy inequalities. In this talk\, we will discuss some counterexamples to the well-posedness theory of entropy solutions to the multi-dimensional compressible Euler equations. First\, we show failure of uniqueness on a finite-time interval for entropy solutions starting from any continuously differentiable initial density and suitably constructed initial linear momenta. In other words\, we prove that there exist wild initial data allowing for infinitely many distinct entropy weak solutions of the compressible Euler system. Finally\, we present a new upshot: a classical Riemann datum is a wild initial datum in 2 space dimensions. All our methods are inspired by a new analysis of the incompressible Euler equations recently carried out by De Lellis and Székelyhidi and based on a revisited “h-principle”. X-ALT-DESC:\nThe deceptively simple-looking compressible Euler equations of gas dynamics have a long history of important contributions over more than two centuries. If we allow for discontinuous solutions\, uniqueness and stability are lost. In order to restore such properties\, further restrictions on weak solutions have been proposed in the form of entropy inequalities. In this talk\, we will discuss some counterexamples to the well-posedness theory of entropy solutions to the multi-dimensional compressible Euler equations. First\, we show failure of uniqueness on a finite-time interval for entropy solutions starting from any continuously differentiable initial density and suitably constructed initial linear momenta. In other words\, we prove that there exist wild initial data allowing for infinitely many distinct entropy weak solutions of the compressible Euler system. Finally\, we present a new upshot: a classical Riemann datum is a wild initial datum in 2 space dimensions. All our methods are inspired by a new analysis of the incompressible Euler equations recently carried out by De Lellis and Székelyhidi and based on a revisited “h-principle”.
DTEND;TZID=Europe/Zurich:20121114T161500 END:VEVENT BEGIN:VEVENT UID:news506@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181231T173139 DTSTART;TZID=Europe/Zurich:20121107T151500 SUMMARY:Seminar
Analysis: Laura V. Spinolo (IMATI-CNR\, Pavia) DESCRIPTION:The talk will focus on the eigenvalue problem for the Laplace operator defined in an open and bounded domain\, with homogeneous conditions of either Dirichlet or Neumann type assigned at the boundary. Under fairly weak regularity assumptions on the domain\, the problem admits a diverging sequence of nonnegative eigenvalues. I will discuss some new quantitative estimates controlling how each of the eigenvalues changes when the domain is perturbed. These estimates apply to Lipschitz and to so-called Reifenberg-flat domains. The proof is based on an abstract lemma which applies to both the Neumann and the Dirichlet problem and which could be applied to other classes of domains. \\r\\nThe talk will be based on joint works with A. Lemenant and E. Milakis. X-ALT-DESC:\nThe talk will focus on the eigenvalue problem for the Laplace operator defined in an open and bounded domain\, with homogeneous conditions of either Dirichlet or Neumann type assigned at the boundary. Under fairly weak regularity assumptions on the domain\, the problem admits a diverging sequence of nonnegative eigenvalues. I will discuss some new quantitative estimates controlling how each of the eigenvalues changes when the domain is perturbed. These estimates apply to Lipschitz and to so-called Reifenberg-flat domains. The proof is based on an abstract lemma which applies to both the Neumann and the Dirichlet problem and which could be applied to other classes of domains. \nThe talk will be based on joint works with A. Lemenant and E. Milakis. DTEND;TZID=Europe/Zurich:20121107T161500 END:VEVENT BEGIN:VEVENT UID:news505@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20181231T172746 DTSTART;TZID=Europe/
Zurich:20121003T151500 SUMMARY:Seminar Analysis: Laura Keller (Universität Münster) DESCRIPTION:Starting from the example of harmonic maps\, we will find a class of PDE problems which enjoy an additional\, at first glance hidden property: Antisymmetry! This feature enables us to deduce regularity assertions which heavily rely on Wente's theorem. For the latter\, various approaches will be discussed. The presentation will be completed by a version of Wente's result for arbitrary dimension. X-ALT-DESC:\nStarting from the example of harmonic maps\, we will find a class of PDE problems which enjoy an additional\, at first glance hidden property: Antisymmetry! This feature enables us to deduce regularity assertions which heavily rely on Wente's theorem. For the latter\, various approaches will be discussed. The presentation will be completed by a version of Wente's result for arbitrary dimension. DTEND;TZID=Europe/Zurich:20121003T161500 END:VEVENT BEGIN:VEVENT
UID:news524@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20190105T001818 DTSTART;TZID=Europe/Zurich:20120530T151500 SUMMARY:Seminar Analysis: Stefano Spirito (University of L'Aquila) DESCRIPTION:In this
talk I will discuss the vanishing viscosity problem for the Navier-Stokes equations in a bounded domain. It is well-known that when Dirichlet conditions are imposed on the boundary the inviscid limit is currently an open and difficult problem. On the other hand\, when other types of boundary conditions are considered the situation becomes simpler. In this talk a particular type of Navier boundary conditions involving only the vorticity of the velocity field is considered. In particular\, I will discuss recent results obtained in collaboration with Luigi Berselli (University of Pisa) concerning the inviscid limit in energy norm of the Leray weak solutions and the inviscid limit in higher norms of local smooth solutions of the Navier-Stokes equations. X-ALT-DESC:\nIn this talk I will discuss the vanishing viscosity problem for the Navier-Stokes equations in a bounded domain. It is well-known that when Dirichlet conditions are imposed on the boundary the inviscid limit is currently an open and difficult problem. On the other hand\, when other types of boundary conditions are considered the situation becomes simpler. In this talk a particular type of Navier boundary conditions involving only the vorticity of the velocity field is considered. In particular\, I will discuss recent results obtained in collaboration with Luigi Berselli (University of Pisa) concerning the inviscid limit in energy norm of the Leray weak solutions and the inviscid limit in higher norms of local smooth solutions of the Navier-Stokes equations. DTEND;TZID=Europe/
Zurich:20120530T161500 END:VEVENT BEGIN:VEVENT UID:news523@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20190105T001721 DTSTART;TZID=Europe/Zurich:20120523T151500 SUMMARY:Seminar Analysis: Luca
Martinazzi (Rutgers) DESCRIPTION:We study the Moser-Trudinger equation Δu = λu Exp(u^2)\, λ>0 on a 2-dimensional disk\, arising from the Moser-Trudinger sharp embedding of H^1_0(Disk) into the Orlicz space of functions u with Exp(u^2) integrable. We answer some long-standing open questions: \\r\\na) The weak limit of a blowing-up sequence of solutions to the Moser-Trudinger equation on a disk is 0. \\r\\nb) The Dirichlet energy of a blowing-up sequence of solutions on a disk converges to 4π. \\r\\nc) For L large enough\, the Moser-Trudinger equation on a disk admits no solution with Dirichlet energy larger than L. \\r\\nThis work is a joint project with Andrea Malchiodi (SISSA - Trieste). X-ALT-DESC:\nWe study the Moser-Trudinger equation Δu = λu Exp(u^2)\, λ>0 on a 2-dimensional disk\, arising from the Moser-Trudinger sharp embedding of H^1_0(Disk) into the Orlicz space of functions u with Exp(u^2) integrable. We answer some long-standing open questions: \na) The weak limit of a blowing-up sequence of solutions to the Moser-Trudinger equation on a disk is 0. \nb) The Dirichlet energy of a blowing-up sequence of solutions on a disk converges to 4π. \nc) For L large enough\, the Moser-Trudinger equation on a disk admits no solution with Dirichlet energy larger than L. \nThis work is a joint project with Andrea Malchiodi (SISSA - Trieste). DTEND;
TZID=Europe/Zurich:20120523T161500 END:VEVENT BEGIN:VEVENT UID:news522@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20190105T000645 DTSTART;TZID=Europe/Zurich:20120509T151500 SUMMARY:Seminar Analysis:
Christian Hainzl (Tuebingen) DESCRIPTION:We give the first rigorous derivation of the celebrated Ginzburg-Landau (GL) theory\, starting from the microscopic Bardeen-Cooper-Schrieffer (BCS) model. Close to the critical temperature\, GL arises as an effective theory on the macroscopic scale. The relevant scaling limit is semiclassical in nature\, and semiclassical analysis\, with minimal regularity assumptions\, plays an important part in our proof. X-ALT-DESC:\nWe give the first rigorous derivation of the celebrated Ginzburg-Landau (GL) theory\, starting from the microscopic Bardeen-Cooper-Schrieffer (BCS) model. Close to the critical temperature\, GL arises as an effective theory on the macroscopic scale. The relevant scaling limit is semiclassical in nature\, and semiclassical analysis\, with minimal regularity assumptions\, plays an important part in our proof. DTEND;TZID=Europe/Zurich:20120509T161500 END:VEVENT BEGIN:VEVENT UID:news521@dmi.unibas.ch
DTSTAMP;TZID=Europe/Zurich:20190105T000226 DTSTART;TZID=Europe/Zurich:20120425T151500 SUMMARY:Seminar Analysis: Roland Donninger (EPFL) DESCRIPTION:I present some recent results\, obtained in
collaboration with Joachim Krieger\, on novel types of solutions to the critical wave equation in 3 spatial dimensions. These solutions either blow up at infinity or vanish at a prescribed rate. The existence of such exotic dynamics violates a strong version of the soliton resolution conjecture.\\r\\nFrancois Bouchut: \\r\\nTBA X-ALT-DESC:\nI present some recent results\, obtained in collaboration with Joachim Krieger\, on novel types of solutions to the critical wave equation in 3 spatial dimensions. These solutions either blow up at infinity or vanish at a prescribed rate. The existence of such exotic dynamics violates a strong version of the soliton resolution conjecture.\nFrancois Bouchut: \nTBA DTEND;TZID=Europe/Zurich:20120425T161500 END:VEVENT BEGIN:VEVENT
UID:news520@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20190104T235930 DTSTART;TZID=Europe/Zurich:20120418T151500 SUMMARY:Seminar Analysis: Evelyne Miot (CNRS & Paris Orsay) DESCRIPTION:A system of
simplified equations has been derived by Klein\, Ma jda and Damodaran to describe the dynamics of nearly parallel vortex filam ents in incompressible 3D fluids. This system combines a 1D
Schrödinger-t ype structure together with the 2D point vortex system. Global existence f or small perturbations of exact parallel filaments has been established by Kenig\, Ponce and Vega in the case
of two filaments and for particular co nfigurations of three filaments. In this talk I will present large time ex istence results for particular configurations of four filaments and for ot her
particular configurations of N filaments for any N larger than 2. I wi ll also discuss some situations of finite time filament collapse. This is joint work with Valeria Banica. X-ALT-DESC: \nA system
of simplified equations has been derived by Klein\, Majda and Damodaran to describe the dynamics of nearly parallel vortex fil aments in incompressible 3D fluids. This system combines a 1D
Schrödinger -type structure together with the 2D point vortex system. Global existence for small perturbations of exact parallel filaments has been established by Kenig\, Ponce and Vega in the case
of two filaments and for particular configurations of three filaments. In this talk I will present large time existence results for particular configurations of four filaments and for other
particular configurations of N filaments for any N larger than 2. I will also discuss some situations of finite time filament collapse. This i s joint work with Valeria Banica. DTEND;TZID=Europe/
Zurich:20120418T161500 END:VEVENT BEGIN:VEVENT UID:news519@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20190104T235555 DTSTART;TZID=Europe/Zurich:20120411T151500 SUMMARY:Seminar Analysis: Wolfgang
Reichel (KIT Karlsruhe) DESCRIPTION:We are interested in ground states for the nonlinear Schröding er-equation (NLS) with an interface between two purely periodic media. Thi s means that the
coefficients in the NLS model two different periodic medi a in each halfspace. The resulting problem no longer has a periodic struct ure. Using variational methods we give conditions on the
coefficients such that ground states are created/prevented by the interface. X-ALT-DESC:\nWe are interested in ground states for the nonlinear Schrödin ger-equation (NLS) with an interface between
two purely periodic media. Th is means that the coefficients in the NLS model two different periodic med ia in each halfspace. The resulting problem no longer has a periodic struc ture. Using
variational methods we give conditions on the coefficients suc h that ground states are created/prevented by the interface. DTEND;TZID=Europe/Zurich:20120411T151500 END:VEVENT BEGIN:VEVENT
UID:news518@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20190104T235124 DTSTART;TZID=Europe/Zurich:20120404T151500 SUMMARY:Seminar Analysis: Michael Reiterer (ETH Zurich) DESCRIPTION:About twenty years
ago\, Choptuik studied numerically the gravi tational collapse (Einstein field equations) of a massless scalar field i n spherical symmetry\, and found strong evidence for a universal\, self- similar
solution at the threshold of black hole formation. We give a rigo rous\, computer assisted proof of the existence of Choptuik's spacetime\, and show that it is real analytic. This is joint work with
E. Trubowitz. X-ALT-DESC: \nAbout twenty years ago\, Choptuik studied numerically the gra vitational collapse (Einstein field equations) of a massless scalar field in spherical symmetry\, and found
strong evidence for a universal\, sel f-similar solution at the threshold of black hole formation. We give a ri gorous\, computer assisted proof of the existence of Choptuik's spacetime \, and show
that it is real analytic. This is joint work with E. Trubowit z. DTEND;TZID=Europe/Zurich:20120404T161500 END:VEVENT BEGIN:VEVENT UID:news516@dmi.unibas.ch DTSTAMP;TZID=Europe/Zurich:20190104T234043
DTSTART;TZID=Europe/Zurich:20120328T151500 SUMMARY:Seminar Analysis: Sara Daneri (University of Zurich) DESCRIPTION:We consider the optimal transportation problem with cost functi ons given by
generic convex norms in Rd and absolutely continuous first ma rginals. We show the existence of a partition of Rd into k-dimensional set s\, k=0\,...\,d\, such that every optimal transport plan can
be characteri zed\, via disintegration of measures\, as a family of optimal transport pl ans each moving a conditional probability of the first marginal inside one of these k-dimensional sets\, along
the directions of an extremal k-dimen sional cone of the convex norm. Moreover\, the conditional probabilities of the first marginal on these sets are absolutely continuous with respec t to the
k-dimensional Hausdorff measure on the k-dimensional sets on whic h they are concentrated\, thus settling the longstanding Sudakov's problem of the existence of locally affine decompositions of Rd
that reduce norm cost transportation problem to families of lower dimensional ones. Finally \, due to the minimality of our partition with respect to this "dimensiona l reduction" property\,
applications to secondary cost functions obtained first minimizing with respect to a convex norm and then with respect to a finer one (e.g.\, a strictly convex one) will be shown. These results were
obtained in collaboration with Stefano Bianchini (SISSA\, Trieste). X-ALT-DESC: \nWe consider the optimal transportation problem with cost func tions given by generic convex norms in R^d and
absolutely continuous first marginals. We show the existence of a partition of R< /b>^d into k-dimensional sets\, k=0\,...\,d\, such that every op timal transport plan can be characterized\, via
disintegration of measures \, as a family of optimal transport plans each moving a conditional probab ility of the first marginal inside one of these k-dimensional sets\, along the directions of an
extremal k-dimensional cone of the convex norm.  \; Moreover\, the conditional probabilities of the first marginal on these sets are absolutely continuous with respect to the k-dimensional
Hausdorf f measure on the k-dimensional sets on which they are concentrated\, thus settling the longstanding Sudakov's problem of the existence of locally af fine decompositions of R^d that reduce
norm cost transpor tation problem to families of lower dimensional ones. Finally\, due to the minimality of our partition with respect to this "\;dimensional reduc tion"\; property\,
applications to secondary cost functions obtained f irst minimizing with respect to a convex norm and then with respect to a f iner one (e.g.\, a strictly convex one) will be shown. These results
were obtained in collaboration with \; Stefano Bianchini (SISSA\, Trieste). DTEND;TZID=Europe/Zurich:20120328T161500 END:VEVENT BEGIN:VEVENT UID:news514@dmi.unibas.ch DTSTAMP;TZID=Europe/
Zurich:20190104T233639 DTSTART;TZID=Europe/Zurich:20120321T151500 SUMMARY:Seminar Analysis: Lisa Beck (Bonn) DESCRIPTION:In this seminar we will give a survey on some aspects of the cl assical
regularity theory for W1\,p-solutions to elliptic problems (convex variational integral or elliptic systems)\, restricting ourselves to simp le model cases and explaining the challenges behind
proving such results. For scalar valued solutions full regularity (continuous or even better) ca n be established under very mild assumptions\, which is nowadays known as the De Giorgi-Nash-Moser
theory. In the vectorial case instead\, the vario us component functions and their partial derivative can interact in such a way that the system or variational integral under consideration allows di
scontinuous or even unbounded solutions\, and in fact various counterexamp les to full regularity have been constructed. As a consequence\, only part ial regularity can be expected\, in the sense
that the solution (or its gr adient) is locally continuous outside of a negligible set (the singular se t). We will give some heuristics on the generalapproach to partial regular ity results and then
we briefly discuss how in some particular situations (small space dimensions\, special structure conditions) an upper bound on the Hausdorff dimension of the singular set can be obtained. X-ALT-DESC:
\nIn this seminar we will give a survey on some aspects of the c lassical regularity theory for W^1\,p-solutions to elliptic prob lems (convex variational integral or elliptic systems)\, restricting
ourse lves to simple model cases and explaining the challenges behind proving su ch results. For scalar valued solutions full regularity (continuous or eve n better) can be established under very
mild assumptions\, which is nowada ys known as the De Giorgi-Nash-Moser theory. In the vectorial case instead \, the various component functions and their partial derivative can intera ct in such a
way that the system or variational integral under considerati on allows discontinuous or even unbounded solutions\, and in fact various counterexamples to full regularity have been constructed. As a
consequence \, only partial regularity can be expected\, in the sense that the solutio n (or its gradient) is locally continuous outside of a negligible set (the singular set). We will give some
heuristics on the general
approach to partial regularity results and then we briefly discuss how in some part icular situations (small space dimensions\, special structure conditions) an upper bound on the Hausdorff dimension
of the singular set can be obtained. DTEND;TZID=Europe/Zurich:20120321T161500 END:VEVENT END:VCALENDAR | {"url":"https://dmi.unibas.ch/en/news-events/past-events/past-events-mathematics/4349.ics","timestamp":"2024-11-13T09:17:43Z","content_type":"text/calendar","content_length":"304872","record_id":"<urn:uuid:2173d559-37cd-48c1-bcdb-6f14574512c4>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00516.warc.gz"} |
Problems on Simple Interest and Compound Interest for Bank Exams
Simple interest and compound interest questions are frequently asked in banking exams. In simple interest questions, you calculate interest from the principal, rate, and time, whereas in compound interest questions the interest earned is added back to the principal sum of the deposit; in other words, you earn interest on interest.
So here I am sharing problems on simple interest and compound interest for bank exams, with answers, for your preparation. You can score well in competitive exams by practicing these selected questions.
You will also learn how to use the simple and compound interest formulas in these types of questions.
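As a quick illustration (the function names below are our own, not from any exam board), the two formulas can be sketched in Python:

```python
def simple_interest(principal, rate, years):
    """Simple interest: SI = P * R * T / 100."""
    return principal * rate * years / 100

def compound_amount(principal, rate, years, periods_per_year=1):
    """Compound amount: A = P * (1 + R/(100*n)) ** (n*T)."""
    n = periods_per_year
    return principal * (1 + rate / (100 * n)) ** (n * years)

# Rs. 1000 at 10% p.a. for 2 years:
print(simple_interest(1000, 10, 2))                   # 200.0
print(round(compound_amount(1000, 10, 2) - 1000, 2))  # 210.0
```

The extra Rs. 10 in the compound case is the interest earned on the first year's interest.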
Simple Interest and Compound Interest Problems
Simple Interest Problems:
Q.1. What will be the ratio of simple interest earned by certain amount at the same rate of interest for 6 years and that for 9 years?
(A) 1 : 3
(B) 1 : 4
(C) 2 : 3
(D) Data inadequate
(E) None of these
Ans. C
Q.2. A man took loan from a bank at the rate of 12% p.a. simple interest. After 3 years he had to pay Rs. 5400 interest only for the period. The principal amount borrowed by him was:
(A) Rs.2000
(B) Rs.10000
(C) Rs.15000
(D) Rs.20000
Ans. C
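To check an answer like Q.2, you can invert the simple interest formula (a small sketch of ours, not part of the original solution):

```python
# Q.2: Rs. 5400 interest at 12% p.a. simple interest over 3 years.
si, rate, years = 5400, 12, 3
principal = si * 100 / (rate * years)  # P = SI * 100 / (R * T)
print(principal)  # 15000.0
```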
Q.3. A certain amount earns simple interest of Rs. 1750 after 7 years. Had the interest been 2% more, how much more interest would it have earned?
(A) Rs. 35
(B) Rs. 245
(C) Rs. 350
(D) Cannot be determined
(E) None of these
Ans. D
Q.4. A sum of money at simple interest amounts to Rs. 815 in 3 years and to Rs. 854 in 4 years. The sum is:
(A) 650
(B) 690
(C) 698
(D) 700
Ans. C
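Q.4 can be verified the same way: under simple interest, the difference between the two amounts is exactly one year's interest (a hypothetical check of ours):

```python
# Q.4: a sum amounts to Rs. 815 in 3 years and Rs. 854 in 4 years.
amount_3y, amount_4y = 815, 854
yearly_interest = amount_4y - amount_3y      # 39 per year
principal = amount_3y - 3 * yearly_interest  # strip 3 years of interest
print(principal)  # 698
```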
Q.5. A person borrows Rs. 5000 for 2 years at 4% p.a. simple interest. He immediately lends it to another person at 6¼% p.a. for 2 years. Find his gain in the transaction per year.
(A) Rs. 112.50
(B) Rs. 125
(C) Rs. 150
(D) Rs. 167.50
Ans. A
Q.6. In how many years will a sum of Rs. 800 at 10% per annum, compounded semi-annually, become Rs. 926.10?
(A) 1.5
(B) 2.5
(C) 3.5
(D) 4.5
Ans. A
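For Q.6, 10% per annum compounded semi-annually means 5% per half-year, so a short loop (our own sketch) finds how many half-years are needed:

```python
# Q.6: Rs. 800 at 10% p.a., compounded semi-annually, grows to Rs. 926.10.
amount, target = 800.0, 926.10
half_years = 0
while amount < target - 1e-9:  # small tolerance for float rounding
    amount *= 1.05             # 5% per half-year
    half_years += 1
print(half_years / 2)  # 1.5 (years)
```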
Q.7. A sum of money amounts to Rs. 9800 after 5 years and Rs. 12005 after 8 years at the same rate of simple interest. The rate of interest per annum is:
(A) 5%
(B) 8%
(C) 12%
(D) 15%
Ans. C
Q.8. At what rate percent per annum will a sum of money double in 8 years?
(A) 12.5%
(B) 13.5%
(C) 11.5%
(D) 14.5%
Ans. A
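Q.8 uses a handy shortcut: under simple interest a sum doubles when the total interest equals the principal, i.e. when R × T = 100. A one-line check (ours):

```python
# Q.8: doubling under simple interest needs R * T = 100.
years = 8
rate = 100 / years
print(rate)  # 12.5
```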
Q.9. What percentage of simple interest per annum did Ram pay to Shivam?
I. Ram borrowed Rs. 8000 from Shivam for four years.
II. Ram returned Rs. 8800 to Shivam at the end of two years and settled the loan.
(A) I alone sufficient while II alone not sufficient to answer
(B) II alone sufficient while I alone not sufficient to answer
(C) Either I or II alone sufficient to answer
(D) Both I and II are not sufficient to answer
(E) Both I and II are necessary to answer
Ans. E
Q.10. What annual instalment will discharge a debt of Rs 1092 due in 3 years at 12% simple interest?
(A) Rs.325
(B) Rs.545
(C) Rs.560
(D) Rs.550
Ans. A
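The installment logic of Q.10 can also be checked numerically: each installment paid before the due date keeps earning simple interest until the debt falls due (our own sketch of the standard method):

```python
# Q.10: annual instalment x discharging Rs. 1092 due in 3 years at 12% SI.
# Instalments paid at the ends of years 1, 2, 3 accrue 2, 1, 0 years of
# interest by the due date: x*1.24 + x*1.12 + x = 1092.
debt, rate, years = 1092, 12, 3
growth = sum(1 + rate * k / 100 for k in range(years))  # 1 + 1.12 + 1.24
instalment = debt / growth
print(round(instalment, 2))  # 325.0
```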
If you are facing any problems with the simple and compound interest questions, ask me in the comment section without any hesitation. Visit the next page for more practice.
Showing page 1 of 2 | {"url":"https://www.examsbook.com/problems-on-simple-interest-and-compound-interest-for-bank-exams","timestamp":"2024-11-10T02:42:17Z","content_type":"text/html","content_length":"634034","record_id":"<urn:uuid:1d9457ca-024a-4d1c-878b-03a731487120>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00337.warc.gz"} |
Browse in Grades 9-12, Math Topic, Geometry | NCTM Publications
You are looking at 1-10 of 374 items.
Construct It! Progressively Precise: Three Levels of Geometric Constructions
Carmen Petrick Smith
This article shares an activity scaffolding the construction of the circumcenter of a triangle, culminating with a Triangle-Ball Championship game.
Keith Dreiling
The Trammel of Archimedes traces an ellipse as the machine’s lever is rotated. Specific measurements of the machine are used to compare the machine’s actions on GeoGebra with the graph of the ellipse
and an ellipse formed by the string method.
Algebraic Thinking in the Context of Spatial Visualization
Arsalan Wares and David Custer
This pattern-related problem, appropriate for high school students, involves spatial visualization, promotes geometric and algebraic thinking, and relies on a no-cost computer software program.
Alessandra King, Sophia Ouanes, and Claire Doh
Students and teachers enjoy exploring the boundaries between mathematics and art.
The Hidden Beauty of Complex Numbers
Juan Carlos Ponce Campuzano
The Case for High School Math Pathways
Eric Milou and Steve Leinwand
The standard high school math curriculum is not meeting the needs of the majority of high school students, and serious consideration of rigorous alternatives is a solution whose time has come.
Perspectives from Physics: Constraints on Our Curriculum
Victor Mateas
How trigonometry is used and portrayed differently in mathematics and physics textbooks highlights potential sources for student struggle, constraints on our trigonometry curriculum, and lessons
learned when looking across STEM disciplines.
S^3D: Small-Group, Student-to-Student Discourse
Sarah Quebec Fuentes
Learn about strategies and tools to examine and improve your practice with respect to fostering equitable small-group, student-to-student discourse.
Exploring Geometry with Origami One-Cut-Heart
Lauren R. Holden, Yi-Yin (Winnie) Ko, Devon W. Maxwell, Connor A. Goodwin, Cheng-Hsien Lee, Jennifer E. Runge, and Elizabeth B. Beeman
One-Straight-Cut-Heart activities can help teachers support students’ engagement with geometry and can deepen students’ geometric reasoning.
A Radian Angle Measure and Light Reflection Activity
Hanan Alyami
During a Desmos activity, students adjust the measures of angles in radians to reposition a laser and a mirror so the beam passes through three stationary targets. The Radian Lasers activity can be
extended to simulate project-based learning. | {"url":"https://pubs.nctm.org/browse?access=all&f_0=author&pageSize=10&sort=datedescending&t_0=grades_9-12&t_3=math-topic&t_4=geometry","timestamp":"2024-11-09T04:12:10Z","content_type":"text/html","content_length":"371815","record_id":"<urn:uuid:1b6a9df9-a8a6-4bd0-9f91-759f8e755ce9>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00099.warc.gz"} |
Spring 2013
COS 423
Theory of Algorithms
Course Information | Problem Sets | Lecture Slides | Precepts
Below are links to the problem sets. You will submit the problem sets via our electronic submission system.
# DUE PROBLEM SET SUBMIT COLLABORATION
1 Wed 2/13 not available not available allowed
2 Wed 2/20 not available not available allowed
3 Wed 2/27 not available not available no collaboration
4 Wed 3/13 not available not available allowed
5 Wed 4/3 not available not available allowed
6 Wed 4/17 not available not available allowed
7 Wed 5/1 not available not available allowed
8 Tue 5/14 not available not available no collaboration
Writing up your solutions. Learning how to write clear, concise, and rigorous solutions is an important component of this course. Vague and sloppy solutions often turn out to have inaccuracies that
render them incorrect. You will lose a significant number of points if your solution is imprecise or lacks sufficient explanation, even if the solution turns out to be correct. Here are a few
• Some of problem sets contain word problems that consist primarily of an English description of a problem, with little or no mathematical notation. This is intentional. For such problems, you
should first extract the essence of the underlying problem and formalize it mathematically, then solve the problem.
• When a problem asks you to design an efficient algorithm, you are expected to provide an efficient algorithm along with both a proof of correctness and an analysis of its running time. Often, it
is your job to determine what efficient means in the given context—Θ(n log n), Θ(n^2), or something else.
• Typically, the best way to describe your solution is to first explain the key ideas in English, possibly with the use of some clearly-defined notation and some high-level pseudocode. A solution
consisting solely of pseudocode and no accompanying explanation will receive little or no partial credit.
• Once you have discovered a solution to the problem, try to simplify it and make it as elegant and clean as possible. This will not only help the grader understand your solution but it will
provide you with an opportunity to clarify your thoughts and gain insight into the problem. At this point, you may even be able to improve your algorithm and analysis. Along these lines, we will
award bonus points for especially elegant or efficient solutions.
• If you don't know the answer, then write I don't know, for which you will be awarded 20% of the available credit. You will receive no credit for a deeply flawed answer. It is better to
acknowledge that you are stumped than to pretend that a bogus solution is correct.
Submission policy. You must submit your solutions electronically via the Dropbox submission system. Your solutions should be carefully organized and neatly typeset in LaTeX using the COS 423 LaTeX
template (and the pdf output). If you're new to LaTeX, here is a brief LaTeX guide (and the tex source file). Please follow these guidelines:
• Submit one tex file and one pdf file for each problem (along with any accompanying figures), using the naming convention problem1-3.tex and problem1-3.pdf for problem 3 on problem set 1.
• Write your name, your login, and the problem number at the top of every page.
• Write the login of each of your collaborators at the bottom of the first page.
You will need to type your Princeton netID and password for authentication. You can resubmit and unsubmit files as needed up until the submission deadline. Any files submitted at grading time will be
graded as is.
Lateness policy. Problem sets are due at 11pm on the date specified, with a 3-hour grace period. Late submissions are assessed a penalty of 10% of the total points for that problem set per day or
partial day: 0–3 hours late (no penalty), 3–24 hours late (10%), 24–48 hours late (20%), and so forth. Your first 4 late days are automatically waived. No additional lateness days will be waived
without the recommendation of a Dean or a letter from McCosh Health Center.
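For concreteness, the schedule above could be sketched as a small helper (this is our illustration, not official course code; the behavior exactly at a 24-hour boundary is our reading of the table):

```python
import math

def late_penalty_fraction(hours_late):
    """Fraction of total points deducted: a 3-hour grace period,
    then 10% per day or partial day, per the stated schedule."""
    if hours_late <= 3:
        return 0.0
    return 0.10 * math.ceil(hours_late / 24)

print(late_penalty_fraction(2))   # 0.0
print(late_penalty_fraction(10))  # 0.1
print(late_penalty_fraction(30))  # 0.2
```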
Collaboration policy. Designing and analyzing an algorithm is an individual creative process much like writing a composition or computer program. You must reach your own understanding of the problem
and discover a path to its solution. Each problem set will be designated as either no collaboration or collaboration allowed.
• No collaboration. Collaboration of any kind is a violation of academic regulations. There are two exceptions: you may consult the course staff and you may consult the COS 423 course materials.
Course materials are limited to the following: the Kleinberg-Tardos textbook, course handouts, lecture slides, your class notes, and your solutions to previous problem sets.
• Collaboration allowed. You are permitted and encouraged to discuss ideas with classmates. However, when the time comes to write out your solution, such discussions are no longer appropriate—your
solution must be your own work, in your own words.
Academic integrity. The creative process leading to the discovery of a solution is as important as understanding and being able to present a solution. The following activities are explicitly
prohibited in all graded coursework:
• Failing to properly cite your source and collaborators. You must always cite your sources (other than the COS 423 materials) and the names of any individuals with whom you collaborated (other
than COS 423 staff members).
• Copying or paraphrasing another person's solution. This includes adapting solutions or partial solutions from any offering of this course or any other course.
• Possessing another person's solution or partial solution. This includes solutions in electronic, handwritten, or printed form.
• Giving a solution or partial solution to another person. This applies even if you have the explicit understanding that it will not be copied.
• Looking up solutions. This includes searching for solutions from previous offerings of this class or any other class.
• Providing, receiving, or soliciting help to or from another person on a problem set that is designated as no collaboration. The one exception is help from a staff member.
• Publishing a solution or partial solution to a problem set. This applies even after the course is over.
This policy supplements the University's academic regulations, making explicit what constitutes a violation for this course. Princeton Rights, Rules, Responsibilities handbook asserts:
The only adequate defense for a student accused of an academic violation is that the work in question does not, in fact, constitute a violation. Neither the defense that the student was ignorant
of the regulations concerning academic violations nor the defense that the student was under pressure at the time the violation was committed is considered an adequate defense.
If you have any questions about these matters, please consult a course staff member. Violators will be referred to the Committee on Discipline for review; if found guilty, you will receive an F as a
course grade plus whatever disciplinary action the Committee imposes. | {"url":"https://www.cs.princeton.edu/courses/archive/spring13/cos423/assignments.php","timestamp":"2024-11-05T00:12:05Z","content_type":"text/html","content_length":"10026","record_id":"<urn:uuid:676c9b22-ede3-476a-9fa6-436ae6320c02>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00867.warc.gz"} |
A Guide To The Cornell Physics 1112 Lecture Schedule - Do My Physics Exam
If you are a physics student, you will find that the class of 2020 syllabus from The College of Letters and Science at the University of Michigan has a lot to offer. This is one of the oldest courses in the curriculum, covering a wide range of topics in mathematics and science, including calculus, optics, thermodynamics, and electricity and magnetism. Although quite a few changes are made to this course every year, students who are interested in becoming nuclear scientists or working in other fields of physics can still benefit from it.
The University of Michigan Physics 1112 syllabus features a number of changes from previous years. For starters, the course was restructured to teach a wider array of
topics. In addition, the new syllabus was introduced by allowing students to choose their own units to learn from. The old syllabus allowed students to choose from four units and study the concepts
in these units.
Students are now permitted to study at their own pace. They can choose which units to learn and how many units they want to take on. This is a great option for students, since it makes it easy for them to work through the syllabus as they study.
The course involves many modules that are presented in a sequential manner. Students will learn topics and theories that are relevant to their career goals. There are also tests at the end of each
module, and students will receive feedback from instructors on how well they understood the material covered in that module. Students can opt to take the tests in the morning or in the evening,
depending on their personal schedules.
Students have a lot of freedom to choose their units and groups to learn from. It is their choice, therefore, on how long they would like to study each module. They may decide to go over some topics
more than others, depending on the subjects that interest them. Students can select general topics, such as atoms, molecules, or energy, or focus on their favorite area of physics.
The syllabus also allows students to choose the time that they would like to study and work at their own schedule. Previously, students were required to spend a certain amount of time at a desk
throughout the semester, but now students can work on their assignments whenever they wish.
The course includes a number of projects that students have to complete by the end of the semester. They can submit a project at the end of the semester, or they may take responsibility for starting the project planning process. Either way, it is their responsibility to make sure that the project is written before the deadline. Students have a large selection of projects to complete,
with topics ranging from understanding the behavior of magnets and the behavior of atoms to calculating the behavior of the electric field and how it relates to electricity.
Students have the choice of taking a full course in a semester or only part of a course; the decision is theirs. There are also sections that give students the opportunity to complete the course online. Students can even take the course online after having earned a bachelor's degree.
Students can find out how many units they will need for the course by looking at the course description or by consulting the course map. Units matter because they weight each grade in the student's average, so they should not be taken for granted. Most courses, however, publish a specific grade distribution for students to follow, so students can avoid unnecessary confusion when learning how to calculate their grade point average.
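As an illustration of the grade point arithmetic (the 4.0 scale and names here are our assumptions, not the university's official method), a unit-weighted average looks like this:

```python
def grade_point_average(courses):
    """courses: (grade_points, units) pairs; units weight each grade."""
    total_points = sum(g * u for g, u in courses)
    total_units = sum(u for _, u in courses)
    return total_points / total_units

# An A (4.0) in a 4-unit course and a B (3.0) in a 3-unit course:
print(round(grade_point_average([(4.0, 4), (3.0, 3)]), 2))  # 3.57
```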
The class usually consists of about 50 minutes of lecture time and one class period per week. There is usually a break period of about two minutes, at the end of the semester, so that students can
review the materials and get prepared for the next semester’s lab. Most students have to pass an exam in order to continue on with the next semester’s class.
The course is an excellent introduction to the subject matter for students who are just beginning their studies. It also provides the necessary skills needed to understand the concepts used by
graduate students in research, and is a good choice for students who want to expand their knowledge and understand more about the subject matter. It provides students with opportunities for
self-directed learning and allows students to move forward at their own pace in the course. | {"url":"https://domyphysicsexam.com/a-guide-to-the-cornell-physics-1112-lecture-schedule/","timestamp":"2024-11-13T18:43:13Z","content_type":"text/html","content_length":"112799","record_id":"<urn:uuid:6cd2c970-4c60-4573-994f-11550e04769a>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00255.warc.gz"} |
www.math trivia question algebra

Related topics:
radical expression solver | trigonometric problems | multiplying fractions with radicals | free 9th grade math worksheets and answers | real-world problems using exponential decay | algebraic formula for finding percents | variable in the exponent | adding measurements worksheets | mcdougal littell math algebra 1 workbook answers | factor pyramids homework math 5 grade | how to find the vertex on a graphing calculator | adding, multiplying and division of integers | ged algebra + substituting variables | examples of an equation involving rational algebraic expressions

Author Message

Shannoto
Registered: 25.03.2002
From: Drenthe, The Netherlands

Posted: Saturday 30th of Dec 20:11
Hey guys! Ever since I encountered www.math trivia question algebra at school, I never seem to be able to get through it well. I am quite good at all the other chapters, but this particular area seems to be my weakness. Can someone help me learn it properly?

ameich
Registered: 21.03.2005
From: Prague, Czech Republic

Posted: Monday 01st of Jan 19:41
Believe me, it's sometimes quite difficult to learn a topic alone because of its difficulty, just like www.math trivia question algebra. It's sometimes better to ask someone to explain the details rather than working out the topic on your own. That way, you can understand it very well, because the topic can be explained systematically. Luckily, I found this new program that could help in understanding problems in math. It's an inexpensive, quick, convenient way of understanding math lessons. Try making use of Algebrator and I guarantee you that you'll have no trouble solving math problems anymore. It displays all the useful solutions for a problem. You'll have a good time learning algebra because it's user-friendly. Try it.

SanG
Registered: 31.08.2001
From: Beautiful Northwest Lower Michigan

Posted: Wednesday 03rd of Jan 13:29
I agree. Algebrator not only gets your assignment done faster, it actually improves your understanding of the subject by providing useful information on how to solve similar questions. It is a very popular software among students, so you should try it out.

camdota
Registered: 19.07.2004
From: 11111111

Posted: Thursday 04th of Jan 18:19
Thank you, I will check out the suggested program. I have never used any program before; I didn't even know that they exist. But it sure sounds amazing! Where did you find the program? I want to buy it right away, so I have time to study for the test.

thicxolmed01
From: Welly, NZ

Posted: Saturday 06th of Jan 07:28
Thank you very much for the detailed information. We will surely check this out. Hope we get our assignments finished with the help of Algebrator. If we have any technical queries with respect to its use, we would definitely get back to you again.
| {"url":"https://linear-equation.com/linear-equation-graph/difference-of-squares/www.math-trivia-question.html","timestamp":"2024-11-09T10:56:10Z","content_type":"text/html","content_length":"90098","record_id":"<urn:uuid:81365fb8-dcb5-49b4-9061-dc9507bcd911>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00218.warc.gz"}
Block Party - The Futures Channel
Grade Levels: 3rd Grade, 4th Grade, 5th Grade, 6th Grade, 7th Grade,
Topics: Measurement (volume), Geometry (rectangular prisms)
Common Core State Standard: 5.MD.4, 6.G.4
· Surface area
· Volume
· Cubic centimeter
Knowledge and Skills:
· Can find the volume of a rectangular prism by computation
Materials (for each team):
· one pair of safety scissors
· one roll of ½” wide transparent tape
· a few sheets of one-centimeter grid paper.
Download the Teacher Guide PDF
Procedure: This activity is best done individually, but can also be done in teams of two.
Distribute the two handouts (instruction sheet and patterns). Review the instructions and have students begin the activity.
As the students work, circulate, observe and have them describe what they are doing. Be patient, as this activity may take 45 minutes or more.
You may wish to give students the hint that one way to match the patterns to their shapes is by determining the surface area of each (the surface area of the pattern will be equal to the surface area
of the shape).
To speed up the activity, or for students that need extra help, you may wish to make the shapes yourself in advance and have them available as models.
At the conclusion of Part I, ask this question (students may answer orally or in writing):
“Suppose I have two shapes with the same volume. Will those shapes always have the same surface area? Explain your answer.” (As demonstrated by the two larger shapes in Part I, shapes that have the
same volume do not always have the same surface area.)
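The answer to that question can also be checked numerically. The sketch below uses two illustrative prisms of my own choosing (they are not from the lesson's handouts), each with a volume of 16 cubic centimeters but with different surface areas:

```python
def volume(l, w, h):
    # volume of a rectangular prism, in cubic centimeters
    return l * w * h

def surface_area(l, w, h):
    # total area of the six rectangular faces, in square centimeters
    return 2 * (l * w + l * h + w * h)

# Two prisms with equal volume but different surface areas
a = (1, 1, 16)   # a long 1 x 1 x 16 stick
b = (2, 2, 4)    # a squatter 2 x 2 x 4 block
print(volume(*a), volume(*b))              # both 16
print(surface_area(*a), surface_area(*b))  # 66 vs. 40
```

The long thin stick exposes much more surface for the same volume, which is the point students should discover in Part I.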
For Part II of the activity, you will need to provide students with 1-centimeter grid paper for their patterns.
Block Party
Part I : Each of the shapes below can be made from the patterns on the grid that your teacher will give you. (Be sure to cut all dotted lines.)
For each shape, choose the correct pattern, and make the shape (be sure to cut all dotted lines around and in each pattern).
Then find the volume of the shape and its surface area.
Block Party Patterns: Each square in the grid is one square centimeter. | {"url":"https://thefutureschannel.com/lesson/block-party/","timestamp":"2024-11-07T17:35:21Z","content_type":"text/html","content_length":"61446","record_id":"<urn:uuid:a4f30e35-a719-427e-831c-9a319557273b>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00136.warc.gz"} |
Spin-glass model of evolution
The model of evolution discussed in Quasispecies implies a strong assumption: the selective value is determined by the Hamming distance between a particular sequence and the unique master sequence. Only one maximum of the selective value exists. Using the physical spin-glass concept, we can construct a similar model with a very large number of local maxima of the selective value.
D. Sherrington and S. Kirkpatrick proposed a simple spin-glass model to interpret the physical properties of systems consisting of randomly interacting spins [1]. This well-known model can be briefly described as follows:
1) There is a system S of spins S[i] , i = 1,..., N (the number of spins N is supposed to be large, N >> 1). Spins take the values S[i] = 1, -1 .
2) The exchange interactions between spins are random. The energy of the spin system is defined as:
E(S) = - Σ_{i<j} J[ij] S[i] S[j] , (1)
where J[ij] are the exchange interactions matrix elements. J[ij] are normally distributed random values:
Probability_density{J[ij]} = (2π)^-1/2 (N-1)^1/2 exp[- J[ij]^2 (N-1)/2] . (2)
From (1), (2) one can obtain, that the mean spin-glass energy is zero (<E > = 0) , and the mean square root value of the energy variation at one-spin reversal is equal to 2:
MSQR {dE (S[i] --> - S[i]) } = 2 . (3)
The model (1), (2) was intensively investigated. For further consideration the following spin-glass features are essential :
- the number of local energy minima M is very large: M ~ exp(aN) , a ≈ 0.2 (a local energy minimum is defined as a state S[L] at which any one-spin reversal increases the energy E );
- the global energy minimum E[0] equals approximately E[0] = - 0.8 N .
Let's construct the spin-glass model of evolution [2]. We suppose that an informational sequence (a genome of a model "organism") can be represented by a spin-glass system vector S . The evolved population is a set {S[k]} of n sequences, k = 1,..., n. The selective values of model "organisms" S[k] are defined as:
f(S[k]) = exp[- b E(S[k])] , (4)
where b is the selection intensity parameter.
The definition (4) implies that the model genome S[k] consists of different elements S[ki] , which pairwise interact in accordance with the random interaction matrix J[ij] . In order to maximize the "organism" selective value (that is, to minimize the energy E(S)), it is necessary to find such a combination of elements S[i] that provides maximally cooperative interactions for the given matrix J[ij] .
As in Quasispecies , we suppose, that 1) the evolution process consists of consequent generations, 2) new generations are obtained by selection (in accordance with selective values (4)) and mutations
(sign reversals of sequence symbols, S[ki] --> - S[ki] , with the probability P for any symbol) of sequences S[k ]. The initial population is supposed to be random.
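The evolution loop just described can be simulated directly. The sketch below is my own illustration (the parameter values are arbitrary, not from [2]): it draws a symmetric Gaussian coupling matrix with the variance implied by (2), then alternates selection proportional to the selective values (4) with per-symbol mutation of probability P:

```python
import numpy as np

def spin_glass_evolution(N=64, n=50, P=0.01, beta=1.0, T=500, seed=0):
    """Evolve a population of n spin sequences of length N by selection
    proportional to f(S) = exp(-beta * E(S)) plus per-symbol mutation
    with probability P.  Returns the final mean energy per spin."""
    rng = np.random.default_rng(seed)
    # Symmetric couplings J[ij] with variance 1/(N-1) and zero diagonal, cf. (2)
    J = rng.normal(0.0, 1.0 / np.sqrt(N - 1), size=(N, N))
    J = np.triu(J, 1)
    J = J + J.T

    def energy(S):
        # E(S) = - sum_{i<j} J[ij] S[i] S[j] = -(1/2) S.J.S , cf. (1)
        return -0.5 * np.einsum('ki,ij,kj->k', S, J, S)

    S = rng.choice([-1, 1], size=(n, N))     # random initial population
    for _ in range(T):
        E = energy(S)
        w = np.exp(-beta * (E - E.min()))    # selective values (4), shifted for stability
        parents = rng.choice(n, size=n, p=w / w.sum())
        S = S[parents]
        flips = rng.random((n, N)) < P       # mutations: sign reversals
        S = np.where(flips, -S, S)
    return energy(S).mean() / N
```

With a fixed seed, the mean energy per spin descends from near 0 toward the local-minimum range, bounded below by roughly -0.8 per spin.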
The described spin-glass model of evolution was analyzed by means of computer simulations and rough estimations [2]. The main evolution features are illustrated by Fig.1. Here n(E) is the number of
sequences S , such that E(S) = E in a considered population; t is the generation number.
Fig. 1. The sequence distribution n(E) at different generations t ; t[3] > t[2] > t[1] ; E[0] and E[L] are global and local energy minima, respectively; the global energy minimum E[0] equals
approximately: E[0] = - 0,8 N. Schematically, according to the computer simulations [2].
The spin-glass-type evolution is analogous to the Hamming distance case (see Quasispecies, Estimation of the evolution rate). But unlike the Hamming-distance model, the evolution converges to one of
the local energy minima E[L] , which can be different for different evolution realizations.
Because one mutation gives the energy change dE ~ 2 (see (3)), and the typical time for one mutation per sequence dt is of the order (PN)^-1, the total number of evolution generations T (at sufficiently large selection intensity b) can be estimated as T ~ (|E[0]|/dE)·dt ~ (0.8N/2)·(PN)^-1. This value is close to the estimate of T in the Hamming-distance case. So the estimates of the evolution rate are roughly the same for both models, and we can use formulas (1)-(3) in Quasispecies to characterize the spin-glass-type evolution as well.
Analogously to the Hamming-distance case, we can consider the sequential method of energy minimization, that is, consecutive changes of symbols (S[i] --> - S[i]) of one sequence with fixation of only the successful reversals. The sequential search needs a smaller number of participants than the evolutionary search. Nevertheless, the evolutionary search provides on average a deeper local energy minimum E[L] [2], because different valleys in the energy landscape are explored simultaneously in the evolution process while descending to energy minima.
Thus, in the spin-glass case, the evolutionary search has a certain advantage with respect to the sequential search: it provides on average the greater selective value.
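For comparison, the sequential search mentioned above can be sketched as a greedy single-spin descent (again my own illustration, not code from [2]); it halts exactly when the state is a local energy minimum, i.e. when every one-spin reversal would increase E:

```python
import numpy as np

def greedy_descent(J, S):
    """Sequential search: flip single spins while any flip lowers
    E(S) = -(1/2) S.J.S; halts at a local energy minimum."""
    S = S.copy()
    improved = True
    while improved:
        improved = False
        for i in range(len(S)):
            # energy change of the reversal S[i] -> -S[i]: dE = 2 S[i] (J S)[i]
            if 2.0 * S[i] * (J[i] @ S) < 0:
                S[i] = -S[i]
                improved = True
    return S
```

On termination, every single-spin energy change is non-negative, which is precisely the definition of a local minimum used above.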
Conclusion. The spin-glass model of evolution refers to "organisms" that have many randomly interacting genome elements. Evolution can be considered as a search for genome elements that are able to cooperate in the most successful manner.
1. D.Sherrington , S.Kirkpatrick. // Physical Review Letters. 1975. V.35. N.26. P.1792. S.Kirkpatrick, D.Sherrington. // Physical Review B. 1978. V.17. N.11. P.4384.
2.V.G.Red'ko. Biofizika. 1990. Vol. 35. N.5. P. 831 (In Russian). | {"url":"http://pespmc1.vub.ac.be/SPINGL.html","timestamp":"2024-11-04T02:03:47Z","content_type":"text/html","content_length":"13158","record_id":"<urn:uuid:84881903-8175-4c34-a147-a1625ae152dd>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00589.warc.gz"} |
identity type
I would even argue that defining terms and types using the identity type is more in line with how mathematicians define terms and types elsewhere in mathematics. For example, in the Openstax Calculus
Volume 2 textbook by Strang, Herman, et al, they give the definition of the area of the curve under a function $f$ on page 17 as
Let $f(x)$ be a continuous, nonnegative function on an interval $[a, b]$, and let $\sum_{i = 1}^{n} f(x_i^*) \Delta x$ be a Riemann sum for $f(x)$. Then the area under the curve $y = f(x)$ on $[a, b]$ is given by
$A = \lim_{n \to \infty} \sum_{i = 1}^{n} f(x_i^*) \Delta x$
The authors used propositional equality rather than the assignment operator or definitional equality.
Sure you can, you use the given identity type instead of definitional equality in objective type theory. You could define a term $a:A$ to be a term $b:A$ by saying that term $a:A$ comes with a path $
\mathrm{def}(a, b):a =_A b$. Similarly, you could define a type $A:\mathcal{U}$ to be a type $B:\mathcal{U}$ by saying that the type $A:\mathcal{U}$ comes with a path $\mathrm{def}(A, B):A =_\mathcal
{U} B$.
Something similar happens for inductive definitions. For example, addition on the natural numbers could be defined as a function $(-)+(-):\mathbb{N} \times \mathbb{N} \to \mathbb{N}$ with paths $\
mathrm{basecase}(n): 0 + n =_\mathbb{N} n$ and $\mathrm{indcase}(m, n): s(m) + n =_\mathbb{N} s(m + n)$.
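For contrast, the sketch below (my own, in Lean 4, which does have definitional equality, so it is not objective type theory itself) shows the same two computation rules for addition holding by `rfl`; in objective type theory, `basecase` and `indcase` would instead be supplied as primitive identifications:

```lean
-- Addition by recursion on the first argument; in objective type theory,
-- the two computation rules below would be primitive paths, not `rfl`.
def add : Nat → Nat → Nat
  | 0,     n => n
  | m + 1, n => add m n + 1

theorem basecase (n : Nat) : add 0 n = n := rfl
theorem indcase (m n : Nat) : add (m + 1) n = add m n + 1 := rfl
```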
Re 103
Just a note, objective type theory is an incomplete theory, and in particular, the authors of the objective type theory paper never addressed how to define types and terms in objective type theory
without definitional equality.
Usually, defining a type or a term involves the usage of the operator $\coloneqq$, which was formally defined in Egbert Rijke’s Introduction to Homotopy Type Theory textbook on page 14 in Remark
2.2.1. Egbert’s formal definition uses definitional equality, and I’m not aware of any definition of $\coloneqq$ which doesn’t use definitional equality.
So I would be wary of making any philosophical claims about definitions in objective type theory until the theory is more fully fleshed out.
Mike Shulman wrote
The factual statements here are true, but I don’t think it follows that definitional equality isn’t relevant or that we should motivate general identity types by the properties which hold for all
of them. The definitional equalities that hold in a particular type theory are an indication of how that type theory philosophically/intuitively holds things to be defined, as opposed to the
properties that those things have after they’ve been defined, and hence determines the motivation that is most relevant to it.
Regarding definitional equalities holding in a particular type theory, objective type theory simply does not have definitional equality in the theory at all, yet it still has identity types. Any
discussion about identity types and its motivations has to deal with the fact that objective type theory does not have definitional equality and yet is still able to define identity types.
what I’m saying is that $Id_{Id_X(x,x')}(p \cdot id_x, p)$ and $Id_{\sum_{x':X} Id_X(x,x')}((x,id_x),(x',p))$ are different types.
Okay, I am not concerned with the first type and frankly i can’t tell why you bring it up.
I wrote out standard or at least well-known facts, with a fair bit of effort in typesetting them readably, illustrating their meaning and referencing them in more detail than usual. If there are
typos left, I’ll fix them. Otherwise I am bowing out of this thread.
Re 77
However, definitional equality isn’t really relevant here: definitional equality of two elements implies having a term of the identity type between two elements; i.e. the definitional
computational rules of Martin-Löf identity types imply the propositional computation rule, and the definitional transport rules of cubical/higher observational’s identity/path types imply the
propositional transport rules.
and 80
And non-dependent identity types have certain properties which hold universally across type theories regardless if the non-dependent identity types are primitive in the theory or derived from
some other type family, such as the propositional J-rule, propositional transport, and reflexivity, and the fact that their categorical semantics is a path space object. So we could and should
motivate general identity types by these properties which hold for all definitions of non-dependent identity types.
(I can’t tell whether these were the same Guest, because these Guests are still not using identifiable pseudonyms; but they seem similar, so maybe.)
The factual statements here are true, but I don’t think it follows that definitional equality isn’t relevant or that we should motivate general identity types by the properties which hold for all of
them. The definitional equalities that hold in a particular type theory are an indication of how that type theory philosophically/intuitively holds things to be defined, as opposed to the properties
that those things have after they’ve been defined, and hence determines the motivation that is most relevant to it.
Re 105
van der Berg and Besten did not define universes in objective type theory yet. If one tried to define universes, one could either define Russell universes or weakly Tarski universes. (strict Tarski
universes aren’t possible because of the definitional equality in the definition of strict Tarski universes)
Regardless of which definition one uses, if the type theory has a separate $A \; \mathrm{type}$ judgment, one would find that your statement only applies for types $A$ or $\mathrm{El}(A)$ which have
a corresponding term of the universe $A:U$. If the type theory has types which don’t belong to a universe, or if the type theory doesn’t have weakly Tarski universe or Russell universes, then
defining a type $B$ to be a type $A$ not in a universe doesn’t work by your method.
For example, in Egbert Rijke’s textbook, he defines the function type $A \to B$ to be the dependent function type on a constant type family
$A \to B \coloneqq \prod_{x:A} B$
He has not introduced universes yet in his theory, and so the dependent function types do not belong to a universe. Defining function types in this way by your method isn't possible in objective type theory at this stage of the theory.
Also, Benno van der Berg and Martijn Besten explicitly state that objective type theory is a modification of Martin-Löf type theory with all definitional equalities replaced with identifications in
the computation rules. So the motivation in objective type theory is the same as the motivation in Martin-Löf type theory; the only difference being that the definitional equality is swapped out with
an identification in the computation rules.
One could do the same thing in cubical type theory and swap all the definitional equalities out for path types, but the motivation there for introducing path types is still different from the
motivation in Martin-Löf type theory, even if one gets rid of all definitional equality and replaces it with path types.
Whatever the status of objective type theory, I’m happy to suppose for the sake of argument that it’s possible to have a type theory with identity types and without definitional equality. I don’t
think that changes the fact that for type theories that do have definitional equality, the definitional equalities that they have are an indication of the intuition and motivation behind their
identity types (or any other types). Even in a type theory without definitional equality, one can get some of the same information from which propositional equalities are posited “by definition” and
which are derived from that. (E.g. in Book HoTT, the computation rules for eliminators on path-constructors of higher inductive types hold only propositionally, but we still regard them as “by
definition” for purposes of intuition and motivation.)
Since I’ve apparently failed to communicate my point to Urs in this discussion, I’ve gone ahead and edited the page; maybe these changes will make it clearer.
I don’t know what “Identifications are preserved under composition with self-identifications” means or what it has to do with the uniqueness principle. In particular, there is no “composition” in the
uniqueness principle, and I don’t know what is being “preserved”. The best slogan I could think of that comes close to expressing what the uniqueness principle says is “An identification identifies
itself with a self-identification”.
I’ve also added some text to the “semantics” paragraph explaining how the picture drawn doesn’t quite match what the type theory says.
diff, v83, current
Can I ask you, Mike, what you take to be the relationship between a type-theoretic justification of a construction and a category-theoretic one, e.g., as indicated for identity types in #95? Klev in
that article makes the Martin-Löf move of criticising Walsh’s category-theoretic justification of path-induction as just translating from one language to another, when what’s needed is some proper
grounding, as in the meaning explanation he gives for the J-rule. But as Walsh explains, the category-theoretic version of harmony isn’t just free-floating, it comes with an inferentialist
justification, expressions take their meaning from how they’re formed/justified and how they’re used. The web of adjunctions that is logic dictates an introduction/elimination format and keeps
everything harmonious, just as the inferentialist would want.
I was invited by Klev to Prague back in the summer. He thinks in a purely type-theoretic way. It seems an odd situation to me that these different styles of justification exist, especially when they
deliver the same practical answer. Is there something beyond ’constructions are important so long as they make sense to all sides’ (computational trinitarianism) towards ’constructions that make good
sense to any side should make good sense to the others’ (perhaps due to a ’triality’)?
I have not read Klev’s papers (and they’re paywalled for me). I do find it a very powerful observation that type theorists and category theorists have independently arrived at very similar ways of
justifying/defining/characterizing objects via introduction and elimination. And I agree that in general, we should expect something that makes sense in one language to also make sense in another
language, although it’s also true that sometimes certain things are easier to express in one language versus another (otherwise there would be no point in having more than one language!). It seems
silly to me to argue about whether the type-theoretic or category-theoretic form of this justification is better or more meaningful, when they carry essentially the same information.
There should be a common philosophical core behind Martin-Löf-ian and inferentialist justification strategies.
added redirects for identification type and identification types
diff, v91, current
starting section “note on terminology” about the different names used for identity types and terms of identity types
diff, v91, current
Mike Shulman said that dependent type theory is not sequent calculus, so removed mention of sequent calculus from article
diff, v97, current
I have rolled back the deletion
and replaced “sequent calculus” by “natural deduction” (here)
diff, v98, current
I think it would be useful to add the other version of the elimination and computation rules for identity types, known as based path induction in the HoTT book. Using the existing notation on the page:

$\frac{\Gamma, x:A, y:A, p:\mathrm{Id}_A(x, y), \Delta(x, y, p) \vdash C(x, y, p):\mathrm{Type}}{\Gamma, x:A, t:C(x, x, r(x)), y:A, p:\mathrm{Id}_A(x, y), \Delta(x, t, y, p) \vdash J(x, t, y, p):C(x, y, p)}$

$\frac{\Gamma, x:A, y:A, p:\mathrm{Id}_A(x, y), \Delta(x, y, p) \vdash C(x, y, p):\mathrm{Type}}{\Gamma, x:A, t:C(x, x, r(x)), \Delta(x, t) \vdash J(x, t, x, r(x)) \equiv t:C(x, x, r(x))}$

$\frac{\Gamma, x:A, y:A, p:\mathrm{Id}_A(x, y), \Delta(x, y, p) \vdash C(x, y, p):\mathrm{Type}}{\Gamma, x:A, t:C(x, x, r(x)), \Delta(x, t) \vdash \beta(x, t):\mathrm{Id}_{C(x, x, r(x))}(J(x, t, x, r(x)), t)}$
I have added inference rules for based path induction to the article, section headers for path induction and based path induction, and a explanation to what the inference rules for based path
induction mean, in contrast to that of the usual path induction. I have also added the reference to section 1.12 of the HoTT book which explains path induction and based path induction, as well as a
reference to section 5.1 of Rijke’s Introduction to Homotopy Type Theory which explains a similar distinction.
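As an illustration (my own sketch, using Lean 4 as a stand-in, with `Eq` playing the role of the identity type and `rfl` the role of `r(x)`), based path induction fixes the left endpoint `x` and eliminates over `y` and `p` only; the computation rule then holds definitionally:

```lean
-- Based path induction: the endpoint x is fixed throughout.
def J' {A : Type} (x : A) (C : (y : A) → x = y → Sort u)
    (t : C x rfl) : (y : A) → (p : x = y) → C y p
  | _, rfl => t

-- The computation rule J'(x, t, x, r(x)) ≡ t holds by rfl:
example {A : Type} (x : A) (C : (y : A) → x = y → Sort u)
    (t : C x rfl) : J' x C t x rfl = t := rfl
```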
There are duplicated sentences between the two sections which I have left in the article for the time being, but which may not be necessary.
diff, v101, current
Add missing \Delta(x,y,p) to #### Standard J rule
diff, v117, current | {"url":"https://nforum.ncatlab.org/discussion/3306/identity-type/?Focus=112791","timestamp":"2024-11-13T11:36:49Z","content_type":"application/xhtml+xml","content_length":"98943","record_id":"<urn:uuid:3e88ed85-e133-4dfd-99ac-97bc3a7a3023>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00334.warc.gz"} |
seminars - The Minicourse of lattice algorithm
Traditional cryptography is ill-suited to modern security needs arising from the outsourced storage and computation that the "cloud" offers. This minicourse on lattice algorithms centers on encryption and its advanced variants, which are better suited to the cloud. We will introduce hard problems related to lattices and show how to design protocols whose security provably relies on the difficulty of those problems. We will start from basic lattice algorithms and reduction techniques, and move up to more and more advanced techniques for constructing encryption suited to the cloud. | {"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&page=88&l=ko&sort_index=Time&order_type=desc&document_srl=724897","timestamp":"2024-11-02T18:49:18Z","content_type":"text/html","content_length":"45244","record_id":"<urn:uuid:f39d784a-f00d-45e0-9b90-3e9d2cfec8d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00361.warc.gz"}
[Solved] Draw a series–parallel switch circuit that | SolutionInn
Draw a series–parallel switch circuit that implements the function f(x, y, z) = 1 if inputs xyz represent either 1 or a prime number in binary (xyz = 001, 010, 011, 101, 111).
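One way to approach this: the function is 1 on the minterms 001, 010, 011, 101, 111, which simplifies to f = z OR (NOT x AND y). As a series–parallel circuit, that is a switch controlled by z in parallel with a series pair (a normally-closed switch for x in series with a switch for y). The sketch below is my own derivation, not the site's posted answer; it just verifies the expression against the full truth table:

```python
from itertools import product

def f(x, y, z):
    # target truth table: f = 1 when the bits xyz encode 1 or a prime (1, 2, 3, 5, 7)
    return int((4 * x + 2 * y + z) in {1, 2, 3, 5, 7})

def circuit(x, y, z):
    # switch z in parallel with the series pair (normally-closed x, then y):
    # the path conducts iff z is closed, OR (x is open AND y is closed)
    return z | ((1 - x) & y)

assert all(f(x, y, z) == circuit(x, y, z) for x, y, z in product((0, 1), repeat=3))
print("circuit matches f on all 8 inputs")
```

Since the expression agrees with f on all eight input combinations, the two-branch series–parallel network realizes the required function.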
| {"url":"https://www.solutioninn.com/study-help/digital-design-using-vhdl-a-systems-approach/draw-a-seriesparallel-switch-circuit-that-implements-the-function-fx","timestamp":"2024-11-10T09:35:54Z","content_type":"text/html","content_length":"79409","record_id":"<urn:uuid:0c76943c-ba19-4f87-883a-3edc70a28981>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00752.warc.gz"}
Solving Equations With Fractions Worksheet Kuta - Equations Worksheets
Solving Equations With Fractions Worksheet Kuta
Solving Equations With Fractions Worksheet Kuta – The purpose of Expressions and Equations Worksheets is to help your child learn more effectively and efficiently. The worksheets include interactive
exercises as well as problems built around the order of how operations are conducted. These worksheets make it simple for children to grasp complicated concepts as well as simple concepts in a short
time. Download these free resources in PDF format to aid your child’s learning and practice math concepts. These resources are useful for students from 5th-8th grade.
Get Free Solving Equations With Fractions Worksheet Kuta
The worksheets are suitable for use by students in the 5th-8th grades. The two-step word problems were created with fractions or decimals. Each worksheet contains ten problems. The worksheets are
available online and in print. These worksheets can be a wonderful way to practice rearranging equations. These worksheets can be used to practice rearranging equations , and aid students in
understanding equality and inverse operations.
These worksheets are designed for students in fifth and eighth grade. They are ideal for students who struggle to calculate percentages. You can choose from three different types of problems. You can
decide to tackle one-step challenges that contain decimal or whole numbers, or you can use word-based approaches to solve decimals or fractions. Each page will contain ten equations. The Equations
Worksheets are suggested for students from 5th to 8th grade.
These worksheets can be a wonderful source for practicing fractions and other concepts related to algebra. Some of the worksheets let you to select between three different types of problems. You can
select one that is numerical, word-based or a combination of both. It is essential to pick the type of problem, as every challenge will be unique. Every page is filled with ten issues that make them
a fantastic resource for students from 5th to 8th grade.
These worksheets are designed to teach students about the relationship between variables as well as numbers. They allow students to practice solving polynomial equations and discover how to use
equations to solve problems in everyday life. These worksheets are a fantastic way to get to know more about equations and formulas. They will assist you in learning about different types of
mathematical issues and the various kinds of symbols used to represent them.
These worksheets are great for students in the early grades. These worksheets can teach students how to graph equations and solve them. These worksheets are ideal for practicing polynomial
variables. They will also help you learn how to factor and simplify them. There are plenty of worksheets you can use to teach kids about equations. Working on the worksheet yourself is the best method
to get a grasp of equations.
There are plenty of worksheets for studying quadratic equations. There are several levels of equation worksheets for each grade. These worksheets are designed to help you solve problems
of the fourth degree. When you’ve reached a certain level, you can continue to work on solving other kinds of equations. It is then possible to take on the same problems again; for example, you can
solve the same problem in a more extended form.
Gallery of Solving Equations With Fractions Worksheet Kuta
Multi Step Equations With Fractions Worksheet Kuta Tessshebaylo
Two Step Equations With Fractions Worksheet Kuta Tessshebaylo
One Step Equations With Fractions Kuta Software
Choosing a method for sensitivity analysis
Sensitivity analysis is an essential part of developing a model and generating useful insights. Every model makes assumptions and uses estimates of key input variables. Sensitivity analysis shows you
how much each assumption matters to the results and why. It offers a guide to where it is most worthwhile to expand the model or obtain more data. Analytica offers a rich variety of methods for
sensitivity analysis. This page gives an overview of these methods and a guide to help you choose the methods most appropriate to your project.
Expert modelers know that sensitivity analysis is an essential step to test a model, assess its accuracy, and understand its behavior and implications. A common mistake by novice modelers is to
spend so much time building a model that they have little time left for performing systematic sensitivity analysis. The result is that novices -- and their clients -- miss out on the most useful
insights from the modeling process. They often end up with a model that has too much detail in some areas, but not enough in other areas that matter most to the decision maker. Experts usually start
by building a simple prototype of the model. They then use sensitivity analysis to discover which estimated inputs and parts of the model have the largest effect on the results. This lets them
discover where it is most worthwhile to spend more effort to obtain better estimates or expand the model.
Sensitivity analysis is also a valuable way to test whether your model is working the way you expect. You may find that an input assumption that you thought unimportant has a large effect on the
results -- or conversely, that something you expected to be important has little effect -- or even that it changes the result in the "wrong" direction. Examining why a sensitivity isn't what you
expected will often help you identify and fix a bug in the model. Sometimes, you'll find the model is correct, but there's a bug in your thinking. That's often a potent source of new insight to
better understand the model and the system you are modeling.
There are a wide variety of methods for sensitivity analysis. They vary in the effort needed to apply them, and in the number of times you need to run the model. Some are designed for models that encode
uncertainties with probability distributions. And some require models with explicit decisions and objectives. This page gives an overview of each method with guidelines for when it is most suitable,
plus links to pages for more details where needed. See the bottom for a summary table for comparisons.
Parametric analysis for one variable
Perhaps the simplest approach to sensitivity analysis in Analytica is what we call parametric analysis: You can replace the definition of any scalar input variable, whether a decision, chance, or
normal variable, by a list of numbers. In its parent diagram, select the node of a variable, X, and show its definition in the Attribute pane at the bottom of the diagram window. From the x+y menu,
select List of Numbers. You can then enter three or more alternative values, typically starting from the lowest plausible value up to its highest plausible value. Now look at the result, say the NPV
in this case. Assuming the result actually depends on the variable you have varied parametrically, the result will display as a table or graph showing the effect of the parametric variations. By
looking at the graph, you can immediately see the nature of the relationship -- whether an increase in X causes an increase or decrease in Y, and whether the relationship is linear. If it is nonlinear,
is the relationship monotonic, i.e. is the slope always positive or always negative, with an increasing or decreasing slope -- that is, does the sensitivity of Y to X increase or decrease with the
value of X? If it is not monotonic, is there a minimum or maximum of Y over X? If the relationship is complex, it may be helpful to visualize it with a larger number of values for X, say 5 or even 10.
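Within Analytica this sweep is built into the UI; outside it, the same one-way parametric idea can be sketched in a few lines of Python. The model function below is a hypothetical stand-in for "NPV as a function of X", not an Analytica API:

```python
# One-way parametric analysis: evaluate a model over a range of one input
# while holding everything else fixed, then inspect the shape of the response.

def model(x):
    # Hypothetical result: nonlinear but monotonic in x over the range below.
    return 100 * x - 2 * x ** 2

xs = [1, 2, 3, 4, 5]          # low ... high plausible values of X
ys = [model(x) for x in xs]   # one model evaluation per value

# Characterize the relationship from successive differences.
diffs = [b - a for a, b in zip(ys, ys[1:])]
monotonic = all(d > 0 for d in diffs) or all(d < 0 for d in diffs)
linear = len(set(diffs)) == 1

print(ys)                     # [98, 192, 282, 368, 450]
print(monotonic, linear)      # True False  (increasing, concave)
```

With 5 values you can already distinguish increasing vs. decreasing, linear vs. curved; more values sharpen the picture of a complex relationship.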
Parametric analysis with multiple inputs
You can extend parametric analysis to two or more variables. Simply select a second scalar input variable, say Z, and replace that by a list of values in increasing order. Now the graph of Y will
show multiple lines, with different colors for each value of Z. Or you can pivot the graph to show lines of Y as a function of Z for each value of X. If there are interactions between X and Z, the
lines will not be parallel. You can see if there are synergistic effects, where the lines diverge, or counter-synergistic effects, where the lines converge. You can extend the parametric analysis to
more than two variables, but it gets more challenging to interpret the results with additional dimensions.
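The parallel-versus-diverging-lines test can be sketched numerically. The model below is hypothetical; the point is that with a purely additive model the gap between the Z-lines is constant across X, while an interaction term makes the lines diverge:

```python
# Two-way parametric analysis: vary X and Z together and check for interactions.

def model(x, z, interaction=0.0):
    return 10 * x + 5 * z + interaction * x * z

xs = [1, 2, 3]
zs = [0, 10]

for interaction in (0.0, 2.0):
    lines = {z: [model(x, z, interaction) for x in xs] for z in zs}
    # Parallel lines <=> the gap between them is constant across X.
    gaps = [hi - lo for lo, hi in zip(lines[zs[0]], lines[zs[1]])]
    parallel = len(set(gaps)) == 1
    print(interaction, gaps, parallel)

# 0.0 [50, 50, 50] True      (no interaction: parallel lines)
# 2.0 [70.0, 90.0, 110.0] False  (synergy: lines diverge as X grows)
```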
Tornado chart or Range Sensitivity Analysis
Range sensitivity analysis, also known as a tornado chart, shows the effect on a result of interest of varying each uncertain input over its range, from low to high, while keeping the other
variables at their mid or base value. If you order the variables by the width of their effect on the result from high to low, you get a graph shaped like a tornado. Hence, the name “Tornado chart”:
You start by identifying all the assumptions on input numbers which are uncertain. For each quantity, you provide a base or mid value, usually the most likely (mode), and a low and high value that
enclose it. The low and high values should be plausible extremes. If you think probabilistically, you might estimate them as the approximate 10th and 90th percentiles. For a Boolean value that
may be true or false (Yes or No), there are only two values, the mid value and the other value.
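The tornado computation itself is simple: swing each input between its low and high value, hold the others at base, and sort inputs by the width of the resulting swing. A sketch in Python, with a hypothetical model and hypothetical ranges:

```python
# Tornado-chart data: one low->high swing per input, others held at base.

def model(inputs):
    return inputs["price"] * inputs["volume"] - inputs["cost"]

base = {"price": 10.0, "volume": 100.0, "cost": 400.0}
ranges = {"price": (8.0, 12.0), "volume": (90.0, 110.0), "cost": (300.0, 500.0)}

swings = []
for name, (lo, hi) in ranges.items():
    y_lo = model({**base, name: lo})
    y_hi = model({**base, name: hi})
    swings.append((abs(y_hi - y_lo), name, y_lo, y_hi))

swings.sort(reverse=True)  # widest bar goes at the top of the tornado
for width, name, y_lo, y_hi in swings:
    print(f"{name:7s} {y_lo:7.0f} .. {y_hi:7.0f}  width {width:5.0f}")
```

Each bar of the chart spans `y_lo .. y_hi` for one input; the sort produces the characteristic tornado shape.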
In the Sensitivity analysis library, you first create the list of uncertain inputs or assumptions as the definition of “Sensitivity_inputs”. One way to do this is to draw influence arrows from each
input node into the Sensitivity_inputs index. After selecting the list of sensitivity inputs, click the “Set inputs sensitivity table” button. It will show in the Sensitivity inputs table for each input,
its parent module, units, and current value as its Base or Mid value. If the current value is an array, the Base will be a reference button linking to the array. The next step is to
Spider chart
A spider chart is like a Tornado chart in that it shows the effect on a result of varying each uncertain input over its range, while keeping the other inputs at their base value. But it plots the
result on Y axis and variations of each input across the x-axis. You can also look at it as a way to superimpose one-way parametric analyses for many inputs on a single chart. The trick is to show
percentage (or factor) changes from the mid value of each variable across the x axis. The parametric lines for each input all cross at their mid values, so the result may look like a spider, the
number and direction of its legs depending on the number of variables.
Typically, you can use 3, 5, or more variations for each input, say 50%, 75%, 100%, 125%, 150%. You can use different values for each input, for example if some inputs can plausibly have larger
percentage changes than others. As in a parametric analysis, you can easily see whether the effect of each variable on the result is increasing or decreasing; linear, concave, or convex; and
monotonic or with one (or more) maxima or minima. It does not show the effects of interactions between inputs, since it varies only one input at a time.
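Generating the spider-chart data is a family of one-way sweeps expressed as percentage changes from each base value, so all inputs share one x-axis. A sketch with a hypothetical model:

```python
# Spider-chart data: sweep each input as a factor of its base value.

def model(inputs):
    return inputs["price"] * inputs["volume"] - inputs["cost"]

base = {"price": 10.0, "volume": 100.0, "cost": 400.0}
factors = [0.50, 0.75, 1.00, 1.25, 1.50]   # 50% .. 150% of base

spider = {}
for name in base:
    spider[name] = [model({**base, name: base[name] * f}) for f in factors]

# All legs cross at the base case (factor 1.00).
for name, leg in spider.items():
    print(name, leg)
```

Plotting each `leg` against `factors` gives the spider: all lines meet at the base-case result, and the slope and curvature of each leg show that input's effect.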
Importance analysis
Importance analysis shows how much of the uncertainty in a result is due to uncertainty in each input. It requires the uncertainty in each input to be expressed as a probability distribution. You can
use it as a guide to set priorities on which uncertainties are worth most effort to try to reduce by obtaining more information. Or if that’s not possible, which are worth most effort in refining the
estimated probability distribution, perhaps by doing more background research and/or obtaining estimates from more experts. It uses the rank-order correlation of the result with each uncertain input
as a metric of “importance”.
Analytica has a built-in feature to perform importance analysis. You select the result variable of interest, and then select Make importance from the Object menu. It automatically finds as inputs all
the chance variables defined by a probability distribution and generates an Importance variable that contains the rank correlation of the result with each input.
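Outside Analytica, the same importance metric can be sketched with a Monte Carlo sample and rank-order (Spearman) correlation, computed here by Pearson-correlating the ranks so no SciPy is needed. The inputs and distributions are hypothetical:

```python
import numpy as np

# Importance analysis sketch: rank-correlate each uncertain input
# with the result across a Monte Carlo sample.

def rank_corr(a, b):
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(0)
m = 10_000
inputs = {
    "price":  rng.normal(10.0, 2.0, m),
    "volume": rng.normal(100.0, 5.0, m),
    "cost":   rng.normal(400.0, 50.0, m),
}
result = inputs["price"] * inputs["volume"] - inputs["cost"]

importance = {k: rank_corr(v, result) for k, v in inputs.items()}
for k, rho in sorted(importance.items(), key=lambda kv: -abs(kv[1])):
    print(f"{k:7s} {rho:+.2f}")
```

Inputs with the largest |rho| contribute most of the uncertainty in the result and are the first candidates for better estimates.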
Decision switchover graph
Switchover analysis finds the value of an uncertain parameter at which the preferred decision switches from one option to another. You must specify a discrete decision variable with two or more
options, an objective to minimize or maximize, and an uncertain input parameter, X. The chart graphs the value of the objective (or expected value if you have probability distributions) for each
decision option as X varies from its low to high value. If the lines for each decision cross, the X value at the crossing is where the preferred decision with the highest value of the objective changes from one option to the other.
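Finding the switchover numerically amounts to scanning X and noting where the preferred option flips. The two objective functions below are hypothetical:

```python
# Switchover sketch: find where the preferred option (highest objective) flips.

def value(option, x):
    # Option A has no fixed payoff but gains more as x rises.
    return {"A": 2.0 * x, "B": 50.0 + 1.0 * x}[option]

xs = [i * 5.0 for i in range(21)]           # X from 0 to 100
best = ["A" if value("A", x) > value("B", x) else "B" for x in xs]

# Switchover: first X where the preferred option differs from the start.
switch = next((x for x, b in zip(xs, best) if b != best[0]), None)
print(best[0], "->", best[-1], "switchover near x =", switch)
# B -> A switchover near x = 55.0
```

A finer grid (or bisecting between the last "B" and first "A") pins the crossing down more precisely; here the lines cross exactly at x = 50.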
Decision heat map
A decision heat map is a way to visualize which decision to choose as a function of two input parameters. The color in each rectangle identifies which option has the highest value for a discrete
decision with two or more options. The vertical and horizontal axes plot values of the two parameters. For example, this graph shows which type of car, standard gasoline engine, hybrid, or battery
electric vehicle, has the lowest total cost of ownership (NPV) for each combination of annual miles driven, and price of gasoline.
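The heat-map data is just an argmin (lowest cost) evaluated over a 2-D grid of the two parameters. The cost models below are hypothetical stand-ins for total cost of ownership of the three car types:

```python
# Decision heat-map sketch: record which option minimizes cost in each cell.

def cost(option, miles_per_year, gas_price):
    fixed, per_mile_gas = {"gas": (25_000, 0.04), "hybrid": (30_000, 0.02),
                           "ev": (40_000, 0.0)}[option]
    years = 10
    return fixed + years * miles_per_year * per_mile_gas * gas_price + \
           (years * miles_per_year * 0.03 if option == "ev" else 0)

miles = [5_000, 10_000, 20_000]
prices = [2.0, 4.0, 6.0]

grid = {}
for m in miles:
    grid[m] = [min(("gas", "hybrid", "ev"), key=lambda o: cost(o, m, p))
               for p in prices]
    print(m, grid[m])
```

Coloring each cell by the winning option name yields the heat map: cheap gas and low mileage favor the gasoline car, while high mileage and expensive gas shift the winner toward hybrid and then EV.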
Expected value of information (EVI, EVPI, and EVSI)
The expected value of information (EVI) measures the increase in expected value due to obtaining information about an uncertain quantity. The expected value of perfect information (EVPI) is the
increase in expected value if we learn the exact value of a quantity and so eliminate our uncertainty. To use these measures, the model must include explicit decision variable(s), and a numerical
objective that defines the value or utility that you are trying to maximize (or loss to minimize). The uncertainties must be expressed as probability distributions.
It may sound paradoxical that you could estimate the value of information before you know the actual information -- e.g. the value of an uncertain quantity X. The key is that it's the expected value
of information on -- the expectation of the change in value due to selecting a better decision after learning the value of X, taking the expectation over the probability distribution expressing the
uncertainty about X.
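This expectation-over-X idea makes EVPI directly computable by Monte Carlo: with perfect information you pick the best decision per sample of X; without it you pick the single decision that is best on average. The decision payoffs below are hypothetical:

```python
import numpy as np

# EVPI sketch: expected value of perfect information about X.

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 100_000)          # uncertain quantity X

payoff = {"a": 10.0 + 0.0 * x,             # safe option, indifferent to X
          "b": 8.0 + 5.0 * x}              # risky option, sensitive to X
table = np.stack(list(payoff.values()))    # shape (decisions, samples)

ev_no_info = table.mean(axis=1).max()      # choose once, before seeing X
ev_perfect = table.max(axis=0).mean()      # choose after seeing X, per sample
evpi = ev_perfect - ev_no_info
print(round(ev_no_info, 2), round(ev_perfect, 2), round(evpi, 2))
```

EVPI is never negative: knowing X can only help, because it lets you switch to the risky option exactly when it pays off.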
See Expected value of information -- EVI, EVPI, and EVSI for details.
Expected value of including uncertainty (EVIU)
Comparison table of methods
Computational complexity
Each method must evaluate the model to generate the result of interest multiple times. The rightmost column shows the number of times the model must be evaluated, where
• v is the number of values tried for each input, typically 2 to 5, although you may use more. If the method requires the inputs to be specified as probability distributions, it instead needs a
Monte Carlo sample of size m.
• n is the number of input or chance variables,
• m is the Monte Carlo sample size.
• d is the number of decision variables
• nd is the number of values for each decision variable
Subtract Three Digit Numbers Worksheet 2024 - NumbersWorksheets.com
Subtract Three Digit Numbers Worksheet
Subtract Three Digit Numbers Worksheet – A Subtraction Numbers Worksheet can be a helpful tool for practicing and evaluating your child’s subtraction facts. Subtraction worksheets should focus on
the student’s subtraction facts rather than on the connection between subtraction and addition. If you want to use these worksheets with your child, here are some tips you should keep in mind.
Use a worksheet that helps your child understand the value of each number and the method used to perform subtraction. Subtract Three Digit Numbers Worksheet.
Subtraction numbers worksheets are helpful for all grade levels because they help students practice their skills with the subject. The worksheets include questions at many levels of difficulty and
help students develop a better understanding of the concept. Feel free to leave a comment or send us an email if you have questions about the worksheets. We welcome
any suggestions for changes! The worksheets are designed to provide additional practice and support for good teaching practices.
A math worksheet for introducing single-digit numbers will help students learn this important skill. Many addition worksheets feature questions written horizontally, or from left
to right. This allows students to practice mental left-to-right addition, which will serve them well later on. They may also be used as assessment resources. This post describes the
advantages of an addition numbers worksheet. For a child to master this skill, it is important to practice with numbers in both directions.
Subtraction of two numbers
A Subtraction of two numbers worksheet is a great tool for teaching students this basic math concept. A two-digit number has a tens place and a ones place. The students
are taught to approach each place individually, rather than borrowing from the place before. Before they begin the subtraction of two digits, it helps to provide visual and tactile
understanding of each place. This type of worksheet can be used by students from year one to year three.
Subtraction of about three digits
Third-grade mathematics curriculums often include three-digit subtraction. To teach your students the skill, you can make your resources more effective by using a technique known as regrouping.
Regrouping involves borrowing from a higher place value to make subtraction in a lower place possible. These resources feature engaging hands-on activities, in-depth lesson plans, and printable
worksheets. The Educational Library’s resources make learning easy and enjoyable.
Subtraction of 4 digits
To learn how to do the basic arithmetic problem of subtracting four-digit numbers, you need to first understand what place value is. Thousands, hundreds, tens, and ones are all distinct place
values. The larger number being subtracted from is called the minuend, while the smaller number being subtracted is called the subtrahend. The result of the subtraction is the difference. Write
the problem in columns: Minuend – Subtrahend = Difference. You can also use word problems to practice 4 digit subtraction.
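The column procedure the worksheets drill — work each place from the ones up, borrowing from the next place when the top digit is smaller — can be sketched in Python. The function is purely illustrative, not part of any worksheet:

```python
# Column subtraction with regrouping (borrowing), digit by digit:
# minuend - subtrahend = difference, for nonnegative integers
# with minuend >= subtrahend.

def column_subtract(minuend, subtrahend):
    top = [int(d) for d in str(minuend)][::-1]      # ones place first
    bottom = [int(d) for d in str(subtrahend)][::-1]
    bottom += [0] * (len(top) - len(bottom))
    digits, borrow = [], 0
    for t, b in zip(top, bottom):
        t -= borrow
        if t < b:                                   # regroup from next place
            t += 10
            borrow = 1
        else:
            borrow = 0
        digits.append(t - b)
    return int("".join(map(str, digits[::-1])))

print(column_subtract(503, 267))   # 236, with two borrows
```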
Gallery of Subtract Three Digit Numbers Worksheet
Three Digit Subtraction Without Regrouping Worksheets
3 Digit Subtraction Worksheets
3 Digit Subtraction Worksheets
Homework 6 (Week 10) - Programming Help
1. This problem is to be done by hand. In the parts below, you will set up the primal Lagrangian and derive the dual Lagrangian, for support vector machine classifier, for the case of data that is
linearly separable in expanded feature space.
For the SVM learning, we stated the optimization problem as:
Minimize J(w) = (1/2) ||w||^2
s.t. z_i (w^T u_i + w_0) − 1 ≥ 0 ∀i
Assume our data is linearly separable in the expanded feature space. Also, nonaugmented notation is used throughout this problem.
a. If the above set of constraints (second line of equations above) is satisfied, will all the training data be correctly classified?
b. Write the Lagrangian function L(w, w_0, λ) for the minimization problem stated above. Use
λ_i, i = 1, 2, …, N for the Lagrange multipliers. Also state the KKT conditions. (Hint:
there are 3 KKT conditions).
c. Derive the dual Lagrangian L_D, by proceeding as follows:
i. Minimize L w.r.t. the weights.
Hint: solve ∇_w L = 0 for the optimal weight w* (in terms of λ_i and other variables);
and set ∂L/∂w_0 = 0 and simplify.
ii. Substitute your expressions from part (i) into L, and use your expression from
∂L/∂w_0 = 0 as a new constraint, to derive L_D as:

L_D(λ) = − (1/2) ∑_{i=1}^{N} ∑_{j=1}^{N} λ_i λ_j z_i z_j u_i^T u_j + ∑_{i=1}^{N} λ_i

subject to the (new) constraint ∑_{i=1}^{N} λ_i z_i = 0, which becomes a new KKT condition.
Also give the other two KKT conditions on λ_i, which carry over from the primal form.
2. In this problem you will do SVM learning on a small set of given data points, using the result from Problem 1 above. Problem 2 also uses nonaugmented notation.
Coding. This problem involves some work by hand, and some solution by coding. As in previous homework assignments, in this problem you are required to write the code yourself; you may use Python
built-in functions, NumPy, and matplotlib; and you may use pandas only for reading and/or writing csv files. For the plotting, we are not supplying plotting function(s) for you; may use this as an
opportunity to (learn, explore, and) use matplotlib functions as needed. Alternatively, you may do the plots by hand if you prefer.
Throughout this problem, let the dataset have N = 3 points. We will work entirely in the expanded feature space (u-space).
a. Derive by hand a set of equations, with λ, μ as variables, and u_i, z_i, i = 1, 2, 3 as given constants, that when solved will give the SVM decision boundary and regions. To do this,
start the Lagrangian process to optimize L_D′ w.r.t. λ and μ, subject to the equality
constraint ∑_i λ_i z_i = 0. Set all the derivatives equal to 0 to obtain the set of equations.
Hint: Start from L_D′(λ, μ) = ∑_{i=1}^{N} λ_i − (1/2) ∑_{i=1}^{N} ∑_{j=1}^{N} λ_i λ_j z_i z_j u_i^T u_j + μ ∑_{i=1}^{N} z_i λ_i, in which the
last term has been added to incorporate the equality constraint stated above.
Finally, formulate your set of equations as a matrix-vector equation of the form:
A ρ = b
in which ρ = [λ^T, μ]^T. Give your expressions for A and b.
Tip: If you’d like a quick check to see if you’re on the right track, try plugging in (by hand)
this simple dataset: u_1 = [0, 1]^T, u_2 = [0, 0]^T ∈ S_1 (z_i = +1); u_3 = [1, 1]^T ∈ S_2 (z_3 = −1).
The first row of A should be [1 0 −1 −1] and the first entry of b should be 1.
In the parts below you will use a computer to solve this matrix equation for λ and µ, calculate then plot and interpret the results, for 3 different datasets.
For parts (b)-(e) below, use the following dataset:
u_1 = [1, 2]^T, u_2 = [2, 1]^T ∈ S_1; u_3 = [0, …]^T ∈ S_2
For all parts below, consider S_1 to be the positive class (z_i = +1).
b. Write code in Python to solve for λ and μ, calculate w* and w_0, and check the KKT conditions.
Tip: Use variables u_i, z_i, i = 1, 2, 3 such that you can input their values for different datasets.
Specifically, the code should:
i. Use NumPy to invert your matrix, and to calculate the resulting values for λ and µ.
ii. Check that the resulting λ satisfies the KKT conditions involving λ (but not involving w) that you stated in Problem 1(c)(ii).
iii. Calculate the optimal (nonaugmented) weight vector w* by using your result from Problem 1(c)(i). And, find the optimal bias term w_0 using one of the KKT conditions from Problem 1(b).
iv. Check that the resulting w* and w_0 satisfy the KKT conditions on w and w_0 of Pr. 1(c).
v. Output the results of (i)-(iv) above.
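A minimal NumPy sketch of steps (i)-(iv) is below. The dataset plugged in is the hand-check example from part (a)'s tip as reconstructed here — treat those specific vectors as assumptions; the original figures are garbled, and this is just one dataset consistent with the stated check (first row of A equal to [1 0 −1 −1], b[0] = 1):

```python
import numpy as np

# Build A*rho = b from the stationarity conditions of L_D'(lambda, mu), solve,
# then recover w* and w0. Dataset is an assumption (see lead-in).
u = np.array([[0.0, 1.0], [0.0, 0.0], [1.0, 1.0]])
z = np.array([1.0, 1.0, -1.0])
N = len(z)

G = u @ u.T                                # Gram matrix u_i^T u_j
A = np.zeros((N + 1, N + 1))
A[:N, :N] = np.outer(z, z) * G             # z_i z_j u_i^T u_j
A[:N, N] = -z                              # coefficient of mu
A[N, :N] = z                               # constraint: sum_i z_i lambda_i = 0
b = np.concatenate([np.ones(N), [0.0]])

rho = np.linalg.solve(A, b)
lam, mu = rho[:N], rho[N]
assert np.all(lam >= -1e-9)                # KKT: lambda_i >= 0 (else re-optimize)

w = (lam * z) @ u                          # w* = sum_i lambda_i z_i u_i
sv = np.argmax(lam)                        # a support vector (lambda_k > 0)
w0 = z[sv] - w @ u[sv]                     # from z_k (w^T u_k + w0) = 1
# For this dataset: lambda = [2, 0, 2], mu = -1, w = [-2, 0], w0 = 1.
print(lam, mu, w, w0)
```

If any lambda comes out negative, follow the footnoted tip: fix it to 0 and re-solve the reduced system.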
c. Run your code on the given dataset; give the resulting values for λ and μ, and the results of the KKT-conditions check on λ.
Also give the resulting w* and w_0, and the results of the KKT-conditions check on w* and w_0.
d. Plot in 2D nonaugmented feature (u) space: the data points showing their class labels, the decision boundary defined by w* and w_0, and an arrow showing which side of the boundary
is class S_1. While you are encouraged to do (some or all of) the plot by computer, you can do some or all of it by hand if you prefer, as long as it is neat and reasonably accurate.
e. Looking at the plot, does the decision boundary correctly classify the training data?
And if yes, does it look like the maximum-margin boundary that will correctly classify the data? Briefly justify.
(f)(i)-(iii) Repeat parts (c) – (e) except for the following dataset:
u_1 = [1, 2]^T, u_2 = [2, 1]^T ∈ S_1; u_3′ = [1, …]^T ∈ S_2
in which the third data point has been changed.
iv. Explain the change, or lack of change, in the decision boundary from (d) to (f).
g. (i) How do you think the boundary will change (relative to (f)) if we instead use the following data:
u_1 = [1, 2]^T, u_2 = [2, 1]^T ∈ S_1; u_3″ = [1.5, …]^T ∈ S_2
in which the third data point has (yet again) been changed? (ii)-(iv) Try it by repeating (c)-(e) except with these data points^1.
v. Explain any change or lack of change in the decision boundary as compared with (d) and (f).
• Tip: Note that the linear algebra approach we take to solve this problem has no way of enforcing the requirement λ_i ≥ 0 ∀i. After you check this requirement, if any data point u_k has λ_k < 0,
then you can set λ_k = 0 as a given constant, and then re-optimize the other Lagrange multipliers. Then check the KKT conditions again to verify they are satisfied.
Why? With no nonnegativity condition on λ, the optimal solution effectively finds a point that is on the boundary of all inequality constraint regions (as in case (b) of Lecture 12 p. 10). If one of
the constraints (say on data point u_k) is already satisfied (as in case (a) on p. 9), it proceeds as in case (b) to find a point on the boundary of the constraint region, which results in a negative
Lagrange multiplier λ_k < 0. Resetting it to 0, then re-optimizing the other parameters, is a way of enforcing the λ_k ≥ 0 requirement.
Comment: this method of implementation (Pr. 2 using matrix inverse) is useful for working with algebraic expressions, theory, and seeing/verifying how things work. For larger datasets, typically
other implementation methods are used such as quadratic programming or variations on gradient descent designed for this type of optimization.
Call it whatever you like—DPS, damage per second—we just call it DAMAGE, and when it comes to making red bars go down, you can never have enough of it. Don't trivialize it though; damage is a
very versatile aspect of combat. There are so many ways that a character can do damage.
— GuildWars2.com promotional material
Damage is any effect caused by an action that results in a target losing health. Damage is one of the three aspects that comprise the combat system, the other two being support and control. Strike
damage and condition damage are the two primary forms of damage, incurred mainly by a character's skills as well as by certain traits and upgrade components. If not otherwise specified, the term "damage"
usually refers to strike damage in the game.
There are several different types of damage (not to be confused with the cosmetic effect also called damage type), some of which are:
Each instance of damage is calculated individually at the moment it occurs, not when the associated skill was cast. This includes both separate instances of strike damage from multi-hitting skills
and damage over time from conditions. As a result, consequent hits or condition ticks from the same ability may do different damage if any of the associated attributes change in that time. This
includes offensive attributes and effects of the attacker and defensive characteristics of its target.
Outgoing damage is generally not affected by the Health status, and a player character deals the same amount of outgoing damage regardless of whether they are at 100% health or 1%. Exceptions to this
are special modifiers such as traits with effects based on the character's health.
Note that the filter options apply to the whole page: strike damage and condition damage, incoming and outgoing, skills and traits.
Strike damage[edit]
Strike damage instantly reduces enemy's health by the calculated amount, and scales with the attribute Power. The attribute Precision increases Critical Chance – up to the maximum 100% of strikes
being critical hits. Ferocity determines how strong the critical hits are by increasing the Critical Damage. All these attributes can be increased by using the relevant traits and equipment.
Base strike damage[edit]
Base strike damage is given by the following equation:
Damage done = (Weapon strength) * Power * (skill-specific coefficient)/(target's Armor)
• Weapon strength: a uniformly distributed random number taken from the range of weapon strength of the equipped weapon. The weapon strength used for a skill will typically be that of the weapon
associated with that skill; utility and elite skills are typically not affected by weapon strength and use a strike damage range based on the character's level. The weapon strength is also fixed
for the duration of the skill, in particular pulsing aoes and channeled skills; although damage per hit is still affected by attributes, potions, sigils and other modifiers.
• Power: The current power attribute as listed on the Hero tab.
• Skill coefficient: Every skill has a unique coefficient used to calculate damage inflicted. It is listed in parentheses after the tooltip on wiki pages, but is not shown in-game.
• Target's Armor: Not shown in-game for PvE targets.
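The base formula above translates directly into code. This is an illustrative sketch, not game code; the weapon-strength range, power, coefficient, and armor values below are example numbers:

```python
import random

# Base strike damage per the formula on this page:
# damage = weapon_strength * power * coefficient / target_armor,
# where weapon_strength is rolled uniformly within the weapon's range.

def strike_damage(weapon_min, weapon_max, power, coefficient, target_armor,
                  rng=random.random):
    weapon_strength = weapon_min + rng() * (weapon_max - weapon_min)
    return weapon_strength * power * coefficient / target_armor

# Example: weapon strength 924..981, 2000 power, 1.0 coefficient, 2597 armor.
dmg = strike_damage(924, 981, power=2000, coefficient=1.0, target_armor=2597)
print(round(dmg))   # somewhere in roughly 712..756 per hit
```

The random roll per hit is why channeled skills with a fixed weapon-strength roll behave differently from repeated casts, as noted above.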
Strike damage modifiers[edit]
This base strike damage can then be modified by the following effects:
Only successful attacks will cause damage. The following will prevent this:
• The blind condition affecting the attacker, resulting in a miss
• The target blocks or evades the attack by use of skills or effects or makes a successful dodge
• The target is out of range of the actual attack (some attacks have greater reach than their skill tooltip indicates, e.g. projectiles with high arc)
• The target is obstructed by terrain, structures or objects
• The target is invulnerable
Skill tooltip value[edit]
Strike damage is listed on a skill's tooltip with the Damage icon. Skill tooltip damage is given by the following formula:
Tooltip damage = [number of hits] * round[(average weapon strength) * Power * (skill-specific coefficient per hit) / (level-based Armor value)]
• Average weapon strength is the average of the range of damage listed as Weapon Strength on the equipped weapon's tooltip.
• The damage actually done is unlikely to equal the tooltip damage, since it depends on your opponent's Armor value and other factors listed above.
• At level 80, the opponent Armor value used to calculate the tooltip's damage is 2597. The value the game uses at other levels can be derived from the amount shown in the tooltip using the damage formula.
• Some skills do not calculate their tooltip damage based on the equipped weapon. These are frequently the same skills that are unaffected by Weapon Strength.
• Some skills list multiple strikes of damage. Each strike is treated independently as to whether it hits or misses and for the application of modifying effects. Using "Damage (2x): 500" as an
example, the base damage of each strike is 250. Each strike that hits will have relevant modifying effects applied to it (crit, glancing, etc.).
• Some skills which apply conditions, such as Burning, may also have strike damage associated with them. For example: Damage (8x): 800, 8x Burning (2s): 4800. In this case the fire will apply 8
pulses of damage of 100 each at 1sec intervals. Each strike damage packet or pulse will apply 1 stack of burning each having 2 second duration.
Skill fact value[edit]
The skill facts on this Wiki quote a standardized number for the strike damage associated with a skill at level 80. It is used in the following formula for damage:
Damage done = (skill-fact damage) * (Power/1000) * (2597/target's Armor)
• The skill fact damage is the tooltip damage displayed in the game for a level 80 character with an exotic level 80 weapon (if it is a weapon skill), but without any further equipment and without
any other damage increasing effects. It is the default strike damage dealt by a level 80 character with base power 1000 against armor 2597, see Skill Facts for details.
• For power 1000 and target armor 2597 the formula says that damage done is equal to skill-fact damage.
• Damage is proportional to power. Twice the power means twice the damage. For example, increasing power from 1000 to 2289 by wearing exotic Soldier equipment leads to 2.289 times the damage.
• Damage is inversely proportional to target armor. Twice the target armor means half the damage. This applies equally to the case when the player is the target.
• Damage here refers to non-critical hits. The average damage increase due to critical hits can be formulated by replacing power by an increased effective power in the above formulas. Damage is
then proportional to effective power, which includes the effect of precision and ferocity.
• The skill-fact damage combines the average weapon strength and the skill coefficient into one convenient number for damage calculation for level 80 exotics. The skill coefficient is not shown
in-game but has to be computed from the above formulas, while the skill-fact damage is given directly by the tooltip.
• The skill-fact damage formula can also be used to obtain damage for ascended weapons. Since the average weapon strength of ascended weapons is 5% higher than that of exotic weapons, multiply the
result by 1.05.
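The bullet points above can be checked numerically. A minimal Python sketch of the formula (the skill-fact value 800 is illustrative, not from any particular skill):

```python
def strike_damage(skill_fact: float, power: float, target_armor: float) -> float:
    """Damage done = (skill-fact damage) * (Power/1000) * (2597/target's Armor)."""
    return skill_fact * (power / 1000) * (2597 / target_armor)

# Baseline: power 1000 against armor 2597 reproduces the skill-fact number.
assert strike_damage(800, 1000, 2597) == 800

# Damage is proportional to power: exotic Soldier power 2289 gives 2.289x.
soldier = strike_damage(800, 2289, 2597)

# Damage is inversely proportional to armor: doubling armor halves damage.
tanky = strike_damage(800, 1000, 2 * 2597)   # 400.0

# Ascended weapons: 5% higher weapon strength, so multiply the result by 1.05.
ascended = soldier * 1.05
```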
Skill coefficient[edit]
The skill-specific coefficient is not shown in the game. It is computed as follows:
Skill-specific coefficient = 2597 * (tooltip damage)/(weapon strength * Power)
• Given the default strike damage at power 1000 (for example from the Wiki) and the weapon strength, the skill coefficient can be computed. This relation should hold approximately for all skill
entries on the Wiki.
• Given the strike damage for some other power value (for example from the in-game tooltip), the same formula yields the skill coefficient with that power value inserted.
• Small differences in the computed skill coefficients are expected because of numerical round-off: a tooltip value of 100 is only accurate to about 1%. For reliable results, it is advisable to
compute the skill coefficient for several different power values and combine the results. A numerical result such as 1.006471 can then be taken to be an exact 1.0, although in some cases it is
not clear what number the game uses internally. The approach seems valid since in many cases the calculation leads to simple numbers like 1.0, 0.4, or 2.5.
• Weapon strength is shown in the game for weapons, but it is not shown for utility skills or other sources of damage. Assuming that the game uses simple skill coefficients for such skills as well,
we can use the formula to compute a weapon strength for most non-weapon attacks to be 690.5 at level 80 from the tooltip damage and some assumed skill coefficient.
• Bundles, including kits and conjures, appear to have an internal weapon strength of 922.5. This is the same value as a one-handed main hand weapon in sPvP, but it is then given a further modifier
in PvE/WvW based on the rarity of the weapon, with exotic being the default.
• Skill coefficients can also be obtained from the skills API.
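The coefficient extraction described above can be sketched as follows. The tooltip readings here are hypothetical; 690.5 is the non-weapon strength quoted in the text:

```python
def skill_coefficient(tooltip_damage: float, weapon_strength: float,
                      power: float = 1000) -> float:
    """Skill-specific coefficient = 2597 * (tooltip damage)/(weapon strength * Power)."""
    return 2597 * tooltip_damage / (weapon_strength * power)

# A single rounded tooltip gives a coefficient only close to a round number:
c = skill_coefficient(266, 690.5)           # roughly 1.0004

# Combining readings at several power values smooths the round-off,
# suggesting the game most likely uses an exact coefficient of 1.0.
readings = [(266, 1000), (609, 2289)]       # hypothetical (tooltip, power) pairs
coeffs = [skill_coefficient(t, 690.5, p) for t, p in readings]
estimate = round(sum(coeffs) / len(coeffs), 2)
```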
Strike damage reduction[edit]
Damage Reduction describes any effect that reduces a character's incoming damage, often only under certain circumstances.
The Armor attribute (a combination of the Toughness and Defense attributes) is the primary source of damage reduction. All received strike damage is divided by this value. Incoming strike damage is
further reduced by various percentage reductions from effects and traits. The Protection boon reduces damage received by the defender by 33%, while the Weakness condition causes Glancing blows from
the attacker, which only deal 50% of their regular damage. 100% damage reduction (granted by some skills) will prevent all strike damage by reducing it to zero. Note that the attack is not actually
blocked and can still apply conditions, control effects or trigger on-hit effects.
Condition Damage is not reduced by armor or most percentage reductions, but can be reduced by use of the Resolution boon. The total damage received by conditions over time can be reduced by Condition
Duration reducing bonuses, or by Condition removal skills and traits.
Damage reduction can also occur by preventing a hit from landing on the defender, using one of the following:
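As a rough sketch of how these reductions combine (armor is already handled by the division in the damage formula; treating the percentage modifiers as multiplicative and the ordering shown here are assumptions for illustration):

```python
def mitigated(strike_damage: float, protection: bool = False,
              glancing: bool = False, extra_reductions=()) -> float:
    # Protection boon: incoming strike damage reduced by 33%.
    if protection:
        strike_damage *= 1 - 0.33
    # Weakness on the attacker: glancing blows deal 50% of regular damage.
    if glancing:
        strike_damage *= 0.5
    # Further percentage reductions from effects and traits (assumed multiplicative).
    for r in extra_reductions:
        strike_damage *= 1 - r
    return strike_damage

# A 100% reduction zeroes strike damage, but the hit still lands,
# so conditions, control effects, and on-hit effects can still apply.
assert mitigated(1000, extra_reductions=[1.0]) == 0
```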
Related skills[edit]
Skills that increase damage or deal increased damage[edit]
Weapon skills that increase damage or deal increased damage
Bundle skills that increase damage or deal increased damage
Transform skills that increase damage or deal increased damage
Downed skills that increase damage or deal increased damage
Drowning skills that increase damage or deal increased damage
Trait skills that increase damage or deal increased damage
Common skills that increase damage or deal increased damage
Skills that reduce incoming damage[edit]
Weapon skills that reduce incoming damage
Bundle skills that reduce incoming damage
Trait skills that reduce incoming damage
Common skills that reduce incoming damage
Skills that have outgoing damage reduction per hit[edit]
Weapon skills that have damage reduction per hit
Bundle skills that have damage reduction per hit
Related traits[edit]
Traits that increase outgoing damage[edit]
Trait Trait line Damage increase in % Description
Fiery Wrath Zeal Damage Increase: 7% Deal increased strike damage to burning foes.
Inspired Virtue Virtues Bonus Damage per Boon: 1% Virtues apply boons to allies when activated. Deal increased strike damage for each boon on you.
Inspiring Virtue Virtues Inspiring Virtue: 10% Strike damage dealt is increased after activating a virtue.
Retribution Radiance Damage Increase: 10% Strike damage dealt is increased while you have resolution.
Symbolic Avenger Zeal Symbolic Avenger: 2% Your strike damage is increased whenever your Symbols hit a foe.
Symbolic Exposure Zeal Damage Increase: 5% Symbols inflict vulnerability on foes. Deal increased strike damage to vulnerable foes.
Symbolic Power Zeal Damage Increase: 30% Symbols deal increased strike damage.
Unscathed Contender Virtues Damage Increase above Health Threshold: 7% Strike damage dealt is increased while you have aegis. Strike damage dealt is increased while you are above the health threshold.
Unscathed Contender Virtues Damage Increase with Aegis: 7% Strike damage dealt is increased while you have aegis. Strike damage dealt is increased while you are above the health threshold.
Big Game Hunter (PvP, WvW) Dragonhunter Damage Increase: 15% Striking an enemy tethered by your Spear of Justice inflicts vulnerability and increases strike damage dealt. Tether duration is increased.
Big Game Hunter (PvE) Dragonhunter Damage Increase: 20% Striking an enemy tethered by your Spear of Justice inflicts vulnerability and increases strike damage dealt. Tether duration is increased.
Heavy Light Dragonhunter Damage Increase to Defiant Foes: 10% Gain stability when disabling an enemy. Deal increased strike damage to disabled or defiant foes. Disables include stun, daze, knockback, pull, knockdown, sink, float, launch, taunt, and fear.
Heavy Light Dragonhunter Damage Increase to Disabled Foes: 15% Gain stability when disabling an enemy. Deal increased strike damage to disabled or defiant foes. Disables include stun, daze, knockback, pull, knockdown, sink, float, launch, taunt, and fear.
Pure of Sight (PvE, WvW) Dragonhunter Minimum Bonus Damage: 5% Deal bonus strike damage based on your distance to the enemy.
Pure of Sight Dragonhunter Maximum Bonus Damage: 15% Deal bonus strike damage based on your distance to the enemy.
Pure of Sight Dragonhunter Minimum Bonus Damage: 10% Deal bonus strike damage based on your distance to the enemy.
Zealot's Dragonhunter Damage Increase: 10% Deal increased strike damage to crippled foes. Justice's passive effect cripples enemies.
Lethal Tempo Willbender Lethal Tempo: 2% Each time you activate a virtue or your virtue's passives are triggered, gain a damage bonus. Gaining stacks of this boon
(PvE) refreshes other stacks.
Lethal Tempo Willbender Lethal Tempo: 3% Each time you activate a virtue or your virtue's passives are triggered, gain a damage bonus. Gaining stacks of this boon
(PvP, WvW) refreshes other stacks.
Power for Power Willbender Damage Increase: 100% Gain increased power. Willbender Flames deal increased damage to foes they strike.
Tyrant's Momentum (PvP, WvW) Willbender Damage Increase: 3% Lethal Tempo grants increased damage, but has reduced duration. Justice's duration is modified.
Tyrant's Momentum Willbender Lethal Tempo: 3% Lethal Tempo grants increased damage, but has reduced duration. Justice's duration is modified.
Tyrant's Momentum (PvP, WvW) Willbender Lethal Tempo: 5% Lethal Tempo grants increased damage, but has reduced duration. Justice's duration is modified.
Tyrant's Momentum Willbender Damage Increase: 3% Lethal Tempo grants increased damage, but has reduced duration. Justice's duration is modified.
Destructive Impulses Devastation Damage Increase: 5% All damage dealt is increased, and increased additionally if you have an off-hand weapon equipped.
Destructive Impulses (PvE) Devastation Bonus Damage from Off Hand: 2.5% All damage dealt is increased, and increased additionally if you have an off-hand weapon equipped.
Destructive Impulses (PvP, WvW) Devastation Bonus Damage from Off Hand: 5% All damage dealt is increased, and increased additionally if you have an off-hand weapon equipped.
Dwarven Battle Training Retribution (specialization) Damage Increase: 10% Deal increased strike damage to weakened foes. Disabling a foe applies weakness. Disables include stun, daze, knockback, pull, knockdown, sink, float, fear, taunt, and launch.
Ferocious Aggression (PvE) Invocation Damage Increase: 10% All damage dealt is increased while you have fury.
Ferocious Aggression (PvP, WvW) Invocation Damage Increase: 7% All damage dealt is increased while you have fury.
Rising Tide (PvE) Invocation Damage Increase: 10% While your health is above the threshold, strike damage dealt is increased.
Rising Tide (PvP, WvW) Invocation Damage Increase: 7% While your health is above the threshold, strike damage dealt is increased.
Swift Termination Devastation Damage Increase: 20% Deal increased strike damage to foes below the health threshold.
Targeted Devastation Bonus Damage Per Stack: 0.5% Deal increased strike damage to targets based on their vulnerability stacks.
Unsuspecting Strikes (PvE) Devastation Damage Increase: 20% Deal increased strike damage to foes with health above the health threshold.
Unsuspecting Strikes (PvP, WvW) Devastation Damage Increase: 10% Deal increased strike damage to foes with health above the health threshold.
Vicious Reprisal Retribution (specialization) Damage Increase: 10% While you have resolution, all damage dealt is increased, and you gain might when you strike foes. Resolution granted to you lasts longer.
Draconic Echo (PvP, WvW) Herald Facet of Strength: 5% You retain your facet passives for a duration after using their consume skills. Your facet passives grant additional bonuses.
Draconic Echo Herald Facet of Strength: 10% You retain your facet passives for a duration after using their consume skills. Your facet passives grant additional bonuses.
Forceful Persistence (PvE) Herald Herald and Weapon Skill Damage Increase: 7% Deal increased strike damage with active upkeep skills. Herald and weapon upkeep skills grant less damage but can be stacked.
Forceful Persistence (PvE) Herald Damage Increase: 20% Deal increased strike damage with active upkeep skills. Herald and weapon upkeep skills grant less damage but can be stacked.
Forceful Persistence (PvP, WvW) Herald Damage Increase: 15% Deal increased strike damage with active upkeep skills. Herald and weapon upkeep skills grant less damage but can be stacked.
Forceful Persistence (PvP, WvW) Herald Herald and Weapon Skill Damage Increase: 4% Deal increased strike damage with active upkeep skills. Herald and weapon upkeep skills grant less damage but can be stacked.
Reinforced Potency (WvW) Herald Bonus Damage per Boon: 1% Gain concentration and deal increased strike damage for each active boon you have.
Reinforced Potency (PvE, PvP) Herald Bonus Damage per Boon: 1.5% Gain concentration and deal increased strike damage for each active boon you have.
Ambush Commander Renegade Kalla's Fervor: 2% When you critically strike, or attack your enemies from behind or their flanks, or strike a defiant foe, you'll inspire yourself with Kalla's Fervor. Gain access to Citadel Order abilities.
Ashen Demeanor Renegade Kalla's Fervor: 2% Disabling a foe cripples your enemy, which leaves them vulnerable and inspires you with Kalla's Fervor.
Blood Fury Renegade Kalla's Fervor: 2% Fury increases the duration of bleeds you inflict. Gaining fury inspires you with Kalla's Fervor.
Lasting Legacy Renegade Improved Kalla's Fervor: 3% Kalla's Fervor you inspire lasts longer and is more potent. Heroic Command grants more might per stack of Kalla's Fervor.
Wrought-Iron Will Renegade Kalla's Fervor: 2% Evading an attack inspires you with Kalla's Fervor and grants boons to nearby allies.
Leviathan Vindicator Damage Increase: 10% Deal increased damage while your endurance is not full.
Reaver's Curse Vindicator Damage Increase: 100% Energy Meld's cooldown is reduced and it increases the effectiveness of your next dodge.
Reaver's Curse (PvP, WvW) Vindicator Damage Increase: 25% Energy Meld's cooldown is reduced and it increases the effectiveness of your next dodge.
Berserker's Power Strength Berserker's Power: 10.5% Gain increased strike damage when you use a burst skill; this damage increase is based on your adrenaline.
Berserker's Power Strength Berserker's Power: 15.75% Gain increased strike damage when you use a burst skill; this damage increase is based on your adrenaline.
Berserker's Power Strength Berserker's Power: 21% Gain increased strike damage when you use a burst skill; this damage increase is based on your adrenaline.
Burst Mastery Discipline Damage Increase: 7% Burst skills deal more damage, grant swiftness, and restore a portion of adrenaline spent.
Crack Shot Discipline Damage Increase: 10% Your longbow, rifle, and harpoon gun weapon skill 1 attacks are enhanced.
Cull the Weak (PvE) Defense Damage Increase: 10% Deal increased strike damage to weakened foes. Burst skills inflict weakness. Can only occur once per interval per target for multihit burst skills.
Cull the Weak (PvP, WvW) Defense Damage Increase: 7% Deal increased strike damage to weakened foes. Burst skills inflict weakness. Can only occur once per interval per target for multihit burst skills.
Destruction of the Empowered Discipline Damage Increase: 3% Deal increased strike damage per boon on your target.
Empowered Tactics Damage Increase: 1% Deal increased strike damage for every boon on you.
Leg Specialist (PvE) Tactics Damage Increase: 10% Immobilize a target when you cripple them with a skill. Deal increased strike damage to foes affected with movement-impairing conditions.
Leg Specialist (PvP, WvW) Tactics Damage Increase: 7% Immobilize a target when you cripple them with a skill. Deal increased strike damage to foes affected with movement-impairing conditions.
Merciless Hammer Defense Damage Increase: 20% Hammer and mace skills deal increased damage when striking a disabled or defiant foe. Gain adrenaline when you disable a foe. Disables include stun, taunt, daze, knockback, pull, knockdown, sink, float, fear, and launch.
Peak Performance Strength Damage Increase: 5% Deal increased strike damage. Physical skills further increase all outgoing strike damage for a period of time.
Peak Performance (PvP, WvW) Strength Damage Increase: 3% Deal increased strike damage. Physical skills further increase all outgoing strike damage for a period of time.
Peak Performance Strength Peak Performance: 10% Deal increased strike damage. Physical skills further increase all outgoing strike damage for a period of time.
Peak Performance (PvP, WvW) Strength Peak Performance: 7% Deal increased strike damage. Physical skills further increase all outgoing strike damage for a period of time.
Stalwart Strength Defense Damage Increase: 10% Gain stability when you disable a foe. Deal increased strike damage while you have stability.
Warrior's Cunning (PvP, WvW) Tactics Damage Increase vs. Barrier: 10% Deal increased strike damage to foes above the health threshold. Also deal more damage to foes with barrier. Damage bonuses do not stack.
Warrior's Cunning (PvE) Tactics Damage Increase vs. High Health: 25% Deal increased strike damage to foes above the health threshold. Also deal more damage to foes with barrier. Damage bonuses do not stack.
Warrior's Cunning (PvP, WvW) Tactics Damage Increase vs. High Health: 7% Deal increased strike damage to foes above the health threshold. Also deal more damage to foes with barrier. Damage bonuses do not stack.
Warrior's Cunning (PvE) Tactics Damage Increase vs. Barrier: 50% Deal increased strike damage to foes above the health threshold. Also deal more damage to foes with barrier. Damage bonuses do not stack.
Warrior's Sprint (PvE) Discipline Damage Increase: 10% Run faster while wielding melee weapons, and deal increased strike damage while you have swiftness. Movement skills break immobilization when used.
Warrior's Sprint (PvP, WvW) Discipline Damage Increase: 3% Run faster while wielding melee weapons, and deal increased strike damage while you have swiftness. Movement skills break immobilization when used.
Bloody Roar (PvP, WvW) Berserker Damage Increase: 10% Deal increased strike damage while in berserk mode. Gain resistance when entering berserk mode.
Bloody Roar (PvE) Berserker Damage Increase: 15% Deal increased strike damage while in berserk mode. Gain resistance when entering berserk mode.
Revenge Counter Spellbreaker Damage Increase: 20% Full Counter deals additional damage and grants resistance. Copy conditions on yourself to targets you hit with Full Counter.
Fierce as Fire Bladesworn Fierce as Fire: 1% Gain increased strike damage for each round of ammunition spent.
Big Boomer (PvP, WvW) Explosives Damage Increase: 10% Deal increased strike damage to foes with a lower health percentage than you. Hitting with an Explosion heals you over a few seconds.
Big Boomer (PvE) Explosives Damage Increase: 15% Deal increased strike damage to foes with a lower health percentage than you. Hitting with an Explosion heals you over a few seconds.
Excessive Energy Tools Damage Increase: 10% Strike damage dealt is increased while you have vigor.
Glass Cannon Explosives Damage Increase: 10% Strike damage dealt increases when above health threshold.
Modified Ammunition (WvW) Firearms Damage Increase: 1.5% Deal increased strike damage for each condition on a foe.
Modified Ammunition (PvE, PvP) Firearms Damage Increase: 2% Deal increased strike damage for each condition on a foe.
Shaped Charge Explosives Damage Increase per Stack: 0.5% Deal increased strike damage for each stack of vulnerability on your target.
Takedown Round Tools Damage Increase: 10% Deal increased strike damage while your endurance is not full.
Object in Motion Scrapper Damage per Boon: 5% Deal increased strike damage while you have stability, swiftness, or superspeed. This damage increase compounds for each boon you have.
Laser's Edge (PvE, PvP) Holosmith Maximum Damage Increase: 15% While Photon Forge is active, your outgoing strike damage is increased based on your current heat.
Laser's Edge (WvW) Holosmith Maximum Damage Increase: 10% While Photon Forge is active, your outgoing strike damage is increased based on your current heat.
Solar Focusing Lens Holosmith Damage Increase: 10% Your first few attacks after entering or exiting Photon Forge inflict burning and deal increased strike damage. This bonus is also granted if you overheat.
Bountiful Hunter Nature Magic Damage per Boon: 1% You deal increased strike damage for each boon affecting you. Your pet deals increased strike damage for each boon affecting it.
Farsighted (PvP, WvW) Marksmanship Damage Increase above Threshold: 10% Ranger weapon skills deal increased strike damage. Damage is further increased for foes above the range threshold.
Farsighted (PvE) Marksmanship Damage Increase: 10% Ranger weapon skills deal increased strike damage. Damage is further increased for foes above the range threshold.
Farsighted (PvP, WvW) Marksmanship Damage Increase: 5% Ranger weapon skills deal increased strike damage. Damage is further increased for foes above the range threshold.
Farsighted (PvE) Marksmanship Damage Increase above Threshold: 15% Ranger weapon skills deal increased strike damage. Damage is further increased for foes above the range threshold.
Hunter's Tactics (PvP, WvW) Skirmishing Damage Increase: 10% You deal more damage and have an increased chance to critically strike while attacking from behind or the side or when striking a defiant foe.
Hunter's Tactics (PvE) Skirmishing Damage Increase: 15% You deal more damage and have an increased chance to critically strike while attacking from behind or the side or when striking a defiant foe.
Light on your Feet Skirmishing Damage Increase: 10% Damage and condition duration are increased after dodging or using Quick Shot. Short bow skills pierce and gain recharge reduction, and their attacks from behind or while flanking or against defiant foes are improved.
Loud Whistle Beastmastery Damage Increase: 10% While your health is above the threshold, your pet deals increased strike damage. Your pet swap gains recharge reduction.
Moment of Clarity (PvP, WvW) Marksmanship Attack of Opportunity: 10% Gain an attack of opportunity for you and your pet on interrupting a foe. Daze and stun durations that you inflict last longer. This trait can only grant an attack of opportunity against enemies with defiance bars once per interval.
Moment of Clarity (PvE) Marksmanship Attack of Opportunity: 50% Gain an attack of opportunity for you and your pet on interrupting a foe. Daze and stun durations that you inflict last longer. This trait can only grant an attack of opportunity against enemies with defiance bars once per interval.
Predator's Onslaught Marksmanship Damage Increase: 15% You and your pet deal increased strike damage to disabled, defiant, or movement-impaired foes. Affected conditions are stun, taunt, daze, cripple, fear, immobilize, and chilled.
Remorseless Marksmanship Damage Increase: 25% Regain opening strike whenever you gain fury. Opening strike deals more damage.
Survival Instincts Wilderness Survival High-Health Damage Increase: 10% Gain increased outgoing strike damage and reduced incoming strike damage. When above the health threshold, outgoing strike damage is further increased. When below the health threshold, incoming strike damage is further reduced.
Survival Instincts Wilderness Survival Damage Increase: 5% Gain increased outgoing strike damage and reduced incoming strike damage. When above the health threshold, outgoing strike damage is further increased. When below the health threshold, incoming strike damage is further reduced.
Two-Handed Beastmastery Damage Increase: 10% Greatsword and underwater spear skills deal more damage and recharge faster. Gain fury when you disable a foe.
Furious Strength Soulbeast Damage Increase: 10% You deal increased strike damage while you have fury.
Furious Strength (PvP, WvW) Soulbeast Damage Increase: 7% You deal increased strike damage while you have fury.
Oppressive Superiority Soulbeast Damage Increase: 10% Deal increased damage to foes at a lower health percentage than you. Conditions you apply to foes at a lower health percentage than you last longer.
Twice as Vicious Soulbeast Twice as Vicious: 10% Disabling a foe increases your damage and condition damage for a short duration.
Twice as Vicious (PvP, WvW) Soulbeast Twice as Vicious: 5% Disabling a foe increases your damage and condition damage for a short duration.
Blinding Outburst (PvE) Untamed Damage Increase: 25% Venomous Outburst deals more damage and applies blindness in addition to its other effects. Unleashed Ambush skills deal more damage.
Blinding Outburst (PvP, WvW) Untamed Damage Increase: 10% Venomous Outburst deals more damage and applies blindness in addition to its other effects. Unleashed Ambush skills deal more damage.
Ferocious Symbiosis (WvW) Untamed Ferocious Symbiosis: 3% You and your pet grant each other increased damage and movement speed when striking enemies. When this triggers, it refreshes the duration of all stacks.
Ferocious Symbiosis (PvE) Untamed Ferocious Symbiosis: 4% You and your pet grant each other increased damage and movement speed when striking enemies. When this triggers, it refreshes the duration of all stacks.
Ferocious Symbiosis (PvP) Untamed Ferocious Symbiosis: 2% You and your pet grant each other increased damage and movement speed when striking enemies. When this triggers, it refreshes the duration of all stacks.
Vow of the Untamed (WvW) Untamed Outgoing Damage Adjustment: 15% Your outgoing strike damage is increased while you are unleashed. You take reduced damage from strikes while your pet is unleashed.
Vow of the Untamed (PvE) Untamed Outgoing Damage Adjustment: 25% Your outgoing strike damage is increased while you are unleashed. You take reduced damage from strikes while your pet is unleashed.
Vow of the Untamed (PvP) Untamed Outgoing Damage Adjustment: 10% Your outgoing strike damage is increased while you are unleashed. You take reduced damage from strikes while your pet is unleashed.
Deadly Aim Critical Strikes Damage Increase: 10% Your pistol and harpoon gun attacks now pierce and have increased strike damage.
Executioner Deadly Arts Damage Increase: 20% Deal increased strike damage when your target is below the health threshold.
Exposed Weakness Deadly Arts Damage Increase per Unique Condition: 2% Deal increased strike damage to your target for each unique condition on them.
Lead Attacks Trickery Maximum Damage Increase: 15% Increases all damage dealt per initiative spent. Steal gains reduced recharge time.
Lead Attacks Trickery Lead Attacks: 1% Increases all damage dealt per initiative spent. Steal gains reduced recharge time.
Bounding Dodger Daredevil Bounding Dodger: 15% Your dodge ability is replaced by Bound, dealing damage to the area after you evade. Gain increased strike damage for a period of time after you dodge.
Havoc Specialist (PvP, WvW) Daredevil Damage Increase: 10% Strike damage dealt is increased when your endurance is not full.
Havoc Specialist Daredevil Damage Increase: 15% Strike damage dealt is increased when your endurance is not full.
Weakening Strikes (PvP, WvW) Daredevil Damage Increase: 7% Your next attack after dodging causes weakness to foes struck. Weakened enemies deal less damage to you, and you deal increased strike damage to them.
Weakening Strikes (PvE) Daredevil Damage Increase: 10% Your next attack after dodging causes weakness to foes struck. Weakened enemies deal less damage to you, and you deal increased strike damage to them.
Iron Sight Deadeye Damage Increase: 10% Strike damage dealt to your marked target is increased, and strike damage taken from your marked target is reduced.
One in the Chamber (PvE) Deadeye Damage Increase: 25% When you cast a cantrip, gain a random new stolen skill. Stolen skills deal more damage. (Requires a marked target. Overwrites existing stolen skills.)
One in the Chamber (PvP, WvW) Deadeye Damage Increase: 10% When you cast a cantrip, gain a random new stolen skill. Stolen skills deal more damage. (Requires a marked target. Overwrites existing stolen skills.)
Premeditation Deadeye Bonus Damage per Boon: 1% Deal increased strike damage for each unique boon you have; concentration is increased.
Bolt to the Heart Air Damage Increase: 20% Deal increased strike damage to enemies below the health threshold.
Bountiful Power Arcane Damage Increase: 2% Deal increased strike damage for each boon on you.
Flow like Water Water Damage Increase: 5% Deal increased strike damage, which is further increased while your health is above the threshold. Blocking or evading an attack heals you.
Flow like Water Water High-Health Damage Increase: 10% Deal increased strike damage, which is further increased while your health is above the threshold. Blocking or evading an attack heals you.
Persisting Flames Fire Persisting Flames: 1% Fire fields created by weapon skills last longer. Whenever your fire fields hit a foe, gain increased strike damage for a duration.
Piercing Shards (PvP, WvW) Water Damage Increase: 5% Vulnerability you inflict has increased duration. Deal increased strike damage to vulnerable foes. Damage bonus is doubled while attuned to water.
Piercing Shards (PvE) Water Damage Increase: 10% Vulnerability you inflict has increased duration. Deal increased strike damage to vulnerable foes. Damage bonus is doubled while attuned to water.
Pyromancer's Fire Damage Increase: 10% Your fire-weapon skills gain reduced recharge. Deal increased strike damage to burning foes.
Serrated Stones Earth Damage Increase: 5% Bleeding you inflict has increased duration. Deal increased damage to bleeding foes.
Stormsoul Air Damage Increase: 10% Deal increased strike damage to disabled or defiant foes. Outgoing stun duration increased. Disables include stun, daze, knockback, pull, knockdown, sink, float, launch, taunt, and fear.
Transcendent Tempest (PvP, WvW) Tempest Transcendent Tempest: 7% Time to attain singularity is reduced. Upon successfully completing an overload, gain increased damage.
Transcendent Tempest (PvE) Tempest Transcendent Tempest: 15% Time to attain singularity is reduced. Upon successfully completing an overload, gain increased damage.
Elements of Rage (PvE) Weaver Elements of Rage: 10% Gain a bonus to all damage dealt for a period of time when attuned to a single element. Gain precision based on a percentage of your vitality.
Elements of Rage (PvP, WvW) Weaver Elements of Rage: 5% Gain a bonus to all damage dealt for a period of time when attuned to a single element. Gain precision based on a percentage of your vitality.
Swift Revenge Weaver Damage Increase: 10% Gain swiftness when using a Dual Attack. Deal increased strike damage to enemies while under the effects of swiftness or superspeed.
Empowering Auras Catalyst Empowering Auras: 3% Gain increased outgoing damage when you grant yourself an aura. When this triggers, the duration of all stacks are refreshed.
Empowering Auras (PvP, WvW) Catalyst Empowering Auras: 2% Gain increased outgoing damage when you grant yourself an aura. When this triggers, the duration of all stacks are refreshed.
Compounding Power Illusions Compounding Power: 2% Creating an illusion increases your outgoing damage and condition damage for a short duration. Virtuoso: Triggers when stocking a blade.
Egotism Domination Damage Increase: 10% Deal increased strike damage to foes with a lower health percentage than you.
Empowered Domination Damage Increase: 15% Illusions deal increased strike damage.
Fragility Domination Damage Increase per Stack: 0.5% Deal increased strike damage for each stack of vulnerability on your target.
Mental Anguish (PvP, WvW) Domination Damage Increase vs. Inactivity: 20% Shatter skills deal more damage. This bonus damage is doubled against foes that are not activating skills.
Mental Anguish Domination Damage Increase: 25% Shatter skills deal more damage. This bonus damage is doubled against foes that are not activating skills.
Mental Anguish (PvP, WvW) Domination Damage Increase: 10% Shatter skills deal more damage. This bonus damage is doubled against foes that are not activating skills.
Mental Anguish (PvE) Domination Damage Increase vs. Inactivity: 50% Shatter skills deal more damage. This bonus damage is doubled against foes that are not activating skills.
Phantasmal Force Illusions Phantasmal Force: 1% Phantasms deal increased strike damage for each stack of might you have. Gain might when your phantasms become clones.
Vicious Expression Domination Damage Increase: 15% You and your illusions deal increased strike damage to foes without boons. Disabling a foe removes boons from them. Disables include stun, daze, knockback, pull, knockdown, sink, float, launch, taunt, and fear.
Time Catches Up (PvP, WvW) Chronomancer Damage Increase: 5% Activating a Shatter gives your illusions superspeed. Shatters deal increased damage to movement-impaired foes.
Time Catches Up Chronomancer Damage Increase: 10% Activating a Shatter gives your illusions superspeed. Shatters deal increased damage to movement-impaired foes.
Mirage Mantle (PvP, WvW) Mirage Sharp Edges: 10% Ambush[sic] skills are improved.
Mirage Mantle (PvP, WvW) Mirage Damage Increase: 10% Ambush[sic] skills are improved.
Mirage Mantle Mirage Damage Increase: 25% Ambush[sic] skills are improved.
Mirage Mantle Mirage Sharp Edges: 15% Ambush[sic] skills are improved.
Bloodsong Virtuoso Damage Increase: 25% Bleeding you apply deals increased damage. Stock a blade after applying enough stacks of bleeding to foes.
Deadly Blades (PvP, WvW) Virtuoso Deadly Blades: 5% Blades inflict vulnerability on critical hits. After successfully casting a Bladesong, increase all damage dealt for a short time. This does not stack.
Deadly Blades (PvE) Virtuoso Deadly Blades: 5% Blades inflict vulnerability on critical hits. After successfully casting a Bladesong, increase all damage dealt for a short time. This does not stack.
Infinite Forge Virtuoso Damage Increase: 10% Automatically stock blades while in combat. When you use bladesong above the blade threshold, refund blades. Blade attacks deal more damage.
Mental Focus Virtuoso Damage Increase: 10% Strike damage is increased against foes within the range threshold.
Mental Focus (PvP, WvW) Virtuoso Damage Increase: 7% Strike damage is increased against foes within the range threshold.
Close to Death Spite Damage Increase: 20% Deal increased strike damage to enemies below the health threshold.
Death's Embrace (PvP, WvW) Spite Damage Increase: 5% Deal increased strike damage while downed. Inflict vulnerability when you strike a foe below the health threshold. Cannot apply vulnerability to the same target more than once per interval.
Death's Embrace (PvE) Spite Damage Increase: 25% Deal increased strike damage while downed. Inflict vulnerability when you strike a foe below the health threshold. Cannot apply vulnerability to the same target more than once per interval.
Necromantic Corruption Death Magic Damage Increase: 25% Minions deal more damage and take conditions from you. When a minion attacks, it transfers conditions to its target. There is a 10 second cooldown per minion.
Soul Barbs Soul Reaping Damage Increase: 10% Entering or exiting shroud increases all damage you deal for a duration.
Spiteful Talisman Spite Damage Increase: 10% Deal increased strike damage to foes with no boons. Focus and axe skills recharge faster.
Augury of Death Reaper Damage Increase: 100% Shouts now siphon health. This effect is increased on foes in melee range.
Cold Shoulder Reaper Damage Increase: 15% Chill lasts longer, and chilled foes take increased strike damage from your attacks.
Cold Shoulder (PvP, WvW) Reaper Damage Increase: 10% Chill lasts longer, and chilled foes take increased strike damage from your attacks.
Soul Eater Reaper Damage Increase: 10% Striking foes within the range threshold deals increased strike damage and heals you for a portion of the damage dealt. Healing will not occur while life force replaces health.
Fell Beacon Scourge Damage Increase: 10% Burning you inflict does more damage. Gain expertise based on your condition damage.
Wicked Corruption (PvP, WvW) Harbinger Damage Increase: 0.5% Deal increased strike damage for each stack of blight you have. Critical strikes deal increased damage to targets with torment.
Wicked Corruption Harbinger Damage Increase: 1% Deal increased strike damage for each stack of blight you have. Critical strikes deal increased damage to targets with torment.
Traits that reduce incoming damage[edit]
Trait Trait line Damage reduced in % Description
Deathless Courage (PvP, WvW) Willbender Damage Reduced: 20% While Courage is active, incoming strike damage and condition damage is reduced.
Deathless Courage Willbender Damage Reduced: 50% While Courage is active, incoming strike damage and condition damage is reduced.
Close Quarters Retribution Damage Reduced: 10% Reduce strike damage dealt to you from foes beyond the range threshold.
Demonic Resistance (PvE) Corruption Damage Reduced: 20% Incoming strike damage is reduced while you have resistance on you. (specialization)
Demonic Resistance (PvP, WvW) Corruption Damage Reduced: 10% Incoming strike damage is reduced while you have resistance on you. (specialization)
Determined Resolution Retribution Damage Reduced: 10% Strike damage taken is reduced by a percentage while you have resolution. (specialization)
Unyielding Devotion (PvE, WvW) Salvation Damage Reduced: 15% Take reduced strike damage for a duration after healing. Lasts longer if you heal an ally other than yourself.
Unyielding Devotion (PvP) Salvation Damage Reduced: 10% Take reduced strike damage for a duration after healing. Lasts longer if you heal an ally other than yourself.
Draconic Echo Herald Facet of Chaos: 10% You retain your facet passives for a duration after using their consume skills. Your facet passives grant additional bonuses.
Draconic Echo (PvP, WvW) Herald Facet of Chaos: 5% You retain your facet passives for a duration after using their consume skills. Your facet passives grant additional bonuses.
Hardened Armor Defense Damage Reduced: 10% Gain resolution when you block or are struck by a critical hit. Strike damage taken is reduced by a percentage while you have resolution.
Savage Instinct (PvE, WvW) Berserker Feel No Pain: 100% Activating berserk mode breaks stuns and reduces incoming damage.
Savage Instinct Berserker Feel No Pain: 100% Activating berserk mode breaks stuns and reduces incoming damage.
Iron Blooded Alchemy Iron Blooded: 2% Reduce physical and condition damage for each boon you have.
Light Density Holosmith Damage Reduced: 15% Photon Forge reduces incoming damage but has increased passive heat generation.
Exigency Protocols Mechanist Exigency Protocols: 50% When your mech is struck while under half health, it activates Exigency Protocols, gaining damage reduction and regeneration for a short duration. Regeneration boons you apply are stronger.
Oakheart Salve Wilderness Survival Damage Reduced: 5% Gain regeneration when you are inflicted with bleeding, poison, or burning. While you have regeneration, you take reduced strike damage.
Survival Instincts Wilderness Survival Low-Health Damage Reduction: 10% Gain increased outgoing strike damage and reduced incoming strike damage. When above the health threshold, outgoing strike damage is further increased. When below the health threshold, incoming strike damage is further reduced.
Survival Instincts Wilderness Survival Damage Reduced: 5% Gain increased outgoing strike damage and reduced incoming strike damage. When above the health threshold, outgoing strike damage is further increased. When below the health threshold, incoming strike damage is further reduced.
Vow of the Untamed (PvE) Untamed Incoming Damage Adjustment: 25% Your outgoing strike damage is increased while you are unleashed. You take reduced damage from strikes while your pet is unleashed.
Vow of the Untamed (PvP, WvW) Untamed Incoming Damage Adjustment: 10% Your outgoing strike damage is increased while you are unleashed. You take reduced damage from strikes while your pet is unleashed.
Marauder's Daredevil Damage Reduced: 10% Gain vitality based on a portion of your power. Damage is decreased from foes within the range threshold.
Unhindered Combatant Daredevil Unhindered Combatant: 10% Your dodge ability is replaced by a long-range dash that removes inhibiting conditions and grants swiftness and damage reduction. Removing conditions in this way temporarily reduces endurance gain.
Weakening Strikes Daredevil Damage Reduced: 10% Your next attack after dodging causes weakness to foes struck. Weakened enemies deal less damage to you, and you deal increased strike damage to them.
Iron Sight (PvE) Deadeye Damage Reduced: 15% Strike damage dealt to your marked target is increased, and strike damage taken from your marked target is reduced.
Iron Sight (PvP, WvW) Deadeye Damage Reduced: 10% Strike damage dealt to your marked target is increased, and strike damage taken from your marked target is reduced.
Arcane Resurrection Arcane Frost Aura: 10% Cast Geyser when you begin reviving a downed ally. Geyser now partially revives downed allies. When you begin reviving an ally, you gain an aura based on your attunement. (specialization)
Geomancer's Earth Damage Reduced: 10% Strike damage from nearby foes is reduced. Earth weapon skills gain reduced recharge.
Soothing Ice Water Frost Aura: 10% Gain regeneration and frost aura when using a healing skill.
Stone Flesh Earth Damage Reduced: 7% Strike damage taken is reduced while attuned to earth.
Elemental Bastion Tempest Frost Aura: 10% Heal allies you grant an aura to. Grant frost aura to nearby allies when struck while below the health threshold.
Unstable Conduit Tempest Frost Aura: 10% Overloads grant an aura based on your attunement.
Elemental Epitome (PvP, WvW) Catalyst Frost Aura: 10% Gain an aura based on your current attunement when you combo. Gain Elemental Empowerment when you grant yourself an aura. Auras can be gained this way once per attunement per interval.
Elemental Epitome (PvE) Catalyst Frost Aura: 10% Gain an aura based on your current attunement when you combo. Gain Elemental Empowerment when you grant yourself an aura. Auras can be gained this way once per attunement per interval.
Hardened Auras Catalyst Damage reduction per stack: 2% Damage reduction is increased when you grant yourself an aura. When this triggers, the duration of all stacks are refreshed.
Blood as Sand (PvP, WvW) Scourge Damage Reduced: 7% Reduce all incoming damage taken when you have a shade active.
Blood as Sand Scourge Damage Reduced: 15% Reduce all incoming damage taken when you have a shade active.
Related effects[edit]
Effects that reduce strike damage[edit]
• Protection — Incoming damage decreased by 33%; stacks duration.
• Frost Aura — Chill foes that strike you (only once per second for each attacker); incoming damage is reduced by 10%.
Relics that increase outgoing strike damage[edit]
Relics that reduce incoming strike damage[edit]
• Relic of Nourys — Gain stacks of Nourys's Hunger every interval and when you remove boons from enemies. Consume 10 stacks to grow larger, dealing increased damage, taking reduced damage, and
converting a percentage of outgoing damage into healing.
• Relic of Sorrow — After using an elite skill, create an area that protects allies and destroys enemy projectiles.
See also Sigil#Damage bonus for sigils that increase strike damage against specific NPCs
Related consumables[edit]
Consumables that reduce incoming damage[edit]
Item Effect Duration Level Required
Bowl of Mussel Soup -10% Incoming Damage; +70 Vitality; +10% Experience from kills 30 minutes 80
Bowl of Curry Mussel Soup -10% Incoming Damage; +5% Condition-Damage Reduction; +10% Experience from kills 30 minutes 80
Bowl of Lemongrass Mussel Pasta -10% Incoming Damage; +70 Toughness; +10% Experience from kills 30 minutes 80
Plate of Mussels Gnashblade -10% Incoming Damage; +70 Concentration; +10% Experience from kills 30 minutes 80
Fried Oysters -10% Incoming Damage; +10% Experience from kills 30 minutes 80
Fried Oyster Sandwich -10% Incoming Damage; +70 Power; +10% Experience from kills 30 minutes 80
Oysters Gnashblade -10% Incoming Damage; +70 Expertise; +10% Experience from kills 30 minutes 80
Oysters with Cocktail Sauce -10% Incoming Damage; +70 Precision; +10% Experience from kills 30 minutes 80
Oysters with Pesto Sauce -10% Incoming Damage; +70 Healing Power; +10% Experience from kills 30 minutes 80
Oysters with Spicy Sauce -10% Incoming Damage; +70 Condition Damage; +10% Experience from kills 30 minutes 80
Oysters with Zesty Sauce -10% Incoming Damage; +70 Ferocity; +10% Experience from kills 30 minutes 80
Condition damage[edit]
For the eponymous attribute, see Condition Damage
The damage done by conditions is governed by the character level, the condition inflicted, and the player's Condition Damage attribute. Unlike strike damage, condition damage is not reduced by armor
or toughness. Players can increase the total damage done by conditions over time by increasing the Condition Duration attribute via traits, equipment, or nourishments. Players can reduce incoming
condition damage with Resolution, Light Aura, and Dark Aura, as well as other effects listed below.
Condition damage calculation[edit]
Condition Formula Formula at level 80 Inflicted
Bleeding 0.06 * Condition Damage + 0.25 * Level + 2 0.06 * Condition Damage + 22 per second
Burning 0.155 * Condition Damage + 1.55 * Level + 7 0.155 * Condition Damage + 131 per second
Poisoned^1 0.06 * Condition Damage + 0.375 * Level + 3.5 0.06 * Condition Damage + 33.5 per second
Torment^2 0.06 * Condition Damage + 0.25 * Level + 2 0.06 * Condition Damage + 22 per second
Confusion 0.035 * Condition Damage + 0.1 * Level + 2 0.035 * Condition Damage + 10 per second
Confusion 0.0625 * Condition Damage + 0.575 * Level + 3.5 0.0625 * Condition Damage + 49.5 per foe skill use
• ^1 Poison reduces incoming healing by 33% for its entire duration.
• ^2 Torment deals more damage to stationary targets, see Torment for moving/stationary damage in PvE/WvW+PvP.
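As a quick illustration (my own sketch, not part of the wiki; the function name and structure are invented), the per-tick formulas above can be written as a small helper:

```python
# Tick-damage coefficients from the table above:
# condition -> (Condition Damage factor, level factor, base)
COEFFS = {
    "bleeding": (0.06, 0.25, 2.0),
    "burning": (0.155, 1.55, 7.0),
    "poisoned": (0.06, 0.375, 3.5),
    "torment": (0.06, 0.25, 2.0),
    "confusion": (0.035, 0.1, 2.0),
}

def tick_damage(condition: str, condition_damage: float, level: int = 80) -> float:
    """Damage per second for one stack of the given condition."""
    cd_factor, level_factor, base = COEFFS[condition]
    return cd_factor * condition_damage + level_factor * level + base

# Sanity check against the level-80 column: with 0 Condition Damage,
# bleeding ticks for 22 and burning for 131 per second.
print(tick_damage("bleeding", 0))  # 22.0
print(tick_damage("burning", 0))   # 131.0
```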
Related skills[edit]
Skills that increase outgoing condition damage[edit]
Weapon skills that increase outgoing condition damage
Trait skills that increase outgoing condition damage
Common skills that increase outgoing condition damage
Skills that reduce incoming condition damage[edit]
Weapon skills that reduce incoming condition damage
Transform skills that reduce incoming condition damage
Related traits[edit]
Traits that increase outgoing condition damage[edit]
Trait Trait line Condition damage increase in % Description
Amplified Wrath (PvP, WvW) Radiance Damage Increase: 15% Burning you inflict deals increased damage. Burning duration applied by the passive effect of Virtue skill 1 is increased.
Amplified Wrath (PvE) Radiance Damage Increase: 10% Burning you inflict deals increased damage. Burning duration applied by the passive effect of Virtue skill 1 is increased.
Acolyte of Torment Corruption Damage Increase: 10% Torment deals increased damage.
Destructive Impulses (PvP, WvW) Devastation Bonus Damage from Off Hand: 5% All damage dealt is increased, and increased additionally if you have an off-hand weapon equipped.
Destructive Impulses Devastation Damage Increase: 5% All damage dealt is increased, and increased additionally if you have an off-hand weapon equipped.
Destructive Impulses (PvE) Devastation Bonus Damage from Off Hand: 2.5% All damage dealt is increased, and increased additionally if you have an off-hand weapon equipped.
Ferocious Aggression (PvP, WvW) Invocation Damage Increase: 7% All damage dealt is increased while you have fury.
Ferocious Aggression Invocation Damage Increase: 10% All damage dealt is increased while you have fury.
Vicious Reprisal Retribution Damage Increase: 10% While you have resolution, all damage dealt is increased, and you gain might when you strike foes. Resolution granted to you lasts longer. (specialization)
Draconic Echo (PvE) Herald Facet of Elements: 10% You retain your facet passives for a duration after using their consume skills. Your facet passives grant additional bonuses.
Draconic Echo (PvP, WvW) Herald Facet of Elements: 5% You retain your facet passives for a duration after using their consume skills. Your facet passives grant additional bonuses.
Ambush Commander Renegade Kalla's Fervor: 2% When you critically strike, or attack your enemies from behind or their flanks, or strike a defiant foe, you'll inspire yourself with Kalla's Fervor. Gain access to Citadel Order abilities.
Ashen Demeanor Renegade Kalla's Fervor: 2% Disabling a foe cripples your enemy, which leaves them vulnerable and inspires you with Kalla's Fervor.
Blood Fury Renegade Kalla's Fervor: 2% Fury increases the duration of bleeds you inflict. Gaining fury inspires you with Kalla's Fervor.
Heartpiercer Renegade Damage Increase: 25% Bleeding you inflict deals more damage.
Lasting Legacy Renegade Improved Kalla's Fervor: 3% Kalla's Fervor you inspire lasts longer and is more potent. Heroic Command grants more might per stack of Kalla's Fervor.
Wrought-Iron Will Renegade Kalla's Fervor: 2% Evading an attack inspires you with Kalla's Fervor and grants boons to nearby allies.
Thermal Vision Firearms Thermal Vision: 5% Gain expertise. Increase your outgoing condition damage when you inflict burning.
Hidden Barbs Skirmishing Damage Increase: 33% Bleeding you inflict is more dangerous.
Poison Master Wilderness Survival Damage Increase: 25% Upon using a Beast ability, your pet's next attack will inflict poison; your poison damage is increased.
Twice as Vicious (PvE) Soulbeast Twice as Vicious: 10% Disabling a foe increases your damage and condition damage for a short duration.
Twice as Vicious (PvP, WvW) Soulbeast Twice as Vicious: 5% Disabling a foe increases your damage and condition damage for a short duration.
Deadly Ambush Trickery Damage Increase: 25% Stealing also applies bleeding. Bleeding you inflict deals more damage.
Lead Attacks Trickery Maximum Damage 15% Increases all damage dealt per initiative spent. Steal gains reduced recharge time.
Lead Attacks Trickery Lead Attacks: 1% Increases all damage dealt per initiative spent. Steal gains reduced recharge time.
Potent Poison (PvP, WvW) Deadly Arts Damage Increase: 20% Poison you inflict has increased damage and duration. Other Deadly Arts traits apply additional stacks of poison.
Potent Poison (PvE) Deadly Arts Damage Increase: 33% Poison you inflict has increased damage and duration. Other Deadly Arts traits apply additional stacks of poison.
Lotus Training Daredevil Lotus Training: 15% Your dodge ability now uses Impaling Lotus, firing daggers at nearby enemies. Gain increased condition damage for a period of time when dodging.
Strength of Shadows (PvP, WvW) Specter Damage Increase: 25% Gain expertise based on a percentage of your vitality. Torment you inflict deals increased damage.
Strength of Shadows Specter Damage Increase: 20% Gain expertise based on a percentage of your vitality. Torment you inflict deals increased damage.
Transcendent Tempest (PvP, WvW) Tempest Transcendent Tempest: 7% Time to attain singularity is reduced. Upon successfully completing an overload, gain increased damage.
Transcendent Tempest Tempest Transcendent Tempest: 15% Time to attain singularity is reduced. Upon successfully completing an overload, gain increased damage.
Elements of Rage (PvE) Weaver Elements of Rage: 10% Gain a bonus to all damage dealt for a period of time when attuned to a single element. Gain precision based on a percentage of your vitality.
Elements of Rage (PvP, WvW) Weaver Elements of Rage: 5% Gain a bonus to all damage dealt for a period of time when attuned to a single element. Gain precision based on a percentage of your vitality.
Weaver's Prowess (PvE) Weaver Weaver's Prowess: 10% Gain increased condition damage and duration for a period of time after attuning to a different element.
Weaver's Prowess (PvP, WvW) Weaver Weaver's Prowess: 10% Gain increased condition damage and duration for a period of time after attuning to a different element.
Empowering Auras (PvE) Catalyst Empowering Auras: 3% Gain increased outgoing damage when you grant yourself an aura. When this triggers, the duration of all stacks are refreshed.
Empowering Auras (PvP, WvW) Catalyst Empowering Auras: 2% Gain increased outgoing damage when you grant yourself an aura. When this triggers, the duration of all stacks are refreshed.
Illusionary Membrane Chaos Damage Increase: 7% Gain chaos aura when you use Shatter skill 2. While you have chaos aura, condition damage you deal is increased.
Illusionary Membrane (PvP, WvW) Chaos Damage Increase: 10% Gain chaos aura when you use Shatter skill 2. While you have chaos aura, condition damage you deal is increased.
Deadly Blades (PvE) Virtuoso Deadly Blades: 5% Blades inflict vulnerability on critical hits. After successfully casting a Bladesong, increase all damage dealt for a short time. This does not stack.
Deadly Blades (PvP, WvW) Virtuoso Deadly Blades: 5% Blades inflict vulnerability on critical hits. After successfully casting a Bladesong, increase all damage dealt for a short time. This does not stack.
Putrid Defense Death Magic Poison Damage: 15% Poison damage is increased. Applying poison grants carapace.
Soul Barbs Soul Reaping Damage Increase: 10% Entering or exiting shroud increases all damage you deal for a duration.
Demonic Lore (PvP) Scourge Damage Increase: 20% Torment you inflict deals increased damage and causes your foes to burn. This trait can only inflict burning on a particular target once every three seconds.
Demonic Lore (PvE, WvW) Scourge Damage Increase: 33% Torment you inflict deals increased damage and causes your foes to burn. This trait can only inflict burning on a particular target once every three seconds.
Septic Corruption (PvP, WvW) Harbinger Damage Increase: 0.5% Deal increased condition damage for each stack of blight you have. Shroud 2 skill also inflicts poison.
Septic Corruption (PvE) Harbinger Damage Increase: 0.25% Deal increased condition damage for each stack of blight you have. Shroud 2 skill also inflicts poison.
Traits that reduce incoming condition damage[edit]
Trait Trait line Condition damage reduced in % Description
Justice is Blind Radiance Light Aura: 10% Gain a light aura and blind nearby foes when you activate Virtue skill 1.
Deathless Courage (PvE) Willbender Condition Damage Reduced: 50% While Courage is active, incoming strike damage and condition damage is reduced.
Deathless Courage (PvP, WvW) Willbender Condition Damage Reduced: 20% While Courage is active, incoming strike damage and condition damage is reduced.
Versed in Stone Retribution Condition Damage Reduced: 50% Rite of the Great Dwarf affects condition damage as well. When struck below the health threshold, cast the Rite of the Great Dwarf. Gain power based on your toughness. (specialization)
Draconic Echo (PvE) Herald Facet of Chaos: 10% You retain your facet passives for a duration after using their consume skills. Your facet passives grant additional bonuses.
Draconic Echo (PvP, WvW) Herald Facet of Chaos: 5% You retain your facet passives for a duration after using their consume skills. Your facet passives grant additional bonuses.
Righteous Rebel Renegade Condition Damage Reduced: 4% Kalla's Fervor reduces the damage you receive from conditions. Orders from Above lasts longer and affects a larger area.
Iron Blooded Alchemy Iron Blooded: 2% Reduce physical and condition damage for each boon you have.
Exigency Protocols Mechanist Exigency Protocols: 50% When your mech is struck while under half health, it activates Exigency Protocols, gaining damage reduction and regeneration for a short duration. Regeneration boons you apply are stronger.
Second Skin (PvE) Soulbeast Damage Reduced: 33% Conditions inflict less damage to you while you have protection.
Second Skin (PvP) Soulbeast Damage Reduced: 25% Conditions inflict less damage to you while you have protection.
Unhindered Combatant Daredevil Unhindered Combatant: 10% Your dodge ability is replaced by a long-range dash that removes inhibiting conditions and grants swiftness and damage reduction. Removing conditions in this way temporarily reduces endurance gain.
Beyond the Veil Death Magic Condition Damage Reduced: 10% Take reduced condition damage while at or above the threshold of carapace stacks.
Dark Defiance Death Magic Condition Damage Reduction: 20% If you are disabled, gain protection. Incoming condition damage is reduced while you have protection. Disables include stun, daze, knockback, pull, knockdown, sink, float, fear, taunt, and launch.
Blood as Sand (PvE) Scourge Damage Reduced: 15% Reduce all incoming damage taken when you have a shade active.
Blood as Sand (PvP, WvW) Scourge Damage Reduced: 7% Reduce all incoming damage taken when you have a shade active.
Related effects[edit]
Effects that reduce incoming condition damage[edit]
• Resolution — Incoming condition damage decreased by 33%; stacks duration.
• Light Aura — When struck, you gain resolution. Incoming condition damage is reduced by 10%. (Cooldown: 1s)
• Dark Aura — Surrounded by a dark aura that reduces incoming condition damage and causes torment each time you are struck (1-second cooldown per attacker).
Related equipment[edit]
Relics that increase outgoing condition damage[edit]
• Relic of Nourys — Gain stacks of Nourys's Hunger every interval and when you remove boons from enemies. Consume 10 stacks to grow larger, dealing increased damage, taking reduced damage, and
converting a percentage of outgoing damage into healing.
Relics that reduce incoming condition damage[edit]
• Relic of Nourys — Gain stacks of Nourys's Hunger every interval and when you remove boons from enemies. Consume 10 stacks to grow larger, dealing increased damage, taking reduced damage, and
converting a percentage of outgoing damage into healing.
Lifesteal damage[edit]
Primary article: Life stealing
Falling damage[edit]
A character that falls will take a percentage of their health in damage, based on the distance. The damage appears to scale exponentially, and the maximum safe falling distance is around 1200 units.
Landing in water negates fall damage, but invulnerability does not. In some cases, falling damage can be disabled by game mechanics, e.g. by the Windfall effect in World versus World game mode.
Nonfatal falls can still be quite dangerous in areas with hostile NPCs, as sufficiently long falls can cause characters to be knocked prone upon landing; this leaves the character vulnerable to
attacks while recovering.
A character can also fall while attempting to navigate a slope, since it is more difficult to estimate the height and angle. Sometimes, instead of sliding safely down, a character suffers repeated short falls that are each far enough to cause falling damage, sometimes resulting in defeat even though individually the falls would not have been fatal.
Forcing opponents to fall
In WvW and PvP, players can cause opponents to fall by using control effects, such as fear and knockback, to move them over the edge, but NPCs will never fall off ledges.
• When rounding numbers, Guild Wars 2 uses a specific type of rounding known as Round Half to Even.
• Direct damage has been clarified to strike damage in most runes, traits, skills and sigils with the May 11, 2021 game update.
• Prior to the December 3, 2019 game update, every profession had a trait which halved falling damage and granted one or more other effects after falling.
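The rounding note above can be checked directly (my example, not from the wiki; Python's built-in round() happens to use the same round-half-to-even rule, often called banker's rounding):

```python
# Round half to even: ties go to the nearest even integer,
# rather than always rounding up as in schoolbook rounding.
for x in (0.5, 1.5, 2.5, 3.5):
    print(x, "->", round(x))
# 0.5 -> 0, 1.5 -> 2, 2.5 -> 2, 3.5 -> 4
```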
See also[edit]
The n-Category Café
June 30, 2009
Springer Verlag Publishes ‘Proof’ of Goldbach’s Conjecture
Posted by John Baez
The ‘big three’ science publishers — Elsevier, Springer, and Wiley–Blackwell — like to argue that their high prices pay for high quality. Recent events cast a tinge of doubt on this. We all know
about the case of El Naschie, and Elsevier’s fake medical journals. Now Springer has published a book that purports to contain elementary proofs of Fermat’s Last Theorem and Goldbach’s Conjecture!
Here it is:
Posted at 6:55 PM UTC |
Followups (94)
June 26, 2009
This Book Needs a Title
Posted by John Baez
You may be familiar with Raymond Smullyan’s delightful books packed with puzzles and paradoxes. One of them — not my favorite — is called This Book Needs No Title.
Peter May and I are almost done editing a book that’s quite the opposite. It does need a title.
Background Essays Towards Higher Category Theory would be an accurate description, but it’s not very snappy. Towards Higher Category Theory is overly ambitious. Can you think of something better?
To help you dream up an appropriate title, here’s a draft of the preface. And if you spot mistakes in this preface, I’d like to hear about them. (The bibliography makes no pretensions to
completeness, so surely many people will be offended by how we have neglected their work. If you’re one of those people, I apologize.)
Posted at 3:55 PM UTC |
Followups (79)
Cohomology and Homotopy
Posted by David Corfield
In recent posts and this $n$Lab entry, Urs has been promoting his view of cohomology as being about Hom spaces between objects in certain settings, where the unknown space is on the left. Similarly, homotopy is where the unknown space is on the right. This got me thinking the following thoughts during some quiet moments in a conference this morning.
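In the familiar topological case this asymmetry can be summarized (my gloss, not a quote from the post) by the classifying-space and sphere descriptions:

$$H^n(X; A) \cong [X, K(A,n)], \qquad \pi_n(X) \cong [S^n, X],$$

so cohomology probes maps out of the unknown space $X$, while homotopy probes maps into it.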
Posted at 1:08 PM UTC |
Followups (49)
June 20, 2009
Hopf Algebras in Luxembourg
Posted by John Baez
Hopf algebras lie at a fascinating intersection of combinatorics, quantum physics, topology and category theory. Everyone should learn about them. Luckily, there is still some funding for students
and young researchers to attend this conference and school on Hopf algebras:
Posted at 8:44 PM UTC |
Followups (1)
This Week’s Finds in Mathematical Physics (Week 276)
Posted by John Baez
In week276 of This Week’s Finds, hear the shocking news about this star:
Read about the Local Bubble, the Loop I Bubble, the cloudlets from Sco-Gen, and the “local fluff”. Come visit the $n$Lab! And learn how Paul-André Mélliès and Nicolas Tabareau have taken some classic
results of Lawvere on algebraic theories and generalized them to other kinds of theories, like PROPs.
Posted at 8:18 PM UTC |
Followups (35)
Kan Lifts
Posted by David Corfield
I’ve been thinking more about organising principles operating in mathematics. I remember Steenrod wrote a very illuminating sketch of algebraic topology in terms of extensions and lifts, which I
can’t now retrieve. That got me wondering, if with Mac Lane we say
The notion of Kan extensions subsumes all the other fundamental concepts of category theory,
whatever happened to Kan lifts?
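For reference, here is the definition as I understand it (my summary, not Mac Lane's text): in a bicategory, a right Kan lift of $f \colon A \to C$ through $g \colon B \to C$ is a 1-cell $\mathrm{Rift}_g f \colon A \to B$ equipped with a universal 2-cell $g \circ \mathrm{Rift}_g f \Rightarrow f$, giving a natural bijection

$$\mathrm{Hom}(g \circ h, f) \cong \mathrm{Hom}(h, \mathrm{Rift}_g f)$$

for 1-cells $h \colon A \to B$; it is formally dual to the right Kan extension, with composition on the other side.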
Posted at 9:21 AM UTC |
Followups (39)
June 16, 2009
Accessible Even to a Philosopher
Posted by David Corfield
Edward Frenkel has a paper out today – Gauge Theory and Langlands Duality – which sets out from André Weil's letter to his sister.
This is a remarkable document, in which Weil tries to explain, in fairly elementary terms (presumably, accessible even to a philosopher), the “big picture” of mathematics, the way he saw it. I
think this sets a great example to follow for all of us.
Martin Krieger provided a translation of the letter. As for its accessibility, I can say that it did inspire chapter 4 of my book. Let’s hope Frenkel’s paper can also be inspirational. At first
glance, however, it looks tough going.
Posted at 1:04 PM UTC |
Followups (7)
June 15, 2009
2-Branes in 11 Dimensions
Posted by John Baez
I’m trying to learn a teeny bit more about supersymmetric membrane theories, and I’m so far behind that this old review article is proving helpful:
I’m particularly fascinated by the classification of ‘fundamental super $p$-branes’ and its relation to normed division algebras:
• Reals: 2-branes in 4 dimensions, with 1 bosonic and 1 fermionic degree of freedom.
• Complexes: 3-branes in 6 dimensions, with 2 bosonic and 2 fermionic degrees of freedom.
• Quaternions: 5-branes in 10 dimensions, with 4 bosonic and 4 fermionic degrees of freedom.
• Octonions: 2-branes in 11 dimensions, with 8 bosonic and 8 fermionic degrees of freedom.
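One way to remember the pattern (my aside, though it is standard in this literature): a $p$-brane in $d$ dimensions has $d - p - 1$ transverse directions, hence that many bosonic degrees of freedom, and for the four entries above

$$4 - 2 - 1 = 1, \quad 6 - 3 - 1 = 2, \quad 10 - 5 - 1 = 4, \quad 11 - 2 - 1 = 8,$$

matching the dimensions of $\mathbb{R}$, $\mathbb{C}$, $\mathbb{H}$ and $\mathbb{O}$.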
I talked about this ‘brane scan’ in incredibly elementary terms back in week118, but now I’d like to actually understand it. It’s supposed to follow from something like a classification of closed
differential forms on super-Minkowski spacetimes, due to Achúcarro et al. I don’t know how it goes, but it seems potentially quite comprehensible.
If anyone can help me with this, I’d appreciate it a lot. But I really want to say a word about the 11-dimensional case — a brief explanation for complete novices such as myself — and then ask a
question about that.
Posted at 11:40 AM UTC |
Followups (25)
June 14, 2009
This Week’s Finds in Mathematical Physics (Week 275)
Posted by John Baez
June 10, 2009
Final Exams Again
Posted by John Baez
I’m busy grading final exams for my undergraduate number theory class. The class went quite well — perhaps because instead of proving quadratic reciprocity, I spent time teaching them about
arithmetic functions, Dirichlet convolution, Möbius inversion and the like… topics which lead to lots of fun puzzles and computations.
Nonetheless, grading finals is always mind-numbing and dispiriting. I’m sure you’ve seen it — perfectly intelligent people grading finals, trading the most mean-spirited and witless of witticisms
just to keep from going insane.
In that spirit, let me report three mildly amusing things I’ve seen so far. Don’t get your hopes up — they’re not nearly as funny as the proof of the infinitude of primes that I described last time I
taught this class.
Indeed, I’m sure some of you have seen funnier final exams this year. If so, tell us about ‘em!
Posted at 7:47 PM UTC |
Followups (16)
June 8, 2009
Strings, Fields, Topology in Oberwolfach
Posted by Urs Schreiber
Algebraic Geometry for Category Theorists
Posted by John Baez
James Dolan and I have spent the last year or so talking about algebraic geometry, trying to learn the basics.
Algebraic geometry should be a lot of fun for category theorists — after all, this is the subject that made Grothendieck invent topos theory! But alas, introductions to topos theory don’t seem to explain
much about algebraic geometry, and introductions to algebraic geometry don’t seem to fully embrace topos theory. It seems that Grothendieck’s revolution never fully caught on. And that’s sort of sad.
Posted at 12:59 AM UTC |
Followups (242)
June 6, 2009
Nonabelian Algebraic Topology
Posted by John Baez
Here it is — the magnum opus of cubical methods in algebraic topology:
It’s still just a preliminary version, but it’s 516 pages long, and packed with good stuff — and it’s free!
Grab a copy! I’m sure the authors will be grateful if you catch typos and other problems.
Posted at 5:09 PM UTC |
Post a Comment
June 5, 2009
The Elusive Proteus
Posted by David Corfield
Cassirer writes in Determinism and Indeterminism in Modern Physics:
At first glance the universality of the action principle seems by no means beyond question. This universality could be attained only at the cost of a circumstance which, from the purely physical
point of view, led again and again to difficulties and doubts. For the more universally the principle was conceived, the more difficult it became to specify clearly its proper concrete content.
It becomes finally a kind of Proteus, displaying a new aspect on each new level of scientific knowledge. If we ask what precisely that “something” might be to which the property of a maximum or a
minimum is ascribed, we receive no definite and unambiguous answer. (p. 50)
Posted at 1:56 PM UTC |
Followups (31)
June 3, 2009
Mathematical Principles
Posted by David Corfield
I’ve been reading several works by Ernst Cassirer of late. In his Determinism and Indeterminism in Modern Physics, Yale University Press, 1956, (translation by O. Benfrey of ‘Determinismus und
Indeterminismus in der modernen Physik’, 1936) he discusses the multi-levelled nature of physics: laws encompass measurements, and principles encompass laws.
Here in fact we find a basic methodological characteristic common to all genuine statements of principles. Principles do not stand on the same level as laws, for the latter are statements
concerning specific concrete phenomena. Principles are not themselves laws, but rules for seeking and finding laws. This heuristic point of view applies to all principles. They set out from the
presupposition of certain common determinations valid for all natural phenomena and ask whether in the specialized disciplines one finds something corresponding to these determinations, and how
this “something” is to be defined in particular cases.
The power and value of physical principles consists in this capacity for “synopsis,” for a comprehensive view of whole domains of reality… Principles are invariably bold anticipations that
justify themselves in what they accomplish by way of construction and inner organization of our total knowledge. They refer not directly to phenomena but to the form of the laws according to
which we order these phenomena. A genuine principle, therefore, is not equivalent to a natural law. It is rather the birthplace of natural laws, a matrix as it were, out of which new natural laws
may be born again and again. (pp. 52-53)
Posted at 10:44 AM UTC |
Followups (31)
Journal Club — Geometric Infinity-Function Theory — Week 6
Posted by John Baez
guest post by Alex Hoffnung
This section is broken into two short parts. I will try to say a little about the first here and then head over to the $n$Lab to say more soon.
Posted at 5:57 AM UTC |
Followups (2)
June 2, 2009
Treq Lila
Posted by John Baez
I should have been finishing a paper, but instead I finished an album.
Posted at 9:24 PM UTC |
Followups (8)
G. Michael Blahnik.com - Mathematics
SAMPLE: ADDING CO-DISCIPLINE OPTIONS TO A CURRENT MAJOR PROGRAM IN MATHEMATICS (B.A. Level)
CURRENT MAJOR PROGRAM: B.A. MATHEMATICS: 44 credits (with tracks)
Core Required Courses (20 credits):
MAT 129 Calculus I
MAT 229 Calculus II
MAT 194 Mathematical Sciences Seminar
MAT 234 Linear Algebra
STA 250 Probability and Statistics
MAT 329 Calculus III
MAT 489 Comprehensive Exam
Pure Mathematics Track (24 credits)
MAT 302 Introduction to Higher Mathematics
MAT 310 Elementary Theory of Numbers
MAT 420 Real Variables I
MAT 430 Complex Variables
Three 300 or above level electives
Applied Mathematics Track (24 credits)
CSC 270 Mathematics Software Programming
MAT 325 Differential Equations
MAT 330 Classical Applied Analysis
MAT 360 Numerical Analysis
Three 300 or above level electives
General Mathematics Track (24 credits)
MAT 302 Introduction to Higher Mathematics
MAT 310 Elementary Theory of Numbers
MAT 345 Introduction to Geometry
STA 341 Statistics II
Three 300 or above level electives
B.A. Mathematics and the Social Sciences (44 credits)
[A Psychology Major has a natural inclination for Mathematics and wants to work in a research field in Psychology, but she doesn’t want to major in Mathematics. She can earn a B.A.: Mathematics
and the Social Sciences by completing the following]:
Core Required Courses (20 credits):
MAT 129 Calculus I
MAT 229 Calculus II
MAT 194 Mathematical Sciences Seminar
MAT 234 Linear Algebra
STA 250 Probability and Statistics
MAT 329 Calculus III
MAT 489 Comprehensive Exam
Major Elective Courses (relevant to Social Sciences) (select 9 credits):
MAT 305 History of Mathematics
MAT 375 Applied Mathematical Models
CSC 270 Mathematics Software Programming
MAT 325 Differential Equations
MAT 330 Classical Applied Analysis
MAT 360 Numerical Analysis
Co-Major Elective Courses (relevant to Mathematics) (select 15 credits; each elective should be from different disciplines)
ANT 342 Quantitative Methods in Anthropology
JUS 315 Criminal Justice Research Methods
ANT 465 Field Methods in Anthropology
ANT 565 Advanced Field Methods in Anthropology
AST 310 Astronomical Techniques
COU 580 Research Tools in Counseling
GEO 418 Geographical Information Systems
GEO 518 Geographical Information Analysis
LDR 310 Research Methods in Leadership
PSY 210 Research Methods in Psychology
PSY 310 Quantitative Methods in Psychology
PSY 490 Research: Psychology
SOC 320 Social Research
SOC 321 Applied Social Research
SOC 322 Quantitative Research Methods
* Only one (1) degree option has been created based on the wants and needs of students, the interests and expertise of faculty, and the versatility of the discipline curriculum. There are other
Co-Discipline Options that can be created in relation to Mathematics, e.g. Mathematics and Chemistry, Mathematics and Biology, Mathematics and Physics, Mathematics and Astronomy, Mathematics and the
Physical Sciences, Mathematics and Engineering, etc., depending upon the versatility of the curriculum in each of these programs. Mathematics is less versatile than Philosophy and History in
creating Co-Discipline degrees, but its versatility might be increased as interdisciplinary programs become more established.
Statistics Done Wrong
This book is vastly different from the books that try to warn us against incorrect statistical arguments present in media and other mundane places. Instead of targeting newspaper articles,
politicians, and journalists who make errors in their reasoning, the author investigates research papers, where one assumes that scientists and researchers make flawless arguments, at least from a statistical point of view. The author points out a few statistical errors even in the pop-science book "How to Lie with Statistics". This book takes the reader through the kind of statistics that one comes
across in research papers and shows various types of flawed arguments. The flaws could arise because of several reasons such as eagerness to publish a new finding without thoroughly vetting the
findings, not enough sample size, not enough statistical power in the test, inference from multiple comparisons etc. The tone of the author isn’t deprecatory. Instead he explains the errors in simple
words. There is minimal math in the book and the writing makes the concepts abundantly clear even to a statistics novice. That in itself should serve as a good motivation for a wider audience to go
over this 130-page book.
In the first chapter, the author introduces the basic concept of statistical significance. The basic idea of frequentist hypothesis testing is that it rests on the p value, which measures Probability(data|Hypothesis). In a way, the p value measures the amount of surprise that you find in the data given that you have a specific null hypothesis in mind. If the p value turns out to be too small, then you start doubting your null and reject the null. The procedure at the outset looks perfectly logical. However, one needs to keep in mind the things that do not form part of the p value, such as:
• It does not per se measure the size of the effect.
• Two experiments with identical data can give different p values. This is disturbing, as it implies the p value somehow depends on the intentions of the person doing the experiment.
• It does not say anything about the false positive rate.
By the end of the first chapter, the author convincingly rips apart p value and makes a case for using confidence intervals. He also says that many people do not report confidence intervals because
they are often embarrassingly wide and might make their effort a fruitless exercise.
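To make the contrast concrete, here is a small sketch (my own, not from the book) comparing what a p-value reports against what a confidence interval reports for a simulated two-group experiment; the simulated data and the 95% level are assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
treatment = rng.normal(loc=1.2, scale=2.0, size=50)  # simulated treatment group
control = rng.normal(loc=0.0, scale=2.0, size=50)    # simulated control group

# The p-value alone says the data are "surprising" under the null,
# but nothing about how big the effect is.
res = stats.ttest_ind(treatment, control)
p_value = res.pvalue

# A 95% confidence interval for the difference in means carries
# both the effect size and its uncertainty.
diff = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / len(treatment)
             + control.var(ddof=1) / len(control))
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se
print(f"p = {p_value:.4f}, effect = {diff:.2f}, "
      f"95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```

Note that the interval, unlike the p-value, immediately reveals whether the estimated effect is large enough to matter and how wide the uncertainty really is.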
The second chapter talks about statistical power, a concept that many introductory stats courses do not delve in to, appropriately. The statistical power of a study is the probability that it will
distinguish an effect of a certain size from pure luck. The power depends on three factors
• size of the bias you are looking for
• sample size
• measurement error
If an experiment is trying to test a subtle bias, then there needs to be far more data to even detect it. Usually the accepted power for an experiment is 80%. This means that the probability of bias
detection is close to 80%. In many of the tests that have negative results, i.e., the alternative is rejected, it is likely that the power of the test is compromised. Why do researchers fail to take care of
power in their calculations? The author guesses that it could be because the researcher’s intuitive feeling about samples is quite different from the results of power calculations. The author also
ascribes it to the not-so-straightforward math required to compute the power of a study.
The problems with power also plague the other side of experimental results. Instead of detecting the true bias, the results often show an inflation of the true effect, called M errors, where M stands for
magnitude. One of the suggestions given by the author is : Instead of computing the power of a study for a certain bias detection and certain statistical significance, the researchers should instead
look for power that gives narrower confidence intervals. Since there is no readily available term to describe this statistic, the author calls it assurance, which determines how often the confidence
intervals must beat a specific target width. The takeaway from this chapter is that whenever you see a report of significant effect, your reaction should not be “Wow, they found something
remarkable", but it needs to be, "Is the test underpowered?". Also, just because the alternative was rejected doesn't mean the alternative is worthless.
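The power calculation the author says researchers avoid is not that hard, at least approximately. This sketch (my own, not from the book) estimates the power of a two-sided two-sample test with a normal approximation; the effect sizes and sample sizes below are illustrative:

```python
import math
from scipy.stats import norm

def two_sample_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test via a
    normal approximation. effect_size is Cohen's d."""
    se = math.sqrt(2.0 / n_per_group)    # SE of the standardized mean difference
    z_crit = norm.ppf(1 - alpha / 2)     # two-sided critical value
    # Probability of exceeding the critical value given the true effect
    # (the tiny opposite-tail term is ignored).
    return 1 - norm.cdf(z_crit - effect_size / se)

# A subtle effect (d = 0.2) with 50 subjects per group is badly underpowered,
# far below the conventional 80% target:
print(round(two_sample_power(0.2, 50), 2))
# Reaching 80% power for the same effect takes roughly 400 per group:
print(round(two_sample_power(0.2, 394), 2))
```

The intuition-defying part is exactly what the author describes: detecting a subtle bias reliably takes nearly an order of magnitude more data than most intuitive sample sizes provide.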
The third chapter talks about pseudoreplication, a practice where the researcher uses the same set of patients/animals/whatever to create repeated measurements. Instead of collecting a bigger sample, the researcher manufactures one through repeated measurements. Naturally, the data will not be independent as the original experiment might warrant. Knowing that there is pseudoreplication in the data, one must be careful while drawing inferences. The author gives some broad suggestions to address this issue.
The fourth chapter is about the famous base rate fallacy, where one mistakes the p value for the probability of the alternative being true. Frequentist procedures that give p values merely capture the surprise element; in no way do they give the probability that the alternative is true in a treatment-control experiment. The best way to get a good estimate of the probability that a result is
false positive, is by considering prior estimates. The author also talks about Benjamini-Hochberg procedure, a simple yet effective procedure to control for false positive rate. I remember reading
this procedure in an article by Brad Efron titled, “The future of indirect evidence”, in which Efron highlights some of the issues related to hypothesis testing in high dimensional data.
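The Benjamini–Hochberg step-up procedure mentioned above is short enough to sketch directly. This is my own minimal implementation of the standard procedure, not code from the book; the example p-values are made up:

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Return a boolean mask of discoveries controlling the FDR at level q."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest rank k with p_(k) <= (k/m) * q
    thresholds = (np.arange(1, m + 1) / m) * q
    below = ranked <= thresholds
    discoveries = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])    # largest index meeting the bound
        discoveries[order[: k + 1]] = True  # reject all hypotheses up to rank k
    return discoveries

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(pvals, q=0.05))
```

On these ten p-values only the two smallest survive, even though five of them are below the naive 0.05 cutoff; that gap is precisely the multiple-comparisons correction at work.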
The fifth chapter talks about the often-found procedure of testing two drugs separately against a placebo and using those results to compare the efficacy of the two drugs. Various statistical errors can creep in; these are thoroughly discussed. The sixth chapter talks about double dipping, i.e. using the same data to do exploratory analysis and hypothesis testing. It is the classic case of using in-sample
statistics to extrapolate out-of-sample statistics. The author talks about arbitrary stopping rules that many researchers employ for cutting short an elaborate experiment when they find statistically
significant findings at an early stage. Instead of having a mindset which says, "I might have been lucky in the initial stage", the researchers overenthusiastically stop the experiment and report a truth-inflated result. The seventh chapter talks about the dangers of dichotomizing continuous data. In many research papers, there is a tendency to divide the data into two groups and run
significance tests or ANOVA based tests, thus reducing the information available from the dataset. The author gives a few examples where dichotomization can lead to grave statistical errors.
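The truth inflation caused by arbitrary stopping rules is easy to demonstrate by simulation. This sketch (mine, not from the book) repeatedly "peeks" at an A/A test with no real effect and stops at the first nominally significant look; the look schedule and trial counts are arbitrary choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def peeking_false_positive_rate(n_max=200, check_every=10, trials=500):
    """Simulate an A/A experiment (no real effect) that is stopped the
    moment any interim look shows p < 0.05."""
    hits = 0
    for _ in range(trials):
        a = rng.normal(size=n_max)
        b = rng.normal(size=n_max)
        for n in range(check_every, n_max + 1, check_every):
            if stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05:
                hits += 1
                break
    return hits / trials

# With 20 interim looks, the realized false-positive rate lands far
# above the nominal 5%.
print(peeking_false_positive_rate())
```

Each extra peek is another chance for noise to cross the significance line, which is why a fixed sample size (or a proper sequential design) matters.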
The eighth chapter talks about basic errors that one does in doing regression analysis. The errors highlighted are
• over reliance on stepwise regression methods like forward selection or backward elimination methods
• confusing correlation and causation
• confounding variables and Simpson’s paradox
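Simpson's paradox, the last item above, deserves a numeric illustration. These are the classic kidney-stone treatment counts often used to demonstrate it (illustrative data, not from the book):

```python
# Success counts: (recovered, total) per treatment within each severity group
groups = {
    "mild":   {"A": (81, 87),   "B": (234, 270)},
    "severe": {"A": (192, 263), "B": (55, 80)},
}

def rate(recovered, total):
    return recovered / total

# Within every severity group, treatment A has the higher success rate...
for name, g in groups.items():
    print(name, {t: round(rate(*g[t]), 2) for t in ("A", "B")})

# ...yet pooled over the groups, B looks better, because B was mostly given
# to mild cases while A got the severe ones (severity is the confounder).
pooled = {t: rate(sum(groups[g][t][0] for g in groups),
                  sum(groups[g][t][1] for g in groups))
          for t in ("A", "B")}
print({t: round(v, 2) for t, v in pooled.items()})
```

The aggregate comparison silently reweights the groups, which is why controlling for the confounder (or randomizing treatment assignment) is essential before comparing rates.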
The last few chapters give general guidelines to improve research efforts, one of them being “reproducible research”.
Even though this book is a compilation of various statistical errors committed by researchers in various scientific fields, it can be read by anyone whose day job is data analysis and model building.
In our age of data explosion, where there are far more people employed in analyzing data and who need not necessarily publish papers, this book would be useful to a wider audience. If one wants to go
beyond the simple conceptual errors present in the book, one might have to seriously think about all the errors mentioned in the book and understand the math behind them.
How to Check Time Series Stationarity in Python - Analyzing Alpha
You can use visual inspection, global vs. local analysis, and statistics to analyze stationarity. The Augmented Dickey-Fuller (ADF) test is the most commonly used parametric test, and the
Zivot-Andrews test is better than the ADF at detecting stationarity through structural breaks.
This post assumes you understand what stationarity is and why it’s important.
Note: I use matplotlib plots for analysis and Plotly charts for presentation in this post. The complete code for this post is on the Analyzing Alpha GitHub Repository.
Visual Inspection
It’s easy to see if a process is creating a stationary time series. If we see the mean or the distribution changing, it’s non-stationary.
Let’s first create a random walk model using Python, and then we’ll plot it in Plotly.
# Grab imports
import numpy as np
import pandas as pd

# Create index
periods = 2000
vol = .002
dti = pd.date_range("2021-01-01", periods=periods)

# Generate normal distribution
random_walk = pd.DataFrame(index=dti,
                           data=np.random.normal(size=periods) * vol)

# Plot it using a custom Plotly helper (not shown here; the complete code
# is in the linked GitHub repository), with the title
# 'Stationary "Random Walk" Time Series'
We can identify a stochastic process by analyzing a time series plot and gain an understanding of if there’s a shift in the mean or distribution through time. This is called global vs. local
analysis, which we’ll discuss further below.
However, in practice, it’s trivial to use more advanced plot functions to identify a stationary process.
Decomposition
Time series decomposition automatically separates a series into its trend, seasonality, and error components. Let's decompose Apple's revenue. We can see the clear seasonality and trend in this non-stationary data.
from statsmodels.tsa.seasonal import seasonal_decompose

# apple_revenue_history is the quarterly revenue series loaded earlier
# (see the linked GitHub repository)
sd = seasonal_decompose(apple_revenue_history)
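Since the Apple revenue series isn't reproducible here, a hand-rolled version of what `seasonal_decompose` does under the hood can be run on synthetic data instead. This is a simplified sketch of an additive decomposition; the synthetic series, the period of 4, and the moving-average trend estimate are my own choices for illustration:

```python
import numpy as np
import pandas as pd

# Synthetic quarterly series: linear trend + seasonal pattern + noise
rng = np.random.default_rng(0)
n = 40                                    # ten years of quarters
trend = np.linspace(100, 200, n)
seasonal = np.tile([10, -5, -15, 10], n // 4)
y = pd.Series(trend + seasonal + rng.normal(0, 2, n))

# Trend estimate: moving average over one full seasonal cycle (period 4)
trend_est = y.rolling(window=4, center=True).mean()

# Seasonal component: average detrended value for each quarter
detrended = y - trend_est
seasonal_est = detrended.groupby(np.arange(n) % 4).transform("mean")

# Whatever is left over is the error/residual term
resid = y - trend_est - seasonal_est
```

Averaging over a full cycle cancels the seasonal pattern exactly, which is why the rolling window must equal the period; `seasonal_decompose` applies the same idea with a few extra refinements.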
Autocorrelation Function
The autocorrelation and partial-autocorrelation functions measure the statistical significance of the relationship between a series and its lagged values.
A non-stationary time series will show significant autocorrelation with its lagged values, and that significance will decay to zero slowly, as in the first plot.
The second plot, a stationary time series, will quickly drop to zero. The red lines are confidence intervals where values above or below the lines are more significant than two standard deviations.
from statsmodels.graphics.tsaplots import plot_acf

# df2 is a non-stationary series; its first difference is stationary
plot_acf(df2, lags=40, alpha=0.05)
plot_acf(df2.diff().dropna(), lags=40, alpha=0.05)
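The same decay behavior can be checked numerically without plots. Here is a minimal lag-k autocorrelation helper (my own; synthetic data stands in for the article's `df2` series):

```python
import numpy as np

def acf(x, lag):
    """Sample autocorrelation of x at the given positive lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

rng = np.random.default_rng(1)
noise = rng.normal(size=2000)  # stationary white noise
walk = noise.cumsum()          # non-stationary: integrates the noise

# Non-stationary series: autocorrelation starts near 1 and decays slowly.
print([round(acf(walk, k), 2) for k in (1, 10, 40)])
# Its first difference (the noise itself) drops to roughly 0 immediately.
print([round(acf(noise, k), 2) for k in (1, 10, 40)])
```

This numeric check mirrors what the plots show: slow decay for the integrated series, an immediate drop for its difference.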
Sometimes it’s evident after visual inspection that we have stationary or non-stationary data. But even in these circumstances, we want to test our hypothesis using statistical tests to understand
our series better.
Global vs. Local Analysis
We can break our time series into multiple segments and analyze the summary statistics of each against the time series or another partition to see if our time series data is changing through time.
def get_quads(df):
    # Split the series into four equal segments and compare summary statistics
    quadlen = int(len(df) * 0.25)
    ss = df[:quadlen].describe()
    ss[1] = df[quadlen:quadlen*2].describe()
    ss[2] = df[quadlen*2:quadlen*3].describe()
    ss[3] = df[quadlen*3:].describe()
    return ss
Random Walk
Notice how the random walk time series has a constant mean and variance, and how the distribution across the min, percentiles, and max is relatively consistent (no seasonality).
random_walk = pd.DataFrame(index=dti,
                           data=np.random.normal(size=periods) * vol)
                           # normal (not uniform) noise: the stats below include negative values
count 500.000000 500.000000 500.000000 500.000000
mean 0.000136 -0.000054 0.000070 -0.000216
std 0.002067 0.001909 0.002008 0.002020
min -0.005851 -0.005614 -0.006003 -0.006325
25% -0.001263 -0.001344 -0.001262 -0.001492
50% 0.000095 -0.000094 0.000045 -0.000229
75% 0.001503 0.001225 0.001331 0.001073
max 0.007002 0.006702 0.006338 0.005969
Trend Stationary
You can see the mean is increasing in the trending time series.
trending = pd.DataFrame(index=dti,
                        data=np.random.random(size=periods) * vol).cumsum()
count 500.000000 500.000000 500.000000 500.000000
mean 0.253949 0.754164 1.245338 1.752112
std 0.148433 0.138083 0.153263 0.144376
min 0.001630 0.509963 0.989912 1.509835
25% 0.122746 0.637347 1.111457 1.629752
50% 0.261204 0.749095 1.243160 1.748031
75% 0.376252 0.878295 1.376870 1.879781
max 0.508350 0.989398 1.509197 2.000878
Volatile Time Series
The standard deviation is changing in the volatile time series.
varying = pd.DataFrame(index=dti,
                       data=np.random.normal(size=periods) * vol
                            * np.logspace(1, 5, num=periods, dtype=int))
count 500.000000 500.000000 500.000000 500.000000
mean 0.000510 0.082075 -0.206946 -1.990338
std 0.094116 0.935861 10.263379 99.674838
min -0.664262 -3.887168 -46.277549 -472.434713
25% -0.039151 -0.280147 -4.025722 -45.672561
50% -0.001081 0.067228 -0.068825 -4.059768
75% 0.037203 0.476481 3.295115 35.828933
max 0.424776 5.000719 61.889090 455.791059
Seasonal Time Series
You can identify seasonality by analyzing the distribution through the min, max, and the percentiles in between. Notice how the 50th percentile is changing.
def simulate_seasonal_term(periodicity, total_cycles, noise_std=1.,
                           harmonics=None):
    duration = periodicity * total_cycles
    assert duration == int(duration)
    duration = int(duration)
    harmonics = harmonics if harmonics else int(np.floor(periodicity / 2))

    lambda_p = 2 * np.pi / float(periodicity)

    gamma_jt = noise_std * np.random.randn(harmonics)
    gamma_star_jt = noise_std * np.random.randn(harmonics)

    total_timesteps = 100 * duration  # Pad for burn in
    series = np.zeros(total_timesteps)
    for t in range(total_timesteps):
        gamma_jtp1 = np.zeros_like(gamma_jt)
        gamma_star_jtp1 = np.zeros_like(gamma_star_jt)
        for j in range(1, harmonics + 1):
            cos_j = np.cos(lambda_p * j)
            sin_j = np.sin(lambda_p * j)
            gamma_jtp1[j - 1] = (gamma_jt[j - 1] * cos_j
                                 + gamma_star_jt[j - 1] * sin_j
                                 + noise_std * np.random.randn())
            gamma_star_jtp1[j - 1] = (- gamma_jt[j - 1] * sin_j
                                      + gamma_star_jt[j - 1] * cos_j
                                      + noise_std * np.random.randn())
        series[t] = np.sum(gamma_jtp1)
        gamma_jt = gamma_jtp1
        gamma_star_jt = gamma_star_jtp1
    wanted_series = series[-duration:]  # Discard burn in
    return wanted_series

duration = 100 * 3
periodicities = [10, 100]
num_harmonics = [3, 2]
std = np.array([2, 3])

terms = []
for ix, _ in enumerate(periodicities):
    s = simulate_seasonal_term(
        periodicities[ix],
        duration / periodicities[ix],
        harmonics=num_harmonics[ix],
        noise_std=std[ix])
    terms.append(s)
terms.append(np.ones_like(terms[0]) * 10.)
seasonal = pd.DataFrame(index=dti[:duration], data=np.sum(terms, axis=0))
count 75.000000 75.000000 75.000000 75.000000
mean 328.055459 -280.450561 -149.840302 120.579444
std 733.650363 750.299351 958.572453 1007.596395
min -863.464617 -1694.424423 -1815.786373 -1733.291362
25% -239.600008 -770.199418 -828.906000 -694.550888
50% 72.425299 -346.127307 -405.201792 59.358360
75% 923.255524 59.875525 485.926284 945.035545
max 1813.015082 1642.090923 1916.794045 1832.084148
Parametric Tests
Statistical tests identify specific types of stationarity. Most parametric tests check whether the time series has a unit root, meaning the characteristic equation of its autoregressive representation has a root equal to one. In that case, each value is a lagged version of itself plus a random shock, so shocks accumulate rather than die out. That math is why these are called unit root tests.
Testing if the time series has a unit root and is therefore not stationary is the null hypothesis, which is a fancy way of saying the commonly accepted belief. We want to reject the null hypothesis
with a level of certainty to state the time series is stationary — or more accurately, the process generating the time series is stationary.
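The "math of the process" is easiest to see in an AR(1) model, x_t = phi * x_{t-1} + eps_t: phi = 1 is the unit-root case, where shocks accumulate instead of decaying. A quick simulation (my own illustration, not from the article):

```python
import numpy as np

def simulate_ar1(phi, n=2000, seed=0):
    """Simulate x_t = phi * x_(t-1) + eps_t."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

stationary = simulate_ar1(0.5)  # shocks decay: the series hugs its mean
unit_root = simulate_ar1(1.0)   # shocks accumulate: the series wanders

print("phi=0.5 spread:", round(stationary.std(), 1))
print("phi=1.0 spread:", round(unit_root.std(), 1))
```

The phi = 0.5 series stays in a narrow band around zero, while the phi = 1 series drifts arbitrarily far; that unbounded wandering is exactly what the unit root tests below are designed to detect.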
Let’s look at the most common unit root tests.
Dickey-Fuller
The Dickey-Fuller test was the first statistical test for the presence of a unit root in an autoregressive model of a time series.
It runs into issues with serial correlation, which is why there’s an Augmented Dickey-Fuller test.
Augmented Dickey-Fuller (ADF)
The Augmented Dickey-Fuller tests for a unit root in a univariate process in the presence of serial correlation. The ADF test handles more complex models and is the typical go-to for most analysts.
Non-Stationary Process
Let’s use the ADF test on Apple’s revenue, which is non-stationary.
from statsmodels.tsa.stattools import adfuller

# df6 holds the observed (non-stationary) Apple revenue series from the decomposition
t_stat, p_value, _, _, critical_values, _ = adfuller(df6['observed'].values, autolag='AIC')
print(f'ADF Statistic: {t_stat:.2f}')
for key, value in critical_values.items():
    print('Critical Values:')
    print(f' {key}, {value:.2f}')
ADF Statistic: -2.12
Critical Values:
 1%, -3.69
Critical Values:
 5%, -2.97
Critical Values:
 10%, -2.63
You can compare the ADF test statistic of -2.12 against the critical values. The test statistic is greater (less negative) than all of the critical values, so we cannot reject the null hypothesis. In other words, as we already knew, Apple's revenue is non-stationary; we see trends, and its mean and variance are changing.
Stationary Process
Now let’s do the same for the random walk data, which exhibits stationarity.
from statsmodels.tsa.stattools import adfuller

t_stat, p_value, _, _, critical_values, _ = adfuller(random_walk[0].values, autolag='AIC')
print(f'ADF Statistic: {t_stat:.2f}')
for key, value in critical_values.items():
    print('Critical Values:')
    print(f' {key}, {value:.2f}')
print(f'\np-value: {p_value:.2f}')
print("Non-Stationary") if p_value > 0.05 else print("Stationary")
ADF Statistic: -43.83
Critical Values:
 1%, -3.43
Critical Values:
 5%, -2.86
Critical Values:
 10%, -2.57
p-value: 0.00
Notice that this time I’ve included the p-value. When a p-value is greater than 5%, we fail to reject the null hypothesis. We also see an extreme ADF test statistic from this random walk model.
I like the critical value approach compared to the p-value as I can see to what degree I can reject that the time series is not stationary.
Kwiatkowski-Phillips-Schmidt-Shin (KPSS)
The Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test takes as its null hypothesis that the time series process is level-stationary or trend-stationary. In other words, the KPSS null hypothesis is the opposite of the ADF test's.
Exhibits Non-Stationarity
from statsmodels.tsa.stattools import kpss

t_stat, p_value, _, critical_values = kpss(df6['observed'].values, nlags='auto')
print(f'KPSS Statistic: {t_stat:.2f}')
for key, value in critical_values.items():
    print('Critical Values:')
    print(f' {key}, {value:.2f}')
print(f'\np-value: {p_value:.2f}')
print("Stationary") if p_value > 0.05 else print("Non-Stationary")
KPSS Statistic: 0.91
Critical Values:
 10%, 0.35
Critical Values:
 5%, 0.46
Critical Values:
 2.5%, 0.57
Critical Values:
 1%, 0.74
p-value: 0.01
Exhibits Stationarity
from statsmodels.tsa.stattools import kpss

t_stat, p_value, _, critical_values = kpss(random_walk[0].values, nlags='auto')
print(f'KPSS Statistic: {t_stat:.2f}')
for key, value in critical_values.items():
    print('Critical Values:')
    print(f' {key}, {value:.2f}')
print(f'\np-value: {p_value:.2f}')
print("Stationary") if p_value > 0.05 else print("Non-Stationary")
Zivot and Andrews
The Zivot-Andrews test checks for a unit root, like the ADF, but it also allows for a single structural break in the series.
Let’s add a break to our random walk model. Notice how the variance and cyclicality properties are relatively constant, but there’s a significant shift in the mean.
Now let’s run both an ADF test and a Zivot-Andrews to test for non-stationarity.
t_stat, p_value, _, _, critical_values, _ = adfuller(stationary_with_break[0].values, autolag='AIC')
print(f'ADF Statistic: {t_stat:.2f}')
for key, value in critical_values.items():
    print('Critical Values:')
    print(f' {key}, {value:.2f}')
print(f'\np-value: {p_value:.2f}')
print("Non-Stationary") if p_value > 0.05 else print("Stationary")
ADF Statistic: -1.05
Critical Values:
 1%, -3.43
Critical Values:
 5%, -2.86
Critical Values:
 10%, -2.57
p-value: 0.73
from statsmodels.tsa.stattools import zivot_andrews

t_stat, p_value, critical_values, _, _ = zivot_andrews(stationary_with_break[0].values)
print(f'Zivot-Andrews Statistic: {t_stat:.2f}')
for key, value in critical_values.items():
    print('Critical Values:')
    print(f' {key}, {value:.2f}')
print(f'\np-value: {p_value:.2f}')
print("Non-Stationary") if p_value > 0.05 else print("Stationary")
Zivot-Andrews Statistic: -60.35
Critical Values:
 1%, -5.28
Critical Values:
 5%, -4.81
Critical Values:
 10%, -4.57
p-value: 0.00
We see different results between the two stationarity tests. The ADF test fails to account for the structural break, while the Zivot-Andrews test correctly identifies the series as stationary.
The Bottom Line
The process for testing for stationarity in time series data is relatively straightforward. You analyze the information visually, perform a decomposition, review the summary statistics, and then
select a parametric test to gain confidence in your assumptions.
The Augmented Dickey-Fuller or ADF test is the most commonly used test for stationarity; however, it’s not always the best. To analyze stationarity in time series with structural breaks, you’ll want
to use the Zivot-Andrews test. As always, it’s essential to have an intuitive understanding of the problem set and to have a deep knowledge of the tools at your disposal.
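The workflow in this closing section, running a battery of tests and reconciling their sometimes conflicting verdicts, can be wrapped in a small helper. This sketch (my own, not from the article) takes the p-values produced by the three tests above and applies their respective null hypotheses; the 5% threshold is the conventional choice:

```python
def stationarity_verdict(adf_p, kpss_p, za_p, alpha=0.05):
    """Combine ADF, KPSS, and Zivot-Andrews p-values into one verdict.

    ADF and Zivot-Andrews: null = unit root (non-stationary),
    so a small p-value argues FOR stationarity.
    KPSS: null = stationary, so a small p-value argues AGAINST it.
    """
    votes = {
        "ADF": adf_p < alpha,           # reject unit root -> stationary
        "KPSS": kpss_p >= alpha,        # fail to reject stationarity
        "Zivot-Andrews": za_p < alpha,  # reject unit root (allowing a break)
    }
    if all(votes.values()):
        return "stationary", votes
    if not any(votes.values()):
        return "non-stationary", votes
    return "inconclusive: inspect for breaks/trends", votes

# The structural-break example above: ADF misses it, Zivot-Andrews does not.
print(stationarity_verdict(adf_p=0.73, kpss_p=0.01, za_p=0.00))
```

When the votes disagree, as in the structural-break example, that disagreement is itself informative: it points you back to visual inspection to look for the break or trend that is confusing the weaker test.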
Mooker's No Lenz Magnet Generator
I crunched the numbers in Jim Murray's first Dynaflux presentation. Keep in mind that he uses a solid metal rotor moving across a magnetic field, creating extreme eddy currents, and the induced current creates even more eddy currents. Not to mention using input power to create the EMF, all the bearing and resistance losses, plus flux losses in the system. So let's put all that aside and just concentrate on whether the input increase from unloaded to loaded broke the homeostasis of normal generators or transformers.
Unloaded, he was using 435 watts just to keep the system going with no load. When loaded, it jumped about 35 watts to about 470 watts. So we have a 35-watt increase of input, but his output was 40 watts, which calculates to about a 12.5% reduction of back torque.
Normal generators have 100% back torque. His has 87.5%. It is my guess that the back torque was almost all from the eddy currents in the system. Even so, he still managed to decrease back torque contrary to known machines.
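For anyone checking the arithmetic, here are the same figures spelled out in a few lines of Python (my own restatement of the numbers quoted above):

```python
input_unloaded = 435.0   # watts with no load
input_loaded = 470.0     # watts under load
output = 40.0            # watts delivered to the load

input_increase = input_loaded - input_unloaded   # extra input needed: 35 W
# A conventional (100% back torque) generator would need the full 40 W extra
back_torque_fraction = input_increase / output
reduction = 1.0 - back_torque_fraction

print(f"Input increase: {input_increase:.0f} W")
print(f"Back torque: {back_torque_fraction:.1%} of a conventional generator")
print(f"Reduction: {reduction:.1%}")
```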
With the spiral magnet rotor, we can avoid most of these eddy currents altogether. The only eddies we face are from the output coil cores, and we can cut these further by using laminations. Or we could even avoid all eddy currents by using air-core coils.
So all systems are a go.. My rotor is finally on the last leg of printing.. Hope it comes out nice!
09-21-2023, 06:20 PM
Main magnets are in place! Almost broke my monitor when a magnet shot out like a pistol. The fields are super compressed like this. The magnetic bubble around this thing is INTENSE...
I'll Give it some time then do the motor magnets that go on that bottom circle. Don't want to get my ceramic bearings anywhere near uncured JB Super Weld.
Beefy axle stands are printing now. Everything is heavy infill- so probably won't have much more movement on it till tomorrow.
09-22-2023, 09:00 AM
Now it's not as easy as spinning a spiral and slapping any old coil under it. A coil's flux field has different intensity levels in different places. But there is a balance point where N is equal to S on opposite sides. Your spiral should also have balanced symmetry. The coil's symmetry needs to match the rotor's symmetry with respect to the coil's orientation, length, poles, and symmetrical magnetic forces.
09-23-2023, 05:32 AM
Spins great, for the coil I would wind it on a tube slightly bigger diameter than the rotor.
Remove the leg at the small end to install.
You could also skew the windings on the tube so they're slanted, might help might not?
Looking forward to your progress.
Thanks shylo
09-23-2023, 09:01 AM
Hey Shylo,
The spiral can not go inside a tube coil. The magnetic field curves to the ratio of Pi - so it pulls away after passing, which creates the movement at a 90 degree angle of motion. The inside of a coil is also at Pi. So when we get no pull-away - we get no motion.
I've done my tests and unfortunately we have rotor drag. The coming-in side helps us on the first half and the going-out side hurts us. The resistance is much greater than the assistance because the coil's magnetic field is strongest at the poles, and the going-out side has to leave closer to the poles than the coming-in side.
My next experiment will be rotating the spiral rotor and creating a rotating coil that are both in motion.
09-23-2023, 11:05 AM
The theory is this:
You're in a boat traveling in the same direction as the wind. And you are travelling at exactly the same speed as the wind. You will feel no wind. But both the wind and the boat are moving.
Regarding the Rotor.. We are getting the perpendicular motion, but the coil is stationary so we are still feeling the cross-wind. Causing drag. I suspect less drag than normal, but still a good
degree of drag.
Now when the spiral coil is in motion identical to the rotor, I think we can cancel out the crosswind.
09-23-2023, 08:36 PM
Coil is OK for now..
printing the gears now. Need to print the right bearing mounts for the coil next. Then need to do sliprings and brushes.
09-24-2023, 10:52 PM
The plan changed!
I have all the parts for the rotating spiral build- but I thought of something I think will be even better. And much easier.. I need to get the trailing magnets away from the poles of the coil.. So
I think I can do it another way.
The coil will wrap around the outside, then inside over the speed-bump. Now the Rotor goes in the coil..
Now the coil's poles will be 90 degrees from the direction of rotation.. The coil's poles will be lined up with the blotch walls of the magnets.
This coil bobbin is a 32 hour print!... So no updates on this for a few days
09-25-2023, 07:22 AM
It seems some do not understand the operation concept.
F = qv × B (The force F is always perpendicular to the direction of the magnetic field B)
In a visual aspect, lets say each magnet was a paint brush with red paint on it. While rotating 1 revolution, we will see the speedbump get painted red from top down in a smooth orderly fashion.
Equivalent to swiping 1 paintbrush vertically but we accomplished it with horizontal motion.
The entire circle of wire sees no induction or moving magnetic field because the magnets are travelling in line with the direction of the wire at equal distance away. Except when it passes the
speedbump. The speedbump receives a varying intensity of magnetic field as each magnet passes the speedbump. And although the rotor is spinning horizontal, the direction of travel is vertical (only
over the speedbump..)
Since the physical movement is 90 degrees from the perceived moving magnetic field, the coil projects its magnetic force vertically while the motion that caused it is horizontal.
There is no problem with Lenz in this geometry with regards to the magnet that is inducting. The problem comes from the magnets AFTER they have done their job and are past the speedbump. If the coil
is outside the rotor- the magnets that aren't inducing get attracted or repelled from the coil's pole. But it is now not linked to induction anymore, so it should be able to be corrected.
So I am designing the coil so that it's poles are in line with the blotch walls of the magnets and putting the coil's poles 90 degrees offset from the magnets. So the rotor spins inside the coil.
09-27-2023, 03:56 PM
I looked at all geometries for the way to make this thing mimic a full magnet flipping 2 polarities. One polarity going up, the other down. On both sides and opposite from each other.
I want to use ONE coil with a Metal core. And I want the coil's exact center to be aligned exactly between all magnets at all times. so the coil can not be on the outside. Coil must be on the
inside. I want each pole of the rotor to have Both Polarity magnets balanced on each end of the coil's pole.
Neither of the 2 original layouts will work to accomplish this. Both proposed designs so far are 1/2 way there... The answer is combining designs..
Funny enough- it makes the Free Mason logo and resembles the as above so below symbol. It is 2 sinewaves 180 degrees out of phase.
If we put 1 oval shaped inductor coil inside- I think this thing will be perfectly balanced (magnetically) and should induct nicely.
So instead of wasting lots of wire winding my big imbalanced coil- I am thinking this way is superior.
Ask Uncle Colin: Traffic Flow
Dear Uncle Colin,
I read that when cars are driving at 70mph on the motorway, they take up more space than when they travel more slowly (because you need to leave a longer safe gap between them). What’s the most
efficient speed for motorway travel if you want to get as many cars past a given point as possible?
That’ll Really Annoy Fast Folk In Cars
Hi, TRAFFIC, and thanks for your message! What a lovely problem.
Stopping distances
Let’s suppose – seeing as we’re mathematicians, not transport planners – that cars ought to keep the minimum stopping distance as recommended in the Highway Code between them.
That’s actually a nice quadratic sequences problem in itself, especially if you use miles per hour and feet ((I’d normally not touch either with a three-metre pole)).
Speed (mph): 20, 30, 40, 50, 60, 70
Stopping distance (ft): 40, 75, 120, 175, 240, 315
If you fit a model ((I’m using subscripts to denote the imperial units, which we’ll change in a minute.)) of the form $D_f = Av_{mph}^2 + Bv_{mph} + C$ to that, you find $D_f = \frac{1}{20}v_{mph}^2
+ v_{mph}$. Splendid. But we really don’t like those units: $D_f$ is in feet and $v_{mph}$ is in miles per hour and just ugh ((One reason for the ugh is that if we try to differentiate this, it goes
wrong: we’ve got two different distance units and they don’t play nicely together.)).
We measure distance in metres, and there are about $\frac{10}{3}$ feet in one of those, so $D_f = \frac{10}{3}D$.
We measure speed in metres per second, and (it turns out) 9 miles per hour is about the same as 4 m/s. So, $v_{mph} = \frac{9}{4} v$.
Rewriting the formula: $\frac{10}{3}D = \frac{1}{20} \times \frac{81}{16}v^2 + \frac{9}{4}v$
Or, simplifying, $D = \frac{243v^2 + 2160v}{3200}$. Not as pretty, but at least the units behave.
In terms of how much space a car takes up, we also need to add on its length – and let’s say the average car is 4m long.
So, a car travelling at $v$ m/s takes up $D_+ = \frac{243v^2 + 2160v}{3200} + 4$ metres - and I’ll pretend the car and its associated stopping distance are simply a box that long.
How long does it take to pass?
Suppose I’m standing by the motorway ((I would not do such a thing unless my car had broken down, obviously)) and timing how long the box of the car takes to pass me. The front of the box needs to travel $D_+$ metres, it’s travelling at $v$ m/s, so it will take $T=\frac{D_+}{v}$ seconds to pass me.
That gives $T = \frac{243v + 2160}{3200} + \frac{4}{v}$ seconds.
Now for some calculus!
To maximise the number of cars going past a fixed point over any period of time, we need to minimise $T$ as a function of $v$. Let’s differentiate!
$\diff Tv = \frac{243}{3200} - \frac{4}{v^2} = 0$ for an extremum ((And, because of the shape of the curve, it’s a minimum)).
So, $v^2 = \frac{12800}{243}$ and $v \approx 7.26$ m/s, which is about 16.3 mph!
This is very naive analysis, of course, and just the first model that popped into my head – I don’t recommend crawling along the motorway at 16mph any more than I recommend standing by a motorway!
Hope that helps,
- Uncle Colin
* Many thanks to @realityminus3 for helpful comments and improving my work, as always.
Area Formula – Android Apps — AppAgg
Download Basic Areas Formulas for your computer free of charge - Windows
You can use the formula of the area of a rectangle … To use this online calculator for Breadth of rectangle when area and length are given, enter Area (A) and Length (l) and hit the calculate button.
Here is how the Breadth of rectangle when area and length are given calculation can be explained with given input values -> 16.66667 = 50/3. Area and Perimeter - Formulas for Rectangles, Squares &
Circles. You can use area of either a square or circle to find the shape's perimeter (or circumference). With rectangles, if you know only area and one side's length, you can find perimeter. You can
even find the lengths of a rectangle's sides if you know area … Area of Rectangle - The area is usually measured in units like square meters, square feet, or square inches. To find the area of a rectangle, multiply the length by the width. There are different types of polygons, some of them being triangles, quadrilaterals, pentagons, hexagons, etc.
The easy ones are squares and rectangles; circles and triangles could be a bit tricky. Virtual Nerd's patent-pending tutorial system provides in-context information, hints, and links to supporting tutorials, synchronized with videos, each 3 to 7 minutes long. In this non-linear system, users are free to take whatever path through the material best serves their needs.
Math Formula - Exam Papers i App Store - App Store - Apple
Example 1: Given a rectangle where the length is 5cm and the width is 3cm. Find the area of this rectangle. Solution: To calculate the area, we can use the formula for the area of a The area formula
can be applied in situations like this, but the formula will need to be manipulated in order to solve for w instead of A. The formula \(A=l\times w\) becomes \(12\text{ cm}^2=6\text{ cm}\times w\),
and in order to solve for w , we need to divide both sides of the equation by 6.
For this purpose, you can always use a list of basic Rectangle Formulas where you just need to put values into the formula and calculate area, length of sides etc.
Solving for area: rectangle area. To find the total area (A) inside a rectangle, use the formula L x W = A. Using the numbers in the above example, an 8-foot by 4-foot rectangle has an area of 32 square feet. Calculates the area and perimeter of a quadrilateral given four sides and two opposite angles. Finding the most all-purpose formula for use in a spreadsheet. See the
formulas to calculate the area for each shape below. Select a Shape: Rectangle; Square; Border; Trapezoid The formula they used is a simple one.
Se hela listan på wikihow.com The formula for area of a rectangle is: A = L * W where L is the length and W is the width. A square is a rectangle with 4 equal sides The formula for area of a square
is: Se hela listan på mathopenref.com Area of Rectangle: The total space or region enclosed inside a rectangle is its area. In other words, the space occupied within the perimeter of a rectangle is
called the area of the rectangle. There are many applications of the area of rectangle formula. It is used to solve different mathematical problems of mensuration and geometry. Another way to find
the area of a rectangle is to multiply its length by its width. The formula is: \(area = length \times width\) Example: What is the area of this rectangle?
Print the value of the area variable, which shows the area of the rectangle. Here the %0.2f format specifier is used to print the value of the area variable to 2 decimal places only. In the rectangle area formula A = L × W, A is the number of unit squares needed to tile the rectangle; L is the number of length units when you measure one side of the rectangle; W is the number of length units when you measure an adjacent side of the rectangle. The reason the formula works has to do with what the × symbol means. The perimeter of rectangle formula = 2 (length + breadth). Perimeter, P = 2 (11 + 13) = 2 x 24 cm = 48 cm. Therefore, the perimeter of the rectangle is 48 cm. Example 2: The length of the rectangular field is 15m and the width is 6m.
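Worked in code, using the figures from Example 2 (a 15 m by 6 m field). The text alludes to a C snippet using %0.2f; this Python sketch of the same two formulas uses the equivalent :.2f format:

```python
length, width = 15.0, 6.0            # Example 2: rectangular field, metres

area = length * width                # A = L x W
perimeter = 2 * (length + width)     # P = 2(L + W)

print(f"Area: {area:.2f} sq m")        # Area: 90.00 sq m
print(f"Perimeter: {perimeter:.2f} m") # Perimeter: 42.00 m
```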
Since a square is the only rectangle with four congruent sides, you need something more than just a rectangle's area measurement to find perimeter. You need the measurement of any one side. Formula
for Perimeter of Rectangle. Let l be the length and w be the width of a rectangle. Then, the formula for perimeter of the rectangle : Perimeter = 2(l + w) Formula for Area of a Rectangle.
Area of a Rectangle. The area of a rectangle is the number of unit squares that can fit into a rectangle. Some examples of rectangular shapes are the flat surfaces of laptop monitors, blackboards,
painting canvas, etc. You can use the formula of the area of a rectangle … To use this online calculator for Breadth of rectangle when area and length are given, enter Area (A) and Length (l) and hit
the calculate button. Here is how the Breadth of rectangle when area and length are given calculation can be explained with given input values -> 16.66667 = 50/3.
File:CircleArea.svg - Dissection proof of the formula for the area of a circle: arrange the segments as a near-rectangle to make the area easier to see. The area of a circle, approximated by a near-rectangle composed of segments, illustrates the formula A=πr². Square, rectangle and parallelogram.
Homework 1b
Homework 1b
Programming Language #lang htdp/bsl
Due Date Thu at 9:00pm (Week 1)
Purpose To practice writing basic functions & tests.
Submit all of the following in a single .rkt file to Handins. You will get (nearly-)immediate feedback and can resubmit multiple times before the deadline.
Exercise 1 Define a function dist that consumes three numbers, x, y and z, and that computes the Euclidean distance of point (x,y,z) to the origin.
Exercise 2 Define iff. The function consumes two Boolean values, call them sunny and beach. Its answer is #true if sunny and beach are both true or both false, and #false otherwise. This Boolean operation, short for if and only if, is an equivalence, and logicians often use the notation sunny <=> beach for this purpose.
Exercise 3 Different video formats have different shapes: TV, movies, TikTok videos, etc. all have different ratios of their width to their height, called aspect ratios. The most common aspect
ratios are
□ Square: a width-to-height ratio of exactly 1:1
□ Fullscreen: a width-to-height ratio of up to 4:3
□ Widescreen: a width-to-height ratio of up to 16:9
□ Portrait: a width-to-height ratio of anything less than 1:1
Define the function image-classify, which consumes an image and conditionally produces "square", "fullscreen", "widescreen" or "portrait" according to the ratios above, and "too wide" otherwise.
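The homework asks for this in BSL, but the ratio logic itself is language-independent. A Python sketch of the classification, taking a width and height directly rather than an image (the image accessors, and the BSL translation, are left to the actual solution; the function name is mine):

```python
def classify_aspect(width, height):
    """Classify a width-to-height ratio per the categories above."""
    ratio = width / height
    if ratio < 1:
        return "portrait"      # anything narrower than it is tall
    if ratio == 1:
        return "square"        # exactly 1:1
    if ratio <= 4 / 3:
        return "fullscreen"    # up to 4:3
    if ratio <= 16 / 9:
        return "widescreen"    # up to 16:9
    return "too wide"

print(classify_aspect(1920, 1080))  # widescreen
print(classify_aspect(640, 480))    # fullscreen
print(classify_aspect(1080, 1920))  # portrait
```

Note the order of the checks matters: each branch only runs once the earlier, tighter categories have been ruled out, mirroring how a BSL cond would be written.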
Exercise 4 Define the function image-area, which counts the number of pixels in a given image.
Understanding Geostationary Orbits
Question Video: Understanding Geostationary Orbits Physics • First Year of Secondary School
What is the period of the orbit of a geostationary satellite?
Video Transcript
What is the period of the orbit of a geostationary satellite?
Now to answer this question, we first need to understand what a geostationary satellite is. So if we start by imagining that this is the Earth, and let’s say this here is a geostationary satellite,
then the thing that makes this satellite geostationary is that it always stays at the same position above the Earth’s surface. In other words then, as the Earth rotates, the satellite always remains
above the same point on the surface. And that’s what makes the satellite geostationary, “geo” meaning Earth and “stationary” meaning not moving.
Now in reality, the satellite as we’ve already seen is moving. But it’s not moving in relation to Earth surface. It is constantly about the exact same point on Earth’s surface. And in order for it to
do this, it must have an orbital period that’s the same as the period of Earth’s rotation. In other words, as the Earth completes one rotation, so does the geostationary satellite. And this is the
only way for this satellite to remain geostationary, in other words, to remain above the same point on the surface of the Earth.
So now, we can recall that the time taken for Earth to complete one rotation is 24 hours because it takes a day for the Earth to go around once. And it’s worth noting by the way that this is a
rotation about its axis. We’re not talking about the time taken by the Earth to go around the sun once. That period is one year. Like we said though in this particular case, we only care about the
time taken for the Earth to rotate about its own axis once. And that period like we said earlier is one day or 24 hours. And therefore, the geostationary satellite must have the same period of orbit.
And, hence, we have our answer. The period of the orbit of a geostationary satellite is 24 hours.
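The period fixes the size of the orbit too. From Kepler's third law, r³ = GMT²/(4π²); plugging in a 24-hour period gives the familiar geostationary radius of about 42,000 km, roughly 36,000 km above the surface. A quick check (the transcript doesn't do this calculation; it's added here for illustration):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # mass of the Earth, kg
T = 24 * 3600.0    # orbital period: one day, in seconds

# Kepler's third law rearranged for the orbital radius.
r = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude = r - 6.371e6   # subtract Earth's mean radius

print(f"radius ≈ {r/1e3:,.0f} km, altitude ≈ {altitude/1e3:,.0f} km")
```

(Strictly, a geostationary satellite matches the sidereal day of about 23 h 56 min rather than 24 h, which shifts the radius by well under 1%.)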
TI-89 exponents
ti-89 exponents. Related topics: what is the solution to the equation x squared - 2x - 35 = 0
math formulas radical
"real life examples of ellipse"
math lessons graphing intervals grade five
simultaneous equation graphs
glencoe algebra 1 answers
finding the discriminant using the calculator
division of algebraic expression worksheets
square roots fractions
computer network midterm exam
algebra sample question grade 8
Author Message
graumdfijhtir Posted: Thursday 09th of Oct 17:15
I am taking an online TI-89 exponents course. For me it's a bit difficult to study this course all by myself. Is there someone studying online? I really need some guidance.
From: Las Vegas, NV
Jahm Xjardx Posted: Saturday 11th of Oct 09:23
It's true, there are programs that can help you with homework. I think there are several ones that help you solve algebra problems, but I read that Algebrator is the best amongst them. I used the software when I was a student in Algebra 1 for helping me with ti-89 exponents, and it never failed me since then. In time I understood all the topics, and soon I was able to solve the most challenging of the exercises on my own. Don't worry; you won't have any problem using it. It was designed for students, so it's very easy to use. Actually you just have to type in the topic and that's it. Of course you should use it to learn math, not just copy the results, because you won't learn that way.
From: Odense, Denmark, EU
Vnode Posted: Sunday 12th of Oct 09:51
I had always struggled with math during my high school days and absolutely hated the subject until I came across Algebrator. This product is so fantastic, it helped me improve my grades considerably. It didn't just help me with my homework, it taught me how to solve the problems. You have nothing to lose and everything to gain by buying this brilliant software.
From: Germany
finks Posted: Sunday 12th of Oct 12:17
Thank you for helping. How do I get this software?
From: Tampa,
SjberAliem Posted: Monday 13th of Oct 18:38
Visit https://softmath.com/news.html and hopefully all your issues will be resolved.
Macintosh HD
The Debate About math websites
By mid-century, technological marvels, such as radar, nuclear energy, and the jet engine, made progressive education untenable. Exceptional undergraduates may participate in the annual William Lowell Putnam Mathematical Competition. At many colleges and universities, students may also compete in the Integration Bee. High school students of exceptional ability may be selected to join a competition such as the USA Mathematical Olympiad. Over the years, perception of Math 55 has come to be based less on the reality of the course itself and more on a cumulative collection of lore and somewhat sensationalist rumors. It's tempting to get swept up in the thrill of rumour, but while there may be kernels of truth to some of the stories, many of them are outdated and taken out of context.
Those who study biomedical and social sciences need to study elementary probability and statistics. Students in computer science and economics may have the option of taking algorithmic game theory. Advanced undergraduates and beginning graduate students in physics may take a course on advanced mathematical methods for physics. Exact requirements and available courses will depend on the institution in question. With edX, you can learn at your own pace in math classes at every level, from high school pre-algebra to college algebra and beyond.
Learn the skills that will set you up for success in equations and inequalities; working with sets; linear relationships; functions and sequences; exponents, radicals, and irrational numbers; and quadratics. Learn the skills that will set you up for success in ratios, rates, and percentages; arithmetic operations; negative numbers; equations, expressions, and inequalities; and geometry. Learn seventh grade math aligned to the Eureka Math/EngageNY curriculum—proportions, algebra basics, arithmetic with negative numbers, probability, circles, and more. Results from the National Assessment of Educational Progress test show that scores in mathematics were leveling off in the 2010s, but with a growing gap between the top and bottom students.
• At the end of the day, however, Math 55 is a class like any other.
• The Egyptians are known for being ahead of their time compared to some civilisations that came after them.
• They are ordered in grade level from the one we are currently using.
• Computer science majors must take courses on discrete mathematics.
• The study of mathematics and logic as a discipline adds up to much more than what you learned in high school algebra.
However, it was his interest in a completely different field that radically altered the course of mathematics. After 40 years of dabbling in maths, he published his table of logarithms in the early 17th century. This free course includes an introduction to metric spaces and continuity. The key idea is to use three special properties of the Euclidean distance as the basis for defining what is meant by a general distance function, a metric. Section 1 introduces the idea of a metric space and shows how this concept allows us to generalise the notion of …
The Fight Over tynker coding And Just How To Get It
Accordingly, the organizations providing post-secondary education updated their enrollment requirements. For example, the University of California requires three years of "college-preparatory mathematics that include the topics covered in elementary and advanced algebra and two- and three-dimensional geometry" to be admitted. After the California Department of Education adopted Common Core, the University of California clarified that "approved integrated math courses may be used to fulfill part or all" of this admission requirement. Learn the skills that will set you up for success in negative number operations; fractions, decimals, and percentages; rates and proportional relationships; expressions, equations, and inequalities; geometry; and statistics and probability. Geometry, usually taken in ninth or tenth grade, introduces students to the notion of rigor in mathematics through some basic concepts in primarily Euclidean geometry. Depending on the curriculum and teacher, students may receive orientation toward calculus, for example with the introduction of the method of exhaustion and Cavalieri's principle.
That Which You Can Do About tynker.com Starting Within The Next 10 Minutes
Engineers, scientists, and medical researchers make calculations that drive new discoveries every day, ranging from lifesaving medicines to sustainable building materials. Learn arithmetic—addition & subtraction, multiplication & division, fractions, decimals, and more. Learn Algebra 2 aligned to the Eureka Math/EngageNY curriculum—polynomials, rational functions, trigonometry, and more. Learn pre-algebra—all of the basic arithmetic and geometry skills needed for algebra. EdPlace is the best UK-based curriculum for maths, English, and science from year 1 right through to GCSE level.
Virtually all jobs in computer science rely heavily on these skills, since programming is essentially about the creation of systems of logic and the application of algorithms. So whether you want to go into software development, data science, or artificial intelligence, you'll need a strong background in logic and discrete math as well as statistics. Whether you're just 55-curious, or a past or present student in the class, this is something everyone agrees on. The course condenses four years of math into two semesters, after all. "For the first semester, you work on linear and abstract algebra with a little bit of representation theory," said sophomore math concentrator Dora Woodruff.
Majorities agree that mathematics is important, but there have been many divergent opinions on what kind of mathematics should be taught and whether relevance to the "real world" or rigor should be emphasised. The decision to include the topics of race and sexuality in the mathematical curriculum has also met with stiff resistance. But not that many took Precalculus (34%), Trigonometry (16%), Calculus (19%), or Statistics (11%), and only an absolute minority took Integrated Mathematics (7%). Overall, female students were more likely to complete all mathematics courses, except Statistics and Calculus. Asian Americans were the most likely to take Precalculus (55%), Statistics (22%), and Calculus (47%), while African Americans were the least likely to complete Calculus (8%) but the most likely to take Integrated Mathematics (10%) in high school. Students of lower socioeconomic status were less likely to pass Precalculus, Calculus, and
AP Calculus AB and AP Calculus BC are high school advanced placement courses meant to prepare students for the respective College Board AP Exams. While AP Calculus BC is meant to represent the material covered in the two-semester university calculus sequence Calculus I and Calculus II, AP Calculus AB is a less comprehensive treatment, covering about 70% of the material. Learn advanced trigonometry and core concepts in probability and statistics. Encounter objects from higher math including complex numbers, vectors, and matrices.
It performed better than other progressive nations in mathematics, ranking 36 out of 65 countries. The PISA assessment examined the students' understanding of mathematics as well as their approach to the subject and their responses. Another group concentrated more on concepts that they had not yet studied. The U.S. had a high proportion of memorizers compared to other developed countries. During the most recent testing, the United States failed to make it into the top 10 in all categories including mathematics. From kindergarten through high school, mathematics education in public schools in the United States has historically varied widely from state to state, and often even varies significantly within individual states.
Prime Suspects in a Quantum Ladder
Mussardo G.
Trombettoni A.
, Zhang Z.
In this Letter we set up a suggestive number theory interpretation of a quantum ladder system made of N coupled chains of spin 1/2. Using the hard-core boson representation and a leg-Hamiltonian made
of a magnetic field and a hopping term, we can associate to the spins σa the prime numbers pa so that the chains become quantum registers for square-free integers. The rung Hamiltonian involves
permutation terms between next-neighbor chains and a coprime repulsive interaction. The system has various phases; in particular, there is one whose ground state is a coherent superposition of the
first N prime numbers. We also discuss the realization of such a model in terms of an open quantum system with a dissipative Lindblad dynamics.
Dynamical phase diagram of ultracold Josephson junctions
Xhani K., Galantucci L., Barenghi C.F., Roati G.,
Trombettoni A.
, Proukakis N.P.
We provide a complete study of the phase diagram characterising the distinct dynamical regimes emerging in a three-dimensional Josephson junction in an ultracold quantum gas. Considering trapped
ultracold superfluids separated into two reservoirs by a barrier of variable height and width, we analyse the population imbalance dynamics following a variable initial population mismatch. We
demonstrate that as the chemical potential difference is increased, the system transitions from Josephson plasma oscillations to either a dissipative (in the limit of low and narrow barriers) or a
self-trapped regime (for large and wider barriers), with a crossover between the dissipative and the self-trapping regimes which we explore and characterize for the first time. This work, which
extends beyond the validity of the standard two-mode model, connects the role of the barrier width, vortex rings and associated acoustic emission with different regimes of the superfluid dynamics
across the junction, establishing a framework for its experimental observation, which is found to be within current experimental reach.
Quantum thermoelectric and heat transport in the overscreened Kondo regime: Exact conformal field theory results
Karki D.B.,
Kiselev M.N.
We develop a conformal field theory approach for investigation of the quantum charge, heat, and thermoelectric transport through a quantum impurity fine-tuned to a non-Fermi liquid regime. The
non-Fermi-liquid operational mode is associated with the overscreened spin Kondo effect and controlled by the number of orbital channels. The universal low-temperature scaling and critical exponents
for Seebeck and Peltier coefficients are investigated for the multichannel geometry. We discuss the universality of Lorenz ratio and power factor beyond the Fermi-liquid paradigm. Different methods
of verifying our findings based on the recent experiments are proposed.
Polynomial scaling of the quantum approximate optimization algorithm for ground-state preparation of the fully connected p -spin ferromagnet in a transverse field
Wauters M.M., Mbeng G.B.,
Santoro G.E.
We show that the quantum approximate optimization algorithm (QAOA) can construct, with polynomially scaling resources, the ground state of the fully connected p-spin Ising ferromagnet, a problem that
notoriously poses severe difficulties to a vanilla quantum annealing (QA) approach due to the exponentially small gaps encountered at first-order phase transition for p≥3. For a target ground state
at arbitrary transverse field, we find that an appropriate QAOA parameter initialization is necessary to achieve good performance of the algorithm when the number of variational parameters 2P is much
smaller than the system size N because of the large number of suboptimal local minima. Instead, when P exceeds a critical value P*N ∝ N, the structure of the parameter space simplifies, as all minima
become degenerate. This allows achieving the ground state with perfect fidelity with a number of parameters scaling extensively with N and with resources scaling polynomially with N.
Back-reaction in canonical analogue black holes
Liberati S., Tricella G.,
Trombettoni A.
We study the back-reaction associated with Hawking evaporation of an acoustic canonical analogue black hole in a Bose–Einstein condensate. We show that the emission of Hawking radiation induces a
local back-reaction on the condensate, perturbing it in the near-horizon region, and a global back-reaction in the density distribution of the atoms. We discuss how these results produce useful
insights into the process of black hole evaporation and its compatibility with a unitary evolution.
Entanglement spreading in non-equilibrium integrable systems
Calabrese P.
These are lecture notes for a short course given at the Les Houches Summer School on “Integrability in Atomic and Condensed Matter Physics”, in summer 2018. Here, I pedagogically discuss recent
advances in the study of the entanglement spreading during the non-equilibrium dynamics of isolated integrable quantum systems. I first introduce the idea that the stationary thermodynamic entropy is
the entanglement accumulated during the non-equilibrium dynamics and then join such an idea with the quasiparticle picture for the entanglement spreading to provide quantitative predictions for the
time evolution of the entanglement entropy in arbitrary integrable models, regardless of the interaction strength.
Time crystals in the driven transverse field Ising model under quasiperiodic modulation
Liang P.,
Fazio R.
, Chesi S.
We investigate the transverse field Ising model subject to a two-step periodic driving protocol and quasiperiodic modulation of the Ising couplings. Analytical results on the phase boundaries
associated with Majorana edge modes and numerical results on the localization of single-particle excitations are presented. The implication of a region with fully localized domain-wall-like
excitations in the parameter space is eigenstate order and exact spectral pairing of Floquet eigenstates, based on which we conclude the existence of time crystals. We also examine various
correlation functions of the time crystal phase numerically, in support of its existence.
Dirac electrons in the square-lattice Hubbard model with a d -wave pairing field: The chiral Heisenberg universality class revisited
Otsuka Y., Seki K.,
Sorella S.
, Yunoki S.
We numerically investigate the quantum criticality of the chiral Heisenberg universality class with the total number of fermion components N=8 in terms of the Gross-Neveu theory. Auxiliary-field
quantum Monte Carlo simulations are performed for the square lattice Hubbard model in the presence of a d-wave pairing field, inducing Dirac cones in the single-particle spectrum. This property makes
the model particularly interesting because it turns out to belong to the same universality class of the Hubbard model on the honeycomb lattice, which is the canonical model for graphene, despite the
unit cells being apparently different (e.g., they contain one and two sites, respectively). We indeed show that the two phase transitions, expected to occur on the square and on the honeycomb
lattices, have the same quantum criticality. We also argue that details of the models, i.e., the way of counting N and the anisotropy of the Dirac cones, do not change the critical exponents. The
present estimates of the exponents for the N=8 chiral Heisenberg universality class are ν=1.05(5), ηφ=0.75(4), and ηψ=0.23(4), which are compared with the previous numerical estimations.
Complexity of mixed Gaussian states from Fisher information geometry
Di Giulio G.,
Tonni E.
We study the circuit complexity for mixed bosonic Gaussian states in harmonic lattices in any number of dimensions. By employing the Fisher information geometry for the covariance matrices, we
consider the optimal circuit connecting two states with vanishing first moments, whose length is identified with the complexity to create a target state from a reference state through the optimal
circuit. Explicit proposals to quantify the spectrum complexity and the basis complexity are discussed. The purification of the mixed states is also analysed. In the special case of harmonic chains
on the circle or on the infinite line, we report numerical results for thermal states and reduced density matrices.
Phase diagram of the two-dimensional Hubbard-Holstein model
Costa N.C., Seki K., Yunoki S.,
Sorella S.
The electron–electron and electron–phonon interactions play an important role in correlated materials, being key features for spin, charge and pair correlations. Thus, here we investigate their
effects in strongly correlated systems by performing unbiased quantum Monte Carlo simulations in the square lattice Hubbard-Holstein model at half-filling. We study the competition and interplay
between antiferromagnetism (AFM) and charge-density wave (CDW), establishing its very rich phase diagram. In the region between AFM and CDW phases, we have found an enhancement of superconducting
pairing correlations, favouring (nonlocal) s-wave pairs. Our study sheds light on past inconsistencies in the literature, in particular the emergence of CDW in the pure Holstein model case.
AEDGE: Atomic Experiment for Dark Matter and Gravity Exploration in Space
El-Neaj Y.A., Alpigiani C., Amairi-Pyka S., Araújo H., Balaž A.,
Bassi A.
, Bathe-Peters L., Battelier B., Belić A., Bentine E., Bernabeu J., Bertoldi A., Bingham R., Blas D., Bolpasi V., Bongs K., Bose S., Bouyer P., Bowcock T., Bowden W., Buchmueller O., Burrage C.,
Calmet X., Canuel B., Caramete L.I., Carroll A., Cella G., Charmandaris V., Chattopadhyay S., Chen X., Chiofalo M.L., Coleman J., Cotter J., Cui Y., Derevianko A., De Roeck A., Djordjevic G.S.,
Dornan P., Doser M., Drougkakis I., Dunningham J., Dutan I., Easo S., Elertas G., Ellis J., El Sawy M., Fassi F., Felea D., Feng C.H., Flack R., Foot C., Fuentes I., Gaaloul N., Gauguet A., Geiger
R., Gibson V., Giudice G., Goldwin J., Grachov O., Graham P.W., Grasso D., van der Grinten M., Gündogan M., Haehnelt M.G., Harte T., Hees A., Hobson R., Hogan J., Holst B., Holynski M., Kasevich M.,
Kavanagh B.J., von Klitzing W., Kovachy T., Krikler B., Krutzik M., Lewicki M., Lien Y.H., Liu M., Luciano G.G., Magnon A., Mahmoud M.A., Malik S., McCabe C., Mitchell J., Pahl J., Pal D., Pandey S.,
Papazoglou D., Paternostro M., Penning B., Peters A., Prevedelli M., Puthiya-Veettil V., Quenby J., Rasel E., Ravenhall S., Ringwood J., Roura A., Sabulsky D.
We propose in this White Paper a concept for a space experiment using cold atoms to search for ultra-light dark matter, and to detect gravitational waves in the frequency range between the most
sensitive ranges of LISA and the terrestrial LIGO/Virgo/KAGRA/INDIGO experiments. This interdisciplinary experiment, called Atomic Experiment for Dark Matter and Gravity Exploration (AEDGE), will
also complement other planned searches for dark matter, and exploit synergies with other gravitational wave detectors. We give examples of the extended range of sensitivity to ultra-light dark matter
offered by AEDGE, and how its gravitational-wave measurements could explore the assembly of super-massive black holes, first-order phase transitions in the early universe and cosmic strings. AEDGE
will be based upon technologies now being developed for terrestrial experiments using cold atoms, and will benefit from the space experience obtained with, e.g., LISA and cold atom experiments in
microgravity. KCL-PH-TH/2019-65, CERN-TH-2019-126.
Generalized measure of quantum synchronization
Jaseem N., Hajdušek M., Solanki P., Kwek L.C.,
Fazio R.
, Vinjanampathy S.
We present a generalized information-theoretic measure of synchronization in quantum systems. This measure is applicable to dynamics of anharmonic oscillators, few-level atoms, and coupled oscillator
networks. Furthermore, the new measure allows us to discuss synchronization of disparate physical systems such as coupled hybrid quantum systems and coupled systems undergoing mutual synchronization
that are also driven locally. In many cases of interest, we find a closed-form expression for the proposed measure.
Two-Dimensional Quantum-Link Lattice Quantum Electrodynamics at Finite Density
Felser T., Silvi P.,
Collura M.
, Montangero S.
We present an unconstrained tree-tensor-network approach to the study of lattice gauge theories in two spatial dimensions, showing how to perform numerical simulations of theories in the presence of
fermionic matter and four-body magnetic terms, at zero and finite density, with periodic and open boundary conditions. We exploit the quantum-link representation of the gauge fields and demonstrate
that a fermionic rishon representation of the quantum links allows us to efficiently handle the fermionic matter while finite densities are naturally enclosed in the tensor network description. We
explicitly perform calculations for quantum electrodynamics in the spin-one quantum-link representation on lattice sizes of up to 16×16 sites, detecting and characterizing different quantum regimes.
In particular, at finite density, we detect signatures of a phase separation as a function of the bare mass values at different filling densities. The presented approach can be extended
straightforwardly to three spatial dimensions.
Domain wall melting in the spin-1/2 XXZ spin chain: Emergent Luttinger liquid with a fractal quasiparticle charge
Collura M.
, De Luca A.,
Calabrese P.
, Dubail J.
In spin chains with local unitary evolution preserving the magnetization Sz, the domain-wall state typically "melts." At large times, a nontrivial magnetization profile develops in an expanding region
around the initial position of the domain wall. For nonintegrable dynamics, the melting is diffusive, with entropy production within a melted region of size t. In contrast, when the evolution is
integrable, ballistic transport dominates and results in a melted region growing linearly in time, with no extensive entropy production: The spin chain remains locally in states of zero entropy at
any time. Here we show that, for the integrable spin-1/2 XXZ chain, low-energy quantum fluctuations in the melted region give rise to an emergent Luttinger liquid which, remarkably, differs from the
equilibrium one. The striking feature of this emergent Luttinger liquid is its quasiparticle charge (or Luttinger parameter K), which acquires a fractal dependence on the XXZ chain anisotropy
parameter Δ.
Enhancement of charge instabilities in Hund's metals by breaking of rotational symmetry
Chatzieleftheriou M., Berović M., Villar Arribi P.,
Capone M.
, De'Medici L.
We analyze multiorbital Hubbard models describing Hund's metals, focusing on the ubiquitous occurrence of a charge instability, signaled by a divergent/negative electronic compressibility, in a range
of doping from the half-filled Mott insulator corresponding to the frontier between Hund's and normal metals. We show that the breaking of rotational invariance favors this instability: both spin
anisotropy in the interaction and crystal-field splitting among the orbitals make the instability zone extend to larger dopings, making it relevant for real materials like iron-based superconductors.
These observations help us build a coherent picture of the occurrence and extent of this instability. We trace it back to the partial freezing of the local degrees of freedom in the Hund's metal,
which reduces the allowed local configurations and thus the quasiparticle itinerancy. The abruptness of the unfreezing happening at the Hund's metal frontier can be directly connected to a rapid
change in the electronic kinetic energy and thus to the enhancement and divergence of the compressibility.
Boson-exchange parquet solver for dual fermions
Krien F., Valli A., Chalupa P.,
Capone M.
, Lichtenstein A.I., Toschi A.
We present and implement a parquet approximation within the dual-fermion formalism based on a partial bosonization of the dual vertex function which substantially reduces the computational cost of
the calculation. The method relies on splitting the vertex exactly into single-boson exchange contributions and a residual four-fermion vertex, which physically embody, respectively, long- and
short-range spatial correlations. After recasting the parquet equations in terms of the residual vertex, these are solved using the truncated-unity method of Eckhardt et al. [Phys. Rev. B 101, 155104
(2020), doi:10.1103/PhysRevB.101.155104], which allows for a rapid convergence with the number of form factors in different regimes. While our numerical treatment of the parquet equations can be
restricted to only a few Matsubara frequencies, reminiscent of Astretsov et al. [Phys. Rev. B 101, 075109 (2020), doi:10.1103/PhysRevB.101.075109], the one- and two-particle spectral information
is fully retained. In applications to the two-dimensional Hubbard model the method agrees quantitatively with a stochastic summation of diagrams over a wide range of parameters.
Finite temperature off-diagonal long-range order for interacting bosons
Colcelli A., Defenu N.,
Mussardo G.
Trombettoni A.
Characterizing the scaling with the total particle number (N) of the largest eigenvalue of the one-body density matrix (λ0) provides information on the occurrence of the off-diagonal long-range
order (ODLRO) according to the Penrose-Onsager criterion. Setting λ0 ∝ N^C0, then C0 = 1 corresponds to ODLRO. The intermediate case, 0
Mixed-State Entanglement from Local Randomized Measurements
Elben A., Kueng R., Huang H.Y.(., Van Bijnen R., Kokail C.,
Dalmonte M.
Calabrese P.
, Kraus B., Preskill J., Zoller P., Vermersch B.
We propose a method for detecting bipartite entanglement in a many-body mixed state based on estimating moments of the partially transposed density matrix. The estimates are obtained by performing
local random measurements on the state, followed by postprocessing using the classical shadows framework. Our method can be applied to any quantum system with single-qubit control. We provide a
detailed analysis of the required number of experimental runs, and demonstrate the protocol using existing experimental data [Brydges et al., Science 364, 260 (2019)].
The ABC's of science
Mussardo G.
Science, with its inherent tension between the known and the unknown, is an inexhaustible mine of great stories. Collected here are twenty-six among the most enchanting tales, one for each letter of
the alphabet: the main characters are scientists of the highest caliber most of whom, however, are unknown to the general public. This book goes from A to Z. The letter A stands for Abel, the great
Norwegian mathematician, here involved in an elliptic thriller about a fundamental theorem of mathematics, while the letter Z refers to Absolute Zero, the ultimate and lowest temperature limit,
−273.15 degrees Celsius, a value that is tremendously cooler than the most remote corner of the Universe: the race to reach this final outpost of coldness is not yet complete, but, similarly to the
history books of polar explorations at the beginning of the 20th century, its pages record successes, failures, fierce rivalries and tragic desperations. In between the A and the Z, the other letters
of the alphabet are similar to the various stages of a very fascinating journey along the paths of science, a journey in the company of a very unique set of characters as eccentric and peculiar as
those in Ulysses by James Joyce: the French astronomer who lost everything, even his mind, to chase the transits of Venus; the caustic Austrian scientist who, perfectly at ease with both the laws of
psychoanalysis and quantum mechanics, revealed the hidden secrets of dreams and the periodic table of chemical elements; the young Indian astrophysicist who was the first to understand how a star
dies, suffering the ferocious opposition of his mentor for this discovery. Or the Hungarian physicist who struggled with his melancholy in the shadows of the desert of Los Alamos; or the French
scholar who was forced to hide her femininity behind a false identity so as to publish fundamental theorems on prime numbers. And so on and so forth. Twenty-six stories, which reveal the most
authentic atmosphere of science and the lives of some of its main players: each story can be read in quite a short period of time -- basically the time it takes to get on and off the train between
two metro stations. Largely independent from one another, these twenty-six stories make the book a harmonious polyphony of several voices: the reader can invent his/her own very personal order for
the chapters simply by ordering the sequence of letters differently. For an elementary law of Mathematics, this can give rise to an astronomically large number of possible books -- all the same, but
- then again - all different. This book is therefore the ideal companion for an infinite number of real or metaphoric journeys.
Symmetry resolved entanglement in integrable field theories via form factor bootstrap
Horváth D.X.,
Calabrese P.
We consider the form factor bootstrap approach of integrable field theories to derive matrix elements of composite branch-point twist fields associated with symmetry resolved entanglement entropies.
The bootstrap equations are determined in an intuitive way and their solution is presented for the massive Ising field theory and for the genuinely interacting sinh-Gordon model, both possessing a ℤ2
symmetry. The solutions are carefully cross-checked by performing various limits and by the application of the ∆-theorem. The issue of symmetry resolution for discrete symmetries is also discussed.
We show that entanglement equipartition is generically expected and we identify the first subleading term (in the UV cutoff) breaking it. We also present the complete computation of the symmetry
resolved von Neumann entropy for an interval in the ground state of the paramagnetic phase of the Ising model. In particular, we compute the universal functions entering in the charged and symmetry
resolved entanglement. | {"url":"https://www.tqt-trieste.it/publications/?cyear=2020","timestamp":"2024-11-03T03:38:32Z","content_type":"text/html","content_length":"57431","record_id":"<urn:uuid:17de45fa-84d8-4433-aebb-7c2b353667dc>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00015.warc.gz"} |
Math Balloons - Fraction Arithmetic - Educational Games For Kids
Math Balloons – Fraction Arithmetic
Play in Fullscreen Mode
What is Fraction Arithmetic?
Math Balloons – Fraction Arithmetic is an educational game that helps players practice arithmetic with fractions. The game presents balloons carrying fraction problems, and players must pop the
balloons that contain the correct answers to addition, subtraction, multiplication, or division of fractions. It is an excellent resource for students who are learning about fractions and need
practice in applying their knowledge to solve fraction problems. | {"url":"https://games.forkids.education/math-balloons-fraction-arithmetic/","timestamp":"2024-11-09T23:43:42Z","content_type":"text/html","content_length":"168218","record_id":"<urn:uuid:b927bbd2-b4cc-442d-ae35-83d69f9356bd>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00522.warc.gz"}
What is the Intuition Behind Machine Learning?
Machine learning has become a popular term, as this advanced technology is full of immense potential. Before explaining the intuition behind machine learning, let's first understand the meaning of
the term, which is becoming so popular in this era of scientific innovation and is a trend that everybody wants to follow.
What is Machine Learning?
Machine learning, explained in layman's terms, is a program running behind an application which has the ability to learn from what it sees and from the errors that it makes, and then tries to
improve itself through trial and error. A programming language like Python and a method of calculation (statistics) are what help propel this application in the right direction.
Now that you know what machine learning is, let's discuss the intuition behind building a machine learning algorithm or program.
In my previous blog I discussed a statistical concept called Linear Regression, which says that, given an independent variable X, prediction of a dependent variable Y is possible if we
understand the rate at which X and Y are changing and the direction in which they are moving, i.e. if we understand the hidden pattern they are following, we will be able to predict the value of Y
when X = 15.
In the process, we need to reduce the error between the predicted Y and the observed Y that we used to train our model, but this is not possible by calculating the slope, i.e. b[1], only a
single time, and this is where machine learning comes in handy.
The idea behind machine learning is to learn from past mistakes and try to find the best possible coefficients, i.e. b[0] and b[1], so that we are able to reduce the distance between the predicted
and observed Y, which leads to the minimization of the error in the predictions we are making. This intuition remains the same throughout all machine learning algorithms; only the problem in
question and the methodology used to solve it change.
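As a concrete illustration of that idea, here is a toy gradient-descent fit of b[0] and b[1] for simple linear regression. This is my own sketch, not code from the blog; the data set and the learning rate are made up for the example.

```python
# Toy gradient descent for simple linear regression.
# Repeatedly nudge b0 and b1 against the gradient of the mean
# squared error, i.e. "learn from past mistakes".

def fit_line(xs, ys, lr=0.01, epochs=5000):
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        preds = [b0 + b1 * x for x in xs]
        # Gradients of the mean squared error w.r.t. b0 and b1.
        g0 = 2.0 / n * sum(p - y for p, y in zip(preds, ys))
        g1 = 2.0 / n * sum((p - y) * x for p, y, x in zip(preds, ys, xs))
        # Step against the gradient to shrink the error.
        b0 -= lr * g0
        b1 -= lr * g1
    return b0, b1

# Observations that follow the hidden pattern y = 1 + 2x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [3.0, 5.0, 7.0, 9.0, 11.0]
b0, b1 = fit_line(xs, ys)
print(b0 + b1 * 15)  # prediction of Y when X = 15 (close to 31)
```

After enough iterations the coefficients settle near b[0] = 1 and b[1] = 2, the hidden pattern behind the data.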
Now let’s quickly look at the branches of Machine Learning.
Branches of Machine Learning
• Supervised (Parametric) Machine Learning Algorithms: Under this branch both the independent variable X and the dependent variable Y are given, in the form Y = f(X), and this branch can further be
divided based on the kind of problem we are dealing with, i.e. whether the variable Y is continuous or a category.
• Unsupervised (Non-parametric) Machine Learning Algorithms: Under this branch you do not have the Y variable, i.e. Y ≠ f(X), and you can only solve classification problems.
• Semi-Supervised Machine Learning Algorithms: This is the most difficult kind of problem to solve, as the data available for the analysis has missing values of Y, which makes it quite difficult
to train the algorithm, as the possibility of false prediction is very high.
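To make the unsupervised branch concrete, here is a minimal 1-D k-means sketch (my own illustration, not from the original post) that groups points into two clusters without ever seeing a Y label, i.e. Y ≠ f(X):

```python
# Minimal 1-D k-means with k = 2: assign each point to the nearer of
# two centroids, then recompute each centroid as the mean of its group.

def kmeans_1d(points, iters=20):
    c1, c2 = min(points), max(points)  # crude initial centroids
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted([c1, c2])

print(kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5]))  # centroids near 1 and 9
```

The algorithm discovers the two groups purely from the structure of X; no labelled Y values are involved at any point.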
So, with that, this discussion on machine learning wraps up; hopefully it helped you understand the intuition behind machine learning. Also check out the video tutorial attached at the bottom of
the blog to learn more. The field of machine learning is full of opportunities. DexLab Analytics offers a machine learning course in Delhi NCR; keep following the blog to enhance your knowledge as
we continue to update it with interesting and informative posts for you. | {"url":"https://m.dexlabanalytics.com/blog/what-is-the-intuition-behind-machine-learning","timestamp":"2024-11-13T23:06:06Z","content_type":"text/html","content_length":"63679","record_id":"<urn:uuid:c4e1a3aa-0ba6-415c-8a56-9936b82defbf>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00435.warc.gz"}
Thierry Legault - The sampling
THE SAMPLING
The sampling represents the area of the sky seen by a pixel of the CCD.
The sampling S (in arc seconds per pixel) depends on the size P of the pixel (in microns) and the focal length F (in mm):
S = 206 P/F
Example: at a focal length of 2000 mm, the sampling on a KAF-0400 chip (9-micron pixels) is S = 206×9/2000 = 0.93"/pixel.
The same formula can be used to determine the focal length necessary to reach a given sampling:
F = 206 P/S
In high resolution, theory and practice tell us that a good basic value of sampling, in favourable conditions, is about twice the maximum spatial frequency (see What is a MTF curve?). This is the
Nyquist sampling. It depends on the diameter D of the telescope and the wavelength l; its value is (in radians per pixel):
S[N] = l/2D
If P is the pixel size (in the same units as l), the focal ratio corresponding to this sampling is:
R[N] = 2P/l
Example: for a 250 mm telescope, whose maximum spatial frequency is 2 line pairs per arc second at 0.6 µm, S[N] is 0.0000012 radians per pixel, i.e. 0.25 arc second per pixel. For 9-micron pixels
(KAF-0400), the corresponding focal ratio is about 30.
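The three formulas above are easy to check with a short script. The function names below are mine, and the numbers simply reproduce the worked examples (9-micron pixels at 2000 mm focal length, and a 250 mm telescope at 0.6 µm):

```python
import math

def sampling_arcsec(pixel_um, focal_mm):
    """S = 206 P / F: sky covered by one pixel, in arcsec/pixel."""
    return 206.0 * pixel_um / focal_mm

def nyquist_sampling_arcsec(wavelength_m, diameter_m):
    """S_N = lambda / (2 D), converted from radians to arcseconds."""
    return math.degrees(wavelength_m / (2.0 * diameter_m)) * 3600.0

def nyquist_focal_ratio(pixel_um, wavelength_um):
    """R_N = 2 P / lambda: focal ratio that yields the Nyquist sampling."""
    return 2.0 * pixel_um / wavelength_um

print(sampling_arcsec(9.0, 2000.0))           # ~0.93 arcsec/pixel (KAF-0400 at F = 2000 mm)
print(nyquist_sampling_arcsec(0.6e-6, 0.25))  # ~0.25 arcsec/pixel (250 mm telescope)
print(nyquist_focal_ratio(9.0, 0.6))          # ~30 (9-micron pixels at 0.6 micron)
```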
A higher sampling could bring a small gain, at the expense of a reduction of the field and an increase of the exposure time, which can be harmful if the seeing is not very good. Only the experience
of the amateur can tell which value of sampling is better, depending on the hardware and the circumstances.
In CCD imaging, as in photography, adjusting the focal length is therefore of the highest importance in order to work at an adequate sampling. For more details, see How to adjust the focal length? | {"url":"http://www.astrophoto.fr/sampling.html","timestamp":"2024-11-12T21:35:15Z","content_type":"text/html","content_length":"3464","record_id":"<urn:uuid:62789577-1d09-4807-8330-3af34a64d3f0>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00782.warc.gz"}
Mathematics for Elementary Teachers
Binary numbers, using just 0’s and 1’s, are the language of computers.
The idea of expressing all quantities by nine digits whereby is imparted to them both an absolute value and a value of position is so simple that this very simplicity is the very reason for our not
being sufficiently aware how much admiration it deserves.
The “Dots and Boxes” approach to place value used in this part (and throughout this book) comes from James Tanton, and is used with his permission. See his development of these and other ideas at
1. Images and Videos on Pixabay are released under Creative Commons CC0. | {"url":"https://pressbooks-dev.oer.hawaii.edu/math111/part/place-value/","timestamp":"2024-11-09T05:09:52Z","content_type":"text/html","content_length":"72877","record_id":"<urn:uuid:09e6f73f-1c46-49b0-bcf5-66551aea57b5>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00399.warc.gz"}
Control Structures
Programming in Modern Fortran
Control Structures
This section gives a brief introduction to the constructs of the Fortran programming language:
if Construct
Fortran features three different types of conditionals: arithmetic if (deprecated), logical if, and block if. Fortran 2023 additionally introduced conditional expressions.
Arithmetic if
The first FORTRAN version from 1957 introduced the arithmetic if conditional that evaluates a numeric expression and then jumps to one of three labeled statements, depending on whether the expression
is either negative, zero, or positive:
if (x * y) 100, 200, 300
If negative, the statement does a goto to the first label (100), if zero to the second (200), and if positive to the third (300). This is equivalent to:
if (x * y < 0) goto 100
if (x * y == 0) goto 200
goto 300
Arithmetic if has been obsolescent since Fortran 90, and was removed from the Fortran 2018 standard. Like the goto statement, it should not be used anymore.
Logical if
The logical if conditional was added to FORTRAN IV and allows the execution of a statement depending on a logical expression, usually formed with relational operators. Only a single statement may be given:
if (x * y < 0) y = 1
A line-break following the expression is legitimate, but must be indicated by an ampersand:
if (x * y < 0) &
y = 1
Block if
The block if allows the conditional execution of a group of statements, for example:
if (a == 0) then
    b = 1
else if (a < 0) then
    b = 0
else
    b = b + a
end if
The conditional was introduced in FORTRAN 77.
Expressional if
The Fortran 2023 standard added conditional expressions to the language. The expression
b = (a > 0.0 ? a : 0.0)
is a short form of
if (a > 0.0) then
    b = a
else
    b = 0.0
end if
The general form of a conditional expression is (condition ? expression [: condition ? expression] … : expression), where each expression has the same declared type, kind type parameters, and rank.
select Switch
The select case switch is an alternative to block if:
select case (grade)
case ('A')
    print *, 'Excellent!'
case ('B', 'C')
    print *, 'Well done'
case ('D')
    print *, 'You passed'
case ('F')
    print *, 'Better try again'
case default
    print *, 'Invalid grade'
end select
You may want to use ranges inside select case with case (begin:end):
select case (marks)
case (91:100)
print *, 'Excellent!'
case (81:90)
print *, 'Very good!'
case (71:80)
print *, 'Well done!'
case (61:70)
print *, 'Not bad!'
case (41:60)
print *, 'You passed!'
case (:40)
print *, 'Better try again!'
case default
print *, 'Invalid marks'
end select
The case values must be constant (parameter) expressions; variables cannot appear in a case statement of a select case construct.
select type
The select type construct was introduced in Fortran 2003, and lets us execute blocks depending on the dynamic type or class of a variable. The following type guard statements are used to match a selector:
type is
Matches the selector if the dynamic type and kind type parameter values are the same as those specified by the statement
class is
Matches the selector if the dynamic type is an extension of the type specified by the statement, and the kind type parameter values specified by the statement are the same as the corresponding
type parameter values of the dynamic type of the selector.
class default
Executed if selector is not matched by any other type guard statement.
The name of the identifier that becomes associated with the selector can be chosen freely, but must be unique within the construct.
type :: vec_type
real :: x, y
end type vec_type
type, extends(vec_type) :: vec3_type
real :: z
end type vec3_type
type, extends(vec3_type) :: color_type
integer :: color
end type color_type
type(vec_type), target :: v
type(vec3_type), target :: v3
type(color_type), target :: c
class(vec_type), pointer :: ptr
v = vec_type(1.0, 2.0)
v3 = vec3_type(1.0, 2.0, 3.0)
c = color_type(0.0, 1.0, 2.0, 9)
! Point to either v, v3, or c:
ptr => c
select type (a => ptr)
class is (vec_type)
print *, a%x, a%y
type is (vec3_type)
print *, a%x, a%y, a%z
type is (color_type)
print *, a%x, a%y, a%z, a%color
end select
do Loop
The do construct loops over statements until an exit occurs:
do
a = a + 1
print *, a
if (a > 10) exit
end do
The do loop is also similar to the for loop in other programming languages. Set begin, end, and step size in the head of the loop:
integer :: i
do i = 1, 10, 2
print *, i
end do
The loop index variable has to be declared a priori. The step size is optional. It is further possible to label a loop, in order to reference it in a cycle or exit statement:
loop: do
a = a + 1
print *, a
if (a > 10) exit loop
end do loop
The cycle statement skips to the next iteration:
integer :: i
do i = 1, 10
if (modulo(i, 2) == 0) cycle
print *, i
end do
Implied do
Implied do loops return a number of items, using the syntax (item[1], item[2], …, item[n], var = initial, final, step), for example:
integer :: i
print *, ('Hi there. ', i = 1, 3)
The print statement will output the string three times. Arrays are often initialised using an implied do loop:
integer, parameter :: N = 10
integer :: i
integer :: values(N) = [ (i * 2, i = 1, N) ]
The array values is filled with 2, 4, …, 20. An implied do loop may be nested into another implied do loop. Starting with Fortran 2018, we can declare the index variable in the implied loop:
integer, parameter :: N = 10
integer :: values(N) = [ (i * 2, integer :: i = 1, N) ]
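As noted above, implied do loops may be nested. For example, a rank-2 array can be filled with a nested implied do inside reshape (an illustrative sketch, mirroring the initialisation pattern above):

```fortran
integer :: i, j
! the inner loop over i runs fastest: elements are 1, 2, 2, 4, 3, 6
integer :: m(2, 3) = reshape([ ((i * j, i = 1, 2), j = 1, 3) ], [2, 3])
```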
do concurrent
The Fortran 2008 standard added the do concurrent construct that allows easy loop parallelisation and serves a similar purpose to the forall construct:
integer :: i
real :: a(100)
do concurrent (i = 1:size(a))
a(i) = sqrt(real(i))
end do
Inside a do concurrent loop, only calls to pure functions are allowed. Multiple indices may be declared at once:
integer :: x, y
real :: a(8, 16)
do concurrent (x = 1:size(a, 1), y = 1:size(a, 2))
a(x, y) = real(x)
end do
Since Fortran 2018, the loop indices may be declared inside the construct:
real :: a(8, 16)
do concurrent (integer :: x = 1:size(a, 1), y = 1:size(a, 2))
a(x, y) = real(x)
end do
do while
The do while loop cycles through statements as long as a given condition is true:
integer :: i
i = 0
do while (i < 5)
i = i + 1
print *, i
end do
forall Loop
The forall loop has been introduced with Fortran 95, and declared obsolescent in Fortran 2018 in favour of do concurrent. It selects elements in an array by index or index range, with an optional
step size. In the following example, the loop statement changes the values of the first three elements of the array to 0:
integer :: a(5) = [ 1, 2, 3, 4, 5 ]
integer :: i
forall (i = 1:3) a(i) = 0
! with explicit step size of 1:
forall (i = 1:3:1) a(i) = 0
Furthermore, a mask can be added to the condition of the statement in order to select only specific values, for instance:
forall (i = 1:3, a(i) == 0) &
a(i) = 1
The mask may be any pure logical expression. To allow more than one statement, use the forall block statement:
forall (i = 3:5)
a(i) = 1
print *, a(i)
end forall
Inside the loop, you can assign pure functions to the elements:
forall (i = 1:3) &
a(i) = my_func(a(i))
Since Fortran 2018, the forall indices can be declared in the loop construct:
real :: a(8)
forall (integer :: i = 1:3)
a(i) = i
end forall
where Statement
The where statement is used for masked array assignments. Elements of an array will be modified directly upon given conditions:
integer :: a(5) = [ 1, 2, 3, 4, 5 ]
where (a >= 3) &
a = 0
You can add else where statements to the block form of where:
where (a > 0 .and. a < 2)
a = 0
else where (a >= 4)
a = 1
end where
block Construct
The block construct is available since Fortran 2008, and defines an executable block that can contain declarations. A block may be exited prematurely with exit, allowing a control flow similar to
using goto:
real :: a
a = 1024.0
block1: block
real :: b
if (a < 0.0) exit block1
b = sqrt(a)
print *, b
end block block1
The variable b declared inside block1 is not accessible from outside of the construct. The identifier block1 is optional.
piecewise-linear convex function
We consider the problem to minimize the sum of piecewise-linear convex functions under both linear and nonnegative constraints. We convert the piecewise-linear convex problem into a standard form
linear programming problem (LP) and apply a primal-dual interior-point method for the LP. From the solution of the converted problem, we can obtain the solution of the …
MIT Mathematics
A separate article, South Asian mathematics, focuses on the early history of mathematics in the Indian subcontinent and the development there of the modern decimal place-value numeral system. The article East Asian mathematics covers the largely independent development of mathematics in China, Japan, Korea, and Vietnam. This does not imply, however, that developments elsewhere have been unimportant. Indeed, to understand the history of mathematics in Europe, it is essential to know its history at least in ancient Mesopotamia and Egypt, in ancient Greece, and in Islamic civilization from the ninth to the 15th century. The ways in which these civilizations influenced each other, and the important direct contributions Greece and Islam made to later developments, are discussed in the first parts of this article. L. E. J. Brouwer even initiated a philosophical perspective known as intuitionism, which primarily identifies mathematics with certain creative processes in the mind.
Numerical analysis and, more broadly, scientific computing also study non-analytic topics of mathematical science, especially algorithmic, matrix, and graph theory. Other areas of computational mathematics include computer algebra and symbolic computation. The introduction of mathematical notation led to algebra, which, roughly speaking, consists of the study and the manipulation of formulas. Calculus, shorthand for infinitesimal calculus and integral calculus, is the study of continuous functions, which model change and the relationships between varying quantities. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. This approach allows considering “logics”, theorems, proofs, etc. as mathematical objects, and proving theorems about them. For example, Gödel’s incompleteness theorems assert, roughly speaking, that in every theory that contains the natural numbers, there are theorems that are true but not provable within the theory. Geologists at the University of Utah have developed a mathematical model to predict the fundamental resonant frequencies of this and similar formations based on the formations’ geometry and material properties. We offer undergraduate programs leading to Bachelor of Science degrees in mathematics, applied mathematics, mathematical biology, and actuarial mathematics. India’s contributions to the development of modern mathematics were made through the considerable influence of Indian achievements on Islamic mathematics during its formative years.
This technical vocabulary is both precise and compact, making it possible to mentally process complex ideas. Until the 19th century, algebra consisted mainly of the study of linear equations, now called linear algebra, and polynomial equations in a single unknown, which were called algebraic equations. During the nineteenth century, variables began to represent things other than numbers, on which some operations can act, often generalizations of arithmetic operations.
Welcome To Annals Of Mathematics
Ideas that initially develop with a particular application in mind are often generalized later, thereupon joining the general stock of mathematical concepts. Several areas of applied mathematics have even merged with practical fields to become disciplines in their own right, such as statistics, operations research, and computer science. During the early modern period, mathematics began to develop at an accelerating pace in Western Europe.
It is in Babylonian mathematics that elementary arithmetic first appears in the archaeological record. The Babylonians also possessed a place-value system and used a sexagesimal numeral system which is still in use today for measuring angles and time. Numerical analysis is mainly devoted to the computation on computers of solutions of ordinary and partial differential equations that arise in many applications of mathematics. Most mathematical activity consists of discovering and proving properties of abstract objects. These objects are either abstractions from nature, or abstract entities of which certain properties, called axioms, are stipulated. These abstract problems and technicalities are what pure mathematics attempts to solve, and these attempts have led to major discoveries for humankind, including the universal Turing machine, theorized by Alan Turing in 1937.
Math Division Newsletters
Though their methods were not always logically sound, mathematicians in the 18th century took on the rigorization stage and were able to justify their methods and create the final stage of calculus. The development of mathematics was taken on by the Islamic empires, then concurrently in Europe and China, according to Wilder. Leonardo Fibonacci was a medieval European mathematician, famous for his theories on arithmetic, algebra, and geometry. The Renaissance led to advances that included decimal fractions, logarithms, and projective geometry. Number theory was significantly expanded upon, and theories like probability and analytic geometry ushered in a new age of mathematics, with calculus at the forefront. Mathematics is the science of structure, order, and relation that has evolved from elemental practices of counting, measuring, and describing the shapes of objects.
Euclidean geometry was developed without a change of methods or scope until the seventeenth century, when René Descartes introduced what are now called Cartesian coordinates. This was a major change of paradigm: instead of defining real numbers as lengths of line segments, it allowed the representation of points using numbers, and the use of algebra and, later, calculus for solving geometrical problems. This split geometry into two parts that differ only in their methods: synthetic geometry, which uses purely geometrical methods, and analytic geometry, which uses coordinates systematically. The leading source of information for the world’s applied mathematics and computational science communities. Our award-winning faculty are devoted teachers with expertise in most areas of mathematical research.
Before this period, sets were not considered mathematical objects, and logic, although used for mathematical proofs, belonged to philosophy and was not specifically studied by mathematicians. The study of types of algebraic structures as mathematical objects is the object of universal algebra and category theory. At its origin, it was introduced, along with homological algebra, to allow the algebraic study of non-algebraic objects such as topological spaces; this particular area of application is called algebraic topology.
EMA100 and NUPL Relative Unrealized Profit Quantitative Trading Strategy
Date: 2024-06-17 14:55:13
This trading strategy is based on three indicators: the 100-period Exponential Moving Average (EMA100), Net Unrealized Profit/Loss (NUPL), and Relative Unrealized Profit. It generates trading signals
by determining the crossover of price with EMA100 and the positivity or negativity of NUPL and Relative Unrealized Profit. A long signal is triggered when the price crosses above EMA100 and both NUPL
and Relative Unrealized Profit are positive. A short signal is triggered when the price crosses below EMA100 and both NUPL and Relative Unrealized Profit are negative. The strategy uses a fixed
position size of 10% and sets a stop loss of 10%.
Strategy Principles
1. Calculate the 100-period EMA as the main trend indicator
2. Use NUPL and Relative Unrealized Profit as auxiliary indicators to confirm trend strength and sustainability
3. Generate long/short signals when the price crosses above/below EMA100 while NUPL and Relative Unrealized Profit are simultaneously positive/negative
4. Adopt a fixed position size of 10% and set a stop loss of 10% to control risk
5. When holding a long position, if the price falls below the stop loss price, close the long position; when holding a short position, if the price rises above the stop loss price, close the short position
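Steps 1–3 above can be sketched in plain Python. NUPL and relative unrealized profit are on-chain metrics that cannot be derived from price alone, so they are simply passed in here; all function and variable names are illustrative, not from the strategy's code:

```python
def ema(values, period):
    """Exponential moving average with the standard smoothing k = 2/(period+1)."""
    k = 2.0 / (period + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(v * k + out[-1] * (1.0 - k))
    return out

def crossover_signals(close, nupl, rup, period=100):
    """Return bar indices of long/short entries per the rules above:
    price crossing the EMA while both auxiliary metrics agree in sign."""
    e = ema(close, period)
    longs, shorts = [], []
    for i in range(1, len(close)):
        crossed_up = close[i - 1] <= e[i - 1] and close[i] > e[i]
        crossed_down = close[i - 1] >= e[i - 1] and close[i] < e[i]
        if crossed_up and nupl[i] > 0 and rup[i] > 0:
            longs.append(i)
        if crossed_down and nupl[i] < 0 and rup[i] < 0:
            shorts.append(i)
    return longs, shorts
```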
Advantage Analysis
1. Simple and easy to understand: The strategy logic is clear and uses common technical indicators, making it easy to understand and implement
2. Trend following: By capturing the main trend using EMA100, it is suitable for use in trending markets
3. Risk control: Setting fixed position sizes and stop losses can effectively control risk
4. Adaptability: The strategy can be applied to various markets and trading instruments
Risk Analysis
1. False signals: In choppy markets, frequent crossovers between price and EMA100 may generate more false signals, leading to losses
2. Lag: As a lagging indicator, EMA may react slowly at trend reversals, missing the best entry opportunities
3. Parameter optimization: Strategy parameters (such as EMA period, position size, stop loss ratio) need to be optimized for different markets, and inappropriate parameters may result in poor
strategy performance
Optimization Directions
1. Parameter optimization: Optimize parameters such as EMA period, position size, and stop loss ratio to improve strategy performance
2. Signal filtering: Add other technical indicators or market sentiment indicators to filter false signals
3. Dynamic position management: Dynamically adjust positions based on market volatility, account profit/loss, and other factors to increase returns and control risk
4. Long-short combination: Hold both long and short positions simultaneously to hedge market risk and improve strategy stability
This trading strategy generates trading signals through three indicators: EMA100, NUPL, and Relative Unrealized Profit. It has advantages such as clear logic, controllable risk, and strong
adaptability. At the same time, it also has risks such as false signals, lag, and parameter optimization. In the future, the strategy can be optimized and improved through parameter optimization,
signal filtering, dynamic position management, and long-short combinations.
start: 2023-06-11 00:00:00
end: 2024-06-16 00:00:00
period: 1d
basePeriod: 1h
exchanges: [{"eid":"Futures_Binance","currency":"BTC_USDT"}]
strategy("Scalping Strategy with EMA 100, NUPL, and Relative Unrealized Profit", overlay=true)
// Input for EMA period
emaPeriod = input.int(100, title="EMA Period", minval=1)
ema100 = ta.ema(close, emaPeriod)
plot(ema100, color=color.blue, title="EMA 100")
// Placeholder function for NUPL (Net Unrealized Profit/Loss)
// Replace this with actual NUPL data or calculation
NUPL = close * 0.0001 // Dummy calculation
// Placeholder function for relative unrealized profit
// Replace this with actual relative unrealized profit data or calculation
relativeUnrealizedProfit = close * 0.0001 // Dummy calculation
// Define conditions for long and short entries
longCondition = ta.crossover(close, ema100) and NUPL > 0 and relativeUnrealizedProfit > 0
shortCondition = ta.crossunder(close, ema100) and NUPL < 0 and relativeUnrealizedProfit < 0
// Plot buy and sell signals on the chart
plotshape(series=longCondition, location=location.belowbar, color=color.green, style=shape.labelup, title="Buy Signal")
plotshape(series=shortCondition, location=location.abovebar, color=color.red, style=shape.labeldown, title="Sell Signal")
// Calculate stop loss levels
longStopLoss = close * 0.90
shortStopLoss = close * 1.10
// Strategy entry and exit rules
if (longCondition)
strategy.entry("Long", strategy.long, stop=longStopLoss)
if (shortCondition)
strategy.entry("Short", strategy.short, stop=shortStopLoss)
// Set stop loss levels for active positions
if (strategy.position_size > 0)
strategy.exit("Exit Long", "Long", stop=longStopLoss)
if (strategy.position_size < 0)
strategy.exit("Exit Short", "Short", stop=shortStopLoss)
// Alerts for long and short entries
alertcondition(longCondition, title="Long Entry Alert", message="Long entry signal based on EMA 100, NUPL, and relative unrealized profit")
alertcondition(shortCondition, title="Short Entry Alert", message="Short entry signal based on EMA 100, NUPL, and relative unrealized profit")
// Visualize the entry conditions
plotshape(series=longCondition, location=location.belowbar, color=color.blue, style=shape.cross, title="Long Condition")
plotshape(series=shortCondition, location=location.abovebar, color=color.red, style=shape.cross, title="Short Condition") | {"url":"https://www.fmz.com/strategy/454336","timestamp":"2024-11-02T02:43:30Z","content_type":"text/html","content_length":"16106","record_id":"<urn:uuid:28725a79-aa0e-49d0-bd18-af8b0ac015a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00843.warc.gz"} |
Roadrunner – from fringe to mainstream
A few months ago, I posted here about a couple of scientists who’d adapted Playstation consoles to serve as computers – number-crunchers – for their work.
Now we learn of Roadrunner, a new supercomputer that’s cracked the one-petaflop ‘boundary’ – over one thousand trillion calculations per second. For those interested in the supercomputing
technicalities, there’s a good discussion here, and a feel-good movie about Roadrunner here. Briefly, it uses different types of processors to tackle different elements of a computing problem.
Its speed is stunning. Think about it. 1,000,000,000,000,000. One quadrillion. The mind simply can’t comprehend it as a number.
In terms of computing history, think of it this way. We used to calculate the speed of a processor in instructions per second, or IPS. An instruction would be defined as the computer cycle needed to
process one operation against one location in the computer’s memory: for example, multiplying one number by another. Placing the number to be multiplied into a given memory location would be one
instruction. Getting the second number would be a second instruction; performing the actual multiplication would be a third; and storing the result in another memory location would be a fourth.
Back in the early to mid 1970’s, when I got into computers for the first time, corporate mainframe computers typically operated at one to three million IPS, or MIPS. The first IBM PC, in 1981,
operated at way less than one MIP, whilst mainframe computers of that period ran at ten to fifty MIPS.
The Sony Playstation 3 processor, as introduced, ran at 10.24 billion MIPS in 2006. That processor has now been enhanced by IBM, and is the heart of the new Roadrunner system. There are 6,948
dual-core computer chips and 12,960 ‘cell engines’ in Roadrunner – each of the latter operating at least as fast as the Playstation 3 processor, probably faster. Even at an equivalent speed, that’s a
raw combined processing power of 132,710 billion MIPS.
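The arithmetic above can be checked directly, taking the post's own figures at face value (the 10.24 billion MIPS per cell engine is the post's number, not a verified spec):

```python
cell_engines = 12_960
mips_per_engine = 10.24e9            # the post's figure, in MIPS
combined_mips = cell_engines * mips_per_engine   # = 132,710.4 billion MIPS

# each MIPS is a million instructions per second:
instructions_per_second = combined_mips * 1e6
```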
132,710,000,000,000,000,000 instructions per second.
1,000,000,000,000,000 calculations per second.
Suddenly I feel old and creaky . . .
1 comment
1. Great; now the next windows version will list that as a minimum system requirement.
Statement of Power Flow Problem
Quantities associated with each bus in a system
Each bus in a power system is associated with four quantities: real power (P), reactive power (Q), voltage magnitude (V), and voltage phase angle (δ).
Work involved in (or performed by) a load flow study
(i). Representation of the system by a single line diagram
(ii). Determining the impedance diagram using the information in the single line diagram
(iii). Formulation of network equations
(iv). Solution of network equations
Iterative methods to solve load flow problems
The load flow equations are non-linear algebraic equations, so an explicit solution is not possible. The solution of non-linear equations can be obtained only by iterative numerical techniques.
Methods mainly used for load flow solution
The Gauss–Seidel method, the Newton–Raphson method, and the fast decoupled method.
Flat voltage start
In iterative methods of load flow solution, the initial voltages of all buses except the slack bus are assumed to be 1 + j0 p.u. This is referred to as a flat voltage start.
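As a toy illustration of one such iterative technique, here is a Gauss–Seidel sweep for a network of PQ buses, starting from a flat voltage start. The function name and the two-bus example are illustrative, not from the notes:

```python
def gauss_seidel_pq(Y, S, V, slack=0, tol=1e-10, max_iter=500):
    """Gauss-Seidel load-flow sweep for PQ buses.

    Y: bus admittance matrix (list of lists of complex)
    S: scheduled injections P + jQ per bus (complex, load negative)
    V: initial bus voltages; a flat start is 1 + 0j everywhere.
    The slack bus voltage is held fixed.
    """
    n = len(V)
    for _ in range(max_iter):
        max_dv = 0.0
        for i in range(n):
            if i == slack:
                continue
            # sum of currents contributed by the other buses
            s = sum(Y[i][k] * V[k] for k in range(n) if k != i)
            v_new = (S[i].conjugate() / V[i].conjugate() - s) / Y[i][i]
            max_dv = max(max_dv, abs(v_new - V[i]))
            V[i] = v_new
        if max_dv < tol:
            break
    return V
```

At convergence, the injected power recomputed from the voltages matches the scheduled injection at every non-slack bus.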
Downloading EWKB format is larger than GeoJSON?
I'm trying to find the most efficient way to download boundaries and thought it would naturally be EWKB, since this stores each geometry coordinate as binary floats, whereas GeoJSON stores everything
as text with one byte per numeric digit. However, when trying this out, it turns out the EWKB downloads are almost twice as large as the GeoJSON downloads, so I'm thinking something must be wrong
with the way the EWKB is generated. Here's an example of downloading all boundaries for Afghanistan:
GeoJSON (1.79MB)
EWKB (3.28MB)
Pinned replies
The GeoJSON is limited to 9 decimals while the EWKB (and EWKT) format uses whatever is in the database, which can be a lot more. For example the first coordinate in this polygon is 61.5622549375
29.6585979362252 (10 decimals in longitude).
PS! Edited your post and removed your token.
Thanks for the prompt response. But the number of decimals shouldn't really apply to EWKB since each number is always stored as a single 8-byte double (15 decimal precision). Compare that to GeoJSON,
where even with only 9 decimals the above example of 61.5622549375 would be 9 bytes for the decimals + 1 byte for the comma + 2 bytes for the whole number = 12 bytes in total. GeoJSON would also
require additional bytes for the commas between coordinates and the enclosing brackets around each coordinate, ring, and polygon, so should be substantially larger than EWKB. And yet we're seeing the
opposite, that EWKB is almost twice the size of GeoJSON. I'll see if I can dig into the byte contents, but was hoping maybe there's something obvious on your end?
I don't know the EWKB format in detail. But if it's always 8 bytes, it will at times also be longer than the GeoJSON which potentially could be a single byte for a lat/lon. But that's a very rare
case and definitely not the cause.
I ran a few SQL queries to see what's happening. First I compared the length (number of bytes), GeoJSON vs EWKB.
SELECT LENGTH(ST_AsGeoJSON(way_landonly))
FROM administrative_boundaries
WHERE osm_id = -303427
-- 985130
SELECT LENGTH(ST_AsEWKB(way_landonly))
FROM administrative_boundaries
WHERE osm_id = -303427
-- 739466
As we can see the EWKB is shorter, just as you suggest that it should be.
I then looked at what the site actually is doing (while stripping of some columns).
SELECT LENGTH(foo::text) FROM (
SELECT JSON_BUILD_OBJECT(
'type', 'FeatureCollection',
'features', JSON_AGG(ST_AsGeoJSON(t.*, 'geom')::json)
FROM (
osm_id AS osm_id,
way_landonly AS geom
FROM administrative_boundaries
WHERE osm_id IN (-303427)
) AS t(osm_id, geom)
) AS f(foo);
-- 985245
SELECT LENGTH(foo::text) FROM (
SELECT JSON_AGG(ROW_TO_JSON(t.*))
FROM (
osm_id AS osm_id,
ST_AsEWKB(way_landonly) AS geom
FROM administrative_boundaries
WHERE osm_id IN (-303427)
) AS t(osm_id, geom)
) AS f(foo);
-- 1478965
Suddenly the EWKB is larger. And I then assumed it's because it's wrapped into a JSON object which needs to escape the bytes. The fact that it does this is probably not very well documented, if even
mentioned. But it's done to be able to return a single file when given more than one OSM-ID.
But, when hex-dumping it, I can't see that it's being escaped. I honestly don't know why it's growing. But from what I know there aren't any issues, besides the size then. We built this site to feed
another site with data, and we use the EWKB format to do that. We have not seen a single issue with the end result, and we have transferred over 35,000 polygons to the other site. I still believe
that it has to do with the EWKB to GeoJSON though, I just can't see how.
Aha, so based on what you're saying, I think it's starting to make sense now. I wasn't originally aware that the EWKB download was wrapped inside a JSON, probably in order to store the non-geometry
properties/attributes. From what you're saying and after inspecting the JSON contents, in order to store the EWKB binary data in a pure-text format like JSON, the ROW_TO_JSON function converts the EWKB data to a HEX-encoded character string. HEX-encoded strings use two bytes for every one byte of data, so the EWKB ends up at twice the size it should be, which would explain the size
discrepancy. I guess there's really no obvious way to store a collection of EWKB geometries along with properties that preserves them in a binary format.
Maybe it would be possible to allow yet another option to download the queried data as a binary database dump? That would keep all the data in its most efficient binary form. | {"url":"https://osmboundaries.userecho.com/communities/1/topics/26-donloading-ewkb-format-is-larger-than-geojson","timestamp":"2024-11-11T14:11:01Z","content_type":"text/html","content_length":"39525","record_id":"<urn:uuid:6b17c702-05c9-454b-8ab3-7d81c6a6c98d>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00815.warc.gz"} |
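The size doubling described above is easy to reproduce: hex encoding, which is what binary data becomes inside a text format like JSON, emits two characters per input byte. This is an illustrative sketch, not the site's actual code:

```python
import json

raw = bytes(range(64))                 # 64 bytes of binary, a stand-in for EWKB
hex_text = raw.hex()                   # the text form a JSON wrapper must use
assert len(hex_text) == 2 * len(raw)   # two characters per byte

# wrapping it in a JSON object adds a little more overhead on top
payload = json.dumps({"osm_id": -303427, "geom": hex_text})
```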
Whangaparaoa College
Year 9 Mathematics
Course Description
Teacher in Charge: Mr S. Van Emmenis.
This is the course that further develops the skills studied in Year 8. Learners are taught by specialist Mathematics teachers and learners are introduced to new concepts and more challenging word
problems using calculators. Much of the focus in this year’s work is Levels 4 to 5 of the NZ Mathematics Curriculum. Coursework will include:
• Number: Fractions, decimals, ratios, and percentages in real-life settings that include money-related matters like GST, interest, salaries and commission, cost and selling price.
• Measurement: Perimeter of compound shapes, area of composite shapes, surface area of 3D shapes and volume.
• Algebra: Write expressions and equations, simplify, expand and factorise expressions, and solve multi-step equations and patterns.
• Probability: Use the Statistical enquiry cycle to investigate chance-based scenarios and express chance as a probability, Venn diagrams and probability trees.
• Space: Angle rules, angles, bearings, constructions and transformations
• Statistics: Statistical enquiry cycle(PPDAC) - Evaluate a problem, Create a plan to investigate, collect data to calculate central tendencies and create visual presentations, analyse the data to
make informed conclusions.
Recommended Prior Learning
Course Contribution and Equipment (this value is only indicative)
$30 for an Education Perfect subscription and associated costs.
Learners MUST have a scientific calculator (Casio FX82AU plus II or equivalent)
Assessment Information
Number 5 WGP Credits
Measurement 4 WGP Credits
Space 3 WGP Credits
Algebra 5 WGP Credits
Statistics 23 WGP Credits
Although we aim to enable every learner to have the course that they prefer, limited places or learning requirements may restrict learners' choices. | {"url":"https://wgpcollege.schoolpoint.co.nz/courses/course/9math","timestamp":"2024-11-06T20:05:25Z","content_type":"text/html","content_length":"63901","record_id":"<urn:uuid:f8366ada-b817-4a55-a2d7-16ee24e5bffb>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00754.warc.gz"} |
scipy.stats.median_test(*samples, ties='below', correction=True, lambda_=1, nan_policy='propagate')
Perform a Mood’s median test.
Test that two or more samples come from populations with the same median.
Let n = len(samples) be the number of samples. The “grand median” of all the data is computed, and a contingency table is formed by classifying the values in each sample as being above or below
the grand median. The contingency table, along with correction and lambda_, are passed to scipy.stats.chi2_contingency to compute the test statistic and p-value.
Parameters
sample1, sample2, … : array_like
The set of samples. There must be at least two samples. Each sample must be a one-dimensional sequence containing at least one value. The samples are not required to have the same length.
ties : str, optional
Determines how values equal to the grand median are classified in the contingency table. The string must be one of:
"below": Values equal to the grand median are counted as "below".
"above": Values equal to the grand median are counted as "above".
"ignore": Values equal to the grand median are not counted.
The default is "below".
correction : bool, optional
If True, and there are just two samples, apply Yates’ correction for continuity when computing the test statistic associated with the contingency table. Default is True.
lambda_ : float or str, optional
By default, the statistic computed in this test is Pearson’s chi-squared statistic. lambda_ allows a statistic from the Cressie-Read power divergence family to be used instead. See
power_divergence for details. Default is 1 (Pearson’s chi-squared statistic).
nan_policy : {'propagate', 'raise', 'omit'}, optional
Defines how to handle when input contains nan. ‘propagate’ returns nan, ‘raise’ throws an error, ‘omit’ performs the calculations ignoring nan values. Default is ‘propagate’.
Returns
An object containing attributes:
statistic : float
The test statistic. The statistic that is returned is determined by lambda_. The default is Pearson’s chi-squared statistic.
pvalue : float
The p-value of the test.
median : float
The grand median.
table : ndarray
The contingency table. The shape of the table is (2, n), where n is the number of samples. The first row holds the counts of the values above the grand median, and the second row holds the counts of the values below the grand median. The table allows further analysis with, for example, scipy.stats.chi2_contingency, or with scipy.stats.fisher_exact if there are two samples, without having to recompute the table. If nan_policy is "propagate" and there are nans in the input, the return value for table is None.
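To make the table construction concrete, here is a pure-Python sketch of the counting step only (an illustration, not the actual SciPy implementation, which goes on to pass the table to chi2_contingency). With the seed-count data from the Examples section below it reproduces the documented grand median and tables:

```python
from statistics import median

def median_test_table(samples, ties="below"):
    """Build the 2 x n contingency table used by Mood's median test.

    Row 0 holds counts above the grand median, row 1 counts below it,
    with values equal to the grand median handled per `ties`.
    """
    pooled = [x for sample in samples for x in sample]
    grand_median = median(pooled)
    above_row, below_row = [], []
    for sample in samples:
        above = sum(1 for x in sample if x > grand_median)
        below = sum(1 for x in sample if x < grand_median)
        equal = len(sample) - above - below
        if ties == "below":
            below += equal
        elif ties == "above":
            above += equal
        # ties == "ignore": values equal to the median are dropped
        above_row.append(above)
        below_row.append(below)
    return grand_median, [above_row, below_row]

g1 = [10, 14, 14, 18, 20, 22, 24, 25, 31, 31, 32, 39, 43, 43, 48, 49]
g2 = [28, 30, 31, 33, 34, 35, 36, 40, 44, 55, 57, 61, 91, 92, 99]
g3 = [0, 3, 9, 22, 23, 25, 25, 33, 34, 34, 40, 45, 46, 48, 62, 67, 84]

grand_median, table = median_test_table([g1, g2, g3])           # ties="below"
_, table_above = median_test_table([g1, g2, g3], ties="above")
```

For these inputs the sketch gives a grand median of 34.0, table equal to [[5, 10, 7], [11, 5, 10]], and table_above equal to [[5, 11, 9], [11, 4, 8]], matching the tables shown in the Examples section.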
See also
kruskal
Compute the Kruskal-Wallis H-test for independent samples.
mannwhitneyu
Computes the Mann-Whitney rank test on samples x and y.
References
Mood, A. M., Introduction to the Theory of Statistics. McGraw-Hill (1950), pp. 394-399.
Zar, J. H., Biostatistical Analysis, 5th ed. Prentice Hall (2010). See Sections 8.12 and 10.15.
A biologist runs an experiment in which there are three groups of plants. Group 1 has 16 plants, group 2 has 15 plants, and group 3 has 17 plants. Each plant produces a number of seeds. The seed
counts for each group are:
Group 1: 10 14 14 18 20 22 24 25 31 31 32 39 43 43 48 49
Group 2: 28 30 31 33 34 35 36 40 44 55 57 61 91 92 99
Group 3: 0 3 9 22 23 25 25 33 34 34 40 45 46 48 62 67 84
The following code applies Mood’s median test to these samples.
>>> g1 = [10, 14, 14, 18, 20, 22, 24, 25, 31, 31, 32, 39, 43, 43, 48, 49]
>>> g2 = [28, 30, 31, 33, 34, 35, 36, 40, 44, 55, 57, 61, 91, 92, 99]
>>> g3 = [0, 3, 9, 22, 23, 25, 25, 33, 34, 34, 40, 45, 46, 48, 62, 67, 84]
>>> from scipy.stats import median_test
>>> res = median_test(g1, g2, g3)
The median is
>>> res.median
34.0
and the contingency table is
>>> res.table
array([[ 5, 10, 7],
[11, 5, 10]])
p is too large to conclude that the medians are not the same:
>>> res.pvalue
The “G-test” can be performed by passing lambda_="log-likelihood" to median_test.
>>> res = median_test(g1, g2, g3, lambda_="log-likelihood")
>>> res.pvalue
The median occurs several times in the data, so we’ll get a different result if, for example, ties="above" is used:
>>> res = median_test(g1, g2, g3, ties="above")
>>> res.pvalue
>>> res.table
array([[ 5, 11, 9],
[11, 4, 8]])
This example demonstrates that if the data set is not large and there are values equal to the median, the p-value can be sensitive to the choice of ties.
How to determine the distance to planets
You will need
• calculator;
• radar;
• stopwatch;
• Handbook of astronomy.
Radar is one of the modern methods of determining the distance from Earth to the planets (the geocentric distance). It is based on a comparative analysis of the sent and reflected signals. Send a radio signal in the direction of the planet of interest and start the stopwatch. When the reflected signal arrives, stop the clock. From the known speed of propagation and the time during which the signal reaches the planet and is reflected back, calculate the distance to the planet: it equals the speed multiplied by half the time shown on the stopwatch.
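That last step is a one-line computation. The sketch below assumes the signal travels at the speed of light, and the 276-second round trip is an illustrative figure roughly corresponding to Venus near closest approach:

```python
C_KM_PER_S = 299_792.458   # speed of light in km/s

def radar_distance_km(round_trip_seconds):
    """The signal covers the Earth-planet gap twice, so the distance
    equals the propagation speed times half the stopwatch time."""
    return C_KM_PER_S * round_trip_seconds / 2

venus_km = radar_distance_km(276.0)   # about 41.4 million km
```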
Before the advent of radar, distances to objects in the Solar system were determined using the method of horizontal parallax. The error of that method is on the order of kilometers, while the error of a radar distance measurement is on the order of centimeters.
The essence of determining distances to the planets by the method of horizontal parallax is the change in the apparent direction of the object when the observation point is moved (the parallax). The baseline is taken as large as possible: the radius of the Earth. With that, determining the distance to a planet by horizontal parallax is a simple trigonometric problem, provided all the data are known.
Multiply the number of arcseconds in one radian (206,265) by the radius of the Earth (6,370 km) and divide by the planet's horizontal parallax, in arcseconds, at that moment. Because Earth's radius is expressed in kilometers, the result is the distance to the planet in kilometers (divide by the length of an astronomical unit, about 149.6 million km, to convert to astronomical units).
With annual, or trigonometric, parallax (where the baseline is the semimajor axis of Earth's orbit), distances to very distant planets and to stars are calculated. Incidentally, a parallax of one arcsecond corresponds to a distance of one parsec, and 1 pc = 206,265 astronomical units. Divide 206,265 arcseconds (one radian) by the value of the trigonometric parallax in arcseconds. The resulting quotient is the distance to the planet of interest in astronomical units.
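Both parallax formulas can be sketched directly. With Earth's radius in kilometers the horizontal-parallax result comes out in kilometers; the Moon's horizontal parallax of roughly 3,420 arcseconds is used as an illustrative input:

```python
ARCSEC_PER_RADIAN = 206_265
EARTH_RADIUS_KM = 6_370

def horizontal_parallax_distance_km(parallax_arcsec):
    """Small-angle distance: D = R_earth * 206265 / p, with p in arcseconds."""
    return EARTH_RADIUS_KM * ARCSEC_PER_RADIAN / parallax_arcsec

def annual_parallax_distance_pc(parallax_arcsec):
    """Annual (trigonometric) parallax: distance in parsecs is 1 / p."""
    return 1.0 / parallax_arcsec

moon_km = horizontal_parallax_distance_km(3420)   # about 384,000 km
```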
Finally, distances to planets can be calculated using Kepler's third law. The calculation is somewhat involved, so let us proceed to the final part. Square the orbital period of the planet around the Sun, expressed in years. Calculate the cube root of this value. The resulting number is the distance from the planet of interest to the Sun in astronomical units, i.e. the heliocentric distance. Knowing the heliocentric distance and the planet's elongation (its angular distance from the Sun), you can easily calculate the geocentric distance.
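The recipe just described (square the period in years, then take the cube root) is Kepler's third law in solar units, a^3 = T^2. A minimal sketch, with Mars's period of about 1.881 years as an illustrative input:

```python
def semimajor_axis_au(orbital_period_years):
    """Kepler's third law with a in AU and T in years: a = (T**2) ** (1/3)."""
    return (orbital_period_years ** 2) ** (1.0 / 3.0)

mars_au = semimajor_axis_au(1.881)   # about 1.52 AU
```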
COMP7404 Topic 3 Adversarial Search
COMP7404 Computational Intelligence and Machine Learning
Topic 3 Adversarial Search
A Multi-agent Competitive Environment
• Other agents are planning against us
• Goals are in conflict (not necessarily)
Game Definition
• A game can be defined as
□ s : States
□ s0: Initial state
□ Player(s) : Defines which player has the move
□ Actions(s) : Returns a set of legal moves
□ Result(s,a) : Defines the result of a move
□ TerminalTest(s) : True when game is over, false otherwise
□ Utility(s,p) : Defines the final numeric value for a game that ends in terminal state s for player p
• A game tree can be constructed
□ Nodes are game states and edges are moves
Tic-Tac-Toe Game Tree
Minimax Search
• A state-space search tree
• Players alternate turns
• Compute each node’s minimax value
□ the best achievable utility against a rational (optimal) adversary
• Will lead to optimal strategy
□ Best achievable payoff against best play
• Example
• Implementation
• Properties
□ Complete - Yes, if tree is finite
□ Optimal - In general no, yes against an optimal opponent
□ Time complexity - O(b^m)
□ Space complexity - O(bm)
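A minimal sketch of the minimax value computation. The encoding below (a leaf is a number giving its utility; an internal node is a list of children) is an assumption for illustration, standing in for the Player/Actions/Result/TerminalTest/Utility interface defined above:

```python
def minimax(node, maximizing=True):
    """Minimax value of a toy game tree.

    A leaf is a number (TerminalTest true, Utility = the number);
    an internal node is a list of child states. MAX and MIN alternate.
    """
    if isinstance(node, (int, float)):   # terminal test
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# The classic three-branch example: MAX to move, then three MIN nodes.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
root_value = minimax(tree)   # MIN nodes yield 3, 2, 2; MAX picks 3
```

Because this tree is finite the recursion always terminates, matching the completeness property noted above.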
Depth-Limit Search (DLS)
• A depth limit search (DLS)
□ Search only to a limited depth in the tree
□ Replace terminal utilities with an evaluation function for non-terminal positions
• Problems
□ Guarantee of optimal play is gone
□ Need to design evaluation function
• An evaluation function
□ An evaluation function Eval(s) scores non-terminals in depth-limited search
☆ An estimate of the expected utility of the game from a given position
□ Ideal function
☆ The actual minimax value of the position
☆ The performance of a game-playing program depends strongly on the quality of its evaluation function
𝛼-𝛽 Pruning Algorithm
• Min version
□ Consider Min’s value at some node n
□ n will decrease (or stay constant) while the descendants of n are examined
□ Let m be the best value that Max can get at any choice point along the current path from the root
□ If n becomes worse (<) than m
☆ Max will avoid it
☆ Stop considering n’s other children
• Max version is symmetric
• Properties
□ Pruning has no effect on the minimax value at the root
□ Values of intermediate nodes might be wrong
☆ Action selection not appropriate for this simple version of alpha-beta pruning
• Move ordering
□ The effectiveness of alpha-beta pruning is highly dependent on the order in which states are examined
□ It is worthwhile to try to examine first the successors that are likely best
☆ Examine only O(b^(m/2)) nodes to pick the best move, instead of O(b^m) for minimax
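The pruning rule can be sketched on an assumed toy encoding (leaves are utilities, lists are internal nodes). Alpha tracks the best value MAX is guaranteed so far along the path, beta the best for MIN, and a branch is cut as soon as alpha >= beta:

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Minimax with alpha-beta pruning; returns the same root value."""
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:        # MIN above will never allow this line
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:            # MAX above will never allow this line
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
```

On this tree the second MIN node is cut after seeing the leaf 2, since MAX already has 3 available at the root; the root value (3) is unchanged by pruning, consistent with the property noted above.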
Expectimax Search
• Values reflect average case outcomes, not worst case outcomes
• Expectimax search computes the expected score under optimal play
□ Max nodes as in minimax search
□ Chance nodes are like min nodes but the outcome is uncertain
□ Calculate their expected utilities
☆ i.e., take weighted average of children
• Expectiminimax
□ Environment is an extra “random agent” player that moves after each min/max agent
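A sketch of expectimax on the same kind of assumed encoding, with the simplifying assumption that every chance node's outcomes are uniformly likely (real chance nodes carry explicit probabilities):

```python
def expectimax(node, maximizing=True):
    """MAX nodes take the best child; chance nodes take the average."""
    if isinstance(node, (int, float)):
        return node
    values = [expectimax(child, not maximizing) for child in node]
    if maximizing:
        return max(values)
    return sum(values) / len(values)    # expected utility, uniform weights

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
value = expectimax(tree)   # chance values 23/3, 4, 7; MAX picks 23/3
```

Replacing min with an average is what makes the values reflect average-case rather than worst-case outcomes, as stated above.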
Multi-Agent Utilities
• Generalisation of minimax
□ Terminals and nodes have utility vectors
□ Each player maximizes its own component
□ Gives rise to cooperation and competition dynamically
Lesson 13
Is It a.m. or p.m.?
Warm-up: Choral Count: Count Around the Clock (10 minutes)
The purpose of this Choral Count is for students to practice counting by 5-minute intervals. This will be helpful later in this lesson when students tell time to the nearest 5 minutes. It is
important to note that after 3:55, the count switches to the next hour, 4:00, and begins again. Students may continue with 3:60. If this happens, use a demonstration clock to show the minute hand
moving around the clock as students count. Students have been telling time to the hour since grade 1 and will likely realize it is __ o’clock, not __:60. Students may also be unsure of what to say
for 4:05. Stop to discuss how students may have heard this time. Explain that we often say “O-5” when it is 5 minutes after the hour.
• “Count on by 5 minutes, starting at 3:45.”
• Record as students count.
• Stop counting and recording at 5:00.
• “Digital clocks represent time using digits. What patterns do you see?”
• 1–2 minutes: quiet think time
• Record responses.
Activity Synthesis
• “Where do you see a change in the pattern of counting by 5?” (After 55, we go back to 0, or o’clock, instead of 60.)
Activity 1: What is the Time of Day? (20 minutes)
The purpose of this activity is for students to make sense of a visual representation of the hours in 1 day. This visual is designed to help students see the hours that make up a.m. and p.m. Since
this is a linear representation, students might mention that the visual looks like a number line. It would be helpful to point out ways the 2 visuals are alike and ways they are different. For
example, students may notice that the same 12 hours are repeated in each part of the day, but numbers do not repeat on a number line. Students have opportunities to develop logical arguments for why
an event may happen during a.m. or p.m. hours and critique the arguments of others (MP3).
This activity uses MLR8 Discussion Supports. Advances: listening, conversing
Required Materials
Materials to Gather
Materials to Copy
Required Preparation
• Create the Hours in a Day Timeline to display to students in the launch.
• Label the representation as “1 day.”
• Groups of 2
• “Clare starts school at 8:00.”
• “Clare’s bedtime is 8:00.”
• “How could both of these statements be true?” (School starts in the morning and bedtime is at night. There is an 8:00 in the morning and another 8:00 at night.)
• 30 seconds: quiet think time
• 1 minute: partner discussion
• Share responses.
• Display a prepared timeline.
• “What do you notice? What do you wonder?” (It represents 1 day. Half is a.m. and half is p.m. Noon and midnight are labeled.)
• 1 minute: partner discussion
• Share responses.
• “Each day is broken up into 2 parts, called a.m. and p.m. We think of a.m. as morning and p.m. as afternoon and night.”
• Give each student a timeline, scissors, and access to glue.
• “Cut out the two parts of the day and glue them together. Circle and label when you eat breakfast, lunch, and dinner on the diagram. Then shade in all the times you might be sleeping.”
• 5 minutes: independent work time
• Share responses.
• “Now you are going to think about what part of a day different things might happen. Decide whether they would happen in the a.m. (morning) or p.m. (afternoon or night).”
• “When you make your choice, explain your thinking to your partner.”
MLR8 Discussion Supports
• Display sentence frames to support students when they discuss why an event would happen in the a.m. or p.m.:
□ “This would happen in the a.m./pm. because …”
□ “I agree because …”
□ “I disagree because …”
• 5 minutes: partner work time
Student Facing
1. Use the materials your teacher gives you to create your own representation for the hours in a day.
□ Circle and label when you eat breakfast, lunch, and dinner on the diagram.
□ Shade in when you might be sleeping.
2. Fill in the blank with a.m. or p.m. to show the time of day for each activity. Explain your thinking to your partner.
1. Diego goes to baseball practice at 3:00 __________.
2. Mai eats breakfast at 7:00 __________.
3. Tyler eats lunch at 12:00 __________.
4. Elena walks her dog at 2:00 __________.
5. Han gets on the bus to go to school at 8:00 __________.
6. The second-grade class has a snack at 10:00 __________.
Advancing Student Thinking
If students choose a.m. instead of p.m. or p.m. instead of a.m., consider asking:
• “Would this activity happen before or after noon?”
• “Would this activity happen in the morning, afternoon, or evening?”
Activity Synthesis
• “You had to decide if Elena walks her dog at 2:00 a.m. or p.m.”
• “Explain your reasoning for your answer.” (2:00 p.m. because 2 a.m. is in the middle of the night. Most people would not walk their dog in the middle of the night.)
• Point to the timeline display to show where 2:00 a.m. is on the diagram and explain that it is morning, but that we sleep during the early morning hours.
• “Since the hours repeat twice a day, we need to say a.m. or p.m. to be clear about the time we mean.”
Activity 2: Tell Time with a.m. and p.m. (15 minutes)
The purpose of this activity is for students to practice telling and writing time from an analog clock, using a.m. and p.m. Students are not expected to draw the hands on the clock precisely, but it
is important that they think about the relative position of the hour hand based on the hour and the minutes that have passed. When students explain whether the time is a.m. or p.m. and how they draw
the hour hand on the analog clock, they attend to precision (MP6).
Representation: Internalize Comprehension. Begin by having students recall the a.m. and p.m. linear representation from Activity 1 where breakfast, lunch and dinner were marked, and sleep time was
shaded. Allow this to be used as a reference for students in this activity.
Supports accessibility for: Conceptual Processing, Memory
• “We have been looking at analog clocks and telling time based on where the hands are on the clock.”
• “Now you are going to label activities with a.m. or p.m. Then draw a line to the digital clock that shows the time the activity could take place.”
• “Then you will draw the hands on the clock to show the same time as the digital clock.”
• “Use each clock only once.”
• 3 minutes: independent work time
• 5 minutes: partner work time
Student Facing
• Label each activity with a.m. or p.m.
• Draw a line to the time when the activity could take place.
• Draw the hands on the clock to show the time.
do homework ____________
get ready for bed ____________
eat lunch ____________
on the way to school ____________
in bed sleeping ____________
Advancing Student Thinking
If students do not explain their choices to their partner or give feedback on how they show the time, consider asking:
• “Do you agree that this activity would happen in an a.m. time or p.m. time? Why or why not?”
• “Do you agree or disagree with how your partner drew the hour and minute hand? Explain.”
• “Do you have any suggestions for how your partner could draw the minute and hour hands to make it easier to read the time?”
Activity Synthesis
• Invite students to share whether each activity would be a.m. or p.m.
• Invite students to share the hour they chose and how they showed the time on the analog clock.
• Consider asking:
□ “Why did you choose this time?”
□ “How did you decide where to draw the minute hand?”
□ “How did you decide where to draw the hour hand?”
Lesson Synthesis
“Today we learned that the hours in a day are split into 2 groups called a.m. and p.m. We learned that a.m. is usually thought of as morning and p.m. is thought of as afternoon and night.”
• wake up
• eat lunch
• read a book before bed
• brush teeth
“Tell your partner what time you might do each of these activities. Include a.m. or p.m. with the time.”
Share responses.
Cool-down: Represent the Time (5 minutes)
Student Facing
In this section, we learned to read clocks to tell and write time to the nearest 5 minutes. By counting by 5 starting at the number 1, we can tell the time in hours and minutes. We can also use half
past, quarter past, or quarter till to tell time when the minute hand is in certain positions. To show the time of day, we use a.m. and p.m. when we tell and write time.
16.6 Uniform Circular Motion and Simple Harmonic Motion
Learning Objectives
By the end of this section, you will be able to do the following:
• Compare simple harmonic motion with uniform circular motion
There is an easy way to produce simple harmonic motion by using uniform circular motion. Figure 16.18 shows one way of using this method. A ball is attached to a uniformly rotating vertical turntable, and its shadow is projected on the floor as shown. The shadow undergoes simple harmonic motion. Hooke’s law usually describes uniform circular motions (ω constant) rather than systems that have large visible displacements. So observing the projection of uniform circular motion, as in Figure 16.18, is often easier than observing a precise large-scale simple harmonic oscillator. If studied in sufficient depth, simple harmonic motion produced in this manner can give considerable insight into many aspects of oscillations and waves and is very useful mathematically.
In our brief treatment, we shall indicate some of the major features of this relationship and how they might be useful.
Figure 16.19 shows the basic relationship between uniform circular motion and simple harmonic motion. The point P travels around the circle at constant angular velocity ω. The point P is analogous to an object on the merry-go-round. The projection of the position of P onto a fixed axis undergoes simple harmonic motion and is analogous to the shadow of the object. At the time shown in the figure, the projection has position x and moves to the left with velocity v. The velocity of the point P around the circle equals v_max. The projection of v_max on the x-axis is the velocity v of the simple harmonic motion along the x-axis.
To see that the projection undergoes simple harmonic motion, note that its position x is given by
16.48  x = X cos θ,
where θ = ωt, ω is the constant angular velocity, and X is the radius of the circular path. Thus,
16.49  x = X cos ωt.
The angular velocity ω is in radians per unit time; in this case 2π radians are covered in the time for one revolution, T. That is, ω = 2π/T. Substituting this expression for ω, we see that the position x is given by
16.50  x(t) = X cos(2πt/T).
This expression is the same one we had for the position of a simple harmonic oscillator in Simple Harmonic Motion: A Special Periodic Motion. If we make a graph of position versus time as in Figure 16.20, we see again the wavelike character (typical of simple harmonic motion) of the projection of uniform circular motion onto the x-axis.
Now let us use Figure 16.19 to do some further analysis of uniform circular motion as it relates to simple harmonic motion. The triangle formed by the velocities in the figure and the triangle formed by the displacements (X, x, and √(X² - x²)) are similar right triangles. Taking ratios of similar sides, we see that
16.51  v/v_max = √(X² - x²)/X = √(1 - x²/X²).
We can solve this equation for the speed v or
16.52  v = v_max √(1 - x²/X²).
This expression for the speed of a simple harmonic oscillator is exactly the same as the equation obtained from conservation of energy considerations in Energy and the Simple Harmonic Oscillator. You can begin to see that it is possible to get all of the characteristics of simple harmonic motion from an analysis of the projection of uniform circular motion.
Finally, let us consider the period T of the motion of the projection. This period is the time it takes the point P to complete one revolution. That time is the circumference of the circle 2πX divided by the velocity around the circle, v_max. Thus, the period T is
16.53  T = 2πX/v_max.
We know from conservation of energy considerations that
16.54  v_max = √(k/m) X.
Solving this equation for X/v_max gives
16.55  X/v_max = √(m/k).
Substituting this expression into the equation for T yields
16.56  T = 2π√(m/k).
Thus, the period of the motion is the same as for a simple harmonic oscillator. We have determined the period for any simple harmonic oscillator using the relationship between uniform circular motion and simple harmonic motion.
Some modules occasionally refer to the connection between uniform circular motion and simple harmonic motion. Moreover, if you carry your study of physics and its applications to greater depths, you will find this relationship useful. It can, for example, help to analyze how waves add when they are superimposed.
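As a numerical sketch of equations 16.50 and 16.52, here are both formulas in a few lines; the amplitude X = 0.5 m and period T = 2.0 s are arbitrary illustrative values:

```python
import math

X = 0.5   # amplitude (radius of the circular path), metres; illustrative
T = 2.0   # period of one revolution, seconds; illustrative

def shm_position(t):
    """Equation 16.50: x(t) = X cos(2*pi*t/T)."""
    return X * math.cos(2 * math.pi * t / T)

def shm_speed(x):
    """Equation 16.52: v = v_max * sqrt(1 - x**2/X**2), v_max = 2*pi*X/T."""
    v_max = 2 * math.pi * X / T
    return v_max * math.sqrt(1 - (x / X) ** 2)
```

The speed comes out zero at x = ±X and maximal at x = 0, consistent with the energy argument referenced above.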
Check Your Understanding
Identify an object that undergoes uniform circular motion. Describe how you could trace the simple harmonic motion of this object as a wave.
A record player undergoes uniform circular motion. You could attach dowel rod to one point on the outside edge of the turntable and attach a pen to the other end of the dowel. As the record player
turns, the pen will move. You can drag a long piece of paper under the pen, capturing its motion as a wave.
Winding path leads to fluid career - Deixis Online
Paul Fischer can’t remember a time when he wasn’t interested in aeronautical and mechanical engineering. His passion for solving seemingly unsolvable problems came just a bit later.
Fischer, now a computational scientist with the Mathematics and Computer Science Division at Argonne National Laboratory, connects that interest to an early fascination with the Apollo space program.
“I remember when I was eight years old, getting up to watch Apollo 8 take off,” he says.
That carried over to Ithaca High School, in the shadow of Cornell University in upstate New York, when “I started writing code to solve different equations. I decided it was easier to let the
computer do the math for me than to do it myself.”
Fischer, 50, says he was lucky to attend a high school with a strong science program. “Given that I knew I really wanted to do aeronautics, I focused on math and physics. I just loaded up on those subjects.”
He still tells budding scientists that “when you’re a student, take as many courses as you can. Don’t sell yourself short. Latch onto opportunities to take courses, as many as you can in the core subjects.”
As an undergraduate at Cornell, Fischer gravitated toward mechanical engineering when it became clear to him the field was more stable than the aerospace industry. He became interested in both solid
and fluid mechanics, but it was hard to decide which to specialize in. A roommate finally told him to choose whichever is harder.
“But I went into fluids, which is actually easier,” Fischer says, though not everyone would agree.
He took several graduate-level mechanical engineering courses his senior year, then went to Stanford University for a master’s, focusing on computational fluid mechanics.
“I knew I enjoyed that,” he says. “But I really wanted to get into the mathematical side of mechanical engineering – and also the software and algorithm side.”
Fischer worked for three years on the design of gas bearings for disc drives. He did computational experiments, but he says, “I was always interested in the companion validation of the experiments.
It’s the only way you know you’re actually doing the right thing.”
At the Massachusetts Institute of Technology, Fischer earned a doctoral degree in mechanical engineering with a dissertation on developing code for high-performance parallel computers.
He started moving into applied mathematics because he knew he was going to write software to simulate physical phenomena. “It’s extremely beneficial to have a strong math background” for those
endeavors, Fischer says. His Ph.D. adviser was an applied mathematician, he did a postdoctoral fellowship in applied mathematics and he taught the subject before coming to Argonne.
“It’s essential for writing advanced simulation codes and for understanding when you can prove the correctness of your code.”
Just as physics isn’t enough without the math, when writing complex codes, “the math in and of itself isn’t sufficient,” Fischer says. “There are subtle things associated with boundary conditions
that need a deep understanding of the physics involved.”
For example, electromagnetic equations are fairly simple, but their boundary conditions are not. “I can write an electromagnetic code that solves for trivial boundary conditions, but for more complex
boundary conditions, you need to understand the physics.”
Fischer was the first recipient of the Computational Research Postdoctoral Fellowship at Cal Tech, which was a hopping place for parallel computing at the time. He then won the 1999 Gordon Bell Prize
for scaling to 4,096 processors with a simulation code. “It was really a recognition of scalable algorithms.”
His team at Argonne has won several Department of Energy Innovative and Novel Computational Impact on Theory and Experiment (INCITE) awards, earning time on the most powerful computers in the world
to work on astrophysical problems. In 2006, he won the first external science award, which got him 3 million hours of processing time. “That’s not too many hours now, but it was back then.”
Bill Scanlon was a reporter at the Rocky Mountain News until its closing in February 2009.
What Are Intervals In Music?
Music, often described as the universal language, is composed of intricate patterns, rhythms, and harmonies. At the heart of these harmonies lies the concept of intervals, the spaces between notes
that give music its depth and character.
Intervals in music refer to the distance between two pitches or notes. They are the building blocks of melodies, chords, and harmonies, shaping the emotional and tonal landscape of a piece.
Understanding intervals helps musicians recognize patterns, transpose music, and build chords and scales.
Melodic Intervals
Melodic intervals refer to the distance between two pitches or notes that are played successively, one after the other, rather than simultaneously. They describe the relationship between two notes in
a melody when they are played in sequence.
Harmonic Intervals
Harmonic intervals refer to the distance between two pitches or notes that are played simultaneously, at the same time. They describe the relationship between two notes when they are sounded
together, creating harmony.
Interval Naming
1. Unison (Perfect Prime): This is when two notes are the same. For example, playing two middle C’s on a piano at the same time.
2. Minor 2nd: This is the smallest distance between two different notes. For example, the distance between C and C# or D and D♭.
3. Major 2nd: This is equivalent to two half steps or one whole step. For example, the distance between C and D.
4. Minor 3rd: This interval spans three half steps. For example, the distance between C and E♭.
5. Major 3rd: This interval spans four half steps. For example, the distance between C and E.
6. Perfect 4th: This interval spans five half steps. For example, the distance between C and F.
7. Tritone: This interval spans six half steps, exactly half of an octave. It is sometimes called an augmented fourth or diminished fifth.
8. Perfect 5th: This interval spans seven half steps. For example, the distance between C and G.
9. Minor 6th: This interval spans eight half steps. For example, the distance between C and A♭.
10. Major 6th: This interval spans nine half steps. For example, the distance between C and A.
11. Minor 7th: This interval spans ten half steps. For example, the distance between C and B♭.
12. Major 7th: This interval spans eleven half steps. For example, the distance between C and B.
13. Octave (Perfect 8th): This interval spans twelve half steps and is the distance between two notes with the same name. For example, the distance between C and the next C above it.
Intervals can also be augmented (increased by a half step) or diminished (decreased by a half step). A diminished 5th is one half step smaller than a perfect 5th, and an augmented 4th is one half
step larger than a perfect 4th.
| Number of semitones | Minor, major, or perfect intervals | Shorthand | Augmented or diminished intervals | Shorthand | Alternative names |
| --- | --- | --- | --- | --- | --- |
| 0 | Perfect unison | P1 | Diminished second | d2 | |
| 1 | Minor second | m2 | Augmented unison | A1 | Semitone, half tone, half step |
| 2 | Major second | M2 | Diminished third | d3 | Tone, whole tone, whole step |
| 3 | Minor third | m3 | Augmented second | A2 | |
| 4 | Major third | M3 | Diminished fourth | d4 | |
| 5 | Perfect fourth | P4 | Augmented third | A3 | |
| 6 | | | Diminished fifth / Augmented fourth | d5 / A4 | Tritone |
| 7 | Perfect fifth | P5 | Diminished sixth | d6 | |
| 8 | Minor sixth | m6 | Augmented fifth | A5 | |
| 9 | Major sixth | M6 | Diminished seventh | d7 | |
| 10 | Minor seventh | m7 | Augmented sixth | A6 | |
| 11 | Major seventh | M7 | Diminished octave | d8 | |
| 12 | Perfect octave | P8 | Augmented seventh | A7 | |
Courtesy of Wikipedia
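The semitone-to-name mapping above is easy to encode. Here is a minimal Python sketch of a lookup table; the dictionary and function names are my own, not from any particular library:

```python
# Common name and shorthand for each interval size from 0 to 12 semitones.
INTERVAL_NAMES = {
    0: ("Perfect unison", "P1"),
    1: ("Minor second", "m2"),
    2: ("Major second", "M2"),
    3: ("Minor third", "m3"),
    4: ("Major third", "M3"),
    5: ("Perfect fourth", "P4"),
    6: ("Tritone", "A4/d5"),
    7: ("Perfect fifth", "P5"),
    8: ("Minor sixth", "m6"),
    9: ("Major sixth", "M6"),
    10: ("Minor seventh", "m7"),
    11: ("Major seventh", "M7"),
    12: ("Perfect octave", "P8"),
}

def interval_name(semitones: int) -> str:
    """Return the common name of an interval given its size in half steps (0-12)."""
    name, shorthand = INTERVAL_NAMES[semitones]
    return f"{name} ({shorthand})"

print(interval_name(7))   # Perfect fifth (P5)
print(interval_name(4))   # Major third (M3)
```

Note that the table collapses each enharmonic pair (e.g. augmented fourth and diminished fifth) to a single entry; a fuller model would also track the spelled letter names.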
In addition to their technical definitions, intervals also have distinct sonic qualities.
Major intervals tend to sound happy or bright, while minor intervals often sound sad or dark. Perfect intervals like the 4th and 5th have a stable sound, while augmented and diminished intervals can
sound tense or dissonant.
How Do You Identify An Interval?
Identifying musical intervals involves determining the distance between two notes. Here’s a step-by-step guide to help you identify intervals:
1. Start with the Note Names:
First, identify the names of the two notes. If you have C and E, you know the interval starts on C and goes up to E.
2. Count the Letter Names:
Count the starting note as “1” and then count up to the second note. Using the C and E example, you would count: C(1), D(2), E(3). This tells you it’s some type of 3rd.
3. Determine the Number of Half Steps:
Count the number of half steps (semitones) between the two notes. On a keyboard, this means counting every key (including both white and black keys) between the two notes, but not including the
starting note.
For C to E: C to C# (1), C# to D (2), D to D# (3), D# to E (4). So, there are 4 half steps between C and E.
4. Match the Number of Half Steps to the Interval Name:
Using the number of half steps you’ve counted, you can determine the specific type of interval:
1 half step = Minor 2nd
2 half steps = Major 2nd
3 half steps = Minor 3rd
4 half steps = Major 3rd
5. Consider Augmented and Diminished Intervals:
If an interval is one half step larger than a major or perfect interval, it’s augmented.
If it’s one half step smaller than a minor or perfect interval, it’s diminished.
6. Use Mnemonics and Songs:
Many musicians use familiar songs to help identify intervals.
• Minor 2nd: The first two notes of “Jaws” theme.
• Major 2nd: The first two notes of “Happy Birthday.”
• Perfect 4th: The beginning of “Here Comes the Bride.”
• Perfect 5th: The first two notes of “Star Wars” theme.
These song references can vary based on individual experiences and cultural context, so it’s helpful to find songs that resonate with you.
7. Practice:
Like any skill, interval identification improves with practice. Use ear-training apps and websites, or work with a music teacher to practice listening to and identifying intervals.
Also, practice on your instrument: learn to play intervals across the guitar fretboard and to find them on the piano.
8. Context Matters:
The sound of an interval can be affected by the context in which it’s heard. For example, a minor 6th might sound different in the context of one chord progression compared to another.
By combining these techniques and practicing regularly, you’ll become more proficient at identifying musical intervals by both sight and ear.
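Steps 1–4 above can be automated directly: count letter names for the generic number, and count semitones for the size. A minimal Python sketch, assuming note spellings of one letter plus optional "#" or "b" (all names here are my own):

```python
# Semitone position of each natural letter within the octave (C = 0).
NATURAL_SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}
LETTERS = "CDEFGAB"

def pitch_class(note: str) -> int:
    """Semitone number 0-11 for a note such as 'C', 'F#', or 'Bb'."""
    semis = NATURAL_SEMITONES[note[0]]
    semis += note.count("#") - note.count("b")   # each sharp +1, each flat -1
    return semis % 12

def interval_size(low: str, high: str) -> tuple:
    """Counting upward from low to high, return (generic number, half steps)."""
    # Step 2: count the letter names, calling the starting note "1".
    number = (LETTERS.index(high[0]) - LETTERS.index(low[0])) % 7 + 1
    # Step 3: count every half step upward, excluding the starting note.
    semitones = (pitch_class(high) - pitch_class(low)) % 12
    return number, semitones

print(interval_size("C", "E"))   # (3, 4): some kind of 3rd spanning 4 half steps, i.e. a Major 3rd
```

Pairing the result against the interval table then gives the full name, e.g. (3, 4) is a Major 3rd while (3, 3) is a Minor 3rd.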
Music Interval Calculators
Here are some online music interval calculators that you can use:
• Omni Calculator: The music interval calculator on Omni Calculator determines the interval between two notes.
• muted.io: This is a simple online musical interval calculator. You can select a low and a high note to get the interval name and the number of semitones between the two notes.
• CalcTool: The music interval calculator on CalcTool allows you to easily determine the interval between two given notes.
What Is The Interval From F To C?
The interval from F to C is a Perfect 5th.
Here’s how you can figure it out:
1. Count the Letter Names:
Start with F as “1” and count up to C: F(1), G(2), A(3), B(4), C(5). This tells you it’s some type of 5th.
2. Determine the Number of Half Steps:
Count the number of half steps (semitones) between the two notes:
• F to F# or G♭ = 1 half step
• F# or G♭ to G = 1 half step
• G to G# or A♭ = 1 half step
• G# or A♭ to A = 1 half step
• A to A# or B♭ = 1 half step
• A# or B♭ to B = 1 half step
• B to C = 1 half step
In total, there are 7 half steps between F and C.
3. Match the Number of Half Steps to the Interval Name:
A Perfect 5th spans 7 half steps.
Therefore, the interval from F to C is a Perfect 5th.
What Is The Interval Of D To G?
The interval from D to G is a Perfect 4th.
Here’s how you can figure it out:
1. Count the Letter Names:
Start with D as “1” and count up to G: D(1), E(2), F(3), G(4). This tells you it’s some type of 4th.
2. Determine the Number of Half Steps:
Count the number of half steps (semitones) between the two notes:
• D to D# or E♭ = 1 half step
• D# or E♭ to E = 1 half step
• E to F = 1 half step
• F to F# or G♭ = 1 half step
• F# or G♭ to G = 1 half step
In total, there are 5 half steps between D and G.
3. Match the Number of Half Steps to the Interval Name:
A Perfect 4th spans 5 half steps.
Therefore, the interval from D to G is a Perfect 4th.
What Is The Interval Of F To B?
The interval from F to B is an Augmented 4th (often referred to as a “Tritone”).
Here’s how you can figure it out:
1. Count the Letter Names:
Start with F as “1” and count up to B: F(1), G(2), A(3), B(4). This tells you it’s some type of 4th.
2. Determine the Number of Half Steps:
Count the number of half steps (semitones) between the two notes:
• F to F# or G♭ = 1 half step
• F# or G♭ to G = 1 half step
• G to G# or A♭ = 1 half step
• G# or A♭ to A = 1 half step
• A to A# or B♭ = 1 half step
• A# or B♭ to B = 1 half step
In total, there are 6 half steps between F and B.
3. Match the Number of Half Steps to the Interval Name:
A Perfect 4th spans 5 half steps. However, since F to B spans 6 half steps, it is one half step larger than a Perfect 4th, making it an Augmented 4th.
Therefore, the interval from F to B is an Augmented 4th or Tritone.
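All three worked examples follow the same pattern: get the generic number from the letter names, get the size in half steps, then compare against the expected perfect or major size. A hedged Python sketch, restricted to natural (unaltered) notes, with names of my own choosing:

```python
# Semitone position of each natural letter within the octave (C = 0).
NATURAL_SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}
LETTERS = "CDEFGAB"

# Reference sizes in half steps: perfect for unison/4th/5th, major for the rest.
PERFECT = {1: 0, 4: 5, 5: 7}
MAJOR = {2: 2, 3: 4, 6: 9, 7: 11}

def name_interval(low: str, high: str) -> str:
    """Name the ascending interval between two natural notes, e.g. 'F' to 'C'."""
    number = (LETTERS.index(high) - LETTERS.index(low)) % 7 + 1
    semitones = (NATURAL_SEMITONES[high] - NATURAL_SEMITONES[low]) % 12
    if number in PERFECT:
        quality = {0: "Perfect", 1: "Augmented", -1: "Diminished"}[semitones - PERFECT[number]]
    else:
        quality = {0: "Major", -1: "Minor", 1: "Augmented", -2: "Diminished"}[semitones - MAJOR[number]]
    return f"{quality} {number}"

print(name_interval("F", "C"))   # Perfect 5
print(name_interval("D", "G"))   # Perfect 4
print(name_interval("F", "B"))   # Augmented 4
```

The F-to-B case falls out naturally: a 4th spanning 6 half steps is one half step larger than the perfect size of 5, so the quality table returns "Augmented".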
What Is The Difference Between a Chord And an Interval?
Both chords and intervals are fundamental concepts in music theory, but they refer to different concepts.
While both intervals and chords deal with the relationship between notes, intervals focus on the distance between two specific notes, whereas chords involve the simultaneous sounding of two or more
notes to create harmony.
• Definition
□ Interval: An interval is the distance between two pitches or notes. It describes the relationship between two notes in terms of how many letter names they encompass and how many half steps
(semitones) separate them. For example, the distance between C and E is a major 3rd.
□ Chord: A chord is a combination of two or more pitches sounded simultaneously. It’s a harmonic unit in music. The combination of C, E, and G played together forms a C major chord.
Technically, a harmonic interval is a type of chord.
• Number of Notes
□ Interval: Always involves two notes, either played successively (melodic interval) or simultaneously (harmonic interval).
□ Chord: Involves two or more notes played simultaneously.
• Function
□ Interval: Describes the distance and relationship between two notes. Intervals are the building blocks of scales, chords, and melodies.
□ Chord: Creates harmony in music. Chords provide depth to melodies and play a significant role in establishing the tonality and mood of a piece.
• Types
□ Interval: Intervals can be minor, major, perfect, augmented, or diminished. Examples include minor 3rd, perfect 5th, and augmented 4th.
□ Chord: Chords can be major, minor, augmented, diminished, 7th, 9th, 11th, 13th, and many other types. Examples include C major, D minor, and G7.
• Usage
□ Interval: Intervals are foundational in understanding the structure of scales, chords, and melodies. They are also crucial for tasks like transposing music.
□ Chord: Chords are used in accompaniments, songwriting, and compositions to create harmonic progressions and support melodies.
Go to next lesson: Enharmonics, Pitch, and Sound Intensity (Volume)
Go to previous lesson: What are Music Scales?
Back to: Module 1
Additional Resources: Wiki – Intervals
Open Journal of Biophysics
Vol.4 No.3(2014), Article ID:48417,7 pages DOI:10.4236/ojbiphy.2014.43011
Nonlinear Polarizability of Erythrocytes in Non-Uniform Alternating Electric Field
Konstantin V. Generalov^1, Vladimir M. Generalov^1, Alexander S. Safatov^1, Alexander G. Durymanov^1, Galins A. Buryak^1, Margarita V. Kruchinina^2, Mikhail I. Voevoda^2, Andrey A. Gromov^2
^1Federal Budget Research Institution State Research Center of Virology and Biotechnology Vector, Koltsovo, Novosibirsk Region, Russian Federation
^2Federal State Budgetary Institution of Internal and Preventive Medicine Siberian Branch under the Russian Academy of Medical Sciences, Novosibirsk, Russian Federation
Email: general@vector.nsc.ru
Copyright © 2014 by authors and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY).
Received 26 May 2014; revised 25 June 2014; accepted 24 July 2014
Nonlinear polarizability of erythrocytes in non-uniform alternating electric field (NUAEF) was proved theoretically and experimentally by dielectrophoresis method. The paper presents experimental
evidence of the nonlinear polarizability of erythrocytes in the non-uniform alternating electric field. The rotation of an erythrocyte around its own axis at more than one revolution per second was observed in the non-uniform alternating electric field over the studied frequency range.
Keywords:Dielectrophoresis, Polarizability, Rotation, Erythrocyte, Nonlinearity
1. Introduction
The study of polarization and deformation of erythrocytes is an urgent problem in the diagnosis of some diseases. The above characteristics are interrelated in their reaction to practically any
pathological process in the organism [1] -[4] . The polarization of a cell in an external electric field is accompanied by the displacement of its electric charges relative to the equilibrium
position, the formation of an induced dipole moment and, as a result, the overall deformation of the total cell volume [1] [5] . In turn, the deformability of erythrocytes also depends on their
viscoelastic properties i.e. total rigidity and viscosity [6] . The deformation of an erythrocyte is obviously limited by its own finite mass, and the displacement of electric charges relative to the
equilibrium position is limited by their electrostatic repulsion in the cell closed volume. Thus, these limitations create conditions for the nonlinear polarization of erythrocytes in an external
electric field.
The aim of the work was to study the nonlinear polarizability of erythrocytes in NUAEF with an intensity ~
2. Materials and Methods
Human erythrocytes obtained from whole blood drawn from the donor’s vein were used in the study. To conduct the dielectrophoresis analysis, 2 ml of blood were collected with vacutainers in 3.7%
citrate buffer at a ratio of 9:1. Immediately before the experiment, 10 μl of blood were diluted to a cell concentration of ~10^7 cm^−3. Blood collection from donors was performed with the approval of the Biomedical Ethics
Committee of the Federal Budget Research Institution Research Institute of Therapy, Siberian Branch of Russian Academy of Medical Sciences (Protocol # 36 of the meeting of September 18, 2012).
Experiments were performed in a measuring cell where NUAEF was created. Detailed description of the measuring cell and the laboratory device as a whole is presented in [6] . Measurements were carried
out in the frequency range
Video monitoring and recording of the speed of erythrocyte rotation around its own axis were carried by the position of a typical natural reference point on its surface. The cell turnover period was
measured using an electronic clock built into the computer.
3. Results and Discussion
3.1. Experimental Part
Experimental observations demonstrated a slow rotation of erythrocytes around their own axes, with a rotation frequency that varied across the studied frequency range. Figure 1 shows the dynamics of rotation of a selected individual cell. Measurements showed that its rotation frequency was ~
Figure 1. The dynamics of erythrocyte rotation around its own axis in non-uniform alternating electric field. The arrow shows the position of the natural reference point on the cell membrane
monitored during the rotation process.
3.2. Theoretical Justification
The external electric field []directed against the external one,[5] [9] . In NUAEF, the cell dipole is influenced by the time-averaged force vector, which makes the cell move [10] .
The vector of the cell electric field intensity []
The superposition of two harmonic oscillations
The condition
The nonlinear polarization of a cell requires the condition
The typical value of the erythrocyte transmembrane potential is about [11] . This potential creates on the membrane a potential barrier with the electric field intensity ~^1 (ion) should possess the
energy required for overcoming the total potential barrier of the cell membrane
The density of positive charges capable of overcoming the above barrier is described by Expression [5] .
[5] ;
The exponential function (4) can be determined using the Maclaurin series, which converges at any] .
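For reference, the Maclaurin series of the exponential function invoked here is standard and converges for every real argument:

```latex
e^{x} \;=\; \sum_{n=0}^{\infty} \frac{x^{n}}{n!}
      \;=\; 1 + x + \frac{x^{2}}{2!} + \frac{x^{3}}{3!} + \cdots ,
\qquad \text{for all } x \in \mathbb{R}.
```

Keeping only the first few summands of this series is what produces the linear and quadratic current components discussed in connection with Equation (7).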
If the summand
The interrelations between Expressions (1, 3, 4, 5) allow us to consider the cell polarization process to be nonlinear, too. From the mathematical point of view, nonlinearity in Equation (5) emerges
From Equation (6), the calculated values Figure 2, Figure 3. By further simplifying Equation (6), let us reduce it to.
From the consideration of the first three summands of Equation (7) it follows that the alternating current of the membrane contains only the linear and the quadratic components
Taking into account that
The analysis of Expression (8) shows that the total current through the cell membrane is determined by the combination of individual harmonic components with frequencies [ ]and
The linear model of polarization of the medium and the cell in the external alternating electric field is relatively simple. In each point, the electric field induces in their volume the dipoles with
the harmonic frequency[ ]of the external field is transferred to the second and higher harmonics, and there also emerge multiple combinations between them [11] .
The analysis of the presented known trigonometric expressions also shows that the member of the series with the serial number Figure 2, Figure 3, the serial number
Experimental data and the conducted theoretical analysis of the interaction between the cell and alternating electric field suggest that under the study conditions the erythrocyte can be presented as
a nonlinear element whose membrane permeability for positive ions is higher in one direction than in the reverse direction. This is consistent with the selective permeability of the membrane, for
example, for potassium ions ] known from literature. Interestingly, the ] . The special case of the nonlinear equivalent electric circuit of the cell for positive charges is shown in Figure 4 where
the membrane is presented by a diode with nonlinear resistance and capacity
The conducted work allowed us to draw the following conclusions.
Figure 4. The nonlinear equivalent electric circuit of the cell for positive charges.
4. Conclusions
1) The nonlinear polarizability of human erythrocytes is observed in non-uniform alternating electric field with the intensity
2) The nonlinear polarizability of erythrocytes in non-uniform alternating electric field causes their rotation around their own axes with the frequency exceeding
3) The external harmonic electric field affecting the cell is created in the cell cytoplasm in the form of a nonlinear uniform^2 field with a constant component and a broad frequency range due to the
electric properties of the cell membrane.
4) The alternating electric field from the donor erythrocyte with the amplitude exceeding
1. Zinchuk, V.V. (2001) Erythrocyte Deformability: Physiological Aspects of Progress in Physiological Sciences. Advances in Physiological Sciences, 32, 66-78 (in Russian).
2. Torkhovskaya, T.I., Artemona, L.G., Khodzhakuliev, B.G., Rudenko, T.S., Polessky, V.A. and Azizova, O.A. (1980) Structural and Functional Changes in Erythrocyte Membranes at Experimental
Atherosclerosis. Bulletin of Experimental Biology and Medicine, 89, 675-678 (in Russian). http://dx.doi.org/10.1007/BF00836241
3. Kruchinina, M.V., Kurilovich, S.A., Parulikova, M.V., Bakirov, T.S., Generalov, V.M., Pak, A.V. and Zvolskiy, I.L. (2005) Electric and Viscoelastic Properties of Erythrocytes of Patients with
Diffuse Pathology of the Liver. Proceeding of the Academy of Sciences, 401, 701-704 (in Russian).
4. Kurilovich, S.A., Kruchinina, M.V., Gromov, A.A., Generalov, V.M., Bakirov, T.S., Rikhter, V.A. and Semenov, D.V. (2010) Justification of the Use of Essential Phospholipids at Chronic Liver
Diseases: The Dynamics of Electric and Viscoelastic Parameters of Erythrocytes. Experimental and Clinical Gastroenterology, 11, 46-52 (in Russian).
5. Feynman, R., Leighton, R. and Sands, M. (1977) The Feynman Lectures on Physics: Electricity and Magnetism. Mir, Moscow (in Russian).
6. Bakirov, T.S., Generalov, V.M. and Toporkov, V.S. (1998) The Measurement of Viscoelastic Properties of a Cell Using the Non-Uniform Alternating Electric Field. Biotechnology, 5, 88-96 (in Russian).
7. Vorobiev, N.N. (1979) The Theory of Series. Nauka, Moscow, 408 (in Russian).
8. Generalov, V.M., Bakirov, T.S., Pak, A.V., Zvolskiy, I.L., Zaitsev, B.N., Durymanov, A.G., Kruchinina, M.V., Kurilovich, S.A. and Sergeev, A.N. (2008) The Automated Device for Measurement of
Viscoelastic Properties of Erythrocytes. High Technologies, 9, 28-33 (in Russian).
9. Landau, L.D. and Lifshits, E.M. (1982) Theoretical Physics, Vol. 8: Electrodynamics of Continuous Media. 2nd Edition, Nauka, Moscow (in Russian).
10. Hughes, M.P. (2003) Nanoelectromechanics in Engineering and Biology. CRC PRESS, Boca Raton.
11. Gelfand, I.M., Lvovsky, S.M. and Toom, A.L. (2002) Trigonometry. MCCME, Moscow (in Russian).
12. Iost, Kh. (1975) Cell Physiology. Mir., Moscow (in Russian).
13. Lebedev, A.I. (2008) Physics of Semiconductor Devices. Physmathlit, Moscow (in Russian).
^1We will consider only positive charges, which move against the forces of electric field formed by the transmembrane potential.
^2Landau L.D., Lifshits E.M. Theoretical Physics. V. 8. Electrodynamics of continuous media. 2^nd edition, revised and expanded. Moscow: Nauka. 1982. 620 p.