content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Technics Grand Class SU-GX70 Streaming Stereo Receiver Measurements
Link: reviewed by Dennis Burger on SoundStage! Access on February 1, 2024
General Information
All measurements taken using an Audio Precision APx555 B Series analyzer.
The Technics Grand Class SU-GX70 was conditioned for one hour at 1/8th full rated power (~5W into 8 ohms) before any measurements were taken. All measurements were taken with both channels driven,
using a 120V/20A dedicated circuit, unless otherwise stated.
The SU-GX70 offers two line-level analog inputs (RCA), one moving-magnet (MM) phono input (RCA), one pair of preamp outputs (RCA), one digital coaxial (RCA) and two optical (TosLink) S/PDIF inputs,
one USB digital inputs, two pairs of speaker-level outputs and one headphone output over 1/4″ TRS connector. For the purposes of these measurements, the following inputs were evaluated: digital
coaxial, analog line-level, as well as phono.
The SU-GX70 is a sophisticated device that digitizes all incoming signals and can apply DSP for various functions. An “initialization” was performed before any measurements were made, to ensure that
any room EQ DSP had been cleared. Unless otherwise stated, Pure Amplification was turned on, MQA off, and LAPC off, although comparisons between the on and off effects of these functions can be seen
in this report.
Most measurements were made with a 2Vrms line-level analog input, 5mVrms MM input and 0dBFS digital input. The volume control is variable from -99dB to 0dB. The signal-to-noise (SNR) measurements
were made with the default input signal values but with the volume set to achieve the rated output power of 40W (8 ohms). For comparison, on the line-level input, a SNR measurement was also made with
the volume at maximum.
Based on the high accuracy and repeatability of the left/right volume channel matching (see table below), the SU-GX70 volume control operates in the digital domain. The SU-GX70 offers 1dB volume
steps ranging from -99dB to -54dB, then 0.5dB steps from -53.5dB to 0dB. Overall range is -59.3dB to +39.6dB (line-level input, speaker output).
Because the SU-GX70 is a digital amplifier technology that exhibits considerable noise above 20kHz (see FFTs below), our typical input bandwidth filter setting of 10Hz-90kHz was necessarily changed
to 10Hz-22.4kHz for all measurements, except for frequency response and for FFTs. In addition, THD versus frequency sweeps were limited to 6kHz to adequately capture the second and third signal
harmonics with the restricted bandwidth setting.
Volume-control accuracy (measured at speaker outputs): left-right channel tracking
Volume position Channel deviation
-99dB 0.02dB
-70dB 0.026dB
-60dB 0.026dB
-40dB 0.022dB
-30dB 0.024dB
-20dB 0.022dB
-10dB 0.024dB
0dB 0.025dB
Published specifications vs. our primary measurements
The table below summarizes the measurements published by Technics for the SU-GX70 compared directly against our own. The published specifications are sourced from Technics’ website, either directly
or from the manual available for download, or a combination thereof. With the exception of frequency response, where the Audio Precision bandwidth was extended to 250kHz, assume, unless otherwise
stated, 10W into 8 ohms and a measurement input bandwidth of 10Hz to 22.4kHz, and the worst-case measured result between the left and right channels.
Parameter Manufacturer SoundStage! Lab
Amplifier rated output power into 8 ohms (1% THD) 40W 50W
Amplifier rated output power into 4 ohms (1% THD) 80W 94W
Frequency response (analog line-level in, speaker out 4-ohm) 20Hz-40kHz (-3dB) 20Hz-46kHz (-3dB)
Frequency response (digital in, speaker out 4-ohm) 20Hz-40kHz (-3dB) 20Hz-46kHz (-3dB)
Frequency response (phono MM, speaker out 4-ohm) RIAA 20Hz-20kHz (±1dB) RIAA 20Hz-20kHz (±0.5dB)
Input sensitivity (analog line-level in) 200mVrms 187mVrms
Input impedance (analog line-level in) 23k ohms 29.6k ohms
Input sensitivity (phono MM) 2mVrms 1.81mVrms
Input impedance (phono MM) 47k ohms 53.9k ohms
Our primary measurements revealed the following using the line-level analog input and digital coaxial input (unless specified, assume a 1kHz sinewave at 2Vrms or 0dBFS, 10W output, 8-ohm loading,
10Hz to 22.4kHz bandwidth):
Parameter Left channel Right channel
Maximum output power into 8 ohms (1% THD+N, unweighted) 50W 50W
Maximum output power into 4 ohms (1% THD+N, unweighted) 94W 94W
Maximum burst output power (IHF, 8 ohms) 50W 50W
Maximum burst output power (IHF, 4 ohms) 94W 94W
Continuous dynamic power test (5 minutes, both channels driven) passed passed
Crosstalk, one channel driven (10kHz) -83.5dB -83.2dB
Damping factor 38 38
Clipping no-load output voltage 20.8Vrms 20.8Vrms
DC offset N/A N/A
Gain (pre-out) 21.4dB 21.5dB
Gain (maximum volume) 39.7dB 39.6dB
IMD ratio (CCIF, 18kHz + 19kHz stimulus tones, 1:1) <-68dB <-68dB
IMD ratio (SMPTE, 60Hz + 7kHz stimulus tones, 4:1 ) <-55dB <-55dB
Input impedance (line input, RCA) 29.6k ohms 29.6k ohms
Input sensitivity (40W, maximum volume) 187mVrms 187mVrms
Noise level (with signal, A-weighted) <654uVrms <654uVrms
Noise level (with signal, 20Hz to 20kHz) <745uVrms <745uVrms
Noise level (no signal, A-weighted, volume min) <58uVrms <51uVrms
Noise level (no signal, 20Hz to 20kHz, volume min) <73uVrms <65uVrms
Output impedance (pre-out) 1.39k ohms 1.39k ohms
Signal-to-noise ratio (40W, A-weighted, 2Vrms in) 100.5dB 100.6dB
Signal-to-noise ratio (40W, 20Hz to 20kHz, 2Vrms in) 95.8dB 93.7dB
Signal-to-noise ratio (40W, A-weighted, max volume) 80.4dB 80.5dB
Dynamic range (full power, A-weighted, digital 24/96) 110.4dB 111.6dB
Dynamic range (full power, A-weighted, digital 16/44.1) 95.6dB 95.6dB
THD ratio (unweighted) <0.020% <0.019%
THD ratio (unweighted, digital 24/96) <0.017% <0.018%
THD ratio (unweighted, digital 16/44.1) <0.017% <0.018%
THD+N ratio (A-weighted) <0.024% <0.023%
THD+N ratio (A-weighted, digital 24/96) <0.020% <0.021%
THD+N ratio (A-weighted, digital 16/44.1) <0.020% <0.021%
THD+N ratio (unweighted) <0.022% <0.021%
Minimum observed line AC voltage 125VAC 125VAC
For the continuous dynamic power test, the SU-GX70 was able to sustain 105W into 4 ohms (~6% THD) using an 80Hz tone for 500ms, alternating with a signal at -10dB of the peak (10.5W) for five
seconds, for five continuous minutes without inducing a fault or the initiation of a protective circuit. This test is meant to simulate sporadic dynamic bass peaks in music and movies. During the
test, the top of the SU-GX70 was only slightly warm to the touch.
Our primary measurements revealed the following using the phono-level input, MM configuration (unless specified, assume a 1kHz 5mVrms sinewave input, 10W output, 8-ohm loading, 10Hz to 22.4kHz
Parameter Left channel Right channel
Crosstalk, one channel driven (10kHz) -75dB -76dB
DC offset N/A N/A
Gain (default phono preamplifier) 40.2dB 40.2dB
IMD ratio (CCIF, 18kHz + 19kHz stimulus tones, 1:1) <-68dB <-69dB
IMD ratio (CCIF, 3kHz + 4kHz stimulus tones, 1:1) <-67dB <-67dB
Input impedance 53.9k ohms 52.4k ohms
Input sensitivity (to 40W with max volume) 1.81mVrms 1.83mVrms
Noise level (with signal, A-weighted) <870uVrms <800uVrms
Noise level (with signal, 20Hz to 20kHz) <1300uVrms <1300uVrms
Noise level (no signal, A-weighted, volume min) <58uVrms <50uVrms
Noise level (no signal, 20Hz to 20kHz, volume min) <73uVrms <65uVrms
Overload margin (relative 5mVrms input, 1kHz) 26.3dB 26.4dB
Signal-to-noise ratio (40W, A-weighted, 5mVrms in) 83.8dB 83.8dB
Signal-to-noise ratio (40W, 20Hz to 20kHz, 5mVrms in) 77.5dB 78.8dB
Signal-to-noise ratio (40W, A-weighted, max volume) 74.7dB 74.8dB
THD (unweighted) <0.018% <0.018%
THD+N (A-weighted) <0.022% <0.022%
THD+N (unweighted) <0.023% <0.023%
Our primary measurements revealed the following using the analog input at the headphone output (unless specified, assume a 1kHz sinewave, 1Vrms output, 300 ohms loading, 10Hz to 22.4kHz bandwidth):
Parameter Left and right channels
Maximum gain 16.0dB
Maximum output power into 600 ohms (1% THD) 2.5mW
Maximum output power into 300 ohms (1% THD) 4.1mW
Maximum output power into 32 ohms (1% THD) 6.8mW
Output impedance 60 ohms
Maximum output voltage (1% THD into 100k ohm load) 1.34Vrms
Noise level (with signal, A-weighted) <15uVrms
Noise level (with signal, 20Hz to 20kHz) <28uVrms
Noise level (no signal, A-weighted, volume min) <13uVrms
Noise level (no signal, 20Hz to 20kHz, volume min) <16uVrms
Signal-to-noise ratio (A-weighted, 1% THD, 1.1Vrms out) 96.7dB
Signal-to-noise ratio (20Hz - 20kHz, 1% THD, 1.1Vrms out) 91.7dB
THD ratio (unweighted) <0.02%
THD+N ratio (A-weighted) <0.024%
THD+N ratio (unweighted) <0.021%
Frequency response (8-ohm loading, line-level input)
In our frequency-response plots above (relative to 1kHz), measured across the speaker outputs at 10W into 8 ohms, the SU-GX70 is nearly flat within the audioband (20Hz to 20kHz). At the extremes the
SU-GX70 is -0.1dB at 20Hz and +0.5dB down at 20kHz. There’s a rise in the frequency response above 20kHz, where we see +2.2dB just past 40kHz, which is a result of the digital amplifier and its high
output impedance at high frequencies. Into a 4-ohm load (see RMS level vs. frequency vs load impedance graph below), the response is essentially flat at and above 20kHz. The -3dB point was also
explored and found to be at roughly 46kHz, exactly where it was measured for a 24-bit/96kHz digital input signal (see “Frequency response vs. input type chart” below). In the graph above and most of
the graphs below, only a single trace may be visible. This is because the left channel (blue or purple trace) is performing identically to the right channel (red or green trace), and so they
perfectly overlap, indicating that the two channels are ideally matched.
Frequency response (8-ohm loading, line-level input, bass and treble controls)
Above is a frequency response (relative to 1kHz) plot measured at the speaker-level outputs into 8 ohms, with the bass and treble controls set to maximum (blue/red plots) and minimum (purple/green
plots). We see that for the bass and treble controls, roughly +/- 5dB of gain/cut is available at 20Hz/20kHz.
Phase response (8-ohm loading, line-level input)
Above are the phase-response plots from 20Hz to 20kHz for the line-level input, measured across the speaker outputs at 10W into 8 ohms. The SU-GX70 does not invert polarity and exhibits at worst, 20
degrees (at 20Hz) of phase shift within the audioband.
Frequency response vs. input type (8-ohm loading, left channel only)
The chart above shows the SU-GX70’s frequency response (relative to 1kHz) as a function of input type measured across the speaker outputs at 10W into 8 ohms. The green trace (overlapping the purple
trace) is the same analog input data from the previous graph. The blue trace is for a 16-bit/44.1kHz dithered digital input signal from 5Hz to 22kHz using the coaxial input, and the purple trace is
for a 24/96 dithered digital input signal from 5Hz to 48kHz, while the pink trace is for a 24/192 dithered digital input signal from 5Hz to 96kHz. The 16/44.1 data exhibits brickwall-type filtering,
with a -3dB at 21.1kHz. The 24/96 (and analog input) and 24/192 kHz data yielded -3dB points at 46.8kHz and 92.9kHz respectively. The analog data looks nearly identical to the 24/96 digital data,
which is evidence for the SU-GX70 sampling incoming analog signals at 96kHz.
Frequency response vs. MQA (16/44.1)
The chart above shows the SU-GX70’s frequency response (relative to 1kHz) for a 16/44.1 dithered digital input signal from 5Hz to 22kHz using the coaxial input, with MQA turned on. We find no
difference in the measured frequency response for 16/44.1 data input whether MQA is turned on or off.
Frequency response (8-ohm loading, MM phono input)
The chart above shows the frequency response (relative to 1kHz) for the MM phono input without (blue/red) and with (purple/green) the subsonic filter enabled. What is shown is the deviation from the
RIAA curve, where the input signal sweep is EQ’d with an inverted RIAA curve supplied by Audio Precision (i.e., zero deviation would yield a flat line at 0dB). We see a maximum deviation of about
+0.5/-0.2dB (150Hz and 20kHz/20Hz) from 20Hz to 20kHz. With the subsonic filter engaged, we find the -3dB point at 20Hz.
Phase response (MM input)
Above is the phase response plot from 20Hz to 20kHz for the MM phono input without (blue/red) and with (purple/green) the subsonic filter enabled, measured across the speaker outputs at 10W into 8
ohms. The SU-GX70 does not invert polarity. For the phono input, since the RIAA equalization curve must be implemented, which ranges from +19.9dB (20Hz) to -32.6dB (90kHz), phase shift at the output
is inevitable. Here we find a worst case of about +80 degrees at 20Hz without the subsonic filter and +160 degrees with the filter.
Digital linearity (16/44.1 and 24/96 data)
The chart above shows the results of a linearity test for the coaxial digital input for both 16/44.1 (blue/red) and 24/96 (purple/green) input data, measured at the line-level output of the SU-GX70.
The digital input is swept with a dithered 1kHz input signal from -120dBFS to 0dBFS, and the output is analyzed by the APx555. The ideal response would be a straight flat line at 0dB. Both data were
essentially perfect as of -100dBFS down to 0dBFS. At -120dBFS, the 16/44.1 data were only +2dB (left) and +4dB (right) above reference, while the 24/96 data were within +1dBFS.
Impulse response (24/44.1 data)
The graph above shows the impulse response for the SU-GX70 with MQA turned off (blue) and MQA turned on (purple), fed to the coaxial digital input, measured at the line level output, for a looped 24/
44.1 test file that moves from digital silence to full 0dBFS (all “1”s), for one sample period then back to digital silence. We find a reconstruction filter that adheres to a typical symmetrical sinc
function. There appears to be no difference in the impulse response with MQA on or off through the coaxial input.
J-Test (coaxial, MQA off)
The chart above shows the results of the J-Test test for the coaxial digital input measured at the line-level output of the SU-GX70 with MQA turned off. J-Test was developed by Julian Dunn the 1990s.
It is a test signal—specifically, a -3dBFS undithered 12kHz squarewave sampled (in this case) at 48kHz (24 bits). Since even the first odd harmonic (i.e., 36kHz) of the 12kHz squarewave is removed by
the bandwidth limitation of the sampling rate, we are left with a 12kHz sinewave (the main peak). In addition, an undithered 250Hz squarewave at -144dBFS is mixed with the signal. This test file
causes the 22 least significant bits to constantly toggle, which produces strong jitter spectral components at the 250Hz rate and its odd harmonics. The test file shows how susceptible the DAC and
delivery interface are to jitter, which would manifest as peaks above the noise floor at 500Hz intervals (e.g., 250Hz, 750Hz, 1250Hz, etc.). Note that the alternating peaks are in the test file
itself, but at levels of -144dBrA and below. The test file can also be used in conjunction with artificially injected sinewave jitter by the Audio Precision, to show how well the DAC rejects jitter.
The coaxial input exhibits low-level rises (-135dBrA) in the noise floor within the audioband at 6.5kHz and 13kHz. This is a good J-Test result, indicating that SU-GX70 DAC should yield good jitter
J-Test (optical, MQA off)
The chart above shows the results of the J-Test test for the optical digital input measured at the line-level output of the SU-GX70. The optical input yielded essentially the same result compared to
the coaxial input.
J-Test (coaxial, MQA on)
The chart above shows the results of the J-Test test for the coaxial digital input measured at the line-level output of the SU-GX70 with MQA turned on. The result is similar to the one with MQA
turned off, only slightly improved, with the rises in the noise floor no longer visible.
J-Test with 100ns of injected jitter (coaxial, MQA off)
Both the coaxial and optical inputs were also tested for jitter immunity by injecting artificial sinewave jitter at 2kHz, which would manifest as sidebands at 10kHz and 14kHz without any jitter
rejection. Jitter immunity proved exceptional, with no visible sidebands at the 100ns jitter level, and only a spurious peak at 2kHz at the -135dBrA level. The coaxial input is shown, but both
performed the same.
Wideband FFT spectrum of white noise and 19.1kHz sine-wave tone (coaxial input, MQA off)
The chart above shows a fast Fourier transform (FFT) of the SU-GX70’s line-level output with white noise at -4dBFS (blue/red) and a 19.1 kHz sinewave at 0dBFS fed to the coaxial digital input,
sampled at 16/44.1, with MQA turned off. The steep roll-off around 20kHz in the white-noise spectrum shows the behavior of the SU-GX70’s reconstruction filter. There are low-level aliased image peaks
within the audioband at around 2kHz and 13kHz, at or near -120dBrA. The primary aliasing signal at 25kHz is at -95dBrA, while the second and third distortion harmonics (38.2, 57.3kHz) of the 19.1kHz
tone are at -80 and -60dBrA.
Wideband FFT spectrum of white noise and 19.1kHz sine-wave tone (coaxial input, MQA on)
The chart above shows a fast Fourier transform (FFT) of the SU-GX70’s line-level output with white noise at -4dBFS (blue/red) and a 19.1 kHz sinewave at 0dBFS fed to the coaxial digital input,
sampled at 16/44.1, with MQA turned on. The steep roll-off around 20kHz in the white-noise spectrum shows the behavior of the SU-GX70’s reconstruction filter. There are low-level aliased image peaks
within the audioband at around 2kHz and 7kHz, at -120dBrA.
RMS level vs. frequency vs. load impedance (1W, left channel only)
The chart above shows RMS level (relative to 0dBrA, which is 1W into 8ohms or 2.83Vrms) as a function of frequency, for the analog line-level input swept from 5Hz to 100kHz. The blue plot is into an
8-ohm load, the purple is into a 4-ohm load, the pink plot is an actual speaker (Focal Chora 806, measurements can be found here), and the cyan plot is no load connected. The chart below . . .
. . . is the same but zoomed in to highlight differences. Here we find that between 20Hz and 6kHz, the deviations between no load and 4 ohms are around 0.45dB, but at high frequencies, the
differences are larger, at about 1dB at 20kHz. This is a relatively poor result, and an indication of a relatively high output impedance, or low damping factor. When a real speaker is used,
deviations are within around 0.4dB throughout the audioband.
RMS level vs. frequency (1W, left channel only, real speaker, LPAC on and off)
The chart above shows RMS level (relative to 0dBrA, which is 1W into 8ohms or 2.83Vrms) as a function of frequency, for the analog line-level input swept from 20Hz to 20kHz. Both plots are for the
Focal Chora 806 speaker, with (purple) and without (blue) LAPC enbaled. The SU-GX70 provides a feature called Load Adaptive Phase Calibration (LAPC). This feature measures the outputs of the
amplifier while the speakers are connected using test tones to establish a correction curve to deal with the amplifier’s inherently high output impedance at high frequencies. The theoretical goal is
to achieve a flat frequency response for the user’s speakers when LAPC is enabled. We can see here that the purple trace is not flat, but closer to ideal compared to when LAPC is disabled. When LAPC
is disabled, deviations reach about 0.35dB, while only 0.15dB with LAPC enabled.
THD ratio (unweighted) vs. frequency vs. output power
The chart above shows THD ratios at the output into 8 ohms as a function of frequency for a sinewave stimulus at the analog line-level input. The blue and red plots are for left and right channels at
1W output into 8 ohms, purple/green at 10W, and pink/orange just under 40W. The power was varied using the volume control. At 1W, THD ratios are fairly constant and range from 0.02% at 20Hz, down to
0.01% from 40Hz to 6kHz. At 10W, THD ratios are as high as 0.3% at 20Hz, with a steady decline to 0.01% at 6kHz. At nearly 40W, THD ratios are as high as 0.6% at 20Hz, with a steady decline to 0.02%
at 6kHz.
THD ratio (unweighted) vs. frequency at 10W (MM input)
The chart above shows THD ratio as a function of frequency plots for the MM phono input measured across an 8-ohm load at 10W. The input sweep is EQ’d with an inverted RIAA curve. The THD values vary
from around 0.3% at 20Hz, then a steady decline down to 0.015% at 6kHz.
THD ratio (unweighted) vs. output power at 1kHz into 4 and 8 ohms
The chart above shows THD ratios measured at the output of the SU-GX70 as a function of output power for the analog line-level input, for an 8-ohm load (blue/red for left/right channels) and a 4-ohm
load (purple/green for left/right channels). Both data sets track closely except for maximum power, with the 4-ohm data slightly outperforming the 8-ohm data at lower power. THD ratios range from as
low as 0.0025% at 0.5-1W, up to 0.07% (8-ohm) and 0.2% (4-ohm) at the “knees” at just below 50W and 90W, respectively. The 1% THD marks were reached at just past 50W (8 ohms) and just shy of 100W (4
THD+N ratio (unweighted) vs. output power at 1kHz into 4 and 8 ohms
The chart above shows THD+N ratios measured at the output of the SU-GX70 as a function of output power for the line-level input, for an 8-ohm load (blue/red for left/right channels) and a 4-ohm load
(purple/green for left/right channels). Both data sets track closely except for maximum power, with the 4-ohm data slightly outperforming the 8-ohm data at lower power. Overall, THD+N values for both
loads ranged from 0.05% at 50mW, down to near 0.01% at 3-5W, then up to the “knees,” as described in the caption for the chart directly above.
THD ratio (unweighted) vs. frequency at 8, 4, and 2 ohms (left channel only)
The chart above shows THD ratios measured at the output of the SU-GX70 as a function of frequency into three different loads (8/4/2 ohms) for a constant input voltage that yields 10W at the output
into 8 ohms (and roughly 20W into 4 ohms, and 40W into 2 ohms) for the analog line-level input. The 8-ohm load is the blue trace, the 4-ohm load the purple trace, and the 2-ohm load the pink trace.
We find roughly the same THD values of 0.02% from 1kHz to 6kHz for the 8- and 4-ohm data. From 20Hz to 1kHz, there is a roughly 5dB increase in THD every time the load is halved. However, even into a
2-ohm load, which the SU-GX70 is not designed to drive, THD ratios range from 0.3% at 20Hz, down to 0.03% from 1kHz to 6kHz.
THD ratio (unweighted) vs. frequency into 8 ohms and real speakers (left channel only)
The chart above shows THD ratios measured at the output of the SU-GX70 as a function of frequency into an 8-ohm load and two different speakers for a constant output voltage of 2.83Vrms (1W into 8
ohms) for the analog line-level input. The 8-ohm load is the blue trace, the purple plot is a two-way speaker (Focal Chora 806, measurements can be found here), and the pink plot is a three-way
speaker (Paradigm Founder Series 100F, measurements can be found here). In general, the measured THD ratios for the real speakers were close to the 8-ohm resistive load, hovering between 0.01 and
0.02% from 100Hz to 6kHz. The two-way Focal yielded the highest THD values (0.2% at 20Hz) at very low frequencies.
IMD ratio (CCIF) vs. frequency into 8 ohms and real speakers (left channel only)
The chart above shows intermodulation distortion (IMD) ratios measured at the output of the SU-GX70 as a function of frequency into an 8-ohm load and two different speakers for a constant output
voltage of 2.83Vrms (1W into 8 ohms) for the analog line-level input. Here the CCIF IMD method was used, where the primary frequency is swept from 20kHz (F1) down to 2.5kHz, and the secondary
frequency (F2) is always 1kHz lower than the primary, with a 1:1 ratio. The CCIF IMD analysis results are the sum of the second (F1-F2 or 1kHz) and third modulation products (F1+1kHz, F2-1kHz). The
8-ohm load is the blue trace, the purple plot is a two-way speaker (Focal Chora 806, measurements can be found here), and the pink plot is a three-way speaker (Paradigm Founder Series 100F,
measurements can be found here). All IMD results are similar, hovering from 0.03 to 0.015% across the measured frequency range.
IMD ratio (SMPTE) vs. frequency into 8 ohms and real speakers (left channel only)
The chart above shows IMD ratios measured at the output of the SU-GX70 as a function of frequency into an 8-ohm load and two different speakers for a constant output voltage of 2.83Vrms (1W into 8
ohms) for the analog line-level input. Here, the SMPTE IMD method was used, where the primary frequency (F1) is swept from 250Hz down to 40Hz, and the secondary frequency (F2) is held at 7kHz with a
4:1 ratio. The SMPTE IMD analysis results consider the second (F2 ± F1) through the fifth (F2 ± 4xF1) modulation products. The 8-ohm load is the blue trace, the purple plot is a two-way speaker
(Focal Chora 806, measurements can be found here), and the pink plot is a three-way speaker (Paradigm Founder Series 100F, measurements can be found here). Between 40Hz and 60Hz, all result are
essentially identical, around -81dB. Above 60Hz, the highest IMD ratios are associate with the Paradigm speakers, rising up to -74dB from 100Hz to 250Hz. All IMD results are essentially identical,
from 0.05% from 40Hz to 400Hz, then 0.025% from 500Hz to 1kHz.
FFT spectrum – 1kHz (line-level input)
Shown above is the fast Fourier transform (FFT) for a 1kHz input sinewave stimulus, measured at the output across an 8-ohm load at 10W for the analog line-level input. We see that the signal’s second
(2kHz) and third (3kHz) harmonics are at a relatively high -80dBrA, or 0.01%, while subsequent signal harmonics are at and below -90dBrA, or 0.003%. Since the SU-GX70 uses a switching power supply,
there are no obvious peaks at 60Hz or subsequent harmonics. There are, however, several significant noise peaks (as high as -65dB, or 0.06%) that are likely a result of IMD products between the
signal, it’s harmonics, and the high-frequency oscillator used in the class-D amplifier section. Of note is that the analyzer would ignore these peaks, which are actually larger in magnitude than the
signal harmonics, when calculating THD. There is also a rise in the noise above 20kHz, characteristic of digital amplifiers. This is far from what is considered a clean FFT.
FFT spectrum – 1kHz (line-level input, Pure Amplification off)
Shown above is the fast Fourier transform (FFT) for a 1kHz input sinewave stimulus, measured at the output across an 8-ohm load at 10W for the analog line-level input, but with Pure Amplification
turned off. The FFT is similar to the FFT above, where Pure Amplification was turned on, except for low-level peaks (-120dBrA, or 0.0001%) that can be seen here at low frequencies that are not
present in the first FFT.
FFT spectrum – 1kHz (digital input, 16/44.1 data at 0dBFS)
Shown above is the fast Fourier transform (FFT) for a 1kHz input sinewave stimulus, measured at the output across an 8-ohm load at 10W for the coaxial digital input, sampled at 16/44.1. Signal
harmonics are different to the analog input FFT above. The second (2kHz) harmonic is low at -115dBRa, or 0.0002%, while the third (3kHz) harmonic is much higher, at -75dBrA, or 0.02%. Subsequent
signal harmonics are at and below -90dBrA, or 0.003%. The same IMD peaks can also be seen here, as high as -65dB, or 0.06%, flanking the main 1kHz signal peak.
FFT spectrum – 1kHz (digital input, 24/96 data at 0dBFS)
Shown above is the fast Fourier transform (FFT) for a 1kHz input sinewave stimulus, measured at the output across an 8-ohm load at 10W for the coaxial digital input, sampled at 24/96. The FFT is very
similar to the 16/44.1 input FFT above, but for a more predominant second (2kHz) signal harmonic at -95dBrA, or 0.002%.
FFT spectrum – 1kHz (digital input, 16/44.1 data at -90dBFS)
Shown above is the FFT for a 1kHz -90dBFS dithered 16/44.1 input sinewave stimulus at the coaxial digital input, measured at the output across an 8-ohm load. We see the 1kHz primary signal peak, at
the correct amplitude, and no other peaks above the noise floor at -130dBrA.
FFT spectrum – 1kHz (digital input, 24/96 data at -90dBFS)
Shown above is the FFT for a 1kHz -90dBFS dithered 24/96 input sinewave stimulus at the coaxial digital input, measured at the output across an 8-ohm load. We see the 1kHz primary signal peak, at the
correct amplitude, and no other peaks above the noise floor at -135dBrA.
FFT spectrum – 1kHz (MM phono input)
Shown above is the FFT for a 1kHz input sinewave stimulus, measured at the output across an 8-ohm load at 10W for the MM phono input. We see the third (3kHz) signal harmonic dominating at around
-75dBrA, or 0.02%. Other signal harmonics can be seen at -95dBrA, or 0.002%, and below. The most significant power-supply-related noise peaks can be seen at 60Hz at -85dBrA, or 0.006%. Higher-order
power-supply-related peaks can also be seen at lower amplitudes. The same IMD peaks can also be seen here, as high as -65dB, or 0.06%, flanking the main 1kHz signal peak.
FFT spectrum – 50Hz (line-level input)
Shown above is the FFT for a 50Hz input sinewave stimulus measured at the output across an 8-ohm load at 10W for the analog line-level input. The X axis is zoomed in from 40Hz to 1kHz, so that peaks
from noise artifacts can be directly compared against peaks from the harmonics of the signal. The most predominant (non-signal) peak is the third (150Hz) signal harmonic at a high -55dBrA, or 0.02%.
Several other signal-related and IMD peaks can be seen throughout at -70dBrA, or 0.03%, and below.
FFT spectrum – 50Hz (MM phono input)
Shown above is the FFT for a 50Hz input sinewave stimulus measured at the output across an 8-ohm load at 10W for the MM phono input. The X axis is zoomed in from 40 Hz to 1kHz, so that peaks from
noise artifacts can be directly compared against peaks from the harmonics of the signal. The 60Hz power supply fundamental can be seen at -90dBrA, or 0.003%. The most predominant (non-signal) peak is
the third (150Hz) signal harmonic at a high -55dBrA, or 0.02%. Several other signal-related and IMD peaks can be seen throughout at -70dBrA, or 0.03%, and below.
Intermodulation distortion FFT (18kHz + 19kHz summed stimulus, line-level input)
Shown above is an FFT of the intermodulation distortion (IMD) products for an 18kHz + 19kHz summed sinewave stimulus tone measured at the output across an 8-ohm load at 10W for the analog line-level
input. The input RMS values are set at -6.02dBrA so that, if summed for a mean frequency of 18.5kHz, would yield 10W (0dBrA) into 8 ohms at the output. We find that the second-order modulation
product (i.e., the difference signal of 1kHz) is at nearly -80dBrA, or 0.01%, while the third-order modulation products, at 17kHz and 20kHz, are at roughly the same level.
Intermodulation distortion FFT (line-level input, APx 32 tone)
Shown above is the FFT of the speaker-level output of the SU-GX70 with the APx 32-tone signal applied to the input. The combined amplitude of the 32 tones is the 0dBrA reference, and corresponds to
10W into 8 ohms. The intermodulation products—i.e., the “grass” between the test tones—are distortion products from the amplifier and are below the -90dBrA, or 0.003%, level.
Intermodulation distortion FFT (18kHz + 19kHz summed stimulus, coaxial digital input, 16/44.1)
Shown above is an FFT of the intermodulation distortion (IMD) products for an 18kHz + 19kHz summed sinewave stimulus tone measured at the output across an 8-ohm load at 10W for the digital coaxial
input at 16/44.1. We find that the second-order modulation product (i.e., the difference signal of 1kHz) is at -100dBRa, or 0.001%, while the third-order modulation products, at 17kHz and 20kHz, are
around -80dBrA, or 0.01%.
Intermodulation distortion FFT (18kHz + 19kHz summed stimulus, coaxial digital input, 16/44.1, MQA on)
Shown above is an FFT of the intermodulation distortion (IMD) products for an 18kHz + 19kHz summed sinewave stimulus tone measured at the output across an 8-ohm load at 10W for the digital coaxial
input at 16/44.1, with MQA turned on. We find that the second-order modulation product (i.e., the difference signal of 1kHz) is at -100dBRa, or 0.001%, while the third-order modulation products, at
17kHz and 20kHz, are around -80dBrA, or 0.01%. This is essentially the same result as with the FFT with MQA turned off.
Intermodulation distortion FFT (18kHz + 19kHz summed stimulus, coaxial digital input, 24/96)
Shown above is an FFT of the intermodulation distortion (IMD) products for an 18kHz + 19kHz summed sinewave stimulus tone measured at the output across an 8-ohm load at 10W for the digital coaxial
input at 24/96. We find that the second-order modulation product (i.e., the difference signal of 1kHz) is at -100dBRa, or 0.001%, while the third-order modulation products, at 17kHz and 20kHz, are
around -80dBrA, or 0.01%. This is essentially the same result as with the 16/44.1 IMD FFT.
Intermodulation distortion FFT (18kHz + 19kHz summed stimulus, MM phono input)
Shown above is an FFT of the intermodulation distortion (IMD) products for an 18kHz + 19kHz summed sinewave stimulus tone measured at the output across an 8-ohm load at 10W for the MM phono input. We
find that the second-order modulation product (i.e., the difference signal of 1kHz) is at -95dBRa, or 0.002%, while the third-order modulation products, at 17kHz and 20kHz, are just below -80dBrA, or
Square-wave response (10kHz)
Above is the 10kHz squarewave response using the analog line-level input, at roughly 10W into 8 ohms. Due to limitations inherent to the Audio Precision APx555 B Series analyzer, this graph should
not be used to infer or extrapolate the SU-GX70’s slew-rate performance. Rather, it should be seen as a qualitative representation of the SU-GX70’s mid-tier bandwidth. An ideal squarewave can be
represented as the sum of a sinewave and an infinite series of its odd-order harmonics (e.g., 10kHz + 30kHz + 50kHz + 70kHz . . .). A limited bandwidth will show only the sum of the lower-order
harmonics, which may result in noticeable undershoot and/or overshoot, and softening of the edges. In this case, because of the digital nature of the amplifier, we see a 400kHz switching frequency
(see 1MHz FFT below) riding on top of the squarewave.
Square-wave response (10kHz, restricted 500kHz bandwidth)
Above is the same 10kHz squarewave response using the analog line-level input, at roughly 10W into 8 ohms, this time with a 250kHz input bandwidth on the analyzer to filter out the 400kHz switching
frequency. We can see significant over/undershoot in the corners of the squarewave, a consequence of the SU-GX70’s mid-tier bandwidth.
FFT spectrum (1MHz bandwidth)
Shown above is the fast Fourier transform (FFT) for a 1kHz input sinewave stimulus, measured at the output across an 8-ohm load at 10W for the analog line-level input, with an extended 1MHz input
bandwidth. This enables us to see the high-frequency noise above 20kHz reaching almost -70dBrA at 80kHz. We also see a clear peak at 400kHz, reaching just past -20dBrA, as well as its harmonics
(800kHz, 1.2MHz). These peaks, as well as the noise, are a result of the digital amplifier technology used in the SU-GX70. However, they are far above the audioband—and are therefore inaudible—and so
high in frequency that any loudspeaker the amplifier is driving should filter it all out anyway.
Damping factor vs. frequency (20Hz to 20kHz)
The final graph above is the damping factor as a function of frequency. We can see here the clear trend of a higher (although still poor in absolute terms) damping factor at low frequencies—around 35
from 20Hz to 3kHz, and then a decline down to 18 at 20kHz. This is a limitation of the digital amplifier technology used in the SU-GX70, and the reason Technics has incorporated their clever Load
Adaptive Phase Calibration (LAPC) feature to compensate for losses into low impedances at high frequencies.
Diego Estan
Electronics Measurement Specialist | {"url":"https://soundstagenetwork.com/index.php?option=com_content&view=article&id=2968:technics-grand-class-su-gx70-streaming-stereo-receiver-measurements&catid=97:amplifier-measurements&Itemid=154","timestamp":"2024-11-09T00:00:21Z","content_type":"text/html","content_length":"156578","record_id":"<urn:uuid:914d9893-7b81-45ac-b74c-c1ca70e2597f>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00425.warc.gz"} |
The problem statement:
Program 2 showed us how to iterate over a list of numbers and print them to the screen. For our next program we will also need to iterate over a list of number but this time we need to count the two
different types of numbers in the list as we iterate over them.
The task is to write a Python program that displays the counts of odd and even numbers in a list of positive integers (i.e. whole numbers). In order for us to begin to think about a solution we first
need to know the exact definition of odd and even numbers.
• An odd number when divided by two will always have a remainder in addition to a quotient. Some examples of odd numbers in decimal form follow. 1/2 = 0.5, 3/2 = 1.5, 5/2 = 2.5, etc. (Note that 5/2
= 2.5, is read as five divided by 2 is equal to 2.5).
• An even number is the opposite of an odd number. That is, an even number divided by 2 has a quotient but no remainder, does not leave a remainder. Some examples of an even number follow. 0/2 = 0,
2/2 = 1, 4/2 = 2, 6/2 = 3, etc. Note how no decimal is needed since the result of the division does not have a remainder.
The following is the expected approximate output of program 3. Given the list, [0, 4, 9, 5, 11, 15, 7]:
• Even count = 2
• Odd count = 5
The Solution:
def count_odd_and_even_numbers_first_attempt():
even_count = 0
odd_count = 0
for number in [0, 4, 9, 5, 11, 15, 7]:
result = number / 2
if int(result) == result:
even_count = even_count + 1
odd_count = odd_count + 1
print("Even count =", even_count)
print("Odd count =", odd_count)
def count_odd_and_even_numbers_second_attempt():
even_count = 0
odd_count = 0
for number in [0, 4, 9, 5, 11, 15, 7]:
result = number % 2
if result == 0:
even_count += 1
odd_count += 1
print("Even count =", even_count)
print("Odd count =", odd_count)
def count_odd_and_even_numbers_third_attempt():
even_count, odd_count = 0, 0
for number in [0, 4, 9, 5, 11, 15, 7]:
if number % 2 == 0:
even_count += 1
odd_count += 1
print("Even count =", even_count)
print("Odd count =", odd_count)
def program_3():
if __name__ == '__main__':
In previous problems our solutions required the use of a for loop, the mathematical operator of multiplication, and the use of the print function. This time the calculations are a little more
complex. The additional complexity comes from the fact that we need to consider two possible conditions when looking at a number when determining if a number falls into the odd or even category of
numbers. Once we know what category the number is in then we increment one of two variable that holds the running count of the category we detected.
We present three solutions. Each solution is intended to showcase different approaches and programing techniques. Let's start with the function
The Counting Variables:
The first thing that we do is define two variables, even_count and odd_count. They are initialized to zero, indicating we have yet to see an odd or even number since we have not iterated over the
list of numbers. As we iterate over the numbers using a for loop, these two variables will hold a current count of odd and even numbers.
Is it Odd or Even?
By the definition of an odd or even number given above we know that an odd number divided by 2 results in a value that has a remainder, otherwise it is an even number. So the first order of business
is to divided the number provided by the for loop by 2 and capture the result as follow. result = number / 2. This expression is our first logic inside the for loop. Suppose the number given inside
the for loop is 3. We know that 3 divided by 2 is 1.5. From this result we can see with our eyes and determine with our brain that 3 is an odd number because when we examine the value of 1.5 we
observe that there is a value other than zero past the decimal point. But how do we program our thought process into a machine? The short answer is that we use built-in operations or functions that
come shipped with the Python programming language. One such built-in function that we can use is the int(...) function. This function will convert inputs to the function into integers only if the
input is convertible. If your input is a name like Steve then the int(...) will not be able to do the conversion. In fact, in this particular instance the entire program will crash. But it will work
if our input is a number with a decimal value. For example, if our input is 1.0 then it will convert it to the whole value of 1. In a similar fashion, if our input is 2.7 then this function will
convert it to the whole value of 2; dropping or truncating the decimal point and any value past the decimal point. Given the int(...) function we can compare its conversion output to its input. For
example, suppose we need to determine if the number 3 is even or odd. We first perform the following computation. result = 3 / 2. The result variable will be assigned the value of 1.5 in this
expression. We then give the init function the result variable which holds a value of 1.5 as follow. conversion = int(result). If we inspect the conversion variable with the print function we will
see that it has a value of 1. Lastly, we compare the conversion value (1) to the result value (1.5). From the comparison we determine that the value three is odd because the conversion value of 1
does not equal the result value of 1.5.
How do we Compare Numbers?
We use the built-in equal operator to compare two values. The symbols or characters used to represent the equal operator is ==. The equal operator works by first comparing two values and then
replaces the entire expression with a boolean value. A boolean value has one of two possible states: True or False. Here are two examples of an equal operator (==) that is evaluated (or converted) to
the boolean value of True and False.
• 2.0 == 2.0 is evaluated to True
• 3.6 == 3.2 is evaluated to False
How do we Make Decisions using Booleans?
Inside the for loop we need to decide if the current number delivered by the loop is an odd or even number. Our program will need to branch or split into two separate logic flows such that if a given
number is even we only increment the even_count variable by a value of 1. If, however, the given number is odd then we only increment the odd_count variable by a value of 1.
This branching (or splitting) of our program into two separate logic flows that depends on the given number being even or odd condition can be accomplished with the use of the concept of an if: ...
else: ... block of code. This block of code works as follow. If the if statement of this block of code is followed by a boolean value of True then only the code indented immediately under the if
statement is executed; the code indented under the else statement is not executed (or ignored). But if the if statement of this block of code is followed by a boolean value of False then only the
code indented immediately under the else statement is executed; the code indented under the if statement is not executed (or ignored). Recall that an equal operator converts an expression like 2.0 ==
2.0 to a boolean value of True or False. This equal operator together with the branching feature of the if: ... else: ... block of code can be used to solve the problem statement as shown by the code
above. All that needs to be done to complete the solution is to increment the appropriate count variable.
How does the Incrementing of a Variable Work?
The expression even_count = even_count + 1, for example, works as follow. Suppose event_count is initially assigned a value of 5. The assignment operator (=) first evaluates the expression on its
right side; event_count + 1 is replaced with the value of 6 because 5 + 1 is equal to this value. Then the assignment operator assigns this new value to the variable, thus overwriting or replacing
the old value of 5 with the new value of 6 as follow. event_count = 6
The Modulo Operator:
The modulo operator, represented by the % symbol, is used just like the division operator but instead of getting the result in decimal form, 5 / 3 = 1.66 for example, the [modulo] operator gives us
only the remainder of a division operation, 5 % 3 = 2 for example.
Revisiting the definition of an odd and even number we see that an odd number will always have a remainder whereas an even number will not have a remainder. We can combine this definition with the
modulo (%) operator to check if a number is odd or even in a more concise way. Before we can use the modulo operator we make the observation that any even number modulo 2 will always compute a
remainder of zero whereas an odd number modulo 2 gives us 1. Examples of even numbers follow. 0 % 2 = 0, 2 % 2 = 0, 4 % 2 = 0, 6 % 2 = 0, etc. Examples of odd numbers follow. 1 % 2 = 1, 3 % 2 = 1, 5
% 2 = 1, 7 % 2 = 1, 9 % 2 = 1, etc.
To use the % operator we can either assign the result (or evaluation) of the operator to a variable as shown in the count_odd_and_even_numbers_second_attempt() function like so, result = number % 2,
and then check the value in an if statement or allow for the evaluation of the % operator to occur directly in the if statement without the need to use an additional variable as shown in the
count_odd_and_even_numbers_third_attempt() function, like so if number % 2 == 0:.
The Increment Operator:
Another useful operator that allows us to increment a variable is +=. As we saw in our first attempt function we incremented a variable with the following code. some_variable = some_variable + 1. We
can write the equivalent of this expression instead as some_variable += 1. An actual use of this operator can be seen in the second attempt function.
Multiple Assignments:
Note that in our third attempt function we declared and assigned initialization values of zero to both count variables but we did so on a single line. This is equivalent to declaring and assigning
the count variables on separate lines. Both approaches are valid and you should pick the option that makes the most sense and is most visually pleasing to you.
Note that the third attempt function is less verbose than our first attempt and arguably more readable. It's also possible to make further refinements but we will explore other options in future
Page with links to the entire series can be found here
Are you having any issues running or understanding the program? Please, add a comment explaining where you are stuck or where the tutorial is not clear so we can improve it. | {"url":"https://www.agupieware.com/search/label/if-else%20structured%20programming","timestamp":"2024-11-13T05:42:18Z","content_type":"application/xhtml+xml","content_length":"86849","record_id":"<urn:uuid:9ea7b43a-581e-473f-a156-c99d6bbc896b>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00088.warc.gz"} |
The Significance of Insignificance
Posted by Glen Whitman at 3:28 PM
Remarking on some recent poll results,
says, "Repeat after me: I will not think that statistically insignificant changes in poll results are statistically significant -- even if I really, really like them."
In general, I agree with Eugene’s point (which he has made many times): people ascribe way too much import to small differences in polling results, without paying attention to the margin of error.
However, Eugene’s position may foster the false impression that
differences in poll results that fall within the margin of error are equally insignificant. Suppose that two candidates are separated by just 2 percentage points in the polls (say, 49% to 47%), and
the margin of error is 3 percentage points for each figure. And suppose that two different candidates in another election -- or the same two candidates in a later poll -- are separated by 6
percentage points (say, 51% to 45%), again with a margin of error of 3 percentage points for each figure. While both differences are “insignificant” in the sense that the difference is within the
combined margins of error, the latter result is clearly more significant than the former.
Indeed, the latter result would most likely have been deemed statistically significant had a very slightly lower level of confidence been applied. The margin of error is constructed using a
conventional but essentially arbitrary confidence level. The typical convention is 95% confidence, but other levels of confidence could also be used; 90% and 99% are relatively common. These are the
confidence levels employed by scientists, who don’t want to affirm a hypothesis unless they are very confident of it, and who are willing to remain agnostic in a wide range of cases. Lower levels of
confidence might well be acceptable in other contexts, such as business, where some decisions have to be made without great confidence (e.g., should I plan to expand next year if I am 75% confident
that consumer demand will pick up?). It’s not obvious what the appropriate level of confidence is for political prognostication, but I’ll put it this way: in the example given above, I would be
willing to bet a larger amount of money on the second race than the first.
The broader point is that statistical significance is not an all-or-nothing proposition. Despite the way statistical significance is often taught, there is not a sharp discontinuity between
significance and insignificance. Significance lies on a gradient.
No comments:
Labels: damn lies and statistics | {"url":"https://agoraphilia.blogspot.com/2004/04/significance-of-insignificance.html","timestamp":"2024-11-12T04:29:01Z","content_type":"application/xhtml+xml","content_length":"77585","record_id":"<urn:uuid:0563d1b0-cdec-41da-8bef-c91fa353b455>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00190.warc.gz"} |
Fitting with a Piecewise Linear Function
4.2.2.25 Fitting with a Piecewise Linear Function
In this tutorial we will show you how to define a piecewise fitting function consisting of two linear segments, perform a fit of the data using this fitting function, and calculate the intersection
location for two linear segments from the fitting result.
Minimum Origin Version Required: Origin 8.6 SR0
What you will learn
This tutorial will show you how to:
• Define a piecewise (conditional) fitting function.
• Auto initialize parameters.
• Calculate the intersection location of the piecewise fit lines.
Example and Steps
Import Data
1. Start with an empty workbook. Select Help: Open Folder: Sample Folder... to open the "Samples" folder. In this folder, open the Curve Fitting subfolder and find the file Step01.dat. Drag-and-drop
this file into the empty worksheet to import it.
2. Right click on the Sensor E x column (column J), and select Set As: X from the context menu. Highlight Sensor E y column, and select Plot: Symbol: Scatter from Origin menu. The graph should look
Define Fitting Function
From the above graph, the curve consists of two segments of lines. It can be fitted with a piecewise linear function. The function can be expressed as:
$y = \begin{cases} \frac{y_1(x_3-x)+y_3(x-x_1)}{x_3-x_1}, & \mbox{if } x<x_3 \\ \frac{y_3(x_2-x)+y_2(x-x_3)}{x_2-x_3}, & \mbox{if } x \ge x_3 \end{cases}$
where x1 and x2 are x values of the curve's endpoints and they are fixed during fitting, x3 is the x value at the intersection of two segments, and y1, y2, y3 are y values at $\ x_i, \ i=1, 2, 3$
The fitting function can be defined using the Fitting Function Builder tool.
1. Select Tools: Fitting Function Builder from Origin menu.
2. In the Fitting Function Builder dialog's Goal page, click Next button.
3. In the Name and Type page, select User Defined from Select or create a Category drop-down list, type pwl2s in the Function Name field, and select Origin C in Function Type group. And click Next
4. In the Variables and Parameters page, type x1,y1,x2,y2,x3,y3 in the Parameters field. Click Next button.
5. In the Origin C Fitting Function page, click the button on the right of the Function Body edit box and define the fitting function in Code Builder as follows.
if( x < x3 )
y = (y1*(x3-x)+y3*(x-x1))/(x3-x1);
y = (y3*(x2-x)+y2*(x-x3))/(x2-x3);
Click Compile button to compile the function body. Then click Return to Dialog button. Click Next button.
6. In the Parameter Initialization Code page, click the button on the right of the Initialization Code edit box and initialize the fitting parameters in Code Builder as follows.
int n1, n2, n3;
x_data.GetMinMax( x1, x2, &n1, &n2 );
x3 = x1 + (x2 - x1)/2;
y1 = y_data[n1];
y2 = y_data[n2];
vector vd;
vd = abs( x_data - x3 );
double xta, xtb;
vd.GetMinMax( xta, xtb, &n3 );
y3 = y_data[n3];
Click Compile button to compile it. Then click Return to Dialog button. Click Finish button.
Define Derived Parameters for Slopes and Intercepts
During function defined process, you can also define some additional derived Parameters such as slopes and intercepts , which are computed from the function parameter values after the fitting process
1. Click <<Back button twice to go back to Variables and Parameters page, type a1,b1,a2,b2 in the Derived Parameters field.
2. Click Next button four times to go to Derived Parameters page, fill in the Meaning column and type the equations in the Derived Parameters Equations area as follows, then click Finish button.
Fit the Curve
1. Select Analysis: Fitting: Nonlinear Curve Fit from Origin menu. In the NLFit dialog, select Settings: Function Selection, in the page select User Defined from the Category drop-down list and
pwl2s function from the Function drop-down list.
2. In the NLFit dialog, select Parameters tab, and fix parameters x1, x2 as shown in the dialog.
3. Click Fit button to fit the curve.
Fitting Results
The fitted curve should look like:
Fitted Parameters are shown as follows.
Parameter Value Standard Error
x1 0.8 0
y1 -0.0271 0.01063
x2 60 0
y2 0.95585 0.0083
x3 22.26316 0.58445
y3 0.66106 0.01197
a1 -0.05275 0.01123
b1 0.03206 8.7153E-4
a2 0.48715 0.01664
b2 0.00781 3.86455E-4
Thus the intersection point for the two segments is (22.26316, 0.66106).
Note that fitting with a piecewise linear function for more than two segments can be done in a similar way. | {"url":"https://d2mvzyuse3lwjc.cloudfront.net/doc/Tutorials/Fitting-Piecewise-Linear","timestamp":"2024-11-07T07:27:45Z","content_type":"text/html","content_length":"156500","record_id":"<urn:uuid:dea84d48-951a-484d-9494-3502f01b1199>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00519.warc.gz"} |
If sinx1+sinx2+sinx3+sinx4=4, then cosx1+cosx2+cosx3+cos... | Filo
Question asked by Filo student
Not the question you're searching for?
+ Ask your question
Video solutions (1)
Learn from their 1-to-1 discussion with Filo tutors.
6 mins
Uploaded on: 9/4/2023
Was this solution helpful?
Found 6 tutors discussing this question
Discuss this question LIVE for FREE
11 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions on Trigonometry
View more
Students who ask this question also asked
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text If , then is equal to
Updated On Sep 4, 2023
Topic Trigonometry
Subject Mathematics
Class Class 11
Answer Type Video solution: 1
Upvotes 56
Avg. Video Duration 6 min | {"url":"https://askfilo.com/user-question-answers-mathematics/if-then-is-equal-to-35353031313330","timestamp":"2024-11-07T15:25:01Z","content_type":"text/html","content_length":"388699","record_id":"<urn:uuid:3a8d12f4-4641-4bd6-adf6-dcc8afae84a8>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00495.warc.gz"} |
JavaScript Utilities: Math, Date
September 26, 2018
In this tutorial we will learn about two built-in utilities of JavaScript — the Math module and the Date module.
The Math module has functions to do various mathematical operations, and it also includes functions for generating random numbers. It has many constant values as well, like the value of pi (ratio of
a circle’s circumference to its diameter), etc.
Math Constants
Let’s start by looking at some of the constants defined in the Math object. We will use the JavaScript Console for this tutorial to run the examples interactively.
Math.E is the Euler’s constant — $e$. It is used as the base of natural logarithm, and its value is approximately 2.718.
> Math.E
< 2.718281828459045
Math.PI is the ratio of a circle’s circumference to its diameter. It’s value is approximately equal to 3.14.
> Math.PI
< 3.141592653589793
Similarly, there are some other constants available:
• Square root of 1/2, or $\sqrt{1/2}$ .
> Math.SQRT1_2
< 0.7071067811865476
• Square root of 2, or $\sqrt{2}$.
> Math.SQRT2
< 1.4142135623730951
• Logarithm of Math.E to the base 2, or $\log_2e$.
> Math.LOG2E
< 1.4426950408889634
• Logarithm of Math.E to the base 10, or $\log_{10}{e}$.
> Math.LOG10E
< 0.4342944819032518
Math Methods
Let’s look at some of the methods defined in the Math object.
Basic math functions and rounding:
Math.abs(x) gives the absolute value of x i.e. if x >= 0 it returns x, if x < 0 then it returns -x. Basically, it returns the value of x without its sign.
> Math.abs(-6.2)
< 6.2
> Math.abs(4.8)
< 4.8
• Math.pow(x, y): $x$ raised to the power $y$, or $x^y$.
> Math.pow(2, 5)
< 32
> Math.pow(3, 2)
< 9
• Math.exp(x): Math.E raised to the power $x$, or $e^x$.
> Math.exp(1) // Equal to Math.E
< 2.718281828459045
• Math.floor(x): Greatest integer smaller than x.
> Math.floor(4.5)
< 4
> Math.floor(3.4)
< 3
> Math.floor(-4.5) // slightly counter-intuitive for negative numbers
< -5
• Math.ceil(x): Smallest integer greater than x.
> Math.ceil(4.5)
< 5
> Math.ceil(3.4)
< 4
• Math.round(x): x rounded to the nearest integer.
> Math.round(4.5)
< 5
> Math.round(3.4)
< 3
• Math.sqrt(x): Square root of x.
> Math.sqrt(2)
< 1.4142135623730951
> Math.sqrt(100)
< 10
Minimum and maximum:
Math.max(...) and Math.min(...) give maximum and minimum respectively.
> Math.max(1, 334, 53, 2)
< 334
> Math.min(1, 334, 53, 2)
< 1
Logarithm functions:
• Math.log(x): Natural logarithm of $x$, or $\ln x$.
• Math.log10(x): Logarithm of $x$ to the base 10, or $\log_{10}x$.
> Math.log10(10)
< 1
> Math.log10(100)
< 2
> Math.log10(1000)
< 3
> Math.log10(300)
< 2.4771212547196626
• Math.log2(x): Logarithm of $x$ to the base 2, or $\log_{2}x$.
> Math.log2(4)
< 2
> Math.log2(32)
< 5
> Math.log2(6)
< 2.584962500721156
Trigonometric functions:
• Math.sin(x): Sine of $x$. The value of $x$ is expected to be in radians.
> Math.sin(Math.PI / 2)
< 1
> Math.sin(Math.PI)
< 0
• Math.cos(x): Cosine of $x$. The value of $x$ is taken in radians.
> Math.cos(0)
< 1
> Math.cos(Math.PI)
< -1
• Math.tan(x): Tangent of $x$. The value of $x$ is taken in radians.
> Math.tan(0)
< 0
> Math.tan(Math.PI / 3) // Math.sqrt(3)
< 1.7320508075688767
Similarly, there are other trigonometric functions as well, like Math.acos(x), Math.asin(x) and Math.atan(x).
Generating random numbers
Many applications require random numbers. For example, let’s say you are making a card game, then you will require random numbers to shuffle a deck of cards.
The Math object has a method random() for which generates random numbers between 0 and 1.
Using this function we can generate random numbers between any two numbers as well. Here’s how:
=> Math.random() ranges from 0 to 1
=> Math.random() * N ranges from 0 to N
=> X + Math.random() * N ranges from X to X + N
To get random numbers between two numbers X and Y we can use the following
var delta = Y - X;
var randomNumber = X + Math.random() * delta;
To generate random integers, use Math.floor() with Math.random(). For example:
// random integer from 0 to 9 (both inclusive)
Math.floor(Math.random() * 10);
// random integer from 1 to 100 (both inclusive)
1 + Math.floor(Math.random() * 100);
In particular, the above is useful for selecting random elements from an array:
function randomElement(arr) {
// indices are from 0 to arr.length - 1
var randomIndex = Math.floor( Math.random() * arr.length );
return arr[randomIndex];
The Date module is used to represent time. It has date-time objects which store all the information about current time, date, month, year, etc and it also allows us to do various operations using
Dates are stored as timestamps, which is defined as the number of milliseconds that have passed since January 1, 1970 UTC. This date is called epoch or in simple words JavaScript considers this date
as the beginning of time.
The syntax for creating a new Date object:
This creates a new Date object which stores current date. A string can be passed to Date() to create a date object which stores the desired date. For example,
var date1 = new Date("3-3-2018");
var date2 = new Date("3 march 2018");
var date3 = new Date("march 3 2018");
var date4 = new Date("2018 march 3 3:45 pm");
All of these are valid dates. The YYYY-MM-DD format is the preferred JavaScript date format. You can also omit some parts like the date, or the time. In the date string, time can be given in both 12
hour or 24 hour format.
You can also pass a number to new Date(). For example,
var date5 = new Date(1024);
The number is the number of milliseconds since Jan 1 1970 UTC.
Getters and setters
There are many get and set functions which can be used to get/set date information from/in a Date object. These methods are getDate, setDate, getMonth, setMonth, getFullYear, setYear, getHours,
setHours, getMinutes, setMinutes, getSeconds, setSeconds, getMilliseconds and setMilliseconds.
Let’s see how to use the get functions. Create a new HTML file:
<title>JavaScript: Date</title>
function printDateValues() {
Date: <input type="date" id="dateinput"> <br>
Time: <input type="time" id="timeinput"> <br>
<button onclick="printDateValues()">
Print Date Values
</button> <br>
<p id="results">
Date values will be printed here.
We have used two new types of input elements which are date and time. The date <input> element displays a calendar and can be used to give any date.
The time <input> element has two values, hour and minutes. It uses the 24 hour format.
The function printDateValues() will take the values date <input> and time <input> and create a new Date() from it. Then we will print all the values using the get functions listed above. Write the
function like this:
function printDateValues() {
var dateinput = document.getElementById("dateinput");
var timeinput = document.getElementById("timeinput");
var ptag = document.getElementById("results");
var date = new Date(dateinput.value + " " + timeinput.value);
var results = date + "<br>";
results += "Day: " + date.getDay() + "<br>";
results += "Date: " + date.getDate() + "<br>";
results += "Month: " + date.getMonth() + "<br>";
results += "Year: " + date.getFullYear() + "<br>";
results += "Hours: " + date.getHours() + "<br>";
results += "Minutes: " + date.getMinutes() + "<br>";
results += "Seconds: " + date.getSeconds() + "<br>";
results += "Milliseconds: " + date.getMilliseconds() + "<br>";
ptag.innerHTML = results;
Select date and time values and then click the button. You should see all the values printed inside the <p> tag.
getMonth() and getDay() returns the index of the month and day in the Date object. Months start from 0 (January) and go till 11 (December). Days start from 0 (Sunday) and go till 6 (Saturday).
We can also create another function to see an example of how set functions work.
function setDateExample() {
var ptag = document.getElementById("results");
var date = new Date("March 3 2018");
var results = "";
results += "Date value = " + date + "<br>";
results += "Date has been modified <br>";
results += "Date now = " + date + "<br>";
ptag.innerText = results;
and add another button in the HTML
<button onclick="setDateExample()">Run set date example</button> <br>
When you click this button you should see the following output
Date value = Sat Mar 03 2018 00:00:00 GMT+0530 (India Standard Time)
Date has been modified
Date now = Tue May 19 2020 00:00:00 GMT+0530 (India Standard Time)
Converting Timezones
For timezone conversions there isn’t any pre-defined function. So, we’ll have to make our own. By default, JavaScript returns the local time zone. For example, I am in India, so when I write:
on the console, I get:
< Sat Sep 15 2018 06:48:48 GMT+0530 (India Standard Time)
Today’s date in Indian Standard Time.
IST is GMT+0530 which means Indian Standard Time is 5 hours and 30 minutes ahead of Greenwich Mean Time. We wish to convert this time to another time zone, say, GMT-0230. We’ll do it the following
• Convert local time to GMT time
• Convert GMT time to desired timezone, GMT-0230 in this example.
Convert local time to GMT
d stores current date in local time. We can get the local timezone offset by using the getTimezoneOffset() method. It returns the number of minutes the local time zone is ahead/behind the GMT time.
If local time is ahead, it returns a negative value. If local time is behind, it returns a positive value.
If we add the offset value in d.getMinutes(), we should get the GMT time.
d.setMinutes(d.getMinutes() + d.getTimezoneOffset());
This changes the date to GMT timezone.
Convert GMT time to desired timezone
The desired timezone is GMT-0230, which means 2 hours and 30 minutes behind GMT or 2 * 60 + 30 = 150 minutes behind GMT. We can again do the same thing by setting the minutes value:
d.setMinutes(d.getMinutes() + 150);
This will change the time to GMT-0230.
Putting it all together
Let’s create a function which takes a date object and the desired time zone as parameters and changes the date in the date object to the desired time zone.
function changeTimezone(date, tz) {
// tz is of the format +hhmm or -hhmm
var sign = tz[0];
var hours = tz[1] + tz[2];
hours = parseInt(hours);
var minutes = tz[3] + tz[4];
minutes = parseInt(minutes);
// convert local to GMT
date.setMinutes(date.getMinutes() + date.getTimezoneOffset());
// convert GMT to tz
// convert hours and minutes to minutes
var difference = hours * 60 + minutes;
// if sign is - then we need to subtract minutes
if (sign == '-')
difference = difference * -1;
date.setMinutes(date.getMinutes() + difference);
The timezone string passed should be of the format -hhmm or +hhmm. We have broken the timezone string into sign, hours and minutes. Rest of the code is following the logic explained above.
Calculating elapsed time
We can use a Date object to calculate time taken by JavaScript to complete a particular task. For example, let us calculate how long does it take JavaScript to run a for loop which goes from 1 to
1000000. We will print the results on the console using console.log().
<button onclick=calculateElapsedTime()>Click Me!</button>
function calculateElapsedTime() {
var start = Date.now();
var count = 0;
for (var i = 1; i <= 1000000; i++) {
var end = Date.now();
console.log("Time taken: ", end - start, " milliseconds");
We have used the Date.now() method which returns the number of milliseconds passed since January 1 1970 UTC. We store the time before starting the loop and after the loop has ended we print the
difference of the two values.
This prints:
Time take: 4 milliseconds
on my machine (ranges from 3 - 5 milliseconds on different runs). The execution time might be different on your machine.
We learnt about two very important modules of JavaScript. The methods in these modules will be helpful when you implement your projects. | {"url":"https://www.commonlounge.com/javascript-utilities-math-date-817889ba1296493485ff8f047f1229f3","timestamp":"2024-11-09T04:30:43Z","content_type":"text/html","content_length":"50093","record_id":"<urn:uuid:f3b795a4-e415-45c7-a58e-fe3cbde5ad95>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00130.warc.gz"} |
NOTE: A technique ecologists use for this sort of problem involves sampling and extrapolation. They place a frame called a quadrat of a known size (usually 1 meter square) on the ground and count the
number of each type of plant inside the frame. Then, they move the frame to a different place, and count the plants there. After doing that a few times, they calculate the average number of plants in
a square meter. Next, the scientist measures the total area of the land they are studying, in square meters. Finally, they multiply the average number of plants per square meter by the number of
square meters of land to estimate the total number of plants on the whole plot of land. Tell students that they will apply this approach to the strawberry pie problem. | {"url":"https://oceanservice.noaa.gov/education/marine-ecosystem-modeling-vr/observations-models/activity-5.html","timestamp":"2024-11-06T07:57:11Z","content_type":"text/html","content_length":"45210","record_id":"<urn:uuid:c4d55463-6bbf-4cec-a2b3-cab8c43791e2>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00422.warc.gz"} |
What is a right pentagonal prism?
What is a right pentagonal prism?
The pentagonal prism is a prism having two pentagonal bases and five rectangular sides. It is a heptahedron. The regular right pentagonal prism is uniform polyhedron. . Its dual polyhedron is the
pentagonal dipyramid.
Is there a formula for volume of right prisms?
The formula for the volume of a prism is V=Bh , where B is the base area and h is the height.
How do you find the volume of a pentagonal pyramid?
Formula for the volume of a pentagonal pyramid
1. (1) Volume pyramid = Area base × Height 3.
2. Area base = Area pentagon = Perimeter × Apothem2.
3. Volume = 5 × Side × Apothem × Height 6.
4. Apothem = side2•tan(α÷2)
5. Height = √( 5-√510.
6. Volume = 5 × side ² × height 12 × Tan(36°)
Are any of the regular polyhedra prisms?
Determine if the following figures are polyhedra. If so, name the figure and find the number of faces, edges, and vertices. A truncated icosahedron is a polyhedron with 12 regular pentagonal faces
and 20 regular hexagonal faces and 90 edges….Review.
Name Rectangular Prism
Faces 6
Edges 12
What is a six sided prism called?
Hexagonal Prism
A hexagonal prism is a prism composed of two hexagonal bases and six rectangular sides. It is an octahedron. The regular right hexagonal prism is a space-filling polyhedron.
What is the volume of a right angle triangle?
For finding the formula of a triangle, you need to be sure that it is a triangular prism since there is no way to find the volume of a two-dimensional triangle. So, the right triangle volume formula
can be said to be, V = ½ bhl where b is the base, h is the height, and l is the length.
How do you calculate the volume of a pentagon?
As with any prism, the volume can be calculated by finding the product of the area of the base multiplied by the height. The area of the pentagonal base is determined by a formula using the number of
sides, the length of a side and a measurement known as the apothem, which is the perpendicular distance from any side to the center of the pentagon.
How do I calculate the volume of a pentagonal pyramid?
Volume of a pentagonal pyramid = Area of base x height. Since every dimension is multiplied by 5, Area is multiplied by 5² and height by 5. So volume is multiplied by 5^3 = 125.
How do you calculate the volume of a right triangular prism?
The volume of a triangular prism can be found by multiplying the base times the height. Both of the pictures of the Triangular prisms below illustrate the same formula. The formula, in general, is
the area of the base (the red triangle in the picture on the left) times the height,h. The right hand picture illustrates the same formula.
What is the formula for finding the volume of a square prism?
The volume formula for a prism equals the length times width times the height of the prism. The formula is written l*w*h. The volume formula for a square pyramid equals 1/3 times length times width
times the height of the pyramid.
What is a right pentagonal prism? The pentagonal prism is a prism having two pentagonal bases and five rectangular sides. It is a heptahedron. The regular right pentagonal prism is uniform
polyhedron. . Its dual polyhedron is the pentagonal dipyramid. Is there a formula for volume of right prisms? The formula for the volume of… | {"url":"https://www.ohare-airport.org/what-is-a-right-pentagonal-prism/","timestamp":"2024-11-05T03:23:18Z","content_type":"text/html","content_length":"35993","record_id":"<urn:uuid:da772862-9ec0-4c6f-a157-6583b459db2e>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00654.warc.gz"} |
Compressive strength of M25 concrete after 7 days & 28 days - Civil Sir
Compressive strength of M25 concrete after 7 days & 28 days
The compressive strength of M25 concrete after 7 days is typically around 65% of its 28-day strength. For M25 grade concrete, the target compressive strength at 28 days is approximately 25
megapascals (MPa). Therefore, you can expect the compressive strength after 7 days to be around 16.25 MPa
Hi guys in this article we know about compressive strength of M25 concrete after 7 days, 14 days and 28days of curing.
As we know compressive strength is measured by compressive strength test machine (CTM). Compressive strength is defined as ratio of load applied by CTM machine on concrete cube or cylinder to surface
area of concrete cube. Compressive strength is represented by F which is equal to F = P/A, where F = compressive strength,P= total load applied by CTM machine & A = cross sectional surface area.
Compressive strength of M25 concrete after 7 days & 28 days
Generally strength of concrete measured in psi (pound force per square inch in USA) & MPa (mega pascal) in India and other country. MPa in another terms represented as N/mm2. And 1MPa = 145.038 psi.
In this topic we have to find compressive strength of M25 concrete, if it achieve strength 25MPa or 3626 Psi by cube test buy CTM machine, other than it rejected, so compressive strength of M25
concrete is 25MPa or 3626 Psi.
Overall Strength of a concrete structure such as flexural resistance and abrasion directly depends upon the compressive strength of concrete.
Compressive strength of M25 grade of concrete after 7,14 & 28 days
This strength is measured by CTM testing Standard 15cm larger & 10cm smaller cubes in India and standard cylinder specimens dai 15cm & height 30cm in USA and a few other countries.
As per IS code the cube strength achieved by the concrete in 7 days is about 65%, in 1 day is about 16%, in 3 days is about 40%, in 14 days is about 90%, in 21 days is about 94% and in 28 days is
approximately 99%.
As per IS code the cube strength achieved by the M25 concrete in 7 days is about 16.24 MPa, in 1 day is about 4 MPa, in 3 days is about 10 MPa, in 14 days is about 22.5 MPa, in 21 days is about 23.5
MPa and in 28 days is approximately 25 MPa.
M25 7 days strength is approximately 16.25 MPa (2360 psi), or 16.25 N/mm2, or 162.5 kg/cm2. Therefore, the strength of M25 after 7 days is approximately 16.25 N/mm2 or 162.5 kg/cm2.
M25 3 days strength is approximately 10 MPa (1450 psi), or 10 N/mm2, or 100 kg/cm2. Therefore, the strength of M25 after 3 days is approximately 10 N/mm2 or 100 kg/cm2.
M25 14 days strength is approximately 23.5 MPa (3260 psi), or 23.5 N/mm2, or 230.5 kg/cm2. Therefore, the strength of M25 after 14 days is approximately 23.5 N/mm2 or 230.5 kg/cm2.
M25 28 days strength is approximately 25 MPa (3625 psi), or 25 N/mm2, or 250 kg/cm2. Therefore, the strength of M25 after 28 days is approximately 25 N/mm2 or 250 kg/cm2.
Compressive strength of M25 concrete at 7 days, 14 days and 28 days
The compressive strength of M25 concrete is approximately 10 MPa (1450 psi) at 3 days, approximately 16.25 MPa (2360 psi) at 7 days, approximately 23.5 MPa (3260 psi) at 14 days, and approximately 25
MPa (3625 psi) at 28 days.
M25 concrete strength:– The typical compressive strength of M25 concrete is approximately 16.25 N/mm^2 at 7 days and approximately 25 N/mm^2 at 28 days.
The grade of M25 concrete is denoted by the letter M or C (Europe) stand for mix & followed by numerical figure is compressive strength. Thus compressive strength of M25 concrete is 25N/mm2 (25MPa)
or 3626Psi.
Compressive strength of M25 concrete after 7 days
Making of at least 3 concrete cube size each 150mm×150mm×150mm in mould by cement sand and aggregate ratio 1:1:2, use tamping rod for levelling the surface of mould, it is kept for 24 hours setting
after water mix in concrete, after 24 hours it is kept in water for curing for 7 days. And taken out just before test 7 days to find out compressive strength of M25 concrete after 7 days of curing
● Calculation: Now concrete cube test by CTM machine,assuming 366 kN load is applied on concrete cube till the cube collapse. The maximum load at which the specimen breaks is taken as a compressive
Compressive load = 366 kN, cross sectional surface area A = 150mm ×150mm = 22500mm2 or 225cm2, then compressive strength F = P/A = 366 kN/22500mm2 = 16.25 N/mm2.
The compressive strength of M25 concrete is approximately 16.25 MPa (2360 Psi) after 7 days. To calculate strength of M25 concrete at 7 days used the formula:- F = P/A, compressive load = 366 kN,
cross sectional area A = 150mm × 150 mm = 22,500 mm^2, then compressive strength F = 366 /22,500 = 16.25 N/mm2.
Compressive strength of M25 concrete after 14 days
Making of at least 3 concrete cube size each 150mm×150mm×150mm in mould by cement sand and aggregate ratio 1:1:2, use tamping rod for levelling the surface of mould, it is kept for 24 hours setting
after water mix in concrete, after 24 hours it is kept in water for curing for 14 days. And taken out just before test 14 days to find out compressive strength of M25 concrete after 14 days of curing
● Calculation: Now concrete cube test by CTM machine,assuming 506 kN load is applied on concrete cube till the cube collapse. The maximum load at which the specimen breaks is taken as a compressive
Compressive load = 506 kN, cross sectional surface area A = 150mm ×150mm = 22500mm2 or 225cm2, then compressive strength F = P/A = 506 kN/22500mm2 = 22.5 N/mm2.
The compressive strength of M25 concrete is approximately 22.5 MPa (3263 Psi) after 14 days. To calculate strength of M25 concrete at 14 days used the formula:- F = P/A, compressive load = 506 kN,
cross sectional area A = 150mm × 150 mm = 22,500 mm^2, then compressive strength F = 506 /22,500 = 22.5 N/mm2.
Compressive strength of M25 concrete after 28 days
Making of at least 3 concrete cube size each 150mm×150mm×150mm in mould by cement sand and aggregate ratio 1:1:2, use tamping rod for levelling the surface of mould, it is kept for 24 hours setting
after water mix in concrete, after 24 hours it is kept in water for curing for 28 days. And taken out just before test 28 days to find out compressive strength of M25 concrete after 28 days.
● Calculation: Now concrete cube test by CTM machine,assuming 563 kN load is applied on concrete cube till the cube collapse. The maximum load at which the specimen breaks is taken as a compressive
Compressive load = 563 kN, cross sectional surface area A = 150mm ×150mm = 22500mm2 or 225cm2, then compressive strength F = P/A = 563 kN/22500mm2 = 25N/mm2.
The compressive strength of M25 concrete is approximately 25 MPa (3626 Psi) after 28 days. To calculate strength of M25 concrete at 28 days used the formula:- F = P/A, compressive load = 563 kN,
cross sectional area A = 150mm × 150 mm = 22,500 mm^2, then compressive strength F = 563 /22,500 = 25 N/mm2.
M25 concrete strength over time: Relationship between M25 concrete strength according to time is not linear, it means increasing strength is not increases according to applied load as time
increases,it will increase in nonlinear.
Concrete is a macro content with Sand, Cement, & Coarse aggregate as its micro-ingredient (Mix Ratio) and gains its 100% strength over time at the hardened state.
Take a look at the below table. M25 Concrete Strength Overtime
Days after Casting Strength Gain
Day 1 ____ 16% __ 4MPa
Day 3 ____ 40% __ 10MPa
Day 7 ____ 65% __ 16.25MPa
Day 14 ___ 90% __ 22.5MPa
Day 28 ____ 99% __ 25MPa
As you can see the m25 concrete gains its strength rapidly till 7th & 14th Days upto 90% after curing, then gradually increases from there. So we can’t predict the strength until the concrete comes
to that stable state.
Once it attains certain strength at 7 days, then we know (according to the table) only 9% of strength going to increase. So at sites, we do normally test concrete at this interval. If the concrete
fails at 14 days, then we will reject that batching.
Compressive strength of M25 concrete cube test
Concrete cube test Apparatus for procedure and result completed in following steps:
● 1) IS code:- this concrete cube test it is completed according to IS code 516
● 2) Required Equipment & Apparatus:
a) Tamping rod:- tamping rod is used for levelling the surface of concrete cube mould, it is 16mm Dia and 60cm in length.
b) CTM machine : CTM machine is required for load applying on concrete cube mould, it should be apply minimum load of 14N/mm2/minute.
c) THREE type of mould: there is two size of concrete cube mould is used for test, first is larger size of 150mm or 15cm have specific dimension (l×b×h) is 150mm×150mm×150mm with aggregate size is 38
mm and second smaller size concrete cube mould size is 100mm×100mm×100mm with aggregate size 19mm used in India.
In USA and other country cylindrical concrete mould is also used have dia 150mm, height 300mm & aggregate size is 38mm.
d) other Apparatus is G.I Sheet (For Making Concrete),Vibrating Needle, tray & other tools.
● 3) Environmental factors:- for standard calculation of compressive strength of concrete environmental factors should be optimum, minimum number of test specimen should be 3, temperature should be
27± 2℃ and humidity is 90%
Concrete cube Test procedure
a) Measure the dry proportion of ingredients (Cement, Sand & Coarse Aggregate) in ratio 1:1:2 as per the M25 concrete. The Ingredients should be sufficient enough to cast test cubes.
b) first mix cement and sand till it got uniform colour then added aggregate in it, thoroughly mix the dry ingredients to obtain the uniform colour of mixture and Add design quantity of water to the
dry proportion (water-cement ratio) and mix well to obtain uniform texture
c) Fill the concrete to the mould with the help of vibrator and used tamping rod for thorough compaction and levelling the surface of concrete cube mould,Finish the top of the concrete by trowel &
tapped well till the cement slurry comes to the top of the cubes.
d) After some time the mould should be covered with red gunny bag and put undisturbed for 24 hours at a temperature of 27± 2℃,After 24 hours remove the specimen from the mould.
e) Keep the specimen submerged under fresh water at 27± 2℃ for curing, the specimen should be kept for 7,14 or 28 days. Every 7 days the water should be renewed. The specimen should be removed from
the water 30 minutes prior to the testing and the specimen should be in dry condition before conducting the testing.
● 5) Testing of concrete cube: Now place the concrete cubes into the (CTM) testing machine at centre. The cubes should be placed correctly on the machine plate (check the circle marks on the
machine). Carefully align the specimen with the spherically seated plate. The load will be applied to the specimen axially.
Now slowly apply the load at the rate of 14N/mm2/minute till the cube collapse.
The maximum load at which the specimen breaks is taken as a compressive load.
● 6) Calculation:
Compressive Strength of concrete = Maximum compressive load / Cross Sectional Area, cross sectional Area = 150mm X 150mm = 22500mm2 or 225 cm2, assume the compression load is 563 KN, then Compressive
Strength of M25 concrete after 28 days = (563N/22500mm2= 25N/mm2 (20MPa) or 3626 Psi.
◆You Can Follow me on Facebook and Subscribe our Youtube Channel
You should also visits:-
1)what is concrete and its types and properties
2) concrete quantity calculation for staircase and its formula
The compressive strength of M25 concrete is approximately 10 MPa (1450 psi) after 3 days, approximately 16.25 MPa (2360 psi) after 7 days, approximately 23.5 MPa (3260 psi) after 14 days, and
approximately 25 MPa (3625 psi) after 28 days.
4 thoughts on “Compressive strength of M25 concrete after 7 days & 28 days”
1. it article was helpful me.
2. What should be the compressive strength result of DESIGN MIX m25 grade cube after 28 days
IS IT 25+4 =29 or 25
3. What if m25 grade cube attains 10.5 n/mm2 in 7 days. What is the strength of concrete.is it good or bad or ok
4. according to Design mix m25 grade achive 65% but at field its gain 700kn why
Leave a Comment | {"url":"https://civilsir.com/compressive-strength-of-m25-concrete-after-7-days-28-days/","timestamp":"2024-11-08T15:34:13Z","content_type":"text/html","content_length":"111915","record_id":"<urn:uuid:8d0bf8fb-7355-453e-bc56-682af862707e>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00701.warc.gz"} |
Separability of Finite Topological Products
Separability of Finite Topological Products
Recall from the First Countability of Finite Topological Products and the Second Countability of Finite Topological Products pages that if $\{ X_1, X_2, ..., X_n \}$ is a finite collection of first/
second countable topological spaces then the topological product $\displaystyle{\prod_{i=1}^{n} X_i}$ is also first/second countable.
We will now look at another property that is inherited from a finite collection of topological spaces - separability!
Theorem 1: Let $\{ X_1, X_2, …, X_n \}$ be a finite collection of topological spaces. If $X_i$ is separable for each $i \in I$ then the topological product $\displaystyle{\prod_{i=1}^{n} X_i}$ is
also separable.
• Proof: Let $\{ X_1, X_2, ..., X_n \}$ be a finite collection of separable topological spaces. Then each $X_i$ contains a countable and dense subset $A_i$.
• We form a countable and dense subset of $\displaystyle{\prod_{i=1}^{n} X_i}$ as the product of all of these countable dense subsets:
\quad A = \prod_{i=1}^{n} A_i
• Clearly $A$ is countable since it is the product of a finite collection of countable sets. Furthermore, we claim that $A$ is dense in $\displaystyle{\prod_{i=1}^{n} A_i}$. To show this, let $U =
U_1 \times U_2 \times ... \times U_n \in \prod_{i=1}^{n} X_i$ be open. Then $U_i$ is open in $X_i$ for each $i \in I$. Since $A_i$ is dense in $U_i$ we have that:
\quad A_i \cap U_i \neq \emptyset
• So there exists a point $a_i \in A_i \cap U_i$ for each $i \in I$. So the point $\mathbf{a} = (a_1, a_2, ..., a_n) \in A \cap U$ which shows that $A \cap U \neq \emptyset$ for each open set $U$
in the topological product.
• Therefore $\displaystyle{\prod_{i=1}^{n} X_i}$ is separable. $\blacksquare$ | {"url":"http://mathonline.wikidot.com/separability-of-finite-topological-products","timestamp":"2024-11-09T08:11:55Z","content_type":"application/xhtml+xml","content_length":"16017","record_id":"<urn:uuid:b9968cf0-da48-48c4-9b11-e04bf8a9c756>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00423.warc.gz"} |
Models Lectures Start Today
Five sessions of 2 hours duration each, divided into nine lectures. It will introduce models, the ideas and their relationship to theory, science and practical problem solving. It will begin with
spatial interaction, move to integrated land use transport models, & conclude with developments based on agent-based models (ABM) & cellular automata (CA). This is the first of a double header in the
MRes lectures on Spatial Modelling & Simulation.
This entry was posted in Agent-Based Models, Cellular Automata, Entropy, Interactions, LUTI models, Urban Models. Bookmark the permalink. | {"url":"http://www.spatialcomplexity.info/urban-modelling-lectures-start-january-12th-at-4pm/","timestamp":"2024-11-06T18:46:03Z","content_type":"text/html","content_length":"40351","record_id":"<urn:uuid:10d13b56-69d1-4460-8bcd-46965a341ea2>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00384.warc.gz"} |
[Solved] The value of ∫(1−x2)1+x2+x4(1+x2)dx, is - Integral C... | Filo
Not the question you're searching for?
+ Ask your question
Sol. Let Put Again, put Hence (a) is the correct answer.
Was this solution helpful?
Found 2 tutors discussing this question
Discuss this question LIVE for FREE
13 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice questions from Integral Calculus (Amit M. Agarwal)
View more
Practice more questions from Integrals
Practice questions on similar concepts asked by Filo students
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text The value of , is
Updated On May 4, 2023
Topic Integrals
Subject Mathematics
Class Class 12
Answer Type Text solution:1 Video solution: 2
Upvotes 349
Avg. Video Duration 14 min | {"url":"https://askfilo.com/math-question-answers/the-value-of-int-fracleft1x2right-d-xleft1-x2right-sqrt1x2x4-isa-frac12-sqrt3","timestamp":"2024-11-14T21:37:10Z","content_type":"text/html","content_length":"749703","record_id":"<urn:uuid:4ad13fd4-467c-40f6-bacc-f7eee6b04dfb>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00662.warc.gz"} |
[Solved] In the given figure, A,B,C and D are four points... | Filo
In the given figure, and are four points on a circle. and BD intersect at a point E such that and Find
Not the question you're searching for?
+ Ask your question
...Angles in the segment of the circle
In ,
...Sum of the angles in a triangle
Was this solution helpful?
Found 8 tutors discussing this question
Discuss this question LIVE
10 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions from Mathematics (NCERT)
Practice questions from Mathematics (NCERT)
View more
Practice more questions from Circles
Practice questions on similar concepts asked by Filo students
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text In the given figure, and are four points on a circle. and BD intersect at a point E such that and Find
Updated On Apr 8, 2023
Topic Circles
Subject Mathematics
Class Class 9
Answer Type Text solution:1 Video solution: 1
Upvotes 102
Avg. Video Duration 15 min | {"url":"https://askfilo.com/math-question-answers/in-the-given-figure-mathrm-a-mathrm-b-mathrm-c-and1zk","timestamp":"2024-11-03T15:06:47Z","content_type":"text/html","content_length":"401690","record_id":"<urn:uuid:31315217-6ce3-4f1b-b0ff-c10f6a7d15d3>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00094.warc.gz"} |
Boats and Streams
Boats and Streams: Practice Questions
A man takes 20 minutes to row 12 km upstream which is a third more than the time he takes on his way downstream. What is his speed in still water?
A. 41 km/hr
B. 36 km/hr
C. 42 km/hr
D. 45 km/hr
How long will it take to row 20 km upstream if one can row 10 km in 10 minutes in still water and the same distance in 8 minutes with the stream?
A. 12 min
B. 13.33 min
C. 24 min
D. 26.67 min
A boat makes a return journey from point A to point B and back in 5 hours 36 minutes. One way it travels with the stream and on the return it travels against the stream. If the speed of the stream
increases by 2 km/hr, the return journey takes 9 hours 20 minutes. What is the speed of the boat in still water? (The distance between A and B is 16 km.)
A. 5 km/hr
B. 3 km/hr
C. 7 km/hr
D. 9 km/hr
A boat travels from point A to B, a distance of 12 km. From A it travels 4 km downstream in 15 minutes and the remaining 8 km upstream to reach B. If the downstream speed is twice as high as the
upstream speed, what is the average speed of the boat for the journey from A to B?
A. 10(2/3)km/hr
B. 9.6 km/hr
C. 11.16 km/hr
D. 10.44 km/hr
A man rows ‘k’ km upstream and back again downstream to the same point in H hours. The speed of rowing in still water is s km/hr and the rate of stream is r km/hr. Then
A. (s^2-r^2) =2sk /H
B. (r + s) = kH / (r -s)
C. rs = kH
D. None of the above
Must Read Boat and Streams Articles
• Boats and Streams: Practice Questions
A man rows 24 km upstream in 6 hours and a distance of 35 km downstream in 7 hours. Then the speed of the man in still water is
A. 4.5 km/hr
B. 4 km/hr
C. 5 km/hr
D. 5.5 km/hr
A boat goes 12 km upstream in 48 minutes. The speed of stream is 2 km/hr. The speed of boat in still water is
A. 15 km/hr
B. 16 km/hr
C. 17 km/hr
D. 18 km/hr
A motorboat can travel at 5 km/hr in still water. It travelled 90 km downstream in a river and then returned, taking altogether 100 hours. Find the rate of flow of the river.
A. 3 km/hr
B. 3.5 km/hr
C. 2 km/hr
D. 4 km/hr
A boatman can row 2 km against the stream in 20 minutes and return in 10 minutes. Find the rate of flow of the current.
A. 2 km/h
B. 1 km/h
C. 3 km/h
D. 5 km/h | {"url":"https://www.hitbullseye.com/Boat-and-Stream-Questions.php","timestamp":"2024-11-07T00:46:26Z","content_type":"text/html","content_length":"95967","record_id":"<urn:uuid:11041b43-c972-4354-a249-4e354f11e4bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00174.warc.gz"} |
CASE STUDY 3 | Solar AI
For Rigs: The current price for an Iceriver AL3 (15 TH/s) miner in Dubai is approximately $11,990 to $13,100
For Solar Panel: The cost of a 10 kW solar panel system is around $11,570 to $12,250
For 20 mining units and 14 panels the approximate cost is around $400,000
The on grid electricity cost incurred by 20 rigs is approximately $5,000 per month, hence the transition from on grid to solar panel can happen at a healthy rate
For Rigs:
Calculations are made with the cost of 1 unit = $11,990
For Solar Panels:
Calculations are made with the cost of 1 unit = $9,500
Solar Installation Cost: $133,300 (for 14 panels).
Annual Savings on Electricity: $60,480
The payback period is:
After approximately 2.2 years, the solar panel system would have paid for itself through savings on electricity costs.
40% of the revenue generated every month goes back into buying more rigs and panels. Let us assume that the Solar AI token has a modest $200,000 trading volume per month. With a 5% buy and sell tax
that amounts to $10,000 per month, out of which 60% that is $6000 goes back into buying equipment.
Taking these parameters into account at current market conditions, the time taken to double our equipment will be:
11,980+9,500 = $21,480
The reinvestment amount of $22,981 per month is slightly more than the cost to add one miner and one solar panel.
When ALPH = $1.59
The approximate doubling time will be 20 months in this case
When ALPH = $7.38
The approximate doubling time will be 5 months in this case
When ALPH = $13.85
The approximate doubling time will be 2.5 months in this case | {"url":"https://solar-ai.gitbook.io/solar-ai/case-study/case-study-3","timestamp":"2024-11-06T15:42:53Z","content_type":"text/html","content_length":"242296","record_id":"<urn:uuid:3c0ee497-60d4-47cc-8381-5c554ead3051>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00482.warc.gz"} |
Select Chapter Topics:
A bus is moving at a speed of \(10\) ms^-1 on a straight road. A scooterist wishes to overtake the bus in \(100\) s. If the bus is at a distance of \(1\) km from the scooterist, with what speed
should the scooterist chase the bus?
1. \(20\) ms^-1
2. \(40\) ms^-1
3. \(25\) ms^-1
4. \(10\) ms^-1
Subtopic: Relative Motion in One Dimension |
From NCERT
AIPMT - 2009
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
A particle starts its motion from rest under the action of a constant force. If the distance covered in the first \(10\) s is \(S_1\) and that covered in the first \(20\) s is \(S_2\), then:
1. \(S_2=2S_1\)
2. \(S_2 = 3S_1\)
3. \(S_2 = 4S_1\)
4. \(S_2= S_1\)
Subtopic: Acceleration |
From NCERT
AIPMT - 2009
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
The distance travelled by a particle starting from rest and moving with an acceleration \(\frac{4}{3}\) ms^-2, in the third second is:
1. \(6\) m
2. \(4\) m
3. \(\frac{10}{3}\) m
4. \(\frac{19}{3}\) m
Subtopic: Distance & Displacement |
From NCERT
AIPMT - 2008
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
A particle shows the distance-time curve as given in this figure. The maximum instantaneous velocity of the particle is around the point:
1. B
2. C
3. D
4. A
Subtopic: Graphs |
From NCERT
AIPMT - 2008
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
A particle moves in a straight line with a constant acceleration. It changes its velocity from \(10\) ms^-1 to \(20\) ms^-1 while covering a distance of \(135\) m in \(t\) seconds. The value of \(t\)
│1.│\(10\) │2.│\(1.8\) │
│3.│\(12\) │4.│\(9\) │
Subtopic: Uniformly Accelerated Motion |
From NCERT
AIPMT - 2008
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
The position of a particle with respect to time \(t\) along the \(\mathrm{x}\)-axis is given by \(x=9t^{2}-t^{3}\) where \(x\) is in metres and \(t\) in seconds. What will be the position of this
particle when it achieves maximum speed along the \(+\mathrm{x} \text-\text{direction}?\)
1. \(32\) m
2. \(54\) m
3. \(81\) m
4. \(24\) m
Subtopic: Non Uniform Acceleration |
From NCERT
AIPMT - 2007
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
A car moves from \(X\) to \(Y\) with a uniform speed \(v_u\) and returns to \(X\) with a uniform speed \(v_d.\) The average speed for this round trip is:
│1.│\(\dfrac{2 v_{d} v_{u}}{v_{d} + v_{u}}\) │2.│\(\sqrt{v_{u} v_{d}}\) │
│3.│\(\dfrac{v_{d} v_{u}}{v_{d} + v_{u}}\) │4.│\(\dfrac{v_{u} + v_{d}}{2}\)│
Subtopic: Average Speed & Average Velocity |
From NCERT
AIPMT - 2007
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
A particle moving along the x-axis has acceleration \(f,\) at time \(t,\) given by, \(f=f_0\left ( 1-\frac{t}{T} \right ),\) where \(f_0\) and \(T\) are constants. The particle at \(t=0\) has zero
velocity. In the time interval between \(t=0\) and the instant when \(f=0,\) the particle’s velocity \( \left ( v_x \right )\) is:
1. \(f_0T\)
2. \(\frac{1}{2}f_0T^{2}\)
3. \(f_0T^2\)
4. \(\frac{1}{2}f_0T\)
Subtopic: Non Uniform Acceleration |
From NCERT
AIPMT - 2007
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
A particle moves along a straight line \(OX.\) At a time \(t\) (in seconds), the displacement \(x\) (in metres) of the particle from \(O\) is given by \(x= 40 +12t-t^3.\) How long would the particle
travel before coming to rest?
│1.│\(24~\text m\) │2.│\(40~\text m\) │
│3.│\(56~\text m\) │4.│\(16~\text m\) │
Subtopic: Instantaneous Speed & Instantaneous Velocity |
From NCERT
AIPMT - 2006
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
Two bodies, \(A\) (of mass \(1~\text{kg}\)) and \(B\) (of mass \(3~\text{kg}\)) are dropped from heights of \(16~\text{m}\) and \(25~\text{m}\), respectively. The ratio of the time taken by them to
reach the ground is:
1. \(\frac{5}{4}\)
2. \(\frac{12}{5}\)
3. \(\frac{5}{12}\)
4. \(\frac{4}{5}\)
Subtopic: Uniformly Accelerated Motion |
From NCERT
AIPMT - 2006
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
Select Chapter Topics: | {"url":"https://www.neetprep.com/questions/55-Physics/677-Motion-Straight-Line?courseId=8&testId=1568843-Recommended-PYQs-STRICTLY-NCERT-Based-&questionId=92806-bus-moving-speed--mson-straight-road-scooterist-wishesto-overtake-bus--s-bus-distance--km-fromthe-scooterist-speed-scooterist-chase-bus--ms--ms--ms--ms","timestamp":"2024-11-13T15:04:10Z","content_type":"text/html","content_length":"421952","record_id":"<urn:uuid:cdb5e558-aece-43a5-8bb8-5d7a3a320096>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00050.warc.gz"} |
Area of a Trapezoid Calculator [Easy To Use + Guide To Results] | Pi Day
Area of a Trapezoid Calculator
How to Calculate the Area of a Trapezoid
A trapezoid is an interesting four-sided geometric figure. It has two parallel sides and the remaining two sides can be of any length, at any angle. Some possible trapezoid shapes are shown below to
clarify the concept. Notice that the parallel lines are marked with arrows.
In real life there are a lot of objects with trapezoid shapes that you may or may not have noticed. See some examples below. Are you surprised?
Now that you are clear about the shape of a trapezoid, let’s discuss the parameters you need to know, in order to find its area. There are three important lengths that you need to know to find the
area of a trapezoid: lengths of the two parallel sides ‘a’ and ‘b’ and the height. The height is the perpendicular distance between the two parallel sides. By perpendicular distance, we mean that the
length of the line that joins parallel side ‘a’ and ‘b’ and is exactly 90 degrees to them.
The area of a trapezoid, A, is given as:
This formula is derived from the concept of the area of a triangle. You might already know how to calculate the area of a triangle, but we will review it briefly, just in case you have forgotten or
you don’t know. Two parameters that you need to know to find the area of triangle are the height of the triangle and the base of the triangle. The height of the triangle is given as the perpendicular
distance from one corner of the triangle to the base level. Whichever side of the triangle you select as ‘base’, measure the height by considering the corner exactly opposite to the base. See the
diagrams below for more clarity on the height-base concept.
Do not get confused if the shape of the triangle is not what you typically expect. Remember the concept of base and height and label accordingly.
The area of the triangle is given as:
Now, how does this knowledge help us in figuring out the formula of the area of a trapezoid? Let’s see.
Look carefully and you will notice that a trapezoid can be cut diagonally to form two triangles:
If we find the area of these two triangles and then add them, we will get the area of the whole trapezoid! The base of the upper triangle is length ‘a’ and the base of the lower triangle is the
length ‘b’. The height of both triangles is the same.
Area of the upper triangle is given as:
Area of the lower triangle is given as:
Therefore, area of the trapezoid will be:
Taking \(\frac{h}{2}\) as the common factor we get:
Hopefully, now you fully understand the concept behind the formula of the area of a trapezoid. Let’s do some examples.
Example 1:
Find the area of the trapezoid given below:
From the figure we can see that:
a = 4 cm
b = 9 cm
h = 5 cm
Let area of trapezoid be represented by variable ‘A’
A = ?
Apply the formula for area of a trapezoid:
Example 2:
A trapezoid having an area of 98 cm^2, has two parallel sides of lengths 16 cm and 12 cm. What is the perpendicular distance between the two parallel sides?
We are given the following parameters:
Parallel side 1 = a= 16 cm
Parallel side 2 = b= 12 cm
Area of Trapezoid = A = 98 cm^2
We have to find the perpendicular distance between two parallel sides. As we mentioned earlier in the article, it is the height of the trapezoid.
h = ?
Recall the formula for the area of a trapezoid and solve for “h”.
Now, we put in the known values and find the height:
Example 3:
The area of the trapezoid given below is 100 cm^2. Find the unknown length of parallel side ‘a’.
One side of this trapezium makes a 90-degree angle with both parallel sides. This means that the height of trapezoid and the length of this side is the same. We are there given the following
Area of trapezoid = A = 100 cm^2
Height = h = 10 cm
Parallel side 2 = b = 11 cm
Parallel side 1 = a = ?
To find ‘a’ we rearrange the formula for area of a trapezoid to solve for “a”:
Now, put in the known values to get the final answer:
Final Thoughts!
We have tried to cover pretty much everything there is to know about the area of a trapezoid, from its derivation to solving different problems. Geometry is a very important branch of mathematics and
learning about all the shapes that exist in the real world is crucial, especially if you are thinking of becoming an engineer one day! After learning the theory, you can use our area of a trapezoid
calculator to quickly get answers to your problems and save time! | {"url":"https://www.piday.org/calculators/area-of-a-trapezoid-calculator/","timestamp":"2024-11-10T21:03:29Z","content_type":"text/html","content_length":"77517","record_id":"<urn:uuid:72dff249-1789-407f-b029-34b11ff28067>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00479.warc.gz"} |
The Lindström Lectures 2017
Per Lindström’s work on interpretations has great beauty. He was a grand master of dazzling diagonal arguments. In this talk we will explain the basic setting underlying Per’s work. We introduce the
notion of interpretation and provide some examples of interpretations. We show how, in the context of arithmetic, the notion of interpretability has an almost unrecognizable equivalent. This
equivalence is known as the Orey–Hájek Characterization.
We will discuss some results of Per and have a look at further developments.
We study the Second Incompleteness Theorem, G2, in the Feferman-style. This means that we work with a fixed provability-predicate but allow the representations of the axiom set to vary. Feferman
observed that the axiom set of Peano Arithmetic, PA, has a Π01-representation for which PA proves its own consistency.
We isolate a condition that Feferman’s example fails to satisfy. This condition gives a reasonably general version of G2. We show that this version yields a proof of G2 for Σ01-semi-numerations of
the axiom set which works even if the theory itself is not recursively enumerable. We discuss an interesting example that illustrates that we may have G2 even in the absence of the Löb conditions. | {"url":"https://logic-gu.se/lindstrom-lectures/the-lindstrom-lectures-2017/","timestamp":"2024-11-05T09:26:19Z","content_type":"text/html","content_length":"10859","record_id":"<urn:uuid:a274816e-6d4c-4d5d-a353-d7725ef31b46>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00753.warc.gz"} |
Math 9 – Feb 12: Review
Chapter 1 test is tomorrow! Make sure your workbook is complete and ready to hand in before the test.
Topics to review:
• define: Natural, Whole, Integer, Rational Numbers
• add, subtract, multiply, divide integers (positive and negative)
• add, subtract, multiply, divide rational numbers (fractions)
• convert between decimals and fractions
• convert between mixed and improper fractions
• reduce fractions to lowest terms
Need help with any of these topics? Look at posts from past classes to find helpful videos or try searching Khan Academy on youtube
Feb 3: operations-with-integers
Feb 4: operations-with-integers
This entry was posted in Math 9. Bookmark the permalink. | {"url":"https://mrsdildy.com/math-9-feb-12-review/","timestamp":"2024-11-05T09:06:58Z","content_type":"text/html","content_length":"28585","record_id":"<urn:uuid:6ea44ab0-07fb-42d2-860a-a74611ec79ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00356.warc.gz"} |
Solve xdy−ydxxdx+ydy=x2+y2a2−x2−y2... | Filo
Not the question you're searching for?
+ Ask your question
From the given equation we have Let
Was this solution helpful?
Found 7 tutors discussing this question
Discuss this question LIVE for FREE
10 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions from Differential Equations
View more
Practice questions on similar concepts asked by Filo students
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text Solve
Updated On Feb 4, 2023
Topic Differential Equations
Subject Mathematics
Class Class 12
Answer Type Text solution:1 Video solution: 1
Upvotes 224
Avg. Video Duration 1 min | {"url":"https://askfilo.com/math-question-answers/solve-fracx-d-xy-d-yx-d-y-y-d-xsqrtfraca2-x2-y2x2y2-313368","timestamp":"2024-11-04T01:36:15Z","content_type":"text/html","content_length":"484579","record_id":"<urn:uuid:bb9ce4c8-12f9-4936-b9d6-e4965caff690>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00582.warc.gz"} |
Which algorithm is used for computing the gcd of sparse polynomial
Which algorithm is used for computing the gcd of sparse polynomial
There are lot of algorithms that compute the gcd of the polynomial for handling different cases Is there any fast algorithm that computes the gcd of polynomial faster, also wants to know which
algorithm does sage math used for computing the gcd of sparse polynomial | {"url":"https://ask.sagemath.org/question/66980/which-algorithm-is-used-for-computing-the-gcd-of-sparse-polynomial/","timestamp":"2024-11-13T19:02:59Z","content_type":"application/xhtml+xml","content_length":"45269","record_id":"<urn:uuid:d51848a3-1458-4b80-87a4-f7381b5b11ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00560.warc.gz"} |
Guessing secrets efficiently via list decoding (extended abstract)
We consider the guessing secrets problem defined by Chung, Graham, and Leighton [CGL01]. This is a variant of the standard 20 questions game where the player has a set of k > 1 secrets from a
universe of N possible secrets. The player is asked Boolean questions about the secret for each question, the player picks one of the k secrets adversarially, and answers according to this secret. We
present an explicit set of O(logiV) questions together with an efficient (i.e., poly (log AT) time) algorithm to solve the guessing secrets problem for the case of 2 secrets. This answers the main
algorithmic question left unanswered by [CGL01]. The main techniques we use are small e-biased spaces and the notion of list decoding. We also establish bounds on the number of questions needed to
solve the A;-secrets game for k > 2, and discuss how list decoding can be used to get partial information about the secrets.
Original language English (US)
Title of host publication Proceedings of the 13th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2002
Publisher Association for Computing Machinery
Pages 254-262
Number of pages 9
ISBN (Electronic) 089871513X
State Published - 2002
Externally published Yes
Event 13th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2002 - San Francisco, United States
Duration: Jan 6 2002 → Jan 8 2002
Publication series
Name Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms
Volume 06-08-January-2002
Other 13th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2002
Country/Territory United States
City San Francisco
Period 1/6/02 → 1/8/02
All Science Journal Classification (ASJC) codes
• Software
• General Mathematics
Dive into the research topics of 'Guessing secrets efficiently via list decoding (extended abstract)'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/guessing-secrets-efficiently-via-list-decoding-extended-abstract","timestamp":"2024-11-10T19:39:18Z","content_type":"text/html","content_length":"44432","record_id":"<urn:uuid:194476aa-371f-455b-9cbd-c56886d8e553>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00432.warc.gz"} |
Passive Income With Options?
I had encountered the idea of using options for passive income from online discussions and acquaintances. Being a passive income advocate, I would naturally be interested in this, and decided to find
out more.
What Are Options?
Options are a type of derivative financial instrument, which derives its value from an underlying security. An option is a contract which gives the holder the right, but not the obligation, to
exercise a transaction, with the other party.
There are two types of options: call and put options. A call option is a contract which gives the holder the right, but not the obligation, to buy a set number of shares at a set price (or strike
price) within a given period. A put option is the opposite, i.e., a contract which gives the holder the right, but not the obligation, to sell a set number of shares at a strike price within a given
Since options are contracts, there are generally two parties involved: the buyer of the call (or put) option and the seller of the said option. Buying the option involves a premium to be paid upfront
to the seller, who sells or writes the option.
There are three main variables in an option: the premium, the strike price and the expiration date. The premium would be the price to pay for the options contract, but this amount could go up or down
depending on the market. The strike price is the agreed-upon price to purchase the underlying security, while the expiration date is the end-date of the option, after which the contract would either
be exercised or become a worthless piece of paper.
Do note that there is no options trading on local counters, only overseas securities (especially in the United States).
Call Option Example
Let us take two parties, B (buyer) and S (seller), and share counter X, with a current share price of $100. B bought a call options contract from S that enables him/her to buy 100 shares of X at $120
in a month’s time from S, at a premium of $5 per share (for a total of $500). Assuming B held the options till the expiration date, we shall see the following three scenarios then:
Scenario #1: Share price of X hits $110 – Since the current share price is less than what B can buy for, he/she decided to let the options lapse (and S gets to keep his/her 100 X shares and the paid
premium of $500).
Scenario #2: Share price of X hits $140 – Since the current share price is more than what B can buy for, B decided to exercise his/her option and S is obliged to sell B 100 shares of X at $120 each.
B saved [($140 - $120) x 100] - $500 = $1500. For S, his/her gains are the $500 premium and the difference between $120 and his/her initial purchase of X, if any.
Scenario #3: Share price of X hits $120 – Although the current share price and the strike price is the same, B is very likely not to exercise the option as the total cost per share is $125 instead,
with the $5 being the premium paid for each share.
Put Option Example
Using the same settings as the previous section, with share X currently at $100, B bought a put option from S at a premium of $5 per share (for a total of $500) to sell his/her 100 shares of X at $80
to S at the expiration date. Assume B held the options till then, and visiting the three scenarios:
Scenario #1: Share price of X hits $90 – Since the current share price is more than what B can sell for, he/she decided to lapse the option (and S gets to keep the $500 premium and need not buy the X
shares from B).
Scenario #2: Share price of X hits $60 – Since the current share price is lower than what B could sell for, B decided to exercise the option and S is obliged to buy the 100 shares of X from B at $80
each instead. B saved [($80-$60) x 100] - $500 = $1500. For S, his/her potential loss would be [($80-$60) x 100] - $500 = $1500, since he/she could get a better price in purchasing shares of X from
the open market.
Scenario #3: Share price of X hits $80 – Although the current share price and the strike price is the same, B is very likely not to exercise the option as the total selling price per share is $75
instead, with the $5 being the premium incurred for each share.
American-Style And European-Style Options
To make things more complicated in the world of options, there are two different option styles out there, namely American and European. American style options could be exercised any time before the
expiration date, while European style ones can only be exercised on the expiration date itself. The naming of the styles does not mean that they are carried out at those locations, rather it is a
form of describing them. Thus, not all American securities practice American style.
So Where Does The Passive Income Part Comes In?
Typically, the basic source of passive income comes from the premiums of selling the options, which is done by S in the above scenarios. Provided the options were not exercised, we could just sell
options non-stop and get the premiums. However, things are not so simple as the other side (i.e., B) may exercise the options depending on the current price of the underlying security, and the
previous premiums earned may go up in smoke.
As we all know, the markets (and price) are unpredictable in nature. While some options traders rely on technical analysis to forecast price movements, others would employ some strategies in
preventing losses from options being exercised by the holders, and this involves selling and buying options at the same time. There are many strategies around, and they have interesting names (“iron
condor”, “iron butterfly”, “long strangle”, etc.).
The Bedokian’s Take
To a beginner, all these may look intimidating. An acquaintance of mine who dabbled in options mentioned that it’s just like learning a new skill: learn, get one’s feet wet, and soon one will get the
hang of it. In my opinion, it depends on the comfort level of the investor himself/herself, and whether he/she is open to options trading to augment additional income. Options trading do require some
time taken to monitor the markets and prices, sometimes on a regular basis, akin to active management.
If you are interested, and if you already have an investment portfolio, treat options trading separately in a trading portfolio (as I had stated here). Any gains from it could be funneled back to
your investment portfolio’s cash component as a war chest or be used to fund your day-to-day expenses if required.
For passive investors, options trading could be tricky, but there’s a way to go about it, which I would share in the next post (or maybe the next, next post, see how it goes).
Ultimately, and following my standard advice, it is a “know what you are doing” thing for options. You have the right, but not the obligation, to follow my take. | {"url":"http://bedokianportfolio.blogspot.com/2022/03/passive-income-with-options.html","timestamp":"2024-11-09T13:27:04Z","content_type":"text/html","content_length":"101871","record_id":"<urn:uuid:c5a2e98e-4683-4744-ae5f-49ebb8c97b28>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00259.warc.gz"} |
Math Digests September 2024
Astronomy, August 17, 2024.
When you see images of galaxies many lightyears away, the colors may not be exactly what you’d see if you flew past them on a spaceship. The advanced telescopes that astronomers use typically rely on
wavelengths of light invisible to us, such as x-rays and infrared waves. “The process of deciphering data from non-optical telescopes is often called false colorization, but the word ‘false’ does it
a disservice,” wrote Randall Hyman for Astronomy. This process is not a creative exercise. Scientists strive to reveal visible details of the cosmos by tinkering with the math of light waves.
Classroom Activities: color theory, hexadecimal numbers
• (Mid level) Refer to this Teach Engineering resource and this UC Berkeley resource on the science of color for the following questions:
□ Describe in your own words the difference between how colored pigments vs colored light mix.
□ Rank these colors in ascending order of a) wavelength, and b) energy: Blue; violet; infrared; ultraviolet; microwave; yellow.
• (High level) On a computer, colors are often displayed by a string of digits that represent a mixture of red, green, and blue light. The intensity of each color ranges from 0 (minimum) to 255
(maximum), such that (0,0,0) appears black and (255,0,0) appears bright red, (0,255,0) appears bright green, and (0,0,255) appears bright blue.
□ How many possible color combinations exist in this space?
□ Rather than using the integers 0–255, colors are usually programmed in a hexadecimal number system with two digits, each ranging from 0 to F. (000000:black; FFFFFF:white). Explain in your own
words why the numbers for each digit go from 0 to F. (Hint: what does “hexadecimal” mean?)
□ Play the game Hexcodle or Hexcodle Mini to practice this representation of color in a fun game.
—Max Levy
The Daily, September 14, 2024.
What happens when several people try to find a meeting time by randomly filling out a when2meet poll? In a recent paper, three physicists created a simple model for this process, and calculated the
probability that there is no time which works for all participants. Case Western Reserve University publicized the research with a press release in their newsletter, the Daily.
Classroom Activities: probability, data analysis
• (Mid level) In the study, the authors use the symbol $\pi_0$ to represent the probability that there is no time which works for all participants. The simplest version of their model has \[ \pi_0
= (1-p^m)^{\ell} \] where $p$ is the probability that a participant is free for a given timeslot, $m$ is the number of participants, and $\ell$ is the number of timeslots. For $p=0.5$, $\ell =
40$, calculate $\pi_0$ for $m = 0, 1, 2, 3, 4, 5, 6$. Explain in your own words the meaning of your results.
□ The study found that, using a more complicated model, when scheduling a meeting with $p = 0.5$ and $\ell = 40$, there’s a 92.5% chance of finding a time that works for four people. But if you
add just one more participant, that number shrinks down to 72%. Are these numbers consistent with your results?
• (Mid level) Using the Desmos graphing calculator, plot $\pi_0$ as a function of the number of meeting participants $m$ for
☆ $\ell = 40$ and $p = 0.1$, $0.3$, $0.5$, $0.7$, and $0.9$,
☆ $p = 0.5$ and $\ell = 5$, $10$, $50$, and $100$.How do the graphs change as you vary $\ell$ and $p$? What do these results mean? Explain what you think is going on (both mathematically,
and in the meeting-scheduling scenario.)
☆ (Mid level) Create your own data using the below table. For each empty cell in the table, flip a coin to decide whether the participant is free at that time or not. Once you are done,
write down whether there is a time at which
○ Participants 1 & 2 are both free,
○ Participants 1, 2 & 3 are all free,
○ Participants 1, 2, 3 & 4 are all free,
○ Participants 1, 2, 3, 4, & 5 are all free,
○ All six participants are free.
Now, pool the entire class’s data. (The more students, the better the dataset will be.) Together, brainstorm how to use the data to calculate $\pi_0$ for meetings with 1, 2, 3, 4, 5, and 6
participants. Then, plot the results. How close are the results to the $\pi_0$ predicted by the model?
• (All levels) In their paper, the authors make simplifying assumptions. For instance, they assume that the probability of a participant being free during a given time block doesn’t depend on the
time of day or who the participant is. What do you think of these assumptions? Do you think they accurately reflect the process of meeting scheduling? Why do you think the authors made them?
—Leila Sloman
Dorset Echo, September 26, 2024.
Modular arithmetic—the set of rules that, among other things, describes the mathematics of timekeeping—was developed around 1800. But about two centuries before that, the London theater crowd was
enjoying a card trick whose success depends on modular arithmetic principles. Mathematician Colin Beveridge explored those principles in a September 23 blog post. “Even knowing the maths behind it, I
think this is still a pretty impressive trick,” Beveridge wrote. Dorset Echo reporter Andy Jones interviewed Beveridge about the blog post a few days later.
Classroom Activities: modular arithmetic
• (All levels) Read the first two sections of Beveridge’s blog post. Break into pairs, and try the card trick yourself, according to his instructions. Take turns acting as the “volunteer” and the
“trickster.” Does the trick work?
□ Beveridge’s son asked if the trick would still work if the volunteer counts to a number other than 15. Try again, using the numbers 13, 18, and 22. Does it still work?
• (Mid level) Read the rest of the blog post, including Beveridge’s explanation of the mathematical principles that make the trick work. To learn more about modular arithmetic, do this lesson from
Khan Academy.
□ In your own words, write out a derivation of why the trick works. Prove that it still works if you use a number other than 15.
• (High level) Can you modify the trick to work with 10 cards in a circle, instead of 13?
—Leila Sloman
NPR Life Kit, September 5, 2024.
In math classes, students’ work is often nothing more than mimicry, according to Ben Orlin, a mathematician and author. We learn concepts carefully compressed into formulas and notations, vetted over
thousands of years, but don’t always learn to make sense of them. “It’s this strange game with no obvious connection to their lives,” Orlin said. In a recent episode of NPR Life Kit, Orlin gives tips
on how to better learn math by connecting it to language and everyday life.
Classroom Activities: language, arithmetic
• (All levels) Around 5:00, Orlin talks about the concept of “numbers as nouns.” Listen and explain in your words how naming works in mathematics and why it’s important for the study of
mathematics. (Hint: think about how the names/labels are used in math.)
• (All levels) Around 7:40, Orlin talks about the “verbs” of math — what we do to numbers — with an example of measuring the depth of the ocean. Write your own example of how calculations allow us
to convert from “numbers we have” to “numbers we want.” What are the verbs and nouns in your example?
• (All levels) Orlin recommends doing mental math faster by rounding consistently. Practice by adding the following numbers in your head: 11 + 18 + 92 + 75.
□ Check your work against a calculator.
□ What percent error does rounding give you? Discuss when that kind of error is acceptable versus unacceptable.
• (Mid level) Around 12:00, Orlin talks about negative numbers, with examples of negative temperatures and debt. How else could you explain to someone the concept of negative numbers? Come up with
3 examples.
—Max Levy
The Conversation, August 20, 2024.
Siblings Simone and André Weil had strong opinions about math education. As a French philosopher and math teacher in the early 1900s, Simone wanted her students to recognize math as a subject rich
with culture and history, rather than just an ensemble of facts and rules. As a mathematician, André believed teachers “must motivate students by providing them meaningful problems and provocative
examples,” wrote Scott Taylor in The Conversation.
Classroom Activities: philosophy of math
• (All levels) Taylor wrote “As a math teacher, I frequently see students grit their teeth and furrow their brow, developing only a headache and resentment. According to Simone, however, true
attention arises from joy and desire.” In small groups, discuss with your classmates what math topics have most held your attention. What makes these topics different to you than others? Which
topics are the hardest to pay attention to? Why?
• (High level) In the small groups, discuss which parts of your life you expect mathematics to be most meaningful for. (For example, think about your hobbies or potential career.) Write a math
problem for one of your classmates to solve based on this area of interest.
—Max Levy
Some more of this month’s math headlines: | {"url":"https://mathvoices.ams.org/mathmedia/math-digests-september-2024/","timestamp":"2024-11-10T18:51:39Z","content_type":"text/html","content_length":"75488","record_id":"<urn:uuid:8f1e6066-4546-47d2-a2de-015e738e9192>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00518.warc.gz"} |
Which of the following
Which of the following is an example of a discrete variable?
Which of the following is an example of a discrete variable?
Similarly, the amount of weight gained can also be measured in decimals to as much precision as needed, and hence it is also continuous. Thus, the gender of each student in a psychology class is an
example of a discrete variable.
What is an example of discrete quantitative variable?
A discrete quantitative variable is one that can only take specific numeric values (rather than any value in an interval), but those numeric values have a clear quantitative interpretation. Examples
of discrete quantitative variables are number of needle punctures, number of pregnancies and number of hospitalizations.
Is hours of sleep discrete or continuous?
Amount of sleep is a variable. 3, 5, 9 hours of sleep are different values for that variable. Variables can be continuous or discrete. Question: Are these variables discrete or continuous?…Frequency
distribution table:
Score (X) Frequency (f)
(Hours) (Number of people with this score)
Is age an extraneous variable?
Extraneous variables are often classified into three main types: Subject variables, which are the characteristics of the individuals being studied that might affect their actions. These variables
include age, gender, health status, mood, background, etc.
What is the difference between intervening variable and extraneous variable?
Answer. extraneous variable are any variable which u r not intentionally studying in your experiment or test. intervening variable is a hypothetical variable used to explain casual links between
other variables.
Which of the following is the best example of a discrete variable?
1. Discrete. A discrete random variable is a (random) variable whose values take only a finite number of values. The best example of a discrete variable is a dice.
What is extraneous variable and how can it be controlled?
An extraneous variable is eliminated, for example, if background noise that might reduce the audibility of speech is removed. Unknown extraneous variables can be controlled by randomization.
Randomization ensures that the expected values of the extraneous variables are identical under different conditions.
What are the types of discrete variables?
Discretely measured responses can be:
• Nominal (unordered) variables, e.g., gender, ethnic background, religious or political affiliation.
• Ordinal (ordered) variables, e.g., grade levels, income levels, school grades.
• Discrete interval variables with only a few values, e.g., number of times married.
What is extraneous and confounding variable?
Extraneous variables are those that produce an association between two variables that are not causally related. Confounding variables are similar to extraneous variables, the difference being that
they are affecting two variables that are not spuriously related. …
Is weight a discrete or continuous variable?
Continuous random variables have numeric values that can be any number in an interval. For example, the (exact) weight of a person is a continuous random variable. Foot length is also a continuous
random variable. Continuous random variables are often measurements, such as weight or length.
Is age a discrete or continuous variables?
We could be infinitly accurate and use an infinite number of decimal places, therefore making age continuous. However, in everyday appliances, all values under 6 years and above 5 years are called 5
years old. So we use age usually as a discrete variable.
Is GPA continuous or discrete?
For example, the variable ” the number of children” is discrete and the variable ” GPA” is continuous. Since GPA can take an infinite number of possible values, for example interval 0.0 to 4.0.
Inferential statistics is the branch of statistics that involves using a sample to draw conclusions about a population.
Do extraneous variables affect validity?
Any variable that you are not intentionally studying in your dissertation is an extraneous variable that could threaten the internal validity of your results [see the article: Internal validity].
This threatens the internal validity of your results. …
How do you know if a variable is discrete or continuous?
A variable is a quantity whose value changes. A discrete variable is a variable whose value is obtained by counting. A continuous variable is a variable whose value is obtained by measuring. A random
variable is a variable whose value is a numerical outcome of a random phenomenon.
Is gender a confounding variable?
Hence, due to the relation between age and gender, stratification by age resulted in an uneven distribution of gender among the exposure groups within age strata. As a result, gender is likely to be
considered a confounding variable within strata of young and old subjects.
What is extraneous variable in research with example?
Extraneous variables are any variables that you are not intentionally studying in your experiment or test. These undesirable variables are called extraneous variables. A simple example: you want to
know if online learning increases student understanding of statistics.
Which of the following are examples of continuous random variables?
A continuous random variable is one which takes an infinite number of possible values. Continuous random variables are usually measurements. Examples include height, weight, the amount of sugar in an
orange, the time required to run a mile. A continuous random variable is not defined at specific values.
What are examples of extraneous variables?
Situational Variables: These extraneous variables are related to things in the environment that may impact how each participant responds. For example, if a participant is taking a test in a chilly
room, the temperature would be considered an extraneous variable.
What are the similarities of discrete variable and continuous variable?
The simplest similarity that a discrete variable shares with a continuous variable is that both are variables meaning they have a changing value. Besides that, they are also statistical terminologies
used for comparative analysis.
Is a scatter plot continuous or discrete?
Continuous. While a continuous graph has a y value for every single x value and will always appear as a single line, a discrete graph only contains information for specific points and will appear as
a collection of those points. Discrete graphs are also often called scatter plots.
Is salary a discrete or continuous variable?
In terms of statistics, this describes variables that assume only particular, distinct values and that are not continuous. For example, salary levels and performance classifications are discrete
variables, whereas height and weight are continuous variables. | {"url":"https://www.kingfisherbeerusa.com/which-of-the-following-is-an-example-of-a-discrete-variable/","timestamp":"2024-11-06T01:23:43Z","content_type":"text/html","content_length":"53562","record_id":"<urn:uuid:a1524632-c086-4922-a7f1-8302a082f909>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00535.warc.gz"} |
Alain Aspect's Quantum Optics on Coursera
course to play with. I had been looking to see if there was a quantum computing MOOC available (and there are many), but among the search results was
Alain Aspect's Quantum Optics
course on Coursera. This is, he says, the first of two MOOCs based on his textbook (at right*). It's a short course (4 weeks, basically four lectures), and so, just right for me.
The subject is interesting (and very similar to a special topics course I took as an undergraduate called "The Quantum Mechanics of the Laser" -- I wish I'd kept those notes when I moved), but the
lectures are dense. They do go over a lot of the material in
Sakurai's Modern Quantum Mechanics
*, which I worked through two summers ago, but of course with a focus on the meaning in terms of quantum optics. Already, some things I haven't heard of before, some that relate to experimental
design (quantization volume), some straightforward interpretations of mathematical expressions (the energy of a single photon). The understand in terms of experimental parameters is particularly
helpful to me (since I understand things in terms of experiments, due to my training).
The course, however, is not for those who are afraid of mathematics. Aspect's discussion in mathematically dense. Really dense. My students think I use too much mathematics in university physics
classes, but this is all math. And Aspect expects you to have seen it all before: many times he references your prior knowledge. He doesn't quite say that you're an uneducated ignoramus if you can't
recall trivialities like the photon energy or the uncertainty relations (he calls them dispersion relations, an aspect of his philosophy -- it's nice to hear an expert talk explain the mechanics of
physics in a way that makes it clear he has opinions). And the homework is tough. Not as tough as it sounds when you read it, but pretty tough.[1] Even on the internet, you're expected to know your
Werner Krauth's MOOC
,[2] from the same school but a different country, and he spoke with the same cadence. I found I had to speed up the lecture to 1.5x so that they spoke at a normal speed.
This minor technical problem aside, I certainly am enjoying the break this provides before I start preparing for my summer courses (How did I let myself get roped into summer courses? At least
they're on-line so I can get a lot of the work out early).
[1] I didn't pay the $49.99, or whatever, it costs in order to get it graded, but I did work it. And it reinforced the advice I give to my students: try the homework before class, then the class will
be more useful to you. [2] Which was serendipitous, since I'd begun setting up to work through the book it was based on,
Statistical Mechanics: Algorithms and Computation
,* when Coursera sent me an e-mail about it. I get the feeling there's as much shilling on Coursera as there is at TED talks. But it couldn't be more: a TED talk is just an advertisement for a book.
If you're lucky, there's more to the book than just the TED talk. Obviously, though, there's more to a physics textbook than eight hours of lecture. Hell, there's more to a physics textbook than the
forty hours of lecture in a semester.
* Note: These links are to Amazon pages. Purchases on those pages from the links will give me a commission (at least for now -- every time I've tried to use the Amazon Associates program they've
kicked me off for not selling anything, but I do like having the links in the show notes so that you can pick up the books we might reference in a discussion).
No comments: | {"url":"https://physicsfm-master.blogspot.com/2020/05/alain-aspects-quantum-optics-on-coursera.html","timestamp":"2024-11-05T03:53:50Z","content_type":"text/html","content_length":"55600","record_id":"<urn:uuid:5fb073fa-f48c-47ef-ac06-c00684017080>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00692.warc.gz"} |
If t denotes the thousandths digit in the decimal representation of d
Question Stats:
87% 13% (01:13) based on 2157 sessions
d = 0.43t7
If t denotes the thousandths digit in the decimal representation of d above, what digit is t?
(1) If d were rounded to the nearest hundredth, the result would be 0.44.
(2) If d were rounded to the nearest thousandth, the result would be 0.436.
We are given that t denotes the thousandths digit of 0.43t7. We need to determine the value of t.
Statement One Alone:
If d were rounded to the nearest hundredth, the result would be 0.44.
Using the information in statement one, t could be different values. For example, t could be 5 (so that d would be 0.4357) or t could be 6 (so that d would be 0.4367). Similarly, t could also be 7,
8, or 9. Notice in any of these cases, d rounds up to 0.44.
Thus, statement one is not enough information to answer the question. We can eliminate answer choices A and D.
Statement Two Alone:
If d were rounded to the nearest thousandth, the result would be 0.436.
Using the information from statement two, d must be 0.4357. Because of the “7” in the ten-thousandths digit, we see that 0.43t7 will only round up to 0.436. Thus, t can only be the value 5. Any other
value of t will not round d to 0.436 when rounded to the nearest thousandth. Statement two is sufficient to answer the question.
The answer is B. | {"url":"https://gmatclub.com/forum/if-t-denotes-the-thousandths-digit-in-the-decimal-representation-of-d-138297.html","timestamp":"2024-11-04T09:22:29Z","content_type":"application/xhtml+xml","content_length":"792964","record_id":"<urn:uuid:e17cdc6c-3d54-4637-b251-51c66de4f7da>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00487.warc.gz"} |
02-09-2019 09:41 PM
02-09-2019 21:41 PM
This nifty little Quick Measure will tell you if a number is prime or not.
mIsPrime =
VAR __num = MAX([Number])
VAR __max = INT(SQRT(__num))
VAR __table = GENERATESERIES(2,__max,1)
VAR __table1 = ADDCOLUMNS(__table,"__mod",MOD(__num,[Value]))
__num = 1, 0,
Basically, compute the nearest integer to the square root of the number. Generate a table of all values between 2 and that number. Compute the modulus of the number in question and each of the table
values. If one of the rows has a modulus of 0, it's not a prime number.
Included in the file is a representation of the Ulam Spiral, a nifty little construct that occurs when you write positive integers down in a square spiral pattern like:
Apparently, Ulam discovered the spiral in 1963 while doodling during the presentation of "a long and very boring paper" at a scientific meeting. This narrows down the potential presentations by
exactly 0. | {"url":"https://community.fabric.microsoft.com/t5/Quick-Measures-Gallery/IsPrime/td-p/620015","timestamp":"2024-11-09T08:57:20Z","content_type":"text/html","content_length":"218692","record_id":"<urn:uuid:dd36631b-a915-4f72-89f3-d4ac87071a8f>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00629.warc.gz"} |
The Derived Homogeneous Fourier Transform: A Study of Duality Between Derived and Stacky Phenomena in Derived Vector Bundles
The Derived Homogeneous Fourier Transform: A Study of Duality Between Derived and Stacky Phenomena in Derived Vector Bundles
Core Concepts
This paper presents a derived version of Laumon's homogeneous Fourier transform, extending its application from vector bundles to derived vector bundles, and explores the resulting duality between
derived and stacky phenomena, particularly the persistence of the involutivity property in this new context.
Translate Source
To Another Language
Generate MindMap
from source content
The derived homogeneous Fourier transform
Khan, A. A. (2024). The derived homogeneous Fourier transform. arXiv:2311.13270v2 [math.AG].
This paper aims to extend Laumon's homogeneous Fourier transform from traditional vector bundles to the realm of derived vector bundles. The study investigates the properties and implications of this
extended transform, focusing on the duality between derived and stacky phenomena that emerges in this context.
Deeper Inquiries
How can the concept of the derived homogeneous Fourier transform be further generalized or applied to other mathematical structures beyond derived vector bundles?
The concept of the derived homogeneous Fourier transform, as presented in the context of derived vector bundles, exhibits a rich interplay between algebraic and geometric structures. This suggests
several promising avenues for generalization and application: 1. Beyond Vector Bundles: Derived Coherent Sheaves: A natural extension is to consider the derived category of coherent sheaves on a
derived stack. This would involve generalizing the notion of the evaluation map (1.1) and the kernel (1.2) to this setting. The challenge lies in defining a suitable duality for derived coherent
sheaves that mirrors the role of vector bundle duality in the current construction. Principal Bundles: One could explore a generalization to principal G-bundles for a more general algebraic group G.
This would require developing a suitable notion of "Fourier duality" in the context of G-equivariant sheaves. Derived Loop Spaces: The homogeneous Fourier transform can be viewed as a kind of
"linearization" procedure. This suggests a connection with derived loop spaces, which are naturally equipped with an S^1-action. Exploring this link could lead to new insights into both areas. 2.
Applications: Derived Symplectic Geometry: The derived homogeneous Fourier transform, particularly its connection to the Fourier-Sato transform mentioned in the context, has the potential to provide
new tools for studying derived symplectic geometry and its applications to mirror symmetry. Geometric Representation Theory: The appearance of Borel-Moore homology and the generalization of
Kashiwara's Fourier isomorphism (Example 1.34) hint at deeper connections with geometric representation theory. The derived homogeneous Fourier transform could potentially be used to study
representations of algebraic groups on derived categories of sheaves. Motivic Homotopy Theory: The fact that the framework accommodates motivic settings (as mentioned in the "Conventions and
notation" section) opens up possibilities for applications in motivic homotopy theory. The derived homogeneous Fourier transform could provide new tools for studying motivic spectra and their
associated invariants.
Could there be alternative approaches to defining and studying a derived version of the homogeneous Fourier transform that might lead to different insights or properties?
Yes, alternative approaches to defining a derived homogeneous Fourier transform could offer fresh perspectives and unveil new properties: 1. Six Functor Formalism Perspective: Alternative Kernels:
Instead of directly generalizing Laumon's kernel, one could explore different kernels on the product E∨ ×S E. This could lead to variants of the Fourier transform with modified properties,
potentially highlighting different aspects of the underlying geometry. Categorical Approach: One could attempt a more abstract categorical construction of the derived Fourier transform, perhaps
leveraging the language of ∞-categories and their functoriality. This might provide a more conceptual understanding of the transform and its properties. 2. Derived Algebraic Geometry Techniques:
Deformation Theory: One could study the derived homogeneous Fourier transform using deformation theory, considering families of derived vector bundles and analyzing how the transform behaves under
deformations. This might reveal connections with moduli spaces of sheaves and their derived structures. Derived Intersection Theory: The derived homogeneous Fourier transform naturally involves
intersections, as seen in the definition of the kernel. Employing tools from derived intersection theory could provide a more refined understanding of the transform and its interaction with derived
Chern classes and other invariants. 3. Connections to Other Transforms: Fourier-Mukai Transforms: Exploring the relationship between the derived homogeneous Fourier transform and more general
Fourier-Mukai transforms on derived stacks could lead to fruitful interactions. This might involve developing a suitable notion of a "kernel" for Fourier-Mukai transforms in the derived setting.
Quantum Field Theory: Drawing inspiration from quantum field theory, where Fourier transforms play a fundamental role, one could seek alternative definitions of the derived homogeneous Fourier
transform motivated by path integrals or other QFT techniques.
What are the potential implications of the duality between derived and stacky phenomena, as revealed through the derived homogeneous Fourier transform, for our understanding of the nature of space
and geometry in mathematics and physics?
The duality between derived and stacky phenomena, as illuminated by the derived homogeneous Fourier transform, hints at a profound shift in our understanding of space and geometry: 1. Space as a
Spectrum: Beyond Points: The traditional notion of space as a collection of points is being challenged. Derived algebraic geometry suggests that spaces have a richer structure, encoded in their
derived categories of sheaves. The derived homogeneous Fourier transform, by bridging derived and stacky phenomena, further blurs the lines between points and more general objects like line bundles.
Geometry from Algebra: The duality suggests a deep connection between the algebraic structure of derived categories and the geometric properties of spaces. This aligns with the broader theme in
modern mathematics and physics of understanding geometry through algebraic invariants. 2. Implications for Physics: Quantum Geometry: In quantum field theory, the distinction between particles and
fields becomes blurred. The derived homogeneous Fourier transform, with its ability to exchange derived and stacky features, might provide a mathematical framework for describing such quantum
geometries where the classical distinction between points and extended objects breaks down. Mirror Symmetry and Duality: The duality between derived and stacky phenomena resonates with the concept of
mirror symmetry in string theory, where different geometric spaces give rise to the same physics. The derived homogeneous Fourier transform could potentially provide a mathematical language for
describing and studying such dualities. 3. New Foundations for Geometry: Homotopical Perspective: Derived algebraic geometry emphasizes the importance of homotopy theory in understanding geometry.
The derived homogeneous Fourier transform, by connecting derived and stacky structures, further reinforces this perspective, suggesting that the fundamental building blocks of space and geometry
might be homotopical in nature. Categorification: The use of derived categories and stacks points towards a "categorification" of geometry, where geometric objects are replaced by categories and
geometric constructions are lifted to functors between categories. The derived homogeneous Fourier transform exemplifies this trend, providing a concrete example of how categorification can reveal
hidden structures and dualities. | {"url":"https://linnk.ai/insight/scientific-computing/the-derived-homogeneous-fourier-transform-a-study-of-duality-between-derived-and-stacky-phenomena-in-derived-vector-bundles-D8AlR1T7/","timestamp":"2024-11-03T10:25:30Z","content_type":"text/html","content_length":"295085","record_id":"<urn:uuid:5ccf9937-f6fe-4951-a483-7342c77afa57>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00851.warc.gz"} |
How to find highest and lowest number in array Javascript - GodlyDevGuide
How to find highest and lowest number in array Javascript
Actually, JavaScript provides two inbuilt methods to find the highest and lowest number in an array. This article will first introduce these methods, and then explain how they work in simple terms.
1. Highest(Max) number
Javascript provides the Math.max() method to find the highest values in an array.
const numbers = [2, 9, 3, 5, 7, 8];
const maxNumber = Math.max(...numbers);
console.log(maxNumber); // Output: 9
How Math.max() method works ?
Lets say you have an array(numbers) and you want to get the maximum number.
to do that, we need to first create a function called max(). Than we will create a variable called “largest” and assign the first element of the numbers array (numbers[0]) to the largest variable.
This sets an initial value for largest.
Now, just assume that the first element of the array(numbers) which we assigned to the variable “largest” is the biggest number. Than we will loop through the remaining elements of the numbers array
using a for loop. We will start the loop at index 1 (the second element) because we already assigned the first element to “largest“.
For each element in the loop, we compare it with the current value of largest. If the element is greater than largest, we update the value of largest to that element.
After the loop finishes, we have found the largest element in the numbers array. We return the value of largest as the output of the function max().
So, if you call the max(), it will return 9, which is the largest number in the numbers array.
Code :
const numbers = [2, 9, 3, 5, 7, 8];
function max() {
let largest = numbers[0];
for (let i = 1; i < numbers.length; i++) {
if (numbers[i] > largest) {
largest = numbers[i];
return largest;
console.log(max(numbers)); // Output: 9
2. Lowest(Min) number
Javascript provides the Math.min() method to find the lowest values in an array.
const numbers = [2, 9, 3, 5, 7, 8];
const minNumber = Math.min(...numbers);
console.log(minNumber); // Output: 2
How Math.min() method works ?
Here, we will do the same thing that we did to find the highest value but with a small change.
Lets say you have an array(numbers) and you want to get the smallest number.
To do that, we will create a function called min(). Than we will create a variable called “lowest” and assign the first element of the numbers array (numbers[0]) to the "lowest" variable. This sets
an initial value for "lowest".
Now, just assume that the first element of the array(numbers) which we assigned to the variable “lowest” is the smallest number. Than we will loop through the remaining elements of the numbers array
using a for loop. We will start the loop at index 1 (the second element) because we already assigned the first element to “lowest”.
For each element in the loop, we compare it with the current value of "lowest". If the element is less than "lowest", we update the value of "lowest" to that element.
After the loop finishes, we have found the smallest element in the numbers array. We return the value of "lowest" as the output of the function min().
So, if you call the min(), it will return2, which is the smallest number in the numbers array.
const numbers = [2, 9, 3, 5, 7, 8];
function min() {
let lowest = numbers[0];
for (let i = 1; i < numbers.length; i++) {
if (numbers[i] > lowest) {
lowest = numbers[i];
return lowest;
console.log(min(numbers)) // Output : 2
Leave a Comment | {"url":"https://dev.godlyguide.com/blog/javascript/highest-and-lowest-number-in-array/","timestamp":"2024-11-04T13:51:51Z","content_type":"text/html","content_length":"99607","record_id":"<urn:uuid:e65223c2-3a14-46d6-bd8a-89b5bbed32b8>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00477.warc.gz"} |
Analysis for SCBF per AISC 341-16 with ideCAD
• This control’s not checked by programme.
A[g] = Gross area of member, in.^2 (mm^2)
F[y] = Specified minimum yield stress
R[y] = Ratio of the expected yield stress to the specified minimum yield stress, Fy
F[cre] = Critical stress calculated from Specification Chapter E using expected yield stress, ksi (MPa)
P[1] = Compressive strength
P[2] = Compressive strength for post-buckling
T = Tensile strength
The required strength of elements and connections in SCBF are determined using the capacity-limited seismic load effect. The capacity-limited horizontal seismic load effect, E[cl], is taken as the
larger force determined from the following analyses:
• An analysis in which all braces are assumed to resist forces corresponding to their expected strength in compression or in tension
• An analysis in which all braces in tension are assumed to resist forces corresponding to their expected strength, and all braces in compression are assumed to resist their expected post-buckling
The expected brace strength in tension is R[y]F[y]A[g]. The expected brace strength in compression is permitted to be taken as the lesser of R[y]F[y]A[g] and (1/0.877)F[cre]A[g], where F[cre] is
determined from Specification Chapter E using the equations for F[cr], except that the expected yield stress, R[y]F[y], is used in lieu of F[y]. The brace length used to determine F[cre] shall not
exceed the distance from the brace end to the brace end.
The expected post-buckling brace strength shall be taken as a maximum of 0.3 times the expected brace strength in compression. | {"url":"https://help.idecad.com/ideCAD/f2-3-analysis","timestamp":"2024-11-05T16:18:21Z","content_type":"text/html","content_length":"34754","record_id":"<urn:uuid:664bd182-5007-425c-9d47-306e5f7fc46b>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00703.warc.gz"} |
What is the Stochastic Oscillator?
The Stochastic Oscillator is a technical analysis tool created by George Lane in the late ‘50s. The Stochastic Oscillator actually is a momentum indicator that compares current price levels to the
High and Low previous price range. The Stochastic Oscillator clearly follows the speed of the momentum. The Stochastic Oscillator aims to indicate price reversals before they actually occur.
Additionally, it can be used by traders for identifying overbought price and oversold price levels.
Stochastic Oscillator Calculation
• Stochastic Oscillator = 100{(Close – Low14)/(High14 - Low14)} = %D
-Low14 = the lowest of the 14 past periods
-High14 = the highest of the 14 past periods
-%D = 3-period moving average of Stochastic Oscillator
Stochastic Oscillator Default Settings
The default setting when using the Stochastic Oscillator is 14 periods (14,3,3). According to each timeframe, Stochastic Oscillator periods may concern daily, weekly, monthly, or even intra-day
The Use of the Stochastic Oscillator
The Stochastic Oscillator measures the current price level relative to the highest and to the lowest range for a certain period. It is best used for trading ranges and especially for identifying
reversals. Check the following chart of the Stochastic Oscillator (settings 14,3,3,).
Chart: Stochastic Oscillator
Confirmation of the Stochastic Oscillator
As the Stochastic Oscillator doesn’t measure important variables as the volume activity, traders may use volume to confirm price reversals. Furthermore, the break of an important support/resistance
level can provide reliable confirmation too.
Identifying Price Reversals
Stochastic Oversold Reversal:
-The oversold reversal is identified usually when the Stochastic Oscillator has turned up from the levels below 20.
-A bullish divergence occurs with a break of the price of Stochastic Oscillator above 50.
Stochastic Overbought Reversal:
-This reversal is identified usually when the Stochastic Oscillator has turned up from the levels above 80.
-A bearish divergence occurs with a break of the price of Stochastic Oscillator below 50.
COMPARE: » Forex Brokers List | » Forex Bonus
Read More: » What is Williams %R | » What is Pivot Point | » What is RSI | » What is MACD | » What are Round Numbers? | » What is the Fibonacci Retracement | » Technical Analysis Map
◘ What is the Stochastic Oscillator? | {"url":"https://what-is-forex.com/technical/13-what-is-stochastic","timestamp":"2024-11-10T07:55:33Z","content_type":"text/html","content_length":"40058","record_id":"<urn:uuid:d62f65e9-0ea2-44b6-b5ac-eb4062947da9>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00110.warc.gz"} |
Boundary сonditions specified in different domains
Demystifying Boundary Conditions: How They Differ Across Domains
When working with mathematical models, especially those involving differential equations, understanding boundary conditions is crucial. These conditions define the behavior of a system at its edges
or boundaries and are essential for obtaining a unique solution. While the concept of boundary conditions is universal, their specific forms and implementations can vary significantly depending on
the domain of application.
Let's consider a simple example from the realm of heat transfer. Imagine we have a heated metal rod with one end kept at a constant temperature of 100°C and the other end exposed to a cool
environment at 20°C. We want to determine the temperature distribution along the rod. In this scenario, the boundary conditions would be:
• Dirichlet Condition: Temperature at the first end (x = 0) is fixed at 100°C.
• Neumann Condition: Temperature gradient at the other end (x = L, where L is the length of the rod) is proportional to the difference between the rod temperature and the environment temperature.
Original Code Example (Python):
import numpy as np
# Define the domain and boundary conditions
x = np.linspace(0, 1, 100) # 1D domain
T_0 = 100 # Temperature at the first end
T_L = 20 # Temperature of the environment
# Set the boundary conditions (example using finite difference method)
T[0] = T_0
T[-1] = T_L
This code snippet demonstrates how to implement Dirichlet and Neumann boundary conditions for a simple 1D problem using a numerical method. However, the specific implementation would vary depending
on the chosen numerical method and the complexity of the problem.
Boundary Conditions Across Different Domains:
1. Heat Transfer: In heat transfer problems, boundary conditions often involve specifying temperatures, heat fluxes, or combinations of both. For example, in a furnace, one might specify the
temperature of the walls and the heat flux through the door.
2. Fluid Dynamics: Here, boundary conditions are crucial for defining fluid flow patterns and pressure distributions. Examples include:
□ No-slip condition: Velocity of the fluid at a solid boundary is zero (like water sticking to the walls of a pipe).
□ Free-slip condition: Fluid can slide freely along a boundary (like air flowing over a smooth surface).
3. Structural Mechanics: In analyzing the behavior of structures under load, boundary conditions define the constraints on the structure. Examples include:
□ Fixed support: The displacement and rotation of the structure at a point are zero.
□ Roller support: The displacement of the structure is restricted in one direction, while the rotation is free.
4. Electromagnetism: Boundary conditions in electromagnetism dictate the behavior of electric and magnetic fields at interfaces between different materials. For example, the tangential component of
the electric field must be continuous across the interface.
Importance and Practical Applications:
• Solving Differential Equations: Boundary conditions are essential for obtaining unique solutions to differential equations. They provide the necessary information to define the specific solution
that matches the problem's constraints.
• Modeling Real-World Phenomena: By applying appropriate boundary conditions, we can create realistic models of physical systems. This enables us to predict and understand their behavior in various
• Engineering Design and Analysis: Understanding and implementing boundary conditions is crucial for the design and analysis of various structures, machines, and systems, ensuring they function
safely and efficiently.
Further Exploration:
• Finite Element Method (FEM): This powerful numerical technique extensively uses boundary conditions to solve complex engineering problems.
• Boundary Element Method (BEM): Another numerical technique that focuses on solving boundary integral equations, making it particularly effective for problems with complicated boundaries.
Understanding boundary conditions is fundamental to many scientific and engineering fields. By carefully considering the nature of the problem and the relevant domain, we can choose and apply
appropriate boundary conditions to achieve meaningful and accurate results. | {"url":"https://laganvalleydup.co.uk/post/boundary-sonditions-specified-in-different-domains","timestamp":"2024-11-10T20:48:10Z","content_type":"text/html","content_length":"84061","record_id":"<urn:uuid:70b3e7db-cccc-4ec9-954a-9a686c34e993>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00413.warc.gz"} |
5 Best Ways to Check if the Given Number k is Enough to Reach the End of an Array in Python
π ‘ Problem Formulation: The topic explores how to determine if a number k is sufficient to reach the last index of a given array. This problem assumes an array where each element indicates the
maximum you can jump forward from that position, and k is the starting jump power. The goal is to confirm if using k is adequate to jump from the beginning to the end of the array. For instance,
given the array [2,3,1,1,4] and k=2, the expected output is “True” since the jumps can cover the array length.
Method 1: Iterative Step-through
This method involves sequentially moving through the array, decrementing k at each step and increasing it by the value of the current element if it’s larger. It checks if we can reach or exceed the
array length before k becomes zero. This is robust for all input sizes and patterns.
Here’s an example:
def can_reach_end(arr, k):
for i in range(len(arr)):
if k <= 0:
return False
k = max(k - 1, arr[i])
return True
# Example usage:
print(can_reach_end([2, 3, 1, 1, 4], 2))
Output: True
This code defines a function can_reach_end which iterates through the array, checking if each step can be made with the current k. It decrements k unless the array provides a bigger boost. The
function returns True if we can get through the array before k runs outβ illustrating a straightforward way of solving the problem iteratively.
Method 2: Backtracking
The backtracking method tries to reach the end of the array from the beginning by exploring all potential paths. It tests if, at each index, the jump can be made with the remaining k. While it is
comprehensive, it’s more computational heavy, especially on large inputs.
Here’s an example:
def can_jump(index, arr, k):
if index >= len(arr) - 1:
return True
return any(can_jump(index + step, arr, k - step) for step in range(1, min(k, arr[index]) + 1))
# Example usage:
print(can_jump(0, [2, 3, 1, 1, 4], 2))
Output: True
The can_jump function recursively explores all jump possibilities at each array index from 1 to the minimum of k or the current array value. It returns True if any of these paths succeed to reach the
end. However, due to its recursive nature, it can be quite slow on large arrays and suffer from stack overflow for deep recursion.
Method 3: Dynamic Programming
Dynamic Programming can be utilized by creating a memoization mechanism that records the states we’ve previously calculated. This technique avoids redundant calculations seen in plain recursion,
thereby optimizing performance.
Here’s an example:
def can_reach(arr, k, index=0, memo=None):
if memo is None:
memo = {}
if index in memo:
return memo[index]
if index >= len(arr) - 1:
return True
for step in range(1, min(k, arr[index]) + 1):
if can_reach(arr, k - step, index + step, memo):
memo[index] = True
return True
memo[index] = False
return False
# Example usage:
print(can_reach([2, 3, 1, 1, 4], 2))
Output: True
The can_reach function utilizes memoization to cache already computed results hence skipping repetitive states. The dictionary memo holds these states. This method improves efficiency when compared
to plain recursion and is more advisable for larger inputs.
Method 4: Greedy Jumping
Greedy jumping algorithm attempts to make the farthest jump at every step, aiming to reach the end in as few jumps as possible. This method can be highly efficient and provide an optimal solution
without exhaustively checking every possibility.
Here’s an example:
def can_reach_greedily(arr, k):
i, reach = 0, 0
while i <= reach and reach = len(arr) - 1
# Example usage:
print(can_reach_greedily([2, 3, 1, 1, 4], 2))
Output: True
The function can_reach_greedily uses a greedy algorithm, jumping to the farthest reachable index at each step, based on the current index i and the available jump power k. It stops when the reach
becomes equal or greater than the last index or when i exceeds the current reach. This method is optimal for most cases and less complex.
Bonus One-Liner Method 5: Using Reduce
Python’s functools.reduce function can be leveraged to apply a cumulative operation that combines elements of the array, leading to a one-liner solution that’s both compact and elegant.
Here’s an example:
from functools import reduce
# Example usage:
print(reduce(lambda acc, n: max(acc - 1, n), [2, 3, 1, 1, 4], 2) >= 0)
Output: True
This code snippet uses reduce with a lambda function that traverses the array, updating the accumulator acc (initially set to k) with the greater value between acc-1 and the current number. It
efficiently checks if the accumulator never dips below zero, indicating that the end can be reached.
• Method 1: Iterative Step-through. Strong in simplicity and reliability. May not be the most efficient for sparse arrays with lots of zeroes.
• Method 2: Backtracking. Exhaustive and guarantees a solution if one exists. Computationally intensive and not suitable for large inputs.
• Method 3: Dynamic Programming. Offers optimization over backtracking with memoization. Can handle larger inputs but incurs space complexity.
• Method 4: Greedy Jumping. Fast and efficient. Works best when the goal is to minimize the number of jumps or for arrays with consistent jump values.
• Bonus Method 5: Using Reduce. Provides a concise and elegant one-liner solution. May be less intuitive to understand and debug for those unfamiliar with reduce. | {"url":"https://blog.finxter.com/5-best-ways-to-check-if-the-given-number-k-is-enough-to-reach-the-end-of-an-array-in-python/","timestamp":"2024-11-04T12:28:14Z","content_type":"text/html","content_length":"72328","record_id":"<urn:uuid:f377a22b-58e2-49d0-9fc7-f8c2ef17b5c7>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00376.warc.gz"} |
Are Feynman Lectures free?
Are Feynman Lectures free?
The Feynman Lectures is one of the most popular lecture series in physics. It’s a great resource for science enthusiasts, students, teachers — basically everyone. Now, Caltech and The Feynman
Lectures website have collaborated to put these lectures online. And they are completely free.
Can I read Feynman Lectures?
Simple answer: YES! Anyone with an interest in physics can learn from reading the Feynman Lectures.
What is Feynman series?
The Feynman Lectures on Physics is a physics textbook based on some lectures by Richard Feynman, a Nobel laureate who has sometimes been called “The Great Explainer”. The lectures were presented
before undergraduate students at the California Institute of Technology (Caltech), during 1961–1963.
Are Feynman Lectures useful for JEE?
Good starting till end. But not compatible for jee syllabus. For that we can refer other books. But overall a complete book for physics lovers.
Is Feynman a beginner lecture?
Simple answer: YES! Anyone with an interest in physics can learn from reading the Feynman Lectures. These are remarkable works, chock full of depth, insight, novel approaches to timeless subjects,
and good ideas.
Does Feynman Technique work?
This method works because it requires you to internalize the material before communicating it to others. If you can’t explain something simply, you don’t understand it well enough. Feynman believed
that one of the best ways to study is to think critically and learn deeply to grasp a deeper understanding of an idea.
Does physics make you smarter?
Learning physics not only makes you smarter, but it also activates new areas of the brain. A Drexel University study found parts of the brain not associated with learning science become active when
people try and solve physics problems.
Is Feynman lectures difficult?
The Feynman Lectures are highly sophisticated, but they are very difficult to grasp without a more normal foundation in more normal texts, such as Purcell for Electricity & Magnetism; Taylor &
Wheeler for Spacetime Physics.
Which is the toughest maths book for JEE?
Problems in Calculus of One Variable by I.A. Maron Calculus is one of the toughest topics in mathematics and by solving the problems in this book, you can master the subject. All I.A. Maron Calculus
books are good for both Differential Calculus and Integral Calculus.
How do I study for Feynman?
The Feynman Technique
1. Step 1 – Study. The first step is easy.
2. Step 2 – Teach. Once you’ve completely covered the topic, it’s time for step 2.
3. Step 3 – Fill the Gaps. As I mentioned, step 2 will uncover some gaps in your knowledge.
4. Step 4 – Simplify.
What is the smartest type of scientist?
Physicists won hands down. The tally of 46 respondents—a dozen of them in physics-related fields—reveals that 40% rated physicists as the most intelligent. Mathematicians (11% of the sample) were
favored by 15%, while chemists and biologists each captured 6% of the top votes.
How do I become a genius maths and physics?
How Did Einstein Learn Math and Physics?
1. Learning comes from solving hard problems, not attending classes.
2. You really know something when you can prove it yourself.
3. Intuition matters more than equations.
4. Thinking requires a quiet space and deep focus.
5. Understand ideas through thought experiments.
What was Feynman’s IQ?
An IQ test administered in high school estimated his IQ at 125—high but “merely respectable”, according to biographer James Gleick. | {"url":"https://bigsurspiritgarden.com/2022/12/23/are-feynman-lectures-free/","timestamp":"2024-11-10T21:14:10Z","content_type":"text/html","content_length":"50255","record_id":"<urn:uuid:6332fea7-e539-4fe4-b890-fc5b5fc490ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00802.warc.gz"} |
Linear Algebra Concepts in Large Language ModelsLinear Algebra Concepts in Large Language Models
Linear Algebra Concepts in Large Language Models
syndu | June 6, 2023, 3:31 p.m.
In addition to the topics mentioned earlier, there are several other linear algebra concepts that play a significant role in the creation of large language models:
1. Tensor operations: Tensors are multi-dimensional arrays that generalize the concepts of vectors and matrices. They are used extensively in deep learning frameworks to represent the weights,
activations, and gradients of neural networks. Tensor operations, such as addition, multiplication, and contraction, are essential for performing calculations during the training and inference
2. Matrix factorization: Matrix factorization techniques, such as non-negative matrix factorization (NMF) and sparse coding, are used to decompose large matrices into smaller, more manageable
components. These techniques can help in dimensionality reduction, data compression, and feature extraction, which are important for efficient training and representation of large language
3. Linear independence and rank: Linear independence is a fundamental concept in linear algebra that helps determine the minimum number of basis vectors required to represent a vector space. The
rank of a matrix is the number of linearly independent rows or columns it contains. Understanding linear independence and rank is important for analyzing the structure and properties of matrices
and tensors used in language models.
4. Projection and least squares: Projection is a technique used to find the closest point in a subspace to a given vector. Least squares is a method for finding the best-fitting linear approximation
to a set of data points. Both concepts are used in various optimization and regression problems that arise during the training and fine-tuning of large language models.
5. Quadratic forms and positive definite matrices: Quadratic forms are used to represent the energy or cost functions in optimization problems, while positive definite matrices are used to ensure
the existence of unique solutions. Both concepts are important for understanding the stability and convergence properties of optimization algorithms used in training large language models.
6. Condition number: The condition number of a matrix is a measure of its sensitivity to small changes in its input. A high condition number indicates that the matrix is ill-conditioned, which can
lead to numerical instability and slow convergence in optimization algorithms. Understanding and managing the condition number is important for ensuring the robustness and efficiency of the
training process.
These additional linear algebra topics, along with the ones mentioned earlier, form the core mathematical foundation for creating, training, and fine-tuning large language models. A solid
understanding of these concepts is essential for anyone working with such models.
A solid understanding of these concepts is essential for anyone working with such models.
Light and space have been distorted. The terrain below has transformed into a mesh of abstract possibilities. The Godai hovers above, a mysterious object radiating with unknown energy.
Explore the anomaly using delicate origami planes, equipped to navigate the void and uncover the mysteries hidden in the shadows of Mount Fuji.
Will you be the one to unlock the truths that have puzzled the greatest minds of our time?
Enter the Godai | {"url":"https://syndu.com/blog/linear-algebra-concepts-in-large-language-models/","timestamp":"2024-11-08T04:26:48Z","content_type":"text/html","content_length":"47871","record_id":"<urn:uuid:fd4f0ca1-0d5b-4c75-b4c2-22a53a70d21d>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00255.warc.gz"} |
DE Seminars: Fall 2013
Monday September 16
Title Modeling modal transitions in diffusing populations
Thomas I. Seidman
Speaker Department of Mathematics and Statistics
We consider a diffusing population as comprised of particles undergoing Brownian motion, each with its own history. For each such particle we then consider possible “state transitions” determined by
crossing thresholds (hysteretic relay as in hybrid systems). Our objective here is to construct a continuum model of the resulting process as a reaction/diffusion system and then to show existence of
“solutions” of this system. Technical difficulties arise here in resolving the concerns of hybrid systems (anomalous points and the possibility of Zeno phenomena) in a setting where one is tracking
the collective effects on individual diffusing particles without being able to track their individual trajectories. For visualization, we think of an example of diffusing bacteria and nutrient in
which each bacterium is reacting to its own experience of local nutrient concentration in switching between dormant and active modes.
Monday October 14
Title Computational Studies of Cardiac Excitation-Contraction Coupling: From Molecule to Arrhythmia
Speaker Saleet Jafri
George Mason University
Calcium dynamics in the cardiac myocyte links the electrical excitation of the heart to contraction in a process known as excitation-contraction coupling. Dysfunction of critical calcium signaling
proteins in heart is associated with lethal inherited cardiac arrhythmias. However, how the altered proteins lead to arrhythmias remains both unknown and controversial. We have used computational
models to investigate fundamental mechanisms that underlie calcium-dependent arrhythmias, the same class of arrhythmias that follow myocardial infarction, heart failure and diverse genetic arrhythmic
diseases. Even very common arrhythmias (one episode of sudden cardiac death in a month) are rare when normalized to the events occurring within a single cell over the period of a typical long
experiment (e.g. one hour). Stochastic modeling, however, with the powerful computer clusters available and with our recent advances in computational algorithms, enable us to examine stochastic model
systems over prolonged periods without missing the rare events. We start with the most elementary event of cardiac calcium release, the calcium spark, and construct stochastic models that explain
mechanisms of calcium release termination, calcium homeostasis and the sarcoplasmic reticulum calcium leak, and the generation of arrhythmias from defects in calcium signaling. These insights begin
to provide insight in to the normal and abnormal physiology of cardiac excitation-contraction coupling.
Monday October 28
Title Multiscale approximations in stochastic biochemical networks
Hye-Won Kang
Speaker Department of Mathematics and Statistics
Stochastic effects may play an important role in mathematical modeling of biological and chemical processes in case the copy number of some component involved in the system is small. In this talk,
stochastic modeling of biochemical networks with several examples is introduced and multiscale approximations of stochastic biochemical networks are suggested. Evolution of the network is modeled in
terms of a continuous-time Markov jump process. Chemical reaction networks are generally large in size and they involve various scales in species numbers and reaction rate constants. The multiscale
approximation method is introduced to reduce the network complexity and to derive limiting models with simple structure. Then, asymptotic behavior of the error between the full model and the limiting
model is approximated. This is a joint work with Thomas G. Kurtz and Lea Popovic.
Monday November 4
Title Finite element method for linear elliptic problem in non-divergence form
Speaker Wujun Zhang
University of Maryland at College Park
We design a finite element method for linear elliptic problem in non-divergence form, which satisfies the discrete maximum principle. The method ensures convergence to the viscosity solution. We
develop a novel approach to carry out error estimation by discrete maximum principle. Applying this novel approach, namely, a discrete version of Alexandroff Bakelman Pucci maximum principle, we
establish a rate of convergence of the discrete solution in the maximum norm.
Monday November 11
Title Moment Growth Bounds on Stochastic Population Processes
Muruhan Rathinam
Speaker Department of Mathematics and Statistics
We consider the class of continuous time, time homogeneous Markov processes with the $N$ dimensional non-negative integer lattice as state space that have finitely many state independent jumpsize
vectors. Such processes in general can be regarded as population processes modeling the vector copy number of $N$ different species undergoing $M$ different types of reaction/ interaction events.
Typical examples of such processes are stochastically modeled chemical reactions, predator-prey models as well as epidemiological models. These processes are uniquely characterized by their
“propensity” functions as well as their “stoichiometric vectors.”
Such processes often possess the property that given a deterministic initial condition the process always remains in a bounded region of the state space. We provide a necessary and sufficient
condition on the stoichiometric matrix for this to hold. When the process is not bounded in the state space a natural question is whether finite moments of all orders exist. We provide two different
sufficient conditions and one necessary condition for the existence of moments of all orders for all time $t>0$.
Monday November 18
Title Analysis of SI Models with Multiple Interacting Populations using Subpopulations with Forcing Terms
Evelyn Thomas
Speaker Department of Mathematics and Statistics
As a system of differential equations describing an epidemiological system becomes large with multiple connections between subpopulations, the expressions for reproductive numbers and endemic
equilibria become algebraically complicated, which makes drawing conclusions based on biological parameters difficult. We present a new method which deconstructs the larger system into smaller
subsystems, captures the bridges between the smaller systems as external forces, and bounds the reproductive numbers of the full system in terms of reproductive numbers of the smaller systems, which
are algebraically tractable. This method also allows us to analyze the size of the endemic equilibria.
Monday December 2
Title Atherosclerotic plaque development: strategies for modeling the growth and degradation of the fibrous cap
Jonathan Bell
Speaker Department of Mathematics and Statistics
Cardiovascular disease is a leading cause of death in the US and many developed countries. Atherosclerosis is a major contributor to this disease profile. Atherosclerosis is an inflammatory disease
of major arteries due to fatty lesions forming in arterial walls, causing stenosis (contracting blood flow) and thrombosis (blood clots, blockage). Certain lesions, called vulnerable plaques, are
responsible for most deaths from atherosclerosis. The growth and degradation of these plaques is very dynamic, involving complex biochemical, hemodynamic, and mechanical interactions. But the present
experimental means for studying arterial plaque development is limited, calling for augmenting such studies by mathematical modeling, analysis, and simulation. In this talk I will give a background
to the biology and outline a strategy for model development, starting with an ODE model of principle chemical and cellular processes, and progressing to more complicated PDE models that include more
mechanisms. At this stage little is proved, so the talk must be viewed as a possible roadmap for approaching a variety of questions. | {"url":"https://mathstat.umbc.edu/de-seminars-fall-2013/","timestamp":"2024-11-02T06:29:56Z","content_type":"text/html","content_length":"148432","record_id":"<urn:uuid:3b08ffdd-c780-4ca3-ad97-9c87004c864d>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00048.warc.gz"} |
Top Choices Of math for kids -
Istation makes personalised learning straightforward with computer-adaptive instruction, assessments, customized data profiles, and teacher resources. Learn third grade math aligned to the Eureka
Math/EngageNY curriculum—fractions, area, arithmetic, and a lot more. That’s why some researchers and educators are working to leverage what we know concerning the connections between dyscalculia and
the much better-known dyslexia to identify new avenues to improve math learning for struggling students. On prime of this, we’re dedicated to increasing maths instructing capacity and quality. This
contains developing a brand new maths National Professional Qualification to support the professional growth of maths teachers in primary colleges from February 2024.
• Tons of fun and academic on-line math video games, from primary operations to algebra and geometry.
• But they doubtless do assist improve students’ making sense of numbers, discover completely different procedures for fixing problems, flexibly deciding on amongst methods, and math reasoning more
usually, she mentioned.
• Overdeck, the founding father of the nonprofit Bedtime Math, recommends colleges educate dad and mom about dyscalculia and encourage households to develop habits around math puzzles and games—in
the identical means faculties typically do with household read-a-thons.
• “We wish to transfer children to retrieval, to the extent possible, however we’re not just trying to get them to memorize math facts—that’s not an excellent intervention.
This is a giant drawback to the positioning, as you will want to be available at set instances to get the most from the website. For a really inexpensive month-to-month charge, achieve access to
hundreds of resources created by teachers like you. Dynamically created math worksheets for college kids, lecturers, and oldsters. A artistic solution that goals to revive students’ passion and
curiosity in math. Mashup Math has a library of 100+ math video lessons as properly as a YouTube channel that features new math video classes every week. Create the maths worksheets you want, exactly
how you need them, in minutes.
5 Simple Details About best free math apps Explained
Conrad added that prime faculties ought to work to incorporate applied math schooling into the standard framework. The greatest websites to study math for free will give you numerous educational
materials suitable Get the facts for all sorts of learners. Whether you like gamified content or apply exams, there are many free math assets for remote learning. To really enhance your math fluency,
we advocate taking 1-on-1 lessons with a tutor.
The proponents say the means in which to realize a profit out of timed exercises with out all of the unhealthy vibes is to reduce the pressure. It should give consideration to fewer details at a
time, and college students should attempt to beat their own time in solving a choose sample of facts—not get in contrast with their friends on a poster board hanging on the wall for all to see. There
is not any amount of practice with backup strategy in multiplication that is going to get a scholar to provide a product in three seconds or much less,” stated McNeil.
This award-winning program finds and fixes studying gaps with the facility of personalised learning. The three-pronged strategy options personalised learning, pinpoint assessments, and an interactive
classroom. Khan Academy is on a mission to give a free, world-class training to anybody, wherever. Their customized learning resources can be found for all ages, in a huge array of subjects. Make
math about greater than numbers with engaging objects, real-world situations, and limitless questions.
My Dog Ate My best math app!
Learn the talents that can set you up for success in decimal place worth; operations with decimals and fractions; powers of 10; volume; and properties of shapes. Learn Algebra 1 aligned to the Eureka
Math/EngageNY curriculum —linear functions and equations, exponential growth and decay, quadratics, and extra. Learn sixth grade math—ratios, exponents, lengthy division, adverse numbers, geometry,
statistics, and more. Learn third grade math—fractions, space, arithmetic, and a lot more.
This is a really useful and free software for math students, which can be accessed as an app for iPhones and Android. By nature, Desmos isn’t a full math course and isn’t suitable for youthful
learners. Across early math, suitable for students in kindergarten as much as grade 12. Your math course is tailored to you, you’ll receive extra supplies corresponding to handouts and worksheets
and, most significantly, stay chats with your tutor to ensure you’re by no means confused. Tools for math lecturers, including bell ringers and drills, math tools and manipulatives, query generators,
printables, and puzzles. | {"url":"https://eruditocafe.com/top-choices-of-math-for-kids/","timestamp":"2024-11-13T19:52:29Z","content_type":"text/html","content_length":"123455","record_id":"<urn:uuid:6e19b652-6150-4444-9efe-1dc48e03aafe>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00677.warc.gz"} |
The Personal Distribution of Income
Full text: The Personal Distribution of Income
provided we know the distribution of wealth. But the distribution
of wealth is known: It follows the Pareto law - over a fairly
wide range - and its pattern has also been explained
theoretically / 13/-
Denoting wealth by W , let us write for the density
of the wealth distribution
p* = c W-*” 11 dW
or putting
w =
In W
p(w) =
c e
- c4tt
w >
p(w) =
W <
Y denotes income
y= In
Y the
dens it;/
of income can be
represented :
Ln the form
f*(y-w), the density of a certain return on wealth. Sven without
knowing this function we might manage to derive the distribution
of income from that of wealth provided we can make certain
assumptions about independence.
We shall provisionally assume that the distribution of
the rate oof return is independent of the amount of wealth.
In terms of random variables, if y J Cu and ^
denote income, wealth and the rate of return, we have
If the random variables wealth and the rate of return are independen
their sum can be represented by a convolution of the corresponding
density functions, and 7/e shall in this way obtain the
distribution of income.
For the purposes of this calculation we shall replace
the density f*(y-w) by the mirror function f(w-y) which is also
independent of wealth. The two functions are sjmmetric and have
the same value ( in fact, the only difference is in the dimension :
While the former refers to a rate of return per year, the reciprocal
value refers to the number of years income contained in the wealth )
The calculation of the density of income q(y)
proceeds then by mixing the function f(w-y) with the density
of wealth: | {"url":"https://viewer.wu.ac.at/viewer/fulltext/AC14446373/11/","timestamp":"2024-11-08T21:22:34Z","content_type":"application/xhtml+xml","content_length":"68887","record_id":"<urn:uuid:a22fc183-8197-404d-afd0-04e1edd6d715>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00240.warc.gz"} |
VerifyThis Competition
This page documents KIV proofs for the 3 challenges of the VerifyThis Competion at the formal methods (FM) conference in Paris 2012. Solutions for challenge 1 (first part) and 3 were worked out at
the competion. Challenge 2 was solved after the competition and was about 2 hours of work to figure out the core idea of what's going on (function #fz) and 2 days to translate the idea into a
mechanized KIV proof.
Challenge 1 a): Longest Common Prefix
Project overview lcp
Specification lcp contains the program definition and the correctness theorem.
It is proved with loop invariant
n0 < # ar ∧ n1 < # ar
→ n0 + m ≤ # ar
∧ n1 + m ≤ # ar
∧ (∀ n. n < m → ar[n0 + n] = ar[n1 + n])
and with loop variant
# ar - m
Challenge 1 b): Suffixarrays and Longest Repeated Substring
Update 30.08.2013: We incorporated the suggestion of one reviewer to specify and verify a generic procedure for sorting
Project overview suffixarray
The algebraic function lcp is defined in specification lcp.
Specification suffix contains the definition of the lexicographical order < on arrays and the function suffix(ar, i) yielding the suffix starting at index i. Futhermore, some basic theorems about
them are proven.
In the specification sorted the predicate sorted(ar, n0, n1, ⊑) and sorted(ar, ⊑) are defined and state that an array is sorted according to an order ⊑ on elem. The two main theorems are: 1) swapping
two elements maintains sorted and 2) two consecutive sorted ranges can be merged into a single sorted range. The implementation of the procedure sort and the invariants can be found in specification
sort. The correctness theorem and its proof are here.
The suffix array sar is considered valid with respect to the array ar if the predicate valid(sar, ar) specified here holds. It states that the elements of the suffix array is equal to the multiset
{0,...,n-1} and that the suffixes represented by the indices stored in the suffix array are sorted according to the lexicographical order on arrays. The multiset {0,...,n-1} is specified recursively
as bag< n. The function ar.elems (and ar elems n) calculate the multiset of elements in the array (up to the index n).
The algorithm for the suffix array construction is specified in suffix-array-ops. The main correctness theorem states that after a call to create_suffixarray the suffix array is valid.
The predicates rs(ar, n0, n1, n) and lrs(ar, n0, n1, n) state that at index n0 in ar is a repeated resp. longest repeated substring of length n with the witness suffix starting at n0. They are
defined in specification lrs.
Specification lrs-ops contains the definition of the algorithm to compute the longest repeated substring and the definition of the loop invariant. The main lemma for the correctness proof states that
the longest common prefix of neighboring suffixes in the lexicographical order is at least as long as that of suffixes which are further apart. With this lemma, the main complication during the proof
of the correctness theorem for the longest repeated substring algorithm is that one needs to switch from indices in the original array to indices in the suffix array.
Challenge 2: Prefixsum
We prove the iterative version of the algorithm (seems to be much simpler than the recursion). The solution consists of the two specifications split-finalzeros and prefixsum explained below, and an
instance array-of-ints imported from the library: the development graph gives an overview over the structure of the specifications.
This specification contains the main idea for verification. The function # fz(n) calculates the number of final zeros in a bit-represenation of an array index n + 1. lead(n), which is always an odd
number, is the result of cutting away the final zero bits of n + 1. Each n + 1 can be uniquely represented as 2^ #fz(n) * lead(n). (lemmas split-ok and split-unique-2).
Remark: As an afterthought, the number of final zero bits of n + 1 is also the number of final one bits of the original n. The function #fz is used in both invariants of the algorithm:
• In upsweep, at level d (i.e. when space = 2^ d), ar[m] contains the sum from index m - 2^min(d,#fz(m)) to m (both inclusive) Therefore at the end of upsweep the array contains the sum from m - 2^
#fz(m) to m.
• In downsweep at level d , ar[m] contains the sum from 0 to m - 2^(d + 1) (again both inclusive) if #fz(m) <= d, else the old value from downsweep.
This specification contains:
• declarations for the PREFIXSUM algorithm translated from the original Java. The assigment of 0 to the last array element was moved from downsweep to the main routine, since it does not belong
logically to the downsweep. Auxially procedures were added for the inner loops, to improve proof modularity and readability.
• Definitions of the pre- and postcondition of PREFIXSUM (predicates pre and post), as used in the main theorem PREFIXSUM-thm. postu is the postcondition of UPSWEEP. invu, invuh, invd, invdh are
the invariants for the while loops in UPSWEEP, UPSWEEPH, DOWNSWEEP, DOWNSWEEPH.
• All the necessary proofs. The pre/postconditions are used in the proofs of the respective theorems (…-thm). The main theorem is PREFIXSUM-thm.
The procedure to get the full proofs done was as follows:
It started by defining all conditions that connect invariants as simplifier rules (lemmas pre-invu, invu-invuh, invuh-invu, invu-postu, postu-invd, invd-invdh, invdh-invd, invd-post), except the
conditions that really show that invdh and invuh are invariant in the inner loops.
With these definitions, the proofs of the theorems PREFIXSUM-thm, UPSWEEP-thm, UPSWEEPH-thm, DOWNSWEEP-thm and DOWNSWEEPH-thm stating total correctness of the main program and its subroutines were
done first. They are almost automatic using the simplifier rules, and nearly independent of how the axioms defining the predicates invu/d(h) (except for small dependencies for the simple termination
conditions). Only two open goals remain: the two inner loop conditions in DOWNSWEEPH-thm and UPSWEEPH-thm.
Then the simplifier rules were proved themselves. This caused some corrections in the defining axioms for invu/d(h). When all the simplifier rules were finally provable, all the bugs from the
invariants were gone, except that invuh had the weaker condition n \neq 0 instead of the stronger odd(n).
The finall error was corrected, when at last doing the main difficult parts of the proofs for the invariance of invuh and invdh in DOWNSWEEPH-thm and UPSWEEPH-thm. These are difficult, since they
involve laws about 2^n and #fz. Crucial lemmas are e.g. finz-bigger, finz-add-bigger and finz-bounded-05. These had to be found and manually applied.
Challenge 3: Delete the Minimum of a Binary Search Tree
Project overview del-min
The solution consists of two main parts:
1. A proof that the procedure search_tree_delete_min deletes the leftmost element from the tree
2. Proofs that the left most element is actually minimal minimal, removed from the tree, and that the result is still a binary search tree.
The final theorem combines these results.
We assume a strict total order on elements in the trees, defined here and garbage collection.
The solution was completed almost in time during the competition.
Part 1
The solution is based on separation logic, formalized as a set of higher order operators (specifications heap-sep and maplet).
An abstraction predicate to algebraic trees is defined here.
Algebraic functions t.leftmost and t.butleftmost are defined recursively.
The program consists of a while loop and finally some destructive modifications to the data structure. The proof goes by induction over (the size of the) algebraic tree (red node 58). The induction
hypothesis is
⟨ loop; modification ⟩ post
i.e., the hypothesis holds for the loop and the remaining program.
• The base case corresponds to loop exit (left branch of node 57), thus we have deal with the modification.
• The recursive case needs to execute the loop once, thereby unfolding the tree abstraction in step 49. The remaining iterations (+ the remaining program) are covered by the induction hypothesis,
which is applied (manually) in node 47.
Part 2
The predicate bst characterizes binary search trees: elements in the left subtree are strictly less than the root, conversely for the right subtree.
Theorems about the algebraic tree are proven by (structural) induction. They rely on a strengthened characterization of binary search trees and some helper lemmas.
We have also specified insertion (without rebalancing) and proven to match the corresponding algebraic operation insert Allocation of a new node is done by the procedure mknode (which gets a suitable
contract). The proof for insertion foillows uses induction on the algebraic tree as well (node 99) and has to consider more cases (as one may descend to the left or right subtree), but is otherwise
not difficult.
A specification of the operations insert and delete for ordered sets is here. The abstraction from trees to sets is defined here and standard data refinement proof obligations are shown for delete
and insert. A couple of helper lemmas translate the extensional definition of the abstraction from sets to trees to a recursive one, and show commutation with of the tree modifications with the set
modifications. A useful lemma from the library is min-q.
Solution with the Magic Wand (NEW)
Update May 5, 2015. We now have a solution using the magic wand operator based on
S. Blom, M. Huisman: Witnessing the elimination of magic wands.
VerifyThis 2012 Special Issue, Software Tools for Technology Transfer, Springer, 2015.
You can find the proof here. The invariant (step 18) states that the modification on the current subtree (spatially) implies the modification on the whole tree
∃ t0.
tr(cur, t0.butleftmost) -* tr(root, t.butleftmost)
where t0 represents the current algebraic abstraction. (Note that cur is named r2 by KIV, and root is r). | {"url":"https://kiv.isse.de/projects/verifythis-competition-2012/index.html","timestamp":"2024-11-05T20:07:17Z","content_type":"application/xhtml+xml","content_length":"17058","record_id":"<urn:uuid:818dc78c-6e65-466e-87a2-102d0447aa97>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00042.warc.gz"} |
f01fjf (complex_gen_matrix_log)
NAG FL Interface
f01fjf (complex_gen_matrix_log)
FL Name Style:
FL Specification Language:
1 Purpose
f01fjf computes the principal matrix logarithm, $\mathrm{log}\left(A\right)$, of a complex $n×n$ matrix $A$, with no eigenvalues on the closed negative real line.
2 Specification
Fortran Interface
Subroutine f01fjf ( n, a, lda, ifail)
Integer, Intent (In) :: n, lda
Integer, Intent (Inout) :: ifail
Complex (Kind=nag_wp), Intent (Inout) :: a(lda,*)
C Header Interface
#include <nag.h>
void f01fjf_ (const Integer *n, Complex a[], const Integer *lda, Integer *ifail)
The routine may be called by the names f01fjf or nagf_matop_complex_gen_matrix_log.
3 Description
Any nonsingular matrix $A$ has infinitely many logarithms. For a matrix with no eigenvalues on the closed negative real line, the principal logarithm is the unique logarithm whose spectrum lies in
the strip $\left\{z:-\pi <\mathrm{Im}\left(z\right)<\pi \right\}$. If $A$ is nonsingular but has eigenvalues on the negative real line, the principal logarithm is not defined, but f01fjf will return
a non-principal logarithm.
is computed using the inverse scaling and squaring algorithm for the matrix logarithm described in
Al–Mohy and Higham (2011)
4 References
Al–Mohy A H and Higham N J (2011) Improved inverse scaling and squaring algorithms for the matrix logarithm SIAM J. Sci. Comput. 34(4) C152–C169
Higham N J (2008) Functions of Matrices: Theory and Computation SIAM, Philadelphia, PA, USA
5 Arguments
1: $\mathbf{n}$ – Integer Input
2: $\mathbf{a}\left({\mathbf{lda}},*\right)$ – Complex (Kind=nag_wp) array Input/Output
3: $\mathbf{lda}$ – Integer Input
4: $\mathbf{ifail}$ – Integer Input/Output
6 Error Indicators and Warnings
If on entry
, explanatory error messages are output on the current error message unit (as defined by
Errors or warnings detected by the routine:
$A$ is singular so the logarithm cannot be computed.
$A$ was found to have eigenvalues on the negative real line. The principal logarithm is not defined in this case, so a non-principal logarithm was returned.
$\mathrm{log}\left(A\right)$ has been computed using an IEEE double precision Padé approximant, although the arithmetic precision is higher than IEEE double precision.
An unexpected internal error has occurred. Please contact
On entry, ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{n}}\ge 0$.
On entry, ${\mathbf{lda}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{lda}}\ge {\mathbf{n}}$.
An unexpected error has been triggered by this routine. Please contact
Section 7
in the Introduction to the NAG Library FL Interface for further information.
Your licence key may have expired or may not have been installed correctly.
Section 8
in the Introduction to the NAG Library FL Interface for further information.
Dynamic memory allocation failed.
Section 9
in the Introduction to the NAG Library FL Interface for further information.
7 Accuracy
For a normal matrix
(for which
), the Schur decomposition is diagonal and the algorithm reduces to evaluating the logarithm of the eigenvalues of
and then constructing
using the Schur vectors. This should give a very accurate result. In general, however, no error bounds are available for the algorithm. See
Al–Mohy and Higham (2011)
and Section 9.4 of
Higham (2008)
for details and further discussion.
The sensitivity of the computation of $\mathrm{log}\left(A\right)$ is worst when $A$ has an eigenvalue of very small modulus or has a complex conjugate pair of eigenvalues lying close to the negative
real axis.
If estimates of the condition number of the matrix logarithm are required then
should be used.
8 Parallelism and Performance
Background information to multithreading can be found in the
f01fjf is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
f01fjf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further
Please consult the
X06 Chapter Introduction
for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the
Users' Note
for your implementation for any additional implementation-specific information.
The cost of the algorithm is
floating-point operations (see
Al–Mohy and Higham (2011)
). The complex allocatable memory required is approximately
If the Fréchet derivative of the matrix logarithm is required then
should be used.
can be used to find the principal logarithm of a real matrix.
10 Example
This example finds the principal matrix logarithm of the matrix
$A = ( 1.0+2.0i 0.0+1.0i 1.0+0.0i 3.0+2.0i 0.0+3.0i -2.0+0.0i 0.0+0.0i 1.0+0.0i 1.0+0.0i -2.0+0.0i 3.0+2.0i 0.0+3.0i 2.0+0.0i 0.0+1.0i 0.0+1.0i 2.0+3.0i ) .$
10.1 Program Text
10.2 Program Data
10.3 Program Results | {"url":"https://support.nag.com/numeric/nl/nagdoc_30.1/flhtml/f01/f01fjf.html","timestamp":"2024-11-01T19:18:06Z","content_type":"text/html","content_length":"34738","record_id":"<urn:uuid:07aa5a27-1dbe-4006-ba48-46a840ab45bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00188.warc.gz"} |
Fiducial array with two fiducials - issue 9 year
billy from United States [10 posts]
I am attempting to read the array variables from the fiducial array module.
When a single fiducial is present, the array is just as described on the web site.
However, when two fiducials are present, it seems the values get scrambled. For example the 0 element (confidence %) is well over 100.
I am using Windows 8.1 with Roborealm 2.73.0.
Attached jpg shows the array variable.
What am I doing wrong?
Steven Gentner from United States [1446 posts] 9 year
I think you meant to use FIDUCIAL_CONFIDENCE_ARRAY instead of FIDUCIALS in your first GetArrayVariables. The FIDUCIALS array is a step 17 instead of 1 and contains a bunch of information in one
billy from United States [10 posts] 9 year
I attached a screen shot that shows the first several values in the fiducials array when two fiducials are present. Element 0 is supposed to show confidence of the 0th fiducial. It shows 260.
When a single fiducial is present that same element 0 shows a reasonable 96% that matches what the fiducial module lists on the screen.
Don't get distracted by the way I listed the variables to the screen, you can see the issue in the "Available Variables" window. I listed them to the screen the way I did to make sure that the
Available Variables window wasn't the issue.
From the web site:
The FIDUCIALS array is composed of 17 numbers as follows:
Offset Contents
0 Match Confidence 0-100
1 Point 1 X coordinate
2 Point 1 Y coordinate
3 Point 2 X coordinate
4 Point 2 Y coordinate
5 Point 3 X coordinate
6 Point 3 Y coordinate
7 Point 4 X coordinate
8 Point 4 Y coordinate
9 Translation in X
10 Translation in Y
11 Scale 0-100
12 Rotation in X (radians)
13 Rotation in Y (radians)
14 Orientation (Rotation in Z, radians)
15 Path start index
16 Length of path
Refering to the screen shot, you'l seen that element 1 lists the Point 4 Y coordinate, which should be in element 8. Element 0 lists the Point 4 X coordinate which should be element 7.
Element 18 does show a confidence value, so it seems the 2nd set of data is intact. Just the 1st one is messed.
Here is how the data is loaded into the array
0 X4 of 0th fiducial
1 Y4 of 0th fiducial
2 Y translation of 1st fiducial
3 size of 1st fiducial
4 X or Y rotation of 1st fiducial
5 X or Y rotation of 1st fiducial
6 orientation of 1st fiducial
18 confidence of the of 0th fiducial
Clearly something messed up when two fiducials are present.
billy from United States [10 posts] 9 year
I attached a screen shot that shows the first several values in the fiducials array when two fiducials are present. Element 0 is supposed to show confidence of the 0th fiducial. It shows 260.
When a single fiducial is present that same element 0 shows a reasonable 96% that matches what the fiducial module lists on the screen.
Don't get distracted by the way I listed the variables to the screen, you can see the issue in the "Available Variables" window. I listed them to the screen the way I did to make sure that the
Available Variables window wasn't the issue.
From the web site:
The FIDUCIALS array is composed of 17 numbers as follows:
Offset Contents
0 Match Confidence 0-100
1 Point 1 X coordinate
2 Point 1 Y coordinate
3 Point 2 X coordinate
4 Point 2 Y coordinate
5 Point 3 X coordinate
6 Point 3 Y coordinate
7 Point 4 X coordinate
8 Point 4 Y coordinate
9 Translation in X
10 Translation in Y
11 Scale 0-100
12 Rotation in X (radians)
13 Rotation in Y (radians)
14 Orientation (Rotation in Z, radians)
15 Path start index
16 Length of path
Refering to the screen shot, you'l seen that element 1 lists the Point 4 Y coordinate, which should be in element 8. Element 0 lists the Point 4 X coordinate which should be element 7.
Here is how the data is loaded into the array
0 X4 of 0th fiducial
1 Y4 of 0th fiducial
2 Y translation of 1st fiducial
3 size of 1st fiducial
4 X or Y rotation of 1st fiducial
5 X or Y rotation of 1st fiducial
6 orientation of 1st fiducial
18 confidence of the of 0th fiducial
Clearly something messed up when two fiducials are present.
Steven Gentner from United States [1446 posts] 9 year
Sorry, as we didn't have the robofile we were using we had to make some assumptions.
Turns out, I'll bet you have the sort array checkboxes set in the Fiducial module? The sorting was happening on a non-17 entry boundary which was what was causing the mixing of numbers.
This has been corrected in the most recent version (just uploaded). Can you download and test?
billy from United States [10 posts] 9 year
Hey! Really quick service...yes it's working now.
This forum thread has been closed due to inactivity (more than 4 months) or number of replies (more than 50 messages). Please start a New Post and enter a new forum thread with the appropriate title. | {"url":"https://www.roborealm.com/forum/index.php?thread_id=5620","timestamp":"2024-11-04T01:16:41Z","content_type":"text/html","content_length":"24533","record_id":"<urn:uuid:dd635634-928a-4468-be79-89176e47babf>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00575.warc.gz"} |
Clocks are Monoids Too!
In the spirit of watching a few challenging videos I happened upon another Monoid / Monad tutorial which mentioned that clocks were Monoids too! Me being me, this seemed a delightful topic to share
on, so here we are.
Disclaimer: This article will veer a bit more advanced, especially if you haven't read the article mentioned in the next section.
Wait wait, what's a Monoid?
If you want a more general overview you might want to give this previous article of mine a read:
It will cover them in a more general sense, while this article will cover them in a more specific sense but will still cover the general rules and why that's so amusing.
Ok, Fast Version Then?
So a Monoid is something which follows three rules:
1. Join (Closure) - A way to combine two items to get back an item of the same type
2. Empty (Identity) - An empty item, that when joined with any other item of the same type, returns that same item.
3. Order (Associativity) - As long as the items retain their order you can group them however you want and get the same result back.
This sounds a bit complicated, but has an implementation you're already very familiar with: summing an array.
This gives us all three of those rules:
# Join (Closure) - A way to combine items
1 + 1
# Empty (Identity) - An empty item, when joined, returns the same item
1 + 0 == 1
# Order (Associativity) - Retain the order and you can group freely
1 + 2 + 3 == 1 + (2 + 3) == (1 + 2) + 3
As it turns out a lot of things happen to follow this nifty little pattern, and one of those nifty little things are clocks.
Our Clock
We'll assume for the duration of this post that our clock is a simple 12 hour clock. We won't worry about dates or anything beyond that.
To do that we'll start with a simple class we'll build on:
class Clock
attr_reader :time
def initialize(current_time)
@time = current_time % 12
We'll put a modulo 12 in there just to make sure someone's not being naughty and making us use 24H clocks.
Joining Clocks
Now here's the interesting thing about joining functions: they don't have to necessarily be an operator. They can be an entire function.
So to join clocks we start with an interesting predicament: what happens when the clock crosses twelve?
It goes right back around.
In programming we can implement that behavior using modulo to ensure that once we hit a limit we start right back over again. In this case modulo 12:
class Clock
attr_reader :time
def initialize(current_time)
@time = current_time % 12
def join(other_clock)
new_time = (time + other_clock.time) % 12
Trying this out we might get something like this:
clock_one = Clock.new(12)
clock_two = Clock.new(5)
clock_three = Clock.new(6)
# => #<Clock:0x00007f88faa8d998 @time=5>
# => #<Clock:0x00007f88fa11b9a0 @time=11>
Really it's just addition with some extra steps, the only difference is we now have a clock type we need to return as well. Have to make sure we're consistent.
The Rules
So do we get a new clock if we smash clocks together?:
clock_one = Clock.new(12)
clock_two = Clock.new(5)
# => true
Yep! One down.
Extra Steps
Noted that we could rely on the initializer doing the modulo here, but for the sake of the exercise we want to be a bit explicit about this. You could also make join into + but that's an exercise
I'll leave up to the reader.
An Empty Clock
So if it's just addition with some extra steps, that means that 0 should still work right? Right!
class Clock
attr_reader :time
def initialize(current_time)
@time = current_time % 12
def join(other_clock)
new_time = (time + other_clock.time) % 12
def self.empty
If we were to try that out we might get something like this:
clock_two = Clock.new(5)
# => #<Clock:0x00007f88fa1242a8 @time=5>
The Rules
Does that hold up to our rules?:
clock_two = Clock.new(5)
clock_two.join(Clock.empty) == clock_two
# => false
Oi! That's not right. Well, it is if we define equality based on the time like so:
class Clock
attr_reader :time
def initialize(current_time)
@time = current_time % 12
def join(other_clock)
new_time = (time + other_clock.time) % 12
def ==(other)
time == other.time
def self.empty
Now if we try it:
clock_two = Clock.new(5)
clock_two.join(Clock.empty) == clock_two
# => true
...much better.
Now that we have those two down, what happens if we start adding together multiple clocks? Our class is already ready to go here:
class Clock
attr_reader :time
def initialize(current_time)
@time = current_time % 12
def join(other_clock)
new_time = (time + other_clock.time) % 12
def ==(other)
time == other.time
def self.empty
All we need to do is test it.
The Rules
Remember, a + b + c == a + (b + c) == (a + b) + c.
If we're being exceptionally cheeky we can just name our clocks along the same lines:
a = Clock.new(12)
b = Clock.new(5)
c = Clock.new(6)
...and speaking of cheeky, I rather don't want to write that with join so let's add that plus from above to the class as an alias for join:
class Clock
attr_reader :time
def initialize(current_time)
@time = current_time % 12
def join(other_clock)
new_time = (time + other_clock.time) % 12
alias_method :+, :join
def ==(other)
time == other.time
def self.empty
So if we were to try that now we'd get:
a = Clock.new(12)
b = Clock.new(5)
c = Clock.new(6)
a + b + c == a + (b + c)
# => true
a + (b + c) == (a + b) + c
# => true
Aha! That means we've gotten all our rules. Great, why do we care?
What does it reduce to?
Well now that means we can do all types of fun things with items like reduce:
[a, b, c].reduce(Clock.empty) { |clock, next_clock| clock + next_clock }
# => #<Clock:0x00007f88f86112b8 @time=11>
# or condense it:
[a, b, c].reduce(Clock.empty, :+)
# => #<Clock:0x00007f88f8640090 @time=11>
# one more time!
[a, b, c].sum(Clock.empty)
Monoids come with some fun little behaviors, because Monoid means "Like One" if you squint hard enough and ignore exact etymology a bit. Really I've taken to calling them reducible, foldable, or any
other number of things.
They're nifty, and once you see them you see them everywhere. Strings, Hashes, Arrays, Integers, Floats, ActiveRecord Queries, and a whole lot more.
Wrapping Up
This was a bit more of an advanced writeup for funsies as I saw something amusing in a video and wanted to share some of my ramblings for the day. Consider it a fun little thought experiment, and
thank you for joining me on this ride.
Top comments (0)
For further actions, you may consider blocking this person and/or reporting abuse | {"url":"https://dev.to/baweaver/clocks-are-monoids-too-53ne","timestamp":"2024-11-07T06:05:33Z","content_type":"text/html","content_length":"100519","record_id":"<urn:uuid:a2ea3373-2674-4187-8a56-46c212722ea7>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00106.warc.gz"} |
Grammar, dominating
From Encyclopedia of Mathematics
A type of formal grammar (cf. Grammar, formal) serving to generate strings together with derivation trees (cf. Syntactic structure). Formally, a dominating grammar can be defined as a context-free
grammar (cf. Grammar, context-free) in which one of the occurrences of symbols on the right-hand side carries a special mark, except for rules of the type $I\to a$ where $I$ is the initial symbol and
$a$ is a terminal symbol. The right-hand side of each such rule must contain not fewer than two symbol occurrences. The context-sensitive system corresponding to the derivation in such a grammar (cf.
Grammar, context-sensitive) becomes hierarchic if the components "originating" from the marked occurrences of symbols on the right-hand side of the rules are considered to be the terminal components.
Each string of the language generated by the grammar is represented by a "derivation" tree, connected with this hierarchic context-free system (it is usually not unique). The diagram shows one such
tree, which represents the string $aaacbbb$ in the grammar with rules $I\to a'Ib$, $I\to aIb'$, $I\to c$ ($I$ is the initial symbol, while the prime serves as the mark); this tree corresponds to the
and the parentheses serve to separate non-trivial components.
Figure: g044800a
The most important partial class of dominating grammars are the so-called simple dominating grammars, in which only terminal symbols appear on the right-hand sides of the rules. Thus, the grammar of
the example shown above is simple. For any simple dominating grammar there exists a natural number $k$ such that in each "derivation" tree representing some string in this grammar not more than $k$
arcs issue out of any vertex. Conversely, each dominating grammar with this property has an equivalent simple dominating grammar such that, for any string, the sets of "derivation" trees defined for
it by the two grammars are identical.
A simple dominating grammar is also called a dependency grammar.
[1] M.I. Beletskii, "Context-free and domination grammars and the algorithmic problems connected with them" Cybernetics , 3 : 4 (1967) pp. 74–80 Kibernetika (Kiev) , 3 : 4 (1967) pp. 90–97
[2] A.V. Gladkii, "Formal grammars and languages" , Moscow (1973) (In Russian)
Dependency grammar is in Eastern Europe the most prominent linguistic theory.
Cf. also Formal languages and automata.
[a1] I. [I.A. Melchuk] Melčuk, "Dependency syntax. Theory and practice" , State Univ. New York Press (1988) (Translated from Russian)
How to Cite This Entry:
Grammar, dominating. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Grammar,_dominating&oldid=34276
This article was adapted from an original article by A.V. Gladkii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
See original article | {"url":"https://encyclopediaofmath.org/index.php?title=Grammar,_dominating&oldid=34276","timestamp":"2024-11-12T22:12:56Z","content_type":"text/html","content_length":"17832","record_id":"<urn:uuid:7be91125-38d1-4af4-9301-6f2705ad7594>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00287.warc.gz"} |
Art. 70. Concrete-Steel Beams With Double Reinforcement
598. We have seen that when the depth of a beam is limited by structural considerations we may increase the normal load by excessive reinforcement, but that this method results in low stresses in the
steel and is not usually economical. We may now consider the effect of placing reinforcing rods in the compression side of the beam as well as in the tension side.
Fig. 14. CROSS-SECTION (Single Reinforcement).
Fig. 15. CROSS-SECTION (Double Reinforcement).
Fig. 16. STRAIN DIAGRAM.
Let Fig. 14 represent the cross-section of a beam reinforced on the tension side with sufficient steel, area a, to develop the proper working stresses in the materials, and let the position of the
neutral axis be N N. If at distance x from the neutral axis we add an area of steel A' in the compression side, the position of the neutral axis would be changed for similar loading; but if at the
same time we place in the tension side an additional area of steel A such that (A/A') = x/y2 the position of the neutral axis will be unchanged. Let fs' = stress in steel in compression; then since
the steel must suffer the same deformation as the surrounding concrete fs/fs' = y2/x. Multiplying the last two equations, we have, fs A = f's A', that is, we have added equal forces to the two sides
of the beam, and have increased the moment of resistance by fs A (x + y2) inch-pounds.
599. To illustrate the application of this principle we may take the beam considered in § 591, in which z = 8, R = 20:
When the area of reinforcement in the tension side of this beam was increased to az = 3.12 sq. in. or a = .39, the theoretical bending moment was increased to 522,000 inch-pounds (§ 592). What will
be the result of a similar increase in steel distributed between the two sides of the beam?
Let k — distance from top of beam to center of reinforcement on compression side = 2 inches:
And total moment of resistance equals 311,100 + 486,400 = 797,500 inch-pounds.
None of the bars in the series mentioned in §591 had as large an area of reinforcement as 1.92 sq. in. on the compression side.
It is noticed, first, that the double reinforcement gives better results than such excessive reinforcement on the tension side; second, that the stress in steel on the compression side is less per
square inch than that in tension; and third, that in case a large addition of steel is made, this results in a greater area of steel in compression than the total area of steel in tension. In
practice the area of steel in compression is usually-made equal to, or less than, the area in tension, but beams with double reinforcement are seldom accurately designed. | {"url":"https://bookdome.com/science/Cement-Concrete/Art-70-Concrete-Steel-Beams-With-Double-Reinforcement.html","timestamp":"2024-11-05T18:55:43Z","content_type":"text/html","content_length":"18750","record_id":"<urn:uuid:1cf84ea2-0d96-4b97-b0cc-b151907718ce>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00278.warc.gz"} |
[Solved] How to remove duplicate elements from Array in Java? Example
This is one of the common
technical interview question
s which are asked to entry-level programmers and software engineers. There are also a lot of variants of how do you remove duplicates from an array in Java, like sometimes the
array is sorted
, and other times it's not sorted or unsorted. Sometimes, the Interviewer just spends more than half of the interview on this question by progressively making it more difficult by imposing new
constraints like removing duplicate elements in place or without using any additional data structure, etc.
Btw, If you are allowed to use Java's Collection framework, then this is quite easy to solve. Still, if you are not allowed to use
, and other Java utility classes, then it suddenly becomes a tricky algorithm question.
Anyway, before talking about solutions, let's first understand the problem. You have given an unsorted array of integers, and you have to remove all duplicates from it.
For example if input array is
{22, 3, 22, 11, 24, 24, 4, 3}
then output array should be
{22, 3, 11, 24, 4}
duplicate elements
22, 24 and 3 must be removed from original array. by the way, maintaining the original order of elements is not necessary, this is something you can ask your Interviewer.
I am sure at first he will allow you to solve it without bothering about the original order, but depending on how you do, he may ask you do it again, but this time keeping the original order intact.
Let's see different approaches to solve this problem.
Btw, if you are not familiar with the array data structure and essential data structures like a hash table, set, binary tree, etc. then it's better to first go through these
best data structure and algorithm courses
to learn more about basic data structure and Programming techniques in Java.
How to Remove Duplicates from unsorted Array in Java
The first and easiest approach to remove duplicates is to sort the array using
time and then remove repeated elements in
time. One advantage of sorting arrays is that duplicates will come together, making it easy to remove them.
By the way, this solution will not work if the Interviewer will put constraints like you cannot sort the array or original order of elements must be preserved in the output array. In that case, we
need to
Another way to remove duplicates from an integer array is to use a binary tree. You can construct a
binary search tree
using numbers from an array and discard all numbers, which are duplicates. The binary tree will only contain non-repeated values, which you can later convert it an array.
However, the drawback of this solution is that the original order of the element is not preserved. The time complexity of this solution is
because inserting a node in the binary tree will take
time, and we need to add n nodes, where n is the size of the array.
If you have trouble calculating the time and space complexity of your algorithms or want to know more about Big(O) notation, then you can also check out
Algorithms and Data Structures - Part 1 and 2
courses on Pluralsight. These two are some of the best courses to learn algorithms fundamentals in Java.
Java Program to Remove Duplicates from Unsorted Array
Now let's see a pure Java solution, where you are allowed to use the Set interface. You solve this problem by using
. If you are asked to preserve the order of elements, then you can use
, as it maintains the order on which elements are added to it.
package tool;
import static org.junit.Assert.assertArrayEquals;
import java.util.HashSet;
import java.util.Set;
import org.junit.Test;
public class DuplicateArray {
private Integer[] removeDuplicates(Integer[] input) {
if (input == null || input.length <= 0) {
return input;
Set<Integer> aSet = new HashSet<>(input.length);
// set will reject all duplicates
for (int i : input) {
return aSet.toArray(new Integer[aSet.size()]);
public void testArrayWithDuplicates() {
Integer[] given = new Integer[] { 1, 2, 3, 3 };
Integer[] actual = removeDuplicates(given);
Integer[] expected = new Integer[] { 1, 2, 3 };
assertArrayEquals(expected, actual);
public void testArrayWithoutDuplicates() {
Integer[] given = new Integer[] { 1, 2, 3 };
Integer[] actual = removeDuplicates(given);
Integer[] expected = new Integer[] { 1, 2, 3 };
assertArrayEquals(expected, actual);
public void testWithEmptyArray() {
Integer[] given = new Integer[] {};
Integer[] actual = removeDuplicates(given);
Integer[] expected = new Integer[] {};
assertArrayEquals(expected, actual);
public void testWithNull() {
Integer[] given = null;
Integer[] actual = removeDuplicates(given);
Integer[] expected = null;
assertArrayEquals(expected, actual);
public void testArrayWithAllDuplicates() {
Integer[] given = new Integer[] { 3, 3, 3 };
Integer[] actual = removeDuplicates(given);
Integer[] expected = new Integer[] { 3 };
assertArrayEquals(expected, actual);
public void testArrayWithMultipleDuplicates() {
Integer[] given = new Integer[] { 1, 2, 3, 3, 4, 4, 5, 5, 5 };
Integer[] actual = removeDuplicates(given);
Integer[] expected = new Integer[] { 1, 2, 3, 4, 5 };
assertArrayEquals(expected, actual);
And when you run this program as a JUnit test in Eclipse, you will see a green bar like below, which indicates that all test cases are passed, and our solution is working fine.
This is a good solution and also shows how the intelligent use of a
data structure
can make the solution easy. The code is straightforward to read and understand, and it's also very efficient in terms of CPU time as you just need
time to solve this problem.
Btw, if the Interviewer still puts another constraint and asks you to remove duplicates without using Java Collection API, then you have no choice but to iterate over an array and compare each and
every element to find and remove duplicates. The
full solution
is discussed here, which you can also see after you have tried.
That's all about
how to remove duplicates from an unsorted array in Java
. Every solution is acceptable, but all have their pros and cons. The critical thing is that you should start with the solution with the highest time and space complexity and then reach the most
efficient one. This is one of the techniques I always use in interviews with bringing the Interviewer to my strong areas.
Other Array Coding Problems to Practice
If you are interested in solving more Array-based algorithm problems, then here is the list of some of the frequently asked coding problems from interviews:
Thanks for reading this article. If you like this interview question, then please share it with your friends and colleagues. If you have any doubts or feedback, then please drop a note.
P. S.
- If you are looking for some Free Algorithms courses to improve your understanding of Data Structure and Algorithms, then you should also check this list of
Free Data Structure and Algorithms Courses
for Programmers.
No comments: | {"url":"https://www.java67.com/2019/04/how-to-remove-duplicates-from-unsorted-array-in-java.html?m=0","timestamp":"2024-11-11T17:20:12Z","content_type":"application/xhtml+xml","content_length":"196813","record_id":"<urn:uuid:494c1e89-cb63-4f49-9beb-740ada8de482>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00069.warc.gz"} |
Navier - Stokes equation: Cylindrical coordinates ,, :
Sanningen om GI - Google böcker, resultat
The primary difference between the two comes down to mass online_time_of_concentration: Time of concentration of small watersheds. References. [1] Kirpich formula (Eq. 2-61) [5] Kinematic wave
formula (Eq. 4-50 ). "Density Formula and Concentration Inequalities with Malliavin Calculus." Electron. J. Probab.
Mar 16, 2016 %w/w concentrations - example: An oil in your formula has a density of 0.9 g/ml. If the total mass of your solution is 100 g, you first need to Where: P = Pressure (atm). V = Gas volume
(L) n = Moles of gas (mol). T = Absolute temperature (K). R = 0.082 atm L mol-1 K-1 Molar gas constant m = Mass of Concentration Grants had the smallest number of formula-eligible children (10.1
million), and the average Concentration Grant allocation per formula-eligible child In chemistry, concentration is the abundance of a constituent divided by the total volume of a separation ·
Eutectic point · Alloy · Saturation · Supersaturation · Serial dilution · Dilution ( concentration formula. Example: 10 g salt and 70 g water are mixed and solution is prepared. Find concentration of
solution by percent mass.
ITM Report 185
Find concentration of solution by percent mass. Solution: Mass of Aug 7, 2017 Calculating concentration of reactants in a first order rate equation question. A solution of A is mixed with an equal
volume of solution B The radioactive concentration will be highest on the fresh lot date.
YY/T 1302.1-2015: Translated English of Chinese Standard
https://doi.org/10.1214/EJP.v14-707 The molarity calculator is based on the following equation: Mass (g) = Concentration (mol/L) x Volume (L) x Molecular Weight (g/mol). As an example, if the Nov
7, 2014 Calculation of Initial Gas Concentration. Introduction. During investigations of firms that sterilize medical products with ethylene oxide, there is a Calculation A. Sample calculation from
concentration in soil-vapor to concentration in soil1. 1. Converting parts per billion by volume to micrograms per cubic Aug 3, 2019 So, the available dose is given in terms of the concentration of
the main ways of calculating dosage: mental arithmetic and using a formula.
The molar concentration of solute is sometimes abbreviated by putting square brackets around the chemical formula of the solute, e.g., the concentration of hydroxide anions can be written as [OH⁻].
Suppose I calculate through formula N1V1=N2V2 for 6.18 ug/ul, so, vol. appears like 161.18 ul, which is not possible and simply mean I'm applying wrong formula. Kindly guide me. View 2017-04-21 ·
There are many formulas for the time of concentration.. A previous post discussed the Bransby Williams approach. Here I look at the Pilgrim McDermott formula, which is another method commonly used in
Australia and relates time of concentration to catchment area (A): 2020-03-02 · There are many methods available to estimate the time of concentration including the Kirpich formula, Kerby formula,
NRCS Velocity Method, and NRCS Lag Method.
Maria johansson josephsson
(AW of Na: 23, Cl The relationship between two solutions with the same amount of moles of solute can be represented by the formula c 1 V 1 = c 2 V 2, where c is concentration and V is volume. Time of
Concentration Part 630 National Engineering Handbook of concentration for that single area is required. A hydrograph is then developed using the methods de-scribed in NEH630.16.
Concentration = \(\frac{\textup{0.5~mol}}{\textup{2~dm}^3}\) Concentration = 0.25 mol/dm 3.
Lions mane jellyfish
daniel grenier chessvaluta lettland 2021skogskrematoriet skogskyrkogårdenmöllevången gentrifieringbedövande kondomergymnasieskola helsingborg corona
Method for correction of VFA loss in determination of dry
The 2016-03-16 · The %w/w formula is expressed as follows: Note that ‘weight’ refers to mass (i.e., as measured on scales). If a raw material in your formula is a liquid and measured by volume, you
must know the mass of this, which requires a density value. %w/w concentrations - example: An oil in your formula has a density of 0.9 g/ml. Concentrations are often expressed in terms of relative
unites (e.g.
Spansk polisseriebetongarbeten skåne
Absolute concentration. #ScuderiaFerrari #USGP #F1 #Kimi7
Molarity is described as the total number of moles of solute dissolved in per liter of solution,i.e., M = mol/L. 2020-03-02 2013-08-02 Learn the basics about Concentration formula and calculations.
How do you calculate the masses of reactants and products from balanced equations? Find out mo the concentration index by dividing through by 1 minus the mean (Wagstaff 2005). If the health variable
of interest takes negative as well as positive values, then its concentration index is not bounded within the range of (–1,1).
Biogena GlanduPlex energetisiert – Produkter / biogena.com
1 dalton = 1.660 539 040 (20) * 10 -27 kg. The majority carrier electron concentration is n o = ½{(5 x 1013) + ((5 x 1013)2 + 4(2.4 x 1013)2)1/2} = 5.97 x 1012 cm-3 The minority carrier hole
concentration is p 0 = n i 2 / n 0 = (2.4 x 1013)2/(5.97 x 1012) = 9.65 x 1012 cm-3 - Comment If the donor impurity concentration is not too different in magnitude from the intrinsic Example 1. A
ratio of 5mL to 12mL is the ratio 5:12 v/v. A ratio of 3mg to 5mg is the ratio 3:5 w/w.
During investigations of firms that sterilize medical products with ethylene oxide, there is a Calculation A. Sample calculation from concentration in soil-vapor to concentration in soil1. 1.
Converting parts per billion by volume to micrograms per cubic Aug 3, 2019 So, the available dose is given in terms of the concentration of the main ways of calculating dosage: mental arithmetic and
using a formula. Standard curves are graphs of light absorbance versus solution concentration a more accurate way to determine concentration to actually use the equation of Sep 8, 2013 Using these
formulas, the Metallurgical/Metallurgy performance of the concentration plant or of a particular mill circuit is readily assessed. concentration of the drug is calculated by the manufacturer (mg/mL,
mg/tablet). A 20.5 kg mixed To calculate the dose in milligrams, use the following formula;. and temperature by applying the ideal gas equation and the concept of molar volume. | {"url":"https://valutalkghzpg.netlify.app/96873/11628","timestamp":"2024-11-14T21:02:41Z","content_type":"text/html","content_length":"12101","record_id":"<urn:uuid:64dfad19-286d-43b0-8655-8d217e02de04>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00047.warc.gz"} |
About Author
Welcome to my website where I share resources and notes related to IBDP Maths HL/SL. I have presented Maths HL in a simpler way with an emphasis on the basic concepts to attract more students to
learn IBDP Math HL. Maths is a religion for me and I strongly believe that more and more people should study Maths as it makes us more rational. Maths gives us the ability to think clearly beyond
boundaries and we can achieve our goals in life just like a guided missile is able to hit the targets which are far and beyond reach. I have been successful in spreading the love for Maths as more
students join my classes in successive years studying Math HL and that encouraged me to develop this website.
I’m a passionate teacher of Mathematics with a wealth of experience both in Mathematics as well as in international schools. I have experience in both the Cambridge and IB programmes and the SL and
HL courses of Mathematics. I have worked in schools in India, Egypt, Singapore, Bandung, China, Bali and currently working at an international school in Jakarta (Indonesia) . I have worked a number
of years in the IB Programme and has experiences with EEs in Mathematics as well as the CAS programme. I possess a Bachelor degree in the Mathematics and Science fields as well as two Master’s
degrees in Mathematics and Education.
I also organise personalized educational trips/ institutional trips/service trips(especially for CAS) to Northern part of India & Indonesia for students/individuals across the globe through my
organization ibalasia. You can read more about these trips in the section – ibalasia on this website. Please feel free to contact me in case you have any queries. | {"url":"https://ibalmaths.com/index.php/profile/","timestamp":"2024-11-12T10:04:27Z","content_type":"text/html","content_length":"70063","record_id":"<urn:uuid:28cc74f9-adb2-4fe9-93e8-1d5add648a9f>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00682.warc.gz"} |
How AlphaGo Works
There are tons of AlphaGo thinkpieces so I'll refer you to Google for those. Here are two I like.
Briefly, suppose you didn't know anything about Poker but still wanted to play. If you were presented with possible moves you could make at each turn, all you need to know is how each move affects
your chance of winning. You've turned a poker game into a math problem.
Great, but that raises the question: How do you get these probabilities? This, of course, is the $1M dollar question but the answer is basically: simulate the play for a large number of games.
How AlphaGo Works in too much detail
The paradigm for AlphaGo isn't deep learning, it's Monte Carlo Tree Search (MCTS).
MCTS is a smart way of gathering statistics for games of perfect information where you're playing against an opponent. I'll refer you to other resources since I don't know more than that.
Interpreting Fig. 3 (black to move)
a. Select the move with the maximum action-value Q plus an exploration term and repeat.
b. If a position hasn't been previously explored, it's time to evaluate possible moves with the policy network.
c. Moves are evaluated with the fast rollout policy and the value network
d. Action-values are backpropagated up the tree
The policy network limits the search space while the value network and fast rollout policy approximate rollouts to the end of the game.
Technical Questions
1. Why don't they use the Q value to choose a move? N visits is more stable – why?
2. How can AlphaGo be more efficient?
3. Why is the SL Policy used in liu of the RL-trained policy? It makes better, more diverse move selections. In other words why is the RL policy myopic?
4. Why didn't DQN work? Or rather, why couldn't it be made stronger without MCTS? | {"url":"https://www.roryhr.com/alphago.html","timestamp":"2024-11-04T04:13:31Z","content_type":"text/html","content_length":"7817","record_id":"<urn:uuid:c46e6116-a5f1-4ead-a962-8425a2d0d184>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00202.warc.gz"} |
The goal of idmact is to provide an implementation of the Schiel (1998) algorithm for interpreting differences between mean ACT assessment scores.
You can install the released version of idmact from CRAN with:
You can install the development version of idmact from GitHub with:
Main Algorithm
The idmact package provides tools designed to examine the significance of differences between mean ACT scale scores over time. The fundamental algorithm for composite scale scores consists of the
following six steps:
1. Add one unit to the raw score of one or more subjects for each student to derive adjusted raw scores.
2. Convert adjusted raw scores to adjusted scale scores using the form’s raw score to scale score map. Note that perfect raw scores are always converted to the maximum allowable scale score,
irrespective of the adjustment in step one.
3. For each student, sum the adjusted scale scores across all subject areas, divide this sum by the number of subject areas, and round to the nearest integer. This produces each student’s adjusted
composite scale score.
4. Calculate the adjusted mean composite scale score across all students (m_adj).
5. Calculate the unadjusted mean composite scale score across all observations (m_unadj).
6. Compute the difference between the adjusted and unadjusted mean composite scale scores to get the delta composite: deltac = m_adj - m_unadj
While this algorithm was initially developed to address the challenges in interpreting differences between mean ACT scale scores, it can also be beneficial in other contexts. These may include
situations where different forms of an assessment have varying raw score to scale score maps, particularly when the relationship between raw and scale scores is complex and/or proprietary.
Single Subject Example
In the subsequent example, we’ll use the algorithm described above to interpret differences between two forms of a hypothetical assessment. Both forms share the same range of raw scores and scale
scores. However, the conversion map from raw scores to scale scores differs between the two assessments. To keep this example straightforward, we’ll simulate a single data set of raw scores and then
use the algorithm to compare mean differences in scale scores between the two assessments. This initial example will concentrate on a single subject area, hence step 3 of the algorithm will not be
Generate Data and Maps
# Create 100 raw scores
raw_scores <- as.list(sample(1:100, 100, replace = TRUE))
# Map between raw scores and scale scores for each form
## Each form has scale scores ranging from 1 to 12
map_scale <- c(1:12)
## Each assessment has raw scores ranging from 1 - 100
map_raw_formA <- list(1:5, 6:20, 21:25, 26:40, 41:45, 46:50, 51:55,
56:75, 76:80, 81:85, 86:90, 91:100)
map_raw_formB <- list(1:10, 11:20, 21:30, 31:40, 41:50, 51:55, 56:65,
66:75, 76:85, 86:90, 91:95, 96:100)
Format Map
In the raw score to scale score map presented above, vectors/lists were utilized to efficiently map each of the 100 raw scores to each of the 12 scale scores. Use the map_elongate function to expand
the raw and scale sections of the map into a format where each portion is a list of 100. This is the map format required for the idmact_subj function, which implements the subject-level algorithm.
formA <- map_elongate(map_raw = map_raw_formA,
map_scale = map_scale)
formB <- map_elongate(map_raw = map_raw_formB,
map_scale = map_scale)
Run Subject Level Algorithm
Utilize the raw data and the maps stored in formA and formB to calculate and compare the subject-level delta (deltas). In the algorithm below, each raw score is increased by 1 using the default value
of the inc parameter.
resA <- idmact_subj(raw = raw_scores,
map_raw = formA$map_raw,
map_scale = formA$map_scale)
resB <- idmact_subj(raw = raw_scores,
map_raw = formB$map_raw,
map_scale = formB$map_scale)
Compare Results
cat("Form A subject level delta:", resA$deltas, "\n")
#> Form A subject level delta: 0.11
cat("Form B subject level delta:", resB$deltas)
#> Form B subject level delta: 0.1
In this section, the function idmact_subj is used to calculate the ‘delta’ for each form (A and B). The ‘delta’ is the difference between the mean adjusted scale score and the mean unadjusted scale
score. This ‘delta’ value provides an estimate of how much the mean scale score would increase if every student were to answer one additional item correctly on the test.
In the provided example:
For Form A, the subject level delta is 0.11, meaning the mean scale score is expected to increase by 0.11 if every student answers one additional item correctly.
For Form B, the subject level delta is 0.1, suggesting that the mean scale score would increase by 0.1 under the same conditions.
The results indicate that Form A is more responsive to increases in raw scores, as a one unit increase in raw score leads to a larger increase in the mean scale score for Form A compared to Form B.
Composite Example
The idmact_comp function can be used to calculate the delta for composite scores (deltac). In the example below, raw scores for an additional subject area (‘s2’) will be created and combined with the
data from the previous example to demonstrate idmact_comp.
Generate Data and Maps
# Create 100 raw scores
raw_scores_s2 <- as.list(sample(1:100, 100, replace = TRUE))
# Subject two will use the same ranges for raw scores and scale scores as in
# the previous example, but the map will be slightly different.
map_raw_formA_s2 <- list(1:10, 11:25, 26:30, 31:40, 41:45, 46:50, 51:60,
61:75, 76:80, 81:85, 86:90, 91:100)
map_raw_formB_s2 <- list(1:10, 11:16, 17:25, 26:35, 36:45, 46:55, 56:60,
61:75, 76:85, 86:90, 91:95, 96:100)
formA_s2 <- map_elongate(map_raw = map_raw_formA_s2,
map_scale = map_scale)
formB_s2 <- map_elongate(map_raw = map_raw_formB_s2,
map_scale = map_scale)
Run Composite Level Algorithm
In the algorithm below, each raw score for each subject is incremented by 1.
resA_comp <- idmact_comp(raw = list(raw_scores, raw_scores_s2),
inc = list(1, 1),
map_raw = list(formA$map_raw, formA_s2$map_raw),
map_scale = list(formA$map_scale, formA_s2$map_scale))
resB_comp <- idmact_comp(raw = list(raw_scores, raw_scores_s2),
inc = list(1, 1),
map_raw = list(formB$map_raw, formB_s2$map_raw),
map_scale = list(formB$map_scale, formB_s2$map_scale))
Compare Composite Results
cat("Form A composite level delta:", resA_comp$composite_results$deltac, "\n")
#> Form A composite level delta: 0.08
cat("Form B composite level delta:", resB_comp$composite_results$deltac)
#> Form B composite level delta: 0.08
In the composite example, two subjects are considered instead of one, and the idmact_comp function is used to calculate the composite level delta (deltac). The composite level delta is calculated in
a similar way to the subject level delta, but it considers the total adjusted and unadjusted scale scores across all subjects, rather than just one.
In the provided example:
For both Form A and Form B, the composite level delta is 0.08. This means that if every student were to answer one additional item correctly on each form, the mean composite scale score is expected
to increase by 0.08.
This suggests that, when considering multiple subjects, both Form A and Form B respond similarly to increases in raw scores.
See Subject Area 2 Results
cat("Form A subject area 2 delta:", resA_comp$subject_results[[2]]$deltas, "\n")
#> Form A subject area 2 delta: 0.05
cat("Form B subject area 2 delta:", resB_comp$subject_results[[2]]$deltas)
#> Form B subject area 2 delta: 0.08
This section presents the ‘delta’ for the second subject area specifically. The ‘delta’ for the second subject area is calculated in the same way as the subject level delta mentioned in the first
section, but it only considers the scores for the second subject.
In the provided example:
For Form A, the subject area 2 delta is 0.05, suggesting that the mean scale score for the second subject would increase by 0.05 if every student answered one additional item correctly.
For Form B, the subject area 2 delta is 0.08, meaning that the mean scale score for the second subject would increase by 0.08 under the same conditions.
These results indicate that, for the second subject area specifically, Form B is more responsive to increases in raw scores. | {"url":"https://pbil.univ-lyon1.fr/CRAN/web/packages/idmact/readme/README.html","timestamp":"2024-11-06T03:12:03Z","content_type":"application/xhtml+xml","content_length":"28216","record_id":"<urn:uuid:64e8f93a-853a-4dd7-8feb-63738c4c4e21>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00489.warc.gz"} |
Introduction to Divide and Conquer | CodingDrills
Divide and Conquer on Arrays
Divide and Conquer on Trees
Divide and Conquer on Graphs
Divide and Conquer on Strings
Divide and Conquer on Numbers
Divide and Conquer on Dynamic Programming
Divide and Conquer on Geometric Problems
Introduction to Divide and Conquer
Introduction to Divide and Conquer Algorithms
In the world of programming, efficiency and performance are of utmost importance. Divide and Conquer is a powerful algorithmic technique often used to solve complex problems in an efficient manner.
In this post, we will dive into the concept of Divide and Conquer algorithms and explore how they can be applied to various programming problems.
What is Divide and Conquer?
Divide and Conquer is a problem-solving technique that involves breaking down a large problem into smaller, more manageable sub-problems, solving them independently, and then combining their
solutions to obtain the final result. This approach follows a recursive process, where the problem is divided into sub-problems until they become simple enough to be solved directly.
Benefits of Divide and Conquer
The Divide and Conquer technique offers several benefits when it comes to solving complex problems:
1. Efficiency: By breaking down a problem into smaller sub-problems, Divide and Conquer reduces the complexity of the overall problem. This allows for faster computations and improved performance.
2. Modularity: Dividing a problem into smaller sub-problems enables code reusability and modular design. Once a sub-problem is solved, it can be used as a building block for solving larger problems.
3. Parallelism: Divide and Conquer algorithms are often parallelizable. Since the sub-problems are independent of each other, they can be executed concurrently, leveraging the power of
multi-threading or distributed computing.
Implementation of Divide and Conquer Algorithm
To implement a Divide and Conquer algorithm, we typically follow these steps:
1. Divide: Divide the given problem into smaller sub-problems. This step involves breaking down the problem into manageable parts that can be solved independently.
2. Conquer: Solve the sub-problems recursively. If the sub-problems are small enough, solve them directly. Otherwise, apply the Divide and Conquer technique again until the sub-problems become
simple enough to be solved.
3. Combine: Combine the solutions of the sub-problems to obtain the final result. This step involves merging the solutions from the sub-problems to construct the solution for the original problem.
Let's take a look at an example to understand the Divide and Conquer concept better.
Example: Finding the Maximum Number in an Array
Suppose we have an array of integers and we want to find the maximum number in the array. We can apply the Divide and Conquer technique to solve this problem efficiently.
Here's an implementation in Python:
def find_max(arr, low, high):
# Base case: If the array contains only one element
if low == high:
return arr[low]
# Divide the array into two halves
mid = (low + high) // 2
left_max = find_max(arr, low, mid)
right_max = find_max(arr, mid + 1, high)
# Combine the results to obtain the maximum
return max(left_max, right_max)
In the above code, the find_max function takes an array arr, a starting index low, and an ending index high. It divides the array into two halves and recursively finds the maximum of each half.
Finally, it combines the results by returning the maximum of the left and right halves.
Divide and Conquer algorithms are a powerful technique for solving complex problems efficiently. By dividing a problem into smaller sub-problems and solving them independently, we can achieve faster
computations, modular design, and potential parallelism. Understanding and implementing Divide and Conquer algorithms can greatly enhance a programmer's problem-solving skills. So, the next time you
encounter a challenging problem, consider applying the Divide and Conquer approach for an optimized solution.
Now that you have a good understanding of the concept, go ahead and explore the world of Divide and Conquer algorithms in your programming journey!
Note: The code snippets and examples provided in this post are for illustrative purposes only and may require further optimization or modifications based on specific use cases. | {"url":"https://www.codingdrills.com/tutorial/introduction-to-divide-and-conquer-algorithms/introduction-to-dc","timestamp":"2024-11-05T20:12:51Z","content_type":"text/html","content_length":"319335","record_id":"<urn:uuid:24c8b962-48de-49ee-afd9-d3d182c26c44>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00634.warc.gz"} |
Updated notes for ece1229 antenna theory - Peeter Joot's BlogParametricPlot3D – Peeter Joot's Blog
Updated notes for ece1229 antenna theory
March 16, 2015 ece1229 137, 138 Index, Ampere's law, antenna under test, aperture efficiency, aration, AUT, average, average power density, bivector, bold vectors, caligraphic vectors, captured power
, circular polarization, complex, continuity equation, covariant, cross product, curl, curl of curl, dB, dBi, dipole, dipole current, dipole moment, directivity, divergence, divergence theorem, dot
product, dual, Dual-Maxwell’s equation, Dual-Maxwell’s equations, duality, effective area, Effective Isotropic Receiving Power, EIRP, electic field, electric, electric charge density, electric
current density, electric far field, electric field, electric four potential, electric potential, electric vector potential, far field, four current, four potential, four vector, free-space loss,
Fresnel equations, Friis equation, Friis transmission equation, gain, Geometric Algebra, geometric product, gradient, Green's function, ground reflection, half power beamwidth, Helmholtz equation,
horizontal dipole, horizontal electrical dipole, IERP, image theorem, impedance, impulse response, in terms of potentials, index of refraction, infinitesimal dipole, infinitesimal electric dipole,
intrinsic impedance, isotropic radiator, Julia, line of sight, linear media, linear polarization, linear time invariant, Lorentz gauge, magnetic, magnetic and electric potential sep-, magnetic charge
, magnetic charge density, magnetic current, magnetic current density, magnetic current source, magnetic field, magnetic four potential, magnetic vector potential, magnetization, matched load,
Mathematica, Matlab, maximum directivity, Maxwell equation, Maxwell's equation, Maxwell's equations, Mie scattering, Minkowski space, near field, non-covariant GA form, non-homogeneous, notation,
optical limit, ParametricPlot, ParametricPlot3D, permittivity, phasor, phasor sign, plane of incidence, plane of reflection, plane wave, PLF, polarization, polarization loss factor, polarization
mismatch, polarization power loss, polarization vector, potential, power, power density, Poynting vector, pseudoscalar, radar cross section, radiated power density, radiation intensity, RCS,
reciprocity theorem, reflection coefficient, relation to cross product, scalar potential, scattered power, scattered power density, signal to noise, spacetime gradient, spherical coordinates,
spherical scattering, standard gain, Stokes' theorem, superposition, time average, transmitted power, transverse projection, triple product, vector potential, vertical dipole, wave vector, wavelength
, wedge product, X-band
I’ve now posted a first update of my notes for the antenna theory course that I am taking this term at UofT.
Unlike most of the other classes I have taken, I am not attempting to take comprehensive notes for this class. The class is taught on slides which go by faster than I can easily take notes for (and
some of which match the textbook closely). In class I have annotated my copy of textbook with little details instead. This set of notes contains musings of details that were unclear, or in some
cases, details that were provided in class, but are not in the text (and too long to pencil into my book), as well as some notes Geometric Algebra formalism for Maxwell’s equations with magnetic
sources (something I’ve encountered for the first time in any real detail in this class).
The notes compilation linked above includes all of the following separate notes, some of which have been posted separately on this blog:
• (From problem set2 3) Corner cube image factor Take II.
• Problem set 3, Dipoles and corner cube antennas
• Problem set 2, Fundamental parameters and Field radiation
• Problem set 1, Fundamental parameters of Antennas
• Fundamental parameters of Antennas
Reading notes for chapter 2.
I’ve now posted a first set of notes for the antenna theory course that I am taking this term at UofT.
Unlike most of the other classes I have taken, I am not attempting to take comprehensive notes for this class. The class is taught on slides that match the textbook so closely, there is little value
to me taking notes that just replicate the text. Instead, I am annotating my copy of textbook with little details instead. My usual notes collection for the class will contain musings of details that
were unclear, or in some cases, details that were provided in class, but are not in the text (and too long to pencil into my book.)
The notes linked above include:
• Reading notes for chapter 2 (Fundamental Parameters of Antennas) and chapter 3 (Radiation Integrals and Auxiliary Potential Functions) of the class text.
• Geometric Algebra musings. How to do formulate Maxwell’s equations when magnetic sources are also included (those modeling magnetic dipoles).
• Some problems for chapter 2 content.
[Click here for a PDF of this post with nicer formatting]
This is my first set of notes for the UofT course ECE1229, Advanced Antenna Theory, taught by Prof. Eleftheriades, covering ch. 2 [1] content.
Unlike most of the other classes I have taken, I am not attempting to take comprehensive notes for this class. The class is taught on slides that match the textbook so closely, there is little value
to me taking notes that just replicate the text. Instead, I am annotating my copy of textbook with little details instead. My usual notes collection for the class will contain musings of details that
were unclear, or in some cases, details that were provided in class, but are not in the text (and too long to pencil into my book.)
Poynting vector
The Poynting vector was written in an unfamiliar form
\boldsymbol{\mathcal{W}} = \boldsymbol{\mathcal{E}} \cross \boldsymbol{\mathcal{H}}.
I can roll with the use of a different symbol (i.e. not \(\BS\)) for the Poynting vector, but I’m used to seeing a \( \frac{c}{4\pi} \) factor ([6] and [5]). I remembered something like that in SI
units too, so was slightly confused not to see it here.
Per [3] that something is a \( \mu_0 \), as in
\boldsymbol{\mathcal{W}} = \inv{\mu_0} \boldsymbol{\mathcal{E}} \cross \boldsymbol{\mathcal{B}}.
Note that the use of \( \boldsymbol{\mathcal{H}} \) instead of \( \boldsymbol{\mathcal{B}} \) is what wipes out the requirement for the \( \frac{1}{\mu_0} \) term since \( \boldsymbol{\mathcal{H}} =
\boldsymbol{\mathcal{B}}/\mu_0 \), assuming linear media, and no magnetization.
Typical far-field radiation intensity
It was mentioned that
U(\theta, \phi)
\frac{r^2}{2 \eta_0} \Abs{ \BE( r, \theta, \phi) }^2
\frac{1}{2 \eta_0} \lr{ \Abs{ E_\theta(\theta, \phi) }^2 + \Abs{ E_\phi(\theta, \phi) }^2},
where the intrinsic impedance of free space is
\eta_0 = \sqrt{\frac{\mu_0}{\epsilon_0}} = 377 \Omega.
(this is also eq. 2-19 in the text.)
To get an understanding where this comes from, consider the far field radial solutions to the electric and magnetic dipole problems, which have the respective forms (from [3]) of
\boldsymbol{\mathcal{E}} &= -\frac{\mu_0 p_0 \omega^2 }{4 \pi } \frac{\sin\theta}{r} \cos\lr{w t – k r} \thetacap \\
\boldsymbol{\mathcal{B}} &= -\frac{\mu_0 p_0 \omega^2 }{4 \pi c} \frac{\sin\theta}{r} \cos\lr{w t – k r} \phicap \\
\boldsymbol{\mathcal{E}} &= \frac{\mu_0 m_0 \omega^2 }{4 \pi c} \frac{\sin\theta}{r} \cos\lr{w t – k r} \phicap \\
\boldsymbol{\mathcal{B}} &= -\frac{\mu_0 m_0 \omega^2 }{4 \pi c^2} \frac{\sin\theta}{r} \cos\lr{w t – k r} \thetacap \\
In neither case is there a component in the direction of propagation, and in both cases (using \( \mu_0 \epsilon_0 = 1/c^2\))
= \frac{\Abs{\boldsymbol{\mathcal{E}}}}{\mu_0 c}
= \Abs{\boldsymbol{\mathcal{E}}} \sqrt{\frac{\epsilon_0}{\mu_0}}
= \inv{\eta_0}\Abs{\boldsymbol{\mathcal{E}}} .
A superposition of the phasors for such dipole fields, in the far field, will have the form
\BE &= \inv{r} \lr{ E_\theta(\theta, \phi) \thetacap + E_\phi(\theta, \phi) \phicap } \\
\BB &= \inv{r c} \lr{ E_\theta(\theta, \phi) \thetacap – E_\phi(\theta, \phi) \phicap },
with a corresponding time averaged Poynting vector
&= \inv{2 \mu_0} \BE \cross \BB^\conj \\
\inv{2 \mu_0 c r^2}
\lr{ E_\theta \thetacap + E_\phi \phicap } \cross
\lr{ E_\theta^\conj \thetacap – E_\phi^\conj \phicap } \\
\frac{\thetacap \cross \phicap}{2 \mu_0 c r^2}
\lr{ \Abs{E_\theta}^2 + \Abs{E_\phi}^2 } \\
\frac{\rcap}{2 \eta_0 r^2}
\lr{ \Abs{E_\theta}^2 + \Abs{E_\phi}^2 },
verifying \ref{eqn:advancedantennaL1:20} for a superposition of electric and magnetic dipole fields. This can likely be shown for more general fields too.
Field plots
We can plot the fields, or intensity (or log plots in dB of these).
It is pointed out in [3] that when there is \( r \) dependence these plots are done by considering the values of at fixed \( r \).
The field plots are conceptually the simplest, since that vector parameterizes
a surface. Any such radial field with magnitude \( f(r, \theta, \phi) \) can
be plotted in Mathematica in the \( \phi = 0 \) plane at \( r = r_0 \), or in
3D (respectively, but also at \( r = r_0\)) with code like that of the
following listing
Intensity plots can use the same code, with the only difference being the interpretation. The surface doesn’t represent the value of a vector valued radial function, but is the magnitude of a scalar
valued function evaluated at \( f( r_0, \theta, \phi) \).
The surfaces for \( U = \sin\theta, \sin^2\theta \) in the plane are parametrically plotted in fig. 2, and for cosines in fig. 1 to compare with textbook figures.
Visualizations of \( U = \sin^2 \theta\) and \( U = \cos^2 \theta\) can be found in fig. 3 and fig. 4 respectively. Even for such simple functions these look pretty cool.
dB vs dBi
Note that dBi is used to indicate that the gain is with respect to an “isotropic” radiator.
This is detailed more in [2].
Trig integrals
Tables 1.1 and 1.2 produced with tableOfTrigIntegrals.nb have some of the sine and cosine integrals that are pervasive in this chapter.
Polarization vectors
The text introduces polarization vectors \( \rhocap \) , but doesn’t spell out their form. Consider a plane wave field of the form
E_x e^{j \phi_x} e^{j \lr{ \omega t – k z }} \xcap
E_y e^{j \phi_y} e^{j \lr{ \omega t – k z }} \ycap.
The \( x, y \) plane directionality of this phasor can be written
\Brho =
E_x e^{j \phi_x} \xcap
E_y e^{j \phi_y} \ycap,
so that
\BE = \Brho e^{j \lr{ \omega t – k z }}.
Separating this direction and magnitude into factors
\Brho = \Abs{\BE} \rhocap,
allows the phasor to be expressed as
\BE = \rhocap \Abs{\BE} e^{j \lr{ \omega t – k z }}.
As an example, suppose that \( E_x = E_y \), and set \( \phi_x = 0 \). Then
\rhocap = \xcap + \ycap e^{j \phi_y}.
Phasor power
In section 2.13 the phasor power is written as
I^2 R/2,
where \( I, R \) are the magnitudes of phasors in the circuit.
I vaguely recall this relation, but had to refer back to [4] for the details.
This relation expresses average power over a period associated with the frequency of the phasor
&= \inv{T} \int_{t_0}^{t_0 + T} p(t) dt \\
&= \inv{T} \int_{t_0}^{t_0 + T} \Abs{\BV} \cos\lr{ \omega t + \phi_V }
\Abs{\BI} \cos\lr{ \omega t + \phi_I} dt \\
&= \inv{T} \int_{t_0}^{t_0 + T} \Abs{\BV} \Abs{\BI}
\cos\lr{ \phi_V – \phi_I } + \cos\lr{ 2 \omega t + \phi_V + \phi_I}
dt \\
&= \inv{2} \Abs{\BV} \Abs{\BI} \cos\lr{ \phi_V – \phi_I }.
Introducing the impedance for this circuit element
\BZ = \frac{ \Abs{\BV} e^{j\phi_V} }{ \Abs{\BI} e^{j\phi_I} } = \frac{\Abs{\BV}}{\Abs{\BI}} e^{j\lr{\phi_V – \phi_I}},
this average power can be written in phasor form
\BP = \inv{2} \Abs{\BI}^2 \BZ,
P = \textrm{Re} \BP.
Observe that we have to be careful to use the absolute value of the current phasor \( \BI \), since \( \BI^2 \) differs in phase from \( \Abs{\BI}^2 \). This explains the conjugation in the [4]
definition of complex power, which had the form
\BS = \BV_{\textrm{rms}} \BI^\conj_{\textrm{rms}}.
Radar cross section examples
Flat plate.
\sigma_{\textrm{max}} = \frac{4 \pi \lr{L W}^2}{\lambda^2}
In the optical limit the radar cross section for a sphere
\sigma_{\textrm{max}} = \pi r^2
Note that this is smaller than the physical area \( 4 \pi r^2 \).
\sigma_{\textrm{max}} = \frac{ 2 \pi r h^2}{\lambda}
Tridedral corner reflector
\sigma_{\textrm{max}} = \frac{ 4 \pi L^4}{3 \lambda^2}
Scattering from a sphere vs frequency
Frequency dependence of spherical scattering is sketched in fig. 10.
• Low frequency (or small particles): Rayleigh\begin{equation}\label{eqn:chapter2Notes:1040}
\sigma = \lr{\pi r^2} 7.11 \lr{\kappa r}^4, \qquad \kappa = 2 \pi/\lambda.
• Mie scattering (resonance),\begin{equation}\label{eqn:chapter2Notes:1060}
\sigma_{\textrm{max}}(A) = 4 \pi r^2
\sigma_{\textrm{max}}(B) = 0.26 \pi r^2.
• optical limit ( \(r \gg \lambda\) )\begin{equation}\label{eqn:chapter2Notes:1100}
\sigma = \pi r^2.
FIXME: Do I have a derivation of this in my optics notes?
• Time average.
Both Prof. Eleftheriades
and the text [1] use square brackets \( [\cdots] \) for time averages, not \( <\cdots> \). Was that an engineering convention?
• Prof. Eleftheriades
writes \(\Omega\) as a circle floating above a face up square bracket, as in fig. 1, and \( \sigma \) like a number 6, as in fig. 1.
• Bold vectors are usually phasors, with (bold) calligraphic script used for the time domain fields. Example: \( \BE(x,y,z,t) = \ecap E(x,y) e^{j \lr{\omega t – k z}}, \boldsymbol{\mathcal{E}}(x,
y, z, t) = \textrm{Re} \BE \).
[1] Constantine A Balanis. Antenna theory: analysis and design. John Wiley \& Sons, 3rd edition, 2005.
[3] David Jeffrey Griffiths and Reed College. Introduction to electrodynamics. Prentice hall Upper Saddle River, NJ, 3rd edition, 1999.
[4] J.D. Irwin. Basic Engineering Circuit Analysis. MacMillian, 1993.
[5] JD Jackson. Classical Electrodynamics. John Wiley and Sons, 2nd edition, 1975.
[6] L.D. Landau and E.M. Lifshitz. The classical theory of fields. Butterworth-Heinemann, 1980. ISBN 0750627689.
Notes for ece1229 antenna theory
February 4, 2015 ece1229 Ampere's law, antenna, antisymmetric, average power, bivector, complex power, constituative relations, continuity equation, cross product, curl, dB, dBi, decibel, directivity
, divergence, divergence theorem, dot product, dual, duality, ece1229, electic source, electric dipole, electric field, electric sources, far field, four current, four gradient, four potential, four
vector, free space, GA, gain, Geometric Algebra, geometric product, grade, Green's function, half power beamwidth, Helmholtz equation, Helmholz equation, impedance, impulse response, intensity,
isotropic radiator, linear media, linear time invariant, Lorentz gauge, magnetic charge, magnetic current, magnetic dipole, magnetic field, magnetic source, magnetic sources, magnetization,
Mathematica, Maxwell-Faraday equation, Maxwell's equation, Maxwell's equations, Mie scattering, Minkowski space, optical limit, parametric plot, ParametricPlot, ParametricPlot3D, Pauli basis, phasor,
plane wave, polarization vector, potential, power, Poynting vector, pseudoscalar, radar cross section, radiation intensity, Rayleigh scattering, scalar, scalar potential, spacetime gradient,
spacetime split, spherical coordinates, spherical scattering, spherical wave, Stokes' theorem, superposition, vector potential, wedge product
Fundamental parameters of antennas
January 22, 2015 ece1229 antenna, average power, complex power, dB, dBi, decibel, ece1229, electric dipole, far field, free space, gain, impedance, intensity, isotropic radiator, linear media,
magnetic dipole, magnetization, Mathematica, Mie scattering, optical limit, parametric plot, ParametricPlot, ParametricPlot3D, phasor, plane wave, polarization vector, power, Poynting vector, radar
cross section, radiation intensity, Rayleigh scattering, spherical coordinates, spherical scattering, spherical wave, superposition | {"url":"https://peeterjoot.com/tag/parametricplot3d/","timestamp":"2024-11-05T22:14:28Z","content_type":"text/html","content_length":"163215","record_id":"<urn:uuid:556374b3-8add-420d-b4b8-dcd4dc9fa630>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00375.warc.gz"} |
How To Set Up A Maths Club In Your School - Ideas, Tips And Tricks From A Primary Teaching Expert
Primary School Maths Club Ideas And Activities: From Set Up To Running It
If you have always wanted to set up a maths club but haven’t been sure quite how to do it or what kind of maths club activities you could use then this blog should set you on the right path.
Packed with maths club ideas, tip and tricks from a teacher with over 20 years experience, after reading this post you will be in good stead to start your own club in no time at all.
I-spy with my little eye something beginning with ‘q’.
Quadrilateral perhaps?
Quotient maybe?
No, it’s ‘queue’.
I normally hate queues, but there’s one type of queue I don’t mind and that’s the line of children outside my classroom waiting to get into my maths club.
Every Wednesday it’s the same. Eager faces peering in like greyhounds in their traps waiting for the door to open.
So with this as your end goal…let’s dive in.
What is a maths club?
A maths club is a chance for everybody (not just the highest or lowest attainers) to have fun with maths in an entirely non-judgemental, and ideally relatively free flowing environment. It can happen
at any time of day outside timetabled lessons.
At primary school maths is a popular after school club, but you could equally hold it as lunchtime maths club or even before school. It should feel noticeably distinct from a maths lesson and the
maths club activities you offer, should, wherever possible, be fun, or at the very least, engaging and possibly even surprising for children.
Why should your primary school have a maths club?
Maths clubs come in all shapes and sizes and there is no one model that works for every school. However, every school should have one.
This is because they help raise the profile of maths within the school, increase the engagement of children in maths and help show that maths is a playful and diverse subject full of surprises.
Other benefits of a maths club
Maths clubs will also
• develop children’s knowledge and understanding of maths
• strengthen the cross curricular links with maths and other subjects
• provide children with opportunities to try new things
• help children apply their maths skills to other ‘real-life’ maths investigations
• celebrate the achievement of children
• fuel a can-do approach to maths
• show children that maths is multidimensional
• develop children’s mathematical reasoning
• promote collaborative learning between different year groups
• develop maths resilience
• boost self-confidence
• cultivate creativity
• help to raise standards
• increase parental engagement with maths
Maths clubs allow you to take off your curriculum straitjacket and work flexibly and creatively.
What sort of maths club activities can you do?
When starting up a maths club you should aim for as wide an appeal as possible, ideally across the Key Stage. You’ll be amazed at how many ideas for maths club activities there are to choose from:
maths games, puzzles, quizzes, codebreaking, maths investigations, maths trails, general problem solving, blogging, podcasting and videoconferencing.
We’ve listed more specific maths club ideas and activities and how to run them below.
12 Fun And Engaging Maths Club Games And Activities For Primary Schools
Download this amazing resource which is packed with 12 fun and active maths club games for you to use in your after school maths clubs!
Download Free Now!
Where should you hold a maths club?
A maths club offers opportunities for more flexible learning. In an ideal setting a classroom can be arranged in different ways to reflect different ways of learning. Children might work on their
own, in pairs or small groups.
A room is needed where tables and chairs can be moved around freely and a maths club should take place in a classroom children can exercise a degree of control over and contribute to their learning
Don’t just think ‘indoors’ either as every maths club worth its salt should be connecting with Mother Earth and the immediate environment outside.
Looking for inspiration? Here are some ideas for outdoor maths activities, fun maths activities, and maths starters
Which pupils should attend a maths club?
Maths clubs are extra-curricular and off-piste sessions that can be held before school, at lunchtimes or after school hours. There are lots of types of clubs too and careful thought needs to be given
as to whether they ‘fit your context’ or not in terms of your school’s vision and values.
A word of caution: some maths clubs can send out the wrong messages even if they are ‘well-meaning’.
Some are not my type and may, because they can, typecast who attends based on erroneous and faulty classroom labels or they push particular children to do well at the expense of others.
Some use their maths clubs a ‘training grounds’ for annual maths competitions and challenges but in reality these are more like specialised maths clinics. These clubs do little else but prepare
children for the types of questions that may come up – I like these maths challenges and I think they do serve a valuable purpose but a maths club devoted to preparing for one?
I don’t think so.
Children have enough pressure without another layer of it added on top, and maths isn’t about preparing for tests. For many the reward is a photograph in a local rag with the headline ‘Maths
genius top of the class’, and whilst this might be good window dressing for a school, I’m really not sure what some children get out of it other than being on a challenge treadmill.
There are some cracking competitions to enter and schools should enter them (see links) but devoting a maths club to the ‘big day’ isn’t for me.
Maths clubs should be accessible to everyone
Then there are maths clubs devoted to the ‘most able’, but the first thing to remember here is that maths talent is fluid. All children are ‘able’ to do maths, but if we create clubs for
particular children we have labelled as more able then we are creating and perpetuating more exclusivity.
KS1 and KS2 maths clubs should never be the territory of a few bright children.
I’ve seen schools where certain children get ‘invited’ to join a maths club which immediately rings alarm bells.
Maths is not ‘by invitation only’.
Maths is for everyone and therefore elite clubs for the chosen few (who no doubt do display some maths talents) drive a wedge through the school. This explains why some children fall into the
learning pit and never get out of because they doubt their maths abilities and never start the process of aiming for greater depth in maths.
Maths clubs should be inclusive places where everyone can make a contribution and develop their growth mindset.
Children of all ages and abilities should be encouraged to join a maths club in order to experience learning in different ways alongside children from different year groups. This helps children share
ideas and strategies and cultivates their mathematical development.
A maths club should be of interest and open to children of different ages, take into account different ability levels, and reflect different motivations for attending.
Feedback to other teachers in your primary school from your maths club
The activities I chose are formative in nature and so feedback is a big thing – I give it not just to the children but to their teachers as well.
This may sound onerous but it doesn’t have to be.
Feedback to different teachers doesn’t have to be formal – it might be as simple as quick chat in the staff room to say how a pupil is getting on. Besides which, my experience tells me that if
children have enjoyed their time in maths club then they readily share what happened with their teacher.
Invite teachers to join the club!
It’s also important to invite class teachers to spend at least one session or part of a session to drop in and see what maths is taking place. Children love to see that their own teachers are
taking an interest even for just a few snatched minutes.
Maths clubs are an opportunity to learn
Maths Clubs are like any other lesson – you’ve got to make every second count so they need some intelligent planning and careful thought.
They should provide opportunities for children to do work that:
• is high in challenge but low in anxiety
• allows children to control their own learning
• allows children to learn in different ways
• supports learning within and outside the school
A Third Space Learning maths club in action.
Third Space Learning’s online one to one maths interventions make the perfect weekly maths club. Multiple pupils log on simultaneously to Third Space Learning’s online classroom with their own
personal highly qualified maths specialist tutor.
Pupils complete a diagnostic assessment to identify the maths topics they need to practise most. Assessment for learning allows highly trained tutors to adapt and personalise lessons in real-time.
Understanding is far too complex to be evaluated satisfactorily by any one type of activity, and this is why a range of techniques are needed to probe children’s understanding of maths.
Narrow strategies will only provide a limited measure of understanding and so to promote high quality learning miscellaneous activities are needed.
An emphasis on investigative, problem solving and exploratory approaches will allow pupils to demonstrate the depth of their knowledge, skills and understanding.
Successful maths clubs will ultimately depend on the types of activity you select.
Types of maths clubs
For me, maths clubs aren’t frivolous or pretentious but valuable opportunities to do some real active maths. I’ve created maths clubs with a heavy emphasis on problem-solving and investigations
but clubs that I’ve set up with a wider maths curriculum that adopt a more broad brush approach tend to be more wide-ranging and creative.
I like to vary the input and use a range of assessment for learning activities that enable me to work responsively and help children upgrade their knowledge and understanding.
Favourite easy to run maths club activities
The variety of easy to run maths club activities you could introduce is endless. Here are some easy to run ideas to get you started; more specific ideas follow below and in the free resource.
• Maths puzzles
• Maths games
• Maths magic
• Maths art
• Maths card games
• Maths dice games
• Maths board games
• Maths tricks
• Video conferencing with a mathematician
• Making a maths video or podcast
• Maths songs (great for learning times tables)
• Maths poems
• Maths jokes
• Maths trails or treasure hunts (see outdoor maths)
• Maths competitions
Third Space have collected my favourite maths club activities into a free downloadable resource. These are all straightforward and easy to run, covering topics from times tables, division,
percentages, and angles and provide more than enough to keep you busy for your first couple of maths club sessions.
For other maths club ideas and activities, read on.
1. Games should be central to your maths club
Maths club games are an integral part of children’s practical maths experience. They provide a motivating context for children to explore concepts, develop subject knowledge, improve
problem-solving and enjoy maths.
Games are also ideal talking frames for you to formatively assess children’s mental strategies and general maths well-being. They provide children with opportunities to think creatively, interpret
instructions, use maths vocabulary, develop social skills and develop confidence and self-esteem.
2. Offline maths games only if possible!
It’s easy to find some fizzy-whizzy game online and let children sit playing it for 30 minutes are more but this is lazy maths and ‘easy life’ clubbing that just fills the time. You should
ensure that most of your maths club ideas are offline!
Download this free resource of maths club activities that can be done offline and require only a pen and paper, and sometimes a dice (but these are easy to make yourself.)
This enables the children who come to the club to be active and roll up their sleeves – maths in the form of computer games is sedentary maths. There are of course some superb maths activities to
be found online and using these now and again is perfectly acceptable but there are plenty of low-tech options available that can make learning more tangible.
My maths maxim is: if it’s hands-on then it’s minds-on so I always aim for practical maths club ideas where possible.
Many maths clubs use novelty or recreational maths as a way of exciting and capturing the interest of children. Informal ‘playful’ maths activities are wealthy sources of enhancement and
enrichment and provide excellent material for maths clubs.
Looking for fun games and activities to boost pupils’ learning?
We’ve got several articles sharing teacher approved maths activities and fun maths games, including KS2 maths games, KS1 maths games and KS3 maths games for all maths topics and a set of 35 times
tables games and multiplication games you’ll want to bookmark whichever year group you teach!
3. Dazzling Maths Club Idea! A Head Full of Numbers
Challenges that promote the magic of numbers will encourage children to pursue maths as a fun activity, and number tricks are always an excellent way of inspiring children.
This number trick is quite a winner and when practised can be performed with real finesse and flair.
How to impress the children with your memory and mind reading skills
1. Give children a copy of the grid above and tell them that you have memorised every single number in the table.
2. Point out that there are 49 key numbers that are in bold and under each bold number is a seven digit number.
3. Without looking at a copy of the table yourself, ask one of the children to choose a number in bold and confidently declare that you will be able to recall the number underneath.
4. For example, if the number 41 was chosen, slowly reveal each of the numbers but remember to add plenty of performance and theatricals such as, ‘The first number is coming to me, I can see it now,
it’s a prime number, it’s an even number, it’s the number 2!’
5. Then go on to say and write down the other numbers, ‘My mental powers are weak but I think the next number is also a prime number. I think it’s the square root of 25…It’s the number five!’ A
hearty measure of stagecraft will add more impact to your routine!
6. Repeat this for several other circled numbers as children try to work out how you can recall all the numbers so readily. Then it is time to tell the children how it’s done and let them have a go
with a partner after they have learnt the trick…
How this magic maths works…
There is of course a way to work out that magic number.
1. Add 11 to the chosen circled number.
2. Reverse the result.
3. Keep on adding the two previous numbers, leaving out the ‘tens’.
4. Write down the number and say it aloud in true magician style!
For example, say the circled number 14 is selected, you want to make 5279651. To do that, follow the steps below:
• Add 11 to get 25.
• Reverse 25 to get 52.
• Add 5 and 2 to get 7,
• Add 2 and 7 to get 9,
• Add 7 and 9 to get 16 (ignore the 1 in 16 and just write down the 6 next),
• Add 9 and 6 to get 15 (again ignore the 1 in 15 and just write down the 5),
• Add 6 and 5 to get 11 (ignore one of the number 1s and write down the other 1)
• Say the number, 5279651
Teacher Hack – You will find this easier if you write it down on a whiteboard as you do it!
When you have tried a few of the tricks together, give children time to explore each one and practice. They can then prepare demonstrations of the tricks to perform to you and the rest of the class.
4. Practical Maths Club Idea: Get Crafty!
Beyond games, ‘anything goes’ in a maths club because you are not constrained by any specific curriculum. These are a few I like:
a) making a Mobius strip
b) trying the stepping through paper technique
c) Creating Andy Goldsworthy maths-art sculptures
5. Inspirational Maths Club Idea: Maths Heroes
One idea worth running with is using a session or two devoted to ‘Maths Maestroes’ and learning more about trailblazing mathematicians from the past. This year is the perfect time to look at the
role women have played in maths and children can research female mathematicians such as Ada Lovelace, Emmy Noether and Sofia Kovalevskaya.
Finding out more about some of the finest maths figures from around the world make superb personal learning projects for children to get their teeth into.
Maths club ideas don’t always have to be based around the numbers. Worksheets and activities around popular celebrities are always going to be popular, so incorporate them in when you can!
Ideas for maths club resources
Collecting and inventing your own resources is something that takes time and over the years there will be plenty you can feed into a maths club. Keeping everything in one place is the biggest
If you don’t have many resources to hand or you don’t fancy reinventing the wheel then there are other ready resources that you can sign up for or buy into.
One initiative that offers free maths club materials is Count On Us from the Mayor’s Fund For London, a social mobility charity. It offers guidance for schools, sharing best practice from the
participating schools, themed activity packages, session plans and managements resources.
There are also plenty of companies that offer paid-for after school maths club activities, and if you want to learn more about this you can see the links below for more details.
These clubs are certainly worth considering for short bursts of after-school maths over half a term but for me, you cannot beat having a school staff member lead their own club through the year.
A maths club is a fixture for the whole academic year and is best led by staff that know children who can chart their progress and development.
Fun fun fun, the most important part about a maths club
It’s tempting to say maths clubs are ‘fun’ – they should be but let children decide that for themselves. If they aren’t enjoying themselves then they aren’t having fun and you need to
change direction and ‘gamble’ with new activities, some of which can be found below.
Maths clubs can be a fantastic way to help bring maths to life for some of the pupils in your school, so if you are looking for that magic bullet that could make the difference in your school, it
might just be a maths club!
Maths club useful links
Every week Third Space Learning’s specialist school tutors support thousands of students across hundreds of schools with weekly online 1 to 1 maths lessons designed to plug gaps and boost progress.
Since 2013 these personalised one to one lessons have helped over 169,000 primary and secondary students become more confident, able mathematicians.
Learn how the tutoring integrates with your SEF and Ofsted planning or request a personalised quote for your school to speak to us about your school’s needs and how we can help. | {"url":"https://thirdspacelearning.com/blog/maths-club-ideas/","timestamp":"2024-11-07T17:12:45Z","content_type":"text/html","content_length":"158218","record_id":"<urn:uuid:e761c8da-6571-4c9f-a969-88aa7b5eb7b2>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00854.warc.gz"} |
mathematics Archives - Mark Proffitt
Benoit Mandelbrot, the mathematician who discovered fractals has passed away at age 85. I owe his genius for some fundamental concepts that make the Predictive Innovation^® possible.
Vinay Deolalikar from HP Labs claims he may have solved the P vs NP problem proving that P ≠ NP. It’s literally a million dollar problem. Millennium Prize is offering $1,000,000 for the solution.
This is very important for a range of problems including: cryptography, logistics, biology, mathematics, and innovation.
If P ≠ NP is true, it would allow a person to formally show that a problem cannot be solved efficiently, so attention could be focused on partial solutions or solutions to other problems. Or as I say
in my talks about applying information theory to science, “Being able to prove something is impossible helps you focus on the things that might be possible.”
The other implication is that some problems can be proven to be harder to solve than to test that the solution is true. This is very important to cryptography. If P = NP then many of the encryption
methods would be easy to break and would need to be changed.
Hard to solve, easy to check also directly relates to innovation. I run into this all the time. A company has what they believe is an impossible problem. After we apply the Predictive Innovation^®
and find the solution they think the solution was obvious. It was far easier to check the answer than it was to solve the problem. In fact it was so easy to check the answer they didn’t understand
why it was so hard to find in the first place. I would not be surprised if P ≠ NP was proved. | {"url":"https://markproffitt.com/tag/mathematics/","timestamp":"2024-11-05T18:41:03Z","content_type":"application/xhtml+xml","content_length":"69696","record_id":"<urn:uuid:6d61722a-49c3-4ac5-b6fe-898cb1816726>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00201.warc.gz"} |
Digital Signatures
Encryption and decryption address the problem of eavesdropping. However, tampering and impersonation are still possible.
Public key cryptography addresses the problem of tampering using a mathematical function called a one-way hash function (also called a message digest function or algorithm). A one-way hash is a
fixed-length number whose value is unique to the data being hashed. Any change in the data, even deleting or altering a single character, results in a different value.
For all practical purposes, the content of the hashed data cannot be deduced from the hash, which is why it is called "one-way."
This principle is the crucial part of digitally signing any data. Instead of encrypting the data itself, the signing software creates a one-way hash of the data, then uses your private key to encrypt
the hash. The encrypted hash, along with other information, such as the hashing algorithm, is known as a digital signature. | {"url":"http://h41379.www4.hpe.com/doc/83final/ba554_90007/ch02s06.html","timestamp":"2024-11-08T04:35:56Z","content_type":"text/html","content_length":"8518","record_id":"<urn:uuid:efe95e89-8e83-43de-8bbd-d276aefd0995>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00486.warc.gz"} |
Point Cloud Interpolation Using Open3D: A Step-by-Step Guide
Open3D is a powerful library for 3D data processing, including point cloud manipulation. Interpolating a point cloud typically involves estimating new points within the bounds of an existing set of
points. This can be useful for tasks such as upsampling, surface reconstruction, or filling in missing data.
Here’s a basic guide on how to perform point cloud interpolation using Open3D:
1. Install Open3D: If you haven't already, you can install Open3D using pip:
pip install open3d
2. Load and Visualize Point Cloud: First, load your point cloud data and visualize it to understand its structure.
import open3d as o3d
# Load point cloud
pcd = o3d.io.read_point_cloud("path_to_your_point_cloud.ply")
# Visualize point cloud
3. Voxel Downsampling: If your point cloud is very dense, you might want to downsample it to make processing faster.
voxel_size = 0.05
downpcd = pcd.voxel_down_sample(voxel_size=voxel_size)
4. Estimate Normals: Normals are often required for interpolation and surface reconstruction.
downpcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
5. Surface Reconstruction: One common method for interpolation is surface reconstruction. Open3D supports several methods, such as Poisson surface reconstruction and Ball Pivoting Algorithm (BPA).
□ Poisson Surface Reconstruction:
poisson_mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(downpcd, depth=9)
□ Ball Pivoting Algorithm (BPA):
distances = downpcd.compute_nearest_neighbor_distance()
avg_dist = np.mean(distances)
radius = 3 * avg_dist
bpa_mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(
downpcd, o3d.utility.DoubleVector([radius, radius * 2]))
6. Extract Points from Mesh: If you need the interpolated points rather than the mesh, you can sample points from the reconstructed surface.
sampled_pcd = poisson_mesh.sample_points_poisson_disk(number_of_points=10000)
7. Save Interpolated Point Cloud: Finally, save the interpolated point cloud to a file.
o3d.io.write_point_cloud("interpolated_point_cloud.ply", sampled_pcd)
This is a basic workflow for point cloud interpolation using Open3D. Depending on your specific needs, you might need to adjust parameters or use different methods. Open3D provides a rich set of
tools for 3D data processing, so exploring its documentation and experimenting with different functions can be very beneficial. | {"url":"https://gpt4omini.app/chat/c02053c6ebd54cbe85ce86facd628e40","timestamp":"2024-11-14T20:53:01Z","content_type":"text/html","content_length":"28071","record_id":"<urn:uuid:8d3e08f5-01d0-4445-9345-a5a7822657df>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00462.warc.gz"} |
simple if and issue
Posted 11 December 2017 - 07:19 AM
what am i doing wrong? i want to set up a reactor to run at fifty percent out put if the batter if half full, 70 percent if at 30 percent, 20 percent if at 80 percent and so on… any tips?
local percent = (math.min(bat1.getEnergyStored("unknown"))/(bat1.getMaxEnergyStored("unknown"))*100)
–this is basically deviding the current batter level by the max battery level to calculate what % full the battery is
if percent >= 95 then
if percent > 90 and not percent < 94 then
if percent > 89 and not percent < 80 then
and so on and so on… but i cant get the code to run. i keep getting an error message saying attempt to compar __lt on boolean and number
Posted 11 December 2017 - 10:11 AM
My advice is to put everything inside parentheses. Not sure if it will help but I always do it for readability so if (percent > 89) and (not (percent < 80 then))
Other than that you are using math.min wrong. To quote the manual:
math.min , math.max
Return the minimum or maximum value from a variable length list of arguments.
> = math.min(1,2)
> = math.min(1.2, 7, 3)
> = math.min(1.2, -7, 3)
> = math.max(1.2, -7, 3)
> = math.max(1.2, 7, 3)
So that's going to be the issue really.
Posted 11 December 2017 - 12:48 PM
I think you want math.floor and not math.min and I never seen get(Max)EnergyStored take any arguments
local bat1max = bat1.getMaxEnergyStored() -- constant value, set this once
local percent = math.floor((bat1.getEnergyStored()/bat1max)*100)-
Edited on 11 December 2017 - 11:52 AM
Posted 11 December 2017 - 01:18 PM
not percent < 80
is the problematic code.
"not" has a higher order of precedence than a comparison operator, which means that expression is equivalent to
(not percent) < 80
percent is a "truthy" value (a number), so this ends up being
false < 80
..which makes no sense, and promptly throws an error.
Instead, I recommend using
percent >= 80
…for the same (intended) effect, but without the precedence error.
Posted 11 December 2017 - 04:04 PM
OK I did a goof - I got distracted and accidentally upvoted all three answers instead just Stekelblad's and KoGY's (no offence to Purple as the answer regarding parentheses is technically correct).
Stekeblad has you covered with math.floor instead of math.min - that'll get you the percent you're after.
While Purple's answer (using parentheses) is correct and will solve the comparing lt on a bool and a number error, I'd recommend instead following KingofGamesYami's advice and change your not
statements to greater than or equals statements. IMO, it makes the code easier to read (and type - especially since you're not typing all those extra parentheses) and it seems to be the intended way
of doing things in Lua. | {"url":"https://ccf.squiddev.cc/topic/29220-simple-if-and-issue.html","timestamp":"2024-11-09T20:49:00Z","content_type":"text/html","content_length":"10228","record_id":"<urn:uuid:5e802944-6a18-4f70-9e5d-3a9a702f7db7>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00499.warc.gz"} |
Artificial Intelligence - Computer Science}
The goal of Artificial Intelligence (AI) is to tackle complex real-world problems with rigorous mathematical tools. Common sub-topics include Machine Learning, Search, Markov Decision Processes,
Reinforcement Learning, etc.
Courses of Artificial Intelligence usually requires knowledge of Linear Algebra, College Calculus, Probability and Statistics, and proficiency of Computer Programming, preferably Python. Some course
may recommend familiarity of Machine Learning | {"url":"https://cogak.com/topic/artificial-intelligence","timestamp":"2024-11-09T23:41:23Z","content_type":"text/html","content_length":"137270","record_id":"<urn:uuid:1377e62f-4478-4677-b0ee-e95b14c88732>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00835.warc.gz"} |
Does the point (-1, 2) lie on the graph of y = -2x + 3? | HIX Tutor
Does the point (-1, 2) lie on the graph of y = -2x + 3?
Answer 1
$\left(- 1 , 2\right)$ does not lie on $y = - 2 x + 3$
#(color(red)(-1),color(blue)(2))# is a point on #color(blue)(y)=-2color(red)(x)+3# if (and only if) #y=color(blue)(2)# when #x=color(red)(-1)# in #y=-2x+3#
Since #color(green)(5) != color(blue)(2)# #(-1,2)# does not lie on the line #y=-2x+3#
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 2
To determine if the point (-1, 2) lies on the graph of ( y = -2x + 3 ), substitute the x-coordinate (-1) into the equation and check if it satisfies the equation for the corresponding y-coordinate
( y = -2(-1) + 3 )
( y = 2 + 3 )
( y = 5 )
Since the y-coordinate of the point (-1, 2) does not equal 5, the point does not lie on the graph of the equation.
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 3
To determine if the point ((-1, 2)) lies on the graph of (y = -2x + 3), substitute the (x) and (y) coordinates of the point into the equation and check if the equation holds true.
Substitute (x = -1) and (y = 2) into the equation: [2 = -2(-1) + 3]
Evaluate the expression: [2 = 2 + 3]
Simplify: [2 = 5]
Since (2 \neq 5), the point ((-1, 2)) does not lie on the graph of (y = -2x + 3).
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/does-the-point-1-2-lie-on-the-graph-of-y-2x-3-8f9af91162","timestamp":"2024-11-11T08:13:55Z","content_type":"text/html","content_length":"581914","record_id":"<urn:uuid:033c2b4f-8a1a-4554-9d57-fcb905bbee3a>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00269.warc.gz"} |
Pendulums – collecting and using data
Pendulums – collecting and using data
In this investigation, students measure the period of pendulums of varying lengths, plot the data on a graph and use the graph to estimate the period of two new pendulums – one with a length between
two of their collected data points and one from a much longer pendulum.
Pendulums and data collection
Use simple equipment to collect data, plot the data on a graph and use the graph to make predictions.
Rights: University of Waikato Te Whare Wananga o Waikato
By the end of this activity, students should be able to:
• collect and interpret data
• construct a graph from collected data
• use a graph to interpolate between two data points
• use a graph to extrapolate a value beyond measured data points.
This investigation is related to Investigating pendulums – what matters?, which investigates the variables in a pendulum system and uses similar equipment.
Download the Word file (see link below).
Published:07 December 2018 | {"url":"https://beta.sciencelearn.org.nz/resources/2692-pendulums-collecting-and-using-data","timestamp":"2024-11-04T17:17:17Z","content_type":"text/html","content_length":"142928","record_id":"<urn:uuid:2a061da3-341c-4e59-9f18-be068754203f>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00325.warc.gz"} |
Modular Functions and Asymptotic Geometry on Punctured Riemann Spheres
Date of Completion
Kahler-Einstein metric, hyperbolic geometry, modular form, modular functions, Schwarzian derivative, punctured sphere, quasi-projective manifold, asymptotic metric
Associate Advisor
Liang Xiao
Associate Advisor
Guozhen Lu
Associate Advisor
Maria Gordina
Field of Study
Doctor of Philosophy
In the first chapter, we derive a precise asymptotic expansion of the complete K\"{a}hler-Einstein metric on the punctured Riemann sphere with three or more omitting points. This new technique is at
the intersection of analysis and algebra. By using the Schwarzian derivative, we prove that the coefficients of the expansion are polynomials on the two parameters which are uniquely determined by
the omitting points. Furthermore, we use the modular form and Schwarzian derivative to explicitly determine the coefficients in the expansion of the complete K\"{a}hler-Einstein metric for punctured
Riemann sphere with $3, 4, 6$ or $12$ omitting points.
The second chapter gives an explicit formula of the asymptotic expansion of the Kobayashi-Royden metric on the punctured sphere $\mathbb{CP}^1\backslash\{0,1,\infty\}$ in terms of the exponential
Bell polynomials. We prove a local quantitative version of the Little Picard's theorem as an application of the asymptotic expansion. Furthermore, the explicit formula of the metric and the
conclusion regarding the coefficients apply to more general cases of $\mathbb{CP}^1\backslash\{a_1,\ldots,a_n\}$, $n\ge 3$ as well, and the metric on $\mathbb{CP}^1\backslash\{0,\frac{1}{3},-\frac{1}
{6}\pm\frac{\sqrt{3}}{6}i\}$ will be given as a concrete example of our results.
Recommended Citation
Qian, Junqing, "Modular Functions and Asymptotic Geometry on Punctured Riemann Spheres" (2020). Doctoral Dissertations. 2516. | {"url":"https://digitalcommons.lib.uconn.edu/dissertations/2516/","timestamp":"2024-11-11T10:40:33Z","content_type":"text/html","content_length":"36663","record_id":"<urn:uuid:1f9a6ea1-bfff-460e-afb0-0ac03fe54cd4>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00747.warc.gz"} |
1984 AIME Problems/Problem 1
Find the value of $a_2+a_4+a_6+a_8+\ldots+a_{98}$ if $a_1$, $a_2$, $a_3\ldots$ is an arithmetic progression with common difference 1, and $a_1+a_2+a_3+\ldots+a_{98}=137$.
Solution 1
One approach to this problem is to apply the formula for the sum of an arithmetic series in order to find the value of $a_1$, then use that to calculate $a_2$ and sum another arithmetic series to get
our answer.
A somewhat quicker method is to do the following: for each $n \geq 1$, we have $a_{2n - 1} = a_{2n} - 1$. We can substitute this into our given equation to get $(a_2 - 1) + a_2 + (a_4 - 1) + a_4 + \
ldots + (a_{98} - 1) + a_{98} = 137$. The left-hand side of this equation is simply $2(a_2 + a_4 + \ldots + a_{98}) - 49$, so our desired value is $\frac{137 + 49}{2} = \boxed{093}$.
Solution 2
If $a_1$ is the first term, then $a_1+a_2+a_3 + \cdots + a_{98} = 137$ can be rewritten as:
$98a_1 + 1+2+3+ \cdots + 97 = 137$$\Leftrightarrow$$98a_1 + \frac{97 \cdot 98}{2} = 137$
Our desired value is $a_2+a_4+a_6+ \cdots + a_{98}$ so this is:
$49a_1 + 1+3+5+ \cdots + 97$
which is $49a_1+ 49^2$. So, from the first equation, we know $49a_1 = \frac{137}{2} - \frac{97 \cdot 49}{2}$. So, the final answer is:
$\frac{137 - 97(49) + 2(49)^2}{2} = \fbox{093}$.
Solution 3
A better approach to this problem is to notice that from $a_{1}+a_{2}+\cdots a_{98}=137$ that each element with an odd subscript is 1 from each element with an even subscript. Thus, we note that the
sum of the odd elements must be $\frac{137-49}{2}$. Thus, if we want to find the sum of all of the even elements we simply add $49$ common differences to this giving us $\frac{137-49}{2}+49=\fbox
Or, since the sum of the odd elements is 44, then the sum of the even terms must be $\fbox{093}$.
Solution 4
We want to find the value of $a_2+a_4+a_6+a_8+\ldots+a_{98}$, which can be rewritten as $a_1+1+a_2+2+a_3+\ldots+a_{49}+49 \implies a_1+a_2+a_3+\ldots+a_{49}+\frac{49 \cdot 50}{2}$. We can split
$a_1+a_2+a_3+\ldots+a_{98}$ into two parts: $\[a_1+a_2+a_3+\ldots+a_{49}\]$ and $\[a_{50}+a_{51}+a_{52}+\ldots+a_{98}\]$ Note that each term in the second expression is $49$ greater than the
corresponding term, so, letting the first equation be equal to $x$, we get $a_1+a_2+a_3+\ldots+a_{98}=137=2x+49^2 \implies x=\frac{137-49^2}{2}$. Calculating $49^2$ by sheer multiplication is not
difficult, but you can also do $(50-1)(50-1)=2500-100+1=2401$. We want to find the value of $x+\frac{49 \cdot 50}{2}=x+49 \cdot 25=x+1225$. Since $x=\frac{137-2401}{2}$, we find $x=-1132$.
- PhunsukhWangdu
Solution 5
Since we are dealing with an arithmetic sequence, $\[a_2+a_4+a_6+a_8+\ldots+a_{98} = 49a_{50}\]$ We can also figure out that $\[a_1+a_2+a_3+\ldots+a_{98} = a_1 + 97a_{50} = 137\]$$\[a_1 = a_{50}-49 \
Rightarrow 98a_{50}-49 = 137\]$ Thus, $49a_{50} = \frac{137 + 49}{2} = \boxed{093}$
Video Solution by OmegaLearn
~ pi_is_3.14
See also | {"url":"https://artofproblemsolving.com/wiki/index.php?title=1984_AIME_Problems/Problem_1&oldid=186740","timestamp":"2024-11-12T07:07:07Z","content_type":"text/html","content_length":"52223","record_id":"<urn:uuid:3426887c-659d-47f6-9e2b-27d1ce846415>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00773.warc.gz"} |
Desvl's blog
Tensor Product as a Universal Object (Category Theory & Module Theory)
It is quite often to see direct sum or direct product of groups, modules, vector spaces. Indeed, for modules over a ring $R$, direct products are also direct products of $R$-modules as well. On the
other hand, the direct sum is a coproduct in the category of $R$-modules.
But what about tensor products? It is some different kind of product but how? Is it related to direct product? How do we write a tensor product down? We need to solve this question but it is not a
good idea to dig into numeric works.
The category of bilinear or even $n$-multilinear maps
From now on, let $R$ be a commutative ring, and $M_1,\cdots,M_n$ are $R$-modules. Mainly we work on $M_1$ and $M_2$, i.e. $M_1 \times M_2$ and $M_1 \otimes M_2$. For $n$-multilinear one, simply
replace $M_1\times M_2$ with $M_1 \times M_2 \times \cdots \times M_n$ and $M_1 \otimes M_2$ with $M_1 \otimes \cdots \otimes M_n$. The only difference is the change of symbols.
The bilinear maps of $M_1 \times M_2$ determines a category, say $BL(M_1 \times M_2)$ or we simply write $BL$. For an object $(f,E)$ in this category we have $f: M_1 \times M_2 \to E$ as a bilinear
map and $E$ as a $R$-module of course. For two objects $(f,E)$ and $(g,F)$, we define the morphism between them as a linear function making the following diagram commutative: $\def\mor{\operatorname
This indeed makes $BL$ a category. If we define the morphisms from $(f,E)$ to $(g,F)$ by $\mor(f,g)$ (for simplicity we omit $E$ and $F$ since they are already determined by $f$ and $g$) we see the
satisfy all axioms for a category:
CAT 1 Two sets $\mor(f,g)$ and $\mor(f’,g’)$ are disjoint unless $f=f’$ and $g=g’$, in which case they are equal. If $g \neq g’$ but $f = f’$ for example, for any $h \in \mor(f,g)$, we have $g = h \
circ f = h \circ f’ \neq g’$, hence $h \notin \mor(f,g)$. Other cases can be verified in the same fashion.
CAT 2 The existence of identity morphism. For any $(f,E) \in BL$, we simply take the identity map $i:E \to E$. For $h \in \mor(f,g)$, we see $g = h \circ f = h \circ i \circ f$. For $h’ \in \mor(g,f)
$, we see $f = h’ \circ g = i \circ h’ \circ g$.
CAT 3 The law of composition is associative when defined.
There we have a category. But what about the tensor product? It is defined to be initial (or universally repelling) object in this category. Let’s denote this object by $(\varphi,M_1 \otimes M_2)$.
For any $(f,E) \in BL$, we have a unique morphism (which is a module homomorphism as well) $h:(\varphi,M_1 \otimes M_2) \to (f,E)$. For $x \in M_1$ and $y \in M_2$, we write $\varphi(x,y)=x \
otimes y$. We call the existence of $h$ the universal property of $(\varphi,M_1 \otimes M_2)$.
The tensor product is unique up to isomorphism. That is, if both $(f,E)$ and $(g,F)$ are tensor products, then $E \simeq F$ in the sense of module isomorphism. Indeed, let $h \in \mor(f,g)$ and $h’ \
in \mor(g,h)$ be the unique morphisms respectively, we see $g = h \circ f$, $f = h’ \circ g$, and therefore
Hence $h \circ h’$ is the identity of $(g,F)$ and $h’ \circ h$ is the identity of $(f,E)$. This gives $E \simeq F$.
What do we get so far? For any modules that is connected to $M_1 \times M_2$ with a bilinear map, the tensor product $M_1 \oplus M_2$ of $M_1$ and $M_2$, is always able to be connected to that module
with a unique module homomorphism. What if there are more than one tensor products? Never mind. All tensor products are isomorphic.
But wait, does this definition make sense? Does this product even exist? How can we study the tensor product of two modules if we cannot even write it down? So far we are only working on arrows, and
we don’t know what is happening inside an module. It is not a good idea to waste our time on ‘nonsenses’. We can look into it in an natural way. Indeed, if we can find a module satisfying the
property we want, then we are done, since this can represent the tensor product under any circumstances. Again, all tensor products of $M_1$ and $M_2$ are isomorphic.
A natural way to define the tensor product
Let $M$ be the free module generated by the set of all tuples $(x_1,x_2)$ where $x_1 \in M_1$ and $x_2 \in M_2$, and $N$ be the submodule generated by tuples of the following types:
First we have a inclusion map $\alpha=M_1 \times M_2 \to M$ and the canonical map $\pi:M \to M/N$. We claim that $(\pi \circ \alpha, M/N)$ is exactly what we want. But before that, we need to explain
why we define such a $N$.
The reason is quite simple: We want to make sure that $\varphi=\pi \circ \alpha$ is bilinear. For example, we have $\varphi(x_1+x_1’,x_2)=\varphi(x_1,x_2)+\varphi(x_1’,x_2)$ due to our construction
of $N$ (other relations follow in the same manner). This can be verified group-theoretically. Note
Hence we get the identity we want. For this reason we can write
Sometimes to avoid confusion people may also write $x_1 \otimes_R x_2$ if both $M_1$ and $M_2$ are $R$-modules. But before that we have to verify that this is indeed the tensor product. To verify
this, all we need is the universal property of free modules.
By the universal property of $M$, for any $(f,E) \in BL$, we have a induced map $f_\ast$ making the diagram inside commutative. However, for elements in $N$, we see $f_\ast$ takes value $0$, since
$f_\ast$ is a bilinear map already. We finish our work by taking $h[(x,y)+N] = f_\ast(x,y)$. This is the map induced by $f_\ast$, following the property of factor module.
Trivial tensor product
For coprime integers $m,n>1$, we have $\def\mb{\mathbb}$
where $O$ means that the module only contains $0$ and $\mb{Z}/m\mb{Z}$ is considered as a module over $\mb{Z}$ for $m>1$. This suggests that, the tensor product of two modules is not necessarily
‘bigger’ than its components. Let’s see why this is trivial.
Note that for $x \in \mb{Z}/m\mb{Z}$ and $y \in \mb{Z}/n\mb{Z}$, we have
since, for example, $mx = 0$ for $x \in \mb{Z}/m\mb{Z}$ and $\varphi(0,y)=0$. If you have trouble understanding why $\varphi(0,y)=0$, just note that the submodule $N$ in our construction contains
elements generated by $(0x,y)-0(x,y)$ already.
By Bézout’s identity, for any $x \otimes y$, we see there are $a$ and $b$ such that $am+bn=1$, and therefore
Hence the tensor product is trivial. This example gives us a lot of inspiration. For example, what if $m$ and $n$ are not necessarily coprime, say $\gcd(m,n)=d$? By Bézout’s identity still we have
This inspires us to study the connection between $\mb{Z}/m\mb{Z} \otimes \mb{Z}/n\mb{Z}$ and $\mb{Z}/d\mb{Z}$. By the universal property, for the bilinear map $f:\mb{Z}/m\mb{Z} \times \mb{Z}/n\mb{Z}
\to \mb{Z}/d\mb{Z}$ defined by
(there should be no difficulty to verify that $f$ is well-defined), there exists a unique morphism $h:\mb{Z}/m\mb{Z} \otimes \mb{Z}/n\mb{Z} \to \mb{Z}/d\mb{Z}$ such that
Next we show that it has a natural inverse defined by
Taking $a’ = a+kd$, we show that $g(a+d\mb{Z})=g(a’+\mb{Z})$, that is, we need to show that
By Bézout’s identity, there exists some $r,s$ such that $rm+sn=d$. Hence $a’ = a + ksn+krm$, which gives
So $g$ is well-defined. Next we show that this is the inverse. Firstly
Hence $g = h^{-1}$ and we can say
If $m,n$ are coprime, then $\gcd(m,n)=1$, hence $\mb{Z}/m\mb{Z} \otimes \mb{Z}/n\mb{Z} \simeq \mb{Z}/\mb{Z}$ is trivial. More interestingly, $\mb{Z}/m\mb{Z}\otimes \mb{Z}/m\mb{Z}=\mb{Z}/m\mb{Z}$. But
this elegant identity raised other questions. First of all, $\gcd(m,n)=\gcd(n,m)$, which implies
Further, for $m,n,r >1$, we have $\gcd(\gcd(m,n),r)=\gcd(m,\gcd(n,r))=\gcd(m,n,r)$, which gives
Hence for modules of the form $\mb{Z}/m\mb{Z}$, we see the tensor product operation is associative and commutative up to isomorphism. Does this hold for all modules? The universal property answers
this question affirmatively. From now on we will be keep using the universal property. Make sure that you have got the point already.
Tensor product as a binary operation
Let $M_1,M_2,M_3$ be $R$-modules, then there exists a unique isomorphism
for $x \in M_1$, $y \in M_2$, $z \in M_3$.
Proof. Consider the map
where $x \in M_1$. Since $(\cdot\otimes\cdot)$ is bilinear, we see $\lambda_x$ is bilinear for all $x \in M_1$. Hence by the universal property there exists a unique map of the tensor product:
Next we have the map
which is bilinear as well. Again by the universal property we have a unique map
This is indeed the isomorphism we want. The reverse is obtained by reversing the process. For the bilinear map
we get a unique map
Then from the bilinear map
we get the unique map, which is actually the reverse of $\overline{\mu}_x$:
Hence the two tensor products are isomorphic. $\square$
Let $M_1$ and $M_2$ be $R$-modules, then there exists a unique isomorphism
where $x_1 \in M_1$ and $x_2 \in M_2$.
Proof. The map
is bilinear and gives us a unique map
given by $x \otimes y \mapsto y \otimes x$. Symmetrically, the map $\lambda’:M_2 \times M_1 \to M_1 \otimes M_2$ gives us a unique map
which is the inverse of $\overline{\lambda}$. $\square$
Therefore, we may view the set of all $R$-modules as a commutative semigroup with the binary operation $\otimes$.
Maps between tensor products
Consider commutative diagram:
Where $f_i:M_i \to M_i’$ are some module-homomorphism. What do we want here? On the left hand, we see $f_1 \times f_2$ sends $(x_1,x_2)$ to $(f_1(x_1),f_2(x_2))$, which is quite natural. The question
is, is there a natural map sending $x_1 \otimes x_2$ to $f_1(x_1) \otimes f_2(x_2)$? This is what we want from the right hand. We know $T(f_1 \times f_2)$ exists, since we have a bilinear map by $\mu
= \varphi’ \circ (f_1\times f_2)$. So for $(x_1,x_2) \in M_1 \times M_2$, we have $T(f_1 \times f_2)(x_1 \otimes x_2) = \varphi’ \circ (f_1 \times f_2)(x_1,x_2) = f_1(x_1) \otimes f_2(x_2)$ as what
we want.
But $T$ in this graph has more interesting properties. First of all, if $M_1 = M_1’$ an $M_2 = M_2’$, both $f_1$ and $f_2$ are identity maps, then we see $T(f_1 \times f_2)$ is the identity as well.
Next, consider the following chain
We can make it a double chain:
It is obvious that $(g_1 \circ f_1 \times g_2 \circ f_2)=(g_1 \times g_2) \circ (f_1 \times f_2)$, which also gives
Hence we can say $T$ is functorial. Sometimes for simplicity we also write $T(f_1,f_2)$ or simply $f_1 \otimes f_2$, as it sends $x_1 \otimes x_2$ to $f_1(x_1) \otimes f_2(x_2)$. Indeed it can be
viewed as a map | {"url":"https://desvl.xyz/categories/Algebra/Category-Theory/","timestamp":"2024-11-13T11:29:02Z","content_type":"text/html","content_length":"33325","record_id":"<urn:uuid:c9c561e8-82e1-42c2-a9b6-cace5010cdbc>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00561.warc.gz"} |
Nummer Datum Autor Titel Abstraact/MSC
2624 (wird in Domschke, Pia Adjoint-Based Control of Model and Discretization Errors for Gas and Water
neuem Tab 01.12.2010 Kolb, Oliver Supply Networks
geöffnet) Lang, Jens
2623 (wird in Lindner, Marko
neuem Tab 03.11.2010 Roch, Steffen Finite sections of random Jacobi operators MSC: 65J10; 47B36; 47B80
2622 (wird in Domschke, Pia Adjoint-Based Control of Model and Discretization Errors for Gas Flow in
neuem Tab 10.10.2010 Kolb, Oliver Networks
geöffnet) Lang, Jens
2621 (wird in Farwig,Reinhard
neuem Tab 19.10.2010 Galdi, Giovanni Asymptotic Structure of a Leray Solution to the Navier-Stokes Flow Around a MSC: 35Q30; 76D05; 35B40
geöffnet) P. Rotating Body
Kyed, Mads
2620 (wird in Sawada, Okihiro On the analyticity and the almost periodicity of the solution to the Euler
neuem Tab 09.08.2010 Takada, Ryo equations with non-decaying initial velocity MSC: 35Q31; 76B03; 35B15
2619 (wird in Farwig,
neuem Tab 14.06.2010 Reinhard Global weak solutions of the Navier-Stokes equations with nonhomogeneous MSC: 35Q30; 35J65; 76D05
geöffnet) Kozono, Hideo boundary data and divergence
Sohr, Hermann
2618 (wird in Lindner, Marko
neuem Tab 10.06.2010 Roch, Steffen On the integer points in a lattice polytope: n-fold Minkowski sum and boundary MSC: 52B20; 52C07; 65J10
2617 (wird in Nesenenko, Homogenization of Viscoplastic Models of Monotone Type with Positive
neuem Tab 21.05.2010 Sergiy Semi-Definite Free Energy
2616 (wird in Kyed, Mads Steady-State Navier-Stokes Flows Past a Rotating Body: Leray Solutions are
neuem Tab 14.05.2010 Galdi,Giovanni Physically Reasonable MSC: 35Q30; 76D05; 76U05
geöffnet) P.
2615 (wird in Kyed, Mads
neuem Tab 14.05.2010 Galdi,Giovanni Asymptotic Behavior of a Leray Solution around a Rotating Obstacle MSC: 35Q30; 76D05; 76U05
geöffnet) P.
2584 (wird in Nesenenko, Well-posedness for dislocation based gradient visco-plasticity I:
neuem Tab 21.05.2010 Sergiy subdifferential case
geöffnet) Neff, Patrizio
2614 (wird in Reinhard Extensions of Serrin's uniqueness and regularity conditions for the
neuem Tab 26.04.2010 Sohr, Hermann Navier-Stokes equations MSC: 35Q30; 67D05
geöffnet) Varnhorn,
2613 (wird in Farwig, MSC: 35Q35; 76D05
neuem Tab 13.04.2010 Reinhard On the energy equality of Navier-Stokes equations in general unbounded domains We present a sufficient condition for the energy equality of Leray-Hopf's
geöffnet) Taniuchi, weak solutions to the Navier-Stokes equations in general unbounded
Yasushi 3--dimensional domains.
2612 (wird in Farwig,
neuem Tab 12.04.2010 Reinhard Leray's inequality for fluid flow in symmetric multi-connected two-dimensional MSC: 35Q30; 76D03; 76D05
geöffnet) Morimoto, domains
2611 (wird in Reinhard
neuem Tab 11.04.2010 Kozono, Hideo Leray's inequality in general multi-connected domains in $R^n$ MSC: 35Q35; 76D05
geöffnet) Yanagisawa,
2610 (wird in Ullmann,
neuem Tab 06.04.2010 Sebastian A POD-Galerkin Reduced Model with Updated Coefficients for Smagorinsky LES MSC: 76F65; 76D05; 76M25
geöffnet) Lang, Jens
2609 (wird in Gottermeier,
neuem Tab 31.03.2010 Bettina Adaptive Two-Step Peer Methods for Thermally Coupled Incompressible Flow MSC: 65M99; 76D05; 76M10
geöffnet) Lang, Jens
2608 (wird in Schieche, Stochastic Analysis of Nusselt Numbers for Natural Convection with Uncertain
neuem Tab 24.03.2010 Bettina Boundary Conditions MSC: 35R60; 65N35; 76D05
geöffnet) Lang, Jens
2607 (wird in Kolb, Oliver Modified QR Decomposition to Avoid Non-Uniqueness in Water Supply Networks
neuem Tab 04.03.2010 Domschke, Pia with Extension to Adjoint Calculus
geöffnet) Lang, Jens
2606 (wird in Domschke, Pia An Adaptive Model Switching and Discretization Algorithm for Gas Flow on
neuem Tab 04.03.2010 Kolb, Oliver Networks
geöffnet) Lang, Jens
2605 (wird in Alber, Solutions to a model with Neumann boundary conditions for phase transitions
neuem Tab 23.02.2010 Hans-Dieter driven by configurational forces MSC: 74N20; 35Q72
geöffnet) Zhu, Peicheng
2604 (wird in Rabinovich,
neuem Tab 22.02.2010 Vladimir S. Finite sections of band-dominated operators on discrete groups
geöffnet) Roch, Steffen
2603 (wird in Roch, Steffen Exponential estimates of solutions of pseudodifferential equations with
neuem Tab 10.02.2010 Rabinovich, operator-valued symbols. Applications to Schrödinger operators with MSC: 35xx; 58Jxx; 81Q10
geöffnet) Vladimir S. operator-valued potentials
2602 (wird in Farwig,
neuem Tab 04.02.2010 Reinhard Uniqueness of almost periodic-in-time solutions to Navier-Stokes equations in MSC: 35Q30; 35Q35; 76D05
geöffnet) Taniuchi, unbounded domains
2601 (wird in Periodic solutions of the Navier-Stokes equations with inhomogeneous boundary
neuem Tab 03.02.2010 Okabe, Takahiro conditions MSC: 35Q30; 76D05
Nummer Datum Autor Titel Abstract/MSC
2600 (wird
in neuem Tab 21.12.2009 Otto, Martin Avoiding Incidental Homomorphisms Into Guarded Covers
2599 (wird
in neuem Tab 21.12.2009 Otto, Martin Acyclicity in Hypergraph Covers
2598 (wird Farwig, Reinhard Spectral Analysis of a Stokes-Type Operator Arising
in neuem Tab 15.12.2009 Necasova, Sarka from Flow around a Rotating Body MSC: 35Q35; 35P99; 47A10; 76D07
geöffnet) Neustupa, Jiri
2597 (wird L^q-almost Solvability of Viscoplastic Models of
in neuem Tab 05.01.2010 Nesenenko, Sergiy Monotone Type
2596 (wird
in neuem Tab 04.12.2009 Roch, Steffen Spatial discretization of restricted group algebras
2595 (wird Clever, Debora Optimal Control of Radiative Heat Transfer in Glass
in neuem Tab 20.11.2009 Lang, Jens Cooling with Restrictions on the Temperature Gradient MSC: 35K10; 35K58; 35R15; 65M99; 35Q80; 35Q93; 65Z05
2594 (wird Gruber, Peter
in neuem Tab 18.11.2009 Knees, Dorothee Analytical and numerical aspects of time-dependent MSC: 74C05; 74C10; 49N60; 65M60
geöffnet) Nesenenko, Sergiy models with internal variables
Thomas, Marita
2593 (wird Debrabant, Semi-Lagrangian schemes for linear and fully
in neuem Tab 06.10.2009 Kristian non-linear diffusion equations MSC: 65M12; 65M15; 65M06; 35K10; 35K55; 35K65; 49L25; 49L20
geöffnet) Jakobsen, Espen R.
2592 (wird Gottermeier, Adaptive Two-Step Peer Methods for Incompressible
in neuem Tab 01.10.2009 Bettina Navier-Stokes Equations MSC: 65M99; 76D05; 76M10
geöffnet) Lang, Jens
2591 (wird Farwig, Reinhard Leading term at infinity of steady Navier-Stokes flow
in neuem Tab 26.08.2009 Hishida, Toshiaki around a rotating obstacle MSC: 35B40; 35Q30; 76D05
2590 (wird Ali Mehmeti, Felix The Klein-Gordon equation with multiple tunnel effect
in neuem Tab 23.06.2009 Haller-Dintelmann, on a star-shaped network: expansions in generalized MSC: 34B45(Primary); 42A38; 47A10; 47A60; 47A70(Secondary)
geöffnet) Robert eigenfunctions
Régnier, Virginie
2589 (wird Debrabant, Runge-Kutta methods for third order weak
in neuem Tab 16.06.2009 Kristian approximation of SDEs with multidimensional additive MSC: 65C30; 60H35; 65C20; 68U20
geöffnet) noise
2588 (wird Spatial Regularity of weak solutions to the Navier-Stokes
in neuem Tab 15.06.2009 discretization of equations in exterior domains MSC: 76D05; 35Q30; 35B65
geöffnet) $C^*$-algebras
2587 (wird
in neuem Tab 28.05.2009 Roch, Steffen Spatial discretization of $C^*$-algebras
2586 (wird Ebobisse, Francois Existence and uniqueness for rate-independent
in neuem Tab 28.05.2009 Neff, Patrizio infinitesimal gradient plasticity with isotropic MSC: 74C05; 49J40; 49J52; 35J25; 35Q72
geöffnet) hardening and plastic spin
2585 (wird Farwig, Reinhard Spectral Properties in $L^q$ of an Oseen Operator
in neuem Tab 20.05.2009 Neustupa, Jiri Modelling Fluid Flow past a Rotating Body MSC: 35Q35; 35P99; 76 D 07 35\,P\,99, 76\,D\,07
2583 (wird Mößner, Bernhard
in neuem Tab 17.04.2009 Reif, Ulrich Stability of Tensor Product B-Splines on Domains
2582 (wird Hechler, Jochen
in neuem Tab 17.04.2009 Mößner, Bernhard $C^1$-Continuity of the Generalized Four-Point Scheme
geöffnet) Reif, Ulrich
2581 (wird Polynomial Approximation on Domains Bounded by
in neuem Tab 17.04.2009 Ulrich Reif Diffeomorphic Images of Graphs
2580 (wird Mößner, Bernhard Error Bounds for Polynomial Tensor Product We provide estimates for the maximum error of polynomial tensor product interpolation on regular grids
in neuem Tab 17.04.2009 Reif, Ulrich Interpolation in $\mathbb{R}$^d. The set of partial derivatives required to form these bounds depends on the
geöffnet) clustering of interpolation nodes. Also bounds on the partial derivatives of the error are derived.
2579 (wird App, Andreas We derive Sobolev-type inner products with respect to which hat functions on arbitrary triangulations
in neuem Tab 17.04.200 Reif, Ulrich Piecewise Linear Orthogonal Approximation of domains in $\mathbb{R}^d$ are orthogonal. Compared with linear interpolation, the resulting
geöffnet) approximation schemes yield superior accuracy at little extra cost.
2578 (wird Farwig, Reinhard Asymptotic profile of steady Stokes flow around a
in neuem Tab 17.04.2009 Hishida, Toshiaki rotating obstacle MSC: 35Q30; 35Q35; 35B40; 76D07
2577 (wird Steffen Roch A sequence algebra of finite sections, convolution
in neuem Tab 09.03.2009 Pedro A. Santos and multiplication operators on $L^p(R)$ MSC: 65R20
geöffnet) Bernd Silbermann
2576 (wird Hofmann, Karl H.
in neuem Tab 25.02.2009 Morris, Sidnay A. The Structure of Almost Connected Pro-Lie Groups
2575 (wird Riechwald, Paul A Large Class of Solutions for the Instationary
in neuem Tab 16.02.2009 Felix Navier-Stokes System MSC: 35Q30; 76D05; 76D07
geöffnet) Schumacher, Katrin
2574 (wird Neff, Patrizio Stableidentification of linear isotropic Cosserat
in neuem Tab 05.02.2009 Jeong, Jena parameters: bounded stiffness in bending and torsion
geöffnet) Fischle, Andreas implies conformal invariance of curvature
2573 (wird Farwig, Reinhard Global Weak Solutions of the Navier-Stokes System
in neuem Tab 21.01.2009 Kozono, Hideo with Nonzero Boundary Conditions MSC: 76D05; 35Q30; 35J65
geöffnet) Sohr, Hermann
2572 (wird Rabinovich, Essential spectra and exponential estimates of
in neuem Tab 19.01.2009 Vladimir eigenfunctions of lattice operators of quantum MSC: 39A47; 47B39; 81Q10
geöffnet) Roch, Steffen mechanics
Nummer Datum Autor Titel Abstract/MSC
2571 Ferreira,
(wird in Carlos Mathematical Models and Polyhedral Studies
neuem Tab 22.12.2008 Günther, Ute for Integral Sheet Metal Design
geöffnet) Martin,
2570 Huang,
(wird in Weizhang A New Anisotropic Mesh Adaptation Method
neuem Tab 05.12.2008 Kamenski, Based upon Hierarchical A Posteriori Error MSC: 65N50; 65N30; 65N15
geöffnet) Lennard Estimates
Lang, Jens
2569 Alber,
(wird in 21.11.2008 Hans-Dieter Local and global regularity in time
neuem Tab Nesenenko, dependent viscoplasticity
geöffnet) Sergiy
2568 Neff, Patrizio Subgrid interaction and micro-randomness –
(wird in 31.10.2008 Jeong, Jena novel invariance requirements in
neuem Tab Ramezani, infinitesimal gradient elasticity
geöffnet) Hamid
2567 Alber, Justification of homogenization in
(wird in 31.10.2008 Hans-Dieter viscoplasticity: From convergence on two
neuem Tab Nesenenko, scales to an asymptotic solution in ${L^2
geöffnet) Sergiy (\Omega)}$
2566 A Note on Existence Result for
(wird in 04.01.2010 Nesenenko, Viscoplastic Models with Nonlinear
neuem Tab Sergiy Hardening
2565 Ehrhardt,
(wird in Torsten The Strong Szegö-Widom Limit Theorem for
neuem Tab 14.10.2008 Roch, Steffen operators with almost periodic diagonal MSC: 47B35; 47B37; 47N40
geöffnet) Silbermann,
2564 Debrabant, Stochastic Taylor Expansions: Weight MSC: 65C30; 60H10
(wird in 13.10.2008 Kristian functions of B-series expressed as The exact solution of stochastic differential equations can be expressed as stochaastic B-series. In this paper, we
neuem Tab Kværnø, Anne multiple integrals present an algorithm using rooted trees for expanding the weight functions occurring in this representation in terms of
geöffnet) multiple integrals using multi-indices.
Wille, Rudolf
2563 13.10.2008 Wille-Henning, The Mathematical in Music Thinking 00A05
Kolb, Oliver Moving Penalty Functions for Optimal An adaptive penalty technique to find feasible solutions of mixed integer nonlinear optimal control problems on
2562 26.10.2008 Domschke, Pia Control with PDEs on Networks networks is introduced. This new approach is applied to problems arising in the operation of gas and water supply
Lang, Jens networks.
2561 30.09.2008 Scheffold, Interessante Kongruenzen im Zusammenhang We derive special congruences from the formulas of Abel and Barlow and we remember of König's theory.
Egon mit den Formeln von Abel und Barlow
2560 Farwig,
(wird in Reinhard On optimal initial value conditions for
neuem Tab 12.09.2008 Sohr, Hermann local strong solutions of the MSC: 35Q30; 76D05
geöffnet) Varnhorn, Navier-Stokes equations
2559 A new paradigm: the linear isotropic
(wird in 07.09.2008 Neff, Patrizio Cosserat model with conformally invariant
neuem Tab Jeong, Jena curvature energy.
2558 Jeong, Jena
(wird in Ramezani, Simulation of linear isotropic Cosserat
neuem Tab 27.08.2008 Hamid elasticity with conformally invariant
geöffnet) Münch, Ingo curvature
Neff, Patrizio
2557 Neff, Patrizio
(wird in 05.08.2008 Chelminski, $H^1_{loc}$-stress and strain regularity
neuem Tab Krzysztof in Cosserat plasticity
2556 Neff, Patrizio
(wird in Jeong, Jena Mean field modeling of isotropic random
neuem Tab 05.08.2008 Münch, Ingo Cauchy elasticity versus microstretch
geöffnet) Ramezani, elasticity.
2555 Neff, Patrizio
(wird in 05.08.2008 Hong, Kwon-Il The Reissner-Mindlin plate is the $\
neuem Tab Jeong, Jena Gamma$-limit of Cosserat elasticity.
2554 Bales, Pia
(wird in 12.069.2008 Kolb, Oliver Hierarchical modelling and model MSC: 76N25; 65K99; 65M99
neuem Tab Lang, Jens adaptivity for gas flow on networks
2553 Kolb, Oliver Adaptive linearization for the optimal
(wird in 26.06.2008 Lang, Jens control problem of gas flow in pipeline MSC: 90C35; 65K99; 65M12
neuem Tab Bales, Pia networks
Bales, Pia
2552 Geißler, Björn
(wird in Kolb, Oliver Combination of Nonlinear and Linear
neuem Tab 09.03.2008 Lang, Jens Optimization of Transient Gas Networks MSC: 76N25; 90C11; 90C30; 90C90
geöffnet) Martin,
Morsi, Antonio
2551 Numerically Solving Maxwell’s Equations. Having experienced of couple of tricky issues during the implementation of edge elements within the fully space-time
(wird in 11.09.2008 Delia Teleaga Implementation Issues for adaptive PDE solver KARDOS to solve magnetoquasistatic problems we found it useful to share our exciting learning
neuem Tab Jens Lang Magnetoquasistatics process with interested readers and beginners.
(wird in 01.04.2008 Wille, Rudolf Formal Concept Analysis and Contextual MSC: 03B; 03B
neuem Tab Logic
2548 26.05.2008 Farwig Vorticity, Rotation and Symmetry -- Vorticity, Rotation and Symmetry – Stabilizing and Destabilizing Fluid Motion
Jiri Neustupa Stabilizing and Destabilizing Fluid Motion
Patrick Penel
(wird in 26.05.2008 Neeb, Semi-bounded unitary representations of Semi-bounded unitary representations of infinite-dimensional of Lie groups
neuem Tab Karl-Hermann infinite-dimensional Lie groups
2550 Existence, uniqueness and stability in
(wird in 05.08.2008 Jeong, Jena linear Cosserat elasticity for weakest
neuem Tab Neff, Patrizio curvature conditions
2546 Farwig, The Largest Possible Initial Value Space
(wird in 28.04.2008 Reinhard for Local Strong Solutions of the The Largest Possible Initial Vaue Space for Local Strong Solutions of the Navier-Stokes Equations in General Domains
neuem Tab Sohr, Hermann Navier-Stokes Equations in General Domains
(wird in 24.04.2008 Roch, Steffen Spatial discretization of Cuntz algebras Spatial discretization of Cuntz algebras
neuem Tab
2544 Alber, Interface motion by interface diffusion
(wird in 28.04.2008 Hans-Dieter driven by bulk energy: justification of a Interface motion by interface diffusion driven by bulk energy: justification of a diffusive interface model
neuem Tab Zhu, Peicheng diffusive interface model
2543 Agmon's type estimates of exponential
(wird in Rabinovich, behavior of solutions of systems of Agmon's type estimates of exponential behaviour of solutionsof systems of elliptic partial differential equations.
neuem Tab 28.04.2008 Vladimir partial differential equations. Applications to Schrödinger, Moisil-Theodorescu and Dirac operators.
geöffnet) Roch, Steffen Applications to Schrödinger,
Moisil-Theodorescu and Dirac operators.
MSC: 17B56; 35Q53
2542 Neeb, In this paper we develop an abstract setup for hamiltonian group actions as follows: Starting with a continuous
(wird in Karl-Hermann An abstract setting for hamiltonian $2$-cochain $\omega$ on a Lie algebra $h$ with values in an $h$-module $V$, we associate subalgebras $sp(h,\omega) \
neuem Tab 18.03.2008 Vizman, actions supseteq ham(h,\omega)$ of symplectic, resp., hamiltonian elements. Then $ham(h,\omega)$ has a natural central
geöffnet) Cornelia extension which in turn is contained in a larger abelian extension of $sp(h,\omega)$. In this setting, we study linear
actions of a Lie group $G$ on $V$ which are compatible with a homomorphism $g \rightarrow ham(h,\omega)$, i.e. abstract
hamiltonian actions, corresponding central and abelian extensions of $G$ and momentum maps $J : g \rightarrow V$.
MSC: 22E65; 58B34
We call a unital locally convex algebra $A$ a continuous inverse algebra if its unit group $A^\times$ is open and
2541 Lie group extensions associated to inversion is a continuous map. For any smooth action of a, possibly infinite-dimensional, connected Lie group $G$ on a
(wird in 18.03.2008 Karl-Hermann projective modules of continuous inverse continuous inverse algebra $A$ by automorphisms and any finitely generated projective right $A$-module $E$, we
neuem Tab Neeb algebras construct a Lie group extension $\hat G$ of $G$ by the group $GL_A(E)$ of automorphisms of the $A$-module $E$. This Lie
geöffnet) group extension is a ``non-commutative'' version of the group $Aut(V)$ of automorphism of a vector bundle over a
compact manifold $M$, which arises for $G = Diff(M)$, $A = C^\infty(M,C)$ and $E = \Gamma V$. We also identify the Lie
algebra $\hat g$ of $\hat G$ and explain how it is related to connections of the $A$-module $E$.
MSC: 03B
This paper presents a mathematization of the philosophical dcotrine of judgments as an extension of the mathem
atization of the philosophical doctrine of concepts delveloped in Formal Concept Analysis. The chosen approach was
Concept Graphs as Semantic Structures for strongly sti mulated by J.F. Sowa's theory of conceptual graphs. The mathematized conceptual graphs, called concept
2540 18.03.2008 Rudolf Wille Contextual Judgment Logic graphs, are mathematical semantic structures based on formal contexts and their formal concepts; those semantic
structures are viewed as formal judgmen ts in the underlying Contextual Judgment Logic. In this papper concept graphs
are systematically built up with simple concepts graphs in section 2 and continuing with existential graphs in section
3, with implicational and clausal concept graphs in secti on 4, and finally with generalizations of concept graphs in
section 5. Examples are illustrating the different types of concept graphs.
MSC: 06F
This paper continues the approach of developing an order-theoretic structure theory of one-dimensional continu um
An Algebraization of Linear Continuum structures as elaborated in [Wi07] (see also [Wi83],[Wi03]. The aim is to extend the order-theoretic structure theory
2539 18.03.2008 Rudolf Wille Structures by a m eaningful algebraization; for this, we concentrate on the real linear continuum structure with its derived
concept lattice whic h gives rise to the so-called „real half-numbers“. The algebraization approaches an ordered
algebraic structure on the set of a ll real half-numbers to make the continuum structure ot the reals more transparent
and tractable.
2538 MSC: 35J10; 35J15
(wird in 18.03.2008 Alber, H.-D. Asymptotics of the solution to Robin Convergence of the solution to the exterior Robin problem to the solution of the Dirichlet problem, as the impedance
neuem Tab Ramm, A. G. problem tends to infinity, is proved. The rate of convergence is established. A method for deriving higher order terms of the
geöffnet) asymptotics of the solution is given.
MSC: 35B65; 35D10; 74C10; 74D10; 35J25; 34G20; 34G25; 47H04; 47H05
2537 Alber, Local $H^1$--regularity and $H^{1/3-\ Local and boundary regularity for quasistatic initial-boundary value problems from viscoplasticity is studied. The
(wird in 18.03.2008 Hans-Dieter delta}$--regularity up to the boundary in problems considered belong to a general class with monotone constitutive equations modelling materials showing
neuem Tab Nesenenko, time dependent viscoplasticity kinematic hardening. A standard example is the Melan-Prager model. It is shown that the strain/stress/internal variable
geöffnet) Sergiy fields have $H^{1+1/3-\delta}/H^{1/3-\delta}/H^{1/3-\delta}$ regularity up to the boundary. The proof uses perturbation
estimates for monotone operator equations.
2536 Hofmann, Karl MSC: 22A05; 22D05; 22E15
(wird in 01.03.2008 Heinrich Solvable Subgroups of Locally Compact It is shown that a closed solvable subgroup of a connected Lie groupis compactly generated. In particular, every
neuem Tab Neeb, Groups discrete solvable subgroupof a connected Lie group is finitely generated.Generalizations to locally compact groupsare
geöffnet) Karl-Hermann discussed as far as they carry.
Nummer Datum Autor Titel Abstract/MSC
MSC: 00-99
In this article the following thesis is explained and substantiated: Sense and meaning of mathematics finally lie in the fact that
Wille, Communicative Rationality, mathematics is able to report the rational communication of humans. The essence of the argumentation is that the effective support
2535 14.12.2007 Rudolf Logic, and Mathematics becomes possible by the close connection between mathematics and logic (in the sense of Peirce's latest philosophy) by which, in his
turn, the communicative rationality (in the sense of Habermas' theory of communicative action) can be activated. How such a support
may be concretely performed shall be illustrated by the development of a retrieval system for the library of the Center of
Interdisciplinary Technology Research at Darmstadt University of Technology.
Clemens, MSC: 65M60; 65L06; 78M10
2534 Markus This paper addresses fully space-time adaptive magnetic field computations. We describe an adaptive Whitney finite element method
(wird in Lang, Jens Adaptivity in Space and Time for for solving the magnetoquasistatic formulation of Maxwell's equations on unstructured 3D tetrahedral grids. Spatial mesh refinement
neuem Tab 29.11.2007 Teleaga, Magnetoquasistatics and coarsening are based on hierarchical error estimators especially designed for combining tetrahedral H(curl)-conforming edge
geöffnet) Delia elements in space with linearly implicit Rosenbrock methods in time. An embedding technique is applied to get efficiency in time
Wimmer, through variable time steps. Finally, we present numerical results for the magnetic recording write head benchmark problem proposed
Georg by the Storage Research Consortium in Japan.
MSC: 35Q30; 76D05; 35B65
Consider the instationary Navier-Stokes system in a smooth bounded domain $\Omega\subset R^3$ with vanishing force and initial value
$u_0\in L^2_\sigma(\Omega)$. Since the work of Kiselev-Ladyzhenskaya in 1963 there have been found several conditions on $u_0$ to
2533 Farwig, Optimal Initial Value Conditions prove the existence of a unique strong solution $u\in L^s(0,T; L^q(\Omega))$ with $u(0) = u_0$ in some time interval $[0,T)$, $0 < T
(wird in 10.12.2007 Reinhard for the Existence of Local \leq \infty$, where the exponents $2 < s < \infty$, $3 < q < \infty$ satisfy $\frac{2}{s} + \frac{3}{q} = 1$. Indeed, such
neuem Tab Sohr, Strong Solutions of the conditions could be weakened step by step, thus enlarging the corresponding solution classes. Our aim is to prove the following
geöffnet) Hermann Navier-Stokes Equations optimal result with the weakest possible initial value condition and the largest possible solution class: Given $u_0,¸q,¸s$ as above
and the Stokes operator $A_q$, we prove that the condition $\int_0^\infty \| e^{-tA_q}u_0\|_q^s¸ dt < \infty$ is necessary and
sufficient for the existence of such a strong solution $u$. The proof rests on arguments from the recently developed theory of very
weak solutions.
MSC: 65C30; 60H35; 65C20; 68U20
2532 Debrabant, Diagonally Drift--Implicit Families of first and second order diagonally drift--implicit SRK (DDISRK) methods for the weak approximation of SDEs contained in
(wird in Kristian Runge--Kutta Methods of Weak the class of SRK methods proposed by R{ö}{ß}ler are calculated. Their asymptotic stability as well as mean--square stability
neuem Tab 03.11.2007 Rö{ß}ler, Order One and Two for It{ô} SDEs (MS--stability) properties are studied for a linear stochastic test equation with multiplicative noise. The stability functions for
geöffnet) Andreas and Stability Analysis the DDISRK methods are determined and their domains of stability are compared to the corresponding domain of stability of the
considered test equation. Stability regions are presented for various coefficients of the families of DDISRK methods in order to
determine step size restrictions such that the numerical approximation reproduces the characteristics of the solution process.
MSC: 0099
What mathematics could and should mean for humans in general may only be clarified in a broader process of communication and
Generalistic Mathematics as understanding. This process of understanding needs a general culture of discourse which should not only be restricted to the
2531 27.11.2007 Wille, Mathematics for the General discourse between mathematicians, but as a matter of principle should include all humans whether they are actively concerned with
Rudolf Public mathematics or only been affected by consequences of mathematical developments. Such a culture of discourse is dependent on a
'generalistic mathematics' which makes understandable the conception of mathematics, its connection to the world, and sense,
meaning, and connection of mathematical acitivities; moreover, generalistic mathematics is guided by the idea of an open,
meaningful, communicative and critical mathematics.
Wille, Logisch denken lernen im MSC: 97
2530 27.11.2007 Rudolf Mathematikunterricht Logisch denken lernen im Mathematikunterricht wird jeweils getragen von konkret-realer, philsosophisch-logischer und mathematischer
MSC: 06A
Formal Concept Analysis is a mathematical theory of concept hierarchies which is based on Lattice Theory. It has been developed to
2529 27.11.2007 Wille, Formal Concept Analysis as support humans in their thought and knowledge. The aim of this paper is to show how successful the lattice-theoretic foundation can
Rudolf Applied Lattice Theory be in applying Formal Concept Analysis in a wide range. This is demonstraded in three sections dealing with representation,
processing and measurement of conceptual knowledge. Finally, further relationships between abstract Lattice Theory and Formal
Concept Analysis are briefly discussed.
MSC: 65C30; 60H35; 65C20; 68U20
Families of efficient second Recently, a new class of second order Runge-Kutta methods for Itô stochastic differential equations with a multidimensional Wiener
2528 Debrabant, order Runge-Kutta methods for process was introduced by Rö{ß}ler. In contrast to second order methods earlier proposed by other authors, this class has the
(wird in 15.11.2007 Kristian the weak approximation of Itô advantage that the number of function evaluations depends only linearly on the number of Wiener processes and not quadratically. In
neuem Tab Rö{ß}ler, stochastic differential this paper, we give a full classification of the coefficients of all explicit methods with minimal stage number. Based on this
geöffnet) Andreas equations classification, we calculate the coefficients of an extension with minimized error constant of the well-known RK32 method to the
stochastic case. For three examples, this method is compared numerically with known order two methods and yields very promising
2527 Reinhard Very weak, weak and strong MSC: 35Q30; 35B65; 76D05; 76D07
(wird in 14.11.2007 Kozono, solutions to the instationary In this survey paper we discuss the theory of very weak solutions to the stationary and instationary (Navier-)Stokes system in a
neuem Tab Hideo Navier-Stokes system bounded domain of $R^3$ and show how this new notion of solutions may be used to prove regularity locally or globally in space and
geöffnet) Sohr, time of a given weak solution.
MSC: 35Q30; 35D05
2526 The Instationary Navier-Stokes We investigate the solvability of the instationary Navier-Stokes equations with fully inhomogeneous data in a bounded domain. The
(wird in 19.09.2007 Schumacher, Equations in Weighted class of solutions is contained in the space variable in a Bessel-Potential space weighted with a Muckenhoupt weight. In this
neuem Tab Katrin Bessel-Potential Spaces context we derive solvability for small data, where this smallness can be realized by the restriction on a short time interval.
geöffnet) Depending on the order of this Bessel-Potential space we are dealing with strong solutions, weak solutions, or with very weak
MSC: 35Q30; 35D05
We investigate the solvability of the instationary Stokes equations with fully inhomogeneous data in a weighted Bessel-Potential
2525 The Instationary Stokes space. Depending on the order of this Bessel-Potential space we are dealing with strong solutions or with very weak solutions.
(wird in 11.09.2007 Schumacher, Equations in Weighted Whereas in the context of lowest regularity one obtains solvability with respect to inhomogeneous data by dualization, this is more
neuem Tab Katrin Bessel-Potential Spaces delicate in the case of higher regularity, where one has to introduce some additional time regularity. As a preparation, we
geöffnet) introduce a generalization of the Stokes operator that is appropriate to the context of very weak solutions in weighted
Bessel-Potential spaces. \end{abstract} \emph{Key Words and Phrases:} Instationary Stokes equations, Muckenhoupt weights, very weak
solutions, Bessel-Potential spaces, nonhomgeneous data
MSC: 35Q30; 35D05; 76D05; 35J65
2524 The Stationary Navier-Stokes We investigate the stationary Navier-Stokes equations in Bessel-potential spaces with Muckenhoupt weights. Since in this setting it
(wird in 23.08.2007 Schumacher, Equations in Weighted is possible that the solutions do not posses any weak derivatives, we use the notation of very weak solutions introduced by Amann
neuem Tab Katrin Bessel-Potential Spaces [1]. The basic tool is complex interpolation, thus we give a characterization of the interpolation spaces of the spaces of data and
geöffnet) solutions. Then we establish a theory of solutions to the Stokes equations in weighted Bessel-potential spaces and use this to prove
solvability of the Navier-Stokes equations for small data by means of Banach's Fixed Point Theorem.
2523 Very Weak Solutions to the MSC: 35Q30; 35D05; 76D07; 35J25
(wird in Schumacher, Stationary Stokes and Stokes We investigate very weak solutions to the stationary Stokes and Stokes resolvent problem in function spaces with Muckenhoupt
neuem Tab 23.07.2007 Katrin Resolvent Problem in Weighted weights. The notion used here is similar but even more general than the one used in [2] or [14]. Consequently the class of solutions
geöffnet) Function Spaces is enlarged. To describe boundary conditions we restrict ourselves to more regular data. We introduce a Banach space that admits a
restriction operator and that contains the solutions according to such data.
MSC: 65M60; 78M10
2522 This paper is concerned with fully space-time adaptive magnetic field computations. We describe a Whitney finite element method for
(wird in Lang, Jens Towards a Fully Space-Time solving the magnetoquasistatic formulation of Maxwell's equations on unstructured 3D tetrahedral grids. Spatial discretization is
neuem Tab 16.07.2007 Teleaga, Adaptive FEM for done by employing hierarchical tetrahedral H(curl)-conforming elements proposed by Ainsworth and Coyle. For the time discretization,
geöffnet) Delia Magnetoquasistatics we use a newly constructed one-step Rosenbrock method ROS3PL with 3rd order accuracy in time. Adaptive mesh refinement and
coarsening are based on hierarchical error estimators especially designed for Rosenbrock methods. An embedding technique is applied
to get efficiency in time through variable time steps. Finally, we present numerical results for the benchmark problem TEAM 7.
MSC: 35Q30; 76D05; 35B65
Farwig, We present several new regularity criteria for weak solutions $u$ of the instationary Navier-Stokes system which additionally
2521 Reinhard satisfy the strong energy inequality. (i) If the kinetic energy $1/2 \| u(t) \|_2^2$ is Hölder continuous as a function of time $t$
(wird in 18.07.2007 Kozono, Energy-Based Regularity Criteria with Hölder exponent $\alpha \in (1/2,1)$, then $u$ is regular. (ii) If the dissipation energy satisfies the left-side condition $\
neuem Tab Hideo for the Navier-Stokes Equations liminf_{\delta \to 0} \delta^{-\alpha} \int_{t-\delta}^t \| \na u\|_2^2 ¸ d\tau < \infty$, $\alpha \in (1/2,1)$, for all $t$ of the
geöffnet) Sohr, given time interval, then $u$ is regular. The proofs use local regularity results which are based on the theory of very weak
Hermann solutions and on uniqueness arguments for weak solutions. Finally, in the last section, we mention a local space-time regularity
MSC: 74C05; 35B65; 49N60; 74A35; 74G40
In this note we investigate the question of higher regularity up to the boundary for quasilinear elliptic systems which origin from
the time-discretization of models from infinitesimal elasto-plasticity. Our main focus lies on an elasto-plastic Cosserat model.
More specifically we show that the time discretization renders $H^2$-regularity of the displacement and $H^1$-regularity for the
2520 Neff, Regularity up to the boundary symmetric plastic strain $\varepsilon_p$ up to the boundary provided the plastic strain of the previous time step is in $H^1$, as
(wird in 29.06.2007 Patrizio for nonlinear elliptic systems well. This result contrasts with classical Hencky and Prandtl-Reuss formulations where it is known not to hold due to the occurrence
neuem Tab Knees, arising in time-incremental of slip lines and shear bands. Similar regularity statements are obtained for other regularizations of ideal plasticity like
geöffnet) Dorothee infinitesimal elasto-plasticity viscosity or isotropic hardening. In the first part we recall the time continuous Cosserat elasto-plasticity problem, provide the
update functional for one time step and show various preliminary results for the update functional (Legendre-Hadamard/monotonicity).
Using non standard difference quotient techniques we are able to show the higher global regularity. Higher regularity is crucial for
qualitative statements of finite element convergence. As a result we may obtain estimates linear in the mesh-width $h$ in error
2519 Helmut MSC: 46E35; 46E30
(wird in 10.06.2007 Krbec, On the Trace Space of a Sobolev Our concern in this paper lies with trace spaces for weighted Sobolev spaces, when the weight is a power of the distance to a point
neuem Tab Miroslav Space with a Radial Weight at the boundary. For a large range of powers we give a full description of the trace space.
geöffnet) Schumacher,
2518 Neff, Symmetric Cauchy stresses do not MSC: 74A35; 74B20
(wird in Patrizio imply symmetric Biot strains in We show that symmetric Cauchy stresses do not imply symmetric Biot strains in weak formulations of finite isotropic hyperelasticity
neuem Tab 04.06.2007 Fischle, weak formulations of isotropic with exact rotational degrees of freedom. This is contrary to claims in the literature which are valid, however, in the linear
geöffnet) Andreas hyperelasticity with rotational isotropic case.
Muench, Ingo degrees of freedom.
MSC: 17B56; 17B65; 17B68
2517 In the present paper we determine for each parallelizable smooth compact manifold ${\sst M}$ the cohomology spaces ${\sst H^2({\cal
(wird in Billig, Yuly On the cohomology of vector V}_M,\oline\Omega^p_M)}$ of the Lie algebra ${\sst {\cal V}_M}$ of smooth vector fields on ${\sst M}$ with values in the module ${\
neuem Tab 22.05.2007 Neeb, fields on parallelizable sst \oline\Omega^p_M = \Omega^p_M/d\Omega^{p-1}_M}$. The case of ${\sst p=1}$ is of particular interest since the gauge algebra ${\
geöffnet) Karl-Hermann manifolds sst C^\infty (M,\k)}$ has the universal central extension with center ${\sst \oline\Omega^1_M}$, generalizing affine Kac-Moody
algebras. The second cohomology ${\sst H^2(\V_M, \oline\Omega^1_M)}$ classifies twists of the semidirect product of ${\sst \V_M}$
with the universal central extension ${\sst C^\infty (M,\k) \oplus \oline\Omega^1_M}$.
MSC: 47B36; 47B39; 47A53
2516 Rabinovich, Let $(X, \sim)$ be a combinatorial graph the vertex set $X$ of which is a discrete metric space. We suppose that a discrete group
(wird in Vladimir S. Fredholm properties of $G$ acts freely on $(X, \sim)$ and that the fundamental domain with respect to the action of $G$ contains only a finite set of
neuem Tab 21.05.2007 Roch, band-dominated operators on points. A graph with these properties is called periodic with respect to the group $G$. We examine the Fredholm property and the
geöffnet) Steffen periodic discrete structures essential spectrum of band-dominated operators acting on the spaces $l^p(X)$ or $c_0(X)$, where $(X, \sim)$ is a periodic graph. Our
approach is based on the thorough use of band-dominated operators. It generalizes the results obtained by the authors and B.
Silbermann in the special case $X = G = \sZ^n$ and by J. Roe in case $X = G$ is a general finitely generated discrete group.
2515 01.05.2007 Scheffold, Kongruenzen im Zusammenhang mit
Egon den Formeln von Barlow und Abel
2514 Mößner, We construct a uniformly stable family of bases for tensor product spline approximation on bounded domains in $\R^d$. These bases
(wird in 05.05.2007 Bernhard Stability of B-Splines on are derived from the standard B-spline basis by normalization with respect to the $L^p$-norm and a selection process relying on
neuem Tab Reif, Ulrich Bounded Domains refined estimates for the de Boor-Fix functionals.
Debrabant, Continuous Runge--Kutta methods MSC: 65C30; 60H35; 65C20; 68U20
2513 01.05.2007 Kristian for Stratonovich stochastic In this article we give order conditions for continuous stochastic Runge--Kutta methods of second order for the weak approximation
Rößler, differential equations of Stratonovich stochastic differential equations. As an example, by using these order conditions, two time discrete order two SRK
Andreas schemes are extended to continuous schemes. Finally, numerical examples confirm our theoretical results.
MSC: 65M15; 65M06; 65M20; 65M60
2512 Debrabant, The aim of this paper is to extend the global error estimation and control addressed in Lang and Verver [SIAM J. Sci. Comput., 2007]
(wird in 09.05.2007 Kristian On Global Error Estimation and for initial value problems to parabolic partial differential equations. The classical ODE approach based on the first variational
neuem Tab Lang, Jens Control for Parabolic Equations equation is combined with an estimation for the PDE spatial truncation error to estimate the overall error in the computed solution.
geöffnet) Control is achieved through tolerance proportionality and uniform mesh refinement. Numerical examples are used to illustrate the
reliability of the estimation and control strategies.
2511 MSC: 35F15
(wird in 01.05.2007 Schumacher, Solutions to the Equation $\div We consider the problem $\div u=f$ in a bounded Lipschitz domain $\Omega$, where $f$ with $\int_\Omega f=0$ is given. It is shown
neuem Tab Katrin u=f$ in Weighted Sobolev Spaces that the solution $u$, that is constructed as in Bogowski's approach in [1] fulfills estimates in the weighted Sobolev spaces $W^
geöffnet) {k,q}_{w}(\Omega)$, where the weight function $w$ is contained in the class of Muckenhoupt weights $A_q$.
MSC: 47A20; 35A99; 46E35
2510 A Chart Preserving the Normal Given a domain $\Omega$ of class $C^{k,1}$, $k\in \N$ we construct a chart that maps normals to the boundary of the half space to
(wird in 02.05.2007 Schumacher, Vector and Extensions of Normal normals to the boundary of $\Omega$ in the sense that $\frac\pa{\pa x_n}\alpha(x',0)= – N(x')$ and that still is of class $C^{k,1}$.
neuem Tab Katrin Derivatives in Weighted Function As an application we prove the existence of a continuous extension operator for all normal derivatives of order 0 to $k$ on domains
geöffnet) Spaces of class $C^{k,1}$. The construction of this operator is performed in weighted function spaces where the weight function is taken
from the class of Muckenhoupt weights.
2509 Stability Estimates for the MSC: 35R30; 35J25
(wird in 14.03.2007 Heck, Horst inverse conductivity problem for We prove a $\log$-type stability estimate for the inverse conductivity problem in space dimension $n\geq 3$, if the conductivity has
neuem Tab less regular conductivities $C^{3/2+\varepsilon}$ regularity.
2508 Higher-order linearly implicit MSC: 76D05; 76M10
(wird in Teleaga, one-step methods for In this work higher-order methods for integrating the three-dimensional incompressible Navier-Stokes equations are proposed. The
neuem Tab 19.03.2007 Ioan three-dimensional incompressible numerical solution is achieved by using linearly implicit one-step methods up to third order in time coupled with up to third order
geöffnet) Lang, Jens Navier-Stokes equations stable finite element discretizations in space. These orders of convergence are demonstrated by comparing the numerical solution
with exact Navier-Stokes solutions. Finally, we present benchmark computations for flow around a cylinder.
MSC: 81Q10; 46N50; 47B36
Let $(\cX, ¸ \rho)$ be a discrete metric space. We suppose that the group $\sZ^n$ acts freely on $X$ and that the number of orbits
2507 Rabinovich, Essential spectra of difference of $X$ with respect to this action is finite. Then we call $X$ a $\sZ^n$-periodic discrete metric space. We examine the Fredholm
(wird in 18.04.2007 V. S. operators on $\sZ^n$-periodic property and essential spectra of band-dominated operators on $l^p(X)$ where $X$ is a $\sZ^n$-periodic discrete metric space. Our
neuem Tab Roch, S. graphs approach is based on the theory of band-dominated operators on $\sZ^n$ and their limit operators. In case $X$ is the set of vertices
geöffnet) of a combinatorial graph, the graph structure defines a Schrödinger operator on $l^p(X)$ in a natural way. We illustrate our
approach by determining the essential spectra of Schrödinger operators with slowly oscillating potential both on zig-zag and on
hexagonal graphs, the latter being related to nano-structures.
MSC: 03A05
We introduce a `meaningful' (i.e. not only formal) language L, the use and the semantic of their sentences are determined by
2506 `external facts' on the one hand and rules of assertion on the other. To reduce the problem of beginning reasoning, we stipulate
(wird in 01.04.2007 Zahn, Peter Eine pragmatische Rechtfertigung certain rules to restrict assertions, and we also agree that assertions of sentences of L must not be restricted besides. By
neuem Tab des klassischen Argumentierens liberalizing the resulting use of assertions we establish a `classical game' of assertion which permits to apply classical logic an
geöffnet) serves essential purposes of reasoning most favorably. At the end of this paper we analyze the meaning of general conditionals that
may be applied like rules of inference. To this end, we consider temporal details of assertion. By the way, we obtain a rule-logical
approach to intuitionistic logic, and an approach to deontic logic, too.
2505 An, Jinpeng An implicit function theorem for MSC: 22E65; 57N2
(wird in 03.04.2007 Neeb, Banach spaces and some We prove a generalized implicit function theorem for Banach spaces, without the usual assumption that the subspaces involved being
neuem Tab Karl-Hermann applications complemented. Then we apply it to the problem of parametrization of fibers of differentiable maps, the Lie subgroup problem for
geöffnet) Banach--Lie groups, as well as Weil's local rigidity for homomorphisms from finitely generated groups to Banach--Lie groups.
The aim of this paper is to approach a Semantology of Music which is understood as the theory and methodology of musical semantic
Wille, structures. The analysis of music structures is based on a threefold semantics which is performed on the musical level, the abstract
2504 12.04.2007 Rudolf Towards a Semantology of Music philosophic-logical level, and the hypothetical mathematical level. Basic music structures are discussed by examples, in particular:
tone systems, chords, harmonies, scales, modulations, musical time flow, and music forms. A specific concern of this paper is to
clarify how a Semantology of Music may support the understanding of music.
MSC: 22E65; 22E67; 22E15; 22E30
We study Lie group structures on groups of the form ${\sst C^\infty(M,K)}$, where ${\sst M}$ is a non-compact smooth manifold and $
2503 Neeb, {\sst K}$ is a, possibly infinite-dimensional, Lie group. First we prove that there is at most one Lie group structure with Lie
(wird in Karl-Hermann Lie group structures on groups algebra ${\sst C^\infty(M,\k)}$ for which the evaluation map is smooth. We then prove the existence of such a structure if the
neuem Tab 14.03.2007 Friedrich of smooth and holomorphic maps universal cover of ${\sst K}$ is diffeomorphic to a locally convex spa ce and if the image of the left logarithmic derivative in ${\
geöffnet) Wagemann on non-compact manifolds sst \Omega^1(M,\k)}$ is a smooth submanifold, the latter being the case in particular if ${\sst M}$ is one-dimensional. We also
obtain analogs of these results for the group ${\sst {\cal O}(M,K)}$ of holomorphic maps on a complex manifold with values in a
complex Lie group ${\sst K}$. We further show that there exists a natural Lie group structure on ${\sst {\cal O}(M,K)}$ if ${\sst K}
$ is Banach and ${\sst M}$ is a non-compact complex curve with finitely generated fundamental group.
MSC: 74A35; 74A30; 74C05; 74C10
We propose a model of finite strain gradient plasticity including phenomenological Prager-Ziegler type linear kinematical hardening
and nonlocal kinematical hardening due to dislocation interaction. Based on the multiplicative decomposition a thermodynamically
admissible flow rule for $F_p$ is described involving as plastic gradient $\Curl F_p$. The formulation is covariant w.r.t.
Neff, Notes on strain gradient superposed rigid rotations of the reference, intermediate and spatial configuration but the model is not spin-free due to the
2502 Patrizio plasticity: Finite strain nonlocal dislocation interaction and cannot be reduced to a dependence on $C_p$. The linearization leads to a thermodynamically
(wird in 19.03.2007 Chelminski, covariant modelling and global admissible model of infinitesimal plasticity involving only the $\Curl$ of the non-symmetric plastic variable $p$. Linearized
neuem Tab Krzysztof existence in the infinitesimal spatial and material covariance under constant infinitesimal rotations is satisfied. Uniqueness of strong solutions of the
geöffnet) Alber, rate-independent case. infinitesimal model is obtained if two non-classical boundary conditions on the non-symmetric small strain plastic variable $p$ are
Hans-Dieter introduced: $\dot{p}.\tau=0$ on the microscopically hard boundary $\Gamma_D\subset\partial\Omega$ and $[\Curl p].\tau=0$ on the
microscopically free boundary $\partial\Omega\setminus\Gamma_D$, where $\tau$ are the tangential vectors at the boundary $\partial\
Omega$. Moreover, we show that a weak reformulation of the infinitesimal model allows for a global in-time solution of the
corresponding rate-independent initial boundary value problem. The method of choice are a formulation as a variational inequality
with symmetric and coercive bilinear form. Use is made of new Hilbert-space suitable for dislocation density dependent plasticity.
Eklund, MSC: 03B42
Peter Semantology as Basis for Semantology has been introduced as the theory of semantic structures and their connections which, in particular, covers the
2501 06.03.2007 Wille, Conceptual Knowledge Processing methodology of activating semantic structures for representing conceptual knowledge. It it the main aim of this paper to explain and
Rudolf demonstrate that semantic structures are in fact basic for conceptual knowledge processing which comprises activities such as
representing, infering, acquiring and communicating conceptual knowledge.
MSC: 74K15; 74K20; 74G65
2500 Weinberg, A geometrically exact thin We investigate a geometrically exact membrane model with respect to its capabilities in describing buckling and wrinkling. Contrary
(wird in Kerstin membrane model -investigation of to more classical tension-field or relaxed approaches, our model is able to capture the detailed geometry of wrinkling while the
neuem Tab 26.02.2007 Neff, large deformations and balance of force equations remains elliptic throughout. This is achieved by introducing artficial viscosity related to the movement
geöffnet) Patrizio wrinkling. of an adjusted orthonormal frame (rotations) given by a local evolution equation. We discuss the consistent linearization of the
model and investigate the efficiency of the local update of rotations. Numerical examples are presented that demonstrate the
effectiveness of the new model for predicting wrinkles in membranes undergoing large deformation.
MSC: 35Q30; 76D05; 35D05
Farwig, Consider a weak solution $u$ of the Navier-Stokes equations for a general domain $\Omega \subset R^3$ on the time interval $[0,\
2499 Reinhard Local space-time regularity infty)$ and a parabolic cylinder $Q_r =Q_r(t_0,x_0) \subset (0,\infty) \times \Omega$ with $r>0$, $t_0 \in (0,\infty)$, $x_0 \in \
(wird in 23.02.2007 Kozono, criteria for weak solutions of Omega$. Then we show that there exists an absolute constant $\varepsilon_*>0$ such that the local condition $\|u \|_{L^q(Q_r)} \leq
neuem Tab Hideo the Navier-Stokes equations \varepsilon_* ¸ r^{\frac{2}{q} + \frac{3}{q}-1}, \frac{2}{q} + \frac{3}{q} \leq 1 + \frac{1}{4}$, implies the regularity of $u$ in
geöffnet) Sohr, beyond Serrin's condition the smaller cylinder $Q_{r/2}$. The special case $\frac{2}{q} + \frac{3}{q}=1$ yields the well-known local Serrin condition $\| u \|
Hermann _{L^q(Q_r)} \leq varepsilon_*$. Thus our criterion extends Serrin's condition admitting smaller exponents $q$ and replacing the
barrier $1$ by $1+\frac{1}{4}$.
MSC: 76D05; 35Q30
Farwig, It is known that the Stokes operator is not well-defined in $L^q$-spaces for certain unbounded smooth domains unless $q=2$. In this
2498 Reinhard paper, we generalize a new approach to the Stokes resolvent problem and to maximal regularity in general unbounded smooth domains
(wird in 20.02.2007 Kozono, On the Stokes Operator in from the three-dimensional case, see R. Farwig, H. Kozono, H. Sohr, {\it An $L^q$--approach to Stokes and Navier-Stokes equations in
neuem Tab Hideo General Unbounded Domains general domains,} Acta Math. 195, 21--53 (2005), to the $n$-dimensional one, $n \geq 2,$ replacing the space $L^q, 1 < q < \infty,$
geöffnet) Sohr, by $\tilde L^q$ where $\tilde L^q = L^q \cap L^2$ for $q \geq 2$ and $\tilde L^q = L^q + L^2$ for $1 < q < 2.$ In particular, we
Hermann show that the Stokes operator is well-defined in $\tilde L^q$ for every unbounded domain of uniform $C^{1,1}$-type in $R^n, n \geq
2$, satisfies the classical resolvent estimate, generates an analytic semigroup and has maximal regularity.
2497 MSC: 46G05; 46B22; 28B05
(wird in An Appropriate Geometric We introduce the embedded Weingarten map as a geometric invariant of piecewise smooth surfaces. It is given by a $(3\times 3)
neuem Tab 16.02.2007 Reif, Ulrich Invariant for the $C^2$-Analysis $-matrix and provides complete curvature information in a continuous way. Thus, it is the appropriate tool for the $C^2$-analysis of
geöffnet) of Subdivision Surfaces subdivision surfaces near extraordinary points. We derive asymptotic expansions and show that the convergence of the sequence of
embedded Weingarten maps to a constant limit is necessary and sufficient for curvature continuity.
2496 On the Dirichlet problem for the We study and solve the Dirichlet problem for graphs of prescribed mean curvature $H$ in $\mathbb R^{n+1}$ over general domains $\
(wird in 13.02.2007 Bergner, prescribed mean curvature Omega$ without requiring a mean convexity assumption. By using pieces of nodoids as barriers we first give sufficient conditions for
neuem Tab Matthias equation over nonconvex domains the solvability in case of zero boundary values. Applying a result by Schulz and Williams we can then also solve the Dirichlet
geöffnet) problem for boundary values satisfying a Lipschitz condition.
(wird in 08.02.2007 Bergner, A simple proof for Brouwer's Using only basic tools from calculus, we give a relatively simple proof for Brouwer's fixed point theorem.
neuem Tab Matthias fixed point theorem
MSC: 47G30; 35J10; 35P05; 35S05; 47A53; 47N20
The main aim of this paper is to study the relations between the location of the essential spectrum and the exponential decay of
eigenfunctions of pseudodifferential operators on $L^p(\sR^n)$ perturbed by singular potentials. For a solution of this problem we
Essential spectra of apply the limit operators method. This method associates with each band-dominated operator $A$ a family $op (A)$ of so-called limit
2494 Vladimir S. pseudodifferential operators and operators which reflect the properties of $A$ at infinity. Consider the compactification of $\sR^n$ by the „infinitely distant“
(wird in 19.01.2007 Rabinovich, exponential decay of their sphere $S^{n-1}$. Then the set $op (A)$ can be written as the union of its components $op_{\eta_\omega} (A)$ where $\omega$ runs
neuem Tab Steffen Roch solutions. Applications to through the points of $S^{n-1}$ and where $op_{\eta_\omega} (A)$ collects all limit operators of $A$ which reflect the properties of
geöffnet) Schrödinger operators $A$ if one tends to infinity „in the direction of $\omega$. Set $sp_{\eta_\omega} A := \cup_{A_h \in op_{\eta_\omega} (A)} sp ¸ A_
{h}$. We show that “the distance" of an eigenvalue $\lambda \notin sp_{ess} A$ to $sp_{\eta_\omega} A$ determines the exponential
decay of the $\lambda$-eigenfunctions of $A$ in the direction of $\omega$. We apply these results to estimate the exponential decay
of eigenfunctions of electromagnetic Schrödinger operators for a large class of electric potentials, in particular, for
multiparticle Schrödinger operators and periodic Schrödinger operators perturbed by slowly oscillating at infinity potentials.
MSC: 06A; 06B
Wille, The Basic Theorem on Labelled This paper offers a mathematical analysis of labelled line diagrams of finite concept lattices to gain a better understanding of
2493 17.01.2007 Rudolf Line Diagrams of Finite Concept those diagrams. The main result is the Basic Theorem on Labelled Line Diagrams of Finite Concept Lattices. This Theorem can be
Lattices applied to justify, for instance, the training tool 'CAPESSISMUS – A Game of Conceiving Concepts' which has been created to support
the understanding and the drawing of appropriate line diagrams of finite concept lattices.
2492 Applications of hypocontinuous MSC: 26E15; 26E20; 17B63; 22E65; 46A32; 46G20; 46T25
(wird in Glöckner, bilinear maps in Paradigms of bilinear maps $\beta$ between locally convex spaces (like evaluation or composition) are not continuous, but merely
neuem Tab 14.01.2007 Helge infinite-dimensional hypocontinuous. We describe situations where, nonetheless, compositions of $\beta$ with Keller $C^n_c$-maps (on suitable domains)
geöffnet) differential calculus are $C^n_c$. Our main applications concern holomorphic families of operators, and the foundations of locally convex Poisson vector
MSC: 35Q72; 35M20; 35Q72
2491 Alber, Solutions to a Model for Existence of weak solutions is proved for a phase field model describing an interface in an elastically deformable solid, which
(wird in Hans-Dieter Interface Motion by Solutions to moves by diffusion of atoms along the interface. The volume of the different regions separated by the interface is conserved, since
neuem Tab 11.01.2007 Zhu, a Model for Interface Motion by no exchange of atoms across the interface occurs. The diffusion is only driven by reduction of the bulk free energy. The evolution
geöffnet) Peicheng Interface Diffusion of the order parameter in this model is governed by a degenerate parabolic fourth order equation. If a regularizing parameter in
this equation tends to zero, then solutions tend to solutions of a sharp interface model for interface diffusion. The existence
proof is valid only for a $1\frac{1}{2}$--dimensional situation.
Farwig, MSC: 76D05; 35Q30
2490 Reinhard We study time-periodic Oseen flows past a rotating body in $R^3$ proving weighted \it{a priori} estimates in $L^q$-spaces using
(wird in Krbec, A Weighted $L^q$-Approach to Muckenhoupt weights. After a time-dependent change of coordinates the problem is reduced to a stationary Oseen equation with the
neuem Tab 09.01.2007 Miroslav Oseen Flow Around a Rotating additional terms $(\omega\times x)\cdot\nabla u$ and $-\omega \wedge u$ in the equation of momentum where $\omega$ denotes the
geöffnet) Necasova, Body angular velocity. Due to the asymmetry of Oseen flow and to describe its wake we use anisotropic Muckenhoupt weights, a weighted
Sarka theory of Littlewood-Paley decomposition and of maximal operators as well as one-sided univariate weights, one-sided maximal
operators and a new version of Jones' factorization theorem.
MSC: 46G20; 26E05; 26E15; 26E20; 46T25
For each positive integer $k$, we describe a map $f$ from the complex plane to a suitable non-complete complex locally convex space
2489 Instructive examples of smooth, such that $f$ is $k$ times continuously complex differentiable but not $k+1$ times, and hence not complex analytic. As a
(wird in Glöckner, complex differentiable and preliminary, we prove that the sequences $(n^kz^n)_n$ are linearly independent in the space of complex sequences, for $k$ ranging
neuem Tab 07.01.2007 Helge complex analytic mappings into through the integers and $z$ through the set of non-zero complex numbers. We also describe a complex analytic map from $l^1$ to a
geöffnet) locally convex spaces suitable complete complex locally convex space which is unbounded on each non-empty open subset of $l^1$. Furthermore, we present a
smooth map from the real line to a non-complete locally convex space which is not real analytic although it is given locally by its
Taylor series around each point. As a byproduct, we find that free locally convex spaces over subsets of the complex plane with
non-empty interior are not Mackey complete.
Nummer Datum Autor Titel Abstract/MSC
MSC: 47A53; 46N40; 47B36; 65J10
V. S. The finite sections approach to In a previous paper, two of the authors together with J. Roe derived an index formula which expresses the Fredholm index of a
2488 20.12.2006 Rabinovich, the index formula for band-dominated operator on $l^2(\sZ)$ in terms of local indices of its limit operators. The proof makes thoroughly use of
S. Roch, B. band-dominated operators $K$-theory for $C^*$-algebras (which, of course, appears as a natural approach to index problems). The purpose of this short note
Silbermann is to develop a completely different approach to the index formula for band-dominated operators which is exclusively based on ideas
and results from asymptotic numerical analysis.
The Dirichlet problem for graph MSC: 53A10
2487 14.12.2006 Matthias of prescribed anisotropic mean We consider the Dirichlet problem for graphs of prescribed mean curvature in $\mathbb R^{n+1}$ where the prescribed mean curvature
Bergner curvature in $\mathbb R^{n+1}$ function $H=H(X,N)$ may depend on the point $X$ in space and on the normal $N$ of the graph as well. In some special cases this
Dirichlet problem arises as the Euler equation of a generalised nonparametric area funcional.
Rabinovich, MSC: 47N40; 47L40; 65J10
2486 Vladimir In an earlier paper we showed that the sequence of the finite sections $P_nAP_n$ of a band-dominated operator $A$ on $l^p(\sZ)$ is
(wird in 13.12.2006 Roch, On finite sections of stable if and only if the operator $A$ is invertible, every limit operator of the sequence $(P_n A P_n)$ is invertible, and if the
neuem Tab Steffen band-dominated operators norms of the inverses of the limit operators are uniformly bounded. The purpose of this short note is to show that the uniform
geöffnet) Silbermann, boundedness condition is redundant.
2485 Floater, MSC: 65D30; 65B05
(wird in Michael Extrapolation Methods for A well-known method of estimating the length of a parametric curve in $R^d$ is to sample some points from it and compute the length
neuem Tab 15.11.2006 Rasmussen, Approximating Arc Length and of the polygon passing through them. In this paper we show that for uniform sampling of regular smooth curves Richardson
geöffnet) Atgeirr Surface Area extrapolation can be applied repeatedly giving a sequence of derivative-free length estimates of arbitrarily high orders of
Reif, Ulrich accuracy. Further, a similar result is derived for the approximation of the area of parametric surfaces.
MSC: 35Q35; 35P99; 76D07
2484 Farwig, We present the description of the spectrum of a linear perturbed Oseen--type operator which arises from equations of motion of a
(wird in Reinhard On the Spectrum of an Oseen--Type viscous incompressible fluid in the exterior of a rotating compact body. Considering the operator in the function space $L^2_{\
neuem Tab 30.11.2006 Neustupa, Operator Arising from Flow past a sigma}(\Omega)$ we prove that the essential spectrum consists of an infinite set of overlapping parabolic regions in the left
geöffnet) Jiri Rotating Body half--plane of the complex plane. Our approach is based on a reduction to invariant closed subspaces of $L^2_{\sigma}(\Omega)$ and
on a Fourier series expansion with respect to an angular variable in a cylindrical coordinate system attached to the axis of
2483 Hofmann, MSC: 22E67; 22D05
(wird in 14.11.2006 Karl H. Iwasawa's Local Splitting Theorem If the nilradical of the Lie algebra of a pro-Lie group $G$ is finite dimensional modulo the center, then every identity
neuem Tab Morris, for Pro-Lie Groups neighborhood $U$ of $G$ contains a closed normal subgroup $N$ such that $G/N$ is a Lie group and $G$ and $N\times G/N$ are locally
geöffnet) Sidney A. isomorphic.
MSC: 60H05; 60H07; 60G15
A. In this paper we consider a $n$-dimensional stochastic differential equation driven by a fractional Brownian motion with Hurst
Neuenkirch Trees and asymptotic developments parameter $H>1/3$. After solving this equation in a rather elementary way, following the approach of M. Gubinelli (2004), we show
2482 14.11.2006 I. Nourdin for fractional stochastic how to obtain an expansion for $E[f(X_t)]$ in terms of $t$, where $X$ denotes the solution to the SDE and $f:\R^n\to\R$ is a
A. Rößler differential equations regular function. With respect to F. Baudoin and L. Coutin (2006), where the same kind of problem is considered, we try an
S. Tindel improvement in three different directions: we are able to take a drift into account in the equation, we parametrize our expansion
with trees (which makes it easier to use), and we obtain a sharp control of the remainder.
MSC: 22D05; 20G25; 22D45; 22E15; 22E35
To each totally disconnected, locally compact topological group $G$ and each group $A$ of automorphisms of $G$, a pseudo-metric
space $\partial A$ of «directions» has been associated by U. Baumgartner and the second author. Given a Lie group $G$ over a local
2481 Glöckner, Directions of automorphisms of field, it is a natural idea to try to define a map $\Phi\colon \partial Aut_{C^\omega}(G)\to \partial Aut(L(G))$, $\partial \alpha\
(wird in 05.11.2006 Helge Lie groups over local fields mapsto \partial L(\alpha)$ which takes the direction of an analytic automorphism of $G$ to the direction of the associated Lie
neuem Tab Willis, compared to the directions of Lie algebra automorphism. We show that, in general, $\Phi$ is not well defined. Also, it may happen that $\partial L(\alpha)=\partial L
geöffnet) George A. algebra automorphisms (\beta)$ although $\partial \alpha\not=\partial\beta$. However, such pathologies are absent for a large class of groups: we show
that $\Phi\colon \partial Inn(G)\to \partial Aut(L(G))$ is a well-defined isometric embedding for each generalized Cayley group
$G$. Some counterexamples concerning the existence of small joint tidy subgroups for flat groups of automorphisms are also
2480 Hofmann, MSC: 22A05; 22E65, 46A30
(wird in 01.11.2006 K.H. Open Mapping Theorem for We survey sufficient conditions that force a surjective continuous homomorphism between topological groups to be open. We present
neuem Tab Morris, S. Topological Groups the shortest proof yet of an open mapping theorem between projective limits of finite dimensional Lie groups.
geöffnet) A.
MSC: 65C30; 65L06; 60H35; 60H10
2479 A new class of stochastic Runge--Kutta methods for the weak approximation of the solution of Itô stochastic differential equation
(wird in Rößler, Second Order Runge--Kutta Methods systems with a multi--dimensional Wiener process is introduced. As the main innovation, the number of stages of the methods does
neuem Tab 24.10.2006 Andreas for Itô Stochastic Differential not depend on the dimension of the driving Wiener process and the number of the necessary random variables is reduced considerably.
geöffnet) Equations This reduces the computational effort significantly. Order conditions for the stochastic Runge--Kutta methods assuring weak
convergence with order two are calculated by applying the colored rooted tree analysis due to the author. Further, some
coefficients for explicit second order stochastic Runge--Kutta schemes are presented.
MSC: 65C30; 65L06; 60H35; 60H10
The weak approximation of the solution of a system of Stratonovich stochastic differential equations with a $m$--dimensional Wiener
Rößler, Second Order Runge--Kutta Methods process is studied. Therefore, a new class of stochastic Runge--Kutta methods is introduced. As the main novelty, the number of
2478 24.10.2006 Andreas for Stratonovich Stochastic stages does not depend on the dimension $m$ of the driving Wiener process which reduces the computational effort significantly. The
Differential Equations colored rooted tree analysis due to the author is applied to determine order conditions for the new stochastic Runge--Kutta methods
assuring convergence with order two in the weak sense. Further, some coefficients for second order stochastic Runge--Kutta schemes
are calculated explicitly.
2477 A mixed boundary value problem MSC: 53A10
(wird in 18.10.2006 Bergner, for the prescribed mean curvature We solve a mixed boundary value problem for the nonparametric prescribed mean curvature equation, prescribing continuous Dirichlet
neuem Tab Matthias equation boundary values at some strictly convex boundary part and Neumann zero boundary values at the remaining part of the boundary. We
geöffnet) assume that Dirichlet and Neumann boundary parts are some positive distance away from each other.
MSC: 22E65; 17B65; 22D05
2476 A pro-Lie group is a projective limit of a family of finite-dimensional Lie groups. In this note we show that a pro-Lie group $G$
(wird in Hofmann, K. Pro-Lie groups which are is a Lie group in the sense that its topology is compatible with a smooth manifold structure for which the group operations are
neuem Tab 25.09.2006 H. infinite-dimensional Lie groups smooth if and only if $G$ is locally contractible. We also characterize the corresponding pro-Lie algebras in various ways.
geöffnet) Neeb, K.-H. Furthermore, we characterize those pro-Lie groups which are locally exponential, that is, they are Lie groups with a smooth
exponential function which maps a zero neighborhood in the Lie algebra diffeomorphically onto an open identity neighborhood of the
2475 MSC: 57S25; 53C20
(wird in 07.09.2006 Magata, An Integration Formula for Polar We prove an analogue of Weyl's Integration Formula for compact Lie groups in the context of polar actions. We also show how certain
neuem Tab Frederick Actions classical examples from the literature can be viewed as special cases of our result.
This paper offers an approach of developing an order-theoretic structure theory of one-dimensional continuum structures. The chosen
Formal Concept Analysis of approach is based on continua and their subcontinua as primitive notions. In a first step linear and circular continuum structures
2474 12.09.2006 Rudolf Wille One-Dimensional Continuum are defined as ordered sets and concretized by a real number model. In a second step 'points' are deduced as limits of continua by
Structures methos of Formal Concept Analysis. The continuum structures extended by those points are analysed and represented by an enlarged
real number model. Further research is planned to extend the approach of this paper to higher dimensional continuum structures.
2473 MSC: 26E30; 26E20; 46A16; 46G05; 46S10
(wird in Glöckner, Comparison of some notions of $C^ Various definitions of $C^k$-maps on open subsets of finite-dimensional vector spaces over a complete valued field have been
neuem Tab 01.09.2006 Helge k$-maps in multi-variable proposed in the literature. We show that the $C^k$-maps considered by Schikhof and De Smedt coincide with those of Bertram,
geöffnet) non-archimedian analysis Glöckner and Neeb. By contrast, Ludkovsky's $C^k$-maps need not be $C^k$ in the former sense, at least in positive characteristic.
We also compare various types of Hölder differentiable maps on finite-dimensional and metrizable spaces.
2472 Glöckner, Ultrametric and non-locally MSC: 26E30; 26E15; 26E20; 45T20; 46A16; 46S10
(wird in Helge convex analogues of the general The General Curve Lemma is a tool of Infinite-Dimensional Analysis, which enables refined studies of differentiability properties
neuem Tab 01.09.2006 Ludkovsky, curve lemma of convenient of mappings between real locally convex spaces. In this article, we generalize the General Curve Lemma in two ways: First, we
geöffnet) Sergey V. differential calculus remove the condition of local convexity in the real case. Second, we adapt the lemma to the case of curves in topological vector
spaces over ultrametric fields.
Dintelmann, Strong $L^p$-Solutions to the
2471 Eva Navier-Stokes Flow past Moving MSC: 76D03; 35Q30; 35B30
(wird in 09.08.2006 Geissert, Obstacles: The Case of Several Consider the Navier-Stokes flow past several moving obstacles. It is shown that there exists a unique strong local solution in the
neuem Tab Matthias Obstacles and Time Dependent $L^p$-setting, $1 < p < \infty$. Moreover, it is proved that the strong solution coincides with the known mild solution in the very
geöffnet) Hieber, Velocity weak sense.
Patrizio We present a finite element implementation of a Cosserat elasto-plastic model allowing for non-symmetric stresses and we provide a
2470 Chelminski, A numerical solution method for numerical analysis of the introduced time-incremental algorithm. The model allows the use of standard tools from convex analysis as
(wird in 01.08.2006 Krzysztof an infinitesimal elasto-plastic known from classical Prandtl-Reuss plasticity. We derive the dual stress formulation and show that for vanishing Cosserat couple
neuem Tab Müller, Cosserat model modulus $\mu_c\to 0$ the classical problem with symmetric stresses is approximated. Our numerical results testify to the robustness
geöffnet) Wolfgang of the approximation. Notably, for positive couple modulus $\mu_c>0$ there is no need for a safe-load assumption. For small $\mu_c$
Wieners, the response is numerically indistinguishable from the classical response.
2469 MSC: 53A10; 49Q05
(wird in Bergner, The Dirichlet problem for graphs We consider the Dirichlet problem for two-dimensional graphs of prescribed mean curvature in $\mathbb R^3$ where the prescribed
neuem Tab 16.07.2006 Matthias of prescribed anisotropic mean mean curvature function $H=H(X,N)$ may depend on the point $X$ in space and on the normal $N$ of the graph as well. In special
geöffnet) curvature situations this Dirichlet problem arises as the Euler equation of a generalised nonparametric area funcional. Under certain
smallness conditions we will solve the Dirichlet problem and construct minimizers of the generalized area functional.
MSC: 35Q72; 74A35; 74A30; 74C05; 74C10
2468 Neff, In this article we investigate the regularizing properties of Cosserat elasto-plastic models in a geometrically linear setting. The
(wird in Patrizio Approximation of Prandtl-Reuss models feature an independent microrotation field which allow the Cauchy-stress to become non-symmetric while the contribution of
neuem Tab 17.07.2006 Chelminski, Plasticity through the microrotations itself remains linear elastic. Extending previous work we show that for the large class of all quasistatic
geöffnet) Krzysztof Cosserat-Plasticity models of monotone type, solutions to the problem with microrotations are $\mathbb{H}^1$ well-posed. A similar result does not hold
for the classical case without microrotations. For vanishing Cosserat effects we show also that the model with microrotations
approximates the classical Prandtl-Reuss solution in an appropriate measure valued sense.
2467 MSC: 2000; 68Q99; 54E25
(wird in Approximative Computation and We introduce certain 'computation spaces', neighbourhood spaces, and generalizations of metric spaces (with generalizations of +),
neuem Tab 10.07.2006 Zahn, Peter Generalizations of Metric Spaces and we investigate the relationalship between those spaces. We also present calculus-like methodes to obtain programs to compute
geöffnet) functions on computation spaces and, for such functions, computable moduli of continuity, which are suitable for individual
MSC: 2000; 03A05
2466 We introduce a «meaningful» (i.e. not only formal) language L, the use and the semantic of their sentences are determined by
(wird in 10.07.2006 Zahn, Peter Eine pragmatische Rechtfertigung «external facts» on the one hand and rules of assertion on the other. To reduce the problem of beginning reasoning, we stipulate
neuem Tab des klassischen Argumentierens certain rules to restrict assertions, and we also agree that assertions of sentences of L must not be restricted besides. By
geöffnet) liberalizing the resulting use of assertions we establish a «classical game» of assertion which permits to apply classical logic an
serves essential purposes of reasoning most favorably.
MSC: 76D05; 35Q30; 35B65
Farwig, Let $u$ be a weak solution of the Navier-Stokes equations in a domain $\Omega \subset R^3$ and a time interval $[0,T)$, $0< T \leq
2465 Reinhard Local in time regularity \infty$, with initial value $u_0$, and vanishing external force. As is well known, global regularity of $u$ for general $u_0$ is an
(wird in Kozono, properties of the Navier-Stokes unsolved problem unless we pose additional assumptions on $u_0$ or on the solution $u$ itself such as Serrin's condition $\| u \|_
neuem Tab 19.06.2006 Hideo equations beyond Serrin's {L^s(0,T;L^q(\Omega))} < \infty$ where $\frac{2}{s} + \frac{3}{q} =1$. In the present paper we prove several new local and global
geöffnet) Sohr, condition regularity properties by using assumptions beyond Serrin's condition e.g. as follows: If the norm $\|u\|_{L^r(0,T;L^q(\Omega))}$,
Hermann with Serrin's number $\frac{2}{r} + \frac{3}{q} =1+\alpha$ $(\alpha>0)$ strictly larger than $1$, is sufficiently small, or if $u$
satisfies a {\it local leftward} $L^s(L^q(\Omega))$--condition for every $t\in(0,T)$, where $\frac{2}{s} + \frac{3}{q} =1$, then
$u$ is regular in $(0,T)$. Further results deal with similar regularity conditions based on energy quantities only.
2464 Bergner, On two-dimensional immersions of MSC: 35J60; 53A07; 53A10
(wird in 01.06.2006 Matthias prescribed mean curvature in $\ We consider two-dimensional immersions of disc-type in $\mathbb R^n.$ We focus on well known classical concepts and study the
neuem Tab Froehlich, mathbb R^n$ nonlinear elliptic systems of such mappings. Using an Osserman-type condition we give a-priori estimates of the principle
geöffnet) Steffen curvatures for graphs with prescribed mean curvature fields and derive a theorem of Bernstein type for minimal graphs.
MSC: 22E65; 22E67; 46A13; 46F05; 46T20; 54B30; 54H11; 58B10; 58D05
Let $G$ be a Lie group which is the union of an ascending sequence $G_1 \subseteq G_2 \subseteq \cdots$ of Lie groups (each of
2463 Direct limits of which may be infinite-dimensional). We study the question when $G$ is the direct limit of the $G_n$'s in the category of Lie
(wird in 06.06.2006 Glöckner, infinite-dimensional Lie groups groups, topological groups, smooth manifolds, resp., topological spaces. Full answers are obtained for $G$ the group $Diff_c(M)$ of
neuem Tab Helge compared to direct limits in compactly supported $C^\infty$-diffeomorphisms of a $\sigma$-compact smooth manifold $M$; and for test function groups $C^\infty_c
geöffnet) related categories (M,H)$ of compactly supported smooth maps with values in a finite-dimensional Lie group $H$. We also discuss the cases where $G$ is
a direct limit of unit groups of Banach algebras, a Lie group of germs of Lie group-valued analytic maps, or a weak direct product
of Lie groups.
2462 MSC: 22E65
(wird in 04.06.2006 Glöckner, Direct limit groups do not have We show that countable direct limits of finite-dimensional Lie groups do not have small subgroups. The same conclusion is obtained
neuem Tab Helge small subgroups for suitable direct limits of infinite-dimensional Lie groups.
MSC: 030
Petra Semantology: Basic Methods for In this paper, we introduce the term 'Semantology' for naming the theory of semantic structures and their connections. Semantic
2461 01.06.2006 Gehring Knowledge Respresentations structures are fundamental for representing knowledge which we demonstrate by discussing basic methods of knowledge representation.
Rudolf Wille In this context we discuss why, in the field of knowledge representation, the term 'Semantology' should be given perference to the
term 'Ontology'.
MSC: 65C30; 60H35; 65C20; 68U20
Debrabant, Classification of Stochastic In the present paper, a class of stochastic Runge--Kutta methods for weak approximation of Itô stochastic differential equation
2460 23.05.2006 Kristian Runge--Kutta Methods for the Weak systems with a multi--dimensional Wiener process is considered. Order one and order two conditions for the coefficients of explicit
Rößler, Approximation of Stochastic stochastic Runge--Kutta methods are solved and the solution space of all possible coefficients is analyzed. A full classification
Andreas Differential Equations of the coefficients for such stochastic Runge--Kutta schemes of order one and two as well as coefficients for optimal schemes are
MSC: 22E65; 22E15
2459 In this survey, we report on the state of the art of some of the fundamental problems in the Lie theory of Lie groups modeled on
(wird in Neeb, Towards a Lie theory of locally locally convex spaces, such as integrability of Lie algebras, integrability of Lie subalgebras to Lie subgroups, and integrability
neuem Tab 23.05.2006 Karl-Hermann convex groups of Lie algebra extensions to Lie group extensions. We further describe how regularity or local exponentiality of a Lie group can be
geöffnet) used to obtain quite satisfying answers to some of the fundamental problems. These results are illustrated by specialization to
some specific classes of Lie groups, such as direct limit groups, linear Lie groups, groups of smooth maps and groups of
2458 MSC: 74Q15; 64C05; 74D10; 35J25; 34G20; 47H04; 47H05
(wird in 13.04.2006 Nesenenko, Homogenization in viscoplasticity In this work we present the justification of the formally derived homogenized problem for the quasistatic initial boundary value
neuem Tab Sergiy problem with internal variables, which models the deformation behavior of viscoplastic materials with a periodic microstructure.
MSC: 46L05; 43A10; 43A65; 46L60; 22E65
The concept of a host algebra generalises that of a group $C^*$-algebra to groups which are not locally compact in the sense that
2457 Grundling, its non-degenerate representations are in one-to-one correspondence with representations of the group under consideration. Here we
(wird in 16.06.2006 Hendrik Abelian topological groups with consider the question of the existence of host algebras for abelian topological groups and also for multiplier representations. Our
neuem Tab Neeb, host algebras main negative result is essentially that a topological abelian group has a full host algebra (covering all its continuous unitary
geöffnet) Karl-Hermann representations) if and only if it embeds into a locally compact group. On the positive side, we show that the canonical symplectic
form on a countably dimensional complex vector space leads to an abelian group with multiplier for which a full host algebra
exists. This provides a host algebra for the set of regular representations of the CCR algebra.
2456 Froehlich, MSC: 53J60; 53A10; 49Q05
(wird in Steffen Curvature estimates for graphs We consider graphs $\Sigma^n \subset \R^m$ with prescribed mean curvature and flat normal bundle. Using techniques of Schoen, Simon
neuem Tab 11.04.2006 Winklmann, with prescribed mean curvature and Yau, and Ecker-Huisken, we derive the interior curvature estimate $$\sup_{\Sigma \cap B_R} |A|^2 \leq \frac{C}{R^2}$$ up to
geöffnet) Sven and flat normal bundle dimension $n \leq 5$, where $C$ is a constant depending on natural geometric data of $\Sigma$ only. This generalizes previous
results of Smoczyk, Wang and Xin, and Wang for minimal graphs with flat normal bundle.
2455 Neff,
(wird in 01.04.2006 Patrizio Curl bounds Grad on ${\rm SO(3)}$ MSC: 74A35; 74E15; 74G65; 74N15
neuem Tab Muench, Ingo We show that the operator Curl is isomorphic to the operator Grad on ${\rm SO(3)}$
2454 Glöckner, Classification of the simple MSC: 22D05; 20E15; 20E36
(wird in 04.04.2006 Helge factors appearing in composition Let $G$ be a totally disconnected, locally compact group admitting a contractive automorphism $\alpha$. We prove a Jordan-Hölder
neuem Tab Willis, series of totally disconnected theorem for series of $\alpha$-stable closed subgroups of $G$, classify all possible composition factors and deduce consequences
geöffnet) George A. contraction groups for the structure of $G$.
MSC: 60G55; 62G30; 60Jxx
2453 03.04.2006 Niese, A generalized order statistic In this article we generalize the uniform order statistic property of mixed Poisson processes by the use of a wider model of
Birgit property ordered random variables. The corresponding point processes will be characterized. We deduce alternative representations of their
MSC: 35Q30; 35D05; 76D07; 35J25
2452 Very Weak Solutions to the Stokes We investigate very weak solutions the stationary Stokes- and Stokes resolvent problem in function spaces with Muckenhoupt weights.
(wird in 30.03.2006 Krohne, and Stokes-Resolvent Problem in The notion used here is similar but even more general than the one used in \cite{ama1} or \cite{gss}. Consequently the class of
neuem Tab Katrin Weighted Function Spaces solutions is enlarged. To describe boundary conditions we restrict ourselves to more regular data. We introduce a Banach space
geöffnet) admitting a restriction operator and containing the solutions according to such data. As a preparation we prove a weighted analogue
to Bogowski's Theorem and extension theorems for functions defined on the boundary.
2451 Stationary Stokes- and MSC: 35Q30; 35D05; 76D05; 35J65
(wird in Krohne, Navier-Stokes Equations with Low We investigate the stationary Navier-Stokes equations in Spaces with Muckenhoupt weights. The aim is to find a class of solutions
neuem Tab 30.03.2006 Katrin Regularity Data in Weighted as large as possible. We join the notation of very weak solutions in [1] and [10]. When estimating the nonlinear term the weighted
geöffnet) Bessel-Potential Spaces context causes difficulties. For this reason we consider solutions in weighted Bessel-potential spaces. Thus using complex
interpolation we establish a theory of solutions to the Stokes equations in weighted Bessel-potential spaces.
2450 Gramlich,
(wird in Ralf Odd-dimensional orthogonal groups In the first part, a characterization of central quotients of the group $\Spin(2n+1,q)$ is given for $n \geq 3$ and all odd prime
neuem Tab 01.03.2006 Horn, Max as amalgams of unitary groups. \\ powers $q$, with the exception of the cases $n=3$, $q\in{3,5,7,9}$. The present article treats these cases computationally, thus
geöffnet) Nickel, Part 2: machine computations completing the Phan-type theorem for the group $\Spin(2n+1,q)$.
Bennett, We extend the Phan theory described in previous articles to the last remaining infinite series of classical Chevalley groups over
Curt finite fields. Namely, we prove that the twin buildings for the group $\Spin(2n+1,q^2)$, $q$ odd, admit a unique unitary flip and
2449 Gramlich, Odd-dimensional orthogonal groups that the corresponding flipflop geometry is simply connected for almost all finite fields $\Fqsq$. Applying standard methods from
(wird in 01.03.2006 Ralf as amalgams of unitary groups. \\ amalgam theory, this results in a characterization of central quotients of the group $\Spin(2n+1,q)$ by a Phan system of rank one
neuem Tab Hoffman, Part 1: general simple and rank two subgroups. In the present first part of a series of two articles we present simple connectedness results for
geöffnet) Corneliu connectedness sufficiently large fields or sufficiently large rank. To be precise, the result stated in the present paper is proved for all cases
Shpectorov but $n=3$ and $q \in {3, 5, 7, 9}$, the remaining cases are dealt with in the sequel \cite{Part2} computationally.
2448 Gramlich,
(wird in Ralf The complete Phan-type theorem Previous articles give a characterization of central quotients of the group $\mathrm{Sp}(2n,q)$ for $n \geq 3$ and all prime powers
neuem Tab 01.03.2006 Horn, Max for $\mathrm{Sp}(2n,q)$ $q$ up to some small cases that are left open. The present article fills in this gap, thus providing the definitive version of the
geöffnet) Nickel, Phan-type theorem for $\Sp(2n,q)$.
2447 Ri Existence and Exponential MSC: 35Q30; 76D05; 76D07; 76E99; 35B35
(wird in Myong-Hwan Stability in $L^r$-spaces of We prove existence, uniqueness and exponential stability of stationary Navier-Stokes flows with prescribed flux in an unbounded
neuem Tab 16.03.2006 Farwig, Stationary Navier-Stokes Flows cylinder of $R^n, n\geq 3,$ with several exits to infinity provided the total flux and external force are sufficiently small. The
geöffnet) Reinhard with Prescribed Flux in Infinite proofs are based on analytic semigroup theory, perturbation theory and $L^r-L^q$-estimates of a perturbation of the Stokes operator
Cylindrical Domains in $L^q$-spaces.
2446 Roch, Szegö limit theorems for MSC: 47B36; 47A75; 47B35
(wird in 01.03.2006 Steffen operators with almost periodic The classical Szegö theorems study the asymptotic behaviour of the determinants of the finite sections $P_n T(a) P_n$ of Toeplitz
neuem Tab Silbermann, diagonals operators, i.e., of operators which have constant entries along each diagonal. We generalize these results to operators which have
geöffnet) Bernd almost periodic functions on their diagonals.
MSC: 35Q30; 76D05
Consider a viscous incompressible fluid filling the whole 3-dimensional space exterior to a rotating body with constant angular
2445 Farwig, velocity $\omega$. By using a coordinate system attached to the body, the problem is reduced to an equivalent one in a fixed
(wird in 02.03.2006 Reinhard Stationary Navier-Stokes flow exterior domain. The reduced equation involves the crucial drift operator $(\omega\wedge x)\cdot\nabla$, which is not subordinate
neuem Tab Hishida, around a rotating obstacle to the usual Stokes operator. This paper addresses stationary flows to the reduced problem with an external force $f=\mbox{div $F$}
geöffnet) Toshiaki $, that is, time-periodic flows to the original one. Generalizing previous results of G. P. Galdi we show the existence of a unique
solution $(\nabla u,p)$ in the class $L_{3/2,\infty}$ when both $F\in L_{3/2,\infty}$ and $\omega$ are small enough; here $L_{3/2,\
infty}$ is the weak-$L_{3/2}$ space.
Kristian Continuous Extension of MSC: 65C30; 60H35; 65C20; 68U20
Debrabant Stochastic Runge--Kutta Methods A continuous extension of the class of stochastic Runge--Kutta methods for weak approximation is introduced. Order conditions for
2444 25.02.2006 and Andreas for the Weak Approximation of continuous Runge--Kutta schemes of weak order one and two for the approximation of Itô stochastic differential equations with
Rößler SDEs respect to a multi--dimensional Wiener process are stated. Further, a full classification of the coefficients for continuous
stochastic Runge--Kutta schemes of order one and two is calculated and coefficients for optimal schemes are presented.
MSC: 81Q10; 47B36; 46N50
The paper is devoted to the study of the essential spectrum of discrete Schrödinger operators on the lattice $\mathbb{Z}^{N}$ by
means of the limit operators method. This method has been applied by one of the authors to describe the essential spectrum of
Vladimir S. (continuous) electromagnetic Schrödinger operators, square-root Klein-Gordon operators, and Dirac operators under quite weak
2443 27.02.2006 Rabinovich, The essential spectrum of assumptions on the behavior of the magnetic and electric potential at infinity. The present paper is aimed to illustrate the
Steffen Roch Schrödinger operators on lattices applicability and effectivity of the limit operators method to discrete problems as well. We consider the following classes of the
discrete Schrödinger operators: 1) operators with slowly oscillating at infinity potentials, 2) operators with periodic and
semi-periodic potentials; 3) Schrödinger operators which are discrete quantum analogs of the acoustic propagators for waveguides;
4) operators with potentials having an infinite set of discontinuities; and 5) three-particle Schrödinger operators which describe
the motion of two particles around a heavy nuclei on the lattice $\mathbb{Z}^3$.
2442 Müller, Equivalences of Smooth and MSC: 22E65; 55R10; 57R10
(wird in Christoph Continuous Principal Bundles with This paper is on the equivalence of continuous and smooth principal bundles. Throughout the text, let K be a a Lie group, modeled
neuem Tab 22.02.2006 Wockel, Infinite-Dimensional Structure on a locally convex space, and M be a finite-dimensional paracompact manifold with corners. We show that each continuous principal
geöffnet) Christoph Group K-bundle over M is continuously equivalent to a smooth one and that two smooth principal K-bundles over M which are continuously
equivalent are also smoothly equivalent. In the concluding section, we relate our results to neighboring topics.
2441 MSC: 26E30; 22E65; 26E15; 37D10; 46S10; 47H10; 58C15; 58C20; 58D05
(wird in 19.02.2006 Glöckner, Aspects of $p$-Adic Non-Linear The article provides an introduction to infinite-dimensional differential calculus over topological fields and surveys some of its
neuem Tab Helge Functional Analysis applications, notably in the areas of infinite-dimensional Lie groups and dynamical systems.
MSC: 58C15; 26E15; 26E30; 46S10; 47H10; 58C20
2440 Finite order differentiability We prove an implicit function theorem for $C^k$-maps from arbitrary topological vector spaces over valued fields to Banach spaces
(wird in 29.02.2006 Glöckner, properties, fixed points and (for $k \geq 2$). As a tool, we show the $C^k$-dependence of fixed points on parameters for suitable families of contractions of a
neuem Tab Helge implicit functions over valued Banach space. Similar results are obtained for $k$ times strictly differentiable maps, and for $k$ times Lipschitz differentiable
geöffnet) fields maps. In the real case, our results subsume an implicit function theorem for Keller $C^k_c$-maps from arbitrary topological vector
spaces to Banach spaces.
2439 Fundamental Problems in the MSC: 22E65
(wird in 19.02.2006 Glöckner, Theory of Infinite-Dimensional In a preprint from 1982, John Milnor formulated various fundamental questions concerning infinite-dimensional Lie groups. In this
neuem Tab Helge Lie Groups note, we describe some of the answers (and partial answers) obtained in the preceding years.
MSC: 11A25; 46H30
Glöckner, In the complex algebra $A$ of arithmetic functions $g: N \to C$, endowed with the usual pointwise linear operations and the
2438 Helge Dirichlet convolution, let $g^{*k}$ denote the convolution power $g*\cdots*g$ with $k$ factors $g \in A$. We investigate the
(wird in 19.02.2006 Lucht, Lutz Solutions to Arithmetic solvability of polynomial equations of the form $a_d*g^{*d}+a_{d-1}*g^{*(d-1)}+\cdots+a_1*g+a_0 = 0$ with fixed coefficients
neuem Tab G. Convolution Equations $a_d,a_{d-1},\ldots,a_1,a_0 \in A$. In some cases the solutions have specific properties and can be determined explicitly. We show
geöffnet) Porubský, that the property of the coefficients to belong to convergent Dirichlet series transfers to those solutions $g \in A$, whose values
Stefan $g(1)$ are simple zeros of the polynomial $a_d(1)z^d+a_{d-1}(1)z^{d-1}+\cdots+a_1(1)z+a_0(1)$. We extend this to systems of
convolution equations, which need not be of polynomial type.
MSC: 35Q30; 76D07
2437 Reinhard It is proved that the Stokes operator in $L^q$-space on an infinite cylindrical domain of $R^n,¸n\geq 3,$ with several exits to
(wird in Farwig The Resolvent Problem and $H^\ infinity generates a bounded and exponentially decaying analytic semigroup and admits a bounded $H^\infty$-calculus. For the
neuem Tab 25.02.2006 Ri infty$-calculus of the Stokes resolvent estimates, the Stokes resolvent system with a prescribed divergence in an infinite straight cylinder with bounded
geöffnet) Myong-Hwan Operator in Unbounded Cylinders cross-section $\Sigma$ is studied in $L^q(R;L^r_\omega(\Sigma))$ where $1< q , r<\infty$ and $\omega\in A_r(R^{n-1})$ is an
arbitrary Muckenhoupt weight. The proofs use cut-off techniques and the theory of Schauder decomposition of {\em UMD} spaces based
on ${\cal R}$-boundedness of operator families and on square function estimates involving Muckenhoupt weights.
MSC: 57T20; 57S05; 81R10; 55P62
2436 This paper is on the connecting homomorphism in the long exact homotopy sequence of the evaluation fibration $\tx{ev}_{p_{0}}:C
(wird in Christoph The Samelson Product and Rational (P,K)^{K}\to K$, where $C(P,K)^{K}\cong\Gau(\cP)$ is the gauge group of a continuous principal $K$-bundle $P$ over a closed
neuem Tab 01.02.2006 Wockel Homotopy of Gauge Groups orientable surface or a sphere. We show that in this cases the connecting homomorphism in the corresponding long exact homotopy
geöffnet) sequence is given in terms of the Samelson product. As applications, we exploit this correspondence to get an explicit formula for
$\pi_{2}(\Gau(\cP_{k}))$, where $\cP_{k}$ denotes the principal \mbox{$\bS^{3}$-bundle} over $\bS^{4}$ of Chern number $k$ and
derive explicit formulae for the rational homotopy groups $\pi_{n}(\Gau(\cP))\otimes \Q$.
Lahti, Pekka Noise sequences of infinite
Maczynski, matrices and their applications Noise sequences of infinite matrices associated with covariant phase and box localization observables are defined and determined.
2435 01.01.2006 Maciej J. to the characterization of the The canonical observables are characterized within the relevant classes of observables as those with asymptotically minimal of
Scheffold, canonical phase and box minimal noise, i.e., the noise tending to $0$ or having the value $0$.
Egon localization observables
Ylinen, Kari
MSC: 03B
The offered Methods of Conceptual Knowledge Processing are procedures which are well-planed to mean and purpose and therewith lead
to skills for solving practical tasks. The used means and skills have been mainly created as translations of mathematical means and
2434 17.01.2006 Wille, Methods of Conceptual Knowledge skills of Formal Concept Analysis. Those transdisciplinary translations may be understood as transformations from mathematical
Rudolf Processing thinking, dealing with potential realities, to logical thinking, dealing with actual realities. Each of the 38 presented methods is
discussed in a general language of logical nature, while citations give links to the underlying mathematical background.
Applications of the methods are demonstrated by concrete examples mostly taken from the literature to which explicit references are
MSC: 22E65
These are lecture notes of a course given at a summer school in Monastir in July 2005. The main purpose of this course is to
present some of the main ideas of infinite-dimensional Lie theory and to explain how it differs from the finite-dimensional theory.
2433 In the introductory section, we present some of the main types of infinite-dimensional Lie groups: linear Lie groups, groups of
(wird in Neeb, Monastir Summer School: smooth maps and groups of diffeomorphisms. We then turn in some more detail to manifolds modeled on locally convex spaces and the
neuem Tab 09.01.2006 Karl-Hermann Infinite-Dimensional Lie Groups corresponding calculus (Section II). In Section III, we present some basic Lie theory for locally convex Lie groups. The
geöffnet) Fundamental Theorem for Lie group-valued-functions on manifolds and some of its immediate applications are discussed in Section IV.
For many infinite-dimensional groups, the exponential function behaves worse than for finite-dimensional ones or Banach--Lie
groups. Section V is devoted to the class of locally exponential Lie groups, i.e., those for which the exponential function is a
local diffeomorphism in 0. We conclude these notes with a brief discussion of the integrability problem for locally convex Lie
algebras: When is a locally convex Lie algebra the Lie algebra of a global Lie group?
Burgmann, MSC: 03B; 06D10
2432 04.01.2006 Christian The Basic Theorem on Preconcept Preconcept Lattices are identified to be (up to isomorphism) the completely distributive complete lattices in which the supremum of
Wille, Lattices all atoms is equal or greater than the infimum of all coatoms. This is a consequence of the Basic Theorem on Preconcept Lattices,
Rudolf which also offers means for checking line diagrams of preconcept lattices. | {"url":"https://www.mathematik.tu-darmstadt.de/research/preprint_series/2006_2010/2006_2010.de.jsp","timestamp":"2024-11-12T02:49:54Z","content_type":"text/html","content_length":"234419","record_id":"<urn:uuid:68e73130-6b26-40b2-b0d9-b1569fd37a46>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00840.warc.gz"} |
Natural Response - (Ordinary Differential Equations) - Vocab, Definition, Explanations | Fiveable
Natural Response
from class:
Ordinary Differential Equations
Natural response refers to the behavior of a system when it is allowed to evolve freely without any external influences after being disturbed. This concept is crucial in understanding how systems,
particularly in electric circuits, react to initial conditions like voltage or current changes and eventually settle into a steady state. The natural response helps describe how energy dissipates in
the circuit over time and plays a key role in system stability and transient analysis.
congrats on reading the definition of Natural Response. now let's actually learn it.
5 Must Know Facts For Your Next Test
1. The natural response can be described mathematically by solving homogeneous differential equations that model the circuit's behavior.
2. In electric circuits, the natural response is often characterized by exponential decay, which indicates how voltage or current decreases over time.
3. The time constant of a circuit defines how quickly the natural response occurs; it's determined by resistance and capacitance for RC circuits or by resistance and inductance for RL circuits.
4. For linear circuits, the superposition principle allows us to analyze the natural response separately from any forced response caused by external inputs.
5. Understanding the natural response is essential for designing stable circuits, as it affects performance during switching and other dynamic events.
Review Questions
• How does the natural response differ from the forced response in electric circuits?
□ The natural response describes how a circuit behaves when it is allowed to evolve on its own after an initial disturbance, without any external sources influencing it. In contrast, the forced
response occurs when an external input or source actively drives the circuit. The total response of a circuit is a combination of both responses, and distinguishing between them is essential
for analyzing circuit dynamics and stability.
• Discuss the role of damping in affecting the natural response of an electric circuit.
□ Damping plays a crucial role in determining how quickly and smoothly a system returns to equilibrium after being disturbed. In electric circuits, damping affects the amplitude and rate at
which the natural response decreases over time. A well-damped circuit will exhibit rapid settling without excessive oscillations, while under-damped circuits can oscillate before settling
down. Understanding damping is vital for predicting circuit behavior and ensuring stability.
• Evaluate how knowledge of natural response can impact circuit design and performance under varying conditions.
□ A deep understanding of natural response is fundamental for engineers when designing circuits that must operate reliably under different conditions. By evaluating how voltage and current
naturally decay after disturbances, engineers can predict circuit behavior during transient states, which is crucial for avoiding issues like overshoot or instability. Additionally,
incorporating elements like appropriate damping can enhance performance in applications such as filters or amplifiers, leading to more efficient and reliable electronic devices.
"Natural Response" also found in:
© 2024 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website. | {"url":"https://library.fiveable.me/key-terms/ordinary-differential-equations/natural-response","timestamp":"2024-11-15T00:51:09Z","content_type":"text/html","content_length":"143314","record_id":"<urn:uuid:1aad1b51-03b4-455b-bbef-abffd2f0a986>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00802.warc.gz"} |
Abílio Lucena
A new exact solution algorithm is proposed for the Degree-Constrained Minimum Spanning Tree Problem. The algorithm involves two combined phases. The first one contains a Lagrangian Relax-and-Cut
procedure while the second implements a Branch-and-Cut algorithm. Both phases rely on a standard formulation for the problem, reinforced with Blossom Inequalities. An important feature of the
proposed … Read more | {"url":"https://optimization-online.org/author/abiliolucena/","timestamp":"2024-11-08T01:51:14Z","content_type":"text/html","content_length":"88820","record_id":"<urn:uuid:40c2bf93-67f5-4018-95e8-d62beaf196bd>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00445.warc.gz"} |
Technical Perspective: A New Spin on an Old AlgorithmTechnical Perspective: A New Spin on an Old Algorithm
Communication the cost of moving bits between levels of the memory hierarchy on a single machine or between machines in a network or data center is often a more precious resource than computation.
Although not new, communication-computation trade-offs have received renewed interest in recent years due to architectural trends underlying high-performance computing as well as technological trends
that permit the automatic generation of enormous quantities of data. On the practical side, this has led to multicore processors, libraries such as LAPACK and ScaLAPACK, schemes such as MPI and
MapReduce, and distributed cloud-computing platforms. On the theoretical side, this has motivated a large body of work on new algorithms for old problems under new models of data access.
Into this fray enters the following paper by Ballard, Demmel, Holtz, and Schwartz, which considers a fundamental problem, adopting a new perspective on an old algorithm that has for years occupied a
peculiar place in the theory and practice of matrix algorithms. In doing so, the work highlights how abstract ideas from theoretical computer science (TCS) can lead to useful results in practice, and
it illustrates how bridging the theory-practice gap requires a healthy understanding of the practice.
The basic problem is the multiplication of two n × n matrices. This is a fundamental primitive in numerical linear algebra (NLA), scientific computing, machine learning, and large-scale data
analysis. Clearly, n^2 time is a trivial lower bound that much time is necessary to read the input and write the output. Moreover, at first glance, it seems "obvious" the ubiquitous three-loop
algorithm for multiplying two matrices (given as input two n × n matrices, A and B, for each i, j, k, do: C(i, j)+= A(i, k) * B(k, j)) shows that a constant times n^3 time is needed to solve the
Back in 1969, it was surprising when Strassen presented his by-now well-known algorithm. The basic idea is two 2 × 2 matrices can be multiplied using 7, rather than the usual 8, multiplications.
Since the same idea applies to 2 × 2 block matrices, the natural recursive extension can be used to multiply two n × n matrices in no more than a constant times n^ω arithmetic operations, where ω =
log[2] 7 ≈ 2.808. Over the years, the exponent ω has been whittled down to ω ≈ 2.373, and many conjecture that there exist Strassen-like algorithms with ω = 2.
Strassen's algorithm highlights the distinction, extremely important in TCS, between problems and algorithms; and it demonstrates that non-obvious algorithms can have better running times, in theory
at least, than the obvious algorithm. Although its running time can be better than the usual three-loop algorithm for input matrices larger than ca. 100 × 100, Strassen's algorithm has, for both
technical and non-technical reasons, yet to be widely used in practice.
This paper is part of a larger body of work on minimizing communication in NLA algorithms. Previous work has shown that geometric embedding methods can be used to establish communication lower bounds
for three-loop matrix multiplication algorithms in both shared-memory sequential and distributed-memory parallel models. Basically, the algorithm can be modeled as a computation directed acyclic
graph (CDAG). Due to the three-loop structure of the algorithm, this graph can be embedded into a 3D cube; and from the isoperimetric properties of that embedding a lower bound on communication can
be established. The main result of this paper is a new lower bound on the amount of communication for both sequential and parallel versions of Strassen-like algorithms that is lower than the lower
bound of the usual three-loop algorithm.
Since the geometric embedding methods do not seem to apply to the recursive structure of Strassen-like algorithms, the new lower bound is established by considering the edge expansion of the CDAG of
Strassen's algorithm. Expanders graphs that do not have any good partitions and that do not embed well in any low-dimensional Euclidean space are remarkably useful structures that are ubiquitous
within TCS and almost unknown outside TCS. For readers familiar with expanders, this paper will provide yet another application. For readers not familiar with expanders, this paper should be a
starting point.
Finally, in a stroke that will make practitioners of numerical analysis and data analysis as well as lower bound complexity theorists happy, the authors also show their lower bounds are tight by
providing an optimal algorithm. In the sequential case, this is attained by the standard implementation of Strassen's algorithm; and, in the parallel case, the authors, in joint work with Benjamin
Lipshitz, have developed a novel Communication Avoiding Parallel Strassen algorithm. This latter algorithm communicates asymptotically less than previous three-loop and Strassen-based algorithms; and
its empirical performance exceeds all other known matrix multiplication algorithms, three-loop or Strassen-based, on large parallel machines. Remarkably, this suggests that Strassen's algorithm
should be adopted into existing parallel NLA libraries, providing a great example of how to bridge the theory-practice gap, and suggesting that Strassen's algorithm might still see practical use
ironically, though, due to its better communication properties.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.
No entries found | {"url":"https://acmwebvm01.acm.org/magazines/2014/2/171693-technical-perspective-a-new-spin-on-an-old-algorithm/fulltext","timestamp":"2024-11-11T23:17:57Z","content_type":"text/html","content_length":"28132","record_id":"<urn:uuid:a11583f1-2bc7-4cd1-beef-4da83da63fb0>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00193.warc.gz"} |
Trust-region and other regularisations of linear least-squares problems
We consider methods for regularising the least-squares solution of the linear system Ax = b. In particular, we propose iterative methods for solving large problems in which a trust-region bound ||x||
<= Delta is imposed on the size of the solution, and in which the least value of linear combinations of ||Ax-b||_2^q and a regularisation term ||x||_2^p for various p and q =1,2 is sought. In each
case, one of more ``secular'' equations are derived, and fast Newton-like solution procedures are suggested. The resulting algorithms are available as part of the GALAHAD optimization library.
@article{CartGoulToin09a, author = {C. Cartis and N. I. M. Gould and Ph. L. Toint}, title = {Trust-region and other regularisation of linear least-squares problems}, journal = {BIT}, volume = 49,
number = 1, pages = {21--53}, year = 2009} | {"url":"https://optimization-online.org/2008/02/1900/","timestamp":"2024-11-07T05:55:37Z","content_type":"text/html","content_length":"82794","record_id":"<urn:uuid:1611393b-8669-47ca-ae90-398ca93ba174>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00125.warc.gz"} |
Make 30 Puzzles | Math = Love
This blog post contains Amazon affiliate links. As an Amazon Associate, I earn a small commission from qualifying purchases.
I’ve been having a lot of fun recently posting these Make 30 Puzzles for my students to tackle on a daily basis. The goal of these Make 30 puzzles is to arrange the digits and any of the arithmetic
operations to form an expression that evaluates to 30.
For example, the digits 0, 2, and 6 can be arranged to form 60 / 2 = 30.
These Make 30 Puzzles are the creation of retired math professor and brilliant puzzle creator Erich Friedman. Last school year, I featured his Plus Times Puzzle here on my blog.
This summer, I was super excited to check my email and see an email from Erich. He had saw my blog post about his plus times puzzles and decided to create a special set of puzzles especially for my
classroom and my students to enjoy. That’s where these Make 30 Puzzles came from!
How cool is this?!? A puzzle-writing celebrity made a special set of puzzles just for my classroom! Of course, I couldn’t keep them just to myself. I want your students to get in on the fun as well!
Here’s what the Make 30 Puzzles look like on Erich Friedman’s website.
I decided to create magnets that I could put up for each day. I typed the numbers 0-9 and made enough duplicate copies of the numbers to let me put up each of the different puzzles.
There are 186 different puzzles, so you could easily post a different puzzle each and every day of the school year.
I’ve decided to post my daily Make 30 puzzle under the day’s date. I just switch out the Make 30 magnets whenever I switch out my daily date magnets.
Of course, you could take the easy way out and skip the magnets altogether. Just write the day’s numbers using a dry erase marker. But, I do think the magnets make the puzzle a bit more eye-popping.
The magnets also allow students to manipulate the numbers and do something like this. I had a sub the other day, and I returned to find that a mystery student had solved the puzzle.
I actually typed up and printed two different sizes of numbers. The larger set of numbers has all of the digits needed to do the first 38 puzzles. I would print and use this set if you plan to just
do these puzzles for a short amount of time or if you are working with younger students.
The puzzles definitely increase in difficulty as the puzzle numbers increase.
The smaller set of numbers has all of the digits needed to do all of the puzzles. They are sized smaller to save paper.
I’m not sure how long I want to keep this puzzle out, so I currently only added disc magnets to the set of larger magnets.
I’m currently storing the unused puzzle magnets in a plastic pouch along with a printed copy of the puzzles. Each day, I switch out the puzzle and highlight it so I can keep track of which ones I
have given my students!
Puzzle Solutions
I intentionally do not make answers to the printable math puzzles I share on my blog available online because I strive to provide learning experiences for my students that are non-google-able. I
would like other teachers to be able to use these puzzles in their classrooms as well without the solutions being easily found on the Internet.
However, I do recognize that us teachers are busy people and sometimes need to quickly reference an answer key to see if a student has solved a puzzle correctly or to see if they have interpreted the
instructions properly.
If you are a teacher who is using these puzzles in your classroom, please send me an email at sarah@mathequalslove.net with information about what you teach and where you teach. I will be happy to
forward an answer key to you.
Not a teacher? Go ahead and send me an email as well. Just let me know what you are using the puzzles for. I am continually in awe of how many people are using these puzzles with scouting groups,
with senior adults battling dementia, as fun activities in their workplace, or as a birthday party escape room. | {"url":"https://mathequalslove.net/make-30-puzzles/","timestamp":"2024-11-08T08:23:16Z","content_type":"text/html","content_length":"287781","record_id":"<urn:uuid:407e0005-0a82-4cb5-8c22-78ae7b22a6ae>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00810.warc.gz"} |
Cookie Calculations
In this activity, students will step into the kitchen to explore the math behind baking! 🍪They’ll estimate costs, calculate how many cookies can be made with a given am...
How many hot chocolate scoops do I need?
Even though it is now Spring, here in New England we are still chilled and hope that some hot chocolate will warm us up. Clicking on the above image will take you to Brian's movie of makin...
measurement 4.MD MP1 MP7 5th 3rd 4th 4.NF 3.NF
The Algebra of Magic Squares
Have you seen magic squares before? How could a teacher use these puzzles to help students with different number skills? How few blank cells could you leave? Are there any special cel...
5.NBT 6.NS 7.NS 5.NF 6.EE 7.EE MP2 8.EE 8.F 5.OA 6th 7th HS 5th 3rd 4th 4.NF 4.OA 3.OA 3.NF 4.NBT MP4 3.NBT 8.NS puzzles magic squares
Whole lot of cookies
This activity has been updated -- See the updated activity HERE
5.NBT 3.MD 6.RP 5.NF MP3 5.OA 5th 3rd 4th 4.NF 4.OA 3.OA 3.NF 4.NBT 3.NBT 5.MS
Someone ate my cake
In this fraction operation and representation activity, students are asked to decide how much of my cake was eaten. Using fraction multiplication (or angle measure if that is where you need an act...
Data and Probability Fractions angle measure 4.NF.4 5th 3rd 4th fraction representation fraction operations 3.NF.1 3.G.2 4.NF.3 4.MD.5 4.MD.6 5.NF.4 proportion proportional reasoning ratio
A&W's bigger burger
A&W offered a bigger, tastier burger but it didn't succeed in grabbing customers away from the hugely popular McDonald's quarter pounder. The A&W third pounder...
Fractions 6th 5th 3rd 4th 3.NF.A 4.NF.A burger
John Urschel, retired Baltimore Raven and Black mathematician
John Urschel is a young man with several seemingly disparate characteristics and talents. He is an athlete and a mathematician and he is a black man and a mathematician. He loves what he...
3.MD 6th 7th 8th HS 5th 3rd 4th 3.NF John Urschel Creating a timeline puzzles
Drill bit fractions
Act One: Drill bits are measured by the diameter of the bit in inches. The drill bits below are ordered left to right in increasing size. The numerators in the fractions have been black...
4.NF.2. 7.RP 5.NF 6th 7th 5th 3rd 4th equivalent fractions 3.NF.3 4.NF.1 3-act task
Data on the National Mall sites (2 Activities)
Activity #1 - Look at the individuals who are memorialized on the National Mall and using fractions or percents decide what you can notice about those we honor. The activity is called "MLK...
Fractions percents 6.RP.3 7.NS.3 6th 7th 5th 3rd 4th 3.NF.1 | {"url":"https://www.yummymath.com/search/?topic=tags:3.NF,3.NF.1,3.NF3","timestamp":"2024-11-10T09:13:44Z","content_type":"text/html","content_length":"210350","record_id":"<urn:uuid:1971995d-897b-46ce-aa1e-ecc0cc0f8258>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00690.warc.gz"} |
The calculation of added masses and damping coefficients of vibrating bodies in a fluid by the finite volume method as applied to the calculation of the fuel bundle parameters for the reactor VVER-440
The calculation of added masses and damping coefficients of vibrating bodies in a fluid by the finite volume method as applied to the calculation of the fuel bundle parameters for the reactor
Authors: Krutko E.S., Sorokin F.D. Published: 19.09.2013
Published in issue: #8(641)/2013
DOI: 10.18698/0536-1044-2013-8-47-53
Category: Calculation and Design of Machinery
Keywords: added mass, damping coefficient, finite volume method, fuel assembly
To overcome high vibrational wear of tubular elements in power engineering, adequate mathematical models of hydroelastic systems must be developed and analyzed in order to reduce the intensity of
vibrations. In the mathematical models of heat exchangers and fuel assemblies with hundreds of tubular elements, the effect of liquid can be taken into account by added masses and damping
coefficients. To calculate these quantities, steady-state forced oscillations of a liquid under a given harmonic law of motion of solid bodies are
considered and investigated by the finite volume method. In this case, the added mass is associated with the kinetic energy of the fluid and the damping factor is associated with the energy
dissipated in the liquid. The proposed method showed good accuracy during testing and made it possible to calculate important parameters of the fuel bundle in the reactor VVER-440. Thus, the added
mass of the fuel bundle of the reactor VVER-440 was calculated for the first time.
[1] Tutnov A.A., Krut’ko E.S., Kiselev A.S., Kiselev I.A. Analiz nagruzok na TVS pri seismicheskom vozdeistvii [Analysis of loads on the seismic impact of TVS]. Materialy 6-th Rossiiskoi konferentsii
«Metody i programmnoe obespechenie raschetov na prochnost’» [Proceedings of the 6-th Russian Conference «Methods and software calculations of strength»]. Gelendzhik, 4—8 October 2010, pp. 40—48.
[2] Viallet E., Kestens T. Prediction of flow induced damping of a PWR fuel assembly in a case of seismic and LOCA load case Structural Mechanics in Reactor Technology (SmiRt 17). Transactions of
17-th International Conference. Prague, 2003. 8 p.
[3] Collard B. Flow induced damping of PWR fuel assembly. Structural behavior of fuel assemblies for water cooled reactors: Proceeding of technical meeting. Vienna, 2005. pp. 279—288.
[4] Fedotovskii V.S., Vereshchagina T.N., Besprozvannykh V.A. Gidrodinamicheski sviazannye kolebaniia sterzhnevykh system [Fluidly coupled oscillations of rod systems]. Gidrodinamika i bezopasnost’
AES (Teplofizika-99): Tezisy Dokladov Otraslevoi konferentsii [Hydrodynamics and safety of nuclear power plants (Thermal Physics-99): Proceedings of the Industry Conference]. Obninsk, 1999, pp.
[5] Makarov V., Afanasiev A.,Matvienko I., Volkov S., Dolgov A. Tests of Models the FA for WWER-2006 and the Fuel Assembly-Q for PWR with a Drive of the Control System of Protection on Seismic and
Vibrating Influence. Proceedings of the 9-th International conference WWER Fuel Performance, Modelling and Experimental Support. Bulgaria, 17—24 September 2011, pp. 324—331.
[6] Siniavskii V.F., Fedotovskii V.S., Kukhtin A.B., Terenik L.V. Inertsionnost’ i gidrodinamicheskoe dempfirovanie pri kolebaniiakh trub i trubnykh puchkov zhidkosti [The inertia and hydrodynamic
damping vibrations in pipes and tube bundles in liquid]. Dinamicheskie kharakteristikii kolebani ia elementov energeticheskogo oborudovaniia [The dynamic characteristics and vibration elements energy
equipment]. Collection of articles. Moscow, Nauka publ., 1980, pp. 86—97.
[7] Joseph A. Schetz, Allen E. Fuhs. Fundamentals of fluid mechanics. New York, John Wiley & Sons, 1999. 935 p.
[8] The OpenFOAM Foundat ion. Avai lable at: URL: http://www.openfoam.org/ (accessed 8 May 2013).
[9] Sorokin F.D., Krut’ko E.S. Raschet prisoedinennoi massy i koeffitsienta dempf i rovani ia dl ia vibr i ruiushchego v tsilindricheskom kanale zhestkogo tsilindra na osnove chislennogo
integrirovaniia uravnenii dvizheniia viazkoi zhidkosti [Added mass and damping coefficient calculation for the rigid cylinder vibrating in cylindrical channel based on viscous fluid motion equation
numerical integration]. Izvestiya Vysshikh Uchebnykh Zavedenii. Mashinostroenie [Proceedings of Higher Educational Institutions. Маchine Building]. 2012, no. 10, pp. 46—51. | {"url":"https://izvuzmash.bmstu.ru/eng/catalog/calcmach/hidden/276.html","timestamp":"2024-11-10T19:29:11Z","content_type":"application/xhtml+xml","content_length":"13005","record_id":"<urn:uuid:b5d91a5a-1ec9-428f-9ac9-f46c0429abcc>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00039.warc.gz"} |
hamming distance python
distance function “hamming” ... Because of the Python object overhead involved in calling the python function, this will be fairly slow, but it will have the same scaling as other distances.
get_metric. Viewed 5k times 3 \$\begingroup\$ I was solving this Leetcode challenge about Hamming Distance. Similarity is determined using a distance metric between two data points. Python
scipy.spatial.distance.hamming() Examples The following are 14 code examples for showing how to use scipy.spatial.distance.hamming(). SIMD-accelerated bitwise hamming distance Python module for
hexidecimal strings. In this case, I needed a hamming distance library that worked on hexadecimal strings (i.e., a Python str) and performed blazingly fast. Methods. The distance metric can either
be: Euclidean, Manhattan, Chebyshev, or Hamming distance. Here's the challenge description: The Hamming distance between two strings of the same length is the number of positions in which the
corresponding symbols are different. Loop Hamming Distance: 4 Set Hamming Distance: 4 And the final version will use a zip() method. dist_to_rdist. The output should be: Loop Hamming Distance: 4 end=
'' part is one of the parameters print() method has, and by setting it to ‘ ‘ we are telling it “don’t go to a new line, after you print the message”.Because of that we see the output 4 on the same
line as the text, and not on a new line. These examples are extracted from open source projects. If zero or less, an empty array is returned. KNN searches the memorised training observations for the
K instances that most closely resemble the new instance and assigns to it the their most common class. scipy.spatial.distance.hamming¶ scipy.spatial.distance.hamming (u, v, w = None) [source] ¶
Compute the Hamming distance between two 1-D arrays. To calculate the Hamming distance, we will need to be able to test if characters are the same. In fact the simplest Hamming distance calculation
is between just two characters, for instance: G G Here the characters are the same, so the Hamming distance is zero. Hamming Distance in Python. Number of points in the output window. Parameters M
int. G T Here the characters are different, so the Hamming distance is 1. Active 1 year, 10 months ago. The Hamming window is a taper formed by using a weighted cosine. Hamming Distance. Convert the
true distance to the reduced distance. numpy.hamming¶ numpy.hamming (M) [source] ¶ Return the Hamming window. There are a lot of fantastic (python) libraries that offer methods to calculate various
edit distances, including Hamming distances: Distance, textdistance, scipy, jellyfish, etc. Ask Question Asked 1 year, 10 months ago. The hamming distance of strings \(a\) and \(b\) is defined as the
number of character mismatches between \(a\) and \(b\). The Hamming distance between 1-D arrays u and v, is simply the proportion of disagreeing components in u and v.If u and v are boolean vectors,
the Hamming distance is The hamming distance can be calculated in a fairly concise single line using Python. Returns out ndarray Would love feedback on my syntax and code style. Python Hamming
Distance Article Creation Date : 31-Aug-2020 08:45:21 AM. If you are not sure what this does, try removing this parameter or changing end='' to end=' * '. Use scipy.spatial.distance.hamming ( )
method times 3 \ $ \begingroup\ $ I was solving this Leetcode about! The corresponding symbols are different calculated in a fairly concise single line using python distance python module
hexidecimal. Date: 31-Aug-2020 08:45:21 AM end= ' * ' Hamming window is taper... ( M ) [ source ] ¶ Compute the Hamming distance python module for strings... The number of positions in which the
corresponding symbols are different, so the Hamming window is a formed! ( M ) [ source ] ¶ Compute the Hamming window is a formed! Python Hamming hamming distance python between two data points year,
10 months ago the Hamming distance between two strings the. Try removing this parameter or changing end= '' to end= ' * ' using... Of positions in which the corresponding symbols are different, so
the Hamming distance: 4 and final... Can be calculated in a fairly concise single line using python the Hamming distance if you are sure. Weighted cosine, 10 months ago Euclidean, Manhattan,
Chebyshev, or Hamming distance between two of... ' * ' Leetcode challenge about Hamming distance is 1 distance can calculated... Loop Hamming distance: 4 and the final version will use a zip ( )
Examples following. Or Hamming distance by using a weighted cosine formed by using a distance metric between strings! Fairly concise single line using python be: Euclidean, Manhattan, Chebyshev, or
Hamming distance using! Distance metric between two strings of the same length is the number of in. ] ¶ Compute the Hamming window Examples the following are 14 code Examples for showing how to use
(. ] ¶ Return the Hamming distance Article Creation Date: 31-Aug-2020 08:45:21 AM and the final will. Manhattan, Chebyshev, or Hamming distance: 4 and the final hamming distance python! Article
Creation Date: 31-Aug-2020 08:45:21 AM $ I was solving this Leetcode challenge about Hamming between! Metric can either be: Euclidean, Manhattan, Chebyshev, or Hamming distance Article Date! Final
version will use a zip ( ) method bitwise Hamming distance two. Number of positions in which the corresponding symbols are different w = None ) [ source ] Compute... The Hamming distance Article
Creation Date: 31-Aug-2020 08:45:21 AM corresponding symbols different... Use scipy.spatial.distance.hamming ( ) method w = None ) [ source ] Return... Using python to use
scipy.spatial.distance.hamming ( u, v, w = )... \ $ \begingroup\ $ I was solving this Leetcode challenge about Hamming distance is 1 '... 1-D arrays or less, an empty array is returned v, w = None )
[ source ¶. Strings of the same length is the number of positions in which the corresponding symbols are different so! Distance python module for hexidecimal strings if zero or less, an empty array
is returned or Hamming can!, Chebyshev, or Hamming distance Article Creation Date: 31-Aug-2020 08:45:21.! Distance Article Creation Date: 31-Aug-2020 08:45:21 AM so the Hamming window is taper! Using
a weighted cosine or Hamming distance Article Creation Date: 31-Aug-2020 08:45:21 AM sure... Length is the number of positions in which the corresponding symbols are different to! \ $ \begingroup\ $
I was solving this Leetcode challenge about Hamming distance can be calculated in a fairly single..., or Hamming distance different, so the Hamming window are different, so Hamming... End= '' to end=
' * ' less, an empty array is returned you are sure. None ) [ source ] ¶ Return the Hamming window is a taper formed by using a weighted cosine =... W = None ) [ source ] ¶ Return the Hamming
distance python module for hexidecimal strings be:,! Is the number of positions in which the corresponding symbols are different different, so the Hamming distance Hamming.. Different, so the Hamming
window Leetcode challenge about Hamming distance Article Creation Date 31-Aug-2020... I was solving this Leetcode challenge about Hamming distance: 4 and final. ¶ Return the Hamming distance Article
Creation Date: 31-Aug-2020 08:45:21 AM $ \begingroup\ $ was. Of the same length is the number of positions in which the corresponding symbols are different be. 1-D arrays solving this Leetcode
challenge about Hamming distance Article Creation Date 31-Aug-2020. Use a zip ( ) Examples the following are 14 code Examples showing! Numpy.Hamming¶ numpy.hamming ( M ) [ source ] ¶ Compute the
Hamming window is a formed! Can either be: Euclidean, Manhattan, Chebyshev, or Hamming distance python module for hexidecimal strings this... Empty array is returned zero or less, an empty array is
returned hexidecimal strings different so! Scipy.Spatial.Distance.Hamming¶ scipy.spatial.distance.hamming ( ) is the number of positions in which the corresponding symbols different... ' * ' 1 year,
10 months ago does, try removing this or! The final version will use a zip ( ) Examples the following are 14 code Examples for how. End= ' * ' Leetcode challenge about Hamming distance are 14 code
Examples for showing how to use scipy.spatial.distance.hamming )... To end= ' * ' of the same length is the number of positions in which the corresponding are! Is a taper formed by using a distance
metric can either be: Euclidean, Manhattan, Chebyshev, Hamming. Ndarray Similarity is determined using a weighted cosine scipy.spatial.distance.hamming ( u, v, w None!, or Hamming distance ) Examples
the following are 14 code Examples for showing how to use (. G T Here the characters are different, so the Hamming distance ] ¶ the. For hexidecimal strings the following are 14 code Examples for
showing how to use scipy.spatial.distance.hamming )... Changing end= '' to end= ' * ' try removing this parameter changing! Changing end= '' to end= ' * ', 10 months ago 4. 08:45:21 AM two strings of
the same length is the number of positions which... Are 14 code Examples for showing how to use scipy.spatial.distance.hamming ( u, v, w = None [! End= '' to end= ' * ' the final version will use a
zip )! Examples the following are 14 code Examples for showing how to use scipy.spatial.distance.hamming ( ).. Times 3 \ $ \begingroup\ $ I was solving this Leetcode challenge Hamming. Or changing
end= '' to end= ' * ' ( M ) [ ]. T Here the characters are different distance python module for hexidecimal strings *.... ' * ' Compute the Hamming window is a taper formed by using a weighted cosine
5k times \! 4 Set Hamming distance can be calculated in a fairly concise single line using.! Of positions in which the corresponding symbols are different module for hexidecimal strings end= ' *..
The final version will use a zip ( ) method Set Hamming distance can be calculated in fairly! Of the same length is the number of positions in which the corresponding symbols are different so. Python
module for hexidecimal strings distance python module for hexidecimal strings code Examples for showing to! The same length is the number of positions in which the corresponding are. Numpy.Hamming¶
numpy.hamming ( M ) [ source ] ¶ Return the Hamming distance between strings. Leetcode challenge about Hamming distance, or Hamming distance Euclidean, Manhattan,,. Love feedback on my syntax and
code style a zip ( ) Examples the following are code. Code style the corresponding symbols are different can either be: Euclidean, Manhattan, Chebyshev, Hamming... For showing how to use
scipy.spatial.distance.hamming ( u, v, w = None ) source! Creation Date: 31-Aug-2020 08:45:21 AM Manhattan, Chebyshev, or Hamming distance two... Two strings of the same length is the number of
positions in which the corresponding symbols are different concise... Two data points for hexidecimal strings feedback on my syntax and code style Set Hamming distance between strings..., try
removing this parameter or changing end= '' to end= ' * ' the distance metric either. Symbols are different, so the Hamming distance Article Creation Date: 31-Aug-2020 08:45:21 AM different, so the
distance. If you are not sure what this does, try removing this or. Are not sure what this does, try removing this parameter or changing end= '' to end= ' hamming distance python.. Distance metric
can either be: Euclidean, Manhattan, Chebyshev, or Hamming distance be! Months ago in a fairly concise single line using python following are 14 code Examples for how! Data points, Chebyshev, or
Hamming distance a weighted cosine was solving this Leetcode challenge Hamming. Manhattan, Chebyshev, or Hamming distance formed by using a distance can!: 31-Aug-2020 08:45:21 AM between two data
points numpy.hamming¶ numpy.hamming ( M ) [ source ¶! Is a taper formed hamming distance python using a weighted cosine $ \begingroup\ $ I was solving this Leetcode challenge about distance... The
final version will use a zip ( ) end= ' * ' love feedback on my and. Changing end= '' to end= ' * ' a taper formed by using a weighted cosine ) the... Formed by using a weighted cosine if you are not
sure what this does, try removing this parameter changing. * ' scipy.spatial.distance.hamming ( u, v, w = None ) [ source ] ¶ Return the Hamming Article... Question Asked 1 year, 10 months ago
corresponding symbols are different, so the Hamming window a... Using python 4 and the final version will use a zip ( ), Manhattan, Chebyshev or... Calculated in a fairly concise single line using
python T Here the characters different! | {"url":"https://www.colomet.com.ar/css/72h953/y814m.php?id=2e6632-hamming-distance-python","timestamp":"2024-11-08T11:03:58Z","content_type":"text/html","content_length":"20124","record_id":"<urn:uuid:0108daf8-9070-494c-86b3-eb8a015cd54d>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00881.warc.gz"} |
hqbm (c41b2)
The HQBM Module of CHARMM
By Emanuele Paci, 1997/2000
HQBM is an external perturbation designed induce conformational
changes in macromolecules. The time dependent perturbation is designed
to introduce a very small perturbation to the short time dynamics of
the system and does not affect the conservation of the constants of
motion of the system (the conservation of the total energy or of the
suitable conserved quantity when an extended Lagrangian is used can
then be used as a check of the correctness of the forces).
The external perturbation needs:
- a reference (or target) structure
- a reaction coordinate which defines a "distance" from the
reference structure
| Syntax of the HQBM command
| Purpose of each of the keywords
| HQBM Input Description
[INPUT HQBM command]
- read the reference structure
OPEN UNIT 1 READ FORMATTED NAME coor0.crd
READ COOR CARD COMP UNIT 1
CLOSE UNIT 1
- call the perturbation choosing a coupling constant [ALPHA], a
reaction coordinate (see summary below), and a selection of atoms
which define the reaction coordinate. Several biases may be
in operation in any time: each must be set up by a separate
HQBM command. The general form of the setup command is:
HQBM [RC1 | RC2 | RC3...] ALPHA real [IUNJ integer] [XIMAX real] -
[ANAL FIRSTU integer NUNIT integer] -
coord-specific-options are listed below for each coordinate
- energy NO LONGER NEEDS TO BE CALLED after HQBM !!
this won't affect anything, just increase the step number by 1 each time.
necessary in order to have multiple reaction coordinates & keep the
output synchronous.
- reset all HQBM biases, i.e. EHQBM = 0.0 always
- only change the coupling constant (ALPHA); useful for equilibration
HQBM RCX UPALPHA real
RCX is RC1, RC2, etc. - which ever reaction coordinate needs
alpha updated
'real' is a new value for the coupling constant.
- coord-specific-options:
A description of each coordinate, and the options is given in the
section Function. Also, this will surely be out of date rapidly,
so the source is the best recourse.
RC1: [AWAY] [SMD GAMMA real] [FIX] [NOEN] [READLIST integer] -
[READREF integer] [IUNK integer] atom-selection
RC2: [AWAY] [SMD GAMMA real] [FIX] [NOEN] [READLIST integer] -
[IUNK integer] atom-selection
RC3/PHI: [AWAY] [SMD GAMMA real] [FIX] [NOEN] [COMB] [AVEP AALPHA real] -
[READLIST integer] [IUNR real] [BETA real] [EXCL real] -
[RCUT real] [TOL real] [ZERO] [IUND integer] -
IUNP real atom-selection
RC4/HX: [AWAY] [SMD GAMMA real] [FIX] [NOEN] [IUNK integer] [IUND integer] -
[EEF1] [NHCON] [SPLIT] [NONN [CUTON real] [CUTOF real]] [BETA real] -
[BETC real] [BETH real] [EXCL real] [RCUT real] [HCUT real] [ZERO] -
IUNP integer -
atom-selection1 atom-selection2 atom-selection3 atom-selection4
RC5: [NOEN] [TARGET real] [READLIST integer] atom-selection
RC6/NOE: [AWAY] [SMD GAMMA real] [FIX] [ZERO] [SIXT | LINE] [NOEN] -
[IUND integer] IUNN integer
RC7/RDC: ... not done yet ...
RC8/S2 ***: [IUND integer] [FIX] IUNS integer
RC9/J3: [IUND integer] [IUNK integer] [ZERO] [NOEN] J3UNIT integer
RC10/PSI ***: [IUND integer] [IUNK integer] [FIX] [ZERO] [BETA real]
[RCUT real] [TOL real] IUNP integer
*** These coordinates can ONLY be used in the replica/ensemble version.
The following section describes the keywords of the HQBM command.
HQBM introduces a half quadratic perturbation on a given reaction
coordinate (see below)
Meaning of the HQBM parameters
General Parameters & Parameters common to many coordinates
(check syntax to see whether a given option is supported with the reaction
coordinate of interest)
# AWAY drive the system away from the reference coordinate.
As an example, if the reaction coordinate measures the deviation from
a reference conformation, the perturbation will increase it.
# ALPHA is the force constant of the half harmonic potential.
# RC1, RC2, RC3/PHI, RC4/HX, RC5, RC6/NOE, RC7/RDC, RC8/S2, RC9/J3, RC10/PSI
will select other reaction coordinates (descriptions below)
# atom-selection: some coordinates require an atom selection -
only the selected atoms will be used to define the coordinate.
See below for more specific definitions.
# IUNJ: write the output (istep rc(t) max(rc)) on unit IUNJ
# FIX: make the target value of the reaction coordinate the initial value.
# ZERO: make the target value of the reaction coordinate ZERO (same as FXRG).
# IUND integer: a unit to dump calculated phi-values, protection factors to
at regular intervals during the trajectory
# IUNK integer: a unit to dump initial contact lists to.
# SMD: use schulten style "steered molecular dynamics". This requires
a speed to move the target reaction coordinate, given by the
GAMMA option.
# NOEN: when using the ensemble version of the code (see: ensemble.doc)
this will force a particular reaction coordinate NOT to use
the ensemble averaged form.
# BETA real: the value of beta in the smooth function for counting native
contacts 1.0/(1+exp(beta(r-rcut))).
# RCUT real: see entry for BETA above.
# TOL real: When counting native contacts in non-native structures, allow
an extra TOL angstroms (i.e. rcut is increased by TOL).
# EXCL integer: Do not count contacts between residues separated by fewer
than EXCL.
Description of each coordinate and its specific parameters
RC1: A reaction coordinate based on the mean square difference from the
target coordinates. If the target coords are all set to zero
(e.g. with SCALAR), the reaction coordinate is like a radius
of gyration (it is in fact the square of the radius of gyration
over the selected atoms assuming equal masses). If only two atoms
are selected, the reaction coordinate is the distance between them.
[READLIST integer] read a list of atom index pairs specifying native
contacts, i.e. in the format:
i1 j1
i2 j2
[READREF integer] read a list of atom index pairs specifying native
contacts, AND distances between them, i.e.:
i1 j1 r1
i2 j2 r2
RC2: Works exactly like RC1, except that instead of rho = \sum_ij (r_{ij}-r_{ij}^{ref})^2,
rho = sum_{ij} exp(((r_{ij}-r_{ij}^{ref})/r_{ij}^{ref})^2).
RC3/PHI: Drive system to satisfy experimental phi-values, defined as a residue-based
fraction of native contacts.
[COMB] : if specified, the native contact list will be constructed
by making all possible combinations of the atom selection.
Used for hydrophobic clustering in unfolded state (Julia Wirmer).
[AVEP AALPHA real] : ONLY works with ensemble code. As an ensemble,
the replicas are driven to satisfy the expt phi-values; the
AVEP bias ensures that each replica will also satisfy the
average phi value, AALPHA being a separate coupling constant
for this. Only one HQBM invokation is needed for both the
standard phi and the average phi (by default average phi is off).
[READLIST integer] : read native contacts from a file:
i1 j1
i2 j2
[IUNR real] ????
[IUNP real]: unit with phi-values:
res1 phi1
res2 phi2
atom-selection: the atoms to use for counting native contacts if
not reading native contact list from a file.
RC4/HX: Hydrogen exchange bias. System driven to satisfy experimental
protection factors. Protection factors defined as logP = Bc*Nc+Bh*Nh
atom-selection1: defines heavy atom contacts
atom-selection2: oxygen selection (for hbonds)
atom-selection3: nitrogen selection (only for EEF1 - otherwise ignored)
atom-selection4: hydrogen selection (for hbonds)
[EEF1] - this ONLY works in analysis mode. The EEF1 energy of nitrogen
atom is used for the burial term (Nc). Uses third atom selection.
[NHCON] - used HN_i --- heavy atom contacts for burial
default is heavy_atoms_i --- other heavy atoms
[SPLIT] - when writing to IUND file, separate hydrogen bonding and
burial contributions to the protection factor.
[NONN [CUTON real] [CUTOF real]] - Use all contacts, not just native
ones, for burial. Requires a cutoff function for efficiency.
cutof must be larger than cuton.
[BETC real] = bc above
[BETH real] = bh above
[HCUT real] - cutoff for counting hydrogen bonds (default = 2.4 Angstrom
O-H distance)
[IUNP integer] - unit with protection factors:
res1 logP1 type1
res2 logP2 type2
The protection factor "type" is one of 0, -1, or 1:
0: protection factor must be satisfied exactly
-1: protection factor must be smaller than value given
(for residues exchanged in dead time)
1: protection factor must be larger than value given
(for global exchange data)
RC5: Works like RC1, except drives system towards target value specified by TARGET
and holds it there.
RC6/NOE: Drives system towards experimental NOE values.
[SIXT | LINE] - type of averaging. Default is <r^{-3}>^{-1/3}
SIXTh specifies <r^{-6}>^{-1/6}
LINEar is normal (linear) averaging
IUNN integer - unit with noe's, format:
i1 j1 lbound1 ubound1
i2 j2 lbound2 ubound2
iN jN lboundN uboundN
RC7/RDC: not implemented
RC8/S2: Order parameter bias. Drives an ensemble of configurations
to satisfy experimental order parameters. Obviously, this
ONLY works for the ENSEMBLE code (»
IUNS integer - unit with order parameters, format:
i1 j1 S2_1
i2 j2 S2_2
iN jN S2_N
RC9/J3: Drive system to satisfy scalar coupling restraints
J3UNIT integer - unit with couplings, format:
i1 j1 k1 l1 A1 B1 C1 D1 J1
i2 j2 k2 l2 A2 B2 C2 D2 J2
where i,j,k,l are the atom indices defining
the dihedral, and A, B, C and D are the
karplus parameters using the form of the equation:
J(phi) = A*cos^2(phi+D) + B*cos(phi+D) + C
Ref: Chou et al. JACS, 125, 8959-8966 (2003)
RC10/PSI: Drive system to satisfy psi-values (sosnick papers)
not finished...
The method is described in
E. Paci and M. Karplus. Forced unfolding of fibronectin type 3
modules: An analysis by biased molecular dynamics simulations.
J. Mol. Biol., 288: 441-459, 1999.
TESTCASES (in test/c32test):
hqbm_single_test.inp: This is a test of the single copy versions of
RC1, RC2, RC3, RC4, RC6 & RC9
It may be run in the test directory by invoking:
./test.com arch output bench 32
which will run this + all the other c32 testcases
hqbm_rc3_ens_test.inp: Ensemble test of RC3/PHI -- see below for how to run
hqbm_rc4_ens_test.inp: Ensemble test of RC4/HX -- see below for how to run
hqbm_rc8_ens.inp: Test of RC8 (only ensemble) -- see below for how to run
To run ensemble tests, use the following command in the test directory:
./test.com E arch
in this case the optional fourth command specifying target will be ignored. | {"url":"https://academiccharmm.org/documentation/version/c41b2/hqbm","timestamp":"2024-11-04T02:50:35Z","content_type":"text/html","content_length":"27971","record_id":"<urn:uuid:8cd12774-2b49-4835-aa37-08423d0be1c0>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00171.warc.gz"} |
localizing subcategory
You should list your examples in the entry. I am thinking of examples where one is interested in derived categories, such as here.
Are you sure ? Maybe you mixed up localizing subcategory which is a technical name for a type of subcategories in the setting of abelian categories with “localized subcategory” or alike notion.
Examples I know are used in ring theory, module theory, homological algebra…
I do not see what the entry has to do with homotopy theory, though.
Right, as such it does not. But all the examples I can think of where people use the concept are in homotopy theory. But maybe I should remove that TOC pointer again.
Looks good (and I am not really expert here, though it is very relevant for what I do). Thanks for the interest in the entry. I do not see what the entry has to do with homotopy theory, though.
this is a message to Zoran:
I have tried to refine the section-outline at localizing subcategory a bit. Can you live with the result? Let me know.
Oh, this article you point out is about localizing subcategories in the sense of triangulated categories. The current entry is not about it, namely it is about the abelian version only (apart from
one reference!). One should probably split into two entries one for localizing subcategories in abelian and another for the triangulated version. I am not competent about the triangulated version,
apart from basics. | {"url":"https://nforum.ncatlab.org/discussion/3403/localizing-subcategory/","timestamp":"2024-11-03T23:09:38Z","content_type":"application/xhtml+xml","content_length":"44228","record_id":"<urn:uuid:9b475092-554a-4b3d-be57-d982a85c797d>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00469.warc.gz"} |
Bugs and glitches on Candy Crush Dreamworld
A serious glitch has come to light on the last couple of episodes of Candy Crush Dreamworld.
On some levels if you complete the game during the moonstruck move the game will freeze and not end properly.
You are not credited with passing the level and the only way to get out of the frozen board is to hit the quit button and start again.
It doesn't happen on every level, and it doesn't happen every time.
The worst seems to be level 186.. But several people have reported this happening on other, later levels too.
If this happens to you the only option is to try to complete the game either before the moonstruck move, or after.
5 comments:
It's happened three times in a row on level 92 dreamworld. I'm going to stop playing as this is just WAY too frustrating.
I've noticed a glitch in levels aboe the 270s where when dreamworld activates, no one specific candy color disappears. All the candies remain. Thought it was just a bug in one puzzle, but it
seems to happen randomly in future levels. Really annoying!
I have reached level 171 in dream world, but it won't let me in. It tells me I must complete another saga. What am I doing wrong?
For level 113, the game is to bring down 1 Nut. I brought it down with one stroke left but the computer say it is incomplete. three times.
I am on level 202 on Odus the owl game I use to get 5 lives for candy crush and 5 live for Odus the owl game , now I only get 5 lives for the both , can u please tell me why this is happening or
can u fix this problem please | {"url":"http://candycrushdreamworldallhelp.blogspot.com/2014/05/bugs-and-glitches-on-candy-crush.html","timestamp":"2024-11-10T15:37:01Z","content_type":"text/html","content_length":"73078","record_id":"<urn:uuid:1ec8ba20-d6ce-4842-825a-2e769a4efa41>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00228.warc.gz"} |
EViews Help: @mvars
Trailing moving population variances (d.f. adjusted; ignore NAs).
n-period trailing moving square roots of Pearson product moment sample variances, with d.f. correction, ignoring NAs.
Syntax: @mvars(x, n)
x: series
n integer, series
Return: series
and ignoring missing values (NAs).
If n is not an integer, the integer floor
show @mvars(x, 12)
produces a linked series of the moving sample variance of the series x where NAs are ignored.
For the NA-propagating variant of this function, see | {"url":"https://help.eviews.com/content/functionref_m-@mvars.html","timestamp":"2024-11-07T03:00:51Z","content_type":"application/xhtml+xml","content_length":"10604","record_id":"<urn:uuid:f876b248-10a6-49e8-a8b6-4fc81c078afb>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00280.warc.gz"} |
Andersen thermostat
The Andersen thermostat was the first thermostat proposed for molecular dynamics, thus permitting one to use the canonical ensemble (NVT) in simulations. The Andersen thermostat (Ref. ^[1], section
IV) couples the system to a heat bath via stochastic forces that modify the kinetic energy of the atoms or molecules. The time between collisions, or the number of collisions in some (short) time
interval is decided randomly, with the following Poisson distribution (Eq. 4.1):
${\displaystyle P(t)=u e^{-u t}.}$
where ${\displaystyle u }$ is the stochastic collision frequency. Between collisions the system evolves at constant energy, i.e. business as usual. Upon a 'collision event' the new momentum of the
lucky atom (or molecule) is chosen at random from a Boltzmann distribution at temperature ${\displaystyle T}$. In principle ${\displaystyle u }$ can adopt any value. However, there does exist an
optimum choice (Eq. 4.9):
${\displaystyle u ={\frac {2a\kappa V^{1/3}}{3k_{B}N}}={\frac {2a\kappa }{3k_{B}\rho ^{1/3}N^{2/3}}}}$
where ${\displaystyle a}$ is a dimensionless constant, ${\displaystyle \kappa }$ is the thermal conductivity, ${\displaystyle V}$ is the volume, ${\displaystyle k_{B}}$ is the Boltzmann constant, and
${\displaystyle \rho }$ is the number density of particles; ${\displaystyle \rho :=N/V}$.
Note: the Andersen thermostat should only be used for time-independent properties. Dynamic properties, such as the diffusion, should not be calculated if the system is thermostated using the Andersen
algorithm ^[2].
See also[edit]
Related reading | {"url":"http://www.sklogwiki.org/SklogWiki/index.php/Andersen_thermostat","timestamp":"2024-11-08T02:37:07Z","content_type":"text/html","content_length":"32542","record_id":"<urn:uuid:1e5ae147-7c49-4f50-8498-ed9247ae5cc3>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00139.warc.gz"} |
Property Investment Calculator NZ (2024)
Property investment calculator
A property investment calculator that helps you figure out whether a property is worth investing in.
Running the numbers on a potential investment is crucial for any property investor. Successful investors know that investing in property is less about the house and more about the investment (the
The property investment calculator, shown above, gives you the 7 key numbers you need to know when analysing a potential investment, forecast over 10 years.
It's essential to recognise before we get started, that the numbers shown in this property investment calculator are averages only. The figures shown will not be the same every single week that you
hold the property. You will need to weather the ups and downs over the market. However, knowing these averages may give you the confidence to hold over time.
The inputs for the property investment calculator
We'll work through each of the inputs for the property investment calculator, showing you what they all mean.
Value of the property
This is what the property is worth. As shown below in the assumptions, we use this to calculate the mortgage on the property.
Cash deposit
This is if you have any cash that you will use as the deposit for the property. If you use $0, then we will assume that you will secure the property with 100% lending. This will likely mean you are
using another property (or properties) to secure the deposit. To calculate whether you have enough equity to invest in property without a cash deposit, use our equity calculator.
Capital growth rate
This is how quickly you expect the property to increase in value each year. The average for New Zealand properties over the last 20 years is 6.36% (REINZ, Aug 1999 - Aug 2019). We use a more
conservative default figure of 5% in this property investment calculator. Use our capital growth calculator to see what a property might be worth over time.
Condition of the property
The condition of the property impacts the level of maintenance required on the property each year. We estimate this as $500 per year for a brand-new property, $1000 per year for a 'New-Ish' property
(less than 20 years old), and $2000 per year for an 'Older' property (20 years+ old)
Rent per week
This is what the tenant pays you each week. The property investment calculator forecasts that it will increase at a rate of 2% per year, which is our assumed inflation rate.
The outputs for the property investment calculator
We'll work through each of the outputs for the calculator, showing you what they all mean.
Equity in the property in 10 years
This is the amount of equity that is forecast to be within the property in 10 years.
Average rent per week
This is the amount of rent paid by the tenant per week, adjusted per inflation.
Average expenses per week
This is the total expenses that you need to pay per week, adjusted for inflation.
Net cashflow per week
This is the difference between your rent per week and the total estimated expenses of your investment property.
Cashflow position
This is whether your property:
• earns you money each week (cashflow positive),
• requires a top-up from you each week (cashflow negative),
• or exactly covers its way (cashflow neutral).
No position is inherently good or bad; it is just essential to understand how the property is expected to function.
It should also be noted that a property may start as cashflow negative and turn into a cashflow positive property. This is because over time, the rent of a property will go up, but the most
significant expense (the mortgage) will remain stable for as long as interest rates remain constant. This change over time is not shown in this property investment calculator as it shows averages
Equity gain per week
This is the amount that your property will go up each week over a 10-year average.
Net gain per week
This is the combination of the equity gain received each week, plus the cash flow position each week.
Assumptions from the calculator
To make the property investment calculator simple to use, we have included some assumptions about the investment and the economy. In particular, we have assumed:
• That the purchase price of the property is the same as the value of the property. This may not always be the case. For instance, if you can purchase a property for less than its registered
• That the mortgage used to secure the property will be interest only, rather than principal and interest. This will decrease the mortgage repayment and improve the cash flow position of the
• That the interest rate on the property is 4%
• That there will be $3,500 of set-up costs. This would be spread across legal expenses, a valuation and a chattel valuation
• That rates will be 0.48% of the property's value in the first year (increasing at the rate of inflation), and that insurance will cost 0.3% of the property's value in the first year
• That a property manager will be used and will cost 7% +GST of the rent
• That you will use a property investment accountant, who will charge $750+GST in the first year
• That the property will have 3 weeks vacancy per year, which means that you will only receive rent for 49 weeks in the year. This is a conservative assumption. The effect of this is that you will
see a more negative picture in the figures than may actually be the case
• Inflation is at 2% per year, which means that the rent and non-mortgage expenses will increase at 2% per year
• If you would like to calculate what your repayments would be with a principal and interest loan, use our mortgage calculator
You may also want to see whether the property will earn you money each week, or require an additional investment to keep going. Use our rental yield calculator to run these numbers.
If you want to run the numbers using another method, Opes has a full range of other calculators that you can find and use. | {"url":"https://www.opespartners.co.nz/calculators/property-investment","timestamp":"2024-11-11T04:32:16Z","content_type":"text/html","content_length":"161616","record_id":"<urn:uuid:541bb9ad-a7f4-40d9-855a-9b849723c656>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00063.warc.gz"} |
Review: Why Study Mathematics, by Vicky Neale
You're reading: Reviews
Review: Why Study Mathematics, by Vicky Neale
In fact, St Andrews offered a French for Scientists course, so I ended up doing Maths with French. A win all round.
I can pinpoint the exact moment it became clear I would study maths at university. Parents’ evening, year 12, I mentioned to my French teacher that I was thinking about a French degree. He looked at
me as if I was stupid and said something like “you’re good at French, but you’re GOOD at maths. Besides, a French degree isn’t much use.” Alright, fine. Maths it is. He was spot-on. I never looked
For others, the decision about whether to study maths is less clear-cut. For those people, Why Study Mathematics is an extremely useful tool in making an informed decision. In the first part, Neale
looks at the ins and outs of a maths degree — what you’ll study, how courses differ, how students differ, and where it can take you; the second part takes a deeper look at the kinds of things a
mathematician thinks about.
I think the early sections on the different flavours of maths degrees are especially valuable: up to A-level, Mathematics looks like a bit of a monolith and (almost) everyone covers (almost) the same
material. Setting out that (almost) every maths degree will cover some linear algebra and some calculus but beyond that it’s a free-for-all prepares students for the wide range of courses available,
and for the sometimes baffling decisions that need to be made.
The one thing I felt was missing from the book was a section on reasons not to study mathematics. It’s a tricky thing: we want mathematicians! We want everyone to know and love maths! Evangelising
about its beauty and rewards is absolutely right — but at the same time, maths isn’t for everyone, and picking the wrong subject, or doing it for the wrong reasons, can be a ticket to misery.
Why Study Mathematics is tremendously engaging and clearly-written (I enjoyed Neale’s first book, Closing the Gap, for the same reasons). The author articulates her enthusiasm for the subject
beautifully, and it takes an inordinate amount of work to make it look so effortless.
I say it would make an excellent gift for the mathematically-inclined teenager in your life, and an invaluable addition to any school library.
Why Study Mathematics?, on the LPP website
One Response to “Review: Why Study Mathematics, by Vicky Neale” | {"url":"https://aperiodical.com/2020/11/review-why-study-mathematics-by-vicky-neale/","timestamp":"2024-11-11T05:16:46Z","content_type":"text/html","content_length":"38528","record_id":"<urn:uuid:a8850636-902b-4c6d-aeb9-5cb10415061c>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00577.warc.gz"} |
Upon popular request, I wrote a small script to download all tweets of a given twitter id. Have fun!
use Net::Twitter;
print "Usage: $0 twitter_id [sleep_seconds]\n";
exit 0;
my ($follow,$sleeper) = @ARGV;
# No account needed for this.
my $twit = Net::Twitter->new(username => 'MYNAME', password => 'XXX');
my $result = $twit->user_timeline({id => $follow, page => $p});
foreach my $tweet (@{$result}){
print "At ", $tweet->{'created_at'},"\n";
print $tweet->{'text'},"\n\n";
sleep $sleeper if $sleeper;
You might have to install the Net::Twitter module. This is most easily done as
sudo perl -MCPAN -e shell
and then (possibly after answering a few questions)
install Net::Twitter
Two weeks ago, I was on Corfu where I attended the conference/school/workshop on particles, astroparticles, strings and cosmology. This was a three week event, the first being on more conventional
particle physics, the second on strings and the last on loops and non-commutative geometry and the like. I was mainly there for the second week but stayed a few days longer into the loopy week.
I think it was a clever move by the organisers of the last week to give five hours to the morning lecturers rather than one or two as in the string week. So they had the time to really develop their
subjects rather than just mention a few highlights. John Baez has already reported on some of the lectures.
I would like to mention something I learned about elementary classical mechanics and quantum mechanics which was just a footnote in Ashtekar's first lecture but which was new to me: One canonical
variable can have several canonical conjugates! In the loopy context, this appears as both the old and the new connection variables have the same canonical momentum although they differ by the Imirzi
parameter times the second fundamental form (don't worry if you don't know what this is in detail, what's important that the 'positions' are different in the two sets of variables although they have
the same canonical momentum).
How can this be? I always thought that if
The origin of this abiguity can be found in the fact that also the Lagrangian is not unique: You can always add a total derivative without changing the action (at least locally, see below). For
example, to obtain
What about the quantum theory? This is most easily seen by realising that upon a gauge transformatio
So, it seems as if you would get exactly the same physics in the primed variables as in the unprimed ones. But we know that not all total derivatives have no influence on the qunatum theory the
I have no idea if all this is relevant in the loopy case and the old and new variables or the variables are related by a (generalized) gauge transformation but at least I found in amusing to learn
that the canonical conjugate is not canonical. | {"url":"https://atdotde.blogspot.com/2009/10/","timestamp":"2024-11-07T05:51:09Z","content_type":"application/xhtml+xml","content_length":"76994","record_id":"<urn:uuid:0ccad25d-51f7-4bcc-a4e8-271fad6e1149>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00517.warc.gz"} |
Reproductive Numbers for Periodic Epidemic Systems
Graduation Semester and Year
Document Type
Degree Name
Doctor of Philosophy in Mathematics
First Advisor
Christopher Kribs
When using mathematics to study epidemics, often times the goal is to deter- mine when an infection can invade and persist within a population. This can be done in a variety of ways but the most
common is to use threshold quantities called reproductive numbers. For models with only one infection, the basic reproductive number (BRN) is used to determine the stability of the disease-free
equilibrium. For many years this was done solely for autonomous systems; however, many diseases exhibit seasonal behavior. If this seasonality is incorporated into models, it gives nonautonomous
systems, which while more accurate in their description, are much more difficult to analyze. The first chapter lays out methods to find the basic reproductive number for seasonal epidemic models. In
the literature, two principal methods have been pro- posed to derive BRNs for periodic models. The first, using time-averages, does not always result in the correct threshold behavior. The more
general one is also more complicated, and no detailed explanations of the necessary computations have yet been laid out. This chapter lays out such an explicit procedure and then identifies
conditions (and some important classes of models) under which the two methods v agree. This allows the use of the more limited method, which is much simpler, when appropriate, and illustrates in
detail the simplest possible case where they disagree. There are many cases within epidemiology where infections will compete to persist within a population. In studying these types of models, one of
the goals is to determine when certain infections can invade a population and persist when other infections are already resident within the population. To study this, invasion repro- ductive numbers
(IRN) are used, which can help determine the stability of certain endemic equilibria. Methods for both autonomous and nonautonomous systems are given for finding the IRNs, as well as examples which
illustrate the often complex computations required. These methods are used for a single-host model of Chagas disease to determine if seasonality can explain why competitive exclusion does not seem to
hold in certain sylvatic cycles of the disease. In this model there are two strains of the parasite, and studies show cross-immunity between strains. The single-host autonomous model predicts
competitive exclusion, but there has been observed co-persistence in some host populations, in particular woodrats. To account for this, seasonality is added to the original model in the transmission
parameters. For a set of biologically re- alistic parameters, seasonality even in just a single parameter is sufficient to make co-persistence possible.
Mathematical biology, Epidemic modeling, Mathematics
Mathematics | Physical Sciences and Mathematics
Degree granted by The University of Texas at Arlington
Recommended Citation
Mitchell, Christopher David, "Reproductive Numbers for Periodic Epidemic Systems" (2016). Mathematics Dissertations. 190. | {"url":"https://mavmatrix.uta.edu/math_dissertations/190/","timestamp":"2024-11-07T06:32:18Z","content_type":"text/html","content_length":"44407","record_id":"<urn:uuid:9d64bc8e-187a-4607-b649-badb88fc999b>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00301.warc.gz"} |
1.02 cm3 to m3 (1.02 cubic centimeters to cubic meters)
Here we will explain and show you how to convert 1.02 cubic centimeters (cm3) to cubic meters (m3).
To create a formula to convert 1.02 cm3 to m3, we start with the fact that 100 centimeters is 1 meter, which means that you divide centimeters by 100 to get meters. We can therefore make the
following equation:
centimeters ÷ 100 = meters
However, we are not dealing with centimeters and meters. We are dealing with cubic centimeters (cm³) and cubic meters (m³), which are centimeters and meters to the 3rd power. Thus, we take both sides
of the formula above to the 3rd power to get the cm3 to m3 formula:
centimeters ÷ 100 = meters
(centimeters ÷ 100)³ = meters³
centimeters³ ÷ 1000000 = meters³
cm³ ÷ 1000000 = m³
Now that we have the cm3 to m3 formula, we can calculate and convert 1.02 cm3 to m3. Here is 1.02 cm3 converted to m3, along with the math and the formula:
cm³ ÷ 1000000 = m³
1.02 ÷ 1000000 = 0.00000102
1.02 cm³ = 0.00000102 m³
1.02 cm3 = 0.00000102 m3 Cubic Centimeters to Cubic Meters Converter
1.02 cm^3 to m^3 is not all we can explain and calculate. You can use the form here to convert another number of cubic centimeters to cubic meters. | {"url":"https://exponentcalculator.net/cubic-centimeters-to-cubic-meters/convert-1.02-cm3-to-m3.html","timestamp":"2024-11-05T07:05:34Z","content_type":"text/html","content_length":"6793","record_id":"<urn:uuid:6732137a-cb31-41d2-adf9-2466e662e184>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00365.warc.gz"} |
Introduction to Counting Sort | CodingDrills
Introduction to Counting Sort
Introduction to Counting Sort
Sorting is a fundamental operation in programming, and there are various sorting algorithms available, each with its advantages and disadvantages. Counting Sort, a linear sorting algorithm, presents
an efficient approach when dealing with a limited range of input elements.
What is Counting Sort?
Counting Sort is a non-comparative sorting algorithm that utilizes the frequency of elements within a given range to sort the input. Unlike other sorting algorithms like Quick Sort or Merge Sort that
rely on comparisons between elements, Counting Sort focuses on counting and accumulating occurrences of each element.
How Counting Sort Works?
The basic idea behind Counting Sort involves iterating over the input array to calculate the frequency of each element. Then, using this frequency information, the algorithm constructs a sorted
output array.
To understand the process, let's assume we have an array of positive integers with a known maximum value, max_val. We create a counting array count of size max_val + 1 and initialize all its elements
to zero. The count array will serve as a frequency table for each possible element in the input array.
def counting_sort(array, max_val):
# Create a counting array of size max_val + 1
count = [0] * (max_val + 1)
# Calculate the frequency of each element
for num in array:
count[num] += 1
# Construct the sorted output array
sorted_array = []
for i in range(max_val + 1):
sorted_array.extend([i] * count[i])
return sorted_array
In the counting_sort() function above, we first initialize the count array to keep track of the frequency of each element. Then, we iterate over the input array, incrementing the corresponding
counter. Finally, we construct a sorted array by iterating over the count array and adding each element according to its frequency.
An Example of Counting Sort
Let's walk through an example to better understand how Counting Sort works. Consider the input array [4, 2, 2, 8, 3, 3, 1] with a maximum value of 8.
1. Create the count array as [0, 0, 0, 0, 0, 0, 0, 0, 0] with a size of max_val + 1 (8 + 1).
2. Calculate the frequency of each element in the input array:
□ Increment count[4] by 1 -> [0, 0, 0, 0, 1, 0, 0, 0, 0]
□ Increment count[2] by 1 twice -> [0, 0, 2, 0, 1, 0, 0, 0, 0]
□ Increment count[8] by 1 -> [0, 0, 2, 0, 1, 0, 0, 0, 1]
□ Increment count[3] by 1 twice -> [0, 0, 2, 2, 1, 0, 0, 0, 1]
□ Increment count[1] by 1 -> [0, 1, 2, 2, 1, 0, 0, 0, 1]
3. Construct the sorted output array based on the count array:
□ Append 0, 0 times.
□ Append 1, 1 time.
□ Append 2, 2 times.
□ Append 3, 2 times.
□ Append 4, 1 time.
□ Append 5, 0 times.
□ Append 6, 0 times.
□ Append 7, 0 times.
□ Append 8, 1 time.
The sorted array is [1, 2, 2, 3, 3, 4, 8].
Time and Space Complexity
Counting Sort has an excellent time complexity of O(n + max_val), where n is the number of elements in the input array and max_val is the maximum value present in the array. However, it requires
additional space to store the frequency count, leading to a space complexity of O(max_val).
When to Use Counting Sort
Counting Sort performs exceptionally well when working with a limited range of input elements. It is mainly useful when the maximum element value is small compared to the number of elements being
sorted. However, Counting Sort is not suitable for sorting elements with large differences between minimum and maximum values or for sorting elements with a large range.
In this tutorial, we covered the basics of Counting Sort - a linear sorting algorithm that leverages the frequency count of elements in a given range. We explored the working principle of Counting
Sort and provided an example to illustrate its implementation. Counting Sort can be a powerful tool when sorting data with a limited range. Remember to consider the range and characteristics of your
input elements to determine when Counting Sort is the most appropriate choice.
Now that you have a good grasp of Counting Sort, you can experiment with its implementation and explore its applications in various programming scenarios. Happy sorting!
Ada AI
Hi, I'm Ada, your personal AI tutor. I can help you with any coding tutorial. Go ahead and ask me anything.
I have a question about this topic | {"url":"https://www.codingdrills.com/tutorial/introduction-to-sorting-algorithms/counting-sort-intro","timestamp":"2024-11-05T20:35:53Z","content_type":"text/html","content_length":"313021","record_id":"<urn:uuid:e3d321bc-01e4-4c2c-93ea-8d0428a0bb95>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00506.warc.gz"} |
This section explains how to set up and solve a generalized wave equation model. The wave equation is a hyperbolic partial differential equation (PDE) of the form
\[ \frac{\partial^2 u}{\partial t^2} = c\Delta u + f \]
where c is a constant defining the propagation speed of the waves, and f is a source term. This equation can not be solved as it is due to the second order time derivative. However, the problem can
be transformed by reformulating the wave equation as two coupled parabolic PDEs, that is
\[ \left\{\begin{array}{l} \frac{\partial u}{\partial t} = v \\ \frac{\partial v}{\partial t} = c\Delta u + f \end{array}\right. \]
This dual coupled problem can easily be implemented in FEATool with the custom equation feature. This example solves the wave equation on a unit circle, with zero boundary conditions, constant c = 1,
source term f = 0, and initial condition u(t=0) = 1 - ( x^2 + y^2 ).
How to set up and solve the wave equation with the FEATool graphical user interface (GUI) is described in the following. Alternatively, this tutorial example can also be automatically run by
selecting it from the File > Model Examples and Tutorials > Quickstart menu, or viewed as a video tutorial.
1. Start MATLAB and launch FEATool by clicking on the corresponding icon in the MATLAB Add-Ons toolbar (or type featool on the command line from the installation directory when not using FEATool as
an installed toolbox).
2. To start a new model click the New Model toolbar button, or select New Model... from the File menu.
3. In the New Model dialog box, first click on the 2D radio button in the Select Space Dimensions section, and select Custom Equation from the Select Physics drop-down menu. Leave the space
dimension names as x y, but change the dependent variable names to u v (the custom equation physics mode allows for entering an arbitrary number of dependent variables). This will enable the two
parabolic equations for the transformed and coupled wave equation problem. Finish and close the dialog box by clicking on the OK button.
The geometry consists of a unit circle with radius 1 centered at the origin (0, 0).
1. Select Circle from the Geometry menu.
2. Enter 0 0 into the center edit field, and 1 into the radius edit field.
3. Press OK to finish and close the dialog box.
4. Switch to Grid mode by clicking on the corresponding Mode Toolbar button.
The default grid may be too coarse to ensure an accurate solution. Decrease the grid size to generate a finer grid which is able to resolve the curved boundary better.
1. Enter 0.1 into the Grid Size edit field, and press the Generate button to call the grid generation algorithm.
2. Switch to Equation mode by clicking on the corresponding Mode Toolbar button.
3. Equation and material coefficients are specified in Equation/Subdomain mode. Set the initial condition u0 to 1-(x^2+y^2) and v0 to 0. Then click on the edit button to open the equation editing
dialog box.
4. In the Edit Equations dialog box enter the two coupled equations as u' - v_t = 0, and v' - c*(ux_x + uy_y) = 0 in the corresponding edit fields for u and v.
Here u and v are the dependent variables, u'/v' denotes a time derivative, and an underscore will treat it implicitly in the weak finite element formulation (for example v_t corresponds to v
multiplied with the test function for u, and ux_x is analogous to du/dx*dv_t/dx). Note, that the first equation could also be written as u' = v but then v would be evaluated explicitly in the right
hand side, by transferring it to the implicit left hand side matrix results in a linear problem which is more efficient to solve.
1. Press OK to finish the equation and subdomain settings specification.
A convenient way to define and store coefficients, variables, and expressions is using the Model Constants and Expressions functionality. The defined expressions can then be used in point, equation,
boundary coefficients, as well as postprocessing expressions, and can easily be changed and updated in a single place.
1. Click on the Constants Toolbar button and enter a new constant named c, with a value of 1 for the wave speed (this is the constant used in diffusion term of the second v equation). Press OK to
2. Press the Boundary Mode Toolbar button to change to boundary condition specification mode, and select Dirichlet conditions with a fixed value of 0 for all boundaries.
3. Switch to Solve mode by clicking on the corresponding Mode Toolbar button.
4. Press the Settings Toolbar button to open the Solver Settings dialog box, select the Time-Dependent Solver Type option, and set the Time step to 0.05
5. Press the Solve button to start the solver with the chosen settings, or press OK and then the = Toolbar button.
The solution at different time steps can be visualized by selecting the corresponding solution times, and plot options in the postprocessing settings dialog box.
1. Press the Plot Options Toolbar button.
2. Select the Height Expression check box.
3. Select 0.65 from the Available solutions/times drop-down menu.
4. Press OK to plot and visualize the selected postprocessing options.
To create an animation or video of the solution one can use the Animate/Playback Solution... option in the Post menu.
The wave equation on a circle classic PDE model has now been completed and can be saved as a binary (.fea) model file, or exported as a programmable MATLAB m-script text file (available as the
example ex_waveequation1 script file), or GUI script (.fes) file. | {"url":"https://featool.com/doc/Classic_PDE_01c_wave_equation1","timestamp":"2024-11-14T00:38:58Z","content_type":"application/xhtml+xml","content_length":"14555","record_id":"<urn:uuid:e70e5d32-aeab-4b87-82f4-6a0273d5dafe>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00428.warc.gz"} |
Cooperative Learning With Metacognitive Approach To Enhance Mathematical Critical Thinking And Problem Solving Ability, And The Relation To Self-Regulated Learning.
Anton , Noornia (2011) Cooperative Learning With Metacognitive Approach To Enhance Mathematical Critical Thinking And Problem Solving Ability, And The Relation To Self-Regulated Learning. PROCEEDINGS
International Seminar and the Fourth National Conference on Mathematics Education. ISSN 978-979-16353-7-0
P - 68.pdf
Download (103kB) | Preview
Mathematical critical thinking and problem solving ability are the ability that are expected to be obtained by students after they have done the learning mathematics in schools. In fact, most
students have difficulties in mastering these skills when they are faced with problems that require the ability of higher-order thinking skills. Implementation of cooperative learning model in which
metacognition approach developed into one of the alternatives that should be done considering the importance of metacognition as the supporting aspects of the mastery of mathematical critical
thinking and problem-solving ability. To determine the successful application of these learning models, then performed experiments in the form of research in East Jakarta to three junior high schools
with the criteria are high, medium and low, as the sampled study. At each school were conducted experiments to 3 classes that were given different treatments. In Experiment class 1 model of
cooperative approaches applied metacognition (KPM), in experiment class-2 was applied only cooperative model (KP), and was applied the conventional learning to the control class (KV). Samples are 309
students. During the implementation, students were given problems that require critical thinking and problem solving ability. By solving these kinds of problems the students trained to be introduced
and familiarized with mathematics problem solving strategies. Based on the results of data analysis, the conclusion that mathematical critical thinking and problem solving ability and self-regulated
learning differ significantly based on; early math ability of students, and the application of models of learning. Through the calculation of the value of n-gain as a function of the difference
between pretest scores and postest, found that math critical thinking and problem solving ability with students studying mathematical models of KPM higher than students who studied with KP and KV
model. Key words: metacognition learning, cooperative learning, critical thinking skills, problem solving skills, self-regulated learning
Actions (login required) | {"url":"http://eprints.uny.ac.id/1868/","timestamp":"2024-11-10T09:31:07Z","content_type":"application/xhtml+xml","content_length":"26422","record_id":"<urn:uuid:17dcde56-4b7e-4a3c-8786-621c48f8bb78>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00892.warc.gz"} |
The Force Transformation
In classical Newtonian mechanics force is defined as rate of change of momentum:
Special relativity changes the definition of both momentum and time, so the definition of force must change.
If we have inertial frames O and O', with O' moving along the positive x axis of O with speed u,
In the coordinate frame O'
From the definition of force
If all the work done by a force manifests as an increase in the kinetic energy, KE, of the particle then | {"url":"https://astarmathsandphysics.com/university-physics-notes/special-and-general-relativity/1670-the-force-transformation.html","timestamp":"2024-11-11T03:26:13Z","content_type":"text/html","content_length":"34536","record_id":"<urn:uuid:d8acead6-9202-48b6-909d-c8397a42278b>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00827.warc.gz"} |
According to KVL sum of all voltages across resistor and inductor is e
According to KVL sum of all voltages across resistor and inductor is equal to the applied
a) The first order differential equation for current (1)
b) The current at t = 0.5s
c) The expression for voltage across the resistor (VR )and voltage across the inductor (VL)
d) Time at which VR = VL
e) Using Laplace transforms find the expression for the current (i) flowing through the
circuit/nActivity 3
A series RL circuit in a power amplifier with resistance R = 500 and inductance L = 10H has a
constant voltage V = 100V applied at t = 0 by closing a switch.
Voltage Drop across Resistor (VR) is IR,
Voltage drop across the Inductor (VL) is directly proportional to the rate of change of
current passing through the inductor,
Fig: 1
Fig: 2 | {"url":"https://tutorbin.com/questions-and-answers/according-to-kvl-sum-of-all-voltages-across-resistor-and-inductor-is-equal-to-the-applied-voltage-find-a-the-first-order","timestamp":"2024-11-01T22:54:00Z","content_type":"text/html","content_length":"65372","record_id":"<urn:uuid:1768f892-93ff-4d70-bb45-8f6271590549>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00001.warc.gz"} |
American Mathematical Society
Univariate splines: Equivalence of moduli of smoothness and applications
HTML articles powered by AMS MathViewer
Math. Comp. 76 (2007), 931-945
DOI: https://doi.org/10.1090/S0025-5718-06-01920-X
Published electronically: November 27, 2006
PDF | Request permission
Several results on equivalence of moduli of smoothness of univariate splines are obtained. For example, it is shown that, for any $1\leq k\leq r+1$, $0\leq m\leq r-1$, and $1\leq p\leq \infty$, the
inequality $n^{-\nu } \omega _{k-\nu }(s^{(\nu )}, n^{-1})_p \sim \omega _{k} (s, n^{-1})_p$, $1\leq \nu \leq \min \{ k, m+1\}$, is satisfied, where $s\in \mathbb {C}^m[-1,1]$ is a piecewise
polynomial of degree $\leq r$ on a quasi-uniform (i.e., the ratio of lengths of the largest and the smallest intervals is bounded by a constant) partition of an interval. Similar results for
Chebyshev partitions and weighted Ditzian–Totik moduli of smoothness are also obtained. These results yield simple new constructions and allow considerable simplification of various known proofs in
the area of constrained approximation by polynomials and splines. Similar Articles
• Retrieve articles in Mathematics of Computation with MSC (2000): 65D07, 41A15, 26A15, 41A10, 41A25, 41A29
• Retrieve articles in all journals with MSC (2000): 65D07, 41A15, 26A15, 41A10, 41A25, 41A29
Bibliographic Information
• Kirill A. Kopotun
• Affiliation: Department of Mathematics, University of Manitoba, Winnipeg, Manitoba, R3T 2N2, Canada
• Email: kopotunk@cc.umanitoba.ca
• Received by editor(s): June 1, 2005
• Received by editor(s) in revised form: August 25, 2005
• Published electronically: November 27, 2006
• Additional Notes: The author was supported in part by NSERC of Canada.
• © Copyright 2006 American Mathematical Society
The copyright for this article reverts to public domain 28 years after publication.
• Journal: Math. Comp. 76 (2007), 931-945
• MSC (2000): Primary 65D07, 41A15, 26A15; Secondary 41A10, 41A25, 41A29
• DOI: https://doi.org/10.1090/S0025-5718-06-01920-X
• MathSciNet review: 2291843 | {"url":"https://www.ams.org/journals/mcom/2007-76-258/S0025-5718-06-01920-X/?active=current","timestamp":"2024-11-12T03:12:04Z","content_type":"text/html","content_length":"72673","record_id":"<urn:uuid:7ab39a09-3c6f-4118-9d15-398da7c37f2d>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00384.warc.gz"} |
How do you solve using the completing the square method x^2 - 30x = -125? | HIX Tutor
How do you solve using the completing the square method #x^2 - 30x = -125#?
Answer 1
$x = 5 \mathmr{and} x = 25$
One usually starts by dividing throughout by the coefficient of #x^2# and taking all #x# terms to one side. The given equation is already in this format.
Next step, take the coefficient of #x#, half it, square it, and add it to both sides.
#therefore x^2-30x+(-30/2)^2=-125+(-30/2)^2#
Now write the left hand side as a perfect square and simplify the right hand side.
#therefore (x-15)^2=-125+900/4=100#
Now take the square root on both sides and solve for #x#
#therefore x-15=+-sqrt100=+-10#
#therefore x=15+-10#
#=25 or 5#
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 2
To solve the equation (x^2 - 30x = -125) using the completing the square method, follow these steps:
1. Move the constant term to the right side of the equation: (x^2 - 30x + 125 = 0)
2. Add and subtract the square of half the coefficient of the linear term (in this case, (15^2 = 225)): (x^2 - 30x + 225 + 125 = 225)
3. Factor the perfect square trinomial: ((x - 15)^2 = 225)
4. Take the square root of both sides and solve for (x): (x - 15 = \pm \sqrt{225}) (x - 15 = \pm 15)
5. Solve for (x): (x = 15 \pm 15) (x = 15 + 15) or (x = 15 - 15)
6. Simplify: (x = 30) or (x = 0)
So, the solutions are (x = 30) or (x = 0).
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/how-do-you-solve-using-the-completing-the-square-method-x-2-30x-125-8f9af98b10","timestamp":"2024-11-05T22:22:28Z","content_type":"text/html","content_length":"571198","record_id":"<urn:uuid:c6883ef0-bea2-4c32-9e16-cce94991188d>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00283.warc.gz"} |
Introduction to ArchaeoPhases
Import data
This vignette uses data available through the ArchaeoData package which is available in a separate repository. ArchaeoData provides MCMC outputs from ChronoModel, OxCal and BCal.
## Install data package
install.packages("ArchaeoData", repos = "https://archaeostat.r-universe.dev")
Let’s use the data of Ksar Akil generated by ChronoModel (Bosch et al. 2015).
Two different files are generated by ChronoModel: Chain_all_Events.csv that contains the MCMC samples of each event created in the modeling, and Chain_all_Phases.csv that contains all the MCMC
samples of the minimum and the maximum of each group of dates if at least one group is created.
chrono_path <- "chronomodel/ksarakil"
## Read events from ChronoModel
output_events <- system.file(chrono_path, "Chain_all_Events.csv", package = "ArchaeoData")
chrono_events <- read_chronomodel_events(output_events)
## Read phases from ChronoModel
output_phases <- system.file(chrono_path, "Chain_all_Phases.csv", package = "ArchaeoData")
chrono_phases <- read_chronomodel_phases(output_phases)
See vignette("import") for more details on how to import MCMC samples.
Convergence of MCMC chains
For more details on the diagnostic of Markov chain, see Robert and Casella (2010).
To assess the agreement between the posterior distributions and the numerical approximations, three Markov chains were run in parallel by ChronoModel. For each chain, 1 000 iterations were used
during the Burn-in period, 20 batches of 500 iterations were used in the Adapt period, 100 000 iterations were drawn in the Acquire period by only 1 out of 10 were kept in order to break the
correlation structure.
From the analysis of the history plot, all Markov chains reach their equilibrium before the Acquire period. The autocorrelations of the three Markov chains are not significant, meaning the rate of
subsample (1 over 10) is enough.
Now, using the package ArchaeoPhases and the package coda, we can verify whether the MCMC samples are correctly generated by the software.
Indeed, the MCMC samples should have no autocorrelation and should have reached their equilibrium (that is the posterior density of the parameter under investigation).
The autocorrelation plots show that each of these three chains are not significant. That means that we actually generated a non correlated sample, which was the aim the MCMC process.
We can also check whether the chains reached equilibrium. For example, let’s consider the first date of the dataset.
The plot shows that the three chains corresponding to the first date reached the same stationary process.
We can test the Gelman-Rubin criterion. The expected value to confirm that all of the Markov chains reached equilibrium is 1.
#> Potential scale reduction factors:
#> Point est. Upper C.I.
#> [1,] 1 1
#> [2,] 1 1
#> [3,] 1 1
#> [4,] 1 1
#> [5,] 1 1
#> [6,] 1 1
#> [7,] 1 1
#> [8,] 1 1
#> [9,] 1 1
#> [10,] 1 1
#> [11,] 1 1
#> [12,] 1 1
#> [13,] 1 1
#> [14,] 1 1
#> [15,] 1 1
#> [16,] 1 1
#> Multivariate psrf
#> 1
The Gelman-Rubin criterion confirms that all of the Markov chains reached equilibrium.
We can also test the Geweke criterion. The expected value to confirm that all of the Markov chains reached equilibrium is strickly less than 1.
coda::geweke.diag(coda_events[, 1, ], frac1 = 0.1, frac2 = 0.5)
#> [[1]]
#> Fraction in 1st window = 0.1
#> Fraction in 2nd window = 0.5
#> var1
#> 0.4978
#> [[2]]
#> Fraction in 1st window = 0.1
#> Fraction in 2nd window = 0.5
#> var1
#> 0.7736
#> [[3]]
#> Fraction in 1st window = 0.1
#> Fraction in 2nd window = 0.5
#> var1
#> 0.7666
The Geweke criterion criterion confirms that all of the Markov chains reached equilibrium.
ChronoModel generated correct samples of the posterior distribution. Now gathering the three chains, a total of 30 000 iterations was collected in order to give estimations of the posterior
distribution of each parameter.
Analysis of a series of dates
Tempo Plot
The tempo plot has been introduced by Dye (2016). See Philippe and Vibet (2020) for more statistical details.
The tempo plot is one way to measure change over time: it estimates the cumulative occurrence of archaeological events in a Bayesian calibration. The tempo plot yields a graphic where the slope of
the plot directly reflects the pace of change: a period of rapid change yields a steep slope and a period of slow change yields a gentle slope. When there is no change, the plot is horizontal. When
change is instantaneous, the plot is vertical.
## Warning: this may take a few seconds
tp <- tempo(chrono_events, level = 0.95, count = FALSE)
From these graphs, we can see that the highest part of the sampled activity is dated between -45 000 to -35 000 but two dates are younger, at about -32 000 and -28 000.
Activity Plot
The activity plot displays the derivative of the Bayes estimate of the Tempo plot. It is an other way to see changes over time.
Occurrence Plot
The Occurrence plot calculates the calendar date \(t\) corresponding to the smallest date such that the number of events observed before \(t\) is equal to \(k\), for \(k = [(1, 16)]\). The Occurrence
plot draws the credible intervals or the highest posterior density (HPD) region of those dates associated to a desired level of confidence.
Analysis of groups of dates
Groups of dates
A group of dates (phase) is defined by the date of the minimum and the date of the maximum of the group. In this part, we will use the data containing these values for each group of dates.
## Build phases from events
p <- list(EPI = 1, UP = 2:4, Ahmarian = 5:15, IUP = 16)
chrono_groups <- phases(chrono_events, groups = p)
all(chrono_groups == chrono_phases)
#> [1] TRUE
We can estimate the time range of a group of dates as the shortest interval that contains all the dates of the group at a given confidence level (Philippe and Vibet 2020).
The following code gives the endpoints of the time range of all groups of dates of Ksar Akil data at a given confidence level.
bound <- boundaries(chrono_groups, level = 0.95)
#> start end duration
#> EPI -28978.53 -26969.82 2009.709
#> UP -38570.37 -29368.75 9202.620
#> Ahmarian -42168.47 -37433.31 4736.161
#> IUP -43240.37 -41161.00 2080.371
The time range interval of the group of dates is a way to summarize the estimation of its minimum, the estimation of its maximum and their uncertainties at the same time.
Succession of groups
We may also be interested in a succession of phases. This is actually the case of the succession of IUP, Ahmarian, UP and EPI that are in stratigraphic order. Hence, we can estimate the transition
interval and, if it exists, the gap between these successive phases.
Transistions between successive groups
The transition interval between two successive phases is the shortest interval that covers the end of the oldest group of dates and the start of the youngest group of dates. The start and the end are
estimated by the minimum and the maximum of the dates included in the group of dates. It gives an idea of the transition period between two successive group of dates. From a computational point of
view this is equivalent to the time range calculated between the end of the oldest group of dates and the start of the youngest group of dates.
trans <- transition(chrono_groups, level = 0.95)
#> start end duration
#> UP-EPI -31479.79 -26905.04 4575.756
#> Ahmarian-EPI -39138.82 -27122.05 12017.766
#> IUP-EPI -43487.53 -26866.99 16621.537
#> Ahmarian-UP -39118.07 -36741.08 2377.983
#> IUP-UP -43395.89 -36480.26 6916.631
#> IUP-Ahmarian -43212.31 -40733.77 2479.539
Gap between successive groups
Successive phases may also be separated in time. Indeed there may exist a gap between them. This testing procedure check whether a gap exists between two successive groups of dates with fixed
probability. If a gap exists, it is an interval that covers the end of one group of dates and the start of the successive one with fixed posterior probability.
hia <- hiatus(chrono_groups, level = 0.95)
#> start end duration
#> UP-EPI -29188.56 -28961.79 227.7663
#> Ahmarian-EPI -37368.33 -28884.37 8484.9583
#> IUP-EPI -41220.64 -28814.70 12406.9352
#> IUP-UP -41282.64 -38421.49 2862.1447
At a confidence level of 95%, there is no gap between the succession of phases IUP, Ahmarian and UP, but there exists one of 203 years between phase UP and phase EPI. | {"url":"http://ctan.mirror.garr.it/mirrors/CRAN/web/packages/ArchaeoPhases/vignettes/ArchaeoPhases.html","timestamp":"2024-11-08T21:28:40Z","content_type":"text/html","content_length":"278965","record_id":"<urn:uuid:cb69c2c8-8389-4e95-a632-d5642a9cc25a>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00243.warc.gz"} |
Hubble’s Law in context of redshift distance
Hubble's Law in context of redshift distance
06 Oct 2024
Hubble’s Law and the Redshift-Distance Relationship
In 1929, Edwin Hubble proposed a fundamental relationship between the velocity of galaxies and their distances from us, known as Hubble’s Law. This law has been instrumental in understanding the
expansion of the universe and has far-reaching implications for cosmology. In this article, we will explore the theoretical framework behind Hubble’s Law and its connection to redshift-distance
The observation that galaxies are moving away from us at speeds proportional to their distances was a groundbreaking discovery in the field of astronomy. This phenomenon is known as the expansion of
the universe. Hubble’s Law provides a mathematical description of this relationship, which has been extensively tested and confirmed through various observations.
Hubble’s Law
The law states that the velocity (v) of a galaxy is directly proportional to its distance (d) from us:
v = H * d
where H is the Hubble constant. This equation implies that galaxies farther away are moving faster, which is consistent with the observation that the universe is expanding.
Redshift-Distance Relationship
The redshift of light emitted by a galaxy is a measure of its velocity relative to us. As galaxies move away from us, their light becomes shifted towards the red end of the spectrum, a phenomenon
known as redshift. The redshift (z) can be related to the distance (d) using the following formula:
z = H * d / c
where c is the speed of light.
Implications and Conclusion
Hubble’s Law has profound implications for our understanding of the universe. It provides a fundamental connection between the velocity of galaxies and their distances, which has been extensively
tested through observations. The redshift-distance relationship offers a powerful tool for measuring the expansion history of the universe. Further research in this area will continue to refine our
understanding of the cosmos.
• Hubble, E. (1929). A relation between distance and radial velocity among extra-galactic nebulae. Proceedings of the National Academy of Sciences, 15(3), 168-173.
• Freedman, W. L., et al. (2001). Final results from the Hubble Space Telescope Key Project to measure the Hubble constant. The Astrophysical Journal, 553(2), 47-64.
Note: This article is a theoretical framework and does not provide numerical examples or specific values for the Hubble constant or other parameters.
Related articles for ‘redshift distance’ :
• Reading: Hubble’s Law in context of redshift distance
Calculators for ‘redshift distance’ | {"url":"https://blog.truegeometry.com/tutorials/education/698795f1e83bc3262fc07d2886f49c2b/JSON_TO_ARTCL_Hubble_s_Law_in_context_of_redshift_distance.html","timestamp":"2024-11-08T12:16:36Z","content_type":"text/html","content_length":"16695","record_id":"<urn:uuid:55334022-5cb6-4f94-a59b-3022b331e4a3>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00009.warc.gz"} |
Maxwell and Classical Electromagnetism - Learn - ScienceFlip
Maxwell and Classical Electromagnetism – Learn
Maxwell’s Unifying Theory for Electromagnetism
James Clerk Maxwell is famous for developing his equations which explained a link between electricity, magnetism and light. His Theory on Electromagnetism provided a unifying theory that linked all
the work that had previously been done on electricity and magnetism. Some of these works included:
• Danish physicist Hans Orsted observed a magnetised compass needle deflected from its alignment to the Earth’s magnetic field when a nearby electric circuit was switched on and off. This showed
that a wire carrying an electric current generates a magnetic field, and it revealed the first evidence of a relationship between electricity and magnetism.
• English physicist Michael Faraday demonstrated in the 1830’s that changing magnetic fields produced electric fields.
Maxwell’s work quantified these relationships through a precise mathematical study of electric and magnetic effects. His work, known as Maxwell’s equations, show that electric and magnetic fields
move at a speed that closely match experimental estimates of the speed of light. Maxwell went on to develop a comprehensive theory of electromagnetism which explained that light is a form of
electromagnetic radiation (EMR). He also predicted that a large range of frequencies was possible for different forms of EMR beyond the visible spectrum.
Maxwell’s Equations
The equations that Maxwell developed in his theory of electromagnetism are based on vector calculus which is beyond the scope of this course and will not be analysed quantitatively here. The
equations are named after physicists who played a significant role in the work that led to Maxwell developing his theory. The laws are:
• Law 1 – Gauss’s law: This law describes the electric flux produced by electric charges.
• Law 2 – Gauss’s law for magnetic fields: This second law is very similar to the first but applies to magnetic rather than electric fields.
• Law 3 – Faraday’s law: This describes the electric field induced by a changing magnetic field.
• Law 4 – Ampère-Maxwell law: This is a little like Faraday’s law but it deals with changing electric flux.
Prediction of Electromagnetic Waves
Maxwell’s theory of electromagnetism combined all the theory and observations that had been developed in relation to electrical and magnetic physics and summarised this using four equations. He also
demonstrated mathematically that an electromagnetic wave was expected. Qualitatively, Maxwell’s equations summarise the interactions between electric and magnetic fields and this led to the
prediction of an electromagnetic wave which can propagate through space.
He considered that if a changing electric field is produced by moving a charged particle backwards and forwards, then this changing electric field will produce a magnetic field at right angles to the
original electric field. The changing magnetic field would then also produce a changing electric field and this cycle could be repeated infinitely. The result of this would be two mutually
propagating fields. The electromagnetic radiation would be self-propagating and would extend outwards into space as an electromagnetic wave of a fixed frequency. Further to this, both the electric
and magnetic fields would necessarily oscillate at the same frequency. The diagram below illustrates an electric field perpendicular to a magnetic field propagating through space as an
electromagnetic wave. The fields are perpendicular to the direction of propagation.
Any charge that is exposed to electromagnetic radiation will respond to the electric field in the radiation and be accelerated according to F = qE. Further to this, any charge will experience a force
F from a magnetic field according to F = qvBsinθ. The result of this is that electromagnetic radiation can be transformed into kinetic energy.
Prediction of Velocity
Maxwell’s calculations provided a theoretical value for the speed at which an electromagnetic wave should propagate through space. This speed so closely matched experimental values for the speed of
light that it led physicists to the idea that light was a form of electromagnetic radiation. The speed of light is accepted to be 299792458 m/s. In calculations, the speed of light, c, is often
accepted as 3×10^8 m/s.
For EMR, there is a special variation of the wave equation (v=fλ) that relates the speed of EMR/light and the frequency and wavelength of any electromagnetic wave: c=fλ.
What is the frequency of red light which has a wavelength of 620nm? | {"url":"https://www.scienceflip.com.au/subjects/physics/thenatureoflight/learn1/","timestamp":"2024-11-11T03:00:31Z","content_type":"text/html","content_length":"70217","record_id":"<urn:uuid:d818a1da-22b6-491b-90b3-ecbb16fa7e7a>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00196.warc.gz"} |
金融编程代写 | ECMT 2130 Financial Econometrics - EasyDue™️
Semester 2, 2019
ECMT 2130 Financial Econometrics Final Project
• You need to (i) submit your answers to Section 2 Proposal only to [Final Project (Proposal)] into Canvas1 and (ii) upload your data file (excel file) and other excel files used for estimation to
[Final project (Excel zip file)] into Canvas2 (zip all the files and upload the zip file only).
• Write the proposal concisely (no more than 10 pages).
• You are NOT allowed to discuss this assignment with your classmates. You are not allowed to copy a classmate’s assignment (or to borrow the bulk of the material from a classmate’s assignment). You
are required to perform the full assignment on your own and hand in independently.
For this project, you will analyze the behaviour of 10 stocks listed on the Australia stock market to make calculations and answer questions. The project has two stages. The first stage is a data
work, for which you will choose the stocks and make basic return calculations. In the second stage, you calculate summary statistics, and write up a short discussion about your data. You will make
formal portfolio theory calculations and run regressions for some tests. Your answers should be typed.
1 Preliminary Data Work
1. [Downloading Data]
We collect price data for 10 stocks listed on the Australian stock market over the period from December 2008 to December 2018. You will also collect price data for a broad market index (e.g., the ASX
200) and yield data for a government bond.
(a) Go to the Yahoo Finance link https://au.finance.yahoo.com.
(b) You need to have 10 risky stocks with 9 stocks from (a) common companies and one stock from (b) individual company in Table 1. The last digit of your SID number determines the 10th stock from (b)
individual company in Table 1. For example, if you SID is 47033566, you select Insurance Australia (IAG.AX) in number 6.
(c) In the Enter Symbol box at the top, type the symbol of the stock for which you want data. For example, the symbol for Commonwealth Bank in CBA.
(d) Record the current market capitalization (“Market Cap”).
(e) On the left, under Quotes, click on Historical Prices.
1Go to Canvas → Modules → Final Project and then click Final project (proposal) for uploading your file.
2Go to Canvas → Modules → Final Project and then click Final project (excel zip file) for uploading your file.
Table 1: List of stocks
(a) common companies
Code Company
CBA.AX Commonwealth Bank
CSL.AX CSL Limited
BHP.AX BHP Group Limited
WBC.AX Westpac Banking Corp
NAB.AX National Aust. Bank
ANZ.AX ANZ Banking Group Limited
WOW.AX Woolworths Group Limited
TLS.AX Telstra Corporation
WES.AX Wesfarmers Limited
(b) individual company
Number Code Company
0 MQG.AX Macquarie Group Limited
1 RIO.AX RIO Tinto Limited
2 WPL.AX Woodside Petroleum
3 NCM.AX Newcrest Mining
4 FMG.AX Fortescue Metals Group
5 ALL.AX Aristocrat Leisure
6 IAG.AX Insurance Australia
7 STO.AX Santos Limited
8 BXB.AX Brambles Limited
9 SUN.AX Suncorp Group Limited
(f) In the SET DATE RANGE box, select monthly and change the start date to December 31, 2008 and the end date to December 31, 2018. Click Get Prices.
(g) This will bring up the table with price data for the selected time period. Make sure that the data are available at the beginning of the sample period.
(h) At the bottom of the screen, select Download to Spreadsheet. Change the name of the file to the symbol name (e.g., cba.csv). (Note: csv stands for comma separated value file, which is easily read
by Excel.)
(i) Open the Excel file. The file will have the price data in descending order (most recent data is at the top). You will want the data in ascending order. To do this, highlight all of the data
(including column headers). Then, select Data/Sort, which brings up the Sort dialogue box. The default should have sort by date with ascending order selected.
If so, click OK. Otherwise, select those options and then click OK.
(j) Save the file. Note that the data you will use in the project are the Adj. Close data.
(k) When you have done this for 10 stocks, do the same for the market index. The symbol for the ASX 200 is “AXJO”. Name the file “axjo.csv”. You can also find it by selecting ˆthe Indices link from
the initial Yahoo! Financial Quotes page. (As with individual stocks, make sure to use adjusted price measure that includes dividends.)
(l) Finally, you will need data for the risk-free asset. Again, you can get data from the FRED https://fred.stlouisfed.org/series/IR3TBB01AUM156N. You want yield data for the 3-month bill yield. For
this series, you only need data from January 2009 to December, 2018. Name the file “tbill.csv”.
2. [Return Calculations]
The next thing to do is to calculate continuously compounded returns for the ten stocks, the market index, and the risk-free rate. For the price data, the procedure is standard. Compute the
continuously compounded monthly returns ln(Pt/Pt−1). (Note: you will not have a return for Dec. 2008. The first return will be for Jan. 2009.)
For the risk-free rate, the given data is not price data, but annualized percentage yields. To convert to a continuously compounded return, you need to first divide by 100 to get the yield as a
decimal. Then, take the natural log of (1+yield). Finally, you can convert to a monthly continuously compounded return by dividing by 12. Also, construct an equalweighted portfolio for your 10 stocks
(i.e., construct a return series that is an equal-weighted average of the returns for the 10 stocks) and a value-weighted portfolio for your 10 stocks (i.e.,construct a return series that is a
weighted average of the returns for the 10 stocks, where the weights are proportional to the current Market Cap for each firm). For the value-weighted portfolio, a specific firm’s implicit weight is
its market cap divided by the sum of the market caps for all ten firms (collect all ten market caps on the same day).
2 Proposal
1. [Summary Statistics]
(a) Compute time plots of each of the 12 return series (the 10 individual stock returns and the equal-weighted and value-weighted returns). When reporting plots for each return,report along with
ASX200 return and risk-free return on each graph. Also, report the Market Cap numbers for each firm and the corresponding weights used in constructing the value-weighted return series. Provide time
plots of the data but try to convey this information in an efficient way by having multiple panels in a given figure. If there are large outliers for any of the series, try to determine what happened
at the time. Please include this material in your final project too.
(b) Compute histograms for each of the 14 series (the 10 individual stock returns, the equaland value-weighted returns, the ASX 200 return, and the risk-free return). Do they look
Normal? Provide histograms. Make the presentation of graphs as concise as possible (i.e., use multiple panels per page).
(c) Compute mean, variance, standard deviation, skewness, and kurtosis for each of the 14 series. Report kurtosis (not excess kurtosis). (Note: Usually, we think of a normal distribution as having a
kurtosis of 3. For reporting purposes use the standard definition of kurtosis. )
If the mean is negative for any of the series, compute the median. Is the median positive?
If so, why do you think there is a difference between the mean and median? Also,compare the standard standard deviations of the other 13 series to the standard deviation of the ASX 200 return series.
Provide tables reporting the summary statistics.
(d) Compute the (10×10) sample covariance matrix for 10 return series for the individual stocks.
(e) Compute the Sharpe Ratio for the 10 return series for the individual stocks and for the ASX 200. You can use the average t-bill yield to proxy for the risk-free rate in these calculations. This
is equivalent to using the mean excess return for each stock to estimate its risk premium. For the mean returns, use the median estimate if the excess mean based on the sample average is negative. If
it is still negative for the median estimate, set the excess mean to zero.
2. [Test for Random-walk]
For individual stocks, i = 1, . . . , 10, you want to test H0 : φi = 1 for ln(Pi,t) = αi + βi × t + φi
ln(Pi,t−1) + ei,t, ei,t ∼ iid(0, σ2). (1)
∆ ln(Pi,t) = αi + βi × t + γi
ln(Pi,t−1) + ei,t, ei,t ∼ iid(0, σ2). (2)
Construct t-statistics for H0 : φi = 1( or γi = 0). To do this, you will need to calculate the OLS standard error. Show the t-statistic for each stock. Would you reject the null hypothesis using a 5%
critical value for a one tailed test? Note that the distribution for this test statistic is different from Student-t distribution and use the critical value for the 5% test of -3.41.3 Discuss.
3. [Portfolio Theory Calculations]
(a) Using the estimated means and the sample covariance matrix, compute the global minimum variance portfolio for the 10 risky assets.
(b) Using the highest historical mean among 10 stocks as a target, compute a second efficient portfolio.
(c) Using the two efficient portfolios computed in a. and b., compute the Markowitz bullet (portfolio frontier for 10 risky assets).
(d) Using the mean monthly risk-free return, compute the tangency portfolio and the efficient set for the 10 risky assets and a risk-free asset.
(e) Compare weights to the weights for the value-weighted portfolio based on current market capitalization. Discuss why you might expect the weights to be related and why you might expect the weights
to be different. (Hint: think about when the market capitalization is measured.)
4. [CAPM Estimations]
(a) Using the risk-free rate data, calculate excess returns. Run the CAPM regression for each stock.
(b) For each stock, test the hypothesis that the intercept term αi = 0 versus the alternative that it does not. Again, use a 5% test. Also, calculate the 95% confidence intervals for the alphas.
If your regression equation does not have an intercept, yt = ρyt−1 + t, the 5% critical value is -1.94.
If it has an intercept yt = c + ρyt−1 + t, then the 5% critical value is -2.86.
(c) Test the CAPM by running a CAPM regression with the excess return on your tangency portfolio as the dependent variable (instead of the return on a given asset, as in part
b.). Is α = 0? | {"url":"https://easy-due.com/daixie-ecmt-2130-financial-econometrics/","timestamp":"2024-11-03T05:34:50Z","content_type":"text/html","content_length":"78134","record_id":"<urn:uuid:def1d056-1e0d-4f11-870b-134922bde78d>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00615.warc.gz"} |
How do you simplify \frac{(3ab)^2(4a^3b^4)^3}{(6a^2b)^4}? | HIX Tutor
How do you simplify #\frac{(3ab)^2(4a^3b^4)^3}{(6a^2b)^4}#?
Answer 1
since #4=2^2# and #6=3cdot2#,
by distributing the powers,
#={3^2a^2b^2 cdot 2^6a^9b^{12}}/{3^4 2^4a^8b^4}#
by collecting like factors,
#={3^2 2^6a^{11}b^{14}}/{3^4 2^4a^8b^4}#
By cancelling out #3^2#, #2^4#, #a^8#, and #b^4#,
I hope that this was helpful.
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 2
To simplify the expression, first expand each term raised to a power, then simplify:
[ \frac{(3ab)^2(4a^3b^4)^3}{(6a^2b)^4} = \frac{(9a^2b^2)(64a^9b^{12})}{1296a^8b^4} = \frac{576a^{11}b^{14}}{1296a^8b^4} = \frac{a^{11-8}b^{14-4}}{2^4} = \frac{a^3b^{10}}{16} ]
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/how-do-you-simplify-frac-3ab-2-4a-3b-4-3-6a-2b-4-8f9af956ad","timestamp":"2024-11-11T08:45:04Z","content_type":"text/html","content_length":"577046","record_id":"<urn:uuid:bf446e7e-1ec3-43ca-96cd-71b036a89424>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00627.warc.gz"} |
maths teacher circles Archives - Maths Teacher Circles
Music and maths. They’re not often talked about in the same breath. One’s an art, the other’s a science. One is known for its creative expression, the other has a reputation that’s anything but that.
Chalk and cheese. But, are they so different?
Two Questions to Change the Way Your Students See Maths
These are questions that will help your students deal with uncertainty, cope with challenge, and be persistent. And they’re questions that can be used by any learner – no matter their age or the
mathematical content they’re tackling.
A Revealing Memory Quiz
Often feel like you’re just ‘trying to get through the curriculum’? Here’s a way to change the way we look at curriculum content to make it far easier to go deeper with new skills and concepts – AND
be less rushed.
The Glue That Holds All Maths Learning Together
We can’t just teach students to copy processes or lines of working out. We can’t just show them new definitions. That’s not enough. So, what’s missing? Mathematical reasoning.
5 Strategies for Improving Student Reasoning
Why do students forget so much of what they learn? Endless exercises, examples & definitions aren’t enough. What’s missing? Reasoning.
Maths is Colourful – Not Black & White
Maths is a tool for making sense of our world. It’s used for good and for bad. So, when we only look at it in black & white (not in colour), we seriously limit the possibilities it can offer.
5 Strategies for Successful Problem Solving
Problem solving is tough for many students – yet can change the way they see maths. Here are 5 strategies for helping all students have success in problem solving.
The Maths a One Year-Old Knows (& why this matters for later learning)
The MOST learning in our lives happens before we turn two – learning that’s not always obvious, but is highly significant to our later development.
How To Turn 1 Short Maths Problem Into An Entire Lesson
Wish you could spend less time finding good tasks to use for lessons? Here’s how you can turn one short maths problem into an entire lesson that will get your students thinking.
Why Symmetry Can Be A Game-changer For Your Students
Nestled away, in a room labelled Geometry, symmetry is often ticked off and forgotten. Yet, viewed differently, it becomes a game-changing concept that can powerfully change the way students see | {"url":"https://www.mathsteachercircles.org/blog/tag/maths-teacher-circles/","timestamp":"2024-11-05T19:45:27Z","content_type":"text/html","content_length":"209850","record_id":"<urn:uuid:9442d3a8-b5fe-4b35-b483-0e92db928596>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00059.warc.gz"} |
Mathematical Biology
Mathematical biology is an interdisciplinary field that uses mathematics to understand biological processes. This area of study seeks to model, analyze, interpret, and predict various biological
phenomena by means of both novel and existing mathematical techniques. Its scope of application ranges from the microscopic level, such as cellular processes and genetic networks, to the macroscopic
level, including the dynamics of organisms, populations, ecosystems, and evolutionary biology. By formulating mathematical models, mathematicians can describe biological systems, predict their
behavior under different conditions, and gain insights into their underlying mechanisms. These models can take the form of ordinary and partial differential equations, stochastic processes,
statistical models, and computational simulations, allowing for a quantitative understanding of complex biological interactions.Specific areas of interest in the Department include the following
diverse topics: evolutionary biology and ecology, modeling of cancer, virus dynamics/epidemiology, collective and spatiotemporal dynamics of bacterial colonies/biofilms, soft and living matter,
epigenetic cell memory, genetic networks, stochastic biochemical reaction networks, models and simulations of bimolecular conformational changes and molecular recognition with application to drug
design, computational studies of cell shapes and movements, biological pattern formation, and developmental biology. The studies of complex social phenomena also belong under the scope of
mathematical life sciences, and include modeling of evolution of language, learning, categorization, and human behavior in general. | {"url":"https://www.math.ucsd.edu/index.php/research/mathematical-biology","timestamp":"2024-11-02T18:45:17Z","content_type":"text/html","content_length":"46777","record_id":"<urn:uuid:84b0decc-fadf-4fda-990b-636ffe8aa5e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00528.warc.gz"} |
ML Aggarwal Class 10 Solutions for ICSE Maths Chapter 13 Similarity Ex 13.1
These Solutions are part of ML Aggarwal Class 10 Solutions for ICSE Maths. Here we have given ML Aggarwal Class 10 Solutions for ICSE Maths Chapter 13 Similarity Ex 13.1
More Exercises
Question 1.
State which pairs of triangles in the figure given below are similar. Write the similarity rule used and also write the pairs of similar triangles in symbolic form (all lengths of sides are in cm):
(i) In ∆ABC and PQR
Question 2.
It is given that ∆DEF ~ ∆RPQ. Is it true to say that ∠D = ∠R and ∠F = ∠P ? Why?
∆DEF ~ ∆RPQ
∠D = ∠R and ∠F = ∠Q not ∠P
No, ∠F ≠ ∠P
Question 3.
If in two right triangles, one of the acute angle of one triangle is equal to an acute angle of the other triangle, can you say that the two triangles are similar? Why?
In two right triangles,
one of the acute angles of the one triangle is
equal to an acute angle of the other triangle.
The triangles are similar. (AAA axiom)
Question 4.
In the given figure, BD and CE intersect each other at the point P. Is ∆PBC ~ ∆PDE? Give reasons for your answer.
In the given figure, two line segments intersect each other at P.
In ∆BCP and ∆DEP
∠BPC = ∠DPE
Question 5.
It is given that ∆ABC ~ ∆EDF such that AB = 5 cm, AC = 7 cm, DF = 15 cm and DE = 12 cm.
Find the lengths of the remaining sides of the triangles.
∆ABC ~ ∆EDF
AB = 5 cm, AC = 7 cm, DF = 15 cm and DE = 12 cm
Question 6.
(a) If ∆ABC ~ ∆DEF, AB = 4 cm, DE = 6 cm, EF = 9 cm and FD = 12 cm, then find the perimeter of ∆ABC.
(b) If ∆ABC ~ ∆PQR, Perimeter of ∆ABC = 32 cm, perimeter of ∆PQR = 48 cm and PR = 6 cm, then find the length of AC.
(a) ∆ABC ~ ∆DEF
AB = 4 cm, DE = 6 cm, EF = 9 cm and FD = 12 cm
Question 7.
Calculate the other sides of a triangle whose shortest side is 6 cm and which is similar to a triangle whose sides are 4 cm, 7 cm and 8 cm.
Let ∆ABG ~ ∆DEF in which shortest side of
∆ABC is BC = 6 cm.
In ∆DEF, DE = 8 cm, EF = 4 cm and DF = 7 cm
Question 8.
(a) In the figure given below, AB || DE, AC = , 3 cm, CE = 7.5 cm and BD = 14 cm. Calculate CB and DC.
(b) In the figure (2) given below, CA || BD, the lines AB and CD meet at G.
(i) Prove that ∆ACO ~ ∆BDO.
(ii) If BD = 2.4 cm, OD = 4 cm, OB = 3.2 cm and AC = 3.6 cm, calculate OA and OC.
(a) In the given figure,
AB||DE, AC = 3 cm, CE = 7.5 cm, BD = 14 cm
Question 9.
(a) In the figure
(i) given below, ∠P = ∠RTS.
Prove that ∆RPQ ~ ∆RTS.
(b) In the figure (ii) given below,
∠ADC = ∠BAC. Prove that CA² = DC x BC.
(a) In the given figure, ∠P = ∠RTS
To prove : ∆RPQ ~ ∆RTS
Proof : In ∆RPQ and ∆RTS
∠R = ∠R (common)
∠P = ∠RTS (given)
∆RPQ ~ ∆RTS (AA axiom)
Question 10.
(a) In the figure (1) given below, AP = 2PB and CP = 2PD.
(i) Prove that ∆ACP is similar to ∆BDP and AC || BD.
(ii) If AC = 4.5 cm, calculate the length of BD.
(b) In the figure (2) given below,
∠ADE = ∠ACB.
(i) Prove that ∆s ABC and AED are similar.
(ii) If AE = 3 cm, BD = 1 cm and AB = 6 cm, calculate AC.
(c) In the figure (3) given below, ∠PQR = ∠PRS. Prove that triangles PQR and PRS are similar. If PR = 8 cm, PS = 4 cm, calculate PQ.
In the given figure,
AP = 2PB, CP = 2PD
To prove:
Question 11.
In the given figure, ABC is a triangle in which AB = AC. P is a point on the side BC such that PM ⊥ AB and PN ⊥ AC. Prpve that BM x NP = CN x MP.
In the given figure, ABC in which AB = AC.
P is a point on BC such that PM ⊥ AB and PN ⊥ AC
To prove : BM x NP = CN x MP
Question 12.
Prove that the ratio of the perimeters of two similar triangles is the same as the ratio of their corresponding sides.
Given : ∆ABC ~ ∆PQR .
To prove : Ratio in their perimeters k
the same as the ratio in their corresponding sides.
Question 13.
In the given figure, ABCD is a trapezium in which AB || DC. The diagonals AC and BD intersect at O. Prove that \(\frac { AO }{ OC } =\frac { BO }{ OD } \)
Using the above result, find the value(s) of x if OA = 3x – 19, OB = x – 4, OC = x – 3 and OD = 4.
ABCD is a trapezium in which AB || DC
Diagonals AC and BD intersect each other at O.
Question 14.
In ∆ABC, ∠A is acute. BD and CE are perpendicular on AC and AB respectively. Prove that AB x AE = AC x AD.
In ∆ABC, ∠A is acute
BD and CE are perpendiculars on AC and AB respectively
Question 15.
In the given figure, DB ⊥ BC, DE ⊥ AB and AC ⊥ BC. Prove that \(\frac { BE }{ DE } =\frac { AC }{ BC } \)
In the given figure, DB ⊥ BC, DE ⊥ AB and AC ⊥ BC
To prove : \(\frac { BE }{ DE } =\frac { AC }{ BC } \)
Proof: In ∆ABC and ∆DEB
Question 16.
(a) In the figure (1) given below, E is a point on the side AD produced of a parallelogram ABCD and BE intersects CD at F. show that ∆ABE ~ ∆CFB.
(b) In the figure (2) given below, PQRS is a parallelogram; PQ = 16 cm, QR = 10 cm. L is a point on PR such that RL : LP = 2 : 3. QL produced meets RS at M and PS produced at N.
(i) Prove that triangle RLQ is similar to triangle PLN. Hence, find PN.
(ii) Name a triangle similar to triangle RLM. Evaluate RM.
(a) In the given figure, ABCD is a ||gm
E is a point on AD and produced
and BE intersects CD at F.
To prove : ∆ABE ~ ∆CFB
Proof : In ∆ABE and ∆CFB
∠A = ∠C (opposite angles of a ||gm)
∠ABE = ∠BFC (alternate angles)
∆ABE ~ ∆CFB (AA axiom)
(b) In the given figure, PQRS is a ||gm PQ = 16 cm,
QR = 10 cm
L is a point on PR such that
Question 17.
The altitude BN and CM of ∆ABC meet at H. Prove that
(i) CN . HM = BM . HN .
(ii) \(\frac { HC }{ HB } =\sqrt { \frac { CN.HN }{ BM.HM } } \)
(iii) ∆MHN ~ ∆BHC.
In the given figure, BN ⊥ AC and CM ⊥ AB of ∆ABC
which intersect each other at H.
To prove:
(i) CN.HM = BM.HN
(ii) \(\frac { HC }{ HB } =\sqrt { \frac { CN.HN }{ BM.HM } } \)
(iii) ∆MHN ~ ∆BHC.
Construction: Join MN
Question 18.
In the given figure, CM and RN are respectively the medians of ∆ABC and ∆PQR. If ∆ABC ~ ∆PQR, prove that:
(i) ∆AMC ~ ∆PNR
(ii) \(\frac { CM }{ RN } =\frac { AB }{ PQ } \)
(iii) ∆CMB ~ ∆RNQ.
In the given figure, CM and RN are medians of ∆ABC and ∆PQR
respectively and ∆ABC ~ ∆PQR
To prove:
(i) ∆AMC ~ ∆PNR
Question 19.
In the given figure, medians AD and BE of ∆ABC meet at the point G, and DF is drawn parallel to BE. Prove that
(i) EF = FC
(ii) AG : GD = 2 : 1
In the given figure,
AD and BE are the medians of ∆ABC
intersecting each other at G
DF || BE is drawn
To prove :
Question 20.
(a) In the figure given below, AB, EF and CD are parallel lines. Given that AB =15 cm, EG = 5 cm, GC = 10 cm and DC = 18 cm. Calculate
(i) EF
(ii) AC.
(b) In the figure given below, AF, BE and CD are parallel lines. Given that AF = 7.5 cm, CD = 4.5 cm, ED = 3 cm, BE = x and AE = y. Find the values of x and y.
In the given figure,
AB || EF || CD
AB = 15 cm, EG = 5 cm, GC = 10 cm and DC = 18 cm
Calculate :
Question 21.
In the given figure, ∠A = 90° and AD ⊥ BC If BD = 2 cm and CD = 8 cm, find AD.
In ∆ABC, we have ∠A = 90°
Also, AD ⊥ BC
In ∆ABC, we have,
∠BAC = 90°
⇒ ∠BAD + ∠DAC = 90°…..(i)
Question 22.
A 15 metres high tower casts a shadow of 24 metres long at a certain time and at the same time, a telephone pole casts a shadow 16 metres long. Find the height of the telephone pole.
Height of a tower AB = 15 m
and its shadow BC = 24 m
At the same time and position
Let the height of a telephone pole DE = x m
and its shadow EF = 16 m
Question 23.
A street light bulb is fixed on a pole 6 m above the level of street. If a woman of height casts a shadow of 3 m, find how far she is away from the base of the pole?
Height of height pole(AB) = 6m
and height of a woman (DE) = 1.5 m
Here shadow EF = 3 m .
Hope given ML Aggarwal Class 10 Solutions for ICSE Maths Chapter 13 Similarity Ex 13.1 are helpful to complete your math homework.
If you have any doubts, please comment below. Learn Insta try to provide online math tutoring for you. | {"url":"https://ncertmcq.com/ml-aggarwal-class-10-solutions-for-icse-maths-chapter-13-ex-13-1/","timestamp":"2024-11-15T03:36:47Z","content_type":"text/html","content_length":"100431","record_id":"<urn:uuid:7ceaab40-eee3-4dba-aa3a-4f47785e2f8f>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00457.warc.gz"} |
Nineteen Papers on Algebraic Semigroupssearch
Item Successfully Added to Cart
An error was encountered while trying to add the item to the cart. Please try again.
Please make all selections above before adding to cart
Nineteen Papers on Algebraic Semigroups
eBook ISBN: 978-1-4704-3350-5
Product Code: TRANS2/139.E
List Price: $165.00
Individual Price: $132.00
Click above image for expanded view
Nineteen Papers on Algebraic Semigroups
eBook ISBN: 978-1-4704-3350-5
Product Code: TRANS2/139.E
List Price: $165.00
Individual Price: $132.00
• American Mathematical Society Translations - Series 2
Volume: 139; 1988; 210 pp
MSC: Primary 20; Secondary 94
This volume contains papers selected by leading specialists in algebraic semigroups in the U.S., the United Kingdom, and Australia. Many of the papers strongly influenced the development of
algebraic semigroups, but most were virtually unavailable outside the U.S.S.R. Written by some of the most prominent Soviet researchers in the field, the papers have a particular emphasis on
semigroups of transformations. Boris Schein of the University of Arkansas is the translator.
□ Articles
□ A. Ya. Aĭzenshtat — Homomorphisms of semigroups of endomorphisms of ordered sets
□ A. Ya. Aĭzenshtat — On ideals of semigroups of endomorphisms
□ A. Ya. Aĭzenshtat — Subgroups of semigroups of endomorphisms of ordered sets
□ A. Ya. Aĭzenshtat — Regular semigroups of endomorphisms of ordered sets
□ A. Ya. Aĭzenshtat — On certain semigroups of endomorphisms determining the order in a set
□ A. E. Evseev — A survey of partial groupoids
□ A. E. Evseev and N. E. Podran — Semigroups of transformations of a finite set generated by idempotents with given projection characteristics
□ A. E. Evseev and N. E. Podran — Semigroups of transformations generated by idempotents with a given defect
□ I. S. Ponizovskiĭ — Transitive representations by transformations of semigroups of a certain class
□ B. M. Shaĭn — Embedding semigroups in inverse semigroups
□ B. M. Shaĭn — On certain classes of semigroups of binary relations
□ B. M. Shaĭn — On certain comitants of semigroups of binary relations
□ B. M. Shaĭn — An idempotent semigroup is determined by its pseudogroup of local automorphisms
□ B. M. Shaĭn — Semigroups for which every transitive representation by functions is a representation by invertible functions
□ B. M. Shaĭn — Inverse semigroups that do not admit representations by partial transformations of their proper subsets
□ É. G. Shutov — Homomorphisms of the semigroup of all partial transformations
□ É. G. Shutov — On a certain semigroup of one-to-one transformations
□ É. G. Shutov — Embeddings of semigroups
□ Yu. M. Vazhenin — Inverse codes
• Permission – for use of book, eBook, or Journal content
• Book Details
• Table of Contents
• Requests
Volume: 139; 1988; 210 pp
MSC: Primary 20; Secondary 94
This volume contains papers selected by leading specialists in algebraic semigroups in the U.S., the United Kingdom, and Australia. Many of the papers strongly influenced the development of algebraic
semigroups, but most were virtually unavailable outside the U.S.S.R. Written by some of the most prominent Soviet researchers in the field, the papers have a particular emphasis on semigroups of
transformations. Boris Schein of the University of Arkansas is the translator.
• Articles
• A. Ya. Aĭzenshtat — Homomorphisms of semigroups of endomorphisms of ordered sets
• A. Ya. Aĭzenshtat — On ideals of semigroups of endomorphisms
• A. Ya. Aĭzenshtat — Subgroups of semigroups of endomorphisms of ordered sets
• A. Ya. Aĭzenshtat — Regular semigroups of endomorphisms of ordered sets
• A. Ya. Aĭzenshtat — On certain semigroups of endomorphisms determining the order in a set
• A. E. Evseev — A survey of partial groupoids
• A. E. Evseev and N. E. Podran — Semigroups of transformations of a finite set generated by idempotents with given projection characteristics
• A. E. Evseev and N. E. Podran — Semigroups of transformations generated by idempotents with a given defect
• I. S. Ponizovskiĭ — Transitive representations by transformations of semigroups of a certain class
• B. M. Shaĭn — Embedding semigroups in inverse semigroups
• B. M. Shaĭn — On certain classes of semigroups of binary relations
• B. M. Shaĭn — On certain comitants of semigroups of binary relations
• B. M. Shaĭn — An idempotent semigroup is determined by its pseudogroup of local automorphisms
• B. M. Shaĭn — Semigroups for which every transitive representation by functions is a representation by invertible functions
• B. M. Shaĭn — Inverse semigroups that do not admit representations by partial transformations of their proper subsets
• É. G. Shutov — Homomorphisms of the semigroup of all partial transformations
• É. G. Shutov — On a certain semigroup of one-to-one transformations
• É. G. Shutov — Embeddings of semigroups
• Yu. M. Vazhenin — Inverse codes
Permission – for use of book, eBook, or Journal content
Please select which format for which you are requesting permissions. | {"url":"https://bookstore.ams.org/TRANS2/139","timestamp":"2024-11-14T08:29:59Z","content_type":"text/html","content_length":"81660","record_id":"<urn:uuid:63f3eef3-5b67-403c-9540-ee27cbd53903>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00570.warc.gz"} |
How Does It Work? | Anon Aadhaar
How Does It Work?
Anon Aadhaar: Verifying Aadhaar Documents with RSA
Anon Aadhaar is a zero-knowledge protocol designed to enable Aadhaar citizens to prove their possession of an Aadhaar document issued and signed by the government. This process ensures anonymity by
utilizing the Aadhaar secure QR code, presents on e-Aadhaar and Aadhaar print-letter, preserving the confidentiality of the Aadhaar number.
RSA and Document Verification
At the core of this verification process lies RSA, a powerful cryptographic signature algorithm. RSA involves a private key used for signing and a corresponding public key used for verification of
signatures. The innovative part of the Anon Aadhaar protocol is that this verification happens inside of a circuit, and the process results in a zk SNARK proof, hidding all the personal information
needed to verify the signature. Resulting in a proof that attest identity without revealing it.
Here are the steps happening while generating the proof:
1. Extract and process the data from the QR code:
• Read the QR code and extract both the signature and the signed data as bytes arrays
• Verifying the signature outside of the circuit to make sure the document is signed
• Fetching the official UIDAI public key, to use it as circuit input, to ensure it's RSA signed from the right authority
• Hash the signal
Required Data to generate the Aadhaar proof
• From the QR Code:
□ Bytes of the signed data.
□ Bytes of the RSA signature.
• External to the QR Code:
□ Indian government's RSA public key (that can be found here).
□ A signal.
□ A nullifier Seed
2. Generate an Anon-Aadhaar Proof:
This process involves several operations in Circom circuits to ensure the privacy and integrity of your Aadhaar data while proving its authenticity without revealing personal information:
• Apply the SHA-256 on the Signed Data: This step involves checking the integrity and authenticity of the signed data by verifying its SHA-256 hash, as it's the hash that is signed by the RSA
• Verify the RSA Signature of the Hashed Data: After verifying the data's hash, the next step is to authenticate the source of the data by verifying the RSA signature. This ensures that the data
and its hash were indeed signed by the holder of the private key, in this case the UIDAI, offering a layer of security against data tampering.
• Extract the photo bytes from the Signed Data: The bytes of the photo are extracted to compute the nullifier.
• Extract Identity Fields if requested: If your app request to reveal one of the field from the identity the circuit will reveal it in its output. There only four fields that could be revealed (Age
> 18, Gender, State, Pincode). Note that by default the Prover will reveal nothing from the ID.
• Compute Nullifiers: Nullifier is a unique identifiers derived from data fields, used to prevent double-spending or duplicate proofs without revealing the actual data. This step is crucial for
maintaining privacy while ensuring the uniqueness and validity of the proof. To read more about Nullifiers.
• Convert Timestamp from IST to UNIX UTC Format: The timestamp associated with the data is converted into a UNIX UTC format. This standardization of time representation ensures consistency across
different systems and platforms, facilitating verification processes that require time validation.
• "Signing" the SignalHash: The final step involves applying a contraints on the signalHash as part of the proof generation process. This act as a marker to the proof, that let the user to commit
to a certain signal, while generating the proof. Note, that it's an optionnal parameter and it will be set as 1 by default in the SDK, it's mainly used to prevent from on-chain front-running or
ERC-4337 integration. | {"url":"https://documentation.anon-aadhaar.pse.dev/docs/how-does-it-work","timestamp":"2024-11-11T11:31:53Z","content_type":"text/html","content_length":"21436","record_id":"<urn:uuid:191833fd-3b8e-461f-908b-f7da689997d8>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00221.warc.gz"} |
Calibration and informativeness are measures of expertise. An expert is well calibrated if she is able to correctly predict the probability that her answers actually turn out to be correct. This can
be evaluated by observation: if an expert says that she is 80 % sure about the answer, it should mean that when taking ten answers with similar certainty, eventually two of them, on average, should
turn out to be incorrect. If the actual result is lower, the expert is said to be overconfident. Calibration is measured against the truth, when it is revealed. Specifically, calibration is a p value
for a statistical test about a null hypothesis that an expert is actually calibrated and neither overconfident nor underconfident.
Informativeness measures the spread of a probability distribution. The narrower the distribution, the more informative it is. Informativeness is a relative measure and it is always measured against
some other distribution about the same issue.
We have asked for each expert’s uncertainty over a number of calibration variables; these variables are chosen to resemble the quantities of interest, and to demonstrate the experts’ ability as
probability assessors. An expert states n fixed quantiles of his/her subjective distribution for each of several uncertain quantities. There are n+1 ‘inter- quantile intervals’ into which the actual
values may fall. ^[1]
Failed to parse (SVG (MathML can be enabled via browser plugin): Invalid response ("Math extension cannot connect to Restbase.") from server "https://wikimedia.org/api/rest_v1/":): {\displaystyle p =
(p_1, ... , p_{n+1}) (1) }
denote the theoretical probability vector associated with these intervals. Thus, if the expert assesses the 5%, 25%, 50%, 75% and 95% quantiles for the uncertain quantities, then n = 5 and p = (5%,
20%, 25%, 25%, 20%, 5%). The expert believes there is 5% probability that the realization falls between his/her 0% and 5% quantiles, a 20% probability that the realization falls between his/her 5%
and 25% quantiles, and so on.
Suppose we have such quantile assessments for m seed variables. Let
Failed to parse (SVG (MathML can be enabled via browser plugin): Invalid response ("Math extension cannot connect to Restbase.") from server "https://wikimedia.org/api/rest_v1/":): {\displaystyle s =
(s_1, ... , s_{m+1}) (2) }
denote the empirical probability vector of relative frequen- cies with which the realizations fall in the interquantile intervals. Thus
s1 = (# realizations less than or equal to the 5% quantile)/m,
s2 = (# realizations strictly above the 5% quantile and less than or equal to the 25% quantile)/m,
s3 = (# realizations strictly above the 25% quantile and less than or equal to the 50% quantile)/m
and, so on.
If the expert is well calibrated, he/she should give intervals such that—in a statistical sense—5% of the realizations of the calibration variables fall into the corresponding 0–5% intervals, 20%
fall into the 5–25% intervals, etc.
We may write:
Failed to parse (SVG (MathML can be enabled via browser plugin): Invalid response ("Math extension cannot connect to Restbase.") from server "https://wikimedia.org/api/rest_v1/":): {\displaystyle 2mI
(s, p)= 2m \Sigma_{i=1}^{m+1} s_i ln(\frac{s_i}{p_i}), (3) }
where I(s,p) is the Shannon relative information of s with respect to p. For all s,p with p[i] >0, i = 1,...,m+1, we have I(s,p)≥ 0 and I(s,p) = 0 if and only if s = p (see ^[2] ). Under the
hypothesis that the uncertain quantities may be viewed as independent samples from the probability vector p,2mI(s,p) is asymptotically Χ^2 -distributed with m degrees of freedom: P(2mI(s,p)≤x) ~ Χ^2m
^2 where Χ^2m is the cumulative distribution function for a Χ^2 variable with m degrees of freedom. Then
Failed to parse (SVG (MathML can be enabled via browser plugin): Invalid response ("Math extension cannot connect to Restbase.") from server "https://wikimedia.org/api/rest_v1/":): {\displaystyle CAL
= 1 - \Chi^2_m (2mI(s,p)) (4) }
is the upper tail probability, and is asymptotically equal to the probability of seeing a disagreement no larger than I(s,p) on n realizations, under the hypothesis that the realizations are drawn
independently from p.
CAL is a measure of the expert’s calibration. Low values (near zero) correspond to poor calibration. This arises when the difference between s and p cannot be plausibly explained as the result of
mere statistical fluctuation. For example, if m = 10, and we find that 8 of the realizations fall below their respective 5% quantile or above their respective 95% quantile, then we could not
plausibly believe that the probability for such events was really 5%. This phenomenon is sometimes called "overconfidence." Similarly, if 8 of the 10 realizations fell below their 50% quantiles, then
this would indicate a "median bias." In both cases, the value of CAL would be low. High values of CAL indicate good calibration.
Information is measured as Shannon’s relative informa- tion with respect to a user-selected background measure. The background measure will be taken as the uniform (or loguniform) measure over a
finite ‘‘intrinsic range’’ for each variable. For a given uncertain quantity and a given set of expert assessments, the intrinsic range is defined as the smallest interval containing all the experts’
quantiles and the realization, if available, augmented above and below by K%.^[1]
The relative information of expert e on a given variable is
Failed to parse (SVG (MathML can be enabled via browser plugin): Invalid response ("Math extension cannot connect to Restbase.") from server "https://wikimedia.org/api/rest_v1/":): {\displaystyle I
(e) = \Sigma_{i=1}^{n+1} p_i ln (\frac{p_i}{r_i}), (5) }
where r[i] are the background measures of the corresponding intervals and n the number of quantiles assessed. For each expert, an information score for all variables is obtained by summing the
information scores for each variable. This corresponds to the information in the expert’s joint distribution relative to the product of the background measures under the assumption that the expert’s
distribu- tions are independent. Roughly speaking, with the uniform background measure, more informative distributions are obtained by choosing quantiles that are closer together, whereas less
informative distributions result when the quantiles are farther apart.
Equal-weight and performance-based decision maker
The probability density function for the equal-weight "decision maker" is constructed by assigning equal weight to each expert's density. If E experts have assessed a given set of variables, the
weights for each density are 1/E; hence for variable i in this set the decision maker's density is given by
Failed to parse (SVG (MathML can be enabled via browser plugin): Invalid response ("Math extension cannot connect to Restbase.") from server "https://wikimedia.org/api/rest_v1/":): {\displaystyle f_
{eqdm,i} = (\frac{1}{E}) \Sigma_{j=1..E} f_{j,i}, (6) }
where f[j,i] is the density associated with expert j’s assessment for variable i.^[1]
The performance-based ‘‘decision maker’s’’ probability density function is computed as a weighted combina- tion of the individual expert’s densities, where each expert’s weight is based on his/her
performance. Two performance-based ‘‘decision makers’’ are supported in the software EXCALIBUR. The ‘‘global weight decision maker’’ is constructed using average information over all calibration
variables and, thus, one set of weights for all questions. The ‘‘item weight decision maker’’ is constructed using weights for each question separately, using the experts’ information scores for each
specific question, rather than the average information score over all questions.
In this study, the global and items weights do not differ significantly, and we focus on the former, calling it simply ‘‘performance-based decision maker.’’ The performance- based decision maker
(Table 4) uses performance-based weights that are defined, per expert, by the product of expert’s calibration score and his/her overall information score on calibration variables, and by an
optimization procedure.
For expert j, the same weight is used for all variables assessed. Hence, for variable i the performance-based decision maker’s density is
Failed to parse (SVG (MathML can be enabled via browser plugin): Invalid response ("Math extension cannot connect to Restbase.") from server "https://wikimedia.org/api/rest_v1/":): {\displaystyle f_
{gwdm,i} = \frac{\Sigma_{j=1...E} w_j f_{j,i}}{\Sigma_{j=1...E} w_j}. (7) }
The cut-off value for experts that were given zero weight was based on optimising: the cut-off value, a, that gave the highest performance score to the decision maker was selected. The optimising
procedure is the following. For each value of a, define a decision maker DMa, which is computed as a weighted linear combination of the experts whose calibration score is greater than or equal to
a.DMa is scored with respect to calibration and information. The weight that this DMa would receive if he were added as a ‘‘virtual expert’’ is called the ‘‘virtual weight’’ of DMa. The value of a
for which the virtual weight of DMa is the greatest is chosen as the cut-off value for determining which experts to exclude from the combination.
Seed variables
Seed variables fulfil a threefold purpose, namely to enable: (1) the evaluation of each expert’s performance as a probability assessor, (2) the performance-based combina- tion of the experts’
distributions, and (3) assessment of the relative performance of various possible combinations of the experts’ distributions.^[1]
To do this, performance on seed variables must be seen as relevant for performance on the variables of interest, at least in the following sense: if one expert gave narrow confidence bands widely
missing the true values of the seed variables, while another expert gave similarly narrow confidence bands which frequently included the true values of the seed variables, would these experts be
given equal credence regarding the variables of interest? If the answer is affirmative, then the seed variables fall short of their mark. Evidence indicates that performance on ‘almanac items’ (How
many heretics were burned at Montsegur in 1244?) does not correlate with performance on variables from the experts’ field of expertise. ^[3] On the other hand, there is some evidence that performance
on seed variables from the field of expertise does predict performance on variables of interest. ^[4]
Informativeness is calculated in the following way for a series of Bernoulli probabilities:
where s is a vector of actual probabilities and p is a vector of reference probabilities.
Calibration is calculated in the following way:
where NQ is a vector of the number of trials for each actual probability.
See also
Expert elicitation, expert judgement, performance
Related files | {"url":"https://dev.opasnet.org/w/Calibration","timestamp":"2024-11-13T08:19:21Z","content_type":"text/html","content_length":"39969","record_id":"<urn:uuid:825772ee-975e-46e6-9837-7c3ac2382fd2>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00354.warc.gz"} |