text stringlengths 49 10.4k | source dict |
|---|---|
electrostatics, electric-fields, gauss-law
Now if you look at an infinite line of charge with charge density $\lambda,$ the above procedure performed on each charge separately will turn the whole thing into a little tube of diameter $2\epsilon$ and charge density $\rho_\epsilon = \lambda/(\pi\epsilon^2).$ The full force will have some multiplicative factor due to the fact that some components of the force are cancelling out, but this only depends on geometry; it is the same for all thin charged wires and not dependent on $\epsilon$. Nevertheless, we can immediately answer "if this were twice as far away, the factor of $\epsilon$ and the charge density $\rho_\epsilon$ would not change, but the solid angle would be half as much. Everything else would be the same because the line is infinite, only the solid angle would change." So in the limit, the effect of doubling your distance to the line of charge is to reduce the force by half. Therefore the force must be proportional to $\lambda / r.$ You still need calculus to get the multiplicative constant because the different forces from different directions are cancelling out; the only way I see to remove this step would be to somehow focus on the electric potential which adds like a scalar: but even then you need to calculate the slope of it, which sounds like a calculus problem.
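The $1/r$ scaling can be checked numerically by summing Coulomb contributions along a long discretized line (a sketch of mine, with unit charge density and $k=1$; only the scaling with $r$ matters):

```python
def field_from_line(r, half_length=1000.0, n=200001):
    """Perpendicular E-field at distance r from a finite line of charge.

    Unit linear charge density and Coulomb constant k = 1 are assumed.
    For half_length >> r this approximates the infinite line, where
    E = 2*lambda/r exactly; each element contributes its perpendicular
    component dE = dx * r / (x^2 + r^2)^(3/2).
    """
    dx = 2.0 * half_length / (n - 1)
    total = 0.0
    for i in range(n):
        x = -half_length + i * dx
        total += r / (x * x + r * r) ** 1.5 * dx
    return total

# Doubling the distance should halve the field in the infinite-line limit.
ratio = field_from_line(1.0) / field_from_line(2.0)  # close to 2
```

The ratio coming out at 2 is exactly the solid-angle argument above, done by brute force.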
Similarly, with the infinite sheet of charge, the above smears it out into a charge density of a plate with thickness $2 \epsilon$ and charge density $\sigma/(2 \epsilon).$ When you go twice as far away from it, the sheet has the same charge density and the same prefactor $\epsilon$ weights it, but what happens to the solid angle? Well if it's really an infinite sheet then the solid angle is still $2\pi$ since the solid angle of the whole sphere is $4\pi,$ and nothing has changed on the projected-sphere. So that's how you see directly that this effect of "there is more charge in a given amount of solid angle" has perfectly balanced out the effect of "that charge is further away and needs to be reduced by a larger constant when I project it onto the sphere." So the force must be independent of distance and proportional to $\sigma,$ again with some geometric factor. | {
"domain": "physics.stackexchange",
"id": 38065,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics, electric-fields, gauss-law",
"url": null
} |
What's a non-brute-force method for deriving the numerators? I'm thinking there may be a recursive definition, so that $\rm P(5\;games)$ can be defined in terms of $\rm P(4\;games)$ and so on, and/or that it may involve combinations like $\rm(probability\;of\;at\;least\;4/7\;W)\times(probability\;of\;legal\;combination\;of\;7\;outcomes)$, but I'm a bit stuck. Initially I thought of some ideas involving $\binom{n}{k}$ but it seems that only works if the order of outcomes doesn't matter.
Interestingly, another mutual friend pulled out some statistics on 7 game series played (NHL, NBA, MLB 1905-2013, 1220 series) and came up with:
4 Game Series - 202 times - 16.5%
5 Game Series - 320 times - 26.23%
6 Game Series - 384 times - 31.47%
7 Game Series - 314 times - 25.73%
That's actually a pretty good match (at least from my astronomer's point of view!). I'd guess that the discrepancy comes from the outcome of each game being biased toward a win for one team or the other (indeed, teams are usually seeded in the first round so that the leading qualifying team plays the team that barely qualified, second place plays second last, and so on... and most of the games are in the first round).
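A closed form does exist, and it is close to the $\binom{n}{k}$ idea: the series ends in exactly $k$ games when the eventual winner takes game $k$ and exactly 3 of the first $k-1$ games, which gives $\binom{k-1}{3}$ legal orderings. A sketch (function name mine):

```python
from math import comb

def p_series_length(k, p=0.5):
    """Probability a best-of-7 series ends in exactly k games (4 <= k <= 7).

    The winner must take game k and 3 of the first k-1 games: comb(k-1, 3)
    orderings.  The two symmetric cases (either team winning) are summed;
    p is the per-game win probability of one fixed team.
    """
    q = 1 - p
    return comb(k - 1, 3) * (p**4 * q**(k - 4) + q**4 * p**(k - 4))
```

For a fair coin this gives 1/8, 1/4, 5/16, 5/16 for 4-, 5-, 6-, and 7-game series, which is what the empirical percentages above should be compared against.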
• Am not particularly active on CV.SE, so this may need a bit of re-tagging. – Kyle Jun 3 '14 at 23:31 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9697854164256366,
"lm_q1q2_score": 0.8476673554925305,
"lm_q2_score": 0.8740772351648678,
"openwebmath_perplexity": 995.7258670450435,
"openwebmath_score": 0.4178212285041809,
"tags": null,
"url": "https://stats.stackexchange.com/questions/101063/statistics-of-7-game-playoff-series"
} |
fasta, assembly
0:24:43.505 32M / 8G INFO K-mer Index Building (kmer_index_builder.hpp : 298) Building perfect hash indices
0:25:40.125 100M / 8G INFO General (kmer_index_builder.hpp : 137) Merging final buckets.
1:11:46.553 100M / 8G INFO K-mer Index Building (kmer_index_builder.hpp : 320) Index built. Total 95426752 bytes occupied (3.70991 bits per kmer).
1:11:46.598 100M / 8G INFO K-mer Counting (kmer_data.cpp : 357) Arranging kmers in hash map order
1:12:55.597 3G / 8G INFO General (main.cpp : 155) Clustering Hamming graph.
1:21:35.596 3G / 8G INFO General (main.cpp : 162) Extracting clusters
=== Stack Trace ===
[0x407a6a]
[0x409561]
[0x40c79b]
[0x40129a]
[0x5442b0]
[0x40620d]
Verification of expression '(intptr_t) MappedRegion != -1L' failed in function 'void MMappedWriter::reserve(size_t)'. In file '/spades/src/common/io/kmers/mmapped_writer.hpp' on line 94. Message 'mmap(2) failed. Reason: Invalid argument. Error code: 22'.
Verification of expression '(intptr_t) MappedRegion != -1L' failed in function 'void MMappedWriter::reserve(size_t)'. In file '/spades/src/common/io/kmers/mmapped_writer.hpp' on line 94. Message 'mmap(2) failed. Reason: Invalid argument. Error code: 22'. | {
"domain": "bioinformatics.stackexchange",
"id": 1883,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fasta, assembly",
"url": null
} |
performance, c, unit-testing, cyclomatic-complexity, lexical-analysis
for (size_t i = 0; i < SYNTAX_CHECK_COUNT; i++)
{
char buffer[BUFSIZ];
if (i >= ILLEGALOPCODE && necessary_items[i])
{
sprintf(buffer, "\t%s\n", error_strings[i]);
log_generic_message(buffer);
}
else if (i < ILLEGALOPCODE && !necessary_items[i])
{
sprintf(buffer, "\t%s\n", error_strings[i]);
log_generic_message(buffer);
}
}
}
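As an aside on the loop above: the two branches run identical bodies, and their guards differ only in which side of ILLEGALOPCODE the index falls on. They collapse into a single equality test. A sketch (the ILLEGALOPCODE value below is a stand-in for the demo, not the real enum value):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in; in the real program this comes from the
 * syntax-check enum. */
#define ILLEGALOPCODE 3

/* The original pair of guards,
 *   (i >= ILLEGALOPCODE &&  necessary_items[i])  and
 *   (i <  ILLEGALOPCODE && !necessary_items[i]),
 * both say: "log when the item flag agrees with being at or past
 * ILLEGALOPCODE". */
bool should_log(size_t i, unsigned necessary_item)
{
    return (i >= ILLEGALOPCODE) == (necessary_item != 0);
}
```

With that predicate, the loop needs only one branch and one sprintf/log pair.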
static bool check_syntax_check_list_and_report_errors_as_parser_would(
unsigned syntax_check_list[], Syntax_State state, unsigned char* text_line,
size_t statement_number, Expected_Syntax_Errors* expected_errors,
char *parser_generated_error)
{
unsigned error_count = 0;
bool syntax_check_list_in_sync = true;
for (size_t i = 0; i < SYNTAX_CHECK_COUNT; i++)
{
error_count += (!syntax_check_list[i] && i < ILLEGALOPCODE) ? 1 : ((i >= ILLEGALOPCODE && syntax_check_list[i]) ? 1 : 0);
if (syntax_check_list[i] != expected_errors->syntax_check_list[i] && i != MULTIPLESTATEMENTSONELINE)
{
syntax_check_list_in_sync = false;
}
}
if (error_count != expected_errors->error_count)
{
syntax_check_list_in_sync = false;
} | {
"domain": "codereview.stackexchange",
"id": 39201,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, c, unit-testing, cyclomatic-complexity, lexical-analysis",
"url": null
} |
thermodynamics
Title: What is the supercritical state? Is the supercritical state of matter a separate state of matter, or is it a combination of a liquid and a gas? Could someone give me an overview of this state and its properties? I have seen some videos in which the meniscus of a liquid vanishes and reappears as the liquid turns to its supercritical state and back. I find this hard to comprehend, as I am used to seeing the meniscus of a liquid go down the container as it vaporises. Could someone explain this spontaneous change in state?

In the supercritical state the difference between liquid and gas vanishes.
The sharp distinction between liquid and gas only exists up to a critical pressure and temperature, at which the energy needed to vaporize the liquid vanishes and the densities of the liquid and the gas become equal; above this point no distinct liquid and gas phases exist.
In other words, it is possible to transform a liquid into a gas without encountering a change of state (but rather a smooth cross-over). But when you are below the critical temperature and the critical pressure, there is a sharp distinction due to the first-order phase transition line.
The Van der Waals gas gives a good qualitative explanation for this.
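To make that concrete, the Van der Waals equation of state predicts the critical point in closed form: $T_c = 8a/(27Rb)$ and $p_c = a/(27b^2)$. A quick check (the $a$, $b$ values are literature constants for CO$_2$, my choice of example; the answer only names the model):

```python
# Van der Waals critical point: T_c = 8a/(27*R*b), p_c = a/(27*b^2)
R = 8.314          # J/(mol K)
a = 0.3640         # Pa m^6 / mol^2, CO2
b = 4.267e-5       # m^3 / mol, CO2

T_c = 8 * a / (27 * R * b)   # ~304 K, close to CO2's measured 304.1 K
p_c = a / (27 * b ** 2)      # ~7.4e6 Pa, close to CO2's measured 7.38 MPa
```

That the simple model lands this close to the measured critical point of CO$_2$ is part of why it is a good qualitative guide here.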
In a way the properties of the supercritical state are a combination of the ones of the liquid and the gas, its viscosity is low (as for a gas) but usually it is a good solvent (supercritical $\text{CO}_2$ is used to extract caffeine from coffee beans in decaffeination).
Around the critical point (when the pressure is the critical pressure and the temperature is the critical temperature) additional phenomena can be observed, such as long-range fluctuations leading to clouding in the fluid (small temperature fluctuations drive the system between the liquid and gaseous states, causing light to be scattered); this phenomenon is called critical opalescence. For more on this, read about the critical point. | {
"domain": "physics.stackexchange",
"id": 21895,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics",
"url": null
} |
php, wordpress
It's working and I'm able to get everything as I wanted, but it looks ugly to me. Is there any better way to do this? Or any improvement to my code? Here is my list of recommendations:
To make the database object available in the custom functions' scope, pass it as a parameter. I know your global declaration is commonly used by WPers, but I consider it to be sloppy development. If you are going to persist with using global, then you should only do it once per custom function.
In bestanswers(), to make your SQL easier to read, use newlines, indentation, spaces around operators, ALLCAPS MySQL keywords, double-quoting the full string, and single-quoting the values in the string. That said, for the most secure and consistent project, you should use prepared statements anytime you are supplying variables to your query.
$results = $wpdb->get_results(
"SELECT *
FROM wp_comments
LEFT JOIN wp_custom_scoring
ON wp_custom_scoring.entryID = wp_comments.comment_ID
WHERE class = 'point'
AND comment_post_ID = " . (int)$a . "
AND type = 'plus'
GROUP BY entryID
ORDER BY COUNT(entryID) DESC");
empty() does two things: it checks whether a variable is !isset() OR loosely evaluates as "falsey". You know that $results will be set because you have unconditionally declared it on the previous line of code. (untested...)
if (!$results) {
    echo 'Its Empty!';
    return false;
} else {
    return $results;
}
Or if you don't need the echo:
return $results ? $results : false;
In scores(), your $b parameter only determines the WHERE clause, so you can DRY out your code like this:
function score($wpdb, $a, $b) {
    if (in_array($b, ['plus','minus'])) {
$where = "class = 'point' AND type = '{$b}'";
} else {
$where = "class = 'fav'";
} | {
"domain": "codereview.stackexchange",
"id": 35427,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, wordpress",
"url": null
} |
fluid-dynamics, energy, kinematics
Title: Derivation for kinetic energy flux I'm working to derive kinetic energy flux for fluids. I could not find a derivation online. I know from literature the correct answer is $ \phi_{kin} = (1/2) \rho v^3 $.
The specific context is in snapshots of fluids, so it's okay to assume constant acceleration.
I begin with the definition of kinetic energy
$$ E_k = \frac{1}{2} m v^2 \, .$$
We can assume constant acceleration between each snapshot (each time we can view the fluid). We are interested in calculating the kinetic energy flux of a system with one point, molecule or pixel and how it moves between snapshots.
We begin by placing a single particle in a box. It has energy only due to kinetic energy. As it has kinetic energy, from our previous definition of kinetic energy it must be moving.
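The target formula can at least be sanity-checked before the full derivation: flux is energy per unit area per unit time, i.e. the kinetic energy density $\frac{1}{2}\rho v^2$ carried past a surface at speed $v$. A sketch (the numbers are arbitrary assumed values, not from the post):

```python
# Kinetic energy flux as energy density times transport speed:
# phi = (1/2 * rho * v^2) * v = (1/2) * rho * v^3
rho = 1000.0   # kg/m^3, e.g. water (assumed)
v = 2.0        # m/s (assumed)

energy_density = 0.5 * rho * v ** 2   # J/m^3
phi = energy_density * v              # J/(m^2 s) = W/m^2
assert phi == 0.5 * rho * v ** 3
```

The units work out to W/m$^2$, as a flux should.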
Consider this particle in a box moving in an arbitrary direction as depicted in the figure below. | {
"domain": "physics.stackexchange",
"id": 41092,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fluid-dynamics, energy, kinematics",
"url": null
} |
lagrangian-formalism, differential-geometry, mathematical-physics, coordinate-systems, constrained-dynamics
Here it is implicitly understood that $\chi$ vanishes on the constrained submanifold $C\subset M$, i.e.
$$C\cap \Omega ~=~\chi^{-1}(\{0\})~:=~\{x\in\Omega \mid \chi(x)=0\}.$$
[Also we imagine that the full constrained submanifold $C\subset M$ is covered by a family $(\Omega_{\alpha})_{\alpha\in I}$ of open neighborhoods, each with a corresponding constraint function $\chi_{\alpha}: \Omega_{\alpha}\subseteq M \to \mathbb{R}$, and such that the constraint functions $\chi_{\alpha}$ and $\chi_{\beta}$ are compatible in neighborhood overlaps $\Omega_{\alpha}\cap \Omega_{\beta}$.] Since there (locally) is only one constraint, the constrained submanifold will be a hypersurface, i.e. of codimension 1. [More generally, there could be more than one constraint: Then the above regularity conditions should be modified accordingly. See e.g. Ref. 1 for details.]
The above regularity conditions are strictly speaking not always necessary, but they greatly simplify the general theory of constrained systems, e.g. in cases where one would like to use the inverse function theorem, the implicit function theorem, or reparametrize the constraints $\chi\to\chi^{\prime}$. [The rank condition (3.) can be tied to the non-vanishing of the Jacobian $J$ in the inverse function theorem.]
Quantum mechanically, reparametrizations of constraints may induce a Faddeev-Popov-like determinantal factor in the path integral.
Example 1a: OP's 1st example (v1)
$$\tag{1a} \chi(x,y)~=~x^2+y^2-\ell^2$$ | {
"domain": "physics.stackexchange",
"id": 13287,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "lagrangian-formalism, differential-geometry, mathematical-physics, coordinate-systems, constrained-dynamics",
"url": null
} |
inert-gases
Title: Zeolite-based oxygen concentrators I wonder, what is content of output of zeolite-based pressure swing adsorption oxygen concentrators (both oxygen output, and exhaust output)?
Yes, they can produce 95% oxygen.
But what's the remaining 5%? Is it just argon and other noble gases, or some air contaminants could also be concentrated (like CO2, NO2, CO)?
Any references to quantitative gas analysis results would be extremely useful (wasn't able to find any, probably bad google skills).

The remaining 5% could, in a normal air-operated concentrator, be "anything else" that was present in the air. Looking at this example and explanation, anything that ends up in the outlet stream is what was not adsorbed. Then the question remains: does adsorption occur equally for all air components (except $O_2$ of course) or does it differ?
I can give you some numbers (see also the presentation I referenced above), but the adsorption depends strongly on the type of zeolite. For example this $AgA$ zeolite has a 1.63:1 $Ar$ selectivity and a 5:1 $N_2$ selectivity with respect to $O_2$. Whereas the $LiAgX$ zeolite has only 1.1:1 $Ar$ selectivity.
In this article they mention that they can get 95% $O_2$ with the remaining 5% being almost completely $Ar$. However, they use a contaminant free inlet mixture of $O_2$, $N_2$ and $Ar$.
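Those figures are consistent with a back-of-envelope check (my own, not from the cited articles): if all the $N_2$ were removed from dry air and $O_2$ and $Ar$ passed through untouched, the outlet composition would be:

```python
# Dry air is roughly 78.09% N2, 20.95% O2, 0.93% Ar by volume.
# Remove all N2 and renormalize what remains:
o2, ar = 20.95, 0.93               # volume %
o2_out = 100.0 * o2 / (o2 + ar)    # roughly 95.7 % O2
ar_out = 100.0 * ar / (o2 + ar)    # roughly 4.3 % Ar
```

So "95% $O_2$ with the rest almost completely $Ar$" is exactly what perfect $N_2$ removal predicts: the argon is simply what the sieve does not separate well from oxygen.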
The most interesting for you is probably this article. They study how $H_2O$ and $CO_2$ affect the operation of a zeolite oxygen concentrator. What they show is that there is not going to be any $CO_2$ in the outlet stream, but instead $CO_2$ adsorbs so strongly on the zeolite that it will degrade its overall efficiency. | {
"domain": "physics.stackexchange",
"id": 6909,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "inert-gases",
"url": null
} |
electromagnetic-radiation, ideal-gas
Title: How does an ideal gas radiate? Recently I've been reading a lot about blackbody radiation, Rayleigh-Jeans law, Planck's law and the UV catastrophe.
In deriving the Rayleigh-Jeans and Planck's laws, we are examining a perfectly reflecting cavity filled with radiating ideal gas. The gas and the radiation are in thermal equilibrium. In Rayleigh-Jeans law, it is assumed that as an ideal gas has the average energy of $1/2kT$ per degree of freedom, the radiation has the same average energy per mode.
This is somewhat understandable to me, as the radiation originates from the charged particles having average energy of $1/2kT$, it would be reasonable to assume the radiation has the same average energy. But how does an ideal gas radiate in the first place?
My understanding of the ideal gas model is that it is a collection of particles moving with constant speeds and are non-interacting except during collisions, which are fully elastic. But to radiate, a particle has to accelerate. Only situation I can think of where electrons in ideal gas accelerate are during collisions (they collide and change directions), but the collisions are elastic and there is no change in kinetic energies of the particles. So if the collisions are elastic, where does the energy to produce radiation come from?
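The equipartition assumption mentioned above, and where it breaks, can be made concrete: Rayleigh-Jeans assigns each EM mode the average energy $kT$, while Planck gives $h\nu/(e^{h\nu/kT}-1)$, which reduces to $kT$ only when $h\nu \ll kT$. A sketch:

```python
import math

k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J s

def planck_mode_energy(nu, T):
    """Average energy of one EM mode at frequency nu and temperature T."""
    x = h * nu / (k * T)
    return h * nu / math.expm1(x)  # expm1 keeps precision when x is tiny

T = 300.0
low = planck_mode_energy(1e9, T)   # radio frequency: h*nu << k*T
print(low / (k * T))               # close to 1: equipartition holds here
```

At optical frequencies and room temperature the same function is suppressed by an enormous Boltzmann factor, which is what tames the UV catastrophe.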
In deriving the Rayleigh-Jeans and Planck's laws, we are examining a perfectly reflecting cavity filled with radiating ideal gas.
The gas does not have to be ideal. The ideal gas is too simple a model to provide a description of the interaction with EM radiation.
Real gas interacts with EM radiation, because its molecules consist of charged particles. The collisions of the molecules are not elastic. | {
"domain": "physics.stackexchange",
"id": 53212,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetic-radiation, ideal-gas",
"url": null
} |
# D.E word problem
#### bergausstein
##### Active member
1. radium decomposes at a rate proportional to the amount itself. if the half-life is 1600 years, find the percentage remaining at the end of 200 years.
can you help me go about solving this? thanks!
#### MarkFL
Staff member
Let $R(t)$ be the mass of radium in a given sample at time $t$. How can we mathematically state how this mass changes with time in general? Just look at the first sentence and use that to try to model this change with an initial value problem.
#### bergausstein
##### Active member
$R(t)=R_oe^{kt}$ where $k<0$
now the half-life is 1600 so,
$1600=3200e^{kt}$
#### MarkFL
Staff member
$R(t)=R_oe^{kt}$ where $k<0$
now the half-life is 1600 so,
$1600=3200e^{kt}$
You have the correct equation for the mass of a sample, but the half-life being 1600 years means that at time t=1600 then $R(t)=\dfrac{1}{2}R_0$. That is (I like to always use a positive constant):
$$\displaystyle R(1600)=R_0e^{-1600k}=\frac{1}{2}R_0$$
Divide through by $$\displaystyle R_0$$ and the convert from exponential to logarithmic form to solve for $k$. Another way to look at it is:
$$\displaystyle R(t)=R_02^{-\frac{t}{1600}}$$
And then to find the percentage remaining after $t$ years, use:
$$\displaystyle \frac{100R(t)}{R_0}=100\cdot2^{-\frac{t}{1600}}$$
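Following that second form, the requested number for $t = 200$ works out directly (a sketch):

```python
# Percentage of radium remaining after t years, given a 1600-year half-life:
# 100 * 2^(-t/1600)
def pct_remaining(t, half_life=1600.0):
    return 100.0 * 2.0 ** (-t / half_life)

print(round(pct_remaining(200), 2))  # about 91.7
```

So after 200 years, one eighth of a half-life, about 91.7% of the sample remains.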
#### bergausstein
##### Active member
from here solving for k | {
"domain": "mathhelpboards.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9805806484125338,
"lm_q1q2_score": 0.8536908405465717,
"lm_q2_score": 0.8705972751232808,
"openwebmath_perplexity": 1667.1023268420893,
"openwebmath_score": 1.0000100135803223,
"tags": null,
"url": "https://mathhelpboards.com/threads/d-e-word-problem.9047/"
} |
beginner, c, strings, programming-challenge
if (argc > 2) {
puts("Excessive arguments, only the first will be considered.");
}
FILE *file = fopen(args[1], "r");
if (file == NULL) {
perror("Error");
return 1;
}
char line[LINE_LENGTH];
while (fgets(line, LINE_LENGTH, file)) {
printf("%s\n", parse_and_evaluate(line));
}
fclose(file);
}
You're bleeding memory for every line in the file. This is probably ok for the challenge, however it's a good idea to get into the habit of cleaning up after yourself.
The call to malloc in str_mul needs to have a corresponding free call somewhere to release the memory. Looking at your program in its current state, the easiest way to do that is probably in the is_rotated method, like so:
bool is_rotated(char *original, char *test_case) {
int original_length = strlen(original);
char *rotation_superset = str_mul(original, 2);
bool is_substring = strstr(rotation_superset, test_case) != NULL;
free(rotation_superset);
return original_length == strlen(test_case) && is_substring;
}
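For completeness, here is a self-contained sketch of that fix in action; the str_mul body is my minimal stand-in for the poster's version, which isn't shown in this excerpt:

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the poster's str_mul: repeat `str` `times` times into a
 * freshly malloc'd buffer.  The caller owns (and must free) the result. */
char *str_mul(const char *str, int times)
{
    size_t len = strlen(str);
    char *out = malloc(len * (size_t)times + 1);
    if (out == NULL)
        return NULL;
    out[0] = '\0';
    for (int i = 0; i < times; i++)
        strcat(out, str);
    return out;
}

/* Leak-free is_rotated: the doubled string is freed before returning. */
bool is_rotated(const char *original, const char *test_case)
{
    size_t original_length = strlen(original);
    char *rotation_superset = str_mul(original, 2);
    if (rotation_superset == NULL)
        return false;
    bool is_substring = strstr(rotation_superset, test_case) != NULL;
    free(rotation_superset);
    return original_length == strlen(test_case) && is_substring;
}
```

The doubling trick works because every rotation of a string is a substring of that string concatenated with itself.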
I'd be tempted to actually move the allocation out of str_mul and pass the buffer into the str_mul method, so that the allocation and release are at the same level. This would also allow you to simply declare a local on the stack in is_rotated, rather than needing to use malloc:
void str_mul(char* str, char *string_multiplied, int times) {
string_multiplied[0] = '\0';
for (int i = 1; i <= times; i++) {
strcat(string_multiplied, str);
}
} | {
"domain": "codereview.stackexchange",
"id": 22082,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, c, strings, programming-challenge",
"url": null
} |
ros, motoman-driver, motoman
Just to double check, from what I see, I guess the indigo branch has the best support for the SDA10f. So I will go ahead using ROS Indigo?
Comment by gvdhoorn on 2016-12-14:
For the SDA10 there shouldn't be too much difference between Jade and Indigo, but Indigo is probably a good idea, yes.
Comment by motoman on 2016-12-15:
I have decided to change the axis b and axis t STL files in the URDF of the Motoman SDA10F with meshes from the CSDA10F. Apart from them, I don't see much of a change that would affect the kinematics of the robot. Suggestions?
Comment by gvdhoorn on 2016-12-15:
We should probably move this discussion to the moveit_experimental repository issue tracker. But to answer your question: it's not just about kinematics. Robot geometry (shape, size) is also important. Not just for visualisation, but for collision avoidance as well.
Comment by motoman on 2017-01-06:
Understood :) I am working on the URDF but haven't been able to find technical drawings for the CSDA10F; I have also contacted tech support and have had no reply yet. I have posted it as an issue in the motoman_experimental repository | {
"domain": "robotics.stackexchange",
"id": 26421,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, motoman-driver, motoman",
"url": null
} |
homework-and-exercises, newtonian-mechanics
Considering Newton's Third Law, the best way is to find the total force applied and rearrange the formula to find the boy's speed.
Newton's Law Method:
$F = m*a$
$a=\frac{0.3m/s}{0.5s}$
$F = 40kg * 0.6m/s^2$
$F = 24N$
Therefore the boy's speed will be:
$\frac{24N}{60kg} = a$
$a = 0.4m/s^2$
$v = 0.4m/s^2 * 0.5s$
$v = 0.2m/s$
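These figures can be cross-checked numerically (a sketch of mine): the equal and opposite forces act for the same 0.5 s, so the impulses, and hence the momentum magnitudes, must match:

```python
# Girl: 40 kg at 0.3 m/s; boy: 60 kg at 0.2 m/s (from the Newton's-law method)
m_girl, v_girl = 40.0, 0.3
m_boy, v_boy = 60.0, 0.2

# Equal and opposite impulses => equal momentum magnitudes (12 kg*m/s each)
assert abs(m_girl * v_girl - m_boy * v_boy) < 1e-9

ke_girl = 0.5 * m_girl * v_girl ** 2   # 1.8 J
ke_boy = 0.5 * m_boy * v_boy ** 2      # 1.2 J
```

Note that matching momenta do not imply matching kinetic energies, which is the crux of what follows.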
Kinetic Energy Method:
$E_{k}=\frac{1}{2}*m*v^2$ the girl's kinetic energy
$E_{k}=\frac{1}{2}*40kg*(0.3m/s)^2$
$E_{k}=1.8J$
Since every action has an equal and opposite reaction the boy should have the same amount of kinetic energy.
$E_{k}=\frac{1}{2}*60kg*(0.2m/s)^2$
$E_{k}=1.2J$
As you can see, using Newton's Third Law, the kinetic energy was not the same. So what's going on here? I thought that in this case they should both have the same amount of kinetic energy due to Newton's Law. I must be missing something since there is more energy going in one direction than the other after meeting. My book used the first method as the answer and, well, it feels wrong.

In both of your solutions, you attempted to use Newton's 3rd law: $$\vec{F}_{1\rightarrow2}=-\vec{F}_{2\rightarrow1}.\tag{Newton's 3rd law}$$
You did this correctly in your first method ("Newton's law method") but incorrectly in your second method ("Kinetic energy method"). | {
"domain": "physics.stackexchange",
"id": 10749,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, newtonian-mechanics",
"url": null
} |
newtonian-mechanics
Title: When a person bends forward, does the normal reaction force change, as a component of his weight is applied as torque? My thoughts: I read that the normal reaction is equal to the weight of the person. As in the case above, the net vertical force on the system is zero. But I'm confused, because one component of the weight vector goes into the torque and the other toward the point of contact.
I read that bending forward also increases the horizontal force, hence causing friction on a rough surface; but if this horizontal force comes from the weight, how can the vertical normal reaction force still be equal to the weight?
Kindly point out the error in my thought process and correct me. :)

Consider the configurations below:
Necessary condition of equilibrium is $x_N=x_G$
When you bend and are still in equilibrium (configurations 1 and 2), then certainly $x_N=x_G$. When you bend, your center of mass ($G$) displaces. But until $x_G\le x_U$ you can be in equilibrium, because the application point of the resultant normal reaction $N$ displaces with $G$'s displacement.
If you bend more (configuration 3), so that $x_G\gt x_U$; then you will rotate and cannot be in equilibrium (you can check this by calculating resultant torque about an arbitrary point).
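The two equilibrium conditions behind this can be checked with numbers (a sketch; the mass and offset are assumed values): vertical force balance fixes $N$, and torque balance fixes $x_N$:

```python
# Only vertical forces: weight m*g acting at x_G, normal force N at x_N.
m, g = 70.0, 9.81      # assumed mass of the person, gravity
x_G = 0.12             # assumed horizontal offset of the center of mass (m)

N = m * g              # vertical force balance: N = mg, regardless of x_G
x_N = m * g * x_G / N  # torque balance about the origin: N*x_N = m*g*x_G

assert abs(N - m * g) < 1e-9
assert abs(x_N - x_G) < 1e-12   # the application point follows G
```

The weight never splits off a horizontal component here; bending only moves $x_G$, and equilibrium answers by moving $x_N$ to match.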
As long as you are in equilibrium, the magnitude and direction of the resultant normal reaction $N$ don't change ($N=mg$). What changes is the application point of $N$. | {
"domain": "physics.stackexchange",
"id": 78454,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics",
"url": null
} |
cosmology, astronomy, astrophysics, dark-matter, structure-formation
Title: What size of object does the peak of the cosmological power spectrum correspond to? The title almost says it all, but to flesh it out more, what is the size a sphere corresponding to the peak in the cosmological power spectrum (Figure 2: https://ned.ipac.caltech.edu/level5/Sept11/Norman/Norman2.html).
It would be great to get a feel of both the size of the collapsing region (e.g. the first stars collapsing in clusters from a region with size of order kpc) and the size that such an object could theoretically collapse down to (if it could cool effectively and gravity wasn't swamped by other forces).

From the figure on the website you link, we see that the peak in the power spectrum occurs at a wavelength of about 300 $h^{-1}$ Mpc. The value of $h$ is 0.68 for a Hubble constant of 68 km/s/Mpc (a value based on the Planck 2013 results). If we take 300, divide by 0.68 to get units of Mpc, and round to the nearest ten, we get
$$\lambda = 440 ~\textrm{Mpc}.$$
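The conversion is a one-liner (a sketch):

```python
# Convert the peak scale from h^-1 Mpc to Mpc, with h = H0 / (100 km/s/Mpc)
H0 = 68.0            # km/s/Mpc, the Planck-era value used here
h = H0 / 100.0
lam = 300.0 / h      # Mpc
print(round(lam, -1))  # rounds to 440.0
```
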
This is much larger than any galaxy or even galaxy cluster (galaxy clusters are $\sim 1-10$ Mpc in size). We have to get to structures known as superclusters, which are groups of clusters, or galaxy filaments, which are so named because they are usually longer in one direction than in the other two.
The Laniakea Supercluster, which the Milky Way is part of, is close to this size but still a little small, at 160 Mpc.
The Sloan Great Wall, a galaxy filament, is about 400 Mpc in length.
In short, the length scale that relates to the peak in the power spectrum of matter fluctuation densities encompasses the largest structures that have been discovered in the universe. | {
"domain": "physics.stackexchange",
"id": 29542,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cosmology, astronomy, astrophysics, dark-matter, structure-formation",
"url": null
} |
electricity, superconductivity
Title: Do superconductors experience any force from within a solenoid? If I put a superconductor in the center of a solenoid and put a current through the solenoid, does the superconductor experience a force?

There is no force on a superconductor in a solenoid.
The first thing that happens when you turn on the solenoid is the superconductor develops a magnetic moment $-\frac{\alpha}{\mu_0}\vec{B}_0$ where $\vec{B}_0$ is the original field and $\alpha$ is a dimensionless constant characterizing the shape of the superconductor ($\alpha=3/2$ for a sphere), in other words, the superconductor acts like a perfect diamagnet.
Now we can ask how the moment interacts with the external magnetic field. If a field gradient is present the magnetic moment would experience a force. This is what happens in the case of a superconductor near a bar magnet; the field produced by the bar magnet is not uniform, there is a field gradient. In a solenoid the field is uniform so there is no force.
Another effect is the pinning of magnetic field to defects. This stabilizes superconductors levitating above bar magnets. But again there is no effect in a solenoid because the field is uniform. | {
"domain": "physics.stackexchange",
"id": 33200,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electricity, superconductivity",
"url": null
} |
ros, moveit, move-group-interface, move-group
srdf file:
<robot name="arm">
<!--GROUPS: Representation of a set of joints and links. This can be useful for specifying DOF to plan for, defining arms, end effectors, etc-->
<!--LINKS: When a link is specified, the parent joint of that link (if it exists) is automatically included-->
<!--JOINTS: When a joint is specified, the child link of that joint (which will always exist) is automatically included-->
<!--CHAINS: When a chain is specified, all the links along the chain (including endpoints) are included in the group. Additionally, all the joints that are parents to included links are also included. This means that joints along the chain and the parent joint of the base link are included in the group-->
<!--SUBGROUPS: Groups can also be formed by referencing to already defined group names-->
<group name="arm_eef">
<link name="link_1"/>
<link name="link_2"/>
<link name="link_3"/>
<link name="link_4"/>
<link name="link_5"/>
<link name="virtual_eef"/>
<joint name="joint_1"/>
<joint name="joint_2"/>
<joint name="joint_3"/>
<joint name="joint_4"/>
<joint name="joint_5"/>
<joint name="virtual_eef_joint"/>
<chain base_link="base_link" tip_link="virtual_eef"/>
</group>
<group name="eef">
<link name="eef_link_1"/>
<link name="eef_finger_link_1"/>
<link name="eef_link_2"/>
<link name="eef_finger_link_2"/>
<joint name="eef_joint_1"/>
<joint name="eef_finger_joint_1"/>
<joint name="eef_joint_2"/> | {
"domain": "robotics.stackexchange",
"id": 37602,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, moveit, move-group-interface, move-group",
"url": null
} |
Of course, you're... it's me, who didn't
17. Mr.Math
This problem is not difficult, but I'm missing something. What am I missing?!
18. Ishaan94
Do you think I should ping Zarkon?
19. Mr.Math
I think I have it. One minute.
20. Mr.Math
You can ping him though, but he should not answer until I finish :P
21. Ishaan94
If I'm not mistaken,$- \int_0^{2\pi} f'(x) \sin x \, dx = f'(x) \cos x \Big|_0^{2\pi} - \int_0^{2\pi} f''(x) \cos x \, dx$
22. Mr.Math
Correct! I just noticed that :)
23. Mr.Math
So we have to show that $$f'(2\pi)+g(0)\ge g(2\pi)+f'(0).$$
24. Mr.Math
I'm getting tired. @Zarkon should come from his planet now.
25. Ishaan94
Zarkon's offline :(... @satellite73 @eseidl
26. satellite73
i am thinking parts
27. satellite73
oh i should pay attention. looks like mr.math did parts right?
28. Mr.Math
Yes.
29. Ishaan94
yeah
30. Mr.Math
I think I have it this time. By the squeeze theorem and property of inequality in integrals we have: $-\int_0^{2\pi}f''(x) dx\le \int_0^{2\pi}f''(x)\cos(x)dx\le \int_0^{2\pi}f''(x)dx.$ Thus $$g(2\pi)-g(0)\le f'(2\pi)-f'(0)$$ as required.
31. satellite73
that was nice. i was trying to come up with a counter example, an example of a positive function where this integral would be negative. can't seem to do it.
32. satellite73 | {
"domain": "openstudy.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9796676481414185,
"lm_q1q2_score": 0.8099028052619712,
"lm_q2_score": 0.8267118004748677,
"openwebmath_perplexity": 818.2094521380008,
"openwebmath_score": 0.9072598218917847,
"tags": null,
"url": "http://openstudy.com/updates/4f626ed8e4b079c5c631603c"
} |
c#, excel
Title: Loop through Excel files and copy correct range in a separate file Intro: Today I have decided to make an Excel automation task with C#. This is probably the first time I am doing something like this, so the problems are plentiful.
The task: Pretty much, the idea is the following - I have 4 excel files in folder strPath. I have to loop through all of them and make a file called Report.xlsx in the same folder, with the information from those files.
The information that I need is anything below row 9. Thus, the first row to copy is row number 10. That is why the first file I loop through is saved as Report, and the bMakeOnce value is changed. After the first file is looped and saved as, I start entering the else condition. There I locate the last used row of each Excel file and try to copy the range into sheetReport.
The questions:
Pretty much, the code works as expected. However, I was thinking of some improvements, as I suspect there should be a way to make things neater. E.g., some fancy try-catch-finally or similar. And some good practices, e.g. what would make a good candidate for a separate function?
using System;
using System.IO;
using Excel = Microsoft.Office.Interop.Excel;
class MainClass
{
static void Main()
{
string strPath = Path.GetFullPath(Path.Combine(Directory.GetCurrentDirectory(), @"..\..\..\"));
string[] strFiles = Directory.GetFiles(strPath);
Excel.Application excel = null;
bool bMakeOnce = true;
string strReportName = "Report.xlsx";
int intFirstLine = 10;
int intLastColumn = 50;
int lastRow;
int lastRowReport;
int intTotalRows;
Excel.Workbook wkbReport = null;
string strWkbReportPath;
int n = 0;
excel = new Excel.Application();
excel.Visible = true; | {
"domain": "codereview.stackexchange",
"id": 23933,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, excel",
"url": null
} |
quantum-mechanics, homework-and-exercises, wavefunction, schroedinger-equation, potential
$$C_1 \cos(\omega b) = A_1 \cos(k b)\\ C_1 \omega \sin(\omega b) = A_1 k \sin(k b)$$
The solution reads: $$\tan(\omega b) = \frac{k}{\omega} \tan(k b) $$ which can be written as : $$\tag 1 \omega \tan(\omega b) = \frac{(2n+1)\pi}{2a} \tan\left( \frac{2n+1}{2a}\pi b \right)$$
Now, if you do $b \rightarrow a$ you get that $\tan\left(\frac{2n+1}{2}\pi\right) \rightarrow \infty$, which means that $\cos(\omega b)=0$, which means that $\omega = \frac{2n+1}{2 b}\pi$ (also $k$ vanishes), which is the usual energy for the infinite well.
The limit $b \rightarrow 0$ is consistent too. Consider equation $(1)$ written as $$\omega=k \frac{\tan\left(kb\right)}{\tan\left(\omega b\right)}$$ now take the limit $b \rightarrow 0$ on both sides $$\ \omega = k\cdot \frac{k}{\omega}\longrightarrow \omega = k$$ and you are left with $k = \frac{(2n+1)\pi}{2 a}$ which is the usual infinite well energy. | {
"domain": "physics.stackexchange",
"id": 29174,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, homework-and-exercises, wavefunction, schroedinger-equation, potential",
"url": null
} |
c++, sorting, heap
// sort vector in ascending order
for (auto i = start; i != end-1; ++i) {
auto val = *start;
*start = *(start+(end-i-1));
*(start+(end-i-1)) = val;
max_heap(start, start, start+(end-i-1));
}
}
Sort your includes. That way, you can keep track even if there are more of them.
Writing a test-program is a good idea. Though print the seed-value, and allow overriding from the command-line for reproducibility.
In line with that, add a method to test whether a range is ordered, print that result and use it for the exit-code too.
I would expect a function named print_vector() to, you know, print a vector. Not an iterator-range from a vector. Also, encoding the type of an argument in the function-name hurts usability, especially in generic code.
fill_vector() is a curious interface. I would expect get_random_data() which returns the vector.
Know your operators. ++i, i <= num_of_elems is equivalent to ++i <= num_of_elems.
Anyway, that should be a for-loop, or you could omit i and just count the argument down to zero.
Kudos for using constexpr to avoid preprocessor-constants where not needed. Still, ALL_CAPS_AND_UNDERSCORES identifiers are generally reserved for preprocessor-macros. They warn/assure everyone that preprocessor-rules apply. Fix the naming too.
The C++ headers <cxxx> modelled on the C headers <xxx.h> only guarantee to put their symbols into ::std. Don't assume they are also in the global namespace.
max_heap() will often try to create pointers far beyond the passed range. Creating such a pointer invokes undefined behavior.
For simple and correct code, better use indices. | {
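To illustrate the index-based alternative the review recommends, here is a sketch of heap sort written with plain indices (in Python rather than the original C++, since the point is indices vs. iterator arithmetic, not the language):

```python
def sift_down(a, start, end):
    """Restore the max-heap property for the subtree rooted at `start`,
    considering only elements a[0:end]. Uses indices only, so no
    out-of-range pointers are ever formed."""
    root = start
    while 2 * root + 1 < end:
        child = 2 * root + 1
        # Pick the larger of the two children, if a second one exists.
        if child + 1 < end and a[child + 1] > a[child]:
            child += 1
        if a[root] >= a[child]:
            return
        a[root], a[child] = a[child], a[root]
        root = child

def heap_sort(a):
    n = len(a)
    # Heapify: sift down every non-leaf node, bottom-up.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(a, i, n)
    # Repeatedly move the max to the end and shrink the heap.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end)

data = [5, 1, 4, 2, 8, 0]
heap_sort(data)
print(data)  # [0, 1, 2, 4, 5, 8]
```

Because every access is bounds-checked against `end` before it happens, there is no analogue of creating a pointer past the range.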
"domain": "codereview.stackexchange",
"id": 41348,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, sorting, heap",
"url": null
} |
javascript, jquery, html5, canvas
"domain": "codereview.stackexchange",
"id": 34315,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, jquery, html5, canvas",
"url": null
} |
# Prove the following trigonometric identity.
$$\frac{\sin x - \cos x +1}{\sin x + \cos x -1}=\frac{\sin x +1}{\cos x}$$
I tried substituting $\sin^2x+\cos^2x = 1$ but I cannot solve it.
The above method is really verifying and always quick. Another method to arrive at the answer is by rationalising denominator (mainly when the answer [or RHS] is not known or one is asked to work out only from LHS to RHS):
$$\frac{\sin x - \cos x + 1 }{\sin x + \cos x - 1 }\cdot \frac{\sin x + \cos x + 1}{\sin x + \cos x + 1}$$
$$\frac{ (\sin x + 1)^2 - \cos^2 x }{ 2 \sin x \cos x }$$
$$\frac{ \sin^2 x + 2 \sin x + 1 - \cos^2 x }{ 2 \sin x \cos x }$$
$$\frac{ \sin^2 x + 2 \sin x + \sin^2 x + \cos^2 x - \cos^2 x } {2 \sin x \cos x }$$
and the answer follows i.e. $$\frac{\sin x + 1}{\cos x}.$$
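The identity can also be sanity-checked numerically (a quick sketch; the sample angles are arbitrary, chosen so that neither denominator vanishes):

```python
import math

def lhs(x):
    # (sin x - cos x + 1) / (sin x + cos x - 1)
    return (math.sin(x) - math.cos(x) + 1) / (math.sin(x) + math.cos(x) - 1)

def rhs(x):
    # (sin x + 1) / cos x
    return (math.sin(x) + 1) / math.cos(x)

# Sample a few angles where both denominators are nonzero.
for x in [0.3, 0.7, 1.1, 2.0, 4.0]:
    assert math.isclose(lhs(x), rhs(x), rel_tol=1e-12)
```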
• well done i say :) – user87543 Dec 12 '13 at 13:47
• I wouldn't call this rationalizing the denominator, as there's nothing necessarily rational or irrational about the denominator before or after the initial multiplication, but this technique does mirror the technique for rationalizing denominators: using conjugates. – Isaac Dec 12 '13 at 16:05
Hint
$$\frac{a}{b}=\frac c d\iff ad=bc$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9828232879690035,
"lm_q1q2_score": 0.82531010503978,
"lm_q2_score": 0.8397339736884712,
"openwebmath_perplexity": 1164.3236865766298,
"openwebmath_score": 0.9328116774559021,
"tags": null,
"url": "https://math.stackexchange.com/questions/604169/prove-the-following-trigonometric-identity"
} |
In general, any nonzero multiple of the $Ones_{n\times n}$ matrix, say $a\cdot Ones_{n\times n}$ (matrix where all entries are $a$) will have an eigenvalue of zero and every eigenvector for zero will satisfy the relation that the sum of the entries is zero. We know that there are no others by a rank-nullity argument and that the remaining eigenvalue is equal to the trace (sum of the diagonal).
Furthermore, the other eigenvalue will necessarily be $a\cdot n$ with eigenvector $\begin{bmatrix}1\\1\\\vdots\\1\end{bmatrix}$. | {
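As a quick numerical check of both claims (a sketch using NumPy; the size $n$ and scale $a$ are arbitrary choices):

```python
import numpy as np

# Build a * Ones_{n x n} for an arbitrary choice of a and n.
a, n = 3.0, 4
M = a * np.ones((n, n))

eigvals = np.sort(np.linalg.eigvals(M).real)

# n-1 eigenvalues are (numerically) zero; the remaining one is a*n = trace(M).
assert np.allclose(eigvals[:-1], 0.0)
assert np.isclose(eigvals[-1], a * n)

# The all-ones vector is an eigenvector for the eigenvalue a*n.
ones = np.ones(n)
assert np.allclose(M @ ones, (a * n) * ones)
```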
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9828232935032463,
"lm_q1q2_score": 0.8411504375905563,
"lm_q2_score": 0.8558511414521923,
"openwebmath_perplexity": 188.01227741741098,
"openwebmath_score": 0.9563998579978943,
"tags": null,
"url": "https://math.stackexchange.com/questions/1486792/eigenvectors-for-an-eigenvalue-of-0"
} |
Then $-x<0$. We thus have
$$a>b\Rightarrow -ax<-bx\Rightarrow ab-ax<ab-bx\Rightarrow a(b-x)<b(a-x)$$
$$\Rightarrow \frac{a}{b}<\frac{a-x}{b-x}\Leftrightarrow \frac{a-x}{b-x}>\frac{a}{b}.$$
For the second question. If $x>0$, we have:
$$a<b\Rightarrow ax<bx\Rightarrow ab+ax<ab+bx\Rightarrow a(b+x)<b(a+x)$$
$$\Rightarrow \frac{a}{b}<\frac{a+x}{b+x}\Leftrightarrow \frac{a+x}{b+x}>\frac{a}{b}.$$
Then $-x<0$. We thus have
$$a<b\Rightarrow -ax>-bx\Rightarrow ab-ax>ab-bx\Rightarrow a(b-x)>b(a-x)$$
$$\Rightarrow \frac{a}{b}>\frac{a-x}{b-x}\Leftrightarrow \frac{a-x}{b-x}<\frac{a}{b}.$$
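Both cases can be illustrated with exact rational arithmetic (a sketch; the particular values of $a$, $b$, $x$ are arbitrary, with $0 < x < b$):

```python
from fractions import Fraction

def shift(a, b, x):
    """Compare (a+x)/(b+x) and (a-x)/(b-x) against a/b, exactly.
    Returns the two signed differences."""
    base = Fraction(a, b)
    return Fraction(a + x, b + x) - base, Fraction(a - x, b - x) - base

# Case a > b (here 5/3): adding x pulls the ratio down toward 1,
# subtracting x pushes it up away from 1.
up, down = shift(5, 3, 1)
assert up < 0 and down > 0

# Case a < b (here 3/5): both effects reverse.
up, down = shift(3, 5, 1)
assert up > 0 and down < 0
```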
-
Did you mean in your first implication, to write $a > b \rightarrow ax > bx$? – amWhy Jun 18 '11 at 20:57
@amWhy: Thanks! corrected. – Américo Tavares Jun 18 '11 at 21:07 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9830850877244272,
"lm_q1q2_score": 0.801442909930133,
"lm_q2_score": 0.8152324960856175,
"openwebmath_perplexity": 320.84531880516477,
"openwebmath_score": 0.8232020735740662,
"tags": null,
"url": "http://math.stackexchange.com/questions/46156/effect-of-adding-a-constant-to-both-numerator-and-denominator"
} |
operators, hilbert-space, hamiltonian, linear-algebra, density-operator
Can someone lay out this mathematical issue more cleanly?
Now, let's go to the next step. Assume that $\operatorname{exp}(-\beta \mathcal{H})$ is diagonal in the energy basis. Then why is the equality marked with the question mark true? What does the notation $\sum_n|E_n \rangle \operatorname{exp}(-\beta\mathcal{H})\langle E_n| $ mean, exactly? I think I saw that kind of form in a quantum mechanics course, matrix mechanics etc., but I don't remember its exact definition.
I examined an example, but I think this example is not fully consistent with the above equality. The example I examined is as follows. Let $L_A : R^2 \to R^2 $ be an operator given by a matrix $A := \begin{pmatrix} 1 & 3 \\ 4 & 2 \end{pmatrix} $, $v_1 := \begin{pmatrix} 1 \\ -1 \end{pmatrix}$, and $v_2 := \begin{pmatrix} 3 \\ 4 \end{pmatrix}$. Then through some calculation, we can show that $\beta := \{ v_1, v_2\}$ is an ordered basis for $R^2$ consisting of eigenvectors of $L_A$, and $[L_A]_{\beta} = \begin{pmatrix} -2 & 0 \\ 0 & 5 \end{pmatrix} $, which is a diagonalized matrix.
"domain": "physics.stackexchange",
"id": 96690,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "operators, hilbert-space, hamiltonian, linear-algebra, density-operator",
"url": null
} |
neural-network
Title: How did the authors manage to simulate and get the error estimate for a neural network with greater than 7840 qubits? In the paper A quantum-implementable neural network model (Chen, Wang & Charbon, 2017), on page 18 they mention that "There are 784 qurons in the input layer, where each quron is comprised of ten qubits."
That seems like a misprint to me. After reading the first few pages I was under the impression that they were trying to use $10$ qubits to replicate the $784$ classical neurons in the input layer, since $2^{10}=1024>784$, such that each basis state's squared coefficient is proportional to the activity of a neuron. Say the square of the coefficient of $|0000000010\rangle$ could be proportional to the activation of the $2$-nd classical neuron (considering all the $784$ neurons were labelled from $0$ to $783$).
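The amplitude-encoding interpretation described above can be sketched numerically (with made-up pixel data; this is an illustration of that reading, not the paper's actual scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.random(784)  # hypothetical activations of the 784 input neurons

# Pad to the nearest power of two (2^10 = 1024) and normalize to a unit vector,
# so the 784 activations fit in the amplitudes of a 10-qubit state.
state = np.zeros(1024)
state[:784] = np.sqrt(pixels)
state /= np.linalg.norm(state)

# A valid 10-qubit state: squared amplitudes sum to 1, and the squared
# amplitude of basis state |k> is proportional to neuron k's activation.
assert np.isclose(np.sum(state ** 2), 1.0)
assert np.allclose(state[:784] ** 2, pixels / pixels.sum())
```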
But if what they wrote is true ("There are 784 qurons in the input layer"), it would mean there are $7840$ qubits in the input layer, and then I'm not sure how they managed to implement their model experimentally. As of now we can properly simulate only ~$50$ qubits.
However, they managed to give an error rate for $>7840$ qubits (see Page 21: "Proposed two-layer QPNN, ten hidden qurons, five select qurons - 2.38"). No idea how they managed to get that value. Could someone please explain?
As of now we can properly simulate only ~50 qubits.
You are talking about a full quantum simulation of a vector containing $2^{50}$ elements.
In quantum neural networks and quantum annealing, we usually only need something close to the ground state (optimal value) rather than the absolute global minimum.
Here is another example from 2017 where 1000 qubits are simulated:
Here's an example from 2015 where 1000 qubits are simulated (it says bits rather than qubits, but they are the qubits of the D-Wave device): | {
"domain": "quantumcomputing.stackexchange",
"id": 179,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "neural-network",
"url": null
} |
performance, file, windows, assembly, x86
Main proc
sub rsp, 1048h ; align with 16 while simultaneously making room on the stack for the "home space", some parameters, and a 4096 byte buffer
lea var0, filePath ; put address of file path into parameter slot 0
mov var1, FILE_ACCESS_READ ; put access mode into parameter slot 1
mov var2, FILE_SHARE_READ ; put share mode into parameter slot 2
xor var3, var3 ; put security attributes into parameter slot 3
mov var4, FILE_DISPOSITION_OPEN ; put disposition into parameter slot 4
mov var5, FILE_FLAG_NORMAL ; put flags into parameter slot 5
mov var6, WINDOWS_NULL ; put pointer to template handle into parameter slot 6
call CreateFile ; create file handle
cmp rax, WINDOWS_INVALID_HANDLE ; validate file handle
je exitMain ; skip to exit point if create validation failed
mov var5, rax ; save a reference to the file handle for later (taking advantage of the unused parameter slot 5)
jmp readFileHeader ; skip to read file header
readFileBody:
xor eax, eax ; TODO: something useful with the number of bytes read in ecx...
readFileHeader:
mov var0, var5 ; put file handle into parameter slot 0
lea var1, qword ptr [(rsp + 38h)] ; put pointer to file buffer into parameter slot 1
mov var2, 1000h ; put requested number of bytes to read into parameter slot 2
lea var3, var6 ; put pointer to actual number of bytes that were read into parameter slot 3 (taking advantage of the unused parameter slot 6)
mov var4, WINDOWS_NULL ; put overlapped pointer into parameter slot 4
call ReadFile ; read file handle
mov rcx, var6 ; put pointer to actual number of bytes that were read into rcx
mov edx, TRUE ; assume that body should be processed by storing TRUE in edx
test eax, eax ; validate file read operation (non-zero == no errors)
cmovz edx, eax ; store zero in edx if file read operation failed
test ecx, ecx ; check for end of file (non-zero == more data) | {
"domain": "codereview.stackexchange",
"id": 33777,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, file, windows, assembly, x86",
"url": null
} |
Theorem 7.5.1
Suppose $$\mathbf{A}$$ has rank $$r$$ and let $$\mathbf{A}=\mathbf{U}\mathbf{S}\mathbf{V}^T$$ be an SVD. Let $$\mathbf{A}_k$$ be as in (7.5.2) for $$1\le k < r$$. Then
1. $$\| \mathbf{A} - \mathbf{A}_k \|_2 = \sigma_{k+1}, \quad k=1,\ldots,r-1$$, and
2. If the rank of $$\mathbf{B}$$ is $$k$$ or less, then $$\| \mathbf{A}-\mathbf{B} \|_2\ge \sigma_{k+1}$$.
Proof
(part 1 only) Note that (7.5.2) is identical to (7.5.1) with $$\sigma_{k+1},\ldots,\sigma_r$$ all set to zero. This implies that
$\mathbf{A} - \mathbf{A}_k = \mathbf{U}(\mathbf{S}-\hat{\mathbf{S}})\mathbf{V}^T,$
where $$\hat{\mathbf{S}}$$ has those same values of $$\sigma_i$$ replaced by zero. But that makes the above an SVD of $$\mathbf{A} - \mathbf{A}_k$$, with singular values $$0,\ldots,0,\sigma_{k+1},\ldots,\sigma_r$$, the largest of which is $$\sigma_{k+1}$$. That proves the first claim.
## Compression#
If the singular values of $$\mathbf{A}$$ decrease sufficiently rapidly, then $$\mathbf{A}_{k}$$ may capture the most significant behavior of the matrix for a reasonably small value of $$k$$.
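A small numerical illustration of part 1 of the theorem (a sketch; the matrix is random and its shape is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 6))

# Thin SVD: S holds the singular values in decreasing order.
U, S, Vt = np.linalg.svd(A, full_matrices=False)

for k in range(1, len(S)):
    # A_k keeps only the k largest singular triples (equation 7.5.2).
    Ak = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]
    # Part 1: the 2-norm error equals sigma_{k+1} (index k, zero-based).
    assert np.isclose(np.linalg.norm(A - Ak, 2), S[k])
```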
Demo 7.5.2 | {
"domain": "tobydriscoll.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9940889311437562,
"lm_q1q2_score": 0.8409420152436624,
"lm_q2_score": 0.845942439250491,
"openwebmath_perplexity": 425.18122406494876,
"openwebmath_score": 0.8358561396598816,
"tags": null,
"url": "https://tobydriscoll.net/fnc-julia/matrixanaly/dimreduce.html"
} |
javascript, datetime, d3.js
this.plot.scale.y.domain([0, d3.max(this.plot.points, function(d) { return d.count; })]);
this.plot.area.y0(this.plot.scale.y(0));
// Draw axes
d3.select(this.$refs.xAxis)
.attr('transform', 'translate(0,' + this.layout.height + ')')
.call(
d3.axisBottom(scale.x)
.ticks(7)
.tickFormat(d3.timeFormat("%a, %b %d"))
);
d3.select(this.$refs.yAxis)
.call(
d3.axisLeft(scale.y)
);
// Draw area
var $area = d3.select(this.$refs.area);
$area
.datum(this.plot.points)
.attr('d', this.plot.area)
.attr('fill', '#1ABC9C')
.attr('fill-opacity', 0.5);
// Draw line
var $line = d3.select(this.$refs.line);
$line
.data([this.plot.points])
.attr('d', this.plot.line);
// Draw points
var $g = d3.select(this.$refs.points);
$g.selectAll('circle.point').data(this.plot.points)
.enter()
.append('circle')
.attr('r', 5)
.attr('class', 'point')
.attr('cx', function(d) { return scale.x(d.date); })
.attr('cy', function(d) { return scale.y(d.count); });
}
}
});
svg {
background-color: #eee;
display: block;
width: 100%;
}
svg g.axis text {
fill: #555;
}
svg .line {
fill: none;
stroke: #159078; | {
"domain": "codereview.stackexchange",
"id": 24923,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, datetime, d3.js",
"url": null
} |
design, bicycles
Title: Why don't motorcycles have as good a turning radius as bicycles? The question is about why motorcycles are not manufactured with handlebars that can turn to great degrees like a bicycle's. On most motorcycles you hardly have a steering angle of more than about 20-30 degrees, whereas on a bicycle you can even go beyond 90 degrees. Is this done with stability in mind, as motorcycles travel at far higher speeds than bicycles, and sudden steering at such speeds can cause the vehicle to flip over? Why is this not done at least for motorcycles belonging to the 'streetbike' or 'naked' category, which are meant to be ridden in cities, at least theoretically, where you might need to take tight turns? I'll approach this from a different perspective to the other answers: pushbikes have no limits on the steering range because they don't need to add the complexity and weight. They're simple, light, and traditional, so include only the features necessary. Turning the bars more than a few degrees while riding along at any decent speed isn't needed, but it may be at very low speeds or when parking (which may be done in much tighter spaces than motorbikes, and by pushing). I tried to eyeball this on my commute last night. That's never easy with angles, let alone when looking out for traffic, but at road speeds (roughly 20 km/h or 15 mph) the bars only moved by a couple of degrees, and 10-15° max at a slow walking pace to get from the bike parking to the road.
Most bikes will have their steering limited by the range of movement of brake/gear cables (or brake hoses), or by the handlebars coming into contact with the toptube, or something like that (in my case, often bars vs. luggage, perhaps mounted where a motorbike's fuel tank would be). There are typically 4 Bowden cables/hoses and possibly one or two lighting power cables to couple between the forks and the frame. The electrical connections, if present, may wear from handlebar movement but are unlikely to suffer catastrophic failures; the mechanical and hydraulic connections are far tougher and will wear from use much faster than from turning the bars. | {
"domain": "engineering.stackexchange",
"id": 4162,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "design, bicycles",
"url": null
} |
google-apps-script, google-sheets
Title: Exporting unread mail details for further processing As the result of some expert procrastination I have over 4000 unread mail in my gmail box. While that does not hold a candle to what some people likely have I wanted to try and do something about it. Namely unsubscribe where I can, create filters and labels or delete.
I wanted to gather data on all my unread mail so that I could process it manually in a spreadsheet. The following code was running as a trigger every 5 minutes. What it does is get 100 unread threads from my mailbox, take all the matching messages, and output the from and subject into a spreadsheet. Another sheet tracks the progress in 100-mail chunks so that I don't hit an execution limit in Apps Script. The sheet records the start index, which was manually set to 0 initially. When the script runs out of mail to process, it deletes its trigger and sets the start index to -1.
function groupUnreadMail() {
// This function will group all unread mail to help decide how to filter/remove/deleted unread mail faster.
var spreadsheetName = "Unread Mail";
var numberOfMailPerPass = 100;
var addresses = [];
// Open up the spreadsheet that contains the progress details and collected data thus far.
var sheetID = getDriveIDfromName(spreadsheetName);
// Verify that only one sheet was located.
if(sheetID.length == 1){
// Get the pertinent details from the sheet to start searching for mail.
var spreadsheet = SpreadsheetApp.openById(sheetID[0]);
// Get the starting iteration from the first cell in the first sheet.
var mainSheet = spreadsheet.getSheetByName("main");
var startSearchIndex = mainSheet.getRange("A1").getValue();
// find all messages that are unread
var unreadThreads = GmailApp.search('is:unread',startSearchIndex,numberOfMailPerPass);
Logger.log("Total number of threads found: " + unreadThreads.length); | {
"domain": "codereview.stackexchange",
"id": 20886,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "google-apps-script, google-sheets",
"url": null
} |
electric-fields, gauss-law
The answer is $$E = \frac{\sigma}{\varepsilon_0}$$ but with superposition it seems it should be twice that: $$E = \frac{2\sigma}{\varepsilon_0}$$ ($\sigma$ is the area charge density on the plates, and $\varepsilon_0$ is permittivity constant).
I'm self-studying from Physics, 4th Edition, by Halliday, Resnick, Krane. They are very explicit about the difference between conducting and non-conducting infinite sheet's electric field.
For non-conducting infinite sheet they give
$$E = \frac{\sigma}{2\varepsilon_0}$$
For conducting infinite sheet they give
$$E = \frac{\sigma}{\varepsilon_0}$$
because the other side of the metal sheet has equal charge, so you have superposition of the two sides.
In the space between the oppositely charged plates, each conducting metal plate creates a field of
$$E = \frac{\sigma}{\varepsilon_0}$$
both pointing in the same direction (from positive towards negative). Which sums via superposition to
$$E = \frac{2\sigma}{\varepsilon_0}$$
If these were non-conducting plates, then each plate contributes
$$E = \frac{\sigma}{2\varepsilon_0}$$
which sums via superposition to
$$E = \frac{\sigma}{\varepsilon_0}$$
But this problem specifies "two large metal plates", not non-conducting plates. This is because Gauss's law gives the net electric field, not the field due to a single component. If you draw a Gaussian surface to calculate the electric field between the plates, what you get is the net electric field, irrespective of your choice of Gaussian surface, instead of just the electric field due to a single plate.
This is because, if there were only one plate (the other plate being removed), the surface charge density would be halved, as charges would move to the outer face of the conductor in the absence of the other plate.
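One way to keep the bookkeeping straight (a sketch; $\sigma$ and $\varepsilon_0$ are set to 1 so fields are in units of $\sigma/\varepsilon_0$): model each charged surface as a thin sheet contributing $\sigma/(2\varepsilon_0)$, and note that between oppositely charged plates the two contributions point the same way:

```python
# Work in units where sigma = eps0 = 1.
sigma, eps0 = 1.0, 1.0

# Field of a single thin sheet of surface charge density sigma.
sheet = sigma / (2 * eps0)

# Between oppositely charged plates, the +sigma layer and the -sigma layer
# both point from positive toward negative, so their fields add:
between = sheet + sheet
assert between == sigma / eps0  # matches the Gauss's-law result sigma/eps0
```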
"domain": "physics.stackexchange",
"id": 90472,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electric-fields, gauss-law",
"url": null
} |
c#, object-oriented, game, hangman
namespace MyName.Games.Hangman
{
class TheHangedMan
{
private readonly string _originalString;
private readonly char[] _displayChars;
public int _turnsLeft;
public TheHangedMan(string originalString, int turnsAvailable)
{
_originalString = originalString.ToLower();
_turnsLeft = turnsAvailable;
_displayChars = new String(' ', originalString.Length).ToCharArray();
}
public bool IsRunning { get; private set; } = true;
public string DisplayText => new String(_displayChars);
public string AttemptCharacter(char attempt)
{
if (_originalString.Contains(attempt)) {
for (int i = 0; i < _originalString.Length; i++) {
if (_originalString[i] == attempt) {
_displayChars[i] = attempt;
}
}
if (_originalString == DisplayText) {
IsRunning = false;
return "GG kid you won";
}
return "You found a char";
} else {
if (_turnsLeft == 0) {
IsRunning = false;
return "No attempts left originalString = " + _originalString;
}
return "Char not in string, attempts left = " + _turnsLeft--;
}
}
}
class Program
{
static void Main(string[] args)
{
string inputString = Console.ReadLine();
var game = new TheHangedMan(inputString, 6);
while (game.IsRunning) {
Console.WriteLine(game.DisplayText);
char attempt = Console.ReadLine()[0];
string message = game.AttemptCharacter(attempt);
Console.WriteLine(message);
}
}
}
}
But there is always something to be improved... maybe GuessCharacter instead of AttemptCharacter... it's up to you!
See also: The Boy Scout Rule | {
"domain": "codereview.stackexchange",
"id": 41341,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, object-oriented, game, hangman",
"url": null
} |
ros, orocos
Title: Two "orocos kdl" packages
In /opt/ros/groovy/share there are two folders with near identical names: orocos_kdl and orocos-kdl.
orocos_kdl contains:
cmake and package.xml
orocos-kdl contains:
orocos-kdl-config.cmake
I'm not sure which I should use to install. I'm leaning towards orocos_kdl as it looks more like what I'd expect as I'm going through the catkin tutorials.
Any guidance is much appreciated.
Originally posted by loughnane on ROS Answers with karma: 1 on 2013-04-08
Post score: 0
Original comments
Comment by jbohren on 2013-04-08:
What do you mean by "use to install"? What are you trying to do exactly?
Both folders you see result from the same source package, orocos_kdl. The orocos-kdl folder is the result of a legacy installation method from before the ROS era; if you are using ROS you should use the orocos_kdl folder, since this is the name of the package. This discrepancy will be resolved in Hydro.
Originally posted by Ruben Smits with karma: 543 on 2013-04-21
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 13727,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, orocos",
"url": null
} |
ros
Title: depth_image_to_laserscan not publishing to /kinect_scan topic in ROS2 Galactic
Hello,
I am trying to convert a depth image to a laser scan using the depthimage_to_laserscan package in ROS2 Galactic, but I am having trouble getting the node to publish the converted LaserScan data. Despite being able to subscribe to the input depth image topic (/kinect_sensor/depth/image_raw), the node doesn't seem to be publishing the converted data to the /kinect_scan topic. Or the node is not properly doing its task: converting the data...
Here's the launch file snippet for the depthimage_to_laserscan node:
# Kinect depthimage to laserscan conversion node
depthimage_to_laserscan_node = Node(
package='depthimage_to_laserscan',
executable='depthimage_to_laserscan_node',
name='depthimage_to_laserscan',
parameters=[kinect_params_file, {'use_sim_time': use_sim_time}],
remappings=[('depth', '/kinect_sensor/depth/image_raw'),
('scan', '/kinect_scan')],
output='screen')
And my kinect_params.yaml file:
depthimage_to_laserscan_node:
ros__parameters:
output_frame: kinect_depth_frame
scan_height: 1
range_min: 0.45
range_max: 10.0
scan_topic: /kinect_scan | {
"domain": "robotics.stackexchange",
"id": 38466,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
javascript, google-apps-script
Additional information on a flaw in the method, found by @DaveMeehan: what if history was `['a','b']` and the new values were `['ab','']`? Joining without a separator makes the two keys collide.
Your solution can be simplified to a few lines.
This piece of code is written twice in your function and simply creates a list of "keys" to identify each sub-array of your list.
for(i; i<max; i++){
history_join.push(history[i][0] + history[i][1]);
}
You could create a function that handles building the key and call it whenever you want.
const buildKey = arr => arr.join('');
The advantage of that is that you won't be limited to two items in your sub-array.
You're also playing around with multiple lists within your function but that seems to overly complicate your end goal.
You should instead:
Build keys from your history list and store them in an efficient Set object.
Build keys from your new_values list and check whether each one exists in the Set object.
If a new_values key already exists in the Set, you can filter that entry out.
Full solution:
const buildKey = arr => arr.join(':');
const notFound = (input, history) => {
const historySet = new Set(history.map(buildKey));
return input.filter((arr) => !historySet.has(buildKey(arr)))
}
const history = [['a','b'],['c','d']];
const input = [['c','d'],['e','f'], ['ab', '']];
const result = notFound(input, history);
console.log(result) | {
"domain": "codereview.stackexchange",
"id": 43471,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, google-apps-script",
"url": null
} |
c#, parsing, compiler, lexical-analysis
}
}
I would start by replacing the tuple (int Line, int Column) location with an immutable struct named SourceLocation.
Here are some of the reasons why I would do this.
The concept already exists in several places. TokenFactory, Token, and IContainsLocation know about this tuple. Adding a FileName field to the tuple would require changes to all three types.
TokenParser and Token should not be modifying the contents of this tuple.
TokenFactory modifies its Location field but it is easy to change the interface so that it doesn't need to.
Replace
public (int Line, int Column) Location;
with
public int Line {get; private set;}
public int Column {get; private set;}
Why Properties Matter gives a detailed explanation of why you should avoid using public fields.
Mutable tuples are a relatively new idea and there isn't a consensus on whether they are appropriate to use in APIs. I use them only in types that are internal or private.
Here is how I would write SourceLocation. I used ReSharper to generate the overrides and the IEquatable implementation.
public struct SourceLocation : IEquatable<SourceLocation>
{
public int Line { get; }
public int Column { get; }
public SourceLocation(int line, int column)
{
Line = line;
Column = column;
}
public bool Equals(SourceLocation other)
{
return Line == other.Line && Column == other.Column;
}
public override bool Equals(object obj)
{
if (ReferenceEquals(null, obj)) return false;
return obj is SourceLocation && Equals((SourceLocation) obj);
}
public override int GetHashCode()
{
unchecked
{
return (Line * 397) ^ Column;
}
}
public void Deconstruct(out int line, out int column)
{
line = Line;
column = Column;
}
}
TokenParser | {
"domain": "codereview.stackexchange",
"id": 26076,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, parsing, compiler, lexical-analysis",
"url": null
} |
Input Arguments
Input, specified as a numeric vector.
Data Types: `single` | `double`
Complex Number Support: Yes
Vandermonde Matrix
For input vector $v = \left[\begin{array}{cccc} v_1 & v_2 & \dots & v_N \end{array}\right]$, the Vandermonde matrix is
$\left[\begin{array}{cccc} v_1^{N-1} & \cdots & v_1^{1} & v_1^{0} \\ v_2^{N-1} & \cdots & v_2^{1} & v_2^{0} \\ \vdots & & \vdots & \vdots \\ v_N^{N-1} & \cdots & v_N^{1} & v_N^{0} \end{array}\right]$
The matrix is described by the formula $A\left(i,j\right)=v{\left(i\right)}^{\left(N-j\right)}$ such that its columns are powers of the vector `v`.
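As an aside (not from the MATLAB page), the same convention can be checked with NumPy, whose `np.vander` also uses decreasing powers by default:

```python
import numpy as np

v = np.array([2, 3, 5])
N = len(v)
A = np.vander(v)  # decreasing powers of v down each row, as in the formula

# verify A(i, j) = v(i)^(N-j) for 0-based indices i, j
for i in range(N):
    for j in range(N):
        assert A[i, j] == v[i] ** (N - 1 - j)
```

Passing `increasing=True` to `np.vander` gives the flipped form described below, the equivalent of `fliplr(vander(v))` in MATLAB.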
An alternate form of the Vandermonde matrix flips the matrix along the vertical axis, as shown. Use `fliplr(vander(v))` to return this form. | {
"domain": "mathworks.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9838471656514751,
"lm_q1q2_score": 0.8089084521232949,
"lm_q2_score": 0.8221891370573386,
"openwebmath_perplexity": 3050.686227375249,
"openwebmath_score": 0.937809407711029,
"tags": null,
"url": "https://se.mathworks.com/help/matlab/ref/vander.html"
} |
python, beginner, parsing, bioinformatics
it, pop, ind, locus = [int(d) for d in re.findall(r'\d+', i.id)]
maxit = max(it+1, maxit)
maxpop = max(pop+1, maxpop)
maxind = max(ind+1, maxind)
maxlocus = max(locus+1, maxlocus)
all = numpy.full((maxit, maxpop, maxind, maxlocus, maxsite), '\0', dtype="S")
for i in SeqIO.parse(filename, "fasta"):
it, pop, ind, locus = [int(d) for d in re.findall(r'\d+', i.id)]
all[it, pop, ind, locus] = i.seq
print(all)
print("---")
#### Calculate the frequencies ####
pA = numpy.full((maxit, maxpop, maxsite), 2)
pC = numpy.full((maxit, maxpop, maxsite), 2)
pG = numpy.full((maxit, maxpop, maxsite), 2)
pT = numpy.full((maxit, maxpop, maxsite), 2)
for it in xrange(maxit):
for pop in xrange(maxpop):
for locus in xrange(maxlocus):
for site in xrange(maxsite):
x = []
for ind in xrange(maxind):
x.append(all[it, pop, ind, locus, site])
pA[it, pop, site] = x.count("A") / maxind
pT[it, pop, site] = x.count("T") / maxind
pC[it, pop, site] = x.count("C") / maxind
pG[it, pop, site] = x.count("G") / maxind
print(pA)
print(pC)
print(pG)
print(pT)
Then one can use Numpy tools like fancy slicing
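The four nested counting loops above can typically collapse into a single vectorized comparison. A toy sketch (my own illustration, using a small stand-in array rather than the original data):

```python
import numpy as np

# toy stand-in for one (individual, site) slice of the 5-D array above:
# two individuals, four sites
seqs = np.array([list("ACGT"), list("AAGT")], dtype="S1")

# fraction of 'A' at each site, computed in one vectorized step
freq_A = (seqs == b"A").mean(axis=0)
```

The same `(array == base).mean(axis=...)` pattern replaces the `x.count(...) / maxind` loops for each of the four bases.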
from __future__ import division | {
"domain": "codereview.stackexchange",
"id": 14402,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, parsing, bioinformatics",
"url": null
} |
classical-mechanics, symmetry, moment-of-inertia
Proof: Let $\hat{e}_A$, $\hat{e}_B$, and $\hat{e}_C$ be three vectors that each connect the central atom to different "satellite" atoms. For concreteness, let $\hat{e}_A = \hat{z}$ point "up", along what you describe as the symmetry axis of the tetrahedron. You (correctly) note that $\hat{e}_A$ must be a principal axis of the tetrahedron, i.e., $\mathbf{I} \hat{e}_A = \lambda \hat{e}_A$ for some $\lambda$, where $\mathbf{I}$ is the inertia tensor. By symmetry, we must also have $\mathbf{I} \hat{e}_B = \lambda \hat{e}_B$ and $\mathbf{I} \hat{e}_C = \lambda \hat{e}_C$, since all of these axes are equivalent.
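The conclusion is easy to verify numerically: for four unit masses at the vertices of a regular tetrahedron, the inertia tensor about the centre comes out proportional to the identity, so every axis is principal. A quick check (added here, not part of the original answer):

```python
import numpy as np

# vertices of a regular tetrahedron centred at the origin
r = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float)

# inertia tensor of unit point masses: I = sum_k (|r_k|^2 * Id - r_k r_k^T)
I = sum(np.dot(rk, rk) * np.eye(3) - np.outer(rk, rk) for rk in r)

assert np.allclose(I, 8 * np.eye(3))  # isotropic: every axis is principal
```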
Now, the vectors $\hat{e}_A$, $\hat{e}_B$, and $\hat{e}_C$ span 3-D space, so we apply the Gram-Schmidt process to get an orthonormal set of vectors $\{\hat{e}_A, \hat{e}_2, \hat{e}_3 \}$, where $\hat{e}_2$ and $\hat{e}_3$ are linear combinations of the original vectors, e.g.,
$$
\hat{e}_2 = c_A \hat{e}_A + c_B \hat{e}_B + c_C \hat{e}_C
$$
and similarly for $\hat{e}_3$.
But by the properties of the inertia tensor, $\hat{e}_2$ is also an eigenvector of $\mathbf{I}$ with eigenvalue $\lambda$:
$$ | {
"domain": "physics.stackexchange",
"id": 85532,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classical-mechanics, symmetry, moment-of-inertia",
"url": null
} |
c#, parsing, wpf, mvvm, visual-studio
Title: Visual Studio exception visualizer for lengthy exception messages I use Autofac a lot and for everything, and when you make a mistake and forget to register a dependency etc. it'll tell you exactly what's wrong. Although its exceptions are very helpful, the exception strings are at the same time hard to read because each is a large blob of text:
System.Exception: Blub ---> Autofac.Core.DependencyResolutionException: An error occurred during the activation of a particular registration. See the inner exception for details. Registration: Activator = User (ReflectionActivator), Services = [UserQuery+User], Lifetime = Autofac.Core.Lifetime.CurrentScopeLifetime, Sharing = None, Ownership = OwnedByLifetimeScope ---> None of the constructors found with 'Autofac.Core.Activators.Reflection.DefaultConstructorFinder' on type 'UserQuery+User' can be invoked with the available services and parameters:
Cannot resolve parameter 'System.String name' of constructor 'Void .ctor(System.String)'. (See inner exception for details.) ---> Autofac.Core.DependencyResolutionException: None of the constructors found with 'Autofac.Core.Activators.Reflection.DefaultConstructorFinder' on type 'UserQuery+User' can be invoked with the available services and parameters:
Cannot resolve parameter 'System.String name' of constructor 'Void .ctor(System.String)'.
at Autofac.Core.Activators.Reflection.ReflectionActivator.GetValidConstructorBindings(IComponentContext context, IEnumerable`1 parameters)
To find the reason for this exception in such a string isn't easy. This is better done by a tool, so I created one. It reads the string for me and presents it in a friendlier way. I implemented it as a Debugger Visualizer.
"domain": "codereview.stackexchange",
"id": 35350,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, parsing, wpf, mvvm, visual-studio",
"url": null
} |
python, logistic-regression, gradient-descent
def loss(x0, X, y, alpha):
# logistic loss function, returns Sum{-log(phi(t))}
    # x0 packs the parameters: w is the weight vector, c is the bias term
w, c = x0[:X.shape[1]], x0[-1]
z = X.dot(w) + c
yz = y * z
idx = yz > 0
out = np.zeros_like(yz)
out[idx] = np.log(1 + np.exp(-yz[idx]))
out[~idx] = (-yz[~idx] + np.log(1 + np.exp(yz[~idx])))
out = out.sum() / X.shape[0] + .5 * alpha * w.dot(w)
return out
def gradient(x0, X, y, alpha):
# gradient of the logistic loss
w, c = x0[:X.shape[1]], x0[-1]
z = X.dot(w) + c
z = phi(y * z)
z0 = (z - 1) * y
grad_w = X.T.dot(z0) / X.shape[0] + alpha * w
grad_c = z0.sum() / X.shape[0]
return np.concatenate((grad_w, [grad_c]))
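One caveat for readers: the helper `phi` called in `gradient` is not defined in this excerpt; presumably it is the logistic sigmoid. A numerically stable version might look like this (an assumption on my part, not the original author's code):

```python
import numpy as np

def phi(t):
    # logistic sigmoid, split by sign so exp() never overflows
    t = np.asarray(t, dtype=float)
    out = np.empty_like(t)
    pos = t >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-t[pos]))
    et = np.exp(t[~pos])
    out[~pos] = et / (1.0 + et)
    return out
```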
def bgd(X, y, alpha, max_iter):
step_sizes = np.array([100,10,1,.1,.01,.001,.0001,.00001])
iter_no = 0
x0 = np.random.random(X.shape[1] + 1) #initialize weight vector | {
"domain": "datascience.stackexchange",
"id": 1957,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, logistic-regression, gradient-descent",
"url": null
} |
electrical-engineering, ethics, sales, safety
But all of those steps are going way above and beyond what you're obligated to do in this particular case. This is especially so when there is a safe usage for the product along with an unsafe approach. And any of those actions are likely to irreparably damage your relationship with that client. Damaging the relationship will impair your credibility with them and make it less likely that they'll listen to your concerns.
So your obligation is to lay it out to them in unambiguous terms that you believe they need to stop using the product in their "preferred" manner and that your firm will no longer provide any support whatsoever regarding future use of that product in that configuration. | {
"domain": "engineering.stackexchange",
"id": 811,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrical-engineering, ethics, sales, safety",
"url": null
} |
# Thread: Orders, Matrices, Complex entries...
1. ## Orders, Matrices, Complex entries...
I'm having a little trouble with the difference between my notes and my textbook notation.
I have
$a = \left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right)$
$b = \left( \begin{array}{cc} i & 0 \\ 0 & -i \end{array} \right)$
I have to determine the orders of $\left< a \right>$ and $\left< b \right>$, and whether $\left< a \right>$ and $\left< b \right>$ are isomorphic.
For the first part I have $a^4=b^4=I_2$
So they have order 4. Correct?
For the isomorphic part can I just find a 2x2 matrix that shows that $a \rightarrow b$ isn't a homomorphism?
Do I even know what I'm talking about? (just started group theory last week)
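As a sanity check (added here, not part of the thread), the orders are easy to confirm numerically:

```python
import numpy as np

a = np.array([[0, -1], [1, 0]])
b = np.array([[1j, 0], [0, -1j]])

def order(m, max_n=10):
    # smallest n >= 1 with m^n equal to the identity
    p = np.eye(m.shape[0], dtype=m.dtype)
    for n in range(1, max_n + 1):
        p = p @ m
        if np.allclose(p, np.eye(m.shape[0])):
            return n
    return None

assert order(a) == 4 and order(b) == 4
```

Both $\left< a \right>$ and $\left< b \right>$ are then cyclic of order 4, and any two cyclic groups of the same order are isomorphic.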
2. Originally Posted by MichaelMath
I'm having a little trouble with the difference between my notes and my textbook notation.
I have
$a = \left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right)$
$b = \left( \begin{array}{cc} i & 0 \\ 0 & -i \end{array} \right)$
I have to determine the orders of $\left< a \right>$ and $\left< b \right>$, and whether $\left< a \right>$ and $\left< b \right>$ are isomorphic.
For the first part I have $a^4=b^4=I_2$
So they have order 4. Correct?
Yes, but also because 4 is the minimal natural power for which both matrices equal $I_2$
For the isomorphic part can I just find a 2x2 matrix that shows that $a \rightarrow b$ isn't a homomorphism? | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9546474168650673,
"lm_q1q2_score": 0.8016498573487767,
"lm_q2_score": 0.8397339616560072,
"openwebmath_perplexity": 267.67944608083076,
"openwebmath_score": 0.9628881812095642,
"tags": null,
"url": "http://mathhelpforum.com/advanced-algebra/157193-orders-matrices-complex-entries.html"
} |
java, c, functional-programming, lisp, compiler
bool translate_def(FILE *dst, GPtrArray *src, int *ip) {
char *s = g_ptr_array_index(src, ++*ip);
if (strcmp(s, "(") != 0) {
fprintf(dst, "static Object %s = ", s);
if (!translate(dst, src, ip)) {
return false;
}
fputs(";\n", dst);
return true;
}
s = g_ptr_array_index(src, ++*ip);
fprintf(dst, "static Object %s(", s);
bool b = false;
for (;;) {
s = g_ptr_array_index(src, ++*ip);
if (strcmp(s, ")") == 0) {
break;
}
if (b) {
fputs(", ", dst);
} else {
b = !b;
}
fprintf(dst, "Object %s", s);
}
fputs(") {\nreturn ", dst);
if (!translate(dst, src, ip)) {
return false;
}
fputs(";\n}\n", dst);
return true;
}
bool translate_if(FILE *dst, GPtrArray *src, int *ip) {
fputs("__if(", dst);
if (!translate(dst, src, ip)) {
return false;
}
fputs(", new Expression() {Object eval() {return ", dst);
if (!translate(dst, src, ip)) {
return false;
}
fputs(";}}, new Expression() {Object eval() {return ", dst);
if (!translate(dst, src, ip)) {
return false;
}
fputs(";}})", dst);
return true;
} | {
"domain": "codereview.stackexchange",
"id": 18391,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, c, functional-programming, lisp, compiler",
"url": null
} |
c#
Note we moved responsibility from TruthTable (a God class and its utility class) to the proper place: behavior is defined in each class and you're using inheritance to hide it.
What's then a truth-table? TruthTable is then simply a combination of inputs (InputCollection) associated with a LogicOperation. Its output column is calculated. Note that in this way you may build simple digital logic simulator simply connecting inputs and outputs and C# events will do the job, see this Proof of Concept:
abstract class Port {
public bool? Value {
get { return _value; }
set {
if (_value != value) {
_value = value;
OnValueChanged(EventArgs.Empty);
}
} | {
"domain": "codereview.stackexchange",
"id": 16903,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#",
"url": null
} |
c++, arduino
This seems intended to do exactly the same thing as std::min in the standard library. You're probably better off using the standard version.
inline float activ_float(float x, float force_mod = 20.f){
float val = (180.f - smaller_float(abs(force_mod * x), 179.9f) ) / 180.f;
return val / sqrt(1 + pow(val, 2));
}
I'm not sure about the name you've used here, and specifically why you'd use activ instead of active. Obviously, you'd want to incorporate the previous change, and use std::min here. Along with that, I'd at least consider val * val rather than pow(val, 2). pow is good for arbitrary exponents, but for a fixed exponent of 2 can waste a fair amount of time, and doesn't (at least IMO) improve readability at all.
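Putting those two suggestions together, the function might look like the sketch below (my rewrite, with `smaller_float` replaced by `std::min` and `abs` by `std::fabs` to avoid the integer overload; not the original author's code):

```cpp
#include <algorithm>
#include <cmath>

inline float activ_float(float x, float force_mod = 20.f) {
    // clamp |force_mod * x| just below 180, then normalize to (0, 1]
    float val = (180.f - std::min(std::fabs(force_mod * x), 179.9f)) / 180.f;
    return val / std::sqrt(1.f + val * val);  // val * val instead of pow(val, 2)
}
```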
[ ... ]
// Compensate yaw drift a bit with the help of the compass
uint32_t time = m_iTimer != 0 ? m_pHAL->scheduler->millis() - m_iTimer : INERTIAL_TIMEOUT;
m_iTimer = m_pHAL->scheduler->millis();
I think I'd prefer to move most of this into a function, so this came out something like:
uint32_t time = delta_time();
I also question the use of uint32_t here. Do you really need the result to be exactly 32 bits, or would uint_least32_t or uint_fast32_t really express your intent better?
// Calculate absolute attitude from relative gyrometer changes
m_vAttitude.x += m_vGyro.x * (float)time/1000.f; // Pitch
m_vAttitude.x = wrap180_float(m_vAttitude.x);
m_vAttitude.y += m_vGyro.y * (float)time/1000.f; // Roll
m_vAttitude.y = wrap180_float(m_vAttitude.y); | {
"domain": "codereview.stackexchange",
"id": 6403,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, arduino",
"url": null
} |
4th dimension, now the boundary is 5904 and the interior 4096 - the boundary is now larger.
Even for smaller and smaller boundary lengths, as the dimension increases the boundary volume will always overtake the interior.
The best way to "understand" it (though it is IMHO impossible for a human) is to compare the volumes of a n-dimensional ball and a n-dimensional cube. With the growth of n (dimensionality) all the volume of the ball "leaks out" and concentrates in the corners of the cube. This is a useful general principle to remember in the coding theory and its applications.
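The "leaks into the corners" picture is easy to quantify: the fraction of the cube $[-1,1]^n$ filled by the inscribed unit ball vanishes rapidly as $n$ grows. A small numerical illustration (added here, not part of the original answer):

```python
from math import gamma, pi

def ball_to_cube_ratio(n):
    # volume of the unit-radius n-ball over the enclosing cube [-1, 1]^n
    v_ball = pi ** (n / 2) / gamma(n / 2 + 1)
    return v_ball / 2 ** n

ratios = [ball_to_cube_ratio(n) for n in (2, 4, 8, 16)]
# the ratio drops from ~0.785 in 2-D to ~3.6e-6 in 16 dimensions
```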
The best textbook explanation of it is in Richard W. Hamming's book "Coding and Information Theory" (3.6 Geometric Approach, p. 44).
The short article in Wikipedia will give you a brief summary of the same if you keep in mind that the volume of an n-dimensional unit cube is always $1^n = 1$.
I hope it will help. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.978051747564637,
"lm_q1q2_score": 0.8532008807791861,
"lm_q2_score": 0.8723473813156294,
"openwebmath_perplexity": 349.5693324568689,
"openwebmath_score": 0.7195698022842407,
"tags": null,
"url": "https://datascience.stackexchange.com/questions/27388/what-does-it-mean-when-we-say-most-of-the-points-in-a-hypercube-are-at-the-bound/27399"
} |
# Partial fractions
#### Petrus
##### Well-known member
Hello MHB,
I got stuck on this integrate
$$\displaystyle \int_0^{\infty}\frac{2x-4}{(x^2+1)(2x+1)}$$
and my progress
$$\displaystyle \int_0^{\infty} \frac{2x-4}{(x^2+1)(2x+1)} = \frac{ax+b}{x^2+1}+ \frac{c}{2x+1}$$
then I get these equations that I can't solve:
$$\displaystyle 2a+c=0$$ that is for $$\displaystyle x^2$$
$$\displaystyle 2b+a=2$$ that is for $$\displaystyle x$$
$$\displaystyle b+c=-4$$ that is for $$\displaystyle x^0$$
What have I done wrong?
Regards,
$$\displaystyle |\pi\rangle$$
#### MarkFL
Staff member
Re: partial fractions
The only thing I see wrong (besides omitting the differential from your original integral) is the line:
$$\displaystyle \int_0^{\infty} \frac{2x-4}{(x^2+1)(2x+1)} = \frac{ax+b}{x^2+1}+ \frac{c}{2x+1}$$
You should simply write:
$$\displaystyle \frac{2x-4}{(x^2+1)(2x+1)} = \frac{ax+b}{x^2+1}+ \frac{c}{2x+1}$$
You have correctly determined the resulting linear system of equations. Can you choose and use a method with which to solve it?
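For reference, the system is linear in $(a, b, c)$ and can be solved mechanically; a quick check added here (not part of the thread):

```python
import numpy as np

# 2a + c = 0;  a + 2b = 2;  b + c = -4
M = np.array([[2, 0, 1],
              [1, 2, 0],
              [0, 1, 1]], dtype=float)
rhs = np.array([0, 2, -4], dtype=float)
a, b, c = np.linalg.solve(M, rhs)  # a = 2, b = 0, c = -4
```

Substituting back confirms the decomposition: $\frac{2x-4}{(x^2+1)(2x+1)} = \frac{2x}{x^2+1} - \frac{4}{2x+1}$.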
#### Petrus
##### Well-known member
Re: partial fractions
The only thing I see wrong (besides omitting the differential from your original integral) is the line: | {
"domain": "mathhelpboards.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9871787868650146,
"lm_q1q2_score": 0.8448781024445275,
"lm_q2_score": 0.8558511524823263,
"openwebmath_perplexity": 418.1120737288165,
"openwebmath_score": 0.8400549292564392,
"tags": null,
"url": "https://mathhelpboards.com/threads/partial-fractions.4812/"
} |
vb.net
Title: Reading from a text file into a structure and posting to list boxes I am trying to stay ahead of my Year 12 Software class. Starting to work with records and arrays. I have answered the question, but the solution feels very clunky. I am hoping someone has suggestions/links for completing this task in a more efficient way.
The task: read in lines from a text file and into a structure, and then loop through that, populating four list boxes if an animal hasn't been vaccinated.
Imports System.IO
Public Class Form1
'Set up the variables - customer record, total pets not vaccinated, total records in the file, and a streamreader for the file.
Structure PetsRecord
Dim custName As String
Dim address As String
Dim petType As String
Dim vacced As String
End Structure
Dim totNotVac As Integer
Dim totalRecCount As Integer
Dim PetFile As IO.StreamReader
Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
End Sub
Private Sub btnLoad_Click(sender As Object, e As EventArgs) Handles btnLoad.Click
'set an array of records to store each record as it comes in. Limitation: you need to know how many records in the file. Set the array at 15 to allow for adding more in later.
Dim PetArray(15) As PetsRecord
'variables that let me read in a line and split it into sections.
Dim lineoftext As String
Dim i As Integer
Dim arytextfile() As String
'tell them what text file to read
PetFile = New IO.StreamReader("patients.txt")
totNotVac = 0 | {
"domain": "codereview.stackexchange",
"id": 40823,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "vb.net",
"url": null
} |
computational-chemistry, theoretical-chemistry
Title: How is a counterpoise corrected geometry optimization done? I understand the problem of basis set superposition error (BSSE) and I know how the counterpoise correction for single point energies is calculated.
Today I found out that many software packages allow for counterpoise correction during optimization calculations, but how does this actually work, especially for methods where analytical gradients are used for optimization?
During geometry optimization we calculate the first derivative of the energy to get energy gradients which we follow down to our minima. I understand that I could use counterpoise correction if I calculate those gradients numerically, which is quite easy to understand but very expensive to do, but it seems that counterpoise can also be used in combination with analytical gradients. How is the counterpoise correction implemented to get counterpoise-corrected gradients?
Background
For a system consisting of two molecules (monomers or fragments are also used) X and Y, the binding energy is
$$
\Delta E_{\text{bind}} = E^{\ce{XY}}(\ce{XY}) - [E^{\ce{X}}(\ce{X}) + E^{\ce{Y}}(\ce{Y})]
\label{eq:sherrill-1} \tag{Sherrill 1}
$$
where the letters in the parentheses refer to the atoms present in the calculation and the letters in the superscript refer to the (atomic orbital, AO) basis present in the calculation. The first term is the energy calculated for the combined X + Y complex (the dimer) with the dimer's basis functions, and the next two terms are energy calculations for each isolated monomer with only their respective basis functions. The remainder of this discussion will make more sense if the complex geometry is used for each monomer, rather than the isolated fragment geometry.
The counterpoise-corrected (CP-corrected) binding energy [1] to correct for basis set superposition error (BSSE) [2] is defined as
$$ | {
"domain": "chemistry.stackexchange",
"id": 10389,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "computational-chemistry, theoretical-chemistry",
"url": null
} |
python, python-2.x, csv
Title: Compare lines in 2 text files with different numbers of fields This is the (hopefully) final version of my script for my file comparison problem mentioned previously in two posts on Stack Overflow (here and here).
I have come up with the code shown below, which does what I need it to do, but I'm wondering if it can be written in a more pythonic (read elegant) way, especially the clean up of the lists.
#!/usr/bin/python
import sys
import csv
f1 = sys.argv[1]
f2 = sys.argv[2]
with open(f1) as i, open(f2) as j:
a = csv.reader(i)
b = csv.reader(j)
for linea in a:
lineb = next(b)
lista = ([x for x in linea if len(x) > 0])
listastr = map(str.strip, lista)
listastrne = filter(None, listastr)
listb = ([x for x in lineb if len(x) > 0])
listbstr = map(str.strip, listb)
listbstrne = filter(None, listbstr)
if len(listastrne) != len(listbstrne):
print('Line {}: different fields: A: {} B: {}'.format(a.line_num, listastrne, listbstrne))
elif sorted(map(str.lower, listastrne)) != sorted(map(str.lower, listbstrne)):
print('Line {}: {} does not match {}'.format(a.line_num, listastrne, listbstrne))
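One way the repeated clean-up could be condensed (a sketch of the idea, not the original poster's code): do the strip, drop-empties, and lower-case work in a single helper applied to each row.

```python
def normalize(row):
    # strip whitespace, drop empty fields, and lower-case in one pass
    return sorted(field.strip().lower() for field in row if field.strip())

# two rows match when their normalized fields compare equal
matched = normalize(['XXX', 'YYY ', 'ZZZ']) == normalize(['zzz ', 'yyy', 'xxx'])
```

The length comparison and the sorted case-insensitive comparison from the original loop can then both work off the same normalized lists.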
Example input files:
A.csv:
1,2,,
1,2,2,3,4
1,2,3,4
X
AAA,BBB,CCC
DDD,,EEE,
GGG,HHH,III
XXX,YYY ,ZZZ
k, | {
"domain": "codereview.stackexchange",
"id": 17941,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-2.x, csv",
"url": null
} |
• Shouldn't we prove that it is decreasing? If I understand it correctly, what we do is ignore $\arctan{}$ because it is a continuously increasing function, which means it behaves in the same way as if it wasn't there. Also what criteria are you using to prove it's not absolutely convergent? Also when you got this at the end: ${\frac{1}{n}}$, by $\sim_\infty$ do you mean "as the series goes to infinity"? – Mykybo Dec 20 '15 at 15:09
• No, I use the notion of equivalent sequences at infinity. This means the ratio of the sequences tends to $1$. A general result in asymptotic analysis is that two series with positive equivalent terms both converge or both diverge. Here the problem comes down to the harmonic series. – Bernard Dec 20 '15 at 15:19
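A quick numerical check (not part of the original exchange) illustrates both claims: the terms decrease, and they are equivalent to $1/n$ at infinity:

```python
from math import atan

# a_n = arctan(n / (1 + n^2)) for n = 1 .. 2000
a = [atan(n / (1 + n * n)) for n in range(1, 2001)]

assert all(x > y for x, y in zip(a, a[1:]))  # terms are strictly decreasing
assert abs(2000 * a[-1] - 1) < 1e-3          # n * a_n tends to 1
```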
To show that the terms are decreasing:
$\arctan\left(\frac{n}{1+n^2}\right)- \arctan\left(\frac{n+1}{1+(n+1)^2}\right) =\arctan\left(\dfrac{\frac{n}{1+n^2}-\frac{n+1}{1+(n+1)^2}}{1+\frac{n}{1+n^2}\frac{n+1}{1+(n+1)^2}}\right)$
and $\dfrac{n}{1+n^2}-\dfrac{n+1}{1+(n+1)^2} =\dfrac{n(1+(n+1)^2)-(n+1)(1+n^2)}{(1+n^2)(1+(n+1)^2)}$
and | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9811668690081642,
"lm_q1q2_score": 0.8471312323392499,
"lm_q2_score": 0.8633915994285382,
"openwebmath_perplexity": 288.36972450725,
"openwebmath_score": 0.7728836536407471,
"tags": null,
"url": "https://math.stackexchange.com/questions/1583266/i-would-like-to-prove-convergence-of-the-following-series-sum-n-1-infty"
} |
27&305893372041&12,5,5,7,5,8\\
28&801042337577&12,5,5,7,7,8\\
29&2097687354880&12,5,7,7,7,8\\
30&5493183075966&12,7,7,7,7,8\\
31&14383060457018&12,7,7,7,7,10\\
32&37658422859324&14,7,7,7,7,10\\
33&98594676094434&14,7,7,9,7,10\\
34&258133753770289&14,7,7,9,9,10\\
35&675827901330148&14,7,9,9,9,10\\
36&1769404155218244&14,9,9,9,9,10\\
37&4632452165313827&16,9,9,9,9,10\\
\end{array} | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9843363485313248,
"lm_q1q2_score": 0.8202716974398752,
"lm_q2_score": 0.8333246035907933,
"openwebmath_perplexity": 1230.0420260421427,
"openwebmath_score": 0.7285186648368835,
"tags": null,
"url": "https://math.stackexchange.com/questions/1341929/sum-numbers-game"
} |
orbital-motion, rocket-science, solar-system, space-mission
Technical Difficulties
Sending nuclear warheads in space is still extremely risky and therefore not feasible.
Sending stuff towards the sun is extremely energy costly. How do you launch tons of debris and nukes to the sun from Earth orbit?
We don't have the technology required to collect the space debris.
Why it's not even worth it
If we possessed the technology required to collect the debris, we could just recycle it in orbit or slow it down, making it sink
towards Earth and melt on re-entry.
This would be much cheaper and wouldn't require weapons of mass destruction in our orbit. Which by the way is forbidden by The 1967 Outer Space Treaty, Article IV. | {
"domain": "physics.stackexchange",
"id": 61254,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "orbital-motion, rocket-science, solar-system, space-mission",
"url": null
} |
gas, combustion
Title: Calculating dust concentration How can I determine the concentration of dust? Let's say it's for general forest residue chips (biofuel). Perhaps the question is vague, but I am not sure. I am still new to the site, so I am not sure if this is an appropriate tag; at least it's close to what I am working with. I have most of the data I need, but I couldn't find a formula to calculate the dust concentration of a certain biofuel from the data I have. To measure the dust concentration, run a known amount of dust-laden air over an adequate dust filter. Measure the filter's weight at the start, $w_0$, and at the end, $w_1$, of the run. The volume run over the filter during the run is $V_N$ (converted to Normal volume).
The dust concentration is:
$$c=\frac{w_1-w_0}{V_N}$$ | {
"domain": "physics.stackexchange",
"id": 68885,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gas, combustion",
"url": null
} |
fft, phase
Title: Phase response of FFT in practice As I understand it, if the impulse response is symmetric around sample zero, the phase response should be entirely zero.
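The claim is easy to confirm numerically: a pulse that is circularly symmetric about sample zero has a purely real DFT, i.e. phase 0 (or $\pi$ where the real part is negative). A quick NumPy check (my addition, separate from the C++ code below):

```python
import numpy as np

N = 10
x = np.zeros(N)
x[[N - 1, 0, 1]] = 1.0   # samples -1, 0, +1: symmetric about sample zero
X = np.fft.fft(x)

assert np.allclose(X.imag, 0.0)  # real spectrum, so phase is 0 or pi
assert np.allclose(X.real, 1 + 2 * np.cos(2 * np.pi * np.arange(N) / N))
```

By contrast, centering the pulse mid-buffer (as the code below does) produces a linear phase ramp, not zero phase.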
The code below just sets a rectangular window for the vector "in"
const int N = 10;
// in = 0 , 0 , 0 , 0 , 1 , 1 , 1 , 0 , 0 , 0
std::vector< std::complex<double> > in (N);
std::vector< std::complex<double> > out (N);
std::vector< std::complex<double> > polarOut (N);
auto middleElem = in.begin() + in.size()/2;
std::fill( middleElem - 1, middleElem + 2, 1);
fftw_plan my_plan = fftw_plan_dft_1d(N, reinterpret_cast<fftw_complex*>(&in[0]),
reinterpret_cast<fftw_complex*>(&out[0]), FFTW_FORWARD, FFTW_ESTIMATE);
fftw_execute(my_plan);
std::transform( out.begin(), out.end(), polarOut.begin(),
[]( auto& in ){
return std::complex<double>( std::abs(in), std::arg(in) );
} ); | {
"domain": "dsp.stackexchange",
"id": 2466,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fft, phase",
"url": null
} |
c++, performance, algorithm
pow_mod right-shifts n (the exponent) each iteration through the loop, so the number of iterations is proportional to the number of bits in the number (where yours in the question is proportional to the exponent itself). In other words, yours is linear in the exponent's magnitude, and this is roughly logarithmic in the exponent's magnitude.
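For reference, the square-and-multiply structure being described looks roughly like this (sketched in Python for brevity; the code under review is C++):

```python
def pow_mod(base, exponent, mod):
    # one loop iteration per bit of the exponent
    result = 1 % mod
    base %= mod
    while exponent > 0:
        if exponent & 1:              # multiply in the current power of base
            result = result * base % mod
        base = base * base % mod      # square for the next bit
        exponent >>= 1
    return result

assert pow_mod(3, 200, 1000) == pow(3, 200, 1000)
```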
Actual code review
Since this is CodeReview, not (for example) Stack Overflow, let's also take a look at reviewing your code.
Variable Definitions
You've defined multiple variables in a single definition:
int base,exponent,mod;
Many people find it more readable to define one variable per definition:
int base;
int exponent;
int mod;
... or at least use a separate line for each variable:
int base,
exponent,
mod;
Naming
A function's name should reflect what it really does. Using power for modular exponentiation borders on misleading. I'd rather the name included modular or at least mod.
Formatting
At least IMO, a little white space can help readability quite a bit. For one example, instead of:
int power(int base,int exponent,int mod)
...I'd rather see a space after each comma:
int power(int base, int exponent, int mod)
In addition, where there's flow control, the controlled statements should be indented, so this:
if(mod==1)return 0;
would come out like:
if (mod==1)
return 0; | {
"domain": "codereview.stackexchange",
"id": 24320,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, performance, algorithm",
"url": null
} |
square root of minus one. Of course, you could open another desmos graph and put E = mc^2, with c set equal to the speed of light, and m as your variable. You can then use this formula to make predictions, and also to find repeating patterns within your data. The Fourier series, Fourier transforms and Fourier's Law are named in his honour. Sine and cosine waves can make other functions! Here you can add up functions and see the resulting graph. Fourier Series Calculator. FourierSeries[expr, {t1, t2, …}, {n1, n2, …}] gives the multidimensional Fourier series. An online FFT calculator helps to calculate the transformation from the given original function to the Fourier series function. IEEE Press. Explain any discrepancies you find. Harmonic Analysis in Fourier Series Using Calculator FX-991ES PLUS. First term in a Fourier series. I have used the same code as before and just added a few more lines of code. The continuous-time Fourier series synthesis formula expresses a continuous-time, periodic function as the sum of continuous-time, discrete-frequency complex exponentials. 1 Periodic Pulse Signal. That I could take a periodic function, starting with the example of this square wave, and represent it as the sum of weighted sines and cosines. Disclaimer: none of these examples is mine. I have a colleague who describes himself as a recovering pure mathematician. $\int_{-\pi}^{\pi} |f(x)|^2\,dx < \infty.$ Fourier Analysis: a type of mathematical analysis that attempts to identify patterns or cycles in a time series data set which has already been normalized. It then repeats itself. In particular, we will look at the circuit shown in Figure 1. 
Use these observations to find its Fourier series. Finding Fourier coefficients for a square wave. then Bessel's inequality becomes an equality known as Parseval's theorem. 005 | {
"domain": "peterjackson.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9886682454669814,
"lm_q1q2_score": 0.8281282595168402,
"lm_q2_score": 0.8376199633332891,
"openwebmath_perplexity": 497.33143749852866,
"openwebmath_score": 0.9017105102539062,
"tags": null,
"url": "http://peterjackson.it/fourier-series-calculator.html"
} |
waves, polarization
Title: Intensity of a linearly polarized wave: with a polarizer vs. without a polarizer. We had this experiment in which we measured the intensity of a linearly polarized wave, with and without a polarizer.
I noticed that without the polarizer the intensity was slightly lower than with the polarizer. How is that possible?
The wavelength was about 2.8 [cm].
With polarizer I got 0.317 [Volt] .
Without polarizer I got 0.2835 [Volt]. One possibility is that your polarizer interacts with the other parts of your setup (for example, forms a resonant cavity with some other interfaces that enhances transmission). You can test this hypothesis by rotating your polarizer (is the intensity always brighter?). If you include a drawing of your setup, it would be easier to figure out the underlying reason. | {
"domain": "physics.stackexchange",
"id": 15912,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "waves, polarization",
"url": null
} |
thermodynamics, water
Use case/origin of this question
I was getting water from the break room and I have the option of either hot or cold water... So if I had to pick, and not mix, which I do, then ... that's how I started wondering about this. :) For the original case, I would expect the warmer one to reach equilibrium quicker.
This is for two (probably quite small in practice) reasons. First, the warmer one will evaporate more of its water than the cooler one. It may not be a significant amount of evaporation, but any evaporation would reduce the amount of mass to be cooled. Depending on how you set up the scenario, getting the water 10 degrees hotter would cause evaporation before you even start the timers.
The second reason is due to thermal convection. Heat rises, so when you have water warmer than the surroundings, this would cause more airflow upwards from the container, compared to cool water where the air would be more prone to stagnate on top, reducing the convection; thus reducing the heat transfer.
Increasing surface area of the water will only make these two effects more pronounced for the warm water, by increasing the surface area for evaporation and convection.
Increasing the ambient to 80 degrees shouldn't really change this.
The rate of change will not be completely constant. For starters, as both approach equilibrium, the heating and cooling will slow down. Also, the rate may be affected by stagnation in the water itself. The cooler bucket may decrease its heat transfer rate as a layer of warmed water develops on top. This would greatly reduce the heat transfer rate at the surface, only getting worse as that layer gets closer to equilibrium. This doesn't happen with the warmer bucket, because the cooler room cools the water surface, and that water sinks below the warmer water, leading to natural convection.
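The slowing approach to equilibrium can be sketched with Newton's law of cooling, a deliberate simplification that ignores the evaporation and convection effects above; the ambient temperature and rate constant here are made-up illustration values:

```python
import math

# Newton's law of cooling: dT/dt = -k (T - T_amb), whose solution is
# T(t) = T_amb + (T0 - T_amb) * exp(-k t). The gap to ambient shrinks
# by the same fraction in each equal time step, so the heating or
# cooling slows as equilibrium approaches.
T_amb = 20.0   # ambient temperature in C (assumed)
k = 0.05       # rate constant per minute (assumed)

def temp(T0, t):
    """Temperature after t minutes, starting from T0."""
    return T_amb + (T0 - T_amb) * math.exp(-k * t)

# Hot (80 C) and cold (5 C) water both close most of the gap early on:
for t in (0, 10, 30, 60):
    print(t, round(temp(80.0, t), 1), round(temp(5.0, t), 1))
```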
The only way I can think of that you might get different results is if you kept the water in a sealed container, and kept it closed with an insulated lid, while having the sides of it conductive. I'm not even sure if that would work though, it would just eliminate evaporation and reduce the effects of stagnant convection. | {
"domain": "physics.stackexchange",
"id": 56047,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, water",
"url": null
} |
-
I like the connection of "[. . .]zealots who say that people who use the convention they do not happen to prefer are in a state of sin." and "sinc (or sine cardinal)". Great answer. Although, if I may ask, (feel free to ignore my ignorance) what is the purpose of those two different conventions for the Fourier transform? You left me quite curious. – 000 Apr 19 '12 at 11:08
@Limitless There are more than two conventions for the Fourier transform and different areas of study tend to use different ones because the "normalizations" suit the conventions in that area. The second version in my answer has the nice property that it is a unitary transformation and thus Parseval's theorem is an affirmation of this property: $$\int_{-\infty}^{\infty}|X(f)|^2\ \mathrm df = \int_{-\infty}^{\infty}|x(t)|^2\ \mathrm dt$$ – Dilip Sarwate Apr 19 '12 at 11:22
Thanks for that explanation. I was always a bit intrigued by Fourier transforms in general. – 000 Apr 19 '12 at 11:25
Very nice answer. :) – night owl Apr 29 '12 at 11:37
$$\frac{\sin\left(\frac{200}{500}\pi x\right)}{200\pi x} = \frac{1}{500} \frac{\sin\left(\frac{200}{500}\pi x\right)}{\frac{200}{500}\pi x}=\begin{cases} \frac{1}{500}\operatorname{sinc}\left(\frac{200}{500}\pi x\right) & \rm nonnormalized~sinc~convention \\ \color{White}X \\ \frac{1}{500}\operatorname{sinc}\left(\frac{200}{500}x\right) & \rm ~~~normalized~sinc~convention. \end{cases}$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9664104962847373,
"lm_q1q2_score": 0.8094847303389392,
"lm_q2_score": 0.8376199694135333,
"openwebmath_perplexity": 441.0252054210758,
"openwebmath_score": 0.8117313981056213,
"tags": null,
"url": "http://math.stackexchange.com/questions/133843/definition-of-sinc-function"
} |
biochemistry, histone, rna-interference, histone-modifications
Castanotto, D., Tommasi, S., Li, M., Li, H., Yanow, S., Pfeifer, G.P. & Rossi, J.J. (2005) Short hairpin RNA-directed cytosine (CpG) methylation of the RASSF1A gene promoter in HeLa cells. Mol Ther. 12 (1), 179–183.
Morris, K.V., Chan, S.W.-L., Jacobsen, S.E. & Looney, D.J. (2004) Small Interfering RNA-Induced Transcriptional Gene Silencing in Human Cells. Science. [Online] 305 (5688), 1289 –1292.
Then there was another paper the next year by Kawasaki & Taira, which was not retracted:
Kawasaki, H. & Taira, K. (2005) siRNA Induced Transcriptional Gene Silencing in Mammalian Cells. Cell Cycle. [Online] 4 (3), 442–448.
And it seems the idea is now generally accepted and supported by quite a few other studies, mainly looking at germline cells. For example:
Carmell, M.A., Girard, A., Kant, H.J.G. van de, Bourc’his, D., Bestor, T.H., Rooij, D.G. de & Hannon, G.J. (2007) MIWI2 Is Essential for Spermatogenesis and Repression of Transposons in the Mouse Male Germline. Developmental Cell. [Online] 12 (4), 503–514.
There is no review that I could find which really ties it all together convincingly, but there are several reviews on RNAi directed histone and chromatin modification in general which mention that the fact is now established in mammals:
Joshua-Tor, L. & Hannon, G.J. (2010) Ancestral Roles of Small RNAs: An Ago-Centric Perspective. Cold Spring Harbor Perspectives in Biology.
Volpe, T. & Martienssen, R.A. (2011) RNA Interference and Heterochromatin Assembly. Cold Spring Harbor Perspectives in Biology. 3 (9). | {
"domain": "biology.stackexchange",
"id": 126,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "biochemistry, histone, rna-interference, histone-modifications",
"url": null
} |
interview-questions, go, chat
type User struct {
Name string
}
chat/state.go:
package chat
type State interface {
RecentMessages() []Message
AddMessage(Message)
ActiveUsers() []User
}
chat/in_memory_state.go:
package chat
type InMemoryState struct {
messages []Message
users UserSet
}
// You may want to make this configurable by adding it to InMemoryState
const NumRecentMessages int = 100
func NewInMemoryState() *InMemoryState {
return &InMemoryState{
messages: nil,
users: NewUserSet(),
}
}
func (s *InMemoryState) RecentMessages() []Message {
return s.messages
}
func (s *InMemoryState) AddMessage(message Message) {
recentMessages := s.messages[max(0, len(s.messages) - NumRecentMessages + 1):] // keep the last NumRecentMessages-1 messages
s.messages = append(recentMessages, message)
s.users.Append(message.User)
}
func max(x, y int) int {
if x > y {
return x
}
return y
}
func (s *InMemoryState) ActiveUsers() []User {
return s.users.Slice()
}
That is the extent of your business logic. Really. That's it. Simple and concise. Really easy to test. UserSet should just be a wrapper struct containing a map[User]bool and []User. The former acts as a set and the later is the list of all keys in the map (provided so we have quick access to a slice of all Users without needing to enumerate through the keys of the map).
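A minimal sketch of that wrapper (one plausible implementation of the description above, not necessarily the exact code the reviewer had in mind; User is redefined so the snippet is self-contained):

```go
package main

import "fmt"

// User mirrors the struct from the chat package.
type User struct {
	Name string
}

// UserSet pairs a map (O(1) "seen before?" checks) with a slice kept in
// insertion order, so callers can get a []User without walking map keys.
type UserSet struct {
	seen  map[User]bool
	users []User
}

func NewUserSet() UserSet {
	return UserSet{seen: make(map[User]bool)}
}

// Append records the user only on first sight, keeping users duplicate-free.
func (s *UserSet) Append(u User) {
	if !s.seen[u] {
		s.seen[u] = true
		s.users = append(s.users, u)
	}
}

// Slice returns the users in the order they were first seen.
func (s *UserSet) Slice() []User {
	return s.users
}

func main() {
	set := NewUserSet()
	set.Append(User{Name: "alice"})
	set.Append(User{Name: "bob"})
	set.Append(User{Name: "alice"}) // duplicate, ignored
	fmt.Println(len(set.Slice()))   // prints 2
}
```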
With a design like this, your main should be a lot simpler:
package main
import (
"flag"
"github.com/you/chat" // admittedly a pain point of go
)
var address = flag.String("address", ":8081", "the address for the chat HTTP server to listen on") | {
"domain": "codereview.stackexchange",
"id": 31644,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "interview-questions, go, chat",
"url": null
} |
##### Tools
This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.
# Diagonalization of matrices
## 1 Example
Examples: $A = \left[ \begin{array}{} 3 & 0 \\ 0 & 2 \end{array} \right]$.
This is diagonal already. In fact, $3$ and $2$ are the eigenvalues.
That's the final output of diagonalization: two eigenvalues are lined up on the diagonal.
So, if $A$ has complex eigenvalues, it's not diagonalizable over the reals: $\left[ \begin{array}{} 0 & 1 \\ -1 & 0 \end{array} \right]$ has $\lambda = \pm i$.
Example: $A = \left[ \begin{array}{} 0 & 1 \\ 1 & 0 \end{array} \right]$. Then the characteristic polynomial is
$$\chi_A(\lambda) = \det \left[ \begin{array}{} \lambda & -1 \\ -1 & \lambda \end{array} \right] = \lambda^2-1.$$
Its roots are $\pm 1$. So, $A \sim D = \left[ \begin{array}{} 1 & 0 \\ 0 & -1 \end{array} \right]$?
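The claim is easy to check numerically before doing it by hand (a quick sketch with numpy, outside the hand calculation below):

```python
import numpy as np

# For A = [[0, 1], [1, 0]] the eigenvalues are +1 and -1, and the matrix
# P whose columns are eigenvectors satisfies P^{-1} A P = D (diagonal).
A = np.array([[0.0, 1.0], [1.0, 0.0]])
eigenvalues, P = np.linalg.eig(A)
D = np.linalg.inv(P) @ A @ P
print(np.round(eigenvalues, 10))  # the roots of lambda^2 - 1
print(np.round(D, 10))            # diagonal, with those roots on the diagonal
```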
By hand: find a basis so that $D = P^{-1}AP$ is diagonal.
How?
What does $A$ do? Consider:
$$Ae_1 = \left[ \begin{array}{} 0 & 1 \\ 1 & 0 \end{array} \right] \left[ \begin{array}{} 1 \\ 0 \end{array} \right] = \left[ \begin{array}{} 0 \\ 1 \end{array} \right] = e_2$$ | {
"domain": "inperc.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9914225148397243,
"lm_q1q2_score": 0.8058949672232489,
"lm_q2_score": 0.8128673246376009,
"openwebmath_perplexity": 568.3637317688763,
"openwebmath_score": 0.973077118396759,
"tags": null,
"url": "http://inperc.com/wiki/index.php?title=Diagonalization_of_matrices"
} |
I must confess that I am very new to module theory so please be patient with me. I don't even see how it would be possible to have $\mathbb{Z}[i]/(1+2i) \oplus\mathbb{Z}[i]/(6-i)\cong\mathbb{Z}[i]/(8+11i)$ since $\mathbb{Z}[i]/(1+2i)$ and $\mathbb{Z}[i]/(6-i)$ aren't even submodules of the same set. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9840936087546923,
"lm_q1q2_score": 0.8635193015445991,
"lm_q2_score": 0.8774767906859264,
"openwebmath_perplexity": 115.64984973242012,
"openwebmath_score": 0.8488600850105286,
"tags": null,
"url": "https://math.stackexchange.com/questions/2713690/showing-mathbbzi-12i-oplus-mathbbzi-6-i-cong-mathbbzi-811i/2715412"
} |
atkins 6 tournament of towns questions and solutions 1980-1984 pj taylor 7 tournament of towns questions and solutions 1989-1993 pj taylor. Then, have partners confer and confirm the solution. Matrices: matrices with examples and questions with solutions. Now consider $\begin{bmatrix} 4 & 2 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$. We can find two linearly independent eigenvectors $\begin{bmatrix} 3 \\ 0 \\ 1 \end{bmatrix}$, $\begin{bmatrix} 1 \\ 3 \\ 0 \end{bmatrix}$ corresponding to the eigenvalue 3, and one. Exercise 10.5: Fourier matrix. Math 240: Some More Challenging Linear Algebra Problems. Although problems are categorized by topics, this should not be taken very seriously since many problems fit equally well in several different topics. By the first step of elimination this becomes $\begin{bmatrix} 4 & 2 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \end{bmatrix}$. Krechmar's "A Problem Book in Algebra" (high school level) has all the solutions. Linear algebra I, TCD 2018/19. Many of the theorems of linear algebra obtained mainly during the past 30 years are usually ignored in text-books but are quite accessible for students majoring or minoring in mathematics. The solutions to (1) are given the following names: the λ's that satisfy (1) are called eigenvalues of A and the corresponding nonzero x's that also satisfy (1) are called eigenvectors of A. Algebra word problems. If you found some mistakes or have questions/comments, please feel free to contact me by [email protected] It seems to me this is a reasonable specialization for a first course in linear algebra. Preface: here are my online notes for my Linear Algebra course that I teach here at Lamar University. In its second edition, this textbook offers a fresh approach to matrix and linear algebra. 
Please be aware, however, that the handbook might contain, and almost certainly contains, typos as well as incorrect or inaccurate solutions. Orthogonal Matrices and Gram-Schmidt. 1 Linear Algebra Problems Solutions 1. Let z = 5+i9. What is Linear Algebra? | {
"domain": "quadri-canvas.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9805806484125338,
"lm_q1q2_score": 0.8084514167018346,
"lm_q2_score": 0.824461932846258,
"openwebmath_perplexity": 964.916444493784,
"openwebmath_score": 0.5246977210044861,
"tags": null,
"url": "http://quadri-canvas.it/tdcr/linear-algebra-problems-and-solutions-pdf.html"
} |
c++, linked-list
/**
* @brief Returns the LinkedList to its initial state.
* @param void.
* @return void.
*
* The clear function deletes all nodes in the LinkedList and returns
* all variables to their initial state.
*/
template <class T>
void LinkedList<T>::clear(void) {
moveToHead();
Node<T>* tempPtr;
for(std::size_t cnt=0; cnt<sizeL; cnt++) {
tempPtr = currPtr->next(); //hold next of currPtr
freeNode(currPtr); //delete currPtr
currPtr = tempPtr; //arrange currPtr
}
init();
}
/**
* @brief Releases allocated dynamic memory of a node.
* @param node The pointer to the target node.
* @return void.
*/
template <class T>
void LinkedList<T>::freeNode(Node<T>* node) {
delete node;
}
/**
* @brief Returns the LinkedList size.
* @param void.
* @return size
*/
template <class T>
std::size_t LinkedList<T>::size(void) const {
return sizeL;
}
/**
* @brief Returns true if the LinkedList empty,
* @brief false otherwise.
* @param void.
* @return Boolean value true or false.
*/
template <class T>
bool LinkedList<T>::empty(void) const {
return (sizeL == 0);
}
/**
* @brief Returns the data stored in the specified node.
* @param node The target node.
* @return The data stored in the node of type T.
*/
template <class T>
T LinkedList<T>::nodeData(Node<T>* node) const {
return node->getData();
} | {
"domain": "codereview.stackexchange",
"id": 35685,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, linked-list",
"url": null
} |
per unit length) is known. Homework Statement: Let R be the unit square, R = [0,1] × [0,1]. Find a sequence of partitions of R such that the limit as $k \to \infty$ of the area of the largest sub-rectangle of the partition (where $k$ is the number of partitions) goes to. Students will receive the complete AP Calculus BC course as well as several topics that are not covered in the course description. Calculus - Everything you need to know about calculus is on this page. The following are links to civil engineering Mathematics, Calculus, Geometry, Trigonometry equations. Several problems and questions with solutions and detailed explanations are included. An object's position is described by the following polynomial for 0 to 10 s. We develop a calculus for nonlocal operators that mimics Gauss's theorem and Green's identities of the classical vector calculus. Learning vector algebra represents an important step in students' ability to solve problems. As the vector calculus mark is less than 40 your final grade is UF and you will have to retake the course, or sit the additional assessment. Vector calculus topics include vector fields, flow lines, curvature, torsion, gradient, divergence, curl and Laplacian. tensor calculus, which provides a more natural and thorough formalism. The classical theorems of vector calculus are amply illustrated with figures, worked examples, and physical applications. Vector Algebra and Calculus 1. Two semesters of single variable calculus (differentiation and integration) are a prerequisite. Goal: To achieve a thorough understanding of vector calculus, including both problem solving and theoretical aspects. A Vector is something that has two and only two defining characteristics. For example, if a vector-valued function represents the velocity of an object at time t , then its antiderivative represents position. 
Vector Point Function: Let be a Domain of a function, then if for each variable Unique association of a Vector , then is called as a Vector. Vector calculus identities: In this chapter, As an example of using the above notation, consider the problem of expanding the triple cross product The following identity is a very important property regarding vector fields which are the curl of another vector field. Here are a set of practice problems for the Calculus III notes. | {
"domain": "cralstradadeiparchi.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9802808753491773,
"lm_q1q2_score": 0.8059762805706365,
"lm_q2_score": 0.8221891305219504,
"openwebmath_perplexity": 797.4507241697992,
"openwebmath_score": 0.6484940052032471,
"tags": null,
"url": "http://irdu.cralstradadeiparchi.it/vector-calculus-problems.html"
} |
c++, algorithm, stack
floatstack.print(some_file); //data to file
break;
case 6:
{
Mystack<float> s5 = floatstack; // copy constructor called
s5.print(std::cout);
break;
}
case 7:
floattemp = floatstack; // assignment operator overload
floattemp.print(std::cout);
break;
case 8:
std::cout << floatstack; //operator << overloading
break;
case 9:
exit(0);
default:
std::cout << "Enter a valid input" << std::endl;
break;
}
}
}
else if (type_choice == 3)
{
int ch = 1;
while (ch > 0)
{
std::cout << "Enter the choice" << std::endl;
std::cin >> ch;
switch (ch)
{
case 1:
std::cout << "Number to be pushed" << std::endl;
std::cin >> char_elem;
charstack.push(char_elem);
break;
case 2:
try
{
std::cout << "Top Element" << charstack.topElement();
}
catch (std::out_of_range &oor)
{
std::cerr << "Out of Range error:" << oor.what() << std::endl;
}
break;
case 3:
std::cout << "Check Empty" << std::endl;
if (charstack.isEmpty())
std::cout << "Stack is Empty";
else
std::cout << "Stack is not Empty";
break;
case 4:
std::cout << "Pop the element" << std::endl;
try
{
charstack.pop();
}
catch (const std::out_of_range &oor)
{
std::cerr << "Out of Range error: " << oor.what() << '\n';
} | {
"domain": "codereview.stackexchange",
"id": 11588,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, algorithm, stack",
"url": null
} |
c, assembly, ffi
instruction[index++] = 0x1;
instruction[index++] = encodeModRM(REGISTER_ADDRESSING, srcRegisterCode, dstRegisterCode);
} else if (is64BitRegister(dst) && is64BitRegister(src)) {
unsigned int rexPrefixIndex = index++;
instruction[rexPrefixIndex] = REX_W;
instruction[rexPrefixIndex] |= (dst >= REGISTER_R8 ? REX_B : 0);
instruction[rexPrefixIndex] |= (src >= REGISTER_R8 ? REX_R : 0);
instruction[index++] = 0x1;
instruction[index++] = encodeModRM(REGISTER_ADDRESSING, srcRegisterCode, dstRegisterCode);
}
}
printMemory(instruction, index);
}
All the sources are here, it's mostly defining constants and forward declarations.
types.h: https://hastebin.com/pipuqezoxe.cpp
Encoding.h: https://hastebin.com/esiciniwex.cpp
Instruction.h: https://hastebin.com/refokaluka.cpp
Instruction.c (where all the code emitting happens): https://hastebin.com/fuboqijedi.cpp // forward declaration for emitAdd
#include "Instruction.h"
There's no declaration here. Do you mean it's in Instruction.h? I think you don't need this comment. When you include a header with the same name as the source file, everyone reading your code will assume it contains declarations of exported functions. If it doesn't, that might be worthy of comment.
unsigned int registerToIndex(Register reg) {
switch (reg) {
case REGISTER_AL:
case REGISTER_AX:
case REGISTER_EAX:
case REGISTER_RAX: return 0;
case REGISTER_CL:
case REGISTER_CX:
case REGISTER_ECX:
case REGISTER_RCX: return 1;
[...] | {
"domain": "codereview.stackexchange",
"id": 42025,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, assembly, ffi",
"url": null
} |
def next_permutation(L):
    """Rearrange L in place into its next lexicographic permutation.
    Returns False (leaving L unchanged) if L is already the last one."""
    n = len(L)
    # Step 1: find rightmost position i such that L[i] < L[i+1]
    i = n - 2
    while i >= 0 and L[i] >= L[i+1]:
        i -= 1
    if i == -1:
        return False
    #------------------------------------------------------------
    # Step 2: find rightmost position j to the right of i such that L[j] > L[i]
    j = i + 1
    while j < n and L[j] > L[i]:
        j += 1
    j -= 1
    #------------------------------------------------------------
    # Step 3: swap L[i] and L[j]
    L[i], L[j] = L[j], L[i]
    #------------------------------------------------------------
    # Step 4: reverse everything to the right of i
    left = i + 1
    right = n - 1
    while left < right:
        L[left], L[right] = L[right], L[left]
        left += 1
        right -= 1
    return True | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9899864290813876,
"lm_q1q2_score": 0.8228203442161302,
"lm_q2_score": 0.831143054132195,
"openwebmath_perplexity": 1816.820134963595,
"openwebmath_score": 0.45735999941825867,
"tags": null,
"url": "https://math.stackexchange.com/questions/2478014/probability-of-bride-entering-the-church"
} |
newtonian-mechanics, forces, rotational-dynamics, torque, free-body-diagram
I understand that if the force applied is greater than $\dfrac{mgr}{h}$, i.e., if the force creates a greater torque than the weight force, the cylinder will tip.
If the force is lesser than that, but still greater than the maximum achievable friction $\mu mg$, it will slide.
My question is, what will happen if the force is greater than both $\dfrac{mgr}{h}$ and $\mu mg$?
Will it keep moving while tipping forward? Will it move some distance, and then topple down? Will it topple down first and then move forward?
How can we find the distance it would travel forward before toppling down entirely? If $F\gt f$ then there is a resultant force $F-f$ acting on the cylinder so its COM has a linear acceleration $a=(F-f)/m$ to the right. There is also a resultant couple, which is initially $\tau=hf-mgr$, acting clockwise about B. Hence the cylinder also has an initial angular acceleration of $\alpha=\tau/I$ where $I$ is the moment of inertia of the cylinder about B (not about its centre of mass E).
So yes the cylinder topples while also accelerating to the right. How long it takes the cylinder to topple, and how far it moves before this happens, requires a very tricky calculation.
If $\alpha$ were constant then the time to topple onto its side through angle $\theta=\frac12 \pi$ radians would be $t=\sqrt{2\theta/\alpha}=\sqrt{\pi/\alpha}$, from $\theta=\frac12 \alpha t^2$. And the distance moved by the centre of mass E would be $s=\frac12 a t^2$.
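Under that constant-acceleration approximation the numbers are easy to evaluate; here is a quick sketch with made-up values for $a$ and $\alpha$ (they depend on $F$, $f$, $m$ and $I$, which are not specified here):

```python
import math

# Constant-acceleration estimate of toppling: theta = (1/2) alpha t^2,
# so the time to rotate through theta = pi/2 is t = sqrt(2 theta / alpha),
# and the COM meanwhile moves s = (1/2) a t^2. The values below are
# illustrative, not derived from a specific F, f, m, I.
a = 2.0       # linear acceleration (F - f)/m, in m/s^2 (assumed)
alpha = 3.0   # angular acceleration tau/I, in rad/s^2 (assumed)

theta = math.pi / 2
t = math.sqrt(2 * theta / alpha)   # time to topple through pi/2
s = 0.5 * a * t**2                 # equals a * theta / alpha
print(round(t, 3), round(s, 3))
```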
However there are complications.
If $F$ is applied horizontally at a fixed point P then as the cylinder topples the vertical distance between P and B increases, while the horizontal distance between E and B decreases. So the torque $\tau$ and acceleration $\alpha$ do not remain constant; initially both increase as the angle which QE makes with the vertical increases. The increase in torque $\tau$ reduces the time $t$ which it takes for the cylinder to topple. | {
"domain": "physics.stackexchange",
"id": 63493,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, forces, rotational-dynamics, torque, free-body-diagram",
"url": null
} |
c, array, linux
What happens when you copy your array? What do you want to happen? How should a copy be made?
How should copies behave when you modify an array that has multiple references to it?
Will this be used in a multi-threaded environment?
The answers affect how the interface should be designed. Personally, it seems to me that the reference counting is on a very low level (the caller has to take care of it himself), meaning it would be easy to make mistakes. Make your interface easy to use correctly, and hard to use incorrectly. Reference counting might not even be necessary here, depending on what the array is meant to be used for. | {
"domain": "codereview.stackexchange",
"id": 4157,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, array, linux",
"url": null
} |
quantum-mechanics, optics, photons, atomic-physics, quantum-optics
(b) $n=m$:
\begin{align}\frac{\left(a_3^\dagger+a_4^\dagger\right)^{n}\left(a_3^\dagger-a_4^\dagger\right)^{n}}{\sqrt{n!n!2^{2n}}}|0_3,0_4\rangle&=\frac{\left(a_3^{\dagger 2}-a_4^{\dagger 2}\right)^{n}}{n!\,2^{n}}|0_3,0_4\rangle\\
&=\sum_{k=0}^{n}\binom{n}{k}\frac{(-1)^{n-k}\sqrt{(2k)!\,(2(n-k))!}}{n!\,2^{n}}\,|2k_3,2(n-k)_4\rangle,\end{align}
where the extreme terms $k=n$ and $k=0$ carry the coefficient $\sqrt{(2n)!}/(n!\,2^{n})=\sqrt{\Gamma(n+1/2)/\left(\sqrt{\pi}\,\Gamma(n+1)\right)}$, an expression using the Gamma function $\Gamma$ that I snuck in for fun.
(c) $n>m$: same as (a) but with the role of $3$ and $4$ reversed.
EDIT 2) Can be made better if we put the operators in the opposite order! | {
"domain": "physics.stackexchange",
"id": 95962,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, optics, photons, atomic-physics, quantum-optics",
"url": null
} |
bell-experiment, non-locality
\end{aligned}
$$
So in order to calculate the probability they win we must compute the conditional probability distribution $p(a,b|x,y)$.
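This computation can also be checked numerically. A sketch for the strategy discussed next, using the standard projector angles for this game ($0$ and $\pi/4$ for Alice, $\pm\pi/8$ for Bob, which reproduce the matrices given below):

```python
import numpy as np

def proj(theta):
    """Rank-1 projector onto cos(theta)|0> + sin(theta)|1>."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

# Maximally entangled state (|00> + |11>)/sqrt(2).
psi = (np.kron([1.0, 0.0], [1.0, 0.0]) + np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)

A0 = {0: proj(0.0), 1: proj(np.pi / 4)}         # A_{0|x}
B0 = {0: proj(np.pi / 8), 1: proj(-np.pi / 8)}  # B_{0|y}

def p(a, b, x, y):
    """p(a,b|x,y) = <psi| A_{a|x} (tensor) B_{b|y} |psi>."""
    Aop = A0[x] if a == 0 else np.eye(2) - A0[x]
    Bop = B0[y] if b == 0 else np.eye(2) - B0[y]
    return float(psi @ np.kron(Aop, Bop) @ psi)

# CHSH winning condition: a XOR b == x AND y, with uniform inputs.
win = sum(p(a, b, x, y)
          for x in (0, 1) for y in (0, 1)
          for a in (0, 1) for b in (0, 1)
          if a ^ b == (x & y)) / 4
print(win)  # cos^2(pi/8), approximately 0.8536
```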
An optimal quantum strategy
Now in the Lecture you watched it looks like they presented a quantum system that can achieve the best winning probability $\cos^2(\pi/8)$. We use the states and measurements described in the video in order to compute the conditional probability distribution $p(a,b|x,y)$ for that particular quantum system. The state used is the maximally entangled state $|\psi\rangle = \tfrac{1}{\sqrt{2}}|00\rangle + \tfrac{1}{\sqrt{2}} |11\rangle$. The measurements may be represented by a collection of matrices $A_{a|x}$ and $B_{b|y}$. As measurement operators sum to the identity we have that for all $x,y \in \{0,1\}$, $A_{1|x} = \mathbb{1} - A_{0|x}$ and $B_{1|y} = \mathbb{1} - B_{0|y}$. Thus we only need to specify the operators for outcome $0$. For the strategy discussed in the lecture the operators are (in the computational basis) represented by the matrices
$$
A_{0|0} = \begin{pmatrix} 1&0 \\
0&0
\end{pmatrix}, \quad
A_{0|1} = \begin{pmatrix} 1/2&1/2 \\
1/2&1/2
\end{pmatrix}, \\
B_{0|0} = \begin{pmatrix} \cos(\pi/8)^2 &\sin(\pi/4)/2 \\
\sin(\pi/4)/2& \sin(\pi/8)^2
\end{pmatrix}, \quad
B_{0|1} = \begin{pmatrix} \cos(\pi/8)^2 & -\sin(\pi/4)/2 \\ | {
"domain": "quantumcomputing.stackexchange",
"id": 1942,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "bell-experiment, non-locality",
"url": null
} |
typescript, angular-2+
Title: Angular2 Toggling showing and hiding between two elements I have a tabs nav element that has two tabs, it needs to show a component based on what tab was clicked and hide the other component. If the clicked tab is already "active", the component needs to remain showing.
I have this working, but it seems very inefficient to me. Can anyone show me a better way to do this?
Here's how I have it set up now. For the sake of not posting every file in the question, know that the project is set up correctly.
@Component({
selector: 'my-app',
template: `
<div>
<button type="button" (click)="changeShowStatus(oneShowing=true,twoShowing=false)">1</button>
<button type="button" (click)="changeShowStatus(twoShowing=true,oneShowing=false)">2</button>
<div class="box1" *ngIf="oneShowing">
<p>some content</p>
</div>
<div class="box2" *ngIf="twoShowing">
<p>some content2</p>
</div>
</div>
`,
})
export class App {
name:string;
oneShowing:boolean;
twoShowing:boolean
constructor() {
this.oneShowing = true;
this.twoShowing = false
}
}
Plunker Separate content and logic
It is a good practice (recommended by Angular style guidelines) to separate component template and component code.
Move template to an own my-app.component.html file and refer to it via templateUrl property instead of template.
Avoid logic in your template
The oneShowing=true,twoShowing=false and twoShowing=true,oneShowing=false parts are technically logic.
It does not need to be in the template, and it should not be there.
my-app.component.html
<button type="button" (click)="displayBoxWithNumber(1)">1</button>
<button type="button" (click)="displayBoxWithNumber(2)">2</button> | {
"domain": "codereview.stackexchange",
"id": 27880,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "typescript, angular-2+",
"url": null
} |
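As a sketch of what displayBoxWithNumber could look like (hypothetical, stripped of the Angular decorator for brevity): a single field records which box is visible, so paired booleans can never get out of sync.

```typescript
// Hypothetical sketch (outside Angular): one field tracks the visible box.
class TabToggle {
  visibleBox: number = 1; // box 1 is shown by default

  displayBoxWithNumber(n: number): void {
    this.visibleBox = n;  // clicking the already-active tab keeps it showing
  }

  isShowing(n: number): boolean {
    return this.visibleBox === n;
  }
}
```

In the template, the conditions then become *ngIf="isShowing(1)" and *ngIf="isShowing(2)".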
So what happens when we evaluate a logarithmic integral involving dimensions on both limits (dimension $$D$$, say)? Let's take our integral to be $$\int_{aD}^{bD} du/u$$ with $$a,b$$ real numbers, and the dimension made explicit. Using straightforward substitution $$u\to vD$$ we can move the dimension from the limits into the integrand and see that $$\int_{a D}^{b D} \frac{d u}{u} = \int_{a}^{b} \frac{D d v}{v D} = \int_a^b \frac{dv}{v},$$ i.e. the dimension disappears from the integral. We can continue evaluating $$\int_a^b \frac{dv}{v} = \ln\frac{b}{a} = \ln\frac{bD}{aD},$$ (reinserting the dimension trivially by multiplying by $$1=D/D$$) and then use the last equality to define $$\ln bD - \ln aD =: \ln\frac{b}{a},$$ and then we recover (formally) the usual antiderivative relation, independent of whether $$u$$ has a dimension or not $$\int \frac{du}{u} = \ln u + C.$$ Once we have done this, doing the calculation using antiderivatives without first taking care of the dimensions is justified.
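This scale-invariance is easy to check numerically; a sketch with a simple midpoint rule:

```python
# Sketch: the value of the integral of du/u from a*D to b*D does not depend on D.
import math

def integral_du_over_u(lo, hi, n=100000):
    # composite midpoint rule for the integral of 1/u over [lo, hi]
    h = (hi - lo) / n
    return sum(h / (lo + (i + 0.5) * h) for i in range(n))

a, b = 2.0, 5.0
for D in (1.0, 3.7, 1e6):          # the "dimension" carried by the limits
    val = integral_du_over_u(a * D, b * D)
    assert abs(val - math.log(b / a)) < 1e-6   # same answer for every scale D
print(round(math.log(b / a), 6))   # 0.916291, i.e. ln(5/2)
```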
I will say that I found this one of the most fascinating properties of logarithms when I first stumbled over this in my undergraduate studies. You cannot do something similar for the sine function, say. The logarithm in a way is the ideal power, and its habit of eating up dimensions allows it to appear in places where no other functions can appear out of symmetry reasons. This is something theoretical particle physicists who are evaluating scattering amplitudes using ever-more complex integrals know all too well. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9481545333502202,
"lm_q1q2_score": 0.8132967155858767,
"lm_q2_score": 0.8577681031721324,
"openwebmath_perplexity": 280.0642468307419,
"openwebmath_score": 0.9062263369560242,
"tags": null,
"url": "https://physics.stackexchange.com/questions/554241/does-the-logarithm-of-a-non-dimensionless-quantity-make-any-sense/567521#567521"
} |
sequence-alignment, sam, nanopore, minion
Title: Total reads aligning to each reference within a bam file I have two PCR amplicons that have been multiplexed and sequenced using the nanopore minion.
I have aligned the fastq reads using minimap2 with a reference file containing both amplicon sequences and generated a bam file that I have viewed using IGV.
I am looking for a way to generate some simple summary statistics.
In particular, is there a way to extract the total number of fastq reads aligning to each amplicon reference from the bam file? The quick way to get the number of alignments on each reference is
samtools idxstats my_bam.bam
Number of reads on each reference is column 3. Although, as has been pointed out, this will give you the total number of alignments per reference, not the total number of reads (each read might give rise to more than one alignment). That said, I do tend to use this, as generally I'm after a rough approximation rather than an accurate number.
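As an aside, the 2304 used with samtools view -F below is just the sum of the SAM flag bits for secondary and supplementary alignments, so filtering on it keeps only primary alignments:

```shell
# 2304 masks out secondary (0x100 = 256) and supplementary (0x800 = 2048) alignments
secondary=256
supplementary=2048
echo $((secondary + supplementary))   # prints 2304
```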
In theory, only one alignment for each read should be marked as primary, so the following should give you what you need quickly and at low memory usage:
samtools view -bF 2304 my_bam.bam > primary_only.bam
samtools index primary_only.bam
samtools idxstats primary_only.bam | {
"domain": "bioinformatics.stackexchange",
"id": 759,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "sequence-alignment, sam, nanopore, minion",
"url": null
} |
thermodynamics, energy, photons, astronomy, sun
Title: Do photons lose energy while travelling through space? Or why are planets closer to the sun warmer? My train of thought was the following:
The Earth orbiting the Sun is at times 5 million kilometers closer to it than others, but this is almost irrelevant to the seasons.
Instead, the temperature difference between seasons is due to the attack angle of the rays, so basically the amount of atmosphere they have to pass through.
Actually, it makes sense: heat comes from the photons that collide with the surface of the earth (and a bit with the atmosphere) and get reflected, and there's nothing between the earth and the sun that would make a photon lose energy over a 5 million km journey in vacuum. Or is there? (Note I'm not wondering about the possible loss of energy related to the redshift of the expanding universe.)
Which made me wonder…
So why then are the planets closer to the sun warmer? It seems silly, the closer you are to a heat source, the warmer it feels, but that's because of the dispersion of the heat in the medium, right? If there's no medium, what dissipates the energy? The reason being closer to a heat source makes you warmer is the inverse square law. Think of it this way: If you have a $1~\mathrm{m}^2$ piece of material facing the Sun and located at Mercury's orbit, it will be quite hot. What does the shadow of this square look like at Earth's orbit (about $2.5$ times further away than Mercury)? Well, it will be $2.5$ times bigger in both directions, covering about $6~\mathrm{m}^2$. So the same amount of power can be delivered either to $1~\mathrm{m}^2$ on Mercury or to $6~\mathrm{m}^2$ on Earth. Every square meter of Earth gets about $6$ times less Solar power than every square meter on Mercury. The light is not losing energy to the surrounding medium, even if the medium exists. | {
"domain": "physics.stackexchange",
"id": 5888,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, energy, photons, astronomy, sun",
"url": null
} |
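The inverse-square scaling in the answer above is a one-liner to verify, using the approximate 2.5x orbit ratio quoted in the text:

```python
# Sketch of the inverse-square argument: Earth at ~2.5x Mercury's orbital radius
# receives (2.5)^2 = 6.25x less solar power per square meter.
r_ratio = 2.5                 # Earth's orbit / Mercury's orbit (approximation from the text)
flux_ratio = r_ratio ** 2     # the same power is spread over an area (2.5)^2 larger
print(flux_ratio)             # 6.25, i.e. "about 6 times less"
```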
quantum-computing, quantum-information
I'm wondering what it is I missed here? I figured out what I missed. $\Delta\text{Fidelity}=\frac{1}{2}\left(r_x-r_z\right)$ is indeed correct. However, what I missed was the fact that $r_z$ could be less than 0, since the only requirement is that $\left|\vec{r}\right|\le1$. Therefore, to achieve the maximum value of $\Delta\text{Fidelity}$, we set $r_x=\frac{1}{\sqrt{2}}$ and $r_z=-\frac{1}{\sqrt{2}}$. This means the actual correct answer is $|\psi\rangle=\frac{|+\rangle+|1\rangle}{2\cos\left(\frac{\pi}{8}\right)}$, because a density matrix with $r_z=-1$ and $r_x=r_y=0$ is the density matrix of the $|1\rangle$ state, and the state $|\psi\rangle$ is a midway state between $|+\rangle$ and $|1\rangle$.
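A quick numerical check (a sketch, not part of the original post) that this state really has $r_x = 1/\sqrt{2}$ and $r_z = -1/\sqrt{2}$:

```python
# Check the Bloch components of |psi> = (|+> + |1>) / (2 cos(pi/8)).
import math

sq2 = math.sqrt(2)
v0, v1 = 1 / sq2, 1 + 1 / sq2        # unnormalized amplitudes of |+> + |1>
norm = 2 * math.cos(math.pi / 8)     # claimed normalization constant
a, b = v0 / norm, v1 / norm          # normalized amplitudes

r_x = 2 * a * b                      # <X> for a real-amplitude qubit state
r_z = a * a - b * b                  # <Z>
print(round(r_x, 6), round(r_z, 6))  # 0.707107 -0.707107
```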
Thus, the general approach I took was correct, but I initially failed at performing mathematical optimization in my head. | {
"domain": "cstheory.stackexchange",
"id": 3986,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-computing, quantum-information",
"url": null
} |
quantum-mechanics, wavefunction, schroedinger-equation, fourier-transform, time-evolution
Title: "Stationary" vs. moving wave packet I am working through a quantum mechanics problem involving the time evolution of a free particle (the particle is a proton) given that the initial state is a Gaussian wave packet of the form:
$$
\psi(x,0)=(2\pi\sigma^2)^{-1/4}e^{-x^2/4\sigma^2}\,,
$$
where $\sigma$ is the width of the Gaussian. I worked through it and got that the evolution, $\psi(x,t)$, is
$$
\psi(x,t)=\frac{\sigma^{1/2}}{2^{1/4}\pi^{3/4}}\int_{-\infty}^{\infty}e^{i(kx-\hbar k^2t/2m_p)-k^2\sigma^2}dk\,.
$$
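As a quick sanity check on the starting point, one can verify numerically that the initial packet is normalized (a sketch; any positive $\sigma$ works):

```python
# Numerical check that psi(x,0) is normalized: the integral of |psi|^2 over x is 1.
import math

sigma = 1.3   # arbitrary positive width

def psi0_squared(x):
    # |psi(x,0)|^2 = (2*pi*sigma^2)^(-1/2) * exp(-x^2 / (2*sigma^2))
    return (2 * math.pi * sigma ** 2) ** -0.5 * math.exp(-x ** 2 / (2 * sigma ** 2))

n, L = 40000, 20.0                    # midpoint rule on [-L, L]; tails are negligible
dx = 2 * L / n
total = sum(psi0_squared(-L + (i + 0.5) * dx) * dx for i in range(n))
print(round(total, 6))  # 1.0
```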
To derive this, I used the Fourier transform to expand $\psi(x,0)$ in terms of the eigenfunctions of a free particle, which are the plane waves. Then, I computed $\phi(k)$ using the inverse Fourier transform and substituted this back into the Fourier integral for $\psi(x,0)$. To compute $\psi(x,t)$, I realized that the component waves of the wave packet must propagate independently from one another, and the time evolution of a general plane wave is given by $\psi(\vec{r},t)=Ae^{i(\vec{k}\cdot\vec{r}-\omega t)}$. I multiplied this by the integrand of $\psi(x,0)$ to obtain $\psi(x,t)$, where I substituted $\hbar k^2/2m_p$ for $\omega$ (I used the Planck relation and the energy eigenvalues of a free proton). | {
"domain": "physics.stackexchange",
"id": 93691,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, wavefunction, schroedinger-equation, fourier-transform, time-evolution",
"url": null
} |
# Uniform Slab: Finding the Electric Field Using Gauss's Law
## Homework Statement
Uniform Slab: Consider an infinite slab of charge with thickness 2a. We choose the origin inside the slab at an equal distance from both faces (so that the faces of the slab are at z = +a and z = −a). The charge density ρ inside the slab is uniform (i.e., ρ =const). Consider a point with coordinates (x,y,z). Using Gauss’ law, find the electric field
(a) when the point is inside the slab (−a < z < +a),
(b) and when the point is outside the slab (z > a or z < −a).
(c) Sketch the Ez vs z graph.
(d) If the density were not constant but a function of z, like ##ρ=Bz^2##, then repeat the steps above.
Gauss Law
## The Attempt at a Solution
a) I took a cylindrical Gaussian surface inside the slab, and from that I found ##E=\frac {ρz} {2ε_0}##. Here z is the height of the chosen point above the origin.
b) I took a cylinder again, and from that I found ##E=\frac {ρa} {2ε_0}##.
c) Outside, the field will be constant since ρ, a and ##ε_0## are all constant; inside, as z increases the field increases until z = a, and beyond that it stays constant.
d) Then the electric field will be ##E=\frac {Bz^3} {6ε_0}## inside and ##E=\frac {Ba^3} {6ε_0}## outside?
Are these true?
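Not part of the original thread, but one way to cross-check part (a) numerically is to superpose thin sheets, treating each slice of thickness dz' as an infinite sheet of surface density ρ dz' whose field ρ dz'/(2ε₀) points away from the slice:

```python
# Sketch: model the slab as a stack of thin sheets. Each slice of thickness dz'
# acts like an infinite sheet of surface density rho*dz', contributing a field
# of magnitude rho*dz'/(2*eps0) directed away from the slice.
eps0 = 1.0    # units with eps0 = 1
rho = 1.0
a = 1.0       # slab occupies -a <= z' <= a
z = 0.4       # field point inside the slab

n = 200000
dz = 2 * a / n
E = 0.0
for i in range(n):
    zp = -a + (i + 0.5) * dz            # slice midpoint
    sign = 1.0 if z > zp else -1.0      # field points away from the slice
    E += sign * rho * dz / (2 * eps0)
print(round(E, 4))  # prints 0.4, the net E_z at z = 0.4
```

With ρ = ε₀ = 1 the printed value at z = 0.4 can be compared against the candidate expressions in the attempt above.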
TSny
Homework Helper
Gold Member
## The Attempt at a Solution | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9678992905050947,
"lm_q1q2_score": 0.822686497868748,
"lm_q2_score": 0.849971175657575,
"openwebmath_perplexity": 4029.16110112722,
"openwebmath_score": 0.8620207905769348,
"tags": null,
"url": "https://www.physicsforums.com/threads/uniform-slab-finding-electric-field-using-gauss-law.907228/"
} |
vba, excel
For Each ws In ThisWorkbook.Worksheets
If ws.Range("A5") = "Project # :" And ws.Range("E16") >= Sheet6.Range("F1") Then
x = .Range("A" & Rows.Count).End(xlUp).Offset(1).row
.Cells(x, "A").Value = ws.Name 'classifying number
.Cells(x, "B").Formula = "='" & ws.Name & "'!$B$5" 'Project #
.Cells(x, "C").Formula = "='" & ws.Name & "'!$A$1" 'Project Name
.Cells(x, "D").Formula = "='" & ws.Name & "'!$B$8" 'Project Engineer
.Cells(x, "E").Formula = "='" & ws.Name & "'!$B$11" 'In-service Due
.Cells(x, "F").Formula = "='" & ws.Name & "'!$E$16" 'In-service Actual
.Cells(x, "G").Formula = "='" & ws.Name & "'!$E$6" '30% Due
'.Cells(x, "H").Formula = "='" & ws.Name & "'!$E$13" '30% actual
.Cells(x, "H").Formula = "='" & ws.Name & "'!$F$13" '30% Success
.Cells(x, "I").Formula = "='" & ws.Name & "'!$E$7" '60% due
'.Cells(x, "J").Formula = "='" & ws.Name & "'!$E$14" '60% actual
.Cells(x, "J").Formula = "='" & ws.Name & "'!$F$14" '60% Success | {
"domain": "codereview.stackexchange",
"id": 21547,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "vba, excel",
"url": null
} |
data-compression, computer-graphics
Title: Video compression algorithm I am working on a project for my CS subject and the topic is video compression. After the theoretical aspect, I want to implement and describe one algorithm that does video compression. I am not sure if this type of question is allowed here, but: what would be the best (the easiest) algorithm to implement at an entry level?
I was looking for some and couldn't find a lot of scientific papers. You should check how JPEG works. For video compression, you have to implement how JPEG works on every frame. There is a great video on YouTube by Reducible on how JPEG works under the hood. It explains step by step how the algorithm works but you will have to code it yourself. | {
"domain": "cs.stackexchange",
"id": 19779,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "data-compression, computer-graphics",
"url": null
} |
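The heart of the JPEG-per-frame suggestion above is the 2D discrete cosine transform on 8x8 blocks (followed by quantization and entropy coding). A minimal sketch of that DCT step:

```python
# Minimal sketch of the JPEG core idea on one 8x8 block: the 2D DCT concentrates
# a block's energy into a few coefficients, which quantization then discards.
import math

def dct_1d(v):
    # orthonormal DCT-II of a length-n sequence
    n = len(v)
    out = []
    for k in range(n):
        s = sum(v[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def dct_2d(block):
    # apply the 1D DCT to rows, then to columns
    rows = [dct_1d(r) for r in block]
    cols = [dct_1d(list(col)) for col in zip(*rows)]
    return [list(r) for r in zip(*cols)]

block = [[128] * 8 for _ in range(8)]   # a flat 8x8 block of pixel values
coeffs = dct_2d(block)
# A constant block has all its energy in the DC coefficient:
print(round(coeffs[0][0], 2))  # 1024.0
```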
classical-mechanics, lagrangian-formalism, reference-frames
Edit: Upon reading the beginning of the relevant section of the Wikipedia article, the problem it describes is "a single particle with mass $m$ moving in a potential field $U(r)$", so it is effectively assuming one of the masses is kept fixed. The Lagrangian above describes the problem in which both masses are free to move. | {
"domain": "physics.stackexchange",
"id": 95014,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classical-mechanics, lagrangian-formalism, reference-frames",
"url": null
} |
First note that $$g$$ is well-defined:
• if $$x,y \in (0,1)$$ then $$(x,y)$$ is the only member of $$[(x,y)]$$
• if $$x = 0$$, then $$(0,y) \sim (1,y)$$ but $$f(0,y) = (y+1)(\cos(0),\sin(0)) = (y+1)(1,0) = (y+1)(\cos(2\pi), \sin(2\pi)) = f(1,y)$$ so in particular $$[f(0,y)] = [f(1,y)]$$.
• if $$y = 0$$ then $$(x,0) \sim (x,1)$$ but $$[f(x,0)] = [(\cos(2\pi x), \sin(2\pi x))] = [2(\cos(2\pi x), \sin(2\pi x))] = [f(x,1)]$$
Now we show that $$g$$ is continuous. As in @csprun's answer, denote the respective quotient maps by $$\pi : A \to A/_\approx$$ and $$\rho : I^2 \to I^2/_\sim$$. Recall that quotient topologies are given by $$\{U \subseteq A/_\approx : \pi^{-1}(U) \text{ open in } A\}, \quad \{V \subseteq I^2/_\sim : \rho^{-1}(V) \text{ open in } I^2\}$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9793540728763411,
"lm_q1q2_score": 0.8139833352072464,
"lm_q2_score": 0.831143054132195,
"openwebmath_perplexity": 96.31891663768356,
"openwebmath_score": 0.9640833139419556,
"tags": null,
"url": "https://math.stackexchange.com/questions/3129188/show-homeomorphism-between-two-quotient-topologies"
} |
The jarque.bera.test function (in the tseries package) and jarque.test (in the moments package) both perform the Jarque-Bera test, a goodness-of-fit test of whether sample data have the skewness and kurtosis of a normal distribution: under the null hypothesis the skewness (deviation from symmetry) should be zero and the excess kurtosis should be zero, and the statistic is asymptotically chi-squared with 2 degrees of freedom (see Jarque, C. and Bera, A. (1980)). A robust version, rjb.test, is available in the lawstat package by W. Wallace Hui, Yulia R. Gel, Joseph L. Gastwirth and Weiwen Miao. As an example, running the test on the LakeHuron data (annual measurements of the level of Lake Huron, 1875-1972) returns df = 2 and p-value = 0.9773, so at the 95% level we do not reject the null hypothesis that the data are normally distributed. (By contrast, the interpretation of bptest concerns the homoscedasticity assumption, i.e. whether the residuals have constant variance.) | {
"domain": "jimchristy.com",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9658995742876885,
"lm_q1q2_score": 0.8339495851220217,
"lm_q2_score": 0.8633916064586998,
"openwebmath_perplexity": 4628.22776329525,
"openwebmath_score": 0.5811334848403931,
"tags": null,
"url": "http://www.jimchristy.com/the-matchmaker-lwqknrg/jarqueberatest-in-r-48ffc2"
} |
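The statistic itself is easy to state: $JB = \frac{n}{6}\left(S^2 + \frac{K^2}{4}\right)$, where $S$ is the sample skewness and $K$ the excess kurtosis. A pure-Python sketch (not R, just to make the formula concrete):

```python
# Sketch of the Jarque-Bera statistic: JB = n/6 * (S^2 + K^2/4), asymptotically
# chi-squared with 2 degrees of freedom under normality.
def jarque_bera(x):
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    skew = m3 / m2 ** 1.5          # S: deviation from symmetry
    ex_kurt = m4 / m2 ** 2 - 3     # K: excess kurtosis
    return n / 6 * (skew ** 2 + ex_kurt ** 2 / 4)

print(round(jarque_bera([-2, -1, 0, 1, 2]), 4))  # 0.3521 for this symmetric sample
```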
data-structures, graph-traversal
I assume I could use recursion to count the accessible blue squares if the above conditions are met. Let us first calculate the coordinates $C(m,n)$ of the center of hexagon $(m,n)$. We assume that the origin is at the center of hexagon $(0,0)$ and that the hexagon radius (distance from center to vertex) is $r$. A short calculation shows that hexagon $(1,0)$ has its center at $$C(1,0) = a = (\sqrt{3} r, 0)$$ and hexagon $(0,1)$ has its center at $$C(0,1) = b = (\sqrt{3} r/2, 3 r/2).$$
Therefore, hexagon $(m,n)$ has its center at
$$C(m,n) = m \cdot a + n \cdot b = (\sqrt{3} r (m + n/2), 3 r n / 2).$$
Notice that we should allow negative $m$ and $n$ if we want to cover the whole plane.
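The center formula is easy to check numerically; a sketch with $r = 1$, where adjacent centers should all be $\sqrt{3}\,r$ apart:

```python
# Sketch of the center formula C(m,n) = m*a + n*b with r = 1.
import math

r = 1.0
a = (math.sqrt(3) * r, 0.0)              # C(1,0)
b = (math.sqrt(3) * r / 2, 3 * r / 2)    # C(0,1)

def center(m, n):
    return (m * a[0] + n * b[0], m * a[1] + n * b[1])

# distance between the centers of hexagons (0,0) and (1,0):
dx, dy = center(1, 0)
print(round(math.hypot(dx, dy), 6))  # 1.732051, i.e. sqrt(3)*r
```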
To get from the center $C(m,n)$ to one of the adjacent centers we need to add to $C(m,n)$ one of the vectors $a$, $b$, $-a$, $-b$, $a - b$, or $b - a$. (If we add $a + b$ or $-a-b$ we go too far). If we travel half the distance we will end up exactly in the midpoint of a side. Therefore, midpoints of sides have coordinates of the form
$$C(m+1/2, n), C(m,n+1/2), C(m-1/2,n), C(m,n-1/2), C(m+1/2,n-1/2), C(m-1/2,n+1/2).$$
We have discovered how to encode sides: just use half-integer coordinates. But this is silly, it is better to multiply everything by two, which leads to the following system. | {
"domain": "cs.stackexchange",
"id": 791,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "data-structures, graph-traversal",
"url": null
} |
quantum-mechanics, operators, harmonic-oscillator
where $$\langle\phi_q|\phi_q\rangle = \langle q|\hat{N} |q\rangle = q\langle q|q\rangle = q$$ (as $\langle q|q\rangle = 1$ due to orthonormality), and then solve for $A$?
Or am I horribly mistaken in terms of how bras and kets work?
From what I have so far, I would obtain that $$A^2\sum_{q=0}^{100}\sum_{q=0}^{100}\frac{q}{(q+i)(q-i)} = 1$$ and $$A = \sqrt{\sum_{q=0}^{100}\sum_{q=0}^{100}\frac{(q+i)(q-i)}{q}}$$
Hopefully someone will be able to clarify my misunderstandings here.. | {
"domain": "physics.stackexchange",
"id": 35436,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, operators, harmonic-oscillator",
"url": null
} |
machine-learning
Title: Can we use machine learning to generate a text output based on the input strings Problem : Generate a text output based on input strings which will be combined using a number of rules.
Example :
Feature1 Feature2 O/P
Rule 1 Enum_Domain Priority /Enum_Domain/Priority
Rule 2 Enum_Domain.EnumData Name /Enum_Domain/EnumData/Name
Rule 1 Trunkgroup Gateway /Trunkgroup/Gateway
Rule 2 GatewayGrp.Gateway IP /GatewayGrp/Gateway/IP
This is a simple programming problem, but is there any machine learning algorithm that can learn these rules and generate the output based on the two inputs? Yes, sequence-to-sequence models attempt to do this. This can be used in a number of domains, from typo fixing to machine translation. They are encoder -> decoder based, which means you have a part that encodes your input and then a decoder that generates a new sequence based on this encoding (and usually some attention). In this case your encoder would likely be two recurrent neural networks of which the output would be concatenated, and then a decoder that takes this concatenated output and turns it into a new sequence. If you want to use attention you need to adapt the standard attention a bit because you have two textual inputs, but if you understand how it works this would not be too difficult to adapt.
"domain": "datascience.stackexchange",
"id": 2091,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning",
"url": null
} |
c#, performance, array
Title: Searching item in array I have a program that calculates some values and saves them to an array. Before saving a value, the program checks if there is already an identical value in the array:
string someData = GetData();
bool newItem = true;
int arrayDataLength = arrayData.Length;
for (int x = 0; x < arrayDataLength; x++)
{
if (arrayData[x] == someData)
{
newItem = false;
arrayDataCount[x]++;
break;
}
}
if (newItem == true)
{
Array.Resize(ref arrayData, (arrayDataLength + 1));
Array.Resize(ref arrayDataCount, (arrayDataLength + 1));
arrayData[arrayDataLength] = someData;
arrayDataCount[arrayDataLength]++;
}
Sometimes arrays with data become big, and the program works a bit slowly.
How can I optimize the performance of this action? If you don't need to count the number of occurrences, use a Set:
private readonly ISet<string> data = new HashSet<string>();
public void AddNewData()
{
string someData = GetData();
data.Add(someData);
}
If you need to count the occurrences, you should use a Dictionary:
private readonly IDictionary<string, int> data = new Dictionary<string, int>();
public void AddNewData()
{
string someData = GetData();
if (data.ContainsKey(someData))
{
data[someData]++;
}
else
{
data[someData] = 1;
}
}
Alternatively, you could just use a List and calculate the counts when you need them, using the LINQ GroupBy extension method:
private readonly ICollection<string> data = new List<string>();
public void AddNewData()
{
string someData = GetData();
data.Add(someData); // data might contain duplicates
} | {
"domain": "codereview.stackexchange",
"id": 2992,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, performance, array",
"url": null
} |
newtonian-mechanics, energy-conservation, friction, noethers-theorem, coupled-oscillators
Clearly none of the forces are explicit functions of time, so the system is time-translation-symmetric. However, the energy $E = \frac{1}{2}m\dot{x}_{A}^{2} + \frac{1}{2}m\dot{x}_{B}^{2} + \frac{1}{2}k(x_{A}-x_{B})^{2}$ is clearly not conserved.
According to this and this, I can convert my system to a Lagrangian system with
\begin{align*}
L &= \frac{1}{2}m(\dot{x}_{A}^{2} + \dot{x}_{B}^{2}) - \frac{1}{2}k(x_{A} - x_{B})^{2}, \qquad Q_{k} = -b\dot{x}_{k} \qquad (k=A, B),
\end{align*}
and equations of motion
\begin{align*}
\frac{d}{dt}\left( \frac{\partial L}{\partial \dot{x}_{k}} \right) - \frac{\partial L}{\partial x_{k}} = Q_{k} \qquad (k=A, B).
\end{align*}
This is again apparently a time-translation-symmetric system, but the usual definition of energy is not conserved.
How is this compatible with Noether's theorem? Noether's theorem applies to Lagrangian systems where no generalized forces are present. To have a mapping between conserved quantities and continuous symmetries of the system, you need to rewrite it with $Q_i =0$.
You can describe a damped oscillator $ m\ddot{x} = - k x - \lambda \dot{x}$ with the following lagrangian : | {
"domain": "physics.stackexchange",
"id": 78297,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, energy-conservation, friction, noethers-theorem, coupled-oscillators",
"url": null
} |
java
This looks very complicated. What's worse:
You're using synchronizedList, so you're assuming concurrent access.
The method itself is not synchronized, so it'll probably break.
Imagine you test topTenItems.contains(itemId) and in the meantime another thread(s) change the list. So you're doing something based on a wrong test outcome.
I'm a bit lost
Your items are Integers and you're assuming they're sorted, too. Why should they be? What do you need them for?
I might go for something like
// start empty as initially there's nothing in top ten
private final List<Integer> topTenItems = new ArrayList<Integer>();
// no topTenScores
private final Object lock = new Object();
private void updateTopTen(Integer itemId, Integer newScore) {
synchronized (lock) {
int index = topTenItems.indexOf(itemId);
if (index == -1) {
index = topTenItems.size();
topTenItems.add(itemId);
}
while (index > 0 && itemsScores.get(index) > itemsScores.get(index-1)) {
swap(index, index-1);
index--;
}
if (topTenItems.size() > TOP_TEN_SIZE) {
topTenItems.remove(TOP_TEN_SIZE);
}
}
} | {
"domain": "codereview.stackexchange",
"id": 14664,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java",
"url": null
} |
optics, visible-light, speed-of-light
Title: What would it look like if the speed of light in vacuum were to decrease? I've recently come across theories involving variable speed of light as a solution for the Horizon problem as well as for Dark Energy's accelerating expansion of space.
In a discussion with a friend, the idea that a significantly decreased speed of light would make objects appear more distant (the light takes longer to travel to the observer) was presented. I'm not so sure about this, I think objects would appear the same size and shape because they're coming from the same direction as before, only redshifted due to the loss in energy. We also thought about if light experiencing more curvature than before due to being within the gravitational field of large celestial bodies would make them appear in different locations relative to where we currently observe them. Also, perhaps everything would appear a little dimmer, as the light is hitting our eyes with less energy.
In a theoretical alternate reality where the speed of light is half of our measured $c$, and that other constants depending on $c$ have changed proportionally, what visual differences might a human observer from our present reality see, and why? Are the ideas presented above realistic at all or way off? Please note that I'm not interested in a discussion on the validity of the theory above, only what theoretical effects a slower speed of light may have on optics.
If possible, I would greatly appreciate being pointed to resources with formulas for calculating these effects - I have some interest in writing a renderer capable of simulating them. The speed of light is a dimensionful constant and, as such, it is impossible to talk about changes to the speed of light without being dead clear about what you do want to keep constant.
Your question doesn't really give any meaningful benchmarks for this, so to some extent your question is completely unanswerable. | {
"domain": "physics.stackexchange",
"id": 62702,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "optics, visible-light, speed-of-light",
"url": null
} |
# Why aren't all dense subsets of $\mathbb{R}$ uncountable?
1) we say that $\mathbb{R}$ is uncountable and $\mathbb{Q}$ is countable.
That implies $\mathbb{R}-\mathbb{Q}$, that is irrational numbers are uncountable.
2) Archimedean property of $\mathbb{R}$ suggests that there exists a rational between any two numbers, i.e. $\mathbb{Q}$ is dense in $\mathbb{R}$. Then how come $\mathbb{Q}$ is countable while the irrationals are uncountable?
• That's the way it is – Norbert Feb 27 '12 at 18:55
• If you regard $\mathbb{R}$ as a closure of $\mathbb{Q}$, this gives some insight why $\mathbb{R}$ might be uncountable. Informally, $\mathbb{R} \subset 2^{\mathbb{Q}}$ – Tom Artiom Fiodorov Feb 27 '12 at 19:04
• @Mathemagician1234: Yes, the irrationals are also dense in $\mathbb{R}$; but then, so is $\mathbb{R}$ itself. Why do you think the question is "moot"? – Arturo Magidin Feb 27 '12 at 19:41
• @Mathemagician1234: I think it's clear that the intention of the question is "Why aren't dense sets of $\mathbb{R}$ necessarily uncountable?" – Nate Eldredge Feb 27 '12 at 20:39
• @Mathemagician1234: I have edited the title, but I have to disagree (strongly) it was that ambiguous. – Aryabhata Feb 28 '12 at 20:18 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9796676454700793,
"lm_q1q2_score": 0.8163811462292474,
"lm_q2_score": 0.8333245973817158,
"openwebmath_perplexity": 238.14703722710274,
"openwebmath_score": 0.8955332636833191,
"tags": null,
"url": "https://math.stackexchange.com/questions/114087/why-arent-all-dense-subsets-of-mathbbr-uncountable/114090"
} |
Thus, we see that the system is consistent (as predicted by Theorem HSC) and has an infinite number of solutions (as predicted by Theorem HMVEI). With suitable choices of $x_3$ and $x_4$, each solution can be written as
$$\left[\begin{array}{c} -\frac{7}{5}x_{3} - \frac{7}{5}x_{4} \\ -\frac{2}{5}x_{3} + \frac{3}{5}x_{4} \\ x_{3} \\ x_{4} \end{array}\right]$$
C22 Contributed by Chris Black Statement [202]
The augmented matrix for the given linear system and its row-reduced form are:
$$\left[\begin{array}{cccc|c} 1 & -2 & 1 & -1 & 0 \\ 2 & -4 & 1 & 1 & 0 \\ 1 & -2 & -2 & 3 & 0 \end{array}\right] \xrightarrow{\text{RREF}} \left[\begin{array}{cccc|c} \mathbf{1} & -2 & 0 & 0 & 0 \\ 0 & 0 & \mathbf{1} & 0 & 0 \\ 0 & 0 & 0 & \mathbf{1} & 0 \end{array}\right].$$
Thus, we see that the system is consistent (as predicted by Theorem HSC) and has an infinite number of solutions (as predicted by Theorem HMVEI). With a suitable choice of $x_2$, each solution can be written as
$$\left[\begin{array}{c} 2x_{2} \\ x_{2} \\ 0 \\ 0 \end{array}\right]$$
C23 Contributed by Chris Black Statement [202]
The augmented matrix for the given linear system and its row-reduced form are: | {
"domain": "ups.edu",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9905874106902668,
"lm_q1q2_score": 0.8121760188445115,
"lm_q2_score": 0.8198933381139645,
"openwebmath_perplexity": 1463.3392804900752,
"openwebmath_score": 0.9845515489578247,
"tags": null,
"url": "http://linear.ups.edu/jsmath/0230/fcla-jsmath-2.30li20.html"
} |
c, linked-list
On my system, this program prints out
The pointer in the function is: 0x7fffef30f048
The pointer in main is: 0x7fffef30f070
Notice that the addresses are different. Since the value being returned is copied, we don't need to worry about manually manipulating memory, as the compiler takes care of that for us. It works the same way when returning a pointer from function: the value we care about is copied, so that we can access it when the variable that originally held it no longer exists.
Let's look at this in a different context.
// what your alloc function is doing:
int *malloc_add(int a, int b) {
int *result = malloc(sizeof *result);
*result = a + b;
return result;
}
// what it should be doing:
int add(int a, int b) {
int result;
result = a + b;
return result;
}
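To make the contrast concrete, here is a hypothetical usage sketch of both versions, showing the two extra obligations (dereference and free) in the malloc case:

```c
// Hypothetical usage sketch of both versions: the malloc version obliges the
// caller to dereference the pointer and to free() the heap allocation.
#include <stdlib.h>

int *malloc_add(int a, int b) {
    int *result = malloc(sizeof *result); /* note: sizeof the pointee, not the pointer */
    *result = a + b;
    return result;
}

int add(int a, int b) {
    return a + b;
}

int demo(void) {
    int *p = malloc_add(2, 3);
    int sum = *p;   /* extra step 1: dereference */
    free(p);        /* extra step 2: release the heap memory */
    return sum;     /* 5, same as add(2, 3), with none of the bookkeeping */
}
```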
While we get the value we want in both cases, in the first we need to perform two extra steps: we need to dereference the pointer to get the value, and we need to free the memory allocated on the heap. | {
"domain": "codereview.stackexchange",
"id": 33999,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, linked-list",
"url": null
} |