ros2, ros-humble, xacro, include, namespace
Title: How do you pass params into another Xacro file via xacro:include? How does the other Xacro file receive that input? How do you use xacro:macro and arg to include 2 of the same robot in your URDF? I'm using args for each Xacro file to read the suffix. Because it's in the global namespace, I can't reassign the suffix arg. This results in a conflicting namespace. "Error: link 'slide_1/base_link' is not unique."
Also, the urdf.xacro file I'm including has a macro as well, with material tags inside to keep them local. However, when running this macro twice, the materials get declared multiple times. "Error: material 'aluminum' is not unique." I have an inconvenient way of solving this by passing a global arg and wrapping each material declaration in an if statement so the materials are only declared once at the top-level xacro. Not pretty.
<xacro:macro name="full_assembly" params="suffix">
<xacro:arg name="suffix" default="${suffix}" />
<xacro:include filename="$(find pkg)/urdf/single_assembly.urdf.xacro" />
</xacro:macro>
<xacro:property name="suffix_1" value="_1" />
<xacro:property name="suffix_2" value="_2" />
<xacro:full_assembly suffix="${suffix_1}" />
<xacro:full_assembly suffix="${suffix_2}"/>

Use the ns attribute in the xacro:include.
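A sketch of what the answer suggests, with hypothetical namespace names: the ns attribute on xacro:include puts the included file's properties and macros into a named scope instead of the global one.

```xml
<xacro:include filename="$(find pkg)/urdf/single_assembly.urdf.xacro" ns="asm1"/>
<xacro:include filename="$(find pkg)/urdf/single_assembly.urdf.xacro" ns="asm2"/>
<!-- properties from the include are then read as ${asm1.some_prop} and
     macros invoked as <xacro:asm1.some_macro .../> (names hypothetical) -->
```

Note that ns scopes xacro-level properties and macros; making URDF link names like base_link unique still needs a suffix parameter threaded through the macro itself.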
Resources
ROS Wiki: http://wiki.ros.org/xacro#Including_other_xacro_files
Xacro Repo: https://github.com/ros/xacro/wiki#xacroinclude | {
"domain": "robotics.stackexchange",
"id": 38770,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros2, ros-humble, xacro, include, namespace",
"url": null
} |
newtonian-mechanics, forces, energy, work
Title: Confusion about the total work done on an object applied by an upward force Suppose we have an object whose mass is $m$ kg. If we lift the object and it moves with constant upward velocity, the force we're applying to it will be equal to $mg$, and if the object moves a distance of $h$ meters, technically we should have done $mgh$ joules of work. However, gravity is also doing work on the object, and since its direction opposes the direction of the object's motion, it should have done $-mgh$ joules of work, so by calculation the total work done on the object should have been $mgh + (-mgh) = 0$ joules.
Nevertheless, if we consider the potential energy of the object, since the object has gone up by $h$ meters, the potential energy should increase by $mgh$ joules as the kinetic energy is kept constant, meaning that there has to be some total work done on the object. So why did we run into a contradiction?

The issue is that if you are taking into account the change in gravitational potential energy, then you have also already taken into account the work done by gravity.
For any conservative force, the work done by that force over some path $C$ is equal to the negative change in potential energy from the starting position of the path $\mathbf a$ to the ending position of the path $\mathbf b$. This is due to the definition of potential energy $U$ of a conservative force $\mathbf F$
$$\mathbf F=-\nabla U$$
and use of the definition of work with the fundamental theorem of calculus
$$W=\int_C\mathbf F\cdot\text d\mathbf x=\int_C(-\nabla U)\cdot\text d\mathbf x=-(U(\mathbf b)-U(\mathbf a))=-\Delta U$$
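A quick numeric bookkeeping check of the two equivalent accountings (mass, g, and height values are hypothetical):

```python
# hypothetical values: mass in kg, g in m/s^2, height in m
m, g, h = 2.0, 9.8, 5.0

W_applied = m * g * h       # work done by the upward force mg over height h
W_gravity = -m * g * h      # gravity acts opposite to the displacement
delta_U   = m * g * h       # increase in gravitational potential energy

# total work = change in kinetic energy = 0 at constant velocity
assert W_applied + W_gravity == 0.0
# the work done by gravity already equals -dU, so counting both double-counts
assert W_gravity == -delta_U
```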
Therefore, you can either forget that gravity is conservative and just calculate the total work done on the object (work done by you and gravity), or you can take potential energy into account and then just consider the work done by you. Either way you get the same thing happening. No contradiction. | {
"domain": "physics.stackexchange",
"id": 84023,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, forces, energy, work",
"url": null
} |
c++, strings, immutability
public: // methods
//! Default constructor.
basic_cstring() : basic_cstring(nullptr) {}
//! Constructor.
basic_cstring(const_pointer str) : basic_cstring(str, (str == nullptr ? 0 : traits_type::length(str))) {}
//! Constructor.
basic_cstring(const_pointer str, size_type len) {
if (str == nullptr || len == 0) {
mStr = getPtrToString(mEmpty);
return;
} else {
pointer content = allocate(len);
traits_type::copy(content, str, len);
content[len] = value_type();
mStr = content;
}
}
//! Destructor.
~basic_cstring() { deallocate(mStr); }
//! Copy constructor.
basic_cstring(const basic_cstring &other) noexcept : mStr(other.mStr) { incrementRefCounter(mStr); }
//! Move constructor.
basic_cstring(basic_cstring &&other) noexcept : mStr(std::exchange(other.mStr, getPtrToString(mEmpty))) {}
//! Copy assignment.
basic_cstring & operator=(const basic_cstring &other) {
if (mStr == other.mStr) return *this;
deallocate(mStr);
mStr = other.mStr;
incrementRefCounter(mStr);
return *this;
}
//! Move assignment.
basic_cstring & operator=(basic_cstring &&other) noexcept { std::swap(mStr, other.mStr); return *this; } | {
"domain": "codereview.stackexchange",
"id": 45074,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, strings, immutability",
"url": null
} |
solar-system, star-formation, planetary-formation
Title: How did the lighter elements end up in the center of the solar system? Solar System Formation The previous generation of stars famously is the origin of all the heavier elements (up until iron?) in the solar system. So a big portion of the solar system mass actually is made up of Carbon, Silicon, Iron and the like because of that. But in the center, and only in the center, there is a star with presumably almost no heavy elements inside. How can that be? Am I wrong about the actual mass concentrations or is there really an imbalance, i.e. is the element distribution really lighter towards the center of the solar system? I would presume that the previous generation of stars just ended in a more or less uniform cloud of debris, from which the solar system formed. But if so, why aren't there star systems where the star has a very different composition and is kind of a spluttering, dirty fusion machine (metaphorically, I mean)?

The solar system contains very little of elements heavier than Helium - less than 2% by mass.
This is reflected in the chemical abundances measured in the photosphere of the Sun, i.e. the Sun does contain heavier elements.
Your question is the wrong way around; it is not that the heavier elements have not sunk into the middle, it is that the vast majority of hydrogen and helium that was in the same place as the planets when they formed, did not end up as part of the planets. In fact, even this is only partially true. The mass of planetary material in the solar system is also dominated by the hydrogen and helium in the gas giants.
So the conundrum is only why the smaller planets don't have a similar composition to the Sun. The answer to that is temperature and gravity. A small, hot planet just doesn't have the gravity to retain fast moving hydrogen and helium atoms, unless they are trapped in some compound (like water!).
Thus, the small planets close to the Sun are depleted of light elements. | {
"domain": "astronomy.stackexchange",
"id": 2527,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "solar-system, star-formation, planetary-formation",
"url": null
} |
special-relativity, metric-tensor, tensor-calculus, notation
$$(A\times B)^i = \epsilon^{i}{}_{jk}A^jB^k=g^{ii'}\epsilon_{i'jk}A^j B^k = \pm \epsilon_{ijk}A^j B^k=\pm\tilde{\epsilon}_{ijk}A^j B^k$$
Where $\pm$ indicates the next convention choice of mostly positive or mostly negative metric signature. I think this has the disadvantage that depending on whether you take the mostly positive or mostly negative metric you get a relative minus sign in the definition of the cross product, but I think this is actually to be expected since it moves the spatial part from being a right handed to left handed coordinate system. It also has the disadvantage that you really have to remember quite a few places for negative signs. It has the advantage that is covariant so that if you do follow the rules you can do the manipulations more easily I guess.
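The signature-dependent sign can be checked numerically; a small numpy sketch with hypothetical vectors, treating the spatial block of the inverse metric as $\pm\delta^{ij}$:

```python
import numpy as np

# 3-index Levi-Civita symbol with lower indices, eps_{ijk}
eps_lower = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps_lower[i, j, k] = 1.0
    eps_lower[i, k, j] = -1.0

A = np.array([1.0, 2.0, 3.0])   # hypothetical contravariant components A^j
B = np.array([4.0, 5.0, 6.0])   # hypothetical contravariant components B^k

# spatial block of the inverse metric: +delta^{ij} for mostly-plus (-,+,+,+),
# -delta^{ij} for mostly-minus (+,-,-,-)
for sign in (+1.0, -1.0):
    eps_mixed = sign * eps_lower    # eps^i_{jk} = g^{ii'} eps_{i'jk}
    cross = np.einsum('ijk,j,k->i', eps_mixed, A, B)
    # the two signature conventions differ by an overall sign
    assert np.allclose(cross, sign * np.cross(A, B))
```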
For example @Solenodon Paradoxus gives a formula for the "correct expression" for the cross product but I'm not sure on what grounds that is the "correct expression". It follows proper Einstein convention but the symbol used is the Levi-Civita symbol which is not a tensor so there's not actually a rule saying the expression SHOULD follow the Einstein convention which leaves confusion as to the motivation behind any particular definitions when there are so many choices.
*I believe this is a matter of convention. We could alternatively take the Levi-Civita symbol with upper indices to be the object which is anti-symmetric in its indices.
**but the story would maybe be different if we had chosen the upper indexed Levi-Civita symbol to be the usual anti-symmetric object instead.
edit1: for less awkward notation we could define $(A \times B)^i = \epsilon^{ijk}A_jB_k$ which would be equivalent to the definition given. For the problem I'm working on I'm thinking of contravariant components of vectors as the "physical" quantities so I'm preferring to write expressions in terms of those. | {
"domain": "physics.stackexchange",
"id": 53188,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, metric-tensor, tensor-calculus, notation",
"url": null
} |
legal, digital-rights
Title: Digital Rights and Agents talking to humans How does the legal question about agents talking to humans via telephone connection work? Recently Google gave a talk about Duplex, where an agent makes a call to a human to schedule a hairdresser.
I wonder if there are any regulations related to this type of scenario, if there are some limitations, and if the human needs to know that he is talking to an AI. Googling this turns up a lot of debate on whether the call made to the restaurant for booking was legal or not. I found this article to put forward a lot of ideas for and against. So it is for us to decide.
But one thing that I fully agree with, which should be made into a regulation, is that the human should know that he/she is talking to a bot. This tweet by @traviskorte explains why:
We should make AI sound different from humans for the same reason we put a smelly additive in normally odorless natural gas. | {
"domain": "ai.stackexchange",
"id": 572,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "legal, digital-rights",
"url": null
} |
biochemistry, bioinformatics, protein-folding, protein-structure
...the physics-based atomic potentials have proven to be useful in
refining the detailed packing of the side chain atoms and the peptide
backbones.
Homology modelling.
This type of modelling is a safe bet provided that the MSA is available and reliable. The premise here is that if two proteins look similar enough at the sequence level, and you know the structure of one, then probably there is structural conservation as well. The problem is that, obviously, those small changes in sequence, or other contextual differences like post-translational modifications, localised pH differences etc., might mean that the protein folds differently.
Machine Learning.
Neural networks are a form of machine learning. Conceptually, these use the same assumptions as homology modelling, but on a larger more sophisticated scale (not necessarily an advantage). A good example of “machine learning” winning out in terms of homology clustering and secondary structure prediction is that of HHpred. An algorithm will make subsets of "if" statements. If a protein has a certain combination of "ifs" and "nots" then it's safe to assume other proteins with the same "ifs" and "nots" have similar structures.
You need to know a lot of information about the dataset that you put in, and one also needs to know which of that information is relevant. If these conditions aren't thoroughly complied with, the categorisation results will just be amplified errors, arbitrary categorisations, missing categories, or a combination of those three.
It's worth pointing out at this point that most modern secondary structure predictors use a combination of homology and machine learning.
A reasonable ab initio approach.
More recently, iterative optimisation based on vague experimental data has become standard practice, shifting ab initio from a top-down approach to something more inductive. Kulic et al., 2012 is an interesting example of such a modern holistic approach. It involves iterative optimisation of ab initio protein structures based on experimentally derived structures.
Another excerpt from the 2009 article.
Thus, developing composite methods using both knowledge-based and
physics-based energy terms may represent a promising approach to the
problem of ab initio modelling. | {
"domain": "biology.stackexchange",
"id": 5177,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "biochemistry, bioinformatics, protein-folding, protein-structure",
"url": null
} |
c#, beginner, console
// NOTE: this is a trivial parsing, you should perform some validation..
// 0 for Blue, 1 for Red, 2 for Magenta
int userColor = int.Parse(Console.ReadKey().KeyChar.ToString());
Console.ForegroundColor = colors[userColor];
Console.WriteLine("Do you want to continue? y/n");
var nextTask = Console.ReadKey().KeyChar;
Continue = nextTask == 'y';
Console input
Don't read an entire line when you only need a single character: string NextTask = Console.ReadLine(); if (NextTask == "y") { ... You can read a single character instead: char nextTask = Console.ReadKey().KeyChar;. Unlike ReadLine, ReadKey does not wait for the Enter key to be pressed, but immediately returns the pressed key.
"domain": "codereview.stackexchange",
"id": 35534,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, beginner, console",
"url": null
} |
organic-chemistry, bond, molecular-structure
Chemists in general are interested in unusual bonding situations, since they challenge our understanding of bonding and of the chemistry of such molecules.
One of the prominent examples is the 2-norbornyl cation, the structure of which remained a mystery for a long time since it didn't fit within the common constraints of organic chemistry. Such ions are nowadays usually referred to as non-classical ions. Their bonding is different from the more common two-electron-two-centre bonds we expect in organic (and inorganic) molecules. In a first approximation they can be described with resonance (see here: What is resonance, and are resonance structures real?), but that understanding comes with a few misconceptions. An MO description involves multi-centre bonds and typically bond orders of less than one.
Another interesting example of unusual bonding are fluxional molecules like bullvalene. (See also here: What is the conformer distribution in monosubstituted fluoro bullvalene?) Similarly, the bonding situation in these molecules is quite fluid, which allows them to change shape in such a way that at room temperature we obtain a single signal in the proton NMR (Addison Ault. J. Chem. Educ. 2001, 78 (7), 924-927.). | {
"domain": "chemistry.stackexchange",
"id": 7900,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, bond, molecular-structure",
"url": null
} |
It suffices, if so, to show that $\sf ZFC+MA+\lnot CH$ is a consistent theory (relative to the consistency of $\sf ZFC$, of course). This is a result of Solovay and Tennenbaum, which appears in the same book as Theorem 16.13.
Assume $\sf GCH$ and let $\kappa$ be a regular cardinal greater than $\aleph_1$. There is a c.c.c. notion of forcing $P$ such that the generic extension $V[G]$ by $P$ satisfies Martin's Axiom and $2^{\aleph_0}=\kappa$.
By taking, for example, $V=L$ as the ground model and $\kappa=\aleph_2$, the result is a model where the continuum is $\aleph_2$ and the union of $\aleph_1$ null sets is a null set.
| {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9796676460637103,
"lm_q1q2_score": 0.8628884061732496,
"lm_q2_score": 0.8807970842359877,
"openwebmath_perplexity": 221.52136072346383,
"openwebmath_score": 0.9688538312911987,
"tags": null,
"url": "http://math.stackexchange.com/questions/534423/can-sets-of-cardinality-aleph-1-have-nonzero-measure/534682"
} |
quantum-algorithms, simons-algorithm, hidden-subgroup-problem
Finally, the set $X$ is $\{0,1\}^n$ and the map $f$ is precisely the map as given in Simon's problem. Let's check that the defining property to make $(G, H, f)$ an instance of a hidden subgroup problem actually holds, namely that $f$ is constant on cosets $gH$ and takes different values for different cosets $gH \not= g'H$. We are promised that $f(x)=f(y)$ iff $x\oplus y \in \{0_G,s\}$. Pick an element $g \in G$. As $H = \{0_G,s\}$, the set $gH$ is equal to $\{0_G \oplus g, s \oplus g\} = \{g, s\oplus g\}$. We have that $g \oplus (s \oplus g) = s$ which implies that $f(g) = f(s \oplus g)$, i.e., the function is constant on the coset. By a similar argument we get that two different cosets get mapped to different values. This shows that Simon's problem is indeed an instance of an abelian hidden subgroup problem. Finally, note that in an abelian group every subgroup is a normal subgroup.
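The coset structure above is easy to check classically; a small sketch with a hypothetical hidden string s:

```python
import itertools

n = 3
s = 0b101                       # hypothetical hidden string, s != 0
group = range(2 ** n)           # G = (Z_2)^n with XOR as the group operation

# build an f that is constant exactly on the cosets {g, g ^ s} of H = {0, s}
f = {}
label = 0
for g in group:
    if g not in f:
        f[g] = f[g ^ s] = label
        label += 1

# the promise: f(x) == f(y)  iff  x ^ y is in {0, s}
for x, y in itertools.product(group, repeat=2):
    assert (f[x] == f[y]) == ((x ^ y) in (0, s))
```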
For more information about the abelian hidden subgroup problem, see for instance this paper by Brassard and Hoyer or this paper by Mosca and Ekert. | {
"domain": "quantumcomputing.stackexchange",
"id": 659,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-algorithms, simons-algorithm, hidden-subgroup-problem",
"url": null
} |
python, time-limit-exceeded
Title: Find pairs of primes that sum to specified even number Below code is for Given an even number (greater than 2), return two prime numbers whose sum will be equal to the given number
my task to reach the lowest time complexity
I tried it differently, but I cannot reach the lowest time complexity - please help, and I am running it on an online compiler.
class Solution:
def primesum(self, A):
# write your method here
n = A
return self.findPrimePair(A)
def findPrimePair(self,n):
isPrime = [0] * (n+1)
isPrime = [True for i in range(n + 1)]
self.SieveOfEratosthenes(n, isPrime)
for i in range(0,n):
if (isPrime[i] and isPrime[n - i]):
return i,n-i
return 0,0
def SieveOfEratosthenes(self,n, isPrime):
isPrime[0] = isPrime[1] = False
p=2
while(p*p <= n):
if (isPrime[p] == True):
i = p*p
while(i <= n):
isPrime[i] = False
i += p
p += 1
def primesum(self, A):
# write your method here
n = A
return self.findPrimePair(A)
The variable n is never used, so delete that line. And the comment is obsolete.
isPrime = [0] * (n+1)
isPrime = [True for i in range(n + 1)]
Why not simply initialise to all true? I.e.
isPrime = [True] * (n+1)
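Putting the suggestions so far together, a consolidated sketch as a plain function (dropping the class wrapper) might look like:

```python
def prime_sum(a):
    """Return a pair of primes summing to the even number a > 2."""
    is_prime = [True] * (a + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= a:
        if is_prime[p]:
            # strike out multiples of p starting at p*p
            for i in range(p * p, a + 1, p):
                is_prime[i] = False
        p += 1
    # scan for the first i with both i and a - i prime
    for i in range(2, a):
        if is_prime[i] and is_prime[a - i]:
            return i, a - i
    return 0, 0
```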
self.SieveOfEratosthenes(n, isPrime)
for i in range(0,n):
if (isPrime[i] and isPrime[n - i]):
return i,n-i | {
"domain": "codereview.stackexchange",
"id": 40086,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, time-limit-exceeded",
"url": null
} |
ros, node, topic
And make sure that you fill in the timestamp when sending messages in C++:
msg.header.stamp = ros::Time::now();
Comment by Kenn on 2017-07-19:
added, still gives me the same error, because synchronizer synchronizes two different topics not two different nodes publishing to the same topic. Well, I am gonna use the normal ROS way, thank you so much.
Comment by ahendrix on 2017-07-20:
It still seems like your message definition is incorrect. The docs for the C++ class that is throwing the error indicates that it results in a compile error if the message doesn't have a header: http://docs.ros.org/jade/api/roscpp_traits/html/structros_1_1message__traits_1_1TimeStamp.html | {
"domain": "robotics.stackexchange",
"id": 28359,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, node, topic",
"url": null
} |
catkin-make
add_definitions(-std=c++11) # -m64 -Wall
add_executable(
zed_wrapper_node
src/zed_wrapper_node.cpp
)
target_link_libraries(
zed_wrapper_node
${catkin_LIBRARIES}
${X11_LIBRARIES}
${ZED_LIBRARIES}
${CUDA_LIBRARIES} ${CUDA_nppi_LIBRARY} ${CUDA_npps_LIBRARY}
${OpenCV_LIBS}
${PCL_LIBRARIES}
)
add_dependencies(zed_wrapper_node ${PROJECT_NAME}_gencfg)
###############################################################################
#Add all files in subdirectories of the project in
# a dummy_target so qtcreator have access to all files
FILE(GLOB_RECURSE extra_files ${CMAKE_SOURCE_DIR}/*)
add_custom_target(dummy_${PROJECT_NAME} SOURCES ${extra_files})
Originally posted by rnunziata on ROS Answers with karma: 713 on 2016-03-05
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 24005,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "catkin-make",
"url": null
} |
mechanical-engineering, pressure, cfd
Title: Understanding pressure drop and mass average values While studying some papers, I didn't understand the following sentences:
deltP the pressure drop across the turbine, evaluated using the mass average values at the inlet and outlet sections of the computational domain, respectively.
What is the exact meaning of the "mass average values"?
And is there such an option of "mass average values" in Fluent? I don't know how to compute this parameter in Fluent.
Yes, FLUENT has many surface integral reports like Area-weighted average and Mass average, quoting from FLUENT 6 user manual:
Area-weighted average: You can find the average value on a solid surface, such as the average heat flux on a heated wall with
a specified temperature.
Mass average: You can find the average value on a surface in the flow, such as average enthalpy at a velocity inlet.
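To make the distinction concrete, a small numeric sketch (the face areas, densities, velocities, and scalar values are all hypothetical):

```python
import numpy as np

# hypothetical discretised surface: per-face area, density,
# normal velocity, and a scalar phi (e.g. temperature)
area = np.array([1.0, 1.0, 2.0])        # m^2
rho  = np.array([1.2, 1.2, 1.2])        # kg/m^3
u    = np.array([2.0, 4.0, 1.0])        # m/s, normal component
phi  = np.array([300.0, 320.0, 310.0])  # K

mdot = rho * u * area                   # per-face mass flow, kg/s

area_avg = (phi * area).sum() / area.sum()   # weight each face by its area
mass_avg = (phi * mdot).sum() / mdot.sum()   # weight each face by its mass flow

# faces carrying more mass flow count for more in the mass average
print(area_avg, mass_avg)
```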
And this is an answer from cfd-online about the difference between them:
The area average of a scalar is calculated by integrating the scalar times the area divided by the total area over the region. A mass averaged quantity is obtained by integrating the scalar time mass flow divided by total mass flow over the region.
$$\text{Area Average} = \frac{\int \phi \, dA}{A}$$
$$\text{Mass Average} = \frac{\int \phi \, d\dot{m}}{\dot{m}}$$
Where $\phi$ is an arbitrary scalar property of the flow. | {
"domain": "engineering.stackexchange",
"id": 443,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "mechanical-engineering, pressure, cfd",
"url": null
} |
quantum-mechanics, angular-momentum, symmetry, atomic-physics
Title: Is it possible that the energy of the energy eigenstates $|LSJM_J\rangle$ depend on $M_L$ and/or $M_S$? If a many-electron Hamiltonian $H$ commutes with ${\vec L}^2, {\vec S}^2, {\vec J}^2$, and $J_z$ but not $L_z$ and $S_z$, the energy eigenstates are designated by $|LSJM_J\rangle$. Since there is no external field (no breakdown of rotational symmetry) that distinguishes different values of $M_J$, the energy cannot depend on $M_J$ and therefore, such a state, for a given value of $J$ will have a $(2J+1)$-fold degeneracy.
It is clear that the energy eigenstates $|LSJM_J\rangle$ are not labeled by $M_L, M_S$ because $H$ does not commute with $L_z, S_z$. But is there any chance that the energy depends on $M_L$ and/or $M_S$? Does the absence of an external field (rotational symmetry) also exclude the possibility of energy depending on $M_L$ and/or $M_S$? $M_L$ and $M_S$ are actually still good quantum numbers for stretched states even for non-zero external fields.
You have already established that the energy does not depend on $M_J$. The number of states is "conserved" as you break symmetries. Each state in the broken-LS coupling régime is "adiabatically" connected to the zero-field LS coupling régime. Meaning you can connect an $|M_L, M_S\rangle$ state at $|\mathbf{B}|\neq 0$ to an $|M_J\rangle$ state as $|\mathbf{B}|\rightarrow 0$. So why would the energy depend on $M_S$ or $M_L$, if it does not depend on $M_J$? | {
"domain": "physics.stackexchange",
"id": 74578,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, angular-momentum, symmetry, atomic-physics",
"url": null
} |
quantum-state, entanglement, quantum-fourier-transform, oracles
Title: Why can you check for entanglement using the quantum Fourier transform? I'm reading this paper on quantum random oracles, and I have some fundamental questions about certain statements that seem to be intuitive (but I can't seem to figure it out). My goal is to have a deeper understanding of the QFT and it's uses.
My question has to do with an example from the paper. Here, $h_0:\{0,1\}^{2n}\to\{0,1\}^n$ is just some compression function.
(page 4)[The adversary] sets up the uniform superposition $\sum_{x,u}|x,u\rangle$ and queries. In the case where $h_0$ is a classical function, this state becomes $$\sum_{x,u}|x,u\oplus h_0(x)\rangle=\sum_{x,u}|x,u\rangle.$$
Namely, the state is unaffected by making the query. In contrast, the simulated query would result in $$\sum_{x,u}|x,u\oplus y\rangle\otimes|x,y\rangle.$$
Here, the adversary's state is now entangled with the simulator's. It is straightforward to detect this entanglement by applying the QFT to the adversary's $x$ registers, and then measuring the result.
What does it mean for the quantum adversary to query a classical function? Why does it have no effect? Next, why is it easy to see that you can check for entanglement using the QFT on the $x$ registers? My understanding is that applying the QFT would just give you some kind of frequency information, so if the $x$ registers were in the uniform superposition I'd think you'd always get $|0\rangle$.

To answer your question, a bit of context on this paper is required.
In cryptography, the random oracle is a technique in which you assume that when the adversary queries the oracle for $h(x)$, two cases could happen: | {
"domain": "quantumcomputing.stackexchange",
"id": 3689,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-state, entanglement, quantum-fourier-transform, oracles",
"url": null
} |
homework-and-exercises, classical-mechanics, lagrangian-formalism, conservation-laws, noethers-theorem
Title: Constants of motion from a Lagrangian If I have a Lagrangian (made up equation in this case):
$$L = \frac{1}{2}mr^2\dot\theta + \frac{1}{4}mg\ddot\theta \, ,$$
can I immediately conclude that the total energy is constant because $\partial L / \partial t = 0$, i.e. because $L$ does not depend explicitly on $t$? Can I also conclude that the angular momentum is constant because $L$ does not depend explicitly on $\theta$, and that the linear momentum isn't because $L$ depends on $r$?
The notes seem to imply that one can just figure the constants of motion like so without any computations! Is this correct?
If not, what actual computations would I have to do to figure out the constants of motion from a Lagrangian?

If $q_i$ denotes the generalised coordinates, then note that a time translation $t \to t+\epsilon$ infinitesimally corresponds to $q_i \to q_i + \epsilon \frac{d}{dt}q_i$ and so $\delta q_i = \dot{q}_i$. The change in the Lagrangian is,
$$\delta L = \frac{\partial L}{\partial q_i} \dot q_i + \frac{\partial L}{\partial \dot q_i} \ddot q_i = \frac{d}{dt}L$$
which is the total derivative if $L$ has no explicit time dependence. By Noether's theorem, we have a conserved quantity,
$$Q = \frac{\partial L}{\partial \dot q_i}\dot q_i - L $$
which we can recognise as the Legendre transform of the Lagrangian, that is, the Hamiltonian and thus the energy of the system is conserved. Clearly this applies to your Lagrangian. | {
"domain": "physics.stackexchange",
"id": 36425,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, classical-mechanics, lagrangian-formalism, conservation-laws, noethers-theorem",
"url": null
} |
java, multithreading
Mp3Player
public class Mp3Player
{
private Mp3PlayerThread playerThread;
private String song;
public Mp3Player(String filename)
{
song = filename;
}
/**
* Initializes the playerThread with the given song, sets the flag to true
* and starts playing the song.
*/
public synchronized void play()
{
playerThread = new Mp3PlayerThread(song);
playerThread.playing = true;
playerThread.start();
}
/**
* Pauses the execution of the playerThread.
*/
public synchronized void pause()
{
Printer.debugMessage(this.getClass(), "paused");
playerThread.playing = false;
}
public synchronized void resume()
{
Printer.debugMessage(this.getClass(), "resuming");
playerThread.playing = true;
}
public synchronized void stop()
{
Printer.debugMessage(this.getClass(), "stopping");
playerThread.abort = true;
playerThread = null; // Needed or not?
}
}

Let's start with the instance variables.
public class Mp3PlayerThread extends Thread
{
public volatile boolean playing = false;
public volatile boolean abort = false;
private Player player;
private String path;
//...
}
You shouldn't be directly exposing playing and abort. Doing this hinders your ability to change the internal implementation of your class and remain backwards compatible.
Trying to line up your variable declarations will be a losing battle. It looks nice now, but when you add a new variable, you might have to completely change everything. This will make looking at diffs harder because some lines will be changed even though they are not directly involved in the new functionality. Additionally, an argument could be made that all of the variable names should be lined up instead of having player line up with boolean.
path is set once in the constructor and never changed. Setting it to be final could prevent you from accidentally changing it later with the expectation of it having an effect. | {
"domain": "codereview.stackexchange",
"id": 9297,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, multithreading",
"url": null
} |
rust, state-machine
Title: Designing posts: state machine In the Object-Oriented Design Pattern Implementation chapter of the second edition of The Rust Programming Language, the basic code structure is given and it suggests trying to add features:
This implementation is easy to extend to add more functionality. Here
are some changes you can try making to the code in this section to see
for yourself what it's like to maintain code using this pattern over
time:
Only allow adding text content when a post is in the Draft state
Add a reject method that changes the post's state from PendingReview back to Draft
Require two calls to approve before changing the state to Published
Please critique my approach:
struct Post {
content: String
}
struct DraftPost {
content: String
}
struct PendingPost {
content: String,
approvals: u32
}
impl Post {
fn new() -> DraftPost {
DraftPost { content: String::new() }
}
fn content(&self) -> &str {
&self.content
}
}
impl DraftPost {
fn req_review(self) -> PendingPost {
PendingPost {
content: self.content,
approvals: 0
}
}
fn add_text(&mut self, content: &str) {
self.content.push_str(content);
}
}
enum PublishResult {
PendingPost(PendingPost),
Post(Post)
}
impl PendingPost {
fn approve(&mut self) {
self.approvals += 1;
}
fn reject(self) -> DraftPost {
DraftPost { content: self.content }
}
fn publish(self) -> PublishResult {
if self.approvals > 1 {
PublishResult::Post(Post{content: self.content})
} else {
PublishResult::PendingPost(self)
}
}
}
#[cfg(test)]
mod tests {
use super::*; | {
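For comparison, the same state flow can be mimicked in Python (a rough transcription; Python cannot enforce, as Rust's move semantics do, that a consumed state is never reused):

```python
class Post:
    """Published post; nothing can be added from here on."""
    def __init__(self, content):
        self.content = content

class DraftPost:
    def __init__(self, content=""):
        self.content = content

    def add_text(self, text):
        # text may only be added in the Draft state
        self.content += text

    def req_review(self):
        return PendingPost(self.content)

class PendingPost:
    def __init__(self, content, approvals=0):
        self.content = content
        self.approvals = approvals

    def approve(self):
        self.approvals += 1

    def reject(self):
        return DraftPost(self.content)

    def publish(self):
        # two approvals required, matching `approvals > 1` in the Rust version
        return Post(self.content) if self.approvals > 1 else self
```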
"domain": "codereview.stackexchange",
"id": 25636,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rust, state-machine",
"url": null
} |
javascript
Title: Age in days, months and years var AgeConvertor = {
Age: function (formattedDate) {
var now = new Date();
var yearNow = now.getFullYear();
var monthNow = now.getMonth() + 1;
var dayNow = now.getDate();
// Calculating in days
var ONE_DAY = 1000 * 60 * 60 * 24;
var ONE_MONTH = 1000 * 60 * 60 * 24 * 30;
var date1_ms = new Date().getTime()
var date2_ms = formattedDate.getTime()
var difference_ms = Math.abs(date1_ms - date2_ms)
var yearAge = Math.round(difference_ms/ONE_DAY);
if(yearAge < 30) {
return yearAge = Math.round(difference_ms/ONE_DAY) + 'D';
// This condition checks if the number of days is more than 30
} else if(yearAge > 30 && yearAge < 365){
return yearAge = Math.round(difference_ms/ONE_MONTH) + 'M';
} else {
// Calculating in years
var today = new Date(yearNow,monthNow,dayNow);
if (yearNow < 100) {
yearNow=yearNow+1900;
}
yearAge = yearNow - formattedDate.getFullYear();
if (monthNow <= formattedDate.getMonth()) {
if (monthNow < formattedDate.getMonth()) {
yearAge--;
} else {
if (dayNow < formattedDate.getDay()) {
yearAge--;
}
}
}
return yearAge + 'Y';
}
}
}
How well can this code be improved?
Is there any fault in this code?
Does this code give desired results? | {
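For comparison, here is a hedged sketch of the same day/month/year bucketing in Python, keeping the original's 30-day and 365-day thresholds. Note that the original's "> 30" / "< 30" pair silently drops the exactly-30-days case; the sketch uses "< 30" so every input falls into one bucket.

```python
from datetime import date

def age_string(birth, today):
    """Return age as '14D', '3M' or '23Y' using 30-day / 365-day cutoffs."""
    days = (today - birth).days
    if days < 30:
        return f"{days}D"
    if days < 365:
        return f"{days // 30}M"   # coarse month = 30 days, as in the original
    years = today.year - birth.year
    if (today.month, today.day) < (birth.month, birth.day):
        years -= 1                # birthday not yet reached this year
    return f"{years}Y"
```

The year branch compares (month, day) tuples instead of the original's getDay(), which returns the weekday rather than the day of the month.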
"domain": "codereview.stackexchange",
"id": 442,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript",
"url": null
} |
ds.algorithms, pr.probability, set-cover, greedy-algorithms
Give each set $s$ weight $X_s = 2/pk$.
For each element $e$ such that $\sum_{e\in s} X_s < 1$,
choose any set $s$ containing $e$ and raise $X_s$ enough to fully cover $e$. | {
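For context, the deterministic greedy set-cover algorithm that weighting/rounding schemes like this one are usually compared against can be sketched as follows (an illustration, not the randomized scheme in the excerpt):

```python
def greedy_set_cover(universe, sets):
    """Repeatedly pick the set covering the most still-uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("universe is not coverable by the given sets")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen
```

The greedy choice gives the classical H_n approximation factor, which is the benchmark LP-rounding analyses are measured against.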
"domain": "cstheory.stackexchange",
"id": 1739,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ds.algorithms, pr.probability, set-cover, greedy-algorithms",
"url": null
} |
ros, rospy, publisher
Title: Cleanly exit from python publisher ensuring all messages sent
I have been working on a Python node that essentially publishes one-off messages and then exits. Due, I believe, to the threading in rospy, messages are not always sent if the script exits soon after publishing a message. I have also found no way of truly guaranteeing that sent messages are received.
Here is some example code based roughly on the nodes demonstrated in the ROS tutorials:
Publisher
#!/usr/bin/python
from std_msgs.msg import Header
import rospy
import math
topic = 'foo'
publisher = rospy.Publisher(topic, Header, queue_size = 10)
rospy.init_node('publisher',anonymous = True)
msg = Header()
msg.frame_id = "Hello World!"
publisher.publish(msg)
Subscriber:
#!/usr/bin/python
import rospy
from std_msgs.msg import Header
def callback(data):
rospy.loginfo(rospy.get_caller_id()+"I heard %s",data.frame_id)
def listener():
rospy.init_node('listener', anonymous=True)
rospy.Subscriber("foo", Header, callback)
rospy.spin()
if __name__ == '__main__':
listener()
There are a number of modifications I have found that will get the messages to send:
Use a rospy.Rate object with sufficiently low rate and sleep after publishing
Sending messages in a loop with no other calls to ROS functionality will result in most messages being received.
edit I tried a loop while not rospy.is_shutdown() containing rospy.signal_shutdown('stuff') after the message is sent. This method results in the message being received some of the time.
There are also some exceptions where the message may otherwise be expected to be received:
Sleeping using a rospy.Duration object will not allow sends to complete
rospy.signal_shutdown does not guarantee that pending messages are sent
Pending sends are not completed when using a Rate object with too high a rate | {
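One generic way to make a one-shot publish more robust is to wait until the publisher reports a connection, publish, then give rospy's background send thread a moment to flush. A sketch (publish() and get_num_connections() are real rospy.Publisher methods; the helper itself and its timings are assumptions, and this still does not hard-guarantee delivery):

```python
import time

def publish_reliably(publisher, msg, timeout=2.0, poll=0.05):
    """Wait for at least one subscriber connection, publish, then pause
    briefly so the I/O thread can drain its queue before the script exits."""
    deadline = time.time() + timeout
    while publisher.get_num_connections() == 0:
        if time.time() > deadline:
            raise TimeoutError("no subscriber connected within timeout")
        time.sleep(poll)
    publisher.publish(msg)
    time.sleep(poll)  # let the background send thread flush
```

For a real guarantee of receipt, a service call or an acknowledgement topic is the usual answer, since pub/sub is fire-and-forget by design.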
"domain": "robotics.stackexchange",
"id": 18504,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, rospy, publisher",
"url": null
} |
So then I would have to read it carefully again until I realized that was because I needed the equals sign. This seems like a more practical approach: if you're not sure where something goes, it might be because you missed a thing; reread it.]]
-
Beat me to it! Great explanation! – Jeel Shah Jun 24 '14 at 6:26
@gekkostate: Thanks! I hope I didn't just waste your last 20 minutes writing though :/ – Eric Stucky Jun 24 '14 at 6:27
haha nope! I was doing the write up (5 mins, tops) and then I saw your answer and I was like "well, this one does it better than mine!". – Jeel Shah Jun 24 '14 at 6:28
Very well done! You could have continued with "$n^2+2k^2=f(n)$ or $n^2+2k^2=f(k)$, find $f$". – Yves Daoust Jun 24 '14 at 6:31
@Yves: Ah, that is actually a good point. Editing now. Ah, and it even gives me a chance to unpack one of the last "of"s ;) – Eric Stucky Jun 24 '14 at 6:33
Let the two numbers be $x$ and $y$. Then $x+y=1$. We need to express $g(x,y)=x^2+2y^2$ as a function of one of the numbers. If you want it to be a function of $x$, then substituting $y=1-x$ in $g(x,y)$ leads to $g(x,1-x)=x^2+2(1-x)^2$. Call $f(x)=g(x,1-x)$. Then $f(x) = x^2+2(1-x)^2$.
If you want it to be a function of $y$, then $f(y) = (1-y)^2+2y^2$
-
First we write up the constraints described in the first sentence:
$x,y \geq 0$
$x+y=1$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9787126513110865,
"lm_q1q2_score": 0.8024399848914024,
"lm_q2_score": 0.8198933403143929,
"openwebmath_perplexity": 449.6770825446605,
"openwebmath_score": 0.7660753130912781,
"tags": null,
"url": "http://math.stackexchange.com/questions/845462/translating-text-to-functions"
} |
The "power rule" is used to differentiate a fixed power of x: d/dx(x^n) = n*x^(n-1), where n is a constant. The chain rule handles composite functions such as e^(cos x), sin(x^3), (1+ln x)^5: d/dx f(g(x)) = f'(g(x)) * g'(x), or equivalently dy/dx = (dy/du)*(du/dx). For example, 2^3 = 2*2*2 = 8 and 1/2^3 = 1/(2*2*2) = 1/8. The question is asking what is the integral of x^3, i.e. what is ∫ x^3 dx. The derivative tells us how a function changes at any point; the power rule and the chain rule are used together (the "power chain rule") to differentiate expressions of the form [f(x)]^n.
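The power rule d/dx x^n = n*x^(n-1) and the chain rule d/dx f(g(x)) = f'(g(x))*g'(x) can be sanity-checked numerically with a central finite difference (a quick Python illustration):

```python
import math

def derivative(f, x, h=1e-6):
    # central finite difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# power rule: d/dx x^4 = 4x^3
f_pow = lambda x: x**4
f_pow_prime = lambda x: 4 * x**3

# chain rule: d/dx sin(x^3) = cos(x^3) * 3x^2 (outer' at inner, times inner')
f_chain = lambda x: math.sin(x**3)
f_chain_prime = lambda x: math.cos(x**3) * 3 * x**2
```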
"domain": "co.uk",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9626731169394882,
"lm_q1q2_score": 0.8293796751466511,
"lm_q2_score": 0.861538211208597,
"openwebmath_perplexity": 759.8622886760531,
"openwebmath_score": 0.7789421081542969,
"tags": null,
"url": "http://cafelingua.co.uk/meaning-of-zzipr/43d9b7-power-chain-rule"
} |
molecular-biology, epidemiology
Title: What is the reason behind studying seroprevalence of a disease? Scientific literature on viral disease, specifically ones like Zika and Dengue, contains seroprevalence data. What is the reason behind understanding seroprevalence? Seroprevalence tests people for the presence of a disease based on serologic methods. Often you do ELISA (or related methods) to test for pathogen specific antibodies.
This gives you information on how common a disease is, especially when it might infect people without them showing symptoms. It can also help detect cross-protection when the seroprevalence is lower than expected. This is something shown in French Polynesia, where the seroprevalence was much lower than expected (from: Zika Virus Seroprevalence, French Polynesia, 2014–2015)
Our findings show that <50% of the population of French Polynesia had
detectable Zika virus IgG. This seroprevalence rate is much lower than
the 86% attack rate estimated by Kucharski et al. (14) using a model
that assumed the French Polynesia population was 100% susceptible to
Zika virus infection. However, in a setting where DENVs are highly
prevalent (8), the possibility of cross-protecting immunity preventing
infection from Zika virus (12,13) cannot be excluded. The attack rate
and the asymptomatic:symptomatic ratio in French Polynesia were also
lower than those described for the 2007 outbreak on Yap Island (73%
and 4:1, respectively) (4); this finding supports the perception that
the drivers of Zika virus transmission vary depending on geographic
context. For other flaviviruses, such as DENV, previous model-based
studies showed that the herd immunity threshold required to block
viral transmission is ≈50%–85% (15). Thus, if Zika virus has the same
epidemiologic characteristics as DENV, the seroprevalence rate of 49%
would not be sufficient to prevent another outbreak.
It can also help to make predictions about the spread of a disease. With high seroprevalence, epidemic spread is unlikely, while the situation is quite different with low prevalence. This is something that was observed in Brazil when Zika first occurred there.
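The 50%–85% herd-immunity thresholds quoted above correspond, in the simplest homogeneous-mixing model (an assumption; the cited studies use more detailed models), to the relation threshold = 1 - 1/R0, i.e. to R0 between 2 and about 6.7:

```python
def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune to push the
    effective reproduction number below 1, assuming homogeneous mixing."""
    return 1.0 - 1.0 / r0
```

This is why a measured seroprevalence of 49% leaves room for another outbreak if the pathogen's R0 implies a higher threshold.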
"domain": "biology.stackexchange",
"id": 9092,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "molecular-biology, epidemiology",
"url": null
} |
thermodynamics, heat-transfer
Title: Heat Exchanger water to air temperatures I have been having a debate with a coworker regarding heat exchanger fluid temperatures and while I thought I firmly grasped the principles, I was not arguing my point very well.
I am looking at an air to water coil for an air handler. I was selecting the coil based on a leaving air temp of ~60F (it's an industrial space with a lot of ventilation). I am using condensing boilers so I sized for 180 leaving water temperature with 140 return on the coldest day which gives a 40 degree temperature delta.
On the air side, I am bringing in 5 degree air (EAT) at a rate of 16000 CFM on one of the air handlers. My coworker wants to lower the temperature to 140 supply and 100 degree return, his reason being that it would still be a 40 degree temperature delta on the water side so the thermal output would be the same. I believe the hotter supply temperature will result in more rapid heat transfer but I didn't really have any good argument against his logic. He was simply relating the two rules of thumb equations for air and water: [Btu/hr]=1.08*CFM*deltaT and [Btu/hr]=500*gpm*deltaT to prove his point.
My argument was that this does not account for the heat transfer taking place from one side of the coil to the other, but I was not able to adequately illustrate my side to the point that he would agree.
First of all, am I even right about operating the system at 180 degrees being a more efficient solution, and, if so, how do I fit the heat exchanger heat transfer into the system of equations? A proper analysis requires a log mean delta T (LMTD) calculation, and an understanding of the configuration of the heat exchanger (i.e. parallel flow, cross flow, etc.). But at a very high level it comes down to a variation of the basic convective heat flow equation Q = U·A·ΔT, where ΔT is the temperature difference between the water and the air. Assuming the heat transfer coefficient and area are the same, a higher ΔT will provide a higher heat transfer rate.
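Assuming counterflow and the temperatures quoted in the question (180F/140F water against 5F-in/60F-out air, versus the coworker's 140F/100F proposal), the LMTD comparison can be sketched numerically. With the same U and A, Q = U*A*LMTD comes out roughly 45% higher for the hotter supply; this is illustrative, not a full coil selection.

```python
import math

def lmtd(dt1, dt2):
    """Log-mean temperature difference between the two ends of an exchanger."""
    return (dt1 - dt2) / math.log(dt1 / dt2)

# counterflow end deltas: water in meets air out, water out meets air in
hot = lmtd(180 - 60, 140 - 5)    # 180F/140F water: ~127 F
cool = lmtd(140 - 60, 100 - 5)   # 140F/100F water: ~87 F
```

Both cases have the same 40-degree water-side delta, yet the driving temperature difference across the coil, and hence the heat transfer for a given coil, differs substantially.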
"domain": "engineering.stackexchange",
"id": 2760,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, heat-transfer",
"url": null
} |
mechanical-engineering, springs
Title: Difference between Hydraulic cylinder and spring What is the technical difference between a hydraulic cylinder and a spring? For sure, I understand their principles but I wonder about physical-technical differences.
For example, the most obvious aspect (for me), is that they both can deflect. A spring, on the contrary, can store potential energy, is this also valid for the cylinder? It is also able to oscillate. Are there more differences? A hydraulic cylinder filled with hydraulic fluid cannot deflect unless the fluid has somewhere to go. It cannot act as a spring unless the reservoir has the capability to absorb the fluid and the ability to maintain a restorative pressure during compression. In a simple system this would be a spring and at that point you are asking, "Would a spring act like a spring?".
Figure 1. A compact hydraulic power unit. Image source: ResearchGate. | {
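On whether a cylinder can store potential energy and oscillate: a blocked hydraulic cylinder does behave as a very stiff spring through fluid compressibility, with effective rate k = beta*A^2/V for a trapped oil column (beta the bulk modulus, A the piston area, V the trapped volume). A back-of-envelope sketch with illustrative, assumed numbers:

```python
def hydraulic_stiffness(bulk_modulus, piston_area, fluid_volume):
    """Effective spring rate of a trapped oil column: k = beta * A^2 / V.
    Derivation: dP = beta * (A*dx) / V, so F = A*dP = (beta*A^2/V) * dx."""
    return bulk_modulus * piston_area**2 / fluid_volume

# assumed values: mineral oil beta ~ 1.5 GPa, 35 mm bore (~9.6 cm^2),
# 0.5 L of trapped fluid
k = hydraulic_stiffness(1.5e9, 9.6e-4, 5e-4)   # ~2.8e6 N/m
```

A rate in the meganewtons-per-metre range is orders of magnitude stiffer than a typical mechanical spring, which is why a filled, blocked cylinder is usually treated as rigid.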
"domain": "engineering.stackexchange",
"id": 4105,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "mechanical-engineering, springs",
"url": null
} |
cc.complexity-theory, complexity-classes
Bernd Borchert, Riccardo Silvestri: A Characterization of the Leaf Language Classes. Inf. Process. Lett. 63(3): 153-158 (1997) (doi link here)
The authors characterize the leaf language classes as those which are (a) "countable", (b) are "downward" closed wrt polytime many-one reducibility, and (c) "join-closed" (i.e., disjoint union) wrt polytime many-one reducibility.
More formally, all the languages $L$ in a leaf language class have a bijection with the natural numbers, and the property that for every $C,D \in L$, if $E \leq^P_m C \sqcup D$ then $E \in L$ as well (the $\sqcup$ denotes disjoint union). Also, every "non-leaf language class" contains a language which fails to have one of these properties.
From these three conditions we can get many examples of classes which aren't leaf language classes. For example, the "countable" condition rules out advice classes like $P/poly$, and the "downward closed wrt polytime many-one reducibility" rules out fixed resource-bound classes like $SPACE[n]$. (Recall that the usual proof that $SPACE[n] \neq P$ uses the fact that $SPACE[n]$ is not closed under such reductions.) | {
"domain": "cstheory.stackexchange",
"id": 71,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cc.complexity-theory, complexity-classes",
"url": null
} |
ros, opencv
// point cloud definition
typedef pcl::PointCloud<pcl::PointXYZ> PointCloud;
// standard namespace for c++, ROS messages transport, ROS message filters, OpenCV
using namespace std;
using namespace sensor_msgs;
using namespace message_filters;
using namespace cv;
// Kalman filter parameters
int stateSize = 6; // 6 no of states
int measSize = 3; // 3 parameters to be measured
int contrSize = 0; //
unsigned int type = CV_32F; // type
int main( int argc, char *argv[] )
{
ros::init(argc, argv, "prediction_node");
ros::NodeHandle nh;
ROS_INFO("Starting ");
cv::KalmanFilter kf(stateSize, measSize, contrSize, type);
ros::spin();
return 0;
}
Originally posted by PKumars on ROS Answers with karma: 92 on 2016-02-26
Post score: 0
You are already using namespace cv, so you don't have to write cv:: in front of your methods.
That won't cause an error, but just nice to know.
The KalmanFilter is in the #include "tracking.hpp", and I can't see that you have included that, so try to do that and see if it still gives an error.
Here is a sample code of how to use it:
https://github.com/Itseez/opencv/blob/master/samples/cpp/kalman.cpp
Originally posted by ros_geller with karma: 92 on 2016-02-26
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by PKumars on 2016-02-26:
it worked. Thanks a lot..
Comment by ros_geller on 2016-02-26:
Please flag it as solved and upvote if the problem is solved ;) | {
"domain": "robotics.stackexchange",
"id": 23915,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, opencv",
"url": null
} |
c#, linq, community-challenge, sudoku
internal IEnumerable<SudokuTile> TileBox(int startX, int startY, int sizeX, int sizeY)
{
return from pos in SudokuFactory.box(sizeX, sizeY) select tiles[startX + pos.Item1, startY + pos.Item2];
}
private IEnumerable<SudokuTile> GetRow(int row)
{
for (int i = 0; i < tiles.GetLength(0); i++)
{
yield return tiles[i, row];
}
}
private IEnumerable<SudokuTile> GetCol(int col)
{
for (int i = 0; i < tiles.GetLength(1); i++)
{
yield return tiles[col, i];
}
}
private ISet<SudokuRule> rules = new HashSet<SudokuRule>();
private SudokuTile[,] tiles;
public int Width
{
get { return tiles.GetLength(0); }
}
public int Height {
get { return tiles.GetLength(1); }
}
public void CreateRule(string description, params SudokuTile[] tiles)
{
rules.Add(new SudokuRule(tiles, description));
}
public void CreateRule(string description, IEnumerable<SudokuTile> tiles)
{
rules.Add(new SudokuRule(tiles, description));
}
public bool CheckValid()
{
return rules.All(rule => rule.CheckValid());
}
public IEnumerable<SudokuBoard> Solve()
{
ResetSolutions();
SudokuProgress simplify = SudokuProgress.PROGRESS;
while (simplify == SudokuProgress.PROGRESS) simplify = Simplify();
if (simplify == SudokuProgress.FAILED)
yield break;
// Find one of the values with the least number of alternatives, but that still has at least 2 alternatives
var query = from rule in rules
from tile in rule
where tile.PossibleCount > 1
orderby tile.PossibleCount ascending
select tile; | {
"domain": "codereview.stackexchange",
"id": 29886,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, linq, community-challenge, sudoku",
"url": null
} |
electrostatics, potential
My work typed up on LaTeX: link Keep a few things in mind and you won't have to do much integration. The law of superposition will help. So will the consideration of equivalent charge distributions.
The field of a uniformly charged sphere outside of that sphere is the same as it would be had all the charge been located at the center of the sphere.
For A), we are outside of both spheres, so we just calculate the field for two point charges.
For B), we pretend we only have one sphere and calculate the field from the center. Using Gauss' Law, $\vec{E}\cdot 4\pi r^2=\frac{\rho\,\frac{4}{3}\pi r^3}{\epsilon_0}\implies \vec{E}=\frac{\rho r}{3\epsilon_0}\hat{r}$. Keep in mind that $\vec{r}$ is the vector pointing from the center of the sphere to the field point. Now, for a given field point, we figure out the vectors to the respective centers, adjust this electric field formula appropriately, and sum the electric field contributed by both spheres.
For C), we combine the approaches of A) and B): since the field point is outside one of the spheres, we treat that sphere as if all of its charge were located at its center; for the other sphere, which contains the field point, we use the radially dependent formula above. Keeping symmetry considerations in mind, a reflection and change in charge sign gives us the field in the complementary region.
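A small numeric sketch of the two regimes used above (point-charge behaviour outside, linear growth inside the uniform sphere), with illustrative values:

```python
def e_sphere(r, R, Q, k=8.9875517873681764e9):
    """Radial field magnitude of a uniformly charged sphere of radius R and
    total charge Q, at distance r from its center."""
    if r >= R:
        return k * Q / r**2        # identical to a point charge at the center
    # inside: E = k*Q*r/R^3, equivalent to rho*r/(3*eps0) since Q = rho*4/3*pi*R^3
    return k * Q * r / R**3

R, Q = 0.1, 1e-9                   # 10 cm sphere carrying 1 nC (assumed values)
outside = e_sphere(2 * R, R, Q)
point_charge = 8.9875517873681764e9 * Q / (2 * R) ** 2
```

Superposition then means each sphere's contribution at a field point is evaluated with whichever branch applies to that sphere, and the vector results are added.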
"domain": "physics.stackexchange",
"id": 62249,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics, potential",
"url": null
} |
python, beginner, python-3.x, rock-paper-scissors
Here are some things that I think could be improved:
Your function names need a little work. Consider this code:
player_choice = ask_player()
comp_choice = comp_play()
The object is to get two choices, one made by the player and the other made by the computer. Why are the two names so different? ask_player doesn't sound like getting a player's choice. It sounds like a generalized function that asks the player something and gets a response (i.e., input()). On the other hand, if player is spelled out why do you abbreviate the opponent in comp_play?
Using get is not always a good thing. It's one of the times when a function or method name doesn't need a verb in it - because it is frequently implicit when you are doing is_... or has_... or get_... or set_.... I don't think you need to spell out get_player_choice and get_computer_choice, but certainly player_choice and computer_choice would be appropriate.
This same logic applies to results. Instead of calling a function named results, why not call play_once? Or one_game? It's obvious from the code in main what is going on, but the function name doesn't really match the nature of the "step" being executed.
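A hedged sketch of how the suggested noun-style names might fit together (hypothetical code illustrating the naming advice, not the poster's program):

```python
import random

CHOICES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def computer_choice(rng=random):
    # noun-style name, symmetric with player_choice; no 'get' prefix needed
    return rng.choice(CHOICES)

def play_once(player, computer):
    """One round; returns 'win', 'loss' or 'draw' from the player's view."""
    if player == computer:
        return "draw"
    return "win" if BEATS[player] == computer else "loss"
```

The two choice functions now read as a matched pair, and play_once names the step being executed rather than its side effect on the score.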
Your code breakdown is uneven. Consider this code:
def main():
wins = 0
losses = 0
draws = 0
while True:
os.system('cls' if os.name == 'nt' else 'clear')
print(f"Wins: {wins}\nLosses: {losses}\nDraws: {draws}")
wins, losses, draws = results(wins, losses, draws)
if play_again() == '2':
break
Let's break those lines down. The key point I want to make is to keep neighboring statements at a similar level of abstraction.
First, you initialize your variables:
wins = 0
losses = 0
draws = 0 | {
"domain": "codereview.stackexchange",
"id": 33819,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, beginner, python-3.x, rock-paper-scissors",
"url": null
} |
quantum-mechanics, lagrangian-formalism, hilbert-space, notation, molecular-dynamics
Title: Euler-Lagrange Equations for Molecular Dynamics In the Car-Parrinello (CP) method for molecular dynamics simulation, the Euler-Lagrange equations are given as
$$
\begin{aligned} \frac { d } { d t } \frac { \partial \mathcal { L } _ { \mathrm { CP } } } { \partial \left\langle \dot { \psi } _ { i } \right| } & = \frac { \partial \mathcal { L } _ { \mathrm { CP } } } { \partial \left\langle \psi _ { i } \right| }.\end{aligned}
$$ | {
"domain": "physics.stackexchange",
"id": 59964,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, lagrangian-formalism, hilbert-space, notation, molecular-dynamics",
"url": null
} |
navigation, move-base, ros-kinetic
Originally posted by croesmann with karma: 2531 on 2020-03-11
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Dave Everett on 2020-03-11:
Thanks. I had weight_kinematics_forward_drive set at 70 to avoid as much reverse drive as possible. The goals in these cases were in front of the robot; it's just the final pose that was the problem. For example, if I set a destination 3 m in x, but pose pointing back towards the starting position, the robot always rotates before the destination and reverses into the final spot. This is not a rotate in place, but a turn as if it were in a car-like mode.
Comment by croesmann on 2020-03-11:
But this behavior is reasonable as I wrote ;-). A weight of 70 for this parameter does not seem to be large. Is your issue resolved now?
Comment by Dave Everett on 2020-03-11:
I had imagined that 70 was high as the info page talked about 1.0 being a small weight. I set it to 950 and it seems a lot better. If the info page just provided a bit more information about the limits this would have been easier. Thanks for your help mate.
Comment by croesmann on 2020-03-11:
You're welcome ;-). I added more information to the wiki. Please mark the answer as correct. | {
"domain": "robotics.stackexchange",
"id": 34578,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "navigation, move-base, ros-kinetic",
"url": null
} |
Considering that you look for the first zero of function $$f(x)=\sum_{i=1}^k \frac 1{x-i}-\frac1 {x-k-1}$$ which can write, using harmonic numbers, $$f(x)=H_{-x}-H_{k-x}-\frac{1}{x-k-1}$$ remove the asymptotes using $$g(x)=(x-1)(x-2)f(x)=2x-3+(x-1)(x-2)\left(H_{2-x}-H_{k-x}-\frac{1}{x-k-1} \right)$$ You can approximate the solution using a Taylor expansion around $x=1$ and get $$g(x)=-1+(x-1) \left(-\frac{1}{k}+\psi ^{(0)}(k)+\gamma +1\right)+O\left((x-1)^2\right)$$ Ignoring the higher order terms, this gives as an approximation $$x_{est}=1+\frac{k}{k\left(\gamma +1+ \psi ^{(0)}(k)\right)-1}$$ which seems to be "decent" (and, for sure, confirms your claims). $$\left( \begin{array}{ccc} k & x_{est} & x_{sol} \\ 2 & 1.66667 & 1.58579 \\ 3 & 1.46154 & 1.46791 \\ 4 & 1.38710 & 1.41082 \\ 5 & 1.34682 & 1.37605 \\ 6 & 1.32086 & 1.35209 \\ 7 & 1.30238 & 1.33430 \\ 8 & 1.28836 & 1.32040 \\ 9 & 1.27726 & 1.30914 \\ 10 & 1.26817 & 1.29976 \\ 11 & 1.26055 & | {
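The estimate can be checked against the x_sol column numerically; a bisection sketch in Python (the first zero lies in the interval (1, 2), where f is strictly decreasing, going from +infinity near 1 to -infinity near 2):

```python
def f(x, k):
    # f(x) = sum_{i=1}^{k} 1/(x-i) - 1/(x-k-1)
    return sum(1.0 / (x - i) for i in range(1, k + 1)) - 1.0 / (x - k - 1)

def first_zero(k, lo=1.0 + 1e-9, hi=2.0 - 1e-9, iters=200):
    """Bisection on (1, 2): f(lo) > 0 and f(hi) < 0 for k >= 2."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid, k) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For k = 2 this reproduces x_sol = 1.58579 (which is 3 - sqrt(2)), and for k = 5 it reproduces 1.37605, matching the table.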
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.986777178636555,
"lm_q1q2_score": 0.8625194566974352,
"lm_q2_score": 0.8740772236840656,
"openwebmath_perplexity": 170.59247243438804,
"openwebmath_score": 0.9257096648216248,
"tags": null,
"url": "https://math.stackexchange.com/questions/2874991/some-interesting-observations-on-a-sum-of-reciprocals/2875035"
} |
remote-sensing, earth-observation, satellites, radar
For a fixed target, the return signals HV and VH will have equal intensity. This is a direct result of the principle of reciprocity. | {
"domain": "earthscience.stackexchange",
"id": 2019,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "remote-sensing, earth-observation, satellites, radar",
"url": null
} |
I could prove that $$\forall n\in \mathbb{N} (x_n \leqslant u)$$ using the way the sequence is constructed. But I am having trouble with proving that
if $$k < u$$ there should exist some $$n_1 \in\mathbb{N}$$ such that $$k< x_{n_1}$$.
I have seen some proofs floating on the net. But didn't quite understand it. So wanted to post here.
#### Plato
##### Well-known member
MHB Math Helper
Here is the problem
Let A be an infinite subset of $$\mathbb{R}$$ that is bounded above and let $$u= \mbox{sup }A$$. Show that there exists an increasing sequence $$(x_n)$$with $$x_n\in A$$ for all $$n\in\mathbb{N}$$ such that $$u=\lim (x_n)$$.
Does the question really say increasing, for if it does then that statement is false unless $\sup(A)\notin A$.
It should say non-decreasing or say $\sup(A)\notin A$.
#### Alexmahone
##### Active member
Does the question really say increasing, for it it does then that statement is false unless $\sup(A)\notin A$.
It should say non-decreasing or say $\sup(A)\notin A$.
Some books use:
increasing to mean non-decreasing
strictly increasing to mean increasing
(This has caused considerable confusion between us in this thread. )
Last edited:
#### issacnewton
##### Member
Hi Plato
here's the definition of an increasing sequence from Bartle, 3rd ed. A sequence $$x_n$$ is said to be increasing if it satisfies the inequalities
$x_1\le x_2\le\cdots\le x_n \le x_{n+1} \le \cdots$ | {
"domain": "mathhelpboards.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9615338035725358,
"lm_q1q2_score": 0.835406318067792,
"lm_q2_score": 0.8688267796346599,
"openwebmath_perplexity": 360.5130747981269,
"openwebmath_score": 0.9530057907104492,
"tags": null,
"url": "https://mathhelpboards.com/threads/problem-9-section-3-3-from-bartle.329/#post-1967"
} |
magnetism
Diamagnetic materials have no free electronic spin in the system, therefore they do not interact with magnetic field as a first approximation. There is a very small repulsive interaction coming from the polarisation of paired spins, but it is very small compared to any other interactions. Most organic compounds are in this category.
Paramagnetic materials have free electronic spins, which can flip / rotate. When the sample is put in a magnetic field, the field polarises the spins, which turns into an attractive interaction. This is analogous to electric-field-induced dipole moments, but it is a magnetic interaction, not an electronic one. The alignment of the spins is constantly changing, as simple thermal excitation can rotate them. Therefore paramagnetic materials have no permanent magnetic moment like magnets do. Typical examples are molecular oxygen, transition metal complexes, etc.
In ferromagnetic materials the free electronic spins interact strongly with each other and are aligned parallel in strict order. This alignment is so stable that the magnetic moments of the spins add up, and there can be a macroscopic magnetic field around the sample (i.e. it is a magnet).
When you see attraction between paramagnetic and ferromagnetic materials, you are seeing the polarisation of the paramagnetic material in the magnetic field of the ferromagnetic material, which is an attractive interaction.
"domain": "chemistry.stackexchange",
"id": 1657,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "magnetism",
"url": null
} |
data-structures, priority-queues
Title: Priority queue with both decrease-key and increase-key operations A Fibonacci Heap supports the following operations:
insert(key, data) : adds a new element to the data structure
find-min() : returns a pointer to the element with minimum key
delete-min() : removes the element with minimum key
delete(node) : deletes the element pointed to by node
decrease-key(node) : decreases the key of the element pointed to by node
All non-delete operations are $O(1)$ (amortized) time, and the delete operations are $O(\log n)$ amortized time.
Are there any implementations of a priority queue which also support increase-key(node) in $O(1)$ (amortized) time?
Assume you have a priority queue that has $O(1)$ find-min, increase-key, and insert. Then the following is a sorting algorithm that takes $O(n)$ time:
template <typename T>
vector<T>
fast_sort(const vector<T> & in) {
vector<T> ans;
pq<T> out;
for (auto x : in) {
out.insert(x);
}
for(auto x : in) {
ans.push_back(*out.find_min());
out.increase_key(out.find_min(), infinity);
}
return ans;
} | {
"domain": "cs.stackexchange",
"id": 98,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "data-structures, priority-queues",
"url": null
} |
We integrated in terms of theta, so now we need to express the result back in terms of x.
We will use tan theta = x and sec theta = sqrt(x^2+1) from the triangle you constructed.
But for cos(theta/2) we need to use the <a href="https://www.intmath.com/analytic-trigonometry/4-half-angle-formulas.php">Half Angle Formula</a>:
cos(theta/2) = sqrt((1 + cos theta)/2)
Expressing the RHS in terms of x we need to use (from the triangle) cos theta = 1/(sqrt(x^2+1)) and this will give us:
cos(theta/2) = sqrt((1 + 1/(sqrt(x^2+1)))/2)
Do you think you can proceed from there?
## Re: Find integral sqrt (x^2 + 1) using trigonometric substitution
The half-angle formula for sine is similar to that for cosine only we subtract in the numerator.
Substituting the limits of integration:
1/2 [1( sqrt 2) - ln(.924 - .383) {: + ln(.383 + .924)] - 1/2 [0(1) - ln (1-0) {: + ln (0 + 1)] = 1.148 ~~ 1.15
The answer given using the Trapezoidal Rule!
Thanks!
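As a quick cross-check of that 1.148 value (not part of the original thread; the integration limits 0 to 1 are inferred from the substituted bounds), the closed form of the antiderivative can be compared against a crude numerical integration:

```python
import math

def f(x):
    return math.sqrt(x * x + 1)

# Closed form of the antiderivative (a standard table result):
# F(x) = (x*sqrt(x^2 + 1) + asinh(x)) / 2
def F(x):
    return (x * f(x) + math.asinh(x)) / 2

exact = F(1) - F(0)   # ≈ 1.1478, matching the 1.148 above

# crude midpoint-rule cross-check on [0, 1]
n = 100000
h = 1.0 / n
numeric = sum(f((i + 0.5) * h) * h for i in range(n))
```

Both routes agree to many digits, which confirms the hand computation above.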
| {
"domain": "intmath.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9653811651448431,
"lm_q1q2_score": 0.8438177012933787,
"lm_q2_score": 0.8740772368049823,
"openwebmath_perplexity": 4474.827558513489,
"openwebmath_score": 0.9217457175254822,
"tags": null,
"url": "https://www.intmath.com/forum/methods-integration-31/find-integral-sqrt-x-2-1-using-trigonometric:155"
} |
electromagnetism, optics, photons, non-linear-systems, non-linear-optics
Title: Why are non-linear optics called non-linear? Looking at the wikipedia article on nonlinear optics you can see a huge list of frequency mixing (or multi-photon) processes. What makes these different from single-photon interactions?
More specifically, I do not understand the link with the mathematical concept of non-linearity (non-satisfiability of the superposition principle, chaos, etc.). Can anyone explain the intuition behind this? Nonlinear optical elements are called nonlinear precisely because of the behaviour you note: because the optical response of the material does not depend linearly on the driving fields. The response may then have a quadratic or higher dependence on the driver, which is usually written in the form
$$
\mathbf P =\varepsilon_0 \chi^{(1)} \mathbf E
+ \varepsilon_0 \overleftrightarrow\chi^{(2)}· \mathbf E^{\otimes 2}
+ \varepsilon_0 \overleftrightarrow\chi^{(3)}· \mathbf E^{\otimes 3}
+ \cdots.
\tag 1
$$
(Note, however, that if the intensity is too high then even this perturbative expansion may break, as is the case in high-order harmonic generation.)
The reason nonlinear optics is usually framed in terms of frequency-mixing processes is that the higher-order powers do exactly that. For example, if you have a sinusoidal driver $E=E_0\cos(\omega t)$, then a response that depends on $E^2$ will introduce other frequencies, since
$$
E^2=\tfrac{E_0^2}{2}(1+\cos(2\omega t)).
\tag2
$$
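This mixing is easy to verify numerically. The sketch below (my illustration, not from the original answer) samples $E=\cos(2\pi f_0 t)$ and inspects the Fourier coefficients of $E^2$: the spectrum contains only the DC (rectification) line and the $2f_0$ (second-harmonic) line:

```python
import math

def dft_mag(x, k):
    # magnitude of the k-th DFT coefficient of the sample list x
    N = len(x)
    re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
    im = sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
    return math.hypot(re, im)

N, f0 = 1000, 50                                  # 1 s of samples, 50 Hz driver
E = [math.cos(2 * math.pi * f0 * n / N) for n in range(N)]
E2 = [e * e for e in E]                           # quadratic (chi^(2)-like) response

# E has a single spectral line at f0; E^2 has lines only at 0 (rectification)
# and at 2*f0 (second harmonic), exactly as equation (2) predicts
```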
The first term is known as optical rectification, and the second term is second harmonic generation. Terms of higher order can produce further mixing of components. | {
"domain": "physics.stackexchange",
"id": 22961,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, optics, photons, non-linear-systems, non-linear-optics",
"url": null
} |
beginner, php, object-oriented, mvc
MODEL
<?php
$URL = $_POST['url'];
$page = explode('/', $URL);
if($page[3] == 'signup.php'){
require_once('../controller/users.php');
$obj = new employer();
$obj->setEmployerName($_POST['fullname']);
$obj->getEmployerFirstname();
$obj->getEmployerLastName();
$obj->setPword($_POST['pword']);
$obj->setEmail($_POST['email']);
echo $obj->registerNewEmployer();
}
?> | {
"domain": "codereview.stackexchange",
"id": 25124,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, php, object-oriented, mvc",
"url": null
} |
javascript, game, html5, canvas, tic-tac-toe
Why are you doing this instead of using the fields directly? This seems unnecessary and confusing.
Functions of Board
Why are some functions not part of board? move is, but moveX and moveO are not, for example. This seems odd.
Comments
Comments on functions are especially important when the variable names are not too good. But even if you change them, I would still like some comments.
For example: What is the acceptable range for tile width? If I use for example 50, the board does not look good anymore.
Another example: What happens if I set row and column to 4 instead of 3?
And one last example: What does draw draw? Everything? Or only the game field, but not the players' choices?
Reset
I extracted your init code in a function:
var b;
// Initialize Game
function init() {
b = new Board("game", 3, 3),
resetcon = document.getElementById("reset");
b.draw();
b.updateScoreBoard();
//Add event listeners for click or touch
window.addEventListener("click", clickTouch, false);
window.addEventListener("touchstart", clickTouch, false);
resetcon.addEventListener("click", clickTouchReset, false);
resetcon.addEventListener("touchstart", clickTouchReset, false);
}
init();
And changed your reset function like this:
Board.prototype.reset = function(x) {
this.CANVAS.width = this.CANVAS.width; // clear canvas
init();
};
And I removed this.reset(); from move. Now the reset works without reloading each time.
CSS
I would put this in an external css file, and format it properly (each attribute on its own line, etc.). If you care about performance, minify with a tool later instead of writing harder to read code. | {
"domain": "codereview.stackexchange",
"id": 9281,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, game, html5, canvas, tic-tac-toe",
"url": null
} |
neural-network, keras, r, regression, bias
Am I attacking my problem from the wrong angle?
I could try to build a model on cohorts, but this just feels very off, as I would have to aggregate information and would introduce cohort size as another variable.
Multiplying everything with a factor also sounds off, as this will likely heavily increase mse / mae again.
Edit: Specified why pred1 is better than pred2.
Edit2: Removed the reference to Estimator bias to avoid confusion.
Edit3: Increased the numbers in the example to make it more obvious. Thank you @Nikos M. for your suggestions. I was about to use your post-applied factor but then gave it another try. And found what caused this. It was that the final layer was using a softplus activation function. It sounded like a perfect fit to me for this regression problem, as I only had positive valued outcomes. However this seems to cause some troubles for my DNN, which I don't understand why. Anyway, that's a different topic. Using relu in the final layer gave me much better results and also made my initial problem here disappear.
So problem is solved. I would say the answer to my question is: If you see something so far off, you shouldn't look for a way to force this. You should debug your model instead. | {
"domain": "datascience.stackexchange",
"id": 10879,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "neural-network, keras, r, regression, bias",
"url": null
} |
c++, game, console, makefile
#endif
move_parser.cpp
/*
Author: Jared Thomas
Date: Sunday, January 22, 2023
This module provides higher-order parsing for Towers moves.
*/
#include <vector>
#include <string>
#include "move_parser.h"
#include "parse.h"
#include "Tower.h"
TOWER_MOVE parseMove(const std::vector<std::string>& tokens, const std::vector<Tower>& towers)
{
long int from, to;
PARSE_LONG_RESULT fromResult = parseLong(tokens.at(0).c_str(), &from);
PARSE_LONG_RESULT toResult = parseLong(tokens.at(1).c_str(), &to);
// Return this when processing the move would result in a fatal error.
const TOWER_MOVE PROBLEM_MOVE = { 0, 0, INVALID_MOVE_SYNTAX };
if(!(fromResult == SUCCESS && toResult == SUCCESS)) {
return PROBLEM_MOVE;
}
if((from < 1) || (from > 3)) {
return PROBLEM_MOVE;
}
if((to < 1) || (to > 3)) {
return PROBLEM_MOVE;
}
from--;
to--;
const Tower& towerFrom = towers.at(from);
const Tower& towerTo = towers.at(to);
if(towerFrom.is_diskless()) {
TOWER_MOVE result = { from, to, DISKLESS_TOWER };
return result;
}
if(!towerTo.is_diskless() &&
(towerFrom.size_of_top() > towerTo.size_of_top())) {
TOWER_MOVE result = { from, to, LARGER_ON_SMALLER };
return result;
}
TOWER_MOVE result = { from, to, VALID_MOVE };
return result;
} | {
"domain": "codereview.stackexchange",
"id": 44337,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, game, console, makefile",
"url": null
} |
python, python-3.x, statistics, memory-optimization
from collections import defaultdict

def sparse_hist(points, res, weights=True, bin_index=False):
    """Accepts a python list of 3D spatial points, e.g. [[x1,y1,z1],...],
    optionally with weights e.g. [[x1,y1,z1,w1],...], and returns the sparse
    histogram (i.e. no empty bins) with bins of resolution (spacing) given by
    res.
    The weights option allows you to choose to histogram over counts
    instead of weights (equivalent to all weights being 1).
    The bin_index option lets you return the points with their bin indices
    (the integers representing how many bins in each direction to walk to
    find the specified bin) rather than centerpoint coordinates."""
    def _binindex(point):
        point = point[:3]
        return tuple(int(x // res) for x in point)

    def _bincenter(point):
        point = point[:3]
        return tuple((x // res + 0.5) * res for x in point)

    # pick the key function once instead of branching in every comprehension
    key_of = _binindex if bin_index else _bincenter
    if weights:
        pointlist = [(key_of(x), x[3]) for x in points]
    else:
        pointlist = [(key_of(x), 1) for x in points]

    # accumulate per-bin values, then collapse each bin to its sum
    pointdict = defaultdict(list)
    for k, v in pointlist:
        pointdict[k].append(v)
    for key, val in pointdict.items():
        pointdict[key] = sum(val)
    return pointdict | {
"domain": "codereview.stackexchange",
"id": 32781,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, statistics, memory-optimization",
"url": null
} |
EDIT
I understand the mistake now, the restriction of $x \in (-\pi/2,\pi/2)$ needs to be made as dictated by the definition of $\arcsin$ and then $t=\tan(x/2)\in[\tan(-\pi/4),\tan(\pi/4)]=[-1,1]$. Now, my question is the following: when attempting to find $dx$ by $x = 2\arctan t$ we impose the restriction of $x \in (-\pi,\pi)$ because $\arctan t \in (-\pi/2,\pi/2)$ but, doesn't this contradict the restrictions we imposed on $x$ when finding $dx$ by $x = \arcsin \frac {2t}{1+t^2}$? | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9744347845918813,
"lm_q1q2_score": 0.8033843859372494,
"lm_q2_score": 0.8244619328462579,
"openwebmath_perplexity": 346.875363919611,
"openwebmath_score": 0.9782524704933167,
"tags": null,
"url": "http://math.stackexchange.com/questions/278539/how-to-find-the-following-derivative"
} |
inorganic-chemistry, redox, aqueous-solution, titration
\ce{2\overset{-1}{I}^- &-> \overset{0}{I}_2 + 2 e-} & \tag{ox} \\
\hline
\ce{2 Cu^2+ + 4 I- &-> 2 CuI + I2} \tag{redox}
\end{align}
$$
To complete the full balanced reaction you need to add spectator ions — potassium and sulfate — with corresponding coefficients:
$$\ce{2 CuSO4(aq) + 4 KI(aq) -> 2 K2SO4(aq) + 2 CuI(s) + I2(aq)}\label{a-rxn-r3}\tag{R3}$$ | {
"domain": "chemistry.stackexchange",
"id": 16895,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "inorganic-chemistry, redox, aqueous-solution, titration",
"url": null
} |
leaves at 9:10 from one station and its speed is 90 km/h, what time does it get to the next station? You may speak with a member of our customer support team by calling 1-800-876-1799. Every word problem has an unknown number. When the problem is set up like this, you can usually use the last column to write your equation: The liters of acid from the 10% solution, plus the liters of acid in the 30% solution, add up to the liters of acid in the 15% solution. Select it and click on the button to choose it. Linear inequalities word problems. A car traveled 281 miles in 4 hours 41 minutes. Solution. Find the equilibrium price. Given : The total cost of 80 units of the product is $22000. Explain to students that you can find the rate (or speed) that someone is traveling if you know the distance and time that she traveled. Using a few simple formulas and a bit of logic can help students quickly calculate answers to seemingly intractable problems. Solution Let x be the first number. Solution Let y be the second number x / y = 5 / 1 x + y = 18 Using x / y = 5 / 1, we get x = 5y after doing cross multiplication Replacing x = 5y into x + y = 18, we get 5y + y = 18 6y = 18 y = 3 For a certain commodity, the demand equation giving demand "d" in kg, for a price "p" in dollars per kg. Difference between a number and its positive square root is 12. Related articles: The 4 Steps to Solving Word Problems. If the …, solve word problem Not rated yetIf a number is added to the numerator of 11/64 and twice the number is added to the denominator of 11/64, the resulting fraction is equivalent to 1/5. How many years …, Price of 3 pens without discount Not rated yetA stationery store sells a dozen ballpoints pens for$3.84, which represents a 20% discount from the price charged when a dozen pens are bought individually. A park charges $10 for adults and$5 for kids. Do you have some pictures or graphics to add? 
If the rod was 2 meter shorter and each meter costs $1 more, the cost would remain unchanged. Multi-Step All Operations Word Problems These Word Problems worksheets will produce word problems | {
"domain": "co.ir",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9728307653134903,
"lm_q1q2_score": 0.8760630888077814,
"lm_q2_score": 0.9005297941266013,
"openwebmath_perplexity": 956.8158715110995,
"openwebmath_score": 0.2838430404663086,
"tags": null,
"url": "https://www.sunway.co.ir/golden-gaytime-eincky/f7deb8-algebra-word-problems-with-solution"
} |
type-theory, types-and-programming-languages
&\quad\quad\;\;if\: 0 \:then \: 0 \:else \: 0\}
\end{align}$$
So $S_2$ contains $3+9+27=39$ elements.
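The growth of these sets follows a simple recurrence (assuming the standard grammar: 3 constants, 3 unary term formers succ/pred/iszero, and the 3-argument conditional), which is easy to check:

```python
def next_size(s):
    # |S_{i+1}| = 3 constants + 3*|S_i| unary terms + |S_i|**3 conditionals
    return 3 + 3 * s + s ** 3

s1 = 3              # S_1: the constants 0, true, false
s2 = next_size(s1)  # 39
s3 = next_size(s2)  # 59439
```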
For $i = 2$ we get $S_3 = S_{2+1} = \cdots$, which contains $3 + 39\cdot 3 + 39^3 = 59439$ elements. | {
"domain": "cs.stackexchange",
"id": 13055,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "type-theory, types-and-programming-languages",
"url": null
} |
computer-vision, image-processing, feature-extraction, sift
The output is:
What did we do?
We applied a Gaussian blur to an image with a given $\sigma$.
We also applied a Gaussian blur with $\frac{\sigma}{2}$ to the same image after decimation by a factor of 2.
Then we upsampled the downsampled image back to the original size and displayed the 2 results and their difference.
Aside from some edge issues, the output of both is the same.
So, if they are the same, which should we use?
The faster one, of course: the one applied to fewer pixels.
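As an illustration of the dilated-convolution idea mentioned below (my sketch; the kernel radius and dilation factor are arbitrary): inserting zeros between the taps of a Gaussian kernel widens its support without adding arithmetic on extra samples:

```python
import math

def gauss_kernel(sigma, radius):
    # normalized 1-D Gaussian kernel with 2*radius + 1 taps
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def dilate(kernel, factor):
    # a-trous dilation: insert factor-1 zeros between neighbouring taps
    out = []
    for i, v in enumerate(kernel):
        out.append(v)
        if i < len(kernel) - 1:
            out.extend([0.0] * (factor - 1))
    return out

k = gauss_kernel(1.0, 3)   # 7 taps
k2 = dilate(k, 2)          # 13 taps, but still only 7 nonzero values
```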
Regarding your question: in order to keep the scale invariance you can take the same strategy used in À-Trous Wavelets (also known as dilated convolution). Basically, you decimate the grid when collecting the samples. Neglecting border issues, it will give you the same results. | {
"domain": "dsp.stackexchange",
"id": 11813,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "computer-vision, image-processing, feature-extraction, sift",
"url": null
} |
0 \end{bmatrix}$$ = $$\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}$$, DC = $$\begin{bmatrix} \frac{1}{2} &- \frac{1}{2} \\ 1& 0 \end{bmatrix}$$ $$\begin{bmatrix} 0 &1 \\ -2& 1 \end{bmatrix}$$ = $$\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}$$. Use the following fact: a scalar λ is an eigenvalue of a matrix A if and only if det (A − λ I) = 0. Ask Question Asked 6 years, 3 months ago. The Mathematics Of It. Since A is the identity matrix, Av=v for any vector v, i.e. These Matrices … 3 x 3 Identity Matrix . Simplify each element in the matrix. So the size of the matrix is important as multiplying by the unit is like doing it by 1 with numbers. The matrix equation = involves a matrix acting on a vector to produce another vector. An identity matrix is a square matrix in which all the elements of principal diagonals are one, and all other elements are zeros. Then Ax D 0x means that this eigenvector x is in the nullspace. While we say “the identity matrix”, we are often talking about “an” identity matrix. All vectors are eigenvectors of I. The above is 2 x 4 matrix as it has 2 rows and 4 columns. Venkateshan, Prasanna Swaminathan, in, Numerical Linear Algebra with Applications, Liengme's Guide to Excel® 2016 for Scientists and Engineers, A REVIEW OF SOME BASIC CONCEPTS AND RESULTS FROM THEORETICAL LINEAR ALGEBRA, Numerical Methods for Linear Control Systems, Numerical Solutions to the Navier-Stokes Equation, Microfluidics: Modelling, Mechanics and Mathematics, Enrico Canuto, ... Carlos Perez Montenegro, in, Uniformly distributed random numbers and arrays, Normally distributed random numbers and arrays, Pass or return variable numbers of arguments. Directions and two eigenvalues does not change direction in a transformation: expression and of the linear matrix... Montenegro, in Spacecraft Dynamics and Control, 2018 simply “ | {
"domain": "co.uk",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9441768557238085,
"lm_q1q2_score": 0.8151943822010768,
"lm_q2_score": 0.8633916170039421,
"openwebmath_perplexity": 591.4071252712841,
"openwebmath_score": 0.9204062223434448,
"tags": null,
"url": "http://waynerogersltd.co.uk/blog/y90gfw.php?tag=2c219c-twitter-status"
} |
control-engineering
Another equivalent representation is $x_1$ and $x_2-x_1$ (essentially the deformation of the spring). If you are only interested in the deformation of the spring you might create a $C=[1, -1]$ and you are done. However, note that it is easier to construct the equations for $x_1$ and $x_2$, because their ODEs are similar (while the ODE for the spring deformation itself looks different).
Bottom Line: state representation makes much more sense in more complex systems.
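As a minimal sketch of the point about $C$ (my own illustration, assuming a state ordering $[x_1, x_2, \dot x_1, \dot x_2]$ and the sign convention $y = x_2 - x_1$): the output matrix simply selects the combination of states you care about:

```python
# state x = [x1, x2, v1, v2]; choose C so that y = C.x is the spring deformation
C = [-1.0, 1.0, 0.0, 0.0]          # y = x2 - x1

def output(C, x):
    # y = C @ x for a single-output system
    return sum(c * xi for c, xi in zip(C, x))

x = [0.2, 0.5, 0.0, 0.0]           # masses at 0.2 m and 0.5 m, at rest
# output(C, x) ≈ 0.3, the deformation of the spring
```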
Regarding the use of C and D matrices
The first good reason that comes to mind for the use of C and D matrices, is to perform the Observability and Controlability Tests. Its in the same link you provided wikipedia link. | {
"domain": "engineering.stackexchange",
"id": 3474,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "control-engineering",
"url": null
} |
Let W and X be odd number, thus:
$W=2a + 1$ (odd)
$X=2b + 1$ (odd)
Let Y be an even number, thus:
$Y=2c$
So,
$W^2 + X^2 + Y^2 = (2a + 1)^2 + (2b + 1)^2 + (2c)^2$
$= 4a^2 + 4a + 1 + 4b^2 + 4b + 1 + 4c^2$
$= 4(a^2 + a + b^2 + b + c^2) + 2$
isn't this an even number? If yes, then you can produce an even number by having only 1 even and two odds.
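This mod-4 observation can be confirmed by brute force (a quick check, not part of the original post): each odd square contributes 1 mod 4 and the even square contributes 0, so the sum is always 2 mod 4, while perfect squares are only ever 0 or 1 mod 4:

```python
import math

# two odds and one even: the sum of squares is always ≡ 2 (mod 4),
# and a square can only be ≡ 0 or 1 (mod 4), so the sum is never a square
for a in range(1, 20, 2):            # odd W
    for b in range(1, 20, 2):        # odd X
        for c in range(0, 20, 2):    # even Y
            s = a * a + b * b + c * c
            assert s % 4 == 2
            assert math.isqrt(s) ** 2 != s
```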
• In addition to being even, the number must also be square, which is a much stronger restriction. Therefore, the underlying question being asked is: why is $4(a^2+a+b^2+b+c^2)+2$ never a square number? – Eric Stucky Jan 7 '14 at 8:21
• Got it now... as mentioned in previous comments the square of an even number is multiple of 4, and $4(a^2+a+b^2+b+c^2)+2$ is not multiple of 4. – andres.santana Jan 7 '14 at 17:18 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9814534392852383,
"lm_q1q2_score": 0.817869284016044,
"lm_q2_score": 0.8333245891029456,
"openwebmath_perplexity": 99.24609320898901,
"openwebmath_score": 0.8794930577278137,
"tags": null,
"url": "https://math.stackexchange.com/questions/430554/if-w2-x2-y2-z2-then-z-is-even-if-and-only-if-w-x-and-y-ar"
} |
Math - Properties of Integers. Nova's GMAT Prep Course is the best GMAT prep book available, providing the equivalent of a 50-hour, 6-week course. 1) Additive Identity: Adding 0 to any integer does not change the value of the integer. Here are examples of the five most commonly asked math concepts, in order, based on actual GMAT tests: 1. In fact, the GMAT test-makers typically restrict the values in Integer Properties question to POSITIVE integers only. Properties of Integers. In the next section, we will prove some. Dreaded Data Sufficiency Questions That Will Test Your Knowledge of Number Properties July 24, 2017 Karishma Here is an often-repeated complaint we hear from test takers – Data Sufficiency questions that deal with number properties are very difficult to handle (even for people who find problem-solving number properties questions manageable)!. That's why we created our GMAT Math test prep course - to offer the perfect balance of affordability and effectiveness that has always been missing for students preparing for the GMAT Math test. Summary of GMAT Day 2 Study Conclusion. Squares of integers. B)The quantity in Column B is greater. Three consecutive integers have a sum of −84. Multiple idea number 1, just as 1 is a factor of every integer, every positive integer is a multiple of 1. Consecutive integers are integers that follow one another with a difference of 1. The GMAT absolutely loves to test your knowledge of consecutive integer properties. Now we need to add one more property for integers that is essentially the algebraic de nition of negative numbers. Master GMAT with Verbal & Quantitative Prep, AWA & IR. GMAT 2020: GMAT (Graduate Management Admission Test) is an international computer-based test conducted for admission of candidates into management programs (such as MBA) offered by various B-schools. These are all integers (click to mark), and they continue left and right infinitely: Some People Have Different Definitions! 
Some people (not me) say that whole numbers can also be negative, which makes them exactly the same as integers. If both the dividend and divisor are positive, the quotient will be positive. 2) Additive Inverse: Each integer has an opposing number (opposite sign). GMAT Arithmetic. It is a core topic. Download GMAT ToolKit 2 and enjoy it on your iPhone, iPad, and iPod touch. For tutoring | {
"domain": "nottibiancheitalia.it",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.984336353126336,
"lm_q1q2_score": 0.8137624661581757,
"lm_q2_score": 0.8267117876664789,
"openwebmath_perplexity": 880.8793819965209,
"openwebmath_score": 0.41904929280281067,
"tags": null,
"url": "http://vrxf.nottibiancheitalia.it/properties-of-integers-gmat.html"
} |
This problem shows up often when working with Pythagorean triangles with consecutive sides. (3,4,5), (20,21,29) etc.
I will answer half the question and the other half is similar. $$\frac{N (N+1)}{2}$$ will be a perfect square only when $N$ is a square and $(N+1)/2$ is a square, or $N/2$ is a square and $N+1$ is a square (this is true since $N$ and $N+1$ are coprime).
Now consider the first case. $N$ is necessarily odd; so $$N = (2k+1)^2 = 4k^2+4k+1$$ and $$\frac{N+1}{2} = 2k^2 + 2k + 1 = k^2 + (k+1)^2$$ Hence we need $$k^2 + (k+1)^2 = m^2$$ The solution is just Pythagorean triangle with consecutive sides. Refer to any elementary number theory book for finding these.
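A brute-force check (my illustration, not from the original answer) confirms the first few $N$ for which $N(N+1)/2$ is a perfect square:

```python
import math

def square_triangulars(n_max):
    # return (N, N*(N+1)//2) whenever the triangular number is a perfect square
    out = []
    for n in range(1, n_max + 1):
        t = n * (n + 1) // 2
        if math.isqrt(t) ** 2 == t:
            out.append((n, t))
    return out

# square_triangulars(300) -> [(1, 1), (8, 36), (49, 1225), (288, 41616)]
```

Note how the solutions alternate between the two cases: $N=49$ has $N$ square and $(N+1)/2=25$ square, while $N=8$ has $N/2=4$ square and $N+1=9$ square.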
• Let me add to my answer. Pythagorean triangles with consecutive sides are almost 45, 45, 90 triangle so that the ratio of side to hypotenuse is almost $\sqrt{2}$. In fact the sides of the triangles are obtained by the best approximation to $\sqrt{2}$. Thus the connection between $\sqrt{2}$ and the answer. Hope this answers one of your questions. – user44197 Dec 23 '13 at 5:16 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9811668657039606,
"lm_q1q2_score": 0.8111422261124039,
"lm_q2_score": 0.8267118004748677,
"openwebmath_perplexity": 177.31567608823391,
"openwebmath_score": 0.8888550996780396,
"tags": null,
"url": "https://math.stackexchange.com/questions/616072/sequential-sums-12-cdotsn-that-are-squares?noredirect=1"
} |
python, object-oriented, scrapy
def parse_foo(...):
...
parse_foo.url = 'http://foo.example.org/'
def parse_bar(...):
...
parse_bar.url = 'http://bar.example.net/'
Or, if you prefer to have the URL before the function, you can use a decorator:
def attach(**params):
def f(g):
g.__dict__.update(params)
return g
return f
class PropFinal(CrawlSpider):
def __init__(self, *args, **kwargs):
self.requests_list = []
for name, obj in self.__class__.__dict__.items():
if name.startswith('parse_'):
self.requests_list.append(Request(obj.url, callback=obj))
...
@attach(url='http://foo.example.org/')
def parse_foo(...):
... | {
"domain": "codereview.stackexchange",
"id": 4435,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, object-oriented, scrapy",
"url": null
} |
Hint:
$$\sum_{k=1}^{\infty} x^k = \frac{x}{1-x} \implies \sum_{k=1}^{\infty} kx^k = x\,\frac{d}{dx} \frac{x}{1-x} .$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9850429151632047,
"lm_q1q2_score": 0.8626877745084353,
"lm_q2_score": 0.8757869948899665,
"openwebmath_perplexity": 206.53563683177697,
"openwebmath_score": 0.9560747742652893,
"tags": null,
"url": "https://math.stackexchange.com/questions/757263/how-to-find-answer-to-the-sum-of-series-sum-n-1-infty-fracn2n/3375934"
} |
ros, jenkins, release
Title: Jenkins build fails, then passes, then fails again
I'm trying to release a package and the Jenkins build failed initially.
I committed some changes to fix the build, and it eventually passed.
But I'm still getting emails about failed builds, which I think are coming from the earlier commits. Should I just ignore those, since the latest commit passes?
Originally posted by AndyZe on ROS Answers with karma: 2331 on 2016-11-11
Post score: 0
You have released a certain version of your repository (tagged and ran bloom). That state is what the binary jobs are processing.
You have then committed modification which are only being used by the devel jobs (which are building your development branch).
In order for the release jobs (which are still failing since they still build the previously released code) to use the new code, you have to tag a new version and do a new release with bloom.
Originally posted by Dirk Thomas with karma: 16276 on 2016-11-11
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 26222,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, jenkins, release",
"url": null
} |
python, calculator, tkinter
def add_number(number):
string.append(number)
print number
Label(main, text=str(number)).grid(row=0, column=0) | {
"domain": "codereview.stackexchange",
"id": 14460,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, calculator, tkinter",
"url": null
} |
homework-and-exercises, potential-energy, binding-energy
$\int_{-R}^R\int_0^{\sqrt{R^2-x^2}}2\pi y...dydx$ maps each point in the sphere and calculates each point's "contribution" to the potential at point $r$. I divide the whole thing by the volume squared because I want the total "charge" to remain constant. Per this Math.SE post, given a Yukawa potential with range $1/a$, a spherical shell of mass $m$ with radius $r$ behaves like a point mass potential at the origin, except with mass $m~\sinh(ar)/(ar).$
If we nest these then the binding energy can be found as follows: bring a small mass from infinity to the origin for free, forming a little ball of density $\rho$ and radius $\delta r$, bring another small mass from infinity to the radius $\delta r$ to grow the sphere a little more, paying some energy, and so on. To keep a constant density the mass we have to move each time is $\mathrm dm=\rho~4\pi r^2\mathrm dr,$ while the mass already moved is $m(r)=\frac43\pi r^3~\rho$. So we need to know two things:
First, what is the effective mass of a uniform ball, instead of a spherical shell? That's a straightforward integration by parts:$$m_{\text{eff}}(R)=\int_0^R\mathrm dr ~4\pi r^2\rho~\frac{\sinh(ar)}{ar}
= \frac{4\pi\rho}{a^3}~\big(aR\cosh(aR)-\sinh(aR)\big)
$$ | {
"domain": "physics.stackexchange",
"id": 84424,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, potential-energy, binding-energy",
"url": null
} |
energy, particle-physics, electrons, atomic-physics
Title: Can only two electrons be in ground state? Are the energy levels the same thing as the energy shells? I can't find a straight answer
So for a model like this one, can there be two electrons in one energy level?
And I don't understand the Pauli principle that two electrons can't be on the same energy level when there are 2 electrons in the first shell of most atoms???
Does this then mean that shells and energy levels aren't the same thing? Wikipedia says it well:
The Pauli exclusion principle is the quantum mechanical principle
which states that two or more identical fermions (particles with
half-integer spin) cannot occupy the same quantum state within a
quantum system simultaneously.
The 'same quantum state' here can be understood as a set of quantum numbers $n$, $l$, $m_l$ and $m_s$ to be the same.
For example, if $2$ electrons are in the $\text{1s}$ orbital, that is $1,0,0$, then they must have a different spin quantum number $m_s$, one with $m_s=+1/2$ and the other with $m_s=-1/2$. Both will have the same energy (ground state in this case). | {
"domain": "physics.stackexchange",
"id": 71848,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "energy, particle-physics, electrons, atomic-physics",
"url": null
} |
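The counting implied by the answer can be made explicit by enumerating the allowed quantum-number combinations per shell; a small sketch:

```python
def states(n):
    """All (n, l, m_l, m_s) combinations for principal quantum number n."""
    return [(n, l, ml, ms)
            for l in range(n)               # l = 0 .. n-1
            for ml in range(-l, l + 1)      # m_l = -l .. +l
            for ms in (-0.5, +0.5)]         # two spin projections

# The n = 1 shell (the 1s orbital) holds exactly two distinct states,
# differing only in m_s -- hence "two electrons in the ground state".
print(len(states(1)), len(states(2)), len(states(3)))  # 2 8 18
```

The counts 2, 8, 18 are the familiar $2n^2$ shell capacities.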
ros, collision
Laborroboter (contains the script to assembly the FANUC Cr-7iAl and the gripper to a new robot)
Laborroboter_moveit_config (contains all the files created by the MSA)
After this warm feeling of success I get new problems… I think the geometry is correct. The output of urdf_to_graphiz shows a smooth and logical structure.
I carried out the MSA accordingly to this tutorial:
https://ros-planning.github.io/moveit_tutorials/doc/setup_assistant/setup_assistant_tutorial.html
Comment by nicob on 2021-10-12:
A first test of the new moveit package with the command “roslaunch laborroboter_moveit_config demo.launch” starts successfully and I can move and plan the robot. But I get some error messages in the background (extract of messages):
[ERROR] [1634020992.972334113]: Tried to advertise on topic [/move_group/filtered_cloud] with md5sum [060021388200f6f0f447d0fcd9c64743] and datatype [sensor_msgs/Image], but the topic is already advertised as md5sum [1158d486dd51d683ce2f1be655c3c181] and datatype [sensor_msgs/PointCloud2]
[ERROR] [1634020993.027397108]: Tried to advertise a service that is already advertised in this node [/move_group/model_depth/compressedDepth/set_parameters]
Do you have an idea whats the problem?
Best regards!
nico
Comment by gvdhoorn on 2021-10-12:
Those appear to be different problems. It's not a good idea to post (follow-up) questions in comments.
You should post a new question.
If you've solved your current one, please post an answer and accept your own answer.
In the last days, it was possible for me to create a new (own) robot with gripper supported by MoveIt, based on the FANUC package. For this I created three more packages (rebuild multiple times): | {
"domain": "robotics.stackexchange",
"id": 36983,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, collision",
"url": null
} |
fermions, quantum-chromodynamics, quarks, isospin-symmetry, color-charge
$$
that yields
$$
e^{-i{\mathbf\tau}\cdot{\mathbf\theta}\gamma_5/2}\psi=\cos\left(\frac{|{\mathbf\theta}|}{2}\right)\begin{pmatrix} u \\ d \end{pmatrix}-i\frac{{\mathbf\tau}\cdot{\mathbf\theta}}{|{\mathbf\theta}|}\sin\left(\frac{|{\mathbf\theta}|}{2}\right)\begin{pmatrix} \gamma_5u \\ \gamma_5d \end{pmatrix}.
$$ | {
"domain": "physics.stackexchange",
"id": 63795,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fermions, quantum-chromodynamics, quarks, isospin-symmetry, color-charge",
"url": null
} |
notation, supersymmetry, spinors
Title: Doubt on dotted indices notation for Weyl spinors I'm starting Bailin and Love "Supersymmetric gauge field theory and string theory" and trying to get used to dotted indices.
Let's consider a Dirac spinor written in terms of left and right Weyl spinors
$$\Psi=\begin{pmatrix}\psi_\alpha\\ \bar{\chi}^{\dot\alpha}\end{pmatrix}~~~~,~~~~\alpha,\beta=1,2\tag{1.83}$$
$$(\psi_\alpha)^*=\bar{\psi}_{\dot\alpha}~~~~,~~~~(\bar{\chi}^{\dot\alpha})^*=\chi^\alpha\tag{1.77}$$
Now, consider the $\gamma$ matrices in the Weyl representation
$$\gamma^\mu=\begin{pmatrix}0&\sigma^\mu\\\bar\sigma^\mu&0\end{pmatrix},$$
where
$$(\sigma^\mu)_{\alpha\dot\beta}=(\mathbb{I}_2,\vec{\sigma})_{\alpha\dot\beta}~~~~,~~~~(\bar\sigma^\mu)^{\dot\alpha\beta}=(\mathbb{I}_2,-\vec{\sigma})^{\dot\alpha\beta}\tag{1.58}$$
What I see is that $\Psi^\dagger\gamma^\mu$ (which includes $\bar\Psi=\Psi^\dagger\gamma^0$) doesn't seem to match indices: | {
"domain": "physics.stackexchange",
"id": 86084,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "notation, supersymmetry, spinors",
"url": null
} |
homework-and-exercises, experimental-physics, kinematics, rotational-dynamics, projectile
In this case $x^2$ only depens on $y$, and its error comes from the error in $y$:
$$\delta (x^2) = \left| \frac{\partial (x^2)}{\partial y} \delta y\right| = x^2 \left|\frac{ \left(-g (h-y)^3 \left(3 (h-y)^2-l^2\right)+3 h \sqrt{g (h-y) (h+l-y)^2 (-h+l+y)^2 \left(g (h-y)^3+h l^2\right)}-3 y \sqrt{g (h-y) (h+l-y)^2 (-h+l+y)^2 \left(g (h-y)^3+h l^2\right)}+h l^2 \left(l^2-3 (h-y)^2\right) \right)}{(h-y) (h-l-y) (h+l-y) \left(g (h-y)^3+h l^2\right)} \delta y\right|$$
It doesn't even fit properly! Anyway, putting back all the corresponding numbers, we find:
$$\frac{\delta (x^2)}{x^2} \approx 0.52 \delta y$$
Which looks like a decent result to me. Note, this final equation is only true for values given in this link, namely $h=9.7 \text{cm}, y=17.5 \text{cm}, l=60\text{cm} \ \text{and}\ g=980 \text{cm}\text{.s}^{-2}$. | {
"domain": "physics.stackexchange",
"id": 49751,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, experimental-physics, kinematics, rotational-dynamics, projectile",
"url": null
} |
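As an aside, the propagation rule being applied here, $\delta f = \left|\frac{\partial f}{\partial y}\right|\delta y$, can be sketched generically with a finite-difference derivative (the toy function below is illustrative only, not the projectile expression):

```python
def propagate(f, y, dy, h=1e-6):
    """First-order error propagation: delta_f = |df/dy| * delta_y,
    with the derivative estimated by a central difference."""
    dfdy = (f(y + h) - f(y - h)) / (2 * h)
    return abs(dfdy) * dy

# toy example: f(y) = y**2, so df/dy = 2y and delta_f = |2y| * delta_y
delta = propagate(lambda y: y**2, 3.0, 0.1)
print(delta)  # ~0.6
```

The same recipe, applied to the (much larger) symbolic derivative above, is what produces the quoted $0.52\,\delta y$ figure.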
electricity, electric-circuits
Title: Why does current remain the same in a series circuit? Current is the rate of flow of charge with time. When passing through each resistance in a series circuit, charges lose energy and "slow" down due to collisions with atoms (metal cations). When time increases, current decreases. I agree the number of charges remains the same, but what about time? Imagine a scenario. Take a battery having a potential of $V$. The ends of the battery are connected by a resistanceless wire (an ideal approximation). In that case, the electrons in the wire are offered no 'resistance' and hence are free to move from the negative end to the positive end of the battery. From a simple application of Ohm's law $$I=\frac{V}{R}$$ we see that the current is not defined, but in the limiting case of $R\rightarrow 0$, $I\rightarrow \infty$. Though Ohm's law is an approximate law and we need a better treatment of the scenario to understand what is happening, we nonetheless know that the electrons don't travel through the resistanceless wire instantaneously. There are electrons all along the wire which just 'drift' towards the positive end of the battery without any obstruction, and so quite fast (though not instantaneously).
Now, if we include a resistor of resistance $R$ in this scenario, the drift of the electrons is no doubt restricted, and this restriction of the motion of charges through the resistor is indeed what causes the potential drop across the resistor; the drift through the resistor occurs at a smaller rate, hence the smaller current.
For two resistors in series, this drift is slowed by both of the resistors (hence less current as compared to the single-resistor case). However, the drift velocity through the entire wire has to remain the same, since electrons can't accumulate anywhere in the wire, which is what non-uniform drift speeds would imply, as pointed out by @Nuclear Wang and the OP in the comments. Also, electrons do lose energy due to collisions, and that loss appears as heat; this reduces the potential in due time and hence the current, all in all, decreases. But the current can't have various values at different points of such a series circuit.
"domain": "physics.stackexchange",
"id": 56599,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electricity, electric-circuits",
"url": null
} |
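The bookkeeping in the answer is easy to make concrete with Ohm's law for two series resistors (component values below are arbitrary): one current flows through both, and the two potential drops add up to the battery potential.

```python
V = 9.0                  # battery potential (volts)
R1, R2 = 100.0, 200.0    # series resistors (ohms)

I = V / (R1 + R2)        # the single series current
V1, V2 = I * R1, I * R2  # potential drops across each resistor

print(I)        # 0.03 A -- the same everywhere in the loop
print(V1 + V2)  # 9.0 V  -- the drops account for the full battery potential
```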
algorithm, graph, f#
119 80,108 154,741 130,730 84,6417 72,6215 30,7372 170,275 168,1890 157,9158 90,928 121,6261 37,9313 95,8126 14,960 87,518 79,4117 39,1376 163,5189 169,3511 195,617 3,7711 32,1914 19,7406 46,2687 196,2824 105,9930
120 40,7760 62,3851 72,2287 94,2733 86,5230 95,1104 164,5926 26,5788 18,44 155,6822 60,2231 185,556 45,5279 179,3327 159,5811 75,6892 64,2385 5,1862 178,8906 28,8874 51,5675 105,9333
121 4,7639 80,2257 197,6502 119,6261 136,1320 156,195 29,4537 7,8739 58,8818 32,1615 168,7186 62,4145 92,3819 173,2976 64,771 175,8821 191,8606 162,1977 132,3867 74,1468 165,7147 148,8115 55,44
122 78,6207 8,5338 174,8205 168,1574 162,3518 166,6712 135,6345 28,9165 192,1494 128,7247 189,2017 3,9640 148,3230 27,84 179,7377
123 14,2733 198,3493 6,13 104,9213 107,4581 89,1723 118,2088 128,1602 155,3251 46,132 36,6226 184,2832 103,8756 65,9599 124,6490 145,8804 193,7460
124 141,168 159,6580 42,3496 123,6490 77,8896 20,392 67,1880 158,1870 147,9014 165,9797 136,7388 56,552 11,6180 70,9089 45,3290 48,2593 | {
"domain": "codereview.stackexchange",
"id": 20970,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithm, graph, f#",
"url": null
} |
First let us be clear about what exactly it is that we want to prove. You want to prove the third point from about. That is we want to prove that $$W_1 \subseteq W_1 + W_2.$$ So let us just focus on this one. To be clear, what are wanting to prove is that the one set $W_1$ is contained in the other set $W_1 + W_2$. The question is now: how in general do you prove that one set is contained in another set? The way that you (in general) do this is to assume that an arbitrary element from the one set ($W_1$) is given and prove that this random given element is also an element in the other set ($W_1 + W_2$). So how does this look like? Well, in the proof we will then start by saying something like: Let $w \in W_1$ be given. Ok, so now we have that random given element in $W_1$ given. Now we need to provide an argument/proof that this element is also an element of $W_1 + W_2$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9697854146791213,
"lm_q1q2_score": 0.8017330358924853,
"lm_q2_score": 0.8267117898012104,
"openwebmath_perplexity": 95.98184140317862,
"openwebmath_score": 0.9146319627761841,
"tags": null,
"url": "http://math.stackexchange.com/questions/252537/showing-that-w-1-subseteq-w-1w-2?answertab=votes"
} |
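The argument the answer sets up can then be completed in one line, using the fact that every subspace contains the zero vector:

```latex
\textbf{Proof.}\quad \text{Let } w \in W_1 \text{ be given. Since } W_2 \text{ is a subspace, } 0 \in W_2, \text{ hence}
\quad w = w + 0 \in W_1 + W_2.
\quad \text{As } w \in W_1 \text{ was arbitrary, } W_1 \subseteq W_1 + W_2. \qquad\blacksquare
```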
ros, ros-kinetic, 2dlidar
Originally posted by duck-development with karma: 1999 on 2019-11-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by elemecrobots on 2019-11-15:
I see..any other way i can get the intensity values by writing some piece of code in the driver part of the RPLIDAR? Or any other version of Slamtech RPLIDAR which has intensity value function?
Comment by duck-development on 2019-11-17:
I think, as this is the Protokoll between the sensors and Host , you are not able to change this behavior.
Comment by elemecrobots on 2019-11-18:
Do you know any other LIDAR which can output intensity values as well and is in the range of 400 -600 dollars?
Comment by duck-development on 2019-11-18:
you may try this
LDS-01 360 Laser Distance Sensor
if you look at the driver you see the data is read out from the sensor stream
https://github.com/ROBOTIS-GIT/hls_lfcd_lds_driver/blob/master/src/hlds_laser_publisher.cpp
at line
Comment by elemecrobots on 2019-11-18:
Thanks a lot...any lidar with brushless motors? This one has a band attached
Comment by duck-development on 2019-11-20:
you may look at he
Sweep V1 360 it hast in the data stream 1byte for SignalStrenght
Comment by duck-development on 2019-11-20:
i fond the G4 from EAI YDLIDAR, it has an brushless motor and deliver the intensity values
http://s.click.aliexpress.com/e/NktoWf0y
and there is also a G6 http://s.click.aliexpress.com/e/nezGEejW
Comment by elemecrobots on 2020-01-20:
Hi, sorry for the late reply..are you sure G4 from EAI provides intensity values because I just checked the datasheet and it says that intensity values are fixed to false | {
"domain": "robotics.stackexchange",
"id": 34016,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-kinetic, 2dlidar",
"url": null
} |
Partial derivative wrt c0 is:
Out[6]:
$$0 = \sum_{i=0}^{n} \left(c_{0} \left(2 t_{i}^{6} - 12 t_{i}^{5} + 30 t_{i}^{4} - 40 t_{i}^{3} + 30 t_{i}^{2} - 12 t_{i} + 2\right) + c_{1} \left(- 6 t_{i}^{6} + 30 t_{i}^{5} - 60 t_{i}^{4} + 60 t_{i}^{3} - 30 t_{i}^{2} + 6 t_{i}\right) + c_{2} \left(6 t_{i}^{6} - 24 t_{i}^{5} + 36 t_{i}^{4} - 24 t_{i}^{3} + 6 t_{i}^{2}\right) + c_{3} \left(- 2 t_{i}^{6} + 6 t_{i}^{5} - 6 t_{i}^{4} + 2 t_{i}^{3}\right) + 2 d_{i} t_{i}^{3} - 6 d_{i} t_{i}^{2} + 6 d_{i} t_{i} - 2 d_{i}\right)$$
In [7]:
df_dc1 = collect( expand(penalty_fn.diff(c1), deep=True), [c0,c1,c2,c3])
print 'Partial derivative wrt c1 is:'
Eq(0, Sum(df_dc1, (i,0,n)))
Partial derivative wrt c1 is: | {
"domain": "jupyter.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9770226320971079,
"lm_q1q2_score": 0.8077161267716086,
"lm_q2_score": 0.8267117876664789,
"openwebmath_perplexity": 4965.667367100124,
"openwebmath_score": 0.7560567855834961,
"tags": null,
"url": "http://nbviewer.jupyter.org/gist/anonymous/5688579"
} |
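The pattern in this notebook -- take the partial derivative of a sum-of-squares penalty with respect to each coefficient and set it to zero -- is exactly how the normal equations arise. A minimal sketch of the same idea for a straight-line fit (pure Python; this is not the spline penalty above):

```python
# Fit d_i ~ c0 + c1*t_i by setting d/dc0 and d/dc1 of
# sum_i (c0 + c1*t_i - d_i)^2 to zero, which gives the 2x2 linear system
#   n*c0       + (sum t)*c1   = sum d
#   (sum t)*c0 + (sum t^2)*c1 = sum t*d
def fit_line(ts, ds):
    n = len(ts)
    st, st2 = sum(ts), sum(t * t for t in ts)
    sd, std = sum(ds), sum(t * d for t, d in zip(ts, ds))
    det = n * st2 - st * st          # Cramer's rule on the 2x2 system
    c0 = (sd * st2 - st * std) / det
    c1 = (n * std - st * sd) / det
    return c0, c1

# data generated from d = 2 + 3t is recovered exactly
print(fit_line([0, 1, 2, 3], [2, 5, 8, 11]))  # (2.0, 3.0)
```

The notebook does the same thing symbolically for four cubic-spline coefficients instead of two line coefficients.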
deep-learning, keras, data, preprocessing, data-augmentation
Also, ImageDataGenerator.flow does not have a class_mode parameter, unlike ImageDataGenerator.flow_from_dataframe.
Any suggestions/help would be appreciated!
References:
Data Augmentation Multi Outputs (No answer. I upvoted this just now)
Get multiple output from Keras (Does not explain data augmentation) Please refer to the source code provided at https://gist.github.com/swghosh/f728fbba5a26af93a5f58a6db979e33e which should assist you in writing custom generators (based on ImageDataGenerator) for training end-to-end multi-output models. In the provided example, GoogLeNet is being trained; it contains two auxiliary classifiers and thus comprises 3 outputs in the complete model.
The output from ImageDataGenerator.flow_from_directory() has been passed into a Python function with yield statements such that the targets can be repeated thrice, as per the requirement.
def three_way(gen):
for x, y in gen:
yield x, [y, y, y] | {
"domain": "datascience.stackexchange",
"id": 7580,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "deep-learning, keras, data, preprocessing, data-augmentation",
"url": null
} |
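Since the wrapper only consumes an iterator of (x, y) pairs, it can be exercised without Keras at all; the fake batches below stand in for flow_from_directory output:

```python
def three_way(gen):
    # replicate each target three times, one per model output
    for x, y in gen:
        yield x, [y, y, y]

# fake "batches": x is a list of samples, y the matching labels
fake_batches = iter([([1, 2], [0, 1]), ([3], [1])])

x, ys = next(three_way(fake_batches))
print(x, ys)  # [1, 2] [[0, 1], [0, 1], [0, 1]]
```

In training you would pass `three_way(train_generator)` to `model.fit` in place of the raw generator.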
gravity, visible-light, black-holes, mass, event-horizon
Title: Why light can't escape a black hole but can escape a star with same mass? I'm new to astronomy and was wondering why light can't escape from a black hole but can escape from a star with the same mass. In theory, the gravity of a star 100x the mass of the sun, and the gravity of a black hole 100x the mass of the sun are the same, so why can light escape from the star but not from the black hole? even if the force of gravity is the same in both cases. Is it just because the black hole is smaller? yes, it is because the black hole is much more dense which means it packs the same mass & gravitational pull into a much smaller diameter. This means its surface gravity yields an escape velocity equal to the speed of light. For the same-mass star, its "surface" is much farther away from its center and if you were standing on that "surface", the surface gravity there would be crushingly huge but not enough to yield c as the escape velocity.
Oh yes and you would be instantly fried to a crispy crisp.
To make the surface gravity of the sun yield an escape velocity of c would require you to squeeze it down to a radius of about 3 kilometers (its Schwarzschild radius, roughly 6 km across). That's really dense!
"domain": "physics.stackexchange",
"id": 93205,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gravity, visible-light, black-holes, mass, event-horizon",
"url": null
} |
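The "squeeze the Sun" figure is just the Schwarzschild radius $r_s = 2GM/c^2$, the radius at which the escape velocity reaches $c$; a quick check with rounded constants:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

r_s = 2 * G * M_sun / c**2   # radius at which escape velocity equals c
print(r_s)  # ~2950 m, i.e. about 3 km
```

The same formula with $M = 100\,M_\odot$ gives a radius a hundred times larger, which is why the mass alone doesn't decide the question: it is mass packed inside that radius that does.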
haskell, parsing, scheme
eqvList :: ([LispVal] -> ThrowsError LispVal) -> [LispVal] -> ThrowsError LispVal
eqvList eqvFunc [List arg1, List arg2] = return $ Bool $ (length arg1 == length arg2) &&
all eqvPair (zip arg1 arg2)
where eqvPair (x1, x2) = case eqvFunc [x1, x2] of
Left err -> False
Right (Bool val) -> val
unpackEquals :: LispVal -> LispVal -> Unpacker -> ThrowsError Bool
unpackEquals arg1 arg2 (AnyUnpacker unpacker) =
do unpacked1 <- unpacker arg1
unpacked2 <- unpacker arg2
return $ unpacked1 == unpacked2
`catchError` const (return False)
equal :: [LispVal] -> ThrowsError LispVal
equal [l1@(List arg1), l2@(List arg2)] = eqvList equal [l1, l2]
equal [DottedList xs x, DottedList ys y] = equal [List $ xs ++ [x], List $ ys ++ [y]]
equal [arg1, arg2] = do
primitiveEquals <- liftM or $ mapM (unpackEquals arg1 arg2)
[AnyUnpacker unpackNum, AnyUnpacker unpackStr, AnyUnpacker unpackBool]
eqvEquals <- eqv [arg1, arg2]
return $ Bool (primitiveEquals || let (Bool x) = eqvEquals in x)
equal badArgList = throwError $ NumArgs 2 badArgList | {
"domain": "codereview.stackexchange",
"id": 8156,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "haskell, parsing, scheme",
"url": null
} |
dft, window-functions, dtft
In summary, removing every other sample in time (down-sampling) halves the sampling frequency, and all of the original spectrum that was centered at every multiple of $F_s$ is carried with it, now centered at every multiple of the new sampling rate. The N samples in the DFT now occupy the spectrum from $0$ to $F_s^{'}$.
halving T would change the shape of the DTFT, halving the number of lobes. Without zero padding this would halve N, but Fmax would remain constant and so every second bin would be removed.
This then means $F_s$ must be in units of samples/sec. (Often with the DFT we use units of cycles/sample, so I want to clarify this.) In this case then yes, without any zero-padding all $N$ samples span $T$, so $F_s$ is $N$ samples over $T$ seconds. Similar to the frequency-domain relationship above, as long as the cyclical time window can be equivalent before and after removing the samples, then indeed the only change in frequency will be that every other bin is removed. Otherwise, to maintain the same exact spectrum values for all the bins that remain, we would need to have the equivalent of time-domain aliasing, or else we must have aliasing in frequency. The easiest way to see this is to consider 2 cycles in time of a sine wave: this we could cut in half and still have the exact same spectrum, every other bin. Now consider one cycle of a sine wave: we can't cut this in half and expect to see the same spectrum; it will be aliased, or we would need to have a modified time domain that is aliased.
doubling N by zero padding the DFT would keep the DTFT and Fmax
constant but would double the number of bins | {
"domain": "dsp.stackexchange",
"id": 8637,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "dft, window-functions, dtft",
"url": null
} |
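The zero-padding claim -- doubling N inserts new bins while every original bin survives at the even indices -- can be verified directly with a plain DFT (pure Python, O(N²), fine for a demo; the signal values are arbitrary):

```python
import cmath

def dft(x, N):
    # N-point DFT of x, zero-padding (or truncating) to length N
    xs = list(x) + [0.0] * max(0, N - len(x))
    return [sum(xs[n] * cmath.exp(-2j * cmath.pi * n * k / N)
                for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 0.5, -1.0, 0.25, 3.0, -2.0, 1.5]
X8 = dft(x, 8)     # the original 8 bins
X16 = dft(x, 16)   # zero-padded: 16 bins sampling the same DTFT

# every even bin of the padded DFT equals an original bin
ok = all(abs(X16[2 * k] - X8[k]) < 1e-9 for k in range(8))
print(ok)  # True
```

This is exactly the statement that zero-padding keeps the DTFT fixed and only samples it more densely.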
java, recursion, time-limit-exceeded, backtracking, knapsack-problem
for (int i = 0; i <= numberOfShips; i++) {
arrayWithGeneratedNumbers[fromIndex] = i;
generate(fromIndex + 1);
}
}
public void input(String input) {
Scanner sc = null;
try {
sc = new Scanner(new File(input)); // load my input from a textfile
numberOfShips = sc.nextInt(); // load the number of ships
for (int i = 0; i < numberOfShips; i++) { //add carrying capacities to arraylist
carryingCapacities.add(sc.nextInt());
}
bestArray = new int[weights.size()]; // array where we will remember the best combination of units
while (sc.hasNext()) {
weights.add(sc.nextInt());
strengths.add(sc.nextInt());
}
arrayWithGeneratedNumbers = new int[weights.size()]; // array where we will generate numbers
generate(0); // run the generation
System.out.println(Arrays.toString(bestArray) + " this is the best layout of units"); // after the generation is over
System.out.println(max + " this is the max strength we can achieve"); // print the results
} catch (FileNotFoundException e) {
System.err.println("FileNotFound");
} finally {
if (sc != null) {
sc.close();
}
}
} | {
"domain": "codereview.stackexchange",
"id": 35561,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, recursion, time-limit-exceeded, backtracking, knapsack-problem",
"url": null
} |
enzymes, toxicology, food-chemistry
Title: How is honey not toxic to our epithelial cells? Being a supersaturated solution of sugar, honey pulls the water out of cells it comes in contact with via osmosis - killing the cells. It also contains the inactive enzyme glucose oxidase, which when diluted with water activates and then converts glucose into gluconic acid and hydrogen peroxide - both of which are bad news for cells.
Honey also likely contains an antibiotic protein Bee Defensin-1, which is also bad news for bacteria. I didn't yet look into how it works so I don't know if it affects our epithelial cells.
Then some honey types (manuka) contain Methylglyoxal. Damage by methylglyoxal to low-density lipoprotein through glycation causes a fourfold increase of atherogenesis in diabetics. [1] Methylglyoxal binds directly to the nerve endings and by that increases the chronic extremity soreness in diabetic neuropathy.[2][3]
How is it safe to eat? I'll change my initial comment into a real answer.
The human digestive tract has evolved to allow us to be broadly omnivorous, safely consuming a wide range of vegetable, animal, and mineral-based foods. While honey can be damaging to exposed cells, the epithelial cells lining the digestive tract are not exposed like the endothelial cells that line the circulatory system. Eating honey is not the same as injecting it into your veins.
A layer of mucous covers the cells of the digestive tract from mouth to colon and provides both lubrication as well as protection to the cells underneath it. It is protective to varying degrees against such challenges as acid, osmotic stress, mechanical damage, and foreign microorganisms. Additionally, the cells themselves are layered and are continually being sloughed off and regenerated, so even if the mucosal lining isn't 100% effective, damaged cells will be replaced fairly quickly.
Food quickly travels from the mouth through the esophagus to the stomach, not allowing enough time for osmotic stress or enzymes to damage anything to any significant degree. (Obviously, if you swallow something extremely harsh like bleach, lye, or concentrated acid, that's a different story.) | {
"domain": "biology.stackexchange",
"id": 10376,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "enzymes, toxicology, food-chemistry",
"url": null
} |
This question makes me think of a related but slightly different kind of function. A function which gradually/slowly changes behavior, instead of having this piecewise phenomenon.
The piecewise and discontinuous transition would be denoted as $$h(x) = \begin{cases} f(x)\quad\text{if}\quad x \leq x_0\\ g(x)\quad\text{if}\quad x > x_0\\ \end{cases}$$ A gradual shift from $$f$$ to $$g$$ can be written as $$(1-t)\cdot f + t\cdot g$$ with $$t\in[0,1]$$. Therefore a function that gradually changes from $$f$$ after $$x_0$$ and becomes $$g$$ at $$x_1 > x_0$$ would be written $$h_{\mathrm{grad}}(x) = \begin{cases} f(x)\quad\text{if}\quad x \leq x_0\\ \\ \left(1 - \dfrac{x-x_0}{x_1-x_0}\right)\cdot f(x) + \left(\dfrac{x-x_0}{x_1-x_0}\right)\cdot g(x) \quad\text{if}\quad x_0 < x \leq x_1\\ \\ g(x)\quad\text{if}\quad x > x_1\\ \end{cases}$$ As an exemple, let us take $$f(x)=x$$ and $$g(x) = \sin(x) + 5$$ on the domain $$[-5, 10]$$.
The piecewise transition is pictured below:
The continuous/gradual transition is in green below: | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9919380070417538,
"lm_q1q2_score": 0.8753137499435851,
"lm_q2_score": 0.8824278772763472,
"openwebmath_perplexity": 250.9069976355607,
"openwebmath_score": 0.9806784987449646,
"tags": null,
"url": "https://math.stackexchange.com/questions/3987133/how-can-i-describe-a-linear-equation-that-becomes-sinusoidal-after-a-certain-poi"
} |
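A direct transcription of $h_{\mathrm{grad}}$ with the example functions $f(x)=x$ and $g(x)=\sin(x)+5$ (the joint points $x_0=0$, $x_1=5$ below are arbitrary choices, not taken from the post), verifying the blend is continuous at both joints:

```python
import math

x0, x1 = 0.0, 5.0
f = lambda x: x
g = lambda x: math.sin(x) + 5

def h_grad(x):
    if x <= x0:
        return f(x)
    if x <= x1:
        t = (x - x0) / (x1 - x0)   # blend weight runs 0 -> 1 on [x0, x1]
        return (1 - t) * f(x) + t * g(x)
    return g(x)

print(h_grad(x0), f(x0))   # equal: the blend starts at pure f
print(h_grad(x1), g(x1))   # equal: the blend ends at pure g
mid = (x0 + x1) / 2
print(h_grad(mid), (f(mid) + g(mid)) / 2)  # midpoint is the plain average
```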
2. If $f(n) = \Theta(n^{log_b a})$, then $T(n) = \Theta(n^{log_b a} \lg n)$
3. If $f(n) = \Omega(n^{log_b a + \epsilon})$ for some constant $\epsilon > 0$, and if $af(n/b) \leq cf(n)$ for some constant $c < 1$ and all sufficiently large $n$, then $T(n) = \Theta(f(n))$
Is the relation supposed to hold for all $n$, or just powers of $2$? – 1015 Feb 6 '13 at 18:57
I don't quite understand your question. Which relation are you speaking of? – Kiet Tran Feb 6 '13 at 22:10
The relation in your title and at the first line of your post. – 1015 Feb 6 '13 at 22:15
Oh, any nonnegative $n$ is fine, I think. In the statement of the Master Theorem in CLRS, $n/b$ is interpreted to mean either the floor or ceiling of $n/b$. – Kiet Tran Feb 6 '13 at 22:24
Floor or ceiling, here? – 1015 Feb 6 '13 at 22:26
If I'm wrong, what did I miss on?
The answer is that you didn't miss anything. $f(n)=\lg n=O(n^{2-\epsilon})$ for $\epsilon = 1$, since $\lg n=O(n)$. In CLRS terms, $\lg n$ is indeed polynomially smaller than $n^2$. In fact, any $\epsilon<2$ will work.
If I'm right, what is an example that doesn't fit Case 1?
you might want to consider $f(n) = n^2/\lg n$. For this $f$, none of the three possible cases are satisfied, as you can verify.
While I'd be reluctant to criticize a colleague, either your lecturer was wrong or you misunderstood what s/he said.
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9799765575409525,
"lm_q1q2_score": 0.8478993043225695,
"lm_q2_score": 0.8652240686758841,
"openwebmath_perplexity": 297.79007874974167,
"openwebmath_score": 0.8929303288459778,
"tags": null,
"url": "http://math.stackexchange.com/questions/296495/master-theorem-tn-4tn-2-lg-n"
} |
algorithm-analysis, sorting
Title: Upper-bounding the number of comparisons for Sorting to $\Theta(n)$ using a physically big number like Number of Particles in the Universe I recently read the article Big Numbers by Scott Aaronson. That has made me think about the effective upper bound for sorting.
According to the article, some of the big numbers like the number of particles in the universe and age of universe in milliseconds are less than $10^{100}$.
In any realistic computational device, the data to be sorted will have to be smaller than these numbers. (As otherwise, it would be impossible to store the numbers physically.)
$\log_2(10^{100}) \approx 333$
Hence, if we take a number $C > 333$, we can show that the number of steps required for sorting an input of size $n$ will always be less than $Cn$
This makes sorting an $O(n)$ time operation using algorithms like QuickSort or HeapSort.
Is there a point I've wrongly considered while making this assumption?
Should we consider physical constraints while analyzing algorithms? If not, why? The reasoning is not wrong as such, but applying such constraints tends to make asymptotic analysis meaningless; for example we could simply choose $C' = 10^{100}$, and as all inputs are smaller than this, every algorithm is $O(1)$.
Despite the ability to limit the analysis physically and end up with trivial observations, asymptotic analysis (without imposing those limits) does give practical information about algorithms. When we run them, we can certainly notice the difference between, say, Merge Sort and Bubble Sort - even when we have physically very small instances (even less than 1000).
So by speaking about algorithms in this more general (though not necessarily physically accurate) way, we get genuine information about how we can expect algorithms to perform based on how big the input is. More importantly it allows us to compare them. | {
"domain": "cs.stackexchange",
"id": 492,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithm-analysis, sorting",
"url": null
} |
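The closing claim -- that the difference between Merge Sort and Bubble Sort is visible even for instances under 1000 -- is easy to instrument by counting comparisons directly (a sketch; the bubble sort is the naive full-pass version):

```python
import random

def bubble_comparisons(a):
    a, count = list(a), 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            count += 1                     # one comparison per inner step
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, count

def merge_comparisons(a):
    if len(a) <= 1:
        return list(a), 0
    left, cl = merge_comparisons(a[: len(a) // 2])
    right, cr = merge_comparisons(a[len(a) // 2 :])
    merged, count = [], cl + cr
    i = j = 0
    while i < len(left) and j < len(right):
        count += 1                         # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, count

data = [random.random() for _ in range(1000)]
_, cb = bubble_comparisons(data)
_, cm = merge_comparisons(data)
print(cb, cm)  # ~500k vs ~9k: the n^2 / n log n gap shows up well below n = 1000
```

Both counts are far below the $333n$ physical bound from the question, yet the asymptotic comparison between the two algorithms is still the practically useful fact.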
newtonian-mechanics, spring, anharmonic-oscillators
Does anyone have any insight on how we can show mathematically where the limits are for the relationship $a \propto x$ to break down. Although I am quite confused as to where exactly your problem in understanding is, the limits that you're talking about do indeed exist in oscillatory motion. But for the example you chose - a spring following Hooke's Law - the SHM differential equation is always valid, for all $x$. However, as one might intuitively expect, pulling a spring too hard will evoke a very different response than S.H.M-type oscillation. In other words, $F = -kx$ is no longer the force law that describes such a deformed-spring system.
Another example to show you how such 'limits' of S.H.M are seen physically: imagine a simple pendulum. You can easily show that its motion follows the differential equation $\ddot\theta + \frac{g}{l}\sin\theta = 0$, where $\theta$ is the angle made by the pendulum with the vertical. This is a non-linear equation. Only for small $\theta$ can we approximate the differential equation to the familiar S.H.M equation with nice-looking solutions. For large $\theta$ such an approximation does not describe the physics of the pendulum accurately. The limits that you wish to find do not arise from the S.H.M equation itself, but from more complicated differential equations, which behave linearly under some assumptions.
"domain": "physics.stackexchange",
"id": 54291,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, spring, anharmonic-oscillators",
"url": null
} |
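The pendulum example can be made quantitative by integrating $\ddot\theta = -(g/l)\sin\theta$ and comparing periods; a simple RK4 sketch (step size and amplitudes are arbitrary choices):

```python
import math

def pendulum_period(theta0, g=9.8, l=1.0, dt=1e-4):
    """Period of theta'' = -(g/l) sin(theta), started from rest at theta0,
    measured as 4x the time to first reach theta = 0 (a quarter period)."""
    def deriv(th, om):
        return om, -(g / l) * math.sin(th)
    th, om, t = theta0, 0.0, 0.0
    while th > 0:
        # classic RK4 step for the pair (theta, omega)
        k1 = deriv(th, om)
        k2 = deriv(th + dt/2 * k1[0], om + dt/2 * k1[1])
        k3 = deriv(th + dt/2 * k2[0], om + dt/2 * k2[1])
        k4 = deriv(th + dt * k3[0], om + dt * k3[1])
        th += dt / 6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        om += dt / 6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
    return 4 * t

small = pendulum_period(0.1)   # near the SHM prediction 2*pi*sqrt(l/g) ~ 2.007 s
large = pendulum_period(2.0)   # noticeably longer: sin(theta) != theta here
print(small, large)
```

At amplitude 0.1 rad the period matches the linearized formula to a fraction of a percent; at 2 rad it is roughly a third longer, which is the 'limit' of the linear approximation showing up numerically.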
classical-computing, superdense-coding
Suppose Alice and Bob initially share a pair of qubits in the entangled
state.. | {
"domain": "quantumcomputing.stackexchange",
"id": 2176,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classical-computing, superdense-coding",
"url": null
} |
oceanography, ocean-currents, tides
Title: How to construct the tide signal from harmonic constituents? I am trying to reconstruct the astronomical tide time series from harmonic constituents (calculated from T_Tide). I am using the well-known equation H(t) = Amplitude * cos(t * harmonicSpeed + phase lag), and I wrote my script in Python. The output is not matching the tide prediction (from the T_Tide file). As you know the construction is very simple, so I am thinking that my problem is about the sampling interval; I really don't know how to interpret it for this case. I wrote 1/24 due to the input time (hourly data, for calculating in T_TIDE), but I am not sure about this.
I share the harmonic constants file from T_Tide and the tide prediction values file (hourly, which I want to reproduce). They are https://github.com/feraxel/Harmonic-constans/commit/fb11b9ef9909b03e722978ccfa102fc7b9fd1f01#diff-09639ac17104196efd895598afe1af5234cbb5bcac39070e4a3d9133c611a4ea and https://github.com/feraxel/Harmonic-constans/commit/1e46d1d416c49ef131005d8fa68775f9169c0270#diff-545b5ec39a1b72b56fa5e37c33afb30a3c3c39ba0e6fe1d583b4a1d77e9fc74a
os.chdir(r'directory') # change the file.txt directory
con = pd.read_csv('harmonics_aca.txt',sep=' ') # We load the harmonic constants | {
"domain": "earthscience.stackexchange",
"id": 2581,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "oceanography, ocean-currents, tides",
"url": null
} |
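A minimal reconstruction sketch for the tide entry above, using made-up constituents (the real values would come from the T_Tide output file). Two conventions matter and are assumptions worth checking against the asker's script: T_Tide speeds are in degrees per hour, and the Greenwich phase lag is conventionally subtracted, so for hourly data the time variable is simply the hour index (no 1/24 factor unless time is in days).

```python
import math

# Hypothetical constituents: (name, amplitude [m], speed [deg/hour], phase lag [deg])
constituents = [
    ("M2", 0.50, 28.9841042, 30.0),
    ("S2", 0.20, 30.0000000, 60.0),
]

def tide_height(t_hours):
    """Astronomical tide at t hours after the reference epoch.
    T_Tide convention: H(t) = sum A*cos(speed*t - phase), angles in degrees."""
    return sum(
        amp * math.cos(math.radians(speed * t_hours - phase))
        for _name, amp, speed, phase in constituents
    )

# Hourly series for two days: since speeds are deg/hour, t steps by 1 hour.
series = [tide_height(t) for t in range(48)]
```

If the predicted series still disagrees, the sign of the phase term and the units of the speed column are the first two things to audit.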
ros, integration
Title: ROS to LCM or ROS to APRIL Integration?
Hello answers,
Is anybody working on a general way of interfacing ROS with LCM or APRIL? There are some tools there I'd like to use. Doing it myself won't be too painful, but I figured I'd ask before I got too far in.
Originally posted by Mac on ROS Answers with karma: 4119 on 2012-03-22
Post score: 1
Original comments
Comment by Mac on 2012-04-04:
I was right: it's not painful at all. If I get a chance, I'll write up a tutorial somewhere...
Hi Mac, I am interested in a port of the APRIL tag toolkit to ROS, and was planning on doing it quick and easy using a rosjava wrapper without necessarily using LCM. If you've made some progress on interfacing LCM with ROS, do you think there might be a more generic way for me to set up a port?
Originally posted by piyushk with karma: 2871 on 2012-07-08
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Mac on 2012-07-09:
Short answer: no. ROSJava is probably a better way. If this does get done, please please please do a release! I know of several interested people.
Comment by piyushk on 2012-08-16:
@Mac: http://www.ros.org/wiki/april
Comment by Mac on 2012-08-16:
I saw that on ros-users today. Awesome! | {
"domain": "robotics.stackexchange",
"id": 8686,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, integration",
"url": null
} |
computer-architecture
I understand how the OISC branding may seem like cheating or a marketing gimmick, but in reality it's a great way to achieve flexibility during CPU implementation (which is extremely useful in some approaches, e.g., when the CPU is custom for the application).
Regarding NAND: you could probably build a CPU where NAND is the "main" FU, though it would likely be a pretty inefficient design (and you would still need a few additional FUs for data memory access, register banks, instruction pointer, etc.). A more practical approach would be selecting the FUs based on your application (so if you need to do a lot of floating point divisions, add an FU for that). | {
"domain": "cs.stackexchange",
"id": 16598,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "computer-architecture",
"url": null
} |
algorithm-analysis, asymptotics, recurrence-relation
Title: Let a > 0 be a constant. Find a simplified, asymptotically tight bound for the recurrence T(n) = aT(n-2) + C So I have read the posts on this site involving recurrence relations, however this problem is a little different, because of the constant a involved with the recursive portion. I'm trying to solve this recurrence relation by expanding it out and here is what I have so far:
I'm not sure where to go from here and how to find the asymptotic bounds. The first four lines of your argument were correct; all you missed was the proper generalization. In general, we see that
$$
T(n) = c+c\alpha+c\alpha^2+\dotsb+c\alpha^{j-1}+\alpha^j T(n-2j)
$$
Now we want to drive $T(n-2j)$ down to a value we know, namely $T(1)=c$. To do this we'll need $n-2j=1$ and so we set $j=(n-1)/2$, giving us
$$\begin{align}
T(n) &= c+c\alpha+c\alpha^2+\dotsb+c\alpha^{j-1}+\alpha^j T(n-2j)\\
&= c+c\alpha+c\alpha^2+\dotsb+c\alpha^{(n-1)/2-1}+\alpha^{(n-1)/2} c\\
&= c(1+\alpha+\alpha^2+\dotsb+\alpha^{(n-1)/2})
\end{align}$$
and this is just a geometric series, so I'll leave the last steps to you. | {
"domain": "cs.stackexchange",
"id": 6565,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithm-analysis, asymptotics, recurrence-relation",
"url": null
} |
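The closed form derived above can be sanity-checked against the recurrence directly. A sketch, assuming (as in the derivation) odd $n$ with base case $T(1)=c$:

```python
def T_recursive(n, a, c):
    """T(n) = a*T(n-2) + c with base case T(1) = c (odd n)."""
    return c if n == 1 else a * T_recursive(n - 2, a, c) + c

def T_closed(n, a, c):
    """c * (1 + a + a^2 + ... + a^((n-1)/2)) -- the geometric series above."""
    j = (n - 1) // 2
    return c * sum(a ** k for k in range(j + 1))
```

Summing the geometric series gives $\Theta(a^{n/2})$ for $a>1$, $\Theta(n)$ for $a=1$, and $\Theta(1)$ for $0<a<1$.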
choosing a window width is like choosing an amount of smoothing Daniels Trading does not guarantee or verify any performance claims made by such systems or service. In a Simple Moving Average, the price data have an equal weight in the computation of the average. The most straightforward method is called a simple moving average. The risk of loss in trading futures contracts or commodity options can be substantial, and therefore investors should understand the risks involved in taking leveraged positions and must assume responsibility for the risks associated with such investments and for their results. The multiplier 1/3 is called the weight. Trade recommendations and profit/loss calculations may not include commissions and fees. If there are trends, use different estimates that take the trend into account. Thus, the oldest price data in the Smoothed Moving Average are neve… In general: $$\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$$ Variations include: simple, cumulative, or weighted forms. What are Moving Average or Smoothing Techniques? Use a moving average filter with a 5-hour span to smooth all the data simultaneously (by linear index). a useful estimate for forecasting when there are no trends. The Moving Average is a popular indicator used by forex traders to identify trends. moving average can’t capture seasonality and trend It’s proper to use MA when it’s stationary or the future is similar to the past. Smoothing all the data together would then indicate the overall cycle of traffic flow through the intersection. For this method, we choose a number of nearby points and average them to estimate the trend. delivers in 1000 dollar units. False Forecast including trend is an exponential smoothing technique that utilizes two smoothing constants: one for the average … The Smoothed Moving Average uses a longer period to determine the average, assigning a weight to the price data as the average is calculated. 
When the window size for the smoothing method is not specified, smoothdata computes a default window size based on a heuristic. There are two distinct groups of smoothing methods. The moving average method is simply the average of a subset of numbers which is ideal in smoothing out the trend in data such as in a time-series. This method relies on the notion that observations close in time are likely to have similar values. There exist methods for reducing of canceling the effect due to random variation. Also, in a Simple Moving | {
"domain": "inside-science.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9664104953173166,
"lm_q1q2_score": 0.8447174169513181,
"lm_q2_score": 0.8740772384450967,
"openwebmath_perplexity": 1510.8730050310744,
"openwebmath_score": 0.5109207630157471,
"tags": null,
"url": "http://inside-science.com/kasoor-prateek-ppvcw/moving-average-smoothing-7812f3"
} |
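The simple moving average described in the entry above — equal $1/n$ weights over a sliding window — can be sketched in a few lines:

```python
def simple_moving_average(data, window):
    """Equal-weight (1/window) average over a sliding window.
    Returns len(data) - window + 1 points, one per full window."""
    if window < 1 or window > len(data):
        raise ValueError("window must be in 1..len(data)")
    return [sum(data[i:i + window]) / window
            for i in range(len(data) - window + 1)]

smoothed = simple_moving_average([1.0, 2.0, 6.0, 2.0, 1.0], 3)  # window of 3
```

A wider window means more smoothing (and more lag), which is the trade-off the entry alludes to when it says choosing a window width is like choosing an amount of smoothing.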
machine-learning, neural-network, deep-learning, rnn
The gradient $\nabla_{o^{(t)}}L$ on the outputs at time step $t$, for all $i$, $t$, is as
follows:
$$
(\nabla_{o(t)}L)_i
= \dfrac{\partial L}{\partial o_i^{(t)}}
= \dfrac{\partial L}{\partial L^{(t)}}\dfrac{\partial L^{(t)}}{\partial o_i^{(t)}}
= \hat{y}_i^{(t)}− \mathbf{1}_{i,y^{(t)}}. \qquad\qquad (10.18)
$$
While, the corresponding notes are:
$$
\begin{align*}
\boldsymbol{a}^{(t)} &= \boldsymbol{b} +\boldsymbol{Wh}^{(t−1)} + \boldsymbol{Ux}^{(t)} &(10.8)\\
\boldsymbol{h}^{(t)} &= \tanh(\boldsymbol{a}^{(t)}) &(10.9)\\
\boldsymbol{o}^{(t)} &= \boldsymbol{c} + \boldsymbol{V h}^{(t)} &(10.10)\\
\hat{\boldsymbol{y}}^{(t)} &= \text{softmax}(\boldsymbol{o}^{(t)}) &(10.11)
\end{align*}
$$ | {
"domain": "datascience.stackexchange",
"id": 3808,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, neural-network, deep-learning, rnn",
"url": null
} |
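Equation (10.18) — the softmax/cross-entropy gradient $\hat{y} - \mathbf{1}_y$ — can be verified numerically with central differences. A sketch for a single time step (the values of $o$ and $y$ are arbitrary):

```python
import math

def softmax(o):
    m = max(o)                       # shift for numerical stability
    e = [math.exp(v - m) for v in o]
    s = sum(e)
    return [v / s for v in e]

def nll_loss(o, y):
    """L = -log softmax(o)[y], the per-step loss behind (10.18)."""
    return -math.log(softmax(o)[y])

o, y = [0.2, -1.0, 0.5], 2
analytic = [p - (1.0 if i == y else 0.0) for i, p in enumerate(softmax(o))]

eps = 1e-6
numeric = []
for i in range(len(o)):
    plus, minus = o[:], o[:]
    plus[i] += eps
    minus[i] -= eps
    numeric.append((nll_loss(plus, y) - nll_loss(minus, y)) / (2 * eps))
```

The two gradients agree to well within the finite-difference error, confirming $\partial L/\partial o_i = \hat{y}_i - \mathbf{1}_{i,y}$.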
python, python-3.x, regex
for pos, val in enumerate(_raw):
if val == _start:
temp.append([pos])
elif val == _end:
next = True
for index in temp:
if _raw[index[0]] == _start:
next = False
# This avoids match collisions (missing end character)
if not next:
temp[temp.index(max(temp))].append(pos)
# Find the latest (max) pattern
sort.append(max(temp))
temp.pop(temp.index(max(temp)))
for val in sorted(sort):
final.append(_raw[val[0] + 1:val[-1]])
if _filter_first:
break
return final
def _tag(self, _raw, _tags, _seperator):
"""Provides a backend interface for asymmetric pattern tagging.
Works for a 'raw' string, along with a tag dictionary and seperator character.
Arguments:
* _raw: String containing all words, not just those to be matched.
* _tags: Dictionary containing key:value pairs like "spam":"[]" where every
"spam" will be enclosed by "[]" ("[spam]")
* _seperator: Word seperator, commonly " " or ", " for a comma-seperated list.
"""
final = ""
for val in _raw.split(_seperator):
if val not in _tags:
final += val + _seperator
else:
final += _tags[val][0] + val + _tags[val][1]
final += _seperator
return final
def _beautify(self, _filtered, _allow_nested):
"""Beautify filtered patterns by appending them to a list.
Works for 'filtered' dictionaries.
Arguments:
* _filtered: Dictionary with results matched by start / end character
* _allow_nested: Boolean, if True, all results will be returned in a list or printed.
Else, only 'pure' results (results containing no nested patterns), are returned.
""" | {
"domain": "codereview.stackexchange",
"id": 26188,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, regex",
"url": null
} |
java, algorithm, concurrency, sieve-of-eratosthenes, bitset
There is a bit of inconsistency about the meaning of EratosthenesSieve.n. In some places it is directly or indirectly handled as an inclusive upper bound on the numbers to be sieved, but in other places it is treated as an exclusive upper bound. In practice, though, this matters only if you happen to choose a prime for that value.
It's unclear why you use temporary variables h and t in the EratosthenesSieve constructor. It would be simpler and clearer to initialize the head and trailer members directly.
EratosthenesSieve.execute() waits for execution to complete by sleeping for 100 ms at a time and checking each time it wakes whether the ExecutorService is terminated. That's a weird way of doing it when there is ExecutorService.awaitTermination(), which is clearer, and which also allows you to set an overall timeout.
It's really inefficient to count the number of elements in your AtomicBitSet by iterating over its entire range -- you end up performing 64 times as many (expensive) atomic reads as you need to do, not to mention a bunch of redundant arithmetic. You could instead read each chunk exactly once, and count the bits efficiently with, say, Long.bitCount(). Just make sure to mark 0 and 1 as composites first, or else subtract 2 from the result.
Inner class FlagIdeal contains a private method surroundedSleep() that is never invoked | {
"domain": "codereview.stackexchange",
"id": 21091,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, algorithm, concurrency, sieve-of-eratosthenes, bitset",
"url": null
} |
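The chunk-counting suggestion in the review above translates directly: read each 64-bit word once and popcount it. A Python sketch of the idea (Java's Long.bitCount() over each word of the bitset is the direct analogue):

```python
def count_set_bits(words):
    """Count set bits across 64-bit chunks, one read per chunk --
    the same trick as calling Long.bitCount() per word in Java."""
    return sum(bin(w & 0xFFFFFFFFFFFFFFFF).count("1") for w in words)

# e.g. primes <= 10 marked as bits 2, 3, 5, 7 of a single word
word = (1 << 2) | (1 << 3) | (1 << 5) | (1 << 7)
```

One popcount per word replaces 64 per-bit reads, which is where the 64x saving the reviewer mentions comes from.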
filters, digital-communications, filter-design, ofdm
Title: How to design the $\text{sinc}(\cdot)$ filter for an oversampled OFDM system We usually zero-pad the modulated data of OFDM before performing the IFFT operation to make the design of the anti-aliasing filters easier (the cut-off can be less sharp) and hence save cost. That operation is done by padding zeros at the middle of the data before the IFFT operation.
Can I do the same operation in the time domain? I mean, I take the IFFT without zero-padding and then upsample in the time domain. How do I design the filter in that case to get the same output as the first method mentioned above? Resampling the transmit OFDM waveform in the frequency domain by zero-padding the FFT prior to taking the IFFT has the advantage of simplicity and maintaining sub-carrier orthogonality (no inter-carrier interference). This would typically not be done for other waveforms due to excessive out of band emissions (OOBE), but the result here would be the same OOBE level as the original OFDM waveform, and an accepted side effect for the other benefits OFDM provides. Frequency domain resampling (zero-pad and IFFT) would be the go-to approach whenever the new sampling rate is a multiple of the sub-carrier spacing. The equivalent (exact) time domain operation is not to convolve with a Sinc filter, but to do a circular convolution with the Dirichlet Kernel (which is an aliased Sinc function; for the number of bins given in typical OFDM implementations this quite closely approximates a Sinc except at the band edges where the aliasing effect is more apparent). | {
"domain": "dsp.stackexchange",
"id": 11303,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "filters, digital-communications, filter-design, ofdm",
"url": null
} |
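The frequency-domain zero-padding the entry describes can be demonstrated with a toy DFT (a sketch; real implementations would use an FFT, and the 4-point "symbol" here is arbitrary). Zero-padding the middle of the spectrum before the inverse transform yields an exactly interpolated, 2x-oversampled time series:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def zero_pad_middle(X, L):
    """Insert zeros at the middle of the spectrum (the high frequencies),
    as done before the IFFT in oversampled OFDM."""
    N = len(X)
    h = N // 2
    return X[:h] + [0.0] * (L - N) + X[h:]

x = [1.0, 2.0, 0.5, -1.0]             # toy post-modulation samples
y = idft(zero_pad_middle(dft(x), 8))  # 2x oversampled time series
```

Every other sample of y reproduces x (scaled by N/L), i.e. the padding performs exact band-limited interpolation; the equivalent time-domain operation is the circular convolution with the Dirichlet kernel that the answer describes.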
equalization
prior answer:
The 1/4 wavelength transformer is used in the RF path of the signal which in this case would be a relatively narrow bandwidth compared to the carrier, as far as considerations for 1/4 wavelength transformers go. This means (to bottom line it) there is little consideration on the spectral effects of the quarter wave transformer over the expected bandwidth of interest. I detail this further below.
The formulas for the quarter wave transformer that relate reflection coefficient and insertion loss to bandwidth relative to the perfect match at center frequency $f_o$ are given below and referenced here:
$$|\Gamma| = \frac{1}{\sqrt{1+\frac{4Z_o Z_L}{(Z_L-Z_o)^2}\sec^2\left(\pi f/(2 f_o)\right)}}$$
Where:
$\Gamma$: Reflection coefficient (voltage reflected/voltage incident)
$Z_o$: Source impedance (or of first transmission line at input to transformer)
$Z_L$: Load impedance (or of second transmission line at output of transformer)
$f_o$: Center frequency where transformer provides perfect match ($\Gamma=0$)
$f/f_o$: Relative frequency related to center frequency
The insertion loss in dB is related to the reflection coefficient as follows:
$$S_{21} = 10\log_{10}(1-|\Gamma|^2)$$
I plotted these for two cases, a 2:1 mismatch and a 4:1 mismatch which provides us with an intuitive feel for how much we care about variation over frequency for practical examples such as OFDM carriers: | {
"domain": "dsp.stackexchange",
"id": 10777,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "equalization",
"url": null
} |
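The quarter-wave transformer curves described above can be reproduced numerically. A sketch of the exact single-section mismatch formula (with the $\sec^2$ frequency dependence), assuming a 50 Ω system and a 2:1 load for illustration:

```python
import math

def reflection_mag(f_over_f0, Z0, ZL):
    """|Gamma| of a quarter-wave transformer vs. normalized frequency f/f0."""
    theta = math.pi * f_over_f0 / 2.0          # electrical length of the section
    sec2 = 1.0 / math.cos(theta) ** 2
    return 1.0 / math.sqrt(1.0 + 4.0 * Z0 * ZL * sec2 / (ZL - Z0) ** 2)

def insertion_loss_db(f_over_f0, Z0, ZL):
    """S21 in dB from the reflection coefficient."""
    g = reflection_mag(f_over_f0, Z0, ZL)
    return 10.0 * math.log10(1.0 - g * g)

match = reflection_mag(1.0, 50.0, 100.0)   # essentially 0 at center frequency
edge = reflection_mag(0.9, 50.0, 100.0)    # small residual mismatch at 0.9*f0
```

At $f = f_o$ the match is perfect ($\Gamma \to 0$), and over a modest fractional bandwidth around $f_o$ the residual mismatch stays small, which is the point of the answer: the spectral effect over a narrowband signal is negligible.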
soil, moon
Title: What is the difference between lunar and earth soil I know that the moon has lunar regolith and earth has earth soil, but what is the difference between them? The single biggest difference is the lack of chemical weathering in lunar soils which are subject to physical weathering almost exclusively. If you exclude biological processes, terrestrial rocks undergo significant weathering from water and atmosphere, which the moon lacks.
For example, both earth and moon contain feldspar-rich rocks, however, clays, the result of chemically altered feldspars, are not found on the moon. Neither are oxidized minerals, as the moon has no oxygen-rich atmosphere to speak of. | {
"domain": "earthscience.stackexchange",
"id": 992,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "soil, moon",
"url": null
} |
c++, template, template-meta-programming, c++14, rags-to-riches
Title: Compile-time-fixed templated integer range This is a follow-up of an old question by @LokiAstari, modified for the current community challenge. The idea is to provide a compile-time integer range. I applied all the modifications that I proposed to Loki at the time and tried to write a class as close as possible to the standard library class std::integer_sequence:
This is a [begin, end) range instead of a [begin, end] one.
The range can be ascending or descending.
The class is templated so that it is possible to choose the integer type to use.
In order to match the standard library utilities, I also provide the template index_range which is an alias of integer_range for the type std::size_t.
Here is the implementation:
#include <cstddef>
#include <tuple>
#include <type_traits>
#include <utility>
namespace details
{
////////////////////////////////////////////////////////////
// implementation details
template<typename Int, typename, Int Begin>
struct increasing_integer_range;
template<typename Int, Int... N, Int Begin>
struct increasing_integer_range<Int, std::integer_sequence<Int, N...>, Begin>:
std::integer_sequence<Int, N+Begin...>
{};
template<typename Int, typename, Int Begin>
struct decreasing_integer_range;
template<typename Int, Int... N, Int Begin>
struct decreasing_integer_range<Int, std::integer_sequence<Int, N...>, Begin>:
std::integer_sequence<Int, Begin-N...>
{};
} | {
"domain": "codereview.stackexchange",
"id": 9880,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, template, template-meta-programming, c++14, rags-to-riches",
"url": null
} |
mars, impact, crater
Title: What explains the distribution of new impact craters on Mars? 24 minutes into this von Kármán lecture by Dr. Tamppari, the slide below is shown with symbols for transient events on Mars. Red dots represent meteoroid impacts which have occurred while MRO has been in Mars' orbit.
New impacts are almost only found at lower latitudes. And longitudinally there are two equally sized gaps at the equator. Is this because of some observational bias or is there a natural explanation? I was on the targeting team for one of the cameras that discovered most of these new impact craters. The reason for the distribution is simply because it's easiest to find "new" impact craters in the dusty regions of Mars. Often what we see is the dark-toned blast zone created by the impact in lower-resolution data, which has a large areal footprint. Then, we target those dark-toned splotches with the higher resolution cameras---which have MUCH smaller footprints---to confirm whether or not the splotches are craters. See the attached image as an example. The dark splotches really stand out in dusty regions, but in the non-dusty areas there's nothing obvious to pick out to say "hey, there's a new crater" as a human without tediously comparing images. | {
"domain": "astronomy.stackexchange",
"id": 1910,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "mars, impact, crater",
"url": null
} |
quantum-mechanics, rotation
Title: Rotation in configuration space Let $R_\psi$ be the rotation in configuration space around a vector $\bf{e}_\psi$ for an angle $\psi$.
How is that the space rotation in configuration space have:
$$(R_{\delta\psi_1}-1)(R_{\delta\psi_2}-1)-(R_{\delta\psi_2}-1)(R_{\delta\psi_1}-1)=(R_{\delta\psi_1\times\delta\psi_2}-1)$$
given that
$$\bf{r}\to\bf{r}'=R_{\delta\psi}\bf{r}=\bf{r}+\delta\psi\times\bf{r}=(1+\delta\psi\times)\bf{r}$$
It's easy to see that
$$(R_{\delta\psi_1}-1)(R_{\delta\psi_2}-1)-(R_{\delta\psi_2}-1)(R_{\delta\psi_1}-1)=(\delta\psi_1\times)(\delta\psi_2\times)-(\delta\psi_2\times)(\delta\psi_1\times)$$
But I don't see how that leads to
$$\delta\psi_1\times\delta\psi_2$$
Maybe is a simple question, but I can't seem to find the relation between the commutator and the cross product.
$$[\delta\psi_1,\delta\psi_2] \overset{?}{\to} \delta\psi_1\times\delta\psi_2$$
And does this kind of relationship apply to the general (rotation) situation? The rotation group of three-dimensional space has three generators $T^a$ given by | {
"domain": "physics.stackexchange",
"id": 17273,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, rotation",
"url": null
} |
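The missing step in the entry above is the vector identity $a\times(b\times r) - b\times(a\times r) = (a\times b)\times r$ (a consequence of the BAC-CAB expansion), which is exactly why the commutator of infinitesimal rotations reproduces the cross product. A quick numerical check with arbitrary vectors:

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

p1, p2, r = [1.0, 2.0, 3.0], [-2.0, 0.5, 4.0], [0.3, -1.0, 2.0]

# (R1-1)(R2-1) - (R2-1)(R1-1) acting on r, to first order in the angles:
lhs = sub(cross(p1, cross(p2, r)), cross(p2, cross(p1, r)))
# (R_{p1 x p2} - 1) acting on r:
rhs = cross(cross(p1, p2), r)
```

The two sides agree component by component, so the commutator of the infinitesimal-rotation operators is the infinitesimal rotation about $\delta\psi_1\times\delta\psi_2$.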
ros
boost::shared_ptr<const message_filters::NullType>&; T4 = const boost::shared_ptr<const message_filters::NullType>&; T5 = const boost::shared_ptr<const message_filters::NullType>&; T6 = const boost::shared_ptr<const message_filters::NullType>&; T7 = const boost::shared_ptr<const message_filters::NullType>&; T8 = const boost::shared_ptr<const message_filters::NullType>&]’ | {
"domain": "robotics.stackexchange",
"id": 15654,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
Here's a different approach.
We can "unroll" $\mu_a$ into a regular matrix.
To do so we rewrite each matrix $c\in \mathbb K^{m\times n}$ as a vector in $\mathbb K^{mn}$ by writing each column below the previous column.
And we construct a new matrix $\tilde a$ that is a block matrix with $a$ repeated $n$ times along its diagonal.
Now we can find the eigenvalues and eigenvectors of $\tilde a$, and afterwards we can "roll" the eigenvectors back into eigenmatrices.
I haven't really understood this approach. Could you explain that further to me?
Klaas van Aarsen
MHB Seeker
Staff member
I haven't really understood this approach. Could you explain that further to me?
Suppose we have $m=n=2$, $a=\begin{pmatrix}2&0\\0&3\end{pmatrix}$, and $c_1=\begin{pmatrix}1&0\\0&0\end{pmatrix}$.
Then we have $ac_1=\begin{pmatrix}2&0\\0&3\end{pmatrix}\begin{pmatrix}1&0\\0&0\end{pmatrix}=\begin{pmatrix}2&0\\0&0\end{pmatrix}$ don't we?
So $c_1$ is an eigenmatrix of $\mu_a$.
We can also write it as: $\tilde a \tilde c_1=\begin{pmatrix}2&0\\0&3\\&&2&0\\&&0&3\end{pmatrix}\begin{pmatrix}1\\0\\0\\0\end{pmatrix}=\begin{pmatrix}2\\0\\0\\0\end{pmatrix}$ can't we? Just by putting the columns of $c_1$ below each other, and by constructing a block matrix $\tilde a$. | {
"domain": "mathhelpboards.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9740426405416754,
"lm_q1q2_score": 0.8179366892706504,
"lm_q2_score": 0.8397339656668287,
"openwebmath_perplexity": 190.4953599076671,
"openwebmath_score": 0.9813327789306641,
"tags": null,
"url": "https://mathhelpboards.com/threads/relations-between-map-and-matrix.27739/"
} |
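The "unrolling" described above is the standard vectorization identity $\operatorname{vec}(a\,c) = (I \otimes a)\operatorname{vec}(c)$ with column stacking. A small check of the worked $2\times 2$ example (plain lists, no numpy assumed):

```python
def matmul(A, B):
    """Naive matrix product of nested-list matrices."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def vec(C):
    """Stack the columns of C below each other."""
    rows, cols = len(C), len(C[0])
    return [C[i][j] for j in range(cols) for i in range(rows)]

def block_diag(A, copies):
    """A repeated `copies` times along the diagonal (the matrix a-tilde)."""
    n = len(A)
    N = n * copies
    out = [[0.0] * N for _ in range(N)]
    for b in range(copies):
        for i in range(n):
            for j in range(n):
                out[b * n + i][b * n + j] = A[i][j]
    return out

a = [[2.0, 0.0], [0.0, 3.0]]
c1 = [[1.0, 0.0], [0.0, 0.0]]
a_tilde = block_diag(a, 2)

v = vec(c1)
left = [sum(a_tilde[i][j] * v[j] for j in range(4)) for i in range(4)]
right = vec(matmul(a, c1))
```

Both sides give the vector $(2, 0, 0, 0)^T$, i.e. rolling the eigenvector of $\tilde a$ back into a matrix recovers the eigenmatrix $c_1$ of $\mu_a$ with eigenvalue 2.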
php, design-patterns, factory-method, php7
if ($this->requestType == 'get') {
curl_setopt($this->ch, CURLOPT_HTTPHEADER, $this->requestFields);
} else {
curl_setopt($this->ch, CURLOPT_POSTFIELDS, $fields_string);
}
}
}
public function request()
{
$this->ch = curl_init();
$this->setRequestType();
$this->requestResult = curl_exec($this->ch);
return $this->result();
}
public function decode()
{
return json_decode($this->requestResult, true);
}
public function result()
{
//dump($this->requestResult);
return $this->requestResult;
}
}
class RequestFactory
{
public static function create($streamer = null, $status = null, $url = null, $customFields = null)
{
return new Request($streamer, $status, $url, $customFields);
}
}
$changeStatus = RequestFactory::create(null, null, 'getStreamers');
Main file:
use GhostZero\Tmi\Client;
use GhostZero\Tmi\ClientOptions;
use GhostZero\Tmi\Events\Twitch\SubEvent;
use GhostZero\Tmi\Events\Twitch\AnonSubGiftEvent;
use GhostZero\Tmi\Events\Twitch\AnonSubMysteryGiftEvent;
use GhostZero\Tmi\Events\Twitch\ResubEvent;
use GhostZero\Tmi\Events\Twitch\SubGiftEvent;
use GhostZero\Tmi\Events\Twitch\SubMysteryGiftEvent;
include('requestFactory.php');
$streamers = RequestFactory::create(null, null, 'getStreamers');
$streamers = $streamers->decode(); | {
"domain": "codereview.stackexchange",
"id": 41151,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, design-patterns, factory-method, php7",
"url": null
} |