c++, template, classes, stl, knapsack-problem
}
}
else if(cresult == 'B')
{
B b;
objsize = sizeof(b);
if (objsize > knapsacksize)
{
break;
}
else
{
/********************************************************
* AGAIN, a knapsack is created for no other reason than
* to do `knapsacksize += sizeof(B);`...
*******************************************************/
Knapsack<B> thisknapsack(knapsacksize,seed);
knapsacksize = thisknapsack.addtoknapsack(b);
objlist.push_back(b.getName());
/********************************************************
* ... and AGAIN it's immediately destroyed here.
*******************************************************/
}
}
/***************************************************************
* And the same thing happens again and again, for every type:
* pointless knapsacks are created, then discarded. | {
"domain": "codereview.stackexchange",
"id": 43979,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, template, classes, stl, knapsack-problem",
"url": null
} |
inorganic-chemistry, crystal-structure
Title: Determining perovskite crystal structure
I've been reading that perovskite exhibits a sort of pseudocubic crystal system; however, it can be stretched and distorted to become orthorhombic, etc.
The thing is, if I consider it as cubic, it's sort of a combination of FCC and BCC, because it's an FCC with an atom in the center. I can't find a name for this sort of arrangement.
Can anyone help me out?
At Geoff's request I'll turn my comments into an answer:
Since it is a multi-component system you need to think of a crystal structure with a basis. Not surprisingly, the basis here is $\ce{ABO3}$. In your picture from Wikipedia, start with just the blue atoms: they form the simple cubic lattice. Each blue atom shares 6 red atoms with other blues, so pick three for a given blue. Then place the green atom at the cube center to go with the blue atom at the lower right front corner. This is now your basis, and placing that basis unit at each corner of the cube (repeating to infinity) gives you the crystal structure.
To explain further: each red has two blue neighbors, so each red is 'half-owned' by a blue. For a basis, you want only whole atoms to keep your mind a bit saner, so pick three to be 'wholly-owned' by a blue, and leave the others to be owned by another blue. Taking the left lower center blue atom, I'd pick the red atoms in the +x, +y, and +z directions, then the green atom in the (x,y,z) direction to make a nice basis.
I'll also add some notes on deviations from cubic symmetry. If the green atom is at (1/2,1/2,1/2) then it all remains simple cubic. If the greens are big fat atoms compared to the blues, they may want a little more room, which leads to distortions. Simply stretching along one dimension is pretty easy, since it leaves two thirds of the blue-red pairs at the same interatomic distance while stretching the other third. This creates an orthorhombic crystal, still with the same $\ce{ABO3}$ unit as a basis, but now slightly elongated along one direction (commonly taken as z). | {
"domain": "chemistry.stackexchange",
"id": 17018,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "inorganic-chemistry, crystal-structure",
"url": null
} |
c++, linked-list, pointers
std::cin.get();
}
It is great to see that you are taking reviews seriously and trying to learn and improve. That really makes reviewing rewarding for us. I may not be the best reviewer here, but I will still try :)
I very much like your approach with smart pointers (unique_ptr). I do not think it was as trivial as JDługosz suggested. I was also wondering about all the explicit constructors in Node, but then I spotted the emplace and it clicked (before going down the rabbit hole of reading the previous reviews).
The main problem with the code is that it is big and hard to review; I had to copy-paste it into an editor. I would personally organise it a bit differently, and here is why:
Method declaration vs. body
It may seem good to first declare the class with all its methods and then define the method bodies later, perhaps because you are used to the header + source pair. I have a different opinion. Splitting it like this, especially when the bodies are small, not only makes you type more than you need to, but also makes the logic harder to see, review, check, and maintain. I can understand that the declarative part could serve as documentation (what it provides, separated from how it does it), but there are better tools for documentation, so I prefer inline bodies most of the time, as long as they are not too big.
Documentation
Documenting your code properly is very important, and there are good tools to help you, namely doxygen. Try it; I believe you will see how valuable `///` documentation comments can be (and `///<` for inline documentation). Hints like `// copy constructor` should stay as normal comments or be removed completely (such things should become obvious). But do comment the logic if it is not trivial (maybe with links, like this).
The rule of five or three or ... copy and swap
I can understand that you are still learning, but maybe it is time to actually understand what it does, how it does it and what are the alternatives. Just follow the link for full explanation and consider this:
template <class T>
class DoubleLinkedList {
public: | {
"domain": "codereview.stackexchange",
"id": 31718,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, linked-list, pointers",
"url": null
} |
quantum-state, fidelity, stabilizer-state
Title: What is a stabilizer state?
I am reading through the paper "Direct Fidelity Estimation from Few Pauli Measurements" (arXiv:1104.4695) and it mentions 'stabilizer state'.
"The number of repetitions depends on the desired
state $\rho$. In the worst case, it is $O(d)$, but in many cases of
practical interest, it is much smaller. For example, for
stabilizer states, the number of repetitions is constant,
independent of the size of the system..."
Let $\mathcal{G}_n$ denote the Pauli group on $n$ qubits. An $n$-qubit state $|\psi\rangle$ is called a stabilizer state if there exists a subgroup $S \subset \mathcal{G}_n$ such that $|S|=2^n$ and $A|\psi\rangle = |\psi\rangle$ for every $A\in S$.
For example, $(|00\rangle+|11\rangle)/\sqrt2$ is a stabilizer state, because it is a $+1$ eigenstate of the elements of the following four-element subgroup of $\mathcal{G}_2$: $\{II, XX, -YY, ZZ\}$.
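This example can be checked numerically. The sketch below (plain Python with hand-rolled helpers of my own naming, purely illustrative) verifies that the Bell state is a +1 eigenstate of each element of $\{II, XX, -YY, ZZ\}$:

```python
def kron(A, B):
    """Kronecker product of two square matrices given as nested lists."""
    n, m = len(A), len(B)
    return [[A[i][j] * B[k][l] for j in range(n) for l in range(m)]
            for i in range(n) for k in range(m)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def neg(M):
    return [[-x for x in row] for row in M]

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

bell = [2 ** -0.5, 0, 0, 2 ** -0.5]  # (|00> + |11>)/sqrt(2)

# Each element of {II, XX, -YY, ZZ} fixes the Bell state
for A in [kron(I, I), kron(X, X), neg(kron(Y, Y)), kron(Z, Z)]:
    out = matvec(A, bell)
    assert all(abs(out[i] - bell[i]) < 1e-12 for i in range(4))
```

Note that $YY$ alone sends the state to its negative, which is why the stabilizer element carries the minus sign.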
Stabilizer states have a number of interesting properties. For example, they are exactly the states reachable from $|0\dots 0\rangle$ using Clifford gates; thus, by the Gottesman-Knill theorem, any quantum computation that takes place entirely in the set of stabilizer states can be simulated efficiently on a classical computer.
The significance of stabilizer states in Direct Fidelity Estimation (DFE) lies in the fact that they are a prime example of well-conditioned states. The cost of DFE on such states is relatively low. | {
"domain": "quantumcomputing.stackexchange",
"id": 2897,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-state, fidelity, stabilizer-state",
"url": null
} |
programming-challenge, clojure
For example, the last let expression is the result of tinkering for half an hour or so until I got the information I wanted out of it. Two questions:
I've found threading macros (especially ->>) a pure joy to work with. But, is there such a thing as a threading expression that is too long?
I know that this is throw-away coding, but what can I do to improve this code to fit best practices? Are there things I'm doing wrong?
There are a few things that can be improved here:
First, I'm not sure if you neglected it here or if you actually aren't using it, but every file should start with a call to ns. This sets the namespace that the code following it will be in so other files can require it properly. If the code resides in src/my_thing/my_file, you would have
(ns my-thing.my-file)
At the top.
Second, unfortunately, that gen-primes function you took from SO isn't a good example of proper practice. Unless you have an extraordinarily good reason, don't use def (and by extension, defn) to create a locally bound symbol. def creates globals that exist even after the function has returned:
(take 0 (gen-primes)) ; Run the function just so the inner defn happens
=> ()
(type primes-step)
=> irrelevant.cr2_original$gen_primes$primes_step__4224
Note how using primes-step doesn't lead to an error. It's in scope!
To fix this, you could either just use let and define the function like was done with reinsert:
(defn gen-primes []
(let [reinsert (fn [table x prime]
(update-in table [(+ prime x)] conj prime)) | {
"domain": "codereview.stackexchange",
"id": 34092,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "programming-challenge, clojure",
"url": null
} |
## Definition
Factorization is the decomposition of an expression into a product of its factors.
The following are common factorizations.
1. For any positive integer $$n$$, $a^n-b^n = (a-b)(a^{n-1} + a^{n-2} b + \ldots + ab^{n-2} + b^{n-1} ).$ In particular, for $$n=2$$, we have $$a^2-b^2=(a-b)(a+b)$$.
2. For $n$ an odd positive integer, $a^n+b^n = (a+b)(a^{n-1} - a^{n-2} b + \ldots - ab^{n-2} + b^{n-1} ).$
3. $a^2 \pm 2ab + b^2 = (a\pm b)^2$
4. $x^3 + y^3 + z^3 - 3 xyz = (x+y+z) (x^2+y^2+z^2-xy-yz-zx)$
5. $(ax+by)^2 + (ay-bx)^2 = (a^2+b^2)(x^2+y^2)$. $(ax-by)^2 - (ay-bx)^2 = (a^2-b^2)(x^2-y^2)$.
6. $x^2 y + y^2 z + z^2 x + x^2 z + y^2 x + z^2 y +2xyz= (x+y)(y+z)(z+x)$.
Factorization often transforms an expression into a form that is more easily manipulated algebraically, that has easily recognizable solutions, and that gives rise to clearly defined relationships.
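As a quick numerical sanity check (an illustrative sketch over small integers, not a proof), identities 4 and 6 from the list above can be verified directly:

```python
# Identity 4: x^3 + y^3 + z^3 - 3xyz = (x+y+z)(x^2+y^2+z^2-xy-yz-zx)
def lhs4(x, y, z): return x**3 + y**3 + z**3 - 3*x*y*z
def rhs4(x, y, z): return (x + y + z) * (x*x + y*y + z*z - x*y - y*z - z*x)

# Identity 6: x^2 y + y^2 z + z^2 x + x^2 z + y^2 x + z^2 y + 2xyz = (x+y)(y+z)(z+x)
def lhs6(x, y, z): return x*x*y + y*y*z + z*z*x + x*x*z + y*y*x + z*z*y + 2*x*y*z
def rhs6(x, y, z): return (x + y) * (y + z) * (z + x)

# Both sides agree on a grid of small integers (positive and negative)
for x in range(-3, 4):
    for y in range(-3, 4):
        for z in range(-3, 4):
            assert lhs4(x, y, z) == rhs4(x, y, z)
            assert lhs6(x, y, z) == rhs6(x, y, z)
```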
## Worked Examples
### 1. Find all ordered pairs of positive integer solutions $(x,y)$ such that $2^x+ 1 = y^2$. | {
"domain": "brilliant.org",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9908743626108194,
"lm_q1q2_score": 0.8626525051078022,
"lm_q2_score": 0.8705972600147106,
"openwebmath_perplexity": 1674.0776156044535,
"openwebmath_score": 0.9869987964630127,
"tags": null,
"url": "https://brilliant.org/discussions/thread/advanced-factorization/"
} |
Remember, geometrically this simply says that the (signed) distance from the point $\mathbf{x}_p$ to its own class's decision boundary is greater than its distance from every other class's. With a single-layer perceptron we can only produce one decision boundary: for $n = 2$ inputs with bias $b \ne 0$ it is a line that partitions the plane into two decision regions. If a point lies in the positive region the output is one (class one); otherwise the output is $-1$ (class two). Figure 2 shows the surface in the input space that divides it into two classes according to their label. Frank Rosenblatt invented the perceptron at the Cornell Aeronautical Laboratory in 1957, and the perceptron convergence result is due to Rosenblatt (1958): if the training data are linearly separable, it is possible to find weights so that all of the training input vectors receive the correct response. The perceptron is a supervised learning algorithm that computes a decision boundary between two classes of labeled data points. A single layer cannot solve XOR, which is why an XOR network needs an extra layer; the combined units then form a piecewise linear decision surface.
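The learning rule described above can be sketched in a few lines. This is a minimal illustration with made-up (AND-like) data; the function name and dataset are assumptions for the example, not from the original sources:

```python
def train_perceptron(data, labels, epochs=20, lr=1.0):
    """Single-layer perceptron: learns one linear decision boundary w.x + b = 0."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):           # y in {-1, +1}
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1
            if pred != y:                        # misclassified: shift the boundary
                w[0] += lr * y * x[0]
                w[1] += lr * y * x[1]
                b += lr * y                      # bias moves the boundary off the origin
    return w, b

# AND-like, linearly separable toy data (made up for illustration)
data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [-1, -1, -1, 1]
w, b = train_perceptron(data, labels)
```

After training, the sign of $w \cdot x + b$ classifies each point; the bias term is what lets the boundary sit away from the origin.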
The objective of the bias is to shift the decision boundary in a particular direction by a specified distance, pushing the classifier neuron over the 0 threshold where needed. | {
"domain": "youlifereporter.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9857180656553329,
"lm_q1q2_score": 0.825657134006785,
"lm_q2_score": 0.8376199673867852,
"openwebmath_perplexity": 961.3748683124404,
"openwebmath_score": 0.6365987062454224,
"tags": null,
"url": "http://mzei.youlifereporter.it/perceptron-decision-boundary.html"
} |
quantum-electrodynamics, feynman-diagrams, complex-numbers
Title: How does one calculate the absolute value of a Feynman diagram's amplitude?
How do I obtain the absolute value of a Feynman diagram's amplitude if I do not have values for the components of this amplitude?
If the amplitude of a process such as $e^+(p_1) + e^- (p_2) \to \phi (p_3) + \phi^* (p_4) $ is given as:
$$\require{cancel} \mathcal{A}=ie^2 \frac{\bar{\nu}(p_1)(-\cancel{p_3} + \cancel{p_4}) u(p_2)}{(p_1+p_2)^2}$$
How do I express $|\mathcal{A}|$ to obtain $|\mathcal{A}|^2$?
Calculate the product $\mathcal{A}\mathcal{A}^*=|\mathcal{A}|^2$. Write out the Dirac spinors $u$ and $\nu$ explicitly in terms of energy and momentum. | {
"domain": "physics.stackexchange",
"id": 68242,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-electrodynamics, feynman-diagrams, complex-numbers",
"url": null
} |
c++, windows, embedded
//
//
//Menu:
// Enumerate all 3D programs that RTSS can run in and display them in a menu
// Fix COM port change settings
// Add lots more menu options - USB options, debug output, data upload, RTSS options(text color)
// Box position manual override toggle
//
//
//Anti-Fraud:
// Create new dynamic build/installation process in order to obscure some code
// Think about hardware/software signatures for uploading data? This probably needs more consideration on the web side
// Obscure most functionality(things that don't need to be optimized) into DLLs(requires a new build/installation process)
// (Anti-Fraud, Optimization, and Data)Instead of recording certain variables on every measurement(such as RTSS XY position) record them once at the start and once at the end
//
//
//Optimization:
// Move data update at the end of the CreateDrawingThread function into a different thread(or co-routine?)
// Calculating the position of the box before we draw it adds unnecessary delay(?)
// Make flashing square resizeable
//
//
//Organizational Issues:
// Clean up(or get rid of) static vars in SysLat_SoftwareDlg class
// Clean up the refresh function a bit more by making some init functionality conditional
// Attempt to get rid of most Windows type names like CString, BOOL, and INT(DWORD?)
// Attempt to use a single style of string instead of "string", "char*", and "CString".
// Look further into Windows style guides & straighten out all member var names with "m_" and the type, or do away with it completely
// Look into file organization for .h and .cpp files because the repo is a mess(though it's fine in VS because of "filters")
// Look into class naming schemes and organization - make sure dialog classes end in "dlg"(?)
// Check whether or not my void "initialization" methods need to return ints or bools for success/failure or if I can just leave them as void | {
"domain": "codereview.stackexchange",
"id": 40674,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, windows, embedded",
"url": null
} |
general-relativity, notation, tensor-calculus, differentiation
Title: How can you have $\frac{DA^\mu}{d\tau}$?
If a covariant derivative is given by:
$$D_\nu A^\mu=\partial_\nu A^\mu +\Gamma^\mu_{\nu \lambda} A^{\lambda}$$
Then how does $\frac{DA^\mu}{d\tau}$ make any sense? Since there are no 'differentials' in $D_\nu$ for $d\tau$ to act on.
Clarification
The parameter $\tau$ can be seen as an arbitrary parameter, although $\tau$ is often used for proper time (interpreting it as either arbitrary or proper time does not change the question). I came across the expression $\frac{D A^\mu}{d\tau}$ when looking at parallel transport, but it is also used in the geodesic equation in general relativity and probably in a lot of other places too. This is a covariant derivative along a world line (without a world line, the proper time $\tau$ would not make sense).
So you consider a curve in space time parametrized in dependence of the proper time $x^\mu(\tau)$. Then you have:
$$\frac{DA^\mu}{d\tau} = \frac{\partial A^{\mu}\big(x(\tau)\big)}{\partial \tau} + \Gamma^\mu_{\nu\lambda} A^\nu \dot x^\lambda = \dot x^\lambda A^\mu_{,\lambda} + \Gamma^{\mu}_{\nu\lambda} A^\nu \dot x^\lambda = \dot x^\lambda A^\mu_{;\lambda}.$$
For more details see Wikipedia on Covariant Derivative Along a Curve and on the Levi-Civita connection along the curve. (These articles use an index free notation, not the Ricci calculus usually employed by physicists). | {
"domain": "physics.stackexchange",
"id": 24686,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, notation, tensor-calculus, differentiation",
"url": null
} |
ros, rosmake, target-link-libraries
(In your case CMake complains because there is no target named ${PROJECT_NAME}.)
Originally posted by Stephan with karma: 1924 on 2012-03-26
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 8741,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, rosmake, target-link-libraries",
"url": null
} |
python, console
from typing import Any, Callable, Iterable, Iterator, NamedTuple

def menu_message(message: str) -> None:
print('|===>', message, '<===|')
def menu_not_found() -> None:
menu_message('Choice not found; please try again.')
def menu_view() -> None:
pass
def menu_delete() -> None:
    pass
def menu_help() -> None:
    pass
def menu_quit() -> bool:
return True
class MenuItem(NamedTuple):
index: tuple[str, ...]
name: str
callback: Callable[[], Any]
def __str__(self) -> str:
desc = f'{self.index[0]}] {self.name}'
if len(self.index) > 1:
others = ', '.join(self.index[1:])
desc += f' ({others})'
return desc
def menu_fragments(items: Iterable[MenuItem]) -> Iterator[str]:
yield 'Welcome to Micro Menu'
for item in items:
yield str(item)
def menu_text(items: Iterable[MenuItem]) -> None:
print('\n'.join(menu_fragments(items)))
def menu(dispatcher: dict[str, MenuItem], index: str) -> Callable[[], Any]:
""" Will return a function based on the index."""
item = dispatcher.get(index)
if item is None:
return menu_not_found
# Delete this once you're done debugging the program
menu_message(f'`{item.name}` function was called.')
return item.callback
def main() -> None:
items = (
MenuItem(('1', 'v'), 'View', menu_view),
MenuItem(('2',), 'Delete', menu_delete),
MenuItem(('3',), 'Help', menu_help),
MenuItem(('4', 'q'), 'Quit', menu_quit),
) | {
"domain": "codereview.stackexchange",
"id": 44226,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, console",
"url": null
} |
neural-networks, recurrent-neural-networks, hardware, implementation
Remember: I am NOT asking if such a network will in fact be very intelligent. I am merely asking if we can factually make arbitrarily large, highly interconnected neural networks, if we decide to pay Intel to do this?
The implication is that on the day some scientist is able to create general intelligence in software, we can use our hardware capabilities to grow this general intelligence to human levels and beyond. The approach you describe is called neuromorphic computing and it's quite a busy field.
IBM's TrueNorth even has spiking neurons.
The main problem with these projects is that nobody quite knows what to do with them yet.
These projects don't try to create chips that are optimised to run a neural network. That would certainly be possible, but the expensive part is the training not the running of neural networks. And for the training you need huge matrix multiplications, something GPUs are very good at already. (Google's TPU would be a chip optimised to run NNs.)
To do research on algorithms that might be implemented in the brain (we hardly know anything about that) you need flexibility, something these chips don't have. Also, the engineering challenge likely lies in providing a lot of synapses: compare the average number of synapses per neuron in TrueNorth, 256, with that of the brain, about 10,000.
So, you could create a chip designed after some neural architecture and it would be faster, more efficient, etc., but to do that you need to know which architecture works first. We know that deep learning works, so Google uses custom-made hardware to run their applications, and I could certainly imagine custom-made deep-learning hardware coming to a smartphone near you in the future. To create a neuromorphic chip for strong AI, you'd need to develop strong AI first.
"domain": "ai.stackexchange",
"id": 150,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "neural-networks, recurrent-neural-networks, hardware, implementation",
"url": null
} |
gazebo, joint, rviz, urdf, p2os
<gazebo reference="front_sonar">
<material value="Gazebo/Yellow"/>
</gazebo>
<joint name="base_front_joint" type="fixed">
<origin xyz="-0.198 0 0.208" rpy="0 0 0"/>
<parent link="base_link"/>
<child link="front_sonar"/>
</joint>
<link name="back_sonar">
<inertial>
<mass value="0.0001"/>
<origin xyz="0 0 0"/>
<inertia ixx="1" ixy="0" ixz="0"
iyy="1" iyz="0" izz="1"/>
</inertial>
<visual name="back_sonar_vis">
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry name="pioneer_geom">
<mesh filename="package://p2os_urdf/meshes/p3dx_meshes/back_sonar.stl"/>
</geometry>
<material name="SonarYellow">
<color rgba="0.715 0.583 0.210 1.0"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<box size="0 0 0"/>
</geometry>
</collision>
</link>
<gazebo reference="back_sonar">
<material value="Gazebo/Yellow"/>
</gazebo>
<joint name="base_back_joint" type="fixed">
<origin xyz="0.109 0 0.209" rpy="0 0 0"/>
<parent link="base_link"/>
<child link="back_sonar"/>
</joint> | {
"domain": "robotics.stackexchange",
"id": 12330,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gazebo, joint, rviz, urdf, p2os",
"url": null
} |
c++, algorithm, c++14
template <typename T>
void operator()(T& init_value)
{
init_value *= value;
}
};
/**
* Function object to do inplace division by constant, e.g.
* on calling `operator()` it will not return a value,
* but rather mutate its argument. The constant should be supplied during
* construction.
* @tparam Arithmetic type of the constant to divide by.
* The type doesn't need to behave like arithmetic, but it makes
* most sense that way.
* @tparam T type of the value to be divided. Defaults to `void`,
* which selects the specialization with a templated `operator()`.
*/
template <typename T = void, typename Arithmetic = int>
class inplace_divide_by
{
const Arithmetic value;
public:
/**
* Initializes constant.
* @param val value to set the constant to.
*/
inplace_divide_by(const Arithmetic& val):
value(val)
{}
/**
* @param init_value the value to mutate by
* dividing by a constant.
*/
void operator()(T& init_value)
{
init_value /= value;
}
};
template <typename Arithmetic>
class inplace_divide_by<void, Arithmetic>
{
const Arithmetic value;
public:
/**
* Initializes constant.
* @param val value to set the constant to.
*/
inplace_divide_by(const Arithmetic& val):
value(val)
{}
/**
* @param init_value the value to mutate by
* dividing by a constant.
*/
template <typename T>
void operator()(T& init_value)
{
init_value /= value;
}
};
}
#endif //AREA51_FUNCTIONS_HPP | {
"domain": "codereview.stackexchange",
"id": 26378,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, algorithm, c++14",
"url": null
} |
ros-kinetic, rtabmap
Title: Manually localise in RTAB-Map
Is there some way to manually localize on RTAB-Map?
At the moment, when we launch RTAB-Map in localization mode, we need to drive the rover around for a bit until it finds enough visual points to localize.
Is there some way to manually localize on the map so we don't have to wait (an undetermined amount of time) for the rover to figure out where it is on the map?
Originally posted by Drkstr on ROS Answers with karma: 25 on 2019-03-12
Post score: 1
Hi,
yes it is possible to drop a "guess" with RVIZ using the "2D Pose Estimate" button like in this video at 0:50. Make sure to remap the topic initialpose of rtabmap node and that the Fixed Frame in global options is map. Example (if rtabmap is started in rtabmap namespace):
cheers,
Mathieu
Originally posted by matlabbe with karma: 6409 on 2019-03-15
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Drkstr on 2019-03-25:
Awesome, thanks. | {
"domain": "robotics.stackexchange",
"id": 32634,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros-kinetic, rtabmap",
"url": null
} |
electrostatics, charge, gauss-law, density, dirac-delta-distributions
Title: Dirac delta function and volume charge density
I just got introduced to the Dirac delta function and one of the questions was to express the volume charge density $\rho({\bf r})$ of a point charge $q$ at the origin. I saw that the answer is related to the Dirac delta function as:
$$\rho({\bf r}) = q \delta^3({\bf r})$$
where $\delta^3({\bf r})$ is the 3-dimensional Dirac delta function. Why is it so?
A point charge is confined to a single point in space. Let's call it $q\left(\vec{r}\right)$. This means that the charge has a magnitude only at $\vec{r}$; at all other points it is zero.
The charge density for a point charge is given by charge per unit volume. Since there is no charge except at the position $\vec{r}$, the charge density vanishes at all points except $\vec{r}$. At $\vec{r}$ itself, the volume vanishes in the limit $V\rightarrow0$, so the charge density blows up to infinity at that point. This is the case for all point sources, not just point charges: for example, the mass density of a point mass blows up at the origin.
Does this work for all point charges? Yes, with the trivial exception of a point charge of zero magnitude, in which case there is nothing to describe. A genuine point charge has a non-zero magnitude at the point it occupies, so the description works in every case.
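As a consistency check, integrating this density over all space recovers the total charge, by the unit normalization of the delta function:

```latex
\int_{\mathbb{R}^3} \rho(\mathbf{r})\, d^3r
  = q \int_{\mathbb{R}^3} \delta^3(\mathbf{r})\, d^3r
  = q .
```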
This particular property of the charge density of a point charge is exactly identical to the definition of the Dirac-delta function, which, for the point $\vec{r}$ can be defined as
$$
\delta^3\left(\vec{r}\right) =
\begin{cases}
\infty, & \text{at the point $\vec{r}$} \\[2ex]
0, & \text{at all other points}
\end{cases}
$$ | {
"domain": "physics.stackexchange",
"id": 37259,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics, charge, gauss-law, density, dirac-delta-distributions",
"url": null
} |
A calculator to find the exact value of a coterminal angle to a given trigonometric angle. For example, 30°, −330°, and 390° are all coterminal. Coterminal angles are angles formed by different rotations but with the same initial and terminal sides; in other words, they start and stop in the same place. The reference angle is the smallest positive acute angle formed by the x-axis and the terminal side, and reference angles are always positive. For an angle θ in standard position, the reference angle is θ when the terminal side lies in Quadrant I, 180° − θ in Quadrant II, θ − 180° in Quadrant III, and 360° − θ in Quadrant IV (in radians: θ, π − θ, θ − π, and 2π − θ). To find the reference angle of an angle outside the interval 0° < θ < 360° (or 0 < θ < 2π), first find a coterminal angle inside that interval. Reference Angle Theorem: if θ is an angle in standard position that lies in a quadrant and α is its reference angle, then the trigonometric functions of θ agree with those of α up to sign. Example: for s on the unit circle at the point (−1/2, −√3/2), the terminal side lies in Quadrant III and the trig functions of s are those of a 60° reference angle, so every angle coterminal to s can be written as s + 360°k for an integer k. Worksheet tasks: find the reference angle of −310°, and find the complement and supplement for the given angles; then find and draw one positive and one negative angle coterminal with the given angle.
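The rules above translate directly into code. This is an illustrative sketch (the helper names are my own, not from the worksheet):

```python
def coterminal(angle_deg, k=1):
    """Angles differing by a whole number of full rotations (360 * k) are coterminal."""
    return angle_deg + 360 * k

def reference_angle(angle_deg):
    """Smallest positive angle between the terminal side and the x-axis."""
    theta = angle_deg % 360  # first find a coterminal angle in [0, 360)
    if theta <= 90:          # Quadrant I: theta itself
        return theta
    if theta <= 180:         # Quadrant II: 180 - theta
        return 180 - theta
    if theta <= 270:         # Quadrant III: theta - 180
        return theta - 180
    return 360 - theta       # Quadrant IV: 360 - theta

assert coterminal(30) == 390 and coterminal(-330) == 30 and coterminal(320) == 680
assert reference_angle(-310) == 50
```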
What are the possible positive and negative coterminal angles of 320°? Positive: 680°; negative: −40°. | {
"domain": "aavt.pw",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9783846634557752,
"lm_q1q2_score": 0.8276571011290494,
"lm_q2_score": 0.8459424314825853,
"openwebmath_perplexity": 1234.0799687255712,
"openwebmath_score": 0.6979809403419495,
"tags": null,
"url": "http://msgf.aavt.pw/coterminal-and-reference-angles.html"
} |
$$\pmatrix{0&\sqrt{8}\\ 0&0} \pmatrix{\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\ \frac{-1}{\sqrt{2}}&\frac{1}{\sqrt{2}}}.$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9748211612253742,
"lm_q1q2_score": 0.8398456846888094,
"lm_q2_score": 0.8615382165412809,
"openwebmath_perplexity": 289.9624538370569,
"openwebmath_score": 0.9684100151062012,
"tags": null,
"url": "https://math.stackexchange.com/questions/2225393/singular-value-decomposition-works-only-for-certain-orthonormal-eigenvectors-no"
} |
The parity of $\frac{n(n-1)}{2}$ is 4-periodic. Thus the sequence $(-1)^{\frac{n(n-1)}{2}}$ equals: $$1, \, -1, \, -1, \, 1, \, 1, \, -1, \, -1, \, 1, \, 1, \, -1, \, -1, \, 1, \cdots$$ The original series' partial sum truncated at $N$ equals $$\sum_{k=0}^{K} \left( \frac{1}{4k+1} - \frac{1}{4k+2} - \frac{1}{4k+3} + \frac{1}{4k+4}\right) + \sum_{i=1}^{N - 4K - 4}\frac{(-1)^{\frac{i(i-1)}{2}}}{4K + 4 + i},$$ where $K = \lfloor \frac{N}{4}\rfloor - 1$.
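A quick numerical sketch (illustrative only, not a proof) confirms the 4-periodic sign pattern and shows the partial sums settling down:

```python
def sign(n):
    # (-1)^(n(n-1)/2): +1 when n(n-1)/2 is even, -1 when it is odd
    return -1 if (n * (n - 1) // 2) % 2 else 1

# 4-periodic pattern 1, -1, -1, 1, 1, -1, -1, 1, ...
pattern = [sign(n) for n in range(1, 9)]
assert pattern == [1, -1, -1, 1, 1, -1, -1, 1]

# Partial sums of sum_n sign(n)/n barely move in a late window,
# consistent with convergence
s, tail = 0.0, []
for n in range(1, 200_001):
    s += sign(n) / n
    if n > 199_000:
        tail.append(s)
assert max(tail) - min(tail) < 1e-4
```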
Then by a discussion on the partial sum we can conclude that the series is convergent. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9825575178175919,
"lm_q1q2_score": 0.8250869209327222,
"lm_q2_score": 0.8397339656668287,
"openwebmath_perplexity": 288.2563889046853,
"openwebmath_score": 0.9650896191596985,
"tags": null,
"url": "https://math.stackexchange.com/questions/1375069/is-this-sum-n-1-infty-1-fracnn-12-frac1n-a-convergent-s/1375080"
} |
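The 4-periodic sign pattern and the boundedness of the partial sums claimed in the answer above can be checked numerically. A small sketch (the helper names are mine):

```python
def sign(n):
    # (-1)**(n*(n-1)//2); the parity of n*(n-1)/2 is 4-periodic in n
    return (-1) ** (n * (n - 1) // 2)

def partial_sum(N):
    return sum(sign(n) / n for n in range(1, N + 1))

# First twelve signs: should match +, -, -, +, +, -, -, +, +, -, -, +
pattern = [sign(n) for n in range(1, 13)]
```

The grouped terms decay fast, so distant partial sums agree closely, consistent with convergence.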
human-biology, human-physiology
Title: Why do the fastest runners tend to be black? If you watched the last Olympics like me, you probably also observed that most medallists in running events were black. Why is that? I discussed this with university grad friends and researchers and we only came up with hypotheses, but nobody had an actual explanation. Is it cultural, genetic, other reasons, or does nobody really know?
Update:
Sprint and distance running require different attributes for being the best, so let's separate this question in two parts: 1) Sprint (i.e. 100m) and 2) Distance running (@Forest already provided a great answer for this).
Note: I know this question can potentially bring disrespectful answers/comments, but I'm hopeful that this site and its members can answer this interesting question. Otherwise, I'll simply erase my question. It's an interesting question and one that has been asked before. NPR did a story in 2013 on this topic, but their question was a bit more focused than just "why are so many black people good runners?"
The observation that led to their story wasn't just that black people in general were over-represented among long-distance running medalists, but that Kenyans in particular were over-represented. Digging deeper, the story's investigators found that the best runners in Kenya also tended to come from the same tribal group: the Kalenjin.
I'm not going to repeat all the details in that story (which I encourage you to read), but the working answer that the investigators came up with is that there are both genetic traits and certain cultural practices that contribute to this tribe's success on the track. Unfortunately, from the point of view of someone who wants a concise answer, it is very difficult to separate and quantify the exact contributions that each genetic and cultural modification makes to the runners' successes.
Pubmed also has a number of peer-reviewed papers detailing the Kalenjin running phenomenon, but I could only find two with free full-access and neither had the promising title of "Analysis of the Kenyan distance-running phenomenon," for which you have to pay. Insert annoyed frowning face here. | {
"domain": "biology.stackexchange",
"id": 6857,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "human-biology, human-physiology",
"url": null
} |
arduino, sensors
Title: Is it possible to use HC-SR04 ultrasonic range sensor to indicate thickness of a material The HC-SR04 is directly connected to an Arduino board with the receiver end(echo) connected to analog pin 2 and the transmitter (trigger) connected to digital pin 4.
I am wondering if I can use the sensor to sense the change in saturation when an object blocks its path. The receiver and transmitter will be positioned like this
The line in the middle is supposed to be a paper. I'll be using it to see the difference between one paper and two papers when they travel through the two.
Now I'm not sure if this is possible, but the way I see it working is kind of similar to an IR LED Arduino program connected to an LED, where when one paper passes through, the light gets a little bit weaker, and with two it takes a heavier hit.
Is this possible? The short answer is "no, a sonic range sensor can't do it".
It might "work" under very controlled conditions, but relying on only the attenuation of the returned signal to determine thickness may leave you open to incorrect results due to multipath propagation effects.
The more traditional way to measure thickness with sound is called profiling. The following is excerpted from a USGS Woods Hole Science Center page on Seismic Profiling systems:
reflection profiling is accomplished by [emitting] acoustic energy in timed intervals [...]. The transmitted acoustic energy is reflected from boundaries between various layers with different acoustic impedances [i.e. the air and the paper]. Acoustic impedance is defined by the bulk density of the medium times the velocity of the sound within that medium. The reflected acoustic signal is received [by one or more microphones]. The receiver converts the reflected signal to an analog signal [which is digitized and heavily processed to determine the makeup of the materials].
Rather than just measuring the time of the incoming pulse, you'd need to analyze both the time and frequency domain of the recovered signal to solve for the acoustic properties necessary to transform your transmitted pulse into the received pulse.
So the long answer is that it can be done sonically, although a sonic range sensor is generally insufficient for this purpose. | {
"domain": "robotics.stackexchange",
"id": 243,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "arduino, sensors",
"url": null
} |
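The excerpt above defines acoustic impedance as bulk density times sound velocity, and the resulting reflection coefficient shows why almost none of the pulse enters the paper. The paper density and sound speed below are assumed round numbers for illustration only:

```python
def acoustic_impedance(density, velocity):
    # Z = bulk density * sound velocity (units: rayl = kg/(m^2*s))
    return density * velocity

def reflection_coefficient(z1, z2):
    # amplitude reflection coefficient at a boundary between media 1 and 2
    return (z2 - z1) / (z2 + z1)

z_air = acoustic_impedance(1.2, 343.0)       # air at room temperature
z_paper = acoustic_impedance(800.0, 2000.0)  # paper: assumed round values

r = reflection_coefficient(z_air, z_paper)
# r is very close to 1: nearly the whole pulse reflects at the air/paper
# boundary, so attenuation differences between one and two sheets are tiny.
```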
Just want to complement @JacobBach's answer, the work done for the force ${\bf F}$ on the path $\gamma$ is calculated as
$$W = \int_\gamma {\rm d}{\bf x}\cdot{\bf F} \tag{1}$$
If ${\bf F}$ can be written as the negative gradient of a potential field
$${\bf F} = -\nabla \phi$$
then (1) becomes
$$W = \int_\gamma {\rm d}{\bf x}\cdot{\bf F} = -\int_\gamma {\rm d}{\bf x}\cdot \nabla\phi = -\int_{\bf a}^{\bf b}{\rm d}\phi= \phi({\bf a}) - \phi({\bf b}) \tag{2}$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9796676436891864,
"lm_q1q2_score": 0.8054720838969609,
"lm_q2_score": 0.8221891261650247,
"openwebmath_perplexity": 143.26053094844895,
"openwebmath_score": 0.7059917449951172,
"tags": null,
"url": "https://math.stackexchange.com/questions/2782228/find-the-work-done-by-a-force-field"
} |
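Equation (2), that the work of a gradient force depends only on the endpoints, can be verified numerically. The potential $\phi(x,y)=x^2+y^2$ and the paths below are illustrative choices, not from the original question:

```python
import math

def phi(x, y):
    # illustrative potential (not from the original question)
    return x**2 + y**2

def force(x, y):
    # F = -grad(phi)
    return (-2.0 * x, -2.0 * y)

def work_along(path, n=2000):
    """Midpoint-rule approximation of W = int_gamma F . dx, path on t in [0, 1]."""
    total = 0.0
    for i in range(n):
        x0, y0 = path(i / n)
        x1, y1 = path((i + 1) / n)
        fx, fy = force((x0 + x1) / 2, (y0 + y1) / 2)
        total += fx * (x1 - x0) + fy * (y1 - y0)
    return total

# Quarter circle of radius 2 from a = (2, 0) to b = (0, 2): W = phi(a) - phi(b) = 0
arc = lambda t: (2 * math.cos(math.pi * t / 2), 2 * math.sin(math.pi * t / 2))
W_arc = work_along(arc)

# Straight line from (1, 0) to (0, 2): W = phi(1, 0) - phi(0, 2) = 1 - 4 = -3
line = lambda t: (1 - t, 2 * t)
W_line = work_along(line)
```

Both numerical results match $\phi({\bf a}) - \phi({\bf b})$, independent of the path shape.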
linux
Title: What should pwd be replaced with?
In the page
http://wiki.ros.org/image_transport/Tutorials/PublishingImages
$ ln -s `pwd`/image_common/image_transport/tutorial/ ./src/image_transport_tutorial
command appears. Here 'pwd' should be replaced with something else. What could it be?
Thanks
Originally posted by jbpark03 on ROS Answers with karma: 31 on 2016-03-08
Post score: 0
I don't think you need to replace pwd in the command you referred to. On Ubuntu (and other Linux distributions, I assume), when a command is surrounded by the backquote "`" symbol, the output of that command is substituted in.
So if you follow the tutorial line-by-line, you should be at ~/image_transport_ws/, which is what the pwd command will return and what the tutorial you linked to expects.
Originally posted by 130s with karma: 10937 on 2016-03-08
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 24039,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "linux",
"url": null
} |
Complete worksheet on the First Fundamental Theorem of Calculus. Watch Khan Academy videos on: the fundamental theorem of calculus and accumulation functions (8 min); functions defined by definite integrals (accumulation functions) (4 min); worked example: finding a derivative with the fundamental theorem of calculus (3 min); and the Mean Value Theorem. The Fundamental Theorem tells us how to compute the derivative of functions of the form $\int_a^x f(t)\,dt$. The first part of the fundamental theorem of calculus tells us that, if we define $g(x)$ as the definite integral of the function $f$ from a constant $a$ up to $x$, then $g$ is an antiderivative of $f$; in other words, $g'(x) = f(x)$. But we must do so with some care: instead of having an $x$ as the upper bound, the upper bound might be $\sin(x)$, which might start making you think about the chain rule. Part 1 says that the integral of $f(x)\,dx$ from $x=a$ to $x=b$ is equal to $F(b) - F(a)$, where $F(x)$ is the antiderivative of $f(x)$ ($F'(x) = f(x)$). See what the fundamental theorem of calculus looks like in action; the fundamental theorem of calculus exercise appears under the Integral Calculus Math Mission on Khan Academy.
This exercise shows the connection between differential calculus | {
"domain": "grecotel.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9706877684006775,
"lm_q1q2_score": 0.8002951053524482,
"lm_q2_score": 0.8244619242200082,
"openwebmath_perplexity": 1200.6522731897908,
"openwebmath_score": 0.7867779731750488,
"tags": null,
"url": "http://dreams.grecotel.com/lcom02/a15017-fundamental-theorem-of-calculus-part-1-khan-academy"
} |
and the bond price into adjacent cells (e.g., A1 through A3). Effective annual yield is a measure of the actual or true return on an investment; it is also known as the annual effective yield. Nominal yield, or the coupon rate, is the stated interest rate of the bond. Since the effective yield considers the compounding effect, it will always be greater than the nominal yield. The effective yield can be calculated using the following formula: $i = [1 + (r/n)]^n - 1$, where $r$ is the nominal annual interest rate and $n$ is the number of compounding periods per year. Step 1: identify the nominal rate $r$. Step 2: figure out the number of compounding periods during a year, denoted by $n$. A zero coupon bond is a bond that does not pay dividends (coupons) per period, but instead is sold at a discount from the face value; the zero coupon bond effective yield formula takes the compounding effect into account when calculating the rate of return. The Yield to Maturity is actually the Internal Rate of Return (IRR) on a bond; recall that when Schultz issued its bonds to yield 10%, it received only $92,278. Calculate the effective maturity rate of a bond by dividing the average annual yield of the bond by the average annual investment; for example, the effective maturity is $17/$86 = 0.198, or 19.8%. Example: if a firm's annual interest rate is 10% compounded monthly, what is the effective yield?
The formula (as provided by Microsoft) to determine yield is: | {
"domain": "feroceironacademy.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9783846672373524,
"lm_q1q2_score": 0.8021710686278878,
"lm_q2_score": 0.8198933359135361,
"openwebmath_perplexity": 1983.7590579005662,
"openwebmath_score": 0.5975272059440613,
"tags": null,
"url": "http://www.feroceironacademy.com/jealousy-sheet-trafp/dc1410-effective-yield-formula"
} |
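The formula in the entry above, $i = [1 + (r/n)]^n - 1$, is easy to evaluate; here is the entry's own example of a 10% nominal rate compounded monthly:

```python
def effective_annual_yield(r, n):
    # i = [1 + (r/n)]**n - 1 for nominal rate r compounded n times per year
    return (1 + r / n) ** n - 1

i = effective_annual_yield(0.10, 12)  # 10% nominal, monthly compounding
# i comes out near 0.1047, i.e. roughly 10.47%, above the 10% nominal rate
```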
# Is $2^{16} = 65536$ the only power of $2$ that has no digit which is a power of $2$ in base-$10$?
I was watching this video on YouTube where it is told (at 6:26) that $2^{16} = 65536$ has no powers of $2$ in it when represented in base-$10$. Then he - I think as a joke - says "Go on, find another power of $2$ that doesn't have a power of $2$ digit within it. I dare you!"
So I did. :) I wrote this little Python program to check for this kind of numbers:
toThePower = 0
possiblyNoPower = True
while True:
    number = str(2**toThePower)
    for digit in number:
        if int(digit) in [1,2,4,8]:
            possiblyNoPower = False
            print('Not ' + number)
            break
    if possiblyNoPower:
        print(number + ' has no digit that is a power of 2.')
    toThePower += 1
    possiblyNoPower = True
Sidenote: I could use the programming language Julia instead of Python, which may be much quicker, but I already checked for really big numbers, and such a program (and brute force in general) will never prove that there are no other powers of $2$ having this property. It might disprove it, but I think the chance is really, really small.
I checked all the way to $2^{23826}$, which is a 7173 digit number, but no luck. Since the numbers are getting more and more digits with bigger powers of $2$, the chance of a number having no digit that is a power of $2$ becomes smaller and smaller.
I made a plot of $\frac{\text{number of digits being a power of 2}}{\text{total number of digits}}$ versus the $n$th power of $2$ on a logarithmic scale.
This graph is wrong! See the edit. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9759464415087019,
"lm_q1q2_score": 0.8068264294097812,
"lm_q2_score": 0.8267117898012104,
"openwebmath_perplexity": 182.09035932107753,
"openwebmath_score": 0.8125035166740417,
"tags": null,
"url": "https://math.stackexchange.com/questions/1873371/is-216-65536-the-only-power-of-2-that-has-no-digit-which-is-a-power-of"
} |
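The closing intuition of the question above, that the chance of avoiding the digits 1, 2, 4 and 8 shrinks as the numbers grow, can be quantified with a rough independence heuristic: a $d$-digit number with "uniform" digits avoids those four with probability about $(6/10)^d$. A sketch (heuristic only, not a proof):

```python
import math

def heuristic_expected_examples(max_exp=10000):
    # Sum over exponents n > 16 of the heuristic chance that 2**n
    # avoids the digits {1, 2, 4, 8}; leading-digit effects are ignored.
    total = 0.0
    for n in range(17, max_exp):
        d = int(n * math.log10(2)) + 1  # number of decimal digits of 2**n
        total += 0.6 ** d
    return total

expected = heuristic_expected_examples()
# The whole tail beyond 2**16 contributes well under one expected example,
# which supports (but of course does not prove) the conjecture.
```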
I would like to say that the problem "given a group $G$, find groups $H$ such that $G=[H,H]$" has been studied in a more general context, which may be found with the key words "normal embedding of subgroups", and the above-mentioned paper of Heineken is a good start.
Also it may be worth-mentioning that by a result of Allenby R.B.J.T. Allenby, Normal subgroups contained in Frattini subgroups are Frattini subgroups, Proc. Amer. Math. Soc, Vol. 78, No. 3, 1980, 318-
if $N$ is a normal subgroup of a finite group $G$ which is contained in the Frattini subgroup of $G$, then $N=\Phi(U)$ for some finite group $U$.
Of course, for the class of finite $p$-groups the Frattini subgroup is the verbal subgroup generated by the words $x_1^p, [x_1,x_2]$.
- | {
"domain": "mathoverflow.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9732407168145568,
"lm_q1q2_score": 0.800187940866489,
"lm_q2_score": 0.8221891327004133,
"openwebmath_perplexity": 118.44454925218412,
"openwebmath_score": 0.8962303400039673,
"tags": null,
"url": "http://mathoverflow.net/questions/85540/realizing-groups-as-commutator-subgroups?sort=oldest"
} |
fft, python, power-spectral-density, numpy, parseval
The weird thing is that the PSD estimated using the first method is in rough accordance with Parseval's theorem while the second one is not.
Any suggestions of what the correct method is? Or an improved version is needed?
I append here a piece of code to reproduce the figures I just showed using a timeseries corresponding to fractional brownian motion ( you will need to pip install fbm)
import datetime

import numpy as np
import pandas as pd
from fbm import fbm

# create a synthetic timeseries using a fractional Brownian motion (in case you don't have fbm -> pip install fbm)
start_time = datetime.datetime.now()
# Create index for timeseries
end_time = datetime.datetime.now()+ pd.Timedelta('1H')
freq = '10ms'
index = pd.date_range(
start = start_time,
end = end_time,
freq = freq
)
# Generate a fBm realization
fbm_sample = fbm(n=len(index), hurst=0.75, length=1, method='daviesharte')
# Create a dataframe to resample the timeseries.
df_b = pd.DataFrame({'DateTime': index, 'Br':fbm_sample[:-1]}).set_index('DateTime')
#Original version of timeseries
y = df_b.Br
# Resample the synthetic timeseries
x = df_b.Br.resample(str(int(resolution))+"ms").mean()
# Estimate the sampling rate
dtx = (x.dropna().index.to_series().diff()/np.timedelta64(1, 's')).median()
dty = (y.dropna().index.to_series().diff()/np.timedelta64(1, 's')).median()
# Estimate PSD using first method
resy = TracePSD_1st(y, dty)
resx = TracePSD_1st(x, dtx) | {
"domain": "dsp.stackexchange",
"id": 11388,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fft, python, power-spectral-density, numpy, parseval",
"url": null
} |
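TracePSD_1st is not defined in the excerpt above. For reference, a one-sided periodogram normalized to satisfy Parseval's theorem (the sum of the PSD times the frequency spacing equals the mean square of the signal) might look like the sketch below; this is a standard estimator, not necessarily the questioner's function:

```python
import numpy as np

def trace_psd(y, dt):
    """One-sided periodogram of a real signal y sampled at interval dt.

    Normalized so that psd.sum() * df == np.mean(y**2) (Parseval).
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    Y = np.fft.rfft(y)
    psd = 2.0 * dt / n * np.abs(Y) ** 2
    psd[0] /= 2.0                  # DC bin is not doubled
    if n % 2 == 0:
        psd[-1] /= 2.0             # Nyquist bin is not doubled either
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs, psd

# Parseval check on white noise
rng = np.random.default_rng(0)
y = rng.standard_normal(1000)
freqs, psd = trace_psd(y, dt=0.01)
df = freqs[1] - freqs[0]
# psd.sum() * df should match np.mean(y**2)
```

Comparing an estimator against this identity is a quick way to tell which of two PSD implementations is normalized correctly.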
5
As for your main question, I recommend this short survey by Martin Grohe. Are the queries that are needed in practice usually simple enough that there is no need for a stronger language? I'd say this holds most of the time, given the fair amount of extensions added to common query languages (transitive closure, arithmetic operators, counting, etc.). This ...
5
Excellent question, and since you referred to us ("jOOQ developers", which I am - working for the company behind jOOQ), I feel qualified to give a partial answer. A bit of historic context first Since the very beginning of software, there had been: Theory (which is what "Computer Science", i.e. this Stack Exchange subsite is about) Practice (more like ...
5
First, terminologically, "axiom" and "inference rule" are often used as roughly interchangeable as they tend to serve similar purposes. There are technical distinctions, which themselves can vary slightly, but outside the study of formal logic or related systems, these distinctions aren't that important. In the context of formal logic, an axiom is a formula ...
4
The set of all words over some finite alphabet together with concatenation forms the free monoid $(\Sigma^*, \cdot)$. Therefore, the whole field of formal languages can be viewed through the algebraic lens, and it is sometimes taught like this. In return, considerations on formal languages have yielded the Earley parser, which can be extended to parse on ...
4
When an SQL statement is turned into an execution plan, several optimization techniques are used. The use of indices allow to efficiently (without a full scan) select tuples that agree with a selection condition. Another technique in use is semantic optimization, id est, to turn a query into an equivalent one with better behaviour. To do so, identities of ...
4 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9706877684006774,
"lm_q1q2_score": 0.8380836734352941,
"lm_q2_score": 0.8633916082162402,
"openwebmath_perplexity": 1103.278645373994,
"openwebmath_score": 0.547410249710083,
"tags": null,
"url": "https://cs.stackexchange.com/tags/database-theory/hot"
} |
special-relativity, metric-tensor, inertial-frames, lorentz-symmetry
for some $\Lambda \in O(1,3)$.
If we admit different origins we obtain the so-called Poincaré transformations
$$x'^a = c^a+ \sum_{j=1}^n {\Lambda^a}_j x^j \:.$$
When viewing Lorentz transformations as transformation of coordinates, their formal linearity does not play a relevant physical role, since it only reflects the arbitrary initial choice of the same origin for both reference frames. However, these transformations are also transformations of bases (3') in the space of translations (the tangent space), in this case linearity is natural because it reflects the natural linear space structure of the translations. | {
"domain": "physics.stackexchange",
"id": 15778,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "special-relativity, metric-tensor, inertial-frames, lorentz-symmetry",
"url": null
} |
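Membership in $O(1,3)$, as in the fragment above, means $\Lambda^T \eta \Lambda = \eta$ for the Minkowski metric $\eta = \mathrm{diag}(1,-1,-1,-1)$. A quick numeric check for a boost along $x$ (the speed $\beta = 0.6$ is an arbitrary choice):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, signature (+,-,-,-)

beta = 0.6                              # arbitrary boost speed, |beta| < 1
gamma = 1.0 / np.sqrt(1.0 - beta**2)

L = np.eye(4)                           # boost along the x-axis
L[0, 0] = L[1, 1] = gamma
L[0, 1] = L[1, 0] = -gamma * beta

check = L.T @ eta @ L                   # should reproduce eta
```

The same check passes for any Poincaré transformation $x' = c + \Lambda x$, since the constant offset drops out of interval differences.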
quantum-mechanics, operators, hilbert-space, vectors, tensor-calculus
&=\sum_{j} \left(\hat{\vec{x}}(v)^j \otimes \sum_i R_{ij}e_i\right)\\
&= (1 \otimes R)\left(\sum_{j} \hat{\vec{x}}^j(v) \otimes e_j \right)\\
&=(1 \otimes R)\left(\varphi(\hat{\vec{x}}(v)) \right)
\end{align} | {
"domain": "physics.stackexchange",
"id": 39557,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, operators, hilbert-space, vectors, tensor-calculus",
"url": null
} |
    :param grid: the two-dimensional integer grid
    :param run_len: the product run-length
    :return: the maximum run_len long product in the vertical direction from grid
    """
    n, m = len(grid), len(grid[0])
    best = 0
    for i in range(n - run_len + 1):
        for j in range(m):
            product = 1
            for k in range(run_len):
                product *= grid[i+k][j]
            best = max(best, product)
    return best


def diagonal_natural(grid: List[List[int]], run_len: int) -> int:
    """ Find the maximal run_len long product in the 'natural' diagonal direction
    The 'natural' diagonal is defined as top-left to bottom-right when viewed in the C-array style convention.
    :param grid: the two-dimensional integer grid
    :param run_len: the product run-length
    :return: the maximum run_len long product in the natural diagonal direction from grid
    """
    n, m = len(grid), len(grid[0])
    best = 0
    for i in range(n - run_len + 1):
        for j in range(m - run_len + 1):
            product = 1
            for k in range(run_len):
                product *= grid[i+k][j+k]
            best = max(best, product)
    return best


def diagonal_reverse(grid: List[List[int]], run_len: int) -> int:
    """ Find the maximal run_len long product in the 'reverse' diagonal direction
    The 'reverse' diagonal is defined as bottom-left to top-right when viewed in the C-array style convention.
    :param grid: the two-dimensional integer grid
    :param run_len: the product run-length
    :return: the maximum run_len long product in the reverse diagonal direction from grid
    """
    n, m = len(grid), len(grid[0])
    best = 0
    for i in range(run_len - 1, n):
        for j in range(m - run_len + 1):
            product = 1
            for k in range(run_len):
                product *= grid[i-k][j+k]
            best = max(best, product)
    return best


def solve():
    """ Compute the answer to Project Euler's problem #11 """
"domain": "beerbaronbill.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9744347816221828,
"lm_q1q2_score": 0.8243157265943418,
"lm_q2_score": 0.8459424295406087,
"openwebmath_perplexity": 419.74619518113815,
"openwebmath_score": 0.3893049657344818,
"tags": null,
"url": "https://euler.beerbaronbill.com/en/latest/solutions/11.html"
} |
quantum-field-theory, definition, greens-functions, correlation-functions, propagator
Of course, the equations you cite are just definitions. Eq. 2 is fine, it's just that anything physical will end up being written as some Lorentz-invariant combination of those objects. With the definition given in Eq. 1, we need not worry about such things because $\langle[\phi(y),\phi(x)]\rangle = 0$ outside the light cone, meaning that the sign of $x^0-y^0$ cannot be changed by Lorentz transformation and so everyone will agree on whether the propagator vanishes or not. | {
"domain": "physics.stackexchange",
"id": 98584,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-field-theory, definition, greens-functions, correlation-functions, propagator",
"url": null
} |
You can reduce the labour a little by writing
$\dfrac{1}{x(x-1)^3(x-2)^2} = \dfrac{(1+x-x^2)+x(x-1)}{x(x-1)^3(x-2)^2}$
$=\dfrac{x(x-2)^2-(x-1)^3}{x(x-1)^3(x-2)^2} + \dfrac{1}{(x-1)^2(x-2)^2}$
$=\dfrac{1}{(x-1)^3} - \dfrac{1}{x(x-2)^2}+\dfrac{1}{(x-1)^2(x-2)^2}$
$=\dfrac{1}{(x-1)^3}- \dfrac{1}{x(x-2)^2}+ \dfrac{1}{(x-1)^2}+\dfrac{1}{(x-2)^2}+2 \left[\dfrac{1}{x-1} - \dfrac{1}{x-2} \right]$
If you wish you can further decompose $\dfrac{1}{x(x-2)^2}= \dfrac{1}{2}\cdot\dfrac{1}{(x-2)^2}+\dfrac{1}{4} \left[\dfrac{1}{x} - \dfrac{1}{x-2} \right]$
This expression can be readily integrated. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9820137895115187,
"lm_q1q2_score": 0.8118423859218797,
"lm_q2_score": 0.8267117983401363,
"openwebmath_perplexity": 616.7489526207271,
"openwebmath_score": 0.6202499866485596,
"tags": null,
"url": "https://math.stackexchange.com/questions/2424265/evaluate-int-fracdxxx-13x-22"
} |
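The final decomposition in the answer above can be sanity-checked numerically at a few sample points away from the poles at $x = 0, 1, 2$:

```python
def lhs(x):
    return 1.0 / (x * (x - 1)**3 * (x - 2)**2)

def rhs(x):
    # the decomposed form from the answer
    return (1.0 / (x - 1)**3
            - 1.0 / (x * (x - 2)**2)
            + 1.0 / (x - 1)**2
            + 1.0 / (x - 2)**2
            + 2.0 * (1.0 / (x - 1) - 1.0 / (x - 2)))

# Agreement at several non-pole points supports the identity
samples = [3.0, -1.5, 0.5, 10.0]
max_err = max(abs(lhs(x) - rhs(x)) for x in samples)
```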
electric-circuits, electricity
Title: Will a generator turn a lamp on in open circuit if the lamp is connected to the earth which is greedy for electrons? Consider the following thought experiment.
One end of a lamp is connected to one terminal of a power generator. The other end of the lamp is connected to the ground via a copper conductor as follows. Assume that the generator is placed quite far above the ground.
Physics textbooks say that our earth is a giant capacitor that is greedy for electrons. It is also said that electric current flows if there is a closed circuit.
Question
In this scenario, will the generator turn the lamp on?
Attempt
I am not sure whether the generator should turn the lamp on or not.
If the generator turns on the lamp, there are electrons flowing from the generator to the earth. But how can the generator produce electrons from mechanical energy? It seems it violates the conservation of charges. So it should not turn the lamp on.
But borrowing an analogy from our electric outlet: if we touch the live (hot) wire without wearing high-impedance shoes, we will get an electric shock even if there is no closed circuit (as far as I know). Here is what happens if the $300V$ wind generator tries to create a current in the wire.
At first some electrons will move e.g. to the right in the wire above.
However this leaves one end of the wire positively charged and the electrons are attracted back, within a short time the $300V$ would not be able to move any more electrons and the current stops.
So a closed circuit is necessary for a current to flow. | {
"domain": "physics.stackexchange",
"id": 84197,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electric-circuits, electricity",
"url": null
} |
In this paper, a new generalization of the mean value theorem is first established. Based on Rolle's theorem, a simple proof is provided to guarantee the correctness of such a generalization. The mean value theorem (plural: mean value theorems) is, in mathematics, any of various theorems that saliently concern mean values (e.g. 1964, J. H. Bramble, L. E. Payne, "Some Mean Value Theorems in Electrostatics", Journal of the Society for Industrial and Applied Mathematics, Volume 12, page 105: several mean value theorems in the theory of elasticity have appeared in the recent literature). First, let's start with a special case of the Mean Value Theorem, called Rolle's theorem. The theorem itself is the proposition that, if a function $f(x)$ is continuous on the closed interval $[a,b]$ and differentiable on the open interval $(a,b)$, then there exists $x_0$, $a < x_0 < b$, such that $f(b) - f(a) = (b-a)f'(x_0)$. For problems such as $g(t) = 2t - t^2 - t^3$ on $[-2, 1]$, determine all the numbers $c$ which satisfy the conclusion of the mean value theorem for the given function and interval. Using the Mean Value Theorem, $\exists b \in (x, x + h)$ and $\exists a \in (x - h, x)$. Reference: J. Tong, "A Generalization of the Mean Value Theorem for | {
"domain": "truongphatsafety.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9688561667674653,
"lm_q1q2_score": 0.8273186311757473,
"lm_q2_score": 0.853912747375134,
"openwebmath_perplexity": 419.0139443976686,
"openwebmath_score": 0.7743996977806091,
"tags": null,
"url": "http://truongphatsafety.com/chromatics-music-cnh/language-mean-value-theorem-0d3ae5"
} |
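As a concrete instance of the proposition quoted above (for $f$ continuous on $[a,b]$ and differentiable on $(a,b)$, some $x_0$ satisfies $f(b)-f(a)=(b-a)f'(x_0)$), take $f(x)=x^3$ on $[0,2]$; the guaranteed point has a closed form:

```python
import math

def f(x):
    return x**3

def fprime(x):
    return 3 * x**2

a, b = 0.0, 2.0
secant_slope = (f(b) - f(a)) / (b - a)   # (8 - 0) / 2 = 4

# f'(x0) = 3*x0**2 = 4 has the solution x0 = sqrt(4/3) inside (0, 2)
x0 = math.sqrt(secant_slope / 3)         # about 1.1547
```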
ros
[rosmake-1] Starting >>> sensor_msgs [ make ]
[rosmake-3] Finished <<< bondcpp ROS_NOBUILD in package bondcpp
[rosmake-2] Finished <<< opencv2 ROS_NOBUILD in package opencv2
[rosmake-3] Starting >>> nodelet [ make ]
[rosmake-2] Starting >>> rostest [ make ]
[rosmake-0] Finished <<< dynamic_reconfigure ROS_NOBUILD in package dynamic_reconfigure
[rosmake-3] Finished <<< nodelet ROS_NOBUILD in package nodelet
[rosmake-0] Starting >>> ccny_g2o [ make ]
[rosmake-3] Starting >>> nodelet_topic_tools [ make ]
[rosmake-3] Finished <<< nodelet_topic_tools ROS_NOBUILD in package nodelet_topic_tools
[rosmake-1] Finished <<< sensor_msgs No Makefile in package sensor_msgs
[rosmake-3] Starting >>> rosbag [ make ]
[rosmake-2] Finished <<< rostest No Makefile in package rostest
[rosmake-1] Starting >>> cv_bridge [ make ]
[rosmake-1] Finished <<< cv_bridge ROS_NOBUILD in package cv_bridge
[rosmake-2] Starting >>> opencv_tests [ make ]
[rosmake-2] Finished <<< opencv_tests ROS_NOBUILD in package opencv_tests
[rosmake-3] Finished <<< rosbag No Makefile in package rosbag
[rosmake-1] Starting >>> bullet [ make ]
[rosmake-2] Starting >>> angles [ make ]
[rosmake-3] Starting >>> roswtf [ make ]
[rosmake-2] Finished <<< angles ROS_NOBUILD in package angles | {
"domain": "robotics.stackexchange",
"id": 20218,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros",
"url": null
} |
ros, rosjava
Title: Need help with Cross Compiling libraries for ros android ndk
As described here ros_android_ndk, I am trying to cross compile some libraries for ros android ndk.
My first problem is that when I add new packages with rosfusion.py in the ndk.rosinstall and they fail to compile, I can't get them out anymore. Even if I delete the new dependencies or comment them out, the ./do_docker.sh fails because of the new dependencies. Am I missing a step here? (Solved, see answers)
I need 2 packages that aren't included yet:
depthimage_to_laserscan and map_server.
Sadly map_server is commented out. Is there any workaround?
For depthimage_to_laserscan the kinetic version fails as well, but the indigo release seems to cross-compile correctly because it doesn't depend on opencv3.
Now that it has compiled, how do I use it?
Do I just include the header file? Do I need to copy the code, or do I have to declare it in the Android.mk file?
Originally posted by manster2209 on ROS Answers with karma: 20 on 2018-08-29
Post score: 0
Regarding your second question: if map_server is commented out it's most probably because it has some issues compiling. If you really need it you might try to fix those and submit a PR; it will be most welcome.
On your third question: if you have added a new package and it has compiled you can now use it as any other ros package and you will need to add the binary to the Android.mk list so it gets statically linked.
Hope this was useful.
Originally posted by Ernesto Corbellini with karma: 101 on 2018-08-29
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 31664,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, rosjava",
"url": null
} |
primes, rust
Title: Calculate all the prime numbers between two given numbers I've made an application that calculates all the prime numbers between two given numbers and prints them into a .txt document... anything I can improve?
use std::io;
use std::fs::{OpenOptions};
use std::io::{Write, BufWriter};
fn main() {
loop{
let mut format = 1;
let mut input = String::new();
println!("Say a start for the prime loop! ");
io::stdin().read_line(&mut input).unwrap();
let start: u128 = input.trim().parse().unwrap();
let mut input = String::new();
println!("Say an end for the prime loop! ");
io::stdin().read_line(&mut input).unwrap();
let end: u128 = input.trim().parse().unwrap();
let path = "path/to/file.txt";
let f = OpenOptions::new()
.write(true)
.open(path)
.expect("Could not open file");
let mut f = BufWriter::new(f);
for i in start..end{
if prime(i) == true{
f.write_all(i.to_string().as_bytes()).expect("unable to write to file");
f.write_all(b"\t").expect("unable to write to file");
format += 1;
}
if format == 10{
f.write_all(b"\n").expect("unable to write to file");
format = 0;
}
}
}
}
fn prime(x: u128) -> bool { | {
"domain": "codereview.stackexchange",
"id": 40260,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "primes, rust",
"url": null
} |
newtonian-gravity, symmetry, potential
$$F = \frac{-4πG\rho r}{3} $$
This tells us the field at any point in the sphere. Replacing r with a tells us the field at the surface of the sphere.
Anyway, we want the potential at the surface of the sphere, which is the negative of the work done per unit mass by the field when moving from the center of the sphere to its surface, i.e. from $r=0$ to $r=a$ (assuming that our reference point is set at the center of the sphere, so that the center has zero potential energy). This is minus the integral of the field along this distance, since the field is the force per unit mass. Thus the potential at the surface of the sphere is:
$$-\int_0^a F\,dr =\int_0^a \frac{4πG\rho r\,dr}{3}$$
which is:
$$ \frac{4πG\rho a^2}{6} $$
The volume of the entire sphere is $ V = \frac 43 \pi a^3$, so that $\frac {4\pi a^2}{6} = \frac {V}{2a}$. Since $M$, the mass enclosed by the sphere is $\rho V$, where this time V is the volume of the entire sphere:
$$ \frac{4πG\rho a^2}{6} = \frac{G\rho V}{2a} = \frac {GM}{2a}$$
So the work done in moving from the center to the surface of a ball is given by $\frac {GM}{2a}$, provided the ball is uniform. To get Feynman's equation just change the reference point for the potential energy, so that the potential energy at the center is no longer $0$.
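As a numeric sanity check of the $\frac{GM}{2a}$ result, the two expressions can be compared directly (the values of $G$, $\rho$ and $a$ below are arbitrary choices of mine, not from the original answer):

```python
import math

G, rho, a = 6.674e-11, 5500.0, 6.4e6   # SI units, arbitrary round values

# Work per unit mass from centre to surface, in the closed form derived above:
potential = 4 * math.pi * G * rho * a**2 / 6

# Same quantity expressed through the total mass M = rho * (4/3) * pi * a^3:
M = rho * (4.0 / 3.0) * math.pi * a**3
assert math.isclose(potential, G * M / (2 * a))
```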
Edit: | {
"domain": "physics.stackexchange",
"id": 54908,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-gravity, symmetry, potential",
"url": null
} |
python, python-3.x, random, playing-cards, bitcoin
def check_hex_is_in_range(value):
"""Check 0 <= value <= 52!/(52-31)! - 1
As Python's arbitrary precision integers cannot be represented
in hexadecimal as negative without using a minus sign, which has
already been precluded, check only the upper limit.
"""
if value > upperLimit:
message = (
"The hexadecimal value is too large to be represented by 31 cards.\n"
"The maximum valid value is 52!/(52-31)! - 1\n"
"In hexadecimal this maximum is\n"
"114882682E46B11EADE9F57C1E3E0BBD47FFFFFFF"
)
raise HexValueTooLargeError(message)
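The snippet references `upperLimit` and `HexValueTooLargeError` without showing their definitions. A minimal reconstruction of what they might look like (an assumption of mine, not the original author's code), using `math.perm` for 52!/(52-31)!:

```python
import math

class HexValueTooLargeError(ValueError):
    """Raised when a hex value is too large for 31 of 52 cards (assumed name)."""

# The number of ordered 31-card draws from a 52-card deck is 52!/(52-31)!,
# so the largest representable value is one less than that.
upperLimit = math.perm(52, 31) - 1
```

Since the product 52·51·…·22 contains many factors of two, `upperLimit` ends in a long run of hexadecimal F digits, consistent with the constant quoted in the error message.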
def hex_representation(listOfCards):
"""Return a hexadecimal string defined by the 31 cards. | {
"domain": "codereview.stackexchange",
"id": 6235,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, random, playing-cards, bitcoin",
"url": null
} |
energy, particle-physics, mass, standard-model, strong-force
$$
The rest mass of the composite system is equal to this total energy, divided by $c^2$:
$$
M = \gamma_1 m_1 + \gamma_2 m_2.
$$
This statement should surprise you if you don't know relativity, but it is a standard part of the theory and I am not going to prove it. The main thing to note is that when the particles are moving in the reference frame with zero total momentum, then
$$
M > m_1 + m_2.
$$
One can notice that
$$
M - (m_1 + m_2) = \frac{K_1 + K_2}{c^2}
$$
where $K_1$ and $K_2$ are the kinetic energies. So you can say that the extra rest mass of the composite system is owing to the kinetic energy of the parts of the system (in the reference frame of zero total momentum).
It is similar with the rest mass of composite systems such as protons. Now it is the quarks and gluons which have kinetic energy, and it turns out that they have a lot of kinetic energy. For the gluons their entire energy can be called kinetic energy (since they have zero rest energy) and for the quarks inside a proton the Lorentz factors are large so their kinetic energy is large compared to their rest energy.
If you had a ball made entirely of photons moving in different directions then there would be plenty of kinetic energy and therefore a non-zero rest mass of the entire ball, even though each photon inside it has zero rest mass. | {
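A quick numeric illustration of these relations, in units where $c = 1$ (the particular masses and speed below are arbitrary choices of mine):

```python
import math

m1 = m2 = 1.0          # rest masses of the two parts
v = 0.8                # each moves at 0.8c in the zero-total-momentum frame
gamma = 1.0 / math.sqrt(1.0 - v * v)   # Lorentz factor, here 5/3

M = gamma * m1 + gamma * m2            # rest mass of the composite system
K1 = (gamma - 1.0) * m1                # kinetic energy of each part (c = 1)
K2 = (gamma - 1.0) * m2

# The composite rest mass exceeds the sum of the parts...
assert M > m1 + m2
# ...and the excess equals the total kinetic energy, M - (m1 + m2) = K1 + K2:
assert math.isclose(M - (m1 + m2), K1 + K2)
```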
"domain": "physics.stackexchange",
"id": 85725,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "energy, particle-physics, mass, standard-model, strong-force",
"url": null
} |
perl, graph
There are still some bits I'm not clear about in your code. Specifically, I'm not sure about the roles (or sources) of the functions:
clearerr
gref
seterr
Because of that, I can't test my hypothesis. However, I do think that this solution scales to 200 items more easily than the original - and without needing:
use feature "switch";
With more time spent, the code could still be tidied up, I'm sure. And, since this is Perl, TMTOWTDI - there's more than one way to do it.
Suggestion:
Provide code that can be compiled and run whenever possible - you will get better code reviews that way. | {
"domain": "codereview.stackexchange",
"id": 1005,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "perl, graph",
"url": null
} |
javascript, performance
return newData.concat(bidEntries).concat(askEntries.reverse());
}
I tested how it performs when tested with NodeJS runtime. I increased the dataset from 100 entries through 2,000,000.
I also made in-browser test here: http://jsperf.com/chart-data-repacking
I hope this helps. | {
"domain": "codereview.stackexchange",
"id": 13108,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, performance",
"url": null
} |
ros, ros2, rclcpp, nodelet
Originally posted by petermp on ROS Answers with karma: 1 on 2018-03-27
Post score: 0
Original comments
Comment by gvdhoorn on 2018-03-28:
I've removed my answer as I'd missed the ros2 tag. The use of the term nodelet made me think you were asking about the ROS1 concept.
Note: there are no 'nodelets' in ROS2, only nodes.
Comment by petermp on 2018-03-28:
Thanks. I am new to the forum and apparently made some edit mistakes.
I am able to get the code working and confirm that we can do both publishing and subscribing in any order using a single nodelet (in dynamic load composition example).
Peter
Comment by William on 2018-03-28:
What's missing here is the declaration of the class. I believe the bug is that pub_ is member of the class and therefore persists after the constructor, but the subscription sub_ is local to the constructor and is destroyed when it is finished. You must keep the sub_ to get callbacks.
You should be able to publish, subscribe, use services in composed nodes the same way you do with regular nodes.
Unfortunately the code sample you provided is not enough to help you (and the formatting make it hard to read).
How do you compose your nodes? how do you invoke them ?
If it helps, I pushed an example of Node that publishes and subscribes here
The main is creating an executor and adding the node to it. To compose more nodes in that process you would need to add more node like it's done in the composition example
Originally posted by marguedas with karma: 3606 on 2018-03-27
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 30468,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros2, rclcpp, nodelet",
"url": null
} |
snakemake
It's not explicitly stated, but it's sort of implied that all relative paths then become relative to the working directory. I would expect that specifying an absolute path would get around this.
As an aside, in my mind setting the working directory is usually only needed on clusters without a shared file system (presumably shared between the worker nodes, but not with the head node), since there you can't cd $WORKDIR before running snakemake. This is then normally done with your scheduler, in such cases. | {
"domain": "bioinformatics.stackexchange",
"id": 289,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "snakemake",
"url": null
} |
ros, build, rosmake, rospack, makefile
Title: Received 'rospack failed to build' message when executed rosmake
Hello,
I'm totally new with ROS and all the robot related stuff, so I'm following all the instructions here to get involved and learn how ROS works.
I'm using Fedora 15 and followed these instructions http://www.ros.org/wiki/electric/Installation/Fedora to install ROS. Everything worked fine, so I continued with the tutorials indicated in this same page to start learning ROS.
Everything was fine until I arrived to the building packages tutorial, I was able to create my first package but when I tried to build it I get the following message:
[manu@manu beginner_tutorials]$ rosmake beginner_tutorials
Rospack failed to build
I have no idea what can be causing this, I did a search at Google and at the ROS answers page but haven't find yet how to solve this. The most similar error I found related to this pointed to the CDPATH environment variable, but I have already verified that this variable is not set in my machine and also if I set it to '.' get the same result.
I'll really appreciate any suggestion or tip about how to debug this, I know this may be a configuration error or a very elemental question, but I hope that soon I could get more involved with how all this works :D.
Originally posted by manu on ROS Answers with karma: 61 on 2012-03-19
Post score: 0
Original comments
Comment by manu on 2012-03-27:
Any suggestions about how to debug this??? I haven't found an answer yet, I'm trying to debug the makefiles to know what can be causing this
Solved this... it was a permission issue causing the build to fail.
Originally posted by manu with karma: 61 on 2012-04-10
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Kzr on 2015-12-06:
how do you solve it? i have the same issue | {
"domain": "robotics.stackexchange",
"id": 8637,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, build, rosmake, rospack, makefile",
"url": null
} |
database-theory
Title: What does this definition of a Primary key mean? My textbook gives the following definition of a primary key in a relational database, which I don't entirely understand. Help would be greatly appreciated.
Let $R$ be a relation. Then the primary key for $R$ is a subset of the
set of attributes of $R$, say $K$, satisfying the following two properties:
Uniqueness Property: No two distinct tuples of $R$ have the same value for $K$.
Irreducibility Property: No proper subset of $K$ has the uniqueness property.
I'm getting lost by the Irreducibility property. Consider the following table:
FirstName LastName Pet FavColour
-----------------------------------
Alice Jones dog red
Alice Smith dog green
Bob Smith cat blue | {
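A small sketch (my own illustration, not from the textbook) that tests both properties against this table:

```python
from itertools import combinations

# Rows of the sample table as (FirstName, LastName, Pet, FavColour) tuples.
rows = [
    ("Alice", "Jones", "dog", "red"),
    ("Alice", "Smith", "dog", "green"),
    ("Bob",   "Smith", "cat", "blue"),
]
attrs = ("FirstName", "LastName", "Pet", "FavColour")

def is_unique(key):
    """Uniqueness: no two distinct rows agree on every attribute in `key`."""
    idx = [attrs.index(a) for a in key]
    projections = [tuple(row[i] for i in idx) for row in rows]
    return len(set(projections)) == len(rows)

def is_candidate_key(key):
    """Unique, and no proper subset is unique (irreducibility)."""
    return is_unique(key) and not any(
        is_unique(subset)
        for n in range(1, len(key))
        for subset in combinations(key, n)
    )
```

Here neither FirstName alone (Alice repeats) nor LastName alone (Smith repeats) is unique, but the pair (FirstName, LastName) is, so the pair satisfies both properties; adding Pet to it would break irreducibility.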
"domain": "cs.stackexchange",
"id": 7303,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "database-theory",
"url": null
} |
• I am wondering if the following statement is false for infinite dimensional spaces. "If $V$ is a vector space and $U$ is a subspace of $V$, then there exists another subspace of $V$ called $U^\perp$ such that every element of $V$ can be uniquely expressed as the sum of an element from $U$ with an element from $U^\perp$." (I am thinking a counterexample would be if $U$ was the set of real number sequences with finite support (i.e. eventually zero) and $V$ is the set of all real number sequences.) – irchans Jan 3 at 19:03
• @irchans If we take the axiom of choice, then every subspace of a vector space has a direct sum complement (what you call the perpendicular space, but this language is typically reserved for a space equipped with some bi-linear form). The proof is pretty simple. Let $V$ be a $k$-vector space, with $U$ a vector subspace. Let $\mathcal{B}_{U}$ be a basis for $U$ and extend it to a basis $\mathcal{B}_{V}$ (using the axiom of choice) for $V$. Then let $W = \operatorname{Span}_{k}\left( \mathcal{B}_{V} \backslash \mathcal{B}_{U} \right)$. Then $V = U \oplus W$. – Adam Higgins Jan 3 at 19:13
• @irchans Perhaps the reason you think that your example is a counter example is because of the $\textit{weirdness}$ of bases of infinite dimensional vector spaces. Notice that a subset $S$ of a vector space $V$ is said to be a basis if and only if every element $v \in V$ can be written as a $\textbf{finite}$ linear combination of the elements of $S$, and that there is no finite non-trivial linear relation amongst the elements of $S$. – Adam Higgins Jan 3 at 19:19
• Thank you very much ! – irchans Jan 3 at 19:27 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9918120896142623,
"lm_q1q2_score": 0.9132503357744666,
"lm_q2_score": 0.920789679151471,
"openwebmath_perplexity": 65.96755087668768,
"openwebmath_score": 0.9572270512580872,
"tags": null,
"url": "https://math.stackexchange.com/questions/3060802/is-v-isomorphic-to-direct-sum-of-subspace-u-and-v-u"
} |
acid-base, experimental-chemistry
Title: Does Cute Poison actually work? For those of you that have watched Season 1 of Prison Break (TV Series), in the "Cute Poison" Episode Micheal Scofield combined $\ce{CuSO4}$ (Copper Sulfate) and $\ce{H3PO4}$ (Phosphoric Acid) to weaken the metal. Is it true? The Wikipedia page on "Cute Poison" episode says that it wasn't anhydrous copper (II) phosphate, but Gypsum.
I haven't seen the episode, and the Wikipedia page's reaction seems to be the backwards reaction of yours. But I imagine yours is the one the pro/antagonist is gonna use to escape the prison; as
Gypsum is commonly found and accessible:
Gypsum is a soft sulfate mineral composed of calcium sulfate dihydrate, with the chemical formula $\ce{CaSO4·2H2O}$. It can be used as a fertilizer, is the main constituent in many forms of plaster and in blackboard chalk, and is widely mined. A massive fine-grained white or lightly tinted variety of gypsum, called alabaster, has been used for sculpture by many cultures including Ancient Egypt, Mesopotamia, Ancient Rome, Byzantine empire and the Nottingham alabasters of medieval England.
Calcium phosphate's solubility will decrease with temperature increase and it will precipitate, leaving you with a solution of sulfuric acid.
It's possible for a double displacement reaction to occur in aqueous medium (with a spark):
$$\ce{2H3PO4(aq) + 3CaSO4(aq)·2H2O(l) \leftrightharpoons 3H2SO4(aq) + Ca3(PO4)2(aq) + 6H2O(l)}$$
$\ce{H2SO4}$ is sulfuric acid. It's a very strong acid in water, a diprotic acid. Its $p {\rm K_a}$s are −3 and 1.99 according to Wikipedia.
Its corrosiveness on other materials, like metals, living tissues or even stones, can be mainly ascribed to its strong acidic nature and, if concentrated, strong dehydrating and oxidizing properties. | {
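One quick way to sanity-check the proposed equation is to count atoms of each element on both sides (my own bookkeeping, not from the show or the original answer):

```python
# 2 H3PO4 + 3 CaSO4·2H2O -> 3 H2SO4 + Ca3(PO4)2 + 6 H2O
# Left side: 2 phosphoric acid + 3 gypsum (each gypsum carries 2 waters).
left = {"H": 2 * 3 + 3 * 4, "P": 2, "O": 2 * 4 + 3 * 6, "Ca": 3, "S": 3}
# Right side: 3 sulfuric acid + 1 calcium phosphate + 6 water.
right = {"H": 3 * 2 + 6 * 2, "P": 2, "O": 3 * 4 + 8 + 6, "Ca": 3, "S": 3}
assert left == right  # the equation as written is balanced
```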
"domain": "chemistry.stackexchange",
"id": 4005,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "acid-base, experimental-chemistry",
"url": null
} |
single node to a single destination node by stopping the algorithm once the shortest path to the destination node has been determined. In graph theory, the shortest path problem is the problem of finding a path between two vertices (or nodes) in a graph such that the sum of the weights of its constituent edges is minimized. Reference: Robert Floyd, Algorithm 97: Shortest Path, Communications of the ACM, Volume 5, Number 6, page 345, June 1962. A Shortest Path Algorithm for Real-Weighted Undirected Graphs. Write an algorithm to print all possible paths between source and destination. An apparatus, program product and method enable nodal fault detection by sequencing communications between all system nodes. Find all pair shortest paths that use 0 intermediate vertices, then find the shortest paths that use 1 intermediate vertex and so on, until using all N vertices as intermediate nodes. Data Structure by Saurabh Shukla Sir 67,518 views 34:10. Like BFS, it finds the shortest path, and like Greedy Best First, it's fast. Graphs Algorithms Sections 9. Return the length of the shortest path that visits every node. Shortest Paths Single Source Shortest Paths Dijkstra’s Algorithm Bellman-Ford Algorithm All Pairs Shortest Paths Implicit Graphs Floyd-Warshall Algorithm 18 Let f( u;v i) be the length of the shortest path between and v using only the firsti vertices (i. The communications may be analyzed to determine the nodal fault. Also, this algorithm can be used for shortest path to destination in traffic network. Dijkstra's algorithm finds the least expensive path in a weighted graph between our starting node and a destination node, if such a path exists. The A* Search algorithm performs better than the Dijkstra's algorithm because of its use of heuristics. Widest path – To find a path between two designated vertices in a weighted graph, maximizing the weight of the minimum-weight edge in the path. Essentially, you replace the stack used by DFS with a queue. 
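The "stack to queue" remark above can be sketched in a few lines of Python (an illustration of the idea, with function and variable names of my own choosing): BFS explores level by level, so the first time the goal is dequeued, the path to it uses the fewest edges.

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Return one shortest (fewest-edges) path from start to goal, or None."""
    queue = deque([[start]])   # a queue of partial paths, not a stack
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, ()):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None
```

This only finds shortest paths in unweighted graphs; for non-negative edge weights, Dijkstra's algorithm replaces the plain queue with a priority queue.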
The length of a geodesic path is called geodesic distance or shortest distance. The weights can representij e. This measure, called the randomized shortest-path (RSP) dissimilarity, depends on a parameter θ and has the interesting property of reducing, on one end, to the standard shortest-path distance when θ is large and, on the other end, to the commute-time (or | {
"domain": "incommunity.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9811668679067631,
"lm_q1q2_score": 0.8021769968098277,
"lm_q2_score": 0.8175744850834648,
"openwebmath_perplexity": 387.29434216764923,
"openwebmath_score": 0.565926730632782,
"tags": null,
"url": "http://incommunity.it/qhie/shortest-path-between-two-nodes-in-a-weighted-graph.html"
} |
fastq
Title: Does this FASTQ data contain single or paired end calls? I have this fastq data from GEO:
zcat SRR1658526.fastq.gz | head -n 20
@SRR1658526.1 HWI-ST398:296:C1MP4ACXX:1:1101:1093:2094 length=102
GATCTCTATTACTTTTTGAAGGATTNNNNNNNNNNAANTTTTGAATCANNNNNNNNNNNNNNNNNNNNNNNNNNNNNTNNNNNNNNNNNNNNNNNNNNNNNN
+SRR1658526.1 HWI-ST398:296:C1MP4ACXX:1:1101:1093:2094 length=102
@<@FFDFDHHFHHIIIIGHBGGGIG##########10#0:BGDDHHII######################################################
@SRR1658526.2 HWI-ST398:296:C1MP4ACXX:1:1101:1167:2107 length=102
ATAATATTGTAGATATAAATGTTATCTAATCTTATCTGATCAGCTTGCTNNATANNNNNNNNNNNNNNNNNNACNTATGNNNNNNNNNNNNNNNNNNNNNCC
+SRR1658526.2 HWI-ST398:296:C1MP4ACXX:1:1101:1167:2107 length=102
CCCFFFFFHHFHHJJIIIIJIJJHJHJJJJIJJIJJJJIJHIJJJJGII#####################################################
It is supposed to be paired-end sequences. Are the prefixes @ and + the R1 and R2? What's the convention here? Entries in a fastq file occupy 4 lines each and R1 and R2 are typically in different files. Since that SRA project is 2x51 you seem to have run fastq-dump without the --split-3 option, so both R1 and R2 are merged together. Make your life easier and never use SRA, but instead just download the individual files from ENA.
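For reference, a minimal reader for the four-line record layout shown above (a sketch of my own, not fastq-dump itself):

```python
def read_fastq(lines):
    """Yield (header, sequence, quality) from FASTQ lines, 4 per record."""
    it = iter(lines)
    for header in it:
        seq = next(it)
        plus = next(it)                # separator line, begins with '+'
        qual = next(it)
        assert header.startswith("@") and plus.startswith("+")
        yield header[1:].strip(), seq.strip(), qual.strip()
```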
"domain": "bioinformatics.stackexchange",
"id": 1092,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fastq",
"url": null
} |
$$LHS = A \backslash \left( B \cup C \right) = A \cap \left( B \cup C \right)^c.$$
By De Morgan’s law, $$\left( B \cup C \right)^c = B^c \cap C^c.$$ Given that $A = A \cap A$ (the idempotent law), and using the associative and commutative rules, we obtain
$$LHS = A \cap \left( B^c \cap C^c \right) = \left( A \cap A \right) \cap \left( B^c \cap C^c \right) = A \cap A \cap B^c \cap C^c = \left( A \cap B^c \right) \cap \left( A \cap C^c \right).$$
Finally, applying the set difference law, we have
$$LHS = \left( A \cap B^c \right) \cap \left( A \cap C^c \right) = \left( A \backslash B \right) \cap \left( A \backslash C \right) = RHS.$$
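The identity can also be spot-checked on small finite sets (illustrative only; the algebraic proof above is the real argument, and the particular sets are my own choice):

```python
A, B, C = {1, 2, 3, 4}, {2, 3}, {3, 5}

# A \ (B ∪ C) = (A \ B) ∩ (A \ C)
assert A - (B | C) == (A - B) & (A - C)
```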
### Example 5.
Show that $$\left( A \cap B \right) \cup \left( A \cap B^c \right) = A.$$
Solution.
We prove algebraically that the left-hand side $(LHS)$ of the identity is equal to the right-hand side $(RHS).$
Using the distribution law
$$A \cap \left( B \cup C \right) = \left( A \cap B \right) \cup \left( A \cap C \right),$$
we can write | {
"domain": "math24.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.994530726211829,
"lm_q1q2_score": 0.8244051108222311,
"lm_q2_score": 0.8289388040954683,
"openwebmath_perplexity": 353.8204952973352,
"openwebmath_score": 0.8789429664611816,
"tags": null,
"url": "https://www.math24.net/set-identities/"
} |
ros, ros-melodic, network
Theoretically: no, as long as no new connections are being setup, things should keep working, as the master is only involved in setting up new subscriptions and other connections between nodes (it's essentially a DNS).
In practice this is almost never the case, and things will stop working (or most likely: start to time-out).
You'll want to run a master on each individual robot instead. This is almost always recommended when having this many robots.
Look into multimaster_fkie and similar packages.
Originally posted by gvdhoorn with karma: 86574 on 2020-02-02
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by gvdhoorn on 2020-02-02:
In addition to this: I'd recommend you spend a few minutes reading wiki/ROS/Introduction. It will clear up a lot of things (including what the role of the master is).
Comment by gvdhoorn on 2020-02-02:
Also: note that ROS 2 is different in this regard: it's fully peer-to-peer. No centralised discovery.
Comment by mechapancake on 2020-02-03:
Thanks for clearing that up. So network traffic doesn't route through the master, but the master still needs to be periodically accessible to avoid problems due to time-outs. Or are the time-outs you refer to something else?
Comment by gvdhoorn on 2020-02-03:
The time-outs are due to this:
the master is [..] involved in setting up new subscriptions
So nodes will start to time-out when they are trying to setup new connections.
But if you setup a proper multi-master setup that will not be a problem -- as each robot is its own master.
the master still needs to be periodically accessible | {
"domain": "robotics.stackexchange",
"id": 34364,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-melodic, network",
"url": null
} |
biochemistry, structural-biology
Title: Why add hydrogens in molecular dynamics simulations? In my molecular dynamics lecture, our prof said, that we always have to add hydrogen atoms to titratable groups, before we start the force field simulations, and that it is especially important for Histidine. | {
"domain": "biology.stackexchange",
"id": 8351,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "biochemistry, structural-biology",
"url": null
} |
evolution, natural-selection, senescence, telomere, measurement
Title: When telomere length is measured, is the method performed on a collection of cells yielding an average? What are the methods used in measuring telomeres in human or animal subjects?
Can it be done on an individual cell?
Has the following concern been raised and addressed before:
What if there exists a natural variance in telomere length on a cell-by-cell basis and various methods exist (strenuous exercise etc.) that exert selection pressure on telomere length? Then if the measurement method in some way involves multiple cells, you have the effect of the average length increasing while something else actually occurred, resulting in all these studies concluding that telomere length increased. A telomere can be measured using a flow-FISH test. Basically, fluorescent markers that bind DNA are introduced to the nucleus of a cell, and then those markers can be counted using a method called flow cytometry.
It can be done on one cell
I'm not sure I understand the last part of the question, but I can say that if the average telemore size increases in a particular sample size, then scientists will not conclude anything based on that data.
If there is a scientific conclusion on telomere size with relation to external selective pressures, there would be statistically significant data that properly accounts for the variations observed in different cells. | {
"domain": "biology.stackexchange",
"id": 11008,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "evolution, natural-selection, senescence, telomere, measurement",
"url": null
} |
general-relativity, black-holes, computational-physics, event-horizon
In fairness, I haven't actually explained the details showing that anyone goes inside the horizon. For instance, if you used a Einstein-Rosen bridge solution for Schwarzschild you can see an initial slice of constant Schwarzschild time doesn't actually cross inside the event horizon, it just touches it, effectively the radius outside the horizon connects to the throat of a (non traversable) wormhole where the white hole and black hole horizons touch and connects the two universes. So you might argue that if only you could stay on these slices you'd never go inside the horizons. But people do often go inside, for technical reasons. It's the choice of slicing spacetime that determines whether you end up going inside, and it's how you select your grid points within a slice that determines whether your singularity hits a grid point. But the point is they try to avoid the singularity hitting the grid point. They don't worry about the event horizon. In fact, if they kept the grids outside the event horizon, they would not have to worry about the singularity, since it is inside.
It is clear that if someone is making an effort to avoid a grid point having a singularity, then they haven't selected a method that systematically and automatically avoids going inside the event horizon. And there is no real reason not to go inside, especially if your data is already technically a bit off before you even get to the horizon. Once you choose to ignore that, why not go in, especially if it helps you introduce fewer errors in your approximations. You are approximating anyway if you ignore the matter that formed the two black holes.
And in my original answer to the linked question I point out that since the original infalling matter is still affecting you (in an extremely time dilated way), it's always the events prior to the event horizon formation that matter. | {
"domain": "physics.stackexchange",
"id": 28566,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, black-holes, computational-physics, event-horizon",
"url": null
} |
# Random variable probability problem
## Homework Statement
Continuous random variable X has probability density function defined as
f(x)= 1/4 , -1<x<3
=0 , otherwise
Continuous random variable Y is defined by Y=X^2
Find G(y), the cumulative distribution function of Y
## The Attempt at a Solution
G(y) = P(-sqrt(y)<=X<=sqrt(y))
What does this mean? G(y) is defined as P(-sqrt(y)<=X<=sqrt(y)) for 0<=y<=9 ?
G(y), the cumulative probability distribution function of Y, is a function that gives the probability that Y is smaller than y.
P(-sqrt(Y)<=X<=sqrt(Y)) is a way of writing: Probability that X is in between plus and minus the square root of Y (note the use of capital Y here, y and Y are not the same thing).
Thanks Gerben, the next step will be to integrate the pdf?
ie $$G(y)=\int^{\sqrt{y}}_{-\sqrt{y}}\frac{1}{4}dx$$ for $$0\leq y\leq 9$$
?
yes exactly
There will be 3 cases here,
case 1 : G(y)=0 for y<0
case 2 : G(y)= integration from -sqrt(y) to sqrt(y) 1/4 dx for 0<=y<=9
case 3: G(y)=1, for y>9
Is this the domain? I doubt my domain for case 2 is correct. | {
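The doubt about case 2 is warranted: for $1 < y \le 9$ the lower limit $-\sqrt{y}$ falls below $-1$, where the density is zero, so the integration limits must be clipped to the support $(-1, 3)$. A sketch of the resulting CDF (my own illustration, not from the thread):

```python
import math

def G(y):
    """CDF of Y = X**2 when X is uniform on (-1, 3) with density 1/4."""
    if y <= 0:
        return 0.0
    if y >= 9:
        return 1.0
    lo = max(-math.sqrt(y), -1.0)   # clip to the support of X
    hi = min(math.sqrt(y), 3.0)
    return (hi - lo) / 4.0
```

So G(y) equals 2*sqrt(y)/4 only for 0 <= y <= 1; for 1 < y <= 9 it is (sqrt(y) + 1)/4.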
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9867771809697151,
"lm_q1q2_score": 0.8179779067269038,
"lm_q2_score": 0.8289388146603364,
"openwebmath_perplexity": 1330.7741537098107,
"openwebmath_score": 0.9199802875518799,
"tags": null,
"url": "https://www.physicsforums.com/threads/random-variable-probability-problem.427102/"
} |
nuclear-technology
Title: Smallest possible controlled chain reaction-based nuclear fission reactor? I think it could be a reactor utilizing californium-242 (or, at least, weapon-grade U-235) cooled and moderated by heavy water.
Essentially, it would be similar to an atomic bomb, but - of course - it would be optimized to stay around the equilibrium state.
The result would probably be a very strong neutron source.
I think it could be used for various things, mainly in space applications.
Have any cost/size estimations of this ever been created? The RM-1 Russian submarine reactor had a core of less than one cubic metre. It had about a 100kg fuel load, which was 90% enriched (i.e. 90kg) Uranium 235. This was liquid-metal cooled [specifically a "eutectic lead-bismuth alloy (44.5 wt% lead, 55.5 wt% bismuth)" - source as below, p40], so it didn't need a moderator.
Submarine 901 had in its right-board reactor just 30.6 kg of Uranium 235; this was at 20% enrichment, so a total fuel load of 153 kg.
These were controllable chain-reaction based reactors.
Source:
NKS-138 Russian Nuclear Power Plants for Marine Applications
Ole Reistad, Norwegian Radiation Protection Authority, Norway
Povl L. Ølgaard, Risø National Laboratory, Denmark
Published by Nordic Nuclear Safety Research, April 2006
ISBN: 87-7893-200-9
http://www.nks.org/scripts/getdocument.php?file=111010111120029 | {
"domain": "engineering.stackexchange",
"id": 2029,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "nuclear-technology",
"url": null
} |
c++, performance, homework, hash-map
inputVector.push_back(elemValue);
}
fs.Initialize(inputVector);
end = clock();
int numberOfElementsForSearch;
scanf("%i", &numberOfElementsForSearch);
for (int i = 0; i < numberOfElementsForSearch; ++i)
{
int elem;
scanf("%d", &elem);
if (fs.Contains(elem))
{
cout << "Yes" << endl;
}
else
{
cout << "No" << endl;
}
}
time_spent = (double)(end - begin) / CLOCKS_PER_SEC;
cout << time_spent << endl;
return 0;
}
It works rather fast, but on a vector of 100000 elements, the function Initialize() runs for 0.7 ms in Release. But on a vector of 50000, this function runs for 1.8 ms.
Could you explain why this is so? How can I improve my code to make it work faster? I tried running your implementation on my machine but with the following input (input is 25 random integers within the interval [0; 10^9]) Initialize never completes (it's stuck in the bucket while (flag) loop):
25
882868245
264589055
955665379
570725902
186426836
425509062
780811177
528755197
921593609
210302061
162860187
237314629
771563954
716724339
500613765
749586096
118952462
708453275
530816792
697958285
841037949
796725013
123270367
470484394
578476359
1
252678354 | {
"domain": "codereview.stackexchange",
"id": 5038,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, performance, homework, hash-map",
"url": null
} |
7. The Dunne Hand-Tufted Wool Blue Rug from 100% pure wool then hand-carved and finished with a cotton backing. The task is to calculate the area of the crescents. Pieces 1 through 5 are for the same fraction of a period, 1/10 th period. Let be the radius of the semicircle, one half of the base of the rectangle, and the height of the rectangle. Volume of Half Cylinder Calculator. c) Creating an Oriented Rectangle. π (pronounced "pie" and often written "Pi") is an infinite decimal with a common approximation of 3. A = Circle area; π = Pi = 3. 1: Still Irrigating the Field (5 minutes) Here is a picture that shows one side of a child's wooden block with a semicircle cut out at the bottom. All you need are two measurements and you can calculate its perimeter by hand, or by using our perimeter of a rectangle calculator above. Area of a Rectangle Learning Intention Success Criteria • To be able to state area formula for a rectangle. In this python program, we will find area of a circle using radius. Else create a dimension. If i had a rectangle of 40 meters x 20. In geometry, the area enclosed by a circle of radius r is πr². I took those pieces down and transferred them to my pattern paper. Now we all know that the area of a rectangle is its length multiplied by its height. The task is to calculate the area of the crescents. Switching the input values above changes the layout and gives. !Work out the area of the semi-circle. Semicircle Calculator. If we know the radius then we can calculate the area of a circle using formula: A=πr² (Here A is the area of the circle and r is radius). This is for circle. What you have is a rectangle and a circle (It's been cut down the middle and the rectangle shoved in it). A training field is formed by joining a rectangle and two semicircles, as shown below. Cut a 3” line vertically through the black lining fabric, this will be hidden later. Multiply the radius of the semicircle by itself.
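One recoverable computation in the scrambled text above is the area of a rectangle with a semicircle cut out of its base, where the semicircle's radius r is half the rectangle's base and h is the rectangle's height. A hedged sketch (the function name is mine):

```python
import math

def remaining_area(r, h):
    # Rectangle of base 2r and height h, minus a half-disc of radius r
    return 2 * r * h - math.pi * r ** 2 / 2

# e.g. r = 1, h = 2: remaining area = 4 - pi/2
assert abs(remaining_area(1.0, 2.0) - (4.0 - math.pi / 2)) < 1e-12
```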
What is the distance travelled by the centre of the circle, when the circle has travelled once around the rectangle? 14. The Rhino Hide double stuff mat | {
"domain": "aidmbergamo.it",
"id": null,
"lm_label": "1. Yes\n2. Yes",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9773708026035286,
"lm_q1q2_score": 0.8326737956770336,
"lm_q2_score": 0.8519528038477824,
"openwebmath_perplexity": 642.7273038915572,
"openwebmath_score": 0.544926106929779,
"tags": null,
"url": "http://aidmbergamo.it/vuzw/area-of-a-rectangle-with-a-semi-circle-cut-out.html"
} |
quantum-mechanics, condensed-matter, curvature, graphene, berry-pancharatnam-phase
$$\Omega(q)=\tau_z\frac{3a^2{\Delta}t^2}{2({\Delta^2}+3q^2a^2t^2)^{3/2}}$$
It should be noted that both the magnetic moment of the Bloch electron and the Berry curvature are vector quantities and that $q^2 = q_x^2 + q_y^2$, where $q$ is the magnitude of the crystal momentum. Also, the Berry curvature equation listed above is for the conduction band. I should also mention at this point that Xiao has a habit of switching between k and q, with q being the crystal momentum measured relative to the valley in graphene.
With this information in hand I will attempt to derive these equations with the help of Mathematica. However, I will fail and this is where I am hoping to get some help from all of you.
Step 1: First we need to determine the Eigenvalues/Dispersion relationship near the valleys (aka Dirac points of graphene). To do this all we need to do is find the eigenvalues of our Hamiltonian, which has the following form when represented as a matrix:
$$H=\begin{pmatrix}
\frac{\Delta}{2} & \frac{\sqrt{3}}{2}at({\tau_z}q_x-iq_y) \\
\frac{\sqrt{3}}{2}at({\tau_z}q_x+iq_y) & \frac{-\Delta}{2} \\
\end{pmatrix}
$$
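As a numeric cross-check of the eigenvalues derived below (pure Python, my own sketch rather than the author's Mathematica; it assumes the standard fact that a traceless Hermitian 2x2 matrix [[d, c], [conj(c), -d]] has eigenvalues ±sqrt(d² + |c|²)):

```python
import math

def eigenvalues(Delta, a, t, qx, qy, tau_z=1):
    # Off-diagonal element of the two-band Hamiltonian above
    off = math.sqrt(3) / 2 * a * t * complex(tau_z * qx, -qy)
    d = Delta / 2
    # Traceless Hermitian 2x2 matrix [[d, c], [conj(c), -d]]:
    # its eigenvalues are +/- sqrt(d^2 + |c|^2)
    lam = math.sqrt(d * d + abs(off) ** 2)
    return -lam, lam

# Compare with the closed form +/- (1/2) sqrt(Delta^2 + 3 a^2 t^2 q^2), q^2 = qx^2 + qy^2
Delta, a, t, qx, qy = 1.3, 0.246, 2.8, 0.4, -0.7
lo, hi = eigenvalues(Delta, a, t, qx, qy)
closed = 0.5 * math.sqrt(Delta**2 + 3 * a**2 * t**2 * (qx**2 + qy**2))
assert abs(hi - closed) < 1e-12 and abs(lo + closed) < 1e-12
```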
Using Mathematica (or I suppose you could also do this by hand relatively easily) we find that the eigenvalues of this matrix are:
$${\pm}{\frac{1}{2}}\sqrt{\Delta^2+3a^2t^2q_x^2+3a^2t^2q_y^2}={\pm}{\frac{1}{2}}\sqrt{\Delta^2+3a^2t^2q^2}$$ | {
"domain": "physics.stackexchange",
"id": 90864,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, condensed-matter, curvature, graphene, berry-pancharatnam-phase",
"url": null
} |
json, formatting, brainfuck
Title: JSON formatter in your least favorite language Just joking guys, Brainfuck is an awesome, challenging language.
I've tested the following code with bf-x86 compiler and rather big JSON file. I believe code is fully functional on a valid (!) JSON input.
That's my second JSON formatter and my first code in Brainfuck. I know nothing about best practices and code style, though I've tried to do my best in both parts.
Points of interest:
code style and formatting
value of comments
value of used algorithms and preferable alternatives
The heart of a program is a reading loop with a switch statement:
#!/usr/bin/brainduck
This program is a JSON formatter
It takes a valid(!) JSON input and outputs formatted JSON
Memory layout used:
0 input
1 input copy
2 switch flag
3 input copy for switch
4 indent
5 indent copy
6 indent copy
Zero separated strings
7 zero
8 placeholder
? zero
? JSON specific chars
? zero
? "while inside string" memory
Zero separated strings
Filling placeholder " " (two spaces)
>>>>>>>
> >++[-<++++++++++++++++>]<
> >++[-<++++++++++++++++>]<
>zero
Filling JSON specific chars after placeholder
> 0a \n ++++++++++
> 20 space >[-]++[-<++++++++++++++++>]<
>zero
Back to cell 0
<[<]<[<]<<<<<<<
Initial input
,
while input [
Input in cell 0 already
Zeroing memory in cells 1 2 3
>[-]>[-]>[-]<<<
Copying input to cells 1 3
[-
>+<
>>>+<<<
]
switch flag = on
>>+
>
The Switch | {
"domain": "codereview.stackexchange",
"id": 37002,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "json, formatting, brainfuck",
"url": null
} |
c#, mathematics, reinventing-the-wheel
return swapped;
}
private double[] CalculateResult(double[][] rows)
{
double val = 0;
int length = rows[0].Length;
double[] result = new double[rows.Length];
for (int i = rows.Length - 1; i >= 0; i--)
{
val = rows[i][length - 1];
for (int x = length - 2; x > i - 1; x--)
{
val -= rows[i][x] * result[x];
}
result[i] = val / rows[i][i];
if (!IsValidResult(result[i]))
{
return null;
}
}
return result;
}
private bool IsValidResult(double result)
{
return !(double.IsNaN(result) || double.IsInfinity(result));
}
which can then be called like
double[] result = SolveLinearEquations(textBox1.Lines);
textBox2.Clear();
textBox2.Text = ConvertToString(result);
where ConvertToString() will look like
private string ConvertToString(double[] result)
{
StringBuilder sb = new StringBuilder(1024);
for (int i = 0; i < result.Length; i++)
{
sb.AppendFormat("X{0} = {1}\r\n", i + 1, Math.Round(result[i], 10));
}
return sb.ToString();
} | {
"domain": "codereview.stackexchange",
"id": 11967,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, mathematics, reinventing-the-wheel",
"url": null
} |
structures, statics
However, this led to the wrong answer when solving for $\vec{P}$. The solution manual instead found each external force in terms of $\vec{P}$ and started at joint $A$ to find $\overrightarrow{AD}$ and $\overrightarrow{AB}$ in terms of $\vec{P}$. In addition, in order to get numerical values, the solution manual assumed member $\overrightarrow{AB}$ was experiencing the maximum compression force of 660 lb. However, when I assume that member $\overrightarrow{AD}$ is experiencing the maximum tensile force, it doesn't work out the same.
My question is, conceptually, why must I find each each member's force in terms of $\vec{P}$, and why must one assume $\overrightarrow{AB}$ is experiencing maximum compression (but not $\overrightarrow{AD}$ in maximum tension)?
EDIT: I want to note that I do not need help with solving this problem, nor the math required. I simply am just looking for a conceptual answer as to why my approach did not work (i.e. only analyzing joint $D$). The reason is that you assumed that the elements around node $\text{D}$ will be the first to fail. That is not the case. Indeed, it is the elements under compression ($\text{AB}$ and $\text{BC}$) that will fail first. Also, you assumed that all the members around $\text{D}$ will present the same axial force, which is untrue.
To see this, here's your structure with a unitary load (the units are irrelevant):
And here are the axial forces in each member under this unitary force (positive in tension): | {
"domain": "engineering.stackexchange",
"id": 754,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "structures, statics",
"url": null
} |
matlab, fft, amplitude
Title: Amplitude of frequency in MATLAB FFT I am trying to extract the amplitude of a specific frequency in a MATLAB FFT. Is it possible to use abs(mag)? But I do not know in which sample to look; mag(245) should give me the amplitude for the frequency of that sample...
How to extract that amplitude-magnitude for $120$ Hz using mag?
I will add simple code:
Fs = 1000; % Sampling frequency
T = 1/Fs; % Sample time
L = 1000; % Length of signal
t = (0:L-1)*T; % Time vector
% Sum of a 50 Hz sinusoid and a 120 Hz sinusoid
x = 0.7*sin(2*pi*50*t) + sin(2*pi*120*t);
y = x + 2*randn(size(t)); % Sinusoids plus noise
f=(0:L-1)*Fs/L;
x=fft(y);
mag=abs(x);
mag(1)=0;
plot(f(1:L/2),(2/L*mag(1:L/2)));
title('Single-Sided Amplitude Spectrum of y(t)')
xlabel('Frequency (Hz)')
ylabel('|Y(f)|') The formula you are looking for is
(desired freq / Sampling Frequency) * Length of samples = sample number
you can see this works out even by verifying the units
(Hz / Hz )* sample = sample | {
"domain": "dsp.stackexchange",
"id": 2575,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "matlab, fft, amplitude",
"url": null
} |
1. ## Geometric Sequence
In a geometric sequence, the sum of the first four terms is 16 times the sum of the following four terms; find the common ratio.
Sn = a(1-r^n)/(1-r)
anyone can tell me how to start or resolve this ??
Thank you
2. ## Re: Geometric Sequence
Looks to me to imply:
$S_4=16(S_8-S_4)$. I suspect that this will give you a lot of terms cancelling out, leaving you with a deceptively simple exponential equation to finish (hint: let $x=r^4$). I don't know if there's a better approach.
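Following that hint numerically (my own sketch, not from the thread): the sum of terms five to eight is $r^4$ times the sum of the first four, so the condition collapses to $16r^4=1$, i.e. $r=\pm\tfrac12$ over the reals.

```python
def S(n, r, a=1.0):
    # Geometric partial sum a(1 - r^n)/(1 - r), valid for r != 1
    return a * (1 - r**n) / (1 - r)

# S8 - S4 = r^4 * S4, so S4 = 16 (S8 - S4) reduces to 16 r^4 = 1; check both real roots
for r in (0.5, -0.5):
    assert abs(S(4, r) - 16 * (S(8, r) - S(4, r))) < 1e-12
```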
3. ## Re: Geometric Sequence
Originally Posted by gilagila
In a geometric sequence, the sum of the first four terms is 16 times the sum of the following four terms; find the common ratio.
Sn = a(1-r^n)/(1-r)
anyone can tell me how to start or resolve this ??
Thank you
I'd use the difference of two squares (repeatedly if necessary) to clear the denominator since there is bound to be a common factor somewhere.
If the sum of the first four terms is larger than the sum of the next four then $|r| < 1$. This will come in useful for checking the answer.
$S_4 = 16(S_8-S_4)$
$S_4 = \dfrac{a(1-r^4)}{1-r} = \dfrac{a(1-r^2)(1+r^2)}{1-r}= a(1+r)(1+r^2)$
$S_8 = \dfrac{a(1-r^8)}{1-r} = \dfrac{a(1-r^4)(1+r^4)}{1-r} = a(1+r^4)(1+r^2)(1+r)$
$S_8 - S_4 = a(1+r^4)(1+r^2)(1+r) - a(1+r)(1+r^2)$ | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9799765569561562,
"lm_q1q2_score": 0.8811178748939362,
"lm_q2_score": 0.8991213806488609,
"openwebmath_perplexity": 1396.8443191148476,
"openwebmath_score": 0.777835488319397,
"tags": null,
"url": "http://mathhelpforum.com/algebra/194467-geometric-sequence.html"
} |
python, python-3.x, unit-testing
sortTestMethodsUsing expects a function like Python 2's cmp, which has no equivalent in Python 3 (I went to check if Python 3 had a <=> spaceship operator yet, but apparently not; they expect you to rely on separate comparisons for < and ==, which seems to me a backwards step...). The function takes two arguments to compare, and must return a negative number if the first is smaller. Notably in this particular case, the function may assume that the arguments are never equal, as unittest will not put duplicates in its list of test names.
With this in mind, here's the simplest way I found to do it, assuming you only use one TestCase class:
def make_orderer():
order = {}
def ordered(f):
order[f.__name__] = len(order)
return f
def compare(a, b):
return [1, -1][order[a] < order[b]]
return ordered, compare
ordered, compare = make_orderer()
unittest.defaultTestLoader.sortTestMethodsUsing = compare
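A quick self-contained check that the comparator really orders names by definition order (my own demo, not part of the answer):

```python
# Sanity check of the comparator (repeating the factory so this snippet runs standalone)
def make_orderer():
    order = {}
    def ordered(f):
        order[f.__name__] = len(order)
        return f
    def compare(a, b):
        return [1, -1][order[a] < order[b]]
    return ordered, compare

ordered, compare = make_orderer()

@ordered
def test_first(): pass

@ordered
def test_second(): pass

# cmp-style contract: negative means "sorts earlier"
assert compare('test_first', 'test_second') == -1
assert compare('test_second', 'test_first') == 1
```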
Then, annotate each test method with @ordered:
class TestMyClass(unittest.TestCase):
@ordered
def test_run_me_first(self):
pass
@ordered
def test_do_this_second(self):
pass
@ordered
def test_the_final_bits(self):
pass
if __name__ == '__main__':
unittest.main() | {
"domain": "codereview.stackexchange",
"id": 38390,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, unit-testing",
"url": null
} |
the-sun, space-probe
Title: Is the STEREO-A and STEREO-B imagery publicly available? The latest image on the [NASA STEREO image page] is from December, and I assume STEREO-A and -B have collected data since. Can this data be found online? Imagery is available from the STEREO Science Center along with other data like telemetry. The image below is an example of the image browsing and selection interface. There may be a more direct way to browse the catalog as well. | {
"domain": "astronomy.stackexchange",
"id": 6162,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "the-sun, space-probe",
"url": null
} |
c++, stack
// test isEmpty()
cout<<"Test: isEmpty(): "<< stack.isEmpty()<<endl;
// test peek()
cout<<"Test: peek(): "<<stack.peek()->getNumber()<<endl;
// test pop()
cout<<"Test: pop()"<<stack.pop()->getNumber()<<endl;
cout<<"Test: pop()"<<stack.pop()->getNumber()<<endl;
cout<<"Test: isEmpty(): "<<stack.isEmpty()<<endl;
return 0;
} Don't allocate nodes until you really need to.
LinkedList should not expose that it has nodes. addFirst should take an int to be stored in the node and getHead should return the value in the head node.
void addFirst(int number )
{
Node* newNode = new Node();
newNode->setNumber(number);
newNode->setNext(head);
head = newNode;
}
int getHead() {return head->number;}
Same in stack don't expose that it deals in nodes and don't let calling code access them. That way you have more control over the lifetimes. | {
"domain": "codereview.stackexchange",
"id": 22429,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, stack",
"url": null
} |
thermodynamics, everyday-chemistry, water, heat, vapor-pressure
in unknown amounts.
The gauge pressure is given as $p_\mathrm e = 0.8\ \mathrm{bar}$; i.e. the absolute pressure is approximately $p = 1.8\ \mathrm{bar}$.
We may use so-called steam tables to look up the properties of water at the given pressure (in the following, I use parameter values taken from the REFPROP – NIST Standard Reference Database 23, Version 9.0). We may find that the saturation point (equilibrium of liquid water and steam) at a pressure of $p = 1.8\ \mathrm{bar}$ corresponds to a temperature of $T = 117\ \mathrm{^\circ C}$.
At this point, the density of liquid water is $\rho_\mathrm l = 946\ \mathrm{g/l}$ and the density of steam is $\rho_\text{steam} = 1.02\ \mathrm{g/l}$.
If we ignore the volume of the remaining liquid water, air, and food, we may consider the limiting case in which the entire volume $V$ is filled with steam. Thus, the mass $m_\text{steam}$ is given by
$$\begin{align}
m_\text{steam} &= \rho_\text{steam} \cdot V \\[3pt]
&= 1.02\ \mathrm{\frac gl} \times 6.2\ \mathrm l \\[3pt]
&= 6.3\ \mathrm g
\end{align}$$
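A one-line numeric check of that product (pure Python; the inputs are just the values quoted above):

```python
rho_steam = 1.02   # g/L, steam density at 1.8 bar saturation (value quoted above)
V = 6.2            # L, the vessel volume used in this example

m_steam = rho_steam * V          # grams
assert round(m_steam, 1) == 6.3
```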
Therefore, only $6.3\ \mathrm g$ of water (which corresponds to about $6.3\ \mathrm{ml}$ of cold water at normal pressure) are required to fill the entire volume with steam at the given pressure. | {
"domain": "chemistry.stackexchange",
"id": 4195,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, everyday-chemistry, water, heat, vapor-pressure",
"url": null
} |
field-theory, gauge-theory, vector-fields, yang-mills
I would like to verify that indeed it is possible to drop gauge invariance.
EDIT: In this paper the authors show that from considerations at four points, the three-point amplitudes must be dressed with totally antisymmetric coefficients $f^{abc}$ that obey a Jacobi identity. This, however, could be achieved by the Lagrangian that I presented, that does not have local gauge invariance. We are looking for a vector field $A_{\mu}(x)$ which has spin 1 particle excitations, and does NOT require gauge invariance to describe it. Let's figure this out systematically, although I won't go through the gory details (references are below). First of all, the vector field is in the $(\frac{1}{2},\frac{1}{2})$ representation of the Lorentz group, so translating this into what spins this field could possibly produce, it is spin $0$ and spin $1$. If we wish to kill the spin $0$ component of the field, which would be of the form $A_{\mu}(x)=\partial_{\mu}\lambda(x)$, we could
1) Require that our theory has a gauge invariance $A_{\mu}\to A_{\mu}+\partial_{\mu}\lambda$.
2) Require that the field $A_{\mu}$ satisfies the "Lorentz gauge" constraint $\partial_{\mu}A^{\mu}=0$ (although calling it a gauge in this context is misleading).
Or we could simply leave the spin $0$ excitation alone and let it propagate.
Let's now consider the effect of the particle's mass. Starting with a massless spin $1$ particle. It turns out, that on very general circumstances, it is impossible to construct a vector field with massless excitations which transforms under Lorentz transformations the following way
$$U(\Lambda)A_{\mu}U^{-1}(\Lambda)=\Lambda^{\nu}_{\mu}A_{\nu}$$ | {
"domain": "physics.stackexchange",
"id": 67869,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "field-theory, gauge-theory, vector-fields, yang-mills",
"url": null
} |
Now note that $$(E_1 \backslash E_2) \cup (E_1 \cap E_2) = E_1$$ Since $(E_1 \backslash E_2)$, $(E_1 \cap E_2)$ are mutually disjoint sets, we have that $$P(E_1 \backslash E_2) + P(E_1 \cap E_2) = P(E_1)$$ Hence, $$P(E_1 \backslash E_2) = P(E_1) - P(E_1 \cap E_2)$$ Similarly, since $$(E_1 \cap E_2) \cup (E_2 \backslash E_1) = E_2$$ are mutually disjoint sets, we have that $$P(E_1 \cap E_2) + P(E_2 \backslash E_1) = P(E_2)$$ Hence, $$P(E_2 \backslash E_1) = P(E_2) - P(E_1 \cap E_2)$$ Now plug in for $P(E_1 \backslash E_2)$ and $P(E_2 \backslash E_1)$ in $\star$, to get what you want. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9783846634557752,
"lm_q1q2_score": 0.8315987607837209,
"lm_q2_score": 0.8499711737573763,
"openwebmath_perplexity": 147.58148488551168,
"openwebmath_score": 0.969265878200531,
"tags": null,
"url": "http://math.stackexchange.com/questions/223882/the-probability-of-the-union-of-two-events"
} |
particle-physics, photons
Title: Positron-electron annihilation - can more than two photons be created? I'm an engineer and have been reading about PET scanners and how they rely on the fact that a positron-electron annihilation will cause two photons to be emitted at 180 degrees from each other. After a bit of research I am left wondering on a couple of points.
From Electron–positron annihilation - wiki:
Conservation of energy and linear momentum forbid the creation of only one photon.
Is this simply because emitting one photon would produce a force in one direction without an equal and opposite force?
In the most common case, two photons are created, each with energy equal to the rest energy of the electron or positron.
Does it follow then that there is a (lesser) probability that 3 photons can be emitted at equal energy and equal angle of 120 degrees? Yes, the requirement for at least two photons is because a single photon would violate conservation of momentum. See my answer to Particle anti-particle annihilation and photon production for a (very simple!) proof of this.
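That momentum argument fits in one line (a hedged sketch of the standard proof, not a quotation of the linked answer): in the centre-of-momentum frame of the pair,

```latex
\vec p_{e^-} + \vec p_{e^+} = \vec 0,
\qquad\text{whereas a single photon would carry}\quad
|\vec p_\gamma| = \frac{E_\gamma}{c} = \frac{E_{e^-} + E_{e^+}}{c}
\ge \frac{2 m_e c^2}{c} > 0,
```

so one photon can never match the zero total momentum, while two equal-energy photons emitted back to back satisfy both conservation laws.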
Annihilation can produce more than two photons. In fact the decay of ortho-positronium to two photons is forbidden, and it (mostly) decays into three photons. This is a bit of a special case though as it's due to conservation of angular momentum in a bound state. | {
"domain": "physics.stackexchange",
"id": 42348,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "particle-physics, photons",
"url": null
} |
mathematics, hamiltonian-simulation, universal-gates
Moreover, by Dirichlet's approximation theorem, any irrational number can be approximated by rational numbers arbitrarily well. Therefore, we would need infinite precision to distinguish rational and irrational values of an input or output parameter in an experiment. This quickly leads to various singularities in experimental setup such as the need for a display of infinite area to present results or the need for infinite frequency radiation to measure arbitrarily small distances.
Thus, physics and technology rarely care whether a quantity is rational or irrational, because the two cases are generally indistinguishable in practice.
Irrational numbers in quantum computing
Remarkably, there are situations in quantum computing where irrationality of a number plays a key role in a mathematical proof. This happens due to ubiquity of phase factors such as $\exp(i\pi\alpha)$ with $\alpha\in\mathbb{R}$. It is not hard to prove that if $\alpha\in\mathbb{Q}$ then repeatedly multiplying $\exp(i\pi\alpha)$ by itself results in a discrete subset of the complex unit circle. However, if $\alpha\notin\mathbb{Q}$ then repeatedly multiplying $\exp(i\pi\alpha)$ by itself results in a dense subset of the unit circle.
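A quick numeric illustration of that dichotomy (my own sketch; `orbit` is a hypothetical helper, and floating-point `2**0.5` only approximates an irrational, but the first thousand powers already behave as described):

```python
import cmath
import math

def orbit(alpha, n=1000):
    # Count distinct values among the first n powers of exp(i*pi*alpha),
    # rounding to 9 decimals so a finite cycle collapses to its true size
    z = cmath.exp(1j * math.pi * alpha)
    w = 1 + 0j
    seen = set()
    for _ in range(n):
        w *= z
        seen.add((round(w.real, 9), round(w.imag, 9)))
    return len(seen)

assert orbit(0.25) == 8        # rational alpha: a finite cyclic subgroup
assert orbit(2 ** 0.5) > 900   # irrational alpha: new angles keep appearing
```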
This has important consequences for universality and the choice of elementary operations for a quantum computer. We desire the ability to approximate any unitary, so for any given phase factor we need the ability to approximate all angles with arbitrary accuracy and therefore we need a way to synthesize a gate where the given phase factor is $\exp(i\pi \alpha)$ with $\alpha$ irrational. | {
"domain": "quantumcomputing.stackexchange",
"id": 3210,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "mathematics, hamiltonian-simulation, universal-gates",
"url": null
} |
# Are there still mathematicians who don't accept proof by contradiction?
When I was a kid, I read popular scientific texts about the different philosophies of mathematics; formalism, intuitionism, constructivism and many others.
I learned that there existed mathematicians who did not accept proofs by contradiction and some others who did not consider proof of existence of solutions important, but required proof of how to actually construct solutions. Are such stances still common among mathematicians? | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9353465170505205,
"lm_q1q2_score": 0.8023104077390857,
"lm_q2_score": 0.8577681031721325,
"openwebmath_perplexity": 524.463993146531,
"openwebmath_score": 0.794602632522583,
"tags": null,
"url": "https://math.stackexchange.com/questions/2793374/are-there-still-mathematicians-who-dont-accept-proof-by-contradiction"
} |
You might want to draw a triangle if it helps you to visualise
6. Originally Posted by Gusbob: $LHS = \frac{1}{cos^2t} - \frac{1}{sin^2t}$
$= \frac{sin^2t- cos^2t}{cos^2t\,sin^2t}$
$= \frac{(1-cos^2t) - cos^2t}{cos^2t\,sin^2t}$
$= \frac{1}{cos^2t\,sin^2t} - \frac{2cos^2t}{cos^2t\,sin^2t} = \frac{1}{cos^2t\,sin^2t} - \frac{2}{sin^2t}$
Can someone make this step more clear.
$= \frac{1}{sin^2t} \left(\frac{1}{cos^2t} - 2 \right)$
$= csc^2t ( sec^2t -2)$ | {
"domain": "mathhelpforum.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9763105300791786,
"lm_q1q2_score": 0.8135835753564358,
"lm_q2_score": 0.8333245932423308,
"openwebmath_perplexity": 2330.3311073200243,
"openwebmath_score": 0.8651480078697205,
"tags": null,
"url": "http://mathhelpforum.com/trigonometry/40512-problem-help.html"
} |
apply this principle to the solution of several problems. It is used to solve problems of the form: how many ways can one distribute indistinguishable objects into distinguishable bins? We can imagine this as finding the number of ways to drop balls into urns, or equivalently to arrange balls and. The problem of induction is to find a way to avoid this conclusion, despite Hume's argument. The simplest experiment is to reach into the urn and pull out a single ball. Suppose that a machine shop orders 500 bolts from a supplier. Probability is the likelihood or chance of an event occurring. Urn i has exactly i − 1 green balls and n − i red balls. If the composition is unknown, then it is called. ♦ BALL AND URN (AoPS calls this "Stars and Bars") The classic "Ball and Urn" problem statement is to find the number of ways to distribute N identical balls into 4 distinguishable urns, for example. But our solution to the urn problem relied on random sampling. The Crossword Solver found 21 answers to the Container in many probability theory problems crossword clue. Wolfram Education Portal ». In the last lesson, the notation for conditional probability was used in the statement of Multiplication Rule 2. The Associated Press. Users that wish to investigate especially large or intricate problems are encouraged to modify and streamline the code to suit their individual needs. The theorem is also known as Bayes' law or Bayes' rule. Publication date 1987 Topics Probabilities, Probabilités, Probabilités Publisher Internet Archive Books. k indistinguishable balls are randomly distributed into n urns. Show that this is the same as the probability that the next ball is black for the Polya urn model of Exercise 4. This consists. 3, 794-814, 2012. • Time limit 110 minutes. This paper designs some uncertain urn problems in order to compare probability theory and uncertainty theory.
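One well-posed item in the text above, counting the ways to put N identical balls into k distinguishable urns ("stars and bars"), has the closed form C(N+k-1, k-1). A brute-force cross-check (my own sketch):

```python
from math import comb
from itertools import product

def distributions(n, k):
    # Brute force: count k-tuples of non-negative integers summing to n
    return sum(1 for t in product(range(n + 1), repeat=k) if sum(t) == n)

n, k = 5, 4
assert distributions(n, k) == comb(n + k - 1, k - 1)   # stars and bars: C(8, 3) = 56
```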
EXAMPLE 1 A Hypergeometric Probability Experiment Problem: Suppose that a researcher goes to a small college with 200 faculty, 12 of which have blood type O-negative. (a) Draw marbles from a bag containing 5 red marbles, 6 blue marbles and 4 green marbles without replacement until you get a blue marble. Once you have decided on your answers click the answers checkboxes to see if you are right. 4 Conditional Probability and Independence 1. Hence the probability of | {
"domain": "bresso5stelle.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9901401429507652,
"lm_q1q2_score": 0.8584869213595782,
"lm_q2_score": 0.8670357701094303,
"openwebmath_perplexity": 390.28856000725887,
"openwebmath_score": 0.7716267108917236,
"tags": null,
"url": "http://bresso5stelle.it/yfbo/probability-urn-problems.html"
} |
Lemma: Given a prime number $p \geq 7$ and any integer $a$ such that $p \nmid a$.
Then there are natural numbers $n, m$ such that:
$$p \mid n^2+m^2-a; \qquad \qquad p\nmid n; \qquad \qquad p \nmid m.$$ Proof: See here.
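The lemma can be confirmed by brute force for small primes (a numeric sketch, not a substitute for the linked proof; the function name is mine):

```python
def lemma_holds(p, a):
    # Is there a pair n, m with 1 <= n, m <= p-1 and n^2 + m^2 == a (mod p)?
    return any((n * n + m * m - a) % p == 0
               for n in range(1, p) for m in range(1, p))

# Check every residue a with p not dividing a, for the first few primes p >= 7
for p in (7, 11, 13, 17, 19, 23):
    assert all(lemma_holds(p, a) for a in range(1, p))
```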
Proof of question(II):
• If $p\nmid k-2$
let $w_1=w_2=...=w_{k-2}=1$; and let $a=-\sum_{i=1}^{k-2}w_i^2$;
now by the above lemma there are natural numbers $n, m$ such that $p\mid n^2+m^2-a$;
now let $w_{k-1}=n$ and $w_{k}=m$; so we have done!
• If $p\nmid k+1$
let $w_1=w_2=...=w_{k-3}=1$, $w_{k-2}=2$; and let $a=-\sum_{i=1}^{k-2}w_i^2$;
now by the above lemma there are natural numbers $n, m$ such that $p\mid n^2+m^2-a$;
now let $w_{k-1}=n$ and $w_{k}=m$; so we have done!
By considering the argument described here question(I) follows immediately from question(II).
At the end for the case $k=2$, you can choose $p \overset{4}{\equiv}1$ arbitrary;
and again there exist integers $m, n$ such that: $$p \mid n^2+m^2; \qquad \qquad p\nmid n; \qquad \qquad p \nmid m.$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.985718065235777,
"lm_q1q2_score": 0.8058979330504666,
"lm_q2_score": 0.817574478416099,
"openwebmath_perplexity": 145.87338119584223,
"openwebmath_score": 0.9006195068359375,
"tags": null,
"url": "https://math.stackexchange.com/questions/2452753/primes-dividing-sums-of-squares-but-not-dividing-any-of-summands"
} |
javascript, strings, unit-testing, regex
    // Handle squashing when the squashed character has already been output.
    if (lastReplaceChar == replacement && flags.includes('s')) {
      return '';
    }
    lastReplaceChar = replacement;
    const returnCount = flags.includes('s') ? 1 : chars.length;
    return replacement.repeat(returnCount);
  });
}
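The squash (`'s'` flag) behaviour can be sketched in isolation — the function name below is illustrative, not part of the original implementation:

```javascript
// Stand-alone sketch of the 's' (squash) behaviour: collapse runs of the
// same character into a single occurrence of that character.
function squashRuns(str, char) {
  let out = '';
  let prev = null;
  for (const ch of str) {
    if (ch === char && prev === char) continue; // run already emitted once
    out += ch;
    prev = ch;
  }
  return out;
}

console.log(squashRuns('aabbbcc', 'b')); // 'aabcc'
```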
"domain": "codereview.stackexchange",
"id": 44665,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, strings, unit-testing, regex",
"url": null
} |
ros, gazebo, gripper, universal-robot, ur5
<!--include file="$(find ur_gazebo)/launch/controller_utils.launch"/-->
<rosparam file="$(find ur_gazebo)/controller/arm_controller_ur5.yaml" command="load"/>
<node name="arm_controller_spawner" pkg="controller_manager" type="controller_manager" args="spawn arm_controller" respawn="false" output="screen"/>
</launch>
Originally posted by philwall3 on ROS Answers with karma: 16 on 2017-11-16
Post score: 0
Original comments
Comment by chapulina on 2017-11-21:
It looks like there could be something weird with the DAE file, have you tried opening it on another program such as Blender? It's worth it looking at the normal directions and maybe re-exporting the mesh.
Comment by philwall3 on 2017-11-21:
Thanks for the suggestion! It's opening fine in Blender. Also Rviz and the moveit assistant wizard both open and show the base just fine. So I'm confident the .dae file is fine.
I was not able to solve the problem with above package. But I found another package that works fine for me: https://github.com/StanleyInnovation/robotiq_85_gripper
Originally posted by philwall3 with karma: 16 on 2017-12-07
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 29380,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, gazebo, gripper, universal-robot, ur5",
"url": null
} |
complexity-theory, lambda-calculus, functional-programming, turing-completeness
Title: Do Higher Order Functions provide more power to Functional Programming? I've asked a similar question on cstheory.SE.
According to this answer on Stackoverflow there is an algorithm that on a non-lazy pure functional programming language has an $\Omega(n \log n)$ complexity, while the same algorithm in imperative programming is $\Omega(n)$. Adding lazyness to the FP language would make the algorithm $\Omega(n)$.
Is there any equivalent relationship comparing a FP language with and without Higher Order Functions? Is it still Turing Complete? If it is, does the lack of Higher Order Functions in FP make the language less "powerful" or efficient? In a functional programming language that is powerful enough (for example, with data types to implement closures), you can eliminate all uses of higher order by the defunctionalization transformation. Since this method is used to compile this kind of language, you can reasonably assume that it does not affect performance and that in this setting higher order does not make the language any less powerful. However, it does affect how code is written.
However if the language is not powerful enough, then yes, higher order does provide expressive power. Consider the lambda-calculus: without any higher-order function, it really can't do anything, mostly because the most basic data types (integers, booleans) are implemented using functions.
In conclusion, it really depends on the language.
Above is my answer. Below, a comment about a usual assumption on imperative languages.
about an algorithm that on a non-lazy functional programming language has an $\Omega(n \log n)$ complexity, while the same algorithm in imperative programming is $\Omega(n)$. Adding lazyness to the FP language would make the algorithm $\Omega(n)$. | {
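The defunctionalization transformation mentioned above can be sketched as follows — every lambda in the source program becomes a tagged data value, and a single first-order `apply` function dispatches on the tag (all names here are illustrative):

```python
from dataclasses import dataclass

# Higher-order original (for reference):
#   compose = lambda f, g: lambda x: f(g(x))
#   add1    = lambda x: x + 1
#   double  = lambda x: 2 * x

# Defunctionalized: one constructor per lambda.
@dataclass
class Add1: pass

@dataclass
class Double: pass

@dataclass
class Compose:
    f: object
    g: object

def apply(fn, x):
    # The single first-order dispatcher replacing all higher-order calls.
    if isinstance(fn, Add1):
        return x + 1
    if isinstance(fn, Double):
        return 2 * x
    if isinstance(fn, Compose):
        return apply(fn.f, apply(fn.g, x))
    raise TypeError(fn)

print(apply(Compose(Add1(), Double()), 5))  # 2*5 + 1 = 11
```

Note that `Compose` needs a data type holding two function values, which is why the transformation requires a language "powerful enough" to encode closures as data.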
"domain": "cs.stackexchange",
"id": 2790,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "complexity-theory, lambda-calculus, functional-programming, turing-completeness",
"url": null
} |
python, pattern-recognition, sequence-modeling, data-mining
The result is:
[['F', 'F', 'A', 'A']]
Your second pattern occurs exactly 3 times and so doesn't fulfill the requirement of > 3 occurrences ([C,A,B,D,D]).
modifications for parallel processing
To make it processable in parallel you can make a slight modification. Just create another method in TreeNode that allows merging nodes. Like this:
def merge_nodes(self, other_nodes):
    # Merge other_nodes into this node, including all subnodes.
    if len(other_nodes) > 0:
        elements = set()
        for other_node in other_nodes:
            self.count += other_node.count
            elements.update(other_node.subnodes.keys())
        # elements now contains the set of the next elements
        # with which the sequence continues across the
        # other nodes.
        for element in elements:
            # Get the node of the resulting tree that represents
            # the sequence continuing with element; if there is
            # no such subnode, create one, since there is at least
            # one other node that counted sequence seq + element.
            my_subnode = self.get_subnode(element, create=True)
            other_subnodes = []
            for other_node in other_nodes:
                # Now get the subnode of each other node that
                # represents the same sequence (seq + element).
                other_subnode = other_node.get_subnode(element, create=False)
                if other_subnode is not None:
                    other_subnodes.append(other_subnode)
            # Merge the subnodes the same way.
            my_subnode.merge_nodes(other_subnodes)
"domain": "ai.stackexchange",
"id": 2459,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, pattern-recognition, sequence-modeling, data-mining",
"url": null
} |
waves, string
†The $7880\:\mathrm{Hz}$ figure was a thought mistake. The wave actually travels twice the length of the string in each cycle (from the bridge to the nut and back). | {
"domain": "physics.stackexchange",
"id": 54696,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "waves, string",
"url": null
} |
homework-and-exercises, quantum-field-theory, operators, correlation-functions
Title: Why $\langle k,n|\varphi(x)|0\rangle = e^{-ikx} \langle k,n|\varphi(0)|0\rangle$? (Srednicki's Quantum field theory) I'm reading Srednicki's Quantum field theory, p.94 and trying to understand some statement : | {
"domain": "physics.stackexchange",
"id": 92420,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, quantum-field-theory, operators, correlation-functions",
"url": null
} |
statistical-mechanics
Let's say you had a big box of volume $V$ and you placed a thin barrier down the middle and filled the left side (of volume $V/2$) with gas of total energy $E$. Then there would be a macrostate describing that, and there would be a probability of a given microstate (out of many many possible). Now if you very quickly moved the barrier out of the way, so fast that no gas was touching it while you moved it then you could argue that the microstate changed or it didn't (it happened so fast no particle had time to move). And you could argue that the macrostate changed or it didn't. And in both cases the argument would be purely semantic. You know zero particles moved.
Now if the first macrostate was in thermal equilibrium then each of the many microstates was equally likely. And whichever one it was in at that moment, that microstate is one of the vastly many more microstates available to the volume $V$ system. But it is one of them. And there are way way way more microstates in the volume $V$ system. So if you tried to look again later and hoped to find it all on the left side again, the chance would be $\Omega\left(N,\frac{V}{2}, E\right)/\Omega\left(N,V, E\right)\ll 100\%.$ It is a nonzero chance. But you aren't going to see it.
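To make that ratio concrete under the usual ideal-gas assumption $\Omega(N,V,E) \propto V^N$ at fixed $N$ and $E$ (an editorial sketch, not part of the original answer):

$$\frac{\Omega\!\left(N,\frac{V}{2},E\right)}{\Omega(N,V,E)} = \frac{(V/2)^N}{V^N} = 2^{-N},$$

which for $N \sim 10^{24}$ particles is far too small to ever observe.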
Make sure you understand that example and the math, then reread your passage again. Whether something is "the same" microstate or "the same" macrostate doesn't affect what the probabilities are. It is what it is. When someone says the particles can be anywhere in the large box, they could be anywhere; they could even be positioned so all are on the left side. But it's so, so unlikely when you have $10^{24}$ particles. So the chance is small.
Wouldn't the occurrence of microstates corresponding to $E,V/2, N$ be fluctuation from the macrostate having the greatest multiplicity? | {
"domain": "physics.stackexchange",
"id": 28247,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "statistical-mechanics",
"url": null
} |
gazebo
transmission_interface/SimpleTransmission
EffortJointInterface
EffortJointInterface
1
Comment by Mav14 on 2017-11-08:
And this is the error I get Could not find resource 'waist' in 'hardware_interface::EffortJointInterface'.. I can't figure out what's causing that despite looking everywhere. | {
"domain": "robotics.stackexchange",
"id": 4106,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gazebo",
"url": null
} |
python, game, machine-learning, 2048
def make_image(self, state_index: int):
    # Game specific feature planes.
    return []

def make_target(self, state_index: int, num_unroll_steps: int, td_steps: int):
    # The value target is the discounted root value of the search tree N steps
    # into the future, plus the discounted sum of all rewards until then.
    targets = []
    for current_index in range(state_index, state_index + num_unroll_steps + 1):
        bootstrap_index = current_index + td_steps
        if bootstrap_index < len(self.root_values):
            value = self.root_values[bootstrap_index] * self.discount**td_steps
        else:
            value = 0

        for i, reward in enumerate(self.rewards[current_index:bootstrap_index]):
            value += reward * self.discount**i  # pytype: disable=unsupported-operands

        if current_index > 0 and current_index <= len(self.rewards):
            last_reward = self.rewards[current_index - 1]
        else:
            last_reward = 0

        if current_index < len(self.root_values):
            # 1) image[n] --> pred[n], value[n], hidden_state[n]
            # 2) hidden_state[n] + action[n] --> reward[n], pred[n+1], value[n+1], hidden_state[n+1]
            targets.append(
                (value, last_reward, self.child_visits[current_index], True)
            )
        else:
            # States past the end of games are treated as absorbing states.
            targets.append(
                (value, last_reward, [0] * self.action_space_size, False)
            )
    return targets

def to_play(self, state_index: int = None) -> Player:
    return Player()

def action_history(self) -> ActionHistory:
    return ActionHistory(self.history, self.action_space_size)

def print_game(self, state_index: int):
    pass
"domain": "codereview.stackexchange",
"id": 45530,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, game, machine-learning, 2048",
"url": null
} |
The additional feature we have added is that $$a_k$$ is monotone, so as to avoid cases such as the previous one.
At this time, it is useful to see what our definition is intuitively representing. According to it, we measure how much a function differs from its restriction on the principal interval $$[0,T]$$.
Now, observe that the definition of $$(a_k)-$$periodicity implies that $$a_k\to a$$, for some $$a\in[0,+\infty]$$. Let: $$s_n:=\sum_{k=1}^na_k.$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9648551505674444,
"lm_q1q2_score": 0.8239021196142051,
"lm_q2_score": 0.8539127548105611,
"openwebmath_perplexity": 251.24675025169216,
"openwebmath_score": 0.9692988991737366,
"tags": null,
"url": "https://math.stackexchange.com/questions/2093526/is-there-a-definition-of-a-pseudo-period-for-fx-sin3x-sin-pi-x"
} |
statistical-mechanics, condensed-matter, solid-state-physics, ferromagnetism
Title: Heat Capacity of Ferromagnets and Antiferromagnets I want to calculate (at least qualitatively) the heat capacities of a ferromagnet and an antiferromagnet, say on a cubic lattice so life is simple. I want to make sure my approach is legit and also ask how to deal with certain regimes.
Ferromagnets:
Low Temperature $T\ll T_c$: I think in this regime one uses the magnon dispersion, $\omega_k = Ak^2$, recognizes these are bosonic quasi-particles, and from there it's easy.
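Carrying that through (an editorial sketch, treating magnons as a free boson gas with $\omega_k = A k^2$):

$$U(T) \propto \int_0^{\infty} \frac{\hbar A k^2}{e^{\hbar A k^2/k_B T} - 1}\, k^2\, \mathrm{d}k \;\propto\; T^{5/2} \quad\Longrightarrow\quad C = \frac{\partial U}{\partial T} \propto T^{3/2},$$

which is the Bloch $T^{3/2}$ law (substitute $x = \hbar A k^2/k_B T$ so that $k^2\,\mathrm{d}k \propto T^{3/2} x^{1/2}\,\mathrm{d}x$).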
$T\sim T_c$: Here Landau-Ginzburg tells us that phenomenologically $f=tm^2+um^4$, which leads to a discontinuous heat capacity at $T=T_c$.
High temperature? This is a bit confusing for me. In reality, at high temperature a ferromagnet becomes a paramagnet. Then each spin is independent of every other, so is the heat capacity zero? What if $T>T_c$ but not $\gg T_c$?
Antiferromagnets: | {
"domain": "physics.stackexchange",
"id": 39322,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "statistical-mechanics, condensed-matter, solid-state-physics, ferromagnetism",
"url": null
} |
vba, excel
'Print the result
If vDist(vParTo)("!dist") < 0 Then
    vRange.Cells(vRowSteps + 1, vColSteps).Value = "No path found from source to destination"
Else
    vSteps = Split(vDist(vParTo)("!steps"), "!")
    For vRow = 1 To UBound(vSteps)
        vRange.Cells(vRowSteps + vRow, vColSteps).Value = vSteps(vRow - 1)
        vRange.Cells(vRowSteps + vRow, vColSteps + 1).Value = vSteps(vRow)
        vRange.Cells(vRowSteps + vRow, vColSteps + 2).Value = vDist(vSteps(vRow - 1))(vSteps(vRow))
    Next
    vRange.Cells(vRowSteps + vRow, vColSteps).Value = "Total:"
    vRange.Cells(vRowSteps + vRow, vColSteps + 2).Value = vDist(vParTo)("!dist")
End If

'Done
MsgBox "Done", vbOKOnly + vbInformation, "Path and Distance"
GoTo Finalize

ErrorHandler:
Err.Clear
MsgBox vError, vbOKOnly + vbCritical, "Error"

Finalize:
Set vDist = Nothing
End Sub
The code works, but I would like some feedback on the following aspects: | {
"domain": "codereview.stackexchange",
"id": 31378,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "vba, excel",
"url": null
} |
electrostatics, electric-fields, capacitance, gauss-law, dielectric
Title: Doubt in the derivation of Gauss's law in dielectrics | {
"domain": "physics.stackexchange",
"id": 60291,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics, electric-fields, capacitance, gauss-law, dielectric",
"url": null
} |
ros, amcl-demo.launch, pathplanning
Title: I want to implement a path planning algorithm (A*) on a TurtleBot
Hi, I want to implement a path planning algorithm (A*) on a TurtleBot.
Is it possible to do this alongside the amcl demo?
How can I read the obstacles in the map? And how can the cost of a position in the map be changed if I want some places in the map to be avoided?
Can someone suggest how to go about it?
Thank you.
Originally posted by Chennamaneni on ROS Answers with karma: 1 on 2014-04-07
Post score: 0
What you're looking to do is implement your own global planner (interface: base_global_planner).
Read about how all the pieces link together in http://wiki.ros.org/navigation and http://wiki.ros.org/nav_core.
What you're really doing is creating a subclass of http://docs.ros.org/hydro/api/nav_core/html/classnav__core_1_1BaseGlobalPlanner.html, overriding the two methods to implement your functionality.
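Independent of the plugin machinery, the A* core the question asks about can be sketched on a small occupancy grid. This is a hypothetical stand-alone illustration, not the nav_core API — a real global planner would read the cells from the costmap instead of a Python list:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = obstacle), 4-connected."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible for 4-connectivity, unit cost
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, None)]  # (f, g, node, parent)
    came_from, g_score = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue  # already expanded with a better g
        came_from[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_score.get((nr, nc), float('inf')):
                    g_score[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```

Avoiding certain places, as asked, amounts to marking those cells as obstacles (or, in a costmap, inflating their cost so the step cost `g + 1` becomes `g + cost`).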
Originally posted by paulbovbel with karma: 4518 on 2014-04-07
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 17551,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, amcl-demo.launch, pathplanning",
"url": null
} |
electricity, electric-circuits, electrons, electric-current, charge
Title: Electrons in an electric circuit, their movement, and power delivered Does an electrical appliance convert electrons into its respective work? I mean, are electrons consumed by the appliance (say, a bulb), with this mass then giving us energy?
Or do the same number of electrons just revolve around the circuit? Then where does the power come from? Electrons have charge, so when there is a potential difference across a circuit, this charge moves through it. In an incandescent light bulb there is a high resistance, meaning that there are many atoms with which the charges collide, transferring some of their kinetic energy. No electrons are being "consumed" by the light bulb, i.e. the number of electrons in the circuit does not change. The ability of the charges to do work is because of a potential difference, which can be achieved through a number of means, e.g. using voltaic cells or electromagnetic induction.
To gain a better idea of why potential difference moves charges, consider two isolated point charges of opposite charges, one positive and one negative. If you pull the negative charge away from the positive one, you are doing work on it in the form of potential energy, as you are opposing the electric field of the positive charge. If you let go, the negative charge will convert this potential energy into kinetic energy, as it is attracted to the positive test charge. A potential difference across a circuit, albeit simplified, essentially does this – it brings electrons from a higher potential to a lower potential, converting potential energy into the kinetic energy in the process. | {
"domain": "physics.stackexchange",
"id": 64238,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electricity, electric-circuits, electrons, electric-current, charge",
"url": null
} |
javascript, ecmascript-6, html5, audio
  // Generate frequency grid
  fillFrequencyGrid(frequencyTrainer.frequencies);

  controls.addEventListener('click', event => {
    if (event.target.classList.contains('difficulty-button')) {
      event.stopPropagation();
      stopToneGenerator();
      stopFrequencyTrainer();
      difficultyMode = event.target.getAttribute('data-difficulty');
      frequencyTrainer = startFrequencyTrainer(difficultyMode, frequency);
      frequency = frequencyTrainer.frequency;
      fillFrequencyGrid(frequencyTrainer.frequencies);
    }
  }, false);
}());
body {
font-family: 'Montserrat', sans-serif;
text-align: center;
padding-top: 10px;
}
h1 {
margin: 0 auto;
font-size: 30px;
text-decoration: underline;
}
h2 {
margin: 0;
font-size: 25px;
}
a {
color: #0000BB;
}
a:hover {
color: #000000;
}
button {
font-family: 'Montserrat', sans-serif;
text-align: center;
font-size: calc(10px + 1vw);
}
.body {
max-width: 1500px;
border: 1px solid black;
width: 95%;
margin: 0 auto;
}
.title {
padding: 10px 0 0 0;
margin: 0 auto;
width: 95%;
}
.content {
padding: 30px 0 0 0;
margin: 0 auto;
width: 95%;
}
.controls {
padding: 0;
margin: 0 auto;
width: 95%;
}
.volume-control {
padding: 0;
margin: 0 auto;
min-width: 200px;
width: 80%;
}
.footer {
padding: 20px 0 10px 0;
margin: 0 auto;
width: 95%;
}
.grid {
margin: 0 auto;
width: 95%;
display: grid;
grid-template-columns: repeat(auto-fit, minmax(84px, 1fr));
} | {
"domain": "codereview.stackexchange",
"id": 32007,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, ecmascript-6, html5, audio",
"url": null
} |
ros, talker
Title: A question from a ROS noobie. Error in talker & listener configuration
I've been learning ROS from the Mastering ROS for Robotics Programming book since yesterday. I'm actually stuck configuring the talker & listener.
The configuration & errors are as follows. I need some help to clear the error.
ROS Indigo
Ubuntu 14.04
CATKIN_MAKE:
roos@roos-Inspiron-1520:~$ cd catkin_ws
roos@roos-Inspiron-1520:~/catkin_ws$ catkin_make mastering_ros_demo_pkg
Base path: /home/roos/catkin_ws
Source space: /home/roos/catkin_ws/src
Build space: /home/roos/catkin_ws/build
Devel space: /home/roos/catkin_ws/devel
Install space: /home/roos/catkin_ws/install
####
#### Running command: "make cmake_check_build_system" in "/home/roos/catkin_ws/build"
####
####
#### Running command: "make mastering_ros_demo_pkg -j2 -l2" in "/home/roos/catkin_ws/build"
####
roos@roos-Inspiron-1520:~/catkin_ws$ roscore
... logging to /home/roos/.ros/log/58f9d6ee-6de2-11e6-aed6-001d09bb3726/roslaunch-roos-Inspiron-1520-17202.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://roos-Inspiron-1520:38168/
ros_comm version 1.11.20
SUMMARY
========
PARAMETERS
* /rosdistro: indigo
* /rosversion: 1.11.20
NODES | {
"domain": "robotics.stackexchange",
"id": 25634,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, talker",
"url": null
} |
meteorology, climate-change, poles, vorticity
In the northern hemisphere most vertically propagating Rossby waves arise due to the topography. In the southern hemisphere the sudden stratospheric warmings are a rarity due to lack of such topographical features. The last sudden stratospheric warming over the southern hemisphere was in 2002.
The resulting stratospheric anomalies can in return influence surface climate, but exactly how this occurs is still a matter of scientific research. One theory that is currently popular involves wave-mean-flow interactions: how planetary-scale Rossby waves influence the circulating zonal flows around a planet. The other main theory is wave reflection at the tropopause.
So what is the net result of either of the above two possibilities? Either way they affect mid-latitude storms. Storms could either become more intense, shift equatorward, or there could be extremely cold air advection spells as the stratospheric air gets mixed into the troposheric air.
As to the rarity of polar vortex splits - they are coupled to those sudden stratospheric warmings. If you look towards the climatology of SSWs, observations show that they occur more during El Niños and La Niñas than during neutral conditions. Sudden stratospheric warmings happen almost every other year in the NH. However sometimes the polar vortex isn't split during these events but is merely pushed equatorward.
The long-term effect of a polar vortex split could be extended spells of extremely cold weather in the affected areas, and when the vortex reforms going into summer it would be shrunk in size from the dispersed energy, giving rise to greater easterly wind circulation near the poles.
UPDATE
Now that the reanalysis data is available from January 2019 I want to add some visuals for the Dec-2018/Jan 2019 polar vortex split and ongoing cold wave. | {
"domain": "earthscience.stackexchange",
"id": 1651,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "meteorology, climate-change, poles, vorticity",
"url": null
} |
$${\rm mod}\ 337\!:\,\ \dfrac{0}{337} \overset{\large\frown}\equiv \dfrac{1}{117} \overset{\large\frown}\equiv \dfrac{-3}{\color{#0a0}{-14}} \overset{\large\frown}\equiv \dfrac{-23}5 \overset{\large\frown}\equiv\color{#c00}{\dfrac{-72} {1}}\overset{\large\frown}\equiv\dfrac{0}0\,$$ or, equivalently, in equational form
$$\qquad\ \ \ \begin{array}{rrl} [\![1]\!]\!:\!\!\!& 337\,x\!\!\!&\equiv\ \ 0\\ [\![2]\!]\!:\!\!\!& 117\,x\!\!\!&\equiv\ \ 1\\ [\![1]\!]-3[\![2]\!]=:[\![3]\!]\!:\!\!\!& \color{#0a0}{{-}14}\,x\!\!\!&\equiv -3\\ [\![2]\!]+8[\![3]\!]=:[\![4]\!]\!:\!\!\!& 5\,x\!\!\! &\equiv -23\\ [\![3]\!]+3[\![4]\!]=:[\![5]\!]\!:\!\!\!& \color{#c00}1\, x\!\!\! &\equiv \color{#c00}{-72} \end{array}$$ | {
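The same inverse can be computed by the extended Euclidean algorithm; a sketch, where the pair $(r, x)$ maintains the invariant $r \equiv 117\,x \pmod{337}$, mirroring the fraction chain above:

```python
def mod_inverse(a, m):
    # Extended Euclidean algorithm: each (r, x) pair satisfies r = a*x (mod m),
    # just like the numerator/denominator pairs in the fraction chain.
    old_r, r = m, a % m
    old_x, x = 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
    if old_r != 1:
        raise ValueError("a is not invertible mod m")
    return old_x % m

print(mod_inverse(117, 337))  # 265, i.e. -72 mod 337
```

The intermediate coefficients generated along the way ($-3$, $-23$, $-72$) are exactly the right-hand sides appearing in the equational form above.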
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9773707960121133,
"lm_q1q2_score": 0.8364839113890049,
"lm_q2_score": 0.8558511414521922,
"openwebmath_perplexity": 520.2530725799642,
"openwebmath_score": 0.882790744304657,
"tags": null,
"url": "https://math.stackexchange.com/questions/2054312/double-check-my-steps-to-find-multiplicative-inverse"
} |
ros, gazebo, roslaunch
Title: ros gazebo worlds problem
Hi, I am a beginner to ROS and Gazebo. I get the following problem when I try launching the Gazebo simulator. On entering "rosmake gazebo_worlds" I get the message gazebo_worlds: ROS_NOBUILD in package gazebo_worlds. I get the same message for each package. Please help.
Originally posted by Vishnu on ROS Answers with karma: 43 on 2012-10-01
Post score: 0
That's not a problem. It just means that the package is already built and that it was installed from Debian packages. Did you try to run gazebo?
Originally posted by Lorenz with karma: 22731 on 2012-10-01
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Vishnu on 2012-10-01:
Yes sir, but when I tried launching gazebo I get a problem again. I used this code roslaunch gazebo_worlds empty_world_no_x.launch . But I get an error | {
"domain": "robotics.stackexchange",
"id": 11194,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, gazebo, roslaunch",
"url": null
} |