text stringlengths 1 1.11k | source dict |
|---|---|
Sorry, I should note that the first part of the argument shows that Aut$\mathbb{Q}$ is also trivial.
• The "order-preserving" argument is spectacular. Thanks and plus one! – Vim Dec 23 '15 at 18:22
• That's right, thanks! – Noah Olander Dec 23 '15 at 18:38
• Note that order is crucial - once we pass to $\mathbb{C}$, the automorphism group becomes huge. – Noah Schweber Dec 23 '15 at 18:43
• – lhf Dec 23 '15 at 19:32
• Almost verbatim, if $F$ and $G$ are isomorphic subfields of $\mathbb{R}$, where $F$ contains the square roots of all its positive members, and if $\psi:F\to G$ is a field isomorphism, then $\psi = \mathrm{id}_F$ and $F=G$. – DanielWainfleet Dec 23 '15 at 20:31
The group $\mathrm{Aut}(\mathbb Q)$ is actually trivial and therefore finite! Take an automorphism $$\sigma:\mathbb Q\to\mathbb Q$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9773707999669627,
"lm_q1q2_score": 0.8035836524807121,
"lm_q2_score": 0.822189134878876,
"openwebmath_perplexity": 198.43196993046783,
"openwebmath_score": 0.9769202470779419,
"tags": null,
"url": "https://math.stackexchange.com/questions/1586966/automorphism-group-of-an-infinite-field"
} |
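The rest of the argument the answer above begins is short enough to sketch in one line: any field automorphism fixes $1$, hence every integer, hence every fraction,
$$\sigma(1)=1 \;\Rightarrow\; \sigma(n)=\sigma(\underbrace{1+\cdots+1}_{n\text{ times}})=n \;\Rightarrow\; q\,\sigma\!\left(\tfrac{p}{q}\right)=\sigma(p)=p \;\Rightarrow\; \sigma\!\left(\tfrac{p}{q}\right)=\tfrac{p}{q},$$
so $\sigma=\mathrm{id}$ and $\mathrm{Aut}(\mathbb Q)$ is trivial.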
homework-and-exercises, geometry, moment-of-inertia
The above calculation is plausible but flawed. The mistake is that you assumed the red disk has a uniform density. It does not. In the torus the amount of mass increases with radius from the z axis. When the torus is opened and straightened to make a cylinder, the inner side must be stretched, reducing density, while the outer side has to be compressed, increasing density. So the density of the red disk increases with distance from the z axis.
This also affected your use of the parallel axis theorem, because this theorem uses the distance of the centre of mass of the disk from the z axis, not the distance of the geometrical centre of the disk from the z axis. | {
"domain": "physics.stackexchange",
"id": 39738,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, geometry, moment-of-inertia",
"url": null
} |
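A numeric sanity check of the density argument above ($R$ and $a$ are made-up values; the exact answer for a uniform solid torus is $I_z/m = R^2 + \tfrac34 a^2$): integrating with the correct volume element $dV=(R+s\cos\alpha)\,s\,ds\,d\alpha\,d\varphi$ reproduces it, precisely because the mass element grows with distance from the $z$ axis.

```python
import numpy as np

# Moment of inertia of a uniform solid torus about its central z axis.
# Cross-section radius a, centre-line radius R (made-up values).
# The Jacobian rho_cyl * s is what encodes "more mass farther from the axis":
# a uniform-density straightened cylinder would drop the rho_cyl factor.
R, a = 3.0, 1.0
s = np.linspace(0, a, 400)               # radial coordinate in the cross-section
alpha = np.linspace(0, 2 * np.pi, 400)   # angle around the cross-section
S, A = np.meshgrid(s, alpha)
rho_cyl = R + S * np.cos(A)              # distance from the z axis
dV = rho_cyl * S                         # volume element (phi integrates to 2*pi)
I_over_m = (rho_cyl**2 * dV).sum() / dV.sum()
print(I_over_m)                          # close to R**2 + 0.75*a**2 = 9.75
```

Dropping the `rho_cyl` factor from `dV` (the uniform-disk assumption criticised above) gives $R^2+\tfrac12 a^2$ instead, which is exactly the error being described.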
c++, gui, qt, data-importer
Instead, use the existing read float capabilities of the stream input operator.
double val;   // read directly as a double
ss >> val;    // stream name assumed; the original snippet was garbled
Copy vs Move
This is a copy:
dS.d1 = data1;
dS.d2 = data2;
dS.d3 = data3;
But you never use data[123] again. So you may as well try and move the data.
dS.d1 = std::move(data1);
dS.d2 = std::move(data2);
dS.d3 = std::move(data3); | {
"domain": "codereview.stackexchange",
"id": 26765,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, gui, qt, data-importer",
"url": null
} |
java, object-oriented, game, role-playing-game
monster.setStat(statType, originalStat);
}
@Override
public void update() {
duration--;
if(duration <= 0) {
expire();
}
}
}
An example of a StatModifier class
public final class AttackBuff extends StatModifier {
public AttackBuff(Monster monster, int duration) {
super(monster, Effect.Type.ATTACKBUFF, Stat.Type.ATTACK, 1.5 ,duration);
}
}
The Monster class (partial)
public class Monster {
private final Map<Effect.Type, Effect> effects;
public Monster() {
effects = new HashMap<>();
}
public void addEffect(Effect.Type type, int duration) {
if(effects.containsKey(type)) {
effects.get(type).expire();
effects.remove(type);
}
Effect effect = Effect.newEffect(type, this, duration);
effect.apply();
effects.put(effect.getType(), effect);
}
public void updateEffects() { | {
"domain": "codereview.stackexchange",
"id": 18411,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, object-oriented, game, role-playing-game",
"url": null
} |
linear-programming, matching
By duality, any setting of $\vec{p}$ witnesses an upper bound on the best possible matching. But from the market perspective, I don't understand why this should be the case. It means that we can prove that the buyers cannot collectively be very happy (in the primal) by imagining a "hypothetical scenario" where the items are assigned prices (in the dual). In this hypothetical scenario, the buyers can behave selfishly (i.e., they take their most preferred item), and the objective contains a $\sum_i p_i$ term, which seems to correspond to a "seller" who pays the price $p_i$ exactly once per item, regardless of which items the buyers actually want to buy.
Is there an example (maybe even an "everyday life" one), or any sort of reasoning, that shows why this makes any sense? My only intuition is that in the primal buyers may have to be selfless, while in the dual they can be selfish. I think you are right that the dual problem is a story about selfish agents and the primal about selfless ones. | {
"domain": "cstheory.stackexchange",
"id": 5570,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "linear-programming, matching",
"url": null
} |
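The claim "any setting of $\vec p$ witnesses an upper bound" is weak duality, and it can be checked by brute force on a toy market (the values and prices below are made up for illustration):

```python
from itertools import permutations

# Tiny brute-force check of weak LP duality for the assignment problem.
v = [[4, 1, 2],
     [2, 3, 1],
     [1, 2, 5]]          # v[i][j]: buyer i's value for item j
p = [2.0, 1.0, 3.0]      # any prices whatsoever

# Primal: best perfect matching value, found by brute force.
best = max(sum(v[i][perm[i]] for i in range(3))
           for perm in permutations(range(3)))

# Dual objective: each buyer selfishly grabs their favourite item at these
# prices (utility v_ij - p_j, never below 0), and the "seller" collects
# every price exactly once.
dual = sum(p) + sum(max(max(v[i][j] - p[j] for j in range(3)), 0)
                    for i in range(3))
assert best <= dual      # weak duality: any p certifies an upper bound
print(best, dual)
```

For these particular prices the bound happens to be tight, which is exactly what market-clearing prices achieve in general.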
physical-chemistry, mixtures, viscosity
A nice summary of more models can be found in this paper: K.D. Danov, J. Colloid Interface Sci., 235, 144–149 (2001). That paper also includes a discussion showing that the factor 2.5 should depend on the mobility of the particle. This would mean that it will be higher for smaller particles, so in your case the solution with part B will be somewhat more viscous.
Another overview of models, including models which have an increasing viscosity with increasing particle size can be found here. So that would point to part A giving the more viscous solution.
So to summarize: to first order (i.e. low concentrations) the two viscosities will be the same. At higher orders you have competing effects of the particle radius that should increase the viscosity and the particle number that should also increase it. Which effect is dominant is probably hard to tell theoretically. | {
"domain": "chemistry.stackexchange",
"id": 1033,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "physical-chemistry, mixtures, viscosity",
"url": null
} |
complexity-theory, graphs, np-complete, combinatorics, circuits
In the latter case, could you supply me with an example of a monotone function computed by both a monotone circuit and a general Boolean circuit, where the size of the monotone circuit is greater than that of the general Boolean circuit? (I have been stuck on this for hours, searching for such an example, so I believe that there is no such example.) Éva Tardos gave a function which can be computed by a polynomial-size general circuit but requires an exponential-size monotone circuit. The circuit computes a good enough approximation to the Lovász theta function of the input graph.
Razborov gave an $n^{\Omega(\log n)}$ lower bound for monotone circuits computing the bipartite perfect matching function, for which polynomial-size general circuits exist.
"domain": "cs.stackexchange",
"id": 15848,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "complexity-theory, graphs, np-complete, combinatorics, circuits",
"url": null
} |
beginner, image, ios, swift
progressCircle.lineWidth = 2.5;
progressCircle.strokeStart = 0;
progressCircle.strokeEnd = 0.0;
self.layer.addSublayer(progressCircle);
}
func setProgress(progress:Float) {
self.progress = progress
progressCircle.strokeEnd = CGFloat(self.progress)
self.setNeedsDisplay()
}
required init(coder aDecoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
} | {
"domain": "codereview.stackexchange",
"id": 9421,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, image, ios, swift",
"url": null
} |
navigation, ros-melodic, gmapping
Originally posted by gvdhoorn with karma: 86574 on 2019-03-31
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 32787,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "navigation, ros-melodic, gmapping",
"url": null
} |
viterbi-algorithm
struct v27 *vp = p;
decision_t *d;
assert(vp->magic == V27MAGIC);
>> if(endstate < 0){ // Start at the state with the best metric
>> // Search through the stored metrics, finding the smallest
>> unsigned int metric = UINT_MAX;
>> for(int state = 0; state < 64; state++){
>> if(vp->old_metrics[state] < metric){
>> metric = vp->old_metrics[state];
>> endstate = state;
>> }
>> }
>> } | {
"domain": "dsp.stackexchange",
"id": 7954,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "viterbi-algorithm",
"url": null
} |
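Stripped of the C bookkeeping, the quoted search is just an argmin over the stored metrics; a sketch with made-up metric values:

```python
# When no end state is forced, the decoder starts traceback from the state
# with the smallest accumulated metric. old_metrics values are made up.
old_metrics = [37, 12, 55, 12, 90]
endstate = min(range(len(old_metrics)), key=old_metrics.__getitem__)
print(endstate, old_metrics[endstate])   # index of the first smallest metric
```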
java, cache
Output
key : xab value : 5
key : xbc value : 6
key : xyz value : 4
{xab=lrucache.LRUCache$IndexNode@c3d9ac, xbc=lrucache.LRUCache$IndexNode@7d8bb,
xyz=lrucache.LRUCache$IndexNode@125ee71} You can instead use a LinkedHashMap, which lets you iterate entries in access order if you create it with the (int, float, boolean) constructor.
Then, when a put makes the map too large, remove the eldest entry with an iterator over entrySet(): call next() and then remove() on the iterator.
However, LinkedHashMap is specifically designed to let a subclass decide when to remove the eldest entry via removeEldestEntry, which gets called on each put:
public class MyLRUCache<K,V> extends LinkedHashMap<K,V> {
private static final int MAX_CACHE = 4;
public MyLRUCache(){
super(MAX_CACHE, 1, true);
// sets the super class to use access order instead of insertion order.
}
@Override
protected boolean removeEldestEntry(
java.util.Map.Entry<K, V> eldest) {
return size() > MAX_CACHE;
}
} | {
"domain": "codereview.stackexchange",
"id": 12087,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, cache",
"url": null
} |
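For comparison (not part of the original review), the same access-order eviction idea in Python, using collections.OrderedDict in place of LinkedHashMap:

```python
from collections import OrderedDict

# move_to_end on every access simulates LinkedHashMap's access order;
# popitem(last=False) evicts the eldest entry, like removeEldestEntry.
class LRUCache:
    MAX_CACHE = 4

    def __init__(self):
        self.d = OrderedDict()

    def get(self, key):
        self.d.move_to_end(key)          # mark as most recently used
        return self.d[key]

    def put(self, key, value):
        self.d[key] = value
        self.d.move_to_end(key)
        if len(self.d) > self.MAX_CACHE:
            self.d.popitem(last=False)   # evict least recently used

c = LRUCache()
for i, k in enumerate("abcd"):
    c.put(k, i)
c.get("a")            # touch "a", so "b" becomes the eldest
c.put("e", 4)         # evicts "b"
print(list(c.d))      # ['c', 'd', 'a', 'e']
```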
waves, velocity, speed
If we fix $\phi$, the value of $\psi$ remains constant. The positions $x$ at which $\phi$ (the phase) remains constant satisfy $x=vt$, or $x=-vt$, if we have one of these elementary solutions.
Although $x$ and $t$ are indeed independent variables, if one focuses on a point of constant $\phi$, such as the crest of a wave, as a function of time, a velocity $v=\pm x/t$ emerges from the equation $\phi=$ constant. | {
"domain": "physics.stackexchange",
"id": 53940,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "waves, velocity, speed",
"url": null
} |
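The "follow a point of constant phase" idea can be checked numerically: track the crest of $\psi=\cos(x-vt)$ on a grid and measure its speed (the grid and $v$ are made-up values):

```python
import numpy as np

# Track the first crest of psi = cos(x - v*t): it should move at speed v.
v = 2.0
x = np.linspace(0, 20, 20001)

def crest(t):
    # position of the global maximum of the wave profile at time t
    return x[np.argmax(np.cos(x - v * t))]

t0, t1 = 1.0, 3.0
speed = (crest(t1) - crest(t0)) / (t1 - t0)
print(speed)   # matches v
```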
physical-chemistry, thermodynamics, energy
Once in the product potential the extra energy is lost quickly, a few picoseconds, to any solvent surrounding the molecule. Energy is also lost to other vibrational /rotational energy levels in the product molecule. These are vibrations not involved in the bond breaking. Eventually all this energy is spread out in the surrounding solvent (or gas) raising its temperature. The kinetic energy change in a reaction may not be that large if the energy is taken up by the vibrations and rotations. This is unlike the case when an atom is formed as a product. | {
"domain": "chemistry.stackexchange",
"id": 12025,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "physical-chemistry, thermodynamics, energy",
"url": null
} |
quantum-mechanics, quantum-electrodynamics, superconductivity, quantum-computer
Title: Why is a transmon a charge qubit? The classic charge qubit is the Cooper pair box, which is a capacitor in series with a Josephson junction. In my understanding, by changing the gate voltage at the capacitor, one can create a superposition of $n$ and $n+1$ Cooper pairs on the 'island' in between the junction and capacitor.
A transmon looks far more like a classic LC circuit. It is often depicted as a Josephson junction in parallel with a very large capacitor and thus it is manipulated using microwave frequencies, not gate voltages. However, in all literature I can find it is called a special case of a charge qubit. I cannot seem to make sense of these two ideas. How are they equivalent? There are two things to consider:
What does the potential look like?
Is the wave function of the qubit narrow in the flux or charge basis? | {
"domain": "physics.stackexchange",
"id": 22189,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-electrodynamics, superconductivity, quantum-computer",
"url": null
} |
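For reference, and as an assumption on my part that this is the framework the two questions above have in mind (it is the standard circuit-QED Hamiltonian, not something stated in the post): the Cooper pair box and the transmon are the same circuit,
$$H = 4E_C\,(\hat n - n_g)^2 - E_J\cos\hat\varphi,$$
differing only in the ratio $E_J/E_C$. The transmon's large shunt capacitor makes $E_J/E_C \gg 1$, so the cosine well looks nearly harmonic (LC-like) and the levels become insensitive to the gate charge $n_g$, yet the degrees of freedom are still the island charge $\hat n$ and its conjugate phase — hence "a special case of a charge qubit".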
python, python-3.x, playing-cards, curses
# NOTE: Re-Factor this into some more DRY-compliant code
# NOTE: Re-Factor into multiple smaller functions that are easier for others
# to follow along with!
# player hand
for card in range(len(player_hand)):
# for each card in the hand
value = player_hand[card].value
suit = player_hand[card].suit
if suit == "H" or suit == "D":
color = colors_dict.get("RED_CARD")
elif suit == "C" or suit == "S":
color = colors_dict.get("BLACK_CARD")
# place the blank card rect
card_height = 5
card_width = 3
for cell_y in range(player_hand_coords[card][0][0], player_hand_coords[card][0][0] + card_height):
for cell_x in range(player_hand_coords[card][0][1], player_hand_coords[card][0][1] + card_width):
stdscr.addstr(cell_y, cell_x, " ", curses.color_pair(color)) | {
"domain": "codereview.stackexchange",
"id": 33304,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, python-3.x, playing-cards, curses",
"url": null
} |
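A sketch of the DRY refactor the NOTEs above ask for. The helper names (card_color, draw_blank_card) and the colors_dict keys are modelled on the snippet; everything else is an assumption.

```python
import curses

CARD_HEIGHT, CARD_WIDTH = 5, 3
RED_SUITS = {"H", "D"}

def card_color(suit, colors_dict):
    # Hearts and diamonds are red; clubs and spades are black.
    return colors_dict["RED_CARD" if suit in RED_SUITS else "BLACK_CARD"]

def draw_blank_card(stdscr, top, left, color):
    # Paint the card's background rectangle cell by cell.
    for y in range(top, top + CARD_HEIGHT):
        for x in range(left, left + CARD_WIDTH):
            stdscr.addstr(y, x, " ", curses.color_pair(color))

def draw_hand(stdscr, hand, coords, colors_dict):
    # One small loop replaces the copy-pasted per-hand blocks.
    for card, coord in zip(hand, coords):
        top, left = coord[0]
        draw_blank_card(stdscr, top, left,
                        card_color(card.suit, colors_dict))
```

The same three helpers then serve the dealer's hand, which is the DRY win the comments were after.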
# Playing with Integrals: Inequalities and Integrals 1
Integration, just like differentiation, reveals a new approach to proving inequalities. Let's take a detailed look at inequalities solved by or involving integrals.
Theorem 1. If for all $$x\in[a,b]$$ the following inequality holds $f(x)\geq g(x),$ then for all $$x\in[a,b]$$ we have $\int^x_af(t)\,dt\geq\int^x_ag(t)\,dt.$
I won't present a formal proof, but rather just a simple image to illustrate the idea.
Now, armed with this theorem, let's solve a few very basic problems.
Problem 1. Prove the following inequality $\ln(2\sin x)>\frac{1}{2}x(\pi-x)-\frac{5}{72}\pi^2,\forall x\in\left(\frac{\pi}{6},\frac{\pi}{2}\right)$ | {
"domain": "brilliant.org",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9835969645988575,
"lm_q1q2_score": 0.8259597875905171,
"lm_q2_score": 0.8397339736884712,
"openwebmath_perplexity": 2050.2774724903493,
"openwebmath_score": 0.9984526038169861,
"tags": null,
"url": "https://brilliant.org/discussions/thread/playing-with-integrals-ineqaulitites-and-integrals/"
} |
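The omitted proof of Theorem 1 is one line: the difference of the two sides is the integral of a nonnegative function,
$$\int^x_af(t)\,dt-\int^x_ag(t)\,dt=\int^x_a\bigl(f(t)-g(t)\bigr)\,dt\geq 0,$$
since $f(t)-g(t)\geq 0$ on $[a,x]\subseteq[a,b]$.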
condensed-matter, topology, ising-model, duality, spin-chains
Does that the operators having the same algebra really imply that $H(J,h)$ and $H(h,J)$ have the same spectrum? We know for a given algebra we can have different representations and these different representations may give different results. For example, the angular momentum algebra is always the same, but we can have different eigenvalues of spin operators. | {
"domain": "physics.stackexchange",
"id": 16206,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "condensed-matter, topology, ising-model, duality, spin-chains",
"url": null
} |
go, udp
func newSession(port, password *string) *session {
return &session{
Conn: bindAddress(*port),
Message: make(chan string),
Password: *password,
Address: make(chan *net.UDPAddr),
}
}
func (s session) listenForClients() {
for {
buf := make([]byte, 1024)
n, addr, err := s.Conn.ReadFromUDP(buf)
if err != nil {
log.Println(err)
}
m := buf[0:n]
if s.Password == "" {
s.Address <- addr
continue
}
if s.authenticate(string(m), addr) {
s.Address <- addr
}
}
}
func (s *session) authenticate(message string, address *net.UDPAddr) bool {
if s.Password == message {
_, _ = s.Conn.WriteToUDP([]byte("ok"), address)
return true
}
return false
}
func bindAddress(port string) *net.UDPConn {
laddr, err := net.ResolveUDPAddr("udp", port)
if err != nil {
log.Fatal(err)
} | {
"domain": "codereview.stackexchange",
"id": 26581,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "go, udp",
"url": null
} |
computer-vision, image-processing, image-segmentation
Apply Super Pixel based Segmentation (SLIC Based).
Per Label Mean
Calculate the mean value of each Super Pixel by the indices of each super pixel.
Per Label Variance
Calculate the variance of each super pixel using only its pixels.
More generally, you can look for homogeneity features, use them to find homogeneous regions, and then select the inverse. Those are very popular in segmentation.
Another approach would be using more advanced features such as:
BRISK Feature.
FAST Feature.
HOG Feature.
MSER Feature.
Then count how many of those are found within each Super Pixel. A Super Pixel with more features will be less homogenous.
The full code is available on my StackExchange Signal Processing Q75536 GitHub Repository (look at the SignalProcessing\Q75536 folder).
Update: today I encountered Robust Segmentation Free Algorithm for Homogeneity Quantification in Images. | {
"domain": "dsp.stackexchange",
"id": 10244,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "computer-vision, image-processing, image-segmentation",
"url": null
} |
c++, algorithm, simulation, union-find
Goal
Find the threshold at which the system will percolate, that is, the percentage of open sites in the system required for it to percolate.
WeightedQuickUnionUF.h
#pragma once
#include <vector>
class WeightedQuickUnionUF{
private:
std::vector<int> parent; //parent link(site indexed)
std::vector<int> sz; //size of component for roots(site indexed) / no of element in tree
int count_; //number of components/tree
public:
WeightedQuickUnionUF(int n);
int count()const; //return no of component
bool connected(int p, int q); //check if their components' are equal
void WeightedUnion(int p, int q); //merge trees tgt, smaller tree becomes child of larger tree
private:
int root(int p); //return root of p
};
WeightedQuickUnionUF.cpp
#include "WeightedQuickUnionUF.h" | {
"domain": "codereview.stackexchange",
"id": 43054,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, algorithm, simulation, union-find",
"url": null
} |
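For readers who don't want to compile the C++, here is a Python sketch of the same weighted quick-union structure (path halving in root is a common extra; it is not declared in the header above):

```python
# Weighted quick-union: smaller tree always hangs under the larger one,
# keeping tree height O(log n).
class WeightedQuickUnionUF:
    def __init__(self, n):
        self.parent = list(range(n))   # parent link (site indexed)
        self.size = [1] * n            # component size for roots
        self.count = n                 # number of components

    def root(self, p):
        while p != self.parent[p]:
            self.parent[p] = self.parent[self.parent[p]]  # path halving
            p = self.parent[p]
        return p

    def connected(self, p, q):
        return self.root(p) == self.root(q)

    def union(self, p, q):
        rp, rq = self.root(p), self.root(q)
        if rp == rq:
            return
        if self.size[rp] < self.size[rq]:
            rp, rq = rq, rp
        self.parent[rq] = rp           # smaller tree becomes the child
        self.size[rp] += self.size[rq]
        self.count -= 1

uf = WeightedQuickUnionUF(5)
uf.union(0, 1); uf.union(3, 4)
print(uf.connected(0, 1), uf.connected(0, 3), uf.count)
```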
gauge-theory, topology, yang-mills, gauge, instantons
Edit - Start from just outside the box near the origin, where $x_1$ and $x_2$ are both slightly negative.
Move along the $x_1$ axis. There is an immediate gauge transformation as you cross the $x_1$ boundary, $Ω_1(x_1=0;x_2=0)$. Continue to just short of $x_1 = a_1$
Move along the $x_2$ axis. There is an immediate gauge transformation as you cross the $x_2$ boundary, $Ω_2(x_1=a_1;x_2=0)$. Continue to just short of $x_2 = a_2$
The total gauge transformation is
$$Ω_2(x_1=a_1;x_2=0)Ω_1(x_1=0;x_2=0)$$
Do the same trip along opposite edges to get the right half of the equation. | {
"domain": "physics.stackexchange",
"id": 65429,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gauge-theory, topology, yang-mills, gauge, instantons",
"url": null
} |
This question I found on the internet, please help me to solve it: write a function called odd_index that takes a matrix, M, as an input argument. Note that both the row and the column of an element must be odd for it to be included; elements such as (1,2), (2,1), (2,2) are excluded because either the row or the column or both are even. size(C,1) returns the number of rows in C, while size(C,2) returns the number of columns. size(C) returns the number of rows and columns; for matrices with higher dimensions it returns the size of each dimension. | {
"domain": "thetopsites.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9683812309063187,
"lm_q1q2_score": 0.8529473524906787,
"lm_q2_score": 0.880797071719777,
"openwebmath_perplexity": 360.06919482051563,
"openwebmath_score": 0.6124218702316284,
"tags": null,
"url": "https://thetopsites.net/article/60225030.shtml"
} |
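In MATLAB the whole exercise is the one-liner M(1:2:end, 1:2:end); a NumPy sketch of the same idea (NumPy is 0-based, so MATLAB's odd rows and columns become the even indices; the example matrix is made up):

```python
import numpy as np

# Keep only elements whose (1-based) row AND column are odd.
def odd_index(M):
    return M[0::2, 0::2]   # every second row/column, starting from the first

M = np.arange(1, 13).reshape(3, 4)   # 3x4 example matrix 1..12
print(odd_index(M))
```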
Remember that for all $z$, we have $$\cos^2(z) + \sin^2(z) = 1$$
-
I don't know what that weird a is. – user138246 Jun 21 '12 at 20:26
I give up ${}{}$ – user17762 Jun 21 '12 at 20:31
Because Jordan :) – user17762 Jun 21 '12 at 20:40
@Jordan: Dear Jordan, Marvis was trying to tell you that the equation $\cos^2(z)+\sin^2(z)=1$ can make your question solved. In fact, if you change your variable from $\frac{1}{2}\theta$ to $z$ then by using a very basic formula, you will get your answer as you want. This basic trigonometric formula is what Marviz noted you. – Babak S. Jun 21 '12 at 20:44
$x$ and $y$ are related because $x$ is the sine of a certain thing, and $y$ is the cosine of the same thing, and sine and cosine are closely related to one another, as you might guess from the names. – MJD Jun 21 '12 at 21:07
This is JUST to clear what Marvis noted.
$x=\sin(\frac{1}{2}\theta)$ $\longrightarrow$ $x^2=\sin^2(\frac{1}{2}\theta)$ , | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9838471670723234,
"lm_q1q2_score": 0.834270669288675,
"lm_q2_score": 0.8479677506936878,
"openwebmath_perplexity": 396.0389857678149,
"openwebmath_score": 0.9580629467964172,
"tags": null,
"url": "http://math.stackexchange.com/questions/161338/finding-cartesian-equations-from-parametric?answertab=votes"
} |
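Finishing the computation started above: squaring $y=\cos(\frac{1}{2}\theta)$ as well and adding,
$$x^2+y^2=\sin^2\left(\tfrac{1}{2}\theta\right)+\cos^2\left(\tfrac{1}{2}\theta\right)=1,$$
so the Cartesian equation of the curve is the unit circle $x^2+y^2=1$.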
java, strings
Or, we could look in the result, if we have it already:
public static void removeMultipleOccurrence(final String input) {
final StringBuilder result = new StringBuilder();
for (int i = 0; i < input.length(); i++) {
String currentChar = input.substring(i, i + 1);
if (result.indexOf(currentChar) < 0) //if not contained
result.append(currentChar);
}
System.out.println(result);
} | {
"domain": "codereview.stackexchange",
"id": 3528,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, strings",
"url": null
} |
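For comparison (not part of the original review), the same keep-the-first-occurrence filter is a one-liner in Python, since dicts preserve insertion order:

```python
# dict.fromkeys keeps one entry per character, in first-seen order,
# which is exactly what the Java StringBuilder loop above does.
def remove_multiple_occurrence(s):
    return "".join(dict.fromkeys(s))

print(remove_multiple_occurrence("aabacbdd"))
```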
algorithms, graphs
I imagine this as a minimum spanning tree on directed graphs problem. I found the Chu–Liu/Edmonds algorithm; I know that this algorithm works for edge-weighted graphs, and mine is vertex-weighted, so I just set each edge's weight to the weight of the vertex at the end of the edge. But this is not an optimal solution. I don't need direct connections between people in the set D.
So after I have the result from that algorithm, I can apply some greedy algorithm to it, which will go recursively over each element and check how removing it from the subset D will affect the structure: the sum of the weights should be minimal while ensuring that no element falls out of set D (check below).
Referring to the example, my MST result will be John, Adam, Victor, Bob (27). The best solution is John, Bob (9). An interesting bad solution is Victor, Bob (8): the sum is minimal, but unfortunately John will fall out of the D subset.
Also I assume that: | {
"domain": "cs.stackexchange",
"id": 16716,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, graphs",
"url": null
} |
thermodynamics, water, freezing
IMHO we now have a pretty decent picture of what is (and isn’t) happening. Remember, the objective is to make an impressive fog. It is exponentially important to have hot water in order to drive a lot of H2O into the vapor phase. It is important to do a decent job of tossing the water, in order to create sufficient surface area for evaporation to occur. It is necessary to have reasonably cold air, to cause recondensation to occur. Extreme cold is not necessary, but doesn’t hurt, and an ice-fog will be more persistent than the other kind of fog. | {
"domain": "physics.stackexchange",
"id": 11203,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, water, freezing",
"url": null
} |
thermodynamics, statistical-mechanics, ideal-gas, kinetic-theory
As for $d\tau_N$, it is the infinitesimal element of phase space,
$$d\tau_N=\frac{1}{N!}d^3p_1d^3x_1\cdots d^3p_Nd^3x_N.$$
The factorial accounts for the indistinguishability of the atoms but this is not important here as we shall see very soon. Thus, in the expression of $Z$, the integration for each component of momentum ranges from $-\infty$ to $+\infty$, whereas each integration $\int d^3x_i$ shall cover the entire volume $V$.
We see that $u_N$ is the sum of one term depending only on the momenta and of another term depending only on the positions. This allows us to factorise the expression of $Z$ as follows,
$$Z=\frac{1}{N!}\int \exp\left(-\frac{1}{kT}\sum_i \frac{p_i^2}{2m}\right)d^3p_1\cdots d^3p_N\int \exp\left(-\sum_{i<j}\frac{\phi(r_{ij})}{kT}\right)d^3x_1\cdots d^3x_N.$$
For a perfect gas, we would have $\phi=0$, and therefore the partition function would read
$$Z_\text{ideal}=\frac{V^N}{N!}\int \exp\left(-\frac{1}{kT}\sum_i \frac{p_i^2}{2m}\right)d^3p_1\cdots d^3p_N.$$ | {
"domain": "physics.stackexchange",
"id": 43596,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, statistical-mechanics, ideal-gas, kinetic-theory",
"url": null
} |
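The momentum integrals left in $Z_\text{ideal}$ are Gaussians, each one-dimensional factor contributing $\sqrt{2\pi mkT}$ (note that these conventions omit the usual $1/h^{3N}$ that makes $Z$ dimensionless):
$$Z_\text{ideal}=\frac{V^N}{N!}\left(\int_{-\infty}^{\infty}e^{-p^2/2mkT}\,dp\right)^{3N}=\frac{V^N}{N!}\,(2\pi mkT)^{3N/2}.$$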
antennas, radio-frequency
Title: Is there a way to tell what frequency an unmarked antenna is designed for? I have accumulated a large amount of R/C gear over the years. I have several antennas which are not labelled as to their original use. This antenna is either for 5.8 GHz, 2.4 GHz, or 910 MHz.
"domain": "physics.stackexchange",
"id": 50955,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "antennas, radio-frequency",
"url": null
} |
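One practical check (my suggestion, not from the post): measure the radiating element and compare against a quarter wavelength for each candidate band, assuming a simple monopole/whip element.

```python
# Quarter-wave element lengths for the three candidate bands.
c = 299_792_458.0                      # speed of light, m/s
for f in (5.8e9, 2.4e9, 910e6):
    quarter_wave_mm = c / f / 4 * 1000
    print(f"{f/1e6:.0f} MHz: quarter wave = {quarter_wave_mm:.1f} mm")
```

The three lengths (roughly 13 mm, 31 mm, and 82 mm) differ enough that a ruler usually settles the question, modulo velocity-factor and matching details.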
newtonian-mechanics, angular-momentum, conservation-laws, inertial-frames, galilean-relativity
Title: Angular momentum conservation under Galileo transformation I was trying to see when angular momentum is independent of the choice of origin, but then it seems to me that angular momentum is no longer conserved under a Galileo transformation:
Given a point mass is doing circular orbital motion in an inertial frame:
$$\vec L = \vec r \times \vec p $$
In a new relatively stationary frame with displacement $\vec R$:$^{\dagger}$
$$\vec {L'}=\vec {r'} \times \vec {p'}$$
$$\vec {L'}=({\vec R +\vec {r}} )\times \vec {p}$$
Take time derivative:
$$\dot {\vec {L'}}=({\dot{\vec R} +\dot{\vec {r}}} )\times \vec {p} +({\vec R +\vec {r}} )\times \dot{\vec {p}}$$
$$\dot {\vec {L'}}=0 +({\vec R +\vec {r}} )\times \dot{\vec {p}}$$
Given angular momentum is conserved in an orbital motion in the old frame ($\vec {r} \times \dot{\vec {p}} = 0$):
$$\dot {\vec {L'}}=\vec R \times \dot{\vec {p}}$$ | {
"domain": "physics.stackexchange",
"id": 43132,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, angular-momentum, conservation-laws, inertial-frames, galilean-relativity",
"url": null
} |
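A quick numeric illustration of the conclusion $\dot {\vec {L'}}=\vec R \times \dot{\vec {p}}\neq 0$: for a circular orbit, $\vec L$ about the original origin is constant but $\vec{L'}$ about the shifted origin is not (unit mass and frequency; $\vec R$ is a made-up displacement):

```python
import numpy as np

# Unit circular orbit: r(t) = (cos t, sin t, 0), p(t) = r'(t) with m = 1.
def r(t): return np.array([np.cos(t), np.sin(t), 0.0])
def p(t): return np.array([-np.sin(t), np.cos(t), 0.0])

R = np.array([2.0, 0.0, 0.0])          # shift of the origin
t0, t1 = 0.0, np.pi / 2
L0,  L1  = np.cross(r(t0), p(t0)),     np.cross(r(t1), p(t1))
Lp0, Lp1 = np.cross(R + r(t0), p(t0)), np.cross(R + r(t1), p(t1))
print(L0[2], L1[2])      # equal: L is conserved in the original frame
print(Lp0[2], Lp1[2])    # unequal: L' is not conserved in the shifted frame
```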
Thanks
The solution is: you cheated!
If we write ##g(x)=x^4## then ##f=\ln\circ g## which is only defined if we use absolute values: ##f=\ln\circ \operatorname{abs} \circ g##. So the correct expression is ##f(x)=\ln|x^4|## which equals ##4\cdot \ln|x|##. The fact that you could omit the absolute value is due to your unmentioned knowledge that ##x^4\geq 0## for all ##x##. Hence you used an additional information which was hidden, whereas the camouflage vanished in ##\ln x##.
Mark44
Mentor
For function ##f(x)=\ln x^4## the domain is x ∈ ℝ , x ≠ 0 but if I change it into ##f(x) = 4 \ln x## then the domain will be x > 0
In my opinion ##\ln x^4## and ##4 \ln x## are two same functions but I am confused why they have different domains
The property of logarithms that you used, ##\ln a^b = b\ln a## is valid only for a > 0. ##x^4 > 0## if and only if ##x \ne 0##, but the same is not true for x itself. | {
"domain": "physicsforums.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9372107878954105,
"lm_q1q2_score": 0.8039095181016838,
"lm_q2_score": 0.8577681013541613,
"openwebmath_perplexity": 2625.6523321814757,
"openwebmath_score": 0.8927342295646667,
"tags": null,
"url": "https://www.physicsforums.com/threads/confusion-about-the-domain-of-this-logarithmic-function.991159/"
} |
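The distinction the thread settles on is easy to check numerically (a sketch of the point, with sample values):

```python
import math

# ln(x^4) = 4*ln|x| holds for every nonzero x, while 4*ln(x) needs x > 0.
for x in (-2.0, -0.5, 0.5, 3.0):
    assert math.isclose(math.log(x**4), 4 * math.log(abs(x)))

math.log((-2.0)**4)        # fine: the argument 16 is positive
try:
    4 * math.log(-2.0)     # raises: log of a negative number
except ValueError:
    print("4*ln(x) is undefined at x = -2")
```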
Your alternative solution is correct and is definitely the way to go. There are some problems with your first attempt; I’ll quote part of it with comments.
Let $x \in \overline{B}$.
You’re trying to show that $B$ is dense in $\Omega$, so this is the wrong place to start: if you’re going to use this sequences approach, you need to start with an arbitrary $x\in \Omega$ and show that there’s a sequence in $B$ that converges to it.
Then there is a sequence $(x_n)_{n \in \mathbb{N}} \in B$ such that $x_n \to x$.
True, but (as noted above) not really useful.
Since $A \subseteq \overline{B}$, then $x_n \to a$ for some $a \in A$.
This is not true unless $x$ happens to be in $A$. Since $A$ may well be a proper subset of $\overline{B}$, there’s no reason to suppose that $x\in A$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9838471668355155,
"lm_q1q2_score": 0.8261698790708262,
"lm_q2_score": 0.8397339616560072,
"openwebmath_perplexity": 46.434508059379304,
"openwebmath_score": 0.9791748523712158,
"tags": null,
"url": "https://math.stackexchange.com/questions/1169712/showing-a-set-is-dense-in-metric-space-omega-d"
} |
have the chance to practice with an example. However, we know that this is only part of the truth, because from Faraday’s Law of Induction, if a closed circuit has a changing magnetic flux through it, a circulating current will arise, which means there is a nonzero voltage around the circuit. Consider the mass balance in a stream tube by using the integral form of the conservation of mass equation. The package follows a modular concept: fluxes can be calculated in just two simple steps or in several steps if more control is wanted. Again, flux is a general concept; we can also use it to describe the amount of sunlight hitting a solar panel or the amount of energy a telescope receives from a distant star, for example. While the line integral depends on a. For example, if you had a nozzle with a circular.
import matplotlib.pyplot as plt
resolution = 10  # pixels/um
sx = 16  # size of cell in X direction
sy = 32  # size of cell in Y direction
cell = mp.
The concept of electric flux is useful in association | {
"domain": "dearbook.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9820137910906878,
"lm_q1q2_score": 0.8074010586395752,
"lm_q2_score": 0.822189123986562,
"openwebmath_perplexity": 939.595995278296,
"openwebmath_score": 0.7990542650222778,
"tags": null,
"url": "http://dearbook.it/izsw/flux-integral-examples.html"
} |
quantum-mechanics, potential-energy, measurement-problem, quantum-tunneling
In quantum mechanics, the rectangular (or, at times, square) potential barrier is a standard one-dimensional problem that demonstrates the phenomena of wave-mechanical tunneling (also called "quantum tunneling") and wave-mechanical reflection. The problem consists of solving the one-dimensional time-independent Schrödinger equation for a particle encountering a rectangular potential energy barrier. It is usually assumed, as here, that a free particle impinges on the barrier from the left.
https://en.wikipedia.org/wiki/Rectangular_potential_barrier
Now contrast this with the classical picture.
Although classically a particle behaving as a point mass would be reflected, a particle actually behaving as a matter wave has a non-zero probability of penetrating the barrier and continuing its travel as a wave on the other side.
This means that as per QM, the particle behaving as a wave can actually exist at some probability on the other side of the barrier. | {
"domain": "physics.stackexchange",
"id": 59721,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, potential-energy, measurement-problem, quantum-tunneling",
"url": null
} |
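A numeric sketch of the tunneling discussed in the row above (not from the source; it uses the standard textbook transmission formula for a rectangular barrier with $E < V_0$, in arbitrary units with $m = \hbar = 1$, and the barrier parameters are made up):

```python
import math

def transmission(E, V0, L, m=1.0, hbar=1.0):
    # Standard textbook result for a rectangular barrier with E < V0:
    # T = [1 + V0^2 sinh^2(kappa L) / (4 E (V0 - E))]^(-1)
    kappa = math.sqrt(2.0 * m * (V0 - E)) / hbar  # decay constant inside barrier
    s = math.sinh(kappa * L)
    return 1.0 / (1.0 + (V0 ** 2 * s ** 2) / (4.0 * E * (V0 - E)))

# Nonzero probability of being found past the barrier, decaying with width:
t_thin = transmission(E=1.0, V0=2.0, L=1.0)
t_thick = transmission(E=1.0, V0=2.0, L=3.0)
```

Widening the barrier suppresses, but never eliminates, the transmitted wave.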
BST cannot be efficiently implemented on an array
Heap operations only need to bubble up or down a single tree branch, so O(log(n)) worst case swaps, O(1) average.
Keeping a BST balanced requires tree rotations, which can change the top element for another one, and would require moving the entire array around (O(n)).
Philosophy
• BSTs maintain a global property between a parent and all descendants (left smaller, right bigger).
The top node of a BST is the middle element, which requires global knowledge to maintain (knowing how many smaller and larger elements are there).
This global property is more expensive to maintain (log n insert), but gives more powerful searches (log n search).
• Heaps maintain a local property between parent and direct children (parent > children).
The top node of a heap is the big element, which only requires local knowledge to maintain (knowing your parent). | {
"domain": "kiwix.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9416541626630937,
"lm_q1q2_score": 0.8164478307778266,
"lm_q2_score": 0.8670357580842941,
"openwebmath_perplexity": 2245.870410932403,
"openwebmath_score": 0.4849940538406372,
"tags": null,
"url": "http://library.kiwix.org/cs.stackexchange.com_eng_all_2018-08/A/question/27860/whats-the-difference-between-a-binary-search-tree-and-a-binary-heap.html"
} |
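The local parent-vs-children bookkeeping described in the row above can be sketched as an array-backed max-heap insertion (illustrative code, not from the source; helper names are mine):

```python
def sift_up(heap, i):
    # Restore the max-heap property along the single branch from i to the
    # root: at most O(log n) swaps, using only local parent/child knowledge.
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] >= heap[i]:
            break  # parent >= child already holds, so the heap is valid again
        heap[parent], heap[i] = heap[i], heap[parent]
        i = parent

def heap_push(heap, value):
    heap.append(value)            # place the new value at the first free slot
    sift_up(heap, len(heap) - 1)  # bubble it up a single branch

h = []
for v in [3, 1, 4, 1, 5, 9, 2, 6]:
    heap_push(h, v)
```

Contrast this with a BST insert, which must keep the global left-smaller/right-bigger ordering and may trigger rotations.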
rosjava
My idea is to describe something similar to this one:
<rosjava-pathelement location="./lib/pc/pccomm.jar" />
<rosjava-pathelement location="./lib/bluecove-gpl.jar" />
<rosjava-pathelement location="./lib/bluecove.jar" />
<rosjava-pathelement location="./lib/yamlbeans-1.06.jar" />
Is it possible in an alternative way?
Juan Antonio
Originally posted by Juan Antonio Breña Moral on ROS Answers with karma: 274 on 2012-01-17
Post score: 0
If the path is relative, it will be evaluated as relative to the root of the package.
Originally posted by damonkohler with karma: 3838 on 2012-01-20
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 7917,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rosjava",
"url": null
} |
beginner, go, server, raspberry-pi, video
Performance
If you need to make your code faster, you will need to identify the bottlenecks. For this, I recommend profiling your code: https://blog.golang.org/profiling-go-programs
Add the code var cpuprofile = flag.String("cpuprofile", "", "write cpu profile to file")...
transfer the *.profile file to your computer
analyze the function calls | {
"domain": "codereview.stackexchange",
"id": 25054,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, go, server, raspberry-pi, video",
"url": null
} |
thermodynamics, statistical-mechanics
Here, to find $E_i$ why do they use $\sum E_i P(E)$ and not $\sum E_i P(E_i)$? Why are they finding expectation value of $E_i$ using probability of the system having total energy $E$ ?
I'm not really sure how they got this, or what they mean by
$$\bar{E_i}=\frac{\sum E_ie^{-\beta E}}{\sum e^{-\beta E}}$$
Shouldn't it be
$$\bar{E_i}=\frac{\sum E_ie^{-\beta E_i}}{\sum e^{-\beta E_i}}$$
instead where $P(E_i)$ i.e. probability of occurrence (for a molecule) of energy $E_i$ is $\frac{e^{-\beta E_i}}{\sum e^{-\beta E_i}}$ ? I think that all those doubts share the same root. The probability density of the system states depends on the system energy ($E$).
The author is splitting the energy in two additive terms, just for convenience. The occurrence of a state does not depend on such an arbitrary partition.
why do they use $\sum E_i P(E)$ and not $\sum E_i P(E_i)$?
Just because of the stated above. | {
"domain": "physics.stackexchange",
"id": 43597,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, statistical-mechanics",
"url": null
} |
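The point of the row above — that the Boltzmann weight attaches to the total state energy $E$ — can be illustrated with a toy canonical average (illustrative sketch, not from the source; the two-level energies are my choice):

```python
import math

def average_energy(levels, beta):
    # <E> = sum(E * exp(-beta * E)) / Z, where Z is the partition function.
    # The weight exp(-beta * E) depends on the total energy of each state.
    weights = [math.exp(-beta * E) for E in levels]
    Z = sum(weights)
    return sum(E * w for E, w in zip(levels, weights)) / Z

# Two-level toy system with energies 0 and 1:
e_cold = average_energy([0.0, 1.0], beta=10.0)  # low T: ground state dominates
e_hot = average_energy([0.0, 1.0], beta=0.01)   # high T: levels nearly equal
```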
c#, unit-testing
// Changed to UriBuilder. This will handle cases like trailing slashes and whatnot
UriBuilder address = new UriBuilder();
address.Host = host;
address.Path = file;
using (WebClient client = new WebClient())
{
if (!String.IsNullOrWhiteSpace(user) && !String.IsNullOrWhiteSpace(pass))
{
client.Credentials = new NetworkCredential(user, pass);
}
// Changed to a try catch so that we can handle the error with a better message
// If you'd rather have this bubble up, that's fine
try
{
client.DownloadFile(address.Uri, path);
}
catch (Exception e)
{
throw new InvalidOperationException(
"Error downloading file from: " + address.Uri.ToString(),
e);
}
}
} | {
"domain": "codereview.stackexchange",
"id": 297,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, unit-testing",
"url": null
} |
similarity
If the classes weren't part of a hierarchy, I'd probably look at cosine similarity (or equivalent) between classes assigned to an object, but I'd like to use the fact that different classes with the same parents also have some similarity value (e.g. in the example above, beef has some small similarity to omelette, since they both have items from the class '1 produce').
If it helps, the hierarchy has ~200k classes, with a maximum depth of 5. While I don't have enough expertise to advise you on selection of the best similarity measure, I've seen a number of them in various papers. The following collection of research papers hopefully will be useful to you in determining the optimal measure for your research. Please note that I intentionally included papers, using both frequentist and Bayesian approaches to hierarchical classification, including class information, for the sake of more comprehensive coverage.
Frequentist approach: | {
"domain": "datascience.stackexchange",
"id": 187,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "similarity",
"url": null
} |
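A plain cosine similarity over class counts — the baseline the question mentions — can be sketched as follows (illustrative; the labels mirror the question's beef/omelette example). Note it only credits the literally shared class, which is exactly the limitation the hierarchy-aware measures in the cited papers try to fix:

```python
import math

def cosine_similarity(a, b):
    # a, b: sparse {class_label: count} vectors for two objects.
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

beef = {"1 produce": 1, "2 meat": 3}
omelette = {"1 produce": 2, "3 dairy": 2}
sim = cosine_similarity(beef, omelette)  # small but nonzero: shared '1 produce'
```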
javascript, node.js, web-scraping
// Gets current items Search Results
const getItems = async searchTerm => {
browser = await puppeteer.launch({
headless: true,
timeout: 0,
args: ["--no-sandbox"]
});
page = await browser.newPage();
await page.goto(`https://facebook.com/marketplace/tampa/search/?query=${encodeURI(searchTerm)}&sort=created_date_descending&exact=true`);
await autoScroll(page);
const itemList = await page.waitForSelector('div > div > span > div > a[tabindex="0"]')
.then(() => page.evaluate(() => {
const itemArray = [];
const itemNodeList = document.querySelectorAll('div > div > span > div > a[tabindex="0"]');
itemNodeList.forEach(item => {
const itemTitle = item.innerText;
const itemURL = item.getAttribute('href');
const itemImg = item.querySelector('div > div > span > div > a > div > div > div > div > div > div > img').getAttribute('src'); | {
"domain": "codereview.stackexchange",
"id": 38191,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, node.js, web-scraping",
"url": null
} |
rna-seq, deseq2
Title: Error in checkFullRank(modelMatrix) deseq2 Trying to use deseq2 for differential expression analysis (rna-seq) between three groups and also account for batch effect as the control were sequenced at a different time point.
control: con
sample with mutation A: mutA
sample with mutation B: mutB
here is my design
sample condition batch
1 C_1 con 1
2 C_2 con 1
3 C_5 con 1
4 C_3 con 1
5 C_4 con 1
6 M_6 mutA 2
7 M_2 mutA 2
8 M_5 mutA 2
9 M_1 mutA 2
10 M_4 mutA 2
11 M_3 mutA 2
12 MA_2 mutB 2
13 MA_6 mutB 2
14 MA_5 mutB 2
15 MA_4 mutB 2
16 MA_3 mutB 2
17 MA_1 mutB 2
when i create matrix:
dds <- DESeqDataSetFromMatrix(countData = data, colData = coldata, design = ~batch + condition) | {
"domain": "bioinformatics.stackexchange",
"id": 418,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rna-seq, deseq2",
"url": null
} |
ros, ros2, joint-state-publisher, gazebo-plugin
You can have a look at this tutorial for more details.
Comment by aash on 2021-12-10:
what i get is
std_msgs/Header header
string[] name
float64[] position
float64[] velocity
float64[] effort.
May i know with only these information without the .py file, which of these info that i can use for subscribing to this topic?
Comment by Ranjit Kathiriya on 2021-12-10:
It completely depends on you. Just have a look by echoing the topic in the terminal to see which data you require, and based on that you can obtain the data from the subscribed topic.
I would suggest you have a look at the following tutorials for an in-depth understanding of ros2 publisher and subscriber and custom message and services. This will help you a lot in ros development.
https://docs.ros.org/en/foxy/index.html
Comment by Ranjit Kathiriya on 2021-12-10:
If you think, This is a solution for your answer, tick this answer, so it can help others.
Comment by aash on 2021-12-10: | {
"domain": "robotics.stackexchange",
"id": 37235,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros2, joint-state-publisher, gazebo-plugin",
"url": null
} |
java, android, random
The x and y value constraints are a bad thing in the first place. Smaller devices have less entropy to generate, larger devices have repetition when the x and y coordinates "repeat", since you truncate any value larger than 8 bits.
Luckily screen-sizes are usually a nice multiple of $256\times256$ "tiles". The actually useful entropy you can currently extract accordingly is in one such tile.
Unfortunately this also means that "randomly" tapping similar areas in these tiles (as probably many non-technical people would do) will produce somewhat predictable bytes.
This directly brings us to the next point. The "Blue Seven Phenomenon". There's a metric ton of academic articles about the subject, but the bottom line is: Humans are bad at pretending to be random. | {
"domain": "codereview.stackexchange",
"id": 24097,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, android, random",
"url": null
} |
reference-request, concurrency, pi-calculus
Forbidding reduction under input prefixes corresponds to the fact that read/receive operations are blocking, which is quite natural in programming languages. When we are waiting for some information to be received and we decide to proceed without, we might make "bad" choices. In the example, we proceed as if $w$ could not become equal to $y$. If you think about passing values other than names (for example, integers), the phenomenon should become even more apparent intuitively.
In a deterministic context (like the $\lambda$-calculus), proceeding without knowing the value of an input parameter (like reducing $M$ in $\lambda x.\!M$) is fine, but in a concurrent context it is not a good idea. | {
"domain": "cstheory.stackexchange",
"id": 5431,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "reference-request, concurrency, pi-calculus",
"url": null
} |
electromagnetism, hamiltonian-formalism, dipole
How does one interpret the third term at the classical level? It's a nonzero contribution to energy even when the charged particle is at rest. Usually, there should be no contribution to energy if a charged particle is at rest in a magnetic field. The thing is, the momentum $\vec{p}$ that occurs in the Hamiltonian is the canonical momentum, not the kinetic momentum. For a classical particle, the thing that we can measure is $\vec{x}(t)$ and its derivatives, including $\dot{\vec{x}}(t) = \vec{v}(t)$, the velocity. The problem is that the canonical momentum is related to the velocity by $\vec{p} = m\vec{v} + q \vec{A}$, which means that its value isn't gauge-invariant because $\vec{A}$ changes under a gauge transformation, but $\vec{v}$ does not. You can understand the Hamiltonian in terms of gauge-invariant quantities as $H = m|\vec{v}|^2 /2$. You can't use this form of the Hamiltonian to get the equations of motion, though, because that formalism depends on the Hamiltonian being a | {
"domain": "physics.stackexchange",
"id": 98549,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electromagnetism, hamiltonian-formalism, dipole",
"url": null
} |
ruby
# log the external request
def get_artist(*args)
log_around 'Discogs::Wrapper.get_artist', *args do
_orig_get_artist *args
end
end
end
This solution is much closer to what I want compared with what I had before, but ideally what I'm looking for is a function in LogAround that could be used in this way
# this class is already defined by Discogs
# and this is a customization
class Wrapper
include LogAround
log_around :get_artist
end
and provides the same (or better params list) output, which is:
Discogs::Wrapper.get_artist(["pink floyd"]) - start
Discogs::Wrapper.get_artist(["pink floyd"]) - end (1.919805308s) If you are using Rails (and it seems so, because you used Rails.logger) you can use the alias_method_chain function.
# Functions to track when a block begins and ends
module LogAround
extend ActiveSupport::Concern | {
"domain": "codereview.stackexchange",
"id": 1094,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ruby",
"url": null
} |
c++, template, c++17, variadic
std::tuple<T...> get_data() const
{
return m_data;
}
private:
std::string m_id{};
std::tuple<T...> m_data{};
template<typename ...T>
friend std::ostream& operator<<(std::ostream& os,
const Variadic_datablock<T...>& obj);
template<typename ...T>
friend std::istream& operator>>(std::istream& is,
Variadic_datablock<T...>& obj);
};
template<class Tuple, std::size_t n>
struct Printer {
static std::ostream& print(
std::ostream& os, const Tuple& t, const std::string& id)
{
Printer<Tuple, n - 1>::print(os, t, id);
auto type_name =
extract_type_name(typeid(std::get<n - 1>(t)).name());
os << " " << id << "." << type_name << " := "
<< std::get<n - 1>(t) << "; " << '\n';
return os;
}
}; | {
"domain": "codereview.stackexchange",
"id": 33467,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, template, c++17, variadic",
"url": null
} |
quantum-mechanics, electromagnetism, quantum-field-theory
When an optics experiment is done using a laser beam, it is perfectly meaningful to talk about photons being in the beam. We can also speak of a photon being emitted by an atom, in which case it is obviously localized near the atom when the emission occurs. Furthermore, in the usual analysis of the double slit experiment one has, at least implicitly, a wavefunction for the photon, which successfully recovers the high school result.
When one talks about scattering experiments, such as in photon-photon scattering, one has to talk about localized wavepackets in order to describe a real beam. Furthermore, unlike the massive case, where the Compton wavelength provides a characteristic length, there is no characteristic length for photons, suggesting that beams can be made arbitrarily narrow in principle: the complaint that you would start causing pair production below the Compton wavelength doesn't apply. | {
"domain": "physics.stackexchange",
"id": 73359,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, electromagnetism, quantum-field-theory",
"url": null
} |
truth value in this realm. On this account, there is a fact of the matter about whether CH is true or false, or whether your assertion $\phi$ is true or false. On this account, the set you have described is either in fact identical to $\{1\}$ or is identical to $\{1,2\}$. The fact that $\phi$ is independent of the ZFC axioms merely illustrates our lack of knowledge about which one of these sets you have actually described. On this account, the set is definitely one of them or the other, depending on whether it is the case that $\phi$ is true or not, and exactly one of these is the case. The independence of $\phi$ is an irrelevant distraction from whether $\phi$ is true. On this view, the pervasive independence phenomenon is seen as a side-show about our epistemological weakness---the weakness of our theories---rather than indicating any issue about the singular nature of mathematical truth. </p> <p><strong>Formalism.</strong> On this view, there is no realm of real existence for | {
"domain": "mathoverflow.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9621075739136381,
"lm_q1q2_score": 0.8324386563542616,
"lm_q2_score": 0.8652240964782011,
"openwebmath_perplexity": 380.9785412055225,
"openwebmath_score": 0.8968687653541565,
"tags": null,
"url": "http://mathoverflow.net/feeds/user/1946"
} |
react.js
to
<Grid columns={16} className="project-upload">
<Grid.Row>
Destructuring improvements This:
const { riskModel } = this.state;
const { loading } = this.state;
const initialDataLoaded = this.props;
can be
const { riskModel, loading } = this.state;
const { initialDataLoaded, handleCancel } = this.props;
// destructure everything you ever use from props in the line above
// so you don't have to go through props later
initialDataLoaded? This line is a bit suspicious:
loading || (typeof initialDataLoaded !== 'undefined' && !initialDataLoaded)
A typeof check shouldn't be necessary - unless you're dealing with a very unpredictable codebase where an undefined identifier may well actually not refer to undefined, comparing against undefined directly would be a bit nicer:
loading || (initialDataLoaded !== undefined && !initialDataLoaded) | {
"domain": "codereview.stackexchange",
"id": 40184,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "react.js",
"url": null
} |
This is true for arbitrary integers $p$ and $q$, not just distinct odd primes.
It suffices to show that every prime power dividing both $\mathrm{lcm}(p-1,q-1)$ and $pq-1$ must also divide $p-1$ and $q-1$.
Suppose $\ell^n$ is a prime power dividing $\mathrm{lcm}(p-1,q-1)$ and $pq-1$. The first divisibility implies $p\equiv 1$ or $q\equiv 1\mod\ell^n$. The second divisibility means $p\equiv q^{-1}\mod\ell^n$, so whichever of $p$ or $q$ is congruent to $1$, the other is also. This shows $\ell^n|p-1$ and $\ell^n|q-1$.
• If our answers aren't identical, I think you should post your solution – Julian Rosen Oct 8 '15 at 2:53
• I don't understand why if $\ell^n$ is a prime power dividing $\mathrm{lcm}(p-1,q-1)$ then $\ell^n$ divides $p-1$ and $q-1$ (I would if $\ell^n$ were a prime)? – Gero Oct 10 '15 at 21:29 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9777138102398167,
"lm_q1q2_score": 0.8082875318895634,
"lm_q2_score": 0.8267117876664789,
"openwebmath_perplexity": 130.40960573191592,
"openwebmath_score": 0.9158579111099243,
"tags": null,
"url": "https://math.stackexchange.com/questions/1469644/if-a-divisor-of-pq-1-divides-the-lcm-of-p-1-and-q-1-then-it-also-divides"
} |
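The answer's claim — stated for arbitrary integers, here checked for $p, q \ge 2$ — is easy to verify by brute force (a small sketch, not from the source):

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def claim_holds(p, q):
    # Any common divisor of lcm(p-1, q-1) and pq-1 divides both p-1 and q-1,
    # so in particular their gcd d must divide both.
    d = gcd(lcm(p - 1, q - 1), p * q - 1)
    return (p - 1) % d == 0 and (q - 1) % d == 0

ok = all(claim_holds(p, q) for p in range(2, 60) for q in range(2, 60))
```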
# Modifying $\frac{\prod_\alpha A_\alpha}{\prod_\alpha B_\alpha}\simeq \prod_\alpha\frac{A_\alpha}{B_\alpha}$ for direct sums
Let $$\{A_\alpha\}$$ be a family of $$R$$-modules, each $$B_\alpha\subset A_\alpha$$ a submodule and $$\pi_\alpha:A_\alpha\to A_\alpha/B_\alpha$$ be the canonical projection map. Then the map
$$\prod_\alpha\pi_\alpha:\prod_\alpha A_\alpha\to \prod_\alpha\frac{A_\alpha}{B_\alpha}$$
is surjective, and has kernel $$\prod_\alpha B_\alpha$$. Therefore, by the first isomorphism theorem we have
$$\frac{\prod_\alpha A_\alpha}{\prod_\alpha B_\alpha}\simeq \prod_\alpha\frac{A_\alpha}{B_\alpha}$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9861513910054508,
"lm_q1q2_score": 0.8636585648337853,
"lm_q2_score": 0.8757869965109765,
"openwebmath_perplexity": 59.39496517026847,
"openwebmath_score": 0.9745703339576721,
"tags": null,
"url": "https://math.stackexchange.com/questions/3756066/modifying-frac-prod-alpha-a-alpha-prod-alpha-b-alpha-simeq-prod-alph"
} |
java, algorithm, combinatorics, set, bitset
Title: Generate all elements of a power set /*
* Power set is just set of all subsets for given set.
* It includes all subsets (with empty set).
* It's well-known that there are 2^N elements in this set, where N is count of elements in original set.
* To build power set, following thing can be used:
* Create a loop, which iterates all integers from 0 till 2^N-1
* Proceed to binary representation for each integer
* Each binary representation is a set of N bits (for lesser numbers, add leading zeros).
* Each bit corresponds, if the certain set member is included in current subset.
*/
import java.util.NoSuchElementException;
import java.util.BitSet;
import java.util.Iterator;
import java.util.Set;
import java.util.TreeSet;
import java.util.List;
import java.util.ArrayList;
public class PowerSet<E> implements Iterator<Set<E>>, Iterable<Set<E>> {
private final E[] ary;
private final int subsets;
private int i; | {
"domain": "codereview.stackexchange",
"id": 16194,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, algorithm, combinatorics, set, bitset",
"url": null
} |
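The bit-mask idea spelled out in the comment block above translates directly to a few lines of Python (an illustrative sketch, not part of the Java submission):

```python
def power_set(items):
    # Integer i in [0, 2**n) encodes one subset: bit b of i says whether
    # items[b] is included. Iterating over all i enumerates all 2**n subsets.
    n = len(items)
    return [[items[b] for b in range(n) if i & (1 << b)]
            for i in range(2 ** n)]

ps = power_set(['a', 'b', 'c'])  # 8 subsets, from [] up to ['a', 'b', 'c']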
\dfrac{1}{\sqrt{x}}dx,$$ we replace 0 with a and let a approach 0 from the right. Improper Integral with Infinite Discontinuity at Endpoint. PRACTICE PROBLEMS: For problems 1-13, evaluate each improper integral or show that it diverges. Improper Integrals In this section we need to take a look at a couple of different kinds of integrals. 1 decade ago. Compute the value of the following improper integral. Give a clear reason for each. improper integral calculator Related topics: quadratic equation solver graph | rational expression calculator | factor quadratic equations calculator | permutation and combination for grade 6 | free printable quadratic equation factoring worksheets | conjugate algebra | the rational numbers | addition and subtraction of fractional numbers. The improper integral converges if this limit is a finite real number; otherwise, the improper integral diverges. edu This is a supplement to the author’s Introduction to Real Analysis. 2) is firstly analytically solved | {
"domain": "wind4us.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9899864296722662,
"lm_q1q2_score": 0.8116832763331256,
"lm_q2_score": 0.8198933359135361,
"openwebmath_perplexity": 482.0014043585325,
"openwebmath_score": 0.9617266654968262,
"tags": null,
"url": "http://pvod.wind4us.it/improper-integral.html"
} |
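The endpoint-limit recipe in the passage above can be made concrete for $\int_0^1 x^{-1/2}\,dx$ (a sketch using the antiderivative $2\sqrt{x}$; the specific integral and bounds are my choice of example):

```python
import math

def integral_inv_sqrt(a):
    # Integral of 1/sqrt(x) from a to 1, via the antiderivative 2*sqrt(x).
    return 2.0 - 2.0 * math.sqrt(a)

# Replace the troublesome endpoint 0 with a and let a -> 0 from the right:
values = [integral_inv_sqrt(10.0 ** -k) for k in range(1, 12)]
limit = values[-1]  # approaches the finite value 2, so the integral converges
```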
mechanical-engineering, stresses, bolting
So far these problems are straight forward. The total shear on a bolt is equal to the direct shear plus the torsional shear. First you get the direct shear component by distributing the force evenly on each bolt so $\displaystyle\tau_D=\frac{P}{4A}$ where $A$ is the cross sectional area of the bolt parallel to the force. Then you calculate the torsional shear by finding the torque $T$ of the force $P$ about the centroid of the bolts and $\displaystyle\tau_T=\frac{Tc}{J}$ where $c$ is the distance of the bolt from the centroid and $J$ is the polar moment of inertia of the bolts. The direction of the torsional shear is found perpendicular to a line from the centroid to the bolt in the sense of the rotation of the torque. | {
"domain": "engineering.stackexchange",
"id": 708,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "mechanical-engineering, stresses, bolting",
"url": null
} |
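A numeric sketch of the two shear components described in the row above (the geometry and load numbers are made up for illustration; all four bolts are assumed to sit at the same radius $r$ from the centroid):

```python
import math

P = 10_000.0   # N, applied shear load
e = 0.2        # m, eccentricity of the load about the bolt-group centroid
d = 0.012      # m, bolt diameter
r = 0.1        # m, distance of each of the 4 bolts from the centroid

A = math.pi * d ** 2 / 4.0      # bolt cross-sectional area
tau_direct = P / (4.0 * A)      # direct shear: load split evenly over 4 bolts
T = P * e                       # torque of the load about the centroid
J = 4.0 * A * r ** 2            # polar moment of the bolt group, sum of A*c^2
tau_torsional = T * r / J       # tau_T = T*c/J at c = r
```

With every bolt at the same radius this reduces to $\tau_T = Pe/(4Ar)$, so here $\tau_T = (e/r)\,\tau_D = 2\tau_D$; the two components must still be added as vectors at each bolt.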
Yeah… I find Google Docs API clumsy, verbose and generally frustrating; unlike Google Sheets API. | {
"domain": "calculus7.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9777138177076644,
"lm_q1q2_score": 0.8230645635386604,
"lm_q2_score": 0.8418256432832333,
"openwebmath_perplexity": 743.7698651267632,
"openwebmath_score": 0.6562976837158203,
"tags": null,
"url": "https://calculus7.org/page/2/"
} |
eigenvector of the same eigenvalue. If a matrix has some special property (e.g. And then the transpose, so the eigenvectors are now rows in $Q^T$. From (9), the characteristic polynomial of $B'AB$ can be written as $\det(B'AB-\lambda I_n)=(\lambda_i-\lambda)\det(Y'AY-\lambda I_{n-1})$. Proof. The eigenvalues of a matrix are on its main diagonal because the main diagonal remains the same when the matrix is transposed, and a matrix and its transpose have the same eigenvalues. Formally, $A=A^T$. Because equal matrices have equal dimensions, only square matrices can be symmetric. $\det(A^T-\lambda I)=\det(A^T-\lambda I^T)=\det((A-\lambda I)^T)=\det(A-\lambda I)$, so any solution of $\det(A-\lambda I)=0$ is a solution of $\det(A^T-\lambda I)=0$ and vice versa. Those are the lambdas. A basis is a set of independent vectors that span a vector space. Two Matrices with the Same Characteristic Polynomial. 2020. december. The eigenvector $(1,1)$ is unchanged by $R$. The second eigenvector is $(1,-1)$—its signs are reversed by $R$. Does Transpose preserve | {
"domain": "citationspros.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9643214450208031,
"lm_q1q2_score": 0.8157604372951458,
"lm_q2_score": 0.8459424411924673,
"openwebmath_perplexity": 463.276269096186,
"openwebmath_score": 0.785873532295227,
"tags": null,
"url": "https://citationspros.com/jnyfp/does-a-matrix-and-its-transpose-have-the-same-eigenvectors-9fac64"
} |
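The central claim in the passage above — that $A$ and $A^T$ share a characteristic polynomial, hence eigenvalues — is easy to check for a $2\times 2$ example (an illustrative sketch in pure Python; the matrix is arbitrary):

```python
def char_poly_2x2(m):
    # det(m - x I) = x^2 - trace(m) x + det(m); return the coefficients.
    (a, b), (c, d) = m
    return (1.0, -(a + d), a * d - b * c)

def transpose(m):
    (a, b), (c, d) = m
    return ((a, c), (b, d))

A = ((2.0, 7.0), (1.0, -3.0))
# Trace and determinant are unchanged by transposition, so the
# characteristic polynomials (and therefore the eigenvalues) coincide.
p_A = char_poly_2x2(A)
p_At = char_poly_2x2(transpose(A))
```

The eigenvectors, by contrast, need not coincide: swapping $b$ and $c$ changes the eigenvector equations.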
cell-biology, cancer
Title: Can cancer cells transmit from one organism to another? I know cancer cells are very resilient, so would it be possible for them to survive outside of the original organism for long enough to be absorbed by another? Furthermore, would that type of "infection" be possible? Yes. Devil facial tumour disease is an example:
https://en.wikipedia.org/wiki/Devil_facial_tumour_disease
Look here for other examples of transmissible cancers.
I have not heard of any human transmissible cancer. | {
"domain": "biology.stackexchange",
"id": 8283,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cell-biology, cancer",
"url": null
} |
c#, sql, asp.net
if (ID == "" && lTABLE == "" && lDATE == "" && lRecord == "") //ALL 4 blank
{
string query1 = "SELECT TOP 100 [pk],[databaseName],[tableName],[tablefk],[fieldname],[old],[new],[userfk],[action],[entrytime],[username] FROM AUDITACT order by entrytime desc";
SqlDataAdapter da = new SqlDataAdapter(query1, constr2);
DataTable table2 = new DataTable();
da.Fill(table2);
ListView1.DataSource = table2;
}
else if (lRecord != "") // if Record ID has any value entered in it, search by it only, disregard other textboxes
{
string query1 = "SELECT [pk],[databaseName],[tableName],[tablefk],[fieldname],[old],[new],[userfk],[action],[entrytime],[username] FROM AUDITACT where [tablefk] = '" + lRecord + "' order by entrytime desc";
SqlDataAdapter da = new SqlDataAdapter(query1, constr2);
DataTable table2 = new DataTable();
da.Fill(table2);
ListView1.DataSource = table2;
} | {
"domain": "codereview.stackexchange",
"id": 16268,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, sql, asp.net",
"url": null
} |
ros-melodic, rosdep
If so, is there a different tool to automate installing third-party ROS dependencies from information in my package.xml file based on which ROS release the end-user is using?
If you haven't released your own package and users must build it from source in their/a Catkin workspace, rosdep would be the tool to install all dependencies. System dependencies and dependencies which happen to be ROS packages.
(Note: I would not use the name third-party here, it's really confusing, at least to me. The whole of ROS is community contributed, so what exactly would make a package third-party?)
Or a recommended process?
I've written about workflows when building packages from source before. See #q252478 fi.
Of course, if you have multiple repositories to clone, use vcstool or wstool to automate that step for you.
But as I wrote earlier, those tools do not do any dependency resolution or fetching. They will only fetch whatever you specify in a .rosinstall or .repos file. | {
"domain": "robotics.stackexchange",
"id": 35734,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros-melodic, rosdep",
"url": null
} |
optics, polarization
Can you build a full-pass circular polarizer? Sure.
The first thing to keep in mind is that like linear polarizers, quarter plates have a definite orientation to them in how they operate on light. Along one side-to-side axis of a quarter plate, light moves faster and in one linear polarization, while along the other perpendicular axis light moves slower and with the opposite polarization. They call it a "quarter plate" because its thickness is just right for delaying the slower-moving light by exactly one fourth of a wavelength of light. The shift remains (mostly) proportional to wavelength for different wavelengths of light, so the neat thing about a quarter plate is that it works even with white light of many different frequencies.
A circular polarizer consists of two parts. The initial linear polarizer, say $V$ for vertical, that absorbs 50% of non-polarized light and transmits the other 50% as polarized light. | {
"domain": "physics.stackexchange",
"id": 4163,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "optics, polarization",
"url": null
} |
c++, array, c++14
Title: Nullable array wrapper class with small size optimization I have an array wrapper class that I'd like to get reviewed.
There are two differences with other common questions on this site. First, my class needs to be "nullable", where a "null array" has a different meaning than an array of length zero.
Second, I'm trying to test whether small size optimization (like gcc's std::string implementation) can improve performance of my application.
Below is my implementation. I also have another nullable array wrapper class without SSO, but it is pretty similar.
I want to know if my implementation is "correct" assuming up to C++14, but not C++17.
And is using std::numeric_limits<std::size_t>::max() as the size value a reasonable approach as a flag for "null array"? I considered using nullptr as the null flag, but allocate(0) may also return nullptr (e.g. on MSVC).
#ifndef SMALL_NULLABLE_ARRAY_H
#define SMALL_NULLABLE_ARRAY_H | {
"domain": "codereview.stackexchange",
"id": 42037,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, array, c++14",
"url": null
} |
quantum-mechanics, homework-and-exercises
(* For clarity, this list is not exhaustive; as a simple third possibility, they could be superpositions with different weights or just different basis states entirely. Also, the observation "they differ by a phase" for superposition states need not be basis-independent.) | {
"domain": "physics.stackexchange",
"id": 51477,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, homework-and-exercises",
"url": null
} |
a sentence. Thus the tension in the string is maximum. Measuring from the bottom of the split-cork to the centre of the bob. The following equations are true for all SHM systems but let us use the simple pendulum when thinking about them. These principles predict how a pendulum behaves based upon its. The weight at the end of the string is called the “bob” of the. At which position is the kinetic energy of the pendulum bob least? A. The shell has a cylindrical cavity with a spherically curved bottom surface on which is a freely moveable electrically conductive ball. system and below the pendulum bobs damp the swinging motion of the pendulums so that the static deflection due to the gravi-tational pull of the source masses can be measured. 3 m/s and the tension in the rope is T = 22. If taken to another planet where the acceleration due to gravity is twice that on Earth, which line, A to D, in the table gives the correct new time periods? € € simple pendulum mass-spring A B T C T D | {
"domain": "detectorband.it",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9724147153749275,
"lm_q1q2_score": 0.8060722994664742,
"lm_q2_score": 0.8289388125473628,
"openwebmath_perplexity": 391.47228602467754,
"openwebmath_score": 0.7101962566375732,
"tags": null,
"url": "http://detectorband.it/acceleration-of-a-pendulum-at-the-bottom.html"
} |
evolution, book-recommendation, resource-recommendation
Title: Resource recommendation: a good book explaining and evaluating the evidence for evolution? I take a great interest in the intersection between science and religion and evolution is therefore something I often read about. Many of the critics of evolution like to poke "scientific" holes in evolution, perhaps the most common one being that there is simply "not enough" evidence. As someone trained in physics but not biology, I would like a book that is accessible and which summarises the evidence we have accumulated for the evolution of life on Earth. Ideally, I would like a book that remains strictly scientific, without any particular atheistic or religious agenda, and which evaluates each line of evidence rather than simply describing it.
Does anyone have a book recommendation? I like "The Darwinian Revolution" by Michael Ruse. | {
"domain": "biology.stackexchange",
"id": 11488,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "evolution, book-recommendation, resource-recommendation",
"url": null
} |
c++, design-patterns, observer-pattern
Then in main() you would have to write:
nationalgeographic.subscribe(std::make_unique<PaidSubscriber>("Ivy Parks"));
In a way, that just moves the problem to the caller. A possible way to make the caller simpler as well is to still have subscribe() create the Subscriber objects, but then to make it a template to avoid code repetition:
template<class SubscriberType>
void subscribe(const std::string& name) {
    sublist.push_back(std::make_unique<SubscriberType>(name));
}
Then the caller looks like:
nationalgeographic.subscribe<PaidSubscriber>("Ivy Parks");
Merge changeissue() and notify()
Whenever a new issue is released by the magazine, you always want to notify the subscribers. So it doesn't make sense to have two separate functions that you need to call. I would create one function instead:
void release_issue(const std::string& issue, double price) {
    for (auto& sub : sublist)
        sub->update(issue, price);
} | {
"domain": "codereview.stackexchange",
"id": 43277,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, design-patterns, observer-pattern",
"url": null
} |
ros, ros-hydro, ubuntu-precise, ubuntu
The manual tells you which commands to send to the controller to change this setting (^00 00 and then reset with %rrrrrr to apply). I couldn't figure out how to do that, so I used the RoboRun utility. Get your hands on a Windows machine, download the utility, figure out which COM port the controller is on (Device Manager > Ports), click "Change COM/LAN Port" (it has to be a COM port between 1 and 8 because that was a great design decision; try plugging into a different USB port to hopefully get assigned a different COM), click "Load from Controller", change Controller Input to "RC Radio", click "Save to Controller", and reset the controller (Console tab, click %rrrrrr). | {
"domain": "robotics.stackexchange",
"id": 18626,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, ros-hydro, ubuntu-precise, ubuntu",
"url": null
} |
python, mathematics, statistics, numerical-methods, scipy
Questions:
The code produces correct values and is already pretty quick, but can its speed be improved? At the moment I'd like to keep the method of calculation the same because it's the method used in my calculus course, but are there any possible improvements to this method?
Can the clarity of the code be improved? Are the variable names clear?
Does the code follow standard python conventions? I'm new to Python so I'm not familiar with these.
Minor Question/Bug: Printing the accumulated values will sometimes result in the last value being 1.00001 or 0.999 rather than 1.0. I presume this is due to precision errors, but it seems odd when dealing with adding numbers with only 3 decimal places. Is there an easy fix for this?
Stick closely to the sources
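Regarding the minor bug above: 1.00001-style artifacts are ordinary binary floating-point accumulation — 0.143 and friends have no exact binary representation. A quick sketch of two common mitigations (the values are illustrative):

```python
from fractions import Fraction

# Seven 3-decimal values that sum to exactly 1 in decimal arithmetic,
# but not necessarily in binary floating point.
probs = [0.143] * 6 + [0.142]
total = sum(probs)                  # a float; may be off by a few ulps

# Option 1: round the result back to the known input precision for display.
display = round(total, 3)

# Option 2: do exact arithmetic on the decimal inputs via Fraction.
exact = sum(Fraction(str(p)) for p in probs)
```

Option 1 is enough when only the printed value matters; option 2 keeps the whole accumulation exact.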
It's helpful when coding math in cases like this to base your approach on established methods and language.
It might seem a bit extreme, but this can include: | {
"domain": "codereview.stackexchange",
"id": 23374,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, mathematics, statistics, numerical-methods, scipy",
"url": null
} |
rosnode
#(Single time publisher)
def publisher_For_Pointing(self):
    pub = rospy.Publisher('Pointing', Pointing_Message, queue_size=10)
    rate = rospy.Rate(10)  # 10hz
    msg = Pointing_Message()
    rate.sleep()  # give subscribers a moment to connect before publishing
    if not rospy.is_shutdown():
        rospy.loginfo(msg)
        pub.publish(msg)
    rospy.spin()  # keep the node alive; spin() only returns on shutdown

#(Subscriber for some external event)
def subscriber_for_event_ObserveStarted(self):
    topic = rospy.get_param('~topic', 'ObserveStarted')
    print "Subscription Started"
    rospy.Subscriber(topic, ObserveStarted_Message, self.callback_for_event_ObserveStarted)

# Create a callback function for the subscriber.
def callback_for_event_ObserveStarted(self, data):
    # Simply print out values in our custom message.
    rospy.loginfo(rospy.get_name() + " Event %s", data) | {
"domain": "robotics.stackexchange",
"id": 28166,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rosnode",
"url": null
} |
thermodynamics, energy-conservation, ideal-gas
On further thinking, maybe $dU$ and $dW$ cancel for conservative fields, but I’ve not seen such a thing mentioned anywhere (someone on this site was giving an example of protostars but I didn’t exactly get the point). It’s also not obvious at all to me whether the effect of gravity will be considered if we simply take the pressure variation (the one used to derive the barometric equation, but here we’ll consider the effect of changing temperature too), because the gas is not in a container on the ground whose external pressure is being changed but is constantly moving upwards too.
"domain": "physics.stackexchange",
"id": 97901,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, energy-conservation, ideal-gas",
"url": null
} |
turing-machines, decision-problem
Title: Non-deterministic Turing machine and palindromes I have to design a Non-deterministic Turing machine that accepts only non-palindromes in $NTime(n\log n)$.
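For orientation, the language itself is easy to model in a few lines; this is a check of the predicate, not a simulation of a machine, and the comment sketches the usual single-tape nondeterministic idea behind the $NTime(n\log n)$ bound:

```python
def is_non_palindrome(w: str) -> bool:
    # A word is a non-palindrome iff w[i] != w[-1 - i] for some i.  An NTM
    # can guess such an i, store it in binary (O(log n) bits), and shuttle
    # the counter across the tape to compare the two cells: O(n log n) time.
    return any(w[i] != w[-1 - i] for i in range(len(w) // 2))

assert is_non_palindrome("abca")
assert not is_non_palindrome("abba")
```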
I think this would be easy on a 2-tape DTM. Simply copy the string onto the second tape – $O(n)$ time – and then check both tapes (one from beginning and the second from the end) – $O(n)$ time again. | {
"domain": "cs.stackexchange",
"id": 3376,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "turing-machines, decision-problem",
"url": null
} |
To me this part really makes me feel the beauty of the Yoneda lemma.
The morphisms determining the cocone are encoded in the natural transformation we obtain.
Let $$A=\colim J$$.
We have a natural isomorphism of functors in $$B$$, $$\Hom_D(L(A),B)\simeq \Hom_{[I,D]}(L\circ J,\Delta_B).$$ If we let $$B=L(A)$$, we have a natural isomorphism $$\Hom_D(L(A),L(A))\simeq \Hom_{[I,D]}(L\circ J, \Delta_{L(A)}).$$ The left hand set has a distinguished element, $$1_{L(A)}$$. This corresponds to a natural transformation $$\lambda : L\circ J \to \Delta_{L(A)}$$ on the right hand side.
In other words, for every object $$X\in J$$, we get a map $$\lambda_X : L(J(X))\to L(A)$$, and these maps commute with the maps in the diagram $$L\circ J$$, since $$\lambda$$ is a natural transformation. Thus our natural isomorphism encodes the cocone for us already. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9852713835553861,
"lm_q1q2_score": 0.8123187471393811,
"lm_q2_score": 0.8244619306896955,
"openwebmath_perplexity": 133.70918232818872,
"openwebmath_score": 0.9810760617256165,
"tags": null,
"url": "https://math.stackexchange.com/questions/3257918/functor-l-mathcala-to-mathcalb-is-left-adjoint-to-r-functor-then-l-p/3258153"
} |
qiskit, transpile
and here's the error message:
raise TranspilerError(
qiskit.transpiler.exceptions.TranspilerError: "Unable to translate the operations in the circuit: ['x', 'if_else', 'measure'] to the backend's (or manually specified) target basis: ['cu2', 'mcp', 'ry', 'rz', 'measure', 'rzz', 'ryy', 'tdg', 'mcz', 'mcswap', 'cz', 'rxx', 'cp', 'kraus', 'cx', 't', 'z', 'h', 'rx', 'u3', 'sdg', 'mcry', 'multiplexer', 'delay', 'u1', 'roerror', 'u', 'mcsx', 'mcy', 'diagonal', 'x', 'mcrz', 'ccx', 'mcrx', 'p', 'cswap', 'initialize', 'r', 'sx', 'mcr', 'mcu1', 'y', 'mcu3', 'rzx', 'csx', 's', 'id', 'mcx', 'barrier', 'swap', 'u2', 'cy', 'cu3', 'unitary', 'snapshot', 'mcu2', 'cu1']. This likely means the target basis is not universal or there are additional equivalence rules needed in the EquivalenceLibrary being used. For more details on this error see: https://qiskit.org/documentation/stubs/qiskit.transpiler.passes.BasisTranslator.html#translation_errors" | {
"domain": "quantumcomputing.stackexchange",
"id": 4935,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "qiskit, transpile",
"url": null
} |
gazebo, rviz, ubuntu, ros-fuerte, ubuntu-precise
Originally posted by Eman on ROS Answers with karma: 164 on 2014-05-06
Post score: 2
Hey,
I don't have your robot's urdf model file so I couldn't test the solution, but I can guide you with a basic hack (hope it'll help). Anyway,
Here's the trick,
In your urdf model file, remove the hokuyo stuff (here we should put the kinect stuff instead, and that can be taken from the turtlebot urdf). Before that, note down the transformations of the hokuyo, i.e., the xyz rpy of the sensor position.
Locate the turtlebot_description folder in your system (easiest way is: $ roscd turtlebot_description). In the folder 'urdf/sensor' you will find the kinect.urdf.xacro file. Copy it to your package folder.
Also locate these files: 1) turtlebot_gazebo.urdf.xacro 2) turtlebot_properties.urdf.xacro (mostly in the parent directory) and copy them to the same package folder (as we don't want to mess up the original files).
Changes:
In the "kinect" file, first change the first two include lines:
<xacro:include filename="$(find your_package)/turtlebot_gazebo.urdf.xacro"/> | {
"domain": "robotics.stackexchange",
"id": 17857,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "gazebo, rviz, ubuntu, ros-fuerte, ubuntu-precise",
"url": null
} |
c, generics, collections, hash-map, set
\
return false; \
} \
\
FMOD V PFX##_max(SNAME *_set_) \
{ \
if (PFX##_empty(_set_)) \
return 0; \
\
V result, max; \ | {
"domain": "codereview.stackexchange",
"id": 34165,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, generics, collections, hash-map, set",
"url": null
} |
technique, the prey–predator algorithm, is employed with the objective to find the optimal values for the heat sink performance parameters, i. pdf L43-PPAddingSINullclines-handout. Predator prey offers this graphic user interface to demonstrate what we've been talking about the predator prey equations. The predator (gold) seems to smooth over the variation in the prey (blue) but take a look around t=85: there is a random bump in prey which results in a little shoulder in the decline of the predators. L42-PPMatlab-handout. Nonreal eigenvalues. The main objective was to investigate the spatio-temporal pattern of diffusive prey-predator model and the emergence of irregular chaotic pattern as a result of prey-predator interaction. ts is the vector of time values (Matlab chooses h automatically at each step) ys is an array containing the values of y: row j contains the values of y1 (rabbits) and y2 (foxes) at the time ts(j). (3)yxy to illustrate the Maple, Mathematica, and MATLAB techniques | {
"domain": "cigibuilding.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9828232884721166,
"lm_q1q2_score": 0.8373190581479064,
"lm_q2_score": 0.8519528057272543,
"openwebmath_perplexity": 1839.2252276001213,
"openwebmath_score": 0.5434060096740723,
"tags": null,
"url": "http://goyy.cigibuilding.it/predator-prey-model-matlab.html"
} |
ros, slam, navigation
Title: RGBDSLAM -- points of interest
Hi,
I need to get all of the points of interest detected with rgbdslam including depth.
So what are the topics to use with RGBDSLAM to get all of the points of interest (SIFT) detected during the creation of the 3D map ?
I think all of these topics will help me:
/rgbdslam/online_clouds /rgbdslam/pose_graph_markers
but I'm not sure what is published in these topics. Are they SIFT points (and where is the origin of x, y and z in this case)?
Thanks for reply | {
"domain": "robotics.stackexchange",
"id": 23394,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, slam, navigation",
"url": null
} |
formatting, go, floating-point, i18n
The number 0
A short number, with less than 3 digits before the decimal point
A number with only a fractional part specified
A negative number
An "overspecified" format (that is, more decimal places specified than decimal places)
A number with no fractional part (integers)
This is just a quick list off the top of my head. Let's turn them into test cases:
package numformat
import (
    "testing"
)

func TestNegPrecision(test *testing.T) {
    s := NumberFormat(123456.67895414134, -1)
    if s != "123,456" {
        test.Fatalf("Number format failed on negative precision test\n")
    }
}

func TestShort(test *testing.T) {
    s := NumberFormat(34.33384, 1)
    expected := "34.3"
    if s != expected {
        test.Fatalf("Number format failed short test: Expected: %s, "+
            "Actual: %s\n", expected, s)
    }
} | {
"domain": "codereview.stackexchange",
"id": 7560,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "formatting, go, floating-point, i18n",
"url": null
} |
c++, algorithm, image, matrix, c++20
FloatingType half_width = static_cast<FloatingType>(input.getWidth()) / 2.0;
FloatingType half_height = static_cast<FloatingType>(input.getHeight()) / 2.0;
FloatingType new_width = 2 *
    std::hypot(half_width, half_height) *
    std::abs(std::sin(std::atan2(half_width, half_height) + radians));
FloatingType new_height = 2 *
    std::hypot(half_width, half_height) *
    std::abs(std::sin(std::atan2(half_height, half_width) + radians));
Image<ElementT> output(input.getWidth(), input.getHeight());
for (std::size_t y = 0; y < input.getHeight(); ++y)
{
    for (std::size_t x = 0; x < input.getWidth(); ++x)
    {
        FloatingType distance_x = x - half_width;
        FloatingType distance_y = y - half_height;
        FloatingType distance = std::hypot(distance_x, distance_y);
        FloatingType angle = std::atan2(distance_y, distance_x) + radians; | {
"domain": "codereview.stackexchange",
"id": 45545,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, algorithm, image, matrix, c++20",
"url": null
} |
ordinary differential equations (ODE) step-by-step This website uses cookies to ensure you get the best experience. If we were to disturb the ball by pushing it a little bit up the hill, the ball will roll back to its original position in between the two hills. All solutions that do not start at (0,0) will travel away from this unstable saddle point. Note that the graphs from Peter Woolf's lecture from Fall'08 titled Dynamic Systems Analysis II: Evaluation Stability, Eigenvalues were used in this table. Then, y=1 and the eigenvector associated with the eigenvalue λ1 is. We can use Mathematica to find the eigenvalues using the following code: All Rights Reserved. 4 & 8 \\ The top of the hill is considered an unstable fixed point. at (Bookshelves/Industrial_and_Systems_Engineering/Book:_Chemical_Process_Dynamics_and_Controls_(Woolf)/10:_Dynamical_Systems_Analysis/10.04:_Using_eigenvalues_and_eigenvectors_to_find_stability_and_solve_ODEs), /content/body/div[9]/div/p[4]/span/span, line 1, | {
"domain": "icemed.is",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9840936050226358,
"lm_q1q2_score": 0.8135617981448421,
"lm_q2_score": 0.8267118026095992,
"openwebmath_perplexity": 362.18596150976043,
"openwebmath_score": 0.7310795187950134,
"tags": null,
"url": "http://icemed.is/iyxfw/viewtopic.php?page=solving-differential-equations-using-eigenvalues-and-eigenvectors-calculator-db1453"
} |
c#, meta-programming, rubberduck, antlr
return " " + result; // todo: smarter indentation
}
private static readonly IEnumerable<string> ValueTypes = new[]
{
Tokens.Boolean,
Tokens.Byte,
Tokens.Currency,
Tokens.Date,
Tokens.Decimal,
Tokens.Double,
Tokens.Integer,
Tokens.Long,
Tokens.LongLong,
Tokens.Single,
Tokens.String
};
public static bool IsValueType(string typeName)
{
return ValueTypes.Contains(typeName);
}
[ComVisible(false)]
private string GetExtractedMethod()
{
const string newLine = "\r\n";
var access = _view.Accessibility.ToString();
var keyword = Tokens.Sub;
var returnType = string.Empty;
var isFunction = _view.ReturnValue != null && _view.ReturnValue.Name != "(none)";
if (isFunction)
{
keyword = Tokens.Function;
returnType = Tokens.As + ' ' + _view.ReturnValue.TypeName;
} | {
"domain": "codereview.stackexchange",
"id": 12273,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c#, meta-programming, rubberduck, antlr",
"url": null
} |
performance, c, library, logging, benchmarking
Instead of 64-bit GUIDs, could you maybe use sequentially assigned IDs, and probe a binary tree instead of a hash map? Consider instrumentation points P1 & P2 that are near one another in the code, perhaps in the same loop. My goal for speed is to get the P1 lookup to use the same cache line as the P2 lookup.
I hope you verified that the "/* Might need to reallocate if all of the preallocated paths used */" code uses few cycles as it is seldom run. But I wonder, could you make it disappear completely? Maybe deal with it at compile time, bump up the number of preallocated tags, or explicitly switch from preallocated "phase1 tags" to "phase2 tags" when the target app switches from, say, initialization phase to some heavy looping phase. Maybe view tags as hierarchical, based on return addresses that a program location typically sees in the stack frames above it. Then use the hierarchy to define phases and for switching to tags used during each phase. | {
"domain": "codereview.stackexchange",
"id": 27228,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "performance, c, library, logging, benchmarking",
"url": null
} |
## 10 thoughts on “Incidences: Lower Bounds (part 2)”
1. Frank de Zeeuw |
The constant 0.63 for Elekes’s construction is mentioned in Roel Apfelbaum’s thesis, section 1.1.
2. Excellent post! I am in the process of giving a crash course on geometric combinatorics in my data science seminar. This construction is definitely the way to go!
3. Josef Cibulka |
The calculation in the paper of Pach and Tóth contains a numerical error. The correct lower bound given by the Erdős construction is three times larger, that is 1.27. We have more details in the footnote on page 3 of https://arxiv.org/pdf/1703.04767v2
• Right! I read your paper and saw this footnote before. Your new incidence bound for flats is very nice, and I should also have a post in the lower bounds series about it. I’ve been neglecting this series for a while…
I just updated this post accordingly. Thank you for pointing this out. | {
"domain": "wordpress.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9790357573468176,
"lm_q1q2_score": 0.8745818529870886,
"lm_q2_score": 0.8933094081846422,
"openwebmath_perplexity": 394.8438946660683,
"openwebmath_score": 0.8944559693336487,
"tags": null,
"url": "https://adamsheffer.wordpress.com/2014/07/01/incidences-lower-bounds-part-2/"
} |
• Thanks. Two follow-ups: (1) how would I then express xsol algebraically in terms of x and r? (2) I'm particularly interested in whether this function converges (looks likely!), and if so to what - how would I find out? – Richard Burke-Ward Sep 12 '18 at 13:27
• Hi @Ulrich Neumann, I have edited the original question to clarify what I'm looking for. Many thanks for your input; I'm already further ahead than I was! – Richard Burke-Ward Sep 12 '18 at 14:02
• @Richard Burke-Ward I edited my answer hoping it is the answer you 're looking for... – Ulrich Neumann Sep 12 '18 at 15:00
• Amazing. Just one more question: is there a way to give the x value in terms of functions of rational numbers in the same that asym gives Sin[2*Pi*x]/(2*Pi*x)? (I'll mark as answered anyway, just, it would be great if there was a way...) – Richard Burke-Ward Sep 12 '18 at 15:40 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.984810949854636,
"lm_q1q2_score": 0.8051563030432637,
"lm_q2_score": 0.8175744828610095,
"openwebmath_perplexity": 2649.0443245731235,
"openwebmath_score": 0.390487402677536,
"tags": null,
"url": "https://mathematica.stackexchange.com/questions/181757/finding-algebraic-expression-for-local-minimum-of-a-function-with-one-unassigned"
} |
general-relativity, cosmology, space-expansion, density, dark-energy
($2 \pi$ times that and a straight line closes in on itself) with:
$$\rm \Omega_k=1-\Omega_{total}$$
and $\rm H$ as a function of $\rm a$:
$$\rm H(a)=H_0 \sqrt{\Omega_{r0}/a^4+\Omega_{m0}/a^3+\Omega_{k0}/a^2+\Omega_{\Lambda 0}}$$
Since you want it as a function of $\rm t$ the relation to $\rm a$ is:
$${\rm t}=\int_0^{\rm a} \frac{{\rm d}a}{a {\rm H}(a)}$$
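The age integral above can be evaluated numerically. The sketch below uses illustrative, roughly Planck-like flat-universe parameters (my own choice, not from the answer; radiation is neglected) and plain Simpson integration:

```python
import math

# Illustrative, roughly Planck-like parameters.
H0 = 67.7 / 977.8        # km/s/Mpc converted to 1/Gyr
Om, OL = 0.31, 0.69      # flat: Omega_m + Omega_Lambda = 1

def H(a):
    return H0 * math.sqrt(Om / a**3 + OL)

def f(a):                # integrand of t = integral of da / (a H(a))
    return 0.0 if a == 0.0 else 1.0 / (a * H(a))  # f ~ sqrt(a) near a = 0

# Composite Simpson's rule on [0, 1].
N = 20000                # must be even
h = 1.0 / N
age = f(0.0) + f(1.0)
for i in range(1, N):
    age += (4 if i % 2 else 2) * f(i * h)
age *= h / 3             # age in Gyr; about 13.8 for these parameters
```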
For a plot of the density evolution in a flat universe as it is favored by observation see here, and below the same for a hypothetical closed universe (the $\rm x$ axis is $\rm t$ in years): | {
"domain": "physics.stackexchange",
"id": 88398,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, cosmology, space-expansion, density, dark-energy",
"url": null
} |
lagrangian-formalism, field-theory, functional-derivatives
\begin{eqnarray}
\frac{\delta_h S}{\delta \phi} = \lim_{\varepsilon \to 0} \frac{S[\phi + \varepsilon h] - S[\phi]}{\varepsilon}
\end{eqnarray}
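This directional (Gateaux) derivative can be checked numerically. The sketch below uses a toy one-dimensional action $S[q]=\int_0^1 \tfrac12 q'(t)^2\,dt$ of my own choosing (not the field-theory action in the text), with a perturbation vanishing at the endpoints:

```python
import math

N = 2000
h = 1.0 / N
grid = [i * h for i in range(N)]          # left endpoints of subintervals

def S(q):
    """Discretized action S[q] = integral of (1/2) q'(t)^2 over [0, 1]."""
    return sum(((q(t + h) - q(t)) / h) ** 2 / 2 * h for t in grid)

def gateaux(q, v, eps=1e-6):
    """One-sided difference quotient (S[q + eps*v] - S[q]) / eps."""
    return (S(lambda t: q(t) + eps * v(t)) - S(q)) / eps

v = lambda t: math.sin(math.pi * t)       # direction, vanishing at t = 0, 1

d_solution = gateaux(lambda t: t, v)      # q'' = 0 holds: derivative ~ 0
d_other = gateaux(lambda t: t * t, v)     # analytic value: -4/pi
```

At the solution of the Euler-Lagrange equation the derivative vanishes for every admissible direction; away from it, it matches the integration-by-parts value $\int_0^1 2t\,v'(t)\,dt=-4/\pi$.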
Fortunately, in the case of Lagrangian mechanics, the direction doesn't matter. Given our action functional, for any function $h$, this derivative will be zero at any function $\phi$ which satisfies the Euler-Lagrange equation (ignoring any issues related to gauge or boundaries, to keep things simple):
\begin{eqnarray}
\frac{\partial L}{\partial \phi} - \nabla \frac{\partial L}{\partial(\partial \phi)} = 0
\end{eqnarray}
The derivatives here are, roughly speaking, your usual derivatives. To do this in a bit more detail, the Lagrangian here is a function of the form (to simplify)
\begin{eqnarray}
L : \mathbb{R} \times \mathbb{R}^4 &\to& \mathbb{R}\\
(f, v) &\mapsto& L(f, v)
\end{eqnarray} | {
"domain": "physics.stackexchange",
"id": 67235,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "lagrangian-formalism, field-theory, functional-derivatives",
"url": null
} |
beginner, verilog, hdl
f = 3'b001; // 0 | 0
a = 32'h0000_0000;
b = 32'h0000_0000;
#10;
if (out !== 32'h0 | zero !== 1'b1)
    $display("\t%s0 | 0 failed.\tExpected out = 0x%0x, z = %b%s", "\033[0;31m", 32'h0, 1'b1, "\033[0m");
$finish;
end
endmodule | {
"domain": "codereview.stackexchange",
"id": 24642,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "beginner, verilog, hdl",
"url": null
} |
I think you can find $x_2$ by yourself, now.
You can find the inverse map by solving the system $$t = {2x_1\over1+x_2} \\ x_1^2+x_2^2=1$$ as in N74’s answer, or perform a “simple” trigonometric computation. Considering the angle that the dotted line makes with the $y$-axis, we have $x_1 = \sin\left(2\arctan\frac t2\right)$ and $x_2=\cos\left(2\arctan\frac t2\right)$. It’s fairly easy to derive the identities $\sin(\arctan y)={y\over\sqrt{1+y^2}}$ and $\cos(\arctan y)={1\over\sqrt{1+y^2}}$ and using the identity for the sine of a double angle, we get \begin{align} x_1 &= 2 \sin\left(\arctan\frac t2\right)\cos\left(\arctan\frac t2\right) \\ &= 2{t/2 \over \sqrt{1+t^2/4}}{1 \over \sqrt{1+t^2/4}} \\ &= {t \over 1+t^2/4} \\ &= {4t \over 4+t^2}. \end{align} A similar calculation using the double-angle cosine identity yields an expression for $x_2$ as a rational function of $t$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9702399051935108,
"lm_q1q2_score": 0.8147433993448957,
"lm_q2_score": 0.8397339616560072,
"openwebmath_perplexity": 128.36260629569978,
"openwebmath_score": 0.9990919828414917,
"tags": null,
"url": "https://math.stackexchange.com/questions/2493430/map-between-circle-and-line-specific-example"
} |
python
Use an Enum
Use a callback object where there is one function per result option
Along similar lines, do not store 'x' and 'o' strings in the cells. You can use an Enum here, perhaps with values PLAYER1/PLAYER2. Strings are unconstrained, and a matter of style/presentation/UI rather than business logic, which should be more verifiable.
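A minimal sketch of that suggestion (class and member names are illustrative):

```python
from enum import Enum

class Player(Enum):
    PLAYER1 = "x"
    PLAYER2 = "o"

# Cells hold Player members (or None when empty); the 'x'/'o' strings
# are a rendering detail, used only when printing the board.
cell = Player.PLAYER1
assert cell is Player.PLAYER1          # identity checks, not string compares
assert Player("o") is Player.PLAYER2   # parse UI input back into the enum
```

Business logic then compares enum members, which the type checker can verify, while the strings stay at the presentation edge.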
Set membership
if abs(num_x - num_o) not in [0, 1]:
can be
if abs(num_x - num_o) not in {0, 1}:
since order does not matter and set lookup has O(1) time complexity. That said, I think this is equivalent to
if not(-1 <= num_x - num_o <= 1):
assuming that num_x and num_o are integral.
Zip
I question this logic:
for row, column in zip(rows, columns):
    if len(set(row)) == 1:
        num_winners += 1
    if len(set(column)) == 1:
        num_winners += 1
I think what you're looking for is a chain, not a zip:
from itertools import chain

for sequence in chain(rows, columns):
    if len(set(sequence)) == 1:
        num_winners += 1 | {
"domain": "codereview.stackexchange",
"id": 38355,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python",
"url": null
} |
python, game
Method 1
def RiskGame(attacker, defender):
    a_score = 0
    a_loose = 0
    d_score = 0
    d_loose = 0
    for e in range(len(defender)):
        a = max(attacker)
        d = max(defender)
        if a > d:
            a_score += 1
            d_loose += 1
        else:
            d_score += 1
            a_loose += 1
        attacker.remove(a)
        defender.remove(d)
    if a_loose == 0:
        return 'Defender Loses %i armies.' % d_loose
    elif d_loose == 0:
        return 'Attacker loses %i armies.' % a_loose
    else:
        return 'Attacker loses %i army and defender loses %i army.' % (a_loose, d_loose)

RiskGame([1,2,6], [1, 5])
RiskGame([1,4,1], [1, 2])
RiskGame([6,2,6], [6, 6]) | {
"domain": "codereview.stackexchange",
"id": 39332,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, game",
"url": null
} |
111 121 131 211 221 231 311 321 331 112 122 132 212 222 232 312 322 332 113 123 133 213 223 233 313 323 333
Similarly, if $n = 4$ fair dice, the probability is $\frac{50}{81}$.
Is there a more elegant method of obtaining the solution? I have a hunch it involves combining permutations with Binomial Probability but I haven't found the pattern.
Imagine an $s$-sided die, say with all sides equally likely. One of these sides has $1$ written on it, and another side has $2$ written on it. We toss the die $n$ times, and want the probability of at least one $1$ and at least one $2$.
Let $A$ be the event "no $1$" and let $B$ be the event "no $2$." We will compute $\Pr(A\cup B)$. This is $\Pr(A)+\Pr(B)-\Pr(A\cap B)$.
The probability of $A$ is $\left(\frac{s-1}{s}\right)^n$. The probability of $B$ is the same.
The probability of $A\cap B$ is $\left(\frac{s-2}{s}\right)^n$.
So now we know $\Pr(A\cup B)$. You are looking for $1-\Pr(A\cup B)$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9879462203063133,
"lm_q1q2_score": 0.8436198749135445,
"lm_q2_score": 0.8539127510928476,
"openwebmath_perplexity": 136.97739171097695,
"openwebmath_score": 0.9023319482803345,
"tags": null,
"url": "http://math.stackexchange.com/questions/454594/calculate-probability-of-obtaining-at-least-a-sequence-of-numbers-for-a-given-nu"
} |
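The inclusion-exclusion formula can be cross-checked by brute force over all $s^n$ outcomes. This sketch (names are my own) reproduces the $\frac{50}{81}$ value quoted in the question for $s = 3$, $n = 4$.

```python
from fractions import Fraction
from itertools import product

def p_at_least_one_1_and_one_2(s, n):
    # Inclusion-exclusion: 1 - P(no 1) - P(no 2) + P(no 1 and no 2)
    return 1 - 2 * Fraction(s - 1, s) ** n + Fraction(s - 2, s) ** n

def brute_force(s, n):
    # Enumerate every sequence of n tosses of an s-sided die.
    hits = sum(1 for roll in product(range(1, s + 1), repeat=n)
               if 1 in roll and 2 in roll)
    return Fraction(hits, s ** n)

print(p_at_least_one_1_and_one_2(3, 4))  # -> 50/81, as in the question
```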
reinforcement-learning, implementation, value-iteration, c++
This is just notation indicating that the value function is a mapping from S to the real numbers. When implementing, you would want to store V(s) and π(s) as either arrays or some kind of hashmap like unordered_map (in which case your states must be hashable). Here we also have to assume that this container fits in memory; otherwise, the value function has to be approximated with function approximation, which is not covered explicitly by this pseudo-code.
At line 12: Vk would be the element in index k inside of array V? | {
"domain": "ai.stackexchange",
"id": 496,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "reinforcement-learning, implementation, value-iteration, c++",
"url": null
} |
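Although the question is tagged C++, the storage idea is easiest to sketch in Python: V and π are plain hashmaps keyed by state. The toy two-state MDP below is purely an illustrative assumption, not from the question.

```python
# Toy MDP (invented for illustration): two states, two actions.
states = ['s0', 's1']
actions = ['stay', 'move']
gamma = 0.9

# P[(s, a)] lists (probability, next_state, reward) triples.
P = {
    ('s0', 'stay'): [(1.0, 's0', 0.0)],
    ('s0', 'move'): [(1.0, 's1', 1.0)],
    ('s1', 'stay'): [(1.0, 's1', 0.0)],
    ('s1', 'move'): [(1.0, 's0', 0.0)],
}

# V is a plain dict: the "mapping from S to the reals" of the notation.
V = {s: 0.0 for s in states}
for _ in range(100):  # synchronous Bellman updates
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[(s, a)])
                for a in actions)
         for s in states}

# The greedy policy pi(s) is stored the same way.
pi = {s: max(actions,
             key=lambda a: sum(p * (r + gamma * V[s2])
                               for p, s2, r in P[(s, a)]))
      for s in states}
print(V, pi)
```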
homework-and-exercises, conformal-field-theory
I shall do the $2$-point case and leave the $3$-point one to you as an exercise. The canonical reference which I'm using is Di Francesco, Mathieu and Senechal. This is pretty much obligatory reading if you're interested in conformal field theory.
Let $\phi_1$ and $\phi_2$ be two primary fields. Then by definition their conformal transformations are
$$\phi_1(x_1)=\left|\frac{\partial x'}{\partial x}\right|_{x=x_1}^{\Delta_1/d}\phi_1(x_1')$$
where $d$ is the spacetime dimension and $\Delta_1$ the scaling dimension of the field. There's a similar formula for $\phi_2$. Now since the measure and action in the functional integral are conformally invariant we can promote the transformation above to one of the correlation function, viz.
$$\langle\phi_1(x_1)\phi_2(x_2)\rangle=\left|\frac{\partial x'}{\partial x}\right|_{x=x_1}^{\Delta_1/d}\left|\frac{\partial x'}{\partial x}\right|_{x=x_2}^{\Delta_2/d}\langle\phi_1(x_1')\phi_2(x_2')\rangle$$ | {
"domain": "physics.stackexchange",
"id": 21434,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, conformal-field-theory",
"url": null
} |
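For completeness, the standard continuation of this argument (following Di Francesco, Mathieu and Senechal; this step is my summary, not quoted from the answer):

```latex
% Translation and rotation invariance make the correlator a function of
% |x_1 - x_2| alone. For a dilation x' = \lambda x the Jacobian is
% |\partial x'/\partial x| = \lambda^d, so the covariance rule above gives
\langle\phi_1(x_1)\phi_2(x_2)\rangle
  = \lambda^{\Delta_1+\Delta_2}\,
    \langle\phi_1(\lambda x_1)\phi_2(\lambda x_2)\rangle ,
% whose only solution is a pure power law,
\langle\phi_1(x_1)\phi_2(x_2)\rangle
  = \frac{C_{12}}{|x_1-x_2|^{\Delta_1+\Delta_2}} ,
% and covariance under special conformal transformations further forces
% C_{12} = 0 unless \Delta_1 = \Delta_2.
```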
time-complexity, runtime-analysis, bipartite-matching
Each shortest augmenting path in the $1$st phase contains at least $1$ edge.
Each shortest augmenting path in the $2$nd phase contains at least $3$ edges.
And so on.
Each shortest augmenting path in the $k$th phase contains at least $2k-1$ edges.
In particular, each shortest augmenting path in each phase after the first $\lfloor\sqrt{|V|}\rfloor$ phases will contain at least $2\lfloor\sqrt{|V|}\rfloor + 1$ edges.
On the other hand, that Wikipedia page is not wrong when it says "Each phase increases the length of the shortest augmenting path by at least one". Had it said "... by one", it would have been wrong.
Although the description "at least one" is not as strong as "at least two", it is good enough to deduce that the algorithm "takes a total time of $O(|E|\sqrt{|V|})$ in the worst case".
Even with the strong version, "at least two", we cannot lower that asymptotic complexity, although we could reduce the estimate for the hidden constant factor of that big $O$-notation. | {
"domain": "cs.stackexchange",
"id": 19639,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "time-complexity, runtime-analysis, bipartite-matching",
"url": null
} |
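For completeness, the standard counting step that turns these path-length bounds into the quoted total (my paraphrase, not part of the original answer):

```latex
% Let M be the matching after \lfloor\sqrt{|V|}\rfloor phases and M^* a
% maximum matching. The symmetric difference M \triangle M^* decomposes
% into vertex-disjoint augmenting paths, each using more than \sqrt{|V|}
% vertices, so there are fewer than \sqrt{|V|} of them:
|M^*| - |M| < \sqrt{|V|}.
% Each later phase increases |M| by at least 1, so at most \sqrt{|V|}
% further phases run; with O(|E|) work per phase, the total is
% O(|E|\sqrt{|V|}).
```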
java, interview-questions, json, csv
    if (false == manager.employees.containsKey(employeeIdBox))
        manager.employees.put(employeeIdBox, manager.createEmployee(employeeIdBox, firstName, lastName, birthdate)); // NOTE - Duplicates will not be added more than once
}
/**
*
* Class representing survey data
*
*/
public final static class SurveyCSVData extends CSVData {
    private static final short VERSION = 1; // NOTE - Good idea to apply version to data structures
    private Map<Integer, Division> divisions;

    public SurveyCSVData(Employee.SortOrder sortOrderOfDataOrEmployees) {
        super(new String[] {"divisionId", "teamId", "managerId", "employeeId", "lastName", "firstName", "birthdate"});
        if (Employee.SortOrder.ORIGINAL == sortOrderOfDataOrEmployees)
            divisions = new LinkedHashMap<>();
        else
            divisions = new TreeMap<>();
    } | {
"domain": "codereview.stackexchange",
"id": 32296,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, interview-questions, json, csv",
"url": null
} |
rviz
but a 0,0,0,1 quaternion is normalized.
I suspect that this is not a simple fix but I would appreciate any suggestions or recommendations.
Thanks.
Originally posted by Tyrone Nowell on ROS Answers with karma: 48 on 2017-10-11
Post score: 0
Try running roswtf. It should print a message with the offending transform parent and child frame_id.
http://wiki.ros.org/roswtf
https://github.com/ros/geometry/blob/indigo-devel/tf/src/tf/tfwtf.py#L134
Originally posted by kmhallen with karma: 1416 on 2017-10-11
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Tyrone Nowell on 2017-10-26:
Thanks, roswtf helped. The problem seemed to be an issue with the 'tf' package. I reinstalled it and it came right. | {
"domain": "robotics.stackexchange",
"id": 29051,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rviz",
"url": null
} |
vcf, gnomad
If you dig a bit deeper into the BCFtools documentation, you will find many extra things you can do, like re-calculating the allele frequencies based on this new cohort of individuals, if you wanted that too. You can also subset by positions/variants instead of individuals.
GnomAD data
Using GnomAD data, selecting 1K random samples is impossible, since the data are aggregated: instead of genotypes per individual (and thus individuals you could select), we only have the frequency of each variant across various sub-cohorts within the GnomAD project. There is no individual-level data.
The most you can do is select one of these sub-cohorts and filter for variants with a frequency > 0 (or within a range) in that cohort. You could also get variants present only in a specific cohort, by using either frequency information or other info such as allele counts, etc. | {
"domain": "bioinformatics.stackexchange",
"id": 2284,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "vcf, gnomad",
"url": null
} |
random-forest, cross-validation
You are working on a regression task, that is, you are trying to predict not labels but continuous values as your target. By default, scikit-learn regression estimators report the R_squared metric as their score. Intuitively, R_squared measures how much of the variance your model explains. It is calculated as $R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$. | {
"domain": "datascience.stackexchange",
"id": 4730,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "random-forest, cross-validation",
"url": null
} |
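As a concrete sketch of how that default score is computed (the numbers below are invented for illustration):

```python
def r_squared(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot: the fraction of variance explained."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [3.0, -0.5, 2.0, 7.0]   # made-up targets
y_pred = [2.5, 0.0, 2.0, 8.0]    # made-up model predictions

print(r_squared(y_true, y_pred))
# Predicting the mean everywhere gives R^2 = 0; perfect predictions give 1.
```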
javascript, beginner, css, console, animation
(function greet() {
    if (greeting.length > 0 && greeting.length < 3) {
        text.insertBefore(document.createElement('br'), text.lastChild);
    }
    var line = greeting.shift();
    if (!line) {
        return;
    }
    line = line.split('');
    (function type() {
        var character = line.shift();
        if (!character) {
            return setTimeout(greet, 2000);
        }
        text.insertBefore(document.createTextNode(character), text.lastChild);
        setTimeout(type, 300);
    }());
}());
body {
    background-color: #000000;
    color: #99ffcc;
    font-family: Courier;
}
i {
    font-style: unset;
    font-size: 1em;
    animation: blink 1100ms linear infinite;
}
@keyframes blink {
    49% {
        opacity: 1;
    }
    50% {
        opacity: 0;
    }
    89% {
        opacity: 0;
    }
    90% {
        opacity: 1;
    }
}
<div id="text"></div> | {
"domain": "codereview.stackexchange",
"id": 14720,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, beginner, css, console, animation",
"url": null
} |
expansion
Title: Space expansion in layman terms
So far I have come to understand that the expansion of space is not to be understood as stars drifting further apart through space. There's something more fundamental - e.g. you can't simply measure its speed, nor tell where the center of the universe (the midpoint of the expansion) is. Well, some smart people got me from knowing the answer that is "Simple, obvious, neat and wrong" to "I have no clue what it is".
Is there a resource/method that would allow a layman to understand how to think about distortions of space like its expansion; to visualize and understand the concept correctly, without erroneous simplifications; and to understand it before trying to delve into the underlying details of the physics? Basically, if two particles are placed with no other interaction between them, the distance between them will increase. | {
"domain": "astronomy.stackexchange",
"id": 43,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "expansion",
"url": null
} |