| text (string, lengths 49 to 10.4k) | source (dict) |
|---|---|
quantum-chromodynamics, feynman-diagrams, scattering-cross-section, color-charge
Now the color polarizations are normalized, so the central terms in each factor just give $1$. This leaves
$$ \sigma \propto {1\over N^2} \sum_{initial} \sum_{final} [c^\dagger _{\overline d} ~t^a ~t^b c_{\overline d} ][~c^\dagger_{\overline t}~t^b ~t^a ~c_{\overline t}] $$
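As a quick numerical aside (a sketch assuming the standard convention $t^a = \lambda^a/2$ with the Gell-Mann matrices, which the excerpt does not spell out), the traces these color sums produce are easy to verify:

```python
import numpy as np

# Gell-Mann matrices lambda^1..lambda^8 (standard convention; assumed here)
i = 1j
lam = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, -i, 0], [i, 0, 0], [0, 0, 0]]),
    np.diag([1, -1, 0]).astype(complex),
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex),
    np.array([[0, 0, -i], [0, 0, 0], [i, 0, 0]]),
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, -i], [0, i, 0]]),
    np.diag([1, 1, -2]).astype(complex) / np.sqrt(3),
]
t = [m / 2 for m in lam]  # SU(3) generators t^a = lambda^a / 2

# Tr(t^a t^b) = delta^{ab} / 2 in this normalization
traces = np.array([[np.trace(ta @ tb) for tb in t] for ta in t])
print(np.allclose(traces, np.eye(8) / 2))  # -> True
```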
But now the sum over initial colors gives exactly the definition of $\mathrm{Tr}(t^a t^b)$. Similarly for the final colors. | {
"domain": "physics.stackexchange",
"id": 13123,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-chromodynamics, feynman-diagrams, scattering-cross-section, color-charge",
"url": null
} |
php, object-oriented, classes, inheritance
public function __set($name, $value) {
$this->_vars[$name] = $value;
}
}
class SQL{
protected static $_instance;
private function __construct() {
$main = Main_Controller::getInstance();
echo printExtender($main->config->db);
}
public static function getInstance() {
if (!isset(self::$_instance)) {
self::$_instance = new self();
}
return self::$_instance;
}
}
$main = Main_Controller::getInstance();
$main->loadDefaultClasses();
The idea was to get the $main -> config -> .... thing working so I can use that later on. Am I doing this right, or is it awful? And do you have any tips?
Overview
This isn't OOP. To me, object-oriented programming requires thinking about your programs in terms of classes, their responsibilities, their attributes, and behaviours. This means:
Information hiding
Encapsulation
Polymorphism
Collaboration
Config Class
Dearth of source code comments aside, let's look at the Config class. Here are some questions about the configuration options:
What are the options?
What do each of the options do?
How do the options affect the application?
How are the options communicated to the end user? | {
"domain": "codereview.stackexchange",
"id": 3997,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, object-oriented, classes, inheritance",
"url": null
} |
algorithm, ruby
Which will give you something like
[["Tania", ["Mohamad", "Sami", "Ikram", "Carolina", "Jose"]],
["Mohamad", ["Sami", "Ikram", "Carolina", "Jose", "Tania"]],
["Sami", ["Ikram", "Carolina", "Jose", "Tania", "Mohamad"]],
["Ikram", ["Carolina", "Jose", "Tania", "Mohamad", "Sami"]],
["Carolina", ["Jose", "Tania", "Mohamad", "Sami", "Ikram"]],
["Jose", ["Tania", "Mohamad", "Sami", "Ikram", "Carolina"]]]
There's a pattern of course (if you've given Ikram a gift, next you'll be giving Carolina a gift, then José, etc.). But if it's Secret Santa, the players shouldn't be able to figure that out anyway unless they cheat (and if they do, well, all bets are off).
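The rotation scheme described above is easy to sketch (shown here in Python for brevity, though the question is in Ruby, where Array#rotate does the same job):

```python
import random

def secret_santa(names, seed=None):
    """Rotation-based Secret Santa: shuffle, then each giver gifts the
    next person in the shuffled order, so nobody draws themselves."""
    order = list(names)
    random.Random(seed).shuffle(order)   # hide the rotation pattern
    receivers = order[1:] + order[:1]    # rotate by one
    return list(zip(order, receivers))

pairs = secret_santa(["Tania", "Mohamad", "Sami", "Ikram", "Carolina", "Jose"])
assert all(giver != receiver for giver, receiver in pairs)
```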
You could of course use Array#combination or Array#permutation to achieve similar results. I highly encourage you to check out all of the built-in array methods, and those included from the Enumerable module. There's a lot of good stuff there.
As for a more OOP approach, I wouldn't make a class called "Derangement". Derangement is a method. It's an action, an operation, a means of achieving a certain result or state - not something that is itself stateful.
The simple solution, given your code, is to rename your class to "Game" or something along those lines. | {
"domain": "codereview.stackexchange",
"id": 6725,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithm, ruby",
"url": null
} |
matlab, discrete-signals, autocorrelation
xc_pulses <- acf(pulses, plot = FALSE)
histogram <- hist(diff(pulseTimes), breaks=xc_pulses$lag, plot=FALSE)
plot(xc_times$lag,xc_times$acf, type="l", ylim=c(-0.1,1))
lines(xc_pulses$lag, xc_pulses$acf, col="red")
lines(histogram$mids, histogram$counts / max(histogram$counts), col="green")
legend(12, 1.0, c("ACF of times", "ACF of pulses", "Histogram of inter-pulse times"), lwd=c(2.5,2.5, 2.5),col=c("black","red", "green")) | {
"domain": "dsp.stackexchange",
"id": 10187,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "matlab, discrete-signals, autocorrelation",
"url": null
} |
condensed-matter
Title: Bravais lattice with sublattices : why multiple bands? I have a very naive question: given a tight-binding model (with nearest-neighbor hopping) on a lattice defined by a Bravais lattice with a number of sublattices (for instance the honeycomb lattice is a triangular lattice with two sublattices), why is there a band associated with each sublattice? For instance, when a lattice is defined by a Bravais lattice with a 2-sublattice basis, the tight-binding model will have 2 bands. But why is that? There must be something very simple that I am not getting here... Is it just because, to define a band structure, you need some translation invariance? Think of a tight-binding model without hopping. This situation would even be accurate for a bulk where the distance between atoms is very large. What do we have in this situation when we look at the energies of the atoms?
For simplicity let's only look at the s-orbitals and 2 sublattices A and B (= 2 atoms in the unit cell) - you would find 2 different discrete energies:
The energy of an s-orbital of type A and the energy of an s-orbital of type B. You only have to look into the unit cell to get that. And since the unit cell has 2 atoms in it, there are two energies. If you would include p-orbitals, you would add 3 p-energies of the A type and 3 of the B type, and so on...
Now "turn on" the hopping (it doesn't matter how many neighbors you include). The interaction between the atoms now forms the bands, which means that the energies we talked about previously smear into bands. So the actual number of bands equals the number of discrete energies you would have without interaction.
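This can be made concrete with a minimal sketch (hypothetical parameters, not from the text): a two-atom unit cell gives a 2x2 Bloch Hamiltonian, so diagonalizing at each k yields exactly two bands.

```python
import numpy as np

# On-site energies of sublattices A and B, and a hopping amplitude t
eA, eB, t = 0.0, 1.0, 0.3  # illustrative values

def bands(k):
    h = t * (1 + np.exp(-1j * k))              # hopping to the two neighbors in 1D
    H = np.array([[eA, h], [np.conj(h), eB]])  # the 2x2 Bloch Hamiltonian
    return np.linalg.eigvalsh(H)               # Hermitian -> two real energies

E = np.array([bands(k) for k in np.linspace(-np.pi, np.pi, 101)])
print(E.shape)  # -> (101, 2): two bands, one per atom in the unit cell
```

With t set to zero the two bands collapse back to the discrete energies eA and eB, which is exactly the "no hopping" picture above.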
Another way to look at it is the Hamiltonian matrix, whose eigenvalues are the energies we look for: | {
"domain": "physics.stackexchange",
"id": 11834,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "condensed-matter",
"url": null
} |
ros, nxt, ros-groovy, lego
Title: Lego NXT / Groovy Support
I want to learn ROS. I have a Lego NXT, so I figure this is a good place to start. The NXT page recommends an older version of ROS (Diamondback), but I have installation issues on OS X Mountain Lion. Is there any plan to update for Groovy?
Originally posted by seanstocktonclark on ROS Answers with karma: 21 on 2012-12-31
Post score: 2
Same question, don't know about OS X:
http://answers.ros.org/question/52836/lego-nxt-ros-working-on-groovy-galapagos/?comment=69874#comment-69874
Originally posted by msieber with karma: 181 on 2013-08-01
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 12238,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, nxt, ros-groovy, lego",
"url": null
} |
thermodynamics, newtonian-gravity, physical-chemistry, osmosis
Title: Is osmosis stronger or weaker than gravity, and by how much? Suppose you prepare a jar of salt water and another of sugar water and invert one on top of the other with a divider between them, and then carefully remove that divider so the liquids are in contact.
Will the concentrations of salt and sugar reach equilibrium, despite the fact that the salt or sugar in the bottom jar has to overcome gravity to rise into the upper jar? How would you calculate the relevant forces here? Osmotic pressure is not really relevant here. In osmosis, only the solvent can move. In your scenario, both the solvent and the solutes can move. You are asking about ordinary mixing.
It will always be possible for equilibrium to be reached eventually, but in general we don't know how to calculate from first principles how long it will take. The solute particles in the bottom jar are not confined to the bottom jar; they have kinetic energy and will occasionally cross over into the top jar. The mixing will be more rapid at higher temperatures.
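The numerical estimate given in the next paragraph can be reproduced directly (assuming $g = 9.81\ \mathrm{m/s^2}$ and a $10\ \mathrm{cm}$ height difference between jar centers):

```python
import math

m = 5.68e-25      # mass of a sucrose molecule, kg
g = 9.81          # gravitational acceleration, m/s^2 (assumed value)
h = 0.10          # height difference, m (assumed value)
k = 1.380649e-23  # Boltzmann constant, J/K
T = 293           # ambient temperature, K

# Boltzmann ratio of probabilities: bottom jar vs. top jar
ratio = math.exp(m * g * h / (k * T))
print(round(ratio, 6))  # -> 1.000138
```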
At equilibrium, the solute will have a very slightly higher concentration closer to the ground. The heavier the solute is, the more pronounced the gradient. If we treat the solution as an ideal solution, we can calculate the difference in concentrations explicitly using the Boltzmann distribution. For example, let's assume that each jar has a height of 10 cm. The mass of a sucrose molecule is $5.68 \times 10^{-25}$ kg. Let's assume an ambient temperature of 293 K. The ratio of probabilities for it to be found in the bottom jar rather than the top jar is $\exp(\frac{mgh}{kT})$ which comes out to about 1.000138. That means the sugar will be more concentrated in the bottom jar by a ratio of 1.000138; a very slight difference. You might be able to measure it using a very sensitive spectrophotometer. | {
"domain": "physics.stackexchange",
"id": 95395,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "thermodynamics, newtonian-gravity, physical-chemistry, osmosis",
"url": null
} |
php, mysql
$query = "SELECT * FROM schedule LEFT OUTER JOIN tour_consultants ON tour_consultants.tc_name = schedule.tc_name
WHERE `date` = '$date'";
$result = $this->dblink->query($query) ;
if((isset($result->num_rows)) && ($result->num_rows != '')) {
$itr = 0;
//Store the results into an associative array.
while ($row = $result->fetch_assoc()) {
$this->active_day[$itr]['time_in'] = $row['time_in'];
$this->active_day[$itr]['time_out'] = $row['time_out'];
$this->active_day[$itr]['tc_name'] = $row['tc_name'];
$this->active_day[$itr]['email'] = $row['email'];
$itr++;
}
return true;
}
else{
return false;
}
}
//This will only run if Today's date is set up in the database.
private function get_active_time() {
//Loop through the array of active today, and look for people who are currently working.
//If they are active, add them to the activetime array.
foreach($this->active_day as $record => $ar) {
if($this->is_between($this->timenow, $ar['time_in'], $ar['time_out']))
$this->active_time[] = $ar; | {
"domain": "codereview.stackexchange",
"id": 529,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, mysql",
"url": null
} |
fourier-transform, phase, fourier
Thank you for taking the time to read this. Granted that a precise answer to your question would fill (and has filled) a multitude of books, here is a stripped-down answer.
In the context of Fourier analysis, when we refer to a "frequency component" of a function $f(t)$, we are typically talking about the constituent sine and cosine waves that, when summed together in a specific way, form the original function $f(t)$. The Fourier Transform decomposes $f(t)$ into these sine and cosine components, each associated with a specific frequency.
The Fourier Transform of a real-valued function, such as $f(t)$, can be complex-valued (it can be real as well, depending on the symmetries of $f(t)$ - $f(t)$ even, for example). This complex result encodes both the amplitude and phase information of each frequency component. The real part of this result corresponds to the coefficients of the cosine terms (even function component), and the imaginary part corresponds to the coefficients of the sine terms (odd function component).
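A quick numerical illustration of that real/imaginary split (a sketch, using NumPy's FFT sign conventions):

```python
import numpy as np

# Build a signal with known cosine and sine content at one frequency bin,
# then read the two amplitudes back off the FFT's real and imaginary parts.
N = 64
n = np.arange(N)
f = 2 * np.cos(2 * np.pi * 3 * n / N) + 5 * np.sin(2 * np.pi * 3 * n / N)
F = np.fft.fft(f)

cos_coeff = F[3].real / (N / 2)    # even part -> cosine amplitude
sin_coeff = -F[3].imag / (N / 2)   # odd part  -> sine amplitude
print(round(cos_coeff, 6), round(sin_coeff, 6))  # -> 2.0 5.0
```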
When we express $f(t)$ in terms of its Fourier Transform, we are essentially representing it as a sum (or integral, in the case of the continuous Fourier Transform) of these sine and cosine waves. Each sine and cosine wave is a "frequency component" of $f(t)$. The complex exponential form $e^{-j\omega t}$ (where $j$ is the imaginary unit) is often used because it compactly represents both sine and cosine terms through Euler's formula. | {
"domain": "dsp.stackexchange",
"id": 12428,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fourier-transform, phase, fourier",
"url": null
} |
# Function that returns disjoint sublists
I have to write a function that takes two arguments: a list list and an integer n. The function should return another list called result with sublists of length n. These sublists should contain all variations of elements from list, with no duplicates. Could someone suggest how to approach this problem? Thanks in advance!
• Does Subsets[list, {n}] do what you need? Mar 4 '21 at 19:14
You have a list of numbers without duplicates. You want to create all sublists with given length, where the order matters and no duplicates in the sublist are allowed, for short: variations without repetition.
For safety, we first make sure that the original list contains no duplicates. Then we create all sublists with length n. Finally we create all permutations of the sublists:
variations[list_, n_] := Module[{d = Union[list]},
d = Subsets[d, {n}];
Flatten[Permutations /@ d, 1]
]
Here is a small test:
dat = Range[5];
variations[dat, 3]
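For comparison, the same "variations without repetition" can be sketched outside Mathematica (Python's itertools here, purely as an illustration):

```python
from itertools import permutations

def variations(lst, n):
    # deduplicate first, then take all ordered length-n selections
    return list(permutations(sorted(set(lst)), n))

print(len(variations(range(1, 6), 3)))  # -> 60  (= 5*4*3)
```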
• thanks a ton, works perfectly fine! Mar 4 '21 at 20:20
You can use Permutations directly using the second argument to specify the length of sublists:
ClearAll[duplicateFreePermutations]
duplicateFreePermutations[lst_, n_] := Permutations[Union @ lst, {n}]
Examples:
duplicateFreePermutations[Range[5], 3]
duplicateFreePermutations[{a, b, b, c, c}, 3]
{{a, b, c}, {a, c, b}, {b, a, c}, {b, c, a}, {c, a, b}, {c, b, a}} | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9546474194456935,
"lm_q1q2_score": 0.8036466779835496,
"lm_q2_score": 0.8418256432832333,
"openwebmath_perplexity": 2121.323299919191,
"openwebmath_score": 0.3268066644668579,
"tags": null,
"url": "https://mathematica.stackexchange.com/questions/241080/function-that-returns-disjoint-sublists/241091#241091"
} |
type-theory
Title: Is it possible to copy data in a linear type system? I'm having a little problem with the abstraction layers in linear types. If every variable is used exactly once, there is no way to copy data, since you have to read each datum twice in order to write it twice to two different locations.
However, you could argue that once a variable is loaded into a register, you may do whatever you want with it, so a -> (a,a) is a possible function. I'm simply not sure where the line is drawn, or why.
Linear types in programming allow a compiler to build efficient pipelines of data transformation. Single-use variables allow in-place mutations in pure functional languages, and/or a memory-efficient implementation without using a garbage collector. Copying a value might make this process a lot more difficult, if not impossible, but I'm not sure.
Is it possible to copy data in a linear type system? Why?
Are there any (big) programming languages that implement linear types with/without copying? I don't know why you bring up registers at all, but the answer to your first question is "sure". You (potentially) are able to copy data, you just aren't able to do it freely. For example, an operation $\mathtt{dup}_A : A \multimap A\otimes A$ may be provided, but only for certain types $A$. Most commonly there are "exponentials", written $!A$ for which $\mathtt{dup}_{!A}$ is definable or given. The opposite operation is often called $\mathtt{kill}_A : A \multimap 1$ or $\mathtt{discard}_A$. You may start to see how this leads to a kind of functional manual memory management. | {
"domain": "cs.stackexchange",
"id": 7860,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "type-theory",
"url": null
} |
# Prove the set identity $(A \cap B) \setminus (B\cap C) = A \cap (B \setminus C)$
Prove the set identity $(A \cap B) \setminus (B\cap C) = A \cap (B \setminus C)$
Attempt at proof:
For some element $x\in (A \cap B) \setminus (B\cap C)$: $$x\in (A \cap B) \setminus (B\cap C) \implies x \in (A \cap B) \land x \notin (B\cap C)$$ $$\implies x \in A \land x\in B \land x \notin B \land x \notin C$$
For some element $x\in A \cap (B \setminus C)$: $$x\in A \cap (B \setminus C) \implies x \in A \land x \in (B \setminus C) \implies x \in A \land x\in B \land x \notin C$$
The first part of the proof seems to have a contradiction with $x\in B \land x \notin B$ and I am not sure how to prove further
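The identity itself is easy to check empirically (a quick Python sanity test over random sets, not part of the proof):

```python
import random

def check(trials=200, universe=range(10), seed=1):
    """Verify (A ∩ B) \\ (B ∩ C) == A ∩ (B \\ C) on random subsets."""
    rng = random.Random(seed)
    for _ in range(trials):
        A, B, C = ({x for x in universe if rng.random() < 0.5} for _ in range(3))
        if (A & B) - (B & C) != A & (B - C):
            return False
    return True

print(check())  # -> True
```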
• You use $x\not\in(B\cap C)\implies x\not\in B \land x\not\in C$. This is false. It should be something like $$x\not\in(B\cap C) \iff \lnot(x\in B \land x \in C) \iff x\not\in B \lor x\not\in C.$$ – John Griffin Sep 24 '17 at 1:17
• @JohnGriffin Why does it change to $\lor$? – lakada Sep 24 '17 at 1:22
• De Morgan's Law: $$\lnot (p \lor q) \equiv \lnot p \land \lnot q$$ $$\lnot (p \land q) \equiv \lnot p \lor \lnot q$$ – Andrew Tawfeek Sep 24 '17 at 1:24 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9850429164804706,
"lm_q1q2_score": 0.8143466009257826,
"lm_q2_score": 0.8267117983401363,
"openwebmath_perplexity": 331.36600530253327,
"openwebmath_score": 0.9999786615371704,
"tags": null,
"url": "https://math.stackexchange.com/questions/2442511/prove-the-set-identity-a-cap-b-setminus-b-cap-c-a-cap-b-setminus-c"
} |
python, error-handling, geospatial
Title: Rate-limited geographic data lookup I'm looking for a code review of this python I wrote - this code reads the zipcode column value from a CSV file and calls an API to retrieve lat, long, state and city info.
It works fine and gives me the correct results, but I'm looking for ways to improve the code/my approach and exception handling. I am pretty sure there are better ways to write this long code.
All comments and suggestions as brutal as may be are mostly appreciated.
import requests
import pandas as pd
import time
from ratelimiter import RateLimiter
API_KEY = "some_key"
zip_code_col_name = "zipcode"
RETURN_FULL_RESULTS = False
excel_data = pd.read_csv("Downloads/test_order.csv", encoding='utf8')
if zip_code_col_name not in excel_data.columns:
raise ValueError("Missing zipcode column")
# tolist() would put all zipcodes from the column into a list including duplicates, hence avoiding it:
#zipcodes = excel_data[zip_code_col_name].tolist()
zipcodes =[]
for i in excel_data.zipcode:
if i not in zipcodes:
zipcodes.append(i)
def get_geo_info(zipcode, API_KEY, return_full_response=False):
init_url = "some_url"
if API_KEY is not None:
url = init_url+API_KEY+"/info.json/"+format(zipcode)+"/degrees"
# Ping the site for the results:
r = requests.get(url)
if r.status_code != 200:
# this will print the error code, but in the result set it will be empty
print("error is " + str(r.status_code))
data = r.json()
output = {
"zipcode" : data.get('zip_code'),
"lat" : data.get('lat'),
"lng" : data.get('lng'),
"city" : data.get('city'),
"state" : data.get('state')
} | {
"domain": "codereview.stackexchange",
"id": 32235,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, error-handling, geospatial",
"url": null
} |
classification, sift
Title: Image classification using SIFT features and SVM I am hoping someone can explain how to use the bag of words model to perform image classification using SIFT/SURF/ORB features and a support vector machine?
At the moment I can compute the SIFT feature vectors for an image, and have implemented an SVM, however I am finding it hard to understand the literature on how to use the bag of words model to 'vector quantize' the SIFT features and build histograms that give fixed-size vectors that can be used to train and test the SVM.
Any links to tutorials or literature on the topic are welcome, thanks.
If you could implement an SVM, you can quantize the features. :)
Typically the features are quantized using k-means clustering. First, you decide what your "vocabulary size" should be (say 200 "visual words"), and then you run k-means clustering for that number of clusters (200). The SIFT descriptors are vectors of 128 elements, i.e. points in 128-dimensional space. So you can try to cluster them, like any other points. You extract SIFT descriptors from a large number of images, similar to those you wish to classify using bag-of-features. (Ideally this should be a separate set of images, but in practice people often just get features from their training image set.) Then you run k-means clustering on this large set of SIFT descriptors to partition it into 200 (or whatever) clusters, i.e. to assign each descriptor to a cluster. k-means will give you 200 cluster centers, which you can use to assign any other SIFT descriptor to a particular cluster.
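The quantize-and-count step can be sketched in a few lines (pure NumPy, with random data standing in for real SIFT descriptors and k-means centers):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 200                                    # vocabulary size ("visual words")
centers = rng.normal(size=(K, 128))        # stand-in for k-means cluster centers
descriptors = rng.normal(size=(350, 128))  # stand-in for one image's SIFT descriptors

# Assign each descriptor to its nearest center, then count per cluster:
d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
labels = d2.argmin(axis=1)
hist = np.bincount(labels, minlength=K)    # fixed-size feature vector for the SVM

assert hist.shape == (K,) and hist.sum() == len(descriptors)
```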
Then you take each SIFT descriptor in your image, and decide which of the 200 clusters it belongs to, by finding the center of the cluster closest to it. Then you simply count how many features from each cluster you have. Thus, for any image with any number of SIFT features you have a histogram of 200 bins. That is your feature vector which you give to the SVM. (Note, the term features is grossly overloaded). | {
"domain": "dsp.stackexchange",
"id": 555,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "classification, sift",
"url": null
} |
• where, as shown in figure I.1 on page 15, $\gamma_k=2-\alpha_k$ such that $\gamma_k-1=\mu_k$. In my opinion, if (I.47) is the same as the last equation in the 40-vote answer, the exponent has to be $-\mu_k$ and the fraction has to be turned around. What am I missing? $\zeta$ isn't constant?! – user86808 Jul 18 '13 at 13:10 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.969324199175492,
"lm_q1q2_score": 0.8077616919916618,
"lm_q2_score": 0.8333245911726382,
"openwebmath_perplexity": 327.1029561240442,
"openwebmath_score": 0.923274040222168,
"tags": null,
"url": "https://math.stackexchange.com/questions/446092/modification-of-schwarz-christoffel-integral/446120"
} |
java, algorithm
Title: Codeforces 427B: Prison Transfer I'm sure that among you (users of Code Review) there are many who are interested in the Codeforces problemset. I tried to solve the 427B: Prison Transfer task but got a Wrong Answer judgment on the 4th test.
Maybe you've already solved the problem, or are interested in solving it? Can you find the error in my code, please?
PrisonTransfer.java
import java.io.*;
import java.util.StringTokenizer;
public class PrisonTransfer {
protected static class FastScanner {
BufferedReader br;
StringTokenizer st;
FastScanner(InputStream f) {
br = new BufferedReader(new InputStreamReader(f));
}
String next() throws IOException {
while (st == null || !st.hasMoreTokens()) {
st = new StringTokenizer(br.readLine());
}
return st.nextToken();
}
int nextInt() throws IOException {
return Integer.parseInt(next());
}
}
public static void main(String... args) throws IOException {
calcTransferCombinations();
}
protected static void calcTransferCombinations() throws IOException {
FastScanner in = new FastScanner(System.in);
PrintWriter out = new PrintWriter(System.out);
final int PRISONERS_NUMBER = in.nextInt();
final int MAX_AGGRESSION = in.nextInt();
final int RANK_LENGTH = in.nextInt();
int combinations = PRISONERS_NUMBER - RANK_LENGTH + 1;
int distanceToLastDangerous = RANK_LENGTH;
int itr = 1;
while (itr <= PRISONERS_NUMBER) {
int aggression = in.nextInt();
if (aggression > MAX_AGGRESSION) {
if (PRISONERS_NUMBER - itr < RANK_LENGTH) {
combinations -= Math.min(PRISONERS_NUMBER - itr + 1, distanceToLastDangerous);
break;
} | {
"domain": "codereview.stackexchange",
"id": 7480,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, algorithm",
"url": null
} |
color
# Proportional XYZ color at flux of 1 lumen (setting Y = 1):
XYZ_0 = np.array([xy_0[0]/xy_0[1], 1, (1 - xy_0[0] - xy_0[1])/xy_0[1]])
XYZ_1 = np.array([xy_1[0]/xy_1[1], 1, (1 - xy_1[0] - xy_1[1])/xy_1[1]])
# Correlated color temperatures of the two LED types
cct_0 = colour.xy_to_CCT(colour.XYZ_to_xy(XYZ_0), method='McCamy 1992')
cct_1 = colour.xy_to_CCT(colour.XYZ_to_xy(XYZ_1), method='McCamy 1992')
# Correlated color temperature of a mixture of the two LED types each at nominal current
colour.xy_to_CCT(colour.XYZ_to_xy(weight_0*XYZ_0 + weight_1*XYZ_1), method='McCamy 1992')
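The xy-to-XYZ normalization used above (setting Y = 1) can be sanity-checked with a round trip, independent of the colour package (the helper names below are ad hoc, not part of any library):

```python
import numpy as np

def xy_to_XYZ_unitY(xy):
    # Proportional XYZ at Y = 1, as in the script above
    x, y = xy
    return np.array([x / y, 1.0, (1 - x - y) / y])

def XYZ_to_xy_chromaticity(XYZ):
    s = XYZ.sum()
    return XYZ[0] / s, XYZ[1] / s

xy = (0.3127, 0.3290)  # D65 white point, used here only as test input
print(np.allclose(XYZ_to_xy_chromaticity(xy_to_XYZ_unitY(xy)), xy))  # -> True
```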
The obtained correlated color temperatures of the two LED types are 1746 and 3934 kelvins, close to their nominal CCTs of 1800 K and 4000 K. At nominal forward current for each LED type, the CCT of the mixture is 2715 kelvins. So the script seems to be working as intended.
We can also get a color temperature curve as a function of the mixing ratio, and see that it is not exactly linear, although this depends on the definition of the mixing ratio. Continuing in Python:
import matplotlib.pyplot as plt | {
"domain": "dsp.stackexchange",
"id": 8061,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "color",
"url": null
} |
fft, frequency-spectrum, frequency, sampling, zero-padding
320.5,2613.975,2907.45,3194.1,3480.75,3753.75,3890.25,4093.635,3549,2866.5,2184,1842.75,1706.25,1228.5,682.5,1.092,271.635,464.1,750.75,955.5,1228.5,1569.75,1842.75,2047.5,2320.5,2593.5,2866.5,3139.5,3412.5,3685.5,3822,3999.45,3822,3685.5,3412.5,3139.5,2866.5,2593.5,2320.5,2047.5,1774.5,1501.5,1228.5,955.5,682.5,518.7,313.95,135.135,313.95,518.7,682.5,955.5,1228.5,1501.5,1774.5,2047.5,2252.25,2525.25,2757.3,3057.6,3344.25,3494.4,3671.85,3842.475,3671.85,3494.4,3344.25,3057.6,2757.3,2525.25,2252. | {
"domain": "dsp.stackexchange",
"id": 2931,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fft, frequency-spectrum, frequency, sampling, zero-padding",
"url": null
} |
rosinstall
Title: Error in Eigen file while installing ROS
Hi, I am building my robot, and when I try to install ROS on Raspbian I get this error:
CMake Error at CMakeLists.txt:7 (find_package):
By not providing "FindEigen3.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "Eigen3", but
CMake did not find one.
Could not find a package configuration file provided by "Eigen3" with any
of the following names:
Eigen3Config.cmake
eigen3-config.cmake
Add the installation prefix of "Eigen3" to CMAKE_PREFIX_PATH or set
"Eigen3_DIR" to a directory containing one of the above files. If "Eigen3"
provides a separate development package or SDK, be sure it has been
installed.
-- Configuring incomplete, errors occurred!
See also "/Users/robert/ros_catkin_ws/build_isolated/pcl_ros/CMakeFiles/CMakeOutput.log".
<== Failed to process package 'pcl_ros':
Command '['/Users/robert/ros_catkin_ws/install_isolated/env.sh', 'cmake', '/Users/robert/ros_catkin_ws/src/perception_pcl/pcl_ros', '-DCATKIN_DEVEL_PREFIX=/Users/robert/ros_catkin_ws/devel_isolated/pcl_ros', '-DCMAKE_INSTALL_PREFIX=/Users/robert/ros_catkin_ws/install_isolated', '-DCMAKE_BUILD_TYPE=Release', '-G', 'Unix Makefiles']' returned non-zero exit status 1 | {
"domain": "robotics.stackexchange",
"id": 29740,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "rosinstall",
"url": null
} |
statistical-mechanics, condensed-matter, mean-field-theory
Title: Satisfies the Ginzburg criterion but violates mean field theory The Ginzburg criterion is a self-consistency check on the mean field solution - it does not explicitly check whether mean field theory is correct, just that it produces a self-consistent answer. This therefore leads to the natural questions: Is there any system where the Ginzburg criterion is satisfied but mean field theory does not work? Or can it be shown that the Ginzburg criterion is actually more than a self-consistency check and does predict the validity of mean-field theory? I think the question is whether there are cases in which perturbative corrections to the mean field theory are small, but non-perturbative effects invalidate the mean field approximation. In general this sort of thing of course happens in many quantum field theories, but I am not aware of an example in the context of statistical field theories of second-order phase transitions (which is where the Ginzburg criterion is introduced). Of course, it is always possible that the MFA is wrong because you picked the wrong order parameter, or the Landau-Ginzburg paradigm of local order parameters is not applicable (as in topological phase transitions). | {
"domain": "physics.stackexchange",
"id": 48210,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "statistical-mechanics, condensed-matter, mean-field-theory",
"url": null
} |
electrostatics
http://en.wikipedia.org/wiki/Electronegativity#Electronegativities_of_the_elements
The redder the atom, the higher the electronegativity, and the more likely it is for the atom to gain electrons and become negatively charged. That's especially true for light halogens (fluorine, chlorine) and oxygen. That's partly why glass - with lots of $SiO_2$ - likes to get negatively charged in the triboelectric effect. Even sulfur (40% of ebonite) has a higher electronegativity than e.g. the carbon and hydrogen that are abundant in fur, which is why fur loses electrons and becomes positively charged.
Of course, the actual arrangement of the atoms in the molecules matters, too. So this overview of the periodic table was just an analogy, not a reliable way to find out the results of the triboelectric effect. | {
"domain": "physics.stackexchange",
"id": 15568,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "electrostatics",
"url": null
} |
hilbert-transform, complex, compression, linear-prediction
Title: Compression algorithms specific to complex signals I am looking for (lossy or lossless) compression algorithms dedicated to complex signals. The latter could be composite data (like the left and right channels of stereo audio), a Fourier transformation or an intermediate step of a complex processing chain, a Hilbert pair of generic signals, or complex measurements like some NMR (nuclear magnetic resonance) or mass spectrometry data.
A little background: I have been working on real signal and image coding, and mildly on video and geological mesh compression. I have practiced adaptive lossless and lossy compression (RLS, LPC, ADPCM), Huffman and arithmetic coding, transforms (DCT, wavelets), etc. Using standard compression tools separately on each of the real and the imaginary parts is not the main aspect of the question. I am interested in compression-related ideas specific to complex data, for instance:
sampling of the complex plane,
joint quantization of module and phase,
optimal quantization of 2D complex values: Lloyd-Max is "optimal" for 1D data. I remember that 2D optimal quantization is generally more complicated. Are there 2D binnings dedicated to complex values, or for instance Gauss integers?
entropy coding methods, arranging along complex "features" (angle, magnitude),
entropy calculation for complex data,
use of specific statistical tools (e.g. from Statistical Signal Processing of Complex-Valued Data: The Theory of Improper and Noncircular Signals)
integer transformations for Gaussian integers.
I have not found many tools. My scarce references are:
An experimental audio codec based on warped linear prediction of complex valued signals, 1997
Compression of Complex-Valued SAR Images, 1999 (Download)
Lossless Signal Processing with Complex Mersenne Transforms, 2003 | {
"domain": "dsp.stackexchange",
"id": 8084,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "hilbert-transform, complex, compression, linear-prediction",
"url": null
} |
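One of the bullets above — joint quantization of module and phase — is easy to prototype. Below is a minimal Python/NumPy sketch (the function name and level counts are my own choices, not taken from any codec): it snaps each complex sample to a uniform magnitude grid and a uniform phase grid, which is the polar alternative to quantizing real and imaginary parts separately.

```python
import numpy as np

def polar_quantize(z, mag_levels=16, phase_levels=16):
    """Quantize complex samples jointly in magnitude and phase.

    Magnitudes are snapped to a uniform grid up to the peak value,
    phases to a uniform grid over the circle.
    """
    z = np.asarray(z, dtype=complex)
    mag = np.abs(z)
    phase = np.angle(z)

    mag_step = mag.max() / mag_levels if mag.max() > 0 else 1.0
    q_mag = np.round(mag / mag_step) * mag_step

    phase_step = 2 * np.pi / phase_levels
    q_phase = np.round(phase / phase_step) * phase_step

    return q_mag * np.exp(1j * q_phase)
```

A real design would likely allocate bits between magnitude and phase unevenly depending on the signal's circularity, which is exactly where the statistics of improper/noncircular signals come in.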
homework-and-exercises, classical-mechanics, lagrangian-formalism, variational-principle, moment-of-inertia
Title: Euler-Lagrange Equation and moment of inertia I'm self studying and having trouble with the following question:
"Consider a solid of revolution of a given height. Determine the shape of the solid if it has the minimum moment of inertia about its axis."
The answer is a circular right cylinder, and the question is supposed to be solved using the Euler-Lagrange equation. My question is, since the problem only specifies the height, why isn't the answer just $r = 0$? Why can't you just crush all the mass onto the axis of rotation to minimize the moment of inertia? Are there some implicit assumptions I'm missing?
I got that the moment of inertia is the integral $\int_0^h \frac{1}{2} \rho \pi r^4 dz$. By the E-L equation, this just becomes $4 \cdot \mathrm{const} \cdot r^3 = 0$, or $r = 0$. I'm not sure what I'm doing wrong. One of the implicit assumptions that the question is using is that the mass of the solid is constant, and that the density of the solid is constant. If the mass weren't constant, you could set it to zero. If the density weren't constant, you could indeed crush the mass into a thin axis of infinite density, but that wouldn't be characteristic of many solids!
Constant mass means
$$\int_0^h \rho \pi r^2 \, \mathrm{d}z = M = \text{const.}$$
or, equivalently, as a constraint equation
$$G(r) = \int_0^h \rho \pi r^2 - \frac{M}{h} \, \mathrm{d}z = 0$$
(Constant density is easier to enforce: simply let the variable $\rho$ be constant)
Therefore, the problem is now: | {
"domain": "physics.stackexchange",
"id": 58163,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, classical-mechanics, lagrangian-formalism, variational-principle, moment-of-inertia",
"url": null
} |
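Since the constrained integrand $F = \tfrac{1}{2}\rho\pi r^4 + \lambda\,\rho\pi r^2$ contains no $r'(z)$, the Euler-Lagrange equation collapses to $\partial F/\partial r = 0$, whose nonzero root is independent of $z$ — i.e. a cylinder. A quick SymPy check of that algebra (symbol names are my own, and the $-M/h$ constant is dropped since it does not depend on $r$):

```python
import sympy as sp

r = sp.Symbol('r')       # profile radius at height z
lam = sp.Symbol('lam')   # Lagrange multiplier for the mass constraint
rho = sp.Symbol('rho', positive=True)

# Constrained integrand; no r'(z) dependence, so Euler-Lagrange is dF/dr = 0
F = sp.Rational(1, 2) * rho * sp.pi * r**4 + lam * rho * sp.pi * r**2

roots = sp.solve(sp.diff(F, r), r)
print(roots)  # r = 0 plus z-independent roots satisfying r**2 = -lam
```

The nonzero roots satisfy $r^2 = -\lambda$, a constant along the axis, so the minimizing profile has uniform radius.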
Title: If A and B are invertible, then AB is invertible. If A and B are invertible n×n matrices, then AB is invertible and $(AB)^{-1} = B^{-1}A^{-1}$: with $C = B^{-1}A^{-1}$ we have $(AB)C = I$ and $C(AB) = I$, which is exactly the definition of an invertible matrix, so C is the inverse matrix of AB. Conversely, if the product AB of square matrices is invertible, then both A and B are invertible. For n×n matrices we need only one of the two conditions $AB = I$ or $BA = I$: if we have $BA = I$, then $Ax = 0 \Rightarrow x = 0$, so A is invertible and B is its inverse. A product of elements of GL(n) lies again in GL(n), i.e. is again invertible. Related facts: if A is 6×6 and $Ax = b$ is consistent for every b, then A is invertible; a 5×7 matrix whose kernel has dimension 2 has column space $R^5$; a 2×2 matrix with entries a, b, c, d and determinant $ad - bc \neq 0$ is invertible; and if $A = PBP^{-1}$ with B invertible, then A is invertible. Hi, everyone ~ I read Linear Algebra by Hoffman & Kunze, too. Which of the following statements is not correct? For $Ax = 0$ to have nontrivial solutions, A must not be invertible. ( A | {
"domain": "ofiluz.com",
"id": null,
"lm_label": "1. YES\n2. YES\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9811668717616667,
"lm_q1q2_score": 0.8259714349357622,
"lm_q2_score": 0.8418256452674008,
"openwebmath_perplexity": 548.730639797647,
"openwebmath_score": 0.8680241107940674,
"tags": null,
"url": "http://ofiluz.com/lpeduchx/article.php?9ceeb8=if-a-and-b-are-invertible-then-ab-is-invertible"
} |
video-processing, video-compression
Title: HEVC Deblocking filter algorithm I'm doing research on block noise caused by video compression (especially in newer codecs) and I've read the IEEE article about the HEVC deblocking filter (here).
However, when reading the algorithm I'm confused because the formula seems wrong. The first formula to determine if the deblocking filter will affect a block is:
$ |p_{2,0} - 2p_{1,0}+p_{0,0}|+|p_{2,3} - 2p_{1,3}+p_{0,3}|+|q_{2,0} - 2q_{1,0}+q_{0,0}|+|q_{2,3} - 2q_{1,3}+q_{0,3}|>\beta \ (1)$
It is also written (and shown in a figure) that $\beta $ increases with the quantization parameter QP.
Moreover, later it is written that to choose between normal and strong deblocking, the following equation is used (with $i$ going from $0$ to $3$):
$ |p_{2,i} - 2p_{1,i}+p_{0,i}|+|q_{2,i} - 2q_{1,i}+q_{0,i}|<\beta/8 \ (2) $
Since IEEE is a respectable source and the paper is a couple of years old, it would surprise me if there were a typo. However, I do not understand how this formula can work. It seems to me that the second inequality would always fail for at least some $i$ if the first one is true.
Am I misunderstanding how the filter works or is it a typo in the paper? The ffmpeg implementation of the HEVC decoding follows this formula instead: | {
"domain": "dsp.stackexchange",
"id": 2847,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "video-processing, video-compression",
"url": null
} |
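To make the two decision rules concrete, here is a small Python sketch (the array layout and names are my own; `block[k, i]` stands for the sample $p_{k,i}$ or $q_{k,i}$, with $k$ the distance from the block edge and $i$ the line index, and the inequalities are implemented exactly as transcribed in the question): rule (1) looks only at boundary lines 0 and 3 of the four-line segment, while rule (2) is evaluated per line.

```python
import numpy as np

def d2(block, i):
    # second difference |x_{2,i} - 2*x_{1,i} + x_{0,i}| on line i
    return abs(int(block[2, i]) - 2 * int(block[1, i]) + int(block[0, i]))

def decision_1(p, q, beta):
    # Eq. (1) as written above: summed activity on lines 0 and 3 vs beta
    return d2(p, 0) + d2(p, 3) + d2(q, 0) + d2(q, 3) > beta

def decision_2(p, q, beta):
    # Eq. (2) as written above: per-line smoothness test against beta/8
    return all(d2(p, i) + d2(q, i) < beta / 8 for i in range(4))

# A flat pair of blocks triggers neither rule; a sharp second
# difference on the p side makes (1) true and (2) false.
flat = np.full((3, 4), 10)
edge = np.array([[0] * 4, [0] * 4, [100] * 4])
print(decision_1(edge, flat, 64), decision_2(edge, flat, 64))
```

This also makes the asker's observation mechanical to check: if the four-term sum in (1) exceeds $\beta$, one of its terms exceeds $\beta/4$, so the per-line sum on that line already exceeds $\beta/8$.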
python, performance, programming-challenge, python-2.x
Here num_list is the list of all the numbers in the series. Now for each of these valid intervals I again tried to get clever. If the next element in the product is greater than the first, then the product will increase.
Say num_list = [2 2 3 4 2] and length = 3 then prod([2 2 3]) < prod([2 3 4]) because 2 < 4. Given we have calculated prod([2 2 3]) one can calculate the next by doing prod([2 3 4]) = (4/2)*prod([2 2 3]).
In the opposite case one can skip calculating the product because the last value is smaller. Note that if we skip calculating the product, we cannot use the trick above for the next window, since we are missing the previous product. Altogether, I tried the following code:
from numpy import prod  # assumed: `prod` is not a Python 2 builtin, NumPy's works here

def substring_search(num_list_sub,length):
combo = True
product_max = 1
for i in range(length-1,len(num_list_sub)):
# If we are at the start, the end, or have just found
# a lower value we calculate the whole product
if i == length-1 or i == len(num_list_sub):
product = prod(num_list_sub[i-length+1:i+1])
product_max = max(product,product_max)
# If the next element is greater than the first element
# then the product is larger. Calculate the new product
elif num_list_sub[i] > num_list_sub[i-length]:
if combo:
product *= float(num_list_sub[i])/max(num_list_sub[i-length],1)
else:
product = prod(num_list_sub[i-length+1:i+1])
product_max = max(product, product_max)
combo = True
else:
combo = False
# If the next element is not greater than the first element
# set combo to zero and calculate the whole product in the next loop
return int(product_max) | {
"domain": "codereview.stackexchange",
"id": 15777,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, performance, programming-challenge, python-2.x",
"url": null
} |
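The divide-out/multiply-in idea described above can be written more directly by keeping a running window product and only recomputing from scratch when the element leaving the window is zero (this replaces the combo flag; the integer division is exact because the leaving element was a factor of the running product). A compact Python sketch, not the original submission:

```python
def max_window_product(nums, length):
    """Maximum product over all contiguous windows of the given length."""
    if length <= 0 or length > len(nums):
        return 0
    product = 1
    for v in nums[:length]:
        product *= v
    best = product
    for i in range(length, len(nums)):
        leaving = nums[i - length]
        if leaving == 0:
            # cannot divide out a zero: recompute the whole window product
            product = 1
            for v in nums[i - length + 1 : i + 1]:
                product *= v
        else:
            # exact integer division: `leaving` is a factor of `product`
            product = product // leaving * nums[i]
        best = max(best, product)
    return best
```

`max_window_product([2, 2, 3, 4, 2], 3)` returns 24, matching the example in the text.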
c++, template, iterator, c++17, interval
reverse_iterator &operator++() {
m_step_index += 1;
return *this;
}
reverse_iterator operator++(int) {
reverse_iterator retval = *this;
++(*this);
return retval;
}
bool operator==(reverse_iterator other) const {
return m_step_index == other.m_step_index
&& m_step_max == other.m_step_max
&& m_min == other.m_min
&& m_max == other.m_max;
}
bool operator!=(reverse_iterator other) const {
return !(*this == other);
}
T operator*() const{
//For integers this works perfectly. For floats you can't be exact, but
//it guarantees the endpoints when the step index is 0
//perfect stepping for integers
if constexpr (std::is_integral_v<T>) {
//negation shouldn't
return m_max -
((m_max - m_min) / m_step_max) * (m_step_index);
} else {
// floating point needs to be handled differently to
// guarantee that starts and ends are 0 and 1.
// no worry of error from range addition
return ((m_step_max - m_step_index) /
static_cast<T>(m_step_max)) * m_max +
(m_step_index / static_cast<T>(m_step_max)) * m_min;
}
}
// iterator traits
using difference_type = T;
using value_type = T;
using pointer = std::size_t;
using reference = T &;
using iterator_category = std::forward_iterator_tag;
private:
std::size_t m_step_index;
std::size_t m_step_max;
T m_min;
T m_max;
}; | {
"domain": "codereview.stackexchange",
"id": 37644,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, template, iterator, c++17, interval",
"url": null
} |
algorithms, graphs
Title: How to create disjoint sets out of an adjacency list I'm not a computer scientist, so please bear with me if I'm misusing some terminology.
What I'm trying to do is to find the different components of an undirected graph.
I have an array of pairs that represent the edges of my graph, like so:
[[1,3],[4,2],[2,1],[5,6],[100,20],[22,5]] and what I want to create out of this are arrays (or objects) containing the connected vertices (the disjoint sets or components), like this (for the above example): [[1,2,3,4],[5,6,22],[20,100]]
My first approach was to convert the array of pairs into an adjacency list; using hashmaps/tables/sets or something along those lines, it can be done in O(n) time and additional space (n being the number of pairs). Then, using BFS, I scan every vertex in the graph. Whenever the queue is empty, I have completed a component/disjoint set. If unvisited nodes are still in the graph, this starts a new component/disjoint set, until all the nodes are visited.
Now, I learned about the disjoint-set data structure.
My question: Using disjoint-set data structure how can I convert my array of pairs into the disjoint components of the graph I'm looking for? Any resources are also very much appreciated. I'm still learning and don't really know what I'm talking about.
What are the pros and cons of the first approach compared to the second approach.
Oh, BTW, if this is the wrong place to ask this question, please, advise!
Thanks in advance! In the disjoint-set approach you start from a collection of single vertices and repeatedly join the incident vertices of the edges. This will give you the connected components in $O(n+m\times(\text{find}+\text{union}))$. | {
"domain": "cs.stackexchange",
"id": 13846,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "algorithms, graphs",
"url": null
} |
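For the second approach, the translation from the edge list to components is short. A Python sketch of the disjoint-set idea (path compression included, union by rank omitted for brevity; names are my own), applied to the example from the question:

```python
def components(pairs):
    parent = {}

    def find(x):
        # locate the root of x's set, halving the path as we go
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in pairs:        # one union per edge
        union(a, b)

    groups = {}
    for v in parent:          # bucket every vertex under its root
        groups.setdefault(find(v), []).append(v)
    return sorted(sorted(g) for g in groups.values())

print(components([[1, 3], [4, 2], [2, 1], [5, 6], [100, 20], [22, 5]]))
# [[1, 2, 3, 4], [5, 6, 22], [20, 100]]
```

Compared with the adjacency-list-plus-BFS approach, this skips building the adjacency structure entirely: each edge is consumed once, at near-constant amortized cost per operation.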
ros, c++
Title: ros send message on startup doesn't seem to work I have the following code:
void NewCore::spin(){
ros::Rate rate(10);
int first =1;
while(ros::ok()){
if (first){
askMath();first=0;}
ros::spinOnce();
rate.sleep();
}
}
int main(int argc, char **argv){
ros::init(argc, argv, "newCore");
NewCore nc;
nc.init();
nc.spin();
}
void NewCore::init(){
mngrSub = handle.subscribe<std_msgs::String>("/tawi/core/launch", 10, &NewCore::launchCallback, this);
mngrPub = handle.advertise<std_msgs::String>("/tawi/core/launch", 100);
mathSub = handle.subscribe<std_msgs::String>("/display", 10, &NewCore::launchCallback, this);
serSub = handle.subscribe<std_msgs::String>("/tawi/arduino/serial", 100,&NewCore::serialCallback,this);
mathPub = handle.advertise<std_msgs::String>("/questions", 100);
ballPub = handle.advertise<std_msgs::Int16>("/tawi/core/ballcount", 100);
nmbrPub = handle.advertise<std_msgs::Int16>("/tawi/core/number", 100);
} | {
"domain": "robotics.stackexchange",
"id": 569,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, c++",
"url": null
} |
c++, object-oriented, rational-numbers
{
subed_num = ((this->numerator * two.denominator) + (two.numerator *
this->denominator));
common_denom = (this->denominator * two.denominator);
}
this->numerator = subed_num;
this->denominator = common_denom;
if (is_it_neg)
{
this->toggle_neg();
}
this->reduce();
return *this;
} | {
"domain": "codereview.stackexchange",
"id": 26679,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, object-oriented, rational-numbers",
"url": null
} |
php, unit-testing, cache
$this->assertSame(true, DatenCache::save("testCacheComplexParam", $aKonfigEins, "v1"));
$this->assertSame("v1", DatenCache::load("testCacheComplexParam", $aKonfigEins));
$this->assertSame(true, DatenCache::save("testCacheComplexParam", $aKonfigZwei, "v2"));
$this->assertSame("v2", DatenCache::load("testCacheComplexParam", $aKonfigZwei));
$this->assertSame(false, DatenCache::load("testCacheComplexParam", $aKonfigDrei));
$this->assertSame(false, DatenCache::load("testCacheComplexParam", $aKonfigVier));
$this->assertSame(false, DatenCache::load("testCacheComplexParam", $aKonfigFuenf));
$this->assertSame(false, DatenCache::load("testCacheComplexParam", $aKonfigSechs));
}
function testInvalidate() {
$this->assertSame(true, DatenCache::save("testCacheSimple", false, "testStringInvalidate"));
DatenCache::invalidate("testCacheSimple");
$this->assertSame(false, DatenCache::load("testCacheSimple", false));
} | {
"domain": "codereview.stackexchange",
"id": 653,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "php, unit-testing, cache",
"url": null
} |
java, object-oriented
In Method hasPredefinedCommands
When we take a look into the constructor of CommandLineInterface, we can find a method hasPredefinedCommands.
public CommandLineInterface(CommandLineInterpreter<?> cli, InputStream input, PrintStream output) {
if (cli == null || hasPredefinedCommands(cli.getCommands()))
Since the method gets only used at one place and is private we know that the argument commands will always be the commands of a CommandLineInterpreter.
private boolean hasPredefinedCommands(Set<? extends Command<?>> commands) {
return !Collections.disjoint(commands, getCommands());
}
This method should be renamed to contains and moved into CommandLineInterpreter.
public CommandLineInterface(CommandLineInterpreter<?> cli, InputStream input, PrintStream output) {
if (cli == null || cli.contains(getCommands()))
Responsibilities Of CommandLineInterface
You described the CommandLineInterface with
This class handles the user input from the command line. It is itself a CommandLineInterpreter and offers two basic commands: help and exit.
The emphasized and in the quote shows that the CommandLineInterface has two responsibilities:
handles the user input
offers two basic commands
Choose a Different Name Like CommandLineInteraction
The name CommandLineInterface is so abstract that it could be anything, and from a programmer's perspective I would think that this is an interface and not a class. Maybe the responsibilities of this class make more sense if we change the name to CommandLineInteraction.
With this name I associate things like the already existing method start, but also things like stop, clean, isRunning and so on.
Let's Remove the Second Responsibility
First we need to check the reason, why it currently is in CommandLineInterface. In the method CommandLineInterface.getCommands: | {
"domain": "codereview.stackexchange",
"id": 33655,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, object-oriented",
"url": null
} |
Hint: if $$8$$ is the largest single-digit factor, then doesn't it make sense that the largest such three-digit number would be in the $$8$$ hundreds.
Now, that was the hard way. Were there any handy mathematical observations you might have used to make this easier? Would knowing that $$24 = 2^3 \cdot 3$$ is the unique prime factorization of $$24$$ have helped you find the three-term factorizations? Would knowing there are $$k!$$ ways to arrange $$k$$ objects have helped? What if some of the objects were indistinguishable?
$$xyz = 2^3 \cdot 3$$
powers of $$2$$ can be divided among $$x,y,z$$ in
$$^{3+(3-1)}C_{3-1} = {}^{5}C_{2} = 10$$ ways
and the power of $$3$$ can be divided in $$^{1+(3-1)}C_{3-1} = {}^{3}C_{2} = 3$$ ways
thus giving a total of $$10\times 3 = 30$$ ways
but here we have also counted arrangements like $$2 \times 1\times 12$$ and $$1\times 1\times 24$$, which violate the three-digit requirement; so, by subtracting these, i.e. $$\left(3! + \dfrac{3!}{2!}\right)=9$$ ways,
we get
the total number of $$3$$-digit numbers having digit product $$4!$$ as $$(30-9)=21$$
and of all such $$3$$-digit numbers the largest is $$831$$
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9828232924970204,
"lm_q1q2_score": 0.8168667488874907,
"lm_q2_score": 0.8311430499496095,
"openwebmath_perplexity": 237.85726919527676,
"openwebmath_score": 0.4574279487133026,
"tags": null,
"url": "https://math.stackexchange.com/questions/2976347/how-many-3-digit-integers"
} |
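The count of $21$ and the maximum $831$ are easy to confirm by brute force over all three-digit numbers (a throwaway check, not part of the intended combinatorial solution):

```python
def digit_product(n):
    # product of the decimal digits of n
    p = 1
    while n:
        p *= n % 10
        n //= 10
    return p

hits = [n for n in range(100, 1000) if digit_product(n) == 24]
print(len(hits), max(hits))  # 21 831
```

The smallest hit is 138, the smallest arrangement of the factor triple {1, 3, 8}.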
javascript, algorithm, programming-challenge, time-limit-exceeded, depth-first-search
Title: Maze path finder using Depth-First Search algorithm I'm trying to resolve this kata from CodeWars.
Kata exercise
You are at position [0, 0] in maze NxN and you can only move in one of the four cardinal directions (i.e. North, East, South, West). Return true if you can reach position [N-1, N-1] or false otherwise.
Empty positions are marked '.'. Walls are marked 'W'. Start and exit positions are empty in all test cases.
I made a simple depth first search algorithm to solve the maze and check if there exists an exit.
But I noticed that my code is too slow and so it produces Execution Timed Out error.
I'm quite sure that switching to a heuristic algorithm could fix this, but I don't want to do that. I want to solve this using a depth-first search. Is it possible to improve the performance of this code?
I think the problem may be in the set, which doesn't have a pop method, so removing an item from it is quite difficult. But using an array would require even more time to check if an item already exists there.
function pathFinder(maze) {
// Turn maze from string to nested array
let chart = [];
for (let c of maze.split("\n")) {
chart.push([...c]);
}
let size = chart.length;
// Make toVisit frontier
let toVisit = new Set();
toVisit.add([0, 0]);
// Check if we already are in the exit
let end = size - 1;
// Size 1 maze
if (end == 0)
return true;
while (toVisit.size > 0) {
// Pop value from set
let pos = toVisit.values().next().value;
toVisit.delete(pos);
let [x, y] = pos;
chart[x][y] = "*"; // Mark as visited | {
"domain": "codereview.stackexchange",
"id": 36032,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, algorithm, programming-challenge, time-limit-exceeded, depth-first-search",
"url": null
} |
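For comparison, here is the same depth-first idea as a Python sketch (not the JavaScript fix itself): a plain list works as the DFS stack with O(1) push/pop, and visited cells are marked directly in the grid — reusing 'W' as the visited marker — so no set membership tests are needed at all. The same two changes (array-as-stack plus in-place marking) carry over to the JavaScript version.

```python
def path_finder(maze):
    chart = [list(row) for row in maze.split("\n")]
    n = len(chart)
    stack = [(0, 0)]
    chart[0][0] = "W"          # mark the start as visited
    while stack:
        x, y = stack.pop()     # O(1) pop from the end of the list
        if x == n - 1 and y == n - 1:
            return True
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n and chart[nx][ny] == ".":
                chart[nx][ny] = "W"    # mark on push: never visited twice
                stack.append((nx, ny))
    return False
```

Each cell is pushed at most once, so the whole search is linear in the number of cells.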
quantum-mechanics, operators, symmetry, hamiltonian, parity
\tilde P = e^{i\hat p /\hbar }~P~e^{-i\hat p /\hbar }.
$$
That is, the two parity operators are unitarily equivalent. You may confirm the change has no bearing on parity reflections of $\hat p$, which you already know since the origin of a variable is invisible to its derivative.
You may now proceed to apply the $\tilde P T$ operator on your hamiltonian. | {
"domain": "physics.stackexchange",
"id": 79180,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, operators, symmetry, hamiltonian, parity",
"url": null
} |
94
$100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001 = 3 \cdot 7 \cdot 13 \cdot 37 \cdot 283 \cdot 344887 \cdot \mbox{BIG}$
$392318858461667547739736838970286191634963299677388144641 = 3 \cdot 7 \cdot \mbox{BIG}$
94 GCD 21
95
$10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001 = 3 \cdot 31 \cdot 37 \cdot 21319 \cdot \mbox{BIG}$
$1569275433846670190958947355841530685282721029912780603393 = 7 \cdot 151 \cdot 32377 \cdot \mbox{BIG}$
95 GCD 1 m is ODD
96
$1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001 = 3 \cdot 19 \cdot 3169 \cdot 8929 \cdot 13249 \cdot 52579 \cdot 98641 \cdot 333667 \cdot \mbox{BIG}$
$6277101735386680763835789423286894578616619782057578463233 = 3 \cdot 19 \cdot 37 \cdot 73 \cdot 109 \cdot 433 \cdot 577 \cdot 1153 \cdot 6337 \cdot 38737 \cdot \mbox{BIG}$
96 GCD 57
97
$100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001 = 3 \cdot 37 \cdot 1747 \cdot 18043 \cdot \mbox{BIG}$
$25108406941546723055343157692989121989437950453043225952257 = 7 \cdot 272959 \cdot \mbox{BIG}$
97 GCD 1 m is ODD
98
$10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001 = 3 \cdot 7^3 \cdot 13 \cdot 37 \cdot 43 \cdot 127 \cdot 1933 \cdot 2689 \cdot 63799 \cdot 459691 \cdot \mbox{BIG}$
$100433627766186892221372630771639575307694744461798728007681 = 3 \cdot 7^3 \cdot 337 \cdot 5419 \cdot 748819 \cdot \mbox{BIG}$
98 GCD 1029
99
$1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001 = 3 \cdot 757 \cdot 55243 \cdot 198397 \cdot \mbox{BIG}$
$401734511064747568885490523085924475930664863146446560428033 = 262657 \cdot \mbox{BIG}$
99 GCD 1 m is ODD | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9828232904845686,
"lm_q1q2_score": 0.8190108187721422,
"lm_q2_score": 0.8333245932423308,
"openwebmath_perplexity": 1008.3705758246012,
"openwebmath_score": 0.633044958114624,
"tags": null,
"url": "https://math.stackexchange.com/questions/2440632/when-does-22m2m1-divide-102m10m1"
} |
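The listing above (per the URL, the numbers are $10^{2m}+10^m+1$ on the left and $2^{2m}+2^m+1$ on the right, for $m = 94, \ldots, 99$) can be reproduced directly; the GCD column matches a straightforward computation. A short Python check, with `a` and `b` as ad-hoc names:

```python
from math import gcd

def a(m):
    # left-hand numbers in the listing: 10^(2m) + 10^m + 1
    return 10 ** (2 * m) + 10 ** m + 1

def b(m):
    # right-hand numbers: 2^(2m) + 2^m + 1
    return 2 ** (2 * m) + 2 ** m + 1

for m in range(94, 100):
    print(m, "GCD", gcd(a(m), b(m)))
```

The even cases give GCDs 21, 57, and 1029, exactly the products of the small factors shared by both columns (3·7, 3·19, 3·7³), and the odd cases give 1.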
authentication
Title: Do ‘unspoofable’ email protocols exist? Been getting a lot more spam lately and this question came to mind.
We already use SSL certificates to authenticate websites, can we do something similar for email?
If so, why do I still receive spoofed emails? If you use PGP or S/MIME to sign emails with public-key cryptography, this can prevent spoofing of emails. The greatest challenges are key management, human factors, and convincing enough people to sign emails.
Therefore, in practice we use weaker mechanisms, like SPF, DKIM, and DMARC. | {
"domain": "cs.stackexchange",
"id": 16072,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "authentication",
"url": null
} |
If you're interested in what the matrix $$P$$ looks like, it can be written as $$P = \sum_{i,j = 1}^N (e_{i} \otimes e_j)(e_j \otimes e_i)^T$$ where $$e_i$$ denotes the $$i$$th canonical basis vector (the $$i$$th column of the identity matrix), and $$\otimes$$ denotes the Kronecker product. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9896718462621992,
"lm_q1q2_score": 0.8021081579003303,
"lm_q2_score": 0.8104789086703225,
"openwebmath_perplexity": 54.02291630366263,
"openwebmath_score": 0.9849892258644104,
"tags": null,
"url": "https://math.stackexchange.com/questions/3559398/check-if-a-large-matrix-containing-positive-definite-block-diagonal-matrices-is"
} |
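The formula can be checked numerically: built this way, $P$ permutes a column-major vectorization $\operatorname{vec}(A)$ into $\operatorname{vec}(A^T)$ (it is the commutation matrix), and it is its own inverse. A small NumPy sketch:

```python
import numpy as np

N = 3
I = np.eye(N)   # columns of the identity are the canonical basis vectors e_i

# P = sum_{i,j} (e_i ⊗ e_j)(e_j ⊗ e_i)^T
P = sum(
    np.outer(np.kron(I[:, i], I[:, j]), np.kron(I[:, j], I[:, i]))
    for i in range(N)
    for j in range(N)
)

A = np.arange(N * N, dtype=float).reshape(N, N)
vec = lambda M: M.flatten(order="F")   # column-major vec

print(np.allclose(P @ vec(A), vec(A.T)))  # P maps vec(A) to vec(A^T)
print(np.allclose(P @ P, np.eye(N * N)))  # swapping twice is the identity
```

Since swapping the tensor factors is an involution, $P$ is a symmetric permutation matrix with $P^2 = I_{N^2}$.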
navigation, rviz, ros-kinetic
Title: rviz cannot set initial pose with button 2D Pose Estimate
I'm a newbie to ROS Navigation and I will try to describe my problem in as much detail as possible. Basically, I'd like to implement the navigation stack with an RGBD camera. To this end, I use maplab (https://github.com/ethz-asl/maplab) with its localization mode enabled as my localization system (a global map is created beforehand with maplab to be used in navigation). I have followed the Navigation tutorials (http://wiki.ros.org/navigation/Tutorials/RobotSetup) to create the related tfs and publish the necessary topics. The application launches without any error and rviz shows the 2D global map correctly. Then I tried to click the button '2D Pose Estimate' in rviz to set the robot's initial pose. Now the problem is that there is no effect shown in rviz; the robot still stays in the same place. I checked the topic /initialpose which is published by rviz. Every time I click the button '2D Pose Estimate' and drag it to some place on the map, a message with position and orientation information is published on this topic. Why does rviz fail to set the initial pose of the robot?
I have some doubt about the subscribers of the topic /initialpose. In my navigation setup there is actually no subscriber of the topic /initialpose currently. But I found out that AMCL is the subscriber of the topic /initialpose when laser scans are used. In my case, maplab is the localization system. Do I need to do some work to make maplab subscribe to the topic /initialpose? If yes, how should the localization system use the subscribed topic /initialpose in general?
I'm really puzzled by this problem and any help is highly appreciated. Thanks in advance.
Best regards,
Gerry | {
"domain": "robotics.stackexchange",
"id": 34622,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "navigation, rviz, ros-kinetic",
"url": null
} |
vapor changes state from gas to liquid. See DEW POINT TABLE - CONDENSATION POINT GUIDE for the chart approach. Assume the Bose gas has spin 1. a) (10 points) In which of these dimensions will the Bose-Einstein condensation occur? Ponomarev, "Fundamentals of general topology: problems and exercises" , Reidel (1984) (Translated from Russian). When condensation occurs, mold and decay are very likely. We address the question of condensation and extremes for three classes of intimately related stochastic processes: (a) random allocation models and zero-range processes, (b) tied-down renewal processes, (c) free renewal processes. If A is a subset of a topological space X and x is a point of X, then x is an accumulation point of A if and only if every neighbourhood of x intersects A ∖ { x }. (a) Let Cdenote the set of condensation points of S. Assume that SrCis uncountable. 5. Students use the digital atlas of Idaho to study different weather patterns. condensation point definition in English dictionary, condensation point meaning, synonyms, see also 'condensation trail',condensation trail',condensational',condemnation'. The dew point gives an indication of the humidity. Namely. For Problems 6 and 7, let S be a subset of Rn and define a condensation point of S to be any point x∈ Rn such that every n-ball centered at x contains uncountably many points of S. 6. 1) I proved that $P$ is closed set. Then, by question 3, SrChas a condensation point belonging to SrC, i.e. First, students will read through the nonfiction text and graphics. It occurs at 212 degrees Fahrenheit or 100 degrees Celsius. condensation point in American English. Define a point $p$ in a metric space $X$ to be a condensation point of a set $E\subset X$ if every neighborhood of $p$ contains uncountably many points of $E$. Adherent, Accumulation, and Isolated Points in Metric Spaces. Arkhangel'skii, V.I. A point of $E^n$ such that every neighbourhood of it contains uncountable many points of the set. 
| {
"domain": "fortinlab.com",
"id": null,
"lm_label": "1. Yes\n2. Yes\n\n",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.98615138856342,
"lm_q1q2_score": 0.8550285252393962,
"lm_q2_score": 0.8670357666736772,
"openwebmath_perplexity": 781.9311909334685,
"openwebmath_score": 0.6414417028427124,
"tags": null,
"url": "http://www.fortinlab.com/wp-contnt/fo14r0pn/article.php?id=233270-condensation-point-math"
} |
You have a hidden assumption in the proof of your first proposition. If $\beta_{x}\leq d<d'<\beta_{\alpha}$ whenever $x<\alpha ,$ then in order that the map that sends $d$ to $(\alpha,f_{\alpha}(d))$ be 1-to-1, you need each$f_{\alpha}:\beta_{\alpha}\to |\beta_{\alpha}|$ to be 1-to-1. Now for each $\alpha$, such $f_{\alpha}$ exists but it is generally not unique, so we cannot apply the Replacement Axiom to assert the existence of the sequence $(f_{\alpha})_{\alpha < \beta}.$ This is where AC is hiding.
Even when trying to prove in ZF that $\omega_1$ is regular, we hit this difficulty when we try to show in ZF that a countable union of countable ordinals is countable.
• The precise consistency strength of $\mathsf{ZF}$ + "all uncountable cardinals are singular" is still open, though. – Andrés E. Caicedo Apr 27 '16 at 3:15
• Thanks for your answer. So does that mean the characterization of cf($\kappa$) as a sum requires AC as well? Can AC be avoided somehow in the proof? – Kaa1el Apr 27 '16 at 11:53
• If I understand you, we can, in ZF, define $cf(k)$ as the least $m\in ON$ such that $k=\sum_{a\in m}f(a)$ for some $f:m\to k.$ See the above comments. Proving in ZF that $\omega_1$ is regular would prove that a measurable cardinal doesn't exist, which I think would be quite a shock, and might win you a Fields medal. – DanielWainfleet Apr 27 '16 at 21:54
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9796676496254957,
"lm_q1q2_score": 0.8009512633091551,
"lm_q2_score": 0.8175744739711883,
"openwebmath_perplexity": 232.4453529700591,
"openwebmath_score": 0.930204451084137,
"tags": null,
"url": "https://math.stackexchange.com/questions/1760245/does-the-regularity-of-omega-alpha1-need-axiom-of-choice"
} |
quantum-mechanics, homework-and-exercises, energy, schroedinger-equation, potential
Title: Particle in infinite square well with $\langle E \rangle = (1/4) E_1 + (3/4) E_4$ Suppose we have a particle in an infinite square well with $\langle E \rangle = (1/4) E_1 + (3/4) E_4$. I know that I can calculate the uncertainty in the particle's energy by $\sqrt{\langle E^2 \rangle-\langle E \rangle^2}$. I can calculate $\langle E \rangle^2$ easily as I know $\langle E \rangle$. But how would I go about calculating $\langle E^2 \rangle$? You should note that $\left< E^2 \right>$ is actually $\left<\phi| E^2|\phi \right>$, so to calculate it we have to determine $|\phi\rangle$ first.
from $\langle E \rangle = (1/4) E_1 + (3/4) E_4$ we can speculate that
$$|\phi\rangle=(1/2)|\phi_1\rangle+(\sqrt{3}/2)|\phi_4\rangle$$
and the calculation becomes:
$$\left<\phi| E^2|\phi \right>= \langle \phi| E^2 \biggl((1/2)|\phi_1\rangle+(\sqrt{3}/2)|\phi_4\rangle \biggr)$$
$$=(1/4)\langle \phi_1 | E^2|\phi_1\rangle+(3/4)\langle \phi_4 | E^2|\phi_4\rangle=(1/4)E_1^2+(3/4)E_4^2$$
note that cross terms like $\langle \phi_1 | E^2|\phi_4\rangle$ vanish because of the orthogonality relation. | {
"domain": "physics.stackexchange",
"id": 29312,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, homework-and-exercises, energy, schroedinger-equation, potential",
"url": null
} |
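The energy statistics in the answer above are easy to sanity-check numerically. A minimal sketch, assuming (as the answer does) an infinite square well with $E_n = n^2 E_1$ and taking $E_1 = 1$ in arbitrary units; all variable names are illustrative:

```python
# Check <E>, <E^2>, and Delta E for |phi> = (1/2)|phi_1> + (sqrt(3)/2)|phi_4>
# in an infinite square well, where E_n = n^2 * E_1 (E_1 = 1.0, arbitrary units).
import math

E1 = 1.0
energies = {1: 1**2 * E1, 4: 4**2 * E1}   # E_1 and E_4
probs = {1: 0.25, 4: 0.75}                # |c_n|^2 = (1/2)^2 and (sqrt(3)/2)^2

mean_E = sum(p * energies[n] for n, p in probs.items())      # 1/4*E1 + 3/4*E4
mean_E2 = sum(p * energies[n]**2 for n, p in probs.items())  # 1/4*E1^2 + 3/4*E4^2
dE = math.sqrt(mean_E2 - mean_E**2)

print(mean_E, mean_E2, dE)   # 12.25  192.25  ~6.495
```

With $E_1 = 1$ this gives $\langle E\rangle = 12.25$, $\langle E^2\rangle = 192.25$, and $\Delta E = \sqrt{42.1875} \approx 6.50$, matching the closed-form answer.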
• thank you very much for your response. However, I'm not sure I completely understand your answer. Can you please elaborate a little more? Why is there only one Sylow 2-subgroup if the group is abelian? And how does one quickly find all elements whose order is a power of two? And I also don't see how ord(a,b)=lcm(ord(a),ord(b)) tells me that it is generated by (−1,0) and (0,x) – strangeattractor Jun 16 '18 at 11:57
• If the group is abelian, then it acts trivially on itself by conjugation, so conjugate subgroups are equal. The $lcm$ formula tells you that it suffices to find elements of order a power of two in the factors of the product. The unique element of order two in $\mathbb{Z}_n^\times$ is $-1$, and elements of order 4 are square roots of $-1$. No higher power of two is an order of an element of the factors since their orders are $10$ and $12$. – Joshua Mundinger Jun 16 '18 at 12:01
• thank you again. I was suspecting it had something to do with actions. I see it more clearly now. However, there are still things that are not clear to me. This might be a silly question, but is -1 always the only element of order two even if I'm not working in $\mathbb Z^\times _{prime}$? And why is there no element of order 8 again? Does it have something to do with 2*4=8, because I have elements of order 4? I really have to revise modular arithmetic it seems – strangeattractor Jun 16 '18 at 12:09
• $(-1)^2 = 1$ holds in any ring. There is no element of order $8$ in $\mathbb{Z}_{11}^\times$ or $\mathbb{Z}_{13}^\times$ by Lagrange's theorem, since neither of them has order divisible by 8. – Joshua Mundinger Jun 16 '18 at 12:11
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9838471618625456,
"lm_q1q2_score": 0.811144526329316,
"lm_q2_score": 0.8244619263765706,
"openwebmath_perplexity": 101.37988283437657,
"openwebmath_score": 0.8468637466430664,
"tags": null,
"url": "https://math.stackexchange.com/questions/2821514/finding-a-2-sylow-subgroup-of-left-mathbb-z-times-11-times-mathbb-z-tim"
} |
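The order-counting claims in these comments can be verified by brute force. A quick sketch (the `order` helper is illustrative, not from the thread):

```python
# Verify the comments: in Z_11^x (order 10) and Z_13^x (order 12), the unique
# element of order 2 is -1, elements of order 4 are square roots of -1, and
# no element has order 8 (8 divides neither 10 nor 12, per Lagrange).
def order(a, n):
    """Multiplicative order of a modulo n (a must be a unit mod n)."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

order2_mod11 = [a for a in range(1, 11) if order(a, 11) == 2]
order2_mod13 = [a for a in range(1, 13) if order(a, 13) == 2]
order4_mod13 = [a for a in range(1, 13) if order(a, 13) == 4]
order8_any = [a for a in range(1, 11) if order(a, 11) == 8] \
           + [a for a in range(1, 13) if order(a, 13) == 8]

print(order2_mod11, order2_mod13)  # [10] [12], i.e. -1 in each group
print(order4_mod13)                # [5, 8]; 5^2 = 25 = -1 (mod 13)
print(order8_any)                  # [] -- no elements of order 8
```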
quantum-mechanics, homework-and-exercises, double-slit-experiment, path-integral
\end{align}
$$
where $(\Delta q)^2 = \sum\limits_{i=1}^{n}(q_i)^2$ now.
Next, suppose the slit has a width $\epsilon$, so that it is located between $y=\pm a-\epsilon$ and $y=\pm a+\epsilon$. This means that the particle has to pass through $(x,y)$ where $y\in [-a-\epsilon, -a+\epsilon]\cup[a-\epsilon, a+\epsilon]$. We take this into account by adding an additional integral over $y$. Writing $(x_1,y_1)$ for the location of the first slit it passes through and likewise for the second slit, we get
$$\begin{align}
\langle B,T | A,0 \rangle &= \int_{0}^{T} dt_{1} \int\limits_{-a-\epsilon}^{-a+\epsilon} dy_1 \langle B,T |(x_1,y_1),t_{1} \rangle
\langle (x_1,y_1),t_{1}| A,0 \rangle \\
&+ \int_{0}^{T} dt_{1} \int\limits_{a-\epsilon}^{a+\epsilon} dy_1 \langle B,T |(x_1,y_1),t_{1} \rangle
\langle (x_1,y_1),t_{1}| A,0 \rangle
\end{align}$$
Inserting the expression for the free propagator and performing the integral should then give you the final expression. I haven't done this calculation and it is not certain there is an expression in terms of elementary functions (i.e. you need to check whether the integral can be computed explicitly). | {
"domain": "physics.stackexchange",
"id": 38266,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, homework-and-exercises, double-slit-experiment, path-integral",
"url": null
} |
c++, beginner, game, sdl
// Draw Players
player1.draw(this->getRenderer());
player2.draw(this->getRenderer());
ball.draw(this->getRenderer());
score1.draw(this->getRenderer());
score2.draw(this->getRenderer());
//Switch renderer with backsurface
SDL_RenderPresent(this->_renderer);
}
void Game::update()
{
//update players
player1.update();
player2.update();
ball.update();
//check for collision
ball.collisionCheck(player1, player2);
// check if someone wins
if (ball.getX() < 0)
{
ball.resetBall();
score1.increment();
}
if (ball.getX() + ball.getW() > globals::SCREEN_WIDTH)
{
ball.resetBall();
score2.increment();
}
}
void Game::gameLoop()
{
SDL_Event event;
Input input;
while (!this->_quitFlag)
{
while (SDL_PollEvent(&event))
{
if (event.type == SDL_QUIT)
{
_quitFlag = true;
break;
}
else if (event.type == SDL_KEYDOWN)
{
input.ButtonPressed(event.key.keysym.sym);
}
else if (event.type == SDL_KEYUP)
{
input.ButtonReleased(event.key.keysym.sym);
}
} | {
"domain": "codereview.stackexchange",
"id": 27826,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, beginner, game, sdl",
"url": null
} |
javascript, php, jquery, css, chess
// PIECE
if ( ! $piece ) {
$graphical_board_array[$rank][$file]['piece'] = '';
} else {
$graphical_board_array[$rank][$file]['piece'] = $this->board[$rank][$file]->get_unicode_symbol();
}
}
}
return $graphical_board_array;
}
function get_side_to_move_string() {
$string = '';
if ( $this->color_to_move == 'white' ) {
$string .= "White To Move";
} elseif ( $this->color_to_move == 'black' ) {
$string .= "Black To Move";
}
return $string;
}
function get_who_is_winning_string() {
$points = 0;
foreach ( $this->board as $key1 => $value1 ) {
foreach ( $value1 as $key2 => $piece ) {
if ( $piece ) {
$points += $piece->value;
}
}
}
if ( $points > 0 ) {
return "Material: White Ahead By $points";
} elseif ( $points < 0 ) {
$points *= -1;
return "Material: Black Ahead By $points";
} else {
return "Material: Equal";
}
}
function invert_rank_or_file_number($number) {
$dictionary = array(
1 => 8,
2 => 7,
3 => 6,
4 => 5,
5 => 4,
6 => 3,
7 => 2,
8 => 1
);
return $dictionary[$number];
}
function number_to_file($number) {
$dictionary = array(
1 => 'a',
2 => 'b',
3 => 'c',
4 => 'd',
5 => 'e',
6 => 'f',
7 => 'g',
8 => 'h'
); | {
"domain": "codereview.stackexchange",
"id": 31987,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, php, jquery, css, chess",
"url": null
} |
c++, multithreading, boost
task1_grp->start();
task2_grp->start();
// Producers should call task1_grp.push(JobItem*) or task2_grp.push(JobItem*) to dispatch new jobs to thread pools
} Design
Not sure why you have a virtual function on the worker?
virtual void exec_job(JobItem* job_item) = 0;
The worker is simple: he picks up jobs and then does them. What the job is defines what work should be done. So I would have put the virtual function that defines the work on the JobItem.
The run() function of the worker is then simply.
void run()
{
// You want some way for the thread to eventually exit.
// You can make that happen by letting the queue return
// a null object when the object is being shut down.
JobItem* job;
while(job = shared_job_queue_->pick())
{
job->execute();  // renamed from do(): 'do' is a reserved keyword in C++
delete job;
}
}
Code Review
Prefer not to use pointers.
Pointers don't have the concept of ownership semantics. So we don't know who is responsible for deleting them. Use a type that conveys ownership semantics so the person reading the code understands the semantics.
JobQueue* shared_job_queue_;
You always need a job queue. So this can never be nullptr. So rather than a pointer a better choice would have been a reference. Also a reference implies there is no ownership of the object so you are not supposed to delete it.
class JobWorker
{
private:
JobQueue& shared_job_queue_;
public:
JobWorker(JobQueue& shared_queue)
: shared_job_queue_(shared_queue)
{}
Again in the JobQueue. You push and pop pointers. So there is no concept of ownership.
class JobQueue
{
private:
queue<JobItem*> job_queue_;
public:
// Definitely I need to use mutex and condition variable here
JobItem* pick();
void push(JobItem* job_item);
}; | {
"domain": "codereview.stackexchange",
"id": 18934,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, multithreading, boost",
"url": null
} |
nlp, semi-supervised-learning, pretraining
Title: What are the differences between self-supervised/semi-supervised in NLP? GPT-1 mentions both semi-supervised learning and unsupervised pre-training, but they seem like the same thing to me. Moreover, "Semi-supervised Sequence Learning" by Dai and Le also reads more like self-supervised learning. So what are the key differences between them? Semi-supervised learning means having labels for only a fraction of the data, while in self-supervised learning no labels are available at all. Imagine a huge question/answer dataset. No one labels that data, but you can learn question answering anyway, right? Because you are able to learn the relation between question and answer from the data itself.
Or, in modeling documents, you need sentences which are similar and sentences which are dissimilar in order to learn document embeddings, but such detailed labels are usually not available. In this case you count sentences from the same document as similar and sentences from two different documents as dissimilar and train your model (example idea: you can run topic modeling on the data to make the similar/dissimilar labels more accurate). This is called self-training.
"domain": "datascience.stackexchange",
"id": 9601,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "nlp, semi-supervised-learning, pretraining",
"url": null
} |
physical-chemistry, quantum-chemistry, molecular-orbital-theory
Title: Help with derivation on why antibonding orbitals are more antibonding I have read How can antibonding orbitals be more antibonding than bonding orbitals are bonding?. But I am intersted in a specific derivation. During lecture my professor stated that the overlap $S_{ij}$ integral can be between -1 and 1. In other words,
$$-1<\int\psi^*_i\psi_j\,dx<1$$
He further mentioned that a negative overlap integral means that the orbitals are antibonding. He also mentioned that a positive overlap integral means that the orbitals are bonding, and that an overlap integral equal to 0 means that the orbitals are nonbonding. In other words,
$$S_{ij}<0\Rightarrow\text{antibonding}$$
$$S_{ij}>0\Rightarrow\text{bonding}$$
$$S_{ij}=0\Rightarrow\text{nonbonding}$$
We then solved the Schrodinger equation using the LCAO method. In this case, we were solving for a hydrogen molecule. We only considered the 1s orbitals of each hydrogen atom. In other words,
$$\Psi_{H_2}=c_1\psi_{1s}+c_2\psi_{1s'}$$
Here, $\psi_{1s}$ and $\psi_{1s'}$ are the atomic orbitals of each hydrogen atom.
The derivation that the professor used is similar to the one here: http://www.pci.tu-bs.de/aggericke/PC4e/Kap_II/H2-Ion.htm
In the end he got two wavefunctions with different coefficients
$$\Psi_{H_2}=\frac{1}{\sqrt{1+S_{ij}}}(\psi_{1s}+\psi_{1s'})$$ | {
"domain": "chemistry.stackexchange",
"id": 7318,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "physical-chemistry, quantum-chemistry, molecular-orbital-theory",
"url": null
} |
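A toy numerical illustration of the overlap integral bound quoted above. This sketch uses two normalized 1D Gaussians rather than real hydrogen 1s orbitals, and it uses the common textbook normalization $1/\sqrt{2(1+S)}$ for the symmetric combination, which may differ from the professor's convention by a factor absorbed elsewhere; all names are illustrative:

```python
# Numerically estimate the overlap integral S = int psi_A psi_B dx for two
# normalized 1D Gaussians separated by distance 2, and check -1 < S < 1 and
# the normalization of the bonding combination (psi_A + psi_B)/sqrt(2(1+S)).
import numpy as np

x = np.linspace(-20.0, 20.0, 20001)
dx = x[1] - x[0]

def gaussian(x, center):
    psi = np.exp(-0.5 * (x - center) ** 2)
    return psi / np.sqrt(np.sum(psi ** 2) * dx)   # normalize so <psi|psi> = 1

psi_A = gaussian(x, -1.0)
psi_B = gaussian(x, +1.0)

S = np.sum(psi_A * psi_B) * dx      # here S = e^{-1} ~ 0.368, so 0 < S < 1
print(S)

psi_plus = (psi_A + psi_B) / np.sqrt(2.0 * (1.0 + S))
norm_plus = np.sum(psi_plus ** 2) * dx
print(norm_plus)                    # ~1.0
```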
ros-fuerte, arm-navigation
I have seen similar posts to this, but they seem to imply that there are left over packages from a previous install. Mine is a fresh install, so this is not the issue. Help would be much appreciated!
Originally posted by Steveb on ROS Answers with karma: 26 on 2012-07-18
Post score: 0
This problem seems to be solved. I tried the exact same command this afternoon and it worked (it did not work this morning or last night...). Not sure what happened here.
Originally posted by Steveb with karma: 26 on 2012-07-19
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 10264,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros-fuerte, arm-navigation",
"url": null
} |
4. The direct sum of rings $\Bbb{R} \oplus \Bbb{R}$ that also has the additional structure of being a 2-dimensional $\Bbb{R}$ - algebra.
5. Let $X$ be a compact Hausdorff space with more than one point. Then $C(X)$, the ring of all continuous real-valued functions on $X$, is an example of a commutative ring.
6. The localisation of $\Bbb{Z}$ at the prime ideal $(5)$. The resulting ring, $\Bbb{Z}_{(5)}$, is the set $$\left\{\frac{a}{b} : b \text{ is not a multiple of 5} \right\}$$ and is a local ring, i.e. a ring with only one maximal ideal.
7. I believe when $G$ is a cyclic group, the endomorphism ring $\textrm{End}(G)$ is an example of a commutative ring.
Examples of Fields:
1. $\Bbb{F}_{2^5}$
2. $\Bbb{Q}(\zeta_n)$
3. $\Bbb{R}$
4. $\Bbb{C}$
5. The fraction field of an integral domain
6. More generally given an algebraic extension $E/F$, for any $\alpha \in E$ we have $F(\alpha)$ being a field.
7. The algebraic closure $\overline{\Bbb{Q}}$ of $\Bbb{Q}$ in $\Bbb{C}$. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9674102514755853,
"lm_q1q2_score": 0.8019239010160928,
"lm_q2_score": 0.8289388083214156,
"openwebmath_perplexity": 225.74440398902047,
"openwebmath_score": 0.85551518201828,
"tags": null,
"url": "http://math.stackexchange.com/questions/164114/is-there-any-difference-between-the-definition-of-a-commutative-ring-and-field"
} |
c, game, playing-cards, curses
} else {
mvprintw(0,0, "You can't play that card!");
curs_set(1);
clrtoeol();
getch();
curs_set(0);
continue;
}
break;
case 'Q': /* Quit game */
mvprintw(0,0, "Really quit?");
curs_set(1);
clrtoeol();
if((key=tolower(getch())) == 'y') {
endwin();
clean(discard);
clean(pile);
for(i=0; i<plrs; i++) {
clean(pl[i]);
free(plname[i]);
}
printf("Buhbye now :3\n");
return 0;
} else
continue;
default:
continue;
}
turn = 0; /* only reached on break, not continue */
} while(turn);
} while(length(pl[i]) > 0); /* round ends when a
* player has no cards */
/* Won round (or game, in quick mode) */
clear(); | {
"domain": "codereview.stackexchange",
"id": 16566,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c, game, playing-cards, curses",
"url": null
} |
$v_0 = 0$
$v_1 = 0.9v_0 + 0.1x_1$
$v_2 = 0.9v_1 + 0.1x_2$
$v_3 = 0.9v_2 + 0.1x_3$
$…$
Expanding out for v3, we get
$v_3 = 0.9(0.9v_1 + 0.1x_2) + 0.1x_3$
$v_3 = 0.9(0.9(0.9v_0 + 0.1x_1) + 0.1x_2) + 0.1x_3$
$v_3 = 0.9(0.9(0.9(0) + 0.1x_1) + 0.1x_2) + 0.1x_3$
Reducing, it quickly becomes obvious where the “Exponential” part comes in
$v_3 = 0.9^2 *0.1x_1 + 0.9*0.1x_2 + 0.1x_3$
### Another coefficient
In this example we used values 0.9 and 0.1. More generally, we pick values beta and 1 - beta that add up to one.
$v_t = \beta v_{t-1} + (1 - \beta)x_t$
And since beta is less than one, the powers of beta attached to older observations shrink closer and closer to zero, giving less and less weight to historic values.
## Properties
### Smoothness
Because this weighting is multiplicative across all observations, it makes a much smoother curve. Compare our previous implementation of a naive rolling average
rolling = pd.Series(y).rolling(3).mean()
ax = make_fig(X, y)
ax.plot(X, rolling, linewidth=3, color='r');
To the less-noisy EWM approach
new_rolling = pd.Series(y).ewm(3).mean() | {
"domain": "github.io",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.975946442208311,
"lm_q1q2_score": 0.825594508213742,
"lm_q2_score": 0.8459424334245618,
"openwebmath_perplexity": 2551.356105578032,
"openwebmath_score": 0.6380066871643066,
"tags": null,
"url": "https://napsterinblue.github.io/notes/stats/techniques/ewma/"
} |
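The recurrence above is short enough to implement from scratch and check against its expanded form. Note that pandas' `Series.ewm` parametrizes the decay differently (`com`/`span`/`alpha`) and applies bias correction by default (`adjust=True`), so its output will not match this raw recurrence exactly:

```python
# From-scratch exponentially weighted moving average implementing the
# recurrence v_t = beta * v_{t-1} + (1 - beta) * x_t, with v_0 = 0.
def ewma(xs, beta):
    v, out = 0.0, []
    for x in xs:
        v = beta * v + (1.0 - beta) * x
        out.append(v)
    return out

xs = [1.0, 2.0, 3.0]
vs = ewma(xs, beta=0.9)

# v_3 = 0.9^2*0.1*x_1 + 0.9*0.1*x_2 + 0.1*x_3  (the expanded form above)
expected_v3 = 0.9**2 * 0.1 * 1.0 + 0.9 * 0.1 * 2.0 + 0.1 * 3.0
print(vs[-1], expected_v3)   # both 0.561
```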
javascript, node.js
I think it would also be useful to follow the standard Javascript naming conventions and use camelCase for variable names in most cases.
All together:
const fail = msg => monet.Validation.fail([{ msg, param: 'id' }]);
async function validateRequestArgs(req, requestType) {
const { id } = req[requestType === 'post' ? 'body' : 'query'];
if (!id) {
return fail('Missing id of the problem');
}
const problem = await models.Problem.findByPk(id);
if (!problem) {
return fail(`Problem with id ${id} does not exist`);
}
return monet.Validation.success(problem);
} | {
"domain": "codereview.stackexchange",
"id": 38147,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, node.js",
"url": null
} |
(x = 0), point O has a velocity of 1 ft/sec to the right. A small ball of mass 0. the center of mass of the object… moves with speed v cm = Rω; moves in a straight line in the absence of a net external force; the point fathest from the point of contact… moves with twice the speed of the center of mass v = 2v cm = 2Rω; Rolling and Slipping rolling without slipping v cm = Rω; slipping and rolling forward. The disk rolls without slipping. Chapter 2 Rolling Motion; Angular Momentum 2. Acceleration is measured in metres per second per second (or metres per second squared, abbreviated to m/s 2). a linear acceleration and an angular acceleration. A uniform solid sphere rolls down an incline without slipping. Let #g# be acceleration due to gravity. For a disc rolling without slipping on a horizontal rough surface with uniform angular velocity, the acceleration of lowest point of disc is directed vertically upwards and is not zero (Due to translation part of rolling, acceleration of lowest point is zero. From what minimum height above the bottom of the track must the marble be released in order not to leave the track at the top of the loop. Rolling can be viewed as a combination, or superposition, of purely translational motion (moving a wheel from one place to another with no rotation) and purely rotational motion (only rotation with no movement of the center of the wheel). 5 kg and radius R = 20 cm, mounted on a fixed horizontal axle. 21 m/s2, what is View the step-by-step solution to:. Express all solutions in terms of M, R, H, θ, and g. b) Find minimum coefficient of static friction that makes such rolling without slipping possible. Question: A solid sphere rolls down an inclined plane without slipping. If its mass is distributed as shown in the figure, what is the value of the ratio of the kinetic energy of translation to the kinetic energy of rotation about its center of mass? Title: Microsoft. 
The center of mass of the bicycle is moving with a constant speed V in the positive x-direction. We will calculate the torque of the forces with respect to the center of. h m 1 m 2 m 2> m1 and rope turns pulley without slipping. (a) Show that “rolling without slipping” means that the speed of the cylinder’s center of mass, v cm, is equal to | {
"domain": "amicidellacattolica.it",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9817357227168956,
"lm_q1q2_score": 0.8284819761123067,
"lm_q2_score": 0.8438951104066293,
"openwebmath_perplexity": 340.4795727430887,
"openwebmath_score": 0.5913660526275635,
"tags": null,
"url": "http://xuqc.amicidellacattolica.it/acceleration-of-center-of-mass-rolling-without-slipping.html"
} |
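The standard result behind the excerpt above, that a body rolling without slipping down an incline has $a_{cm} = g\sin\theta/(1 + I_{cm}/mR^2)$, can be sketched as follows (function and variable names are illustrative):

```python
# Acceleration of the center of mass for rolling without slipping down an
# incline: a = g*sin(theta) / (1 + c), where I_cm = c*m*R^2.
import math

g = 9.81          # m/s^2

def rolling_acceleration(theta_deg, c):
    """c = I_cm / (m R^2): 2/5 solid sphere, 1/2 solid cylinder, 1 hoop."""
    return g * math.sin(math.radians(theta_deg)) / (1.0 + c)

theta = 30.0
a_sphere = rolling_acceleration(theta, 2.0 / 5.0)
a_cylinder = rolling_acceleration(theta, 1.0 / 2.0)
a_hoop = rolling_acceleration(theta, 1.0)
print(a_sphere, a_cylinder, a_hoop)   # sphere rolls fastest

# Rolling constraint from the excerpt: v_cm = R*omega, and the point at the
# top of the wheel moves at 2*v_cm.
```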
organic-chemistry, nitro-compounds, aromaticity
For a general understanding of this feature it certainly does not hurt to keep it in the back of one's head.[4]
However, the applicability of this concept is limited to a quite small subset of compounds (the same holds true for Hückel's rule, too). It therefore seems to be a quite superfluous concept, since in a more general sense it is already covered by resonance stabilisation.
After all, electronic structures are often not well enough described with simple models, and in some cases these simple models can lead to wrong assumptions. This is one of those cases where one can only use this concept as long as its restrictions are well known.
Some people might argue that the naming itself is the problem and that it may lead to confusion. I can partly agree with that. That does not change the fact that the description in resonance, valence bond, and molecular orbital theory terms will produce this result. In general I think the term aromaticity was unfortunate right from the beginning. The concept has been changed and extended quite a few times already. I often see problems when the term "Hückel rule" is used synonymously with aromaticity.
There is no theoretical explanation of aromaticity that is at the same time fully general. It is often determined experimentally on a case-by-case basis. This limitation also has to apply to any derivations of it.
Electronic structure of the trinitromethane anion (TNM⊖)
Trinitromethane is a quite acidic substance, which can in part be attributed to the presence of the well-delocalised π system. Cioslowski et al. have investigated the anion and came to this conclusion. In their paper they also use the term Y-aromaticity.[5]
"domain": "chemistry.stackexchange",
"id": 3930,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "organic-chemistry, nitro-compounds, aromaticity",
"url": null
} |
spinors, helicity
\end{pmatrix}
$$
finally, the zero gamma matrix is
$$
\gamma^0=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&-1&0\\0&0&0&-1
\end{pmatrix}
$$
thus
$$
\bar u=u^\dagger\gamma^0=\sqrt{E}\begin{pmatrix} c & s\mathrm e^{-i\phi} & -c & -s\mathrm e^{-i\phi}
\end{pmatrix}
$$
Hope this answers your question. | {
"domain": "physics.stackexchange",
"id": 26492,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "spinors, helicity",
"url": null
} |
So try two coins (n=2). All outcomes:
TT | TH | HT | HH
Four possibilities. Only one matches your criteria. It is worth noting that the probability of one being heads and the other being tails is 2/4 because two possibilities of the four match your criteria. But there is only one way to get all heads.
TTT | THT | HTT | HHT
TTH | THH | HTH | HHH
8 possibilities. Only one fits the criteria - so a 1/8 chance of all heads.
The pattern is (1/S)^n; here, with three coins, (1/2)^3.
For dice S = 6, and we have 6 of them.
Probability of getting a 6 on any given roll is 1/6. Rolls are independent events. So using 2 dice getting all 6's is (1/6)*(1/6) or 1/36.
(1/6)^6 is 1 in 46,656
• I was actually just looking for the probability of ANY heads showing up, given as many rolls as there are faces on the die. But, cheers mate, I got my answer. Jul 28, 2020 at 0:22 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9863631667229061,
"lm_q1q2_score": 0.8552121258393179,
"lm_q2_score": 0.8670357477770337,
"openwebmath_perplexity": 504.7102416523403,
"openwebmath_score": 0.725549042224884,
"tags": null,
"url": "https://stats.stackexchange.com/questions/479117/what-are-the-chances-rolling-6-6-sided-dice-that-there-will-be-a-6"
} |
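Both the exact count and the commenter's actual question (at least one six showing up in six rolls) are easy to compute; a small sketch with a Monte Carlo cross-check (names illustrative):

```python
# Exact probabilities for six fair six-sided dice, plus the probability the
# commenter actually wanted: at least one 6 in 6 rolls.
import random
from fractions import Fraction

p_all_sixes = Fraction(1, 6) ** 6           # 1/46656
p_at_least_one = 1 - Fraction(5, 6) ** 6    # 1 - (5/6)^6, about 0.665
print(p_all_sixes, float(p_at_least_one))

# Monte Carlo sanity check of the at-least-one-6 probability:
random.seed(0)
trials = 200_000
hits = sum(any(random.randint(1, 6) == 6 for _ in range(6))
           for _ in range(trials))
print(hits / trials)   # close to 0.665
```

Note the common intuition trap: six rolls of a six-sided die give roughly a 66.5% chance of seeing a six, not a certainty.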
## References
1. ^ "Bertrand's box paradox". Oxford Reference.
2. ^ Bar-Hillel, Maya; Falk, Ruma (1982). "Some teasers concerning conditional probabilities". Cognition. 11 (2): 109–22. doi:10.1016/0010-0277(82)90021-X. PMID 7198956. S2CID 44509163.
• Nickerson, Raymond (2004). Cognition and Chance: The psychology of probabilistic reasoning, Lawrence Erlbaum. Ch. 5, "Some instructive problems: Three cards", pp. 157–160. ISBN 0-8058-4898-3
• Michael Clark, Paradoxes from A to Z, p. 16;
• Howard Margolis, Wason, Monty Hall, and Adverse Defaults. | {
"domain": "wikipedia.org",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9888419703960399,
"lm_q1q2_score": 0.8344788901942471,
"lm_q2_score": 0.8438950966654772,
"openwebmath_perplexity": 426.8011419512099,
"openwebmath_score": 0.9186322093009949,
"tags": null,
"url": "https://en.wikipedia.org/wiki/Bertrand%27s_box_paradox"
} |
navigation, move-base
Title: How to use move_base?
I found this one very useful: http://wiki.ros.org/move_base, but I have no idea how to apply it in my project: how to install it, what commands to use, and so on. Could anyone help me please?
Originally posted by newcastle on ROS Answers with karma: 1 on 2015-01-04
Post score: 0
I suggest you do the tutorials.
Originally posted by Procópio with karma: 4402 on 2015-01-05
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 20476,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "navigation, move-base",
"url": null
} |
fluid-dynamics, dimensional-analysis, navier-stokes
For example, the choice of $p^*=\frac{p}{\rho U^2}$ can be justified in the turbulent incompressible limit. You want to choose dimensionless variables that are independent of $\nu$ (since $\nu\to0$) and of $K$ (since $K\to\infty$). Another way to view this is that you are sending your dimensionless numbers to $0$ or $\infty$. In this case, this uniquely fixes the choices of dimensionless units. It all depends on the physical phenomena you are interested in.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 95375,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "fluid-dynamics, dimensional-analysis, navier-stokes",
"url": null
} |
mechanical-engineering, structural-engineering, structural-analysis, stresses, elastic-modulus
0 & 0 & 1\\
\end{array}\right] \\
G =
\left[\begin{array}{ccc}
1 + \varphi ^2 & 0 & 0 \\
0 & 1 + \varphi ^2 & 0 \\
0 & 0 & 1\\
\end{array}\right] \\
\therefore
E = \frac{1}{2}
\left[\begin{array}{ccc}
\varphi ^2 & 0 & 0 \\
0 & \varphi ^2 & 0 \\
0 & 0 & 0\\
\end{array}\right] \approx 0
$$
This makes sense, because if $\varphi$ is small, $\varphi^2$ is much smaller, and we can accept the error of assuming geometrically linear behavior. We just need to watch out for non-small rotations that will cause our linearization to have a larger error.
"domain": "engineering.stackexchange",
"id": 2336,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "mechanical-engineering, structural-engineering, structural-analysis, stresses, elastic-modulus",
"url": null
} |
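The claim above, that the Green-Lagrange strain of a linearized small rotation is $O(\varphi^2)$ while an exact rotation produces exactly zero strain, checks out numerically (a sketch; names are illustrative):

```python
# Green-Lagrange strain E = 1/2 (F^T F - I) for the linearized small-rotation
# deformation gradient F = I + W used above (phi in radians).
import numpy as np

def green_lagrange(F):
    return 0.5 * (F.T @ F - np.eye(3))

phi = 0.01
F_lin = np.array([[1.0, -phi, 0.0],
                  [phi,  1.0, 0.0],
                  [0.0,  0.0, 1.0]])

E_lin = green_lagrange(F_lin)
print(E_lin)   # diag(phi^2/2, phi^2/2, 0): the O(phi^2) error discussed above

# For the exact rotation matrix, E vanishes to machine precision:
c, s = np.cos(phi), np.sin(phi)
F_rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(np.max(np.abs(green_lagrange(F_rot))))   # ~0
```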
c++, template-meta-programming, c++20
// make underlying data public
value_type data[N];
// constructors
constexpr fixed_array() noexcept {
std::fill_n(data, N, 0);
}
constexpr fixed_array(const value_type& fill_arg) noexcept {
std::fill_n(data, N, fill_arg);
}
constexpr fixed_array(const value_type(&arg)[N]) {
std::copy_n(arg, N, data);
}
constexpr fixed_array(const std::initializer_list<value_type>& args) noexcept {
std::copy_n(args.begin(), N, data);
}
constexpr fixed_array(const fixed_array& other) : fixed_array(other.data) {}
constexpr fixed_array(fixed_array&& other) : fixed_array(std::move(other.data)) {}
// destructors
constexpr ~fixed_array() {}
// assignment
constexpr fixed_array& operator=(const fixed_array& other) {
std::copy_n(other.data, N, data);
return *this;
}
constexpr fixed_array& operator=(fixed_array&& other) {
std::copy_n(std::move(other).data, N, data);
return *this;
}
constexpr fixed_array& operator=(const value_type(&arg)[N]) {
std::copy_n(arg, N, data);
return *this;
}
constexpr fixed_array& operator=(const std::initializer_list<value_type> args) {
std::copy_n(args.begin(), N, data);
return *this;
} | {
"domain": "codereview.stackexchange",
"id": 41620,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, template-meta-programming, c++20",
"url": null
} |
cc.complexity-theory, complexity-classes, cr.crypto-security, circuit-complexity
$n^{3-o(1)}$ for formulas over $\{\land,\lor,\neg\}$; Håstad (1998).
$\Omega(n^2/\log n)$ for general fanin-$2$ formulas, $\Omega(n^2/\log^2 n)$ for deterministic branching programs, and $\Omega(n^{3/2}/\log n)$ for nondeterministic branching programs; Nechiporuk (1966).
So, your question "Specifically do any of these problems have more than a linear complexity lower bound?" remains widely open (in the case of circuits). My appeal to all young researchers: go forward, these "barriers" are not unbreakable! But try to think in a "non-natural way", in the sense of Razborov and Rudich. | {
"domain": "cstheory.stackexchange",
"id": 1056,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "cc.complexity-theory, complexity-classes, cr.crypto-security, circuit-complexity",
"url": null
} |
javascript, performance, animation
for (prop in this.ABS){
currentProp = this.ABS[prop];
// Some properties need to be assigned in a special way, since they are not
// represented only by a number (ie: backgroundColor and color, since they are
// represented with an object having "red", "green", "blue" and "alpha" keys. Or
// position properties that need "px" appended).
switch (prop) {
case "backgroundColor":
case "color":
element.style[prop] = this.colorFromObj(currentProp);
break;
case "left":
case "right":
case "top":
case "bottom":
// If we don't explicitly choose an unit, either here or on AES, default is pixels
if(!this.extractUnit(currentProp)){
currentProp = currentProp + (this.extractUnit(this.AES[prop]) || "px");
}
element.style[prop] = currentProp;
break;
default:
element.style[prop] = currentProp;
}
}
}, this);
} else if (this.state === 2){
// Style matches the AES
this.CO.forEach(function(element, index){
var currentProp;
for (prop in this.AES){
currentProp = this.AES[prop]; | {
"domain": "codereview.stackexchange",
"id": 29662,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, performance, animation",
"url": null
} |
waves, frequency
Title: To calculate the frequency of some wave source, does the wave source have to emit waves continuously? I understand that the frequency is the number of waves that pass a place in a given amount of time. And it is shown like this:
But the frequency of waves is always shown as above, as if the source of the wave (let's say it is light) is continuously emitting waves. What I mean is, isn't there a time lapse between the individual pulses of waves?
For example, for light emitted from a hydrogen atom, the frequency of that light would be shown as above. But each time the electron of the hydrogen atom goes to a higher state and back to a lower state, I think there must be a time lapse. And if you were not to take that into account, wouldn't you measure a longer wavelength for that wave source?
Note: My English is not very good, so if I made mistakes that made the question harder to understand, I apologise. Your understanding of what constitutes a 'pulse' is flawed. If you excite an atom, then the emitted radiation will indeed be confined in time, forming a pulse, but this will take the form of a larger envelope that encases the individual oscillations at the resonance frequency.
A typical example looks like this: | {
"domain": "physics.stackexchange",
"id": 44156,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "waves, frequency",
"url": null
} |
quantum-mechanics, quantum-field-theory, path-integral, boundary-conditions, coherent-states
The position $\hat{q}/\sqrt{2}$ and the momentum $\hat{p}/\sqrt{2}$ operators are related to
$$ {\rm Re}(\hat{a})~:=~\frac{\hat{a}+\hat{a}^{\dagger}}{2} \qquad\text{and}\qquad{\rm Im}(\hat{a})~:=~\frac{\hat{a}-\hat{a}^{\dagger}}{2i},\tag{11}$$
respectively. In the coherent state path integral (6), there are 2 complex (= 4 real) BCs
$$ z(t_i)~=~z_i \qquad\text{and}\qquad \bar{z}(t_f)~=~z^{\ast}_f. \tag{12}$$
In other words, we specify both initial position and initial momentum, naively violating the HUP. Similar for the final state. This is related to the overcompleteness (5) of the coherent states.
The overcomplete BCs (12) means that there typically is not an underlying physical classical path with
$$z^{\ast}~=~\bar{z}\tag{13}$$
that fulfills [besides the Euler-Lagrange (EL) equations
$$ \dot{z}~\approx~-i\frac{\partial H(z,z^{\ast})}{\partial z^{\ast}}
\quad\text{and}\quad
\dot{z}^{\ast}~\approx~i\frac{\partial H(z,z^{\ast})}{\partial z}, \tag{14} $$
i.e. the Hamilton's equations] all the BCs (12) simultaneously unless we tune the BCs (12) appropriately, cf. e.g. Ref. 1. The precise tuning depends on the theory at hand. | {
"domain": "physics.stackexchange",
"id": 25232,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, quantum-field-theory, path-integral, boundary-conditions, coherent-states",
"url": null
} |
quantum-mechanics, harmonic-oscillator, anharmonic-oscillators
Title: Ladder Operators for a nonlinear oscillator I was wondering if there was a way to construct the ladder operators for a nonlinear oscillator given by the Hamiltonian $$H=x^2+p^2+\lambda x^4$$ If we were to just calculate scattering amplitudes, then it makes sense to just use the harmonic oscillator ladder operators and use perturbation theory. But I am looking to re-express this Hamiltonian in terms of ladder operators. So, is there a way the generalise their definitions to different Hamiltonians? It depends a bit on what you have in mind but, in this specific problem, it is possible to express the Hamiltonian in terms of the $\mathfrak{su}(1,1)\sim\mathfrak{sp}(2,\mathbb{R})$ algebra, with (complexified) generators
\begin{align}
\hat K_+=\textstyle\frac{1}{2}\hat a^\dagger \hat a^\dagger\, ,\qquad
\hat K_-=\frac{1}{2}\hat a\, \hat a\, ,\qquad
\hat K_0=\frac{1}{4}\left(\hat a^\dagger \hat a+\hat a\hat a^\dagger\right)\, .
\tag{1}
\end{align}
It turns out that $p^2+x^2\sim \hat K_0$ (up to some constant) and $x^4$ is also expressible in terms of the operators in (1).
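Concretely, with the common convention $\hat x=(\hat a+\hat a^\dagger)/\sqrt{2}$ and $\hat p=(\hat a-\hat a^\dagger)/(i\sqrt{2})$ (a choice of units with $\hbar=1$; a sketch, not the only normalization in use), one finds
\begin{align}
\hat x^2=\hat K_++\hat K_-+2\hat K_0\,,\qquad
\hat p^2=2\hat K_0-\hat K_+-\hat K_-\,,
\end{align}
so that $\hat p^2+\hat x^2=4\hat K_0$ and $\hat x^4=(\hat K_++\hat K_-+2\hat K_0)^2$, which can then be expanded using the $\mathfrak{su}(1,1)$ commutation relations.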
A very accessible reference on this is
Novaes, Marcel. "Some basics of su (1, 1)." Revista Brasileira de Ensino de Fisica 26.4 (2004): 351-357
but Google also throws up a number of canonical papers.
See also this question for a slightly more general setting. | {
"domain": "physics.stackexchange",
"id": 68279,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "quantum-mechanics, harmonic-oscillator, anharmonic-oscillators",
"url": null
} |
ros, catkin, buildfarm, ros-groovy, cmake
Title: PkgConfig in Groovy on Quantal
I'm having trouble with a job on the buildfarm.
http://jenkins.ros.org/job/ros-groovy-people-tracking-filter_binarydeb_quantal_i386/2/console
Probably relevant error:
-- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
CMake Error at /usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:319 (message):
pkg-config tool not found
Call Stack (most recent call first):
/usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:333 (_pkg_check_modules_internal)
CMakeLists.txt:6 (pkg_check_modules)
Offending CMake is here: https://github.com/wg-perception/people/blob/groovy/people_tracking_filter/CMakeLists.txt
What is the proper way to check for the BFL package in Groovy that will work on the buildfarm?
Originally posted by David Lu on ROS Answers with karma: 10932 on 2014-08-29
Post score: 0
It is perfectly fine to use pkg-config to resolve dependencies but if a package does so it also needs to declare that pkg-config is a build dependency in the package.xml file:
<build_depend>pkg-config</build_depend>
Otherwise the build farm does not know that it should be installed.
Originally posted by Dirk Thomas with karma: 16276 on 2014-08-29
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 19230,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "ros, catkin, buildfarm, ros-groovy, cmake",
"url": null
} |
• what would happen if the inequalities changed, say $S_T<30$, $30 \leq S_T \leq 35$ and $S_T > 35$? @KeSchn. Also, how did you get $f(S_T)+2$? – Anon Sep 26 '19 at 12:19
• It does not matter whether you have $\leq$ and $\geq$ or $<$ and $>$ because at the points $x=30$ and $x=35$ your values match, so you don't have jumps. If you compare the three different cases in the answer above to the three cases in your question, you see that you only need to add $2$ to all cases to arrive at the same expression. – KeSchn Sep 26 '19 at 12:25
• Oh i see, is that why you added the 2 zero coupon bonds then? – Anon Sep 26 '19 at 12:25
• Thank you so much! – Anon Sep 26 '19 at 12:36
• @KeSchn they need to give you a medal for contributions to this site in such a short amount of time. Your knowledge base is also tremendous – Slade Sep 26 '19 at 14:35 | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9702399094961359,
"lm_q1q2_score": 0.8207671175815198,
"lm_q2_score": 0.8459424411924673,
"openwebmath_perplexity": 666.2037131082676,
"openwebmath_score": 0.9149454832077026,
"tags": null,
"url": "https://quant.stackexchange.com/questions/48924/compute-the-price-of-a-derivative"
} |
machine-learning, python, time-series, rnn, overfitting
Title: k-fold cross validation with RNNs is it a good idea to use k-fold cross-validation in the recurrent neural network (RNN) to alleviate overfitting?
A potential solution could be L2 / dropout regularization, but it might hurt RNN performance, as discussed here. Such regularization can affect the ability of RNNs to learn and retain information over longer time spans.
My dataset is strictly based on time series, i.e. auto-correlated with time, and depends on the order of events. Standard k-fold cross-validation leaves out some part of the data and trains the model on the rest, destroying the time-series order. What can be an alternate solution? TL;DR
Use Stacked Cross-Validation instead of traditional K-Fold Cross-Validation.
Stacked Cross-Validation
In scikit-learn, this is called TimeSeriesSplit (docs).
The idea is that instead of randomly shuffling all your data points and losing their order, like you suggested, you split them in order (or in batches).
Below is a picture of traditional K-Fold vs. Stacked K-Fold. They both have K=4 (four iterations). But in traditional K-Fold all the data is used all the time, whereas in Stacked K-Fold only the past data is used for training and the current data for testing.
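To make the split concrete, here is a minimal pure-Python sketch of the expanding-window idea (the helper name is made up for illustration; scikit-learn's `TimeSeriesSplit` handles edge cases such as uneven fold sizes for you):

```python
def expanding_window_splits(n_samples, n_splits):
    """Yield (train, test) index lists where training data always
    precedes test data in time, mimicking a Stacked K-Fold split."""
    fold = n_samples // (n_splits + 1)  # size of each test block
    for k in range(1, n_splits + 1):
        # train on everything up to the current block, test on the block itself
        train = list(range(0, k * fold))
        test = list(range(k * fold, min((k + 1) * fold, n_samples)))
        yield train, test

for train, test in expanding_window_splits(10, 4):
    print(train, "->", test)
```

Each iteration trains on a strictly earlier window than it tests on, so the temporal order is never violated.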
Strategy
Split your data in a Stacked K-Fold fashion.
At every iteration, store your model and measure its performance against the test set.
After all iterations are over, pick the stored model with the highest performance (might not even be the last one).
Note: This is a common approach in training neural nets (batching with validation sets). | {
"domain": "datascience.stackexchange",
"id": 6510,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "machine-learning, python, time-series, rnn, overfitting",
"url": null
} |
time-series
For your timeseries analysis you should do both: get to the highest granularity possible with the daily dataset, and also repeat the analysis with the monthly dataset. With the monthly dataset you have 120 data points, which is sufficient to get a timeseries model even with seasonality in your data.
For known and unknown properties, how should I proceed to go from daily to weekly/monthly data ?
To obtain say weekly or monthly data from daily data, you can use smoothing functions. For financial data, you can use moving average or exponential smoothing, but if those do not work for your data, then you can use the spline smoothing function "smooth.spline" in R: https://stat.ethz.ch/R-manual/R-patched/library/stats/html/smooth.spline.html
The model returned will have less noise than the original daily dataset, and you can get values for the desired time points. Finally, these data points can be used in your timeseries analysis.
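As an illustration of the smoothing-then-sampling idea, here is a minimal trailing-moving-average sketch in plain Python (the function name and toy data are made up; in practice pandas' `rolling`/`resample` or R's `smooth.spline` would do this for you):

```python
def trailing_moving_average(xs, window):
    """Smooth a daily series with a trailing moving average."""
    out = []
    for i in range(len(xs)):
        lo = max(0, i - window + 1)      # shorter window near the start
        chunk = xs[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

daily = [float(d % 7) for d in range(28)]      # toy daily series with a weekly pattern
smoothed = trailing_moving_average(daily, 7)   # 7-day smoothing removes the pattern
weekly = smoothed[6::7]                        # keep one smoothed value per week
```

The downsampled `weekly` points carry less noise than the raw daily values and can be fed into the timeseries analysis.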
For known and unknown properties, how should I proceed to go from weekly/monthly to daily data ? | {
"domain": "datascience.stackexchange",
"id": 182,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "time-series",
"url": null
} |
newtonian-mechanics, fluid-dynamics, coriolis-effect
Title: Circular Winds and Coriolis force
In northern hemisphere the flow of air is shown which tries to move towards low pressure center but due to Coriolis is deflected as shown.
It's said that this causes a circular anticlockwise flow of air.
How's that?
The air will just try to move in and get deflected as shown and curve away, how then will a circular pattern be formed? Once the wind reaches a velocity such that the Coriolis and pressure gradient forces balance, it continues at that velocity due to inertia. This state is called geostrophic flow and corresponds to wind along isobars.
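In standard notation (a sketch; $f$ the Coriolis parameter, $\rho$ the air density, $p$ the pressure, $\hat{k}$ the local vertical), the geostrophic balance reads
$$
f\,\hat{k}\times\mathbf{v}_g = -\frac{1}{\rho}\nabla p
\qquad\Longleftrightarrow\qquad
\mathbf{v}_g = \frac{1}{\rho f}\,\hat{k}\times\nabla p\,,
$$
so $\mathbf{v}_g$ is perpendicular to $\nabla p$, i.e. directed along the isobars.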
For an intense localized low pressure like a hurricane, the flow is not geostrophic—the pressure gradient force is larger in magnitude than the Coriolis force, maintaining the inward net acceleration required for circular motion. | {
"domain": "physics.stackexchange",
"id": 76222,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, fluid-dynamics, coriolis-effect",
"url": null
} |
chess, apl
4)(1 5)(2 8)(1 7)(4 4)(5 5)(4 8)(5 7))((2 5)(1 6)(1 8)(4 5)(5 6)(5 8))((2 6)(1 7)(4 6)(5 7)))(((3 3)(2 2)(5 3)(6 2))((2 1)(3 4)(2 3)(6 1)(5 4)(6 3))((3 1)(2 2)(3 5)(2 4)(5 1)(6 2)(5 5)(6 4))((3 2)(2 3)(3 6)(2 5)(5 2)(6 3)(5 6)(6 5))((3 3)(2 4)(3 7)(2 6)(5 3)(6 4)(5 7)(6 6))((3 4)(2 5)(3 8)(2 7)(5 4)(6 5)(5 8)(6 7))((3 5)(2 6)(2 8)(5 5)(6 6)(6 8))((3 6)(2 7)(5 6)(6 7)))(((4 3)(3 2)(6 3)(7 2))((3 1)(4 4)(3 3)(7 1)(6 4)(7 3))((4 1)(3 2)(4 5)(3 4)(6 1)(7 2)(6 5)(7 4))((4 2)(3 3)(4 6)(3 5)(6 2)(7 3)(6 6)(7 5))((4 3)(3 4)(4 7)(3 6)(6 3)(7 4)(6 7)(7 6))((4 4)(3 5)(4 8)(3 7)(6 4)(7 5)(6 8)(7 7))((4 5)(3 6)(3 8)(6 | {
"domain": "codereview.stackexchange",
"id": 37806,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "chess, apl",
"url": null
} |
convolution, autocorrelation
Are two signals are the same if their auto-correlation functions are the same?
Not quite. Now we are given $|X(f)|^2$ in the frequency domain and there are many different factorizations possible. For example, if $y(t)$ is a signal such that values taken in by its Fourier transform always lie on the unit circle in the complex plane (for every $f$, $|Y(f)|=1$) then $|X(f)Y(f)|^2 = |X(f)|^2$ and so $x\star y$ has the same autocorrelation function as $x$. Note that $x(t)$ and $x(t-\tau)$ (which is a delayed version of $x(t)$) have the same autocorrelation function ($Y(f)$ happens to be $\exp(-j2\pi f \tau)$ here). Another factorization replaces $X(f)$ by $X^*(f)$ which tells us that $x(t)$ and $x(-t)$ (which is just $x(t)$ running backwards in time) have the same autocorrelation function. | {
"domain": "dsp.stackexchange",
"id": 7444,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "convolution, autocorrelation",
"url": null
} |
It is possible to combine several plots in one scene:
(%i106) wxdraw3d( title="Helicoid on wheels!", axis_3d=false,xtics='none,ytics='none,ztics='none, xu_grid=75,yv_grid=35,dimensions=[450,600], surface_hide=true, enhanced3d=true,palette=gray,colorbox=false, parametric_surface(v*sin(u),v*cos(u),u/3,u,0,15,v,-1,1), parametric_surface(sin(u),sin(2*u)*sin(v),sin(2*u)*cos(v),u,-%pi/2,%pi/2,v,0,2*%pi) ),wxplot_size=[450,600];
$\tag{%t106}$
$\tag{%o106}$
The 'draw' package can call a little library called 'worldmap' which contains cartographical data: | {
"domain": "uaslp.mx",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.972830769252026,
"lm_q1q2_score": 0.8085615346754277,
"lm_q2_score": 0.8311430520409023,
"openwebmath_perplexity": 8210.193559783387,
"openwebmath_score": 0.9115439653396606,
"tags": null,
"url": "http://galia.fc.uaslp.mx/~jvallejo/Maxima%20Mini-Tour%2019-May-2019.html"
} |
newtonian-mechanics, momentum, collision, interactions
Title: For collision, physical contact is not a necessary condition. Why? In my textbook, it is written that
"For collision, physical contact is not a necessary condition".
How can collision occur without physical contact? If there is no physical contact, then there would be no contact force between particles to act as impulsive force.
What would act as impulsive force in such a collision where there is no physical contact between the particles?
Can you give an example of such a collision? In science, language is specific and unambiguous. That means that terms are defined in ways often different from colloquial usage.
I'll quote Wikipedia on the definition of a collision. "A collision is an event in which two or more bodies exert forces on each other for a relatively short time." Note that there is no requirement for contact.
Of the four fundamental forces, both electromagnetism and gravity are long range. Despite being long range, they both fall with the inverse square of the distance (for simply distributed objects). This means you can mostly ignore the effects of the force at large distances relative to their closest approach.
A charged particle being deflected by another charged particle as they pass by each other is an example of a collision where no contact takes place. A gravitational slingshot where a small object moves around a much heavier object to gain speed is another example. | {
"domain": "physics.stackexchange",
"id": 42968,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, momentum, collision, interactions",
"url": null
} |
astronomy, statistical-mechanics, astrophysics, statistics
Regarding predicting the next larger structure: This was an interesting question for early cosmologists. You see, the general relativity that governs the evolution of the universe would be intractably complicated if not for the assumptions we make of nearly perfect homogeneity (every point in space is basically the same) and isotropy (no direction is special). If you imagine the ladder of structure extending infinitely toward larger things, you have a fractal universe, and it is a very different beast. Eventually (much later than we proclaimed to have the basic theory of a homogeneous/isotropic universe pinned down), deep galaxy surveys did in fact reveal that there is an upper limit to structure, justifying non-fractal models. On the largest of scales (many millions of light years), everything is distributed evenly and there is no structure. | {
"domain": "physics.stackexchange",
"id": 7277,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "astronomy, statistical-mechanics, astrophysics, statistics",
"url": null
} |
$$f\left(\frac{x+y}{2}\right) \gt \frac{f(x)+f(y)}{2}\Rightarrow \frac{f\left(\frac{x+y}{2}\right)-f(x)}{\frac{x+y}{2}-x} \gt \frac{f(y)-f\left(\frac{x+y}{2}\right)}{y-\frac{x+y}{2}}$$
Applying the mean value theorem to this twice will give you a value $w$ with $f''(w)\lt0$, which contradicts $f''(x)\ge0\, \forall x$
-
Very nice point of view. +1 – Babak S. Dec 1 '12 at 14:01
Since the second derivative is positive, this implies that the $f$ is convex. So, just apply the finite form of Jensen's inequality with weights 1/2 to get the desired result.
-
Yes, but how does one prove the first sentence? – Jesse Madnick Dec 1 '12 at 7:52
I guess that this all depends on how one defines convexity to start with. For differentiable functions, convexity could be defined as a property when the second derivative is positive. – Learner Dec 1 '12 at 8:13
You're right to observe that $f'(x)$ is monotone increasing, and in particular this implies $f'(x_1)\leq f'(x_2)$ for $x_1<x_2$; thus $f$ satisfies the definition of convexity. Now we can apply the finite form of Jensen's inequality with $a_1=a_2=\frac{1}{2}$, and we get $$f\left(\frac{x_1+x_2}{2}\right)\leq\frac{f(x_1)+f(x_2)}{2},$$ as desired. I hope this helps!
- | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9884918502576513,
"lm_q1q2_score": 0.8081657110898428,
"lm_q2_score": 0.8175744806385543,
"openwebmath_perplexity": 144.2253407352264,
"openwebmath_score": 0.9854022860527039,
"tags": null,
"url": "http://math.stackexchange.com/questions/248055/how-to-show-that-f-left-frac-x-y-2-right-leq-frac-f-x-f-y"
} |
optics
Title: Derivation RMS wavefront error Zernike polynomials I measured some wavefront aberrations using a Shack-Hartmann sensor. The output is a finite set of Zernike coefficients $\{c_{ij}\}$.
My wavefront is $$W(\rho, \theta) = \sum_{ij}c_{ij}Z_i^j$$ with $Z_i^j$ the Zernike polynomials.
The definition of the RMS error (in the above link, and also in general) is
$$\sigma^2 = \int d\theta \int d\rho \rho (W(\rho, \theta) - \bar{W}(\rho, \theta))^2 = \langle W^2 \rangle - \langle W \rangle^2$$
where
$$\bar{W} = \langle W \rangle = \int_0^{2\pi} d\theta \int_0^{1} d\rho\, \rho\, W(\rho, \theta)$$
The link then says that this evaluates to
$$\sigma^2 = \sum_{ij}|c_{ij}|^2$$
I see that the first term evaluates to that due to orthogonality relations. But I think $\langle W \rangle \neq 0$, since
$$Z_n^m(\rho, \theta) = R_n^m(\rho)\cos{(m\theta)}$$ (or for a different $m$ with sine instead of cosine) and I know that $$\int_0^{2\pi} \cos^2 = \int \sin^2 = \pi$$ | {
"domain": "physics.stackexchange",
"id": 93452,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "optics",
"url": null
} |
c++, heap
returning a std::pair<bool, index> is a bit cumbersome. Depending on the version of C++ you can use, you can return a std::optional or simply an iterator (when an iterator points past the last element, it indicates failure). NB: if you have access to C++17, structured bindings make for a more elegant syntax when assigning a pair: auto [success, child_index] = get_smallest_child(parent_index);
the distinction between the implementation and the "front" class is weird. What did you want to achieve with this? If it's about re-compilation and the "pimpl idiom", then you need to use a pointer to heap_impl inside heap. A pointer allows you to refer to an incomplete type, and define the type somewhere else.
do you really want to provide random access to the elements of your heap? There might be scenarios where you need this (or are there? heaps aren't completely sorted, the children of a given parent can be in any order, so why would you get the 3rd, and not the 2nd or the 4th?), but you generally use heaps as a kind of priority queue. You only need to provide access to the "top" element. It will simplify your code and get you closer to the principle: make an interface that is easy to use and hard to misuse (out-of-bounds access is harder this way).
your noexcept policy is incoherent and a bit tedious also. You tag almost everything but not begin() or end(), for some reason. noexcept(false) isn't really necessary, in the sense that no one (neither compiler nor client) will expect to be protected from exceptions unless there's a noexcept tag at the end of the signature.
I think you should have offered to customize the comparison operator. It's really useful when the element doesn't provide a comparison operator, or if you want to compare the elements in a different order, or a projection of the elements. | {
"domain": "codereview.stackexchange",
"id": 34113,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "c++, heap",
"url": null
} |
statistical-mechanics, thermal-radiation
Title: Why standing waves in Rayleigh-Jeans derivation? A central part of the Rayleigh-Jeans law is that the number of allowed states follows $dN \sim \nu ^{2} d\nu$ because the allowed frequencies have to be standing waves. Then by equipartition we get that the energy density in a given frequency band increases without bound for increasing $\nu$, which is obviously unphysical.
Regarding a non-cavity solid object emitting thermally, by contrast, I'm picturing a lattice of atoms and electrons oscillating and/or colliding in such a complicated way as to be random. Perhaps there are conduction electrons, perhaps not. Now, I don't believe the phonon motion contributes much to EM radiation (?); instead, I think, electron oscillations (classically; in QM this would be transitions) dominate. Such electron motion would not be characterized by standing waves or frequencies which depending on the dimensions of the solid, surely?
It seems reasonable that at a given temperature, random "dipole" (or whatever) oscillations in a solid object would classically follow something like a Boltzmann distribution with a maximum at a finite frequency. Intuitively, this could lead to an emission with the qualitative shape of the experimental blackbody curve. Is there a classical theory of random thermal oscillations in a lattice which is relevant here? Standing waves are not a necessary assumption in the Rayleigh-Jeans derivation. When the derivation works with radiation inside a cavity with walls impermeable to radiation, so the waves can't go through the walls, the radiation inside can be expressed as superposition of standing waves.
But the same kind of derivation can be made for imaginary region in space with imagined walls that do not influence the EM field in any way. The field can be expressed as Fourier sum in finite cuboid region of space, over travelling waves - this is sometimes called "periodic boundary conditions" (because Fourier sum produces function that repeats outside the main region). This is just mathematics, it does not impose any condition on the field inside. The rest of the derivation is otherwise the same, and leads to the same result. | {
"domain": "physics.stackexchange",
"id": 94242,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "statistical-mechanics, thermal-radiation",
"url": null
} |
P.S. I am very new to this site and I know similar questions have been asked and it'd have been better to comment over there but it's not letting me comment due to low reputation. Sorry :)
• Those answers are both correct. They are the same. Sep 16 at 2:23
• 90C4 * 10 / 100C5 is equal to 90P4 * 10 / 100P5. Either way of counting gives the same answer. Sep 16 at 2:25
• yes I do realize that both answers in the end come out to be correct but that's only because the denominators cancel out. But it's technically incorrect to use combination instead of permutation and I am trying to understand why. And I still don't know what the solution for the bulbs question should be. To me it just seems like 9C5 but if order is important in Beatles songs question, it should be important here as well in which case it'd be 9P5. Sep 16 at 2:27
• "but its technically incorrect to use combination instead of permutation" Says who? For probability scenarios we get to choose what sample space we use to describe the scenario. It is a choice. So long as we chose appropriately, there can be multiple choices. Sep 16 at 2:40
• Umm okay so, in words, probability for scenario 1, is total ways in which the first Beatles song is the 5th song / total ways of choosing 5 from 100 What I just said is permutation or combination? Why? That's what I am trying to understand as in, even though they both yield the same answer, what's the difference conceptually? Can anyone put the two solutions in actual words highlighting the conceptual difference please? I am sorry I am not able to exactly construct the question I have Sep 16 at 2:47
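The point discussed in the comments — that ordered and unordered counting give the same probability once numerator and denominator use the same convention — can be checked numerically (a sketch; note the extra factor of 5 in the ordered count, which is the number of slots the single special song can occupy among the five positions):

```python
from math import comb, perm

# Unordered counting: choose 4 of the 90 ordinary songs and 1 of the 10 special ones.
p_unordered = comb(90, 4) * comb(10, 1) / comb(100, 5)

# Ordered counting: arrange 4 ordinary songs, pick the special song (10 ways)
# and the slot it occupies among the 5 positions (5 ways).
p_ordered = perm(90, 4) * 10 * 5 / perm(100, 5)

assert abs(p_unordered - p_ordered) < 1e-12
```

Both conventions describe the same event, so the probabilities coincide; only the intermediate counts differ.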
Let $$S = \displaystyle \frac{\binom{90}{4} \times \binom{10}{1}}{\binom{100}{5}}.$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9653811621568289,
"lm_q1q2_score": 0.8166568988709164,
"lm_q2_score": 0.8459424431344437,
"openwebmath_perplexity": 499.7737903203047,
"openwebmath_score": 0.7665876150131226,
"tags": null,
"url": "https://math.stackexchange.com/questions/4251632/why-does-order-matter-in-first-scenario-but-not-in-second/4251700#4251700"
} |
java, multithreading, http, servlets
String fileName = "index.html";
String token = null;
client.setSoTimeout(30000);
BufferedReader in = new BufferedReader( new InputStreamReader( client.getInputStream() ) );
PrintStream out = new PrintStream( client.getOutputStream() );
System.out.println( "I/O setup done" );
String line = in.readLine(); //Get first line of request to check the request command GET or HEAD
StringTokenizer tokenLine = new StringTokenizer(line);
String reqCommand = tokenLine.nextToken();
if (reqCommand.equals("GET") || reqCommand.equals("HEAD")) //only GET or HEAD requests are accepted, else gives 501 error
{
String fname = tokenLine.nextToken();
if(fname.startsWith("/"))
{
while( line != null )
{
// requestHeaderLines.add(line);
System.out.println(line);
if(line.equals("")) break;
StringTokenizer tokenizedLine = new StringTokenizer(line);
while(tokenizedLine.hasMoreTokens())
{
token = tokenizedLine.nextToken();
if(token.endsWith(".html")||token.endsWith(".jpg")) //checking for file types.
files.add(token);
if(token.startsWith("HTTP")) //checking version of the HTTP connection.
if(token.contains("/1.0"))
httpVersion = "HTTP/1.0";
else if(token.contains("/1.1"))
httpVersion = "HTTP/1.1";
if(token.equals("Connection:")) //parsing request to get connection type
if(tokenizedLine.nextToken().equals("keep-alive")) //checking for connection type
connectionKeepAlive = true;
}
line = in.readLine();
}
System.out.println(line); | {
"domain": "codereview.stackexchange",
"id": 7586,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, multithreading, http, servlets",
"url": null
} |
Here are a few trivial lemmas. I won't use anything about the rolling motion, just that the distance is defined by gluing pentagons edge-to-edge:
• The $dd$-circle of radius $k$, which I'll call $C_k$, is a closed polygonal curve. Let $D_k$ be the closed $dd$-disk of radius $k$; note that $D_k$ may contain holes and $C_k$ is not in general simple! (This happens even at $k=2$.)
• The orientations of the line segments forming $C_k$ (measured relative to the $+x$ ray) when $k$ is even (odd) take the form $\pi m/5$ with $m$ an odd (even) integer; this is because pentagons are glued to each other in only two orientations (up-pointing or down-pointing) and furthermore, we can only glue down-pointing pentagons to up-pointing ones and vice versa. (Note also that no line segments of $C_k$ lie on $C_{k+1}$).
• It's not hard to see that $C_k$ is strictly contained within $D^\circ_{k+2}$, the interior of $D_{k+2}$ (the open $dd$-disk). Suppose that $v$ is a simple vertex of $C_k$ so that the interior angle $\alpha$ is defined and $\alpha> 4\pi/5$. Then $v$ is surrounded by the pentagons added to its adjacent segments and therefore lies inside $D^\circ_{k+1}$. Otherwise, if the interior angle $\alpha\leq 4\pi/5$ then $v$ is also a vertex on $C_{k+1}$. However the interior angle at $v$ on $C_{k+1}$ is now $\alpha+6\pi/5>4\pi/5$, so it gets "eaten up" in the next layer. | {
"domain": "mathoverflow.net",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9820137931962462,
"lm_q1q2_score": 0.8051465712993288,
"lm_q2_score": 0.8198933425148214,
"openwebmath_perplexity": 619.7350754789661,
"openwebmath_score": 0.7024920582771301,
"tags": null,
"url": "https://mathoverflow.net/questions/288550/dodecahedral-rolling-distance/288566"
} |
python, pandas, visualization, dataframe, matplotlib
I made the assumption that the columns that you want to compare are on the same index in both dataframes. If this is not true, you need to find another way. Maybe if they have the same name? You can do it like that.
from scipy import stats
p_value = 0.05
rejected = 0
for col in df1:
test = stats.ks_2samp(df1[col], df2[col])
if test[1] < p_value:
rejected += 1
print("We rejected",rejected,"columns in total")
If the K-S statistic is small or the p-value is high, then we cannot reject the hypothesis that the distributions of the two samples are the same.
Edit: I haven't tried it before, but maybe you can do something like that
Boxplot
from scipy import stats
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#create 2 dataframes with random integers. I don't have data to simulate your case.
df1 = pd.DataFrame(np.random.randint(0,100,size=(10000, 102)), columns=range(1,103))
df2 = pd.DataFrame(np.random.randint(0,100,size=(10000, 102)), columns=range(1,103))
#apply the Kolmogorov-Smirnov Test
p_value = 0.05
p_values = []
for col in df1.columns:
    test = stats.ks_2samp(df1[col], df2[col])
    p_values.append(test[1])
#create the box plot
plt.boxplot(p_values)
plt.title('Boxplot of p-values')
plt.ylabel("p_values")
plt.show()
Heatmap
Another way is the heatmap.
import matplotlib.patches as mpatches | {
"domain": "datascience.stackexchange",
"id": 4911,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, pandas, visualization, dataframe, matplotlib",
"url": null
} |
# Show the measure of supp($u$) is greater than $\frac{4}{3}$ where $-1 < \nabla \cdot v$
Let $x \in \mathbb{R}^3,$ and let $v(x) : \mathbb{R}^3 \to \mathbb{R}^3$ be smooth with the property that $-1 < \nabla \cdot v.$ Suppose $u$ solves $$u_t + \nabla u \cdot v = 0,$$ with $$u(x,0) = \chi_{|x| \leq 1}(x) = \begin{cases} 1 & |x| \leq 1 \\ 0 & \text{else} \end{cases}$$ Show that $\{x \in \mathbb{R}^3 \, : \, u(x,1) > 0\}$ has measure greater than $4/3.$
Here, we know the solution is given by $u(x,t) = u(x_0, 0),$ and the characteristics for $x_i, \; i=1,2,3$ are given by $$\frac{dx_i(t)}{dt} = v_i(x_1,x_2,x_3).$$ | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9777138183570425,
"lm_q1q2_score": 0.8126200367884076,
"lm_q2_score": 0.8311430415844385,
"openwebmath_perplexity": 119.54021844682184,
"openwebmath_score": 0.9174565076828003,
"tags": null,
"url": "https://math.stackexchange.com/questions/2393678/show-the-measure-of-suppu-is-greater-than-frac43-where-1-nabla"
} |
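A standard route to the bound in the question above, sketched as a hedged outline (the transport/Liouville identity below is the assumed ingredient): let $\Omega(t)$ be the image of the unit ball under the characteristic flow, so that $u(\cdot,t) = \chi_{\Omega(t)}$. Then

$$\frac{d}{dt}\,|\Omega(t)| = \int_{\Omega(t)} \nabla\cdot v \,dx > -|\Omega(t)|,$$

and Grönwall's inequality gives $|\Omega(1)| > |\Omega(0)|\,e^{-1} = \frac{4\pi}{3e} \approx 1.54 > \frac{4}{3}$.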
schroedinger-equation, covariance, time-reversal-symmetry
Title: What is invariance of an equation? I'm confused.
Suppose we have a Schrodinger equation with a time-independent Hamiltonian:
\begin{align}
i\frac{\partial}{\partial t}\psi(x, t) = H\psi(x, t).
\tag{1}
\end{align}
Under time reversal transformation $t \to t' \equiv -t$ and complex conjugation, the equation gets another solution $\psi'(x, t) = \psi^*(x, -t)$; the $\psi'$ satisfies
$$
i\frac{\partial}{\partial t}\psi'(x, t) = H\psi'(x, t).
\tag{2}
\label{eq2}
$$
What confuses me is that, in general, if we have an equation
$$
A(x) = B(x)
\tag{3}
$$
which is invariant under a transformation $x \to x' = x'(x)$, then another equation
$$
A'(x') = B'(x')
\tag{3'}
$$
also holds. So the Schrodinger equation should become
$$
i\frac{\partial}{\partial t'}\psi'(x, t') = H\psi'(x, t')
\tag{4}
\label{eq4}
$$
under the transformation if it is invariant; Eq.$\eqref{eq2}$ should be represented by $t'$, not $t$.
After all, I have a general question: what is invariance of an equation? Or do I misunderstand something? | {
"domain": "physics.stackexchange",
"id": 47296,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "schroedinger-equation, covariance, time-reversal-symmetry",
"url": null
} |
# Can I use the following method to prove an algorithm is correct?
I'm trying to show that a solution I have obtained via an algorithm is correct. The way I plan on doing this is first by showing that an optimal solution does indeed exist. Then, I plan on showing that every other solution that is not the solution provided by my algorithm cannot be optimal. Finally, I show that the solution I have cannot be improved in the same way as any other solution.
Is this enough to show that my algorithm is optimal? In this case I am avoiding doing an "exchange argument" a la greedy algorithms. In fact, I don't really prove anything about how my solution is an improvement of the other ones, but simply that all of the other ones can be improved, and given that an optimal solution exists, the one I have must be it because it cannot be improved in the same way that the other ones can. | {
"domain": "stackexchange.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.975946445006747,
"lm_q1q2_score": 0.8371357362537508,
"lm_q2_score": 0.857768108626046,
"openwebmath_perplexity": 249.40170265475854,
"openwebmath_score": 0.7971535921096802,
"tags": null,
"url": "https://cs.stackexchange.com/questions/119997/can-i-use-the-following-method-to-prove-an-algorithm-is-correct"
} |
google-apps-script, google-sheets
Less redundant code, same level of documentation.
Code like
if (...) {
...;
i++;
} else {
i++;
}
is usually better written as:
if (...) {
...;
}
i++;
However, we now have
var i = 2;
while (i <= lastRow) {
...;
i++;
}
which is a more complicated formulation of
for (var i = 2; i <= lastRow; i++) {
...;
}
Similarly,
rowsToBeDeleted.reverse();
for (var j = 0; j < rowsToBeDeleted.length; j++)
{
source_sheet.deleteRow(rowsToBeDeleted[j]);
}
is the same as:
for (var i = rowsToBeDeleted.length - 1; i >= 0; i--) {
source_sheet.deleteRow(rowsToBeDeleted[i]);
}
If I see that correctly, your two function only differ in the strings croatian_backup vs. serbian_backup and CRO vs. SER. Instead of copy-pasting that code, take those strings from the function parameters. Now we have a general archive function:
function archive(outputSheet, languageCode) { ... }
If you need two separate functions that don't take any arguments, we can use partial application (often discussed together with currying) to pre-fill the arguments:
function CroatianArchive() {
return archive("croatian_backup", "CRO");
}
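The same pre-filling idea exists in most languages; as a hedged illustration (the names and return value are invented for this sketch, not taken from the original script), Python's `functools.partial` does exactly this:

```python
from functools import partial

# Stand-in for the general archive routine described above.
def archive(output_sheet, language_code):
    return "archived to %s with code %s" % (output_sheet, language_code)

# Pre-fill the arguments once instead of copy-pasting whole functions:
croatian_archive = partial(archive, "croatian_backup", "CRO")
serbian_archive = partial(archive, "serbian_backup", "SER")
```

Each pre-filled callable can then be invoked with no arguments, which is what a trigger-style entry point needs.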
Some of your names use underscores, other capitalization to separate words:
source_range
lastRow
You should settle for one style – Javascript tends to prefer capitalization rather than underscores. | {
"domain": "codereview.stackexchange",
"id": 4919,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "google-apps-script, google-sheets",
"url": null
} |
javascript, beginner, validation
Title: Checking that a filename does not end with "Com.js" I'm fairly new to JavaScript and I wonder if there is a way to write this function shorter and better?
function checkFile(file){
var cbEnab= true;
if (file !== undefined && file !== null) {
cbEnab = (!jQuery.endsWith(file, "Com.js"));
}
return cbEnab;
}
By mechanical transformation, I arrive at:
function checkFile(file) {
return file == null || !jQuery.endsWith(file, "Com.js");
}
The use of === strict equality checking is not necessary; == lumps null and undefined together.
I have my doubts about the whole function, though, as the function name gives no indication that it's going to check whether its argument ends in "Com.js". | {
"domain": "codereview.stackexchange",
"id": 12080,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "javascript, beginner, validation",
"url": null
} |
newtonian-mechanics, momentum, acceleration, collision
Title: A problem regarding a fiction Recently I watched a fictional(I guess) video: a man is crossing the road while a truck accelerates towards him and a superhero flashes and saves his life by taking him out of the road. He was very fast, therefore only a flash could be seen. It seems like a silly incident though. After some time the question that occurred in my mind was 'if this happened in reality, could that man survive?'
This is my reasoning:
If the man had to collide with the truck it would create serious damage, and this has been explained scientifically in many topics such as the energy transferred to the body, the force exerted, and so on. Also, I found that the acceleration of the vehicle has a lower impact on the pedestrian. Nevertheless, everyone knows that more harm is done in a collision by a vehicle going at $60 \;\text{km/h}$ than at $5 \;\text{km/h}$. But in this scenario, the speed of the hero is exaggeratedly high (find the video below). The speed of the truck is negligible in comparison. Thus the impulse on the man is massive when the hero catches him. Can a person endure such a great impulse? I heard that such a great change in momentum will disturb the fluids in the person's body and feel extremely uncomfortable. And can a person tolerate that acceleration? Thus the ridiculous thought that came to my mind was that the damage would be less if the person were hit by the vehicle than if he were carried by the hero.
This seems to be a silly problem, but I am asking whether this could happen in the real world.
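To put rough numbers on the acceleration worry (every figure below is an illustrative assumption, not measured from the video), suppose the rescue moves the man about 2 m sideways in a hundredth of a second:

```python
# Back-of-the-envelope estimate: all numbers are assumptions for illustration.
g = 9.81             # m/s^2, standard gravity
displacement = 2.0   # m sideways needed to clear the truck
rescue_time = 0.01   # s, the assumed duration of the "flash"

# Constant acceleration from rest: d = a*t^2/2  =>  a = 2*d/t^2
a = 2 * displacement / rescue_time ** 2
g_force = a / g
```

That works out to roughly 4000 g; since sustained accelerations beyond a few tens of g are generally considered fatal, this supports the suspicion that the rescue itself would be the dangerous part.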
EDIT: Look for video here:- https://youtu.be/KJqhR2YSUXw | {
"domain": "physics.stackexchange",
"id": 80550,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "newtonian-mechanics, momentum, acceleration, collision",
"url": null
} |
general-relativity, gravity, neutrinos, faster-than-light
Title: Neutrino unaffected by gravity Are neutrinos affected by gravity?
If not, could that be a plausible reason for a neutrino taking a shorter path than light, since light is affected by gravity? Everything is affected by gravity. Gravity warps space-time according to the Einstein Field Equations, and traveling along "geodesics" (shortest-path curves) of that curved geometry is how gravity is manifested.
Thinking as though there is some sort of Euclidean space underneath the non-Euclidean space-time, in which neutrinos could take a more direct "straight line" between the points, is not at all supported. Everything we know of travels on this curved spacetime.
Also, speaking more Newtonianly, neutrinos do seem to have mass in the more classic sense. | {
"domain": "physics.stackexchange",
"id": 12338,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "general-relativity, gravity, neutrinos, faster-than-light",
"url": null
} |
python, algorithm, recursion, sudoku, community-challenge
def __init__(self, constraints, initial=(), random=False):
self.random = random
self.constraints = constraints
# A map from constraint to the set of choices that satisfy
# that constraint.
self.choices = defaultdict(set)
for i in self.constraints:
for j in self.constraints[i]:
self.choices[j].add(i)
# The set of constraints which are currently unsatisfied.
self.unsatisfied = set(self.choices)
# The partial solution currently under consideration,
# implemented as a stack of choices.
self.solution = []
# Make all the initial choices.
try:
for i in initial:
self._choose(i)
self.it = self._solve()
except KeyError:
# Initial choices were contradictory, so there are no solutions.
self.it = iter(())
def __next__(self):
return next(self.it)
next = __next__ # for compatibility with Python 2
def _solve(self):
if not self.unsatisfied:
# No remaining unsatisfied constraints.
yield list(self.solution)
return
# Find the constraint with the fewest remaining choices.
        best = min(self.unsatisfied, key=lambda j: len(self.choices[j]))
choices = list(self.choices[best])
if self.random:
shuffle(choices)
# Try each choice in turn and recurse.
for i in choices:
self._choose(i)
for solution in self._solve():
yield solution
self._unchoose(i)
def _choose(self, i):
"""Make choice i; mark constraints satisfied; and remove any
choices that clash with it.
"""
self.solution.append(i)
for j in self.constraints[i]:
self.unsatisfied.remove(j)
for k in self.choices[j]:
for l in self.constraints[k]:
if l != j:
self.choices[l].remove(k) | {
"domain": "codereview.stackexchange",
"id": 5349,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "python, algorithm, recursion, sudoku, community-challenge",
"url": null
} |
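The class above implements an exact-cover search in the style of Knuth's Algorithm X. As a hedged, self-contained illustration of the same idea (the function, names, and toy instance are invented for this sketch, independent of the class above):

```python
def exact_cover(choices, constraints):
    """Yield lists of choices whose constraint sets exactly partition `constraints`.

    `choices` maps each choice to the set of constraints it satisfies.
    """
    if not constraints:
        yield []
        return
    # Branch on the constraint with the fewest remaining candidate choices.
    best = min(constraints,
               key=lambda c: sum(1 for ch in choices if c in choices[ch]))
    for ch in [c for c in choices if best in choices[c]]:
        covered = choices[ch]
        # Keep only choices that don't clash with the one just made.
        rest = {c: s for c, s in choices.items() if not (s & covered)}
        for tail in exact_cover(rest, constraints - covered):
            yield [ch] + tail
```

On the toy instance `{"A": {1}, "B": {2, 3}, "C": {1, 2}, "D": {3}}` with constraints `{1, 2, 3}`, the two exact covers are `A+B` and `C+D`.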
temperature, units, unit-conversion
This is how I now perceive $^\circ\mathrm{C}$ units, and I really doubt it is meaningful to square Celsius or Fahrenheit if they are taken as "absolute" measurements. For temperature differences it looks right, because the offset cancels.
Now asking the SE Physics community: Are my last new statements correct? Thank you. If you are using an absolute temperature, you should use Kelvin. For instance, when using the Stefan-Boltzmann Law,
$$P=A\epsilon\sigma T^4$$
it wouldn't make sense to have units of $^\circ C^4$; only units of $K^4$ physically make sense here.
However, if you are using a temperature difference, then both Celsius and Kelvin are equally valid because a temperature difference in Celsius is equal to one in Kelvin. So when using the heat equation,
$$\Delta Q = m c_p \Delta T$$
you can safely use Celsius as easily as Kelvin because it is a temperature difference. This means that in situations where you are called to use the difference in temperature to some higher power, like $\Delta T^2$, you could just as easily use Celsius as Kelvin without worrying about problems. I can't think of any instances where that is required, but the point is that you can do it.
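A small Python sketch of both points (the numbers are illustrative): the Stefan-Boltzmann law needs an absolute Kelvin temperature, while a temperature difference comes out the same in Celsius and Kelvin:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power(t_kelvin, area=1.0, emissivity=1.0):
    # Stefan-Boltzmann law: only an absolute (Kelvin) temperature makes sense here.
    return area * emissivity * SIGMA * t_kelvin ** 4

# A temperature *difference* is the same number in Celsius and Kelvin:
delta_celsius = 80.0 - 25.0
delta_kelvin = (80.0 + 273.15) - (25.0 + 273.15)

p_room = radiated_power(25.0 + 273.15)  # roughly 448 W/m^2 for a black body at 25 C
```

Plugging the Celsius number 25 directly into the fourth-power law would give a nonsensically small answer, which is the whole point.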
TL;DR If you're using absolute temperature squared, you can't use Celsius; it doesn't make sense. If it's temperature difference you need, then in all cases feel free to switch between the two at will. | {
"domain": "physics.stackexchange",
"id": 20770,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "temperature, units, unit-conversion",
"url": null
} |
Easy-to-use symbol, keyword, package, style, and formatting reference for the LaTeX scientific publishing markup language. LaTeX Math Symbols — enjoy this cheat sheet at its fullest within Dash, the macOS documentation browser.
If your document requires only a few simple mathematical formulas, plain LaTeX has most of the tools that you will ever need. If you are writing a scientific document that contains numerous complex formulas, the amsmath package [1] introduces several new commands that are more powerful and flexible than the ones provided by basic LaTeX. In some cases the default output is still satisfactory, yet any perfectionists will no doubt wish to fine-tune their formulas to ensure spacing is correct; these are generally very subtle adjustments.
If you want the limits of an integral to be specified above and below the symbol (like the sum), use the \limits command. If we put \left and \right before the relevant parentheses, we get a prettier expression: $\left(\frac{a}{x}\right)^2$. Any of the previous delimiters may be used in combination with these.
It can help to define such notation once in the preamble: if you change your mind, you just have to change the definition, and all your integrals will be changed accordingly. Redefining an existing macro this way requires the letltxmacro package. Unfortunately this code won't work if you want to use multiple roots; therefore, special environments have been declared for this purpose.
Miscellaneous symbols (sonstige Symbole): \nexists ∄ (there exists no), \varnothing ∅ (empty set), \measuredangle ∡ (directed angle), \sphericalangle ∢ (solid angle), \blacksquare (black square), \square (white square), \blacktriangle (upward-pointing black triangle), \blacktriangledown (downward-pointing black triangle), \lozenge (rhombus).
Unfortunately, you | {
"domain": "etech-services.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.935346511643776,
"lm_q1q2_score": 0.8023103997004865,
"lm_q2_score": 0.8577680995361899,
"openwebmath_perplexity": 4118.953168208632,
"openwebmath_score": 0.9685235023498535,
"tags": null,
"url": "https://etech-services.com/w2r2x4cw/291d14-latex-math-symbols"
} |
robotic-arm, dynamics, manipulator, jacobian
where the 3 by 3 inertia matrix is given by
where n is the number of DOF of the manipulator.
For the $D(q)$ matrix to be 3 by 3, the linear and angular velocity Jacobian matrices must be 3 by 3 instead of 3 by 1.
Can you explain the mismatch of dimensions?
Am I supposed to augment the 3 by 1 matrices and obtain 3 by 3 matrices? I think this is a matter of notations.
In the given formula for $D(q)$, the matrices $J_{vi}$ and $J_{\omega i}$ are not simply the direct extraction of columns of the Jacobian of the system.
$J_i$ is the matrix that relates $\dot{q}$ to the velocity (of the center of mass) of the link $i$. That is, if we write $v_1$ to denote the linear velocity of the center of mass of the first link, then $J_1$ will be such that
$$v_1 = J_1\dot{q}.$$
Since $\dot{q} \in \mathbf{R}^3$ and $v_1 \in \mathbf{R}^3$, the matrix $J_1$ is a $3 \times 3$ matrix.
Note also that since the velocity of link i is not affected by any joint $j > i$, the columns $j > i$ of the matrix $J_i$ will be zero. | {
"domain": "robotics.stackexchange",
"id": 1533,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "robotic-arm, dynamics, manipulator, jacobian",
"url": null
} |
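As a numerical sanity check of the dimensions discussed above, here is a hedged NumPy sketch: the link Jacobians are random stand-ins (not from any specific robot) with the zero-column structure noted in the answer, assembled into only the translational part of the mass matrix:

```python
import numpy as np

# Toy 3-DOF illustration: columns j > i of J_i are zero, as noted above.
rng = np.random.default_rng(0)
masses = [1.0, 2.0, 1.5]
jacobians = []
for i in range(3):
    J = np.zeros((3, 3))
    J[:, : i + 1] = rng.standard_normal((3, i + 1))
    jacobians.append(J)

# Translational contribution to the mass matrix: D(q) = sum_i m_i * J_i^T J_i
D = sum(m * J.T @ J for m, J in zip(masses, jacobians))
```

Because each $J_i$ is $3 \times 3$, the assembled $D$ is $3 \times 3$, symmetric, and positive semidefinite, exactly as a mass matrix should be.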
java, programming-challenge, interview-questions, complexity
for( int i = 0; i < intervals.size(); i++ )
{
int chStart = intervals.get(i).start;
int chEnd = intervals.get(i).end;
    //No overlap: the intervals are disjoint, so keep this one as-is
if( Math.max(chStart,mStart) > Math.min(chEnd, mEnd))
{
ans.add(intervals.get(i));
count++;
}
    //Overlap: merge into the running interval
else
{
inter.start = Math.min(mStart,chStart);
inter.end = Math.max(mEnd, chEnd);
mStart = inter.start;
mEnd = inter.end;
if(!ans.contains(inter))
{
ans.add(inter);
}
}
}
//Condition when interval is larger than all elements in array, insert interval
//in final answer
if( count == intervals.size())
ans.add(newInterval);
//Sorting the arraylist according to start time using an inner class
//Helps in modularity
Collections.sort(ans, new IntervalSort());
//Time complexity: O(n)
//Space complexity: O(n)
/* for( int i = 0; i < intervals.size(); i++ )
{
int chStart = intervals.get(i).start;
int chEnd = intervals.get(i).end;
if( (chStart <= mStart) && (chEnd > mStart) )
{
inter.start = chStart;
for( int j = i + 1; j < intervals.size(); j++)
{
chStart = intervals.get(j).start;
chEnd = intervals.get(j).end;
if( (chStart <= mEnd) && (mEnd < chEnd) )
inter.end = chEnd;
ans.add(intervals.get(j));
} | {
"domain": "codereview.stackexchange",
"id": 30062,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "java, programming-challenge, interview-questions, complexity",
"url": null
} |
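For reference, the intended insert-and-merge logic can be written very compactly. This is an independent Python sketch of the standard approach, not a translation of the Java above:

```python
def insert_interval(intervals, new):
    """Insert `new` into sorted, pairwise-disjoint `intervals`, merging overlaps."""
    start, end = new
    result, i, n = [], 0, len(intervals)
    # Intervals strictly before `new` are kept unchanged.
    while i < n and intervals[i][1] < start:
        result.append(intervals[i])
        i += 1
    # Absorb every interval that overlaps the growing merged interval.
    while i < n and intervals[i][0] <= end:
        start = min(start, intervals[i][0])
        end = max(end, intervals[i][1])
        i += 1
    result.append((start, end))
    # The rest lie strictly after; keep them unchanged.
    result.extend(intervals[i:])
    return result
```

Since the input is already sorted, this runs in a single O(n) pass with no final sort needed.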
homework-and-exercises, geometry, moment-of-inertia
Title: Moment of inertia tensor calculation If I have a rigid body consists of three uniform rods, each of mass $m$ and length $2a$, held mutually perpendicular at their midpoints choose a coordinate system with the axes along the rod.
So I will explain how I oriented the rods. I have rod $I_1$ along the y-axis,$I_2$ will be along the z-axis and $I_3$ is along the x-axis my coordinate system is right handed coordinate system.
Now I want to calculate $I_{xx}$
So $I_{xx}$ = $I_{1,xx}$ + $I_{2,xx}$ + $I_{3,xx}$ with $I_{i,xx} = \int( y^2 + z^2 )dm$.
$I_{1,xx}$ we will have z = 0, since it has only y component.
$I_{2,xx}$ we will have y = 0, since it has only z component.
$I_{3,xx}$ we will have both y and z.
After rearranging we have
Hence $I_{xx} = 2 \int (y^2 + z^2)\,dm = 2I_{rod,center} = \frac{2}{3} ma^2$.
Is there anything wrong in my work above? Your calculation is correct. As you mentioned the inertia tensor, you can look at the system from a slightly different point of view.
A rod of length $\ell$ and mass $m$ has an inertia tensor with respect to its center of mass which can be written as
$$I = \frac{1}{12} m \ell^2 \left(\mathbb{1}-\hat{n} \otimes \hat{n}\right)$$
where $\hat{n}$ is a unit vector parallel to it and $\mathbb{1}$ is the identity. Note that $\left(\mathbb{1}-\hat{n} \otimes \hat{n}\right)$ is a projector onto the plane perpendicular to $\hat{n}$.
If you add the contributions of the three rods you get | {
"domain": "physics.stackexchange",
"id": 22438,
"lm_label": null,
"lm_name": null,
"lm_q1_score": null,
"lm_q1q2_score": null,
"lm_q2_score": null,
"openwebmath_perplexity": null,
"openwebmath_score": null,
"tags": "homework-and-exercises, geometry, moment-of-inertia",
"url": null
} |
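Summing the three rod contributions from the answer above explicitly (a sketch; $\mathbb{1}$ denotes the identity and $\hat n_1, \hat n_2, \hat n_3$ are the orthonormal rod directions, so $\sum_k \hat n_k\otimes\hat n_k = \mathbb{1}$):

$$\sum_{k=1}^{3}\frac{1}{12}m(2a)^2\left(\mathbb{1}-\hat n_k\otimes\hat n_k\right) = \frac{1}{3}ma^2\left(3\,\mathbb{1}-\mathbb{1}\right) = \frac{2}{3}ma^2\,\mathbb{1},$$

consistent with the direct computation $I_{xx} = \frac{2}{3}ma^2$.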
There’s a lot going on here, but in its most simplified version, we thought we would get a curve on the center line at $\theta =0$, 1 unit above at $\theta =\frac{\pi}{2}$, on at $\theta =\pi$, 1 unit below at $\theta =\frac{3\pi}{2}$, and returning to its starting point at $\theta =2\pi$. We had a very rough “by hand” sketch, and were quite surprised by the image we got when we turned to our grapher for confirmation. The oscillation behavior we predicted was certainly there, but there was more! What do you see in the graph of $r=2+cos(\theta )+sin(\theta)$ below?
This looked to us like some version of a cardioid. Given the symmetry of the axis intercepts, we suspected it was rotated $\frac{\pi}{4}$ from the x-axis. An initially x-axis symmetric polar curve rotated $\frac{\pi}{4}$ would contain the term $cos(\theta-\frac{\pi}{4})$ which expands using a trig identity.
$\begin{array}{ccc} \cos(\theta-\frac{\pi}{4})&=&\cos(\theta )\cos(\frac{\pi}{4})+\sin(\theta )\sin(\frac{\pi}{4}) \\ &=&\frac{1}{\sqrt{2}}(\cos(\theta )+\sin(\theta )) \end{array}$
Eureka! This identity let us rewrite the original polar equation. | {
"domain": "wordpress.com",
"id": null,
"lm_label": "1. YES\n2. YES",
"lm_name": "Qwen/Qwen-72B",
"lm_q1_score": 0.9920620054292072,
"lm_q1q2_score": 0.8432241090798748,
"lm_q2_score": 0.849971175657575,
"openwebmath_perplexity": 907.1379292646518,
"openwebmath_score": 0.753693699836731,
"tags": null,
"url": "https://casmusings.wordpress.com/tag/polar/"
} |
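Carrying the rewrite through (a quick sketch):

$$r = 2 + \cos\theta + \sin\theta = 2 + \sqrt{2}\,\cos\!\left(\theta - \frac{\pi}{4}\right),$$

a limaçon symmetric about the line $\theta = \frac{\pi}{4}$, which matches the rotated shape seen in the graph.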