anchor | positive | source |
|---|---|---|
Macro for transforming 2 way matrix into 1 way matrix | Question: I have written some code that transforms 2 way matrices into 1 way matrices. Although it's comfortable to use, it's very sluggish and slow in execution. Does anyone have any idea how to improve the performance? I'm aware that the same transformation can be done with standard formulas, but solely for learning reasons I would like to stick with vba.
How to use it - select whole matrix and execute the Sub.
Sub TwoToOneMatrix()
Set Mrange = Selection
colsX = Mrange.Columns.Count
rowsX = Mrange.Rows.Count
Set objects = Range(Mrange.Columns(1).Rows(2), Mrange.Columns(1).Rows(rowsX))
Set titleX = Range(Mrange.Columns(1).Rows(1), Mrange.Columns(1).Rows(1))
Set Rng = Application.InputBox("Select a range", "Obtain Range Object", Type:=8)
'Column 1
Rng.Value = titleX.Value
def = 1
For j = 2 To objects.Rows.Count + 1
curCountry = Mrange.Columns(1).Rows(j)
For i = 1 To colsX - 1
Rng.Offset(def, 0).Value = curCountry
def = def + 1
Next i
Next j
'Column 2
def = 1
Rng.Offset(0, 1).Value = "Date"
For j = 1 To objects.Rows.Count
For i = 2 To colsX
curDate = Mrange.Columns(i).Rows(1)
Rng.Offset(def, 1).Value = curDate
def = def + 1
Next i
Next j
'Column 3
Rng.Offset(0, 2).Value = "Values"
def = 1
For r = 2 To rowsX
For c = 2 To colsX
Rng.Offset(def, 2).Value = Mrange.Columns(c).Rows(r)
def = def + 1
Next c
Next r
End Sub
Answer: I've corrected my original code according to @AJD's tips. Now it's fully working and lightning fast.
Sub MatrixTransform()
Dim originalMatrix() As Variant
Dim finalMatrix() As Variant
originalMatrix() = Selection.Value
MatrixRows = 1 + (Selection.Rows.Count - 1) * (Selection.Columns.Count - 1)
ReDim finalMatrix(1 To MatrixRows, 1 To 3) '1-based, so the array maps exactly onto the output range
finalMatrix(1, 1) = CStr(Selection.Rows(1).Columns(1))
finalMatrix(1, 2) = "Date"
finalMatrix(1, 3) = "Value"
k = 2
For i = 2 To Selection.Rows.Count
For j = 2 To Selection.Columns.Count
finalMatrix(k, 1) = originalMatrix(i, 1) 'object
finalMatrix(k, 2) = originalMatrix(1, j) 'date
finalMatrix(k, 3) = originalMatrix(i, j) 'value
k = k + 1
Next j
Next i
Set outputRng = Application.InputBox("Select range", "Obtain Range Object", Type:=8)
Range(outputRng, outputRng.Offset(MatrixRows - 1, 2)).Value = finalMatrix
End Sub | {
"domain": "codereview.stackexchange",
"id": 35464,
"tags": "vba, excel"
} |
Can a free String Field Theory be described in terms of a "wavefunctional" of classical string field configurations? | Question: A free scalar QFT can be understood as a wavefunctional that maps classical field configurations to complex numbers representing amplitudes. An eigenstate of this basis is a classical field configuration that assigns a specific real number to each point $n$ in spacetime. Each of these points can then be thought of as a local oscillator, and the global field as an array of oscillators. One can calculate the expectation value of the displacement or field strength $q(n)$ of such local oscillators in arbitrary global states (which for practical purposes are never actually eigenstates of this basis).
For example, when one thinks of a quasi-localized one particle state in a free QFT (see diagram), the wavefunctional of such a state is one in which the oscillators at and near the localization point have higher expectation values for their displacement relative to other sites.
In this sense it is apparent how and why the underlying degrees of freedom are fields and how particle states, coherent states, etc., are particular superpositions of configurations of fields.
Diagram taken from Linde, Helmut: "A New Way of Visualizing Quantum Fields", (2019), https://arxiv.org/abs/1907.11311
Does this reasoning generalize to a free scalar String Field Theory?
In furtherance of this, I am imagining a system where each of the local oscillators - each $n$ in the diagram - is replaced by a space of all unique classical 1D string "shapes" or "curves" $\sigma_i$ which pass through $n$. Perhaps these are the curves with $n$ as their midpoint, as their center of mass, or as some other fixed point, given some fixed choice of coordinates along the string.
Then, in an eigenstate of the string field wavefunctional, one assigns a real number to each such $\sigma_i$ at each point $n$. So classical string field values $q(n[\sigma_i])$ are assigned to each $n[\sigma_i]$.
Note a field configuration is now an infinite set of values at each point $n$, one for each $\sigma_i$, and not merely a single value at each $n$. In a general state of the string field, which is a superposition of these classical configurations, one has an expectation value for each $q(n[\sigma_i])$. The underlying system is then like an array of "string oscillators" which are displaced about their origin.
I expect this procedure would have to be duplicated for each internal mode of the quantized string itself, treated as a separate field/array. A point $n$ could be indexed to a reference background spacetime or to a brane worldvolume.
Is a straightforward analogy like this sensible? Or are there string-theoretic reasons why in the generalization to strings, only a Fock representation is a viable form of second quantization?
I'm not expecting that this is in any way a practical or useful approach to string theory, just whether it is a faithful or misleading way to think of the general idea of a string field theory.
Answer: String field theory is vastly more sophisticated than that.
Let me mention a few points to illustrate why the way you propose to visualize string field theory is not a good one.
Take the example of closed bosonic string field theory. If you look at the highly complicated action you can learn that a string field has an infinite amount of "excitations" and those components typically have nonzero ghost number. Not to mention that every excitation of a string field is an ordinary field. What is a "classical superposition of fields" with excitations that are fields themselves and whose components have nonzero ghost numbers?
The rules of string theory are still valid when the underlying spacetime is noncommutative or when the worldsheet theory sits at a Gepner point or even when the notion of target spacetime topology is meaningless. What does a "classical superposition of string fields" mean when the spacetime is fundamentally noncommutative or non-geometric or lacks sensible notion of spacetime topology?
The best I can do is to recommend some good introductory texts on the beautiful subject of string field theory (and related string issues).
String Field Theory
How and why strings generalize geometry | {
"domain": "physics.stackexchange",
"id": 68420,
"tags": "quantum-field-theory, string-theory, second-quantization"
} |
How do stars or galaxies get their spin? | Question: It is my understanding that when a star, a planetary disk, or a galaxy forms, the rotational momentum of the whole system is conserved.
Due to the smaller size of the resulting object, it will spin with a significantly higher speed than the original nebula.
What I do not understand is where the original rotation comes from. Why should a random dust cloud have an overall spin? Wouldn't the impulses of all particles in the cloud tend to average each other out?
Is there some alternative source of spin or a reason why nebula have an inherent spin?
Answer: You could start from the premise that there was no net angular momentum in the universe at all; but it would still be the case that everything of interest was spinning.
On the scales of stars and planets there are (at least) two important mechanisms that result in individual systems having angular momentum. The first is turbulence. If you take a parcel of turbulent gas from a giant molecular cloud it will always possess some angular momentum, even if the total cloud does not. As the parcel collapses to form a star/planets, conservation of angular momentum $J$ and dissipative interactions result in an increase in spin rate and collapse towards a planar geometry.
Second, stars form in clusters. There is interaction between stellar systems early in their lives. Again, the cluster may have little net $J$, but groups of stars can, relative to their own centre of mass frame.
On bigger scales (galaxies) the second of these explanations becomes more important. The interaction and accretion of galaxies is what gives individual galaxies a spin, even if the clusters they are born in have much less or even no net angular momentum.
As an example of how turbulent velocity fields lead to gravitational condensations containing angular momentum you could do worse than study the star formation simulations performed by Matthew Bate and collaborators. These simulations start off in clouds with zero net angular momentum, yet produce a host of stars with swirling accretion disks, binary systems of all shape and sizes etc. An example journal paper can be found here: http://adsabs.harvard.edu/abs/2009MNRAS.392..590B Here is a web page where you can download the animations and study them at length http://www.astro.ex.ac.uk/people/mbate/Cluster/cluster500RT.html
Turbulent clouds are by their nature random and stochastic in terms of their motions. Often the velocity field is defined in terms of a power law dependence on spatial scale. The formation of vortices is a characteristic of turbulent media. They can be produced in the absence of external forces. The vortices contain angular momentum.
It is also worth noting that not all galaxies have an appreciable spin. Spiral galaxies do, but many elliptical galaxies have little net rotation. See https://physics.stackexchange.com/questions/93830/why-the-galaxies-forms-2d-plane-or-spiral-like-instead-of-3d-ball-or-spherica | {
"domain": "astronomy.stackexchange",
"id": 731,
"tags": "star, galaxy, rotation, protostar"
} |
Create JSON file from WordPress ACF options | Question: I am using a WordPress multisite network and want to pull information from options pages in Advanced Custom Fields to create a json file. I currently have this code running whenever an options field is updated on a subsite. The json file lives on the main site.
The code works and runs successfully, however, it takes forever and often results in 502 errors due to running out of time. This is solved by refreshing the page and attempting to update the options value again.
How can I optimize my code and make it run faster or more efficiently?
functions.php
require_once( 'httpful.phar' );
function create_locations_json( $post_id ) {
// check if acf field updated is on options page
if ($post_id == 'options' || $post_id == 0) {
$stageurl = array();
$posts = array();
$args = array(
'public' => true,
'limit' => 500
);
// get all public sites in multisite network
$sites = wp_get_sites($args);
foreach ($sites as $site) {
// loop through sites and pull stage variable from options
switch_to_blog($site['blog_id']);
$stage = get_field('stage', 'option');
if (isset($stage)) {
$stageurl[] = $site['domain'];
}
restore_current_blog();
}
// loop through sites to generate json content
foreach ($stageurl as $i => $stages) {
$mainurl = parse_url(network_site_url());
// check if http (local) or https (production)
if($mainurl['scheme'] == 'https'){
$url = "https://" . $stageurl[$i] . "/wp-json/acf/v2/options";
} else {
$url = "http://" . $stageurl[$i] . "/wp-json/acf/v2/options";
}
// send api url to Httpful to get acf json
$response = \Httpful\Request::get($url)->send();
// get fields from acf json
$name = "{$response->body->acf->small_location_name}";
$sitestatus = "{$response->body->acf->stage}";
$city = "{$response->body->acf->city}";
$state = "{$response->body->acf->state}";
$email = "{$response->body->acf->email}";
// get lat and lng from google maps api
$mapsurl = "https://maps.googleapis.com/maps/api/geocode/json?address=" . urlencode($city) . ",+" . $state . "&key=KEYHERE";
$mapsresponse = \Httpful\Request::get($mapsurl)->send();
$lat = "{$mapsresponse->body->results[0]->geometry->location->lat}";
$lng = "{$mapsresponse->body->results[0]->geometry->location->lng}";
// create different json responses based on stage
if ($sitestatus > 1) {
$address = "{$response->body->acf->address_1}";
$address2 = "{$response->body->acf->address_2}";
$postal = "{$response->body->acf->zip}";
$posts[] = array('id' => $i, 'name' => $name, 'site_status' => $sitestatus, 'address' => $address, 'address_2' => $address2, 'city' => $city, 'state' => $state, 'postal' => $postal, 'lat' => $lat, 'lng' => $lng, 'email' => $email, 'web' => $stageurl[$i]);
}
else {
$posts[] = array('id' => $i, 'name' => $name, 'site_status' => $sitestatus, 'city' => $city, 'state' => $state, 'lat' => $lat, 'lng' => $lng, 'email' => $email, 'web' => $stageurl[$i]);
}
}
// overwrite whole json file with new array
file_put_contents(plugin_dir_path( __FILE__ ) . '../library/json/locations.json', json_encode($posts));
}
else {
return $post_id;
}
return $post_id;
}
// run when acf field is updated
add_action('acf/save_post', 'create_locations_json', 20);
Answer: Your problem is likely making those API requests in a loop. You might consider using a REST library which can support multiple concurrent calls to your end services (perhaps something based on curl_multi_exec()). This would allow you to collapse overall API request time down to the length of whichever request happens to take the longest.
I am not familiar with Httpful which it seems you are using, but in glancing at the information available for that library, I didn't see support for making concurrent requests. This may mean you need to change libraries or build something yourself to do this.
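The reviewer's point - that concurrent requests collapse total wall time to roughly the longest single request - can be sketched in Python with simulated requests (the `fetch` stub and its delays are invented for illustration; the real fix would use `curl_multi` or a similar mechanism in PHP):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(delay):
    # Stand-in for one blocking HTTP request (e.g. one site's ACF endpoint).
    time.sleep(delay)
    return delay

delays = [0.2, 0.2, 0.2, 0.2, 0.2]

# Sequential: total time is roughly the *sum* of all request times.
start = time.time()
for d in delays:
    fetch(d)
sequential = time.time() - start

# Concurrent: total time is roughly the *longest* single request.
start = time.time()
with ThreadPoolExecutor(max_workers=len(delays)) as pool:
    results = list(pool.map(fetch, delays))
concurrent = time.time() - start

print(concurrent < sequential)  # True: the concurrent run finishes far sooner
```

The same ratio applies whether the loop makes five requests or five hundred: sequential time grows with the number of sites, concurrent time stays near the slowest endpoint.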
I will link to an example class I had once written in PHP to give you an idea for how this may be accomplished.
https://github.com/mikecbrant/php-rest-client
You are free to use it if you find it useful. | {
"domain": "codereview.stackexchange",
"id": 20572,
"tags": "php, json, wordpress"
} |
Does mini-batch gradient descent nullify the effect of stratification on the training data set? | Question: In data pre-processing, stratified shuffle is used to ensure that the distribution of the original dataset is reflected in the training, test and validation dataset.
Mini-batch gradient descent uses random shuffling to ensure randomness in the mini-batches.
My doubt is- Why should we implement stratified shuffle on our dataset if it is going to be shuffled in a random manner later during training?
Answer: It doesn't; the workflow when training a model is like this:
1. Create 10 evenly distributed splits from the dataset using stratified shuffle
2. Assign the splits: train set = 8 splits; validation set = 1 split; test set = 1 split
3. Shuffle the train set and the validation set and create mini-batches from them
4. Train for one epoch using the batches
5. Repeat from step 3 until all epochs are over
6. Evaluate the model using the test set
If we skip the stratified shuffling in step 1, the classes of the train set, validation set and test set won't be evenly distributed.
If we skip the shuffling before each epoch in step 3, the mini-batches in each epoch will be the same.
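A minimal sketch of what step 1 and step 3 each contribute, in pure Python (the toy labels and the 80/20 fractions are assumed for illustration, not taken from the question):

```python
import random
from collections import Counter

random.seed(0)
# Toy imbalanced dataset: 90% class 0, 10% class 1.
labels = [0] * 90 + [1] * 10

# Step 1: stratified split -- sample each class separately so the
# 90/10 class ratio is preserved in both resulting sets.
def stratified_split(labels, test_frac):
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    test = []
    for idxs in by_class.values():
        idxs = idxs[:]
        random.shuffle(idxs)
        test += idxs[:int(len(idxs) * test_frac)]
    test_set = set(test)
    train = [i for i in range(len(labels)) if i not in test_set]
    return train, test

train, test = stratified_split(labels, 0.2)
train_ratio = Counter(labels[i] for i in train)
test_ratio = Counter(labels[i] for i in test)
print(train_ratio, test_ratio)  # class ratios match: 72/8 vs 18/2

# Step 3: per-epoch shuffling only reorders the (already stratified)
# train set before slicing it into mini-batches.
random.shuffle(train)
batches = [train[i:i + 16] for i in range(0, len(train), 16)]
```

The shuffle in the last two lines changes which samples share a batch each epoch, but it cannot undo the class balance established in step 1 - it never moves samples between the splits.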
The proportions of the train set, validation set and test set can of course vary. | {
"domain": "datascience.stackexchange",
"id": 8074,
"tags": "neural-network, scikit-learn, data-cleaning"
} |
What happens when a cavitation bubble collapses | Question: I know that when a cavitation bubble collapses, heat is given off and a shockwave is formed.
What else happens? Is there increased water pressure in that region? Can the intensity of this implosion be measured using a hydrophone?
Answer: When a cavitation bubble collapses, its internal volume is headed towards zero and it drags the surrounding fluid with it, accelerating it radially inwards towards the eventual collapse point.
At the instant when the volume goes to zero, there persists a radially-inward velocity field of fluid elements which were set in motion previously and which contain kinetic energy, and at the same instant they are all pointing inwards to the same point where there is nowhere to go!
At that collapse point, forces rise to extremely high levels over extremely small areas, generating gigantic pressures in very short times. It is possible for the water right at the collapse point to get heated enough that it emits a burst of light with a blackbody spectrum indicating a peak temperature of thousands of degrees (this is called sonoluminescence).
If there is a rigid wall near the bubble as it collapses, the collapse will be drawn towards the wall and the radial flow field will exert gigantic pressure upon a vanishingly small point on the wall, which readily punches a microscopic dent into the wall. Successive dents from successive cavitation collapses can drill a pinhole right through the wall by this mechanism.
Even if the collapse is insufficient to physically dent the wall, it is usually sufficient to disrupt any oxide layer adsorbed onto the wall, exposing fresh unoxidized wall material beneath, which then immediately oxidizes. Successive disruptions of the oxide layer then drill a pinhole or a cavity into the wall, a mechanism known as cavitation corrosion.
A piezoelectric chip near the collapsing bubble can pick up the acoustic signal, which can be processed into information about the pressure and duration of the collapse impact.
In inkjet printheads based on superheat boiling of the ink, this collapse happens right after each droplet of ink is ejected from its nozzles. You can hear the impacts by pressing a phonograph cartridge needle against the orifice plate of an operating inkjet printhead and plugging the cartridge into a guitar amplifier, so everyone in the lab can hear the printhead "sing". | {
"domain": "physics.stackexchange",
"id": 97938,
"tags": "thermodynamics, acoustics"
} |
Why are Gravitational Waves so small? | Question: I'm sure you've all seen the diagrams and/or 3D visualizations of gravity; the ball sitting on a piece of fabric which makes it sink down. They've also started using it in the videos that explain gravitational waves. Two objects will be circling each other on the fabric and will emit waves.
I know this is an oversimplification of what's going on, but it is quite misleading and has left me a little bit puzzled as to why gravitational waves are in fact the size of protons, not the size of planets like the videos suggest.
So I guess my question is, why are they in fact so small, and why can't we detect them from astral bodies in our solar system that we can actually detect a physical force from?
Answer: Actually the wavelengths often are the sizes of planets. If the period of something moving at $c = 3\times10^5\ \mathrm{km/s}$ is $1\ \mathrm{s}$ (similar to the recent LIGO discovery), its wavelength is $\lambda = 3\times10^5\ \mathrm{km}$. Other phenomena could well produce waves with wavelengths larger than the solar system.
What is small is the amplitude of the waves. The recently detected waves had amplitudes of $10^{-21}$. This means that they stretched spatial lengths by one part in a thousand billion billion. LIGO in particular has interferometer arms that are a few thousand meters in length, so these arms were stretched by a few parts in a billion billion.
Think about light. There is wavelength -- radio waves are meters long, visible waves are hundreds of nanometers long, and gamma rays are fractions of a nanometer long -- and there is intensity. Even if your eyes are optimized for detecting visible light, they can't see sources that are too faint.
The wavelengths of gravitational waves are set by the typical scales in the system generating them. For example, with inspiraling stellar mass black holes, the system is a bit smaller than Earth. The reason gravitational waves are weaker in amplitude than electromagnetic waves is usually given as gravity being an intrinsically weaker force, or equivalently as most matter being very highly charged (it's mostly protons and electrons) while not very massive (it takes a lot of matter to have a noticeable gravitational effect). | {
"domain": "physics.stackexchange",
"id": 28453,
"tags": "gravity, gravitational-waves, estimation"
} |
Bluebird: promisify xhr request | Question: I use bluebird promises. I want to promisify my requests from communication layer:
Utils.js
//Question: is there more beautiful way to do that?
promisifyXMLHttpRequest(xhr, timeout = 0, timeoutCallback = _.noop, responseHandler = response => JSON.parse(response).data) {
const requestPromise = new Promise((resolve, reject) => {
xhr.onload = () => {
if (xhr.readyState === 4) {
if (xhr.status === 200) {
resolve(responseHandler(xhr.responseText));
} else {
reject(xhr.statusText);
}
}
};
xhr.onerror = function (e) {
reject(xhr.statusText);
};
});
if (timeout) {
requestPromise.timeout(timeout)
.catch(TimeoutError, error => {
timeoutCallback(error);
});
}
return requestPromise;
}
Communication
_sendRequest(body, sessionId, timeout, timeoutCallback, responseHandler) {
const xhr = new XMLHttpRequest();
xhr.open('POST', SOME_URL, true);
if (sessionId) xhr.setRequestHeader('swarm-session', sessionId);
const requestPromise = Utils.promisifyXMLHttpRequest(xhr, timeout, timeoutCallback, responseHandler);
xhr.send(JSON.stringify(body));
return requestPromise;
}
Answer: const requestPromise = Utils.promisifyXMLHttpRequest(xhr, timeout, timeoutCallback, responseHandler);
xhr.send(JSON.stringify(body));
return requestPromise;
// to something like
return myLib.post(...)
To start, I suggest you just fire the request immediately and return a promise. I don't see a reason why you'd separate the creation of the XHR from the moment it fires given the code you provided.
const requestPromise = Utils.promisifyXMLHttpRequest(xhr, timeout, timeoutCallback, responseHandler);
I don't understand why you are handing over callbacks when you can simply attach then and catch to the returned promise. The purpose of promises is to have an object to which you can listen for events related to an async operation.
I also don't understand why you'd separate the XHR object creation from the code that attaches all the handlers. I suggest you just moving everything related to xhr creation into one function. This includes creating the xhr, the promise, adding the headers.
if (xhr.readyState === 4) {
if (xhr.status === 200) {
...
// to
if(xhr.readyState !== 4) return;
if(xhr.status === 200) resolve(...);
else reject(...);
Just twisting the logic to avoid the extra nesting. If the onload isn't ready, we just return early. No harm done. Otherwise, it is ready and we either do a resolve or reject. if statements can do no-bracket bodies, but I suggest to use them only when they're short, readable and only one line.
Simplifying your code, it should be like this. post creates your xhr, sends it and returns a promise. All _sendRequest has to be concerned about is preparing and providing post the data needed to send out the request, and attaching callbacks depending on what happens to that promise.
function post(url, headers, body, timeout){
const xhr = new XMLHttpRequest();
let promise = new Promise((resolve, reject) => {
xhr.onload = () => {
if(xhr.readyState !== 4) return;
if(xhr.status === 200) resolve(xhr.responseText);
else reject(xhr.statusText);
};
xhr.onerror = () => {
reject(xhr.statusText);
};
});
// open() must be called before setRequestHeader()
xhr.open('POST', url, true);
Object.keys(headers).forEach(key => {
xhr.setRequestHeader(key, headers[key]);
});
// Bluebird's .timeout() returns a new promise, so keep the result
if(timeout) promise = promise.timeout(timeout);
xhr.send(JSON.stringify(body));
return promise;
}
_sendRequest(body, sessionId, timeout, timeoutCallback, responseHandler){
var headers = {}
if(sessionId) headers['swarm-session'] = sessionId;
return post(SOME_URL, headers, body, timeout)
.then(responseHandler)
.catch(timeoutCallback);
} | {
"domain": "codereview.stackexchange",
"id": 17073,
"tags": "javascript, promise"
} |
Base adder, given base and two numbers of that base, it adds them | Question: Nothing more to add than the title. Looking for code review, optimizations and best practices.
public final class BaseAdder {
private BaseAdder() { }
/**
* Given two numbers and their bases, adds the number
* and returns the result in the same base.
*
* This code supports max-base 26.
*
*
* @param num1 the first number to be added
* @param num2 the second number to be added
* @param base the base of the numbers
*/
public static String add (String num1, String num2, int base) {
if (base < 2) throw new IllegalArgumentException("The base should at least be 2. Input base is: " + base);
if (base > 26) throw new IllegalArgumentException("The base should be less than 26. Input base is: " + base);
/*
* http://stackoverflow.com/questions/355089/stringbuilder-and-stringbuffer-in-java
* and StringBuilder is intended as a drop in replacement for StringBuffer where synchronisation is not required – Joel Dec 23 '09 at 8:52
*
*/
final StringBuilder stringBuilder = new StringBuilder();
int carry = 0;
for (int i = num1.length() - 1; i >= 0; i--) {
int x = getIntValue(num1.charAt(i), base) + getIntValue(num2.charAt(i), base) + carry;
if (x >= base) {
carry = 1;
x = x - base;
} else {
carry = 0;
}
stringBuilder.append(getCharValue(x, base));
}
if (carry == 1) stringBuilder.append(1);
return stringBuilder.reverse().toString();
}
public static int getIntValue(char ch, int base) {
if (ch > getCharValue(base - 1, base)) {
throw new IllegalArgumentException(" invalid character " + ch + " for input base " + base);
}
if (ch >= '0' && ch <= '9') {
return ch - '0';
}
Character.toUpperCase(ch);
return ch - 'A' + 10;
}
public static char getCharValue(int x, int base) {
if (x >= base) {
throw new IllegalArgumentException(" invalid number " + x + " for input base " + base);
}
if (x >= 0 && x <= 9) {
return (char) (x + '0');
}
return (char)(x + 'A' - 10);
}
public static void main(String[] args) {
assertEquals("100", BaseAdder.add("10", "10", 2));
assertEquals("11", BaseAdder.add("10", "01", 2));
assertEquals("25", BaseAdder.add("10", "15", 10));
assertEquals("4A", BaseAdder.add("1C", "2D", 15));
assertEquals("1B9", BaseAdder.add("CC", "ED", 16));
assertEquals("1B", BaseAdder.add("C", "D", 14));
assertEquals("1A", BaseAdder.add("C", "D", 15));
assertEquals("19", BaseAdder.add("C", "D", 16));
}
}
Answer: Your solution is incorrect by inspection, due to its disregard for num2.length().
The simplest solution would be to use BigInteger. Assuming that you are deliberately reinventing the wheel, you don't need to reinvent it from scratch. Take advantage of Character.digit(int codePoint, int radix) and Character.forDigit(int digit, int radix).
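For comparison with the "reuse the built-in digit handling" point, here is the whole exercise in Python, where `int(s, base)` plays the role of `Character.digit`-driven parsing and a small helper plays `Character.forDigit` (a sketch of the idea, not the Java fix itself):

```python
import string

DIGITS = string.digits + string.ascii_uppercase  # '0'..'9' then 'A'..'Z'

def to_base(n, base):
    # Analogue of Character.forDigit: non-negative integer -> string.
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(DIGITS[r])
    return "".join(reversed(out))

def add(num1, num2, base):
    if not 2 <= base <= 26:
        raise ValueError("base must be in 2..26, got %d" % base)
    # int(s, base) parses the digits for us; it also copes with
    # operands of different lengths, which the reviewed code does not.
    return to_base(int(num1, base) + int(num2, base), base)

print(add("10", "10", 2))   # 100
print(add("CC", "ED", 16))  # 1B9
print(add("C", "D", 15))    # 1A
```

Converting to an integer, adding, and converting back sidesteps the whole carry loop and the unequal-length bug in one stroke.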
A more flexible design would be an Accumulator object, which would allow you to add several numbers more efficiently.
public class Accumulator {
public Accumulator(int radix) { … }
public void add(String num) { … }
public String toString() { … }
} | {
"domain": "codereview.stackexchange",
"id": 7927,
"tags": "java, algorithm"
} |
techniques or examples of analyzing a series of graphs | Question: Let there be a sequence of graphs $G_1, G_2, G_3, ...$ constructed using some particular approach or algorithm. In this particular case $G_n$ is constructed by modifying $G_{n-1}$ in some "systematic" way; $G_n$ is "more complex" than $G_{n-1}$ and generally has more edges/vertices.
Question:
What is some systematic method of determining the "basic differences" between $G_n$ and $G_{n-1}$? In some sense, how does one derive a "recurrence relation"-like formula that produces $G_n$ from $G_{n-1}$ (similar to [1], but with graphs as the parameters instead of integers)? What are some famous examples of graph sequences like this, successfully analyzed in the literature?
Presumably this general question is undecidable for some contrived cases [exercise for reader] but is decidable for special cases. I am interested in the subset of decidable cases.
Background
This is an old problem that first occurred to me about 2 decades ago which to my knowledge has not been studied too much in particular but which seems significant. I have a particular algorithm in mind for generating $G_n$ from $G_{n-1}$ for which I suspect there is not much research (hint, see other question [2]), and might mention/further clarify at a later date; but for now I am interested in a more general approach.
Another analogy would be with textual "diff" programs which find lines added, deleted, modified between two text files. Is there an analogous approach for graphs that has been used somewhere? Actually, I suspect this type of construction is somewhat widespread in graph theory proofs, maybe even key/famous ones, but not necessarily studied so much in isolation.
In the best case scenario, there would be a software package that would actually successfully derive such formulas in some cases, but to my knowledge it does not exist right now.
[1] solving recurrence relations, wikipedia
[2] largest language class for which inclusion is decidable
Answer: I think your question falls in the context of "graph operators". A graph operator $S$ is just a map from the class of graphs to the class of graphs, and you then have a sequence $G$, $S(G)$, $S^2(G)$, etc.
I do not think that there could possibly be a general way to determine $S^n(G)$ as a function of $n$ and $G$ that could work for all $S$, even for a fixed $S$. For example, if $S(G)$ is the intersection graph of the maximal cliques of $G$ (considered in Harary's book and denoted by $K$), then what you ask is known for only a handful of families, including the complements of $nK_2$, the locally $C_6$ graphs and the so called "clockwork graphs".
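The sequence $G$, $S(G)$, $S^2(G)$, ... is easy to experiment with computationally; here is a toy sketch in Python using the line-graph operator as a stand-in for a generic $S$ (graphs are represented by their edge sets, so isolated vertices are ignored; this is an illustration of the operator-iteration idea, not of the clique-graph operator $K$ discussed above):

```python
from itertools import combinations

def line_graph(edges):
    # One concrete graph operator S: the vertices of S(G) are the edges
    # of G, joined whenever the original edges share an endpoint.
    verts = list(edges)
    new_edges = set()
    for e, f in combinations(verts, 2):
        if e & f:  # the two old edges share a vertex
            new_edges.add(frozenset({e, f}))
    return new_edges

# G_1 = the path on 5 vertices, given by its edge set;
# iterate G_{n+1} = S(G_n) and record how many edges each graph has.
G = {frozenset({i, i + 1}) for i in range(4)}
sizes = []
for _ in range(4):
    sizes.append(len(G))
    G = line_graph(G)
print(sizes)  # [4, 3, 2, 1] -- the line graph of a path is a shorter path
```

For this particular $S$ and family the "recurrence" is trivially analyzable (paths map to shorter paths), which is exactly the kind of special-case statement the question is after; for arbitrary $S$ no such closed form need exist.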
Many other graph operators are studied in the book "Graph Dynamics" by Erich Prisner. | {
"domain": "cstheory.stackexchange",
"id": 1510,
"tags": "ds.algorithms, reference-request, graph-theory, co.combinatorics, graph-algorithms"
} |
Do hydrogen sulfide and oxygen produce pure sulfur or sulfur dioxide? If both, under which circumstances does each scenario occur? | Question: If both of the following hydrogen sulfide and oxygen chemical equations can occur, under what circumstances is the harmless pure sulfur and the toxic sulfur dioxide produced?
$\ce{2H2S + O2 -> 2H2O + 2S}$
$\ce{2H2S + 3O2 -> 2H2O + 2SO2}$
Specifically I have a fresh water aquarium with sand, plants, and small creatures I collected from a creek. I want to prevent hydrogen sulfide building up by oxygenation with an air pump, but I want to make sure the environment does not result in the harmful sulfur dioxide in the tank.
Answer: Per my review of the conceivable chemistry, harsh conditions are not required, but acidic conditions could likely become increasingly problematic! Per the context of the question, to quote:
Specifically I have a freshwater aquarium with sand, plants, and small creatures I collected from a creek. I want to prevent hydrogen sulfide building up by oxygenation with an air pump, but I want to make sure the environment does not result in the harmful sulfur dioxide in the tank.
So, consider the scenarios if one passes air/O2, in the presence of say a photocatalyst (B-12, dye,..) or perhaps sonolysis (sound vibrations), into:
$\ce{H2S (aq) = H+ (aq) + HS- (aq)}$
with possible issues over time depending on pH.
For example, with sonolysis:
$\ce{H2O ->[\text{sonolysis}] .H + .OH}$
albeit any vibrations (say, from the air pump) are not likely that effective here.
However, continuing with the illustrative chemistry (source of sulfur related radical chemistry, see Page 7, this source):
$\ce{.OH + H2S -> .HS + H2O }$
$\ce{.HS + O2 -> .SO2- + H+ }$
$\ce{.SO2- + O2 -> SO2 + .O2- }$
which is one path to SO2 not contingent on pH, but elevated pH would remove formed sulfur dioxide!
$\ce{.H + O2 -> .HO2 }$
$\ce{.HO2 <=> H+ + .O2- \text{pK}_a 4.88}$
Now, if water in the fish tanks gets too acidic (pH < 4.88), then possible further aqueous reactions include:
$\ce{ .HO2 + .HO2 -> H2O2 + O2 (Slow) }$
$\ce{ .HO2 + HS- -> HO2- + .HS }$
And importantly:
$\ce{ H+ + HO2- = H2O2 }$
which readily converts H2S into H2O + S. Expect with time some aqueous SO2 and perhaps also sulfate as well.
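To get a feel for the quoted $\mathrm{p}K_a$ threshold of 4.88, the protonated fraction of the $\ce{.HO2}$/$\ce{.O2-}$ pair follows from the Henderson–Hasselbalch relation; a quick sketch (function name is mine):

```python
def ho2_fraction(pH, pKa=4.88):
    # fraction of the .HO2 / .O2- pair present as protonated .HO2
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

# near-neutral tank water: essentially all superoxide, very little .HO2;
# below pH 4.88 the .HO2 branch (and its follow-on chemistry) dominates
```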
More likely, however, is strong sunlight exposure acting on natural coloration present in the fish tank serving as a weak photocatalyst, generating some electron holes (h+) and also solvated electrons (e-(aq)):
$\ce{Dye + hv -> [Dye]{*} -> Dye + h+ + e- }$
$\ce{H2O + h+ -> H+ + .OH }$
$\ce{.OH + H2S -> .HS + H2O }$
where the reaction could continue (per the sonolysis scheme above, with one path to SO2 not pH dependent), and pH < 4.88 further promotes the sulfur species $\ce{S}$, $\ce{SO2}$ and $\ce{SO4^2-}$.
So, my review of possible chemistry suggests keeping the pH near neutral to be safer from SO2 (but not entirely); interestingly, though, if H2S concentrations become dangerous, a more acidic pH could be a solution with sunlight, accommodating glass, and some dye.
[EDIT] Confirming source on the scenario of employing light (and also ferrous) in the removal of hydrogen sulfide, to quote:
Such oxidation reactions are catalyzed both by soluble metals such as iron and by light. Hydrogen sulfide also can combine with metals such as iron (Fe++) to precipitate as black iron sulfide (Figure 1 bottom; FeS and FeS2). | {
"domain": "chemistry.stackexchange",
"id": 14586,
"tags": "everyday-chemistry, toxicity"
} |
Linear and rotational movement | Question: In the system shown, when you put it in motion by pulling the weight $1$ downwards and then releasing it, does body 3 exhibit rotational motion as well as linear vertical motion? What is its center-of-mass linear speed? Is it $\frac{1}{2}$ the speed of body 1?
Answer: This diagram should tell you what you need to know - as the string is pulled, the pulley must rotate. Do you see it now?
Incidentally, the speed with which the second string (on 3) is pulled depends on the ratio $\frac{R}{r}$ - if you move the inner string (the one attached to 1) by $r$, the outer string will move by $R$. And that motion is then divided by two when it comes to the motion of the center of mass of 3.
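In numbers, the two steps above chain together; a small sketch (symbols as in the answer, function name mine):

```python
def center_of_mass_speed(v1, R, r):
    # the inner string (attached to body 1) moves at v1, so the outer
    # string moves at (R/r)*v1; body 3, hanging on a movable pulley,
    # then translates at half the outer-string speed
    return (R / r) * v1 / 2.0

# only when R == r does body 3 move at exactly half the speed of body 1
```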
I will leave you to figure out the rest. | {
"domain": "physics.stackexchange",
"id": 37463,
"tags": "homework-and-exercises"
} |
BCS-BEC crossover: relation between scattering length and bare interaction | Question: I am studying the BCS-BEC crossover in atomic Fermi gases from this reference https://arxiv.org/pdf/1402.5171.pdf , but I am having a hard time understanding some of the details.
In particular I can't understand equation 1.2:
$$
\frac{m}{4\pi\hbar^2 a_s} = \frac{1}{U} + \frac{1}{V}\sum_k^{\Lambda} \frac{1}{2\varepsilon_k},
$$
where $m$ is the particle mass, $a_s$ is the scattering length, $U$ is the "bare interaction", $\varepsilon_k = \hbar^2k^2/2m$ is the dispersion relation for the non interacting particles, and $\Lambda$ is a suitable cutoff, which is inversely proportional to the range of the particle-particle interaction potential (correct me if I'm wrong), $V$ is the volume of the system (I added the factor $1/V$ myself to make a correct dimensional analysis).
Now I have many questions about this equation:
where does this come from? If a thorough demonstration is too long or difficult, at least I would like to have a qualitative explanation, because this comes out of the blue to me.
how should I use this equation? To my understanding, I should use it to find out the value of $U$ given $a_s$ and $\Lambda$. As far as I know, $a_s$ can be tuned experimentally, e.g. via a Feshbach resonance, and $\Lambda$ should be fixed for a given atomic species, so the only "theoretical" parameter is $U$. Am I right?
For a 3d system of particles with no external potential, I would simplify the last term as:
$$
\frac{1}{V}\sum_k^{\Lambda}\frac{1}{2\varepsilon_k} = \frac{1}{(2\pi)^3} \int_0^{\Lambda} 4\pi k^2 dk \frac{m}{\hbar^2k^2} = \frac{m\Lambda}{2\pi^2 \hbar^2}
$$
is this correct?
In case point 2. and point 3. are correct, then the equation can be rewritten as
$$
\frac{1}{U} = \frac{m}{4\pi\hbar^2}\left( \frac{1}{a_s} - \frac{1}{r_0} \right)
$$
where $r_0 = \pi/2\Lambda$. Now since the BCS-BEC theory assumes that $U<0$, we have that this condition is encountered just if $a_s > r_0$, or if $a_s<0$, am I right? What is the physical meaning of the former condition?
Thank you in advance for any help! :)
Answer: The scattering length equation you show is for a contact interaction
with a $k$ cutoff at $\Lambda$.
One way to get this form is to
i) Write down the Schrödinger equation ii) enforce the usual
boundary conditions that the incoming part is from the free particle
or plane wave solution to get the Lippmann-Schwinger equation. At low
energy, the scattering is s-wave, and since the
interaction is separable in momentum space, you can solve for
the s-wave scattering amplitude $f$. The scattering length and effective
range are defined from the low energy s-wave scattering amplitude $f$ as
$$
\operatorname{Re} \frac{1}{f(k)} = -\frac{1}{a}+\frac{1}{2}r_e k^2+\dots
$$
At $k=0$, the energy is $E=0$,
and the energy denominator in the relative coordinate
Lippmann-Schwinger equation becomes $2\varepsilon_k$, leading to the
expression for your first equation. If you keep the next term,
you find that changing your cutoff $\Lambda$ changes the effective range,
with larger cutoffs having smaller effective ranges. For atomic systems,
the physical effective range is of order the size of the atoms, but since
the spacing between the atoms is generally much larger, for numerical
computations, you can choose a smaller $\Lambda$ and still get good
predictions. Any standard quantum book that covers the Lippmann-Schwinger equation and separable potentials will have this calculation.
Yes for a spherical cutoff in the large volume limit
you can perform the spherical
integral this way. The equation you wrote can also be used for lattice
calculations which gives a nonspherical cutoff in this limit.
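The spherical-cutoff integral from point 3 of the question is easy to check numerically; a sketch in natural units ($m=\hbar=1$ assumed, cutoff chosen arbitrarily):

```python
import math

m = hbar = 1.0      # natural units, purely for the check
Lam = 10.0          # momentum cutoff
n = 200_000
dk = Lam / n

# (1/(2*pi)^3) * integral_0^Lam of 4*pi*k^2 * m/(hbar^2 k^2) dk, midpoint rule
num = sum(
    4 * math.pi * ((i + 0.5) * dk) ** 2 / (2 * math.pi) ** 3
    * m / (hbar ** 2 * ((i + 0.5) * dk) ** 2) * dk
    for i in range(n)
)
ana = m * Lam / (2 * math.pi ** 2 * hbar ** 2)  # closed form from the question
```

The two agree, confirming the $m\Lambda/(2\pi^2\hbar^2)$ result for a spherical cutoff.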
In principle both $\Lambda$ and $U$ should be adjusted to
match the scattering parameters as you change the magnetic field.
A positive scattering length with a negative potential can occur when
the potential is strong enough to have a bound state. The unitary limit
where $a$ diverges is when the potential is just at the point of having a
bound state at zero energy. Start from a small negative $U$, for which $a^{-1}$ is large and negative; as you increase the magnitude of $U$, $a^{-1}$ approaches zero from below. At the unitary limit $a^{-1}$ is zero, and for still larger negative $U$ values you get a bound state (which is then the BEC side) and $a^{-1}$ is positive. | {
"domain": "physics.stackexchange",
"id": 94616,
"tags": "condensed-matter, cold-atoms"
} |
Magnetron Tube Emit Direction | Question: for a project I am doing research on microwave ovens and their operation. I have researched most of the electronic components but I was wondering about one part, the magnetron tube. I have looked it up several times and I cannot find it anywhere, which way do the tubes emit microwave radiation? Is it parallel to the axis of the magnets or is it radially through the filament? Or is it in every direction?
Answer: The most basic magnetron is a wire parallel to the magnetic field; in such a tube the electrons circle the wire, and since the emission of an accelerating electron is perpendicular to the acceleration vector, and since the acceleration vector changes as the electron makes a complete loop, I believe you would expect isotropic radiation.
More usually, a number of resonant cavities are added. This makes the magnetron a "cavity magnetron", with a better-defined resonant frequency - it was the breakthrough that enabled radar, and arguably was responsible for Britain winning the Battle of Britain, thus preventing a German invasion early on in WW II.
In a cavity magnetron, the radiation only comes out on the output port - everywhere else it is reflected, with the resonant frequency being reinforced.
Some interesting information on the early history of the magnetron (and radar): http://www.johnhearfield.com/Radar/Magnetron.htm | {
"domain": "physics.stackexchange",
"id": 39061,
"tags": "microwaves"
} |
Relaxation time confusion | Question: I was reading kinetic theory from these notes.
In the very first chapter there is a derivation for the relaxation time, which is nothing but the average time between collisions, denoted by $\tau$. If $P(t)$ is the probability that a molecule does $\textit{not}$ undergo a collision in the time interval $[0,t]$, and $w$ is the collision rate (i.e. the probability of a collision in a small time $\delta t$ is $w\delta t$), then the derivation goes on to show that
\begin{align}
P(t)=w~\mathrm{e}^{-wt}
\end{align}
which is an exponential distribution. Then $\tau$ is found as
\begin{align}
\tau=\int_0^\infty dt ~t~P(t)=\frac{1}{w}
\end{align}
Now mathematically this is just another result, but physically it doesn't make sense. Here's my confusion: $P(t)$ is the probability that a collision does not occur within time $t$, so the integral in above expression gives the average time in which a collision does $\textit{not}$ occur. But $\tau$ by definition is the average time between collisions, and the two statements are not logically equivalent. What I mean to say is that, "a collision does not occur within time $t$" is not the same as saying "first collision occurs at time $t$ (or in some small neighborhood of $t$)", and the probability of the latter is what is required to calculate $\tau$.
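For what it's worth, a quick simulation (collision rate $w$ chosen arbitrarily) does show that the mean of times drawn from the density $w\,\mathrm{e}^{-wt}$ comes out as $1/w$, so my question is about the interpretation rather than the number:

```python
import random

w = 2.0                                   # collision rate, arbitrary choice
random.seed(0)                            # reproducible
n = 200_000
# first-collision times drawn from the exponential density w*exp(-w*t)
samples = [random.expovariate(w) for _ in range(n)]
tau = sum(samples) / n                    # sample mean; approaches 1/w
```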
Any thoughts?
Answer: I think it's a matter of choosing the normalization constant, in going from:
$$ \frac{d P}{d t}=-w P,$$
to
$$P = w e^{-wt},$$
because as you could see, with this choice $P(0)=w$, but we expect $P(0)$ to be $1$. So if you just set $P$ to be $e^{-wt}$, then the mean collision time can be obtained as:
$$ \tau = \int_{0}^{\infty}t\, P(t)\, wdt,$$
where the $w\,dt$ term coming after $P(t)$ is responsible for a collision between $t$ and $t+dt$. | {
"domain": "physics.stackexchange",
"id": 34189,
"tags": "classical-mechanics, kinetic-theory"
} |
Expansion for gravitational time dilation | Question: In the section on gravitational time dilation in Prof. David Tong's lecture notes on general relativity, he performs the following expansion:
$$t\sqrt{1-\frac{2GM}{r_{A}c^{2}}+\frac{2GM{\Delta}r}{r^{2}_{A}c^{2}}}\\{\approx}t\sqrt{1-\frac{2GM}{r_{A}c^{2}}}\left(1+\frac{GM{\Delta}r}{r^{2}_{A}c^2}\right)$$
I was hoping someone could fill in the steps between the lines here, as I'm having some trouble.
Answer: I define the function $f(x)$, writing $x$ instead of $\Delta r$, with everything divided by $t$.
$f(x) = \sqrt{1-\frac{2GM}{r_Ac^2}+\frac{2GM x}{r_A^2c^2}}$
$f'(x) = \frac{1}{2} \frac{1}{\sqrt{1-\frac{2GM}{r_Ac^2}+\frac{2GM x}{r_A^2c^2}}} \frac{2GM }{r_A^2c^2}$
With normal Taylor expansion around $x_0 = 0 $ to first order:
$f(x) \approx \sqrt{1-\frac{2GM}{r_Ac^2}} + \frac{1}{ \sqrt{1-\frac{2GM}{r_Ac^2}}}\frac{GM x}{r_A^2c^2}$
$ = \sqrt{1-\frac{2GM}{r_Ac^2}} (1+\frac{1}{1-\frac{2GM}{r_Ac^2}} \frac{GM x}{r_A^2c^2})$
If you say $\frac{1}{1-\frac{2GM}{r_Ac^2}} \approx 1$ you recover your result.
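For scale, here is a quick numerical comparison, with small illustrative numbers standing in for $\frac{2GM}{r_Ac^2}$ and $\frac{GM\Delta r}{r_A^2c^2}$ (values are mine, chosen to mimic a weak field):

```python
import math

A = 2e-6    # stands in for 2GM/(r_A c^2), small in the weak-field regime
Cx = 1e-9   # stands in for GM*Delta_r/(r_A^2 c^2)

exact  = math.sqrt(1 - A + 2 * Cx)
taylor = math.sqrt(1 - A) * (1 + Cx / (1 - A))  # full first-order expansion
target = math.sqrt(1 - A) * (1 + Cx)            # with 1/(1-A) approximated by 1

# taylor tracks exact to second order in Cx; target differs from taylor
# only by terms of order A*Cx, i.e. doubly small in the weak-field limit
```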
But as for now, doing so does not make sense to me. | {
"domain": "physics.stackexchange",
"id": 86706,
"tags": "general-relativity, gravity, time-dilation"
} |
Debian install woes | Question:
Issues start with the first points on the installation file (comments will be listed with the appropriate numbers)
1.1 Configure your Ubuntu repositories
Sorry - - I had used a hard-link to a 'Debian' instruction set, so I wouldn't be configuring my Ubuntu repositories but rather my Debian ones.
1.2 Setup your computer to accept software from packages.ros.org
ROS Kinetic ONLY supports Wily (Ubuntu 15.10), Xenial (Ubuntu 16.04) and Jessie (Debian 8) for debian packages.
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
The commands given (please note that Debian 8 (Jessie) is explicitly mentioned as being supported) don't give any kind of result like what might be wanted - - viz. there is no file in sources.list.d called ros-latest.list, and the command hangs there and doesn't 'do' anything. To a somewhat noob, the phrase '$(lsb_release -sc) main' doesn't mean much - - it took me a while to understand that I needed to have 'xenial' here (a space after .../ubuntu) followed by a space and then the word 'main'.
1.3 Set up your key
apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 0xB01FA116
gets
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.6yfkBfPICO --no-auto-check-trustdb --trust-model always --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-jessie-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-jessie-security-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-jessie-stable.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-squeeze-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-squeeze-stable.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-wheezy-automatic.gpg --keyring /etc/apt/trusted.gpg.d/debian-archive-wheezy-stable.gpg --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 0XB01FA116 gpg: "0XB01FA116" not a key ID: skipping
which somehow doesn't seem to be a useful result
did the apt-get update - - just in case something useful might be there and then
next the command aptitude install ros-kinetic-desktop-full was used with the result (aptitude deals with dependencies better therefore its use)
This code block was moved to the following github gist:
https://gist.github.com/answers-se-migration-openrobotics/6a526ee79ac53f0f0e404cbc3e33f13d
With my only real option being a non-install.
All this is being run on a vm set up as debian stable (jessie 8.5) trying to install kinetic. I hope that I have been able to provide enough information vis-à-vis the warnings and errors so that someone might be able to point me to the instruction set for Debian jessie, so that I might be able to install ros which I would love to examine in some detail.
TIA
Dee
Originally posted by dabeegmon on ROS Answers with karma: 1 on 2016-08-27
Post score: 0
Answer:
It looks like you've interpreted the instructions incorrectly in two places:
In 1.2, you set the release name to xenial, which is the name of the Ubuntu release. Ubuntu uses different package versions from the Debian releases, and this is likely what is causing your dependency challenges later. The command as listed should work, but if you've mis-typed it in some way it could be waiting for additional input. If lsb_release -sc isn't retrieving your OS name for some reason, use jessie, since that is the proper short name for your Debian release:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu jessie main" > /etc/apt/sources.list.d/ros-latest.list'
Note that the type and placement of quotes and other punctuation is VERY IMPORTANT. The command-line uses them to figure out which parts are commands, which parts are arguments, and how to interpret your command. If you miss a quote it may assume that you wanted to enter a newline as part of the command, and will happily accept more input until it gets the matching quote.
In 1.3, the error message indicates that you've mis-typed the key ID: it should be 0xB01FA116. Note the lowercase x. Again, it is important to enter the commands EXACTLY as per the instructions.
If you're having trouble, I recommend that you copy and paste the commands into your terminal, instead of trying to re-type them. There's much less opportunity for error that way.
Originally posted by ahendrix with karma: 47576 on 2016-08-28
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 25624,
"tags": "debian"
} |
How to show that f(x) is primitive recursive? | Question:
Let
$$f(x)=\begin{cases} x \quad \text{if Goldbach's conjecture is true
}\\ 0 \quad \text{otherwise}\end{cases}$$
Show that f(x) is primitive recursive.
I know a primitive recursive function is obtained by composition or recursion, but I don't know what should I do about this problem.
Answer: Goldbach's conjecture is either true or false. Do a case analysis on the two possibilities. In one case, $f(x)=x$, which is primitive recursive. In the other case, $f(x)=0$, which is also primitive recursive. Therefore $f$ is primitive recursive. | {
"domain": "cs.stackexchange",
"id": 561,
"tags": "computability, recursion"
} |
Pendulum attached to Oscillating Fulcrum | Question: In the specific scheme, I would expect that by pulling the spring to the right, the pendulum, due to inertia, would have to move into the left quadrant. In order to find the r vector:
$$r = R + l$$
and derive the equations of motion using Lagrangian, I would use:
$$l = (-l\sinθ)i + (-l\cosθ)j$$
However, all tutorials I saw have $l$ as
$$l = (l\sinθ)i + (l\cosθ)j,$$
which means that, following a displacement to the right, the pendulum also moves to the right.
Is the thought all wrong on this?
Answer: When writing out the Lagrangian, we are not exactly concerned about which direction the pendulum would move when you move the spring, we just care that it moves, which is why the tutorials show the $\hat{i}$ to be positive. The fact that it will conserve momentum and move the opposite direction from the spring will come out after you've solved for the equation of motion. | {
"domain": "physics.stackexchange",
"id": 59421,
"tags": "homework-and-exercises, newtonian-mechanics, classical-mechanics, lagrangian-formalism"
} |
Where can I find detailed description of different VARIANT-IDs in CMIP6 metadata? | Question: Where do I find the documentation of different VARIANT-IDs for the CMIP6 dataset? A VARIANT-ID is of the format r<k>i<l>p<m>f<n>, where the characters r, i, p and f denote:
r-realisation
i-initialisation method
p-physics
f-forcing
I checked the overview paper Eyring et al. (2016), but it doesn't mention anything about VARIANT-ID. I have also checked other documents provided at ESGF, but couldn't find any information.
Does anyone here know where I can find detailed information about the VARIANT-ID for CMIP6 datasets?
Answer: The variant_labels are defined in the CMIP6 Data Reference Syntax (DRS) document and are made up of the realization_index, initialization_index, physics_index and forcing_index.
A link to this DRS document can for example be found in the CMIP6 Participation Guide for Modelers.
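In practice the label can be unpacked programmatically; a sketch following the r<k>i<l>p<m>f<n> format stated in the question (helper name is mine):

```python
import re

VARIANT_RE = re.compile(r"^r(\d+)i(\d+)p(\d+)f(\d+)$")

def parse_variant_label(label):
    """Split a CMIP6 variant_label such as 'r2i1p3f233' into its indices."""
    match = VARIANT_RE.match(label)
    if match is None:
        raise ValueError("not a CMIP6 variant_label: %r" % label)
    r, i, p, f = (int(g) for g in match.groups())
    return {"realization_index": r, "initialization_index": i,
            "physics_index": p, "forcing_index": f}
```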
Edit: Following up Deditos' comment, I quote the respective section of the DRS document below:
For a given experiment, the realization_index, initialization_index,
physics_index, and forcing_index are used to uniquely identify each
simulation of an ensemble of runs contributed by a single model.
These indices are defined as follows:
realization_index = an integer (≥1) distinguishing among members of an ensemble of simulations that differ only in their initial conditions (e.g., initialized from different points in a control run). Note that if two different simulations were started from the same initial conditions, the same realization number should be used for both simulations. For example if a historical run with “natural forcing” only and another historical run that includes anthropogenic forcing were both spawned at the same point in a control run, both should be assigned the same realization. Also, each so-called RCP (future scenario) simulation should normally be assigned the same realization integer as the historical run from which it was initiated. This will allow users to easily splice together the appropriate historical and future runs.
initialization_index = an integer (≥1), which should be assigned a value of 1 except to distinguish simulations performed under the same conditions but with different initialization procedures. In CMIP6 this index should invariably be assigned the value “1” except for some hindcast and forecast experiments called for by the DCPP activity. The initialization_index can be used either to distinguish between different algorithms used to impose initial conditions on a forecast or to distinguish between different observational datasets used to initialize a forecast.
physics_index = an integer (≥1) identifying the physics version used by the model. In the usual case of a single physics version of a model, this argument should normally be assigned the value 1, but it is essential that a consistent assignment of physics_index be used across all simulations performed by a particular model. Use of “physics_index” is reserved for closely-related model versions (e.g., as in a “perturbed physics” ensemble) or for the same model run with slightly different parameterizations (e.g., of cloud physics). Model versions that are substantially different from one another should be given a different source_id” (rather than simply assigning a different value of the physics_index).
forcing_index = an integer (≥1) used to distinguish runs conforming to the protocol of a single CMIP6 experiment, but with different variants of forcing applied. One can, for example, distinguish between two historical simulations, one forced with the CMIP6-recommended forcing data sets and another forced by a different dataset, which might yield information about how forcing uncertainty affects the simulation.
Each data provider can assign whatever positive integers they like
for the realization_index, intialization_index, physics_index, and
forcing index. For each source/experiment pair, however, consistency
(in these indices) should be maintained across each parent/child pair
whenever sensible (so that, for example, both the ScenarioMIP child
and its “historical” parent simulation would be assigned the same set
of index values for realization, initialization, and physics); the
integer 1 should normally be chosen for each of these in the case of a
single variant or for the primary variant (if there is one). This is
only a suggestion, however; there should be no expectation on the part
of users that every model will have a value of 1 assigned to any of
the r, i, p, f indices, and even if a 1 is assigned it does not imply
that it is the primary variant. Note also that a child spawned by a
control run will not necessarily have the same “ripf” value as the
control, since, for example, multiple realizations of an experiment
will branch from the same control.
Note that none of the “ripf” indices can be omitted.
Example of a variant_label: if realization_index=2,
initialization_index=1, physics_index=3, and forcing_index=233, then
variant_label = “r2i1p3f233”. | {
"domain": "earthscience.stackexchange",
"id": 2501,
"tags": "climate-models, earth-system"
} |
Doping in semiconductors - location of new states | Question: A short question relative to doping semiconductors: What defines the location of new energy states?
For example, why does doping of n-type add energy states slightly below the conduction band (and not slightly above the valence band as it is in case of p-type?)?
edit: It's not quite directly connected but I found formulas to calculate the Fermi level in https://www.uwyo.edu/cpac/_files/docs/kasia_lectures/2-intrinsicanddopedsemiconductors.pdf (p. 10)
But applying an external field also shifts the Fermi level, right?
Answer: I apologize as my previous answer to this question was wrong. Just to elaborate on the correct answer by @my2cts here is how to calculate the energy level in case of a donor impurity
First of all for an electron in a semiconductor near the bandedge, the electron behaves like a free electron except with a different mass, i.e., the effective mass $m^*$. Technically this effective mass has different values in different directions corresponding to the crystal structure but for simplicity assume it to be a constant depending on the material.
Now your donor atom donates an electron to the conduction band of the semiconductor and in turn becomes a positively charged ion. Or, viewed another way, this donor atom behaves like a hydrogen atom with a potential $$ U(r)= \frac{-e^2}{4\pi \epsilon r}$$ where we have replaced $\epsilon_0$ with the dielectric constant $\epsilon$ of the material. The atom becomes ionised when the electron reaches the conduction band (where it is free to move around in the material), so instead of taking the ionisation to happen at $E=0$ as for the hydrogen atom, we now take it to happen at $E_c$ (the edge of the conduction band). Writing the Schrödinger equation analogously to the hydrogen atom with our modifications, we get
$$\left[ -\frac{\hbar^2}{2m^*}\nabla^2 - \frac{e^2}{4 \pi \epsilon r} \right] \psi(r) = (E_d-E_c) \psi(r)$$
Where $E_d -E_c$ is the impurity level of the donor with respect to the conduction band edge level $E_c$ (which I believe is what you want to know).
Now the solution can just be taken as for the hydrogen atom case with our modifications
$$ E_d-E_c = -\frac{e^4 m^*}{2(4 \pi \epsilon)^2 \hbar^2} \, \frac{1}{n^2}, \quad n=1,2,\dots$$
For the ground state energy level, we get the following
$$E_d=E_c- 13.6 \left( \frac{m^*}{m_e} \right) \left( \frac{\epsilon_0}{\epsilon} \right)^2 \, \text{eV}$$
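Plugging in illustrative numbers (assumed, silicon-like: $m^* \approx 0.26\,m_e$, $\epsilon \approx 11.7\,\epsilon_0$) shows why the donor state sits only slightly below the conduction band:

```python
m_ratio = 0.26        # m*/m_e, assumed silicon-like value
eps_r = 11.7          # epsilon/epsilon_0, assumed
E_binding = 13.6 * m_ratio / eps_r ** 2   # eV below E_c

# roughly 0.026 eV, i.e. tens of meV below E_c rather than ~1 eV deep -
# which is why donor levels appear just under the conduction band edge
```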
A similar calculation can be carried out for acceptor case by taking the effective mass to be of a hole.
P.S. I have taken this explanation from Section 2.8 of Semiconductor Device Physics and Design by Umesh K. Mishra and Jasprit Singh | {
"domain": "physics.stackexchange",
"id": 49723,
"tags": "semiconductor-physics"
} |
Determine the flow rate using the differential pressure | Question: Assuming an input air flow rate, the room volume and an underpressure (let's say 5%) is known, is it possible to determine the output air flow rate? What law can I use, or what additional info would I need?
Background:
I'm aiming to track the energy flows within a room as part of a research project. The input volume flow is coming from one source and can be determined by evaluating the signals of the respective volume flow controller. The outflow on the other hand is very difficult since 32 digestors are located in the room, each with its own funnel. I do have accurate temperature readings, so with the precise volume flows it should be easy to track energy flows.
Answer: If we're at steady state, the mass flow in must equal the mass flow out. You have only given the flow rate in; I'm going to assume it's at an ambient density of about $1.2\,\mathrm{kg/m}^3$, so the mass flow in is $1,200\,\mathrm{kg/h}$.
To calculate the density in the room we can use the ideal gas law, $p=\rho RT$. At a 5% underpressure relative to atmospheric, the pressure in the room is $0.95\times101,325=96260\,\mathrm{Pa}$. Assuming dry air with $R=287\,\mathrm{J/kg/K}$ and a temperature of $300\,\mathrm{K}$, the density in the room is $1.12\,\mathrm{kg/m}^3$.
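The arithmetic, collected in one place (a sketch using the values above):

```python
p = 0.95 * 101325.0      # Pa, 5% underpressure relative to atmospheric
R = 287.0                # J/(kg K), specific gas constant of dry air
T = 300.0                # K, assumed room temperature
m_dot = 1200.0           # kg/h, mass flow in = mass flow out at steady state

rho_room = p / (R * T)   # ideal gas law, ~1.12 kg/m^3
V_out = m_dot / rho_room # volume flow out, ~1070 m^3/h
```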
The volume flow out is therefore $1,200/1.12=1,070\,\mathrm{m}^3/\mathrm{h}$. | {
"domain": "physics.stackexchange",
"id": 78032,
"tags": "fluid-dynamics"
} |
Ketone reduction in presence of epoxy group | Question: In the following reaction:
According to me we can use HSCH2CH2SH, followed by Raney Ni
But the answer is that we cannot achieve this conversion with those reagents. Why can we not use them?
Answer: You could first react the epoxide with HCl to form the halohydrin (reversible), followed by Clemmensen reduction which selectively reduces ketones and aldehydes to alkanes (no effect on the halohydrin), before finally restoring the epoxide by adding base. | {
"domain": "chemistry.stackexchange",
"id": 6203,
"tags": "organic-chemistry, reaction-mechanism"
} |
Whats the maximum number of electrons that can fit into the outer shell of boron? | Question: What's the maximum number of electrons that can fit into the outer shell of boron? Silicon, for example, has 4 electrons in its outer shell but the maximum it can hold is 8 electrons - so what's the maximum for boron, which has 3 outer electrons? Can it hold more in the outer shell?
Answer: Boron can conceivably fit a maximum of 8 electrons in its outer shell. This could be achieved through boron covalently bonding with a non-metal (as boron is a metalloid).
It is in the 2nd period (row) of the periodic table, hence has 2 'shells', following the $2n^2$ pattern for the maximum number of outer-shell electrons (where $n$ is the number of 'shells' the element has). | {
"domain": "physics.stackexchange",
"id": 21138,
"tags": "quantum-mechanics, electrons, atoms, physical-chemistry"
} |
Add two 4-D arrays in Python | Question: I have two arrays, each with shape (1, 51, 150, 207)
I need to add them such that:
newarray[0][0][0][0] = array1[0][0][0][0] + array2[0][0][0][0]
newarray[0][0][0][1] = array1[0][0][0][1] + array2[0][0][0][1]
for every element in the array.
Currently I'm creating a new array using nested while loops but this approach is extremely time consuming. Can anyone offer a better solution?
H = 0
height = []
while H < 51:
    NS = 0
    northsouth = []
    while NS < 150:
        EW = 0
        eastwest = []
        while EW < 207:
            eastwest.append(PH[0][H][NS][EW] + PHB[0][H][NS][EW])
            EW += 1
        print eastwest
        northsouth.append(eastwest)
        NS += 1
    print northsouth
    H += 1
Answer: Firstly, code that looks like:
foo = 0
while foo < bar:
    ... # foo doesn't change here
    foo += 1
is a bad sign. In Python, this can immediately be simplified, using range, to:
for foo in range(bar):
    ...
Secondly, hard-coding the shape of your "arrays" everywhere (having the literals 51, 150 and 207 in the code itself), referred to as "magic numbers", is poor practice in every programming language. These are the lengths of each sub-list, so should be calculated as needed using len(...). This makes the code more flexible; it can now be easily applied to add arrays of other sizes. It's also slightly awkward to hard-code the depth of the addition (i.e. that it only handles four-dimensional arrays).
Thirdly, however, code like:
for index in range(len(sequence)):
    element = sequence[index]
    ...
is generally better replaced with:
for element in sequence:
    ...
where you need the index too, you can use enumerate, and where you need to iterate over multiple iterables you can use zip; Python has lots of handy functions for dealing with iterables (see also itertools).
Fourthly, when you have a loop appending to a list, it's usually more efficient to replace it with a "list comprehension". For example:
foo = []
for bar in baz:
    foo.append(bar)
can be replaced by:
foo = [bar for bar in baz]
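Putting the zip and comprehension points together, the whole four-dimensional addition can be written without any explicit indexing; a sketch assuming both inputs share the same nested shape (names are mine):

```python
def add4(array1, array2):
    # zip pairs up matching sub-lists at each depth; the innermost
    # comprehension finally adds the numbers themselves
    return [[[[a + b for a, b in zip(row1, row2)]
              for row1, row2 in zip(plane1, plane2)]
             for plane1, plane2 in zip(cube1, cube2)]
            for cube1, cube2 in zip(array1, array2)]
```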
Finally, however, this is totally trivial if you switch to numpy, which is extremely helpful for manipulating arrays:
>>> import numpy as np
>>> arr1 = np.random.randint(1, 10, (1, 51, 150, 207))
>>> arr2 = np.random.randint(1, 10, (1, 51, 150, 207))
>>> arr1 + arr2 # wasn't that easy!
array([[[[11, 13, 14, ..., 9, 14, 8],
[15, 4, 10, ..., 14, 10, 16],
[11, 11, 9, ..., 4, 9, 9],
...,
[17, 10, 14, ..., 10, 12, 12],
[12, 11, 14, ..., 7, 8, 8],
[ 9, 11, 6, ..., 3, 11, 8]],
[[10, 15, 16, ..., 11, 16, 10],
[17, 17, 8, ..., 8, 7, 8],
[11, 7, 2, ..., 16, 11, 5],
...,
[15, 10, 16, ..., 16, 13, 9],
[15, 8, 7, ..., 13, 5, 6],
[ 4, 5, 10, ..., 7, 10, 10]],
[[11, 10, 4, ..., 10, 7, 11],
[12, 3, 14, ..., 11, 12, 12],
[ 6, 11, 16, ..., 14, 9, 12],
...,
[18, 10, 11, ..., 13, 14, 9],
[11, 11, 10, ..., 9, 13, 12],
[10, 10, 10, ..., 12, 14, 8]],
...,
[[16, 11, 12, ..., 13, 13, 9],
[ 7, 15, 10, ..., 9, 11, 5],
[ 5, 11, 14, ..., 14, 4, 11],
...,
[15, 13, 6, ..., 17, 6, 10],
[ 9, 12, 7, ..., 7, 17, 11],
[12, 9, 10, ..., 4, 9, 5]],
[[15, 12, 10, ..., 5, 12, 15],
[17, 14, 15, ..., 13, 11, 8],
[13, 10, 10, ..., 8, 4, 8],
...,
[12, 9, 10, ..., 9, 7, 12],
[13, 12, 17, ..., 7, 13, 10],
[13, 6, 8, ..., 15, 13, 7]],
[[ 9, 8, 8, ..., 6, 10, 2],
[ 9, 15, 14, ..., 14, 4, 5],
[ 3, 12, 12, ..., 5, 10, 14],
...,
[ 6, 11, 15, ..., 11, 4, 15],
[ 4, 12, 14, ..., 12, 11, 9],
[10, 3, 5, ..., 5, 7, 10]]]]) | {
"domain": "codereview.stackexchange",
"id": 14331,
"tags": "python, beginner, performance, array"
} |
Mass has the same value in all inertial reference frames? | Question: Is mass the same in all inertial frames? If it is, why is that? If not, can you also explain?
Answer: Of course it is, that's why it's called "rest mass". Every body is at rest in its own inertial frame. In other inertial frames, relative to which the body is moving, it also has kinetic energy, and since energy and mass are equivalent there is also the so-called "relativistic mass"; but when you only say "mass" you usually mean "rest mass", and that is invariant by definition.
"domain": "physics.stackexchange",
"id": 29325,
"tags": "special-relativity, mass, inertial-frames"
} |
How to develop a spectrogram (2D array) from audio signal? | Question: I have developed a spectrogram in Python using Scipy.Signal.Spectrogram. But I need a complete understanding of data. Here I am not asking about plotting and color selection etc. I am more into data (numbers). I am attaching a picture, please have a look :
In image 1, you can see that I have an array of frequencies mapped to [0, Fs/2], i.e. [0, 24000] in my case.
In image 2, Time is mapped from [0,10sec] and the total array length is 2141.
In image 3, Spectrogram has been computed in a 2D array.
I want a clear understanding of how these arrays of frequencies and times have been developed. What is the data that the spectrogram holds in the 2D array? Is it the log magnitude of the frequency-domain components?
I need some clear steps, as I want the data in a 2D array as can be seen in the 3rd image. Here I am not into the plotting side but want a clear understanding of the data behind the spectrogram. I want to produce this same data in C++.
Answer: The array of times is obtained from the sampling frequency $f_s$ and the hop size $M$ (the number of samples the window advances each step, defined below), since the time duration between consecutive samples is $T_s = \frac{1}{f_s}$.
The array of frequencies depends on both $f_s$ and the DFT length $N$.
Explanation :
Suppose you have $L$ samples of data in total, sampled at $f_s$. A spectrogram is obtained by dividing the $L$-length data into overlapping $N$-length windows and then taking the $N$-DFT of each window. If the windows are overlapping, it means that, for every subsequent window, you move forward by $M$ samples in the time domain, where $M<N$.
The total number of $N$-length windows you will have is $\lfloor \frac{L}{N} \rfloor$, so the total number of $N$-DFTs to be taken is $\lfloor \frac{L}{N} \rfloor$. That means you will have $\lfloor \frac{L}{N} \rfloor$ sets of $N$-length DFT coefficients. You can arrange these $\lfloor \frac{L}{N} \rfloor$ sets of $N$-DFT coefficients in a matrix of dimension $$N \times \lfloor \frac{L}{N} \rfloor.$$ Each column of this matrix is one set of $N$-DFT coefficients.
Now, if we write the expression for spectrogram in a single equation, it will be:
$$S[k,m] = \sum^{N-1}_{n=0}x[mM + n]e^{-j\frac{2\pi nk}{N}}$$, where $m$ denotes the $m^{th}$ window and $k$ denotes the $k^{th}$ DFT Coefficient. You can see that as $m$ increases, the time-domain data being picked up for DFT moves forward by $M$ samples.
If you want to compute the Spectrogram as Matrix Multiplication, you will have the following:
$$S = W_N \begin{pmatrix} x[0] & x[M] & x[2M] & \cdots \\ x[1] & x[M+1] & x[2M+1] & \cdots \\ \vdots & \vdots & \vdots & \cdots \\ x[N-1] & x[M+N-1] & x[2M+N-1] & \cdots \end{pmatrix}_{N \times \lfloor \frac{L}{N} \rfloor}$$, where $W_N$ is the matrix of N-DFT basis vectors.
So, you will get a matrix in which the values along a column give the frequency components in the data and the values along a row give the variation along the time index $m$.
Now, since you are moving forward by $M$ samples, each column of the matrix above gives you the frequency-domain picture with time held constant: the $m^{th}$ column of the matrix $S$ gives the frequency-domain picture at $time = mM \cdot T_s$. So, when you look at a column, time remains constant at $mM \cdot T_s$, and digital frequency changes in steps of $\frac{2\pi}{N}$.
Similarly, the $k^{th}$ row of the matrix $S$ gives you the variation in intensity of the digital frequency $\omega = 2\pi \frac{k}{N}$ as you move forward in time. So, when you look at a row, frequency remains constant at $\omega = 2\pi \frac{k}{N}$ and time moves forward in steps of $M \cdot T_s$.
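To make those shapes concrete, here is a minimal numpy sketch of the matrix $S[k,m]$ described above: $N$-point DFTs of windows hopped by $M$ samples. The function and variable names are mine (this is not scipy's implementation), and it skips tapering windows and one-sided spectra for simplicity:

```python
import numpy as np

def manual_spectrogram(x, fs, N=256, M=128):
    """Return (freqs, times, S) where column m of S holds the N-DFT of
    the window starting at sample m*M, matching S[k, m] above."""
    n_windows = (len(x) - N) // M + 1            # full N-length windows that fit
    S = np.empty((N, n_windows), dtype=complex)
    for m in range(n_windows):
        S[:, m] = np.fft.fft(x[m * M : m * M + N])
    freqs = np.arange(N) * fs / N                # digital frequency k/N mapped to Hz
    times = np.arange(n_windows) * M / fs        # column m sits at t = m*M*Ts
    return freqs, times, S

fs = 48000
t = np.arange(fs) / fs                           # 1 second of data
x = np.sin(2 * np.pi * 1000 * t)                 # a 1 kHz tone
freqs, times, S = manual_spectrogram(x, fs)
# np.abs(S) (often log-scaled as 20*log10) is what a spectrogram plot colours.
print(S.shape)   # (256, 374): one row per DFT bin, one column per time window
```

Plotting libraries then colour `np.abs(S)` against the `times` and `freqs` axes; that 2D array of magnitudes (or its log) is the data you can equally well compute in C++.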
"domain": "dsp.stackexchange",
"id": 8645,
"tags": "fft, python, spectrogram, c++, fingerprint-recognition"
} |
What is the correct inverse function of the Matlab function fft? | Question: I used the Matlab function fft but I am not sure which function is the correct inverse to go back to the time domain.
Answer: Yes, it is ifft(). If the transform length is non-radix-2, the algorithm that gets implemented is a general DFT.
If you write the equation of the IFFT with the normalizing factor (1/sqrt(N)) instead of (1/N), then you have to multiply the time-domain counterpart by sqrt(N).
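A quick numpy illustration of both conventions (numpy's fft/ifft use the same default normalization as Matlab's: no factor on the forward transform, 1/N on the inverse):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
N = len(x)

# Default (Matlab-style) convention: ifft() is the exact inverse of fft().
X = np.fft.fft(x)
assert np.allclose(np.fft.ifft(X), x)

# Symmetric convention: put 1/sqrt(N) on both transforms instead of 1/N on one.
X_sym = X / np.sqrt(N)                    # forward transform scaled by 1/sqrt(N)
x_back = np.fft.ifft(X_sym) * np.sqrt(N)  # so the inverse needs an extra sqrt(N)
assert np.allclose(x_back, x)
print("round trip recovered the original signal in both conventions")
```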
"domain": "dsp.stackexchange",
"id": 2373,
"tags": "matlab, fft"
} |
Step size in numerical differentiation | Question: I get position information and a corresponding timestamp from a motion tracking system (for a rigid body) at 120 Hz. The position is in sub-millimeter precision, but I'm not too sure about the time stamp, I can get it as floating point number in seconds from the motion tracking software. To get the velocity, I use the difference between two samples divided by the $\Delta t$ of the two samples:
$\dot{\mathbf{x}} = \dfrac{\mathbf{x}[k] - \mathbf{x}[k-1]}{t[k]-t[k-1]}$.
The result looks fine, but a bit noisy at times. A realized that I get much smoother results when I choose the differentiation step $h$ larger, e.g. $h=10$:
$\dot{\mathbf{x}} = \dfrac{\mathbf{x}[k] - \mathbf{x}[k-h]}{t[k]-t[k-h]}$.
On the other hand, peaks in the velocity signal begin to fade if I choose $h$ too large. Unfortunately, I didn't figure out why I get a smoother signal with a bigger step $h$. Does someone have a hint? Is there a general rule which differentiation step size is optimal with respect to smoothness vs. "accuracy"?
This is a sample plot of one velocity component (blue: step size 1, red: step size 10):
Answer: This answer valid only if $\Delta{t} = \mathbf{t}[k] - \mathbf{t}[k-1]$ is a constant. Then you can rewrite your equation as:
$$\dot{\mathbf{x}} = \dfrac{\mathbf{x}[k] - \mathbf{x}[k-1]}{\Delta{t}}$$
Consider:
$$
\dot{\mathbf{x}}_l = \dfrac{1}{h}\sum_{i=1}^{h}\dot{\mathbf{x}_i} =
\dfrac{(\mathbf{x}[k] - \mathbf{x}[k-1])+(\mathbf{x}[k-1] - \mathbf{x}[k-2])+\dotsb+(\mathbf{x}[k-h+1] - \mathbf{x}[k-h])}{h\Delta{t}}
= \bigg(\dfrac{\mathbf{x}[k] - \mathbf{x}[k-h]}{h\Delta{t}}\bigg)
$$
$$
h\Delta{t} = \mathbf{t}[k] - \mathbf{t}[k-h]
$$
Here $\dot{\mathbf{x}}_i$ is the $i^{th}$ sample of the reading, and by passing it through a moving average filter (which is a low pass filter) you obtain $\dot{\mathbf{x}}_l$. So $\dot{\mathbf{x}}_l$ is smooth, as it is a low pass signal. When you increase the value of $h$ you reduce the bandwidth of $\dot{\mathbf{x}}_l$, so the result gets smoother and peaks begin to fade (peaks are made of high frequency components).
As far as I know there isn't a general way to determine $h$ that makes the result both smooth and accurate. You have to choose an appropriate $h$ by trial and error, or, if you know the transfer function of the sensor, you can use that to determine an appropriate value for $h$.
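The averaging identity above is easy to verify numerically. A small Python sketch (uniform 120 Hz sampling assumed; the signal and noise level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1 / 120.0                                   # 120 Hz sampling, as in the question
t = np.arange(200) * dt
x = np.sin(2 * np.pi * 1.5 * t) + 0.001 * rng.standard_normal(t.size)  # noisy position

h = 10
one_step = np.diff(x) / dt                       # step-1 finite differences
# Averaging h consecutive step-1 differences (a moving average filter) ...
avg_of_steps = np.convolve(one_step, np.ones(h) / h, mode="valid")
# ... is identical to the single step-h difference (x[k] - x[k-h]) / (h*dt):
h_step = (x[h:] - x[:-h]) / (h * dt)
assert np.allclose(avg_of_steps, h_step)
print("step-h difference == moving-averaged step-1 difference")
```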
"domain": "robotics.stackexchange",
"id": 1059,
"tags": "motion, pose"
} |
delete rows based on references to other tables in a mysql database? | Question: I have a database with the following hierarchy.
A dataset can have multiple scans (foreign key scan.id_dataset -> dataset.id)
A scan can have multiple labels (foreign key label.id_scan -> scan.id)
A label has a labeltype (foreign key label.id_labeltype-> labeltype.id)
The dataset, scan and labeltype tables all have a name column
I want to delete a label based on the names of the dataset, scan and labeltype. Here is my working code using the python mysql-connector:
def delete_by_labeltype(
    self, dataset_name: str, scan_name, labeltype_name: str
) -> int:
    sql = (
        "DELETE l.* FROM label l "
        "INNER JOIN labeltype lt "
        "ON l.id_labeltype = lt.id "
        "WHERE lt.name = %s AND l.id_scan = ("
        " SELECT scan.id FROM scan "
        " INNER JOIN dataset "
        " ON dataset.id = scan.id_dataset "
        " WHERE scan.name = %s AND dataset.name = %s"
        ");"
    )
    with self.connection.cursor() as cursor:
        cursor.execute(
            sql,
            (
                labeltype_name,
                scan_name,
                dataset_name,
            ),
        )
        self.connection.commit()
        return cursor.rowcount
The function is within a class handling access to the label table so connection is already established (and later closed correctly).
I am simply wondering if the query is the best way to do this, since I am not working with SQL too often.
Answer: nit: For consistency with scan.id I would
choose the column name scan_id
instead of id_scan, meh, whatever.
Kudos on correctly using bind variables,
keeping Little Bobby Tables out of your hair.
The %s, %s, %s in the query don't seem
especially convenient, for matching up
against {label, scan, ds} names in a tuple.
Prefer to use three distinct names,
and pass in a dict.
If you do that habitually,
then when a query grows to mention half a dozen
parameters, and some newly hired maintenance engineer
adds a seventh one, you won't be sorry.
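For illustration, the dict style with mysql-connector's named (pyformat) placeholders might look like the sketch below. The placeholder names and example values are invented, and the final rendering line only visualizes the substitution; parameter binding in real code stays with cursor.execute.

```python
# Named (pyformat) placeholders instead of positional %s, matched by key.
sql = (
    "DELETE l.* FROM label l "
    "INNER JOIN labeltype lt ON l.id_labeltype = lt.id "
    "WHERE lt.name = %(labeltype_name)s AND l.id_scan = ("
    " SELECT scan.id FROM scan"
    " INNER JOIN dataset ON dataset.id = scan.id_dataset"
    " WHERE scan.name = %(scan_name)s AND dataset.name = %(dataset_name)s"
    ");"
)

# With mysql-connector you would call: cursor.execute(sql, params)
# and the driver binds each %(key)s to params[key]; argument order no longer matters.
params = {
    "labeltype_name": "tumor",       # invented example values
    "scan_name": "scan_01",
    "dataset_name": "train",
}

# Illustration only: render the substitution to see what gets bound where.
# Never build real SQL by string formatting; the driver handles escaping.
rendered = sql % {k: repr(v) for k, v in params.items()}
print(rendered)
```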
And I like the explicit COMMIT.
best way to do this?
Looks good to me.
I didn't hear you complaining about "too slow!"
or reliability / flakiness.
You didn't say that EXPLAIN PLAN showed
some index was being ignored.
Including an
ERD
entity-relationship diagram would have been helpful,
or at least the CREATE TABLE DDL including FK definitions.
Let me try to organize your prose.
Here's what I heard.
label --> labeltype
label --> scan --> dataset
It's not obvious to me why scan and dataset
should be in a subquery rather than at same level -- it
just arbitrarily came out that way, no biggie.
The l.id_scan = ( fragment suggests to me
that there's enough UNIQUE indexes on {scan, ds} name
to ensure we get no more than one result row from the subquery,
else you would have mentioned l.id_scan IN ( ....
It's perfectly fine the way it is.
Maybe JOIN label against labeltype and also against scan?
(And then scan against ds.)
If I was chicken, I would do all the JOINing in SELECT statements,
obtain an id that I could do a sanity check on,
and then send a simple DELETE single row command.
But you're braver than me, and it works, so that's cool.
When reasoning about complex JOINs I often find it convenient
to bury some of the complexity behind a VIEW.
Minimally you would want to define a scan_dataset_v view
so you never have to type the name of the "dataset" table again.
CREATE VIEW scan_dataset_v AS
SELECT s.*, ds.*
FROM scan s
JOIN dataset ds ON ds.id = s.id_dataset;
Now label has just two relationships to worry about.
(And, the * stars probably have at least
one conflicting column name -- just write them all out.
Absent the DDL, I don't know what all the columns are.)
You might possibly find it convenient to produce
a view that JOINs them both to label,
and another that JOINs labeltype to label.
It just saves some typing during interactive queries.
There's no efficiency hit -- think of it as the
backend doing a macro expansion before issuing the query.
Summary: Any improvements? Nope, not really, LGTM. | {
"domain": "codereview.stackexchange",
"id": 45229,
"tags": "python, python-3.x, mysql"
} |
Is Silica Gel fit for Ultra-High Dessication? | Question: I had the (perhaps wrong) notion that silica gel wasn't that strong of a desiccant, especially compared to CaO, MgO, CaSO4, H2SO4, KOH, Mg(ClO4)2, but I've been exposed to information* that might prove me wrong.
Can silica gel beat most of those desiccants in terms of residual water in a dried solid? Can silica gel really be used to dry salts down to only twice the residual water level that can be achieved with Mg(ClO4)2, and half the residual water level that can be achieved with KOH as desiccant? Or does that only apply to air/gases?
Will anhydrous silica gel allow desiccating compounds that have a high affinity to water down to very low residual water levels? (when used at, say, 5-20°C at atmospheric pressure on salts pre-dried with MgSO4)
What are the conditions that give silica gel its greatest capacity to desiccate salts to ultra-low residual water levels?
*Merck Memento, p14, says:
Residual humidity in mg H2O /L of air after dehydration at 25°C:
White CuSO4 1.4
Molten ZnCl2 0.8
CaCl2 0.14-0.25
CaO 0.2
Molten NaOH 0.16
MgO 0.008
Anhydrous CaSO4 (plaster of Paris) 0.005
Concentrated H2SO4 (95-100%) 0.003-0.3
Dry Al2O3 0.003
Molten KOH 0.002
Silica gel (SiO2) 0.001
Anhydrous Mg(ClO4)2 0.0005
P2O5 below 0.000025
Answer: Silica gel is really one of the best desiccants known. But it has to be thoroughly dehydrated at high temperature. Usually silica gel contains a small amount of cobalt chloride $\ce{CoCl2}$ : blue when anhydrous, and red when hydrated. Silica gel should not be used if reddish. It has to be blue to act as desiccant. | {
"domain": "chemistry.stackexchange",
"id": 16868,
"tags": "equipment, separation-techniques, process-chemistry, drying"
} |
Solution to Codejam 2019's Pylons | Question: I'm trying to solve Pylons from the 2019's round 1A.
Problem
Our Battlestarcraft Algorithmica ship is being chased through space by
persistent robots called Pylons! We have just teleported to a new
galaxy to try to shake them off of our tail, and we want to stay here
for as long as possible so we can buy time to plan our next move...
but we do not want to get caught!
This galaxy is a flat grid of R rows and C columns; the rows are
numbered from 1 to R from top to bottom, and the columns are numbered
from 1 to C from left to right. We can choose which cell to start in,
and we must continue to jump between cells until we have visited each
cell in the galaxy exactly once. That is, we can never revisit a cell,
including our starting cell.
We do not want to make it too easy for the Pylons to guess where we
will go next. Each time we jump from our current cell, we must choose
a destination cell that does not share a row, column, or diagonal with
that current cell. Let (i, j) denote the cell in the i-th row and j-th
column; then a jump from a current cell (r, c) to a destination cell
(r', c') is invalid if and only if any of these is true:
r = r'
c = c'
r - c = r' - c'
r + c = r' + c'
Can you help us find an order in which to visit each of the R × C
cells, such that the move between any pair of consecutive cells in the
sequence is valid? Or is it impossible for us to escape from the
Pylons?

Input

The first line of the input gives the number of test cases, T. T test
cases follow. Each consists of one line containing two integers R and
C: the numbers of rows and columns in this galaxy.

Output

For each test case, output one line containing Case #x: y, where y is
a string of uppercase letters: either POSSIBLE or IMPOSSIBLE,
according to whether it is possible to fulfill the conditions in the
problem statement. Then, if it is possible, output R × C more lines.
The i-th of these lines represents the i-th cell you will visit
(counting starting from 1), and should contain two integers ri and ci:
the row and column of that cell. Note that the first of these lines
represents your chosen starting cell.

Limits

Time limit: 20 seconds per test set. Memory limit: 1GB.

Test set 1 (Visible)

T = 16. 2 ≤ R ≤ 5. 2 ≤ C ≤ 5.

Test set 2 (Hidden)

1 ≤ T ≤ 100. 2 ≤ R ≤ 20. 2 ≤ C ≤ 20.
The analysis suggests that a brute force approach should work. However, my Python 3 solution doesn't pass the 2nd test set. Can I make it faster without complicating the algorithm?
from itertools import product, repeat

def main():
    T = int(input())  # the number of test cases
    for case in range(1, T+1):
        R, C = map(int, input().split())  # the numbers of rows and columns
        stack = []
        for r, c in product(range(R), range(C)):
            grid = [[False]*C for _ in repeat(None, R)]
            grid[r][c] = True
            stack.append((((r, c),), grid))
        while stack:
            moves, grid = stack.pop()
            if len(moves) == R*C:
                print('Case #{}: POSSIBLE'.format(case))
                for r, c in moves:
                    print(r+1, c+1)
                break
            for r, c in product(range(R), range(C)):
                if (not grid[r][c] and r != moves[-1][0] and c != moves[-1][1]
                        and moves[-1][0] - moves[-1][1] != r - c
                        and moves[-1][0] + moves[-1][1] != r + c):
                    g = [r.copy() for r in grid]
                    g[r][c] = True
                    stack.append((moves+((r, c),), g))
        else:
            print('Case #{}: IMPOSSIBLE'.format(case))

main()
Answer: A simple randomized brute-force works for this problem. From the analysis:
We can try these solutions anyway, or we can rely on our occasional
friend, randomness! We can pick a random starting cell, repeatedly
choose valid moves uniformly at random from the space of all allowed
moves from our current cell, and, if we run out of available moves,
give up and start over. For any case except for the impossible ones
mentioned above, this approach finds a solution very quickly.
Python code:
from itertools import product, repeat
from random import choice

def main():
    T = int(input())  # the number of test cases
    for case in range(1, T+1):
        R, C = map(int, input().split())  # the numbers of rows and columns
        if R < 2 or C < 2 or R + C < 7:
            print('Case #{}: IMPOSSIBLE'.format(case))
        else:
            print('Case #{}: POSSIBLE'.format(case))
            while True:
                grid = [[False]*C for _ in repeat(None, R)]
                moves = []
                last = None
                for _ in repeat(None, R*C):
                    candidates = ([(r, c) for r, c in product(range(R), range(C)) if not grid[r][c]
                                   and r != last[0] and c != last[1] and last[0] - last[1] != r - c
                                   and last[0] + last[1] != r + c]
                                  if last is not None else list(product(range(R), range(C))))
                    if not candidates:
                        break
                    cell = choice(candidates)
                    moves.append(cell)
                    grid[cell[0]][cell[1]] = True
                    last = cell
                else:
                    for r, c in moves:
                        print(r+1, c+1)
                    break

main()
"domain": "codereview.stackexchange",
"id": 34702,
"tags": "python, performance, time-limit-exceeded"
} |
Semantic Distance Measure Between Images | Question: I would like to compare 2 generic images, with same sizes and normalized values. Which metric would be better than the baseline Eucledian distance?
Answer: One of the most known image similarity metric is the SSIM. You can take a look to the next links:
https://en.wikipedia.org/wiki/Structural_similarity
https://www.imatest.com/docs/ssim/
https://www.cns.nyu.edu/~lcv/ssim/
EDIT 1:
Okay, you refer to semantic similarity. I have not worked on that issue really, and probably there are new DNN solutions to that purpose. I have not checked the state-of-the-art related to this topic.
Even so, I have made a quick test using the skimage SSIM and the difference between images to give you some idea about what happens with these metrics on several images. Find in the next images some hints about their performance for your target application:
Please notice the values of the SSIM, MSE and STD DIFF on the images. SSIM and MSE work very well for comparing the quality of versions of the same image, but not for semantic comparison.
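To make that concrete, here is a rough numpy sketch of the global (single-window) SSIM formula with the standard constants from Wang et al.; real implementations such as skimage compute this over sliding windows, so treat it purely as an illustration:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM with the standard constants C1, C2."""
    C1 = (0.01 * data_range) ** 2
    C2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()       # covariance of the two images
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / (
        (mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(1)
img = rng.random((32, 32))
noisy = np.clip(img + 0.1 * rng.standard_normal(img.shape), 0.0, 1.0)

print(global_ssim(img, img))    # identical images give exactly 1.0
print(global_ssim(img, noisy))  # a degraded copy scores strictly below 1.0
```

Both factors are bounded by 1 (AM-GM for the mean term, and $v_x + v_y - 2c_{xy} = \operatorname{var}(x-y) \ge 0$ for the structure term), so the score only reaches 1 for identical images; that is why it measures fidelity of the same scene rather than semantic closeness of different scenes.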
Cheers. | {
"domain": "dsp.stackexchange",
"id": 9589,
"tags": "image-processing, distance-metrics"
} |
Which class is best to call .getResource on? | Question: I'm creating a Java Helper Library with helpful static methods. I find myself needing to call ...getResource(...) or ...getResourceAsStream(...) in several instances and I'm wondering what's the best practice for this. It seems to be working fine when I say: Helper.class.getResource(...), but I'm wondering whether that will bite me in the future.
I have a few alternatives I'm thinking of. Below is an example of when I need to use this and the alternatives I have thought of:
Alternative 1 (what I would prefer for the sake of users):
/**
* Takes the given resource and returns that as a string.
*
* @param location of the file
* @return the file as a string
* @throws IOException
*/
public static String resourceToString(String location) throws IOException {
InputStream is = IOHelper.class.getResourceAsStream(location);
InputStreamReader r = new InputStreamReader(is);
return readerToString(r);
}
Alternative 2:
/**
* Takes the given resource (based on the given class) and returns that as a string.
*
* @param location of the file
* @param c class to get the resource from
* @return the file as a string
* @throws IOException
*/
public static String resourceToString(String location, Class c) throws IOException {
InputStream is = c.getResourceAsStream(location);
InputStreamReader r = new InputStreamReader(is);
return readerToString(r);
}
Alternative 3 (should I just have the user give me the resource?):
/**
* Takes the given InputStream and returns that as a string.
*
* @param is the InputStream of the File to read
* @return the file as a string
* @throws IOException
*/
public static String resourceToString(InputStream is) throws IOException {
InputStreamReader r = new InputStreamReader(is);
return readerToString(r);
}
Answer: I would prefer the second one. The others will bite if the library runs inside an OSGi container.
Furthermore, I would
pass the UTF-8 character encoding to the constructor of the InputStreamReader. Otherwise it will use the system encoding, which varies from system to system. (Or let the user set it with a third method parameter.)
use longer variable names. is, r, c are not too easy to read. I would call them resourceStream, clazz and resourceReader. It does not force readers to decode them.
reverse the order of String location, Class c parameters. It would be in the same order as the first line use them (c.getResourceAsStream(location)). I don't know if there is a rule of thumb for this, but it looks more natural to me. (Any pointers are welcome.)
remove the * @throws IOException javadoc line. It is meaningless in this form. | {
"domain": "codereview.stackexchange",
"id": 1689,
"tags": "java, comparative-review"
} |
How many seconds is a temporal meter? | Question: Is there a proof that time is a 4th dimension?
If it is, then why not measure it in units of the previous three?
Logical right?
How many seconds is a temporal meter?
Answer: Sure, it'd be $1/c$ seconds, which is exactly $1/299792458\text{ s}$. Although, really, there's no point, because the metre is already defined as the distance light travels in that much time anyways.
As for proof that time is the fourth dimension: there's no 'proof'; as with any scientific theory (and unlike mathematics), there's just evidence. One fairly large and fairly accessible piece of evidence is in your GPS. Look up "General Relativity and GPS."
It's not hard to imagine people's reactions if their GPS told them they were in the middle of Antarctica flying 500 metres in the air. | {
"domain": "physics.stackexchange",
"id": 8855,
"tags": "spacetime, time"
} |
Roslaunch files from binaries and from source | Question:
I have a launch file that is written in my catkin_ws that includes other launch files from binaries (/opt/ros/indigo/ etc.) and from source (/catkin_ws/src/ etc.). In order to run the launch file, I have to source the catkin_ws but if I do this, the nodes from the binary launch file cannot be found. If I source the /opt/ros/indigo/setup.bash, then the launch file can't be found because it is located in the catkin_ws. Do I have to install all packages from source in order to accomplish this?
EDIT:
Here is my launch file:
<launch>
<include file="$(find turtlebot_bringup)/launch/minimal.launch" />
<include file="$(find turtlebot_navigation)/launch/gmapping_demo.launch" />
<include file="$(find frontier_exploration)/launch/global_map.launch" />
<node name="send_exploration_goal" pkg="turtlebot_demos" type="send_exploration_goal">
<param name="set_linear_vel" value="0.1" />
<param name="set_angular_vel" value="0.1" />
</node>
</launch>
The "send_exploration_goal" is a node I created and is located in my catkin_ws. Frontier_exploration is a package I installed using apt-get and is the launch file that fails to launch. The error message states that the two nodes that the launch file is supposed to run cannot be found in the package. The documentation can be found here.
minimal.launch and gmapping_demo.launch work without any errors.
Originally posted by pgigioli on ROS Answers with karma: 354 on 2016-03-28
Post score: 0
Answer:
You need to source both. If something isn't defined in the workspace then it will be found in the binary install.
Originally posted by nickw with karma: 1504 on 2016-03-28
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by pgigioli on 2016-03-28:
How do I source both? If I source the workspace, then the binary doesnt work but if I source the binary, the workspace doesnt work.
Comment by nickw on 2016-03-28:
In the standard ubuntu install for ROS it suggests you add the sourcing of the binary install to your .bashrc file, echo "source /opt/ros/indigo/setup.bash" >> ~/.bashrc
If you do then the binary install is sourced in each shell you open.
Comment by nickw on 2016-03-28:
Then when you source the workspace it means it will be searched for files before searching the binary install location.
Comment by pgigioli on 2016-03-28:
I already have that line in my ~/.bashrc. What's weird is that other binary launch files work such as turtlebot_bringup minimal.launch and turtlebot_navigation gmapping_demo. I think something is wrong with the binaries so I'm just going to reinstall from source into my catkin_ws
Comment by nickw on 2016-03-28:
would need more specific details about package names, launch files and their contents - are all the things you are trying to get to work in the same name package, but some in your workspace, and some in /opt/ros ?
Comment by pgigioli on 2016-03-28:
I updated the original question with specific details on the launch file I'm running. I have different packages, some from workspace and some from /opt/ros/, but one package cannot be run.
Comment by nickw on 2016-03-28:
can you add the actual error message as I am still unclear which nodes cannot be located
Comment by pgigioli on 2016-03-28:
I don't have access to my robot right now but the error message is something like "can't locate node [explore_client] in package [frontier_exploration]" and "can't locate node [explore_server] in package [frontier_exploration]."
Comment by pgigioli on 2016-03-28:
Here's the github page for the node scripts.
Comment by nickw on 2016-03-29:
It looks like someone else has had this issue as well https://github.com/paulbovbel/frontier_exploration/issues/17
Comment by pgigioli on 2016-03-29:
Yeah, I ended up reinstalling from source. For anyone reading this who had the same problem with this package, if you install from source, you will not get all of the required header files for some reason. My solution was to copy the header files from the binaries and add them to the source | {
"domain": "robotics.stackexchange",
"id": 24261,
"tags": "roslaunch"
} |
How does a heated object lose thermal energy in a perfect vacuum? | Question: Suppose I heat up an arbitrary object to an arbitrary temp and then place it in a near perfect vacuum (let’s assume inter-galactic space). If there is essentially nothing for the object to transfer its thermal energy to, does it lose it, over time, to EM emissions?
Answer: Yes - in a vacuum, the heat loss is via radiation, typically visible light and/or infrared radiation. | {
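To put a number on it, the radiated power follows the Stefan-Boltzmann law $P = \varepsilon \sigma A T^4$. A quick Python sketch (the area, emissivity, and temperature are made-up example values):

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power(T, area=1.0, emissivity=1.0):
    """Power an object at absolute temperature T (kelvin) radiates away."""
    return emissivity * SIGMA * area * T ** 4

# A 1 m^2 blackbody at 373 K (boiling-water temperature) radiates about a kilowatt:
print(radiated_power(373.0))   # ~1.1e3 W
```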
"domain": "physics.stackexchange",
"id": 59975,
"tags": "electromagnetic-radiation, photons, electrons, momentum"
} |
What effects do inductors have on circuits? | Question:
The above diagram is a simple circuit containing an inductor.
According to the right hand grip rule, it can be deduced that the magnetic field is to the right. But the magnetic field produced by the current flowing through the inductor will induce an opposing current thanks to Lenz's law.
Does it mean that the current will be reduced by the inductor? What effects do inductors have on circuits?
Answer: Inductors only play a part in a circuit if the current through them is changing, and then they act by producing an emf so as to try and oppose the change.
If the current through an inductor is increasing/decreasing then the inductor will try and reduce the rate of increase/decrease of the current.
"domain": "physics.stackexchange",
"id": 90929,
"tags": "electric-circuits, electrical-resistance, electromagnetic-induction, batteries, inductance"
} |
How to solve T(n) = T(n-1) + n^2? | Question: See title. I'm trying to apply the method from this question. What I have so far is this, but I don't know how to proceed from here on:
T(n) = T(n-1) + n2
T(n-1) = T(n-2) + (n-1)2 = T(n-2) + n2 - 2n + 1
T(n-2) = T(n-3) + (n-2)2 = T(n-3) + n2 - 4n + 4
T(n-3) = T(n-4) + (n-3)2 = T(n-4) + n2 - 6n + 9
Substituting the values of T(n-1), T(n-2) and T(n-3) into T(n) gives:
T(n) = T(n-2) + 2n2 - 2n + 1
T(n) = T(n-3) + 3n2 - 6n + 5
T(n) = T(n-4) + 4n2 - 12n + 14
Now I have to find a pattern but I don't really know how to do that. What I got is:
T(n) = T(n-k) + kn2 - ...???
Answer: Don't expand the squared terms; it'll just add confusion. Think of the recurrence as
$$
T(\fbox{foo}) = T(\fbox{foo}-1)+\fbox{foo}\;^2
$$
where you can replace foo with anything you like. Then from
$$
T(n)=T(n-1)+n^2
$$
you can replace $T(n-1)$ by $T(n-2)+(n-1)^2$ by putting $n-1$ in the boxes above, yielding
$$
T(n) = [T(n-2) + (n-1)^2]+n^2 = T(n-2)+(n-1)^2+n^2
$$
and similarly
$$\begin{align}
T(n) &= T(n-2)+(n-1)^2+n^2\\
&= T(n-3)+(n-2)^2+(n-1)^2+n^2\\
&= T(n-4)+(n-3)^2+(n-2)^2+(n-1)^2+n^2
\end{align}$$
and in general you'll have
$$
T(n) = T(n-k)+(n-k+1)^2+(n-k+2)^2+\dotsm+(n-1)^2+n^2
$$
Now if we let $k=n$ we'll have
$$
T(n) = T(0)+1^2+2^2+3^2+\dotsm+n^2
$$
Now if you just need an upper bound for $T(n)$ observe that
$$
1^2+2^2+3^2+\dotsm+n^2\le \underbrace{n^2+n^2+n^2+\dotsm+n^2}_n=n^3
$$
so we conclude that $T(n)=O(n^3)$, in asymptotic notation.
For a more exact estimate, you can look up the equation for the sum of squares:
$$
1^2+2^2+\dotsm+n^2=\frac{n(n+1)(2n+1)}{6}
$$
so
$$
T(n)=T(0)+\frac{n(n+1)(2n+1)}{6}
$$ | {
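If you want to sanity-check the closed form numerically, here is a short Python sketch (the helper names are mine):

```python
def T_rec(n, T0=0):
    """Evaluate T(n) = T(n-1) + n^2 directly from the recurrence."""
    total = T0
    for i in range(1, n + 1):
        total += i * i
    return total

def T_closed(n, T0=0):
    """Closed form derived above: T(0) + n(n+1)(2n+1)/6 (always an integer)."""
    return T0 + n * (n + 1) * (2 * n + 1) // 6

for n in range(50):
    assert T_rec(n, T0=7) == T_closed(n, T0=7)
    assert T_closed(n) <= n ** 3        # the O(n^3) bound, with T(0) = 0
print("closed form and cubic bound check out")
```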
"domain": "cs.stackexchange",
"id": 6210,
"tags": "recurrence-relation"
} |
Strongly connected orientations of undirected graphs | Question: I'm trying to prove the following.
There exists a strongly connected orientation of a connected, undirected graph $G$ if, and only if, $G$ has no bridge.
(An orientation of an undirected graph is a directed graph made by replacing each edge $uv$ with exactly one of the directed edges $(u,v)$ and $(v,u)$; a directed graph is strongly connected if it contains a directed path from $u$ to $v$ for every pair of distinct vertices $u$ and $v$; a bridge is an edge whose deletion disconnects the graph.)
I can prove just one side.
Assume that there is no bridge in the graph. So, for adjacent vertices $u$ and $v$, we can remove the edge between them and there is still at least one path in the graph by which we can reach $v$ from $u$ (so the graph is still connected). Because there is no bridge in the graph, and every such pair of vertices is reachable from each other without the edge between them, we can convert the undirected graph to a directed graph.
how about the other side? how can I prove that?
Can I say that, because the graph is undirected and it can be converted to a digraph, for each arbitrary pair of adjacent vertices $u$ and $v$ there is a path (or maybe several paths) by which we can reach $v$ from $u$ and vice versa, besides the directed edge between them? So we can remove that directed edge, and therefore there is no bridge in the graph.
Answer: If $uv$ is a bridge then any orientation of $G$ either includes the edge $u\to v$ and has no path from $v$ to $u$, or vice-versa.
If $G$ has no bridge, then it is $2$-edge-connected. A graph is $2$-edge-connected if, and only if, it can be decomposed as $C\cup P_1 \cup\dots \cup P_k$ where $C$ is a cycle and each $P_i$ is either a path (possibly just a single edge) whose two endpoints lie in $C\cup P_1\cup\dots\cup P_{i-1}$, or a cycle with exactly one vertex there, and which intersects $C\cup P_1\cup\dots\cup P_{i-1}$ nowhere else. Now construct your strongly connected orientation from this decomposition.
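As a sanity check of the "no bridge implies strongly orientable" direction, here is a Python sketch of the classic DFS construction (a well-known alternative to the ear-decomposition argument): orient tree edges away from the root and every remaining edge toward its earlier-discovered endpoint. The helper names are mine, and a simple, connected, bridgeless input graph is assumed.

```python
def orient(n, edges):
    """DFS orientation of a connected, bridgeless, simple graph:
    tree edges point away from the root, every other edge points
    back toward the earlier-discovered endpoint (an ancestor)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    order = [-1] * n                 # DFS discovery times
    directed, counter = [], [0]

    def dfs(u, parent):
        order[u] = counter[0]
        counter[0] += 1
        for v in adj[u]:
            if order[v] == -1:
                directed.append((u, v))      # tree edge: away from root
                dfs(v, u)
            elif v != parent and order[v] < order[u]:
                directed.append((u, v))      # back edge: toward ancestor

    dfs(0, -1)
    return directed

def reachable(n, arcs, start, reverse=False):
    adj = [[] for _ in range(n)]
    for u, v in arcs:
        adj[v if reverse else u].append(u if reverse else v)
    seen, stack = {start}, [start]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]    # a 5-cycle plus a chord
arcs = orient(5, edges)
assert reachable(5, arcs, 0) == set(range(5))               # 0 reaches everyone
assert reachable(5, arcs, 0, reverse=True) == set(range(5)) # everyone reaches 0
```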
"domain": "cs.stackexchange",
"id": 12753,
"tags": "graphs"
} |
What are the best detection media for cholera? | Question: I heard that you can use some [hypertriade] for Vibrio cholerae diagnosis, which has the components
sucrose (yellow)
mannose (yellow)
arabinose (do not ferment; stay dark pink)
I did not find this hypertriade agar on Google.
It reminds me of Egg Yolk Agar, but I think cannot be it because of the arabinose.
Otherwise, it seems to be similar.
What are the best detection media for cholera?
Answer: You should use Thiosulfate-citrate-bile salts-sucrose (TCBS) agar for the detection of all vibrios, including cholera. When cholera is present, you should expect round, largish (not huge), yellow colonies. This is the CDC-recommended diagnostic method.
Again, this question might be better phrased "what are the best detection media for cholera," or "what plate screens should be conducted when vibrios are suspected?" | {
"domain": "biology.stackexchange",
"id": 1847,
"tags": "homework, microbiology"
} |
Remove low quality reads | Question: I want to remove reads from FASTQ file that contain homopolymers > 10bp and remove reads with <35 average quality score across the entire read.
To remove homopolymers > 10bp, I tried this on a Linux machine, but it only removes the sequence line:
zcat file.fastq.gz |
awk '!/A{10,}/&&!/C{10,}/&&!/G{10,}/&&!/T{10,}/ {print}' > cleaned_file.fastq
Answer: Use an off the shelf tool for read preprocessing.
Here is one:
./fastq_qual_trimmer -i test.fq -m 35 -H 10
That one does exactly what you want but seems a little old/unmaintained, so here is another that does more or less the same thing:
fastq-mcf --qual-mean 35 --homopolymer-pct {X} adapters.fa reads.fq
where {X} is 10 / read length, adapters.fa is an adapter file (which I believe can be empty or filled with dummy sequences).
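If you do end up scripting it yourself, a plain-Python sketch of my own (no external libraries; assumes standard 4-line FASTQ records with Phred+33 quality encoding) could look like:

```python
import re

# runs of more than 10 identical bases (i.e. 11 or longer)
HOMOPOLYMER = re.compile(r"A{11,}|C{11,}|G{11,}|T{11,}")

def keep(seq, qual, min_mean_q=35, offset=33):
    """True if the read has no homopolymer > 10 bp and mean quality >= 35."""
    if HOMOPOLYMER.search(seq):
        return False
    mean_q = sum(ord(c) - offset for c in qual) / len(qual)
    return mean_q >= min_mean_q

def filter_fastq(lines):
    """Yield the 4-line records of a FASTQ (iterable of lines) that pass."""
    it = iter(lines)
    for header in it:
        seq, plus, qual = next(it), next(it), next(it)
        if keep(seq.strip(), qual.strip()):
            yield from (header, seq, plus, qual)
```

Note this loads nothing into memory beyond one record at a time, so it streams fine from `zcat` on stdin.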
You could also use a library like biopython or dnaio to write a quick script to do this, but it hardly seems worth it. | {
"domain": "bioinformatics.stackexchange",
"id": 2391,
"tags": "fastq, awk"
} |
Fundamental scales in QCD | Question: How is the pion decay constant linked to $\Lambda_{QCD}$? Are they the same thing up to multiplicative constants?
Answer: They are not the same thing, although they are of the same order of magnitude, as is the chiral condensate. The pion decay constant quantifies the strength with which pseudoscalars couple to the chiral vacuum of QCD, in their role as Goldstone bosons: their amplitude to be pulled out of it by the SSBroken chiral current. $\Lambda_{QCD}$ is the characteristic energy scale of QCD, whose Compton wavelength is a typical hadronic radius, near the shell characteristic of chiral condensation.
The complete theoretical connection resides in Dashen's formula for the masses of pseudogoldstone bosons, and is neatly summarized in section 5.5 of T. P. Cheng's & L. F. Li's tasteful book. If you were a glutton for detail, you might opt for S. Weinberg's (1996) The Quantum Theory of Fields (v2. Cambridge University Press. ISBN 978-0-521-55002-4. pp. 225–231). It is also often referred to as Gell-Mann-Oakes-Renner (1968) doi:10.1103/PhysRev.175.2195 in the sloppy shorthand of chiral perturbation theory.
It is a blending of a current algebra Ward identity with PCAC ($m_\pi^2 f_\pi^2=-\langle 0|[Q_5,[Q_5,H]]|0\rangle$), so that the square of the mass of the pseudogoldstone boson is proportional to the explicit-breaking part of the effective Lagrangian, here linear in the quark masses.
For example, naively, the pion mass, which should have been zero for massless quarks as a full Goldstone boson of a perfect SSBroken chiral symmetry, now picks up a small value $m_\pi^2 \sim m_q \Lambda^3/f_\pi^2$, where $m_q$ is the relevant light quark mass in the real world QCD Lagrangian, which explicitly breaks chiral symmetry; $f_\pi$ is the spontaneously broken chiral symmetry constant, about 90MeV; and Λ the fermion condensate value ~ 250MeV. The latter is taken routinely to be identical to $\Lambda_{QCD}$.
Crudely, for $f_\pi \sim 90$MeV and $m_\pi \sim 140$MeV,
$$
(90\cdot 140)^2 \sim 7\cdot (250)^3
$$
is not that bad... | {
"domain": "physics.stackexchange",
"id": 83866,
"tags": "quantum-field-theory, quantum-chromodynamics"
} |
Why do sodium halides react so differently with sulfuric acid? | Question: Why do sodium halides react so differently with sulfuric acid?
\begin{align}
\ce{NaF + H2SO4 &-> NaHSO4 + HF}
\tag{1a}\label{NaF}\\
\ce{NaCl + H2SO4 &-> NaHSO4 + HCl}
\tag{1b}\label{NaCl}\\
\ce{2 NaBr + H2SO4 + 2H+ &-> Br2 + SO2 + 2H2O + 2Na+}
\tag2\label{NaBr}\\
\ce{8 NaI + H2SO4 + 8H+ &-> 4 I2 + H2S + 4 H2O + 8Na+}
\tag3\label{NaI}
\end{align}
Conventional explanation: $\ce{NaI}$ is a strong enough reducing agent to reduce the sulfur, and $\ce{NaBr}$ is a little less strong, so the sulfur is not reduced as much. Equivalently, $\ce{H2SO4}$ is not a strong enough oxidising agent to oxidise $\ce{NaF}$ and $\ce{NaCl}$.
My ensuing question: why is reaction \eqref{NaI} more favourable for sodium iodide than \eqref{NaBr} and $(\ref{NaF},\ref{NaCl})$, so that it reacts with sulfuric acid the way it does? What factors are involved: what is more stable (is it lower Gibbs free energy, or something else?) about reducing the sulfur as far as possible, as $\ce{NaI}$ 'chooses' \eqref{NaI} over $(\ref{NaF},\ref{NaCl},\ref{NaBr})$?
Ditto for sodium bromide: I understand why \eqref{NaI} is impossible for it, but why is \eqref{NaBr} chosen over $(\ref{NaF},\ref{NaCl})$?
Answer: A factor you didn't touch upon which may have some weight in determining the reactions is the acidities of the hydrogen halides formed by reactions of the first type (which is fundamentally a coarse qualitative analysis of the free energy of reaction). In aqueous solutions, the order of acidity is $\ce{HF} \ll \ce{HCl} < \ce{HBr} < \ce{HI}$. While concentrated sulfuric acid is obviously not the same as water, perhaps it is fair to say the relative acidities of the hydrogen halides are approximately the same, given they are both polar protic solvents with very high dielectric constants.
As you go down the family, the acid formed in reactions of the first type become stronger, and there is less thermodynamic drive to follow such a reaction pathway. $\ce{HI}$ is somewhat close in acidity to $\ce{H2SO4}$ since neither conjugate base is very coordinating. Meanwhile, as you say, the reducing power of the halides increases, and since sulfur is in a high oxidation number in $\ce{H2SO4}$, redox reaction pathways become more attractive. A detailed analysis of free energies of reactions should agree with the change in the mechanism since it is unlikely these reactions are kinetically controlled.
Edit: Another neat consequence of the acidity strength argument is that it explains why the reactions stop at the bisulfate anion instead of going all the way to the halide sulfates; $\ce{HSO4-}$ is a much, much weaker acid than $\ce{H2SO4}$, so there is no free energy to lose out of consuming it to produce a much stronger acid (in the case of $\ce{HCl}$, $\ce{HBr}$, and $\ce{HI}$ at least). $\ce{HF}$ is similar in acidity to $\ce{HSO4-}$ (in water), so it's not as readily explainable in such a qualitative analysis. | {
"domain": "chemistry.stackexchange",
"id": 416,
"tags": "inorganic-chemistry, acid-base, redox, halides"
} |
Java Console Guess The Number Game | Question: I was given a homework assignment to code a console Guess the Number Game, where the user should guess a random number. Also, I had to code a simple menu with switch where I can change a random number origin/bound and attempts amount.
Please, give me some tips how could I improve the code and where I'm wrong.
P.S. I must not use classes / methods etc. Only Control Flow Statements.
import java.util.Scanner;
import java.util.concurrent.ThreadLocalRandom;
public class Main {
public static void main(String[] args) {
var scanner = new Scanner(System.in);
var origin = 0;
var bound = 100;
var attempts = 10;
System.out.println("Guess the Number Game!");
while (true) {
System.out.println();
System.out.println("1. Start new game.");
System.out.println("2. Change origin (" + origin + ").");
System.out.println("3. Change bound (" + bound + ").");
System.out.println("4. Change attempts amount (" + attempts + ").");
System.out.println("5. Quit.");
System.out.println();
switch (Integer.parseInt(scanner.nextLine())) {
case 1:
var number = ThreadLocalRandom.current().nextInt(origin, bound);
var currentAttempts = attempts;
System.out.println("Guess the number between " + origin + " and " + bound + "!");
while (currentAttempts > 0) {
System.out.println(currentAttempts-- + " attempts left:");
var input = Integer.parseInt(scanner.nextLine());
if (input == number)
break;
else
System.out.println("The number is " + (input < number ? "greater" : "less") + " than yours.");
}
if (currentAttempts > 0)
System.out.println("You win! Congratulations!");
else {
System.out.println("You lose! Better luck next time!");
System.out.println("The number was " + number);
}
break;
case 2:
while (true) {
System.out.println("Enter new origin:");
int newOrigin = Integer.parseInt(scanner.nextLine());
if (newOrigin < bound) {
origin = newOrigin;
break;
} else
System.out.println("Origin should be less than bound. Try again.");
}
break;
case 3:
while (true) {
System.out.println("Enter new bound:");
int newBound = Integer.parseInt(scanner.nextLine());
if (newBound > origin) {
bound = newBound;
break;
} else
System.out.println("Bound should be greater than origin. Try again.");
}
break;
case 4:
while (true) {
System.out.println("Enter new attempts amount:");
int newAttempts = Integer.parseInt(scanner.nextLine());
if (newAttempts > 0) {
attempts = newAttempts;
break;
} else
System.out.println("Attempts amount should be greater than 0. Try again.");
}
break;
case 5:
System.out.println("Bye!");
return;
}
}
}
}
Answer: Because of the 'I must not use classes / methods etc. Only Control Flow Statements' constraint, I will try not to give comments in that direction.
Here is some advice:
Input checking
Integer.parseInt(scanner.nextLine())
This will crash when someone enters anything but a digit (e.g. 'a'). You should add exception handling for this.
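A small sketch of that (hypothetical class and helper names; since the assignment forbids extra methods, the same try/catch can just be inlined at each read):

```java
public class SafeInput {
    // Returns the parsed integer, or `fallback` when the line is not a number,
    // instead of throwing NumberFormatException at the caller.
    public static int parseOr(String line, int fallback) {
        try {
            return Integer.parseInt(line.trim());
        } catch (NumberFormatException e) {
            return fallback;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOr("3", -1));    // a valid menu choice
        System.out.println(parseOr("abc", -1));  // invalid input falls back to -1
    }
}
```

With a sentinel like -1 (not a menu option), the switch's default case can then prompt again instead of crashing.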
way to keep things going
while (true)
I would personally go for while(isRunning), as it will allow you to end the loop without the need for the "return" statement in the switch.
usage of var
var number = ThreadLocalRandom.current().nextInt(origin, bound);
int newOrigin = Integer.parseInt(scanner.nextLine());
You are mixing the usage of var and int when reading from the command line (which is technically OK), but I think consistency is also important in code.
default case
There is no default case in the switch. Adding one would, for example, allow you to handle people entering the number 6.
changing variables in printing output
System.out.println(currentAttempts-- + " attempts left:")
This gets really tricky when you use a logging framework and the user suddenly decides to switch off that level of logging.
I would use System.out.println only to print current values, and change the values before/after printing (whatever is appropriate).
coding style
if (input == number)
break;
else
System.out.println ...
I would advice to always put {}, even if the next line is a simple instruction. This will avoid any human reading mistakes.
eg.
if (<somecondition>)
dosomething();
break;
else
System.out.println ... | {
"domain": "codereview.stackexchange",
"id": 30826,
"tags": "java, number-guessing-game"
} |
How is the surface of a Bloch sphere a Hilbert space? | Question: In the linear algebra section of the Qiskit textbook appears the following claim regarding the Bloch sphere:
The surface of this sphere, along with the inner product between qubit
state vectors, is a valid Hilbert space.
It is pretty clear that by scaling any quantum state vector we can easily get a vector that points outside or inside the surface of the sphere, i.e. the surface of the sphere isn't a vector space.
Where does this contradiction come from?
Answer: The surface of a Bloch sphere is not a Hilbert space.
Maybe they meant to write that it's a valid projective Hilbert space (in particular it's isomorphic to $\mathbb{CP}^1$)? It's not a vector space, so it cannot be a Hilbert space (note that a "projective Hilbert space" is, perhaps somewhat confusingly, not a Hilbert space). | {
"domain": "quantumcomputing.stackexchange",
"id": 4170,
"tags": "qiskit, bloch-sphere, hilbert-space"
} |
How can warm steel and wood merge together? | Question: I need to describe the manufacturing of japanese knives. I struggle with the step when the steel blade is put together with the wooden handle.
Here are some pictures from the video https://www.youtube.com/watch?v=j2L_Ku47afo (at 6'42"):
After putting the hot blade inside the handle, you can see that the steel immediately cools down, and after hitting the handle with a hammer, the two pieces are merged. I've been told that welding cannot work between two different materials, especially not with wood. Could you please explain how those two materials merge together in this situation?
Thanks in advance for your answers!
EDIT:
Sorry if this doesn't seem directly related to chemistry; I was more interested in what happens at the molecular scale than in the physical processes. If you can think of a better forum to ask this, do not hesitate to tell me.
For those who can't see the picture or the video, the blacksmith has made a very long, thin end to the steel blade which he has warmed up until it becomes orange and then he puts it inside the wooden handle (which has a drilled hole). After he did this you can see smoke going out the handle, and finally the blacksmith hits the handle with a hammer (in the direction of the length).
Answer: A common press fit; no chemistry involved. This is most commonly used for files (in the US). All my (old) files use this same press fit; I believe newer files have plastic handles. The use of heat to custom-shape the hole for the tang is a good modification. A smaller pilot hole can be used because it will be enlarged to the right size by burning. This likely gives a tighter fit compared to a larger round hole. | {
"domain": "chemistry.stackexchange",
"id": 16550,
"tags": "iron"
} |
Sequence movement with delay | Question:
Hi there,
I am trying to put a delay between a movement of turtlebot so that I can execute the movement step by step.
However, the behavior of the turtlebot is different from what I expected. It only executes the first movement and skips the other command. Could anybody help me out here? I have attached a sample of my program below. Thank you.
#include <ros/ros.h>
#include <stdlib.h>
#include <iostream>
#include <nav_msgs/Odometry.h>
#include <geometry_msgs/Twist.h>
#include <geometry_msgs/Pose.h>
#include <numeric>
using namespace std;
ros::Publisher pub_vel;
geometry_msgs::Twist cmdvel;
geometry_msgs::Pose Curr_Loc;
void odomCB(const nav_msgs::Odometry::ConstPtr& msg)
{
Curr_Loc = msg->pose.pose;
double x=msg->pose.pose.position.x;
double y=msg->pose.pose.position.y;
double Z=msg->pose.pose.orientation.z;
double W=msg->pose.pose.orientation.w;
cout<<"Current location: "<<x<<", "<<y<<" Orientation "<<Z<<", "<<W<<endl;
cmdvel.angular.z=0.0;
cmdvel.linear.x=0.2;
usleep(300000); //delay
cmdvel.angular.z=-0.2;
cmdvel.linear.x=0.0;
usleep(300000); //delay
pub_vel.publish(cmdvel);
}
int main(int argc, char** argv)
{
ros::init(argc, argv, "sequence_move");
ros::NodeHandle n;
pub_vel=n.advertise<geometry_msgs::Twist>("cmd_vel",1);
pub_vel.publish(cmdvel);
ros::Subscriber Sub_Odom=n.subscribe("odom",1,odomCB); //subcribe
ros::spin();
return 0;
}
Originally posted by nadal11 on ROS Answers with karma: 23 on 2014-06-11
Post score: 0
Answer:
You are just setting values on an internal struct when setting cmdvel, and waiting in between. This doesn't affect the robot at all. Only the final publish call will actually send something to the robot.
You are also sending those from a callback. You'll probably want to restructure your program so that you can send commands continuously from a main loop to keep the robot moving.
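A sketch of that restructuring (the phase durations and function name are illustrative, not from the original answer): keep the sequencing in a plain function, then publish its result from a rated main loop.

```cpp
#include <utility>

// Maps elapsed time to a (linear.x, angular.z) command -- the sequencing the
// original code attempted with usleep() inside the odometry callback.
// The 3 s / 6 s thresholds are made-up example values.
std::pair<double, double> commandFor(double elapsedSec) {
    if (elapsedSec < 3.0) return {0.2, 0.0};   // phase 1: drive forward
    if (elapsedSec < 6.0) return {0.0, -0.2};  // phase 2: turn in place
    return {0.0, 0.0};                         // finished: stop
}

// In main(), roughly:
//   ros::Rate rate(10);
//   ros::Time start = ros::Time::now();
//   while (ros::ok()) {
//       std::pair<double, double> c =
//           commandFor((ros::Time::now() - start).toSec());
//       cmdvel.linear.x = c.first;
//       cmdvel.angular.z = c.second;
//       pub_vel.publish(cmdvel);   // publish on every iteration
//       ros::spinOnce();
//       rate.sleep();
//   }
```

This way every loop iteration actually publishes a Twist, and the odometry callback is free to do nothing but record the current pose.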
Originally posted by dornhege with karma: 31395 on 2014-06-11
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 18238,
"tags": "ros"
} |
Why doesn't the inception score measure intra-class diversity | Question: It's mentioned here that there is no measure of intra-class diversity with the inception score:
If your generator generates only one image per classifier image class,
repeating each image many times, it can score highly (i.e. there is no
measure of intra-class diversity)
However, isn't it "easy" to look at the variance of the outputs of the classifier for a given class (e.g. if you only output 0.97 for all the images of your GAN class then there is no intra-class diversity but if you output 0.97, 0.95, 0.99, 0.92, there is diversity?). I'm struggling to understand why this is hard to do (but I might be missing something!).
Answer: For reference, a recap of Inception Score:
The inception score is computed by comparing the categorical output distributions of an inception model, given examples from real vs synthetic images. If the synthetic images produce similar class distributions as the real images, the inception score is high, otherwise it is low.
However, isn't it "easy" to look at the variance of the outputs of the classifier for a given class
Say you want to generate multiple horses and the model learns to generate horses with different colors but always in the same pose - then your class probabilities will vary, but I wouldn't call this very diverse horse generation. This is how I would understand what is meant by your cited statement.
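A toy computation (my own sketch; a real inception score uses a pretrained Inception network, here replaced by hand-made class distributions) makes the cited loophole concrete: a generator emitting exactly one repeated image per class already attains the maximum score.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (n_images, n_classes) class distributions p(y|x).
    IS = exp( mean_x KL( p(y|x) || p(y) ) )."""
    probs = np.asarray(probs, dtype=float)
    marginal = probs.mean(axis=0)  # p(y)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(marginal + eps)), axis=1)
    return float(np.exp(kl.mean()))

one_image_per_class = [[1, 0], [0, 1]] * 5   # zero intra-class diversity
print(inception_score(one_image_per_class))  # ~2.0, the maximum for 2 classes

uninformative = [[0.5, 0.5]] * 10
print(inception_score(uninformative))        # ~1.0, the minimum
```

Per-image confidence and marginal class coverage are both visible to the score, but nothing about feature-level variation within a class is.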
The output distributions from the inception model contain class information but very little information of specific image features. Thus, the inception score cannot be sensitive to intra-class variations of the generator. | {
"domain": "ai.stackexchange",
"id": 3321,
"tags": "generative-adversarial-networks, inception"
} |
Why is visible light easily blocked by pretty much anything but radio waves are not? | Question: Light (visible part of electromagnetic spectrum) is easily blocked by most materials. Radio waves are not but also X-rays are not. What is so special about this small part of the spectrum since both shorter and longer waves seem to behave differently?
Answer: Blocking means there are interactions with the light that manage to absorb it completely.
Most materials are opaque to visible light due to the underlying quantum nature of matter. To emit or absorb light, there must be energy levels that will "trap" those frequencies.
(figure: the electromagnetic spectrum)
Yes, X-rays and higher-energy photons pass through thin layers of matter because they mostly scatter. Only very low-frequency radio waves will go through almost anything. Most other frequencies will resonate with some electronic energy bands, and metals in particular can trap them.
So it is not a "small part of the spectrum" that is blocked. What is true is that radio waves go through non-metallic materials because there are no electronic energy levels to absorb or reflect them. And the same is true for X-rays and higher frequencies.
"domain": "physics.stackexchange",
"id": 27306,
"tags": "electromagnetism, visible-light"
} |
Can Kraus operators change a mixed state into a pure state? | Question: It seems that Kraus operators cannot change a pure state into a mixed one (wrong). Any pure state can be written as $|\psi\rangle\langle\psi|$, so after applying the Kraus operators it becomes $$\sum_l\Pi_l|\psi\rangle\langle\psi|\Pi_l^\dagger = |\phi\rangle\langle\phi|.$$
But does there exist some Kraus operators that can change the mixed state $\rho$ into a pure state?
Answer: More generally, given any two states, you can always find some channel sending one into the other. Consider for example replacement maps, which have the form
$$\Phi_Y(X) = \operatorname{Tr}(X) Y.$$
Given any pair of states $\rho$ and $\sigma$, the channel $\Phi_\sigma$ will send $\rho$ (as well as any other state) into $\sigma$.
The (or a) set of Kraus operators for $\Phi_Y$ is $\{\sqrt{y_\alpha}\lvert y_\alpha\rangle\!\langle \beta|\}_{\alpha,\beta}$, where $|y_\alpha\rangle$ form an orthonormal basis of eigenvectors for $Y$ (assuming $Y$ to be normal), $y_\alpha$ are the corresponding eigenvalues, and $|\beta\rangle$ is an arbitrary orthonormal basis (for the relevant space).
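For example, with $Y=|0\rangle\langle 0|$ the recipe gives $K_b = |0\rangle\langle b|$, which is easy to check numerically (a quick sketch of my own, not from the answer):

```python
import numpy as np

# Kraus operators of the replacement channel with Y = |0><0|: K_b = |0><b|.
K = [np.array([[1, 0], [0, 0]]),   # |0><0|
     np.array([[0, 1], [0, 0]])]   # |0><1|

# Completeness: sum_b K_b^dagger K_b = I, so this is a valid channel.
assert np.allclose(sum(k.conj().T @ k for k in K), np.eye(2))

rho = np.eye(2) / 2                            # maximally mixed input state
out = sum(k @ rho @ k.conj().T for k in K)
print(out)                                     # [[1, 0], [0, 0]], i.e. |0><0|
```

So the channel indeed turns the maximally mixed state into the pure state $|0\rangle\langle 0|$.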
See also @Norbert's answer on physics.SE for other examples of quantum channels on which to test hypotheses. | {
"domain": "quantumcomputing.stackexchange",
"id": 2849,
"tags": "quantum-operation, kraus-representation"
} |
What is Shannon's source entropy | Question: Suppose that $\{X_n, Y_n\}$ is a random process with a discrete alphabet, that is, taking on values in a discrete set, for data length $n$. They correspond to the input and output of a communication process. Assuming $Y$ to be the discretized output, what is meant by Shannon's source entropy for a given discretization bin $k$: $H_{source} = \lim_{k \to \infty} \frac{1}{k} H_k$, where
$H_k$ stands for Shannon's entropy? I found this in the paper. Now, problem_Set also mentions source entropy, but the formula is very different! It's the same as Shannon's entropy. So, what exactly is source entropy? Is it the entropy of $X$ or $Y$, or is it the same as Shannon's entropy?
Answer:
So,what exactly is Source entropy?
When the paper talks about "source entropy" all they mean is the entropy of the information source. You can see that from the following passage in the paper- "Shannon showed that there must exist at least one encoding of the sequence generated by an information source which allows error-free transmission of the sequence when the channel capacity is larger than the source entropy."
Now, problem_Set also mentions about source entropy but the formula is very different!
Yes, and no. The formulas appear different, but they are not really. The first formula is
$H_{source} = \lim_{k\to\infty}\frac{1}{k}H_k$
$H_{source}$ is the total entropy of the source, and $H_k$ is the entropy of each possible discrete value that the random variables in $H$ can be. Thus, it is almost a trivial statement that the total entropy is the sum of the individual value entropies, while the number of discrete values can go to $\infty$.
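One concrete reading (my own sketch, not from the answer): take $H_k$ to be the entropy of length-$k$ blocks emitted by the source. For i.i.d. fair bits every block entropy is exactly $k$ bits, so $\frac{1}{k}H_k$ is $1$ bit/symbol for every $k$ and the limit is trivial:

```python
from math import log2
from itertools import product

def block_entropy(k):
    """H_k for k i.i.d. fair bits: each of the 2^k blocks has probability 2^-k."""
    p = 1 / 2 ** k
    return sum(p * log2(1 / p) for _ in product([0, 1], repeat=k))

for k in (1, 2, 8):
    print(k, block_entropy(k) / k)  # the per-symbol rate stays at 1.0
```

For sources with memory the blocks are correlated and the rate $\frac{1}{k}H_k$ drops below the single-symbol entropy, which is why the limit is taken.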
The problem set equation is-
$H(s) = \sum_i p_i\log_2\frac{1}{p_i}$
This is really the same equation; it just replaces the generic "$H_k$" with the implied $\log_2\frac{1}{p_i}$. The $\frac{1}{k}$ in the first equation is equivalent to the $p_i$ in the second equation, i.e. they are assuming that all of the discrete values of $H$ are equally likely. This is not generally true, but it is true when the entropy is maximized. | {
"domain": "dsp.stackexchange",
"id": 385,
"tags": "discrete-signals, reference-request, information-theory"
} |
Nitric acid plus Hydroiodic acid | Question: Why is $\ce{I2}$ formed when $\ce{HI}$ and $\ce{HNO3}$ react?
I know that $\ce{HI}$ is more acidic than $\ce{HNO3}$, so nitric acid will accept protons from $\ce{HI}$; thus $\ce{I-}$ (iodide ion) should be formed, and nitric acid, on accepting a proton, would form $\ce{H2NO3+}$. But that is not what happens. Why?
Answer: Nitrate is a strong oxidant which oxidizes the iodide to iodine.
$$\ce{NO_3^- + 2I^- + 2H^+\rightarrow NO_2^- + I_2 \uparrow + H2O}$$
Note that the oxidation number of the nitrogen atom in $\ce{NO3-}$ is $+V$ and in $\ce{NO2-}$ it's $+III$, so over all, we have a reduction equation of:
$$\ce{NO_3^- + 2e^- + 2H^+\rightarrow NO_2^- + H2O}$$
On the oxidation side we want to form $\ce{I_2}$ out of $\ce{I^-}$, so the oxidation equation is:
$$\ce{2I^- \rightarrow I_2 + 2e^-}$$ | {
"domain": "chemistry.stackexchange",
"id": 6259,
"tags": "inorganic-chemistry, acid-base, redox"
} |
Why does a laser cavity being finite imply that beam divergence occurs? | Question: I recently read that one of the reasons laser beam divergence occurs is that the radius of the cylindrical cavity is finite, so stimulated emission with amplification also occurs for photons travelling along directions not exactly parallel to the axis. I don't really understand why a laser cavity being finite implies that beam divergence occurs, nor what this has to do with photons travelling along directions not exactly parallel to the axis. So why does a finite laser cavity imply beam divergence, and what does this have to do with photons travelling along directions not exactly parallel to the axis?
Answer: When you solve Maxwell's equations, one of the easiest solutions that can occur is the plane wave $E e^{i(\vec{k} \vec{x} - \omega t)}$. Plane waves have a straightforward interpretation (in homogeneous, isotropic media) when it comes to "direction": they "travel" in only one direction.
That is: their Poynting vector points in the same direction as $\vec{k}$, and the wave fronts "travel" in that direction.
However, the wave is spatially unlimited in the directions perpendicular to $\vec{k}$.
Another solution of Maxwell's equations (at least approximately) is the "Gaussian beam".
This resembles what we observe as a "light ray" much better than the plane wave above, and this is the model used to describe the beam from a laser cavity. Unlike the plane wave, the Gaussian beam widens as it propagates.
You don't find solutions to Maxwell's equations that show no widening yet are spatially confined.
The easiest (and lazy) way to see it is that the widening is simply a consequence of Maxwell's equations. However, we can observe a pattern here: the narrower and smaller the waist $w_0$ is, the bigger the widening will be. On the other hand, with a spatially unlimited wave like the plane wave, no widening occurs at all.
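The pattern can be made quantitative with the textbook far-field divergence half-angle of a Gaussian beam of waist $w_0$ and wavelength $\lambda$ (a standard relation, not spelled out in the original answer):

```latex
\theta \approx \frac{\lambda}{\pi w_0}
```

A cavity of finite radius forces a finite $w_0$; the smaller the waist, the larger $\theta$, and only in the plane-wave limit $w_0 \to \infty$ does the divergence vanish.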
This property is an analogue of the quantum-mechanical observation that position and momentum (in this case along the $y$-axis) cannot both be fixed with arbitrary precision at the same time. The moment you require photons to pass through a hole, limiting their position in a direction perpendicular to the optical axis, their corresponding momentum in that direction can no longer be set to $0$ with certainty. | {
"domain": "physics.stackexchange",
"id": 77674,
"tags": "laser, laser-cavity"
} |
Connect SDF model to ROS | Question:
Hello, since I'm a newbie in ROS and Gazebo, I'm having trouble finding a tutorial or someone who could explain how to connect a robot SDF model to ROS so that I can acquire sensor data. Is there any tutorial, or some plugin, that can connect an SDF model to ROS like a URDF model? Thanks!
Originally posted by sirsomething on Gazebo Answers with karma: 3 on 2017-04-05
Post score: 0
Answer:
Hello, there are a few resources that can be useful for you:
gazebo_ros_pkg contains a set of ROS packages to communicate Gazebo and your ROS code. I recommend to read the wiki to get a high level overview.
More specifically, take a look at the "Using a URDF in Gazebo" and the follow-up tutorial, where you can learn how to load a plugin (libgazebo_ros_camera.so) that will publish images via a ROS topic and how to visualize them in rviz.
Additionally, you can take a look at the Gazebo intermediate guided tutorials. It's composed by six tutorials where the process of customizing a model in Gazebo is explained. The tutorial number #6 contains instructions for receiving ROS messages and modify your simulation (in this case, spinning a Velodyne sensor).
Originally posted by Carlos Agüero with karma: 626 on 2017-04-05
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 4074,
"tags": "sdformat"
} |
JavaScript DOM Implementation | Question: I'm supposed to create a website that adds multiple forms to the page, is responsive and checks if the inputs are valid (validation is not important, just needs to show some attempt at regex). Below is what I've written so far. What I'm looking for is any advice on making it more efficient and compact. Any and all help is appreciated and considered. Thanks in advance!
index.html
<!DOCTYPE html>
<html lang="en-US">
<head>
<meta charset="UTF-8">
<title>index.html</title>
<script src="script.js"></script>
<link rel="stylesheet" type="text/css" href="style.css">
</head>
<body onload="execute()" id="body">
<h3>Content Below:</h3>
<div id="buffer">
<div id="content">
<!-- Content will go here -->
</div>
</div>
<div id="info">
<!-- Info will go here -->
</div>
</body>
</html>
script.js
function ContentDisplayer() {
this.count = 0;
this.show = function(id) {
var tag = document.getElementById(id);
tag.style.display = 'block';
}
this.showText = function(id, content) {
var tag = document.getElementById(id);
tag.innerHTML = content;
}
this.constructForm = function(containing_id, question) {
//Create div containing the form
var div_tag = document.createElement('div');
div_tag.id = 'div_' + this.count;
document.getElementById('body').appendChild(div_tag);
//Create the form tag
var form_tag = document.createElement('form');
form_tag.id = 'form_' + this.count;
document.getElementById(div_tag.id).appendChild(form_tag);
//Create the fieldset tag
var fieldset_tag = document.createElement('fieldset');
fieldset_tag.id = 'fieldset_' + this.count;
document.getElementById(form_tag.id).appendChild(fieldset_tag);
//Create question label
var label_tag = document.createElement('label');
var label_text = document.createTextNode(question);
label_tag.appendChild(label_text);
label_tag.id = 'label_' + this.count;
document.getElementById(fieldset_tag.id).appendChild(label_tag);
//insert line break
var break_tag = document.createElement('br');
document.getElementById(fieldset_tag.id).appendChild(break_tag);
//Create answer label
var input_tag = document.createElement('input');
input_tag.id = 'input_' + this.count;
input_tag.type = 'text';
document.getElementById(fieldset_tag.id).appendChild(input_tag);
//insert line break
var break_tag = document.createElement('br');
document.getElementById(fieldset_tag.id).appendChild(break_tag);
//Create button
var button_tag = document.createElement('button');
var button_text = document.createTextNode('Submit');
button_tag.appendChild(button_text);
button_tag.type = 'button';
button_tag.id = 'button_' + this.count;
button_tag.onclick = function() {
var x = document.getElementById(input_tag.id);
if(input_tag.id == 'input_0') {
if(/^[a-zA-Z]+$/.test(x.value)) {
x.style.backgroundColor = "green";
x.style.borderColor = "green";
}
}
if(input_tag.id == 'input_1') {
if((/^[0-9]+$/.test(x.value)) && x.value > 0 && x.value <= 100) {
x.style.backgroundColor = "green";
x.style.borderColor = "green";
}
}
if(input_tag.id == 'input_2') {
if(/^\w+@[a-zA-Z_]+?\.[a-zA-Z]{2,3}$/.test(x.value)) {
x.style.backgroundColor = "green";
x.style.borderColor = "green";
}
}
if(input_tag.id == 'input_3') {
if(/\d{1,5}\s\w{1,10}\s\w{1,10}/.test(x.value)) {
x.style.backgroundColor = "green";
x.style.borderColor = "green";
}
}
if(input_tag.id == 'input_4') {
if(/^\d{3}-\d{3}-\d{4}$/.test(x.value)) {
x.style.backgroundColor = "green";
x.style.borderColor = "green";
}
}
}
document.getElementById(fieldset_tag.id).appendChild(button_tag);
this.count += 1;
}
}
var c;
var questions = [
'Enter your first name',
'Enter your age',
'Enter your email',
'Enter your address',
'Enter your phone number (must use dashes): ###-###-####'
];
var question_ids = [
'name_content',
'age_content',
'email_content',
'address_content',
'phone_content'
];
function execute() {
c = new ContentDisplayer();
c.show('buffer');
c.showText('content', '<h1>Hello!</h1>');
c.showText('info', 'If the box turns green, the information is valid!');
//create loop to add forms to page
for (var i = 0; i < questions.length; i++) {
c.constructForm(question_ids[i], questions[i]);
}
}
style.css
body {
font-family: "Times New Roman", Times, serif;
background-color: pink;
}
.buffer {
display: none;
}
input[type=text] {
border: 2px solid red;
border-radius: 4px;
background-color: #f44242;
margin: 1px;
}
input[type=text]:focus {
border: 2px solid blue;
border-radius: 4px;
background-color: #41dcf4;
}
button {
background-color: green;
border: none;
border-radius: 6px;
color: white;
text-align: center;
font-size: 16px;
}
Answer: Duplicate If-Statements
You have 5 validations that all look the same. You could write a function to get rid of the duplication.
The function could look like:
function makeGreenIfValidationIsValid(x, tagId, regex) {
if(x.id == tagId && regex.test(x.value)) {
x.style.backgroundColor = "green";
x.style.borderColor = "green";
}
}
After that, the onclick callback would look like (note the element is passed in, so the helper does not depend on outer-scope variables):
button_tag.onclick = function() {
var x = document.getElementById(input_tag.id);
makeGreenIfValidationIsValid(x, 'input_0', /^[a-zA-Z]+$/);
makeGreenIfValidationIsValid(x, 'input_1', /^[0-9]+$/);
makeGreenIfValidationIsValid(x, 'input_2', /^\w+@[a-zA-Z_]+?\.[a-zA-Z]{2,3}$/);
makeGreenIfValidationIsValid(x, 'input_3', /\d{1,5}\s\w{1,10}\s\w{1,10}/);
makeGreenIfValidationIsValid(x, 'input_4', /^\d{3}-\d{3}-\d{4}$/);
document.getElementById(fieldset_tag.id).appendChild(button_tag);
this.count += 1;
}
Extract Class
The method constructForm in ContentDisplayer should be its own class. An indicator for that is that it is huge (more than 80 lines) and you have many tag-comments in it.
Comments are not bad as such, but when you group your code into a method you already see semi-independent logic. In Robert C. Martin's words: “Don’t Use a Comment When You Can Use a Function or a Variable”
For example, the class might be named "Form" and could contain several methods. Based on your comments it could look like
function Form() {
//Create div containing the form
this.createDivTag = function() {};
//Create the form tag
this.createFormTag = function() {};
//Create the fieldset tag
this.createFieldsetTag = function() {};
}
The logic in create[tag-name]Tag for creating a div, form and fieldset looks very similar. We should extract the common logic into a function.
Prototyping
Currently ContentDisplayer and Form (the class from above) don't use it.
A disadvantage is that each time an object is created, all its methods (like show) are recreated as well, which wastes memory and costs performance.
With prototyping it would look like
function ContentDisplayer() {
this.count = 0;
}
ContentDisplayer.prototype.show = function(id) {/**/}
ContentDisplayer.prototype.showText = function(id, content) {/**/}
// ... | {
"domain": "codereview.stackexchange",
"id": 33597,
"tags": "javascript, homework, dom"
} |
What is the difference between a DC motor with an encoder and one without? | Question: My question is: what is the difference between a DC motor with an encoder and one without? As long as I can control the speed of the DC motor using PWM, for example on the Arduino, what is the fundamental difference?
Answer: As @szczepan mentions, the difference is one of feedback.
There are many ways to get feedback regarding the motion of a dc motor. People implementing a control system will often put a mark on the motor's shaft, or attach a "flag" of masking tape, so that they can visually see the turning of the shaft. This helps to ensure the direction of motion is correct (otherwise the dc polarity needs to be reversed). It also helps for observing motion at slow speeds - but is otherwise not appropriate for an automated system.
If you want to automate the observations of the motor's rotation, you must implement some type of sensor that provides appropriate information to the control computer. There are many ways to do this. For example, you can monitor the current being consumed by the motor's windings, and use the motor constant $k_\tau$, to infer the torque generated by the motor. This can be related to acceleration of the shaft by using an appropriate dynamic model of the system. This method, however, is not very accurate and is prone to modelling errors and signal noise. This method is similar to you monitoring the PWM output and inferring motion dynamics - neither is robust to changing dynamics of the system. Another approach is to glue a magnet to the shaft, and monitor it with a Hall effect sensor. This will provide a single pulse to the computer for each rotation of the shaft. This is frequently a solution for high-temperature, or dirty, environments (such as in automotive applications). However, often you need finer granularity of the motion. That is where encoders come in.
There are two basic types of encoders: incremental and absolute. They can be further characterized as quadrature, or non-quadrature encoders. A non-quadrature incremental encoder provides a single pulse to the controller for every incremental motion of the motor shaft. As the previous answer makes clear, this position feedback can be interpreted to infer velocity, acceleration, and possibly jerk (although three derivatives of a sensed value are "spikey" in most applications). This type of encoder, however, only provides information when the position changes, and it does not provide any information about the direction of motion. A quadrature encoder provides two pulses, out of phase, that can be used to detect direction also.
An absolute encoder can provide not only the same information as the incremental encoder does, but it also has many more bits of information from which you can know the angular position of the shaft, in addition to detecting the incremental changes in position.
You can make a very simple encoder by using a disc with slots cut into it. Place a light source (such as an LED) on one side of the disc, and a photodetector on the opposite side. You will get one pulse each time a slot passes between the sensor elements. As you can see, the resolution of motion detection is determined by the number of slots in the disc. Encoders are available with many different numbers of pulses per revolution.
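As a concrete illustration (my sketch, with assumed sample numbers, not from the answer): once you know the encoder's pulses-per-revolution, converting raw counts into shaft speed is a short calculation. With a quadrature encoder read in x4 decoding, each revolution produces 4 × PPR edge counts.

```python
# Hypothetical example: turn quadrature encoder counts into RPM.
# PPR, counts and dt are assumed sample values, not from the answer.
PPR = 360             # encoder pulses per revolution (from a datasheet)
counts = 2400         # edges counted in one sample window (x4 decoding)
dt = 0.1              # length of the sample window in seconds

revolutions = counts / (4 * PPR)   # x4 decoding: 4 edges per pulse
rpm = revolutions / dt * 60
print(rpm)            # 1000.0
```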
I suggest reading a book such as this one if you want to know more about motion metrology: | {
"domain": "robotics.stackexchange",
"id": 1230,
"tags": "motor"
} |
Breaking after one of several string replaces occur | Question: I have a script that matches a string against ~20 different regexs for the purpose of changing it. They are ordered and formed that the string will only ever match against one. How can I avoid checking all the str.replaces after I match once?
My code here works, but I repeat myself a lot and it does not seem optimal.
The while loop exists so I could break, I don't actually need to loop through anything - and in fact I ensure I don't loop at the end of it. I'm merely looking for a way to stop performing replaces, when I know they will do nothing.
function format_number(input){
var str = input.value;
str.toString();
var copy = str;
var checked = false;
while (!checked) {
str = str.replace(/regex1/, 'replace1');
if (copy != str) {break;}
str = str.replace(/regex2/, 'replace2');
if (copy != str) {break;}
str = str.replace(/regex3/, 'replace3');
if (copy != str) {break;}
str = str.replace(/regex4/, 'replace4');
if (copy != str) {break;}
str = str.replace(/regex5/, 'replace5');
if (copy != str) {break;}
str = str.replace(/regex6/, 'replace6');
if (copy != str) {break;}
str = str.replace(/regex7/, 'replace7');
if (copy != str) {break;}
str = str.replace(/regex8/, 'replace8');
if (copy != str) {break;}
str = str.replace(/regex9/, 'replace9');
if (copy != str) {break;}
str = str.replace(/regex10/, 'replace10');
if (copy != str) {break;}
str = str.replace(/regex11/, 'replace11');
if (copy != str) {break;}
str = str.replace(/regex12/, 'replace12');
if (copy != str) {break;}
str = str.replace(/regex13/, 'replace13');
if (copy != str) {break;}
str = str.replace(/regex14/, 'replace14');
if (copy != str) {break;}
str = str.replace(/regex15/, 'replace15');
if (copy != str) {break;}
str = str.replace(/regex16/, 'replace16');
if (copy != str) {break;}
str = str.replace(/regex17/, 'replace17');
if (copy != str) {break;}
checked = true;
}
input.value=str;
}
Answer: This seems to be a fine case for a loop over the expressions:
function format_input(input) {
// decouple DOM things from pure functionality!
input.value = format_number(input.value);
}
function format_number(str) {
var replacements = [
[/regex1/, 'replace1'],
[/regex2/, 'replace2'],
[/regex3/, 'replace3'],
[/regex4/, 'replace4'],
[/regex5/, 'replace5'],
[/regex6/, 'replace6'],
[/regex7/, 'replace7'],
[/regex8/, 'replace8'],
[/regex9/, 'replace9'],
[/regex10/, 'replace10'],
[/regex11/, 'replace11'],
[/regex12/, 'replace12'],
[/regex13/, 'replace13'],
[/regex14/, 'replace14'],
[/regex15/, 'replace15'],
[/regex16/, 'replace16'],
[/regex17/, 'replace17']
];
for (var i=0; i<replacements.length; i++) {
var copy = str.replace(replacements[i][0], replacements[i][1]);
if (str != copy)
return copy;
}
return str;
} | {
"domain": "codereview.stackexchange",
"id": 7951,
"tags": "javascript, strings, regex"
} |
Derivative of quantities with internal indices | Question: In the context of the 3 + 1 decomposition of spacetime needed for a Hamiltonian formulation of general relativity, quantities with so-called internal indices are introduced (in the book I am reading on p.43). For such quantities $G^i$, some kind of a "covariant derivative" is defined:
$D_aG^i = \partial_a G^i + \Gamma _{aj}^iG^j$
Using this derivative, a corresponding "curvature tensor" $\Omega_{ab}^{ji}$ is then calculated by
$(D_aD_b - D_bD_a)G^j = \Omega_{ab}^{ji}G^i$
My questions about this are:
1) Why is $\Gamma _{aj}^i$ called spin connection; it has to do with the spin of what ...?
2) How is the so called curvature of connection $\Omega_{ab}^{ji}$ related to the "conventional" curvature tensor ?
Answer: 1) The spin connection allows you to define covariant derivatives of spinors in curved spacetime. For example, to do this, you want a set of gamma matrices which are covariantly constant, so you use the combinations
$\Gamma^i=\gamma^a E_a^i$
where $\gamma^a$ are the usual flat space gamma matrices and $E_a^i$ are the tetrad components, i.e.
$E_a=E_a^i\partial_i$
where $E_a$ is the tetrad basis for the tangent space.
2) The differential geometric relations between the vielbein formalism and the "standard" one is described in detail here. Section IV B describes the curvature relationship I think you're looking for. | {
"domain": "physics.stackexchange",
"id": 2578,
"tags": "general-relativity, differential-geometry"
} |
Relativistic Kinematics - Proton-proton collision | Question: I am stuck on the following question:
A proton of total energy $E$ collides with a proton at rest and creates a pion in addition to the two protons:
$$p+p \rightarrow p+p+ \pi^{0} $$
Following the collision, all three particles move at the same velocity.
Show that $$E = m_p c^2 + \left(2+ \displaystyle \frac{m_{\pi}}{2m_p} \right) m_{\pi}c^2 $$ where $m_p$ and $m_{\pi}$ are the rest masses of the proton and pion respectively.
I have tried conserving energy to get:
$$E + m_p c^2 = 2m_p \gamma (v) c^2 +\gamma (v) m_{\pi} c^2$$
The issue is, what do we do about the $\gamma$ (where $\gamma$ is the Lorentz factor)?
Thanks - all help is appreciated.
Answer: Conservation of energy on its own is usually not sufficient for this sort of problem; you also need to work with conservation of momentum, and then resort to a number of nasty substitutions and whatnot. Instead, why not go for conservation of 4-momentum?
$$p_1+p_2=q_1+q_2+q_\pi,$$
where $1$ and $2$ label the protons, and $q$ are final state momenta (for simplicity, I've dropped the "$\mu$" indices). Squaring on both sides (or, if using indices, taking the inner products):
$$p_1^2+p_2^2+2p_1\cdot p_2=q_1^2+q_2^2+q_\pi^2+2q_1\cdot q_2+2q_1\cdot q_\pi+2q_2\cdot q_\pi.$$
Now remember that $p=(E,\vec{p})$ and so $p^2=E^2-|\vec{p}|^2=m^2$ :
$$2p_1\cdot p_2=2m_p^2+m_\pi^2+4m_pm_\pi.$$
Note that to simplify the RHS I've used the fact that all decay products move with the same velocity, so $q_i\cdot q_j=m_im_j$. To reduce the LHS, I now use the fact that proton $1$ is at rest, i.e. $p_1=(m_p,0)$ :
$$2p_1\cdot p_2=2Em_p.$$
Plugging it in the above equation yields the desired result (up to factors of $c=1$ in natural units):
$$E=m_p+\left(2+\dfrac{m_\pi}{2m_p}\right)m_\pi.\quad\square$$ | {
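A quick numerical sanity check of this result (my addition, not part of the original answer; the masses are approximate values in MeV, with $c=1$): since the three products share one velocity, they behave like a single composite of mass $M = 2m_p + m_\pi$ carrying the beam momentum, so the derived $E$ should satisfy $E + m_p = \sqrt{M^2 + p^2}$.

```python
# Sanity check (assumed approximate masses, MeV, c = 1): the derived E
# satisfies E + m_p = sqrt(M^2 + p^2) with M = 2 m_p + m_pi.
import math

m_p, m_pi = 938.272, 134.977
E = m_p + (2 + m_pi / (2 * m_p)) * m_pi   # the formula derived above
p = math.sqrt(E**2 - m_p**2)              # incident proton momentum

M = 2 * m_p + m_pi                        # products share one velocity
E_final = math.sqrt(M**2 + p**2)
print(abs(E + m_p - E_final))             # ~ 0: energy is conserved
```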
"domain": "physics.stackexchange",
"id": 40392,
"tags": "special-relativity, particle-physics"
} |
Given a magnetic field how to find its vector potential? Is there an "inverse" curl operator? | Question: For a certain (divergenceless) $\vec{B}$ find $\vec{A} $ such that $\vec{B}= \nabla \times \vec{A} $.
Is there a general procedure to "invert" $\vec{B}= \nabla \times \vec{A} $? An inverse curl?
(I was thinking of taking the curl of the previous equation:
$$ \nabla \times \vec{B}= \nabla \times \nabla \times \vec{A}. $$
Then using the triple cross product identity $ \nabla \times \nabla \times \vec{V} = \nabla (\nabla \cdot \vec{V}) - \nabla^2 \vec{V}$, but that does not quite simplify things... I was hoping to get some sort of Laplace equation for $\vec{A}$ involving terms of $\vec{B}$.)
Answer: You could try using the Helmholtz decomposition.
If $F$ is a twice-differentiable vector field on a bounded volume $V$ with boundary $S$, then it can be decomposed into curl-free and divergence-free components:
$$F=-\nabla\Phi+\nabla\times \mathbf{A}$$
where
$$\Phi(\mathbf{r})=\frac{1}{4\pi}\int_V\frac{\nabla'\cdot F(\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|}dV'-\frac{1}{4\pi}\oint_S\mathbf{\hat{n}}'\cdot\frac{F(\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|}dS'$$
$$\mathbf{A}(\mathbf{r})=\frac{1}{4\pi}\int_V\frac{\nabla'\times F(\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|}dV'-\frac{1}{4\pi}\oint_S\mathbf{\hat{n}}'\times\frac{F(\mathbf{r}')}{\left|\mathbf{r}-\mathbf{r}'\right|}dS'$$
$\mathbf{\hat{n}}'$ is the unit outward normal and $\nabla'$ is the gradient with respect to $\mathbf{r}'$ rather than $\mathbf{r}$.
The curl on its own does not have a uniquely-defined inverse. However, the curl and divergence can be combined into a single operator that does have a unique inverse up to boundary conditions.
If the fields are assumed to approach zero at infinity, the boundary integral in each of the above expressions becomes zero. If the field is required to be solenoidal, then the divergence in the first expression will be zero. | {
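For the special case of a uniform field the inversion can be written down directly: $\vec{A}(\vec{r})=\tfrac12\,\vec{B}\times\vec{r}$ satisfies $\nabla\times\vec{A}=\vec{B}$. The sketch below (my addition, not part of the answer) checks this numerically with central finite differences; since $\vec{A}$ is linear in $\vec{r}$, the differences are exact up to rounding.

```python
# Sketch: verify curl( (1/2) B x r ) = B for a uniform B, using plain
# central finite differences (no external libraries).

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

B = [0.3, -1.2, 2.0]                      # an arbitrary uniform field

def A(r):                                 # candidate vector potential
    return [0.5 * c for c in cross(B, r)]

def curl(F, r, h=1e-6):
    def d(i, j):                          # dF_i / dx_j at r
        rp, rm = list(r), list(r)
        rp[j] += h; rm[j] -= h
        return (F(rp)[i] - F(rm)[i]) / (2 * h)
    return [d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1)]

print(curl(A, [0.7, -0.4, 1.1]))          # ~ [0.3, -1.2, 2.0] = B
```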
"domain": "physics.stackexchange",
"id": 92639,
"tags": "electromagnetism, potential, vectors, vector-fields"
} |
What is a magnetic field | Question: What exactly is the magnetic field? I know what it does, but I want to know what it's made of at microscopic level, and how it gets its shape. By comparison with the electrostatic field, which is created by electric charges that attract/repel one another, what creates the magnetic field? The movement of electrons in atoms? Or, are there some elementary particles that create this field? If yes, which particles?
Answer: Magnetic fields are caused by electric fields: a magnetic field is, in effect, what an electric field looks like to a charge observed from a different reference frame. For example, electrons moving in a wire produce a magnetic field, as shown in this video. This video shows how magnetic fields are just electric fields in a different reference frame. Also, the electrons moving in an atom act as tiny magnets, but this video explains why only certain materials can be magnetized and why not all magnetizable materials are immediately magnets. This is due to magnetic domains.
"domain": "physics.stackexchange",
"id": 20278,
"tags": "magnetic-fields"
} |
ROS network of processes | Question:
On the ROS Introduction page it says ... The ROS runtime "graph" is a peer-to-peer network of processes (potentially distributed across machines) that are loosely coupled using the ROS communication infrastructure ...
For the processes to be distributed across machines, do they all have to be running ROS on top of Linux, or can some machines run only Linux without the need to install ROS?
Originally posted by vreg on ROS Answers with karma: 3 on 2014-08-02
Post score: 0
Answer:
If you want to run ROS nodes on multiple machines, ROS must be installed on every machine.
Further, each node must be installed on the machine you want to run it on. The ROS launch infrastructure does not do any code distribution for you.
Originally posted by ahendrix with karma: 47576 on 2014-08-03
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 18869,
"tags": "ros"
} |
Why does the scalar field still have a VEV in a vacuum state of a SUSY-theory? | Question: In a supersymmetric theory the vacuum state is Lorentz invariant, so all the vacuum expectation values (VEVs) of fields that transform non-trivially under Lorentz transformations should be zero. Thus, only the scalar field can have a non-zero VEV. But the vacuum is not only Lorentz symmetric, but also supersymmetric, and the scalar field does not transform trivially under supertransformations. It gets mapped to a fermionic field. So why shouldn't the VEV of the scalar field be zero as well?
Answer: Consider a single chiral multiplet with scalar and spinor components $\phi$ and $\xi$. Schematically, the SUSY transformations are
$$ \begin{split}\delta \phi &\sim \bar \xi\epsilon\\\delta \xi&\sim \bar \epsilon\gamma^\mu\partial_\mu \phi \,.\end{split}$$
(Note that I have neglected auxiliary fields and Hermitian conjugates.)
The key point is that $\delta\text{(scalar)}\sim\text{spinor}$ and $\delta\text{(spinor)}\sim\text{derivative of scalar}$. Hence, a vacuum configuration $\left(\phi,\xi\right)=\left(\phi_0,0\right)$ with constant $\phi_0$ is invariant under SUSY transformations. | {
"domain": "physics.stackexchange",
"id": 50072,
"tags": "quantum-field-theory, special-relativity, supersymmetry, vacuum, lorentz-symmetry"
} |
Why does L1 regularization yield sparse features? | Question:
In contrast to L2 regularization, L1 regularization usually yields sparse feature vectors and most feature weights are zero.
What's the reason for the above statement - could someone explain it mathematically, and/or provide some intuition (maybe geometric)?
Answer: In L1 regularization, the penalty term you compute for every parameter is a function of the absolute value of a given weight (times some regularization factor).
Thus, irrespective of whether a weight is positive or negative (due to the absolute value) and irrespective of how large it is, a penalty is incurred as long as the weight is nonzero. So, the only way a training procedure can considerably reduce the L1 regularization penalty is by driving all (unnecessary) weights towards 0, which results in a sparse representation.
Of course, the L2 regularization penalty will also only be strictly 0 when all weights are 0. However, in L2, the contribution of a weight to the penalty is proportional to the squared value of the weight. Therefore, a weight whose absolute value is smaller than 1, i.e. $|w| < 1$, is punished much less by L2 than it would be by L1, which means that L2 puts less emphasis on driving all weights towards exactly 0. This is because squaring a value in $(0,1)$ yields a value of smaller magnitude than the value itself: $x^2 < x$ for all $x$ with $0 < x < 1$.
So, while both regularization terms end up being 0 only when all weights are 0, the L1 term penalizes small weights with $|x| < 1$ much more strongly than L2 does, thereby driving them towards 0 more strongly than L2 does.
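The "driving to exactly zero" can be seen in a toy one-weight experiment (my sketch, with arbitrary hyperparameters, not from the answer): under the penalty term alone, an L2 gradient step only shrinks the weight geometrically, while the L1 proximal (soft-thresholding) step sets it to exactly 0 after finitely many steps.

```python
# Toy sketch (assumed hyperparameters): one weight updated only by its
# penalty term. L2 shrinks geometrically; L1 soft-thresholding hits 0 exactly.
lam, lr = 0.1, 0.5
w_l2 = 1.0
w_l1 = 1.0
for _ in range(200):
    w_l2 -= lr * (2 * lam * w_l2)                 # gradient step for lam * w^2
    sign = 1.0 if w_l1 >= 0 else -1.0             # proximal step for lam * |w|
    w_l1 = sign * max(abs(w_l1) - lr * lam, 0.0)
print(w_l1, w_l2)   # w_l1 is 0.0 exactly; w_l2 is tiny but nonzero
```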
"domain": "ai.stackexchange",
"id": 2142,
"tags": "machine-learning, regularization, l2-regularization, l1-regularization"
} |
Recursive type encoding on System F (and other pure type systems) | Question: I am studying pure type systems, particularly the calculus of constructions, and trying to use an encoding for recursive types on it, which, according to Philip Wadler, is possible. As an example, I'm using the Morte Haskell library to encode Scott numerals as given by Cardelli.
A summary of the encoding is as such: given a (positive) recursive type...
$$\mu X.F\ X$$
...we may encode it as a type on System F as...
$$Lfix\ X.F\ X\ =\ \Lambda X.(F\ X\rightarrow X)\rightarrow X$$
...or, using pure type systems' notation (with an explicit $F$)...
$$Lfix\ =\ \Pi F: *\rightarrow*.\Pi X: *.(F\ X\rightarrow X)\rightarrow X$$
...since $F$ is a type constructor ($*$ is the type of types).
In order to encode such, we need to declare three functions, $fold$, $in$ and $out$, according to Wadler, and used by Cardelli on the encoding for Scott numerals.
fold : All X. (F X -> X) -> T -> X
fold = \X. \k: F X -> X. \t:T. t X k
in : F T -> T
in = \s: F T. \X. \k: F X -> X. k (F (fold X k) s).
(Where $T$ is $Lfix\ X. F\ X$.)
It's trivial to write the $fold$ function as given. But when trying to write the $in$ function, it doesn't seem to typecheck. The expression $(fold\ X\ k)$ has type $T\rightarrow X$, and $(F\ (fold\ X\ k)\ s)$ should be of type $F\ X$. Then we infer that $F$ should be of type $(T\rightarrow X)\rightarrow F\ T\rightarrow F\ X$ (looks like a fmap). This doesn't typecheck, because F is a type constructor (of type $*\rightarrow*$).
This doesn't look like a typo... am I missing something?
Answer: There is a convention in category theory that the same symbol is used for a type constructor and the map function over that type constructor. Hence, if f : X -> Y then F f : F X -> F Y. | {
"domain": "cs.stackexchange",
"id": 8097,
"tags": "lambda-calculus, type-theory"
} |
How come energy is conserved if identical Physical experiments can have different results depending on time? | Question: I just learned about Noether's theorem. It states that temporal symmetry is the reason for energy conservation. But in quantum mechanics, identical isolated experiments conducted at different times can give different results because of probabilities. Does this mean energy is not conserved in quantum mechanics?
Answer: Note that Noether's theorem appeared before quantum mechanics. It is nevertheless true, following the quantum-classical correspondence principle (indeed, quantum mechanics is obtained from classical mechanics by replacing the classical variables with the corresponding operators). The probabilistic nature of quantum mechanics however means that the theorem holds for the quantum mechanical averages rather than for the results of particular measurements.
The deviation of measured energies from the mean energy can be viewed as a manifestation of the Heisenberg uncertainty principle. Indeed, the energy operator (appearing in the Schrödinger equation),
\begin{equation}
\hat{E} = \imath\hbar \frac{\partial}{\partial t}
\end{equation}
does not commute with time. In fact, the very form of this energy operator - as a generator of time translation - testifies to the quantum definition of energy firmly grounded in Noether's theorem.
Btw, this situation is very similar to momentum conservation, which, according to Noether's theorem, follows from the translational invariance. | {
"domain": "physics.stackexchange",
"id": 65560,
"tags": "quantum-mechanics, noethers-theorem"
} |
What about BosonSampling can be publicly verified? | Question: Boson Sampling, sometimes stylized as BosonSampling, is an attractive candidate problem to establish quantum supremacy; the engineering problems appear more surmountable than those associated with a Turing-complete quantum computer.
However, Boson Sampling has a downside, in that the output of a photonic quantum computer capable of executing Boson Sampling with only a handful ($\le 100$ or so) of photons may not even be classically simulable. This is, of course, unlike $\mathsf{NP}$ problems such as factoring, the engineering aspects of which are significantly harder.
Thus, we may establish the results of Boson Sampling on $100$ or so photons, but in order to verify the results, we need to calculate the permanent of a $100\times100$ matrix, which is famously computationally hard.
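To give a feel for the scale (my addition, not from the post): the fastest general-purpose exact method, Ryser's formula, needs on the order of $2^n \cdot n$ operations for an $n\times n$ permanent — trivial for $n\approx 20$, utterly hopeless for the $n=100$ case above. A minimal, unoptimized sketch:

```python
# Ryser's formula: perm(M) = (-1)^n * sum over nonempty column subsets S of
# (-1)^{|S|} * prod_i sum_{j in S} M[i][j].  O(2^n * n^2) as written here.
from itertools import combinations

def permanent(M):
    n = len(M)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in M:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10
```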
Maybe a supercomputer powerful enough to calculate the permanent can do the trick. But then everyone would have to believe both the supercomputer's results and the Boson Sampling results.
Is there anything about Boson Sampling that can be easily verified?
I've had a flight of fancy to maybe put the resources of a cryptocurrency mining network to use to calculate such a permanent, and relying on some $\mathsf{\#P}$ / $\mathsf{IP}$ tricks for public verification, but I haven't gotten very far.
EDIT
I like @gIS's answer.
Compare Boson Sampling with Appel and Haken's computer-assisted proof of the Four Color Theorem. The original proof of the 4CT was controversial precisely because it was too long to be verified by a human reader. We've come a long way since the '70s in our trust of computers, and I think most people now accept the proof of the 4CT without much controversy. But thinking about how to make things like a proof of the 4CT human-verifiable may lead to interesting ideas like the $\mathsf{PCP}$ theorem.
Answer: About the need of boson sampling verification
First of all, let me point out that it is not a strict necessity to verify the output of a boson sampler. By this, I don't mean to say that it is not useful or interesting to try and do so, but rather that it is in some sense more of a practical than a fundamental necessity.
I think you yourself put up a good argument for this when you write
Maybe a supercomputer powerful enough to calculate the permanent can do the trick. But then everyone would have to believe both the supercomputer's results and the Boson Sampling results.
Indeed, there are many instances in which one solves a problem and trusts a solution which cannot really be fully verified. I mean, forget quantum mechanics, just use your computer to multiply two huge numbers. You probably have high confidence that the result you get is correct, but how do you verify it without using another computer?
More generally, trust in a device's results comes from a variety of things, such as knowledge of the inner working of the device, and unit testing of the device itself (that is, testing that it works correctly for the special instances that you can verify with some other method).
The problem of boson sampling certification is no different. We know that, at some point, we will not be able to fully verify the output of a boson sampler, but that does not mean that we will not be able to trust it. If the device is built with due thoroughness, and its output is verified for a variety of small instances, and other tests that one is able to carry out are all successful, then at some point one builds up enough trust in the device to make a quantum supremacy claim (or whatever else one wants to use the boson sampler for) meaningful.
Is there anything about BosonSampling that can be easily verified?
Yes, there are properties that can be verified. Due to the sampling nature of the problem, what people typically do is to rule out alternative models that might have generated the observed samples. For example, Aaronson and Arkhipov (1309.7460) showed that the BosonSampling distribution is far from the uniform distribution in total variation distance (with high probability over the Haar-random matrices inducing the distribution), and gave a protocol to efficiently distinguish the two distributions.
A more recent work showing how statistical signatures can be used to certify the boson sampling distribution against alternative hypotheses is (Walschaers et al. 2014).
The other papers that I am aware of focus on certifying specific aspects of a boson sampler, rather than directly tackling the problem of finding alternative distributions which are far from the BosonSampling one for random interferometers.
More specifically, one can isolate two major possible sources of error in a boson sampling apparatus: those arising from incorrectly implementing the interferometer, and those arising from the input photons not being what they should be (that is, totally indistinguishable).
The first case is (relatively) easy to handle because one can efficiently characterise an interferometer using single photons.
However, certifying the indistinguishability of input photons is trickier. One idea to do this is to change the interferometer to a non-random one, such as the QFT interferometer, and see whether something can be efficiently verified in this simpler case. I won't try to add all the relevant references here, but this direction started with (Tichy et al. 2010, 2013).
Regarding the public verification aspect, there isn't anything done in this direction that I've heard of. I am also not sure whether it is even a particularly meaningful direction to explore: why should we require such a "high standard" of verification for a boson sampler, when for virtually any other kind of experiment we are satisfied with trusting the people doing the experiment to be good at what they are doing? | {
"domain": "quantumcomputing.stackexchange",
"id": 381,
"tags": "photonics, boson-sampling"
} |
Improving efficiency of Rust algorithm ported from Python generator | Question: I'm learning Rust by solving ProjectEuler problems.
To this end, I am trying to port a solution to problem 88 (link) in Python that heavily relies on generators to Rust (which doesn't have generators).
A natural number, N, that can be written as the sum and product of a given set of at least two natural numbers, {a1, a2, ... , ak}, is called a product-sum number: N = a1 + a2 + ... + ak = a1 × a2 × ... × ak.
For example, 6 = 1 + 2 + 3 = 1 × 2 × 3.
For a given set of size, k, we shall call the smallest N with this property a minimal product-sum number. The minimal product-sum numbers for sets of size, k = 2, 3, 4, 5, and 6 are as follows.
k=2: 4 = 2 × 2 = 2 + 2
k=3: 6 = 1 × 2 × 3 = 1 + 2 + 3
k=4: 8 = 1 × 1 × 2 × 4 = 1 + 1 + 2 + 4
k=5: 8 = 1 × 1 × 2 × 2 × 2 = 1 + 1 + 2 + 2 + 2
k=6: 12 = 1 × 1 × 1 × 1 × 2 × 6 = 1 + 1 + 1 + 1 + 2 + 6
Hence for 2≤k≤6, the sum of all the minimal product-sum numbers is 4+6+8+12 = 30; note that 8 is only counted once in the sum.
In fact, as the complete set of minimal product-sum numbers for 2≤k≤12 is {4, 6, 8, 12, 15, 16}, the sum is 61.
What is the sum of all the minimal product-sum numbers for 2≤k≤12000?
I tried doing this by basically implementing a state machine in a recursive struct that implements the Iterator trait, but it runs many times slower than the Python version, and I can't exactly figure out why.
Flamegraph implies that a lot of time is spent in allocating memory, which sort of makes sense, but I don't understand why the Rust version is so much slower.
Can anyone explain why the Rust version is so much slower and/or how to optimise it to be as fast (or faster) than the Python version, while being more idiomatic?
Thanks in advance!
Python algorithm:
def solution88(N=12000):
import itertools as it
def multiplicative_partitions(n, k=None, i_min=2):
if k is None:
# Start from k=2 to avoid the trivial partition (n,)
for k in it.count(2):
x = multiplicative_partitions(n, k)
try:
yield next(x)
except StopIteration:
return
yield from x
elif k <= 0:
return
elif k == 1:
yield (n,)
elif k == 2:
sqrt_n = int(n ** 0.5)
for i in range(i_min, sqrt_n + 1):
if not n % i:
yield (i, n // i)
else:
sqrt_n = int(n ** 0.5)
for i in range(i_min, sqrt_n + 1):
if not n % i:
for a in multiplicative_partitions(n // i, k - 1, i):
yield (i, *a)
unprocessed = N - 2 + 1
results = [None] * unprocessed
n = 4 # 4 = 2 + 2 = 2 * 2
while unprocessed > 0:
for mp in multiplicative_partitions(n):
if n >= sum(mp):
k = n - sum(mp) + len(mp)
if k <= N and results[k - 2] is None:
results[k - 2] = n
unprocessed -= 1
n += 1
print(set(results))
return sum(set(results))
print(f"The answer is: {solution88()}")
(Note: I didn't come up with this Python solution, but a much slower one, initially. Credit for this one goes to user '6557' on ProjectEuler)
Rust algorithm:
use itertools::Itertools;
use std::collections::HashSet;

const LIMIT: usize = 12_000;

struct MulPar {
    n: usize,
    k: usize,
    i: usize,
    sqrt_n: usize,
    inner: Option<Box<MulPar>>,
}

impl Iterator for MulPar {
    type Item = Vec<usize>;

    fn next(&mut self) -> Option<Self::Item> {
        if self.k == 0 || self.i > self.sqrt_n {
            None
        } else if self.k == 1 {
            self.k = 0;
            Some(vec![self.n])
        } else if self.k == 2 {
            while self.i < self.sqrt_n + 1 {
                if self.n % self.i == 0 {
                    let i = self.i;
                    self.i += 1;
                    return Some(vec![i, self.n / i]);
                }
                self.i += 1;
            }
            None
        } else if let Some(mut inner) = self.inner.take() {
            if let Some(mut next) = inner.as_mut().next() {
                next.push(self.i);
                self.inner = Some(inner);
                Some(next)
            } else {
                self.inner = None;
                self.i += 1;
                if self.i > self.sqrt_n + 1 {
                    // no more partitions to yield
                    None
                } else {
                    while self.n % self.i != 0 {
                        self.i += 1;
                    }
                    self.inner = Some(Box::new({
                        let sqrt_n = ((self.n / self.i) as f64).sqrt() as usize;
                        MulPar {
                            n: self.n / self.i,
                            k: self.k - 1,
                            i: self.i,
                            sqrt_n,
                            inner: None,
                        }
                    }));
                    self.next()
                }
            }
        } else {
            if self.i > self.sqrt_n {
                None
            } else {
                while self.n % self.i != 0 {
                    self.i += 1;
                }
                self.inner = Some(Box::new({
                    let sqrt_n = ((self.n / self.i) as f64).sqrt() as usize;
                    MulPar {
                        n: self.n / self.i,
                        k: self.k - 1,
                        i: self.i,
                        sqrt_n,
                        inner: None,
                    }
                }));
                self.next()
            }
        }
    }
}

fn main() {
    let mut unprocessed = LIMIT - 2 + 1;
    let mut n = 4;
    let mut results = vec![None; unprocessed];
    while unprocessed > 0 {
        for j in 2..(LIMIT - 1) {
            for mp in multiplicative_partitions(n, j, 2) {
                let mp_sum = mp.iter().sum::<usize>();
                if n >= mp_sum {
                    let k = n - mp_sum + mp.len();
                    if k <= LIMIT && results[k - 2].is_none() {
                        results[k - 2] = Some(n);
                        unprocessed -= 1;
                    }
                }
            }
        }
        n += 1;
    }
    let set: HashSet<usize> = HashSet::from_iter(results.into_iter().map(|x| x.unwrap_or(0)));
    println!("{:?}", set.iter().cloned().sorted().collect::<Vec<usize>>());
    let answer = set.into_iter().sum::<usize>();
    println!("The answer is: {answer}");
}

fn multiplicative_partitions(n: usize, k: usize, i: usize) -> MulPar {
    let sqrt_n = (n as f64).sqrt() as usize;
    MulPar {
        n,
        k,
        i,
        sqrt_n,
        inner: None,
    }
}
Answer: The fundamental reason that your original Rust code was slow was because you incorrectly translated this bit of python code:
for k in it.count(2):
    x = multiplicative_partitions(n, k)
    try:
        yield next(x)
    except StopIteration:
        return
    yield from x
Your Rust equivalent is:
for j in 2..(LIMIT - 1) {
    for mp in multiplicative_partitions(n, j, 2) {
        ...
    }
}
The key difference is that the Python version has an early termination criterion: it breaks out of the loop as soon as multiplicative_partitions yields nothing. Your Rust code doesn't have this; it loops all the way to LIMIT in every case, which takes a long time even with Rust's speed.
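The trick being translated is worth isolating: probing the generator with next() both detects an empty generator and terminates the outer loop. A distilled, standalone sketch (the names are mine, not from the original code):

```python
from itertools import count

def yield_until_empty(make_gen):
    """Yield everything from make_gen(k) for k = 2, 3, ..., stopping at
    the first k whose generator is empty.  Probing with next() is what
    gives the Python version its early termination."""
    for k in count(2):
        x = make_gen(k)
        try:
            first = next(x)   # probe; raises StopIteration if x is empty
        except StopIteration:
            return            # empty generator: stop the outer loop too
        yield first
        yield from x

# Toy example: k=2 yields 2,3; k=3 yields 3; k=4 is empty, so we stop.
print(list(yield_until_empty(lambda k: iter(range(k, 4)))))
```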
Your second version was faster because you added the early breaking logic back where it was supposed to be:
let mut res = Vec::new();
for k in 2.. {
    let x = multiplicative_partitions(n, Some(k), i_min);
    if x.is_empty() {
        break;
    }
    res.extend(x.clone());
}
res | {
"domain": "codereview.stackexchange",
"id": 44430,
"tags": "python, programming-challenge, rust, iterator, generator"
} |
Power distribution in a defocused focal plane | Question: Given an optical system of focal length $EFL$ and f number $f/n$, if the focal plane is defocused in a way that the defocused plane distance from the focused plane is $d$, assuming we have a point source at $\infty$ and the incident power on the optical system is $P_0$, what is the power distribution of the power on the new defocused plane?
Thanks
Answer: This is a problem in Fresnel diffraction. There is no closed-form solution, as there is for the PSF (point spread function) in the focal plane. There is an exact solution in terms of an integral. The details and the integral, along with series approximations good for small values of defocus can be found in Chapter 9 of Born and Wolf. Born and Wolf also presents series solutions for aberrated systems.
More recently, series solutions have been found that converge more quickly, and that are good for larger values of defocus and aberrations. These solutions are called the Extended Nijboer-Zernike series. Lots and lots of details can be found at this web site.
There are also solutions using Fourier transforms, and Fresnel transforms, and even direct integration solutions. Which approach you use depends on your needs.
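That said, for rough orientation there is the geometrical-optics limit: when the defocus is large compared to the depth of focus, the light falls approximately uniformly on a blur circle of diameter $d/n$, so the irradiance is roughly $P_0$ over that circle's area. A toy sketch under exactly that assumption (it ignores diffraction entirely; symbols as in the question):

```python
import math

def geometric_blur_irradiance(P0, n, d):
    """Uniform-irradiance estimate over the geometric blur circle.

    Geometrical-optics approximation only: valid when the defocus d is
    large compared to the depth of focus, and it ignores diffraction.
    P0: power incident on the system [W]; n: f-number; d: defocus [m].
    Returns (blur_diameter [m], irradiance [W/m^2]).
    """
    blur_diameter = d / n  # converging cone has full angle D/EFL = 1/n
    area = math.pi * (blur_diameter / 2) ** 2
    return blur_diameter, P0 / area

# Example: f/4 system defocused by 1 mm, 1 mW incident
diam, E = geometric_blur_irradiance(1e-3, 4, 1e-3)
print(diam, E)
```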
All of them are way too long and complicated to present here. | {
"domain": "physics.stackexchange",
"id": 29982,
"tags": "optics, geometric-optics"
} |
Is Hawking radiation really the same as Unruh radiation? | Question: I read that Hawking radiation is the same as Unruh radiation. However, there seems to be a paradox here.
If you have an extreme black hole (say with maximum charge), then it has temperature 0 and doesn't radiate. However, it seems to me that a neutral (uncharged) observer hovering above the horizon should still see Unruh radiation because he is undergoing a high acceleration.
Does this show that Hawking radiation and Unruh radiation are really different things? If not, how does one resolve this discrepancy?
And either way, doesn't the observer hovering over the horizon see the Unruh radiation escaping from the black hole? If he sees this, why is he wrong?
Answer: Since the acceleration of a static Unruh-deWitt detector is finite rather than infinite as the horizon of an extremal black hole is approached, the response of such a detector depends not only on the UV structure of the vacuum state (which in any nonsingular state is always the same as in the Minkowski vacuum) but on IR aspects of the state. There can be more than one state that suggests itself for consideration. It might be natural to consider the Hartle-Hawking state on a charged black hole background, in the extremal limit. I don't know what that would yield for the detector response, but perhaps some of the perspective in this old paper might be helpful here (and perhaps not, since I think the states considered there are "static vacua"). On the other hand, maybe this falls outside the original question, which was about the relation (or not) to Hawking radiation in this case. I'd say there's no clear relation, because the Hawking radiation is determined by the UV structure of the state at the horizon, and the static Unruh effect is determined by something else in this case.
"domain": "physics.stackexchange",
"id": 89638,
"tags": "black-holes, hawking-radiation, unruh-effect"
} |
Why does a beta particle have 0 as its mass number? | Question: For a negative beta particle, why is it that its mass number is 0 and its atomic number is -1? Because if:
mass number = num of (protons + neutrons)
and atomic number = num of (protons)
why wouldn't the mass number be -1 as well? I know the mass of an electron is nearly 0, but if protons make up both the mass and atomic numbers, then why is just the atomic number -1?
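(The bookkeeping in question can be checked mechanically — a small sketch, with the decay chosen purely as an illustration:)

```python
# (A, Z) bookkeeping: mass number A counts nucleons, atomic number Z
# counts charge in proton units.  The beta particle is (A=0, Z=-1):
# it is not a nucleon, but it carries one unit of negative charge.
carbon_14 = (14, 6)
nitrogen_14 = (14, 7)
beta = (0, -1)
antineutrino = (0, 0)

def conserved(parent, daughters):
    """True if both A and Z balance across the decay."""
    return (parent[0] == sum(d[0] for d in daughters) and
            parent[1] == sum(d[1] for d in daughters))

# C-14 -> N-14 + beta + antineutrino balances; it would not balance
# if the beta particle were assigned mass number -1 instead of 0.
print(conserved(carbon_14, [nitrogen_14, beta, antineutrino]))  # True
```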
Answer: The so-called mass number is really a nucleon count, since the neutron's mass is very slightly more than the proton's, while the electron's is nearly 2000 times smaller. In fact, we can think of it more fundamentally as a conserved quantity called the baryon number (for which each nucleon scores 1, while the electron scores 0). | {
"domain": "physics.stackexchange",
"id": 94482,
"tags": "mass, electrons, nuclear-physics, conventions"
} |
What is the best approach to use in R and why? | Question: I'm starting working with R, and found some tutorials and exercises online.
I want to divide one variable into two groups: greater than or equal to 79, and smaller than 79.
Perhaps because I'm used to python, my first approach was to do something like this:
z <- numeric(length(faithful$waiting))
n = 0
for (i in faithful$waiting) {
  n = 1 + n
  if (i < 79) z[n] <- 1
}
But I found many tutorials that use this solution instead:
min_wait <- min(faithful$waiting)-0.1
max_wait <- max(faithful$waiting)
cutof <- c(min_wait,79,max_wait)
waiting_cat <- cut (faithful$waiting, breaks=cutof)
What is the best way to do something like this, and can someone explain why?
Thank you!
Answer: As you realize, your first approach works (it gives a result consistent with the criteria you specify), but it is not idiomatic R. Iterating over elements of a set/list/vector is idiomatic of python, and does have a place in R as well. However, what this approach misses is two aspects of R: inherent vectorization and the factor data type.
In R, all basic types are vectors and can hold multiple items (of the same type). A single value is just a special case of a length 1 vector. Since everything is a vector, all the standard functions are designed to operate on the whole vector at once. They are implicitly vectorized over the elements of the vector rather than needing an explicit loop (iteration or for loop) to operate on each element. So the first simplification is to eliminate the for loop over elements of faithful$waiting and just do the comparison on the whole vector.
> faithful$waiting < 79
[1] FALSE TRUE TRUE TRUE FALSE TRUE FALSE FALSE TRUE FALSE TRUE FALSE
[13] TRUE TRUE FALSE TRUE TRUE FALSE TRUE FALSE TRUE TRUE TRUE TRUE
[25] TRUE FALSE TRUE TRUE TRUE FALSE TRUE TRUE TRUE FALSE TRUE TRUE
[37] TRUE FALSE TRUE FALSE FALSE TRUE FALSE TRUE TRUE FALSE TRUE TRUE
[49] FALSE TRUE TRUE FALSE TRUE FALSE TRUE FALSE TRUE TRUE TRUE FALSE
[61] TRUE FALSE TRUE FALSE TRUE FALSE TRUE TRUE TRUE TRUE FALSE TRUE
[73] FALSE TRUE TRUE TRUE TRUE TRUE TRUE FALSE TRUE FALSE TRUE TRUE
[85] TRUE FALSE TRUE FALSE TRUE FALSE TRUE FALSE TRUE TRUE TRUE TRUE
[97] FALSE TRUE TRUE FALSE TRUE FALSE TRUE FALSE FALSE TRUE FALSE TRUE
[109] FALSE FALSE TRUE TRUE FALSE FALSE TRUE FALSE TRUE FALSE TRUE FALSE
[121] TRUE TRUE TRUE TRUE FALSE FALSE TRUE FALSE TRUE FALSE TRUE FALSE
[133] TRUE FALSE TRUE FALSE TRUE FALSE TRUE FALSE FALSE TRUE FALSE TRUE
[145] TRUE TRUE FALSE TRUE FALSE TRUE TRUE TRUE TRUE FALSE TRUE TRUE
[157] FALSE FALSE TRUE FALSE TRUE FALSE TRUE TRUE TRUE TRUE TRUE FALSE
[169] TRUE FALSE TRUE TRUE TRUE TRUE FALSE FALSE TRUE TRUE FALSE TRUE
[181] TRUE TRUE FALSE FALSE TRUE TRUE FALSE TRUE FALSE TRUE FALSE TRUE
[193] TRUE FALSE TRUE FALSE FALSE TRUE TRUE TRUE TRUE FALSE FALSE TRUE
[205] TRUE TRUE TRUE FALSE TRUE FALSE TRUE FALSE TRUE TRUE TRUE TRUE
[217] TRUE FALSE TRUE TRUE TRUE FALSE TRUE TRUE TRUE FALSE TRUE TRUE
[229] TRUE FALSE TRUE TRUE FALSE TRUE FALSE TRUE TRUE TRUE FALSE TRUE
[241] TRUE TRUE FALSE TRUE FALSE FALSE TRUE FALSE TRUE TRUE TRUE FALSE
[253] TRUE TRUE FALSE FALSE TRUE FALSE TRUE FALSE TRUE FALSE TRUE FALSE
[265] TRUE TRUE TRUE FALSE TRUE FALSE TRUE TRUE
This brings up another aspect. faithful$waiting is of length 272, 79 is length 1. Argument recycling causes the 79 to be repeated until it is of the same length as faithful$waiting. Then the comparison is done element-wise, returning a logical variable. If you want it as a numeric (as in your first example), this can be converted directly: FALSE becomes 0 and TRUE becomes 1
> as.numeric(faithful$waiting < 79)
[1] 0 1 1 1 0 1 0 0 1 0 1 0 1 1 0 1 1 0 1 0 1 1 1 1 1 0 1 1 1 0 1 1 1 0 1 1 1
[38] 0 1 0 0 1 0 1 1 0 1 1 0 1 1 0 1 0 1 0 1 1 1 0 1 0 1 0 1 0 1 1 1 1 0 1 0 1
[75] 1 1 1 1 1 0 1 0 1 1 1 0 1 0 1 0 1 0 1 1 1 1 0 1 1 0 1 0 1 0 0 1 0 1 0 0 1
[112] 1 0 0 1 0 1 0 1 0 1 1 1 1 0 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 0 1 0 1 1 1 0 1
[149] 0 1 1 1 1 0 1 1 0 0 1 0 1 0 1 1 1 1 1 0 1 0 1 1 1 1 0 0 1 1 0 1 1 1 0 0 1
[186] 1 0 1 0 1 0 1 1 0 1 0 0 1 1 1 1 0 0 1 1 1 1 0 1 0 1 0 1 1 1 1 1 0 1 1 1 0
[223] 1 1 1 0 1 1 1 0 1 1 0 1 0 1 1 1 0 1 1 1 0 1 0 0 1 0 1 1 1 0 1 1 0 0 1 0 1
[260] 0 1 0 1 0 1 1 1 0 1 0 1 1
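Since you're coming from Python: the same implicit vectorization and recycling (numpy calls it broadcasting) exist there through numpy, which may make the R idiom feel familiar. A sketch, assuming numpy is installed:

```python
import numpy as np

waiting = np.array([62, 79, 54, 85, 80])  # made-up sample values

# Element-wise comparison, like R's faithful$waiting < 79;
# the scalar 79 is broadcast ("recycled") across the whole array.
mask = waiting < 79

# Convert the booleans to 0/1, like as.numeric() in R.
as_numeric = mask.astype(int)

print(mask.tolist())
print(as_numeric.tolist())
```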
The second aspect of R I mentioned was factors. Factors are the implementation of a data type which can take on any of a predefined set of values. In some languages, these are enumerated types. Internally, they are stored as integer indexes into a vector of values (and this sometimes shows through).
You can create a factor from another vector by defining the levels it can take and, optionally, labels these levels should be displayed as. Continuing the example
> factor(as.numeric(faithful$waiting < 79), levels=c(1,0), labels=c("<79", "79+"))
[1] 79+ <79 <79 <79 79+ <79 79+ 79+ <79 79+ <79 79+ <79 <79 79+ <79 <79 79+
[19] <79 79+ <79 <79 <79 <79 <79 79+ <79 <79 <79 79+ <79 <79 <79 79+ <79 <79
[37] <79 79+ <79 79+ 79+ <79 79+ <79 <79 79+ <79 <79 79+ <79 <79 79+ <79 79+
[55] <79 79+ <79 <79 <79 79+ <79 79+ <79 79+ <79 79+ <79 <79 <79 <79 79+ <79
[73] 79+ <79 <79 <79 <79 <79 <79 79+ <79 79+ <79 <79 <79 79+ <79 79+ <79 79+
[91] <79 79+ <79 <79 <79 <79 79+ <79 <79 79+ <79 79+ <79 79+ 79+ <79 79+ <79
[109] 79+ 79+ <79 <79 79+ 79+ <79 79+ <79 79+ <79 79+ <79 <79 <79 <79 79+ 79+
[127] <79 79+ <79 79+ <79 79+ <79 79+ <79 79+ <79 79+ <79 79+ 79+ <79 79+ <79
[145] <79 <79 79+ <79 79+ <79 <79 <79 <79 79+ <79 <79 79+ 79+ <79 79+ <79 79+
[163] <79 <79 <79 <79 <79 79+ <79 79+ <79 <79 <79 <79 79+ 79+ <79 <79 79+ <79
[181] <79 <79 79+ 79+ <79 <79 79+ <79 79+ <79 79+ <79 <79 79+ <79 79+ 79+ <79
[199] <79 <79 <79 79+ 79+ <79 <79 <79 <79 79+ <79 79+ <79 79+ <79 <79 <79 <79
[217] <79 79+ <79 <79 <79 79+ <79 <79 <79 79+ <79 <79 <79 79+ <79 <79 79+ <79
[235] 79+ <79 <79 <79 79+ <79 <79 <79 79+ <79 79+ 79+ <79 79+ <79 <79 <79 79+
[253] <79 <79 79+ 79+ <79 79+ <79 79+ <79 79+ <79 79+ <79 <79 <79 79+ <79 79+
[271] <79 <79
Levels: <79 79+
Note that in the output of the factor, the possible levels are listed, as any given vector may or may not have every possible value in it. The set of allowed values can be larger than the ones that are present in any given vector.
For dividing into two groups based on a single cut point, this is fine, but this approach becomes unwieldy when making more than two groups. cut is a built-in function which is designed to do just this: categorize a continuous variable based on a set of cut points. There is always one fewer group than cut points because the points define the edges of the groups (and anything outside the extremes is set to NA). When including everything, I don't usually compute the extremes and use those, but use -Inf and Inf. By default, the intervals are closed on the right, which means that the upper value is included in the range but the lower value is not. Since you wanted 79 in the upper group, you have to include right=FALSE.
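(For the Python-minded: right-open binning like cut(..., right=FALSE) can be mimicked with the standard-library bisect module — a rough sketch, with labels of my own choosing:)

```python
from bisect import bisect_right

def cut_right_false(values, breaks, labels):
    """Bin values into intervals [breaks[i], breaks[i+1]), like R's
    cut(..., right=FALSE); expects len(labels) == len(breaks) - 1.
    Values outside all intervals map to None (R would give NA)."""
    out = []
    for v in values:
        i = bisect_right(breaks, v) - 1  # index of the bin's lower edge
        out.append(labels[i] if 0 <= i < len(labels) else None)
    return out

waiting = [62, 79, 54, 85]
print(cut_right_false(waiting,
                      [float('-inf'), 79, float('inf')],
                      ['[-Inf,79)', '[79, Inf)']))
```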
> cut(faithful$waiting, breaks=c(-Inf,79,Inf), right=FALSE)
[1] [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf)
[8] [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79)
[15] [79, Inf) [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79)
[22] [-Inf,79) [-Inf,79) [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79)
[29] [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79)
[36] [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [79, Inf) [-Inf,79)
[43] [79, Inf) [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79) [79, Inf)
[50] [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf)
[57] [-Inf,79) [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79)
[64] [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79) [-Inf,79)
[71] [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79) [-Inf,79)
[78] [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79)
[85] [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79)
[92] [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79)
[99] [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [79, Inf)
[106] [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [79, Inf) [-Inf,79) [-Inf,79)
[113] [79, Inf) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79)
[120] [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79) [-Inf,79) [79, Inf) [79, Inf)
[127] [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79)
[134] [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf)
[141] [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79) [79, Inf)
[148] [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79) [-Inf,79) [79, Inf)
[155] [-Inf,79) [-Inf,79) [79, Inf) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79)
[162] [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79) [-Inf,79) [-Inf,79) [79, Inf)
[169] [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79) [-Inf,79) [79, Inf)
[176] [79, Inf) [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79)
[183] [79, Inf) [79, Inf) [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf)
[190] [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf)
[197] [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79) [-Inf,79) [79, Inf) [79, Inf)
[204] [-Inf,79) [-Inf,79) [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf)
[211] [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79) [-Inf,79) [-Inf,79)
[218] [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79)
[225] [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79)
[232] [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79)
[239] [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf)
[246] [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79) [-Inf,79) [79, Inf)
[253] [-Inf,79) [-Inf,79) [79, Inf) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79)
[260] [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79)
[267] [-Inf,79) [79, Inf) [-Inf,79) [79, Inf) [-Inf,79) [-Inf,79)
Levels: [-Inf,79) [79, Inf)
This gives the idiomatic way of doing this as:
waiting_cat <- cut(faithful$waiting, breaks=c(-Inf,79,Inf), right=FALSE)
It implicitly loops over faithful$waiting and returns a factor which represents a categorical set of choices. | {
"domain": "codereview.stackexchange",
"id": 958,
"tags": "r"
} |
Merge n sorted iterators | Question: Over the weekend I was curious about efficiently merging multiple sorted iterators together, and for them to be in sorted order.
This is quite like a challenge on HackerRank:
You’re given the pointer to the head nodes of two sorted linked lists. The data in both lists will be sorted in ascending order. Change the next pointers to obtain a single, merged linked list which also has data in ascending order. Either head pointer given may be null meaning that the corresponding list is empty.
I may have cheated a bit by printing, rather than returning the items. However I don't mind that much about the HackerRank challenge.
I managed to do this in about \$O(3l)\$ space and \$O(l(2+n))\$ time. So \$O(l)\$ and \$O(ln)\$, where \$l\$ is the number of lists and \$n\$ is the amount of data.
import operator
import functools


def from_iterable(fn):
    @functools.wraps(fn)
    def inner(*args):
        return fn(args)
    inner.from_iterable = fn
    return inner


@from_iterable
def merge(sources):
    sources = {
        id_: iter(source)
        for id_, source in enumerate(sources)
    }
    values = {}
    for id_, source in list(sources.items()):
        try:
            values[id_] = next(source)
        except StopIteration:
            del sources[id_]

    by_value = operator.itemgetter(1)
    while sources:
        id_, minimum = min(values.items(), key=by_value)
        try:
            values[id_] = next(sources[id_])
        except StopIteration:
            del values[id_], sources[id_]
        yield minimum


def iter_nodes(node):
    while node is not None:
        yield node.data
        node = node.next


def MergeLists(headA, headB):
    vals = merge.from_iterable(iter_nodes(l) for l in (headA, headB))
    print(' '.join(str(i) for i in vals), end='')
A bit overkill for the HackerRank challenge. But that wasn't the main reason for doing it.
Any and all reviews are welcome.
Answer:
There are no docstrings. What do these functions do? What arguments do they take? What do they return?
Decorators are normally named after the effect they have on the decorated function. For example @functools.lru_cache adds an LRU cache to a function; @unittest.skip causes a test case to be skipped, and so on.
The effect of @from_iterable is that it converts the arguments into a tuple and passes that tuple as a single argument to the original function. I would not have guessed this functionality from the name; in fact I would have guessed the reverse (that it would convert a single iterable argument to individual arguments).
It was established in comments that the intention of this decorator is to transform a function into a pair of functions like chain and chain.from_iterable (where the original function becomes available under the latter name, and the new function under the former name).
The trouble with this is that chain was a mistake. If we only had chain.from_iterable, then we would not need chain, because we could just make a tuple: we'd write chain.from_iterable((arg1, arg2, ...)) instead of chain(arg1, arg2, ...). This is a trivial change to calling convention (just two parentheses) that does not need another function to support it. (For example, it would be ridiculous to have sorted and sorted.from_iterable.)
The Python developers were constrained by backwards compatibility to leave chain alone, and so the only thing they could do was add another function. I don't think they intended it to be a pattern to be followed (it's certainly not used anywhere else in the standard library). To this day, Python programmers looking for a "flatten" function are surprised to find that it already exists in the standard library under an obscure name.
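(For concreteness, the "flatten" being referred to:)

```python
from itertools import chain

nested = [[1, 2], [3], [4, 5]]

# Same result, different calling conventions: chain takes the iterables
# as separate arguments, chain.from_iterable takes one iterable of them.
flat_a = list(chain([1, 2], [3], [4, 5]))
flat_b = list(chain.from_iterable(nested))
print(flat_a, flat_b)
```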
Note that the code in the post does not actually use merge, it only uses merge.from_iterable. So the obvious thing to do would be to write merge so that it takes an iterable, and avoid the need for the @from_iterable decorator and its associated confusion.
Since headA and headB are not used independently of each other, you could change the calling convention for MergeLists so that it took an arbitrary number of linked lists, like this:
def merge_linked_lists(*lists):
    "Merge multiple sorted linked lists."
    return merge(map(iter_nodes, lists))
The expression:
str(i) for i in vals
can be written:
map(str, vals)
Or if you prefer the comprehension, then consider improving the name of the loop variable. The name i is conventionally used for an index, but here we are not iterating over indexes, but over values, so v or value would be clearer.
The code in merge does not use the built-in function id, so there is no need to respell the variable as id_ in order to avoid shadowing the built-in.
The function merge maintains two data structures. The dictionary source maps a source id to an iterator over the corresponding source, and the dictionary values maps a source id to the value at the front of the corresponding source. The code would be simplified if the two data structures were combined into one, like this:
def merge(iterables):
    "Merge sorted iterables."
    entries = {}  # Map from id to [front value, id, iterator].
    for id, it in enumerate(map(iter, iterables)):
        try:
            entries[id] = [next(it), id, it]
        except StopIteration:
            pass
    by_value = operator.itemgetter(1)
    while entries:
        id, entry = min(entries.items(), key=by_value)
        value, _, it = entry
        yield value
        try:
            entry[0] = next(it)
        except StopIteration:
            del entries[id]
The runtime bottleneck is the call to min to find the entry with the smallest value. This takes time proportional to the number of remaining iterables, that is, \$O(l)\$.
We can improve this to \$O(\log l)\$ if we keep the entries in a heap, using the heapq module:
from heapq import heapify, heappop, heapreplace

def merge(iterables):
    "Merge sorted iterables."
    entries = []  # Heap of [front value, id, iterator].
    for id, it in enumerate(map(iter, iterables)):
        try:
            entries.append([next(it), id, it])
        except StopIteration:
            pass
    heapify(entries)
    while entries:
        value, _, it = entry = entries[0]
        yield value
        try:
            entry[0] = next(it)
            heapreplace(entries, entry)
        except StopIteration:
            heappop(entries)
This improves the overall runtime to \$O((n + l)\log l)\$.
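As a quick sanity check that the heap-based version behaves like fully sorting the concatenation (the function is reproduced from above so this snippet is self-contained):

```python
from heapq import heapify, heappop, heapreplace

def merge(iterables):
    "Merge sorted iterables."
    entries = []  # Heap of [front value, id, iterator].
    for id, it in enumerate(map(iter, iterables)):
        try:
            entries.append([next(it), id, it])
        except StopIteration:
            pass
    heapify(entries)
    while entries:
        value, _, it = entry = entries[0]
        yield value
        try:
            entry[0] = next(it)
            heapreplace(entries, entry)
        except StopIteration:
            heappop(entries)

# Empty inputs are handled, and the output matches a full sort.
print(list(merge([[1, 4, 7], [2, 5], [], [3, 6, 8]])))
```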
This is already built into Python as heapq.merge. It's worth taking a look at the source code — the code in §8 above is a simplified version of this. | {
"domain": "codereview.stackexchange",
"id": 29445,
"tags": "python, python-3.x, sorting, reinventing-the-wheel, iterator"
} |
What would be the molar concentration of human DNA in a normal human cell? | Question: A diploid human cell has 46 chromosomes. A haploid cell has DNA approximately 3.2 billion bases long. What is the molar concentration of DNA in the cell then? How would we calculate?
Answer: I will assume that by "molar concentration of DNA", you mean "molar concentration of nucleotides".
Cell size
Assuming a cell with a radius of 0.05 mm and a perfect sphere, the volume of such a sphere is $5.24\cdot 10^{-4} mm^3 = 5.24\cdot 10^{-10} L$.
Of course, cell volume varies a lot among different cell types. The final result will drastically depend upon the cell volume considered (thanks to @Roland for highlighting that in the comments).
Number of nucleotides
In the nucleus
The haploid genome size is about $3.2\cdot 10^9$ base pairs. The whole diploid genome is therefore $6.4\cdot 10^9$ base pairs. As DNA is double-stranded, we have to further multiply by two to get $1.28\cdot 10^{10}$ nucleotides. Note that this number could be lower at other moments of the life cycle. What I am computing here is the maximal number of nucleotides per nucleus.
In the mitochondria
There are about 1500 mitochondria per cell. Each mitochondrion contains about 16,000 nucleotides, resulting in a total of $2.4\cdot 10^7$ nucleotides in mitochondria per cell.
Mitochondria + nucleus
$2.4\cdot 10^7 + 1.28\cdot 10^{10} ≈ 1.28\cdot 10^{10}$. That is, mtDNA is negligible.
Number of mols
The Avogadro number is about $6 \cdot 10 ^{23}$. Hence, there are $\frac{1.28\cdot 10^{10}}{6\cdot 10^{23}} ≈ 2.1\cdot 10^{-14}$ mols of nucleotides per cell.
Molarity
Molarity is defined as the number of mols per liter. It is therefore
$$\frac{2.1\cdot 10^{-14}}{5.24\cdot 10^{-10}} ≈ 4\cdot 10^{-5} M$$ | {
"domain": "biology.stackexchange",
"id": 7963,
"tags": "genetics, biotechnology"
} |
Simple packet parsing command pattern | Question: I'm currently creating a small server back-end. My goal is to write an (efficient) packet parser in C++17 using the command pattern. The packets are given by:
type: value in {0, 1, …, 255}
size: value in {0, 1, …, 65535}
data: array[size]
Each packet is processed by a packet_operator. The packet_registry is a packet_operator which simply calls the operator for the given packet type. I currently see no need for multiple, pipelined parsers for a single packet type. The code sample is given below and does compile without warnings.
packets.hpp
#pragma once

#include <array>
#include <cstdint>
#include <memory>

// Test class without proper implementation.
class packet {
public:
    uint32_t m_type = 0;

    uint32_t type() const {
        return m_type;
    }
};

template<class ... ArgTypes>
class packet_operator {
public:
    virtual void operator()(const packet &, ArgTypes... args) = 0;
};

template<class ... ArgTypes>
class packet_registry : public packet_operator<ArgTypes...> {
public:
    using op_ptr = std::shared_ptr<packet_operator<ArgTypes...>>;

    explicit packet_registry(op_ptr op) {
        // Initialize registry with default processor
        for (size_t i = 0; i < m_operators.size(); i++)
            m_operators[i] = op;
    }

    void operator()(const packet &packet, ArgTypes... args) override {
        // Call processor for packet type
        (*m_operators[packet.type()])(packet, args...);
    }

    op_ptr &operator[](uint32_t i) {
        return m_operators[i];
    }

private:
    std::array<op_ptr, 256> m_operators; // Type will always be in [0, 255]
};
main.cpp
#include <iostream>
#include <memory>

#include "packets.hpp"

class processor : public packet_operator<> {
    int m_id;

public:
    explicit processor(int id) : m_id(id) {
        // Empty
    }

    void operator()(const packet &packet) override {
        std::cout << "Processor " << m_id << ". Type: " << packet.type() << std::endl;
    }
};

void init(packet_registry<> &registry) {
    registry[1] = std::shared_ptr<packet_operator<>>(new processor(1));
    registry[2] = std::shared_ptr<packet_operator<>>(new processor(2));
}

int main() {
    // Create registry with default processor
    auto registry = packet_registry<>(
        std::shared_ptr<packet_operator<>>(new processor(0)));

    // Initialize packet processors
    init(registry);

    // Test packets with type 0, 1, 2 and 3
    packet p0, p1, p2, p3;
    p0.m_type = 0;
    p1.m_type = 1;
    p2.m_type = 2;
    p3.m_type = 3;

    registry(p0);
    registry(p1);
    registry(p2);
    registry(p3);
    return 0;
}
My main concern is my use of functors and shared pointers. Did I use those correctly? Additionally, my linter advised me to add the explicit keyword before the constructors. I've researched what it does, but do not really understand why those types of constructors would always need to be explicit.
I'd also like to know if there are any alternatives to my init method. I'd have to register the processors for every packet type here. I'd prefer some sort of self-registering processor. For example, in Java I could annotate all packet processors using a custom annotation. Those will then be added to the right registry using reflection. Maybe this is achievable through some sort of meta-programming?
Lastly, a more more general question: are there any alternative designs, given that I cannot change the protocol?
Answer: The keyword explicit should be used with constructors taking a single argument in classes when automatic instantiation is not a universally valid operation (details and examples here).
The code review part
The packet_registry can just be an array of std::function (as you seem to be OK with paying for the virtual dispatch overhead) — std::array<std::function<void(packet const&, ArgTypes...)>, 256>
processor-s, given they are created just once, are better created by value. Then you will be able to have a very hackable syntax — registry[x] = [...](auto&& packet, auto&& arg) { ... actual code ... };
if for some reason you do need shared pointers, prefer make_shared over direct usage of the new operator — x = std::make_shared<type>(args...). | {
"domain": "codereview.stackexchange",
"id": 30845,
"tags": "c++, beginner, design-patterns, pointers, template-meta-programming"
} |
What is this type of consistency called in DataRetrieval? | Question: I am trying to find the name of this feature that is implemented with SQL Databases:
Given a Record $R^1$ in table having two values for two different fields, e.g.
Let value of field 1 $F_1$ be $V_1$, Let value of field 2 $F_2$ be $V_2$
Any query requiring records s.t. $F_1 = V_1$ should contain $R^1$.
Also any query requiring records s.t. $F_2 = V_2$ should also contain $R^1$.
If a database does not have the above property, then it is definitely considered broken.
My question is what is the name for this kind of consistency?
Answer: I would call it expected consistency.
Unless you are just defining the $=$ operator.
When you get into transactions, a value can be uncommitted or dirty. By default $=$ will only match committed values, but you can search on the uncommitted value. Look up ACID around data consistency.
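The property in the question is easy to state as an executable check — for instance with Python's built-in sqlite3 (table and column names are mine):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (f1 TEXT, f2 TEXT)")
con.execute("INSERT INTO t VALUES ('V1', 'V2')")  # the record R^1

# R^1 must be returned by a query on either field alone.
by_f1 = con.execute("SELECT * FROM t WHERE f1 = 'V1'").fetchall()
by_f2 = con.execute("SELECT * FROM t WHERE f2 = 'V2'").fetchall()
print(by_f1, by_f2)
```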
Also be careful with any: with an AND condition a record needs to satisfy both predicates, so you may not get $R^1$ for $F_2 = V_2$ if another AND condition is not satisfied. | {
"domain": "cs.stackexchange",
"id": 8391,
"tags": "database-theory"
} |
Algorithms for almost sorting | Question: I want to find a comparison sorting algorithm that can almost sort a set of data, using the least comparisons possible.
What I mean by "almost" is that if the perfectly sorted data is $[x_1, x_2, …, x_n]$ and the almost sorted data is $[x_{i_1}, x_{i_2}, …, x_{i_n}]$, then $\forall j \in [\![1, n]\!], |i_j - j| \leq C$, $C$ being a parameter of the algorithm.
I want to emphasize that I want to minimize the number of comparisons. Other operations don't matter in the complexity. I would like to know if a number of comparisons close to $\frac{n\log_2 n}{C}$ is achievable.
In practice, I have to work with $n \simeq 1000$ and $C \simeq 10$ and only the comparisons are done by a human.
Are there any algorithm of the kind? Could I have some insight about it? Thanks.
EDIT: I got two ideas to almost sort:
The first one is to quicksort the array, and stop when I have to sort a sub-array of $\leq C$ data. The best case scenario requires roughly $\sum\limits_{k=0}^{\log_2(n/C)}2^k\frac{n}{2^k} = n\log_2(n/C)$ comparisons.
The algorithm is correct because sub-arrays are relatively sorted, and since they are of size $\leq C$, a value can't be farther than $C$ from its sorted position. The problem is that the worst case scenario is still in $\Omega(n^2)$ ;
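That first idea can be sketched directly — quicksort with a Lomuto partition that simply refuses to recurse into blocks of size at most $C$ (arrangement mine):

```python
import random

def almost_quicksort(a, C):
    """Quicksort that stops recursing on sub-arrays of size <= C,
    leaving the array C-almost-sorted in place (each element within
    C positions of its fully sorted position)."""
    def partition(lo, hi):
        pivot = a[hi]  # Lomuto partition scheme
        i = lo
        for j in range(lo, hi):
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        return i

    def rec(lo, hi):
        if hi - lo + 1 <= C:
            return  # block is small enough: leave it unsorted
        p = partition(lo, hi)
        rec(lo, p - 1)
        rec(p + 1, hi)

    rec(0, len(a) - 1)
    return a

random.seed(0)
data = random.sample(range(300), 300)
result = almost_quicksort(data[:], C=10)
```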
The second idea is to sort using an insertion sort with a dichotomic search, and stop the search when the bounds of the search are distant of at most $C$. With this method, I get roughly $\sum\limits_{k=1}^{n-1}\log_2(k/C) = \log_2((n-1)!) - n\log_2(C)\simeq n\log_2(n/C) - n$ in the worst case, which is better than the previous algorithm, but I am not quite sure that the array will be $C$-almost sorted.
EDIT 2: I found that the second algorithm is not correct. If, during the first insertion, $x_n$ is put before $x_1$, then their relative order will not change during the rest of the execution, therefore one of them will be farther than $C$ from its sorted index if $n > 2C$.
Answer: Call an array $C$-sorted if each element is at-most $C$ places away from its place had the array been sorted (this is the same as "almost"-sorted array in the question, but in plain English).
Assume we will use comparison sort.
Claim One: Given an array of $n$ elements that is $C$-sorted, we can sort it with at most $n\lceil\log_2 (C+1)\rceil$ comparisons.
Proof: Here is how we can sort the given array.
Create a buffer that is an array of size $C+1$.
One by one move the first $\min(n, C+1)$ elements to the buffer, keeping the buffer sorted using binary search to find the right place for each new element.
One by one move the min element from the buffer array to the result array, and, if the given array is not empty yet, add the next element to the buffer, keeping the buffer sorted using binary search to find the right place for that next element.
For each element in the array, at most $\lceil\log_2 (C+1)\rceil$ comparisons are used to find its right place in the buffer. The total number of comparisons used is at most $n\lceil\log_2 (C+1)\rceil$. $\quad\checkmark$
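For concreteness, here is a Python sketch of this buffer procedure (an illustration; bisect.insort does the binary-search insertion, costing roughly $\log_2(C+1)$ comparisons per element):

```python
import bisect

def sort_c_sorted(arr, c):
    """Fully sort an array known to be C-sorted, following Claim One:
    keep a buffer of at most C+1 elements ordered by binary-search
    insertion; its minimum is always the next element of the output."""
    buf, out = [], []
    for x in arr:
        bisect.insort(buf, x)        # ~log2(C+1) comparisons per insert
        if len(buf) > c + 1:
            out.append(buf.pop(0))   # the buffer minimum is final
    out.extend(buf)                  # drain the remaining buffer in order
    return out
```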
Claim Two: At least $\log_2 (n!) - n\lceil\log_2 (C+1)\rceil$ comparisons are needed to $C$-sort an array of size $n$ in the worst case.
Proof: Otherwise, suppose there is an algorithm that can $C$-sort any given array of size $n$ with less than $\log_2 (n!) - n\lceil\log_2 (C+1)\rceil$ comparisons. Extending that algorithm with the algorithm shown in the proof of claim one, we obtain an algorithm that can sort any given array of size $n$ with less than $\log_2(n!)$ comparisons. That is well-known to be impossible. $\quad\checkmark$
Claim Three: There is no algorithm that can $C$-sort an array of size $n$ with $O(\frac{n\log_2 n}{C})$ comparisons.
Proof: We have, if $n\ge C^{1+\epsilon}$ for some fixed $\epsilon\gt0$,
$$\begin{aligned}\lim_{C\to\infty} \frac{\log_2 (n!) - n\lceil\log_2 (C+1)\rceil}{\frac{n\log_2 n}{C}} &= \lim_{C\to\infty} C\left(\frac{\log_2 (n!)}{n\log_2 n}-\frac{\lceil\log_2 (C+1)\rceil}{\log_2 n}\right)\\&\ge \lim_{C\to\infty} C\left(1-\frac1{1+\epsilon}\right)=\infty.
\end{aligned}\quad\checkmark$$
Indeed, $\log_2 (n!) - n\lceil\log_2 (C+1)\rceil$ gives a pretty strong lower bound for the number of comparisons needed in the worst case since
$\log_2 (n!)$ is the lower bound to sort an array by comparison sort in the worst case, and, as $\log_2(n!)\approx n\log n$, for any fixed $C$,
$$n\lceil\log_2 (C+1)\rceil = o(\log_2 (n!)). \ \ \color{#d0d0d0}{\text{(small o, not big O!)}}$$
That is, at least $(1-\epsilon)\log_2(n!)$ comparisons are needed to $C$-sort an array when $n$ is sufficiently large, for all fixed $\epsilon\gt0$ and $C$.
The statements above support the usage of "almost sort" and "almost sorted" to refer to "$C$-sort" and "$C$-sorted" respectively.
We also note that
on average, quicksort performs only about 39% worse than in its best case. In this sense, it is closer to the best case than the worst case. A comparison sort cannot use less than $\log_2(n!)$ comparisons on average to sort $n$ items (as explained in the article Comparison sort) and in case of large $n$, Stirling's approximation yields $\log_2(n!) \approx n(\log_2 n - \log_2 e)$, so quicksort is not much worse than an ideal comparison sort.
Now let us have an array of $n=1000$ elements and $C=10$.
Quicksort will perform about $2n\log_2 n\approx 19931$ comparisons on average to sort the array, as shown on Wikipedia.
Claim Two says $\log_2 (n!) - n\lceil\log_2 (C+1)\rceil\approx4529$ comparisons are needed to $10$-sort the array in the worst case, which is essentially the information-theoretic optimum.
Note that $\frac{n\log_2 n}{C}\approx997$, which is much less than 4529.
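These figures can be reproduced numerically (a quick sketch; math.lgamma(n + 1) gives $\ln n!$):

```python
import math

n, c = 1000, 10
log2_factorial = math.lgamma(n + 1) / math.log(2)             # log2(n!)
claim_two = log2_factorial - n * math.ceil(math.log2(c + 1))  # ~ 4529
quicksort_avg = 2 * n * math.log2(n)                          # ~ 19931
naive_hope = n * math.log2(n) / c                             # ~ 997
proposed = n * math.log2(n / c)                               # ~ 6643
```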
So I believe (some more arguments are omitted here) that an algorithm performing about $n\log_2(n/C)\approx6643$ comparisons to $10$-sort the array is pretty much the best we can get. | {
"domain": "cs.stackexchange",
"id": 19807,
"tags": "algorithms, time-complexity, sorting"
} |
How to determine signal strength versus distance for LTE base stations? | Question: All mobile phones have bars or some other type of indicator displaying signal strength. As mobile devices move away from the base station the signal strength becomes weak and at some point becomes out of range or establishes connections with a different tower.
Question: For a LTE base station is it possible to determine the relationship between the signal strength and distance?
Assumption: LTE base station is located in a plain field.
Back story: A cellular service provider is planning on installing a LTE base station using equipment from JMA wireless. The model number of the equipment is X7CQAP_FRO_645. Below is a H-plane pattern plot for 750MHz frequency.
Can anyone provide insight as to how to interpret the above plot in relation to signal strength versus distance? Do I need more information?
Answer:
For a LTE base station is it possible to determine the relationship
between the signal strength and distance?
Ideally yes. Practically, probably not very reliably. Given the following pieces of information and assumptions, you can find the distance between any transmitter and receiver:
The transmitting antenna's radiation pattern and orientation (gain)
The receiving antenna's radiation pattern and orientation (gain)
The transmitter's power
An accurate measure of received power
Assume no obstructions (as you have)
Frequency of signal
These are the terms needed in the Friis Transmission equation to solve for Distance.
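As a sketch (the dB-based form and the names below are illustrative assumptions, not part of the answer), the Friis relation can be inverted for distance once the total free-space path loss is known:

```python
import math

C_LIGHT = 299_792_458.0  # speed of light, m/s

def friis_distance(p_tx_dbm, g_tx_dbi, g_rx_dbi, p_rx_dbm, freq_hz):
    """Free-space distance implied by the Friis transmission equation.
    Path loss (dB) = 20*log10(4*pi*d/lambda), so
    d = (lambda / (4*pi)) * 10**(path_loss_db / 20)."""
    wavelength = C_LIGHT / freq_hz
    path_loss_db = p_tx_dbm + g_tx_dbi + g_rx_dbi - p_rx_dbm
    return (wavelength / (4 * math.pi)) * 10 ** (path_loss_db / 20)
```

In practice the obstruction-free assumption is the weak link: real received-power readings fold in fading and shadowing, so the inverted distance is only a rough estimate.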
The plot you show is likely a gain plot along the horizontal plane in dB relative to an isotropic antenna (equal power in every direction). Every 3 dB represents a doubling of gain (and hence energy) transmitted in that direction.
Additionally, you can find a 'rough' measure of your phones received power by entering field test mode (google for instructions). The rest of the information is going to be tough to come by. | {
"domain": "engineering.stackexchange",
"id": 1001,
"tags": "electrical-engineering, wireless-communication, lte"
} |
Noether's Current in QFT with position dependent variations? | Question: Setup
Consider a mapping $F$ that takes every point $x$ on the manifold $M$ to the point $x'$ on the same manifold. Under this mapping the field $\phi(x)$ evaluated at the point $x$ changes to $\phi'(x)$ when evaluated at the same point $x$ on the manifold or $\phi'(x')$ when evaluated at the mapped point $x'$. The action before mapping is given by:
\[S=\int d^Dx \mathcal{L}(\phi(x), \partial_\mu \phi(x),x)\]
whilst that after mapping is:
$$S'=\int d^D x' \mathcal{L}(\phi'(x'), \partial'_\mu \phi'(x'),x')$$
I am focusing here on the case of QFT, meaning integrals are over the whole of Minkowski space.
Noether's Theorem
According to Noether's theorem a continuous symmetry which leaves the action invariant:
$$\Delta S=S'-S=0$$
corresponds to a conserved quantity.
The two forms of Noether's Current
I have come across two forms of Noether's current (Peskin & Schroeder, $\S$2.2):
$$j^\mu(x)=\frac{\partial \mathcal{L}}{\partial (\partial_\mu\phi)} \delta \phi - \mathcal{J}^\mu\tag{1}$$
where $\mathcal{J}^\mu$ is defined by the mapping of $\mathcal{L}$:
$$\mathcal{L}(x) \mapsto \mathcal{L}(x)+\alpha \partial_\mu \mathcal{J}^\mu(x)$$
and (Goldstein, 3rd ed, $\S$13.7):
$$j^\nu =\left( \frac{\partial \mathcal{L}}{\partial (\partial_\nu \phi)}\partial_\sigma \phi-\mathcal{L} \delta^\nu_\sigma\right) X^\sigma-\frac{\partial \mathcal{L}}{\partial (\partial_\nu \phi)} \Psi \tag{2}$$
Where $\delta x^\nu=\epsilon X^\nu$ and $\delta \phi=\epsilon \Psi$.
Problem with form (1)
Consider the case of dilation $x^\mu \mapsto (1+\delta\lambda )x^\mu$ then:
$$\mathcal{L}(x) \mapsto \mathcal{L}(x)+\delta \lambda x^\mu \partial_\mu \mathcal{L}$$
here the change in $\mathcal{L}$ can not be written as an exact divergence (also the metric on integration will change). This does not therefore seem compatible with (1).
Problem with form (2)
In the derivation of (2) we get the following expression:
$$\int \epsilon \frac{d}{d x^\nu} \left(\left( \frac{\partial \mathcal{L}}{\partial (\partial_\nu \phi)}\partial_\sigma \phi-\mathcal{L} \delta^\nu_\sigma\right) X^\sigma-\frac{\partial \mathcal{L}}{\partial (\partial_\nu \phi)} \Psi\right)d^4x =0\tag{13.147}$$
from this Goldstein seems to infer that
$$ \frac{d}{d x^\nu} \left(\left( \frac{\partial \mathcal{L}}{\partial (\partial_\nu \phi)}\partial_\sigma \phi-\mathcal{L} \delta^\nu_\sigma\right) X^\sigma-\frac{\partial \mathcal{L}}{\partial (\partial_\nu \phi)} \Psi\right)=0\tag{13.148}$$
which given that we have a fixed range of integration (the whole of space) I cannot see any reason why this should hold.
Question
My question is what is therefore the most general form of Noether's current which can deal with things like scaling? And are my two concerns above justified?
Answer:
Peskin & Schroeder (1) are only considering situations with purely vertical transformations, so OP's horizontal spacetime dilation transformation does not apply. [For terminology, see e.g. my Phys.SE answer here.] See also this related Phys.SE post.
Goldstein's formula (2) for the bare Noether current $j^{\mu}$ holds for combined horizontal and vertical transformations. The full Noether current $J^{\mu}=j^{\mu}- k^{\mu}$ has a possible improvement term $k^{\mu}$ in case of a quasi-symmetry.
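For example (an illustrative sketch, assuming the field carries a scaling dimension $\Delta$, which is not specified above): plugging the dilation data $X^\sigma = x^\sigma$, $\Psi = -\Delta\phi$ into (2) yields the bare dilatation current

$$j^\nu = \left(\frac{\partial \mathcal{L}}{\partial(\partial_\nu\phi)}\,\partial_\sigma\phi - \mathcal{L}\,\delta^\nu_\sigma\right)x^\sigma + \Delta\,\frac{\partial \mathcal{L}}{\partial(\partial_\nu\phi)}\,\phi = T^\nu{}_\sigma\,x^\sigma + \Delta\,\frac{\partial \mathcal{L}}{\partial(\partial_\nu\phi)}\,\phi,$$

with $T^\nu{}_\sigma$ the canonical stress-energy tensor. Note the $-\mathcal{L}\,\delta^\nu_\sigma X^\sigma$ term is what accounts for the change of the integration measure that is missing from form (1).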
OP is right. The proof from (13.147) to (13.148) is flawed/insufficient as it is written in Goldstein. Noether's conservation law (13.148) is of course correct, but the proof in Goldstein of Noether's first theorem is incomplete. | {
"domain": "physics.stackexchange",
"id": 44814,
"tags": "lagrangian-formalism, conservation-laws, symmetry, field-theory, noethers-theorem"
} |
Logic behind topological orders | Question: Long-range entanglement (LRE) is the main feature of topological orders. The string-net condensation model was constructed to exhibit LRE.
But the many-body systems of such models do not look like any earthly materials at all but are closer to quantum gravity models. In quantum gravity, nobody sees the detailed structure beyond the Planck scale. But in condensed matter, the microscopic structures are "crystal" clear.
Then how can such strange models be used to explain LRE in laboratory materials?
Answer: The many-body systems that give rise to LRE and string-net condensation are simply quantum spin models and they CAN look like earthly materials, such as the $J_1$-$J_2$ Heisenberg model on square lattice and the Heisenberg model on Kagome lattice for 1/2-spins. The many-body systems that give rise to LRE and string-net condensation DO have a "crystal" clear microscopic structures. | {
"domain": "physics.stackexchange",
"id": 5843,
"tags": "quantum-entanglement, topological-order"
} |
Extract values from English numerals, e.g. "nine million and one" | Question: Though many have done it the other way around, I have not seen such code in many places. And, to be honest, I don't know why this cluster of if-statements, for-loops and while-loops works!
def parse_int(string):
numbers = {'zero':0,'one':1,'two':2,'three':3,'four':4,'five':5,'six':6,'seven':7,'eight':8,'nine':9,'ten':10,'eleven':11,'twelve':12,'thirteen':13,'fourteen':14,'fifteen':15,'sixteen':16,'seventeen':17,'eighteen':18,'nineteen':19,'twenty':20,'thirty':30,'forty':40,'fifty':50,'sixty':60,'seventy':70,'eighty':80,'ninety':90,'eighty-six': 86, 'thirty-one': 31, 'forty-three': 43, 'forty-two': 42, 'fifty-eight': 58, 'sixty-seven': 67, 'thirty-two': 32, 'thirty-five': 35, 'seventy-nine': 79, 'thirty-four': 34, 'fifty-seven': 57, 'twenty-nine': 29, 'eighty-nine': 89, 'ninety-four': 94, 'seventy-eight': 78, 'ninety-one': 91, 'forty-one': 41, 'sixty-two': 62, 'twenty-eight': 28, 'eighty-eight': 88, 'seventy-seven': 77, 'forty-seven': 47, 'eighty-five': 85, 'eighty-three': 83, 'fifty-two': 52, 'eighty-two': 82, 'fifty-five': 55, 'twenty-seven': 27, 'seventy-four': 74, 'thirty-seven': 37, 'twenty-six': 26, 'sixty-six': 66, 'eighty-four': 84, 'sixty-four': 64, 'forty-eight': 48, 'fifty-four': 54, 'eighty-one': 81, 'thirty-three': 33, 'forty-four': 44, 'fifty-nine': 59, 'thirty-eight': 38, 'forty-six': 46, 'sixty-nine': 69, 'sixty-one': 61, 'sixty-three': 63, 'ninety-eight': 98, 'seventy-six': 76, 'seventy-one': 71, 'ninety-three': 93, 'fifty-three': 53, 'fifty-six': 56, 'seventy-five': 75, 'eighty-seven': 87, 'ninety-seven': 97, 'ninety-six': 96, 'ninety-nine': 99, 'twenty-one': 21, 'twenty-five': 25, 'ninety-five': 95, 'thirty-nine': 39, 'sixty-eight': 68, 'thirty-six': 36, 'twenty-four': 24, 'seventy-three': 73, 'seventy-two': 72, 'ninety-two': 92, 'twenty-three': 23, 'twenty-two': 22, 'forty-nine': 49, 'sixty-five': 65, 'fifty-one': 51, 'forty-five': 45}
powers = {'vigintitrillion': 1000000000000000000000000000000000000000000000000000000000000000000000000, 'septillion': 1000000000000000000000000, 'nonillion': 1000000000000000000000000000000, 'tredecillion': 1000000000000000000000000000000000000000000, 'vigintiquadrillion': 1000000000000000000000000000000000000000000000000000000000000000000000000000, 'decillion': 1000000000000000000000000000000000, 'billion': 1000000000, 'duovigintillion': 1000000000000000000000000000000000000000000000000000000000000000000000, 'thousand': 1000, 'duodecillion': 1000000000000000000000000000000000000000, 'septemdecillion': 1000000000000000000000000000000000000000000000000000000, 'vigintinonillion': 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, 'octillion': 1000000000000000000000000000, 'quinvigintillion': 1000000000000000000000000000000000000000000000000000000000000000000000000000000, 'octodecillion': 1000000000000000000000000000000000000000000000000000000000, 'novemdecillion': 1000000000000000000000000000000000000000000000000000000000000, 'trigintillion': 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, 'quindecillion': 1000000000000000000000000000000000000000000000000, 'duotrigintillion': 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, 'quattuordecillion': 1000000000000000000000000000000000000000000000, 'quadrillion': 1000000000000000, 'vigintiseptillion': 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000, 'vigintillion': 1000000000000000000000000000000000000000000000000000000000000000, 'untrigintillion': 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, 'centillion': 
1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, 'undecillion': 1000000000000000000000000000000000000, 'vigintunillion': 1000000000000000000000000000000000000000000000000000000000000000000, 'million': 1000000, 'septvigintillion': 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000, 'vigintisextillion': 1000000000000000000000000000000000000000000000000000000000000000000000000000000000, 'vigintiduoillion': 1000000000000000000000000000000000000000000000000000000000000000000000, 'sextillion': 1000000000000000000000, 'octovigintillion': 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, 'nonvigintillion': 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, 'sexdecillion': 1000000000000000000000000000000000000000000000000000, 'vigintoctillion': 1000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, 'sexvigintillion': 1000000000000000000000000000000000000000000000000000000000000000000000000000000000, 'trevigintillion': 1000000000000000000000000000000000000000000000000000000000000000000000000, 'unvigintillion': 1000000000000000000000000000000000000000000000000000000000000000000, 'hundred': 100, 'quattuorvigintillion': 1000000000000000000000000000000000000000000000000000000000000000000000000000, 'quintillion': 1000000000000000000, 'googol': 10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, 'vigintiquintrillion': 1000000000000000000000000000000000000000000000000000000000000000000000000000000}
result=0
a=string.split(" ")
b=[]
for c in a:
if c in numbers:
b.append(c)
elif c in powers:
b[-1]+=" "+c
elif c=="and":
continue
else:
print("INVALID WORD:",c)
return(None)
for d, e in enumerate(b):
if len(e.split(" "))==1:
b[d]=numbers[e]
else:
b[d]=e.split(" ")
b[d][0]=numbers[b[d][0]]
f=1
while f<len(b[d]):
b[d][f]=powers[b[d][f]]
f+=1
if not(isinstance(b[0],int)):
while len(b[0])>2:
b[0][1]*=b[0][2]
b[0].pop(2)
while len(b)>0:
if len(b)==1:
if isinstance(b[0],int):
result+=b[0]
b.pop(0)
else:
while len(b[0])>1:
b[0][0]*=b[0][1]
b[0].pop(1)
result+=b[0][0]
b.pop(0)
else:
if isinstance(b[1],int):
b[1]+=b[0][0]*b[0][1]
b.pop(0)
else:
while len(b[1])>2:
b[1][1]*=b[1][2]
b[1].pop(2)
if b[0][1]<b[1][1]:
b[1][0]+=b[0][0]*b[0][1]
b.pop(0)
else:
result+=b[0][0]*b[0][1]
b.pop(0)
return(result)
Answer: Instead of the loooooooooooooooong series of 0s for the numbers, you can use ** to increase readability.
return None can be reduced to just return. Since the len() function will never return a negative value,
while len(b) > 0: is the equivalent of just while len(b):
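Going further, the whole two-pass structure could (hypothetically) be replaced by a running total/current accumulator, which also removes the need to list every hyphenated tens form. A sketch, not a drop-in replacement — it assumes well-formed input, and note it also handles 'trillion', which both dictionaries above omit:

```python
def parse_int_compact(text):
    """Parse English numerals like 'nine million and one' into an int."""
    units = ['zero', 'one', 'two', 'three', 'four', 'five', 'six',
             'seven', 'eight', 'nine', 'ten', 'eleven', 'twelve',
             'thirteen', 'fourteen', 'fifteen', 'sixteen', 'seventeen',
             'eighteen', 'nineteen']
    tens = ['twenty', 'thirty', 'forty', 'fifty', 'sixty', 'seventy',
            'eighty', 'ninety']
    small = {w: i for i, w in enumerate(units)}
    small.update({w: 20 + 10 * i for i, w in enumerate(tens)})
    scales = {'thousand': 10 ** 3, 'million': 10 ** 6,
              'billion': 10 ** 9, 'trillion': 10 ** 12}

    total = current = 0
    for word in text.lower().replace('-', ' ').split():
        if word == 'and':
            continue
        if word in small:
            current += small[word]
        elif word == 'hundred':
            current *= 100
        else:                      # a scale word closes the current group
            total += current * scales[word]
            current = 0
    return total + current
```

The full cleaned-up version of the original structure follows: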
def parse_int(string):
numbers = {'zero': 0,
'one': 1,
'two': 2,
'three': 3,
'four': 4,
'five': 5,
'six': 6,
'seven': 7,
'eight': 8,
'nine': 9,
'ten': 10,
'eleven': 11,
'twelve': 12,
'thirteen': 13,
'fourteen': 14,
'fifteen': 15,
'sixteen': 16,
'seventeen': 17,
'eighteen': 18,
'nineteen': 19,
'twenty': 20,
'twenty-one': 21,
'twenty-two': 22,
'twenty-three': 23,
'twenty-four': 24,
'twenty-five': 25,
'twenty-six': 26,
'twenty-seven': 27,
'twenty-eight': 28,
'twenty-nine': 29,
'thirty': 30,
'thirty-one': 31,
'thirty-two': 32,
'thirty-three': 33,
'thirty-four': 34,
'thirty-five': 35,
'thirty-six': 36,
'thirty-seven': 37,
'thirty-eight': 38,
'thirty-nine': 39,
'forty': 40,
'forty-one': 41,
'forty-two': 42,
'forty-three': 43,
'forty-four': 44,
'forty-five': 45,
'forty-six': 46,
'forty-seven': 47,
'forty-eight': 48,
'forty-nine': 49,
'fifty': 50,
'fifty-one': 51,
'fifty-two': 52,
'fifty-three': 53,
'fifty-four': 54,
'fifty-five': 55,
'fifty-six': 56,
'fifty-seven': 57,
'fifty-eight': 58,
'fifty-nine': 59,
'sixty': 60,
'sixty-one': 61,
'sixty-two': 62,
'sixty-three': 63,
'sixty-four': 64,
'sixty-five': 65,
'sixty-six': 66,
'sixty-seven': 67,
'sixty-eight': 68,
'sixty-nine': 69,
'seventy': 70,
'seventy-one': 71,
'seventy-two': 72,
'seventy-three': 73,
'seventy-four': 74,
'seventy-five': 75,
'seventy-six': 76,
'seventy-seven': 77,
'seventy-eight': 78,
'seventy-nine': 79,
'eighty': 80,
'eighty-one': 81,
'eighty-two': 82,
'eighty-three': 83,
'eighty-four': 84,
'eighty-five': 85,
'eighty-six': 86,
'eighty-seven': 87,
'eighty-eight': 88,
'eighty-nine': 89,
'ninety': 90,
'ninety-one': 91,
'ninety-two': 92,
'ninety-three': 93,
'ninety-four': 94,
'ninety-five': 95,
'ninety-six': 96,
'ninety-seven': 97,
'ninety-eight': 98,
'ninety-nine': 99}
powers = {'hundred': 10 ** 2,
'thousand': 10 ** 3,
'million': 10 ** 6,
'billion': 10 ** 9,
'quadrillion': 10 ** 15,
'quintillion': 10 ** 18,
'sextillion': 10 ** 21,
'septillion': 10 ** 24,
'octillion': 10 ** 27,
'nonillion': 10 ** 30,
'decillion': 10 ** 33,
'undecillion': 10 ** 36,
'duodecillion': 10 ** 39,
'tredecillion': 10 ** 42,
'quattuordecillion': 10 ** 45,
'quindecillion': 10 ** 48,
'sexdecillion': 10 ** 51,
'septemdecillion': 10 ** 54,
'octodecillion': 10 ** 57,
'novemdecillion': 10 ** 60,
'vigintillion': 10 ** 63,
'vigintunillion': 10 ** 66,
'unvigintillion': 10 ** 66,
'duovigintillion': 10 ** 69,
'vigintiduoillion': 10 ** 69,
'vigintitrillion': 10 ** 72,
'trevigintillion': 10 ** 72,
'vigintiquadrillion': 10 ** 75,
'quattuorvigintillion': 10 ** 75,
'quinvigintillion': 10 ** 78,
'vigintiquintrillion': 10 ** 78,
'vigintisextillion': 10 ** 81,
'sexvigintillion': 10 ** 81,
'vigintiseptillion': 10 ** 84,
'septvigintillion': 10 ** 84,
'octovigintillion': 10 ** 87,
'vigintoctillion': 10 ** 87,
'vigintinonillion': 10 ** 90,
'nonvigintillion': 10 ** 90,
'trigintillion': 10 ** 93,
'untrigintillion': 10 ** 96,
'duotrigintillion': 10 ** 99,
'googol': 10 ** 100,
'centillion': 10 ** 303}
result = 0
a = string.split(" ")
b = []
for c in a:
if c in numbers:
b.append(c)
elif c in powers:
b[-1] += " " + c
elif c == "and":
continue
else:
print("INVALID WORD:",c)
return
for d, e in enumerate(b):
if len(e.split(" ")) == 1:
b[d] = numbers[e]
else:
b[d] = e.split(" ")
b[d][0] = numbers[b[d][0]]
f = 1
while f < len(b[d]):
b[d][f] = powers[b[d][f]]
f += 1
if not(isinstance(b[0], int)):
while len(b[0]) > 2:
b[0][1] *= b[0][2]
b[0].pop(2)
while len(b):
if len(b) == 1:
if isinstance(b[0], int):
result += b[0]
b.pop(0)
else:
while len(b[0]) > 1:
b[0][0] *= b[0][1]
b[0].pop(1)
result += b[0][0]
b.pop(0)
else:
if isinstance(b[1], int):
b[1] += b[0][0] * b[0][1]
b.pop(0)
else:
while len(b[1]) > 2:
b[1][1] *= b[1][2]
b[1].pop(2)
if b[0][1] < b[1][1]:
b[1][0] += b[0][0] * b[0][1]
b.pop(0)
else:
result += b[0][0] * b[0][1]
b.pop(0)
return result | {
"domain": "codereview.stackexchange",
"id": 40136,
"tags": "python, programming-challenge, parsing, functional-programming, iteration"
} |
Temperature and reaction quotient | Question: Consider an exothermic reaction. If we double the temperature at which the reaction is run, will the reaction quotient increase?
I know that an increase in Q relative to K causes the reaction to favor the reactants as the system tries to return to equilibrium.
If a reaction is exothermic, and we take heat to be a product, wouldn't an excess of heat cause the reaction to favor reactants?
The question I'm having trouble with is this one; apparently the answer is choice 2. I wonder if this is a typo or am I fundamentally misunderstanding something.
Answer: Just to clear things up first (I might be saying something you already know, but I just want to be sure): the intent of the question is not to ask you what happens to the reaction quotient after equilibrium is re-established.
If a system is originally in equilibrium, with $Q = K_1$, and then you disturb it such that the equilibrium constant changes to $K_2$, then after the system re-reaches equilibrium you will have $Q = K_2$. If you interpret the question as asking whether $Q$ has changed after equilibrium is re-established, then the question basically amounts to asking whether the equilibrium constants $K_1$ and $K_2$ are different.
What the question is asking is whether $Q$ is affected instantaneously by the disturbance. Re-reading your question I think you got it, so...
Anyway, heat is not a product.
The reaction quotient for that reaction is defined as
$$Q = \frac{a_{\ce{CO2}}}{(a_{\ce{CO}})^2} \approx \frac{p_{\ce{CO2}}}{(p_{\ce{CO}})^2}$$
and $K$ is defined as the value of $Q$ at equilibrium.
Nowhere does heat appear in the equation. The idea of using heat as a "product" is nothing more than a mnemonic to help you remember the direction in which the equilibrium shifts. As long as the pressures are fixed, the temperature does not affect the instantaneous value of the reaction quotient.
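To make this concrete (the pressures below are illustrative, chosen to give $Q = 0.01$ as in the example that follows):

```python
def reaction_quotient(p_co2, p_co):
    """Q for 2 CO(g) -> CO2(g), approximating activities by
    partial pressures (in bar)."""
    return p_co2 / p_co ** 2

# Illustrative pressures at the old equilibrium: Q = K1 = 0.01
q_before = reaction_quotient(p_co2=0.04, p_co=2.0)
# The instant after raising T, the pressures are unchanged, so Q is
# still 0.01 even though K has dropped (say, to 0.001).
q_after = reaction_quotient(p_co2=0.04, p_co=2.0)
```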
When you increase the temperature, what happens is that the equilibrium constant $K$ drops. Let's say that before changing the temperature $Q = K = 0.01$ (just an example). If you increase the temperature, $K$ might fall to $0.001$ but $Q$ at that instant will still be $0.01$. As the system re-equilibrates, $Q$ will drop to $0.001$.
So, it is not that $K$ remains constant and $Q$ increases. It is more like: $K$ drops and $Q$ remains constant (at least instantaneously). | {
"domain": "chemistry.stackexchange",
"id": 5608,
"tags": "thermodynamics"
} |
Ultrarelativitistic particle - what kind of a particle is this? | Question: I have heard many times that we can treat a moving particle as a:
classical particle
non-relativistic
relativistic particle
ultra-relativistic particle
While I know equations for 1, 2, & 3, I really don't know what is the difference between ultrarelativistic and relativistic particle. Can anyone explain a bit or provide some hyperrefs.
Answer: An ultra-relativistic particle is any particle you observe to have almost all its energy stored in the form of momentum. In other words, we are talking about particles that have only a very tiny fraction of their total energy stored in (rest)mass.
The relativistic mass-energy-momentum relationship
$$E^2 - c^2 \ p^2 \ = \ c^4 \ m^2 $$
is valid for a particle with (rest)mass $m$ regardless of its speed. Depending on the relative magnitude of the various terms a particle is referred to as ultra-relativistic, relativistic, or non-relativistic.
An ultra-relativistic particle speeds by with $E \approx c \ p >> m \ c^2$. Examples are neutrinos (at almost any energy), but also protons accelerated to full speed in the LHC.
In contrast, non-relativistic particles (I prefer to reserve the term classical particles for particles behaving in 'non-quantum' fashion) are characterized by $E \approx m \ c^2 >> c \ p$.
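A quick numerical sketch (illustrative values, not from the answer: a 6.5 TeV LHC proton with rest energy about 0.938 GeV) shows just how close $E$ and $cp$ are in the ultra-relativistic regime:

```python
import math

e_gev, m_gev = 6500.0, 0.938                  # total energy, rest energy (GeV)
pc_gev = math.sqrt(e_gev ** 2 - m_gev ** 2)   # from E^2 - (cp)^2 = (mc^2)^2
# Relative gap between E and cp, approximately (mc^2)^2 / (2 E^2) ~ 1e-8:
fraction_in_mass = (e_gev - pc_gev) / e_gev
```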
Depending on the relative sizes of the $mc^2$ and the $pc$ sides of this right triangle, a particle is called non-relativistic, relativistic, or ultra-relativistic. | {
"domain": "physics.stackexchange",
"id": 8737,
"tags": "special-relativity, terminology"
} |
What is the relation between volume density and area/surface density? | Question: Is the area/surface density just the volume density with one height dimension in the volume calculation set to 1?
Answer: No, that would still give you a 3-dimensional volume. You need to think in just 2 dimensions. You can just ignore the 3rd dimension to get an area. If someone talks about buying property they use the units of square meters. They don't even consider the 3rd dimension, height. If you assume a height of 1 the number may not change but the units will. | {
"domain": "physics.stackexchange",
"id": 43697,
"tags": "geometry, density"
} |
Time complexity analysis for Reingold's UST-CONN algorithm | Question: What is the exact time complexity of the undirected st-connectivity log-space algorithm by Omer Reingold ?
Answer: It seems that the time complexity of Reingold's algorithm is not treated in either Reingold's paper or in the Arora-Barak textbook. It would also appear that the analysis is rather tedious, as the time complexity depends on the exact expander graph used in the construction and on some constants that are chosen to be "sufficiently large".
To get some rough idea on the time complexity, observe that Reingold's algorithm, given graph $G$, transforms it (implicitly) into an expander graph $G'$ and traverses every walk of length $l = O(\log n)$. The $O$-notation hides some quite large constants here. The graph $G'$ has constant degree of $d = 2^b$ for some sufficiently large $b$, meaning that there are $d^l = O(n^c)$ such walks for some rather large constant $c$. Skimming some lecture notes on the topic it would seem that $c \ge 10^9b$.
"domain": "cstheory.stackexchange",
"id": 103,
"tags": "cc.complexity-theory, randomness"
} |
ERA5 mean evaporation rates data - Why is that negative? | Question: Please, let me know if this question is too technical/specific for Earth Science. In case, I will remove it and eventually post it elsewhere.
I am processing some data extracted from ERA5 to set up a large scale hydrological model. We are calculating the evapotranspiration using the typical P-M formula, using as input the usual variables: wind speed, solar radiation, temperature, pressure, relative humidity and so on. In order to check the consistency of our calculations with the variables provided by the ECMWF, I also downloaded some evaporation related variables. I'm struggling to understand the data for the mean evaporation rate variable. ERA5 data are available at hourly time steps at ~0.28 degrees spatial resolution. For the mean evaporation rate, Minimum, Mean, and Maximum values are provided, expressed in kg m⁻² s⁻¹. Data look like this:
The ncdump is the following:
and plotting the mean values for a hour during the day, would look like this:
What looks not clear to me is why the mean evaporation rate is negative: I would expect a positive value in the wet regions and zero over the deserts. I tried to look at the ERA5 data documentation, but the issue is not addressed there. Am I misinterpreting something?
Answer: Just leaving this for future readers. I finally found in some ECMWF documentation (referring to a different variable) that: "The ECMWF Integrated Forecasting System convention is that downward fluxes are positive. Therefore, negative values indicate evaporation and positive values indicate condensation". This applies also to other variables.
Thanks to Daniel and Spencer for their help. | {
"domain": "earthscience.stackexchange",
"id": 1840,
"tags": "evaporation, era"
} |
Do stem cells have no epigenome? | Question: Till now I thought that embryonic stem cells have no epigenome as they are pluripotent. (I thought that since the epigenome is what gives a cell its identity, no cellular identity means no epigenome.) I saw something similar to this on this Wikipedia page. After fertilization, the paternal and maternal genomes are demethylated in order to erase their epigenetic signatures and acquire totipotency..
Other sources mention 'reset' in place of 'erase'.
This paper rather suggests that stem cells do have an epigenome. Specifically, genes associated with self-renewal are silenced, while cell-type-specific genes undergo transcriptional activation during differentiation..
I am not very literate in biology, please excuse me if I made a mistake.
Answer: So there are a couple of things to bear in mind.
pluripotent does not mean that all genes are active. It means that the stem cells have the ability to form different cell types. However, a stem cell still needs to keep the cellular programme of, for example, a neuron silent. So the epigenome is still present to keep other cell-type programmes silent until there is a transition.
DNA methylation is not the only source of epigenetics. Active and inactive genes also correspond to particular post-translational modifications on the tails of histone proteins. In the cell, DNA is wrapped around histones to form what is known as chromatin.
Hope that is a starting point to answer your question | {
"domain": "biology.stackexchange",
"id": 11272,
"tags": "stem-cells, epigenetics"
} |
Why are higher acids solid while lower acids are liquid? | Question: I was just going through acids on Wikipedia in my free time, and noticed this neat trend:
Formic acid, ethanoic acid, up to nonanoic acid, are all liquids. The first few are colourless, while the latter are yellowish and also "oily" (more viscous).
Decanoic acid and all its elder brothers are white crystals (or "powders")
Aromatic acids like benzoic acid or picric acid are solids.
I am unsure if there could be a reasonable logic for the colours. But, I am sure there could be a good logic for the physical state of these acids at room temperature.
To me, all these acids have hydrogen bonding common, so we can rule out that factor. Apparently, the physical state seems to be related to the "size" of the acid. The heavier acids are solids, while the lower ones are liquid. My question is: why is it so?
PS: List of all acids at one place is at the bottom of this Wikipedia page.
Answer: Acids with a larger size have greater intermolecular forces than 'smaller' acids, chiefly London dispersion forces, which grow with the number of electrons and the contact area of the longer carbon chain. It therefore takes more heat energy to overcome those attractions in larger acids than in smaller ones. Hence at room temperature the heavier acids are solid. | {
"domain": "chemistry.stackexchange",
"id": 9600,
"tags": "organic-chemistry"
} |
Animating a screw made up of elements | Question: I am working on code to generate a screw made up of different elements and animate it by rotation. The elements are so called conveying elements (denoted by GFA) which are helical shaped screw elements and kneading blocks (denoted by KB) which are smaller sequential sections offset by a staggering angle.
The algorithm is as follows:
Typical initialization (in init) of Three.js objects such as renderer, scene, camera and controls
An instance of a custom object Screw is initialized and elements are added using an identifier string, e.g. 'GFA 2-40-90' or 'KB 5-2-30-90'.
The add method of Screw checks what type the element is (i.e. GFA or KB), creates an instance of the relevant element object (i.e. GFAElement or KBElement) using the parameters of the element, and moves it to the end of the screw.
As an element object is instantiated, the profile shape is determined from the screw parameters and used to extrude to the required geometry using the element parameters stored in userData. For GFAElements, the geometry is subsequently twisted to generate the helical shape of the screw. For KBElements, the mesh of a block is extruded to the required thickness and then cloned while rotating in discrete steps to generate a THREE.Group of smaller sections offset by an angle.
After all elements have been added to the screw, the screw is cloned to a mirror screw which is offset by a certain distance and angle from the original screw. During animate, the screw and its mirrored clone are rotated by a certain angle.
What I would like to improve:
memory usage and performance - for GFAElements 'twisting' the vertices seems to be a big performance hit at initialization specifically for a large number of elements and high resolutions (defined by the steps property in extrudeSettings).
Improve feathering of GFAElements - at low resolutions the edges of the element are feathered; this is reduced by increasing the step resolution but also decreases performance at initialization and increases memory usage.
Class structuring - I am unsure if I have structured my classes logically. Particularly, I am not sure about the way I decide which type of element is added in the Screw method add. Perhaps it is better to have an abstract base class for an element and inherit from it for GFAElement and KBElement.
Using JavaScript with three.js.
Code (fiddle):
'use strict';
var container;
var camera, scene, renderer, controls;
var screw, mirror;
// Screw parameters
var P = 2; // number of flights
var D = 50, // outer diameter
Dr = D/1.66, // root diameter
Cl = (Dr+D)/2, // centerline distance
αi = 2*Math.acos(Cl/D),
Ih = D*Math.sin(αi/2)/2,
H = D-Cl;
var αf = αi,
αt = Math.PI/P - αf,
αr = αt;
//console.log(D, Dr, Cl, Ih, H);
//console.log(αi, αf, αt, αr);
function getFlankParams(α1, D1, α2, D2, ctr){
// flanks are arcs with origin (xc, yc) of radius Cl passing through (x1, y1) and (x2, y2):
// (x1-xc)^2 + (y1-yc)^2 = Cl^2
// (x2-xc)^2 + (y2-yc)^2 = Cl^2
var x1 = D1*Math.cos(α1),
y1 = D1*Math.sin(α1),
x2 = D2*Math.cos(α2),
y2 = D2*Math.sin(α2);
// Solving system of equations yields linear eq:
// y1-yc = beta - alpha*(x1-xc)
var alpha = (x1-x2)/(y1-y2),
beta = (y1-y2)*(1+Math.pow(alpha,2))/2;
// Substitution and applying quadratic equation:
var xc = x1 - alpha*beta/(1+Math.pow(alpha,2))*(1+Math.pow(-1,ctr)*Math.sqrt(1-(1-Math.pow(Cl/beta,2))*(1+1/Math.pow(alpha,2)))),
yc = y1 + alpha*(x1-xc) - beta;
// Following from law of consines, the angle the flank extends wrt its own origin:
var asq = Math.pow(Dr/2,2)+Math.pow(D/2,2)-2*(Dr/2)*(D/2)*Math.cos(αf),
af = Math.acos(1-asq/Math.pow(Cl, 2)/2);
return {xc, yc, af};
}
function getProfile() {
var shape = new THREE.Shape();
var angle = 0, ctr = 0;
// loop over number of flights
for (var p=0; p<P; p++){
// tip
shape.absarc(0, 0, D/2, angle, angle+αt);
angle += αt;
// flank
var params = getFlankParams(angle, D/2, angle+αf, Dr/2, ctr++);
shape.absarc(params.xc, params.yc, Cl, angle+αf-params.af, angle+αf, false);
angle += αf;
// root
shape.absarc(0, 0, Dr/2, angle, angle+αr);
angle += αr;
// flank
params = getFlankParams(angle, Dr/2, angle+αf, D/2, ctr++);
shape.absarc(params.xc, params.yc, Cl, angle, angle+αf-params.af, false);
angle += αf;
}
return shape;
}
class GFAElement extends THREE.Mesh {
constructor(params){
//
var p = params.split("-");
var userData = {
type: "GFA",
flights: parseInt(p[0]),
pitch: parseInt(p[1]),
length: parseInt(p[2]),
};
var shape = getProfile();
var extrudeSettings = {
steps: userData.length/2,
depth: userData.length,
bevelEnabled: false
};
var geometry = new THREE.ExtrudeGeometry( shape, extrudeSettings );
var material = new THREE.MeshStandardMaterial( {
color: 0xffffff,
metalness: 0.5,
roughness: 0.5,
} );
super( geometry, material );
this.geometry.vertices.forEach( vertex => {
var angle = -2*Math.PI/userData.flights*vertex.z/userData.pitch;
var updateX = vertex.x * Math.cos(angle) - vertex.y * Math.sin(angle);
var updateY = vertex.y * Math.cos(angle) + vertex.x * Math.sin(angle);
vertex.x = updateX;
vertex.y = updateY;
});
this.geometry.computeFaceNormals();
this.geometry.computeVertexNormals();
this.type = 'GFAElement';
this.userData = userData;
this._params = params;
this._name = 'GFA ' + params;
}
clone(){
return new this.constructor( this._params ).copy( this );
}
}
class KBElement extends THREE.Group {
//
constructor(params){
super();
var p = params.split("-");
var userData = {
type: "KB",
thickness: parseInt(p[0]),
flights: parseInt(p[1]),
length: parseInt(p[2]),
stagAngle: parseInt(p[3]),
};
var shape = getProfile();
var extrudeSettings = {
depth: userData.thickness,
bevelEnabled: false
};
var geometry = new THREE.ExtrudeGeometry( shape, extrudeSettings );
var material = new THREE.MeshStandardMaterial( {
color: 0xffffff,
metalness: 0.5,
roughness: 0.5,
} );
var mesh = new THREE.Mesh( geometry, material );
super.add( mesh );
for (var n=1, nt = userData.length/userData.thickness; n<nt; n++){
mesh = mesh.clone();
mesh.position.z += userData.thickness;
mesh.rotation.z += userData.stagAngle;
super.add( mesh );
}
this.type = 'KBElement';
this.userData = userData;
this._params = params;
this._name = 'KB ' + params;
}
clone(){
return new this.constructor( this._params ).copy( this );
}
}
class Screw extends THREE.Group {
//
constructor(){
super();
this.userData.length = 0; //length of screw starting at origin
}
add(desc){
var elem,
params = desc.split(" ");
if (params[0] == "GFA") {
elem = new GFAElement(params[1]);
} else
if (params[0] == "KB") {
elem = new KBElement(params[1]);
}
elem.position.z = this.userData.length;
this.userData.length += elem.userData.length;
super.add(elem);
}
clone(){
var clone = super.clone(false);
clone.userData.length = 0;
this.children.forEach(function(elem){
var e = elem.clone();
clone.add(e._name);
});
clone.position.x += -Cl;
clone.rotation.z += Math.PI/2;
return clone
}
}
function init() {
renderer = new THREE.WebGLRenderer();
renderer.setPixelRatio( window.devicePixelRatio );
renderer.setSize( window.innerWidth, window.innerHeight );
//renderer.gammaInput = true;
//renderer.gammaOutput = true;
document.body.appendChild( renderer.domElement );
scene = new THREE.Scene();
scene.background = new THREE.Color( 0x222222 );
camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 1000 );
camera.position.set( -200, 200, -200 );
scene.add( camera );
var light = new THREE.PointLight( 0xffffff );
camera.add( light );
controls = new THREE.TrackballControls( camera, renderer.domElement );
controls.minDistance = 100;
controls.maxDistance = 500;
screw = new Screw();
screw.add('GFA 2-40-90');
screw.add('KB 5-2-30-90');
screw.add('GFA 2-40-90');
screw.add('KB 10-2-120-15');
mirror = screw.clone();
scene.add(screw, mirror);
}
function animate() {
screw.rotation.z += 2*Math.PI/100;
mirror.rotation.z += 2*Math.PI/100;
requestAnimationFrame( animate );
controls.update();
renderer.render( scene, camera );
}
init();
animate();
Answer:
memory usage and performance
I see that the constructor for GFAElement uses a forEach iterator. While functional programming is great, one drawback is that it is typically slower because function calls are made for each element in the array. Instead, try using for...of to reduce the computational requirements.
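As a concrete sketch of that suggestion, here is roughly how the vertex-twist loop from the GFAElement constructor could look with for...of. Plain {x, y, z} objects stand in for THREE.Vector3 instances so the sketch runs without three.js:

```javascript
// Sketch: the twist loop rewritten with for...of instead of forEach.
// `flights` and `pitch` mirror the userData fields from the original code.
function twistVertices(vertices, flights, pitch) {
  for (const vertex of vertices) {
    const angle = -2 * Math.PI / flights * vertex.z / pitch;
    const x = vertex.x * Math.cos(angle) - vertex.y * Math.sin(angle);
    const y = vertex.y * Math.cos(angle) + vertex.x * Math.sin(angle);
    vertex.x = x;
    vertex.y = y;
  }
}

// A vertex at z = 0 is untouched (angle is 0); one further along the
// screw axis is rotated about z by an angle proportional to its depth.
const verts = [{ x: 1, y: 0, z: 0 }, { x: 1, y: 0, z: 20 }];
twistVertices(verts, 2, 40); // flights = 2, pitch = 40
```

The math is identical to the original; only the iteration construct changes.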
Should getProfile() be called by every GFA/KB element? If not, perhaps it would be wise to cache the profile return value and invalidate it when necessary (i.e. when the view changes?). And would it be acceptable to use the same material object for each mesh element?
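One way such caching could look, under the assumption that the screw parameters are fixed after load (as they are in the posted code); `buildProfile` is a stand-in for the real getProfile() body:

```javascript
// Sketch: memoize the profile so the arc construction runs once,
// not once per element.
let cachedProfile = null;

function getProfileCached(buildProfile) {
  if (cachedProfile === null) {
    cachedProfile = buildProfile(); // expensive work happens only once
  }
  return cachedProfile;
}

// The builder is invoked a single time; every element shares the result.
let calls = 0;
const build = () => { calls += 1; return { id: 'profile' }; };
const first = getProfileCached(build);
const second = getProfileCached(build);
```

The same pattern would apply to a shared MeshStandardMaterial instance.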
Improve feathering of GFAElements
I am not sure if this will help with optimization but have you considered setting the value of the step option based on the resolution (i.e. an inverse relationship)? Otherwise maybe you could allow the user to specify that value (e.g. perhaps with an <input type="range" />), allowing him/her to make the decision of optimization vs. appearance.
Class structuring
It is a shame that the two element classes don't have the same parent class - if so, an intermediary class could be created to abstract out common code like the cloning, parameter extraction, etc. However, a mixin could be used:
let ElementMixin = superclass => class extends superclass {
clone(){
return new this.constructor( this._params ).copy( this );
}
}
class GFAElement extends ElementMixin(THREE.Mesh) { ... }
class KBElement extends ElementMixin(THREE.Group) { ... }
The code in the constructor methods could perhaps be abstracted out into that mixin (e.g. getting the userdata object, updating the geometry/vertex items, etc.). For more of an explanation of mixins in ecmascript-2015, check out "Real" Mixins with JavaScript Classes.
Other suggestions
Use more EcmaScript-2015 (ES-6) features
The code already uses Classes and Object destructuring (for the return value of getFlankParams()). It is recommended that one use const for block-scoped variables that should not be re-assigned and let for block-scoped variables that should be re-assigned (typically just iterator variables and counters).
parseInt() radix specification
This likely won't be an issue unless the parameters/attributes of the elements are specified by input from the user but it would be wise to pass the radix (typically 10) as the second parameter to calls to parseInt(). According to the MDN documentation:
"Always specify a radix" 1
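To illustrate why, using the element identifier format from the question:

```javascript
// Sketch: passing the radix makes the intent explicit. The pitch field of
// 'GFA 2-40-90' parses the same either way, but hex-looking input does not.
const fields = '2-40-90'.split('-');
const pitch = parseInt(fields[1], 10); // 40

const withRadix = parseInt('0x10', 10); // parses '0', stops at 'x' → 0
const withoutRadix = parseInt('0x10');  // interpreted as hexadecimal → 16
```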
Consider wrapping code in an IIFE or wait for DOM to be ready
It is a good habit to wrap the code in an IIFE or put it all in a function called when the DOM is ready, so as to avoid putting all the variables currently declared outside of a function in the global namespace (i.e. window).
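A minimal sketch of the IIFE pattern; the string assignment stands in for the real init() work:

```javascript
// Sketch: everything declared inside the IIFE stays out of the global
// namespace; only the returned object is visible outside.
const app = (function () {
  let screw = null; // private: no longer a global variable

  function init() {
    screw = 'initialized'; // stands in for the real setup
  }

  return { init, getScrew: () => screw };
})();

app.init();
```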
1: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseInt | {
"domain": "codereview.stackexchange",
"id": 31902,
"tags": "javascript, performance, ecmascript-6, animation, graphics"
} |
Add classes onclick and then remove them with onfocus event | Question: Right now I'm mixing JS and jQuery. If possible, I would like to find a single solution using plain JS only.
How can I organize the code according to best code practices?
Another thing I need to add is a check at the beginning of the function. If the classes are already there, nothing should happen; if they are not, add them just as below.
function changeClass() {
document.getElementById("message").className += " dropzone-wrap";
document.getElementById("dragNdrop").className += " dropzone-content";
}
window.onload = function() {
document.getElementById("ToggleTxTDrop").addEventListener( 'click' , changeClass );
}
/* lets remove the previous classes when on focus */
$('#message').on('focus', function(){
$(this).removeClass('dropzone-wrap');
$(this).parent().removeClass('dropzone-content');
})
Answer: Here's a step-by-step conversion from your code to the JS-only code of youmightnotneedjquery.com.
function changeClass() {
document.getElementById("message").className += " dropzone-wrap";
document.getElementById("dragNdrop").className += " dropzone-content";
}
Your method here can only ever be used to add "dropzone-wrap" and "dropzone-content". Consider a different approach, where you provide the DOM Element and CSS Classes as parameters.
Consider two such methods - one to add the class (from $.addClass()), and one to remove it ($.removeClass()).
// http://youmightnotneedjquery.com/#add_class
function addClass(el, className) {
if (el.classList)
el.classList.add(className);
else
el.className += ' ' + className;
}
// http://youmightnotneedjquery.com/#remove_class
function removeClass(el, className) {
if (el.classList)
el.classList.remove(className);
else
el.className = el.className.replace(new RegExp('(^|\\b)' + className.split(' ').join('|') + '(\\b|$)', 'gi'), ' ');
}
Note that I'd remove the class before adding it, as that code does not protect against long repeated classname strings.
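One way to build that protection in, sketched with string manipulation only (a plain object stands in for a DOM element so it also covers browsers without classList):

```javascript
// Sketch: an addClass variant that is a no-op when the class is already
// present, so repeated clicks cannot grow the className string.
function addClassOnce(el, className) {
  const classes = el.className.split(/\s+/).filter(Boolean);
  if (!classes.includes(className)) {
    classes.push(className);
  }
  el.className = classes.join(' ');
}

const el = { className: 'message' };
addClassOnce(el, 'dropzone-wrap');
addClassOnce(el, 'dropzone-wrap'); // second call changes nothing
```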
Anyway, let's continue with your code:
window.onload = function() {
document.getElementById("ToggleTxTDrop").addEventListener( 'click' , changeClass );
}
Using onload works, but it isn't the right way to do it. With jQuery this should be $(window).on('load', function() { ... });... with native Javascript, it isn't much different (.addEventListener).
For the function code itself, create a handler method that uses the addClass() abstraction method to replace your old changeClass() functionality:
function txtDropClicked(evt) {
addClass(document.getElementById('message'), 'dropzone-wrap');
addClass(document.getElementById('dragNdrop'), 'dropzone-content');
}
window.addEventListener('load', function() {
document.getElementById("ToggleTxTDrop").addEventListener( 'click' , txtDropClicked );
});
Now we have your classes being added on click; let's update the onfocus handler in the same fashion:
/* lets remove the previous classes when on focus */
function removeClassesOnFocus(evt) {
// evt.target is the element that triggered the event. We bound to #message.
removeClass(evt.target, 'dropzone-wrap');
removeClass(evt.target.parentNode, 'dropzone-content');
}
// this bit really should wait for `window.load` too.
document.getElementById('message').addEventListener('focus', removeClassesOnFocus);
If the classes are there nothing should happen
To address that, the removeClassesOnFocus function could be reworked to look up the element if it isn't provided.
function txtDropClicked(evt) {
removeClassesOnFocus(false);
addClass(document.getElementById('message'), 'dropzone-wrap');
addClass(document.getElementById('dragNdrop'), 'dropzone-content');
}
function removeClassesOnFocus(evt) {
// evt.target is the element that triggered the event. We bound to #message.
var message = (evt === false) ? document.getElementById('message') : evt.target;
removeClass(message, 'dropzone-wrap');
removeClass(message.parentNode, 'dropzone-content');
}
The full converted code:
// http://youmightnotneedjquery.com/#add_class
function addClass(el, className) {
if (el.classList) { el.classList.add(className); }
else { el.className += ' ' + className; }
}
// http://youmightnotneedjquery.com/#remove_class
function removeClass(el, className) {
if (el.classList) { el.classList.remove(className); }
else { el.className = el.className.replace(new RegExp('(^|\\b)' + className.split(' ').join('|') + '(\\b|$)', 'gi'), ' '); }
}
// When the txtDrop toggle is clicked, add these classes to the message and drag/drop thing.
function txtDropClicked(evt) {
removeClassesOnFocus(false);
addClass(document.getElementById('message'), 'dropzone-wrap');
addClass(document.getElementById('dragNdrop'), 'dropzone-content');
}
// Removes the classes after the message receives focus.
function removeClassesOnFocus(evt) {
// evt.target is the element that triggered the event. We bound to #message.
var message = (evt === false) ? document.getElementById('message') : evt.target;
removeClass(message, 'dropzone-wrap');
removeClass(message.parentNode, 'dropzone-content');
}
// Attach the click handler to txtDrop toggle when the page loads.
window.addEventListener('load', function() {
document.getElementById("ToggleTxTDrop").addEventListener( 'click' , txtDropClicked );
document.getElementById('message').addEventListener('focus', removeClassesOnFocus);
});
It is worth noting that you can use inline functions as handlers, too - and not define txtDropClicked or removeClassesOnFocus. Either way, the point of each method is to perform a single task, and to do that in the most reusable manner possible. | {
"domain": "codereview.stackexchange",
"id": 14102,
"tags": "javascript, jquery"
} |
What does "relatively uniform resonant frequency" mean? | Question: In the book What If? from Randall Munroe, the author writes:
Q: Assuming a relatively uniform resonant frequency in a passenger jet, how many cats, meowing at what resonant frequency of said jet, would be required to "bring it down"?
I'm unable to find any usage of "uniform resonant frequency" or even "relatively uniform resonant frequency" on Google at all. I do come across "uniform resonance", but the results are all papers that assume I have understood the term already. From my understanding, for every object there is only one resonance.
I think for an assembled object like a jet, each part has a different resonance, and if all of them are plotted, we can see whether they are relatively uniform or not. The mechanical resonance article does mention airplanes, but I'm not sure this is the case.
So what does it mean?
Answer: It's exactly what you postulate it is -- a jet is composed of thousands (probably millions) of different parts, each of which would have its own frequency that induces the largest vibrations.
In order to make the problem tractable, the author had to make an assumption. The assumption made is that all of the parts have a roughly equivalent natural frequency, which the author chose to call "relatively uniform." "Uniform" means all equal; "relatively" means to within a close enough approximation. This means there is effectively only one resonant frequency for the aircraft.
It is not a technical term. It's a plain-English term. And that's likely why you aren't able to search for precise meanings. But, the plain-English definitions are what makes the What-If series so charming and approachable. | {
"domain": "physics.stackexchange",
"id": 38316,
"tags": "frequency, resonance, relative-motion, aircraft"
} |
Why does activated carbon preferentially adsorb anions? | Question: Brands like Brita and Pur (in the U.S.) have made a name for themselves for the ability of their product (essentially a large-pore filter with activated carbon/charcoal) to extract the added chlorine from tap water. I visualize $\ce{C}^*$ as a vast network of channels in which ions are "trapped". I had assumed that most cations and anions were trapped effectively, but it seems to favor chlorine and iodine.
According to Wikipedia:
Activated carbon does not bind well to certain chemicals, including alcohols, glycols, strong acids and bases, metals and most inorganics, such as lithium, sodium, iron, lead, arsenic, fluorine, and boric acid.
Why would it not bind effectively to fluorine? Is it an issue of atomic radius? Does $\ce{F-}$, with a radius of 0.136 nm, sneak by, while $\ce{Cl-}$ and $\ce{I-}$, at 0.181 and 0.216 nm respectively, get caught up in the matrix? Why do the cations get passed through?
Answer: This answer applies to carbon filtration of water.
According to most of the sources I found, activated carbon binds to most substances through London dispersion forces (from the Wikipedia article). This should mean that it adsorbs larger molecules and non-polar molecules preferentially since they would have larger dispersion forces. As your quote indicates, it does not bind polar molecules like "alcohols, glycols, strong acids and bases" well. This source also lists neutral, non-polar compounds, along with organics and compounds with low water solubility as being effectively removed.
I think that you may have misread or misinterpreted the sentence about binding "chlorine and iodine" as saying that it traps chloride, iodide, and (by extension) other anions well. I can't find data to support this statement. It traps molecular iodine well, and as a large (in terms of electrons), non-polar molecule, that makes sense. But it removes molecular chlorine $\ce{Cl2}$ through a chemical reaction by reducing it to the chloride ion $\ce{Cl-}$ which is then soluble in water and flows through the filter.
I did find a couple of sources (1, 2) that said "activated charcoal" has a slight electropositive charge that helps it to attract negatively charged species, which would fit with your statement that it preferentially attracts anions. (However, they don't list what kind of negative species they're talking about, and nitrate, sulfate, and fluoride ions were specifically excluded.) The idea that it will remove ionic compounds seems to be contradicted by the statement in at least a couple of sources (one given here) that carbon filters don't remove minerals, salts, and dissolved inorganic compounds. | {
"domain": "chemistry.stackexchange",
"id": 5661,
"tags": "ions, everyday-chemistry, filtering"
} |
Using electrolysis of water as a proton generator | Question: I'm working on a project right now, where the challenge is to try to use electrolysis as a way of controlling the $\ce{pH}$ of water.
I've set up an experiment where I have two separate chambers or volumes of water.
Both chambers are connected to the same electrolytic unit. The flow of water is from a respective chamber, into the unit and then back into the same chamber again, in an attempt to separate the two half reactions. For this reason I call the chamber whose water flows over the anode the anodic chamber, and the chamber whose water flows over the cathode the cathodic chamber. The bodies of water are separated in the electrolysis cell by a proton-exchange-membrane (PEM) that should, in theory, only allow positively charged ions to cross.
Apologies for the Danish text on my setup, but the figures should help you understand what I'm trying to do.
The anodic chamber (the water that goes to the anode half-reaction of the electrolysis cell) contains tap water while the cathode chamber contains Milli-Q water with added $\ce{NaCl}$ for conductivity purposes.
The $\ce{pH}$ of the water is alkaline in both chambers when the experiment starts, but I've also tried to run the experiment where the anodic chamber is acidic.
Anyway, here is what I hoped would happen in alkaline water:
Anode (oxidation):
$$\ce{4OH- (aq) -> 4e- + O2(g) + 2 H2O(l)}$$
Cathode (reduction):
$$\ce{2H2O (l) +2e- -> H2(g) + 2OH-(aq)}$$
In theory, I was expecting the $\ce{pH}$ to drop in the anodic chamber, since removing $\ce{OH-}$ would necessitate a flow of positively charged ions across the PEM membrane, instead of the $\ce{OH-}$ produced in the cathodic chamber, in order to maintain electroneutrality. This would also mean that the $\ce{pH}$ should rise in the cathodic chamber.
However, what I see is invariably that the $\ce{pH}$ rises very rapidly, and continuously, in the cathodic chamber, whereas the $\ce{pH}$ in the anodic chamber remains either stagnant or rises very slowly.
Now, part of the reason for the rapid rise in the cathodic chamber is the complete lack of alkalinity, that much is clear to me. However, I don't understand why only the reduction half cell reaction seems to occur.
The only explanation I have been able to come up with is that something else on the anode half-reaction is donating electrons to the reaction. I tried running the experiment at different voltages in the hopes of finding a "sweet spot" that would prevent this "something" from reacting but as of now I haven't been able to do that. I'm worried it may be that my anodic electrode is donating the electrons.
My electrodes are nickel based.
A cursory glance at Wikipedia shows the standard electrode potential of nickel as
$$\ce{Ni^2+ + 2 e− <=> Ni(s)} \;\;\; \pu{E^{\circ} = −0.25 V}$$
And for my hydroxides
$$\ce{O2(g) + 2H2O + 4 e− <=> 4 OH−(aq)} \;\;\; \pu{E^{\circ} = +0.401 V}$$
When I run the reaction, I also see a yellowish/greenish precipitate forming in the anodic chamber, which according to some lab technicians at my location may be the nickel reacting.
Anyway, the question(s) are then - what do you think is happening in my system, given this information?
Am I on the right track? should I try to change my electrode?
Have I made some obvious, basic mistakes?
What suggestions do you have, so that I can get this to work?
I hope that you have the time to advise me on this, and I wish you all a pleasant day.
Answer: In a chemical sense, the "anode" of an electrolytic cell is the most powerful oxidizing reagent known, so much so that it can oxidize $\ce{F-}$ to elemental fluorine. Don't forget that the electron balance in an electrolytic cell is always maintained, so the gram-equivalents of X reduced at the cathode must equal the gram-equivalents of Y oxidized at the anode. As a result, half-cell reactions cannot occur alone. So something is being oxidized at the anode; this is your nickel electrode. Platinum is the best choice, but it is too expensive. A graphite electrode might be an economical substitute. Secondly, you have to add an electrolyte to the anode chamber as well, because the resistance of tap water is pretty high. You can add a non-corrosive electrolyte such as sodium sulfate. With these changes, hopefully you will see the desired trend. | {
"domain": "chemistry.stackexchange",
"id": 11716,
"tags": "ph, electrolysis"
} |
How do loophole-free Bell’s inequality violation tests rule out conspiracy via signals that have to travel back to the experimenter? | Question: First, I’d like to apologize for yet another question about crazy ideas on how quantum mechanics might actually work. I have no background in quantum mechanics. Recently, though, I started to study quantum computing, which turned my attention to Bell’s inequality and its experimental violations.
The latest of those experiments claim to be loophole-free. As I understand it, they place the measurement devices so far away from each other that no speed-of-light communication is possible before the measurements take place.
But one thing bothers me: we can’t be sure what exactly happened at those remote devices unless classical information travels back to us from there. So it seems like a possibility that there is some kind of action, propagating as fast as light, that causes any measurements of the entangled pairs to be observed as correlated. (I imagine it as some kind of bit-flipping in cellular automata.)
Apparently, either nobody has had such ideas, or everyone regards them as obviously ridiculous. So my question is: what exactly does this idea contradict? Why do actual experimenters never care about it?
UPD
Below is the process that I picture in my mind. Please note that I'm showing it purely to clarify what I mean by post-measurement conspiracy. My question is not why this particular scheme is not going to work, but rather why any such post-measurement magic can't work, in the spirit of how far-away separation kills any possible communication channel between measurement devices.
An entangled pair of particles is created. The label "A" means that they both belong to the same pair. These labels travel along with each of the particles.
The particles are measured in some bases denoted by B and C.
Measurement devices send back classical information about the outcomes. But here goes the magic: B(A) and C(A) tags get attached to it. If information gets copied at some point, so do the tags.
When the tags (with the same "A" label in parentheses) spatially collide, they rewrite each other by adding a remark in superscript meaning "relative to". In this way they "agree" with each other. And there are enough such things to magically rewrite the outcome (classical information) to reproduce the required correlations.
Answer: I don't think you can rule out conspiracy, in the same way you can't rule out superdeterminism (which perhaps is one class of conspiracy). But I don't think either notion makes any prediction which is testable, even in principle (certainly superdeterminism does not). As such, ideas like this lie outside the realm of physics, which deals in experimentally testable theories.
This is not to say such ideas aren't interesting: they may be, they're just part of philosophy rather than science.
Of course, if you can come up with a test for your 'conspiracy theory' (not in the normal derogatory sense of that term), then I'm wrong and it is part of science, but I suspect you can't. | {
"domain": "physics.stackexchange",
"id": 46629,
"tags": "bells-inequality"
} |
Natural candidate against the Isomorphism Conjecture? | Question: The famous Isomorphism Conjecture of Berman and Hartmanis says that all $NP$-complete languages are polynomial time isomorphic (p-isomorphic) to each other. The key significance of the conjecture is that it implies $P\neq NP$. It was published in 1977, and a piece of supporting evidence was that all $NP$-complete problems known at the time were indeed p-isomorphic. In fact, they were all paddable, which is a nice, natural property, and implies p-isomorphism in a nontrivial way.
Since then, the trust in the conjecture deteriorated, because candidate $NP$-complete languages have been discovered that are not likely to be p-isomorphic to $SAT$, although the problem is still open. As far as I know, however, none of these candidates represent natural problems; they are constructed via diagonalization for the purpose of disproving the Isomorphism Conjecture.
Is it still true, after nearly four decades, that all known natural $NP$-complete problems are p-isomorphic to $SAT$? Or, is there any conjectured natural candidate to the contrary?
Answer: I think the answer is yes, even today there is no known natural problem that is a candidate for violating the Isomorphism Conjecture.
The primary reason is that typically natural NP-complete problems are very easily seen to be paddable, which Berman and Hartmanis showed suffices to be isomorphic to SAT. For natural graph-related problems this typically involves adding extra vertices that are, e.g., disconnected from the graph, or connected in a very particular (but usually obvious) way. For the decision version of optimization problems, it typically involves adding new dummy variables with no constraints on them. And so on. | {
"domain": "cstheory.stackexchange",
"id": 2554,
"tags": "cc.complexity-theory, complexity-classes, p-vs-np, np-complete"
} |
Why graph planarity is important | Question: What is the reason to study planar graphs and algorithms on such graphs (as well as algorithms that check a graph's planarity)? Where in industry is this knowledge needed?
I know that planarity arises in microchip design ("wires" intersection). I also found the following excerpt from a lecture notes:
The study of two-dimensional images often results in problems related to planar graphs, as does the solution of many problems on the two-dimensional surface of our earth. Many natural three-dimensional graphs arise in scientific and engineering problems. These often come from well-shaped meshes, which share many properties with planar graphs.
Where I can read about it in more detail? Or may be you can explain in a few words what exactly these applications are?
Answer: For some graph classes $C$, the question "is there a fast algorithm for deciding whether a graph $G$ belongs to class $C$?" is perhaps only of theoretical curiosity. But it can also be argued otherwise: suppose a problem you care about is hard in general, but efficiently solvable for graphs of class $C$. Wouldn't it be nice if you could quickly test if you can use a fast algorithm, instead of always using a slow brute-force, for example?
In general, it is not hard to imagine that some graphs arising in applications are naturally planar, like road networks, printed electric circuits, railways, or chemical molecules. For more concreteness, a good set of search keywords is "planar graph applications". One of my first hits was [1], from the domain of computer vision.
[1] Schmidt, Frank R., Eno Toppe, and Daniel Cremers. "Efficient planar graph cuts with applications in computer vision." Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009. | {
"domain": "cs.stackexchange",
"id": 8339,
"tags": "graphs, planar-graphs"
} |
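As a small illustration of the "quickly test" point in the answer above: even before running a full linear-time planarity algorithm, one can apply the necessary edge-count condition that follows from Euler's formula. This stdlib-only sketch (function name my own) rejects $K_5$ and $K_{3,3}$ immediately; note the condition is necessary but not sufficient, so passing it does not prove planarity.

```python
def violates_euler_bound(n, m, bipartite=False):
    """Necessary (not sufficient) planarity condition from Euler's formula:
    a simple connected planar graph with n >= 3 vertices has at most
    3n - 6 edges (2n - 4 if the graph is also bipartite)."""
    if n < 3:
        return False                      # bound only applies for n >= 3
    bound = 2 * n - 4 if bipartite else 3 * n - 6
    return m > bound

# K5: 5 vertices, 10 edges  -> 10 > 3*5 - 6 = 9, so non-planar
# K_{3,3}: 6 vertices, 9 edges, bipartite -> 9 > 2*6 - 4 = 8, so non-planar
# K4: 4 vertices, 6 edges   -> 6 <= 6, bound passes (and K4 is indeed planar)
```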
Charging a capacitor | Question: Below is an excerpt from a physics textbook:
One common way to charge a capacitor is to connect these two wires to opposite terminals of a battery. Once the charges $Q$ and $-Q$ are established on the conductors, the battery is disconnected. This gives a fixed potential difference $V_{ab}$ between the conductors (that is, the potential of the positively charged conductor $a$ with respect to the negatively charged conductor $b$) that is just equal to the voltage of the battery.
Why does the potential difference between the conductors equal the voltage of the battery when the battery is disconnected? Could someone please provide a detailed explanation? Thanks in advance.
Answer: Once the battery is disconnected, the charge on the capacitor plates is stuck where it is and has no path to go anywhere else. Since the charge remains on the plates, there is an electric field between the plates. And because there's an electric field between the plates, there must be a voltage difference between them.
We know the voltage was equal to the battery voltage when the battery was connected. And since it doesn't change when the battery is disconnected, it must still be equal to the battery voltage afterwards. | {
"domain": "physics.stackexchange",
"id": 12636,
"tags": "electromagnetism, electricity, capacitance, conductors"
} |
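A numeric illustration of the answer above, using hypothetical values (a $10\,\mu\text{F}$ capacitor and a $9\,\text{V}$ battery, not taken from the textbook excerpt):

```latex
% While the battery is connected, it fixes the plate voltage, so
Q = C\,V_{ab} = (10\,\mu\mathrm{F})(9\,\mathrm{V}) = 90\,\mu\mathrm{C}.
% After disconnection the charge Q has nowhere to go, hence
V_{ab} = \frac{Q}{C} = \frac{90\,\mu\mathrm{C}}{10\,\mu\mathrm{F}} = 9\,\mathrm{V}.
```

That is, the potential difference stays equal to the battery voltage until some conducting path lets the charge move.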
Body Energy Consumption: calculating watts and kwh given calories? | Question: I want to calculate the energy consumption in watts of the human body from calories consumed. My number is different than the expected value. Does anyone know where I'm going wrong?
$$E_{\rm day} = \rm 1900\,{cal} = 4.1\,\frac J{cal} \cdot 1900\,{cal} = 7790\,J$$
$$E_{\rm h} = \rm 7790\,J/24\,h = 324\,J/h = 5\,J/s$$
$$E_{\rm s} = \rm 5\,J/s = 5\,watt$$
$$E_{\rm kWh} = \rm 5\,W \cdot 3600\,s = 18,000\,Wh$$
but the expected value is that of an incandescent light bulb at $85\,{\rm W} < E_{\rm s} < \rm 120\,W$. $\ E_{\rm kWh}$ looks wrong.
Answer: (1) The value in calories that you've taken is very likely in food calories ($\text{Cal}$), because $1\,900\,\text{cal}$ a day is too little for an average human. The conversion is:
$$1\,\text{food calorie (Cal)} = 1\,000\,\text{ cal}$$
On making this correction, you will get about $92\,\rm W$ of power consumed. Also note that $324\,\rm J/h = 0.09\,J/s \neq 5\,J/s$.
(2) The conversion of J to kWh is missing a factor of $10^3$ (the "kilo" in kilowatt-hour).
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 87419,
"tags": "energy"
} |
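The corrected arithmetic can be sanity-checked in a few lines. This sketch (constants and function name are mine) uses the thermochemical value $4.184$ J/cal by default, but also accepts the question's rounded $4.1$:

```python
CAL_PER_KCAL = 1000          # 1 food Calorie (Cal/kcal) = 1000 cal
J_PER_CAL = 4.184            # thermochemical calorie; the question rounds to 4.1
SECONDS_PER_DAY = 24 * 3600  # 86 400 s

def daily_power_watts(kcal_per_day, j_per_cal=J_PER_CAL):
    """Average power in watts for a given daily intake in food calories."""
    joules_per_day = kcal_per_day * CAL_PER_KCAL * j_per_cal
    return joules_per_day / SECONDS_PER_DAY

p = daily_power_watts(1900, j_per_cal=4.1)   # about 90 W with the question's 4.1 J/cal
```

With $4.184$ J/cal the result is about $92$ W, matching the answer and falling in the quoted light-bulb range.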
Prove the language $\{x \in \Sigma^* : \exists w \in \Sigma^* \ xww \in L \}$ for regular language $L$ is regular | Question:
Let $\Sigma=\{0,1\}$ and $L$ be a regular language. Prove that
$$Z(L) = \{x \in \Sigma^* : \exists w \in \Sigma^* \ xww \in L \}$$
is a regular language.
I tried to build an NFA based on the DFA that accepts $L$ but failed to do so. I don't know how to ensure the $\, ww \,$ part. Please advise.
Answer: Let $A = (Q, \Sigma, \delta, q_0, F)$ be a DFA that accepts $L$. For $q, q' \in Q$, define:
$L_{q,q'} = \{u\in \Sigma^* \mid \delta^*(q, u) = q'\}$;
$L_{q',F} = \{u\in \Sigma^* \mid \delta^*(q', u) \in F\}$.
It is quite easy (can you prove it?) to see that those languages are regular.
Now the language $\{x\in \Sigma^*\mid \exists w\in\Sigma^*, xww\in L\}$ can be written as:
$$\bigcup\limits_{q\in Q}L_{q_0,q}\cdot M(q)$$
Where $M(q) = \left\{\begin{array}{rl}\emptyset&\text{if }\bigcup\limits_{q'\in Q}L_{q,q'}\cap L_{q',F}=\emptyset\\\{\varepsilon\}&\text{otherwise}\end{array}\right.$
$M(q)$ is regular since it is either $\emptyset$ or $\{\varepsilon\}$, so that means that $Z(L)$ is regular. | {
"domain": "cs.stackexchange",
"id": 18441,
"tags": "regular-languages, automata, finite-automata"
} |
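The construction above can be sanity-checked on small DFAs: deciding whether $M(q) = \{\varepsilon\}$ amounts to an emptiness test, done here by breadth-first search in the product automaton that tracks $(\delta^*(q, u), \delta^*(q', u))$ for a common word $u$. A Python sketch, assuming the DFA is given as a nested transition dict (all names hypothetical):

```python
from collections import deque

def good_states(states, alphabet, delta, finals):
    """States q with M(q) = {eps}: some w satisfies delta*(q, w) = q'
    and delta*(q', w) in finals, for some intermediate state q'."""
    return {q for q in states
            if any(_reaches(q, qp, alphabet, delta, finals) for qp in states)}

def _reaches(q, qp, alphabet, delta, finals):
    """BFS over product states (delta*(q,u), delta*(qp,u)) for all words u."""
    start, seen = (q, qp), {(q, qp)}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == qp and b in finals:     # the same word u reached both targets
            return True
        for s in alphabet:
            nxt = (delta[a][s], delta[b][s])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def in_Z(x, q0, delta, good):
    """x is in Z(L) iff delta*(q0, x) lands in a good state."""
    q = q0
    for s in x:
        q = delta[q][s]
    return q in good
```

For example, with $L$ the even-length strings over $\{0\}$ (a two-state parity DFA), $xww \in L$ forces $|x|$ even since $|ww|$ always is, so $Z(L) = L$, and indeed only the even-parity state is "good".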