Perturbation theory omitting high-order powers
Question: For my second-order energy correction for a harmonic oscillator in an electric field I have the following: $$q^2\varepsilon^2\sum_{m\neq n}\frac{|\langle m|x|n\rangle|^2}{E^{(0)}_n-E^{(0)}_m}+\dots$$ My first question is what values of $m$ do we sum over? Secondly, my textbook says we neglect higher-power terms; does this mean higher values of $m$ or what? If I have left any information out (which is very likely) just say. Answer: No, it doesn't mean higher values of $m$, as $m$ here indexes the set of orthonormal eigenstates of the system under study. In order to better understand where and how one arrives at 1st, 2nd, 3rd, ... order corrections, you need to look at the derivation of time-independent perturbation theory that you find in most QM textbooks; there's also a rather detailed derivation on Wikipedia. An overview: When you write out the time-independent Schrödinger equation for the perturbed system: $$(H_0 + \lambda V)\left|n\right > = E_n \left|n\right > $$ where $H_0$ is the Hamiltonian of the unperturbed system, $\left|n\right >$ are the eigenstates of the perturbed system, $\lambda$ is a perturbation parameter describing its strength, $E_n$ are the energy eigenvalues, and $V$ can be an external field seen as perturbative. Now for small perturbations, i.e. small $\lambda$, we can expand the above equation as a power series in $\lambda$: $$E_n = E_n^{0} +\lambda E_n^{1}+\lambda^2 E_n^{2}+\lambda^3 E_n^{3}+\dots$$ and $$\left|n\right >=\left|n^0\right >+\lambda\left|n^1\right >+\lambda^2\left|n^2\right >+\dots$$ Substituting the expanded version into Schrödinger's equation again, you get a system of equations, one for each power of $\lambda$ you intend to keep ($\lambda^0,\lambda^1,\lambda^2,\dots$); solving each of them brings you to: First order correction (expectation value of the perturbation Hamiltonian taken in the unperturbed state): $$E_n^1=\left<n^0 \right|V\left|n^0\right >$$ Second order: $$E_n^2= \left<n^0 \right|V\left|n^1\right > = \sum_{m\ne n}\frac{\left|\left<m^0 \right|V\left|n^0\right >\right|^2}{E_n^0-E_m^0}$$ Third order: $$E_n^3= \left<n^0 \right|V\left|n^2\right >=... $$ All this is only valid for non-degenerate states, as I did not take degeneracy into account. EDIT: Concerning your ''m'' confusion: Example of m: Take the hydrogen atom, first eigenstate would be $(n,l,m_l)=(1,0,0)$, second state $(2,0,0)$ and so on... (remember the wave-functions...)
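To see concretely which $m$ actually contribute, here is a quick numerical check (a sketch, with units chosen so $\hbar = m = \omega = q = \varepsilon = 1$ and an arbitrary basis truncation): in the oscillator basis $\langle m|x|n\rangle$ vanishes except for $m = n \pm 1$, so only two terms of the sum survive, and the result matches the exact shift $-q^2\varepsilon^2/(2m\omega^2)$.

```python
import numpy as np

hbar = mass = omega = 1.0   # natural units (illustrative choice)
q = eps = 1.0
N = 50                      # basis truncation (arbitrary)

# <m|x|n> is nonzero only for m = n ± 1 in the number basis
n = np.arange(N)
x = np.zeros((N, N))
off = np.sqrt(hbar / (2 * mass * omega)) * np.sqrt(n[1:])
x[n[:-1], n[1:]] = off
x[n[1:], n[:-1]] = off

E0 = hbar * omega * (n + 0.5)   # unperturbed energies

# Second-order correction for the ground state (target n = 0)
target = 0
E2 = sum(q**2 * eps**2 * x[m_, target]**2 / (E0[target] - E0[m_])
         for m_ in range(N) if m_ != target)

print(E2)   # -0.5, i.e. the exact result -q^2 eps^2 / (2 m omega^2)
```

Only the $m = 1$ term contributes here; every other $m$ drops out because its matrix element is zero, which is exactly why "neglecting higher powers" refers to powers of $\lambda$, not to values of $m$.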
{ "domain": "physics.stackexchange", "id": 15388, "tags": "quantum-mechanics, perturbation-theory" }
Conservation law of equation involving Hilbert transform
Question: I am trying to confirm a conservation law I came across in a paper (Janssen 1983, "On a fourth-order envelope equation for deep-water waves", Journal of Fluid Mechanics), and am having difficulty. In particular, I'm trying to confirm the conservation of linear momentum $P$, where $P = \frac{i}{2} \int AA^*_x - A^*A_x \ dx$, where $^*$ denotes the complex conjugate and spatial integrals are taken to be over all space in this question. Now, the governing equation takes the form $A_t = \mathcal{N}(A,A^*) - P.V. \ i\alpha\ A\int_{-\infty}^{\infty} \frac{\partial|A|^2}{\partial x'}\frac{1}{x-x'} \ dx'$, where $\mathcal{N}$ is a (nonlinear) operator describing the rest of the dynamics, which is not important for this question, P.V. denotes the principal value and $\alpha$ is some real constant. Note, this integral is proportional to the Hilbert transform. Now, I want to look at the time evolution of the momentum $P$. To that end, we have $\frac{d P}{dt} = \frac{i}{2} \int \dot{A}A^*_x +A\dot{A}_x^* -\dot{A}^*A_x -A^*\dot{A}_x \ dx$. Substituting in our governing equation, and using integration by parts (we assume the field $A$ is compactly supported), we find $\frac{dP}{dt}=\frac{\alpha}{2} \int (|A|^2)_x \mathcal{H} (|A|^2_x) \ dx$ where $\mathcal{H}(|A|^2_x) \equiv P.V. \int_{-\infty}^{\infty} \frac{\partial|A|^2}{\partial x'}\frac{1}{x-x'} \ dx'$. Now, this term is non-zero, whereas the author claims $P$ is conserved. I don't see why this integral should vanish, but perhaps I'm not exploiting a property of the Hilbert transform. Am I missing something obvious? Thanks, Nick Answer: Since $\mathcal{H}$ is anti-self-adjoint, we have $$\langle F,\mathcal{H}F\rangle=\frac{\langle F,\mathcal{H}F\rangle-\langle \mathcal{H}F,F\rangle}{2}=0$$ since $F=|A|_x^2$ is real. Thus $\frac{dP}{dt}=\frac{\alpha}{2}\langle F,\mathcal{H}F\rangle=0$ (unless I'm misreading your notation).
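The anti-self-adjointness is easy to check numerically (the test function below is chosen arbitrarily as a stand-in for $(|A|^2)_x$; scipy's `hilbert` returns the analytic signal, whose imaginary part is the discrete Hilbert transform):

```python
import numpy as np
from scipy.signal import hilbert

# A smooth, localized, real test function F(x)
x = np.linspace(-20, 20, 4096, endpoint=False)
F = np.exp(-x**2) * np.cos(3 * x)

HF = np.imag(hilbert(F))                 # discrete Hilbert transform of F
inner = np.sum(F * HF) * (x[1] - x[0])   # <F, HF>

print(inner)   # ≈ 0: H is anti-self-adjoint, so <F, HF> vanishes for real F
```

In Fourier terms, $\mathcal{H}$ multiplies $\hat F(k)$ by $-i\,\mathrm{sgn}(k)$, and for real $F$ the contributions from $k$ and $-k$ cancel exactly, which is the discrete version of the argument above.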
{ "domain": "physics.stackexchange", "id": 13570, "tags": "conservation-laws" }
How to combine GridSearchCV with Early Stopping?
Question: I'm a beginner in machine learning and want to train a CNN (for image recognition) with optimized hyperparameters like dropout rate, learning rate and number of epochs. I'm trying to find the optimal hyperparameters via GridSearchCV from Scikit-learn. I have often read that GridSearchCV can be used in combination with early stopping, but I cannot find sample code in which this is demonstrated. With EarlyStopping I would try to find the optimal number of epochs, but I don't know how I can combine EarlyStopping with GridSearchCV, or at least with cross-validation. Can anyone give me a hint on how to do that? It would be a great help. My current code looks like this:

def create_model(dropout_rate_1=0.0, dropout_rate_2=0.0, learn_rate=0.001):
    model = Sequential()
    model.add(Conv2D(32, kernel_size=(3,3), input_shape=(28,28,1), activation='relu', padding='same'))
    model.add(Conv2D(32, kernel_size=(3,3), activation='relu', padding='same'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Dropout(dropout_rate_1))
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(dropout_rate_2))
    model.add(Dense(10, activation='softmax'))
    optimizer = Adam(lr=learn_rate)
    model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=create_model, epochs=50, batch_size=10, verbose=0)
epochs = [30, 40, 50, 60]
dropout_rate_1 = [0.0, 0.2, 0.4, 0.6]
dropout_rate_2 = [0.0, 0.2, 0.4, 0.6]
learn_rate = [0.0001, 0.001, 0.01]
param_grid = dict(dropout_rate_1=dropout_rate_1, dropout_rate_2=dropout_rate_2, learn_rate=learn_rate, epochs=epochs)
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=-1, cv=5)
grid_result = grid.fit(X, y)

Answer: Just to add to others here: I guess you simply need to include an early stopping callback in your fit().
Something like:

from keras.callbacks import EarlyStopping

# Define early stopping
early_stopping = EarlyStopping(monitor='val_loss', patience=epochs_to_wait_for_improve)

# Add ES into fit
history = model.fit(..., callbacks=[early_stopping])
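To show the whole loop end-to-end in a form that runs without a GPU, here is a scikit-learn-only analogue (dataset and grid values are invented for illustration): GradientBoostingClassifier has built-in early stopping via `n_iter_no_change`, so each candidate in the grid stops adding trees once its internal validation score stalls.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, random_state=0)

# Early stopping lives inside the estimator: a fifth of the training fold
# is held out, and boosting stops when its score stalls for 5 rounds.
est = GradientBoostingClassifier(
    n_estimators=500, validation_fraction=0.2,
    n_iter_no_change=5, random_state=0)

grid = GridSearchCV(est, {"learning_rate": [0.01, 0.1]}, cv=3)
grid.fit(X, y)
print(grid.best_params_, grid.best_estimator_.n_estimators_)
```

With the Keras wrapper, the analogous move (as far as I know) is to pass the callback through the search itself, e.g. `grid.fit(X, y, callbacks=[early_stopping], validation_split=0.1)`, since fit parameters are forwarded to each cross-validation fit; then you can drop `epochs` from the grid entirely and let the callback pick the stopping point per fold.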
{ "domain": "datascience.stackexchange", "id": 6365, "tags": "scikit-learn, hyperparameter-tuning, convolutional-neural-network, gridsearchcv, epochs" }
What's the function of those coils there?
Question: What do they do? The aux contacts are normally open and the device is an MR PQM 500 reactive power compensation system. Answer: The aux contacts are normally closed ... This is unlikely. I suggest that they are normally open and close before the main contacts do. You haven't provided a link for the MR PQM 500 system, but I suspect that it is a capacitor bank with a controller which switches the contactors to switch in capacitance as required to bring the power factor closer to unity. There is a problem when switching in capacitors: if the mains voltage is close to its peak at the instant the contacts close, a very high current will flow, with resultant arcing, etc., because the discharged capacitor appears like a short-circuit. To prevent this a special arrangement is employed: The auxiliary contact block at the front is driven by the main contactor movement, but the auxiliary contacts close slightly before the main ones. The three coils on top are inductors which have enough inductance to limit the current into the discharged capacitor to a safe value. This is enough to bring the voltage up so that the main contacts don't see the big surge they otherwise would.
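To put rough numbers on the surge (all component values below are invented for illustration, not taken from the PQM 500 datasheet): the peak current into a discharged capacitor through a series inductance is roughly $V_{peak}\sqrt{C/L}$, so even a modest pre-charge inductor cuts the surge by an order of magnitude.

```python
import math

# Illustrative numbers (assumed, not from any datasheet):
V_peak = 400 * math.sqrt(2)   # peak of a 400 V RMS line
C = 50e-6                     # 50 uF capacitor step
L_wiring = 5e-6               # ~5 uH of stray wiring inductance alone
L_precharge = 500e-6          # added pre-charge inductor

# Peak inrush into a discharged capacitor: I ≈ V_peak * sqrt(C / L)
for name, L in [("wiring only", L_wiring), ("with inductor", L_precharge)]:
    print(f"{name}: ~{V_peak * math.sqrt(C / L):.0f} A peak")
```

With only stray wiring inductance the surge is in the kiloamp range, which is exactly the arcing problem described above; the pre-charge inductors bring it down to something the auxiliary contacts can survive.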
{ "domain": "engineering.stackexchange", "id": 3068, "tags": "electrical-engineering, power-electronics, electrical, power-engineering, electronics" }
Induced Electric Fields and Faraday's Law
Question: I'm an undergraduate student and we just covered Faraday's law. However, I am still confused conceptually about a few things: Faraday's law states that $\oint_{\partial\Sigma} \vec{E} \cdot \vec{dl} = -\frac{d}{dt} \int_\Sigma \vec{B} \cdot \vec{dA}$. What I interpret this to be is that we choose a loop $\partial \Sigma$ and a surface $\Sigma$ bound by this loop, determine the rate of change of magnetic flux through this surface, and this equals the line integral on the left. However, what necessitates that the rate of change of flux through every surface bound by this loop is the same? Consider a scenario where I have a magnetic field which changes as $|B| \propto \frac{1}{t}$ in a certain region, and falls off as $\frac{1}{t^2}$ everywhere else; then if I have a loop completely within the first region, I may either choose a surface entirely within this first region, or I might choose one that is large enough to encompass both. Intuitively, I cannot see how $\frac{\partial B}{\partial t}$, and hence the rate of change of flux, is the same in both cases. Hence I expect the left-hand side to be different, which cannot be, as it is unchanged. We learnt in class that the direction in which we integrate around the loop, that is, the direction of $\vec{dl}$, is such that it forms a right-hand system with $\vec{dA}$. (This is for the case where the surface is chosen so that $\vec{dA}$ points uniformly in the same direction.) What if I instead chose my surface non-uniformly, so that my $\vec{dA}$ vectors point in all sorts of directions? Then (conceptually) how would I fix the direction of $\vec{dl}$? We don't really cover the differential forms of the laws in my course, so I don't really know if there's an answer there. I'm just looking for physical insight more than anything. Thanks for any help. =) Answer: The reason the rate of change of flux through every surface bound by this loop is the same lies in Gauss's law for magnetism.
Since the flux of ${\bf B}$ through any closed surface is zero, it is enough to take two surfaces bound by the same loop. The flux through the resulting closed surface is zero, and then, taking into account that the loop induces opposite orientations on the two surfaces, the two fluxes through the two individual surfaces are equal. The convention to fix the direction on the loop is that once an orientation ${\bf N}$ for the surface has been chosen, $d\vec l$ is such that the vector which forms a right-handed system with ${\bf N}$ and $d\vec l$, and which lies in the tangent plane of the surface at the border, points toward the surface and not out of it. In a more pictorial way, a person following the loop with their head pointing in the same direction as ${\bf N}$ should have the surface on their left.
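A quick numerical illustration of the first point (the field below is chosen arbitrarily, not taken from the question): for any divergence-free $\vec B$, the flux through a flat disk and through a hemisphere sharing the same boundary circle comes out equal. Here $\vec B = (z,\, x,\, y+1)$, which has $\nabla\cdot\vec B = 0$.

```python
import numpy as np

# A divergence-free field (chosen for illustration): B = (z, x, y + 1)
def B(x, y, z):
    return z, x, y + 1.0

n = 600
dphi = 2 * np.pi / n
phi = (np.arange(n) + 0.5) * dphi

# Flux through the flat unit disk in the z = 0 plane (normal +z)
dr = 1.0 / n
r = (np.arange(n) + 0.5) * dr
R, P = np.meshgrid(r, phi)
_, _, Bz = B(R * np.cos(P), R * np.sin(P), 0.0)
flux_disk = np.sum(Bz * R) * dr * dphi          # dA = r dr dphi

# Flux through the upper unit hemisphere bounded by the same circle
dth = (np.pi / 2) / n
th = (np.arange(n) + 0.5) * dth
T, P = np.meshgrid(th, phi)
x, y, z = np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)
Bx, By, Bz2 = B(x, y, z)
flux_hemi = np.sum((Bx * x + By * y + Bz2 * z) * np.sin(T)) * dth * dphi

print(flux_disk, flux_hemi)   # both ≈ pi, because div B = 0
```

Swapping in a field with nonzero divergence breaks the agreement immediately, which is the sense in which Gauss's law for magnetism is what makes Faraday's law well-defined.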
{ "domain": "physics.stackexchange", "id": 57296, "tags": "electromagnetism, magnetic-fields, electric-fields, electromagnetic-induction" }
Given a set of trajectories produced by a fixed policy, what is the standard approach to estimate Q?
Question: Let's say that I have a set of trajectories $\mathcal{D} = \{\tau_1, \dots, \tau_n\}$ produced by an agent acting in an (episodic) MDP with a fixed policy $\pi$. I would like to estimate the $Q$ function of $\pi$ from $\mathcal{D}$. Just to be clear, each trajectory $\tau_j$ is a finite sequence $$ \tau_j = s_0^j, a_0^j, r_0^j, s_1^j, a_1^j, r_1^j, \dots, s_{N_j}^j $$ representing an episode performed w.r.t. $\pi$. What would be the standard approach in this case? Better to use TD learning or Monte Carlo? Answer: What would be the standard approach in this case? Better to use TD learning or Monte Carlo? Both should be fine, but they might lead to different estimates, if both these things apply: The amount of data is relatively small compared to all possibilities from the given environment and policy. Either the policy or the environment is stochastic. The difference is that for each state/action pair estimated: Monte Carlo will estimate based on overall average returns, ignoring individual state transitions and policy choices. Temporal Difference will estimate based on observed state transitions and policy choices. There is a good example of what this might mean numerically in Sutton & Barto, chapter 6, example 6.4. In that case it shows an advantage to TD learning when some states might be sparsely represented in the data whilst others have more instances. Monte Carlo learning will only learn the value of those rarer states from the trajectories where they occur, whilst TD learning will be able to use estimates from other trajectories, provided two or more trajectories overlap later on. This doesn't necessarily make TD learning better. If a trajectory that overlaps with others also happens to include an unusual policy choice, state transition or reward, this may spread sample bias into multiple estimates, whilst Monte Carlo would be affected less by such an outlier.
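Both estimators are easy to sketch on a toy batch of trajectories (the two-state chain below is invented for illustration, in the spirit of Sutton & Barto's example 6.4; it estimates $V$ rather than $Q$, but the bookkeeping for $Q$ is identical with $(s,a)$ keys):

```python
import random
from collections import defaultdict

random.seed(0)

# Tiny episodic chain: 'A' -> 'B' with reward 0, then 'B' terminates
# with reward 1 (prob 0.75) or 0 (prob 0.25). True V(A) = V(B) = 0.75.
def episode():
    r_b = 1.0 if random.random() < 0.75 else 0.0
    return [('A', 0.0), ('B', r_b)]   # (state, reward on leaving it)

trajectories = [episode() for _ in range(2000)]

# Monte Carlo: average the full return observed from each visited state
returns = defaultdict(list)
for tau in trajectories:
    g = 0.0
    for s, r in reversed(tau):        # accumulate return backwards, gamma = 1
        g += r
        returns[s].append(g)
V_mc = {s: sum(v) / len(v) for s, v in returns.items()}

# Batch TD(0): bootstrap each state from the estimate of its successor
V_td = defaultdict(float)
alpha = 0.05
for _ in range(50):                   # sweep the fixed batch repeatedly
    for tau in trajectories:
        for i, (s, r) in enumerate(tau):
            v_next = V_td[tau[i + 1][0]] if i + 1 < len(tau) else 0.0
            V_td[s] += alpha * (r + v_next - V_td[s])

print(V_mc, dict(V_td))
```

On this chain the two agree, because every trajectory visits both states; the divergence the answer describes shows up once some states appear in only a few trajectories.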
{ "domain": "ai.stackexchange", "id": 3325, "tags": "reinforcement-learning, algorithm-request, monte-carlo-methods, temporal-difference-methods" }
What is the definition of a polybasic cleavage site?
Question: I keep coming across the term "polybasic cleavage site" which is implicated in increasing the virulence of many viruses, but I'm struggling to find a definition. Is it just a sequence of amino acids where more than one has a basic side chain? I'd also love to know why it is so important to the function of a virus. Answer: I would define a “polybasic cleavage site” as: A region on a protein consisting of several basic amino acids (generally arginine (R) and lysine (K), rather than histidine) that determines the substrate specificity of certain classes of proteases for that protein. It can also be regarded as a recognition site for the enzyme. The relevance of this to the virulence of some viruses is as follows. Many RNA viruses are translated into one or more polyproteins that are subsequently cleaved proteolytically at specific sites to generate further proteins. In certain instances these sites are ‘polybasic’, and the products generated by the cleavage of such sites play a role in the virulence of viruses. This is nicely illustrated by a paper by Nao et al. in mBio on the mutation of the haemagglutinin (HA) cleavage site of Avian Influenza Virus, where increasing the number of basic amino acids increased the virulence. I have included an illustrative figure from that publication. [Figure: HA cleavage site sequences of the ShimH5 virus strain and its variants passaged in chickens.] The current (2021) interest in polybasic cleavage sites no doubt relates to the presence of such a site (PRRAR) at the S1/S2 junction in the spike protein of SARS-CoV-2 — see, for example, this paper by Peacock et al. in Nature Microbiology.
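As a toy illustration of "recognition site" in sequence terms: the minimal furin-type consensus is often written R-X-X-R (with R-X-[KR]-R preferred; real protease-site prediction is more involved than this sketch), and a simple pattern match picks it out of the SARS-CoV-2 S1/S2 region mentioned above.

```python
import re

# Minimal furin-type motif R-X-X-R (a simplified consensus)
FURIN = re.compile(r"R..R")

# Residues around the SARS-CoV-2 spike S1/S2 junction (contains PRRAR)
s1_s2 = "TNSPRRARSVA"

hit = FURIN.search(s1_s2)
print(hit.group() if hit else "no polybasic site")   # -> RRAR
```

This also makes the definition in the answer concrete: "polybasic" is not just "more than one basic residue somewhere", but several arginines/lysines arranged so that a protease such as furin recognizes and cuts there.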
{ "domain": "biology.stackexchange", "id": 11493, "tags": "virology, enzymes, amino-acids" }
I saw a weird thing around Venus
Question: I pointed my telescope (50mm aperture, 500mm focal length, 4mm eyepiece, image taken through a window) at Venus and saw this: What is it? I don't think this is Uranus or Neptune as my telescope is very bad. I haven't increased the image size too much and it was 6 A.M. (UTC+3, EEST) Answer: Venus is a very bright "morning star" at the moment, but what your image shows is only a blur. The telescope is very far from focus and you can see no details of the planet. You may also be getting some reflections from the glass of the window. If Venus were in focus you would see a very small "half moon" (or perhaps "waxing gibbous") shape at the moment. Try to practice focusing on distant landmarks during the day. That makes it easier to get the focus right at night.
{ "domain": "astronomy.stackexchange", "id": 4817, "tags": "solar-system, telescope, amateur-observing, venus, identify-this-object" }
Median finding with "green forests"?
Question: I have a vague memory of a series of papers working to reduce the constant factor in the number of comparisons for deterministic linear time median finding, using increasingly elaborate (but interesting!) techniques to do so. In particular, I remember one of the papers using the term "green forests" for a data structure storing information from previous comparisons (in an attempt to reuse them and avoid future comparisons). Does this ring any bells? "median finding green forests" turns out to be a terrible search term. Answer: Dorit Dor and Uri Zwick ("Selecting the Median", SODA 1995, pp 28-37) use green factories and analyze their amortized production costs.
{ "domain": "cstheory.stackexchange", "id": 3727, "tags": "reference-request, selection" }
Dynamic stack implementation in C
Question: I've been studying C for a while, as a programmer coming from a C++ background, I'm used to the standard library, STL, etc., and I quickly realized that I need some kind of a containers library/data structure implementations in C. So as an exercise decided to write one! I also intend to use it on my personal C projects, so it's got to be good! Here is my stack implementation. cstack.h: /** * @file cstack.h * * @brief Contains the definition for `cstack` along with the `cstack_*` function signatures. */ #ifndef CSTACK_H #define CSTACK_H typedef signed long long cstack_size_t; typedef struct { cstack_size_t item_size; /**< The size of a single stack item, e.g. sizeof(int) */ char* data; /**< The beginning of the stack. */ char* top; /**< Pointer to the first empty 'slot' in the stack. */ char* cap; /**< Pointer to the end of the stack. */ } cstack; /** * @brief Allocate a new stack. * * @param initial_items_count Specifies how many items should the function allocate space for upfront. * @param item_size The size (in bytes) of a single item, must be > 0. e.g. `sizeof(int)`. * @return The newly allocated stack. NULL on failure. */ cstack* cstack_alloc(cstack_size_t initial_items_count,cstack_size_t item_size); /** * @brief Free the memory allocated by the stack. * * @param stack The stack whose memory to free. */ void cstack_free(cstack* stack); /** * @brief Push a new item onto the stack. * * @param stack The stack to push the item onto. * @param item The item to push onto the stack. * * @note * - The stack is modified in place. * - In the case where the stack is full, i.e. `cstack_full() != 0`, the stack is expanded as necessary. * - In case of failure, the stack remains intact, and the contents are preserved. */ void cstack_push(cstack* stack, void* item); /** * @brief Pop the last (top) item out of the stack. * * @param stack The stack which to pop the item from. * * @note * - The stack is modified in-place. * - In case the stack is already empty, `i.e. 
cstack_empty() != 0`, nothing is done. */ void cstack_pop(cstack* stack); /** * @brief Expand `stack` by `count`. * * @param stack The stack which to expand. * @param count Specifies the number of _extra items_ to add to the stack, must be > 0. * @return The expanded stack. * * @note * - The stack is modified in-place. * - The stack is expanded by count _items_ (_NOT_ bytes). * - In case of failure, the function returns _NULL_, and the contents of `stack` are preserved. */ cstack* cstack_expand(cstack* stack, cstack_size_t count); /** * @brief Truncate/Shrink the stack. * * @param stack The stack to truncate. * @param count Specifies the number of items to remove from the stack, must be > 0. * * The function Shrinks the stack by the amount of _items_ (_NOT_ bytes) specified * by count. * * The items removed are relative to the stack's capacity _Not_ size. * for example: * * stack is a cstack with a capacity of 10 and a size of 6, i.e. cstack_capacity() == 10 * and cstack_size() == 6, on a successful call to cstack_truncate(stack, 4), * the stack has the following properties: * 1. A capacity of 6. * 2. A size of 6. * 3. The contents (items) of the stack remain the same, since the 4 items where still non-existent. * * if you want to truncate all the extra items you may call cstack_truncate() with the result of cstack_free_items() * as the items count. * * @return The truncated stack. * * @note The stack is modified in-place. */ cstack* cstack_truncate(cstack* stack, cstack_size_t count); /** * @brief Copy the contents of src to dst. * * @param dst The stack to copy the data into. * @param src The stack to copy the data from. * @return dst is returned. * * @note * - dst should point to a valid (allocated using cstack_alloc()) stack. * - If src contains more items than dst's capacity, dst is expanded as necessary. * - dst's contents are _overwritten_ up-to src's size. */ cstack* cstack_copy(cstack* dst, const cstack* const src); /** * @brief Duplicate a stack. 
* * @param stack The stack to duplicate. * @return The new stack. * * @note * - The new stack is allocated using cstack_alloc() and should be freed using cstack_free(). * - In case of failure the function returns _NULL_. */ cstack* cstack_dupl(const cstack* const stack); /** * @brief Clear the stack. * * @param stack The stack to be cleared. * @return The cleared stack. * * This function resets the _top_ pointer, * and subsequent calls to cstack_push() will overwrite the existing data. * * @note After calling cstack_clear(), there is no guarantee that the data in the stack is still valid! */ cstack* cstack_clear(cstack* stack); /** * @brief Get the top-most item in the stack. i.e. the last cstack_push()ed item. * * @param stack The stack to get the item from. * @return The item at the top of the stack. * * @note * - If the stack is empty, the function returns _NULL_. * - The returned item is a `void*` which should be cast to the proper type if desired/needed. */ void* cstack_top(const cstack* const stack); /** * @brief Retrieve the size of a single stack item. * * @param stack The stack of which to get the item size of. * @return The item size in bytes. */ cstack_size_t cstack_item_size(const cstack* const stack); /** * @brief Retrieves the count of the items in the stack. * * @param stack The stack of which to get the items count of. * @return The items count. */ cstack_size_t cstack_items_count(const cstack* const stack); /** * @brief Retrieves the available (free) items in the stack. * * @param stack The stack to get the free items of. * @return The number of free items. */ cstack_size_t cstack_free_items(const cstack* const stack); /** * @brief Retrieves the size of the items in the stack. * * @param stack The stack of which to get the size of. * @return The size of the items in the stack, in _bytes_. */ cstack_size_t cstack_size(const cstack* const stack); /** * @brief Retrieves the total capacity of the stack. 
* * @param stack The stack of which to get the capacity of. * @return The capacity of the stack, in _bytes_. */ cstack_size_t cstack_capacity(const cstack* const stack); /** * @brief Retrieve the available (free) space in the stack. * * @param stack The stack to get the free space of. * @return The free space (in bytes) in the stack. */ cstack_size_t cstack_free_space(const cstack* const stack); /** * @brief Checks if the stack is empty, i.e. cstack_size() == 0. * * @param stack The stack to check. * @return Returns a non-zero value if empty, 0 otherwise. */ int cstack_empty(const cstack* const stack); /** * @brief Checks if the stack is full, i.e. cstack_size() == cstack_capacity(). * * @param stack The stack to check if full. * @return Returns a non-zero value if full, 0 otherwise. */ int cstack_full(const cstack* const stack); #endif // CSTACK_H cstack.c #include "cstack.h" #include <string.h> #include <stdlib.h> #if defined(ENABLE_ASSERTS) #if defined(_WIN32) #define DEBUG_BREAK __debugbreak(); #elif defined(__linux__) || (!defined(_WIN32) && (defined(__unix__) || defined(__unix))) #include <signal.h> #define DEBUG_BREAK raise(SIGTRAP) #else #define DEBUG_BREAK; #endif // WIN32 #include <stdio.h> #define ASSERT(x) \ if (x) { } \ else \ { \ fprintf(stderr, "%s (%d): Assertion failed: %s\n", __FILE__, __LINE__, #x); DEBUG_BREAK; \ } #else #define ASSERT(x) #endif #ifndef min #define min(x, y) (((x) < (y)) ? (x) : (y)) #endif #ifndef max #define max(x, y) (((x) > (y)) ? 
(x) : (y)) #endif cstack* cstack_alloc(cstack_size_t initial_items_count, cstack_size_t item_size) { ASSERT(initial_items_count > 0); ASSERT(item_size > 0); cstack* new_stack = malloc(sizeof(cstack)); if (!new_stack) { return NULL; } cstack_size_t size = initial_items_count * item_size; new_stack->data = malloc(size); if (!new_stack->data) { free(new_stack); return NULL; } new_stack->item_size = item_size; new_stack->top = new_stack->data; new_stack->cap = new_stack->data + (size); return new_stack; } void cstack_free(cstack* stack) { if (stack) { if (stack->data) { free(stack->data); stack->data = NULL; } stack->item_size = 0; stack->top = NULL; stack->cap = NULL; free(stack); } } void cstack_push(cstack* stack, void* item) { ASSERT(stack); ASSERT(item); if (cstack_full(stack)) { if (!cstack_expand(stack, 1)) { return; } } memcpy(stack->top, item, cstack_item_size(stack)); stack->top += cstack_item_size(stack); } void cstack_pop(cstack* stack) { ASSERT(stack); if (!cstack_empty(stack)) { stack->top -= cstack_item_size(stack); } } cstack* cstack_expand(cstack* stack, cstack_size_t count) { ASSERT(stack); ASSERT(count > 0); cstack_size_t new_size = cstack_capacity(stack) + (count * cstack_item_size(stack)); cstack_size_t top_offset = stack->top - stack->data; char* data_backup = stack->data; stack->data = realloc(stack->data, new_size); if (!stack->data) { stack->data = data_backup; return NULL; } stack->top = stack->data + top_offset; stack->cap = stack->data + new_size; return stack; } cstack* cstack_truncate(cstack* stack, cstack_size_t count) { ASSERT(stack); ASSERT(count > 0); ASSERT(count <= cstack_items_count(stack)); cstack_size_t new_size = cstack_capacity(stack) - (count * cstack_item_size(stack)); cstack_size_t top_offset = min(new_size, cstack_size(stack)); char* data_backup = stack->data; stack->data = realloc(stack->data, new_size); if (!stack->data) { stack->data = data_backup; return NULL; } stack->top = stack->data + top_offset; stack->cap = 
stack->data + new_size; return stack; } cstack* cstack_copy(cstack* dst, const cstack* const src) { ASSERT(dst); ASSERT(src); ASSERT(cstack_item_size(dst) == cstack_item_size(src)); cstack_size_t extra_items = (cstack_size(src) - cstack_capacity(dst)) / cstack_item_size(dst); if (extra_items > 0) { cstack_expand(dst, extra_items); } memcpy(dst->data, src->data, cstack_size(src)); cstack_size_t src_top_offset = src->top - src->data; cstack_size_t dst_top_offset = dst->top - dst->data; cstack_size_t offset = max(src_top_offset, dst_top_offset); dst->top = dst->data + offset; return dst; } cstack* cstack_dupl(const cstack* const stack) { ASSERT(stack); cstack* new_stack = cstack_alloc(cstack_items_count(stack), cstack_item_size(stack)); if (!new_stack) { return NULL; } cstack_copy(new_stack, stack); return new_stack; } cstack* cstack_clear(cstack* stack) { ASSERT(stack); stack->top = stack->data; return stack; } void* cstack_top(const cstack* const stack) { ASSERT(stack); if (cstack_empty(stack)) { return NULL; } // top points to the item after the last one. i.e. 
to the next empty 'slot' return (void*)(stack->top - cstack_item_size(stack)); } cstack_size_t cstack_item_size(const cstack* const stack) { ASSERT(stack); return stack->item_size; } cstack_size_t cstack_items_count(const cstack* const stack) { ASSERT(stack); return cstack_size(stack) / cstack_item_size(stack); } cstack_size_t cstack_free_items(const cstack* const stack) { ASSERT(stack); return cstack_free_space(stack) / cstack_item_size(stack); } cstack_size_t cstack_size(const cstack* const stack) { ASSERT(stack); return stack->top - stack->data; } cstack_size_t cstack_capacity(const cstack* const stack) { ASSERT(stack); return stack->cap - stack->data; } cstack_size_t cstack_free_space(const cstack* const stack) { ASSERT(stack); return cstack_capacity(stack) - cstack_size(stack); } int cstack_empty(const cstack* const stack) { ASSERT(stack); return cstack_size(stack) == 0; } int cstack_full(const cstack* const stack) { ASSERT(stack); return cstack_size(stack) == cstack_capacity(stack); } main.c #include <stdio.h> #include "cstack.h" void print_stack(const cstack* const stack); int main() { cstack* stack = cstack_alloc(4, sizeof(int)); while (1) { int choice = 0; fprintf(stdout, "1. push\n"); fprintf(stdout, "2. pop\n"); fprintf(stdout, "3. 
print\n"); fprintf(stdout, ">>> "); fscanf(stdin, "%d", &choice); switch (choice) { case 1: fprintf(stdout, "Number to push: "); int num = 0; fscanf(stdin, "%d", &num); cstack_push(stack, &num); break; case 2: if (cstack_empty(stack)) { fprintf(stdout, "Stack is empty!\n"); continue; } fprintf(stdout, "Poping %d (at %p)\n", *(int*)cstack_top(stack), cstack_top(stack)); cstack_pop(stack); break; case 3: print_stack(stack); break; default: fprintf(stdout, "Invalid option!"); continue; } } cstack_free(stack); return 0; } void print_stack(const cstack* const stack) { fprintf(stdout, "Item size: %lld\n", cstack_item_size(stack)); fprintf(stdout, "Items count: %lld\n", cstack_items_count(stack)); fprintf(stdout, "Free items: %lld\n", cstack_free_items(stack)); fprintf(stdout, "Stack size: %lld\n", cstack_size(stack)); fprintf(stdout, "Stack cap: %lld\n", cstack_capacity(stack)); fprintf(stdout, "Stack free space: %lld\n", cstack_free_space(stack)); if (!cstack_empty(stack)) { fprintf(stdout, "Stack top: %d (at %p)\n", *(int*)cstack_top(stack), cstack_top(stack)); } } As a beginner, I'm open to any suggestions, best practices, coding conventions, bugs (obviously), performance improvements, improvements to the interface/docs, etc. Any suggestions are very welcome. Answer: The code is nicely documented, so keep that up! I see some things that may help you improve your code. Use int main(void) in C You mentioned that you were coming from C++, so although it's not a problem in this code, it's important to realize that C and C++ are different when it comes to the formal argument list of a function. In C, use int main(void) instead of int main(). See this question for details. Think of the user The existing program has no graceful way for the user to end which also means that the cstack_free() function is never called. I'd suggest that instead of while (1), you could do this: bool running = true; while (running) and then provide a menu choice for the user to quit. 
Check return values for errors

The calls to malloc are all properly checked, but fscanf can also fail. You must check the return values to make sure they haven't or your program may crash (or worse) when given malformed input. Rigorous error handling is the difference between mostly working versus bug-free software. You should, of course, strive for the latter.

Avoid function-like macros

Function-like macros are a common source of errors and the min and max macros are particularly dangerous. The reason is that any invocation of such a macro with a side effect will be executed multiple times. Here's an example:

int a = 7, b = 9;
printf("a = %d, b = %d\n", a, b);
int c = max(++a, b++);
printf("a = %d, b = %d\n", a, b);
printf("c = %d\n", c);

The first printf, predictably, prints a = 7, b = 9. However, the second two printf statements result in this: a = 8, b = 11 and c = 10. What a mess! The solution is simple: write a function instead. That's particularly simple in this case because each macro is used only once anyway.

Use string concatenation

The menu includes these lines:

fprintf(stdout, "1. push\n");
fprintf(stdout, "2. pop\n");
fprintf(stdout, "3. print\n");
fprintf(stdout, ">>> ");

There are a couple of ways in which this could be improved. First, since you're printing to stdout, you could simply use printf. Second, the strings can be concatenated into a single invocation of printf:

printf("1. push\n"
       "2. pop\n"
       "3. print\n"
       ">>> ");

Reconsider the interface

If a cstack_push fails because realloc fails, the user has no way to detect this condition because cstack_push does not return anything. It would be nice to provide a bool return instead.

Exercise all functions

It's understood that the sample program is just an illustration and not a comprehensive test, but it would be good to write test code that exercises all functions.
{ "domain": "codereview.stackexchange", "id": 37981, "tags": "c, stack" }
Resource plant anatomy/lifecycle vocabulary
Question: I've been curious about a lot of vocabulary words in the world of botany, but I've had quite a bit of trouble finding a resource online to help me out. Optimally, I'm looking for something with diagrams of anatomy over the course of time, maybe some photo examples, and some sort of dictionary to accompany them. Is there anything like this, online or offline, that anyone knows about? Answer: specifically development-focused An Introduction to Plant Structure and Development by Beck looks comprehensive and has been well-reviewed. It has both photos and diagrams, and sections certainly deal specifically with growth/lifecycle. It's available to preview or purchase as an ebook on Google Play. more general plant anatomy vocabulary resources In print, Plant Identification Terminology: an illustrated glossary by Harris and Harris is a fairly standard reference, and I've found it very useful both for looking up terms and for just flipping through. Online, but with fewer diagrams, you could try the Glossary page on the Angiosperm Phylogeny Website. This might sound silly, but if you're looking for more of a course on the fundamentals rather than a dictionary of the specifics, the Botany Coloring Book is actually a great, seriously useful work.
{ "domain": "biology.stackexchange", "id": 1159, "tags": "botany, book-recommendation" }
Why does a stack of transparencies appear white when a single sheet is clear?
Question: My daughter and I found a box of old transparencies (clear plastic sheets) and something struck me as odd: even though each transparency is clear, the whole stack of transparencies looks whitish - why is that? Answer: Note that this is the same phenomenon that happens when you look at a roll of clear packing tape, which I have already thought through. When you hold a single sheet up, it’s clear; I expect a small reflectance because only a small amount of light is being reflected from the front and back surfaces, while most of it is being transmitted through. However, holding the whole stack will decrease the transmittance because there are more surfaces to reflect light back into the incident medium. However, your question is why the whitish appearance? When I think of white it means that all of the frequencies of light are being reflected. That is, the many surfaces of the transparencies are acting mirror-like. Since each transparency surface is smooth, you are getting specular reflections (all visible frequencies are reflected). In addition, the index of refraction for plastic does not vary much over the visible frequency range, therefore, the plastic reflects all visible light almost identically (I am essentially quoting David here). Since you are getting the entire incident frequencies reflected back into the incident medium, the transparencies will appear white.
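To put a rough number on this, here is a small sketch (my own illustration, not part of the answer) using the standard incoherent-stack result $R_N = NR/(1+(N-1)R)$ for $N$ identical non-absorbing surfaces, assuming a refractive index of about 1.5 for the plastic:

```python
# Illustrative estimate (assumed values): a plastic with n ~ 1.5 has a
# normal-incidence single-surface Fresnel reflectance of ((n-1)/(n+1))**2,
# about 4%. For incoherent light, N identical non-absorbing surfaces
# reflect a total of R_N = N*R / (1 + (N-1)*R).
n = 1.5
R = ((n - 1) / (n + 1)) ** 2  # single-surface reflectance, ~0.04

def stack_reflectance(num_surfaces, R):
    # incoherent-summation result for identical, non-absorbing surfaces
    return num_surfaces * R / (1 + (num_surfaces - 1) * R)

R_single_sheet = stack_reflectance(2, R)   # one sheet has 2 surfaces: still mostly clear
R_stack = stack_reflectance(100, R)        # 50 sheets (100 surfaces): mostly reflective
print(R_single_sheet, R_stack)
```

So a single sheet reflects only a few percent of the light, while a stack of a few dozen sheets sends most of it back, at all visible frequencies, which is why the stack looks whitish.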
{ "domain": "physics.stackexchange", "id": 9040, "tags": "optics, everyday-life" }
Is the second law of thermodynamics even a law?
Question: I am a high school student trying to wrap my head around the second law of thermodynamics for the past few days to no avail. Having only a cursory knowledge of calculus, and chemistry and physics in general doesn't help either. The second law of thermodynamics says that entropy of the universe always increases. For constant pressure and temperature conditions, Gibbs free energy equation is used to calculate whether the reaction is spontaneous or not, meaning whether it will occur or not. The more I try to read about it, the more proof I find against above paragraph. After having read about the Poincaré recurrence theorem, Maxwell's demon, and this excellent Quora answer, I would say that the whole law of thermodynamics is a farce. A plot by Gibbs and Helmholtz and Boltzmann and Maxwell to dupe the students while they laugh from the heavens. Please excuse my rambling. It's the product of tearing out half my hair trying to understand this. From what I have read, it seems that the second law is not really a law, but a statement about the most probable arrangement of a given system. Of course, I don't claim to understand anything from the sources I have mentioned, nor do I think I will understand before at least an undergraduate course in partial differential equations, calculus and all the other prerequisites required to even start. So my goal in asking this question is asking if anyone is capable and willing to write a concise and simple explanation for a high school student which would also sort out all the fallacies I have mentioned above, or can direct me to someone who can. I understand that this might be a Feynman-esque feat not suitable for this site and I apologise for that. EDIT: I have gained a somewhat good understanding of the second law (for a high school student). So my question is not as open ended as it was. 
What I really want to ask now is: What does it mean for entropy to decrease, if there was a small enough isolated system so that the chances of non-spontaneous events happening was not 1 in TREE[1000]? Would all laws of thermodynamics go out of the window? It seems to me that this weakness (I don’t know how to phrase this) of the second law is largely ignored because the chances of this happening are approximately 0. Of course, all this rests on the assumption that entropy can decrease, which is what I have gathered, although not all people agree, but many do. If it can decrease, doesn’t that mean that as the system gets smaller the laws of thermodynamics get weaker? Where do you draw the line after which the laws of thermodynamics are not reliable? Also, when I use the Gibbs equation to find the boiling point of water at NTP, would that boiling point change as I reduced the number of particles? Is my boiling point wrong? Boiling point is a bulk property, but you could easily substitute a chemical reaction in that. Answer: I'm going to specifically address the two concepts you brought up in your second point: The Poincare recurrence theorem In layman's terms, this theorem reads: "For any system in a large class of systems that contains systems in thermodynamic equilibrium: if you take a picture of the arrangement of the system at a particular instant, then if you wait long enough, there will eventually be another instant in which the system's arrangement is very close to the one in the picture." This doesn't actually contradict anything in thermodynamics, because thermodynamics is built such that it doesn't really care what specific arrangement the system is in at a particular instant. That's the reason it was developed, after all: it's impossible to measure the precise positions and velocities of $10^{23}$ particles at once, so we have to have a way to deal with our lack of knowledge of the initial state of a system. 
This is where thermodynamics comes in: it turns out that if you make some fairly simple assumptions about the microscopic behavior of a system, then you can make accurate predictions about how the system behaves in equilibrium. At any instant, a system in thermodynamic equilibrium is in a particular specific arrangement, which we will call a microstate. If you watch a system in thermodynamic equilibrium, it will adopt many, many different microstates. Thermodynamics makes the assumption that every accessible microstate is equally probable. If you take the set of all microstates that a given system can adopt in equilibrium, that set is called the macrostate of the system. Thermodynamic quantities are defined only on the macrostates. For example, there is no such thing as the entropy of a microstate. The entropy is a property of a system in equilibrium, not a particular arrangement of atoms. So, if a system in equilibrium is in a macrostate that contains a highly-ordered microstate, the fact that the system can sometimes be in that microstate has absolutely no bearing on the entropy of that system. The existence of that microstate was already accounted for when calculating the entropy. So the Poincare recurrence theorem doesn't really have much at all to do with the second law of thermodynamics, which talks only about how entropy behaves when a system moves between different macrostates. Maxwell's Demon Maxwell's Demon does not violate the second law of thermodynamics, because the decrease of entropy inside the chamber is more than counterbalanced by the increase of entropy of the demon itself (or the environment). In order to do its job, Maxwell's demon must measure the velocity of a particle. To act on that measurement, the measurement value must be stored somewhere. Even if the measurement is done in a completely reversible fashion, without expending energy, the stored information from the measurements must either accumulate over time, or be erased. 
The key point is that erasing information increases entropy. Any physical Maxwell's demon must have a finite information storage capacity, and so must eventually start erasing as much information as it records. So, in equilibrium, the increase in entropy due to the continual erasure of information in the demon is greater than or equal to the decrease in entropy inside the chamber.
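As a quantitative footnote to the question's worry about small systems (a toy example of my own, not part of the answer): the chance of a "non-spontaneous" fluctuation shrinks exponentially with system size, which is why the second law is effectively exact for macroscopic systems but only statistical for tiny ones.

```python
# Toy model: N independent gas particles, each equally likely to be in
# the left or right half of a box. The probability that *all* N are
# found in the left half at once is 2**-N: appreciable for tiny N,
# vanishingly small for anything macroscopic.
def prob_all_left(N):
    return 2.0 ** -N

p_small = prob_all_left(10)    # about 1 in 1000: fluctuations are visible
p_large = prob_all_left(100)   # below 1e-30: effectively never happens
print(p_small, p_large)
```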
{ "domain": "physics.stackexchange", "id": 70093, "tags": "thermodynamics, statistical-mechanics, entropy, information, poincare-recurrence" }
Help me understand extreme pressure
Question: So we have the tragedy of the submersible being lost near the Titanic site. I am trying to understand the pressure the vessel was under. I have heard the "$1$ atmosphere for every $10$ meters" approximation. So if the implosion happened $1$ km down, $100$ atmospheres of pressure were acting on the sub. Is it also valid to think of the column of water on top of the sub? If the sub is $20'\times7'$, and $0.6$ miles down, this comes out to $28.3$ million pounds ($12.9$ million kg) of water bearing down on the hull. My numbers may not be right, but they aren't crazy - there's a lot of water on that sub. How would a physicist explain the force of the implosion to a non-physicist in a way that conveys the effect? Answer: TLDR: To understand it, don't think about pressure. Think about energy. The best approach here is to think in terms of energy/work instead of pressure. There are lots of situations that become more intuitive if you think in terms of energy instead of other physical measures. For example, people have notoriously bad intuition about the dangers of driving above the speed limit. If driving 70 km/h is OK, then surely driving a bit faster at 100 km/h can't be that bad, they think. But if you reframe the situation in terms of energy, it becomes clear why speeding is a bad idea. Kinetic energy is calculated as $\frac{mv^2}{2}$, so it scales with the square of velocity. So, in going from 70 km/h to 100 km/h, the kinetic energy increases by a factor of $\frac{100^2}{70^2} \approx 2.04$. So an increase of less than 50% in velocity more than doubles the collision energy. The degree to which you end up smashed in an accident is proportional to the collision energy, not just to the velocity, and a small difference in speed may be the difference between dying on the spot and walking out of the wreck on your own feet. In the same way, thinking in terms of pressure alone is meaningless.
The pressure at those depths is very high, but there are living creatures that live at even greater depths with no ill effects. To explain this difference, we must calculate the energies involved in each situation. Energy can be estimated from pressure and volume as $E = P \cdot \Delta V$: pressure has units of $\frac{Force}{Area}$, and volume has units of $Area \cdot Distance$, so $\frac{Force}{Area} \cdot Area \cdot Distance = Force \cdot Distance$, which is the definition of work, with the same units as energy. Water has very low compressibility, so despite the huge pressures abyssal fish thrive in, the volume differences they are likely to face are negligible, so the energies calculated from the resulting product are small. On the other hand, the submersible crew were not immersed in water like the fish. They inhabited a very compressible gas bubble, shielded against the surrounding ocean pressure by the submersible hull. Wikipedia gives the inner dimensions of the cylindrical hull as 2.4 m length and 1.42 m diameter (0.71 m radius), so the volume of the gas bubble was approximately $\pi \times (0.71\,\mathrm{m})^2 \times 2.4\,\mathrm{m} \approx 3.8\,\mathrm{m}^3$. The Titanic wreck sits on the ocean floor about 3800 m deep. The submersible descent was expected to last 2 h. They lost contact after 1 h 45 min. So we can estimate the implosion happened about $\frac{105\,\mathrm{min}}{120\,\mathrm{min}} \times 3800\,\mathrm{m} = 3325\,\mathrm{m}$ deep. There is a rule of thumb that every 10 m of water depth adds the equivalent of about one atmosphere of pressure. One atmosphere is about $10^5$ pascals. So we can estimate the pressure differential at the moment of implosion as $332.5 \times 10^5 = 3.325 \times 10^7\,\mathrm{Pa}$. Now, with estimates for the water pressure and the gas bubble volume, we can calculate a rough estimate for the implosion energy as follows: $$E = P \cdot \Delta V = 3.325 \times 10^7\,\mathrm{Pa} \times 3.8\,\mathrm{m}^3$$ $$E \approx 126.4\ \mathrm{megajoules}$$ This is a large amount of energy. For comparison, one kilogram of TNT is conventionally equivalent to 4.184 megajoules.
So this implosion had an energy equivalent to a charge of $\frac{126.4}{4.184} \approx 30$ kg of high explosives. With this information we have a scenario that is more intuitive for a layperson to grasp. We can imagine an equivalent situation in which each metal cap at the ends of the composite tube that comprised the submersible is a projectile, each connected to a cannon loaded with an explosive charge equivalent to about 15 kg of TNT. Then the charges went off simultaneously, as the shoddily made central tube collapsed.
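The answer's estimate can be re-run in a few lines of Python (same numbers as above; the 4.184 MJ/kg TNT conversion is the conventional one):

```python
import math

# Re-computation of the estimate, using the answer's numbers:
# inner hull radius 0.71 m, length 2.4 m, implosion depth ~3325 m.
radius = 0.71   # m, inner hull radius
length = 2.4    # m, inner hull length
depth = 3325.0  # m, estimated implosion depth

volume = math.pi * radius**2 * length  # gas bubble volume, ~3.8 m^3
pressure = depth / 10.0 * 1e5          # ~1 atm per 10 m of water, in Pa
energy = pressure * volume             # E = P * dV, in joules

tnt_kg = energy / 4.184e6              # TNT equivalent, ~30 kg
print(volume, pressure, energy, tnt_kg)
```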
{ "domain": "physics.stackexchange", "id": 96181, "tags": "forces, classical-mechanics, pressure, fluid-statics, estimation" }
Simple Circuits
Question: I have a small and easy dilemma, judging by how sophisticated the website is, and need a simple answer for it. So $P=E/t$, and $P=I*V$; therefore $E/t=I*V$. I just need an explanation of how this is possible and an example to help me understand. Answer: First of all, I'll write the energy with the letter $U$ so as not to confuse it with the electric field $E$. By definition of power, $P=\frac{dU}{dt}$. Now, where is this energy in the equation coming from? $$U=\int \vec{F}\cdot \vec{dl} = q \int \vec{E}\cdot \vec{dl} = qV$$ This is the amount of energy gained by the charge when moving across an electric potential difference $V$. Because of its collisions in the resistor, it exits with the same kinetic energy (on average), so the collisions produced heat in the resistor, and the energy $qV$ was "wasted". Now remember that in a time $dt$, the amount of charge that passes through the resistor is $q=I\,dt$. How much energy turned into heat in that time? $IV\,dt$. The conclusion is that $P=\frac{dU}{dt}=IV$.
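A quick numerical illustration of the identity $E/t = I \cdot V$ (the values I = 2 A, V = 5 V, t = 3 s are arbitrary numbers of my own, just for the example):

```python
# A resistor carrying I = 2 A across V = 5 V dissipates P = I*V = 10 W,
# so over t = 3 s the energy is U = P*t = 30 J; hence U/t equals I*V.
I, V, t = 2.0, 5.0, 3.0
P = I * V     # power in watts
U = P * t     # energy in joules
print(P, U, U / t)
```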
{ "domain": "physics.stackexchange", "id": 21197, "tags": "electricity, electric-circuits, power" }
What kinds of cells does human saliva contain?
Question: I have heard that our saliva contains cells. What cell types can be found in human saliva? Answer: It contains white blood cells (leukocytes) and cells from the inner lining of the mouth (buccal epithelial cells). The DNA obtained from these cells is the basis of DNA profiling based on saliva samples. Source: Salimetrics
{ "domain": "biology.stackexchange", "id": 3411, "tags": "human-biology, cell-biology" }
$\Omega^0_c \to \Sigma^+ K^-K^- \pi^+$ Feynman diagram
Question: How can I work out the Feynman diagram for the decay process $\Omega^0_c \to \Sigma^+ K^-K^- \pi^+$? Answer: Start by identifying the constituents of each particle. The charmed baryon $\Omega^0_c$ is $ssc$, that is, two strange quarks and a charm quark. The sigma baryon $\Sigma^+$ is $uus$, the kaon $K^-$ is $s\bar u$, and the pion is $u\bar d$. Thus, you know the initial and final particles in your diagram. You can now use the Feynman rules of the Standard Model to work out which interactions are allowed and should be included. In some cases, processes may be mediated by either the weak or the electromagnetic interaction; either can be used for a diagram, though one will usually be more probable. To check which is more probable in a particular case, you will need to check the branching ratios from the Particle Data Group. Thanks to anna v for the useful link to quark flavour transformations; you can use these to see which interactions to pick in order to get the right flavours in the final state. Just a minor caveat: this will not be the only Feynman diagram for the process. Remember this is all at tree level, and you could consider arbitrary numbers of loops as well, though I believe the dominant contribution for this process is at tree level.
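Before drawing anything, it is worth checking that the quantum numbers balance. The following sketch (my own bookkeeping, using the standard quark charges in units of $e$) verifies that electric charge is conserved in this decay:

```python
from fractions import Fraction

# Standard quark electric charges in units of e.
Q = {'u': Fraction(2, 3), 'd': Fraction(-1, 3),
     's': Fraction(-1, 3), 'c': Fraction(2, 3)}

def charge(quarks, antiquarks=()):
    # total charge of a hadron from its quark content;
    # antiquarks contribute the opposite of the quark charge
    return sum(Q[q] for q in quarks) - sum(Q[q] for q in antiquarks)

omega_c = charge('ssc')                    # Omega_c^0: charge 0
sigma_plus = charge('uus')                 # Sigma+: charge +1
kaon_minus = charge('s', antiquarks='u')   # K-: charge -1
pion_plus = charge('u', antiquarks='d')    # pi+: charge +1
print(omega_c, sigma_plus + 2 * kaon_minus + pion_plus)
```

The initial charge (0) matches the total final charge (+1 - 1 - 1 + 1 = 0), and since every final-state hadron containing three quarks is a single baryon, baryon number 1 is conserved as well.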
{ "domain": "physics.stackexchange", "id": 38436, "tags": "homework-and-exercises, quantum-field-theory, particle-physics, feynman-diagrams, baryons" }
Error when instantiating RVIZ2 plugin custom panel
Question: Hey all... I was having the same problem as this person: https://answers.ros.org/question/341214/custom-panel-plugin-not-being-found-when-running-rviz2/ but the solution worked and I was able to see my custom panel in the Panels list. But now whenever I try to instantiate my panel I get the following error: [ERROR] [rviz2]: PluginlibFactory: The plugin for class 'plugins/MissionPanel' failed to load. Error: Failed to load library /home/user1/raa/raa_ros2/install/ui/lib/ui/libui.so. Make sure that you are calling the PLUGINLIB_EXPORT_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Could not load library (Poco exception = /home/user1/raa/raa_ros2/install/ui/lib/ui/libui.so: undefined symbol: _ZTVN7plugins12MissionPanelE) I've been fighting with this for 2 days with no luck and haven't found anything online that explains what's going on. It's a VERY simple panel with just a single text field. I'm not even doing any publication/subscription stuff yet in it. Everything is set up the same as that guy (from the linked question) had. Any ideas? My package is called ui, and my code is using a namespace called plugins in a class called MissionPanel. I see the libui.so file where it says it's looking for it. It just can't seem to find my class inside there. ROS2 Dashing. I've tried both Debian package install and building from source. Same effect. Here's the critical 2 lines from my .cpp: #include <pluginlib/class_list_macros.hpp> PLUGINLIB_EXPORT_CLASS(plugins::MissionPanel, rviz_common::Panel) and here's my plugin_description file: <library path="ui"> <!-- Displays plugins --> <class name="plugins/MissionPanel" type="plugins::MissionPanel" base_class_type="rviz_common::Panel" > <description> Panel for control and display of status for the RAA robot system. </description> </class> </library> Any ideas what I'm doing wrong??? And yes, I've resourced my environment after building... 
Originally posted by mkopack on ROS Answers with karma: 171 on 2020-04-29 Post score: 0 Original comments Comment by mkopack on 2020-04-30: Whoops, looks like you all couldn't see the error message before. Should be able to now... Answer: Ok, finally got it working. It was in the CMakeLists.txt file. I used the one from the ros2/rviz/rviz_default_plugins package as the template and anything I didn't have in mine that they did, I added (minus the testing related things). This REALLY REALLY needs to be documented better folks! Originally posted by mkopack with karma: 171 on 2020-04-30 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 34861, "tags": "ros2, plugin" }
Exciting surface plasmon polaritons - what is the purpose of air gap in Otto configuration?
Question: I am currently reading about exciting surface plasmon polaritons (SPPs). One of the most common methods described in literature for exciting a SPP is using the Otto configuration (image from https://doi.org/10.1364/OE.24.019517): I understand that it's impossible to excite a SPP by a light plane wave incident from free space because the in-plane momentum has to be conserved and the incoming momentum would be too small. I also understand that we use a high-refractive index prism in the Otto configuration, because the momentum of light in a prism with refractive index n will be n times greater than in vacuum. I don't understand why there has to be an air gap, though. Can someone tell me if it is important and what I'm missing? Answer: I believe that in the perfect configuration for SPP you want to have total internal reflection of the incident beam which gives you an evanescent wave with the right phase matching (if you tune the air gap carefully). You may want to read page 420 of the following notes: https://www.photonics.ethz.ch/fileadmin/user_upload/Courses/NanoOptics/plasmonss.pdf
{ "domain": "physics.stackexchange", "id": 56197, "tags": "plasmon, photonics" }
Is the divergence of electric field in a solid cube of uniform charge density position dependent?
Question: Within a solid, uniformly charged non-conducting cube, the electric field is clearly position dependent, does that make the divergence of the electric field position dependent as well? If that is the case how does one reconcile it with the differential form of Gauss's law that relates the divergence of the electric field and the charge distribution? A special case to consider would be at the center of the aforementioned cube, the electric field at the center is zero(due to symmetry) but the divergence of the electric field is not because the charge distribution everywhere within the cube is non-zero. Answer: The reason that we can have $\vec{E} = 0$ at the center of the cube but $\vec{\nabla} \cdot \vec{E} \neq 0$ is the same reason that we can have a function $f(x)$ that vanishes at $x_0$ but for which $f'(x_0) \neq 0$. The two pieces of information are not incompatible. If you recall, the divergence of $\vec{E}$ is defined as $$ \vec{\nabla} \cdot \vec{E} = \frac{\partial E_x}{\partial x} + \frac{\partial E_y}{\partial y} + \frac{\partial E_z}{\partial z}. $$ If we look at the first term here, it corresponds to the rate of change of $E_x$ with respect to $x$ near the center of the cube. If we imagine starting at the center of the cube and moving away from it along the $x$-axis, we will see that $E_x$ increases along this line; after all, near the center, we would expect the electric field to be small but to point away from the center. So it should seem plausible (at least) that $\partial E_x/\partial x \neq 0$. Similar arguments hold for the $y$- and $z$-directions. Finally, note that the same "paradox" happens for a solid sphere of charge: we have $\vec{E} = 0$ and $\vec{\nabla} \cdot \vec{E} \neq 0$ at its center as well. The resolution there is exactly the same.
{ "domain": "physics.stackexchange", "id": 83022, "tags": "electrostatics, electric-fields, gauss-law" }
Why light can't escape a black hole but can escape a star with same mass?
Question: I'm new to astronomy and was wondering why light can't escape from a black hole but can escape from a star with the same mass. In theory, the gravity of a star 100x the mass of the sun and the gravity of a black hole 100x the mass of the sun are the same (at the same distance from the center), so why can light escape from the star but not from the black hole, even if the force of gravity is the same in both cases? Is it just because the black hole is smaller? Answer: Yes, it is because the black hole is much denser, which means it packs the same mass and gravitational pull into a much smaller radius. This means its surface gravity yields an escape velocity equal to the speed of light. For the same-mass star, its "surface" is much farther away from its center, and if you were standing on that "surface", the surface gravity there would be crushingly huge but not enough to yield $c$ as the escape velocity. Oh yes, and you would be instantly fried to a crispy crisp. To make the surface gravity of the sun yield an escape velocity of $c$, you would have to squeeze it down to a radius of about 3 kilometers. That's really dense!
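The required compression follows from the Schwarzschild radius $r_s = 2GM/c^2$, the radius at which the escape velocity reaches $c$. A back-of-envelope check (standard approximate values for $G$, the solar mass, and $c$):

```python
# Schwarzschild radius of the Sun: the radius at which escape
# velocity equals the speed of light.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
c = 2.998e8        # speed of light, m/s

r_s = 2 * G * M_sun / c**2
print(r_s / 1e3)   # on the order of 3 km
```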
{ "domain": "physics.stackexchange", "id": 93558, "tags": "gravity, visible-light, black-holes, mass, event-horizon" }
How to calculate energy along a given direction in the Brillouin zone?
Question: I am having trouble understanding how to calculate the energy dispersion relation $E(\vec{k})$ in a given direction within the first Brillouin zone. I can find the vectors in the reciprocal lattice. Any pointers? Thank you Answer: If I understand your question correctly, you'll have to compute the value of $E(\mathbf{k})$ for a given vector composed of the $k_{i}$ ($i=x,y,z$); the result will be the energy, and the $k_{i}$ give the direction. An example would be the square lattice of size $a\times a$ with a dispersion relation of $E(\mathbf{k}) = -\frac{E_{0}}{4} (\cos(k_{x}a) +\sin(k_{y} a))$, which yields an energy surface over the Brillouin zone. Hope it helps, cheers
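As a concrete sketch (my own, reusing the answer's example dispersion), one can sample $E(\mathbf{k})$ along the $\Gamma \to X$ direction of the square-lattice Brillouin zone, i.e. $k_y = 0$ so the sine term drops out:

```python
import numpy as np

# Sample the answer's example dispersion along Gamma (0,0) -> X (pi/a, 0)
# of the first Brillouin zone of an a x a square lattice.
a, E0 = 1.0, 1.0
kx = np.linspace(0.0, np.pi / a, 50)
ky = 0.0
E = -E0 / 4 * (np.cos(kx * a) + np.sin(ky * a))
print(E[0], E[-1])  # -E0/4 at Gamma, +E0/4 at X
```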
{ "domain": "physics.stackexchange", "id": 94725, "tags": "solid-state-physics" }
Resonant inductive coupling and Schumann resonances
Question: I was reading about WiTricity (http://en.wikipedia.org/wiki/WiTricity) a technology developed by MIT to wirelessly transmit electricity through resonance, and I have this question: Given the phenomenon of resonant inductive coupling which wikipedia defines as: the near field wireless transmission of electrical energy between two coils that are tuned to resonate at the same frequency. http://en.wikipedia.org/wiki/Resonant_inductive_coupling And the Schumann resonances of the earth ( ~7.83Hz, see wikipedia), would it be theoretically possible to create a coil that resonates at the same frequency or one of its harmonics (7.83, 14.3, 20.8, 27.3 and 33.8 Hz) to generate electricity? I have a feeling that these wavelengths may be too big to capture via resonance (they are as large as the circumference of the earth if I understand it correctly), so alternatively would it be possible to create a coil that resonates with one of the EM waves that the sun sends our way? Answer: In principle, of course, you could try something like that. But there are three issues that will kill you: $Q$. Every resonance has a quality factor which represents how quickly the energy in the mode drains away by assorted dissipative processes. I don't know what it is for the Schumann resonances, but I'll give you long odds that it is not good: much of the energy you put into the field will just dribble away into space. Power density. Whatever energy you pump into these modes will spread out over the whole cavity, and you'll only be able to draw as much as there is in the region covered by your antenna, which will be effectively nothing even with gigawatts driven into the resonance. Not only could you not power an iPhone, you couldn't power the little shoplifting-prevention tag that retailers put onto bits of mobile merchandise. Antenna dimensions. The naive way to design an antenna to use at frequency $f$ requires conductors of length on the order of $c/f$.
Bit of a problem for frequencies of a few or few tens of Hertz.
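A quick scale check of that last point (my own numbers): the naive antenna length is on the order of the wavelength $c/f$, which for the Schumann frequencies is comparable to the Earth's ~40,000 km circumference.

```python
# Wavelengths of the Schumann fundamental and harmonics, in km.
c = 2.998e8  # speed of light, m/s
wavelengths_km = {f: c / f / 1e3 for f in (7.83, 14.3, 20.8, 27.3, 33.8)}
print(wavelengths_km[7.83])  # tens of thousands of km
```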
{ "domain": "physics.stackexchange", "id": 13607, "tags": "electricity, resonance" }
Are there beneficial genetic mutations identified by consumer DNA genotyping?
Question: I'm looking at services like 23andme, and see that they identify a wide variety of genetic-based risks, like predisposition to diseases, hair loss, cancer, etc. Are there more "positive" DNA analysis outcomes out there? Like increased strength, or cheerfulness (resistance to depression)? Answer: First of all, I think it is, to a large extent, the way you perceive it. Instead of a cancer-causing allele and a hair-loss allele, you can think of them as cancer-protection and great-hair alleles. If you don't have disease alleles, that means that in some sense you have "life-extending" genes! Sounds much better, doesn't it? Second of all, most funding for research is directed at disease, so it is not surprising that most of the scientific knowledge concerns this type of genetic variation.
{ "domain": "biology.stackexchange", "id": 1096, "tags": "dna, dna-sequencing" }
Very simple matrix operation (Markov chain) in terms of Qiskit
Question: I am trying to approach Markov chains as a use case for quantum computing. For this I took the simple introductory case from Wikipedia (Markow-Kette):

$v_0=\begin{pmatrix} 1 & 0 & 0 \end{pmatrix}$

$M=\begin{pmatrix} 0 & 0.2 & 0.8\\ 0.5 & 0 & 0.5\\ 0.1 & 0.9 & 0\\ \end{pmatrix}$

$v_1=v_0M$

$v_3=v_0M^3$

As a starter/"appetizer" I implemented this situation with the following few lines using NumPy and very straightforwardly got the correct result v3=[0.37 0.126 0.504], which is in line with the result from the Wikipedia site:

M = np.array([[0, 0.2, 0.8], [0.5, 0, 0.5], [0.1, 0.9, 0]])
v0 = np.array([1, 0, 0])
v1 = v0.dot(M)
print(v1)
v3 = v0.dot(matrix_power(M, 3))
print(v3)

Now I'm stuck porting the whole thing to Qiskit, where I would really be satisfied with a simulator-based solution:

qiskit.IBMQ.save_account('your_token', overwrite=True)
qiskit.IBMQ.load_account()
n_wires = 1
n_qubits = 1
provider = qiskit.IBMQ.get_provider('ibm-q')
backend = Aer.get_backend('qasm_simulator')
[...]

Asking Google led me to the "Matrix product state simulation method", but it seems not obvious how to apply this to my simple problem. A simple nudge in the right direction would be really appreciated. Answer: I made an implementation; I'm not sure whether it has advantages or how it generalizes. I hope it might steer you (or us) in the right direction. The approach is as follows. You have a matrix $M$, then:

Create a circuit with $2N=6$ qubits
The first 3 qubits represent 'being in state $i$' ($i$ is a state in $N$) (i.e. $|001>$ is state $0$, $|010>$ is state $1$, $|100>$ is state $2$)
The last 3 qubits represent 'going to state $j$' ($j$ is a state in $N$)
Controlled, go from state $i$ to state $j$ with probability $M_{i,j}$ *
Make sure to not go to state $j'$ when you're going to state $j$
Make sure the naming of states checks out (i.e.
$|001>$ state $0$, $|010>$ is state $1$, $|100>$ is state $2$) Repeat steps 2-4 for all $i$ Swap the last 3 qubits with the first 3 qubits, (in words, these were the states that you're 'going to' and now they are the state 'you are in'). Reset the last 3 qubits. The circuit now looks like this: With this circuit, you can do the same steps as you proposed before, so in your case you have: import numpy as np M = np.array([[0, 0.2, 0.8], [0.5, 0, 0.5], [0.1, 0.9, 0]]) v0=np.array([1,0,0]) v1=v0.dot(M) print(v1) v2=v0.dot(np.linalg.matrix_power(M, 2)) print(v2) v3=v0.dot(np.linalg.matrix_power(M, 3)) print(v3) output: [0. 0.2 0.8] [0.18 0.72 0.1 ] [0.37 0.126 0.504] In Qiskit, this now is the following **: from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, transpile from numpy import pi # Inilialise registers qreg_q = QuantumRegister(6, 'q') creg_c = ClassicalRegister(3, 'c') # Create Markov Step as a circuit markov_step = QuantumCircuit(qreg_q) # Create the Markov Step # From state 0 to state 1 and 2 markov_step.cu3(2*np.arccos(np.sqrt(M[0,1])), pi/2, pi/2, qreg_q[0], qreg_q[4]) markov_step.ccx(qreg_q[4], qreg_q[0], qreg_q[5]) markov_step.cx(qreg_q[0], qreg_q[4]) # From state 1 to state 0 and 2 markov_step.cu3(2*np.arccos(np.sqrt(M[1,2])), pi/2, pi/2, qreg_q[1], qreg_q[5]) markov_step.ccx(qreg_q[5], qreg_q[1], qreg_q[3]) markov_step.cx(qreg_q[1], qreg_q[5]) # From state 2 to state 0 and 1 markov_step.cu3(2*np.arccos(np.sqrt(M[2,0])), pi/2, pi/2, qreg_q[2], qreg_q[3]) markov_step.ccx(qreg_q[3], qreg_q[2], qreg_q[4]) markov_step.cx(qreg_q[2], qreg_q[3]) # Swap markov_step.swap(qreg_q[0], qreg_q[3]) markov_step.swap(qreg_q[1], qreg_q[4]) markov_step.swap(qreg_q[2], qreg_q[5]) # Initialise circuit circuit = QuantumCircuit(qreg_q,creg_c) # Initialise state (1,0,0) circuit.x(0) # Do the markov step n times n = 3 for _ in range(n): for ins in markov_step: circuit.append(ins[0], ins[1], ins[2]) circuit.reset(qreg_q[3:]) # Measure outcome 
circuit.measure(qreg_q[:3], creg_c) And you can run it with from qiskit.visualization import plot_histogram backend = provider.get_backend('ibmq_qasm_simulator') job = backend.run(circuit) result = job.result() counts = result.get_counts(circuit) plot_histogram(counts) And the output: Which is approximately equal to the exact answer [0.37 0.126 0.504]. This approach definitely isn't perfect and I'm quite sure optimizations can be made (e.g. not using 1 qubit per state, but using the full $2^N$ possible states) and I'm not sure how to go to larger state spaces. But it's the first step! Some notes: * note: Step 2 is not trivial. I implemented it as a controlled $X$-rotation. An $X$-rotation is (according to the qiskit-textbook) given by \begin{equation} R_x(\theta) = \begin{pmatrix} \cos(\theta /2) & -i \sin(\theta /2) \\ -i \sin(\theta /2) & \cos(\theta /2) \end{pmatrix}, \end{equation} and it brings the $|0>$ state to $\cos(\theta /2) |0> - i \sin(\theta /2) |1>$. Now, for example, we want the target qubit to be in state $|0>$ with probability 0.2 (when starting the target is in state 0). The probability of finding the target in state $|0>$ is $|\cos(\theta /2)|^2$ and this must be equal to $0.2$. From this, it can be found that $\theta = 2 \arccos{\sqrt{0.2}}$. ** note: Implementing the circuit is a bit annoying because not all gates are available in all backends. Sometimes, you have to 'translate' the gates. A controlled $X$-rotation (CRX) is also a controlled U3 gate, where $CR_x(\theta) = CU3(\theta, \pi/2,\pi/2)$. Some backends don't allow the CU3 gate, but they do allow the multi-controlled gate .mcu3. In that case, just put your control qubit in a list by putting square brackets around it.
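To compare the histogram with the exact vector, the measurement counts can be folded back into a probability vector. A minimal sketch (the counts dict below is hypothetical example data of the shape `result.get_counts()` returns, not actual simulator output):

```python
# Hypothetical one-hot measurement counts over the three classical bits
counts = {'001': 370, '010': 126, '100': 504}
shots = sum(counts.values())

# Qiskit bitstrings are little-endian: qubit 0 is the rightmost character,
# so the one-hot encoding of state i has its '1' at position i from the right.
probs = [0.0, 0.0, 0.0]
for bits, c in counts.items():
    probs[bits[::-1].index('1')] = c / shots

print(probs)  # [0.37, 0.126, 0.504]
```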
{ "domain": "quantumcomputing.stackexchange", "id": 3340, "tags": "qiskit, programming" }
Running Trajectory Following Example from Autoware.Auto without LGSVL
Question: Hi, I am trying to execute/implement the trajectory following example as listed in the tutorial on the Autoware.Auto website (https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/trajectory-following.html) I have been following the autoware class 2020 provided by Apex.AI. I am able to run the tutorial for trajectory following using the LGSVL simulator, but I would like to modify the plant environment (LGSVL) and replace it with a different plant environment (one that provides the required sensor [lidar - front and rear] data). I have gone through the ROS2 launch.py file that launches the ROS nodes in detail to understand how the nodes are initialized and how they work. If I do not start the simulation (LGSVL) and directly run the ROS2 launch command line to start the trajectory following example, all ROS2 nodes are generated EXCEPT for the UNITYROS2 node. After further inspection, I found that this node publishes lidar data and subscribes to data from the vehicle etc. Could someone provide me some help as to what needs to be changed (in the python launch file, .yaml configuration file etc.) in order to run this example using a different simulation platform (plant environment)? PS. I can publish ROS topics from my plant/simulation for the lidars with any name - just need to know what needs to change on the autoware.auto side. Thanks, Shlok Originally posted by shlokgoel on ROS Answers with karma: 23 on 2021-01-05 Post score: 0 Answer: @Shlok The lgsvl_interface package is the one that translates between messages produced/received by Autoware and those produced/received by LGSVL. Another example of a vehicle interface is also provided in the ssc_interface package. Another package similar to these (that inherits from the classes in vehicle_interface) will need to be created to interface with your simulator. Once you've done this, we would be happy to review a merge request to see this included in Autoware!
See https://autowarefoundation.gitlab.io/autoware.auto/AutowareAuto/autoware-vehicle-interfaces-design.html for more information. Originally posted by Josh Whitley with karma: 1766 on 2021-01-06 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 35932, "tags": "ros2" }
Replace spaces in string
Question: import re def replace_token_regex(s, token=" "): return re.sub(token, '20%', s.strip()) def replace_token_inplace(s, token=" "): for index, char in enumerate(s): if ord(char) == ord(token): s[index] = '20%' return s print replace_token_regex("Foo Bar ") s = list("Foo Bar ") replace_token_inplace(s) print ''.join(s) The run time complexity of the above code is \$O(n)\$. Can it be further optimized, or is there a better way to do the above computation? Answer: If you want to avoid replace, a faster method would just split and join. This is faster simply because .split and .join are fast: "20%".join(string.split(" ")) For a more thorough review, I'll point out that your functions aren't equivalent. The first strips whitespace and the second doesn't. One of them must be wrong! In the second case: def replace_token_inplace(s, token=" "): for index, char in enumerate(s): if ord(char) == ord(token): s[index] = '20%' return s you are doing several non-idiomatic things. For one, you are mutating and returning a list. It's better to just not return it if you mutate: def replace_token_inplace(s, token=" "): for index, char in enumerate(s): if ord(char) == ord(token): s[index] = '20%' Secondly, it'll probably be faster to do a copying transform: def replace_token_inplace(s, token=" "): for char in s: if ord(char) == ord(token): yield '20%' else: yield char which can also be written def replace_token_inplace(s, token=" "): for char in s: yield '20%' if ord(char) == ord(token) else char or even def replace_token_inplace(s, token=" "): return ('20%' if ord(char) == ord(token) else char for char in s) If you want to return a list, use square brackets instead of round ones.
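A quick sanity check (a sketch using the question's token and replacement) that the suggested split/join rewrite produces the same string as the regex version:

```python
import re

s = "Foo Bar "

# the regex version from the question vs. the split/join rewrite
regex_result = re.sub(" ", "20%", s.strip())
split_result = "20%".join(s.strip().split(" "))

print(regex_result, split_result)  # Foo20%Bar Foo20%Bar
```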
{ "domain": "codereview.stackexchange", "id": 11025, "tags": "python, strings, complexity" }
Does friction depend on velocity?
Question: I wanted to ask why (how) friction on wet surfaces and friction on dry surfaces have different velocity dependences? Answer: The classic manner in which friction in the wet state has a different velocity dependence than dry friction is as follows. We use the example of a slab of plywood resting on wet sand. If you stand on that plywood plank, the water in the sand beneath it oozes out from underneath and the plank is then resting directly on the sharp sand grains, just as if the sand were dry. Lots of friction in this case. But if you toss the plank so it skims along the surface, it's moving so fast that there is no time available for the water to ooze out from under it. If you now run and jump onto that plank, you will zoom along the wet sand surface with almost no friction because the water presents a film to the plank which prevents the sand grains from rubbing against the bottom of the plank. You are hydrodynamically riding on water, not sand. In the completely wet case at low velocities (where there's plenty of water always in the way) the laws of fluid dynamics apply, and for velocities that are near-zero, the fluid friction forces tend towards zero. This allows a human with a rope to (slowly) pull a floating barge weighing many tons along a canal.
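The contrast can be summarized in formulas (a simplified sketch, not from the answer; the wet case assumes a fully developed lubricating film):

```latex
% Dry (Coulomb) friction: magnitude set by the normal force N,
% essentially independent of the sliding speed v
F_{\text{dry}} \approx \mu N
% Wet (viscous, fluid-film) friction: proportional to speed, so it
% vanishes as v \to 0, which is why the barge can be pulled slowly
F_{\text{wet}} \approx b\,v
```

Here $\mu$ is the friction coefficient, $N$ the normal force, and $b$ an effective viscous drag coefficient set by the fluid and the geometry.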
{ "domain": "physics.stackexchange", "id": 61422, "tags": "newtonian-mechanics, friction, velocity" }
Where does the random in Random Forests come from?
Question: As the title says: Where does the random in Random Forests come from? Answer: The randomness enters in two places. Each tree is grown on a random bootstrap sample of the training rows (bagging), and at each node split only a random subset of the variables is considered as split candidates. Common defaults for the subset size are $\sqrt{p}$ variables for classification and $p/3$ for regression, where $p$ is the total number of variables.
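In scikit-learn these two sources of randomness are exposed as explicit parameters; a minimal sketch (the values shown are just common choices, nothing mandated):

```python
# Both sources of randomness in a random forest, spelled out in scikit-learn:
# bootstrap resamples the training rows per tree, and max_features limits
# the variables considered at each split.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(
    n_estimators=100,
    bootstrap=True,        # each tree trains on a bootstrap sample of rows
    max_features="sqrt",   # random subset of features tried at each split
    random_state=0,        # fix the randomness for reproducibility
)
clf.fit(X, y)
```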
{ "domain": "datascience.stackexchange", "id": 960, "tags": "machine-learning, random-forest, ensemble-modeling" }
How to process Dicom Images for CNN?
Question: I am building a disease classifier. I have Dicom scans for many patients. The scans have different slice thicknesses, and different scans have different numbers of slices. However, within a single patient's scan the slice thickness is the same for all slices. For example: Patient1's scan : 100 slices of 2mm thickness Patient2's scan : 500 slices of 1mm thickness The number of pixels for each slice is 512 x 512, so currently the shape of the nd array containing the information for Patient1 is 100*512*512 Patient2 is 500*512*512 I want to pass all patients' information into a CNN. Do I need to resample the slices and make it uniform for all patients (like 512*512*n)? If yes, how do I do it, and what should be the value of 'n'? Answer: Can you use zero padding? It's a pre-processing technique for CNNs: it consists of creating a frame of zeros around the image, so that all input images have the same size. The CNN will then learn autonomously to ignore the zeros. It's a common technique; Keras layers already have built-in padding arguments.
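A minimal NumPy sketch of zero padding along the slice axis (the helper name and target depth n are my own choices, not from the answer; tiny arrays stand in for the real 512x512 slices to keep the example cheap):

```python
import numpy as np

def pad_slices(volume, n):
    """Pad a (num_slices, H, W) volume with zero slices up to depth n."""
    missing = n - volume.shape[0]
    if missing < 0:
        raise ValueError("volume already has more than n slices")
    # pad only the first (slice) axis, at the end
    return np.pad(volume, ((0, missing), (0, 0), (0, 0)), mode="constant")

# e.g. Patient1 would go from (100, 512, 512) to (n, 512, 512);
# here a (3, 4, 4) stand-in is padded to depth 5
v1 = np.ones((3, 4, 4), dtype=np.float32)
padded = pad_slices(v1, 5)
print(padded.shape)  # (5, 4, 4)
```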
{ "domain": "datascience.stackexchange", "id": 6775, "tags": "cnn, image-preprocessing" }
TextRenderer class for rendering text with OpenGL
Question: I'm writing a class called TextRenderer to render text with OpenGL. Here's my TextRenderer.hpp file class TextRenderer { public: void renderText(std::string text, int x, int y); private: void setup(); GLuint program; }; Here's my TextRenderer.cpp file: #include "TextRenderer.hpp" #include <string> #include <GL/glew.h> void TextRenderer::setup() { } Here's my main file: #include <GL/glew.h> #include <GLFW/glfw3.h> #include <ft2build.h> #include FT_FREETYPE_H #include "TextRenderer.hpp" int main() { //Init GLFW glfwInit(); //Set OpenGL version to 3.2 glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); //Make window un-resizable glfwWindowHint(GLFW_RESIZABLE, GL_FALSE); //Create GLFW window in fullscreen mode GLFWwindow* window = glfwCreateWindow(800, 600, "MOBA", glfwGetPrimaryMonitor(), nullptr); glfwMakeContextCurrent(window); //Init FreeType FT_Library ft; if (FT_Init_FreeType(&ft)) { fprintf(stderr, "FATAL: Could not init FreeType"); return 1; } //Init Arial FreeType Face FT_Face arial; if (FT_New_Face(ft, "Arial", 0, &arial)) { fprintf(stderr, "FATAL: Could not init font \"Arial\""); return 1; } //Main loop while (!glfwWindowShouldClose(window)) { glfwSwapBuffers(window); glfwPollEvents(); //Close on Esc key press //TODO: Bring up quit dialog if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS) { glfwSetWindowShouldClose(window, GL_TRUE); } } //Terminate GLFW glfwTerminate(); return 0; } Notice how I have a private GLuint, program. To do this I have to #include <GL/glew.h> (in TextRenderer.cpp), and I already have GL/glew.h included in my main.cpp file. What's the best way to do this? Answer: Most include files are written so that they are safe to be included multiple times. So, if your TextRenderer.h needs types from glew.h, you include it. 
Writing your own include files so that they include everything needed to use them (types, enums, ...) is usually a good thing. You might write your TextRenderer.hpp something like this: #ifndef TEXTRENDERER_HPP #define TEXTRENDERER_HPP #include <GL/glew.h> // For GLuint class TextRenderer { ... }; #endif The define TEXTRENDERER_HPP is there to make the header safe to be included multiple times. You may choose any unique identifier you like, but usually people use file names with some number of underscores at the start and end.
{ "domain": "codereview.stackexchange", "id": 11498, "tags": "c++, opengl, freetype" }
arbotix-m with dynamixel tutorial / example / doc?
Question: Does anyone know an example / tutorial / doc that shows how to use arbotix-m with dynamixel? I got the dynamixel_motor package working with dynamixel servos (via usb2dynamixel), but I couldn't get anywhere with arbotix working with dynamixel via their arbotix-m board. I couldn't find much documentation on the ROS wiki either, and it seems like the docs are really outdated too. I can run "arbotix_terminal", see all the servos, and control all of them just fine via the command line, but I couldn't control them in ROS. I'm not sure which nodes / topics / params should be used. I'd like to compare usb2dynamixel vs. arbotix-m as far as how they work with dynamixel servos. Any point of reference is much appreciated. d Originally posted by d on ROS Answers with karma: 121 on 2014-07-29 Post score: 1 Answer: Sorry if I'm misunderstanding your question, but why can't you use the rest of the arbotix stack? Afaik, it contains all you need to operate dynamixel motors (at least the AX-12 I have) and is updated to indigo. You just need to flash the arbotix_firmware and run the arbotix_driver to get full access to your servos with the ControllerGUI or MoveIt. That said, I must mention that I use the arbotix stack with a different board (Robotis OpenCM9.04), but using an arbotix-m should be much easier. UPDATE arbotix_driver listens for joint positions on /command topics (std_msgs/Float64), and provides current joint positions on the /joint_state topic, as explained here. But much easier is to run the arbotix_gui so you can move the servos with sliders. If properly configured, arbotix_driver also provides you with controller interfaces that high-level tools like MoveIt can use.
For example, for a turtlebot arm, the joints (that is, servos) are: joints: { arm_shoulder_pan_joint: {id: 1, neutral: 205, max_angle: 240, min_angle: -60, max_speed: 180}, arm_shoulder_lift_joint: {id: 2, max_angle: 125, min_angle: -125, max_speed: 180}, arm_elbow_flex_joint: {id: 3, max_angle: 150, min_angle: -150, max_speed: 180}, arm_wrist_flex_joint: {id: 4, max_angle: 100, min_angle: -100, max_speed: 180}, gripper_joint: {id: 5, max_speed: 90}, } and the controllers controllers: { arm_controller: { type: follow_controller, joints: [arm_shoulder_pan_joint, arm_shoulder_lift_joint, arm_elbow_flex_joint, arm_wrist_flex_joint], action_name: arm_controller/follow_joint_trajectory, onboard: false } } Originally posted by jorge with karma: 2284 on 2014-07-29 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by d on 2014-07-29: thanks jorge! i got the firmware flashed on arbotix-m. through "arbotix_terminal" i can see all the servos and control all of them through the terminals. my question is how to control these in ros. after running arbotix_driver, what nodes, topics, params, etc do you use to control the servos? Comment by jorge on 2014-07-29: Answer updated. Hope I'm clearer now ^_^ ! Comment by d on 2014-07-29: thanks again jorge. i did run arbotix_driver (connected successfully).. then i ran "rostopic list" on command line, but i didn't see any "/command" as a topic? what do you see when you do a "rostopic list" on your end? ps: i also don't see any of the services and params listed either. Comment by d on 2014-07-29: i can control these servos fine with dynamixel_motor package (via usb2dynamixel). i just don't understand how arbotix_ros package works (via arbotix-m board), especially how to send commands to a servo.
i don't see these topics after running arbotix_driver; i do see the servos in arbotix_terminal Comment by jorge on 2014-07-29: You need to configure arbotix_driver so he knows the servos you have, I put an example in the answer. Forget the controllers by now. Comment by d on 2014-07-29: yep, i sure did. here is my .yaml file, loaded in my launch file. port: /dev/ttyUSB0 rate: 15 test: { head_pan_joint: {id: 1, invert: true}, head_tilt_joint: {id: 2, max_angle: 100, min_angle: -100} } i published to /test/head_pan_joint/command std_msgs/Float64 but nothing happened... Comment by d on 2014-07-29: i got it working finally. i'm not sure exactly how/why i did it. i'm guessing that the problem was the baud rate. it was at 1000000 (i was using usb2dynamixel) while arbotix default rate is at 115200. after i set the servos baud rate to 115200, arbotix picked up and i now can control the servos. Comment by jorge on 2014-07-29: great! tell me if you need additional support. I have recently followed the way you are now on! I said nothing about connection parameters because I'm actually using a different board, and normally default parameters "should" work. Comment by d on 2014-07-31: hey @jorge, actually i have another question for you :) what is the "controllers:{ }" above in your answer for? how does it work? i got the "joins: { }" working individually now, and hope to understand more about the controllers part. thanks! Comment by jorge on 2014-07-31: I'm not an expert on this, but afaik the follow_controller I use to operate my arm executes motion plans provided by moveit. That is, it takes a sequence of configurations for an arm, and issues the appropriate individual commands to each servo. Interfaced by the control_msgs/FollowJointTrajectory action
{ "domain": "robotics.stackexchange", "id": 18807, "tags": "ros, dynamixel-motor" }
Why am I getting crossvalidation scores of 0 only
Question: I am trying the Catboost package with the iris dataset with the following code: from sklearn.datasets import load_iris iris = load_iris() from catboost import CatBoostClassifier model = CatBoostClassifier(iterations=50, learning_rate=0.1, depth=4, loss_function='MultiClass') from sklearn.model_selection import cross_val_score scores = cross_val_score(model, iris.data, iris.target) print(scores) The output is: [0. 0. 0.] Why are the scores 0 only? I expected them to be close to 1. I tried adjusting parameters but the results are still the same. Are these errors rather than classification accuracies? Thanks for your insight. Edit: It appears that with CatBoostClassifier, cross_val_score() uses KFold() rather than StratifiedKFold(), since adding cv=StratifiedKFold() in the cross_val_score function solves this problem. With sklearn's classifiers such as LogisticRegression or SVC, cross_val_score uses StratifiedKFold as default (see here). Answer: I think that perhaps the problem is the default value of the parameter cv of cross_val_score(). The documentation says: cv : int, cross-validation generator or an iterable, optional Determines the cross-validation splitting strategy. Possible inputs for cv are: None, to use the default 3-fold cross validation, integer, to specify the number of folds in a (Stratified)KFold, An object to be used as a cross-validation generator. An iterable yielding train, test splits. For integer/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. So my guess is that, if cv is not specified, the split is being done without stratification. This, coupled with the fact that by default in the iris dataset the targets are perfectly sorted (50 label 0, then 50 label 1 and then 50 label 2), means that in each of the 3 folds you are training on two classes and predicting the third one, and that's why the scores are 0.
Two solutions: A) Shuffle the data: from sklearn.datasets import load_iris import pandas as pd iris = load_iris() from catboost import CatBoostClassifier model = CatBoostClassifier(iterations=50, learning_rate=0.1, depth=4, loss_function='MultiClass') from sklearn.model_selection import cross_val_score df = pd.DataFrame({'X0':iris.data[:,0],'X1':iris.data[:,1], 'X2':iris.data[:,2],'X3':iris.data[:,3],'Y':iris.target}) df = df.sample(frac=1).reset_index(drop=True) scores = cross_val_score(model, df[['X0', 'X1', 'X2', 'X3']], df['Y']) print(scores) Out: [0.96 0.94 0.94] B) Modify the cv parameter: from sklearn.datasets import load_iris iris = load_iris() from catboost import CatBoostClassifier model = CatBoostClassifier(iterations=50, learning_rate=0.1, depth=4, loss_function='MultiClass') from sklearn.model_selection import cross_val_score scores = cross_val_score(model, iris.data, iris.target, cv = 4) print(scores) Out: [1. 0.92105263 0.91891892 0.78378378]
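The sorting effect described above is easy to verify directly (a sketch using only scikit-learn's splitters, without CatBoost): on targets stored as 50+50+50 sorted labels, plain KFold puts one whole class into each test fold, while StratifiedKFold keeps all three classes in every fold.

```python
import numpy as np
from sklearn.model_selection import KFold, StratifiedKFold

y = np.repeat([0, 1, 2], 50)   # iris targets are stored in exactly this order
X = np.zeros((150, 1))         # features are irrelevant for the split itself

# class labels appearing in each test fold, for both splitters
kfold_classes = [set(y[test]) for _, test in KFold(n_splits=3).split(X)]
strat_classes = [set(y[test]) for _, test in StratifiedKFold(n_splits=3).split(X, y)]

print(kfold_classes)  # [{0}, {1}, {2}] -> each fold predicts an unseen class
print(strat_classes)  # [{0, 1, 2}, {0, 1, 2}, {0, 1, 2}]
```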
{ "domain": "datascience.stackexchange", "id": 4255, "tags": "classification, supervised-learning" }
Finite lorentz transform for 4-vectors in terms of the generators
Question: One or two sets of notes (one of them by Timo Weigand) on QFT that I have come across state explicitly that a finite Lorentz transformation for 4-vectors can be written in terms of the generators $J^{\rho\sigma}$ as: $$\Lambda^{\mu}_{\,\nu}(\Omega)=\lim_{N\rightarrow \infty}(\delta^{\mu}_{\,\nu}-\frac{i}{2N}\Omega_{\rho\sigma}(J^{\rho\sigma})^{\mu}_{\,\nu})^{N}=\exp{(-\frac{i}{2}\Omega_{\rho\sigma}(J^{\rho\sigma}))^{\mu}_{\,\nu}}$$ Now, my question is: isn't it required of the generators of a Lie group to commute (or equivalently, for the group to be abelian) in order to be able to represent it as an exponential of its generators in the finite case? This follows from the condition that $e^{A+B}=e^{A}e^{B}$ only when $[A,B]=0$ (in this case, aren't we simply constructing the finite exponential by multiplying the infinitesimal exponential terms together and then assuming that the generators commute?). If so, then why can the above be written the way it is, given that the generators $J$ do not commute? What am I missing? Answer: The extreme RHS exponential is correct. This is exactly the way to write a Lorentz transformation for any size parameters $\Omega$. It is also true for making the matrices of other representations, not just the 4x4 matrices which rotate 4-vectors. For the M x M representation, each of the 16 generators $J^{\rho \sigma}$ would be an M x M matrix. Actually there are not 16 but only 6 generators for the Lorentz Group because the array of parameters is antisymmetric $\Omega _{\rho \sigma}=-\Omega_{\sigma \rho}$ and picks out only the antisymmetric set of generators $J^{\rho \sigma}=-J^{\sigma \rho}$. The limit expression is not written correctly. It should be written $$\Lambda^{\mu}_{\,\nu}(\Omega)=[\lim_{N\rightarrow \infty}(I-\frac{i}{2N}\Omega_{\rho\sigma}J^{\rho\sigma})^{N}]^{\mu}_{\,\nu}=\exp{(-\frac{i}{2}\Omega_{\rho\sigma}J^{\rho\sigma})^{\mu}_{\,\nu}}$$ where I is the 4x4 identity matrix diag(1,1,1,1).
Now matrices are being multiplied together and not just the $\mu \nu $ elements. Notice that each of the N terms in the limit product is exactly the same, and therefore commutes with all the other terms and your worry is resolved. $$\Lambda^{\mu}_{\,\nu}(\Omega)=[\lim_{N\rightarrow \infty}(e^{-\frac{i}{2N}\Omega_{\rho\sigma}J^{\rho\sigma}})^{N}]^{\mu}_{\,\nu}=\exp{(-\frac{i}{2}\Omega_{\rho\sigma}J^{\rho\sigma})^{\mu}_{\,\nu}}$$
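For concreteness, in the 4-vector representation the six generators have an explicit closed form (a standard convention; signs and factors of $i$ differ between textbooks):

```latex
(J^{\rho\sigma})^{\mu}{}_{\nu}
   = i\,\bigl(\eta^{\rho\mu}\,\delta^{\sigma}{}_{\nu}
            - \eta^{\sigma\mu}\,\delta^{\rho}{}_{\nu}\bigr)
```

The antisymmetry $J^{\rho\sigma}=-J^{\sigma\rho}$ is manifest here, which is why only 6 of the 16 index pairs give independent generators (3 rotations and 3 boosts).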
{ "domain": "physics.stackexchange", "id": 43248, "tags": "quantum-field-theory, group-theory, lie-algebra, lorentz-symmetry, commutator" }
Euler Transform on Leibniz Series of Pi
Question: So I read about a simple series for Pi, the Leibniz series. Here is its link: Series. I made a program for it and soon realised that it converges very slowly. So I did the Euler acceleration for alternating series https://en.wikipedia.org/wiki/Series_acceleration#Euler's_transform and made this recursive program: import functools import decimal def LebniezTerm(n): return 1/(2*n + 1) @functools.lru_cache(maxsize = 12800) def ndiffrence(n,depth): if depth == 0: return LebniezTerm(n) a = ndiffrence(n,depth-1) b = ndiffrence(n+1,depth-1) return (a - b) def EulerAccelerate(n): i = decimal.Decimal(1) pi = decimal.Decimal(0) while (i <= n): pi = pi + decimal.Decimal(ndiffrence(0,i-1))/decimal.Decimal(2**i) i+=1 return pi 4*EulerAccelerate(600) Here I did an optimisation using functools, but it is still slow. So can it be optimised further? How can the accuracy be increased? Answer: What I found out by simply timing the calculation is that casting to Decimal is a very costly operation. Dropping that in certain places (see my code) brings down the overall runtime to ~ 30-40%. Besides, the Leibniz terms can easily be precomputed (another best practice of optimization) as the list will be comparatively short. Surprisingly, this does not save much. Using a module method with a local name saves some time as well (from this_module import xxx as local_name instead of using this_module.xxx multiple times). In EulerAccelerate(), the loop variable i does not need to be of type Decimal, which saves a lot. Replacing 2**(i+1) with a simple addition yields another (small) saving. Stepping back from the code analysis, I think changing the algorithm from recursive to iterative would speed up the calculation a lot, much more than those micro-optimizations.
results on my notebook: maxdepth=24, accurate to 8 places: pi=3.1415926, runtime=10 s import functools from decimal import Decimal import time ## @functools.lru_cache(maxsize = 12800) def ndifference(n, depth): if depth == 0: return LT[n] # = 1.0/(2*n + 1) a = ndifference(n, depth-1) b = ndifference(n+1, depth-1) return (a - b) def EulerAccelerate(n): pi = 0 ith_power_of_2 = 2 # 2**(i+1) for i in range(n): pi += Decimal(ndifference(0, i)) / ith_power_of_2 ith_power_of_2 += ith_power_of_2 return pi # --------------------------------- maxdepth = 24 # create Leibniz terms beforehand; LT is global LT = [(1.0/(2.0*i+1.0)) for i in range(maxdepth+1)] t = time.time() print 4 * EulerAccelerate(maxdepth) print time.time()-t
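Following the last remark about going iterative: the recursion can be replaced by building the forward-difference table row by row, which removes the repeated calls entirely (a sketch; the function name is my own):

```python
def euler_accelerate_iter(n):
    # Row 0 of the difference table holds the Leibniz terms 1/(2k+1);
    # each pass replaces the row with its forward differences a_k - a_{k+1},
    # so row[0] after i passes equals Delta^i a_0.
    row = [1.0 / (2 * k + 1) for k in range(n)]
    total = 0.0
    denom = 2.0                        # current denominator 2**(i+1)
    for _ in range(n):
        total += row[0] / denom        # Delta^i a_0 / 2**(i+1)
        denom *= 2.0
        row = [row[k] - row[k + 1] for k in range(len(row) - 1)]
    return 4.0 * total

print(euler_accelerate_iter(24))  # ~3.141592..., matching the recursive version
```

This runs in O(n^2) time and O(n) space with no recursion depth or cache to worry about.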
{ "domain": "codereview.stackexchange", "id": 39108, "tags": "python, performance, recursion, mathematics" }
Night out with friends. Calculating the costs per person with Kotlin and Android
Question: The following app basically sums up the costs for an evening and calculates the costs per person. The user can input the description of the costs [String] (for example, round one = Spare Ribs Place, round two = Beer Place, etc.), the costs of the round [Double] (for example, 50$, 10.15$, etc.) and the number of people who participated [Int]. The description of the costs and the costs are taken via an AlertDialog after pressing the FAB. After pressing "Calculate" the total sum is divided by the number of people and displayed in a TextView. I know I still have a long way to go, but I am willing to learn and am therefore more than thankful for any advice in any respect. MAIN ACTIVITY.KT package com.hooni.nbun1kotlin import android.os.Bundle import android.view.LayoutInflater import android.view.View import android.widget.Button import android.widget.EditText import android.widget.TextView import android.widget.Toast import androidx.appcompat.app.AlertDialog import androidx.appcompat.app.AppCompatActivity import androidx.recyclerview.widget.LinearLayoutManager import com.google.android.material.snackbar.Snackbar import kotlinx.android.synthetic.main.activity_main.* import kotlinx.android.synthetic.main.alertdialog_addvalue.view.* import kotlinx.android.synthetic.main.toolbar.* class MainActivity : AppCompatActivity() { // Declaring member variables: // - mutableList contains data class Entry (Name of Entry, Value of Entry) // - sumValue is the sum of the values of all entries private val entryList: MutableList<Entry> = mutableListOf() private var sumValue: Double = 0.0 private lateinit var textViewTotalValue: TextView private lateinit var textViewValuePerPerson: TextView override fun onCreate(savedInstanceState: Bundle?)
{ super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) initUI() } private fun initUI() { initToolbar() initRecyclerView() initFAB() initButtons() initSumFields() } private fun initRecyclerView() { recyclerView_ListOfItems.apply { layoutManager = LinearLayoutManager(this@MainActivity) adapter = ItemAdapter(entryList) } } private fun initToolbar() { toolbar.inflateMenu(R.menu.menu_main) toolbar.setOnMenuItemClickListener { when(it.itemId) { R.id.settings -> Toast.makeText(this, "Settings", Toast.LENGTH_SHORT).show() R.id.share -> Toast.makeText(this, "Share", Toast.LENGTH_SHORT).show() } true } } private fun initFAB() { val fab: View = findViewById(R.id.fab) fab.setOnClickListener { val mDialogView = LayoutInflater.from(this).inflate(R.layout.alertdialog_addvalue,null) val mBuilder = AlertDialog.Builder(this).setView(mDialogView).setTitle("Add Entry") val mAlertDialog = mBuilder.show() // Action when pressing "Cancel" in the popup mDialogView.button_cancel.setOnClickListener { mAlertDialog.dismiss() } // If "OK" is being pressed the values from the edit text field (Name & Value of the entry) // are converted to the corresponding data types (String, Double) and then added to the // list of entries (entryList) // // The adapter is being notified that there has been a change in the mutableList // the sumValue (Double) is being increased by the entered value and the textView // is being updated mDialogView.button_ok.setOnClickListener { mAlertDialog.dismiss() val itemName = mDialogView.editText_itemName.text.toString() val itemValue = mDialogView.editText_itemValue.text.toString().toDouble() entryList.add(Entry(itemName,itemValue)) recyclerView_ListOfItems.adapter?.notifyDataSetChanged() sumValue += itemValue textViewTotalValue.text = sumValue.toString() // A snackbar is being shown to undo the action (i.e. 
removing the entry from the // mutableList (entryList), updating sumValue (Double) & the corresponding textView val snackbar = Snackbar.make(mainActivity,"Item added",Snackbar.LENGTH_SHORT) snackbar.setAction("Undo") { entryList.remove(Entry(itemName,itemValue)) recyclerView_ListOfItems.adapter?.notifyDataSetChanged() sumValue -= itemValue textViewTotalValue.text = sumValue.toString() } snackbar.show() } } } private fun initButtons() { // the UI for increasing the amount of people & calculate button are initialized val buttonIncrease = findViewById<Button>(R.id.button_increasePeople) val buttonDecrease = findViewById<Button>(R.id.button_decreasePeople) val editTextAmountOfPeople = findViewById<TextView>(R.id.textView_amountOfPeople) val buttonCalculate = findViewById<Button>(R.id.button_calculate) // the textView displaying the amount of People is set to "2" editTextAmountOfPeople.text = "2" // OnClickListener for increasing/decreasing amount of People & Calculating are set buttonIncrease.setOnClickListener { if(editTextAmountOfPeople.text.toString().toInt() < 10) editTextAmountOfPeople.text = (editTextAmountOfPeople.text.toString().toInt() + 1).toString() else (Toast.makeText(this@MainActivity,"10 People Maximum",Toast.LENGTH_SHORT)).show() } buttonDecrease.setOnClickListener { if(editTextAmountOfPeople.text.toString().toInt() > 2) editTextAmountOfPeople.text = (editTextAmountOfPeople.text.toString().toInt() - 1).toString() else (Toast.makeText(this@MainActivity,"2 People Minimum",Toast.LENGTH_SHORT)).show() } buttonCalculate.setOnClickListener { val valuePerPerson = sumValue / editTextAmountOfPeople.text.toString().toDouble() textViewValuePerPerson.text = valuePerPerson.toString() } } private fun initSumFields() { textViewTotalValue = findViewById(R.id.textView_totalValue) textViewValuePerPerson = findViewById(R.id.textView_valuePerPerson) } // Entry containing the name and the value data class Entry(var name: String, var value: Double) } ITEMADAPTER.KT package 
com.hooni.nbun1kotlin import android.view.LayoutInflater import android.view.View import android.view.ViewGroup import androidx.recyclerview.widget.RecyclerView import kotlinx.android.synthetic.main.listview_item.view.* class ItemAdapter (private val items: MutableList<MainActivity.Entry>) : RecyclerView.Adapter<ItemAdapter.ItemViewHolder>() { override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ItemViewHolder { return ItemViewHolder(LayoutInflater.from(parent.context) .inflate(R.layout.listview_item,parent,false)) } override fun getItemCount(): Int { return items.size } override fun onBindViewHolder(holder: ItemViewHolder, position: Int) { when(holder) { is ItemViewHolder -> { holder?.itemDescription?.text = items.get(position).name holder?.itemValue?.text = items.get(position).value.toString() } } } class ItemViewHolder constructor(view: View) : RecyclerView.ViewHolder (view) { val itemDescription = view.listView_valueDescription!! val itemValue = view.listView_value!! } } ACTIVITY_MAIN.XML <?xml version="1.0" encoding="utf-8"?> <androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/mainActivity" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity" android:orientation="vertical"> <include layout="@layout/toolbar" android:layout_width="match_parent" android:layout_height="wrap_content" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" /> <com.google.android.material.floatingactionbutton.FloatingActionButton android:id="@+id/fab" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginRight="16dp" android:layout_marginBottom="8dp" android:src="@mipmap/fab_add" 
app:layout_constraintBottom_toTopOf="@id/linearLayout_BottomBar" app:layout_constraintRight_toRightOf="parent" /> <androidx.recyclerview.widget.RecyclerView android:id="@+id/recyclerView_ListOfItems" android:layout_width="match_parent" android:layout_height="wrap_content" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintHorizontal_bias="0.0" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" app:layout_constraintBottom_toTopOf="@id/linearLayout_BottomBar"> </androidx.recyclerview.widget.RecyclerView> <LinearLayout android:id="@+id/linearLayout_BottomBar" android:layout_width="match_parent" android:layout_height="wrap_content" android:orientation="horizontal" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toStartOf="parent"> <LinearLayout android:layout_width="wrap_content" android:layout_height="match_parent" android:orientation="horizontal"> <Button android:id="@+id/button_decreasePeople" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="-" /> <TextView android:id="@+id/textView_amountOfPeople" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="2" /> <Button android:id="@+id/button_increasePeople" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="+" /> </LinearLayout> <Button android:id="@+id/button_calculate" android:layout_width="match_parent" android:layout_height="match_parent" android:text="Calculate" /> </LinearLayout> <LinearLayout android:id="@+id/linearLayout" android:layout_width="match_parent" android:layout_height="wrap_content" android:orientation="horizontal" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toBottomOf="@+id/toolbar"> <TextView android:id="@+id/textView_valuePerPerson" android:layout_width="match_parent" android:layout_height="wrap_content" 
android:layout_weight="1" android:gravity="center" tools:text="50000" /> <TextView android:id="@+id/textView_totalValue" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_weight="1" android:gravity="center" tools:text="100000" /> </LinearLayout> </androidx.constraintlayout.widget.ConstraintLayout> ALERTDIALOG_ADDVALUE.XML <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical"> <EditText android:id="@+id/editText_itemName" android:layout_width="match_parent" android:layout_height="wrap_content" android:ems="10" android:inputType="text" android:hint="Enter Description"/> <EditText android:id="@+id/editText_itemValue" android:layout_width="match_parent" android:layout_height="wrap_content" android:ems="10" android:inputType="numberDecimal" android:hint="Enter Value"/> <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:orientation="horizontal"> <Button android:id="@+id/button_cancel" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_weight="1" android:text="@android:string/cancel" /> <Button android:id="@+id/button_ok" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_weight="1" android:text="@android:string/ok" /> </LinearLayout> </LinearLayout> LISTVIEW_ITEM.XML <?xml version="1.0" encoding="utf-8"?> <androidx.cardview.widget.CardView xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="wrap_content" android:layout_margin="4dp" app:cardElevation="4dp"> <LinearLayout android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="horizontal"> <TextView 
android:id="@+id/listView_valueDescription" android:layout_width="match_parent" android:layout_height="match_parent" android:layout_gravity="center" android:layout_marginStart="16dp" android:layout_marginLeft="16dp" android:layout_marginEnd="8dp" android:layout_marginRight="8dp" android:layout_weight="2" android:gravity="center_vertical|start" android:textSize="24sp" tools:text='맛있는 식당"'> </TextView> <TextView android:id="@+id/listView_value" android:layout_width="match_parent" android:layout_height="match_parent" android:layout_gravity="center" android:layout_weight="4" android:gravity="center" android:textSize="24sp" tools:text="60,000"> </TextView> </LinearLayout> </androidx.cardview.widget.CardView> TOOLBAR.XML <?xml version="1.0" encoding="utf-8"?> <androidx.appcompat.widget.Toolbar xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/toolbar" android:layout_width="match_parent" android:layout_height="wrap_content" android:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar" app:popupTheme="@style/ThemeOverlay.AppCompat.Light" android:background="@color/colorPrimary" android:elevation="4dp" app:title="Toolbar" tools:menu="@menu/menu_main" /> MENU_MAIN.XML <?xml version="1.0" encoding="utf-8"?> <menu xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:android="http://schemas.android.com/apk/res/android"> <item android:id="@+id/share" android:icon="@android:drawable/ic_menu_share" android:title="@string/action_share" app:showAsAction="always"></item> <item android:id="@+id/settings" android:icon="@android:drawable/ic_menu_preferences" android:title="@string/action_settings"></item> </menu> Answer: variable names Personally, I recommend to use the Hungarian notation. 
Don't search for the Hungarian notation, or you'll find advice not to use it ;-) On a serious note, the Hungarian notation is great, but it's overused, and that's why you find websites advocating against it. But in GUI code, you have the perfect place for it... I don't exactly use the abbreviations inside the link, but it can give you ideas. When you use a standard abbreviation like this for your names, recognizing the types becomes quick, and you can read over them when you don't need them. Also, making variable names shorter will make it easier to think about them. For example, I always use rec for RecyclerView. This means I would change recyclerView_ListOfItems to recListOfItems. Also, I personally remove all the prepositions and turn the name into a single noun instead. So I would change ListOfItems to ItemList, so recyclerView_ListOfItems would become recItemList. fab -> fabAdd: fab doesn't tell you what its purpose is; fabAdd does. buttonIncrease -> btnIncrease editTextAmountOfPeople -> etPeopleAmount: the type doesn't match the name!! Either change the type to EditText or change et to tv. btn_increasePeople -> btnIncrease: don't use different names in the code and the XML; this can be confusing. use temporary variables When you use something twice in a row and it takes some code, or it is expensive for the computer (takes long/does work), store it in a temporary variable (a temporary variable is just a variable that lives briefly, inside a small function block).
btnIncrease.setOnClickListener { if(etPeopleAmount.text.toString().toInt() < 10) etPeopleAmount.text = (etPeopleAmount.text.toString().toInt() + 1).toString() else (Toast.makeText(this@MainActivity,"10 People Maximum",Toast.LENGTH_SHORT)).show() } When you store the people amount in an Int, the code becomes easier to read: btnIncrease.setOnClickListener { val peopleAmount = etPeopleAmount.text.toString().toInt() if(peopleAmount < 10) etPeopleAmount.text = (peopleAmount + 1).toString() else (Toast.makeText(this@MainActivity,"10 People Maximum",Toast.LENGTH_SHORT)).show() } Item adapter quick side-step to Item adapter MutableList Every time you change something in an adapter, you need to call notifyDataSetChanged. If you ask for a MutableList, the list can be changed, so you implicitly promise to call that function on every change, which you don't. It's therefore much better to ask for a List, as a List can't be changed. (If you want to change the adapter, add functions which replace the list and call notifyDataSetChanged.) single-expression functions When the first word in the body of your function is return, you can simplify it. override fun getItemCount(): Int { return items.size } // can be simplified to override fun getItemCount(): Int = items.size //or even to override fun getItemCount() = items.size onBindViewHolder override fun onBindViewHolder(holder: ItemViewHolder, position: Int) { when(holder) { is ItemViewHolder -> { holder?.itemDescription?.text = items.get(position).name holder?.itemValue?.text = items.get(position).value.toString() } } } You know that holder is an ItemViewHolder, as it is the only thing the parameter allows. So let's remove the second check. override fun onBindViewHolder(holder: ItemViewHolder, position: Int) { holder?.itemDescription?.text = items.get(position).name holder?.itemValue?.text = items.get(position).value.toString() } Next, holder cannot be null, as the ItemViewHolder type doesn't have a question mark.
This means the question mark after holder can be removed: override fun onBindViewHolder(holder: ItemViewHolder, position: Int) { holder.itemDescription?.text = items.get(position).name holder.itemValue?.text = items.get(position).value.toString() } Next, there are some functions that can use symbols instead of text, called operator overloading. Almost every get in Kotlin can therefore be replaced with []. override fun onBindViewHolder(holder: ItemViewHolder, position: Int) { holder.itemDescription?.text = items[position].name holder.itemValue?.text = items[position].value.toString() } Ok, now back to the activity Utility functions You should create functions for code that you reuse. After the previous refactoring, you read an Int from a TextView 3 times. (OK, two, but sumValue / peopleAmount already gives a Double, because sumValue is a Double and a division involving a Double yields a Double; change that and you have three.) fun readInt(textView: TextView) : Int { return textView.text.toString().toInt() } Now you can refactor to: btnIncrease.setOnClickListener { val peopleAmount = readInt(etPeopleAmount) if(peopleAmount < 10) etPeopleAmount.text = (peopleAmount + 1).toString() else (Toast.makeText(this@MainActivity,"10 People Maximum",Toast.LENGTH_SHORT)).show() } extension function We can improve our function by making it an extension function. This means that you can create a function for TextView that looks as if it were defined inside the real class.
fun TextView.readInt() : Int { //it acts like it's created inside the TextView class, so this refers to the TextView return this.text.toString().toInt() } As you probably know, you don't have to call this explicitly, so the function can be changed to: fun TextView.readInt() : Int { return text.toString().toInt() } or even simpler: fun TextView.readInt() = text.toString().toInt() So the code now becomes: btnIncrease.setOnClickListener { val peopleAmount = etPeopleAmount.readInt() if(peopleAmount < 10) etPeopleAmount.text = (peopleAmount + 1).toString() else (Toast.makeText(this@MainActivity,"10 People Maximum",Toast.LENGTH_SHORT)).show() } You can now change the Toast.makeText().show() calls into an extension function shortToast(...). I won't check the XML, as I don't find XML layouts interesting. I use Anko for this.
{ "domain": "codereview.stackexchange", "id": 36194, "tags": "beginner, android, kotlin" }
What is this structure in human brain?
Question: Scientists made a new image of the brain. I wonder, what is this arc (denoted by blue)? Is it the caudate nucleus? Answer: Short answer Based on shape and approximate position, I think it is the corpus callosum. Background I think it is the corpus callosum (Fig. 1). The corpus callosum is approximately 10 cm in length and is C-shaped. It becomes thicker posteriorly, as is also evident in your image. The corpus callosum is a structure consisting of white matter, containing about 200 million axons. Fig. 1. Mid-sagittal section of the brain showing the corpus callosum. source: University of Central Florida I think it is not the caudate nucleus, although it also has a curved shape. The caudate is positioned more laterally than the corpus callosum, and by the looks of it your picture shows a mid-sagittal section of the brain. The caudate seems to be positioned too far laterally in the brain as far as I can see (Fig. 2). Fig. 2. Nucleus caudatus. source: Brain Notes
{ "domain": "biology.stackexchange", "id": 6875, "tags": "brain, neuroanatomy" }
Removing transients in highpass filtering with MATLAB
Question: I've made a simple first order IIR highpass filter with a zero at z = 1 and a pole at z = 0.9. Its frequency response looks like this: Now, I filter a DC signal using this filter. Here's the MATLAB code I use to do it: b = [1 -1]; % Zero at z = 1 a = [1 -0.9]; %Pole at z = 0.9 figure(1) freqz(b, a) t = 1:100; x(1:length(t)) = 1; % Constant function y = filter(b, a, x); figure(2) plot(t, x) xlabel('Time'); ylabel('Input Signal'); figure(3) plot(t, y) xlabel('Time'); ylabel('Output Signal'); As my filter is highpass, I expect the DC to become zero, or at least become severely attenuated. However, the output signal I get looks like this: From my understanding, this exponential output is a transient produced because I haven't set the initial conditions correctly. Sure enough, setting x[-1] = 1 solves the problem. However, this works only for this particular DC input signal. For any general input signal, how do I set the initial conditions so that transients aren't produced? Edit: I'm aware that the filtfilt() function does forward-backward filtering with transient minimization, but I really want to port the filter to an embedded platform, so I need to understand how transient removal works. Thanks in advance for the help! Edit 2: As suggested by Kuba Ober, I tried setting x[-1] to the value that it actually should have been. It works fine for a DC input, but here's what happened for a sinusoidal input: clc; clear all; p = 0.9 a = [1 -p] b = [1 -1] n = 1:100; % Samples f = 0.2; % Frequency in Hz Fs = 10; % Sampling rate in samples per second t = n/Fs; % Time axis x = sin(2*pi*f*t); % Filter with the appropriate initial conditions y = filter(b, a, x, filtic(b, a, [], [sin(2*pi*f*0)])); figure(1) plot(t, x) xlabel('Time'); ylabel('Input Signal'); figure(2) plot(t, y) xlabel('Time'); ylabel('Output Signal'); Here's the input signal: And here's the output: The first peak is visibly smaller than the second, which indicates some transients being present.
I'm not entirely sure about this, but I think the reason it doesn't work is because just setting x[-1] is not enough, I also need to set y[-1]. The problem here is that there's no way to find out what y[-1] actually should be. Edit 3 : Let me provide a little more info on the problem I'm working on. I'm trying to use filters to remove noise from ECG (Electrocardiogram) signals in an embedded platform. Here's a typical ECG signal, after filtering: Here's what an ECG signal looks before filtering: Note the DC offset in the signal before filtering. For filtering, I need a notch filter to remove high frequency power line noise and a highpass filter to remove the DC and the low frequency "drifting" of the signal. The filters I use need to be linear phase, since the time domain morphology of an ECG signal is very important for diagnosis. However, my filter doesn't need to be real-time, as I'm doing the processing offline after acquiring the ECG signal from the patient. So, for implementing nonlinear phase IIR filters, I'm currently using forward-backward zero phase filtering. One opinion that's shared by @Matt L. and @Royi is that transients are unavoidable in real-time filtering and that I should use a longer input signal and crop off the first few seconds of the filtered output instead. This is something I'd like to avoid, as acquiring long ECGs from a living patient is somewhat difficult. Also, I do not have to filter in real-time, so any technique of transient removal that hinges on knowing the entire signal in advance is perfectly all right. Any help is appreciated! Answer: Your first order filter recursion for some real constants $a,b,c$ is $$ y[n] = a x[n] + b x[n-1] - c y[n-1] $$ with the two initial memory states $x[-1]$ and $y[-1]$ at $n=0$. Your "no transient" condition can be translated to $y[0]=0$ and a necessary second condition so that you can solve for both of your memory states. 
The second condition could be that the discrete derivative of $y$ also vanishes at $n=0$, so $y[0]-y[-1]=0$. You can also take any other condition that seems sensible to you. The two equations give you a unique solution for the two unknown memories, namely: $$y[-1] = 0$$ and $$x[-1] = -\frac{a}{b}x[0] $$ Alternatively, your conditions may better be chosen as $y[0]=0$ and $y[0]-y[-1]=x[0]-x[-1]$ in order to capture the initial slope of the input. The resulting recursion equation at $n=0$ is then $$0=a x[0]+b x[-1]+c(x[0]-x[-1])$$ giving you the solution $$x[-1]=-\frac{a+c}{b-c}x[0]$$ and $$ y[-1]= -\left(1+\frac{a+c}{b-c}\right)x[0]$$ (Please check my calculations!) But in general you cannot expect a simple initial condition to give you the same result as knowing the signal history. So you can only take this to a certain point, and in general it would probably be better if you just discarded the transient response of your output.
{ "domain": "dsp.stackexchange", "id": 1897, "tags": "matlab, filtering" }
Experiment for measuring flow rate vs diameter
Question: I am trying to establish the relationship between the diameter of a pipe and the flow rate of water. The main idea for my experiment would be to run water through pipes of varying diameter and measure the flow rate. I was thinking of the following setup: Water tap $>$ pipe $>$ Measuring device (like a venturi meter) In other words, the water would flow from the tap into the pipe. Then, I will try to measure the flow rate using something like a venturi meter. Or, I was also thinking of dropping something like a small particle in one end of the pipe and measuring its velocity at the other end. Would this setup be enough to complete the experiment? Answer: Yes, but the venturi meter costs money. Here is a way that costs much less: run the water out of the pipe into a bucket, and let it run for a fixed time. Then measure how much water is in the bucket after that much time. That will give you the average flow rate. Please note that you will also need to know the source pressure at the tap, but water pressure gauges are easy to come by and not as expensive as a venturi meter.
{ "domain": "physics.stackexchange", "id": 61217, "tags": "fluid-dynamics, experimental-physics" }
Analysis of post-HF wavefunctions
Question: The Hartree-Fock method introduces electron (spin)orbitals, and they are commonly used for qualitative rationalization of many molecular properties. However, MOs have meaning only if we ignore electron correlation. Post-HF non-DFT methods build a correlated wavefunction, but due to correlation, extracting independent orbitals from it is not possible. Electron density is still a somewhat informative concept even with no orbitals involved, but we like to talk about $\pi$-systems, sigma-bonds and the like, and relate electron transitions to molecular features. So, I'm interested in existing approaches to analysis of post-HF wavefunctions (for example, those obtained from the Coupled Cluster method). Are there any well-known approaches to analysis of correlated wavefunctions that do not rely on the concept of orbitals and that allow one to relate some molecular properties to local structural features? Answer: There are! First off, you're correct that Hartree-Fock and DFT produce effective one-particle states $\phi_i$ which are the molecular orbitals (MOs) in the case of HF or the Kohn-Sham (KS) orbitals in the case of DFT (I'll focus on Hartree-Fock since I am more familiar with it than DFT). The MOs are defined via expansion coefficients ($C_{\mu i}$) in the basis set $\{\chi_\mu\}$ such that $\phi_i = \sum_\mu C_{\mu i} \chi_{\mu}$. The more physically meaningful quantity one works with is the one-particle reduced density matrix (1PRDM) defined as $\Gamma_{pq} = \sum_{\mu \nu} C^*_{\mu p} P_{\mu \nu} C_{\nu q}$ where $P_{\mu \nu} = \sum_i C_{\mu i}^*C_{\nu i}$. Diagonalization of the 1PRDM produces what are known as natural atomic orbitals (NAOs) $\phi^{\rm{NAO}}_i = \sum_p a_{p i} \phi_p$ where the coefficients $a_{p i}$ are defined by the eigenvalue relation $\Gamma_{p q} a_{q i} = \rho_i a_{p i}$ where $\rho_i$ is the occupation number corresponding to the $i^{\rm{th}}$ NAO.
Since $\Gamma$ is Hermitian, the NAOs are orthonormal and form a perfectly suitable basis to perform any quantum chemistry calculation, however, their value lies in the fact that NAOs actually show you where the electron density is so it's a very useful visualization tool. Once you have NAOs, all of the analysis you want is available. There are things called natural bond orbitals (NBOs) which rotate NAOs in such a way that they maximize the electron density between 2 atoms, thus simulating the notion of a chemical bond. Here, you find the orbitals describing your $\sigma$- and $\pi$-bonds. The point I'm trying to make is that what's important for this kind of analysis is having $\Gamma_{pq}$. So the answer to your question depends on whether or not you calculate $\Gamma_{pq}$ for a variety of post-HF methods, such as configuration interaction (CI), coupled-cluster (CC), and others. If we denote the Hartree-Fock determinant as $|\Phi\rangle$, the 1PRDM from before is simply $\Gamma_{pq} = \langle \Phi | a^+_p a_q | \Phi \rangle$. The wavefunction originating from post-HF calculations is $|\Psi_\mu\rangle = (1 + C_\mu)|\Phi\rangle$ in the case of CI and $|\Psi_\mu\rangle = R_\mu e^T |\Phi\rangle$ in the case of equation-of-motion (EOM) CC. The analogous expression of the 1PRDM for the $\mu^{\rm{th}}$ excited state ($\mu = 0$ is the ground state) is $$\Gamma^{\mu}_{pq} = \langle \Psi_\mu | a^+_p a_q | \Psi_\mu \rangle$$ You can calculate this object directly for the various types of wavefunction that come out of correlated electronic structure calculations. It's not light work for sure, but if you know your way around many-body algebra (e.g. Slater's Rules, diagrammatics, Wick's Theorem), you can calculate this object and diagonalize it to find correlated bonding orbitals just as you would in Hartree-Fock. 
Also note that in the case of CC, the bra state $\langle \Psi_\mu | \neq [|\Psi_\mu\rangle]^+$ and instead we use the biorthogonal left-CC parameterization $\langle \Psi_\mu | = \langle \Phi | L_\mu e^{-T}$ where $L_\mu$ is a linear de-excitation operator complementary to $R_\mu$. These and other technicalities are of course well-documented in the CC literature. The key observation here is that we never explicitly build a correlated $|\Psi_\mu\rangle$ defined via Slater determinants directly. In the case of CC wavefunctions, you'd blow up your computer since $e^T$ is an object as large as the full CI wavefunction for any level of truncation. So the way things are done is you obtain $|\Psi_\mu\rangle$ indirectly though a set of CI coefficients or CC cluster amplitudes and calculate the 1PRDM, again, completely defined through this sequence of finite coefficients. It may still be a single-particle orbital, but it has many-body information embedded within it. That being said, these calculations are available in most electronic structure programs.
{ "domain": "chemistry.stackexchange", "id": 14592, "tags": "quantum-chemistry, electronic-configuration" }
Implementing a tensor product formula
Question: I would like to use C to implement the following formula: $$\mathbf S(u,v)=\sum_{r=i-p}^{i}\sum_{s=j-q}^{j}N_{r,p}(u)N_{s,q}(v)\mathbf P_{r,s}$$ Namely, $$ \left(N_{i-p,p}(u),\cdots,N_{i,p}(u)\right)_{1\times(p+1)} \begin{pmatrix} \mathbf{P}_{i-p,j-q} & \cdots & \mathbf{P}_{i-p,j}\\ \vdots& \cdots &\vdots\\ \mathbf{P}_{i,j-q} & \cdots & \mathbf{P}_{i,j}\\ \end{pmatrix} \begin{pmatrix} N_{j-q,q}(v)\\ \vdots\\ N_{j,q}(v)\\ \end{pmatrix}_{(q+1)\times 1} $$ where, $$\mathbf P_{i,j}=\{x_{i,j},y_{i,j},z_{i,j}\}$$ or $$\mathbf P_{i,j}=\{x_{i,j},y_{i,j},z_{i,j}, w_{i,j}\}$$ So the middle matrix is actually a tensor with rank 3, like this: {{{1,1,0.09986},{1,2,0.07914},{1,3,0.80686},{1,4,0.68846},{1,5,0.297402}}, {{2,1,0.12262},{2,2,0.41283},{2,3,0.38565},{2,4,0.05670},{2,5,-0.12276}}, {{3,1,0.08301},{3,2,0.13181},{3,3,0.36565},{3,4,0.74432},{3,5,-0.62065}}, {{4,1,0.12755},{4,2,0.06099},{4,3,0.74465},{4,4,0.18402},{4,5,0.336987}}, {{5,1,0.12346},{5,2,0.97057},{5,3,0.72663},{5,4,0.23481},{5,5,0.968757}}} Here is my algorithm and implementation with C code: void calc_bspline_surf(double *P, double *U, double *V, int p, int q, double u, double v, int const *dims, double *S) { double *Nu = (double *) malloc((p + 1)*sizeof(double)); double *Nv = (double *) malloc((q + 1)*sizeof(double)); double *temp = (double *) malloc(dims[2]*sizeof(double)); int m, n, c; int i, j; int k, l, r; int uidx, vidx, idx; m = dims[0]; n = dims[1]; c = dims[2]; i = find_span_index(p, U, m - 1, u); j = find_span_index(q, V, n - 1, v); b_spline_basis(p, U, i, u, Nu); b_spline_basis(q, V, j, v, Nv); /*main implementation*/ uidx = i - p; for (l = 0; l <= q; l++){ for (r = 0; r <= c - 1; r++){ temp[r] = 0.0; } vidx = j - q + l; for (k = 0; k <= p; k++){ idx = c*n*(uidx + k) + c*vidx; for (r = 0; r <= c - 1; r++){ temp[r] = temp[r] + Nu[k]*P[idx+r]; } } for (r = 0; r <= c - 1; r++){ S[r] = S[r] + Nv[l]*temp[r]; } } free(Nu); free(Nv); free(temp); } Notes: For a tensor T with dimensions {m, n,
c}, which was stored with a flattened vector V, we can refer to T[i][j][k] with V[n*c*i + c*j + k] In function calc_bspline_surf(), Nu and Nv store the values of the vectors N(i-p,p),..., N(i,p) and N(j-q,q),..., N(j,q), respectively. S[0],...,S[c-1] store the results. The complete file can be downloaded from this link. As a C newcomer, I think my code is not fast. I'm seeking a more efficient strategy for dealing with this problem. Answer: Readability & Maintainability Right now your code is a bit hard to read. A way to greatly help this would be to use better variable names. Put the variable declarations on separate lines and initialize them to some value at the same time. From Code Complete, 2d Edition, p. 759: With statements on their own lines, the code reads from top to bottom, instead of top to bottom and left to right. When you’re looking for a specific line of code, your eye should be able to follow the left margin of the code. It shouldn’t have to dip into each and every line just because a single line might contain two statements. You should declare your for loops as such: for(int i = 0; i < size; ++i) Note that this was introduced in the C99 standard. There is no reason you should not be using this standard in your code. Design Group your variables in structs to make them more maintainable; a good hint that you should be doing this is when you are passing in 9 parameters to your function. Return the output vector; it makes your function easier to use. You'll have to allocate the space in your method and free it elsewhere. This also eliminates the potential bug discussed below. Don't cast malloc(). Right now you are using this loop to zero out temp (or at least part of it): for (r = 0; r <= c - 1; r++){ temp[r] = 0.0; } The original algorithm doesn't implement this in a separate loop, and I don't recommend you do either. It's a loop in the code you don't need.
These lines: temp[r] = temp[r] + Nu[k]*P[idx+r]; S[r] = S[r] + Nv[l]*temp[r]; Can be shortened to: temp[r] += Nu[k]*P[idx+r]; S[r] += Nv[l]*temp[r]; Set those pointers you free()ed to NULL right after. This prevents us from trying to access the data afterwards, which will result in undefined and usually catastrophic behavior. Bug Right now you are relying on the passed in values already stored in S[r] when you add Nv[l]*temp[r] to them. In the original NURBS A3.5 algorithm, they are 0'ed out right before their use. You could overwrite all these values to 0 with a memset() in our function and continue as is. However, overwriting something the user passes when we could've started from scratch is usually not expected behavior. Create the array at the top of the function like you do with the other arrays, using calloc().
{ "domain": "codereview.stackexchange", "id": 21034, "tags": "performance, beginner, c, matrix" }
Sorting an unordered pile of items into drawers with minimal drawer movements
Question: A while ago, I was doing my laundry late at night. When I brought my laundry back to my dorm, I started to put it away. My wardrobe is set up as follows: My drawers are categorized by the type of clothing they hold, and I'm very particular about that; this means I can't put t-shirts in my pants drawer (otherwise I'll be too flustered over it to sleep). Of course, I know this categorization, and I know which drawer is which even in the dark. I have $N$ pieces of clean laundry, and $M$ drawers arranged vertically (with the drawer at the top being $D_0$) I can open and close drawers as I see fit, but if drawer $D_i$ is open, I can't put any clothes in drawer $D_{i+1}$ (because it's blocked by $D_i$). All of my drawers are already closed when I enter the room. When I finish, I have to close all open drawers. Considerate gentleman that I am, I don't want to wake up my roommate (this was a college dorm, so we slept in the same room). I resolved to not turn on the lights and make as little noise as possible. This means that I have to put away my laundry under the following constraints: I can't see into my laundry bag, so I only know what I grab as soon as I pull it out. Opening and closing a drawer makes noise. Pulling items out of my laundry bag, or actually putting them in the drawer, does not. I can only put away one item of clothing at a time; I can't fold like clothes and put them in piles (and then put those piles away) because the floor's too dirty and there's no space on the furniture within reach of me. I can put an item of clothing back in my bag and take out another, but the one I take out next could be the one I had just put back in (remember, I have no control over what I pull out of the bag). Given this, I have three questions: How can I put my laundry away while opening and closing the drawers as few times as possible? Has anyone else thought of this problem or something similar? Does this problem have any practical applications? 
Answer: How about the following: Go through your bag; fold and stack items for drawers $D_i$ and $D_{i+1}$, $i \in 2\mathbb{N}$, in/on drawer $D_i$ (which you open on demand). Make sure you have the items for $D_{i+1}$ on top; that may require quite some reordering pancake-style, but you don't seem concerned about such operations. Note that there is always enough room (two drawer heights) to do this, assuming the drawers are roughly equal and your clothes fit into them as it is. For each open drawer $D_i$, pick up the stack for $D_{i+1}$ (and put in the bag, if necessary), close $D_i$, open $D_{i+1}$, put the stack in there, and close $D_{i+1}$. This algorithm requires $2M$ drawer movements if all drawers get clothing, so it is worst-case optimal. It will waste some movements if there are items for $D_{i+1}$ but not $D_i$; if only odd drawers get items, we execute twice the drawer movements necessary. Since this is the worst that can happen, we still have a $2$-approximation.
{ "domain": "cs.stackexchange", "id": 5463, "tags": "algorithms, optimization, sorting, randomized-algorithms" }
Text based Java game "Battle Arena"
Question: This is my first java program. I'm coming from a python background. This is a text based combat arena game. Are there any ways I could better implement the overall code structure? How might I improve the math of the attack() function? It prompts the user for a hero name, then generates hero and enemy objects of the character class with randomly generated Health, Defense, and Strength stats between 1-100 and prompts user for an option. 1 initiates battle 2 quits Selection 1 starts an attack iteration where: The game rolls for each character (random 1-6) to determine who strikes first and resolves the attack results, prints updated stats and returns to prompt the two options. Something slightly interesting happens when they tie the initiative roll. When a character's health is reduced below 1, the victor is announced and the game quits. Some announcer text is printed depending on different outcomes for interest. Game.java import java.util.Scanner; import java.io.*; public class Game { public class Character { int health; int defense; int strength; String cname; int init; public Character(String name) { health = getRandom(100, 1); defense = getRandom(100, 1); strength = getRandom(100, 1); cname = name; init = 0; } } static int getRandom(int max, int min) { int num = 1 + (int)(Math.random() * ((max - min) + 1)); return num; } static void cls() { System.out.print("\033[H\033[2J"); System.out.flush(); } static String printWelcome() { cls(); System.out.println("Welcome to the arena!"); Scanner scanObj = new Scanner(System.in); System.out.println("Enter your hero\'s name:"); String heroName = scanObj.nextLine(); cls(); return heroName; } static void printStats(Character c) { Console cnsl = System.console(); String fmt = "%1$-10s %2$-1s%n"; System.out.println("\n" + c.cname + "\'s Stats:\n---------------"); cnsl.format(fmt, "Health:", c.health); cnsl.format(fmt, "Defense:", c.defense); cnsl.format(fmt, "Strength:", c.strength); } static void clash(Character h, 
Character e) { System.out.println("\n" + e.cname + " took a cheapshot!\n(Croud gasps)\nBut " + h.cname + " blocked it in the nick of time!\n(Croud Chears)\n"); doBattle(h, e); } static Character roll(Character h, Character e) { h.init = getRandom(6, 1); e.init = getRandom(6, 1); if (h.init > e.init) { return h; } else if (h.init < e.init) { return e; } else { clash(h, e); return e; } } static void attack(Character a, Character d) { int apts; String aname = a.cname; String dname = d.cname; if (d.defense > a.strength) { apts = 1; d.defense = d.defense - ((d.defense % a.strength) + 1); System.out.println("\n" + dname + " blocked " + aname + "\'s attack and took no damage!\n(Croud chears)\n"); } else { apts = a.strength - d.defense; d.health = d.health - apts; System.out.println("\n" + aname + " strikes " + dname + " for " + apts + " points of damage!\n(Croud boos)\n"); } if (d.health < 1) { d.health = 0; } } static void doBattle(Character h, Character e) { Character goesFirst = roll(h, e); System.out.println(goesFirst.cname + " takes initiative!\n"); Character defender; if (h.cname == goesFirst.cname) { defender = e; } else { defender = h; } attack(goesFirst, defender); // System.out.println(defender.cname); } static int getOption() { Scanner scanObj = new Scanner(System.in); System.out.println("\nEnter option: (1 to battle, 2 to escape!)"); int option = scanObj.nextInt(); return option; } public static void main(String[] args) { Game myGame = new Game(); Game.Character hero = myGame.new Character(printWelcome()); Game.Character enemy = myGame.new Character("Spock"); System.out.println("\nAvast, " + hero.cname + "! 
Go forth!"); // printStats(hero); // printStats(enemy); while (hero.health > 0 && enemy.health > 0) { printStats(hero); printStats(enemy); int option = getOption(); cls(); if (option == 1) { doBattle(hero, enemy); } else if (option == 2) { System.out.println("YOU COWARD!"); System.exit(0); } else { System.out.println("Invalid Option"); } } printStats(hero); printStats(enemy); if (hero.health < 1) { System.out.println(enemy.cname + " defeated " + hero.cname + "!\n(Cround boos aggressively)\nSomeone from the croud yelled \"YOU SUCK!\"\n"); } else { System.out.println(hero.cname + " utterly smote " + enemy.cname + "!\n(Croud ROARS)\n"); } } } Answer: Make as many of the properties of Character final as possible. init (which should be named initiative) should not be a property at all. Delete getRandom. Pass in an instance of Random for testability, and call its nextInt. Make Character a static inner class - it won't be able to access properties of Game, and that's a good thing - it's better to pass them in explicitly when needed. Avoid static abuse on your outer functions. Most of your Game methods should be instance methods with access to the member variables they need. cls (clearing the screen) is an anti-feature. The user can clear the screen when they want through their own terminal controls. Don't \' escape apostrophes when that isn't needed. There is no need for Console in this application. Make more use of printf instead of string concatenation. Don't abbreviate variables like h (hero) and e (enemy) to single letters; spell them out. Various spelling issues like croud (should be crowd); enable spellcheck. Make your verb tenses agree - they should all be in present tense, instead of a mix of past and present tense. clash calling doBattle is recursion, which is not a good idea; you almost certainly shouldn't be recursing here. Don't use nextInt for your option.
Since it's only used in a simple comparison, leave it as a string; it removes the potential for an exception and simplifies validation. Whenever possible, construct your console formatting so that it produces logically-related paragraphs. Also, prefer accepting terminal input on the same line as the prompt. Suggested package com.stackexchange; import java.util.Optional; import java.util.Random; import java.util.Scanner; public class Game { public record Hit( boolean blocked, int attack_points ) { @Override public String toString() { if (attack_points == 0) return "no damage"; if (attack_points == 1) return "1 point of damage"; return "%d points of damage".formatted(attack_points); } } public static class Character { private int health; private int defense; private final int strength; public final String name; public Character(String name, Random rand) { health = rand.nextInt(1, 101); defense = rand.nextInt(1, 101); strength = rand.nextInt(1, 101); this.name = name; } public boolean isAlive() { return health > 0; } public Hit receiveHit(Character attacker) { int attack_points = Integer.max(0, attacker.strength - defense); boolean blocked = attack_points == 0; if (blocked) defense -= (defense % attacker.strength) + 1; health = Integer.max(0, health - attack_points); return new Hit(blocked, attack_points); } private void printStats() { System.out.printf("%s's Stats:%n", name); System.out.println("---------------"); final String fmt = "%-10s %-2d%n"; System.out.printf(fmt, "Health:", health); System.out.printf(fmt, "Defense:", defense); System.out.printf(fmt, "Strength:", strength); System.out.println(); } @Override public String toString() { return name; } } private final Random rand = new Random(); private final Scanner scanner = new Scanner(System.in); private final Character hero; private final Character enemy = new Character("Spock", rand); public Game() { System.out.println("Welcome to the arena!"); System.out.print("Enter your hero's name: "); hero = new
Character(scanner.nextLine(), rand); System.out.printf("Avast, %s! Go forth!%n", hero); } public void clash() { System.out.printf("%s takes a cheap shot!%n", enemy); System.out.println("(Crowd gasps)"); System.out.printf("But %s blocks it in the nick of time!%n", hero); System.out.println("(Crowd cheers)"); } public Optional<Character> rollInitiative() { int hero_initiative = rand.nextInt(1, 7), enemy_initiative = rand.nextInt(1, 7); if (hero_initiative > enemy_initiative) return Optional.of(hero); if (hero_initiative < enemy_initiative) return Optional.of(enemy); return Optional.empty(); // tie ("cheap shot") } public void attack(Character attacker, Character defender) { Hit hit = defender.receiveHit(attacker); if (hit.blocked) { System.out.printf("%s blocks %s's attack and takes %s!%n", defender, attacker, hit); System.out.println("(Crowd cheers)"); } else { System.out.printf("%s strikes %s for %s!%n", attacker, defender, hit); System.out.println("(Crowd boos)"); } } public void doBattle() { Optional<Character> goesFirst = rollInitiative(); if (goesFirst.isPresent()) { Character attacker = goesFirst.get(); System.out.printf("%s takes initiative!%n", attacker); Character defender = hero == attacker? enemy: hero; attack(attacker, defender); } else { clash(); // tie ("cheap shot") } } public void run() { do { System.out.println(); hero.printStats(); enemy.printStats(); if (!hero.isAlive()) { System.out.printf("%s defeats %s!%n", enemy, hero); System.out.println("(Crowd boos aggressively)"); System.out.println("Someone from the crowd yells \"YOU SUCK!\""); break; } if (!enemy.isAlive()) { System.out.printf("%s utterly smites %s!%n", hero, enemy); System.out.println("(Crowd ROARS)"); break; } } while (doRound()); } private boolean doRound() { while (true) { System.out.print("Enter option (1 to battle, 2 to escape)! 
"); switch (scanner.nextLine()) { case "1": doBattle(); return true; case "2": System.out.println("YOU COWARD!"); return false; default: System.err.println("Invalid option"); } } } public static void main(String[] args) { new Game().run(); } } Output Welcome to the arena! Enter your hero's name: Kirk Avast, Kirk! Go forth! Kirk's Stats: --------------- Health: 68 Defense: 59 Strength: 50 Spock's Stats: --------------- Health: 56 Defense: 3 Strength: 49 Enter option (1 to battle, 2 to escape)! 1 Spock takes a cheap shot! (Crowd gasps) But Kirk blocks it in the nick of time! (Crowd cheers) Kirk's Stats: --------------- Health: 68 Defense: 59 Strength: 50 Spock's Stats: --------------- Health: 56 Defense: 3 Strength: 49 Enter option (1 to battle, 2 to escape)! 1 Kirk takes initiative! Kirk strikes Spock for 47 points of damage! (Crowd boos) Kirk's Stats: --------------- Health: 68 Defense: 59 Strength: 50 Spock's Stats: --------------- Health: 9 Defense: 3 Strength: 49 Enter option (1 to battle, 2 to escape)! 1 Spock takes a cheap shot! (Crowd gasps) But Kirk blocks it in the nick of time! (Crowd cheers) Kirk's Stats: --------------- Health: 68 Defense: 59 Strength: 50 Spock's Stats: --------------- Health: 9 Defense: 3 Strength: 49 Enter option (1 to battle, 2 to escape)! 1 Spock takes a cheap shot! (Crowd gasps) But Kirk blocks it in the nick of time! (Crowd cheers) Kirk's Stats: --------------- Health: 68 Defense: 59 Strength: 50 Spock's Stats: --------------- Health: 9 Defense: 3 Strength: 49 Enter option (1 to battle, 2 to escape)! 1 Spock takes initiative! Kirk blocks Spock's attack and takes no damage! (Crowd cheers) Kirk's Stats: --------------- Health: 68 Defense: 48 Strength: 50 Spock's Stats: --------------- Health: 9 Defense: 3 Strength: 49 Enter option (1 to battle, 2 to escape)! 1 Spock takes a cheap shot! (Crowd gasps) But Kirk blocks it in the nick of time! 
(Crowd cheers) Kirk's Stats: --------------- Health: 68 Defense: 48 Strength: 50 Spock's Stats: --------------- Health: 9 Defense: 3 Strength: 49 Enter option (1 to battle, 2 to escape)! 1 Spock takes initiative! Spock strikes Kirk for 1 point of damage! (Crowd boos) Kirk's Stats: --------------- Health: 67 Defense: 48 Strength: 50 Spock's Stats: --------------- Health: 9 Defense: 3 Strength: 49 Enter option (1 to battle, 2 to escape)! 1 Kirk takes initiative! Kirk strikes Spock for 47 points of damage! (Crowd boos) Kirk's Stats: --------------- Health: 67 Defense: 48 Strength: 50 Spock's Stats: --------------- Health: 0 Defense: 3 Strength: 49 Kirk utterly smites Spock! (Crowd ROARS) Stats tables A simple way to condense your stats to a table-like format could look like public static void printStats(Character... chars) { System.out.print(" ".repeat(9)); for (Character col: chars) System.out.printf("%8s ", col); System.out.println(); System.out.print("-".repeat(8)); for (Character col: chars) System.out.print("-".repeat(9)); System.out.println(); System.out.printf("%8s ", "Health"); for (Character col: chars) System.out.printf("%8d ", col.health); System.out.println(); System.out.printf("%8s ", "Defense"); for (Character col: chars) System.out.printf("%8d ", col.defense); System.out.println(); System.out.printf("%8s ", "Strength"); for (Character col: chars) System.out.printf("%8d ", col.strength); System.out.printf("%n%n"); } called like Character.printStats(hero, enemy); with output Kirk Spock -------------------------- Health 78 42 Defense 99 5 Strength 93 97
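The "pass in an instance of Random for testability" advice is worth spelling out. A minimal sketch, in Python rather than Java for brevity (the cut-down `Character` here is hypothetical, not the suggested code above): once the generator is injected instead of reached for globally, a test can seed it and pin down exact stat rolls.

```python
import random

# Hypothetical cut-down Character: the RNG is injected, mirroring the
# "pass in an instance of Random" advice, instead of calling a global
# source like Math.random().
class Character:
    def __init__(self, name, rng):
        self.name = name
        self.health = rng.randint(1, 100)    # randint bounds are inclusive: 1..100
        self.defense = rng.randint(1, 100)
        self.strength = rng.randint(1, 100)

# Two characters built from equally-seeded generators roll identical
# stats, which is what makes combat logic deterministic under test.
a = Character("Kirk", random.Random(42))
b = Character("Kirk", random.Random(42))
assert (a.health, a.defense, a.strength) == (b.health, b.defense, b.strength)
```

The same idea carries straight over to Java: hand the constructor a `Random` seeded in the test, and assertions about battle outcomes become repeatable.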
{ "domain": "codereview.stackexchange", "id": 45425, "tags": "java, beginner, game, console" }
Hardness of $2$ edge-disjoint spanning trees decomposition
Question: The question is clear from the title. What is the complexity of the following decision problem: Input: An undirected graph $G(V, E)$ Output: $\mathrm{YES}$ if $G$ can be decomposed into two edge-disjoint spanning trees, $\mathrm{NO}$ otherwise Answer: Your problem is in $\mathrm{NP} \cap \mathrm{co}$-$\mathrm{NP}$. First, it is obviously in $\mathrm{NP}$: the verifier receives a certificate consisting of two edge-disjoint spanning trees that together use every edge in $\mathrm{E}$. Second, by the Tutte & Nash-Williams theorem [1], your problem is also in $\mathrm{co}$-$\mathrm{NP}$. The co-nondeterministic machine guesses a partition $\mathrm{P}$ of $\mathrm{V}$ and checks that $|\mathrm{E}_\mathrm{P}(\mathrm{G})| \geq 2(|\mathrm{P}| - 1)$, where $\mathrm{E}_\mathrm{P}(\mathrm{G})$ denotes the set of edges crossing between distinct blocks of $\mathrm{P}$. It must also check that $|\mathrm{E}| = 2(|\mathrm{V}| - 1)$, which ensures that the tree packing given by the theorem is in fact a decomposition into two trees. [1] Tomas Kaiser, A short proof of the tree-packing theorem, https://arxiv.org/pdf/0911.2809.pdf
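For intuition, the whole characterisation can be checked by brute force on tiny graphs. The sketch below is hypothetical (function and variable names are mine, and enumerating all partitions is Bell-number exponential, so it is only a teaching aid): it tests the partition bound together with the edge-count condition.

```python
from itertools import combinations

def partitions(items):
    """Yield every partition of `items` as a list of blocks (Bell-number many)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # put `first` into each existing block in turn...
        for i, block in enumerate(part):
            yield part[:i] + [block + [first]] + part[i + 1:]
        # ...or into a block of its own
        yield [[first]] + part

def has_two_disjoint_spanning_trees(vertices, edges):
    """Tutte / Nash-Williams condition, brute-forced, plus the edge-count
    condition |E| = 2(|V| - 1) that turns a tree packing into a decomposition."""
    if len(edges) != 2 * (len(vertices) - 1):
        return False
    for part in partitions(list(vertices)):
        blocks = [set(b) for b in part]
        # count the edges crossing between distinct blocks of the partition
        cross = sum(1 for (u, v) in edges
                    if next(i for i, b in enumerate(blocks) if u in b)
                       != next(i for i, b in enumerate(blocks) if v in b))
        if cross < 2 * (len(blocks) - 1):
            return False
    return True

# K4 has 6 = 2*(4-1) edges and decomposes into two edge-disjoint spanning trees.
k4_edges = list(combinations(range(4), 2))
assert has_two_disjoint_spanning_trees(range(4), k4_edges)
```

A 4-cycle, by contrast, fails immediately on the edge count (4 edges instead of 6), illustrating why the $|\mathrm{E}| = 2(|\mathrm{V}| - 1)$ check is needed on top of the partition bound.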
{ "domain": "cs.stackexchange", "id": 12071, "tags": "complexity-theory, np-complete, np-hard, np, complexity-classes" }
C++ - Tic Tac Toe AI powered by minmax algorithm
Question: This was my previous post on Tic Tac Toe in C++. I received very good feedback and tried to implement all of the improvements. I could not implement some due to the AI needed functions or complexity. I wanted to know where I could - Optimize the performance. Improve my Tic Tac Toe class. Improve my AI. Improve the UX. Follow C++ conventions (I still prefer snake_case) Here is my code: #include <iostream> #include <list> #include <algorithm> class TicTacToe; class AI; void play_game(TicTacToe game); class TicTacToe { private: char board[9] = {' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '}; char current_turn = 'X'; char winner = ' '; // ' ' refers to None int state = -1; // -1 refers to running std::list<int> move_stack; void swap_turn(); void update_state(); public: friend class AI; friend void play_game(TicTacToe game); void print_board() const; int play_move(int index); void undo_move(); std::list<int> get_possible_moves(); }; class AI { private: int max(TicTacToe board, char max_symbol, int depth); int min(TicTacToe board, char max_symbol, int depth); public: int minmax(TicTacToe board, char max_symbol); }; int main() { TicTacToe game; play_game(game); return 0; } void play_game(TicTacToe game) { bool playing = true; int move; int count = 0; AI my_ai; while (playing) { game.print_board(); if (count % 2 == 1) { move = my_ai.minmax(game, game.current_turn); } else { std::cout << "Enter your move (1-9)\n"; std::cin >> move; if (!std::cin) { std::cerr << "Input error\n"; return; } --move; } if (game.play_move(move) == 0) { std::cout << "Box already occupied\n"; continue; } if (game.state == 1) { game.print_board(); std::cout << game.winner << " wins the game!\n"; playing = false; } else if (game.state == 0) { game.print_board(); std::cout << "Draw!\n"; playing = false; }; ++count; }; }; void TicTacToe::print_board() const { for (int i = 0; i < 9; ++i) { if (i % 3) { std::cout << " | "; } std::cout << board[i]; if (i == 2 || i == 5) { std::cout << "\n"; 
std::cout << "---------" << "\n"; } } std::cout << "\n"; }; void TicTacToe::swap_turn() { current_turn = (current_turn == 'X') ? 'O' : 'X'; } int TicTacToe::play_move(int index) { if (index >= 0 && index < 9) { if (board[index] == ' ') { board[index] = current_turn; move_stack.push_back(index); update_state(); swap_turn(); return 1; } } return 0; }; void TicTacToe::undo_move() { int move = move_stack.back(); board[move] = ' '; move_stack.pop_back(); update_state(); swap_turn(); }; std::list<int> TicTacToe::get_possible_moves() { std::list<int> possible_moves; for (int i = 0; i < 9; ++i) { bool found = (std::find(move_stack.begin(), move_stack.end(), i) != move_stack.end()); if (!found) { possible_moves.push_back(i); } } return possible_moves; } void TicTacToe::update_state() { if ( // Horizontal checks (board[0] == current_turn && board[1] == current_turn && board[2] == current_turn) || (board[3] == current_turn && board[4] == current_turn && board[5] == current_turn) || (board[6] == current_turn && board[7] == current_turn && board[8] == current_turn) || // Vertical Checks (board[0] == current_turn && board[3] == current_turn && board[6] == current_turn) || (board[1] == current_turn && board[4] == current_turn && board[7] == current_turn) || (board[2] == current_turn && board[5] == current_turn && board[8] == current_turn) || // Diagonal Checks (board[0] == current_turn && board[4] == current_turn && board[8] == current_turn) || (board[2] == current_turn && board[4] == current_turn && board[6] == current_turn)) { state = 1; winner = current_turn; } else { bool draw = true; for (int i = 0; i < 9; ++i) { if (board[i] == ' ') { draw = false; break; } }; if (draw) { state = 0; } else { winner = ' '; state = -1; } } }; int AI::minmax(TicTacToe board, char max_symbol) { int best_score = -100; int best_move = -1; // -1 refers to none for (auto move : board.get_possible_moves()) { board.play_move(move); int score = AI::min(board, max_symbol, 0); if (score > best_score) { 
best_score = score; best_move = move; } board.undo_move(); } return best_move; } int AI::max(TicTacToe board, char max_symbol, int depth) { if (board.state == 1) { if (board.winner == max_symbol) { return 10 - depth; } else { return -10 + depth; } } else if (board.state == 0) { return 0; } int best_score = -100; for (auto move : board.get_possible_moves()) { board.play_move(move); int score = AI::min(board, max_symbol, depth + 1); if (score > best_score) { best_score = score; } board.undo_move(); } return best_score; } int AI::min(TicTacToe board, char max_symbol, int depth) { if (board.state == 1) { if (board.winner == max_symbol) { return 10 - depth; } else { return -10 + depth; } } else if (board.state == 0) { return 0; } int best_score = 100; for (auto move : board.get_possible_moves()) { board.play_move(move); int score = AI::max(board, max_symbol, depth + 1); if (score < best_score) { best_score = score; } board.undo_move(); } return best_score; } Answer: Improve my Tic Tac Toe class. Improve my AI. Improve the UX. Maybe a redesign of the classes: Not sure you have the greatest class breakdown. The TicTacToe class seems to be a class that combines the state of the game, the future state for the AI program, and the game rules all in one. The AI class seems to just embed the AI algorithm but it does not store state in there (which seems odd). May I suggest a restructuring of the classes: class Board { int currentMove; public: // Players can interrogate the board history // to work out the current state of the game // or keep track of that locally. int getCurrentMove() const; int getMove(int m) const; // Only the Game (see below) can add moves as it // is the only object that will get a non const reference // to the board; void addMove(int m); }; // An abstract player. // Human or AI (could be either).
class Player { public: virtual ~Player() {}; // Players get a board to examine during their move // then they can decide and return the best move to the game // who will apply the move after validation. virtual int getMove(Board const&) = 0; }; class HumanPlayer : public Player { public: // The human player may need to see a visualization // of the board (or just a list of moves) up to you. // // So the first step of the human player would be // to print the board then get input from the human. virtual int getMove(Board const&) override; }; class AIPlayer : public Player { public: virtual int getMove(Board const&) override; }; // The game object takes two players // Could be two humans or two AI or one of each. // Then sequentially gets each player to make a move // until it detects a winner. class Game { public: // Set up a game with the two players and a board. Game(Player& cross, Player& nott, Board& board); // Play a game asking each player for a move // until the game decides there is a winner. void playLoop(); }; int main() { std::unique_ptr<Player> p1 = getPlayer1(); std::unique_ptr<Player> p2 = getPlayer2(); Board board; Game game(*p1, *p2, board); game.playLoop(); } Review of Code I prefer camelCase and you say you prefer Snake_Case. To be honest it does not matter that much. But it would be preferable to maintain one style or the other rather than have a combination of each. class TicTacToe; void play_game(TicTacToe game); Why Not Tic_Tac_Toe? class TicTacToe This seems to be an uber class doing nearly everything: Board State: char board[9] = {' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '}; char current_turn = 'X'; Game State: char winner = ' '; // ' ' refers to None int state = -1; // -1 refers to running AI State? std::list<int> move_stack; Seems like play_game() should be a member of TicTacToe; then it would not need to be a friend. int main() { TicTacToe game; play_game(game); return 0; // Don't need this line. } You passed game by value.
So you are getting a copy of the game here: void play_game(TicTacToe game) You probably want to do: void play_game(TicTacToe& game) // ^ pass the game by reference. You always want an AI? What happens with two humans? Or what about pitting two AI against each other? That could be really useful if you have a neural network and want to train AI by playing them against each other. AI my_ai; If you have two AI playing, I'm not sure I want the board to always be updated or printed to the UI. Make that part of the human player's interface. game.print_board(); Not very fair to the AI to always go second? if (count % 2 == 1) { move = my_ai.minmax(game, game.current_turn); } else The human input code should be put into its own function. { std::cout << "Enter your move (1-9)\n"; std::cin >> move; if (!std::cin) { std::cerr << "Input error\n"; return; } --move; } This works but not the typical way to write it: std::cin >> move; if (!std::cin) { std::cerr << "Input error\n"; return; } I would write like this: if (std::cin >> move) { // Good input } else { std::cerr << "Input error\n"; return; } If the AI generates this move do you think it will change its mind if you ask again? if (game.play_move(move) == 0) { std::cout << "Box already occupied\n"; continue; } I think this check is for the human player only. Well, the output anyway. So put this check with the human input in the function handling user input. You may need an external check that exits the application if the game receives an invalid move as otherwise the AI is going to go into an infinite loop. You have two game states. playing and game.state. Keep this as one state so you simply exit the loop when the game is over. Then this output can be done outside the loop. if (game.state == 1) { game.print_board(); std::cout << game.winner << " wins the game!\n"; playing = false; } else if (game.state == 0) { game.print_board(); std::cout << "Draw!\n"; playing = false; }; This is not wrong.
int TicTacToe::play_move(int index) { if (index >= 0 && index < 9) { if (board[index] == ' ') { board[index] = current_turn; move_stack.push_back(index); update_state(); swap_turn(); return 1; } } return 0; }; But I would do the precondition checks first and exit at the beginning of the function to show the preconditions had been broken: int TicTacToe::play_move(int index) { if (index < 0 || index >= 9) { return 0; } if (board[index] != ' ') { return 0; } // Good Move. board[index] = current_turn; move_stack.push_back(index); update_state(); swap_turn(); return 1; }; Here we are looking at an efficiency issue. std::list<int> TicTacToe::get_possible_moves() { std::list<int> possible_moves; for (int i = 0; i < 9; ++i) { bool found = (std::find(move_stack.begin(), move_stack.end(), i) != move_stack.end()); if (!found) { possible_moves.push_back(i); } } return possible_moves; } This is being built every time around the loop. Why not build it once? Then take values off when you make a move. Then you can return a const reference to the list as the result of this function. This will prevent you from constantly rebuilding the list. Do you need to check all possible states after each move? You only need to check states that include the last move. void TicTacToe::update_state() { OK. This is hard. I have no easy way to tell if this is correct. int AI::minmax(TicTacToe board, char max_symbol) int AI::max(TicTacToe board, char max_symbol, int depth) int AI::min(TicTacToe board, char max_symbol, int depth)
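One cheap way to gain confidence in the minmax routines is a property test: tic-tac-toe under perfect play on both sides is always a draw. Below is a hypothetical Python re-implementation (not a drop-in for the C++ above; names are mine, but the 10-minus-depth scoring mirrors the original) that self-plays to completion and checks that invariant.

```python
# Win lines of the 3x3 board, indexed 0..8 like the board[] array above.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, to_move, me, depth):
    """Score the position for `me`, preferring faster wins (10 - depth)."""
    won = winner(board)
    if won is not None:
        return 10 - depth if won == me else depth - 10
    moves = [i for i, c in enumerate(board) if c == ' ']
    if not moves:
        return 0  # draw
    scores = []
    for m in moves:
        board[m] = to_move
        scores.append(minimax(board, 'O' if to_move == 'X' else 'X', me, depth + 1))
        board[m] = ' '  # undo the move, mirroring undo_move()
    return max(scores) if to_move == me else min(scores)

def best_move(board, player):
    def score(m):
        board[m] = player
        s = minimax(board, 'O' if player == 'X' else 'X', player, 1)
        board[m] = ' '
        return s
    moves = [i for i, c in enumerate(board) if c == ' ']
    return max(moves, key=score)

# Self-play: two perfect players must fill the board with no winner.
board = [' '] * 9
player = 'X'
while winner(board) is None and ' ' in board:
    board[best_move(board, player)] = player
    player = 'O' if player == 'X' else 'X'
assert winner(board) is None and ' ' not in board  # perfect play draws
```

The same assertion can be ported to the C++ code as a test: wire the AI to both sides, loop to completion, and require `state == 0`. Hand-crafted positions (e.g. "take the immediate win", "block the immediate loss") make good additional cases.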
{ "domain": "codereview.stackexchange", "id": 42204, "tags": "c++, algorithm, tic-tac-toe, ai" }
A package for the DOM
Question: Concerned with the length of this and the organization. Looking to organize it better if possible. If anyone knows the jQuery equivalent of these methods I would like to put them in the comments. I'm not interested in changing the style, i.e. - identifier names, whitespace, etc. /*************************************************************************************************** **DOM - additional coverage for the dom - consistent coding style ... - fewer function branches ***************************************************************************************************/ (function (win, doc) { "use strict"; // p(R)ivate propeties go here var Priv = {}, // (P)ublic properties go here Pub = function (selector) { return new Priv.Constructor(selector); }, // (D)ependencies go here $A; $A = (function manageGlobal() { // manually match to utility global Priv.g = '$A'; if (win[Priv.g] && win[Priv.g].pack && win[Priv.g].pack.utility) { // utility was found, add dom win[Priv.g].pack.dom = true; } else { throw new Error("dom requires utility module"); } return win[Priv.g]; }()); Pub.Debug = (function () { var publik = {}, hold = {}; // addTags and removeTags can add and remove groups of html tags for // visual feedback on how a site works publik.addTags = function (tag) { if (hold[tag]) { $A.someIndex(hold[tag], function (val) { if (tag === 'script' || tag === 'style') { document.head.appendChild(val); } else { document.body.appendChild(val); } }); } }; publik.removeTags = function (tag) { var styles = document.getElementsByTagName(tag), i; hold[tag] = []; for (i = styles.length; i; i--) { hold[tag][i] = styles[i]; $A.removeElement((styles[i])); } }; publik.removeStorage = function () { localStorage.clear(); sessionStorage.clear(); }; // extracts z-indices not set to auto publik.zIndex = function () { var obj_2d = {}, elements = document.body.getElementsByTagName("*"), z_index; // loop through elements and pull information from them $A.someIndex(elements, function 
(val, index) { z_index = win.getComputedStyle(val).getPropertyValue("z-index"); // ignore elements with the auto value if (z_index !== "auto") { obj_2d[index] = [val.id, val.tagName, val.className, z_index]; } }); return obj_2d; }; return publik; }()); Pub.el = function (selector_native) { if (selector_native) { var tokens = selector_native.match(/^(@|#|\.)([\x20-\x7E]+)$/), type, identifier; if (!tokens || !tokens[1] || !tokens[2]) { return new Error("mal-formed selector"); } type = tokens[1]; identifier = tokens[2]; if (type === '#') { return doc.getElementById(identifier); } if (type === '.' && doc.getElementsByClassName) { return doc.getElementsByClassName(identifier); } if (type === '@') { return doc.getElementsByName(identifier); } return new Error("mal-formed selector"); } }; Pub.removeElement = function (el) { if (el && el.parentNode) { return el.parentNode.removeChild(el); } return null; }; Pub.insertAfter = function (el, ref) { if (el && ref && ref.parentNode) { return ref.parentNode.insertBefore(el, ref.nextSibling); } return null; }; Pub.isElement = function (obj) { return !!(obj && obj.nodeType === 1); }; Pub.eachChild = function (ref_el, func, con) { if (ref_el) { var iter_el = ref_el.firstChild, result; do { result = func.call(con, iter_el, ref_el); if (result !== undefined) { return result; } iter_el = iter_el.nextSibling; } while (iter_el !== null); } return null; }; Pub.HTMLToElement = function (html) { var div = document.createElement('div'); div.innerHTML = html; return div.firstChild; }; Pub.getData = function (id) { var data, obj = {}, el = document.getElementById(id); if (el.dataset) { $A.someKey(el.dataset, function (val, key) { obj[key] = val; }); } else { data = $A.filter(el.attributes, function (at) { return (/^data-/).test(at.name); }); $A.someIndex(data, function (val, i) { obj[data[i].name.slice(5)] = val.value; }); } return obj; }; Priv.hasClass = function (el, name) { return new RegExp('(\\s|^)' + name, 'g').test(el.className); }; 
Priv.toggleNS = function (el, ns, prop) { Pub.someString(el.className, function (val) { if (val.match(/toggle_/)) { var names = val.split(/_/); if (names[1] === ns && names[2] !== prop) { Pub.removeClass(el, val); } } }); }; Pub.addClass = function (el, name) { if (!Priv.hasClass(el, name)) { el.className += (el.className ? ' ' : '') + name; } var temp = name.match(/toggle_(\w+)_(\w+)/); if (temp) { Priv.toggleNS(el, temp[1], temp[2]); return; } }; Pub.removeClass = function (el, name) { el.className = name ? el.className.replace(new RegExp('(\\s|^)' + name, 'g'), '') : ''; }; Priv.Constructor = function (selector) { var type, type1, type2, temp, obj_type; // $A object detected if (selector instanceof Priv.Constructor) { return selector; } // window object detected if (selector === win) { this[0] = selector; return this; } // document object detected if (selector === doc) { this[0] = selector; return this; } // element object detected if (Pub.isElement(selector)) { this[0] = selector; return this; } // only strings should be left if (selector) { obj_type = $A.getType(selector); } if (obj_type !== 'String') { return this; } // selector is a symbol follwed by asci type = selector.match(/^(@|#|\.)([\x20-\x7E]+)$/); if (!type) { return this; } type1 = type[1]; type2 = type[2]; // id if (type1 === '#') { temp = doc.getElementById(type2); if (!temp) { return this; } this[0] = temp; return this; } // class if (type1 === '.' 
&& doc.getElementsByClassName) { temp = doc.getElementsByClassName(type2); if (!temp) { return this; } $A.someIndex(temp, function (val, index) { this[index] = val; }, this); return this; } // name if (type1 === '@') { temp = doc.getElementsByName(type2); if (!temp) { return this; } $A.someIndex(temp, function (val, index) { this[index] = val; }, this); return this; } }; // jQuery like prototype assignment Priv.proto = Priv.Constructor.prototype; Priv.proto.fade = function (direction, max_time, callback) { var privates = {}, self = this; // initialize privates.elapsed = 0; privates.GRANULARITY = 10; if (privates.timer_id) { win.clearInterval(privates.timer_id); } (function next() { privates.elapsed += privates.GRANULARITY; if (!privates.timer_id) { privates.timer_id = win.setInterval(next, privates.GRANULARITY); } if (direction === 'up') { $A.someKey(self, function (val) { val.style.opacity = privates.elapsed / max_time; }); } else if (direction === 'down') { $A.someKey(self, function (val) { val.style.opacity = (max_time - privates.elapsed) / max_time; }); } if (privates.elapsed >= max_time) { if (callback) { callback(); } win.clearInterval(privates.timer_id); } }()); }; Pub.peakOut = function (elem, offset, delay, callback) { var privates = {}; // constants initialization privates.RADIX = 10; privates.GRAN_TIME = 15; privates.GRAN_DIST = 1; privates.UNITS = 'px'; // privates initialization privates.el = elem; privates.start = parseInt(Pub.getComputedStyle(privates.el).getPropertyValue("top"), privates.RADIX); privates.status = 'down'; privates.end = privates.start + offset; privates.current = privates.start; privates.id = null; (function next() { if ((privates.status === 'down') && (privates.current < privates.end)) { privates.current += privates.GRAN_DIST; privates.el.style.top = privates.current + privates.UNITS; if (!privates.id) { privates.id = Pub.setInterval(next, privates.GRAN_TIME); } } else if ((privates.status === 'down') && (privates.current === 
privates.end)) { privates.status = 'up'; Priv.resetInterval(privates); Pub.setTimeout(next, delay); } else if ((privates.status === 'up') && (privates.current > privates.start)) { privates.current -= privates.GRAN_DIST; privates.el.style.top = privates.current + privates.UNITS; if (!privates.id) { privates.id = Pub.setInterval(next, privates.GRAN_TIME); } } else if ((privates.status === 'up') && (privates.current === privates.start)) { Priv.resetInterval(privates); callback(); } }()); }; Priv.resetInterval = function (privates) { Pub.clearInterval(privates.id); privates.id = 0; }; Priv.expandFont = function (direction, max_time) { var self = this, el_prim = self[0], privates = {}; if (el_prim.timer_id) { return; } el_prim.style.fontSize = Pub.getComputedStyle(el_prim, null).getPropertyValue("font-size"); privates.final_size = parseInt(el_prim.style.fontSize, privates.RADIX); privates.GRANULARITY = 10; privates.time_elapsed = 0; (function next() { $A.someKey(self, function (val) { if (direction === 'up') { val.style.fontSize = ((privates.time_elapsed / max_time) * privates.final_size) + 'px'; } else if (direction === 'down') { val.style.fontSize = ((max_time - privates.time_elapsed) / max_time) + 'px'; } }); privates.time_elapsed += privates.GRANULARITY; // completed, do not call next if (el_prim.timer_id_done) { Pub.clearTimeout(el_prim.timer_id); el_prim.timer_id = undefined; el_prim.timer_id_done = undefined; // intermediate call to next } else if (privates.time_elapsed < max_time) { el_prim.timer_id = Pub.setTimeout(next, privates.GRANULARITY); // normalizing call to guarante (elapsed === max) } else if (privates.time_elapsed >= max_time) { el_prim.timer_id = Pub.setTimeout(next, privates.GRANULARITY); el_prim.timer_id_done = true; privates.time_elapsed = max_time; } }()); }; Priv.proto.expandFont = function (direction, max_time, big_size) { return Priv.expandFont.call(this, direction, max_time, big_size); }; Pub.expandFont = (function () { return function 
(element, direction, max_time, big_size) { var temp = []; temp[0] = element; Priv.expandFont.call(temp, direction, max_time, big_size); }; }()); /**************************************************************************************************/ Priv.functionNull = function () { return undefined; }; // createEvent Priv.createEvent = function () { if (doc.createEvent) { return function (type) { var event = doc.createEvent("HTMLEvents"); event.initEvent(type, true, false); $A.someKey(this, function (val) { val.dispatchEvent(event); }); }; } if (doc.createEventObject) { return function (type) { var event = doc.createEventObject(); event.eventType = type; $A.someKey(this, function (val) { val.fireEvent('on' + type, event); }); }; } return Priv.functionNull; }; Priv.proto.createEvent = function (type) { return Priv.createEvent.call(this, type); }; Pub.createEvent = (function () { return function (element, type) { var temp = []; temp[0] = element; Priv.createEvent.call(temp, type); }; }()); // addEvent Priv.addEvent = (function () { if (win.addEventListener) { return function (type, callback) { $A.someKey(this, function (val) { val.addEventListener(type, callback); }); }; } if (win.attachEvent) { return function (type, callback) { $A.someKey(this, function (val) { val.attachEvent('on' + type, callback); }); }; } return Priv.functionNull; }()); Priv.proto.addEvent = function (type, callback) { return Priv.addEvent.call(this, type, callback); }; Pub.addEvent = (function () { return function (element, type, callback) { var temp = []; temp[0] = element; Priv.addEvent.call(temp, type, callback); }; }()); Priv.proto.removeEvent = (function () { if (win.removeEventListener) { return function (type, callback) { $A.someKey(this, function (val) { val.removeEventListener(type, callback); }); }; } if (win.detachEvent) { return function (type, callback) { $A.someKey(this, function (val) { val.detachEvent('on' + type, callback); }); }; } return Priv.functionNull; }()); 
Priv.proto.removeEvent = function (type, callback) { return Priv.removeEvent.call(this, type, callback); }; Pub.removeEvent = (function () { return function (element, type, callback) { var temp = []; temp[0] = element; Priv.removeEvent.call(temp, type, callback); }; }()); Pub.ajax = function (config_ajax) { var xhr; // get if (config_ajax.type === 'get') { xhr = new win.XMLHttpRequest(); xhr.open('GET', config_ajax.url, true); xhr.onload = function () { if (this.status === 200) { config_ajax.callback(xhr.responseText); } }; xhr.send(null); } // post if (config_ajax.type === 'post') { xhr = new win.XMLHttpRequest(); xhr.open("POST", config_ajax.url, true); xhr.setRequestHeader("Content-type", "application/x-www-form-urlencoded"); xhr.onload = function () { if (this.status === 200) { config_ajax.callback(xhr.responseText); } }; xhr.send(config_ajax.data); } // post for form_data if (config_ajax.type === 'multi') { xhr = new win.XMLHttpRequest(); xhr.open("POST", config_ajax.url, true); xhr.onload = function () { if (this.status === 200) { config_ajax.callback(xhr.responseText); } }; xhr.send(config_ajax.data); } }; Priv.Queue = (function () { var queue = [], publik = {}; function getIndexFromToken(callback) { var hold; $A.someIndex(queue, function (val, index) { if (val.callback === callback) { hold = index; return index; } }); return hold; } function getBlockedProperty(item) { var blocked; if (item) { blocked = item.blocked; } else { blocked = false; } return blocked; } publik.addItem = function (callback) { var temp = {}; temp.blocked = false; temp.callback = callback; temp.response_text = null; queue.push(temp); }; publik.itemCompleted = function (response_text, callback) { var index, item, blocked; index = getIndexFromToken(callback); if (index !== 0) { queue[index].blocked = true; queue[index].response_text = response_text; } else { item = queue.shift(); item.callback(response_text); blocked = getBlockedProperty(queue[0]); while (blocked) { item = queue.shift(); 
item.callback(item.response_text); blocked = getBlockedProperty(queue[0]); } } }; return publik; }()); Pub.serialAjax = function (source, callback) { Priv.Queue.addItem(callback); Pub.ajax({ type: 'get', url: source, callback: function (response_text) { Priv.Queue.itemCompleted(response_text, callback); } }); }; Pub.setTimeout = function () { return win.setTimeout.apply(win, arguments); }; Pub.clearTimeout = function () { return win.clearTimeout.apply(win, arguments); }; Pub.setInterval = function () { return win.setInterval.apply(win, arguments); }; Pub.clearInterval = function () { return win.clearInterval.apply(win, arguments); }; Pub.getComputedStyle = function () { return win.getComputedStyle.apply(win, arguments); }; Pub.createDocumentFragment = function () { return doc.createDocumentFragment.apply(doc, arguments); }; Pub.createElement = function () { return doc.createElement.apply(doc, arguments); }; Pub.FormData = win.FormData; Pub.FileReader = win.FileReader; Pub.localStorage = win.localStorage; Pub.sessionStorage = win.sessionStorage; Pub.log = function (obj) { var logger, type, temp, completed; // wrap win.console to protect from IE // bind to satisfy Safari if (win.console) { logger = win.console.log.bind(win.console); } else { return; } // validation type = $A.getType(obj); if (!type) { logger("Object did not stringify"); return; } // host objects if (type === 'Event') { logger('LOG|host|event>'); logger(obj); return; } // library objects if (win.jQuery && (obj instanceof win.jQuery)) { logger('LOG|library|jquery>'); logger(obj); return; } // language objects $A.someIndex(['Arguments', 'Array', 'Object'], function (val) { if (type === val) { try { temp = JSON.stringify(obj, null, 1); } catch (e) { temp = false; } if (temp) { logger('LOG|language|' + type + '>'); logger(temp); } else { logger('LOG|language|' + type + '>'); logger(obj); } completed = true; } }); if (completed) { return; } $A.someIndex(['Boolean', 'Date', 'Error', 'Function', 'JSON', 
'Math', 'Number', 'Null', 'RegExp', 'String', 'Undefined'], function (val) { if (type === val) { logger('LOG|language|' + type + '>'); logger(obj); completed = true; } }); if (completed) { return; } // remaining logger('LOG|not_implemented|>'); logger(obj); return; }; win[Priv.g] = $A.extendSafe(Pub, $A); }(window, window.document)); Answer: From a once over ASCII header, using "use strict" in a surrounding function, good stuff addTags, would have been nice if the caller could set the parent to which the tags should be added ( a la jQuery ) addTags, it is unclear from naming what hold is, what is supposed to do, there are no explaining comments either addTags, it is clear after reading removeTags, maybe you should put that function there addTags, removeTags, the code does not put the tags back from where they were removed, that is a very limited feature removeTags -> styles seems an unfortunate name, the parameter could have been called tagName removeStorage -> Seems to be the wrong library, you have different a library for storage already zIndex -> why not just return the elements in the array ? It would take less memory, while returning more info Pub.el needs a comment with sample selector_native values Pub.removeElement etc., silent failures, could be frustrating isElement -> !! on a Boolean expression seems pointless, update : it is not pointless, it will convert a falsey value to the boolean false. eachChild -> some cryptic names, not following lowerCamelCase, the name is lying since it is not guaranteed to iterate over each child HTMLToElement is smart, but could use a comment as to how it works someKey -> sigh.. , really you should use js 1.6, or use the shims Priv.Constructor -> too much white space by far win -> you are making it too hard, just use window doc -> seriously.. 
The regex /^(@|#|\.)([\x20-\x7E]+)$/ is used twice; you should give it a good name in a constant. You have code in Priv.Constructor that is very similar to what is in Pub.el; consider merging some of this code. From here on: I have still only reviewed half of the code. Your helper functions like expandFont each really need a few lines of comment so that the reader can easily determine what they do (yes, it increases the font, but what do max_time, big_size and direction do??)
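To illustrate the duplicated-regex point above, here is a minimal sketch of hoisting the selector pattern into one named constant with a single parsing helper. The names SELECTOR_RE and parseSelector are mine, not from the library under review:

```javascript
// One named constant instead of two inline copies of the same regex.
var SELECTOR_RE = /^(@|#|\.)([\x20-\x7E]+)$/;

// Parse a selector string like '#main' or '.item' into its parts,
// or return null when it does not match the expected shape.
function parseSelector(selector) {
    var match = SELECTOR_RE.exec(selector);
    if (!match) {
        return null;
    }
    // match[1] is the sigil ('@', '#' or '.'), match[2] the bare name
    return { sigil: match[1], name: match[2] };
}

console.log(parseSelector('#main')); // { sigil: '#', name: 'main' }
console.log(parseSelector('.item')); // { sigil: '.', name: 'item' }
console.log(parseSelector('main'));  // null (no sigil)
```

Both Pub.el and Priv.Constructor could then call parseSelector and branch on the returned sigil, which also removes the duplicated id/class/name dispatch logic.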
{ "domain": "codereview.stackexchange", "id": 5807, "tags": "javascript, jquery" }
Neato XV-11 LIDAR hector_slam mapping without odometry at walking speeds?
Question: I was inspired by this video to get an XV-11 LIDAR for indoor mapping via hector_slam: https://www.youtube.com/watch?v=jSlkjY-78SQ&feature=youtu.be (Note that the walking speed is quite low--the video had to be sped up 2.5X.) I'm finding in practice that the results shown at the end of the video may be "at best"--my maps are messy and jump around/overlap on themselves. This is probably due to a problem with the particular XV-11 unit I received (that will be returned/exchanged because it sometimes emits 0.0mm readings for entire revolutions), but while researching the map's jumpiness/overlapping, I'm finding that people either gave up on the XV-11 and went to a higher-end LIDAR with a higher scan rate or added odometry, both of which increase the project cost dramatically. So, I am wondering, before I return/exchange this XV-11, has anyone gotten the XV-11 to do decent 2D mapping without odometry at walking speeds (1.5m/s)? Is it just not possible with a 5Hz (300rpm) LIDAR? I'm not asking for perfect maps, just usable ones (no walls where there aren't really walls, etc.) I am basically tasked to do this: https://www.youtube.com/watch?v=F8pdObV_df4 (Note that the user is walking quite rapidly--probably faster than 1.5m/s. We'd love to be able to go this quickly.) If you did get it working, how did you do it? Thanks in advance! Originally posted by dchang0 on ROS Answers with karma: 187 on 2016-01-02 Post score: 2 Answer: I don't have access to a XV-11, but I'd be interested in looking at a bag file if you have one. I know of a few people who successfully used hector_slam with the XV-11, but I guess nobody used a very high walking speed of higher than 1.5 m/s with it so far. At 1.5 m/s with 5 Hz, the robot travels 30cm between scans. A further complication is that scan acquisition is not discrete but continuous, so the scans are warped by the platform motion (see high fidelity projection). 
At the slow scan rate and high movement speed this effect is likely much more pronounced than with other LIDARs and creates another chicken-and-egg problem (you need to know the platform motion to undistort the scan, and you need the undistorted scan for scan matching and estimating the platform motion). Originally posted by Stefan Kohlbrecher with karma: 24361 on 2016-01-03 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by dchang0 on 2016-01-03: Thanks for the response, Stefan! I do not have a bag file yet, since we suspect our XV-11 is faulty. We will probably not be getting a bag file at all, since we will likely upgrade to a Hokuyo UTM-30LX laser scanner based in large part on your answers and past writings about your project. Comment by dchang0 on 2016-01-03: May I ask one more question? Was the IMU required for the handheld UTM-30LX to do a scan without the map breaking, or could we reliably get unbroken maps without one? I am assuming that you were merely practicing for the IMU to be included in the ground robot. Comment by Stefan Kohlbrecher on 2016-01-04: Depends on how much roll and pitch motion the sensor has and what the environment looks like. If you keep the sensor level, an IMU is not required. Comment by dchang0 on 2016-01-04: Thanks, Stefan! In our application, the floors will be very level and the only roll and pitch will be from the human tester's gait and hand motion. Later, when on a ground robot, the roll and pitch may be less than with the human tester. It sounds like an IMU is not necessary in either case. Comment by dchang0 on 2016-01-04: BTW, Stefan--there are several people asking about how to integrate IMU data into hector_slam on the YouTube videos in the Team Hector channel. Perhaps if you have time, you could answer them... I too am curious about the code required. Thanks!
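A back-of-envelope check of the numbers in the answer above (plain arithmetic, not ROS code; the 40 Hz rate for the UTM-30LX is its nominal spec and is only used here for comparison):

```python
def displacement_per_scan(speed_m_s, scan_rate_hz):
    """Distance the sensor travels during one full scan revolution (metres).

    This is both the spacing between consecutive scans and, roughly, how far
    apart the first and last beam of a single warped scan are taken.
    """
    return speed_m_s / scan_rate_hz

print(displacement_per_scan(1.5, 5.0))   # XV-11 at walking speed: 0.3 m
print(displacement_per_scan(1.5, 40.0))  # Hokuyo UTM-30LX: 0.0375 m
```

The roughly 8x smaller per-scan displacement is one reason the faster scanner tolerates walking speeds that break the XV-11's maps.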
{ "domain": "robotics.stackexchange", "id": 23335, "tags": "slam, navigation, mapping, xv-11, hector-slam" }
spawned robot in Gazebo is moving by itself (Moveit!)
Question: Hello everyone, This is my first question, so please have some understanding if I haven't posted it like it was meant to be posted. I am running Ubuntu 18.04.2 with ROS Melodic and Gazebo 9. I have also installed the MoveIt! package as explained in the MoveIt tutorial; after that I made a catkin package and in the src folder I git cloned the fanuc MoveIt! package (Kinetic distro) and all dependent packages for a Fanuc in this case. I have sourced and built the workspace (using catkin build). I chose the fanuc_m16ib20 robot (it was a random choice) and made the MoveIt package following the instructions for the moveit_setup_assistant. At the end of the package generation in setup_assistant I was offered the URDF file for the generated configuration; I copied it and saved it under the mypackage/urdf path. After this I added the Gazebo plugin to the URDF file <gazebo> <plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so"> <robotNamespace>fanuc_m16ib20</robotNamespace> </plugin> </gazebo> Also, I wanted to fix (anchor) the robot to the world, so I added this to the beginning of the URDF file: <link name="world"></link> <joint name="fixed" type="fixed"> <parent link="world"/> <child link="base_link"/> </joint> The next thing I did was add the namespace fanuc_m16ib20 to the controllers, controller spawner and controller manager. I modified the demo_gazebo.launch file to do this and the controller.yaml files accordingly. After these changes all that was left to do was to run the demo_gazebo.launch file, but after running it Gazebo and RViz are started and the robot in Gazebo is moving by itself. Since it is connected to RViz I can see the same behaviour in RViz, except that in RViz the robot joints stay connected. When I run rostopic list or rosservice list I can see the topics and controller manager under the same namespace.
topics: > /fanuc_m16ib20/fanuc_arm_controller/command /fanuc_m16ib20/fanuc_arm_controller/follow_joint_trajectory/cancel /fanuc_m16ib20/fanuc_arm_controller/follow_joint_trajectory/feedback /fanuc_m16ib20/fanuc_arm_controller/follow_joint_trajectory/goal /fanuc_m16ib20/fanuc_arm_controller/follow_joint_trajectory/result /fanuc_m16ib20/fanuc_arm_controller/follow_joint_trajectory/status /fanuc_m16ib20/fanuc_arm_controller/state /fanuc_m16ib20/joint_states services: /fanuc_m16ib20/controller_manager/list_controller_types /fanuc_m16ib20/controller_manager/list_controllers /fanuc_m16ib20/controller_manager/load_controller /fanuc_m16ib20/controller_manager/reload_controller_libraries /fanuc_m16ib20/controller_manager/switch_controller /fanuc_m16ib20/controller_manager/unload_controller I have these errors and a few warnings: [ERROR] [1573735342.035351047]: name, joints, action_ns, and type must be specifed for each controller [ERROR] [1573735343.777676683, 0.001000000]: No p gain specified for pid. Namespace: /fanuc_m16ib20/gazebo_ros_control/pid_gains/joint_1 [ERROR] [1573735343.779464187, 0.001000000]: No p gain specified for pid. Namespace: /fanuc_m16ib20/gazebo_ros_control/pid_gains/joint_2 [ERROR] [1573735343.781116227, 0.001000000]: No p gain specified for pid. Namespace: /fanuc_m16ib20/gazebo_ros_control/pid_gains/joint_3 [ERROR] [1573735343.783848394, 0.001000000]: No p gain specified for pid. Namespace: /fanuc_m16ib20/gazebo_ros_control/pid_gains/joint_4 [ERROR] [1573735343.785975603, 0.001000000]: No p gain specified for pid. Namespace: /fanuc_m16ib20/gazebo_ros_control/pid_gains/joint_5 [ERROR] [1573735343.797053511, 0.001000000]: No p gain specified for pid. Namespace: /fanuc_m16ib20/gazebo_ros_control/pid_gains/joint_6 After solving the PID errors, building and launching again, I have one error: [ERROR] [1574279938.204750673] [/move_group] [ros.moveit_simple_controller_manager.SimpleControllerManager]: No controller_list specified. 
[ INFO] [1574279938.204892173] [/move_group] [ros.moveit_simple_controller_manager.SimpleControllerManager]: Returned 0 controllers in list [ INFO] [1574279938.231018920] [/move_group] [ros.moveit_ros_planning.trajectory_execution_manager]: Trajectory execution is managing controllers I found this very strange because, from the list of topics, I have the action server /fanuc_m16ib20/fanuc_arm_controller/ up and running and I have loaded the controller_list parameter on the parameter server. After rosparam get /controller_list I get this: - action_ns: follow_joint_trajectory default: true joints: [joint_1, joint_2, joint_3, joint_4, joint_5, joint_6] name: fanuc_arm_controller type: FollowJointTrajectory It looks like the robot offers a FollowJointTrajectory action service, as can be seen in the topic list, but the MoveIt simple controller manager still complains about the controller_list for some reason. Does anyone know what could have caused this? Thanks Originally posted by marko1990 on ROS Answers with karma: 61 on 2019-11-14 Post score: 0 Answer: Ok, after hours of trying to understand what went wrong, I have finally figured it out. The problem was not in the /controller_list parameter; it was that the parameter move_group/controller_list was missing. Since I am using namespaces, I have controller_list.yaml as a separate .yaml file and I loaded it in ros_controllers.launch right after ros_controllers.yaml. I thought that was all, but the controllers also have to be loaded under MoveIt; it is not enough just to load them under ros_control as I explained before. By this I mean I should have also loaded the controller_list.yaml file under the move_group namespace in move_group.launch.
This means (I am assuming you made yourrobot_pkg with the moveit_setup_assistant and are using a URDF file created by it, and after that you added your namespaces if necessary): in yourrobot_moveit_pkg start demo_gazebo.launch, which should have this structure (mostly generated by MoveIt): different arguments declarations . . gazebo.launch -> robot_description(urdf) to param server -> spawns robot_description param in gazebo -> ros_controllers.launch //ros_controllers and the controller_list to the param server node joint_state_publisher node robot_state_publisher move_group.launch -> different arguments declarations . . -> trajectory_execution.launch.xml // load launch file under the `move_group` namespace -> different arguments declarations . . -> $(arg moveit_controller_manager)_moveit_controller_manager.launch.xml //loads moveit_simple_controller_manager on the parameter server // load controllers on the parameter server ros_controllers.yaml controller_list.yaml Basically you need to think about your namespaces. I think the first loading of controller_list.yaml in ros_control could be taken out, because what matters is that the controller_list parameter is loaded under the move_group namespace. Originally posted by marko1990 with karma: 61 on 2019-11-21 This answer was ACCEPTED on the original site Post score: 0
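The fix described above could look roughly like the following launch fragment. This is a hedged sketch: the file names and the fanuc_m16ib20 namespace follow the question, the yourrobot_moveit_pkg package name is a placeholder, and the exact paths will differ in your setup:

```xml
<launch>
  <group ns="fanuc_m16ib20">
    <!-- ros_control side: trajectory controller configuration -->
    <rosparam file="$(find yourrobot_moveit_pkg)/config/ros_controllers.yaml" command="load"/>
    <!-- MoveIt side: controller_list must end up under .../move_group -->
    <group ns="move_group">
      <rosparam file="$(find yourrobot_moveit_pkg)/config/controller_list.yaml" command="load"/>
    </group>
  </group>
</launch>
```

The key point is the inner group: MoveIt's SimpleControllerManager looks for controller_list relative to the move_group node's namespace, not at the top of your robot's namespace.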
{ "domain": "robotics.stackexchange", "id": 34014, "tags": "ros, gazebo, rviz, moveit, ros-melodic" }
Force on a current carrying conductor and Hall effect
Question: If we consider a thin wire carrying a current inside a magnetic field, we observe a force $\mathbf{dF}=i\mathbf{ds} \land \mathbf{B}$ on each element $\mathbf{ds}$ of the wire. This force is caused by the electrons, on which the Lorentz force acts, bumping into the crystal structure of the metal. However, after a while (assuming $\mathbf{B}$ constant and uniform and the wire rigid) the Lorentz force generates a separation of charges between two opposite sides of the wire (Hall effect), and the force on each electron becomes zero, so they should stop bumping (all together and all in the same direction) into the crystal structure of the metal. Then how is it possible that we still observe a force on the conductor? Where am I wrong? Answer: Then how is it possible that we still observe a force on the conductor? Where am I wrong? The net force due to the Hall electric field acting on the moving electrons in a small volume element of metal indeed does cancel the net force due to the external magnetic field on those same moving electrons. But there are other particles in the wire, and there are forces acting on them that are not cancelled in this way. The Hall electric field is electrostatic, so we can assume it is produced by a stationary charge distribution. Since there cannot be non-zero charge density inside the metal, it is the surface of the wire where there has to be non-zero charge density. This means there are charged particles present on the surface of the wire. These particles are distributed on the surface in such a way that their net electric force on the conduction electrons inside the metal cancels the external magnetic force on them. But if there is an electric force from a surface particle acting on a particle inside the wire, there must also be the corresponding reaction: an electric force due to the inside particle acting on the particle on the surface. So when we consider some element of wire, there is a net electric force acting on the charged particles on its surface.
Since the surface charges are bound to the wire, this force is experienced also by the rest of the wire (non-conduction part) and since all other forces acting on the particles in the wire are cancelled, this is equal to total force acting on the wire. Nitpick: The total force acting on the wire in magnetic field is often called magnetic force or incorrectly Lorentz force, because it is, in magnitude and direction, the same as the actual external magnetic force acting on the moving electrons. But now you know the total force that moves the wire is actually result of both external magnetic forces and internal electric forces between the inside and the surface particles. This total force is better called ponderomotive force (force acting on a piece of matter with mass) than the Lorentz force (this is best thought of as magnetic force acting on moving charged particles) or electromotive force (force acting on the conduction electrons that keeps them moving along the wire).
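The cancellation argument above can be summarized in two lines (a sketch; signs assume electron charge $-e$ and drift velocity $\mathbf{v}$). In the steady state each conduction electron is force-free:

$$-e\left(\mathbf{E}_H + \mathbf{v}\times\mathbf{B}\right)=\mathbf{0}
\quad\Longrightarrow\quad
\mathbf{E}_H=-\,\mathbf{v}\times\mathbf{B}.$$

By Newton's third law, the surface charges producing $\mathbf{E}_H$ feel the reaction $-e\,\mathbf{v}\times\mathbf{B}$ from each conduction electron; summing over the electrons in a wire element and using $i\,\mathbf{ds}=\sum(-e)\,\mathbf{v}$ reproduces the familiar

$$d\mathbf{F}=i\,\mathbf{ds}\times\mathbf{B},$$

now acting on the body of the wire through its surface charges rather than directly on the drifting electrons.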
{ "domain": "physics.stackexchange", "id": 53006, "tags": "electromagnetism, forces, electric-current" }
Find the word in a string that has the most repeated characters
Question: The problem: I want to find the word in a string that has the most repeats of a single letter, each letter is independent. Does not need to be consecutive. Currently I am jumping each character and checking, which doesn't seem very efficient. Do you have any ideas that might improve the number of processes that are required to find which word has the most repeats? function LetterCountI(str) { var repeatCountList = []; var wordList = str.split(/\W/); //regular expression \W for non word characters split at. var wordCount = wordList.length; // count the number of words for (var i=0; i< wordCount ; i++) { var mostRepeat = 1; // set the default number of repeats to 1 var curWord = wordList[i]; // set the current word to the ith word from the list for(var ii=0; ii<curWord.length ; ii++) { var repeatCount = 1; // set default repeat count to 1 var curChar = curWord[ii]; //set the current character to the iith for (var iii=0; iii<curWord.length; iii++) //judge if it is the same as the iiith {var against = curWord[iii]; if (iii!=ii) // if it is Not the same referenced postion {if(curChar==against) // see if the string values match {repeatCount=repeatCount+1}}} // increase counter if match against if (repeatCount>=mostRepeat) // record repeat for the highest only {mostRepeat=repeatCount} } repeatCountList = repeatCountList.concat(mostRepeat) // take the highest from each word } mostRepeat = 0; // set the repeats value to - for (j=0;j<wordCount; j++) // go through the repeats count list { if(repeatCountList[j]>mostRepeat) // if it has more repeats than the highest So FAR { mostRepeat = repeatCountList[j]; // record if higher than last var x = j;}} // record the index of the most repeat that is the new high var ans = []; if (mostRepeat == 1) // check if there are no repeats at all. 
{ans=-1} // question want to return -1 if there are no repeats else {ans=wordList[x]} // display the word from the list with the most repeat characters // code goes here return ans; } Any help is appreciated. Answer: I liked that you split the string into words using a regular expression. That helps a lot. Your code formatting (indentation and braces) is haphazard. It shouldn't be that hard to follow the standard conventions for code formatting, and it will make things easier for yourself if you do. I think your function tries to do too much. It would help to break down the problem. I've extracted part of the problem into a self-contained task: Given a word, how many times does the most frequent character appear? For that, you can write a function, and test it (e.g. mostFrequentCount('hello') should return 2). /** * Given an array (or a string), returns the number of times the most frequent * element (or character) appears. */ function mostFrequentCount(elements) { var bins = {}; for (var i = 0; i < elements.length; i++) { bins[elements[i]] = (bins[elements[i]] || 0) + 1; } var max = 0; for (var c in bins) { max = Math.max(max, bins[c]); } return max; } That should simplify the main code. Rather than commenting each line (in effect writing everything once for the computer and once for other programmers), I've tried to make the code read like English by using very human-friendly variable names. function wordsWithMaxRepeatedCharacters(string) { var maxRepeatedCharacters = 0, wordsWithMaxRepeatedCharacters = []; var words = string.split(/\W/); for (var w = 0; w < words.length; w++) { var word = words[w]; var numRepeatedCharacters = mostFrequentCount(word); if (maxRepeatedCharacters < numRepeatedCharacters) { maxRepeatedCharacters = numRepeatedCharacters; wordsWithMaxRepeatedCharacters = [word]; } else if (maxRepeatedCharacters == numRepeatedCharacters) { wordsWithMaxRepeatedCharacters.push(word); } } return wordsWithMaxRepeatedCharacters; }
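The unit test suggested in the answer, written out as runnable spot checks (the function is copied verbatim from above):

```javascript
/**
 * Given an array (or a string), returns the number of times the most frequent
 * element (or character) appears.
 */
function mostFrequentCount(elements) {
    var bins = {};
    for (var i = 0; i < elements.length; i++) {
        bins[elements[i]] = (bins[elements[i]] || 0) + 1;
    }
    var max = 0;
    for (var c in bins) {
        max = Math.max(max, bins[c]);
    }
    return max;
}

console.log(mostFrequentCount('hello'));       // 2 ('l' appears twice)
console.log(mostFrequentCount('abc'));         // 1 (no repeats)
console.log(mostFrequentCount('mississippi')); // 4 ('i' and 's' each appear four times)
```

Having this helper isolated is what makes the main loop in wordsWithMaxRepeatedCharacters a simple single pass over the words.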
{ "domain": "codereview.stackexchange", "id": 18321, "tags": "javascript, optimization, beginner, strings" }
How can I apply Noether's theorem to translation symmetry?
Question: I want to calculate the Noether current for the translation symmetry of the Polyakov action in conformal gauge. The action reads: \begin{align} S = \int d^2\sigma(\eta^{\alpha\beta}\partial_\alpha x^\mu\partial_\beta x^\nu\eta_{\mu\nu}) \end{align} which is of course invariant under the translation $x^\mu\rightarrow x^\mu+c^\mu$. How can I calculate the conserved current from this transformation, since I don't know how to construct an infinitesimal transformation from it? Answer: Use the Noether trick (see e.g. page 19 of David Tong's QFT notes). Make $c^\mu$ depend on the worldsheet coordinates, $c^\mu(\sigma)$, and take the variation of the action with respect to that transformation. The change in the action will be $$\delta S = \int\mathrm{d}^2\sigma\ (\partial_\alpha c^\mu) J_\mu^\alpha.$$ $J^\alpha_\mu$ will be your current, since you can show that $\partial_\alpha J_\mu^\alpha = 0$ by an integration by parts.
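Carrying the trick through for the action as written in the question (a sketch; the overall factor depends on your normalization of the Polyakov action, e.g. a prefactor like $-\tfrac{1}{4\pi\alpha'}$ would simply multiply the current): under $\delta x^\mu = c^\mu(\sigma)$,

$$\delta S=\int d^2\sigma\;2\,\eta^{\alpha\beta}\,\partial_\alpha c^\mu\,\partial_\beta x^\nu\,\eta_{\mu\nu}
\equiv\int d^2\sigma\,(\partial_\alpha c^\mu)\,J^\alpha_\mu,$$

so reading off the current gives

$$J^\alpha_\mu=2\,\eta^{\alpha\beta}\,\eta_{\mu\nu}\,\partial_\beta x^\nu=2\,\partial^\alpha x_\mu,$$

and indeed $\partial_\alpha J^\alpha_\mu=2\,\partial_\alpha\partial^\alpha x_\mu=0$ on shell, by the equation of motion $\partial_\alpha\partial^\alpha x^\mu=0$.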
{ "domain": "physics.stackexchange", "id": 95511, "tags": "homework-and-exercises, lagrangian-formalism, symmetry, string-theory, noethers-theorem" }
Word Jumble in Python
Question: This game that I semi coded is a little bulky and kind of boring. I want to know if there is anything simple I can do to shrink it and add a little spice to the code. # Word Jumble # # The computer picks a random word and then "jumbles" it # The player has to guess the original word import random # create a sequence of words to choose from WORDS = ("python", "jumble", "easy", "difficult", "answer", "xylophone") print( """ Welcome to Word Jumble! Unscramble the letters to make a word. (Press the enter key at the prompt to quit.) """ ) play=input("Do you want to play? (yes or no)") while play=="yes": # pick one word randomly from the sequence word = random.choice(WORDS) # create a variable to use later to see if the guess is correct correct = word # create a jumbled version of the word jumble ="" while word: position = random.randrange(len(word)) jumble += word[position] word = word[:position] + word[(position + 1):] print("The jumble is:", jumble) points=100 guess = input("\nYour guess: ") while guess != correct and guess != "": print("Sorry, that's not it.") hint=input("Do you need a hint?") if hint=="yes": points=int(points)-10 if correct=="python": print("Its a snake...") elif correct=="jumble": print("Rhymes with rumble") elif correct== "easy": print("This one is so simple!") elif correct=="difficult": print("This is a hard one... its very ________________") elif correct=="answer": print("You cant find it? the _________ is ___________") elif correct=="xylophone": print("It is a toy...") print("Thanks for takeing the hint, idiot...") guess = input("Your guess: ") if guess == correct: print("That's it! You guessed it!\n") print("Your score is: "+str(points)) play=input("Do you want to play again? (yes or no)") elif guess== "": print("You failed...") play=input("Do you want to play again? (yes or no)") print("Thanks for playing.") input("\n\nPress the enter key to exit.") Answer: Some comments from this question still apply here. 
You do not respect PEP 8 and you should try to split your code into smaller chunks. Let's change things little by little: Data over code Copied from my other answer Sometimes, you have to write a lot of code because "Hey, I have a lot of logic to write, I have to write code, that's the point of programming" but the point is more to keep things simple and to use the right tool (which is not always code) for the right thing. Here you are comparing strings to get the relevant hint for an answer. It would be much clearer to store the hint and the word together in a structure. At the moment, I've decided to store this in a list of tuples (a dictionary would also have done the trick). WORDS = ( ("python", "Its a snake..."), ("jumble", "Rhymes with rumble"), ("easy", "This one is so simple!"), ("difficult", "This is a hard one... its very ________________"), ("answer", "You cant find it? the _________ is ___________"), ("xylophone", "It is a toy..."), ) ... # pick one word randomly from the sequence word, word_hint = random.choice(WORDS) ... print(word_hint) print("Thanks for takeing the hint, idiot...") Extracting logic into a function The logic to create a jumbled version looks like it could and should be put in a function of its own. Also, we can reuse already existing functions: shuffle. Now the whole code is much simpler (also, you don't need word AND correct): def get_jumble_word(word): l = list(word) random.shuffle(l) return ''.join(l) ... word, word_hint = random.choice(WORDS) print("The jumble is:", get_jumble_word(word)) A bit of logic You have: while guess != word and guess != "": # code with no break if guess == word: foo() elif guess == "": bar() After the loop, we know that the condition guess != word and guess != "" is not true anymore (because we would have stayed in the loop otherwise). In order for this condition not to be true, we must have: guess == word or guess == "".
Thus, if condition guess == word in the if branch is not true, we always go in the guess == "" part. You can rewrite this : if guess == word: foo() else: assert guess == "" bar() Do not repeat yourself Do not repeat yourself. Do not repeat yourself. You have the same last line in both branches of : if guess == word: print("That's it! You guessed it!\n") print("Your score is: " + str(points)) play = input("Do you want to play again? (yes or no)") else: print("You failed...") play = input("Do you want to play again? (yes or no)") It probably should be : if guess == word: print("That's it! You guessed it!\n") print("Your score is: " + str(points)) else: print("You failed...") play = input("Do you want to play again? (yes or no)") More functions You can define a function to handle getting the yes value from the user, one to handle a game, one to handle the interface (asking yes/no between games) : def get_input_in_list(prompt, values): while True: s = input(prompt + '(Your choices are : ' + ', '.join(values) + ')') if s in values: return s def game(): # pick one word randomly from the sequence word, word_hint = random.choice(WORDS) print("The jumble is:", get_jumble_word(word)) points = 100 guess = input("\nYour guess: ") while guess != word and guess != "": print("Sorry, that's not it.") hint = input("Do you need a hint?") if hint == "yes": points = int(points) - 10 print(word_hint) print("Thanks for takeing the hint, idiot...") guess = input("Your guess: ") if guess == word: print("That's it! You guessed it!\n") print("Your score is: " + str(points)) else: print("You failed...") def main(): while input("Do you want to play? (yes or no)") == "yes": game() if __name__ == "__main__": main() Also, I took this chance to add an if main guard.
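One quick way to convince yourself the refactored jumbler is correct: whatever order shuffle produces, the jumbled word must contain exactly the same letters as the original. A small self-contained check (the function is copied from the answer above):

```python
import random

def get_jumble_word(word):
    # shuffle a copy of the letters and glue them back together
    l = list(word)
    random.shuffle(l)
    return ''.join(l)

word = "python"
jumble = get_jumble_word(word)
print(sorted(jumble) == sorted(word))  # True: same letters, different order
```

This kind of property check also guards against the subtle off-by-one bugs the original hand-rolled jumbling loop invited.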
{ "domain": "codereview.stackexchange", "id": 10733, "tags": "python, game" }
Correctness of splitting an undirected tree into a forest of trees with even number of children
Question: Given an undirected tree (i.e. a tree without any designated root) with an even number of nodes. The task is to remove as many edges from the tree as possible to obtain a forest of trees, where each such tree contains an even number of nodes, and to return the number of removed edges as the answer. Counting the maximum number of removed edges is relatively simple: Choose any node as a root node; recursively traverse the tree using a depth-first approach; count the number of nodes of each sub-tree (from bottom to top), and cut a sub-tree from the tree if it has an even number of nodes (i.e. simply increment the counter of removed edges). This solution is correct but I don't understand why. I would like to see a proof of correctness of such an algorithm. In particular, I have the following doubts (when I'm thinking about this solution): Why does starting DFS from any chosen root give the correct maximum number of removed edges? Why does cutting sub-trees with an even number of nodes from bottom to top give the correct result? I wrote out on paper all possible configurations of undirected trees with 1, 2, 3, 4, 5 and 6 nodes. For example, there are two possible configurations of an undirected tree with 4 nodes: (a) a path x-x-x-x, and (b) a claw: a path x-x-x with the fourth node attached to the middle node. In (a), you can split the tree into 2 sub-trees with 2 nodes each. In (b), you cannot do it, since it would give two sub-trees with an odd number of nodes (with 1 node and with 3 nodes respectively). So my doubts, based on suggestions: What if it depends on where you start cutting sub-trees? What if greedily cutting sub-trees with an even number of nodes from bottom to top doesn't give the correct maximum number of removed edges? (continuation of the first question) This puzzle is described on HackerRank as "Even Tree". Their editorial and discussion just state the algorithm but don't say why it works. I also found a discussion on Stack Overflow, but it doesn't explain why such an approach works either. Answer: I got my "Aha! 
Gotcha" moment when I changed my point of view from nodes (as root of a subtree) to edges. Let me explain. The set of edges that can be cut is a "static property", the set of choices is not changed by the earlier cuts that have been made. Any edge is considered to be "even" if on both sides it has an even number of nodes in the tree. Clearly we can only cut even edges. But also, cutting an even edge does not change the even property of the remaining edges, as we merely remove an even number of nodes in subtrees. So, we are set to find the even edges, and we will cut them all. The proposed solution uses DFS, but any decent traversal will do I would think. The property that a subtree has an even number of nodes now means (in my terminology above) that the edge outside leading to the root of the subtree is in fact even.
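The answer's edge-centric view translates directly into code. A minimal Python sketch (illustrative helper, with the tree given as an adjacency dict): an edge is cut exactly when the subtree hanging below it has an even number of nodes — the static property the answer describes.

```python
def count_removable_edges(adj, root=0):
    """Number of edges that can be cut so every component has even size.

    adj: adjacency dict of an undirected tree with an even number of nodes.
    An edge is "even" (cuttable) iff the subtree hanging below it has an
    even number of nodes -- a static property, independent of cut order.
    """
    removed = 0

    def size(node, parent):
        nonlocal removed
        s = 1
        for nxt in adj[node]:
            if nxt != parent:
                s += size(nxt, node)
        if parent is not None and s % 2 == 0:
            removed += 1   # the edge from `node` up to `parent` is even: cut it
        return s

    size(root, None)
    return removed
```

For the two 4-node trees from the question, the path gives one cut and the claw gives none, regardless of which node is chosen as root.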
{ "domain": "cs.stackexchange", "id": 4321, "tags": "algorithms, trees, graph-traversal" }
Which is the best approach for searching the dictionary items?
Question: We can search a list of dictionaries using the approaches below. Can someone please help me decide which one is the most memory-efficient approach to use? Provided that: names are unique in the list; there will definitely be a name in each dictionary. Where: items = [list_of_dicts] name_to_b_searched = "some_name" 1 direct return from the for loop def get_by_name(self, name_to_be_searched): for i in items: if i['name'] == name_to_be_searched: return i 2 Break from the for loop and return the found dictionary def get_by_name(self, name_to_be_searched): found = None for i in items: if i['name'] == name_to_be_searched: found = i break return found 3 A generator function def get_by_name(self, name_to_be_searched): for i in items: if i['name'] == name_to_be_searched: yield i 4 An inline generator get_by_name = next((i for i in items if i['name'] == name_to_be_searched), None) Sometimes I could have huge CSV records (in items), so I was thinking about which approach would be better to search a record. OR if there is any other approach you would recommend, please let me know. Answer: None of the above. OR if there is any other approach you would recommend please let me know. The whole point of using dictionaries is that you don't need to search anything and can access everything in O(1) (on average) instead of the O(n) in all of your suggestions. names are unique in the list There will definitely be a name in each dictionary. So, instead of using a list of dictionaries, use a dictionary of dictionaries using the name as the outer key. Instead of items = [{'name': 'a', 'age': 22, ...}, {'name': 'b', 'age': 30, ...}] you should have items = {'a': {'age': 22, ...}, 'b': {'age': 30, ...}} Now you can access each dictionary directly by using the name. name = 'b' print(items[name]['age']) # 30 No loops, no searches, O(1) access.
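A minimal sketch of the suggested restructuring (the field names are illustrative):

```python
# Restructure once: a list of dicts becomes a dict of dicts keyed by the unique name.
rows = [{'name': 'a', 'age': 22}, {'name': 'b', 'age': 30}]
items = {row['name']: {k: v for k, v in row.items() if k != 'name'} for row in rows}

# O(1) lookup by name, no scanning; .get() returns None for a missing name.
record = items.get('b')          # {'age': 30}
missing = items.get('nobody')    # None
```

The one-time O(n) cost of building the dictionary is quickly amortised if you look names up more than once.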
{ "domain": "codereview.stackexchange", "id": 28324, "tags": "python, performance, python-3.x, comparative-review, search" }
Precision measurement of sine wave amplitude with ADC
Question: I want to measure the amplitude of a sine wave input precisely with a limited-resolution ADC. As an example, suppose that I have a $1\textrm{ MHz}$ pure sine wave input to a $320\textrm{ Msps}$ $10$-bit ADC. I believe there is a lot of redundancy in the data, and with some signal processing I could get more precision than $10$ bits. Is there any way that I can do this without any change to the circuit, like adding noise or other hardware changes? Answer: A sine wave has infinitesimally little bandwidth. By rotating, filtering appropriately and decimating, you can reduce the sample rate very much. Each of these filtering operations is typically a summing operation, in which you "average" out noise (which isn't your main concern), but also get a more precise estimate for the amplitude. Decimation in DSP is very commonly done. You'd probably want to do that anyway – 320 MS/s is really no fun to deal with, and you don't need that bandwidth. Of course, you can also correlate with a synthesized sine, and measure the correlation coefficient to get the power/amplitude. Other options are things like proper spectral estimators – there's a lot to choose from, including Welch's method, or Pisarenko-based approaches. Especially if your signal is noisy, these might be interesting, but it really depends a lot on what exactly you're measuring and how you model your noise.
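The "correlate with a synthesized sine" idea can be sketched with NumPy. The numbers below are illustrative, not a model of any particular ADC front end, and quantisation is crudely modelled as additive uniform noise at the 10-bit level:

```python
import numpy as np

fs = 320e6            # sample rate, Hz
f0 = 1e6              # known sine frequency, Hz
n = 32000             # 100 us of data = exactly 100 cycles of the 1 MHz tone
t = np.arange(n) / fs

rng = np.random.default_rng(0)
amp_true = 0.7312     # amplitude to recover (full scale = 1.0)
# Crude model: 10-bit quantisation as additive uniform noise of +-1/2 LSB.
lsb = 2.0 / 1024
x = amp_true * np.sin(2 * np.pi * f0 * t) + rng.uniform(-lsb / 2, lsb / 2, n)

# Correlate with a synthesized complex sine: A = (2/N) * |sum x_k e^{-j 2 pi f0 t_k}|
ref = np.exp(-2j * np.pi * f0 * t)
amp_est = 2 * np.abs(np.dot(x, ref)) / n
```

Because the correlation averages over many cycles, the amplitude estimate lands well below the single-sample quantisation step, illustrating how the redundancy in the data buys extra precision.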
{ "domain": "dsp.stackexchange", "id": 5318, "tags": "fft, sampling, continuous-signals, analog-to-digital" }
What does it mean when something is said to "contain ions"?
Question: (I'm looking for a very basic level explanation because my only chemistry experience is one fast-paced high school course.) So, according to the professor of that course, ions are never found in nature as ions; everything would be neutrally charged. My question is, how can something have ions in it, without having them be neutralized? Doesn't nature want to lower potential energy, which would mean bonding the ions asap? Or am I misunderstanding things and said ions are already bonded to other ions, but they are still called ions even though the charge of the whole molecule should be neutral? Answer: Your teacher is almost right but some terminology needs to be clarified. Pure substances will almost always be electrically neutral (and the exceptions involve temporary local transfer of electrons from one substance to another as when you rub a rubber balloon on a wall surface). Beyond a certain level of charge separation you get enough electrical potential to cause sparks which allows the charges to equalise again. So persistent charge imbalance isn't easy to get. I think that is what your teacher means. But if you look inside some pure substances, they do contain ions, just an equal number of each to avoid the bulk material having an overall charge. Common salt, for example, consists of ions (Na+ and Cl- in equal numbers). The substance is neutral, but the individual atoms making it up are all ions. So your teacher is wrong if she meant that ions don't exist in nature but correct if she meant that bulk substances are electrically neutral.
{ "domain": "chemistry.stackexchange", "id": 10588, "tags": "ions" }
Effect of expansion of space on CMB
Question: Is it true that the expansion of spacetime causes the CMB to become microwaves from a shorter wavelength? If so, has the amplitude been increased? Seeing as the amplitude has decreased, why hasn't it increased (/"stretched") in the same way the wavelength has? Answer: Yes to the first part of your question. This phenomenon is known as cosmological redshift. Also, due to the increase in volume from cosmological expansion, the same number of photons as were present during the CMB now have to occupy a much larger region. Consequently the average temperature ("amplitude" in your words) of such a gas drops from a high of $T \sim 150,000$ deg. Kelvin (corresponding to $T=E/k_B$ with $E$ being the ionization energy of hydrogen (13.6 eV) and $k_B$ Boltzmann's constant, $k_B = 8.6 \times 10^{-5}$ eV/Kelvin) to the currently observed $T'=2.7$ deg. Kelvin.
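A quick numeric check of the answer's own figures (the "~150,000 K" is a rounding of $E/k_B$; since temperature scales as $1/(1+z)$, the ratio of the two temperatures is roughly the stretch factor of the wavelengths):

```python
E_ion = 13.6          # hydrogen ionization energy, eV
k_B = 8.6e-5          # Boltzmann constant, eV/K

T_emit = E_ion / k_B  # ~1.6e5 K, the answer's "~150,000 K"
T_now = 2.7           # K, observed CMB temperature today

# Temperature scales as 1/(1+z), so T_emit/T_now ~ 1+z, the wavelength stretch.
stretch = T_emit / T_now
```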
{ "domain": "physics.stackexchange", "id": 4651, "tags": "astrophysics, space-expansion, cosmic-microwave-background" }
can kinetic work with indigo over a network?
Question: I am building a network that will include a laptop, some Raspberry Pis, a Jetson TK1, an Odroid C2, etc. The laptop has Kinetic installed, but the ARM boards all have Indigo, as it seems that is the only distro that will work on the ARMs. Will this work? Is there an issue with nodes being on different releases? I imagine it would be simple enough to revert the laptop to Indigo, but I'd rather not, as I think that would mean reverting my OS as well (currently Ubuntu 16.04). Originally posted by richkappler on ROS Answers with karma: 106 on 2016-11-02 Post score: 1 Answer: There are a few relevant Q/As on here already, see for instance here or here. In short: It probably works, if message definitions between both ROS versions did not change. Note that this is an officially non-supported and thus not recommended use case, however. Originally posted by Stefan Kohlbrecher with karma: 24361 on 2016-11-02 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 26130, "tags": "ros-kinetic, ros-indigo" }
Do any particles in AGN jets escape the galaxy?
Question: I have read, at http://www.thestargarden.co.uk/Black-holes.html for example, that whole stars can be ejected from certain galaxies. "These are thought to have been part of a binary star system that broke apart as it approached a supermassive black hole. As one star was captured, and the other was pushed away at a velocity exceeding the escape velocity of the Galaxy." My question is about the jets that can form at the poles of the rotating supermassive black hole: "At the rotation axis of the supermassive black hole, matter from the accretion disc can be pushed away at the speed of light, creating jets that can extend for thousands of light-years." At thousands of light-years away, is this matter still gravitationally bound to the galaxy? (If not, maybe some particles can go around in a big loop that takes a while.) Answer: Yes, definitely. While some matter returns to the galaxy as a so-called "galactic fountain" (e.g. Biernacki & Teyssier 2018), some material is ejected at super-escape velocity, becoming part of the intergalactic medium. This is one of the mechanisms responsible for polluting the intergalactic medium (IGM) with metal-rich gas (i.e. elements heavier than helium). A recent observation of this is presented by Fujimoto et al. (2020). The IGM itself is too dilute to form stars, and hence metals, itself, but is nevertheless observed to contain quite a lot of metals, usually seen as absorption lines in the spectra of background quasars (e.g. Songaila & Cowie 1996; Aguirre et al. 2008). These must be blown out from the galaxies, either by stellar feedback (through radiation pressure, cosmic rays, and supernova feedback), or by AGN activity (see also Germain et al. 2009). Note however that although material can reach relativistic velocities, "pushed away at the speed of light" is just a tad too fast. 
Only massless particles can travel at the speed of light, but if you're massless it isn't really a big achievement — photons do that all the time, which is why we see the galaxies in the first place.
{ "domain": "astronomy.stackexchange", "id": 5422, "tags": "supermassive-black-hole, intergalactic-space, active-galaxy" }
Analysis Received Signal at receiver side
Question: I'm trying to understand something that is really confusing me; I've done my best to understand it but am still confused. Suppose I have a transmitter and a receiver. Assume I want to transmit data S(t), and there are two multipath routes, L1 and L2, by which it can arrive at the receiver. Why is it good for me when I have constructive interference? Along the paths there might be destructive interference or constructive interference; my question is why constructive interference is good for me. Assume I get 2S(t) (constructive interference) as the received signal at the receiver: why is that good? What does it mean that I receive 2S(t) and not S(t)? I'm confused about the term constructive interference and how it relates to my received signal. My confusion exactly is this: I received 2X although I transmitted X, so 2X isn't X, and yet we say that we received a good signal? If we received 2X, which isn't equal to the transmitted data X, then it's not the same data, so shouldn't it be considered distortion rather than a good received signal? Thanks a lot. Answer: Constructive interference just increases the gain of the signal at the receiver, which is designed (using automatic gain control) to adjust the signal level over a very wide range to that desired for reception. So the issue the OP refers to is of no real consequence as long as the received signal is in the (relatively wide) range of the receiver between maximum and minimum signal levels allowed. Multipath distortion can come in different forms based on how fast the multipath is changing (such as in a mobile environment), resulting in "fast" or "slow" fading, and the relative distance between the paths compared to the symbol rate, resulting in "flat" or "frequency selective" fading. These are the real issues with multipath distortion, which are corrected with equalizers in the receiver. Please see this post which expands on these different fading models: Rayleigh fading with frequency selective fading channel
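A small numeric illustration of the answer's point, with made-up numbers: adding a second path delayed by a whole period doubles the waveform (pure gain, same shape), while a half-period delay cancels it:

```python
import numpy as np

f = 1e6                                   # carrier frequency, Hz (made up)
T = 1 / f                                 # one period
t = np.linspace(0, 4 * T, 4000, endpoint=False)

s = lambda t: np.sin(2 * np.pi * f * t)   # the transmitted S(t)

# Path delays: a full period apart -> in phase; half a period -> out of phase.
constructive = s(t) + s(t - T)            # = 2*S(t): same shape, twice the amplitude
destructive = s(t) + s(t - T / 2)         # = 0: the paths cancel

gain_c = constructive.max()               # ~2.0
gain_d = np.abs(destructive).max()        # ~0.0
```

The constructive case is "2X" in the question's terms: the waveform is unchanged, only scaled, and the receiver's automatic gain control absorbs the scale factor. The destructive case is the one that actually loses information.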
{ "domain": "dsp.stackexchange", "id": 9963, "tags": "image-processing, signal-analysis, digital-communications, radio, communication-standard" }
Can I calculate the percentage of alcohol if I only know that 100 ml weighs 90.135 g
Question: Imagine I have a bottle of 100 ml which weighs 90.135 g. I know that it contains only water and alcohol. I know that the density of water is 0.998 g/cm3 and that of alcohol is 0.79 g/cm3. When I try to figure out how it is divided, I'm doubting whether it is possible at all, or whether by trial and error it comes out at around 42% alcohol. But is that true, and how can you easily calculate this? Answer: $100\ cm^3$ weighs $90.135\ g$, so the density is $0.90135\ g/cm^3$. Let $x$ be the volume of water and $y$ the volume of alcohol, in $cm^3$: $$0.90135 = \frac{0.998x + 0.79y}{100}, \qquad x + y = 100 \implies y = 100 - x$$ Substituting: $$90.135 = 0.998x + 0.79(100-x) = 0.998x + 79 - 0.79x = 79 + 0.208x$$ $$11.135 = 0.208x \implies x = \frac{11.135}{0.208} \approx 53.53$$ So there are about $53.53\ mL$ of water and $100 - 53.53 \approx 46.47\ mL$ of alcohol, i.e. roughly $46.5\%$ alcohol by volume.
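The same arithmetic as a quick check (this assumes volumes add ideally; real water-ethanol mixtures contract slightly on mixing, so the true figure differs a bit):

```python
total_mass = 90.135   # g, mass of the 100 mL bottle's contents
rho_w = 0.998         # g/cm^3, water
rho_a = 0.79          # g/cm^3, alcohol
V = 100.0             # mL total

# Solve rho_w*x + rho_a*(V - x) = total_mass for x = volume of water:
x_water = (total_mass - rho_a * V) / (rho_w - rho_a)   # 11.135 / 0.208
x_alcohol = V - x_water
percent_alcohol = 100 * x_alcohol / V
```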
{ "domain": "chemistry.stackexchange", "id": 5667, "tags": "inorganic-chemistry" }
Physical interpretation of the Gibbs-Duhem equation
Question: For a binary system at constant temperature and pressure, the Gibbs-Duhem equation can be reduced to $$\mathrm d\mu_B=-\frac{n_A}{n_B}\mathrm d\mu_A$$ How does one make sense of the Gibbs-Duhem equation in terms of intermolecular forces and lattice packing? Why does the chemical potential of one component decrease (relative to the other) by a factor of $n_A/n_B$? Answer: If you write down the equation in terms of the very definition of $\mu_i=\frac{\rm dG}{{\rm d}n_i}$ and impose ${\rm d}n_\text{total}=0$, it's just the conservation of the energy potential ${\rm d}G=0$. Say you own a theme park where, every hour, each adult has to pay ${\rm d}\mu_a=\$7$ (money = energy) and each kid has to pay ${\rm d}\mu_k=\$3$. We assign positive signs when they take money out of their pockets. If there are $n_a=10$ adults and $n_k=5$ kids in the park, and if this number of people stays fixed (${\rm d}n_\text{total}=0$ over some time), then together they have to spend $$n_a\cdot {\rm d}\mu_a + n_k\cdot {\rm d}\mu_k = \$70 + \$15 = \$85$$ Now say you, the park owner, are not allowed to actually make any money (${\rm d}G=0$), and you relax the condition that the kids have to pay any money at all. That is, now just the adults have to pay ... and since the money has to go somewhere, the kids are on the receiving end. Then the ten adults still spend $\$70$ per hour, and now the five kids split that money among each other. Each kid receives $$-{\rm d}\mu_k = \frac{1}{n_k}\cdot n_a\cdot {\rm d}\mu_a = \frac{1}{5}\cdot \$70 = \$14$$ The comparison with "money per time" falls short in that here people don't bring in energy just by entering the park (as is the case with particles coming into the system). But I hope this clears up the meaning of the factor $\frac{1}{n_k}$.
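The theme-park numbers can be checked in two lines; the point is that with ${\rm d}G = 0$, the relation $n_a\,{\rm d}\mu_a + n_k\,{\rm d}\mu_k = 0$ fixes ${\rm d}\mu_k$:

```python
n_adults, n_kids = 10, 5
d_mu_adult = 7.0   # dollars each adult pays per hour

# Gibbs-Duhem at constant T and P: n_a*d(mu_a) + n_k*d(mu_k) = 0,
# so the kids' "chemical potential" must change by -(n_a/n_k)*d(mu_a):
d_mu_kid = -(n_adults / n_kids) * d_mu_adult    # -14.0: each kid receives $14

total_flow = n_adults * d_mu_adult + n_kids * d_mu_kid   # 0.0, i.e. "dG = 0"
```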
{ "domain": "chemistry.stackexchange", "id": 8230, "tags": "thermodynamics" }
Why does Lamport clocks increment on both message received and sent?
Question: According to Wikipedia, Lamport clocks need to be incremented when sending a message (time = time + 1) and when receiving a message (time = max(time_stamp, time) + 1). In my implementation, Increment() is used when sending and Witness() is used when receiving, each of them following the previous rules. I can't understand why not only Increment() actually increments the counter, that is, why Witness() does not simply do time = max(time_stamp, time) instead of time = max(time_stamp, time) + 1. The source of my confusion is the following scenario: the clock starts at 1; a message is sent, so the clock is at 2; this message is seen locally, so the clock is now at 3. In this scenario, the clock incremented twice for the same message. Ideally it would be just once. Did I break something by witnessing the clock within the same process? Did I get something wrong? Answer: From Leslie Lamport's Time, Clocks, and the Ordering of Events in a Distributed System: The space-time diagram of Figure 1 might then yield the picture in Figure 2. Condition C1 means that there must be a tick line between any two events on a process line, and condition C2 means that every message line must cross a tick line. I think my confusion was about what actually is an event. In Lamport's paper and in Wikipedia, sending and receiving the messages are the events. In that case, incrementing the counter both when sending and receiving a message makes sense, to ensure that a clock tick separates those events. In my case, the data structure modifications are the events. The message passing between the processes is irrelevant. In that case, I believe that incrementing the counter only when the data changes makes sense. Now, whether applying Lamport's clock principle this way still qualifies as a Lamport clock and holds all the related properties, I do not know.
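A minimal sketch of the two rules from the question (hypothetical class and method names mirroring the poster's Increment()/Witness()):

```python
class LamportClock:
    def __init__(self):
        self.time = 0

    def increment(self):
        # Rule 1: tick before sending (or for any local event).
        self.time += 1
        return self.time

    def witness(self, time_stamp):
        # Rule 2: merge the incoming stamp, then tick, so the receive
        # event is strictly later than the send event it observed.
        self.time = max(self.time, time_stamp) + 1
        return self.time


a, b = LamportClock(), LamportClock()
stamp = a.increment()       # A sends: stamp = 1
b_time = b.witness(stamp)   # B receives: max(0, 1) + 1 = 2
```

Without the extra +1 in witness(), the receive event could carry the same timestamp as the send it observed, and the clock would no longer order the two events.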
{ "domain": "cs.stackexchange", "id": 17446, "tags": "clocks" }
Creating a single layer perceptron for the OR problem
Question: I am working on the following problem Find the linear least squares unit weights for the `OR' problem, ie. $v_1^T = (0,0), v_2^T = (1,0), v_3^T = (0,1), v_4^T = (1,1)$ and $u_1 = 0, u_2 = u_3 = u_4 = 1$. Here $v$ represents inputs and $u$ outputs. For problems like this I usually find a matrix $W$ (the weights) such that $$u_i=Wv_i \quad i=1,2,3,4$$ but I think it is obvious such a matrix doesn't exist, my reasoning being that the problem is equivalent to finding the matrix $W = \begin{pmatrix} x_1 & x_2 \end{pmatrix}$ such that $$\begin{pmatrix} 0 & 1 & 1 & 1 \end{pmatrix} = \begin{pmatrix} x_1 & x_2 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \end{pmatrix}$$ And looking at the two middle columns of the left vector and right matrix we must have $x_1=x_2=1$, but then we get $1 + 1 = 1$, a contradiction. In a unit perceptron a function may be applied to the output, and a step function $$f(x)=\begin{cases} 0 & x \leq 0 \\ 1 & x > 0 \end{cases}$$ would work here. My question here is, is this correct? I wasn't quite sure if I am answering the right question. I am pretty sure $u_i = f(Wv_i)$ but I am not too familiar with what is meant by "linear least squares unit weights" in this context? Answer: The fact that you mention linear least squares error seems to hint that they want you to use a completely linear model $u_i=Wv_i$ for your perceptron. In this case you won't get exact answers like $0$ and $1$; you will only get approximations with some amount of error. That said, I don't think a linear model makes sense for this problem, since it is a classification problem with two classes. Using a non-linear step-like function $f$ before the final output of a perceptron classifier is a pretty standard thing to do, so I see no problem with using $u_i=f(Wv_i)$. Assuming you can use $f$, the weights $x_1 = x_2 = 1$ you mentioned solve this problem exactly with zero error.
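A quick numeric check of this reading (NumPy's least-squares solver, then the step function; under the purely linear model the best weights come out as $(2/3, 2/3)$, and thresholding still reproduces OR exactly):

```python
import numpy as np

V = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)  # inputs v_i as rows
u = np.array([0.0, 1.0, 1.0, 1.0])                           # OR targets u_i

# Linear least squares: minimize ||V @ w - u||^2 (no exact solution exists).
w, *_ = np.linalg.lstsq(V, u, rcond=None)     # -> [2/3, 2/3]

step = lambda z: (z > 0).astype(float)
outputs = step(V @ w)                          # thresholding recovers OR exactly
```

The raw linear predictions are $(0, 2/3, 2/3, 4/3)$, so the residual error is nonzero, as the answer says; but after the step function the four outputs match the OR targets.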
{ "domain": "cs.stackexchange", "id": 4338, "tags": "neural-networks, linear-algebra, boolean-algebra" }
Naming a keto ester
Question: Find the IUPAC name of the above structure. This question was asked in my chemistry exam. I could not figure out whether it is a ketone or an ester. My attempts: 1. If I consider the whole compound as an ester, then the part to the left of C-O-C is the acid part and the part to the right is the alcohol part. Thus the IUPAC name is: ethyl 4,5-dibromoheptan-2-one-6-oate. 2. If I consider the part to the left of C-O-C as a ketone and the part to the right as an ester, then the IUPAC name comes out to be: 4,5-dibromopentan-2-one ethyl methanoate. Is either of my attempts correct, and if not, why not? Answer: According to Section P-41 in the current version of Nomenclature of Organic Chemistry – IUPAC Recommendations and Preferred Names 2013 (Blue Book), the seniority of classes, in decreasing order, is as follows. (…) 9. esters (…) 16. ketones (…) Therefore, the compound that is given in the question is named as an ester. The names of esters are usually formed by placing the alcoholic component in front of the name of the acid component as a separate word. P-65.6.1 General methodology Neutral salts and esters are both named using the name of the anion derived from the name of the acid. Anion names are formed by changing an ‘-ic acid’ ending of an acid name to ‘-ate’ and an ‘-ous acid’ ending of an acid name to ‘-ite’. Then, salts are named using the names of cations, and esters the names of organyl groups, cited as separate words in front of the name of the anion. Therefore, the name of the parent structure without further substituents is ethyl hexanoate. This ester is substituted using the usual principles of substitutive nomenclature. A low locant is assigned first to the principal characteristic group (i.e. the ester group). The prefix ‘oxo’, denoting $\ce{=O}$, is used to indicate a carbonyl group when the group cannot be cited as a suffix. Thus, the complete name for the compound that is given in the question is ethyl 2,3-dibromo-5-oxohexanoate.
{ "domain": "chemistry.stackexchange", "id": 7167, "tags": "organic-chemistry, nomenclature, carbonyl-compounds" }
Effect of Quantum Efficiency of a CCD on the intensity and how to normalize it?
Question: I have a multispectral camera with around 25 bands in the near-IR range, i.e. there are 25 different wavelengths that the CCD can capture. For each of these CCD filters the quantum efficiency curves differ remarkably. I need help understanding how QE for different wavelengths affects the brightness of the captured image for that wavelength. If it does, then I am interested in calculating the scaling factor by which I have to multiply each pixel value to get the intensity right. The camera is monochrome and allows images to be captured only in RAW16 format without any interpolation. Answer: I have a multispectral [camera] with around 25 bands ... for each of these CCD filters the Quantum Efficiency curves differ remarkably. I need help understanding how QE for different wavelengths affects the brightness of the captured image for that wavelength. I am interested in calculating the scaling factor with which I have to multiply each pixel value to get the intensity right. With few exceptions, like the Foveon X3 sensor, all image sensors start out as monochrome sensors. In the case of most sensors for consumer cameras, a color filter array (CFA), usually a Bayer filter, is placed over the pixel sensors of an image sensor to capture color information. In the case of a multispectral sensor with 25 bands, instead of a CFA, an array of 25 different filters is arranged in a 5x5 pattern. You can use the entire block of 25 sensels (cells) to represent a single low-resolution pixel with a wide spectral range, or choose only a few of them - the same spectral range is only going to be repeated once in each 5x5 block. If you only wanted a narrow range you would need to skip to the next block to get the sample for your next pixel. In other words, a 1000x1000-sensel monochrome image sensor with a multispectral filter array (MFA) only has 200x200 pixels, each with 25 bands of the spectrum detectable by the sensor. 
You would probably map each band to a single visible colour, an intensity, or a color space, so you could view the infrared image. The sensor itself has a QE, and the filters have an efficiency that may not be identical for each band. You would want to use a table of floating point values and simply multiply each sensel by its individual value; that compensates for variations between individual sensels and their associated filters, so you wind up with a flat(ish) response across the usable range. You would probably want 1000x1000x4 bytes = 4 MB of memory. It depends on what the underlying sensor is, which will determine its bandwidth; it's safe to assume that it's wide enough for the MFA used, but at the upper end the efficiency is likely reduced (you'll need to multiply those sensels by a larger value). When you overlay the MFA on the sensor, it ends up with a flat(ish) response after applying the adjustment. To calibrate the sensor you need a source of light with an appropriate spectrum, and preferably flat output intensity in the range of your MFA. A calibrated Quartz Tungsten Halogen (QTH) source will provide a flat spectrum in the infrared range you need. That would be expensive to buy and possibly difficult to find somewhere to rent from. A heat lamp or infrared heater would work too, but you'll be lucky to find a spectrum chart for them. You can go to the hardware store and buy a $20 QTH worklight, remove the safety glass, and end up with a spectrum that is nowhere near as good, but much cheaper; so is sunlight. Notice that the atmosphere blocks parts of the Sun's spectrum, but it's an inexpensive source of infrared light to use while you are experimenting to determine if you can get your camera running and decode the RAW16 output. 
You can read up on the different processing algorithms or modify existing software like ImageCooker, or view this paper: "Processing RAW images in Python" to learn more about the subject. You could also write plugins for existing software, like ImageJ, with something like the "Image Calibration and Analysis Toolbox", available from Jolyon Troscianko's webpage "Multispectral Image Calibration and Analysis Toolbox" or the Sensory Ecology and Evolution website. Also, the Photoconductor Array Camera and Spectrometer (PACS) instrument of the Herschel Space Observatory used a 25-band array, so your camera calibration problem and question are not unique. Once you are able to read a RAW image frame and save it to a file, you would use a table of floating point values to multiply each sensel by its individual value, as explained above.
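The per-sensel gain-table correction described above is essentially a flat-field calibration; a sketch in NumPy with made-up numbers (in practice the table would be float32, matching the 4 MB estimate above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated response of a 1000x1000 sensel array to a uniform ("flat") light
# source: each sensel/filter pair has its own sensitivity.
flat_frame = rng.uniform(800, 1200, size=(1000, 1000))

# The per-sensel table of floating point multipliers, built once:
gain = flat_frame.mean() / flat_frame

raw = flat_frame * 0.5        # any RAW16 frame; here just a dimmer flat field
corrected = raw * gain        # multiply each sensel by its individual value

spread_before = raw.std() / raw.mean()            # ~11% sensel-to-sensel spread
spread_after = corrected.std() / corrected.mean() # essentially zero
```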
{ "domain": "physics.stackexchange", "id": 51253, "tags": "quantum-mechanics, optics, quantum-optics" }
From how deep into space can a human "skydive" back to earth?
Question: Possible Duplicate: From how high could have Felix Baumgartner jumped without disintegrating like a shooting star? If a human can skydive from an altitude of 24 miles (39 km), and a satellite can stay in geostationary orbit at 22,236 miles (35,786 km), then what is the maximum altitude from which a human can theoretically "skydive"? Furthermore, what would be that human's fastest speed? (Felix went 834 mph.) Answer: A satellite can stay in pretty much any orbit; LEO satellites are pretty common. The lower the orbit, the more air particles slowing the satellite down, and the more need for readjusting it, either by its own engines or by a spacecraft. Now for a human skydiving, there is really no limit. It's just a matter of drawing a line between a suit and a capsule; if the suit can provide an arbitrary amount of air, shield from vacuum, dissipate the extreme heat of passing through the atmosphere and so on, there is no limit. If it can't provide an arbitrary degree of that, there will always be a slightly better suit allowing for a slightly higher jump. No "top height" as such. The current 39 km is the result of a happy medium between the marketing value of the jump and the cost of equipment and research necessary to perform it. At orbital altitudes, the vehicle the jump is performed from would have to move with such a direction and speed as to provide an optimal entry curve, instead of just dropping the person into orbit and leaving them there to orbit Earth forever. But other than that, there is no reason why a man couldn't be lobbed from behind Jupiter, make a slow-down loop around the Moon, then spiral down to Earth... given some marvelous suit that will withstand the atmospheric entry.
{ "domain": "physics.stackexchange", "id": 5090, "tags": "space, velocity, kinematics" }
What is a good rule of thumb for the threshold of noise versus signal for RPK in RNA seq?
Question: I have RPK values (RNA-seq) and I'm wondering what is a good rule of thumb for what is considered to be noise versus what is considered to be signal? I.e. what should I choose as a threshold value for what is considered noise versus not noise? (Just as a rough guideline.) Answer: Signal and noise are best determined at the count level, rather than after normalisation. I don't think there's an easy way to get a signal level from normalised data. When I wanted to establish the noise level of counts from Illumina reads from lots of different samples, I looked at the total count distribution for genes that had counts in only one sample, and used the maximum value of that (or possibly an elbow of that distribution) as my signal threshold.
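The answer's heuristic, sketched on toy count data (genes as rows, samples as columns; the numbers are made up):

```python
import numpy as np

# Toy count matrix: rows are genes, columns are samples.
counts = np.array([
    [0,   0,  3,   0],    # seen in only one sample
    [0,   7,  0,   0],    # seen in only one sample
    [120, 98, 134, 101],  # seen everywhere: clearly signal
    [0,   0,  0,   2],    # seen in only one sample
])

# Genes detected in exactly one sample form the presumed noise population.
one_sample_only = (counts > 0).sum(axis=1) == 1
noise_totals = counts[one_sample_only].sum(axis=1)   # total counts: [3, 7, 2]

threshold = noise_totals.max()   # counts at or below this are treated as noise
```

On real data you would inspect the full distribution of `noise_totals` (and possibly pick an elbow rather than the maximum), but the mechanics are the same.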
{ "domain": "bioinformatics.stackexchange", "id": 1845, "tags": "rna-seq, normalization" }
aLIGO potential signals mimicking GWs not considered in the team publications?
Question: [EDITED to accommodate info from the comments] Among the local atmospheric electromagnetic sources of a signal capable of mimicking the waveform of a GW, and not sufficiently considered by LIGO, are sferics and radio bursts associated with lightning and with the terrestrial gamma-ray flashes that accompany thunderstorms. As discussed in the comments, in the presence of some charge in the interferometer it is plausible (see calculations in the comments below) for a radio wave in the 35-70 Hz frequency range to produce a few oscillations, with a magnetic field just under what would be detected by the magnetometers. Once this oscillatory force produces displacement of the mirrors, it is also a possibility considered in the GW optomechanics literature (via the optical spring property of the detuned signal-recycled cavity that forms between the signal recycling mirror and the other mirrors) that the laser radiation pressure anti-damping effect leads to a few more cycles, with frequency rising towards the optical cavity resonance before it is quickly damped. 
Precisely this signal-recycling/optical-spring property is exploited by the Advanced LIGO interferometer to get close to, or even beat, the standard quantum limit (SQL) of detectability and enhance the sensitivity in the presence of an actual GW at these low frequencies (under 300 Hz is where amplitude sensitivity is maximized) associated with binaries spiraling in and merging, but it is also known to be capable of producing parametric instabilities (see for instance chapters 3, 11 and 12 of "Advanced Interferometers and the Search for Gravitational Waves" edited by Bassan, or consult the arXiv for papers by Meers, Chen and Buonanno). So given all this, it is hard not to wonder why LIGO and other scientific groups independent of LIGO haven't apparently considered these kinds of potential electromagnetic sources of confusion, if only to critically scrutinize the interpretation of a single event put forth by a single team, as should always be done in science. ADDED June 18th, 2017: After more than a year and over 2000 citations, finally one team makes the actual effort of going critically through the published data from LIGO and publishes their conclusions: https://arxiv.org/abs/1706.04191 . Basically they find that when the analysis is done template-free, the noise during the GW "signal" (with the analysis done either with the signal subtracted or not, given the weakness of the putative GW amplitude) is also lagged 6.9 ms between detectors, when it should be stochastic. That "noise" could perfectly accommodate a signal from lightning-related events of terrestrial origin (which could go perfectly unnoticed by the magnetometer, as shown in the comments below), but certainly not, as a whole, the shape of a GW signal. Answer: The radiation pressure from a ~50 Hz (freq of the aLIGO wiggle) radio wave with an amplitude of 1 pT (a typical Schumann wave) or 10 pT (which is E=cB=3 mV/m) for a less frequent Q-burst (associated with Sprites) is very, very small. 
Also, it would push the 40 kg mirror mass in one direction only. If it arrived as an impulse, the mirror would swing at its pendulum freq of < 1 Hz and not make a ~50 Hz chirp back and forth waveform. However, you may be looking in the correct place for a non-gravitational effect. It turns out aLIGO's end mirrors may be charged!! If there were a ~50 Hz radio wave with a chirp pattern, it might be able to explain what aLIGO saw. At a recent aLIGO talk, I asked the speaker about the charge on the end mirrors. He said a charge may be there and was an active topic of investigation within aLIGO. For now we will have to estimate it. The position of the 40 kg x .34 m diam mirror is controlled by pushing against another mass hanging from the same pendulum stage. This adjustment is needed to precisely send the laser beam back down the 4 km to the splitter mirror. This pusher plate is 5 mm away, has concentric electrodes on it, and is divided into quadrants. As I understand from an aLIGO paper, up to +-280 volts (and up to an additional 500 volt offset) may be placed on these electrodes. If the average voltage on these electrodes is not zero, then the other plate of the capacitor (the mirror) charges up. The capacitance between the two plates is 160 pF. If (wild guess) the average voltage were 100 v, then the mirror would have q = CV = 16 nC of charge on it. For a driving freq of 50 Hz, the mirror (< 1 Hz pendulum freq) behaves as a free mass. So the amplitude of its motion for a 10 pT Q-burst is: $$ x=\frac{qE}{m (2\pi \nu)^2}=\frac{(16*10^{-9}coul)(10^{-11}Tesla)(3*10^8m/sec) }{(40kg)(2\pi 50 sec^{-1})^2}=1.2*10^{-17}meters $$ $$ strain=\frac{1.2*10^{-17}meters}{4000meters}=3*10^{-21} $$ The 50 Hz radio wave would also be attenuated by ~1/3 by passing through a 1 cm aluminum vacuum chamber wall. 
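The displacement and strain estimates above can be checked numerically (all inputs are the assumed values from the answer, including the guessed 16 nC mirror charge):

```python
import math

q  = 16e-9    # assumed mirror charge (C), the wild guess above
B  = 10e-12   # Q-burst magnetic amplitude (T)
c  = 3e8      # speed of light (m/s)
E  = c * B    # E = cB for a plane wave (V/m)
m  = 40.0     # mirror mass (kg)
nu = 50.0     # driving frequency (Hz)
L  = 4000.0   # arm length (m)

# Free-mass response of the mirror to the oscillating force qE
x = q * E / (m * (2 * math.pi * nu) ** 2)   # ~1.2e-17 m
strain = x / L                              # ~3e-21
```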
The aLIGO signal at both interferometers had an amplitude of $0.5*10^{-21}$ so a Q-burst size signal, happening in the ionosphere between the two interferometers, is large enough given our assumption of 16 nC on the mirror. A 10 pT amplitude signal would not have been vetoed by the aLIGO magnetometers. A LIGO paper said the magnetometers had a noise of 4 pT/sqrt(Hz). Integrating this over the 35-350 Hz bandwidth to which aLIGO filtered its strain signal, the magnetometer threshold for detecting a glitch was probably greater than 71 pT. However, in googling the literature, I have found no atmospheric effect waveform that looks like the aLIGO chirp (increases in freq and amplitude as time progresses). The Q-burst mentioned above is a spike with some decaying oscillations of about the correct freq and does not look like the aLIGO chirp. Though aLIGO's detection doesn't look like a Q-burst, the above calculation shows (if 16 nC on the mirror is correct) that an electromagnetic wave could make the observed strain and escape the magnetometer's glitch veto. Perhaps LIGO has discovered some previously unknown, small, and infrequent electromagnetic atmospheric phenomena? Maybe someone from within LIGO is on Physics Stack and can comment/add info. What the aLIGO signal is will become clearer as they see more events. Very exciting, and I too hope it is a gravitational wave for the window this would open on the universe! Addendum 1: Calculation of the strain caused by a "~100 Hz laser spring oscillator" receiving a .005 sec (=1/2 period) impulse of EM radiation perpendicular to the mirror face and completely absorbed by the mirror. The 100 pT EM wave is probably just below what the magnetometer will veto as a glitch. 
The momentum pmax transferred to the mirror oscillator is $$ pmax=\frac{1}{c\mu_0} (E\times B)*Area*\Delta t=\frac{(3*10^8m/sec)(10^{-10}Tesla)^2(\pi (.17m)^2)(.005 sec)}{(3*10^8m/sec)(4\pi 10^{-7})}=3.6*10^{-18} kg-m/sec $$ After 1/4 of a period the mirror will have p=0 at its max amplitude of xmax $$ xmax=\frac{pmax}{m(2\pi \nu)}=\frac{3.6*10^{-18} kg-m/sec}{(40kg)(2\pi100sec^{-1})}=1.4*10^{-22}m $$ $$ strain=\frac{1.4*10^{-22}m}{4000m}=3.6*10^{-26} $$ which is much less than the $10^{-21}$ peak strain aLIGO saw. Addendum 2: Now consider if Terrestrial Gamma Flashes (TGFs) of ~1 MeV gamma rays might give a LIGO mirror enough impulse. Assume the 40 kg mirror is absorbing relativistic particles like photons so $p=\frac{E}{c}$. Calculate how much energy must be absorbed by the "100 Hz laser spring osc" to account for the strain amplitude seen. $$ E=c*pmax=c*m(2\pi \nu)xmax=(3*10^8 m/sec)(40kg)(2\pi*100Hz)(10^{-21}*4000m)=3*10^{-5}joules $$ Convert this to MeV and divide by the area of the mirror $$ Flux=(\frac{3*10^{-5}joules}{1.6*10^{-13}joule/Mev})(\frac{1}{\pi (17cm)^2})=2*10^5Mev/cm^2 $$ The Fermi papers say the TGF events are <1/4 msec (so ~delta function impulse to excite our 10 msec period osc) but do not report the total energy deposited in the GBM BGO. What they do say is that the largest TGF events they have seen have ~300 gammas in the ~300 cm^2 area of their BGO detectors, the energy spectrum falls ~$E^{-2}$, and ~40 MeV is the largest energy gamma they have seen in ~3000 events in 4 years of data. So, we can calculate a big overestimate of the MeV/cm^2 they have seen in their rarest event: $$ Flux_{GBMmax}=\frac{300*40Mev}{300cm^2}=40Mev/cm^2 $$ This falls 4 orders of magnitude short of what we calculated as needed for the aLIGO signal. 
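Both addenda can be checked numerically (same assumed mirror parameters as above):

```python
import math

c   = 3e8                  # speed of light (m/s)
mu0 = 4 * math.pi * 1e-7   # vacuum permeability (T*m/A)
m   = 40.0                 # mirror mass (kg)
nu  = 100.0                # laser-spring oscillator frequency (Hz)
A   = math.pi * 0.17**2    # mirror face area (m^2)
arm = 4000.0               # arm length (m)

# Addendum 1: momentum from a 100 pT, 0.005 s EM impulse
B, dt = 100e-12, 0.005
pmax = B**2 * A * dt / mu0            # = (E*B/(c*mu0))*A*dt with E = c*B
xmax = pmax / (m * 2 * math.pi * nu)  # ~1.4e-22 m
strain = xmax / arm                   # ~3.6e-26, far below 1e-21

# Addendum 2: gamma-ray flux needed for a 1e-21 strain at 100 Hz
x_need = 1e-21 * arm                            # mirror amplitude (m)
E_need = c * m * (2 * math.pi * nu) * x_need    # joules, ~3e-5 J
flux   = E_need / 1.6e-13 / (A * 1e4)           # MeV per cm^2, ~2e5
flux_gbm_max = 300 * 40 / 300                   # MeV/cm^2, Fermi's rarest event
```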
Yes, the question of distance from the TGF lightning has been ignored, but Fermi with its 450 mile orbital altitude probably has been as close (or closer) to a lightning storm in 4 years as the two interferometers (2000 miles apart/2=1000 miles to the lightning) were in two weeks of LIGO data taking. We have also ignored the shielding of the atmosphere which would attenuate the energy reaching LIGO even more.
{ "domain": "physics.stackexchange", "id": 29328, "tags": "electromagnetic-radiation, ligo" }
How to handle odd word
Question: Given the language $L = \{ a^n \mid n \text{ is odd} \}$, I'm looking for a word $w$ using $p \in \mathbb{N}$. For example, if it were even instead of odd, I'd choose $w = a^{2p}$. But with odd, I'm really struggling to find a word. What do I do? Define a variable $j$ and say $j$ is odd, so $a^j \in L$? I know the language is regular, but I still want to know how to handle the odd case. Answer: I have no idea what exactly you're asking, but if you consider $w = a^{2p}$ to be a valid answer for even-length words, I assume $w = a^{2p+1}$ would be fine for odd words.
{ "domain": "cs.stackexchange", "id": 21820, "tags": "formal-languages, regular-languages, pumping-lemma" }
OFDM time vs. frequency domain channel estimation/equalization
Question: In OFDM, the majority of equalizers are used in the frequency domain. I mean the signal is transmitted in the time domain (after performing the IFFT); then, to estimate the channel, we perform an FFT so we can use any type of equalization, such as LS or MMSE. My question: what about performing the equalization in the time domain? That is, at the receiver side, we perform the estimation/equalization before performing the FFT (estimate and equalize in the time domain). What are the advantages of using frequency-domain versus time-domain estimation and equalization? Thank you. Answer: Traditionally, OFDM became popular in WiFi and LTE because the channel model consisted of multi-path. That is, the radio signal transmitted at 1-6 GHz frequencies bounces off various obstacles (walls, trees, cars, humans) before arriving at the receiver. Of course this is time varying, because obstacle position or transmitter/receiver position also changes. But to simplify the computation, if we consider a multi-path channel at time $t$, its baseband model for an $L$-tap multipath is $$ h[n] = \sum_{k=0}^{L-1}a_k\delta[n-k] $$ where $a_k = r_ke^{j\phi_k}$ is the complex tap for path $k$. If you plot the FFT of this channel, you will find it is not flat but rather has different gains at different frequencies. To equalize this in the time domain, you would use adaptive filtering techniques. The problem is the complexity and the time taken for adaptive filtering to estimate the channel. By the time the adaptive filter's error tapers down, the channel will have changed (due to movement of the transmitter/receiver/obstacles). So for each OFDM burst, there is a need to immediately equalize to demodulate the packet and ultimately show the data on the device! This is where the advantage of OFDM comes into the picture. 
Even though the complexity of OFDM is higher (overhead of preamble, pilot symbols, the need to maintain orthogonality), the equalization simplifies to a single-tap equalizer in the frequency domain, as the whole frequency-selective channel is split into small flat-fading channels by the IFFT -> Cyclic Prefix -> FFT method. You make sure the circular convolution happens by inserting a cyclic prefix, so in the frequency domain it turns into a point-by-point equalization $Y[k]/H_{eq}[k]$, where $Y[k]$ is the received symbol at subcarrier $k$ and $H_{eq}[k]$ is the equalizer tap you computed using pilots. EDIT: I understand the channel model given above is a very simplified view of the multi-path model. The simplification is explained in the famous book by Prof. David Tse and Prof. Pramod Viswanath (https://web.stanford.edu/~dntse/Chapters_PDF/Fundamentals_Wireless_Communication_chapter2.pdf)
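The single-tap frequency-domain equalization described above can be demonstrated end-to-end in a few lines of NumPy (a noiseless toy link with a known channel; a real receiver would estimate $H_{eq}[k]$ from pilots rather than know it exactly):

```python
import numpy as np

rng = np.random.default_rng(0)
N, cp = 64, 8                       # subcarriers, cyclic-prefix length

# One QPSK symbol per subcarrier
bits = rng.integers(0, 2, (N, 2))
X = ((2*bits[:, 0] - 1) + 1j*(2*bits[:, 1] - 1)) / np.sqrt(2)

x = np.fft.ifft(X)                  # IFFT -> time-domain OFDM symbol
tx = np.concatenate([x[-cp:], x])   # prepend cyclic prefix

h = np.array([1.0, 0.5 + 0.2j, 0.2, 0.1j])  # multipath taps a_k (L = 4 <= cp)
rx = np.convolve(tx, h)[:len(tx)]   # the channel is a linear convolution

y = rx[cp:cp + N]                   # strip CP -> convolution becomes circular
Y = np.fft.fft(y)
H = np.fft.fft(h, N)                # channel frequency response H_eq[k]
X_hat = Y / H                       # one-tap equalizer per subcarrier
```

With the cyclic prefix at least as long as the channel delay spread, `X_hat` recovers `X` to machine precision, which is exactly why the whole frequency-selective channel reduces to per-subcarrier division.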
{ "domain": "dsp.stackexchange", "id": 8531, "tags": "estimation, ofdm, equalization" }
Themed Folder Classification
Question: I am trying a very simple method of getting the theme information via a theme.xml within each theme folder. What it should do: Scan the theme directory ../themes Return the names of all the theme folders (array) Use the name of the theme folder returned to create a link to search in file_exists() Find the theme.xml in every folder Parse every theme.xml with simplexml_load_file() Return an array ready for output in a foreach() If !file_exists(), output "theme.xml Not Found in (themename)" I tried to make this in 2 functions but it would always output the first or last theme's info, so this is what I'm doing now. Can anyone suggest a way to help separate the function from the output? Get available theme folder names function public function getAvailableThemes() { $path = '../themes'; $themeDir = glob($path . '/*' , GLOB_ONLYDIR); return $themeDir; } Output in wcx_themes.php file <?php $themeDir = $backend->getAvailableThemes(); foreach($themeDir as $themeDir) { $preview = $backend->getThemePreview($themeDir); $xmlfile = $themeDir.'/theme.xml'; if(file_exists($xmlfile)) { $xml = simplexml_load_file($xmlfile); $themename = $xml->themename; $themedescription = $xml->themedescription; $themeversion = $xml->themeversion; $themeauthor = $xml->themeauthor; echo '<div class="themewrap"> <div class="themepreview"> <img src="'.$preview.'"/> </div> <div class="themeinfo"> <div class="themename">Name:&nbsp;'.$themename.'</div> <div class="themedescription">Description:&nbsp;'.$themedescription.'</div> <div class="themeversion">Version:&nbsp;'.$themeversion.'</div> <div class="themeauthor">Author:&nbsp;'.$themeauthor.'</div> </div> <div class="themebar"> <div class="themebarname">'.$themename.'</div> <div class="themebutton"> <a href="../controllers/themecontrol.php">INSTALL</a> </div> </div> </div>'; } else { echo ' <div class="themewrap"> <div class="themepreview"> <img src="'.$backend->getThemePreview($themeDir).'"/> </div> <div class="themeinfo"> <div 
class="themenoexist">THEME.XML DOES NOT EXIST</div> </div> </div>'; } } ?> Get theme preview image public function getThemePreview($theme) { $ext = '.png'; $preview = '../themes/'.$theme.'/preview'.$ext; if(file_exists($preview)) { return $preview; } else { $preview = 'images/nopreview'.$ext; return $preview; } } Theme.xml <?xml version="1.0" encoding="utf-8"?> <theme> <themename>Default</themename> <themedescription>This is the Default Theme</themedescription> <themeversion>1.0</themeversion> <themeauthor>WCX</themeauthor> </theme> Answer: Yes, you can move the output away from the code; however, that's usually difficult to do with this type of procedural code. The code is extremely tightly bound, which makes separation harder than it should be! The two functions you call are fine as they are. So let's focus on those for now. getAvailableThemes() This function, since it's only being called once, is what I believe to be clutter. I don't see a need to separate it from the main logic. Doing so only increases the number of times we need to look back at the code. Also, I see no reason to cut off "Directory" in $themeDir. Spell it out for clarity. getThemePreview() Is the return value a "preview" or a "file"? Perhaps a better name would be getThemePreviewFile. We can refactor the code a lot: $preview = '../themes/'.$theme.'/preview.png'; if(!file_exists($preview)) { $preview = 'images/nopreview.png'; } return $preview; I shortened it, and I added the extension into the literal because it doesn't look like the extension will change. Now for the main file! wcx_themes.php First off, indentation is key here, and it's missing! Readability is very important. foreach($themeDir as $themeDir) - I don't like this at all. Collections should be pluralized. The immediate solution I can think of for separating the HTML and the PHP is a templating library. 
Other than that, you could store the HTML in a string, and then use sprintf or another form of replacement to build the output.
{ "domain": "codereview.stackexchange", "id": 8638, "tags": "php, xml" }
Is sequencing error a function of the nucleotide being read?
Question: Checking out on Google Scholar, I can see that for Illumina (just to consider one example) the sequencing error rate is of the order of 0.001-0.01 per nucleotide. Talking about sequencing error, let's consider mismatches (substitution of one nucleotide by another) only. Knowing the "true" nucleotide at a given position, is it as likely to be read as any other specific nucleotide during a mismatch, or are there biases? For example, if the true nucleotide is A, is it more likely to be read as a G (as they both are purines) than a T or a C? Are some nucleotides more likely to be misread than others? I am hoping the answer won't depend too much on the sequencing techniques. Answer: Unfortunately, it does depend on sequencing techniques. For example, in Illumina sequencing, each sequence fragment is amplified (in order to get a stronger signal) and forms a cluster on the microarray. Each cluster is sequenced by cycles of: Adding fluorescent terminator nucleotides. These nucleotides are modified to contain an inhibiting/terminating group and prevent more nucleotides from being added. Theoretically, only one nucleotide is incorporated into every DNA fragment in this step. Washing off excess nucleotides. Capturing the incorporated nucleotide using imaging techniques and determining which base was incorporated (based on the colour of fluorescence). Cleaving the terminator from the added nucleotides, so that the reaction can continue. Image from Metzker, 2010. This way, each fragment is synthesized, one nucleotide at a time, and each nucleotide that is incorporated gets detected. However, the first step is not flawless: sometimes more than one nucleotide gets incorporated into a certain DNA fragment, or no nucleotides get incorporated. Eventually, the DNA fragments in a cluster (all containing the same sequence) will fall out of sync ("phasing") and the fluorescent signal will become less clear, with a mixture of different colours. 
This is the main cause of sequencing error for Illumina machines, and also the reason why Illumina reads are relatively short (~300bp). So to answer your question, in this example, nucleotides may be erroneously read as nearby nucleotides in that sequence. Errors will vary using other sequencing methods and how those methods work. The article I linked earlier explains various sequencing methods in more detail. (Unfortunately, it's behind a paywall so some may not be able to view it.)
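As a toy illustration of why phasing limits read length (not from the linked article; the per-cycle failure probability is a made-up number), assume each strand in a cluster independently stays in sync with probability (1 - p) per cycle:

```python
p = 0.01  # assumed per-cycle probability that a strand falls out of phase

def in_sync(n):
    """Fraction of the cluster still in phase after n sequencing cycles,
    under the independence assumption above."""
    return (1 - p) ** n

f150 = in_sync(150)  # ~0.22 of the cluster still gives a clean signal
f300 = in_sync(300)  # ~0.05 -- the fluorescent signal is largely washed out
```

Even a 1% per-cycle slip rate degrades the cluster signal geometrically, which is consistent with read lengths topping out around a few hundred bases.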
{ "domain": "biology.stackexchange", "id": 4369, "tags": "molecular-genetics, dna-sequencing, sequence-analysis" }
Performance Issue on css and javascript styling
Question: I make some custom UI controls. Typically my controls have themes so they can be changed. I don't use CSS files for each theme. What I do instead is have a JavaScript file that contains the different themes for that particular control. Example of my button control (Note: I took the rest of the CSS off the other colors just so this won't be so long in the post) (function ($) { $.fn.buttonTheme = function () { }; $.buttonTheme = { newButtonTheme: function(){ $.buttonTheme.rules = { "button" : { "nonrounded": "cursor: pointer; cursor: hand;display:inline-block;zoom:1;*display:inline;" + "vertical-align:baseline;margin:0 2px;outline:none;text-shadow:0 1px 1px rgba(0,0,0,.3);" + "-webkit-border-radius:0;-moz-border-radius:0;border-radius:0;-webkit-box-shadow:0 1px 2px rgba(0,0,0,.2);" + "-moz-box-shadow:0 1px 2px rgba(0,0,0,.2);box-shadow:0 1px 2px rgba(0,0,0,.2);", "rounded": "cursor: pointer; cursor: hand;display:inline-block;zoom:1;*display:inline;vertical-align:baseline;margin:0 2px;" + "outline:none;text-shadow:0 1px 1px rgba(0,0,0,.3);-webkit-border-radius:.4em;-moz-border-radius:.4em;" + "border-radius:.4em;-webkit-box-shadow:0 1px 2px rgba(0,0,0,.2);-moz-box-shadow:0 1px 2px rgba(0,0,0,.2);" + "box-shadow:0 1px 2px rgba(0,0,0,.2);" }, "light_gray": { "enabled": "" + "", "disabled": "", "hover": "", "text": "" }, "black": { "enabled": "", "disabled": "", "hover": "", "text": "" }, "gray": { "enabled": "", "disabled": "", "hover": "", "text": "" }, "white": { "enabled": "", "disabled": "", "hover": "", "text": "" } }; } }; })(jQuery); So in the control JavaScript file I build the control by assigning the particular elements the particular style needed based on the selected theme, like light_gray. I find this method to be easy to maintain and update to other themes because the underlying samples are there and it's not hard coded in the script. So I can just create another theme like aqua for example and just change the colors, etc. 
Also I can dynamically change the theme more easily without having to do a page refresh. What I wanted to know from others is their thoughts on this. Is there a performance issue in this method? Does it seem like it's a maintenance headache? Answer: Yes! This looks like a maintenance headache (nightmare). CSS belongs in CSS files (you can apply csslint to CSS files, CSS files also have syntax highlighting, plus most developers rightfully dislike maintaining CSS in JavaScript). If you must support different styles at run-time, you can create styles in your stylesheet for each theme and then assign the right class when themes get switched. To your points: I find this method to be easy to maintain and update to other themes because the underlying samples are there and it's not hard coded in the script. I am not sure what you mean by "not hardcoded", but you are most likely wrong. So I can just create another theme like aqua for example and just change the colors, etc. You can do the same with CSS files. Also I can dynamically change the theme more easily without having to do a page refresh. You can do the same by changing the class dynamically. As for performance, you are using extra bandwidth by sending over all style-sheets, which is to be avoided on mobile.
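A minimal sketch of the class-swapping approach the answer suggests (the helper name and `theme-` prefix are mine; it is written as a pure function over a class list so it is easy to test, but in browser code you would operate on `element.classList`):

```javascript
// Keep each theme's rules in CSS (e.g. ".theme-black button { ... }")
// and switch themes at run time by swapping a single class.
function switchTheme(classes, theme) {
  // Drop any previously applied theme class, then add the new one.
  const kept = classes.filter(c => !c.startsWith('theme-'));
  kept.push('theme-' + theme);
  return kept;
}

// In the browser:
// document.body.className =
//   switchTheme(document.body.className.split(' '), 'aqua').join(' ');
```

No page refresh is needed, and all style rules stay in the stylesheet where csslint and syntax highlighting can see them.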
{ "domain": "codereview.stackexchange", "id": 5744, "tags": "javascript, css" }
Why does capacitance not depend on the thickness of the capacitor plates?
Question: So the formula for capacitance (being $C=\frac{A\,\epsilon_{0}\,\epsilon_{r}}{d}$) shows that the capacitance of a capacitor depends on the surface area of the capacitor plates. As I understand it, this is because if the plates are larger, then for a given potential difference between the plates more electrons can be pushed onto the negative plate by the cell. My question is, then by the same (and I am guessing flawed) logic, why does the thickness of the plates not affect the capacitance of the capacitor? Or, put another way, why is the formula for capacitance not $C=\frac{v\,\epsilon_{0}\,\epsilon_{r}}{d}$, with $v$ being the volume of the capacitor plates? Many thanks. Answer: The fundamental thing about a capacitor is that it stores energy in the electric field. In a parallel plate capacitor with metallic plates, the electric field is strongest (and thus most of the energy is stored) in the space between the plates. The electric field within the plates is (very near to) 0. So it makes sense that the geometry and composition of the gap between the plates is much more important to determining the capacitance than the geometry of the plates. There is such a thing as a coplanar capacitor, where the dimension that's analogous to the plate thickness in the parallel plate capacitor has a strong effect on the capacitance: (image source)
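Plugging example numbers into the formula makes the point concrete (the plate dimensions here are my own illustrative values):

```python
eps0 = 8.854e-12               # vacuum permittivity (F/m)
A, d, eps_r = 0.01, 1e-3, 1.0  # 10 cm x 10 cm plates, 1 mm air gap

# C = A * eps0 * eps_r / d: only the plate area, the gap, and the
# dielectric in the gap appear -- plate thickness and volume never enter.
C = A * eps0 * eps_r / d       # ~8.9e-11 F, i.e. about 89 pF

# Halving the gap doubles the capacitance; thicker plates change nothing.
C_half_gap = A * eps0 * eps_r / (d / 2)
```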
{ "domain": "physics.stackexchange", "id": 46720, "tags": "electric-circuits, electric-fields, capacitance, dielectric" }
Can LSTM have a confidence score for each word predicted?
Question: LSTM networks can be used to generate new text. Given a sequence, I can predict the next word. Is there a way to get a score associated with each predicted word? In particular, if the new word has never been seen by the LSTM network, can we train the LSTM to output a score of "no confidence"? For example, this article gives the following example: For example, let us consider the following three text segments: 1) Sir Ahmed Salman Rushdie is a British Indian novelist and essayist. He is said to combine magical realism with historical fiction. 2) Calvin Harris & HAIM combine their powers for a magical music video. 3) Herbs have enormous magical power, as they hold the earth’s energy within them. Consider an LM that is trained on a dataset having the example sentences given above — given the word “magical”, what should be the most likely next word: realism, music, or power? Say that, in fact, the next word is "power", but my LSTM has never seen that word before. So, the LSTM is going to predict one of the words it has seen, but I would like it to output a low confidence score. Is this possible? Answer: Couldn't you require your softmax output to exceed a threshold for the prediction, or else call it "not confident"? I have not tried word prediction, but the softmax gives you a value for each output, where all of those values sum to 1. So pick the column with the highest value as your predicted word and use that value to determine your confidence. (It may be fairly low, since there might be a small chance of lots of words and then larger chances of your most-common three.)
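A minimal NumPy sketch of the thresholding idea (the function name and threshold value are mine; in practice the logits would come from the LSTM's output layer):

```python
import numpy as np

def predict_with_confidence(logits, vocab, threshold=0.5):
    """Return (word, prob); word is None ('no confidence') when the top
    softmax probability does not clear the threshold."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()                    # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()    # softmax over the vocabulary
    i = int(p.argmax())
    word = vocab[i] if p[i] >= threshold else None
    return word, float(p[i])

vocab = ["realism", "music", "power"]
confident = predict_with_confidence([3.0, 0.1, 0.2], vocab)  # ("realism", ~0.9)
unsure = predict_with_confidence([0.1, 0.0, 0.1], vocab)     # (None, ~0.34)
```

Note this only flags low confidence among known words; a truly out-of-vocabulary next word needs an explicit unknown token or a separately calibrated model.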
{ "domain": "datascience.stackexchange", "id": 2848, "tags": "neural-network, nlp, lstm, prediction" }
Is this learning rate schedule increasing the learning rate?
Question: I was reading some PyTorch code when I saw this learning rate scheduler: def warmup_lr_scheduler(optimizer, warmup_iters, warmup_factor): """ Learning rate scheduler :param optimizer: :param warmup_iters: :param warmup_factor: :return: """ def f(x): if x >= warmup_iters: return 1 alpha = float(x) / warmup_iters return warmup_factor * (1 - alpha) + alpha return torch.optim.lr_scheduler.LambdaLR(optimizer, f) and this is where the function is called: if epoch == 0: warmup_factor = 1. / 1000 warmup_iters = min(1000, len(data_loader) - 1) lr_scheduler = utils.warmup_lr_scheduler(optimizer, warmup_iters, warmup_factor) As I understand it, this gradually increases the learning rate until it reaches the initial learning rate. Am I correct? Why do we need to increase the learning rate? As far as I know, for better learning in neural networks we decrease the learning rate. Answer: The higher (or smaller) the learning rate, the higher (or, respectively, smaller) the contribution of the gradient of the objective function, with respect to the parameters of the model, to the new parameters of the model. Therefore, if you progressively increase (or decrease) the learning rate, then you will accelerate (or, respectively, slow down) the learning process, so later training examples have higher (or, respectively, smaller) influence on the parameters of the model. In your example, the function warmup_lr_scheduler returns an object of class LambdaLR, initialized with a certain optimizer and the function f, which is defined as def f(x): if x >= warmup_iters: return 1 alpha = float(x) / warmup_iters return warmup_factor * (1 - alpha) + alpha The documentation of torch.optim.lr_scheduler.LambdaLR says that the function f should compute a multiplicative factor given an integer parameter epoch, so x is a training epoch. If the epoch x is greater than or equal to warmup_iters, then 1 is returned, but anything multiplied by 1 is itself, so, when the epoch x is greater than or equal to a threshold, warmup_iters (e.g. 
1000), then the initial learning rate is unaffected. However, when x < warmup_iters, the multiplicative factor is given by alpha = float(x) / warmup_iters warmup_factor * (1 - alpha) + alpha which is a function of the epoch x. The higher the epoch x, the higher the value of alpha, so the smaller (1 - alpha) and warmup_factor * (1 - alpha). Note that float(x) / warmup_iters will never be greater than 1 because x is never greater than warmup_iters. So, effectively, as the epoch increases, warmup_factor * (1 - alpha) tends to 0 and alpha tends to 1. The learning rate can only increase if you multiply it by a constant greater than 1. However, this can only happen if warmup_factor > 1. You can verify this by solving the inequality warmup_factor * (1 - alpha) + alpha > 1. To conclude, the initial learning rate is not being increased, but the learning process starts with a smaller learning rate than the given learning rate, for warmup_iters epochs; then, after warmup_iters epochs, it uses the initially given learning rate (e.g. 0.002).
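Evaluating f with the values used in the training loop (warmup_factor = 1/1000, warmup_iters = 1000) confirms this behaviour:

```python
warmup_iters, warmup_factor = 1000, 1.0 / 1000

def f(x):
    """The multiplicative warmup factor from the scheduler above."""
    if x >= warmup_iters:
        return 1
    alpha = float(x) / warmup_iters
    return warmup_factor * (1 - alpha) + alpha

start  = f(0)     # 0.001: training starts at 1/1000 of the base LR
middle = f(500)   # 0.5005: halfway through warmup
end    = f(1000)  # 1: the base learning rate from here on
```

The factor rises monotonically from warmup_factor to 1 and never exceeds 1, so the base learning rate is an upper bound, not something being surpassed.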
{ "domain": "ai.stackexchange", "id": 1429, "tags": "neural-networks, machine-learning, deep-learning, pytorch, learning-rate" }
Why do we talk of the "weakness of gravity" rather than "the surprising charge to mass ratio of particles"?
Question: The relative strength of gravity and electromagnetic forces is obvious — stand on a sheet of paper, and even with the whole of Earth pulling, your motion is stopped by the electric fields inside that sheet of paper. Great. This is often phrased as "gravity is the weakest of all forces" or some variant thereof. This seems equivalent to saying "the mass-to-charge ratio of fundamental particles is such that charge dominates". My problem with phrasing it as "gravity is the weakest" is that different particles have different mass-to-charge ratios: according to this chart on Wikipedia, mass-to-charge ratios vary by over 6 orders of magnitude, even if you exclude massive neutral particles like the neutrinos where gravity is infinitely stronger. With such a wide range of mass-to-charge ratios, why is the question usually phrased as being about the strength of forces? Why invoke extra dimensions for gravity to leak into (for example) when one also needs to explain an extra factor of -5.588×10^6 between top quarks and electrons? (I'm assuming that infinities would get explained away differently, but perhaps not?) (Hope this isn't a duplicate, my searching mainly showed a lot of "why is gravity weak?" type questions, which isn't what I'm curious about — I want to know why phrasing it like that is seen as the best way of thinking about the problem). Answer: Comparing forces for particles brings us to the quantum mechanical framework, the underlying framework of nature. Forces, quantum mechanically, are represented as exchanges of particles in Feynman diagrams, and the probability of the interaction happening follows mathematically from that. For example here is the Feynman diagram for same charge repulsion: A physics student at graduate level follows, from this diagram, a recipe that leads to a computable integral, and the calculation will show the repulsion of like charges. 
The strength of the interaction enters at the two vertices of the diagram and is given by the coupling constant. Here are the coupling constants of the rest of the interactions. It is in this sense that gravity is compared as the weakest of all forces. The diagram of the two electrons could represent the exchange of a graviton, but when compared to the electromagnetic exchange of a photon, the probability of the gravitational interaction is tiny: in the integral the coupling constants enter multiplicatively, and the calculational recipe raises them to the fourth power. This is the framework where all known forces are compared. Of course gravity is not yet quantized rigorously, only effective theories exist, but it is within this theory that the statement of gravity being weakest is clearly evident.
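For a concrete sense of scale (a standard back-of-envelope comparison, not taken from the answer): the ratio of the gravitational to the electrostatic force between two electrons is independent of their separation, since both forces fall off as $1/r^2$:

```python
G  = 6.674e-11    # gravitational constant (m^3 kg^-1 s^-2)
k  = 8.988e9      # Coulomb constant (N m^2 C^-2)
me = 9.109e-31    # electron mass (kg)
e  = 1.602e-19    # elementary charge (C)

# F_grav / F_coulomb for two electrons; r cancels in the ratio.
ratio = (G * me**2) / (k * e**2)   # ~2.4e-43
```

Repeating this with protons or top quarks gives a different (much larger) ratio, which is exactly the mass-to-charge dependence the question points out; the coupling-constant comparison is what removes the particle dependence.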
{ "domain": "physics.stackexchange", "id": 27095, "tags": "electromagnetism, forces, gravity" }
Do humans produce rennin?
Question: At school, we've been taught that human infants produce rennin/chymosin (which aids in the digestion of milk). More specifically, it is the peptic cells in the stomach which secrete prorennin, the inactive form of rennin (in addition to pepsinogen, the pepsin proenzyme). User @another'homosapien's answer here also seems to concur with this (excellent answer by the way, I enjoyed reading it). However, according to Mod. @AliceD's answer here (yet another excellent answer): ...in humans there is only a chymosin pseudogene present... Which (probably?) implies that humans (infant or otherwise) do not produce rennin. I managed to get my hands on the Textbook of Medical Physiology (Guyton and Hall, South-Asian edition), and according to the book (Chapter Gastric secretions, page 406) peptic cells produce a large quantity of pepsinogen. There is, however, no mention of prorennin. I even flipped over to the Appendix at the back to look up "Rennin", but it turns out there is absolutely no mention of rennin in the book. My questions: Some sources claim that rennin is produced in human infants. Is this true? Other sources claim that rennin is not produced in humans (we have a pseudogene for it though). Is this correct (I mean the "rennin-is-not-produced" bit, not the "pseudogene" bit)? If rennin is produced in humans only during infancy, what stops it from being produced as we mature? (I'm asking this because every source I've seen that claims that rennin is produced in humans explicitly states that it is so during infancy... which would suggest that rennin is not produced in adults) Answer: Scanning various reviews, it seems that everyone who mentions the possibility of a human chymosin refers to a single paper. So for example this 2014 review has a single reference to a human chymosin: Henschel et al. detected a protease in the gastric aspirates of newborn infants within 6–10 h postpartum that was not pepsin [62]. 
The electrophoretic mobility and immunoreactivity are similar to that of calf chymosin, a protease that cleaves κ-casein and causes casein curdling. This protease is unique in that it disappears from gastric fluid at 10 days postpartum and is not found in adult gastric fluid. I don't have access to the Henschel et al. paper (from 1987) but here is the abstract: The electrophoretic mobilities of proteases present in gastric juice taken within 10 h of birth from 5 healthy, premature infants were compared with calf chymosin, pig pepsin A and human adult gastric juice. The juice from 2 infants contained predominantly a chymosin-like enzyme, another had almost exclusively pepsins similar to those of the adult juice, while the other two contained a mixture of both. The pepsins consisted of two elements, probably pepsin A (EC 3.4.23.1), and pepsin C (EC 3.4.23.3). Single radial immunodiffusion gave a definite reaction to calf anti-chymosin serum in five samples taken from a further 17 infants. These results indicate that some human infants secrete chymosin. The reaction in the immunodiffusion assay indicated a much lower enzyme activity than that implied from electrophoretic separations. It is suggested that species differences resulted in poor cross-reactivity of the antiserum. Now, obviously, without seeing the data it isn't possible to be conclusively critical, but the quality of the evidence seems to be rather weak, being based upon similar electrophoretic mobilities and an immunodiffusion assay (why not a Western blot?) with an overt apology for weak cross-reactivity of the antiserum used. However, leaving all of that to one side, the strangest thing about this is the sporadic appearance of the proposed chymosin: although 4/5 were scored as chymosin positive in the 1st experiment, is it likely that 2 of these would apparently contain little pepsin? And in the second experiment (the immunoassay) only 5/17 scored positive for chymosin. 
Evidently no-one has ever reproduced this result, and the evidence for the pseudogene (but no active gene) is very strong. I vote that humans do not produce a chymosin. Update Having read Bryan Krause's answer: if the human gene product was lacking an exon's worth of amino acid sequence then presumably it wouldn't have an electrophoretic mobility that was closely similar to the calf protein.
{ "domain": "biology.stackexchange", "id": 8526, "tags": "human-biology, human-physiology, digestive-system, stomach" }
How does this swap instruction achieve mutual exclusion, progress and bounded waiting (if it does)?
Question: do { key=true; while(key==true) { swap(&lock,&key); } critical section lock=false; Remainder Section }while(true); void swap(boolean *a, boolean *b) { boolean temp=*a; *a=*b; *b=temp; } How do I prove whether it satisfies mutual exclusion, progress and bounded waiting? The swap is built into hardware and is atomic. Mutual Exclusion Definition: No two cooperating processes can enter into their critical section at the same time. For example, if a process P1 is executing in its critical section, no other cooperating process can enter the critical section until P1 finishes with it. Progress Definition: If no process is in its CS and there are one or more processes that wish to enter their CS, this selection cannot be postponed indefinitely. No process in the remainder section can participate in this decision. Bounded Waiting Definition: After a process P has made a request to enter its CS, there is a limit on the number of times that the other processes are allowed to enter their CS before P's request is granted. Tests Mutual Exclusion is achieved in my opinion. P0 tries to enter CS. Sets key=true. Swaps lock and key; now key=false and lock=true (LOCKED). Since key=false, it gets to CS. Say a context switch occurs and P1 wants to enter CS. Sets key=true. Swaps lock and key. Both are true as CS is locked at the moment. So, mutual exclusion is achieved. Progress is achieved in my opinion. Say P0 is in the remainder section. lock=false. P1 wants to enter CS. key=true. Swap lock with key: lock=true and key=false. Loop breaks and P1 enters CS. Bounded waiting is not achieved in my opinion. P0 exits CS. lock=false. P0 wants to enter CS again. It can enter forever. Correct me if I'm wrong. And please share better ways to prove it if you have them. These are some tough concepts of computer science and it'd be great if we could make this even 1% easier for future learners. Answer: Mutual exclusion: the while loop is only executed with key=true so it can only set lock=true.
As the swap is atomic, only one process can get a false, which is immediately "consumed". Only at the end of the critical section is the false restored. That false acts as a unique token. Progress: if lock=false, the first process that performs a swap will enter. Bounded waiting: the other processes can indeed lock out the given process by always taking the token first when it is available.
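The token argument can be made concrete; here is a minimal C++ sketch in which std::atomic&lt;bool&gt;::exchange plays the role of the hardware swap instruction (the function names are my own, not from the question):

```cpp
#include <atomic>
#include <cassert>

// false = unlocked; the single false value is the unique "token".
std::atomic<bool> lock_flag{false};

// swap(&lock,&key) with key=true is an atomic exchange: store true into
// lock_flag and get its previous value back in one indivisible step.
bool try_acquire() {
    return lock_flag.exchange(true) == false; // true iff we took the token
}

void acquire() {
    while (!try_acquire()) { /* spin: every further swap just sees true */ }
}

void release() {
    lock_flag.store(false); // lock=false: restore the token
}
```

Between two calls to release(), exactly one caller can see exchange return false, which is precisely the mutual-exclusion token the answer describes; the bounded-waiting failure is visible too, since nothing stops the same caller from immediately re-taking the token.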
{ "domain": "cs.stackexchange", "id": 21376, "tags": "operating-systems" }
Relation between conformal and topological field theories
Question: The Chern-Simons (CS) theory is a topological quantum field theory (TQFT). The question is, is a conformal field theory (CFT) a topological quantum theory? Or the reverse, topological quantum field theory is a CFT? What is a conformal field theory (CFT)? Answer: A conformal transformation is one which alters the metric up to a factor, i.e. $$g_{\mu\nu}(x)\to\Omega^2(x)g_{\mu\nu}(x)$$ A field theory described by a Lagrangian invariant up to a total derivative under a conformal transformation is said to be a conformal field theory. These transformations include Scaling or dilations $x^\mu \to \lambda x^\mu$ Rotations $x^\mu \to M^\mu_\nu x^\nu$ Translations $x^\mu \to x^\mu + c^\mu$ In addition to these, the conformal group includes a set of special conformal transformations given by, $$x^\mu \to \frac{x^\mu-b^\mu x^2}{1-2b \cdot x + b^2 x^2}$$ If you compute the generators of the conformal transformations, and the algebra they satisfy, with some manipulation it may be shown there is an isomorphism between the conformal group in $d$ dimensions and the group $SO(d+1,1)$. In two dimensions, the conformal group is rather special; it is simply the group of all analytic maps; this set is infinite-dimensional since one requires an infinite number of parameters to specify all functions analytic in some neighborhood. The global variety of conformal transformations, i.e. those which are not functions of the coordinates but constants, in $d=2$ are equivalent to $SL(2,\mathbb{C})$. On the other hand, a topological field theory is one which is invariant under all transformations which do not alter the topology of spacetime, e.g. they may not puncture it and increase the genus. The correlation functions do not depend on the metric, and are in fact topological invariants. Hence, a topological field theory is invariant under conformal transformations by the fact that it does not even depend on the metric. 
However, not all conformal field theories are topological field theories.
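A quick consistency check on the claimed isomorphism with $SO(d+1,1)$ (for $d>2$) is to count generators; the total matches the dimension of $SO(d+1,1)$:

```latex
\underbrace{d}_{\text{translations}}
+\underbrace{\tfrac{d(d-1)}{2}}_{\text{rotations}}
+\underbrace{1}_{\text{dilations}}
+\underbrace{d}_{\text{special conformal}}
=\frac{(d+1)(d+2)}{2}
=\dim SO(d+1,1).
```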
{ "domain": "physics.stackexchange", "id": 17076, "tags": "conformal-field-theory, topological-field-theory, chern-simons-theory" }
C program circular array rotation
Question: I'm trying to solve a Hackerrank problem where you perform k array[n] rotations, and at the end, ask for m and print array[m]. My program is probably correct (it runs perfectly for most of tests, but some of them terminate due to timeout), but is inefficient and I don't know how to improve it. #include <math.h> #include <stdio.h> #include <string.h> #include <stdlib.h> #include <assert.h> #include <limits.h> #include <stdbool.h> int main(){ int n; int k; int q; scanf("%d %d %d",&n,&k,&q); int *a = malloc(sizeof(int) * n); int *b = malloc(sizeof(int) * n); for(int a_i = 0; a_i < n; a_i++){ scanf("%d",&a[a_i]); } for(int i = 0; i < k; i++){ b[0] = a[n-1]; for(int a_i = 1; a_i < n; a_i++){ b[a_i] = a[a_i-1]; } for(int a_i = 0; a_i < n; a_i++) a[a_i] = b[a_i]; } for(int a0 = 0; a0 < q; a0++){ int m; scanf("%d",&m); printf("%d\n", b[m]); } return 0; } Answer: Do not do actual rotation. k rotations (from your code I understand that all rotations are to the right by 1) map an i'th element to an (i + k) % n position. At the time of printing, do some math to figure out which element was mapped to the position m. The math is trivial, and I intentionally do not spell it out.
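The arithmetic the answer leaves as an exercise can be sketched as a constant-time lookup (a hypothetical helper, not part of the original submission): after k right-rotations, the element printed for a query m is the one that started at index (m - k) mod n.

```cpp
#include <cassert>
#include <vector>

// Element at original index i ends up at (i + k) % n after k right-rotations,
// so the element now at position m started at (m - k) mod n. The "+ n"
// keeps the C++ remainder non-negative.
int rotated_at(const std::vector<int>& a, long long k, int m) {
    const int n = static_cast<int>(a.size());
    const int src = static_cast<int>((((m - k) % n) + n) % n);
    return a[src];
}
```

With this, the k rotation loops in the question disappear entirely and each query is O(1).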
{ "domain": "codereview.stackexchange", "id": 22759, "tags": "c, programming-challenge, array, time-limit-exceeded" }
MVVM pattern in Xamarin login form
Question: Please let me know if the MVVM pattern I followed in this is correct or not. Here is the code for ViewModel public class LoginViewModel { public string Email { get; set; } = ""; public string Password { get; set; } = ""; public ProgressBar ProgressBar { get; set; } public Label StatusLabel { get; set; } public Command LoginCommand { get { return new Command(async()=> { ProgressBar.IsVisible = true; ProgressBar.Progress = 0; await ProgressBar.ProgressTo(0.2, 250, Easing.BounceOut); if(Email.Length>0 && Password.Length > 0) { ApiServices apiServices = new ApiServices(); var isSuccess = await apiServices.LoginAsync(Email, Password); if (isSuccess) { await ProgressBar.ProgressTo(1, 250, Easing.Linear); Application.Current.MainPage = new FirstMasterDetailPage(); } else { await ProgressBar.ProgressTo(1, 250, Easing.Linear); StatusLabel.IsVisible = true; StatusLabel.Text = "Failed to login."; } } else { await ProgressBar.ProgressTo(1, 250, Easing.Linear); StatusLabel.IsVisible = true; StatusLabel.Text = "Username and Password is required to login."; } ProgressBar.IsVisible = false; }); } } public Command RegisterCommand { get { return new Command(()=> { Application.Current.MainPage = new RegisterPage(); }); } } } this is code behind page public LoginPage () { InitializeComponent (); var current = BindingContext as LoginViewModel; current.ProgressBar = progressBar; current.StatusLabel = lblStatus; } Answer: No, it's not correct. View-model shouldn't contain anything related to view. It means LoginViewModel shouldn't contain references on progress bar and label. ProgressBar and Label properties inside a view-model violate MVVM. Any changes in UI elements should be done via bindings specified in XAML. Those bindings have to be linked to properties of a view-model. Changing these properties, UI get updated. To make bindings work you should notify them about changes in view-model. 
For that purpose you need to implement the INotifyPropertyChanged interface on the view-model and raise the PropertyChanged event every time a property changes. Let's see an example of how you can remove ProgressBar from the LoginViewModel. First of all, the view-model should implement the INotifyPropertyChanged interface, which contains the PropertyChanged event: public class LoginViewModel : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; } Then remove the ProgressBar property and add LoginInProgress: private bool _loginInProgress; public bool LoginInProgress { get => _loginInProgress; set { if (value == _loginInProgress) return; _loginInProgress = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(LoginInProgress))); } } Now in the LoginCommand body, instead of ProgressBar.IsVisible = true; write LoginInProgress = true; And instead of ProgressBar.IsVisible = false; write LoginInProgress = false; Then in XAML bind IsVisible of the progressBar to this property (in Xamarin.Forms IsVisible is a plain boolean bindable property, so no value converter is needed): <ProgressBar x:Name="progressBar" IsVisible="{Binding LoginInProgress, Mode=OneWay}" .../> And that's all. You need to eliminate all the view-related stuff from the view-model in this way. That is how MVVM has to be implemented.
{ "domain": "codereview.stackexchange", "id": 27604, "tags": "c#, authentication, mvvm, xaml, xamarin" }
Creating an optimized, fully functional TextureManager in SDL
Question: After I've discussed pointer semantics with Loki Astari previously, I finally managed to code the TextureManager class using auto_ptrs. But the problem is that I need to pass an SDL_Renderer object to my class' constructor as an auto_ptr. But the object I want to pass is a raw pointer. How should I do it? Also, I'm not so sure about the auto_ptrs I've written. Can you help me, please? TextureManager.h #pragma once //using visual C++ #include <unordered_map> #include "Texture.h" #include <memory> /** This class helps the user to manage large amounts of textures at the same time (disposing them, setting their color, their renderer, ...). The dispose method of this class should be called before the end of the program (there will be no issues if it is not called: the destructor of the TextureManager class will call the destructor of the @Texture class! But it is better to call it, since it will release all resources when you want!). This class contains a dynamic array, so the user can add @Texture pointers throughout the program's evolution. */ typedef std::unordered_map<int, Texture*> atlasType;//cannot use auto_ptr in STL container!
class TextureManager { public: TextureManager(std::auto_ptr<SDL_Renderer> pRenderer); ~TextureManager(); void setRenderer(std::auto_ptr<SDL_Renderer> pRenderer); bool LoadFromFile(unsigned int ID, const char* fileName); bool unloadTexture(unsigned int ID); void dispose(); Texture& getTexture(unsigned int ID); Texture getTextureCopy(unsigned int ID); private: std::auto_ptr<SDL_Renderer> m_pRenderer; atlasType m_textureAtlas; }; TextureManager.cpp #include "TextureManager.h" TextureManager::TextureManager(std::auto_ptr<SDL_Renderer> pRenderer) : m_pRenderer(pRenderer) {} TextureManager::~TextureManager(){ dispose(); } void TextureManager::dispose(){ for (atlasType::iterator it = m_textureAtlas.begin(); it != m_textureAtlas.end(); ++it){ delete (it->second); } m_textureAtlas.clear(); } bool TextureManager::unloadTexture(unsigned int ID){ Texture *pTempTexture = m_textureAtlas[ID]; if (!pTempTexture) return false; delete pTempTexture;//dispose texture pTempTexture = nullptr; m_textureAtlas.erase(ID); return true; } Texture& TextureManager::getTexture(unsigned int ID){ return *(m_textureAtlas[ID]); } Texture TextureManager::getTextureCopy(unsigned int ID){ return *(m_textureAtlas[ID]); } bool TextureManager::LoadFromFile(unsigned int ID, const char* fileName){ Texture *pTexture = m_textureAtlas[ID]; if (pTexture == nullptr){ pTexture = new Texture(); pTexture->setRenderer(m_pRenderer.get());//Texture uses raw pointer; m_textureAtlas[ID] = pTexture;//will be deleted in dispose; } if (!pTexture->LoadFromFile(fileName)) return false; return true; } void TextureManager::setRenderer(std::auto_ptr<SDL_Renderer> pRenderer){ m_pRenderer = pRenderer; } Here's where I can not call the constructor (raw pointer <-> auto_ptr): /* ......Declarations........... 
*/ SDL_Renderer * m_pRenderer = nullptr; /* etc..........*/ m_pRenderer = SDL_CreateRenderer(m_pWindow, -1, 0); /*Trying to create TextureManager*/ m_textureManager = TextureManager(m_pRenderer);//fails to compile: is not of type auto_ptr<SDL_Renderer>! What shall I do? Answer: To make your wrapper usable as a renderer: class CWindowWrap { public: operator SDL_Window*() {return window;} .... }; // Now when you have a CWindowWrap you can use it anywhere an // SDL_Window* can be used. The compiler sees there is an // automatic conversion from CWindowWrap to a SDL_Window* and // calls this function to do the conversion.
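A further point on the ownership question itself: std::auto_ptr would release the SDL_Renderer with plain delete, while SDL resources must be released with SDL_DestroyRenderer, and auto_ptr is deprecated/removed in modern C++ anyway. The usual modern fix is a std::unique_ptr with a custom deleter. Below is a sketch; since SDL itself is not available in this snippet, create_renderer/destroy_renderer are stand-ins for SDL_CreateRenderer/SDL_DestroyRenderer, and TextureManagerSketch is a pared-down stand-in for the class in the question:

```cpp
#include <cassert>
#include <memory>

// Stand-ins for the SDL C API: in real code these would be SDL_Renderer,
// SDL_CreateRenderer and SDL_DestroyRenderer from <SDL.h>.
struct Renderer { bool alive = true; };
Renderer* create_renderer()             { return new Renderer{}; }
void      destroy_renderer(Renderer* r) { delete r; }

// A deleter that releases the resource through the correct C destroy
// function instead of plain delete (which is what auto_ptr would call).
struct RendererDeleter {
    void operator()(Renderer* r) const { if (r) destroy_renderer(r); }
};
using RendererPtr = std::unique_ptr<Renderer, RendererDeleter>;

RendererPtr make_renderer() { return RendererPtr(create_renderer()); }

// A pared-down manager: it observes the renderer through a raw pointer and
// never owns it; ownership stays with whoever holds the RendererPtr.
class TextureManagerSketch {
public:
    explicit TextureManagerSketch(Renderer* renderer) : m_renderer(renderer) {}
    Renderer* renderer() const { return m_renderer; }
private:
    Renderer* m_renderer; // non-owning
};
```

This sidesteps the raw-pointer/auto_ptr mismatch in the question: the constructor takes a plain pointer, and the RendererPtr guarantees the destroy function runs exactly once.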
{ "domain": "codereview.stackexchange", "id": 13260, "tags": "c++, memory-management, sdl" }
When is tension constant in a rope?
Question: Consider a massless rope with pulling forces applied at each end. How do we decide if tension is constant or not in a rope? Consider a few example scenarios: For example, if there is a knot in the rope the tension is not constant throughout (why?). Similarly, if the rope is hung over a cylindrical pulley of a non-negligible radius, the tension is not constant (why?). If there isn't anything touching the rope, for example, 2 people tugging on the rope at each end, the tension is constant (why?). Answer: In a massless rope, tension is constant unless a force is applied somewhere along the rope. Why? Because any differential tension would travel at infinite velocity (since the speed of a wave scales inversely with the square root of mass per unit length, and the rope is massless). The only way to preserve a difference is therefore applying a force along the rope (for example, running the rope over a pulley with friction), or putting some mass at a point along the rope and accelerating that mass (because a net force is needed to accelerate the mass). When there is a knot in the rope, there will be friction between parts of the rope and that allows there to be different tension in different parts of the rope; but running the rope over a pulley does not imply that there is differential tension, unless the pulley is massive and accelerating, or unless there is friction. If you accept that the rope has finite diameter, then bending it in a curve may result in differential stresses along the diameter of the rope (the outside, being stretched more, would be under greater tension) but that depends on an assumption that the rope is solid and of finite size; when ropes are made out of twisted (or woven) filaments, these filaments can slide so as to maintain equal tension in all of them when the rope is bent. This is in fact a key reason for this construction (the other is that this ensures much greater flexibility - the two things go hand in hand).
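The "infinite velocity" step in the answer is just the transverse wave speed on a string of tension $T$ and linear mass density $\mu$:

```latex
v=\sqrt{\frac{T}{\mu}}\longrightarrow\infty\quad\text{as }\mu\to 0,
```

so in the massless limit any tension imbalance propagates away instantaneously unless a force applied along the rope sustains it.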
{ "domain": "physics.stackexchange", "id": 30265, "tags": "homework-and-exercises, newtonian-mechanics, forces, free-body-diagram, string" }
Why are my Soft Actor-Critic's policy and value function losses not converging?
Question: I'm trying to implement a soft actor-critic algorithm for financial data (stock prices), but I have trouble with the losses: no matter what combination of hyper-parameters I enter, they are not converging, and basically it causes bad reward returns as well. It seems like the agent is not learning at all. I already tried to tune some hyperparameters (learning rate for each network + number of hidden layers), but I always get similar results. The two plots below represent the losses of my policy and one of the value functions during the last episode of training. My question is, would it be related to the data itself (nature of the data) or is it something related to the logic of the code? Answer: I would say it is the nature of the data. Generally speaking, you are trying to predict a random sequence, especially if you use the history data as an input and try to get the future value as an output.
{ "domain": "ai.stackexchange", "id": 2081, "tags": "reinforcement-learning, tensorflow, actor-critic-methods, hyperparameter-optimization, soft-actor-critic" }
Feedback control of two-link planar manipulator
Question: TL;DR: how can I calculate the disturbance at each joint due to coupling forces in a two-link planar robot manipulator actuated by two independent DC motors? I'm studying control theory and trying to work through a simple example using a two-link planar robot manipulator: My goal is to simulate PID control of the planar two-link manipulator, where each joint is actuated by an independent DC motor. The input is two continuous signals $$\theta_1(t), \theta_2(t)$$ which represent the desired angle for each joint at time t. Following this paper: Modeling a Controller for an Articulated Robotic Arm, I can obtain an expression for the voltage I need to apply to each motor to achieve a desired angle. The paper also describes the PID controller necessary to maintain the desired angle given an error signal $$e(t) = \theta_{desired}(t)-\theta_{actual}(t)$$ Since each motor is controlled independently, coupling effects among joints due to varying configurations during motion are treated as disturbance inputs. My question is: how can I model this coupling effect in order to "simulate" the error signal e(t) for system under ideal conditions? By ideal conditions I mean that the disturbance due to coupling among joints accounts for 100% of the error signal. My current thought is to use an expression for the dynamics of the two-link planar manipulator, as per the following book: A Mathematical Introduction to Robotic Manipulation This way, at time t, we determine the voltage to achieve the desired angles for each independent motor, then plug that into our DC motor model to obtain the generated torques: $$\tau_1, \tau_2$$ We then plug these torques into the dynamics to get the actual angles, taking into account the coupling forces, and then use the actual angles compared to the desired angles in order to generate the error signal that feeds into the PID control loop. Does this approach make sense? 
If not, where have I gone wrong and how can I simulate the error signal due to coupling forces? Edit: one commenter points out PID control may not be optimal for this problem. If this is the case, what alternative control strategies should I use? Answer: Convert the nonlinear model to state-space form, $x'=f(x,u)$, and linearize it to get a linear state-space or transfer function representation. You can use this to design a PID controller and simulate it together with the original nonlinear model to see how the controller performs. As noted, the performance of the linear controller will most likely deteriorate far from the operating point. You can design a tracking controller based on feedback linearization. This will give you the control action based on the current configuration and desired trajectory. One downside is that it will not be that robust if there are large variations in the parameters (masses, lengths). The theory for this is in the book by Isidori (link). All this will be tedious algebraic manipulation. You can find several worked examples of these computations being performed using Mathematica. (link, link, link) Update: Here is my attempt with some arbitrary numerical values and sinusoidal reference inputs. I have also made the assumption that the masses are concentrated at the end of the links. (If you also want to include the inertia and assume that the mass is uniformly distributed, just update the Lagrangian suitably.)
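The forward-dynamics step described in the question can be sketched as follows, using one conventional form of the point-mass two-link dynamics $M(q)\ddot q + C(q,\dot q) + G(q) = \tau$ with angles measured from the horizontal (all parameter values here are illustrative, not from the question):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// State: joint angles q1,q2 (from horizontal) and joint velocities.
struct State { double q1, q2, dq1, dq2; };
constexpr double m1 = 1.0, m2 = 1.0, l1 = 1.0, l2 = 1.0, g = 9.81;

// Forward dynamics: joint torques in, joint accelerations out.
std::array<double, 2> accel(const State& s, double tau1, double tau2) {
    const double c2 = std::cos(s.q2), s2 = std::sin(s.q2);
    // Inertia matrix (masses concentrated at the link ends).
    const double M11 = (m1 + m2) * l1 * l1 + m2 * l2 * l2 + 2 * m2 * l1 * l2 * c2;
    const double M12 = m2 * l2 * l2 + m2 * l1 * l2 * c2;
    const double M22 = m2 * l2 * l2;
    // Coriolis/centrifugal and gravity terms: the configuration-dependent
    // coupling that each independent joint controller sees as a disturbance.
    const double C1 = -m2 * l1 * l2 * s2 * (2 * s.dq1 * s.dq2 + s.dq2 * s.dq2);
    const double C2 =  m2 * l1 * l2 * s2 * s.dq1 * s.dq1;
    const double G1 = (m1 + m2) * g * l1 * std::cos(s.q1)
                    + m2 * g * l2 * std::cos(s.q1 + s.q2);
    const double G2 = m2 * g * l2 * std::cos(s.q1 + s.q2);
    const double b1 = tau1 - C1 - G1, b2 = tau2 - C2 - G2;
    const double det = M11 * M22 - M12 * M12; // invert the 2x2 inertia matrix
    return { (M22 * b1 - M12 * b2) / det, (M11 * b2 - M12 * b1) / det };
}
```

Feeding the motor-model torques through accel and integrating the result gives the "actual" angles, and hence the coupling-induced error signal e(t) for the PID loop.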
{ "domain": "engineering.stackexchange", "id": 960, "tags": "control-engineering, control-theory, dynamics, kinematics" }
What does Friedrichs mean by "Myriotic fields"?
Question: I came across K. O. Friedrichs' very old book (1953) "Mathematical Apsects of the Quantum Theory of Fields", and hardly any of it makes sense to me. One of the strange things that he refers to are "Myriotic" fields. What are these? Is there a modern account of what Friedrichs is talking about? Answer: This is a pretty imprecise answer, because I haven't heard the term in a long time. I think (someone please correct me if this is wrong) a "myriotic" field is one in which there are an infinite number of quanta, so you don't have a well-defined notion of number operator or vacuum state.
{ "domain": "physics.stackexchange", "id": 5441, "tags": "quantum-field-theory, mathematical-physics" }
Why does Taq polymerase add 3' adenine overhangs?
Question: Is there a mechanism for the preference of Taq polymerase to add a non-templated 3' adenine (overhang) instead of other bases? Answer: Non-templated terminal addition by certain DNA polymerases is apparently dependent on the base stacking between the incoming base and the existing duplex (Fiala et al., 2007*). This was verified by using a non-base-pairing (but better stacking) nucleotide analogue called deoxyribo pyrene nucleoside triphosphate (dPTP). Adenine seems to have a higher efficiency of base stacking compared to the other three bases: A (1.0), G (0.7), T (0.6), and C (0.5). It was also verified that the preference for dATP did not depend much on the terminal base-pair of the duplex, with the incorporation probabilities being: 79% dATP, 16% dTTP, 3% dCTP, and 2% dGTP when the last base pair is A-T (A is 3'), and 81% dATP, 8% dTTP, 8% dCTP, and 2% dGTP when the last base pair is G-C. Fiala, Kevin A., et al. "Mechanism of template-independent nucleotide incorporation catalyzed by a template-dependent DNA polymerase." Journal of Molecular Biology 365.3 (2007): 590-602. *They had used the Dpo4 polymerase from Sulfolobus solfataricus as a model because it is a known lesion bypass polymerase and its kinetics and structure are well studied.
{ "domain": "biology.stackexchange", "id": 5293, "tags": "molecular-biology, dna, pcr, polymerase" }
Intuitive explanation why rate of energy transfer depends on difference in energy between two materials?
Question: The temperature of an object will decrease faster if the difference in temperature between the object and its surroundings is greater. What is the intuitive explanation for this? Answer: I think it'll be helpful to think in terms of the kinetics of the constituent particles. When an object is at a higher temperature, the kinetic energy of its constituents is higher. They are more in motion when compared to the ones with a lower temperature. Now if there's a high temperature object in contact with a lower temperature one, there will be transfer of kinetic energy at the interface. Energy will be transferred in both directions except that the net transfer will be from higher to lower. This is because there are more ways for energy to be transferred from the higher to the lower. But as the temperature difference decreases, the rates of transfer from high to low and from low to high close in on each other, until they reach equilibrium, where the transfer of energy from either side is equal. Thus there will be no more net heat transfer on the average.
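The macroscopic counterpart of this microscopic picture is Newton's law of cooling, where the net transfer rate is proportional to the remaining temperature difference, giving exponential relaxation toward equilibrium:

```latex
\frac{dT}{dt}=-k\,(T-T_{\text{env}})
\quad\Longrightarrow\quad
T(t)=T_{\text{env}}+(T_0-T_{\text{env}})\,e^{-kt}.
```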
{ "domain": "physics.stackexchange", "id": 65478, "tags": "thermodynamics, energy, temperature, cooling" }
I want to know how to solve the following question from the ground frame
Question: Particle slides down a smooth inclined plane of elevation theta, fixed in an elevator going up with an acceleration a. The base of the incline has a length L. Find the time taken by the particle to reach the bottom (solve from the inertial frame)? I know how to solve the question from the frame of reference of the particle but I want to know how to solve it from the frame of an observer standing outside the elevator. Answer: In the ground frame the only real forces on the particle are gravity $mg$ acting downward and the normal force $N$ perpendicular to the incline. Write Newton's second law with these two forces and impose the constraint that the particle stays on the incline: its acceleration is the incline's upward acceleration $a$ plus an acceleration relative to the incline directed along the slope. Eliminating $N$ from the two component equations gives a relative acceleration of $(g+a)\sin\theta$ along the incline, and covering the slope length $L/\cos\theta$ from rest then takes $$t=\sqrt{\frac{2L}{(g+a)\sin\theta\cos\theta}},$$ the same result as the elevator-frame analysis with pseudo-force $ma$.
{ "domain": "physics.stackexchange", "id": 57905, "tags": "homework-and-exercises, newtonian-mechanics, reference-frames" }
Why is there a difference between work done and elastic potential energy above the elastic limit?
Question: When a spring is inelastically deformed, the work done is no longer equal to the elastic potential energy stored in the spring – what accounts for this disparity? Put another way, the spring is no longer storing all of the energy from mechanical work that has been performed on it as elastic energy, so what else is it storing it as? What is the type of energy 'stored' in irreversible deformation? Answer: The irreversible deformation causes a change in the internal structure and therefore a change in the total electrostatic energy of your spring (more precisely, the new arrangement modifies the electrostatic energy). As long as your spring doesn't break, the "internal" electrostatic energy should increase with the deformations.
{ "domain": "physics.stackexchange", "id": 58664, "tags": "elasticity" }
Momentum Equation vs Moment of Momentum Equation
Question: Newton's second law states that the rate of change of linear momentum ($P$) is equal to the net force: $$F=\frac{d}{dt}P \tag{1}$$ On the other side, there is a similar expression for angular momentum ($L$): $$M=\frac{d}{dt}L \tag{2}$$ In some books, the second equation is derived as a result of the first equation, but in some others, the second equation is expressed as a principle or physical law that is independent of Newton's second law. My question is: which one of these two approaches is correct? Is the moment of momentum equation a result of Newton's second law, or is it an independent principle? Answer: The two laws are the same. To see this, break down your rotating object into a sum of point masses. Then consider one of these masses: The angular momentum of our point mass is given by: $$ L = rmv $$ so: $$ \frac{dL}{dt} = \frac{d}{dt}(rmv) $$ For circular motion $r$ is constant so we get: $$ \frac{dL}{dt} = rm\frac{dv}{dt} = rma $$ But the second law tells us that $ma$ is just the applied force $F$, so we get: $$ \frac{dL}{dt} = rF $$ and this is just the definition of torque so: $$ \frac{dL}{dt} = T $$ To reconstruct our macroscopic object we now need to add up all our point masses. The total angular momentum is just the sum of all the point mass angular momenta, and the total torque is just the sum of all the point mass torques, so our equation applies to the macroscopic body as well.
{ "domain": "physics.stackexchange", "id": 26751, "tags": "newtonian-mechanics, classical-mechanics, angular-momentum, momentum" }
Can gravitational waves be explained by the interactions between photons?
Question: What the question really amounts to asking is: if (as LIGO said) the gravitational waves emitted from the black hole collision were emitted as 'pure energy', does this mean that they were emitted as photons? If this is the right interpretation, does it therefore mean that when the waves interacted with the lasers at LIGO, they were self-interacting photons interacting with the lasers, not 'ripples in spacetime'? Answer: General relativity is a purely classical theory and it describes the emission of gravitational waves without involving gravitons, photons or indeed any other elementary particle. The energy in a gravitational wave is basically the self energy of the spacetime curvature. To use a rather crude analogy, you can imagine spacetime as an elastic material and the self energy is the energy required to bend it. A gravitational wave carries energy analogously to the way a wave in an elastic material carries energy. When you ask what particles are involved, you are asking how we describe the gravitational wave, and the energy it carries, using quantum mechanics. Given that we have no theory of quantum gravity there isn't a firm answer to this. However if we attempt to use quantum field theory then we find gravitational waves are coherent states of gravitons just as light waves are coherent states of photons. Note that a gravitational wave isn't a hail of bullet-like gravitons just as a light wave isn't a hail of bullet-like photons. The relationship between the wave and particle is more subtle than that. So there is no reason to invoke photons in considering interactions of gravitational waves. The interaction with the detectors at LIGO is essentially perfectly described using the purely classical approach of GR. If you wished to use a quantum description then you'd need to consider scattering of gravitons, though it's far from clear if this would be a useful approach.
{ "domain": "physics.stackexchange", "id": 41278, "tags": "general-relativity, black-holes, photons, gravitational-waves" }
How and why can random matrices answer physical problems?
Question: Random matrix theory pops up regularly in the context of dynamical systems. I have, however, so far not been able to grasp the basic idea of this formalism. Could someone please provide an instructive example or a basic introduction to the topic? I would also appreciate hints on relevant literature. Answer: The basic idea is that statistical properties of complex physical systems fall into a small number of universal classes. A well-known example of this phenomenon is the universal law implied by the central limit theorem, where the sum of a large number of random variables belonging to a large class of distributions converges to the normal distribution. Please see Percy Deift's article for a historical and motivational review of the subject. Of course, one of the most motivating examples (mentioned in Percy Deift's article) is Wigner's original explanation of the heavy nuclei spectra. Wigner "conjectured" the universality and looked for a model which can explain the repulsion between the energy levels of the large nuclei (two close energy levels are unlikely), which led him to the Gaussian orthogonal ensemble having this property built in. Now, the heavy nuclei Hamiltonian matrix elements are not random, but since by universality, for large N, the distribution of the eigenvalues does not depend on the matrix elements distribution, the random matrix eigenvalue distribution approximates that of the Hamiltonian.
{ "domain": "physics.stackexchange", "id": 3918, "tags": "statistical-mechanics, chaos-theory, non-linear-systems" }
Defining temperature using mercury column
Question: After one defines the 0th law of thermodynamics with thermal equilibrium using temperature, a quantitative description of temperature can be made. We can measure temperature by finding another property which changes monotonically with temperature. In my textbook "Concepts of Physics" by H.C. Verma, on pg. 2, he states that we measure temperature as a function of the height of a mercury column, and he assumes a linear relation: $$t=al+b $$ (where t is temperature, l is length and 'a' and 'b' are some constants). And then he states that a change of one degree in temperature will mean a change of $\frac{l_2 - l_1}{t_2 - t_1}$ in the length of the mercury column. I cannot understand the second statement... I know that the slope $ \frac{\Delta t}{\Delta l} = a$, but how does this relate to a change of one degree of temperature? Answer: Given the linear relation you can subtract the relations for a temperature difference of 1, namely: $$(t+1)-t=\left(al_f+b\right)-\left(al_i+b\right)\\ \Rightarrow 1=a\Delta l\\ \Rightarrow \Delta l=\frac{1}{a}$$
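To make the answer concrete, here is a tiny Python sketch. The calibration readings (2.0 cm at the ice point, 27.0 cm at the steam point) are made-up numbers for illustration, not from the book:

```python
# Hypothetical calibration: the mercury column reads l = 2.0 cm at
# t = 0 degrees (ice point) and l = 27.0 cm at t = 100 degrees (steam point).
l1, t1 = 2.0, 0.0
l2, t2 = 27.0, 100.0

# t = a*l + b: 'a' is the slope, 'b' the intercept.
a = (t2 - t1) / (l2 - l1)
b = t1 - a * l1

# A change of one degree corresponds to a length change of 1/a,
# which is exactly (l2 - l1)/(t2 - t1).
dl_per_degree = 1.0 / a
print(a, b, dl_per_degree)   # 4.0 -8.0 0.25
```

So with these numbers the column rises 0.25 cm per degree, matching the textbook's $\frac{l_2 - l_1}{t_2 - t_1}$.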
{ "domain": "physics.stackexchange", "id": 66672, "tags": "thermodynamics" }
Design a Turing Machine Checking if Apples and Bananas are Even
Question: I am having trouble with a past exam paper. I have to design a Turing Machine to do the following, but I don't really know where to start with this question. Any help would be very much appreciated. Design a Turing Machine TM checking if the numbers of sold apples and bananas are even. Formally, given a string w over the alphabet {a,b}, TM should terminate with the following result string. ab if the number of apples and bananas are even a if the number of apples is even but bananas are odd b if the number of apples is odd and bananas are even ϵ if the number of apples and bananas are both odd Thanks Answer: I'm not going to give you a direct solution, but let's think about this problem together, shall we? We will focus on 4 states. Why four? Well, you have two variables, each with two possible states, so $2^2 = 4$. So, for right now let's call our states $1$, $2$, $3$ and $4$: $1$ is the state denoting an even number of apples and bananas. This is our start state. $2$ is the state denoting an odd number of apples and an even number of bananas. $3$ is the state denoting an even number of apples and an odd number of bananas. $4$ is the state denoting an odd number of apples and an odd number of bananas. Now, let's see how we move between states. For the moment, let's ignore the possibility of $\varepsilon$ for the input. From state $1$ ($a$=even, $b$=even): If we get an $a$, go to state $2$. Eat the apple and output nothing. If we get a $b$, go to state $3$. Eat the banana and output nothing. From state $2$ ($a$=odd, $b$=even): If we get an $a$, we go to state $1$. Eat the apple and output nothing. If we get a $b$, we go to state $4$. Eat the banana and output nothing. From state $3$ ($a$=even, $b$=odd): If we get an $a$, we go to state $4$. Eat the apple and output nothing. If we get a $b$, we go to state $1$. Eat the banana and output nothing. From state $4$ ($a$=odd, $b$=odd): If we get an $a$, we go to state $3$. Eat the apple and output nothing. 
If we get a $b$, we go to state $2$. Eat the banana and output nothing. At this point we are pretty full and sick of apples and bananas. So let's consider what happens when we run out of food. It should be clear that we are going to be in one of the four states above. So let's augment each of them with one additional rule. The rule is simple: when the input for any one of those states is $\varepsilon$, output the correct sequence of characters for that state and then go to a $halt$ state. I hope this helps!
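Since the machine only ever moves forward over the input and writes its result at the end, its control logic is exactly the four-state automaton described above. Here is a minimal Python simulation of that logic (the function name is mine); the two booleans together encode which of the four states we are in:

```python
def parity_machine(w):
    """Simulate the 4-state machine on input w over the alphabet {a, b}.

    (a_odd, b_odd) encodes the state: (False, False) is state 1,
    (True, False) is state 2, (False, True) is state 3, (True, True) is 4.
    """
    a_odd, b_odd = False, False          # start state: both counts even
    for c in w:
        if c == 'a':
            a_odd = not a_odd            # eat the apple, flip apple parity
        elif c == 'b':
            b_odd = not b_odd            # eat the banana, flip banana parity
        else:
            raise ValueError("alphabet is {a, b}")
    # End-of-input rule: output 'a' iff apples even, 'b' iff bananas even.
    return ('a' if not a_odd else '') + ('b' if not b_odd else '')

print(parity_machine('aabb'))   # ab (both even)
print(parity_machine('aab'))    # a  (apples even, bananas odd)
print(parity_machine('abb'))    # b  (apples odd, bananas even)
print(parity_machine('ab'))     #    (empty string: both odd)
```

On an actual Turing machine tape you would additionally erase each symbol as you read it and write the result string before halting, but the state transitions are identical.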
{ "domain": "cs.stackexchange", "id": 1439, "tags": "turing-machines" }
Fenwick Tree / Binary Indexed Tree Interval Location
Question: I have an array of positive numbers, and when calculating the cumulative sum of this array, want to know which interval a given point will lie in. I have done this by searching linearly through an array, but want to use Fenwick trees in the hope the speed-up will be noticeable. I've followed tutorials online, and have now implemented construct, update and getsum functions. I'm not sure how to go about determining which interval a point will lie in, though (while taking advantage of the data structure). So for example, if I had the array [1,2,2,1,4,6,7,3], then I could calculate the cumulative sum as [1,3,5,6,10,16,23,26]. If I then had the number 17, I'd want to know what interval this lies in. So starting indexing at 0, this would be in the interval between the 5th and 6th elements of the array. Answer: You can perform binary search on a Fenwick Tree. The idea is Binary Lifting. I'm assuming that we use one-based indexing in the Fenwick tree, and that we have exactly $2^k$ elements in the array $A$. And to formalize the problem: we want to find the biggest $i$, such that $A[1] + A[2] + ... + A[i] < x$ for a given $x$. This means that the prefix sum of the first $i$ elements is still too small, but the prefix sum of the first $i + 1$ elements is exactly $x$ or already too big. There exist multiple different implementations of the Fenwick tree. As I said, here I use one-based indexing, since the algorithm is a lot more beautiful this way. As a refresher, one-based indexing in a Fenwick tree means the following (explained using an example): The sum of the first $13 = 1101_2$ numbers can be computed as $bit[1000_2] + bit[1100_2] + bit[1101_2]$, where $bit$ is the array of the nodes of the Fenwick tree. The first summand $bit[1000_2]$ covers the first $1000_2 = 8$ elements (last set bit), the second summand $bit[1100_2]$ covers the next $100_2 = 4$ elements (last set bit), and $bit[1101_2]$ covers the last $1_2 = 1$ element (last set bit).
This indexing allows a very cool trick: we can iterate over the bits, from the highest to the lowest one, and check in $O(1)$ time whether we should set it or not.

function find_biggest_smaller_index(x):
    // returns biggest i such that A[1] + A[2] + ... + A[i] < x
    i = 0
    for b = log(n)...0:
        set bit b in i
        if bit[i] < x:
            // yay, gives a better lower bound
            // this handles the last 2^b elements, therefore subtract them
            x -= bit[i]
        else:
            // damn, too big already
            unset bit b in i
    return i

Edit: Now that I think about it, it should also work if the size of the array $A$ is not a power of 2.
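Following the answer's one-based binary-lifting scheme, here is a runnable Python sketch (the class and method names are my own), applied to the example array from the question. Unlike the pseudocode, it also bounds-checks the candidate index, so it works when the array size is not a power of two:

```python
class Fenwick:
    """One-indexed Fenwick tree over an array of positive numbers."""
    def __init__(self, a):
        self.n = len(a)
        self.bit = [0] * (self.n + 1)
        for i, v in enumerate(a, start=1):
            self._add(i, v)
        self.log = self.n.bit_length()       # highest bit worth trying

    def _add(self, i, v):
        while i <= self.n:
            self.bit[i] += v
            i += i & -i                      # jump to the next covering node

    def find_biggest_smaller_index(self, x):
        """Biggest i with a[1] + ... + a[i] < x (0 if there is none)."""
        i = 0
        for b in range(self.log, -1, -1):
            j = i + (1 << b)                 # tentatively set bit b
            if j <= self.n and self.bit[j] < x:
                i = j                        # keep the bit: still a lower bound
                x -= self.bit[j]             # bit[j] covers exactly 2^b elements
            # otherwise leave the bit unset
        return i

a = [1, 2, 2, 1, 4, 6, 7, 3]                 # cumulative: 1 3 5 6 10 16 23 26
ft = Fenwick(a)
print(ft.find_biggest_smaller_index(17))     # 6: the first six values sum to
                                             # 16 < 17, i.e. 17 falls between the
                                             # question's 0-based entries 5 and 6
```

Each query is $O(\log n)$, versus $O(n)$ for the linear scan the questioner started with.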
{ "domain": "cs.stackexchange", "id": 13442, "tags": "data-structures" }
How do I interpret the following electric potential diagrams?
Question: We recently started to learn about electricity in school and I'm trying to understand the concept of voltage (potential difference). From what I understand, electrons flow from the negative terminal to the positive. The electrons at the negative terminal will have 1.5 J of potential energy per coulomb (see diagram), and when they reach the positive terminal, all the potential energy will have been transformed. I have a few questions about this circuit: Is the potential energy of a negatively charged particle at the negative terminal equal to the work needed to "push" it from the positive terminal to the negative terminal in the internal circuit? In other words, is it equal to E * Q * d, where E is the electric field strength, Q is the charge of the particle and d is the distance between the terminals? The diagram to the right shows how the voltage drops in the circuit. According to the illustration, the electric potential remains constant from point A to point B. But is this the case? Isn't electric potential really energy due to position, per coulomb? And since the position changes, shouldn't the potential change as well? Similarly, after the charges pass the lightbulb the diagram shows that all the potential energy has been transformed to light and thermal energy. But wouldn't there be some potential energy left, since a charge at position C has potential energy with respect to the positive terminal? If the only cause of the voltage drop in the circuit is when the charges encounter the lightbulb, then that suggests there would be no drop in potential from A to B if the circuit didn't have the lightbulb. This doesn't make sense to me. And lastly, how is the light energy produced in the lightbulb? Is it the kinetic energy from the electrons doing work on the filament of the lightbulb? I would greatly appreciate it if you can answer/correct any or all of these questions. Answer: Correct as far as it goes, and it works well in situations where the electric field is known.
But it's not often used in circuits, because the electric field is not easily described. It is due to its position in the electric field. But the shape of the field is not simple. Because of the (mobile) charges in the conducting wire, the field inside the wire is zero. So it takes (almost) no work to move a charge along the wire. The potential of a charge does not change from one end to the other. That's correct. In an ideal conductor, there is no change in potential from one side to the other. If you leave the lightbulb out (open circuit), or if there is a reasonable resistance in the circuit, then there is no problem assuming the wire has zero resistance. If you just connect the two conductors (short circuit), then the differences between an ideal conductor and the actual conductor become significant. The real conductor does have some resistance. As the current rose in the circuit, this resistance (or the resistance in other parts of the circuit) would be sufficient to drop the voltage of the battery (or something would start breaking). Yes, you can think of the energy transfer in a resistor as due to thermal heating from the charge collisions.
{ "domain": "physics.stackexchange", "id": 25879, "tags": "electric-circuits" }
TF updating very slowly
Question: Hey everyone, I am trying to publish Pose messages based on distances I get from ar_track_alvar. I have static transforms which relate the markers to the map. These static transforms broadcast at 100Hz. I take the position and orientation data from the marker message and create a transform from it. I then broadcast that transform (I can't use the one supplied by ar_track_alvar, so I create my own). I then listen for a transform from the map to my robot in order to get my position and orientation with respect to the map, and then publish that as a pose message. This works fine, but the problem is that the poses are published with an intolerable delay. After moving my robot, it can take up to a minute for the pose to update. To be clear, poses continue to be published, but they reflect the robot's old position. I used rosrun tf tf_echo to track the tfs, and I found that the tf is reflecting old data also. Additionally, I see a lot of errors saying: Lookup would require extrapolation into the past. Requested time XXXXXX but the earliest data is at time YYYYYYY, when looking up transform from frame [thor] to frame [map]. This happens despite my constant broadcasting and the static transforms. Things I have tried: I have gotten rid of the waitForTransform call in case that was adding some latency. I have reduced the tag subscriber's buffer to 1 to ensure processing of only the most recent message. I have added extra AsyncSpinners. These had no effect. Does anyone have any idea why it is taking the tfs so long to broadcast? Relevant code: //Figure out which tag we are looking at.
int tagID = msg->markers[0].id;
std::stringstream ss;
ss << "hsmrs/marker_" << tagID;
std::string markerFrameID = ss.str();

//Manually create transform from robot to tag
tf::Transform transform;
double x, y;
x = msg->markers[0].pose.pose.position.x;
y = msg->markers[0].pose.pose.position.y;
tf::Quaternion q;
tf::quaternionMsgToTF(msg->markers[0].pose.pose.orientation, q);
transform.setOrigin(tf::Vector3(x, y, 0.0));
transform.setRotation(q);

//Broadcast transform
ROS_INFO("Broadcasting transform!");
br.sendTransform(tf::StampedTransform(transform, ros::Time::now(), markerFrameID, "thor"));
ROS_INFO("Transform broadcasted!");

//Listen for transform from map to robot
ROS_INFO("Listening for transform!");
tf::StampedTransform mapTransform;
try {
    listener.waitForTransform("map", "thor", ros::Time(0), ros::Duration(0.1));
    listener.lookupTransform("map", "thor", ros::Time(0), mapTransform);
    ROS_INFO("Transform heard");
}
catch (tf::TransformException &ex) {
    ROS_ERROR("%s", ex.what());
    return;
}

//Unpack position and orientation
double map_x, map_y;
map_x = mapTransform.getOrigin().x();
map_y = mapTransform.getOrigin().y();
geometry_msgs::Quaternion quat;
tf::quaternionTFToMsg(mapTransform.getRotation(), quat);

//Publish pose as geometry message
geometry_msgs::Pose pose;
pose.position.x = map_x;
pose.position.y = map_y;
pose.orientation = quat;
pose_pub.publish(pose);

EDIT: The static publisher was running on a separate computer. By running it on the robot this problem disappeared. But I need it to work on the workstation computer. How can I make it work? Originally posted by Robocop87 on ROS Answers with karma: 255 on 2015-02-16 Post score: 0 Answer: EDIT: The static publisher was running on a separate computer. By running it on the robot this problem disappeared. But I need it to work on the workstation computer. How can I make it work? Have you checked the clocks on both computers?
TF has a temporal aspect to it, and if the clocks are not in sync, things start to go wrong. From wiki.ros.org/Clock - Introduction: When using wall-clock time on multiple machines, it is important that you synchronize time between them. ROS does not provide this functionality as there are already well-established methods (e.g. ntp) for doing this. If you do not synchronize the wall-clocks of multiple machines, they will not agree on temporal calculations like those used in tf. See also wiki.ros.org/ROS/NetworkSetup - Timing issues, TF complaining about extrapolation into the future?, and Why robot needs to set NTP?, for instance. Originally posted by gvdhoorn with karma: 86574 on 2015-02-16 This answer was ACCEPTED on the original site Post score: 3 Original comments Comment by Robocop87 on 2015-02-16: I'll look into this, that may very well be the problem!
{ "domain": "robotics.stackexchange", "id": 20888, "tags": "transform" }