anchor
stringlengths
0
150
positive
stringlengths
0
96k
source
dict
Message Type: Heading and Range
Question: If you have a sensor with a beacon, which gives you the distance and the heading ( maybe Quaternion ) to the beacon, is there already a standard ROS message I could use for that? Regards, Christian Originally posted by cyborg-x1 on ROS Answers with karma: 1376 on 2016-01-07 Post score: 1 Answer: Well after some thinking, I guess I found an answer to that now. Probably the answer is just a point in the frame of the sensor. I just have polar coordinates then and I have to transform them into cartesian coordinates. So I think it will be geometry_msgs/PointStamped then, with frame_id sensor_link. Where x is looking in the direction of the sensors front. Originally posted by cyborg-x1 with karma: 1376 on 2016-01-07 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by gvdhoorn on 2016-01-07: Points don't have a 'direction'? Comment by cyborg-x1 on 2016-01-08: Yes, but actually in the sensors frame it would result in one. Thinking of the sensor as origin. You get an angle for the z axis and a distance to it -> so it is like a polar coordinate you could translate in x and y. Actually I am also not sure if this resolves my problem. Because it is not ... Comment by cyborg-x1 on 2016-01-08: really a heading I get. It is the ir-sensor from the EV3 which can detect it's infrared beacon. I need to test around with it how this works out. I think it could be used as a source for localization when using two of the beacons. http://www.ev3dev.org/docs/sensors/lego-ev3-infrared-sensor/ ... Comment by cyborg-x1 on 2016-01-08: (see IR Seeker mode)
{ "domain": "robotics.stackexchange", "id": 23366, "tags": "ros, range, message" }
"Clean Architecture" design pattern with Node.JS and MongoDB
Question: After some time poorly designing my web applications' backends (mixing database calls with the controller, etc.), I have decided to try the "Clean Architecture" approach. In this example I have a REST api users which allows you to get a list of all users in MongoDB and to put a user inside the database. Please, any suggestion on how I can make it better organized would be awesome. app.js const express = require("express"); const bodyParser = require("body-parser"); const config = require("./config.js"); const Routes = require("./src/routes.js"); const Database = require("./src/database.js"); const app = express(); app.use(bodyParser.json()); const PORT = config.PORT; app.use("/api/", Routes()); new Database(config.MONGODB_URI) // connects to the database using MONGODB cluster URL .then(() => { app.listen(config.PORT, () => { console.log(`Server running on port ${config.PORT}`); }); }) .catch((err) => console.error(err)); /src/routes.js const express = require("express"); const usersRouter = require("./user/routes.js"); const Routes = (dependencies) => { const router = express.Router(); router.use("/users", usersRouter(dependencies)); return router; }; module.exports = Routes; /src/database.js const mongoose = require("mongoose"); module.exports = class Database{ constructor(connection){ this.connection = connection; return mongoose.connect(this.connection, { useNewUrlParser: true, useUnifiedTopology: true }); } }; src/user/routes.js const express = require("express"); const UserController = require("./controller.js"); const UserModal = require("./data_access/modal.js"); const UserRepository = require("./repository.js"); const userRoutes = () => { const modal = UserModal; // pretty much the User modal/document const repository = new UserRepository(modal); // talks to the db const router = express.Router(); const controller = UserController(repository); // handles request, sends repository to services router.route('/') .get(controller.getUsers) .post(controller.addUser); return router; }; module.exports = userRoutes; src/user/repository.js module.exports = class UserRepository{ constructor(model){ this.model = model; } create(user){ return new Promise((resolve, reject) =>{ this.model(user).save(); resolve(user); }); } getByEmail(email){ return new Promise((resolve, reject) =>{ this.model.find({email: email}) .then((user) => resolve(user[0])); }); } getByUsername(username){ return new Promise((resolve, reject) =>{ this.model.find({username: username}) .then((user) => resolve(user[0])); }); } getAll(){ return new Promise((resolve, reject) => { const students = this.model.find({}); resolve(students); }); } }; src/user/controller.js const GetUsers = require("./services/get_users.js"); const AddUser = require("./services/add_user.js"); module.exports = (repository) => { const getUsers = (req, res) => { GetUsers(repository) .execute() .then((result) => res.sendStatus(200).json(result)) .catch((err) => console.error(err)); }; const addUser = (req, res) => { const { username, email } = req.body; AddUser(repository) .execute(username, email) .then((result) => res.send(200)) .catch((err) => { res.send(403); console.error(err); }); }; return { getUsers, addUser }; }; /src/user/services/add_user.js module.exports = (repository) => { async function execute(username, email){ return Promise.all([repository.getByUsername(username), repository.getByEmail(email)]) .then((user) => { if(user[0]){ return Promise.reject("username already taken!"); } else if(user[1]){ return Promise.reject("email already taken!"); } else{ repository.create({username: username, email: email}); return Promise.resolve("all good!"); } }); } return {execute}; }; src/user/services/get_user.js module.exports = (repository) => { async function execute(){ const users = repository.getAll(); return new Promise((resolve, reject) => resolve(users)); } return {execute}; }; src/user/data_access/schema.js (entities) const mongoose = require("mongoose"); module.exports = new mongoose.Schema( { username: { type: String }, email: { type: String }, password_hash: { type: String } }); src/user/data_access/model.js (entities) const mongoose = require("mongoose"); const userSchema = require("./schema.js"); const UserModal = mongoose.model("User", userSchema); module.exports = UserModal; Answer: Your code looks very structured and nicely written, but to my understanding, your solution is not "clean architecture" (CA) as described by uncle bob. Your solution is an MVC solution. In clean architecture you can (for example): easily replace express with another framework easily replace mongo with another DB In your case the framework is embedded into your logic, you can see that the controller is using res to output the JSON outside. so replacing the framework will require a rewrite. Another example is that you have no clear boundaries between all the application layers. It can be easily seen that all your imports are from express => to logic => to database and back and that's not using dependency inversion. In CA you have entities layer which is the higher layer and all other layers import the entities as a dependency and it is the communication protocol between all layers. Other things: Where is the exception handling Where is the validation handling Where are the multiple ways to output your data (XML, JSON, CSV) I think you managed to reach a working positive solution (which I think is not extensible for all future cases): Because you don't have all the required functionality in place (positive and negative). You don't have unit tests to prove that you are testable You didn't process your output, there are many HTTP Responses that your presenter needs to handle, but so far you created some basic functionality, so it works. Where the complexity (as number of API will grow, and when the complexity of the logic will grow) you will start feeling the pain of extensibility.
{ "domain": "codereview.stackexchange", "id": 38488, "tags": "javascript, design-patterns, node.js, mongodb, mongoose" }
Acceleration velocity relation
Question: I just happened to come across this specific question in physics:- Q: If a particle is moving along a straight line with increasing speed, then, (1) its acceleration is negative (2) its acceleration maybe decreasing (3) its acceleration is positive (4) both (2) and (3) My physics teacher said that as such all the options are correct but as the option in the book is given to be (4), mark this as the answer only. So, my doubt is that I agree with the options 2,3 and 4 but this option 1 to be true for the given statement is confusing me a little. Please justify the 1 option to be true in this case for me. Answer: I think where the issue lies is the question is asking about speed - not velocity. Speed is defined to be the absolute value of velocity, and is a scalar quantity. Thus, the first case could work if the velocity is negative and the acceleration is negative. This way, the velocity would decrease, becoming more negative, but the absolute value of velocity, or speed, would increase.
{ "domain": "physics.stackexchange", "id": 31576, "tags": "homework-and-exercises, kinematics" }
Finite Difference Method (FDM) solution to heat equation in material having two different conductivity
Question: I am not from a mechanical engineering background and I have not taken any courses in PDE so this may seem trivial for many. and I am writing a Matlab code with the objective to solve for the steady state temperature distribution in a 2D rectangular material that has 'two phases' of different conductivity. I was able to do it considering the entire material as one single phase where at each iteration,the value of temperature is updated as the average of temperature of 4 nearest neighbors(central difference method) until the error is less than a specified value between consecutive iterations. How can I calculate the temperature distribution when I have two phases, taking into consideration the conductivity of each material and how can I make sure that there is temperature continuity at the interface? Answer: Let the discontinuity in thermal conductivity be located half-way between grid points i and i+1. Let the conductivity to the left of the discontinuity be $k_L$ and the conductivity to the right of the discontinuity be $k_R$. Then, for a grid point at i,j (immediately to the left of the discontinuity), the steady state heat balance equation (assuming a square grid) would be: $$k_L(T_{i-1,j}-T_{i,j})+\left[\frac{2}{(\frac{1}{k_L}+\frac{1}{k_R})}\right](T_{i+1,j}-T_{i,j})+ k_L(T_{i,j+1}-2T_{i,j}+T_{i,j-1})=0$$ If you divide this equation by $k_L$ and solve for $T_{i,j}$, you will obtain the equation you desire. Do you understand how this equation was derived? Do you know how to get the balance equation on the other side of the boundary? SUPPLEMENT Let q be the heat flux from grid point i to grid point i + 1. This is given locally by the equation: $$-k\frac{\partial T}{\partial x}=q$$ Solving for $\frac{\partial T}{\partial x}$, we have:$$\frac{\partial T}{\partial x}=-\frac{q}{k}$$ Integrating this equation between i and i+1, we have: $$(T_{i+1,j}-T_{i.j})=-\int_{x_i}^{x_{i+1/2}}\frac{q}{k}dx-\int_{x_{i+1/2}}^{x_{i+1}}\frac{q}{k}dx=-\frac{q}{k_L}\frac{\Delta x}{2}-\frac{q}{k_R}\frac{\Delta x}{2}$$ So, $$q=-\left[\frac{2}{(\frac{1}{k_L}+\frac{1}{k_R})}\right]\frac{(T_{i+1,j}-T_{i,j})}{\Delta x}$$
{ "domain": "physics.stackexchange", "id": 30804, "tags": "thermodynamics, computational-physics" }
Can I use scikit learn's cross_val_predict with cross_validate?
Question: I am looking to make a visualization of my cross validation data in which I can visualize the predictions that occurred within the cross validation process. I am using scikit learn's cross_validate to get the results of my bayesian ridge model's (scikit learn BayesianRidge) performance, but am unsure if my plot using cross_val_predict expresses the same predictions? My plot is a one-to-one plot of the predicted labels that occurred during cross validation versus the observed labels the model trained on. I use the same number of folds in both cross_validate and cross_val_predict. Basically, I just want to know if the plot I make with cross_val_predict can be described by the returned performance metrics from cross_validate? Thanks for the help Answer: No, the folds used will (almost surely) be different. You can enforce the same folds by defining a CV Splitter object and passing it as the cv argument to both cross-validation functions: cv = KFold(5, random_state=42) cross_validate(model, X, y, cv=cv, ...) cross_val_predict(model, X, y, cv=cv, ...) That said, you're fitting and predicting the model on each fold twice by doing this. You could use return_estimator=True in cross_validate to retrieve the fitted models for each fold, or use the predictions from cross_val_predict to generate the scores manually. (Either way though, you'd need to use the splitter object to slice to the right fold, which might be a little finicky.)
{ "domain": "datascience.stackexchange", "id": 11300, "tags": "scikit-learn, regression, visualization, cross-validation" }
What knowledge should I gain for developing a supervised image processing software that learns how to edit photos based on past behavior?
Question: I have done several machine learning projects but all of them have been connected to the traditional machine learning (predictions, classifications, etc.). I have currently been offered a project to finish in less than 6 months. The idea is to develop/improve a pre-existing software. The software takes the image of a molecule from an advanced molecule and then tries to highlight the cell line with red, some times the software takes the background or the lines of other cells also as the highlighted part, and thus the user has to manually edit and trim such mistakes. The idea is to make the software learn from user's edits and behavior over time. One thing I want to know is whether such a 6-month project is realistic for someone with no background in image processing and pattern recognition? Or is it going to be terribly difficult because I only have had experience with "data-oriented"/statistical machine learning? My other question is; what type of concepts/topics should I dig deep into to learn the fundamentals of carrying out this project? Answer: From the description of your problem, you need both Computer Vision and Deep Learning for a task like that. It is going to be extremely difficult, but you are more in advantage than anyone else, given you have a strong statistical and machine learning background. You don't have to worry a lot about the image processing part as there are libraries that will do that for you. You can look into PIL for that.The hard part is the learning from the edits part.A simpler way to solve this problem would be to focus on image processing and pick out the cell line clearly. The other way would be to train a Convnet on a large collection of 'labelled-images' of the cell-line, so that it is able to identify it in any picture. I do think it is quite a hard problem to solve but do give it a try. Cheers. All the best.
{ "domain": "datascience.stackexchange", "id": 1338, "tags": "machine-learning, deep-learning, image-recognition" }
Checking whether a string is a permutation of a palindrome in C++20 - follow-up
Question: This post is the follow-up of Checking whether a string is a permutation of a palindrome in C++20. So, what's new? Well, nothing else except that the procedure is now generic and accepts all string_views of char, char8_t, wchar_t, char16_t and char32_t: string_utils.h: #ifndef COM_GITHUB_CODERODDE_STRING_UTILS_HPP #define COM_GITHUB_CODERODDE_STRING_UTILS_HPP #include <cstddef> // std::size_t #include <string_view> #include <unordered_map> namespace com::github::coderodde::string_utils { template<typename CharType = char> bool is_permutation_palindrome(const std::basic_string_view<CharType>& text) { std::unordered_map<CharType, std::size_t> counter_map; for (const auto ch : text) ++counter_map[ch]; std::size_t number_of_odd_chars = 0; for (const auto& [ch, cnt] : counter_map) if (cnt % 2 == 1 && ++number_of_odd_chars > 1) return false; return true; } } // namespace com::github::coderodde::string_utils #endif // !COM_GITHUB_CODERODDE_STRING_UTILS_HPP main.cpp: #include "string_utils.h" #include <iostream> namespace su = com::github::coderodde::string_utils; int main() { std::cout << std::boolalpha; std::cout << su::is_permutation_palindrome(std::wstring_view(L"")) << "\n"; std::cout << su::is_permutation_palindrome(std::u8string_view(u8"yash")) << "\n"; std::cout << su::is_permutation_palindrome(std::u16string_view(u"vcciv")) << "\n"; std::cout << su::is_permutation_palindrome(std::u32string_view(U"abnnab")) << "\n"; } Critique request I would like to know how to improve my solution code-wise. I don't care much about performance. Answer: There isn't much to talk about, the code is good. Drop in ergonomics The template now disallows usage of std::string directly, as the CharType is not automatically deduced. Some other calls where the string view could be converted to are now prohibited as well (C strings, for example). It might make sense to provide some overloads that take C strings and std strings to facilitate easier use. Missing traits as type parameter for the string_view I honestly do not know if anybody uses them, but I'll leave it here for completeness. Mixing two responsibilities I believe that histogram calculation deserves separation from this function. Perhaps having one easy to use function that composes from them is great, but not having the building blocks separately might cause problems. I haven't worked with ranges, but what if it would be possible to actually create another view from the input that is histogram of the original, then just apply the algorithm? This might solve the problem that @Toby is mentioning, perhaps the ignoring functions could be written as part of the pipeline. This might make it viable to accept any range type that has needed properties (foreshadowing concepts).
{ "domain": "codereview.stackexchange", "id": 42652, "tags": "c++, template, generics, palindrome, c++20" }
Energy of a body in circular motion?
Question: I'm confused about energy of a body in circular motion. In particular I'm having trouble to find the correct answer to this question. Consider the body in the picture that is set in motion from rest. How much energy is needed to set the body in motion? What energy does the body in motion own? In my answer I would surely include the kinetic energy that the body owns once in motion, that is $K=\frac{1}{2} m v^2=\frac{1}{2} m (\omega r)^2$ (Where $r$ is the radius of the circular trajectory of the body in motion) But is that all? I mean the body has somehow rise a bit from the previous position, therefeore it should own potential energy too, is that correct? $U_{grav}=mg h$ (Where $h$ is the height of the body in motion with respect to the previous position) Finally should also centrifugal potential energy be included? That would be $U_{centrifugal} = -\int_{0}^{r} F_{centrifugal} dx= -\int_{0}^{r} m \omega^2 x dx=-\frac{1}{2} m (\omega r)^2$ So adding up all the pieces $E_{TOT}= K+U_{grav}+U_{centrifugal}=U_{grav} =m g h$ But I'm not convinced of what I tried, am I missing something important? Answer: You don´t have to consider fictitious forces if you set an inertial frame of reference. Suppose the rod at which the mass is tied to by the rope is the z-axis, with z=0 at the lower extreme and raising from bottom to top, and the other x and y axis are in the plane perpendicular to the z-axis, no matter the specific location inside it because the problem has symmetry about z-axis. It´s obvious that the mass has less energy when idle than when twirling, just because both the potential and kinetic energy have increased but, how much? Well, if you locate the "zero level" of the potential at z=0 and you consider gravity to be constant, the potential energy is $V_{1}=mg(L-d)$, and mechanic energy too, because the mass is iddle and kinetic energy is zero $T_{1}=0$. When twirling at the given height, you have that kinetic and potential energy are: $T_{2}=\dfrac{1}{2}m\omega^{2}r^{2}=\dfrac{1}{2}m\omega^{2}(d^{2}-\dfrac{L^{2}}{4})$ $V_{2}=mg\dfrac{L}{2}$ $r$ means the same than in your exposition, computing its value only requires to apply pythagorean theorem. With this, the problem is practically solved. The energy of the mass when twirling is: $E_{2}=T_{2}+V_{2}=\dfrac{1}{2}m\omega^{2}(d^{2}-\dfrac{L^{2}}{4})+mg\dfrac{L}{2}$ And the energy increment is: $\Delta E=E_{2}-E_{1}=E_{2}-V_{1}=\dfrac{1}{2}m\omega^{2}(d^{2}-\dfrac{L^{2}}{4})-mg\dfrac{L}{2}+mgd$
{ "domain": "physics.stackexchange", "id": 28798, "tags": "homework-and-exercises, energy, potential-energy, rotation" }
Is Hawking radiation real for a far away observer?
Question: It is my understanding that Hawking radiation is observed by external observers, and A necessary condition for having Hawking radiation is the formation of an event horizon during a gravitational collapse. Since the emergence of an event horizon takes infinite time for an observer far away from the black hole, how is it possible that this observer sees thermal radiation coming from the black hole if a necessary condition for the existence of such thermal radiation is the presence of the event horizon? Am I wrong in assuming that the formation and existence of the event horizon is necessary in order to have Hawking radiation? Answer: External observers and black hole formation The event horizon is simply the delineation between the part of spacetime from which light can escape and the part of spacetime from which it cannot. In that sense, it is not directly observable, neither by external observers nor by infalling observers. Still, an external observer can observe the effects of the existence of a region from which nothing can escape. An external observer can observe an object falling toward that region. The object's motion is increasingly slowed, and the light from that object is increasingly redshifted and increasingly reduced in intensity, until it is no longer observable for all practical purposes. The external observer never sees an object cross the event horizon, but the object quickly disappears from the external observer's senses because of the increasing redshift and decreasing intensity. This happens when the object is very near the event horizon. That's true for any object falling toward the black hole, including the star itself — the star whose collapse forms the black hole. However, to say that the black hole never forms according to the external observer would be missing the point. The external observer sees the collapsing star quickly and smoothly disappear, again because of the rapidly increasing redshift as the "surface" of the star comes very close to the point of no return. In order for the distant external observer to continue detecting light from the star, larger and larger telescopes would need to be used in order to capture the ever-increasing wavelength and ever-decreasing intensity. When the redshifted wavelength exceeds the size of the universe, or when the intensity falls below one photon per age-of-the-universe, this clearly becomes hopeless. This occurs in a finite amount of time on the external observer's clock, so in this sense, the external observer does witness the formation of the black hole. And remember that the event horizon delineates a region of spacetime. If we want to try to think of it as a region of space, then we need to remember that it can grow. The part of space where infalling objects become practically unobservable to the external observer at 2:00 can be larger than the part of space where infalling objects were becoming practically unobservable to the external observer at 1:00. If the external observer takes a video of objects falling toward a black hole, the video will show that the size of the crazy-region (around which the light from distant stars on the opposite side is bent in dizzying ways) is steadily growing as a result of the mass gained from the infalling objects — even though each infalling object becomes unobservable before reaching that (growing) region. So yes, it's true that an external observer never sees an object cross the event horizon. And it's also true that an external observer does see the black hole form and grow, in the very real sense that the external observer could take a video and post it on the internet for the rest of us to watch (including seeing falling objects smoothly dwindle-and-disappear, as well as the dizzying effects on the background light from distant stars), all in a finite amount of time. Hawking radiation In contrast to the light emitted by the collapsing star, which is quickly redshifted to the point of unobservability, Hawking radiation persists. We can think of Hawking radiation as being emitted from just outside the event horizon (just outside the region from which nothing can escape), but unlike the light from the infalling star, Hawking radiation starts with arbitrarily short wavelengths, so that the wavelength received by the external observer is still finite despite the arbitrarily large redshift. Quantitatively, most of the Hawking-radiation wavelengths received by the external observer are comparable the size of the black hole. That's still a huge wavelength that would require incredibly sensitive instruments to detect (also because of the extremely low intensity), but it doesn't become increasingly difficult to detect (unless the black hole grows), in contrast to the light from the star which does become increasingly difficult to detect. Altogether, a distant observer can detect the Hawking radiation even though that observer never sees any part of the star cross the (growing) event horizon. In fact, the spacetime of a collapsing star that is used to derive Hawking radiation predicts the experience of the distant observer that was described above. Most importantly, the derivation of Hawking radiation does not rely on the perspective of any particular observer. The derivation takes all of spacetime into account, not just the part that a distant observer can see. Infalling objects cross the horizon in a finite amount of time on their own clocks, and the derivation of Hawking radiation "knows" this — just like it "knows" that distant observers never see those same infalling objects reach the horizon. By the way, Hawking radiation can be — and originally was — derived using quantum field theory in classical curved spacetime, and that's the model assumed in this answer. This answer didn't use quantum gravity, which isn't necessary for deriving Hawking radiation and isn't necessary for this question. Technical note about time and black hole formation A more technical note for those who are comfortable with the concept of a spacelike hypersurface: It is sometimes said that the emergence of an event horizon takes infinite time for a distant observer, but we need to be careful when talking about "time" in relativity. The distant observer never sees anything cross the horizon, because light cannot escape. However, there are spacelike hypersurfaces that include stuff behind the horizon and that also intersect the distant observer's worldline. In that sense, the horizon forms in finite time on the observer's clock, even though the observer can never see it. We can construct a continuous sequence of spacelike hypersurfaces (called a foliation), each one intersecting the distant observer's worldline at a particular time on that observer's clock, and each one intersecting the inside of the black hole. The black hole grows along this sequence of spacelike hypersurfaces, and this formation happens in finite time on the distant observer's clock.$^\dagger$ $^\dagger$ The details of the timeline are ambiguous, of course, because we can also construct (infinitely many!) other sequences of spacelike hypersurfaces instead. This is one of relativity's most basic lessons: "simultaneous" is generally ill-defined. We can't use a clock in one place to unambiguously assign times to events that occurred in a different place.
{ "domain": "physics.stackexchange", "id": 71643, "tags": "general-relativity, black-holes, observers, hawking-radiation, qft-in-curved-spacetime" }
Equation of state for ideal gas from Helmholtz free-energy
Question: Starting from the definition of Helmholtz free energy: $$F:=U-TS$$ (where $U$ is the internal energy , $T$ temperature and $S$ entropy) we derive in few steps the following relation: $$F=-T\int \frac{U}{T^2}\mathrm d T+ \text{constant} \tag{1}$$ Now, we know also that Maxwell relations holds so at $T=\text{constant}$ we have: $$P=-\frac{\partial F}{\partial V} \tag{2}$$ In ideal gas the internal energy have the following form: $$U=\frac{3}{2} NT \tag{3}$$ If i substitute $(3)$ in $(1)$ and put the result in $(2)$ i should find the classical equation of state for ideal gas: $$ PV=NT \tag{4}$$ ...but from calculation i don't find this. Where is the error in my steps? It could be in the value of the constant ? Yes, wrong word. Anyway we can say something about this function a posteriori : $$ p = - \frac{\partial F}{\partial V} = -C'(V) T. $$ Using (4) in this equation we obtain $$ C(V) = N * ln(V)+ constant$$ Answer: In 1) there is additive "constant" of integration. The integration is only over $T$, the terms may depend also on volume $V$ which can be arbitrary. Therefore the "constant" in that integration over $T$ can be actually a function of $V$: $$ F(T,V) = -T\int \frac{U}{T^2}dT + C(V)T. $$ Since the first term, for an ideal gas, does not depend on volume, the only part relevant for calculating pressure from $F$ is the second term: $$ p = - \frac{\partial F}{\partial V} = -C'(V) T. $$ The conclusion is, we cannot infer the familiar equation of state of ideal gas $p = nc_V T / V$ just from knowing $U = nc_V T$. The above result suggests large class of functions of $C(V)$ is consistent with $U=nc_VT$. But we did find at least that pressure must be proportional to temperature $T$. In Callen there is a rationale for this - the equation $U=nc_VT$ is not the fundamental form, that is, $U$ is not expressed as function of its natural parameters $S,V$. If it was, we should be able to derive the equation of state from it.
{ "domain": "physics.stackexchange", "id": 66958, "tags": "thermodynamics, energy, statistical-mechanics, ideal-gas" }
Practices while using a boiling tube
Question: When using a boiling tube to boil a liquid, I am frequently advised to move the boiling tube in and out of the Bunsen flame and when the liquid starts to boil, remove the tube from the flame for a few seconds before resuming. Why is this so? Answer: Usually, if people have experience of boiling water at home and think a while about it, they are able to answer it themselves. These "boiling tubes" are used primarily for ambient temperature testing, so they are called test tubes, or test-tubes. Taking a test-tube in and out is for decreasing the heat flow to prevent violent overheating and the burst of boiling liquid shooting out of the tube. Similarly, removing the boiling test-tube from the flame serves the same purpose, avoiding violent outbursts. Bear in mind there is high ratio of provided heat versus the test-tube heat capacity, so warming up is fast and subsequent boiling can be violent. Additionally, as a welcome side effect, moving of a test-tube keeps mechanical disturbances and traps air bubbles, creating boiling centers. This in large extent prevents overheating above the boiling point and then sudden boiling outbursts. This is especially important when heating liquid with strong tendency to be overheated, like strongly alkalic solutions. A burst of boiling hydroxide solution is nothing I would recommend to experience. Another precaution is warming up rather the upper part of the liquid in a tilted test-tube. If an outburst happens in spite of being careful, less liquid is shot outside.
{ "domain": "chemistry.stackexchange", "id": 15568, "tags": "experimental-chemistry, safety" }
Reconstruction of a signal using 1D discrete wavelet
Question: There is a signal of $50\textrm{ Hz}$ and $120\textrm{ Hz}$ corrupted with noise. The sampling rate is $1000\textrm{ Hz}$. Here I used a 3-level DWT to extract this two components of the signal respectively. The figure is the power density spectrum of signal reconstructed from the detailed coefficient. My question is, why there is a unknown frequency component ($129.9\textrm{ Hz}$) in it? clear all Fs = 1000; % Sampling frequency T = 1/Fs; % Sampling period L = 1024; % Length of signal t = (0:L-1)*T; % Time vector S = 0.7*sin(2*pi*50*t) + sin(2*pi*120*t); X = S + 2*randn(size(t)); plot(1000*t(1:50),X(1:50)) title('Signal Corrupted with Zero-Mean Random Noise') xlabel('t (milliseconds)') ylabel('X(t)') f = Fs*(0:(L/2))/L; h=boxcar(L); P1=psd(X,L,Fs,h); plot(f,P1) title('Single-Sided Amplitude Spectrum of X(t)') xlabel('f (Hz)') ylabel('|P1(f)|') ylim([0 300]) [c,l]=wavedec(X,3,'bior4.4'); for i=1:3 d{i}=wrcoef('d',c,l,'bior4.4',i); a{i}=wrcoef('a',c,l,'bior4.4',i); Pa{i}=psd(a{i},L,Fs,h); Pd{i}=psd(d{i},L,Fs,h); end figure plot(f,Pa{3}) title('Single-Sided Amplitude Spectrum of X(t)') xlabel('f (Hz)') ylabel('|P(f)|') figure plot(f,Pd{3}) title('Single-Sided Amplitude Spectrum of X(t)') xlabel('f (Hz)') ylabel('|P(f)|') Answer: This sounds like a typical effect due to downsampling after a convolution with a non perfect lowpass, when one of the frequency in the approximation becomes close to the cutoff $f_c$ at some scale, of the form $f_c = f_s/2^j$ for some $j$. Notice that the sum of $120+129.9 \approx 2*125$. Set the second frequency to $124$, you can see a second peak at $126$ Hz, similarly. So this is aliasing, resulting from a two-fold subsampling after a FIR lowpass, whose transition band is not sharp enough. This can be avoided be sharper wavelets (like $M$-band wavelets), or resorting to shift-invariant or stationary wavelets (swt.m instead of dwt.m). The following real-life example come from seismic data processing: the data is corrupted by a $60$ Hz powerline sine (US standard), and acquired at $4$ ms. The top graph is a seismic data spectrum, after the data has been processed by the discrete wavelet, and shows a second peak at $65$ Hz ($60+65=125$), while the second one is processed by a SI wavelet. Your question revived my memories from 2002, Efficient Coherent Noise Filtering, An application of shift-invariant wavelet denoising (source of the picture).
{ "domain": "dsp.stackexchange", "id": 4428, "tags": "matlab, wavelet" }
Return Full Month Name From 3 Letter Month
Question: I have piece metaled this code together from multiple google searches and thanks to the function from the site mentioned in the code. I am sure there are ways to optimize this for speed. What do you guys suggest? using System; using System.Globalization; public class Program { public static void Main() { string shortmonth = "Mar"; string num = GetMonthNumberFromAbbreviation(shortmonth); string monthname = CultureInfo.CurrentCulture.DateTimeFormat.GetMonthName(Convert.ToInt32(num)); Console.WriteLine(monthname); } //https://blogs.msmvps.com/deborahk/converting-month-abbreviations-to-month-numbers/ private static string GetMonthNumberFromAbbreviation(string mmm) { string[] monthAbbrev = CultureInfo.CurrentCulture.DateTimeFormat.AbbreviatedMonthNames; int index = Array.IndexOf(monthAbbrev, mmm) + 1; return index.ToString("0#"); } } Answer: The following avoids assumptions about and/or the need to convert according to culture, by letting the tried-and-tested DateTime plumbing do the work. Creating a translation map once at first instantiation of the class (the static constructor) is the best I can think of to optimise performance. If you use it, don't forget to do error-checking - specifically the result of TryParse. using System; using System.Collections.Generic; public class Program { private static readonly Dictionary<string, string> MonthNameMap; // Static constructor to build the translation dictionary once static Program() { MonthNameMap = new Dictionary<string, string>(); //var months = new List<string>() { "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" }; //foreach (var shortMonthString in months) //{ // DateTime.TryParse($"1 {shortMonthString} 2000", out var dt); // MonthNameMap.Add(shortMonthString, dt.ToString("MMMM")); //} for (int i = 1; i <= 12; i++) { DateTime.TryParse($"2000-{i}-25", out var dt); MonthNameMap.Add(dt.ToString("MMM"), dt.ToString("MMMM")); } } public static void Main() { foreach (var entry in MonthNameMap) { Console.WriteLine($"{entry.Key}: {entry.Value}"); } } } Results: Jan: January Feb: February Mar: March Apr: April May: May Jun: June Jul: July Aug: August Sep: September Oct: October Nov: November Dec: December
{ "domain": "codereview.stackexchange", "id": 33873, "tags": "c#, performance" }
Problem with calculating error rate for KNN
Question: I am trying to validate the accuracy of my KNN algorithm for the movie rating prediction. I have $2$ vectors: $Y$ - with the real ratings, $Y'$ - with predicted ones. When I calculate Standard Error of the Estimate (is it the one I need to calculate?) using following formula: $$\sigma_{est} = \sqrt{\frac{\sum (Y-Y')^2}{N}}$$ I'm getting result of $\sim 1.03$. But I thought that it can't be $> 1$. If it is not, then what does this number say to me? results = load('first_try.mat'); Y = results(:,1); Y_predicted = results(:,2); o = sqrt(sum((Y-Y_predicted).^2)/rows(Y)) Answer: K-NN is a measure of distance, thus the result of your equation will depend on the scale of your data. If the ratings are in a scale from 0 to 100. Then if you always predict very poorly you are evidently going to have values much larger than 1. For example, for a very bad predictor import numpy as np Y = [100, 90, 100, 90] Y_p = [10, 10, 10, 10] np.sqrt(np.sum(np.subtract(Y, Y_p)**2)/len(Y)) 85.14693182963201
{ "domain": "datascience.stackexchange", "id": 3883, "tags": "matlab, k-nn, octave" }
Radon gas in earthquake prediction - Why Rn?
Question: I recently studied an article about predicting earthquakes and how correctly realizing that an increase in $\ce{Rn}$ gas is a sign for the earthquake saved a whole city. The following is derived from the wikipedia page for earthquake prediction: There are reports of spikes in the concentrations of such gases prior to a major earthquake; this has been attributed to release due to pre-seismic stress or fracturing of the rock. One of these gases is radon, produced by radioactive decay of the trace amounts of uranium present in most rock. Radon is useful as a potential earthquake predictor because being radioactive it is easily detected, and its short half-life (3.8 days) makes it sensitive to short-term fluctuations. A 2009 review found 125 reports of changes in radon emissions prior to 86 earthquakes since 1966. However, wikipedia then attempted to prove that most of the assretions related to radon are false statements, but all this leaves me with one question: Why radon? (Why isn't another chemical species used for identifying the danger of feasible earthquake?) Maybe things I studied were too technical for me to understand, so please be as simple as you can be. (Feel free to edit tags) Answer: Uranium is found in low-levels in all rocks and soil. Radon is a gaseous radioactive decay product of uranium. As the uranium undergoes radioactive decay, radioactive radon is generated and trapped in the rocks that contain the uranium. The earthquake theory involving radon suggests that prior to the actual quake, there is some subterranean movement where rocks are crushed, soil is uncompacted and the trapped radon is released producing a pre-quake spike in radon concentration. So radon presents the following attributes: it is present in all rocks and soil if a rock is broken or if soil is disturbed, radon will be released it is gaseous and air currents / thermal gradients will carry it up to the earth's surface producing a detectable plume it is radioactive, this makes detecting small amounts or small changes in radon concentration relatively routine due to well-developed and very sensitive methods for detecting and accurately measuring radioactivity. radon has a very short radioactive half-life, a bit under 4 days. Being a gas and having a short half-life is very useful in terms of measuring radon emissions. If a radon emission spike occurs, the gas will dissipate quickly and after about 10 half-lives (40 days) normal background levels of radioactivity will return.
{ "domain": "chemistry.stackexchange", "id": 2654, "tags": "physical-chemistry, geochemistry" }
Math formulas in Haskell
Question: I wrote some math formulas in Haskell and was wondering about how to clean the code up and make it more readable. import Math.Gamma pdf :: Double -> Double -> Double -> Double -> Double pdf mu alpha beta x = ( beta / (2 * alpha * gamma ( 1/beta) ) ) ** exp ( -1* ( abs(x - mu )/alpha )) ** beta cdf :: Double -> Double -> Double -> Double -> Double cdf mu alpha beta x = 0.5 + signum(x - mu) * ( lowerGamma (1/beta) ((abs(x-mu) / alpha)**beta) / (2 * gamma(1/beta))) main = do let x = pdf 0 1 2 0.5 print x let y = cdf 0 1 2 0.5 print y Answer: Your code looks fine to me. If you wish, you could use Greek letters. This makes the formula easier to read, but if you expect to modify it often, it may be too much trouble to enter the Greek letters. You might find it helpful to split up long formulas, as I've done for pdf. If you can think of more meaningful names than foo and bar, this might be a good idea. However, if this is a well-known formula in your field, splitting it up might actually make it less recognisable. It's a judgement call. If you wrote this in literate Haskell, you could include a nicely-formatted version of the formula using LaTeX. That might be useful if you're writing code with a lot of formulas. import Math.Gamma γ :: Double -> Double γ = gamma pdf :: Double -> Double -> Double -> Double -> Double pdf μ α β x = foo ** bar ** β where foo = ( β / (2 * α * γ ( 1/β) ) ) bar = exp ( -1* ( abs(x - μ )/α )) cdf :: Double -> Double -> Double -> Double -> Double cdf μ α β x = 0.5 + signum(x - μ) * ( lowerGamma (1/β) ((abs(x-μ) / α)**β) / (2 * γ(1/β))) main = do let x = pdf 0 1 2 0.5 print x let y = cdf 0 1 2 0.5 print y
{ "domain": "codereview.stackexchange", "id": 5090, "tags": "haskell, mathematics" }
ROS Indigo Install on RPI3 - URDF Compile Errors
Question: I'm installing ROS Indigo on RPI 3 running Wheezy using the desktop version and following this wiki page: http://wiki.ros.org/ROSberryPi/Installing%20ROS%20Indigo%20on%20Raspberry%20Pi I get as far as compiling URDF from source and I get a bunch of compile-time errors and many more not shown. pi@raspberrypi ~/ros_install_ws $ cd /home/pi/ros_install_ws/build_isolated/urdf && /opt/ros/indigo/env.sh make -j4 -l4 [100%] Building CXX object CMakeFiles/urdf.dir/src/model.cpp.o In file included from /usr/local/include/urdf_model/joint.h:43:0, from /usr/local/include/urdf_model/link.h:44, from /usr/local/include/urdf_model/model.h:42, from /home/pi/ros_install_ws/src/robot_model/urdf/include/urdf/model.h:42, from /home/pi/ros_install_ws/src/robot_model/urdf/src/model.cpp:37: /usr/local/include/urdf_model/pose.h: In member function 'void urdf::Vector3::init(const string&)': /usr/local/include/urdf_model/pose.h:78:25: error: 'stod' is not a member of 'std' In file included from /usr/local/include/urdf_model/joint.h:43:0, from /usr/local/include/urdf_model/link.h:44, from /usr/local/include/urdf_model/model.h:42, from /home/pi/ros_install_ws/src/robot_model/urdf/include/urdf/model.h:42, from /home/pi/ros_install_ws/src/robot_model/urdf/src/model.cpp:37: /usr/local/include/urdf_model/pose.h:90:42: error: 'to_string' is not a member of 'std' In file included from /usr/local/include/urdf_model/joint.h:44:0, from /usr/local/include/urdf_model/link.h:44, from /usr/local/include/urdf_model/model.h:42, from /home/pi/ros_install_ws/src/robot_model/urdf/include/urdf/model.h:42, from /home/pi/ros_install_ws/src/robot_model/urdf/src/model.cpp:37: /usr/local/include/urdf_model/types.h: At global scope: /usr/local/include/urdf_model/types.h:51:9: error: 'shared_ptr' in namespace 'std' does not name a type /usr/local/include/urdf_model/types.h:53:1: error: 'shared_ptr' in namespace 'std' does not name a type /usr/local/include/urdf_model/types.h:53:1: error: 'shared_ptr' in namespace 'std' does not name a type /usr/local/include/urdf_model/types.h:53:1: error: 'weak_ptr' in namespace 'std' does not name a type /usr/local/include/urdf_model/types.h:54:1: error: 'shared_ptr' in namespace 'std' does not name a type /usr/local/include/urdf_model/types.h:54:1: error: 'shared_ptr' in namespace 'std' does not name a type /usr/local/include/urdf_model/types.h:54:1: error: 'weak_ptr' in namespace 'std' does not name a type /usr/local/include/urdf_model/types.h:55:1: error: 'shared_ptr' in namespace 'std' does not name a type /usr/local/include/urdf_model/types.h:55:1: error: 'shared_ptr' in namespace 'std' does not name a type I have liburdfdom-dev and liburdfdom-headers-dev installed via compiled source as per the wiki page and confirmed (with dpkg). Something appears to be missing here as this just isn't right... UPDATE: I've added: set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11") Which now results in a single error of: /home/pi/ros_install_ws/src/robot_model/urdf/src/model.cpp:174:33: error: no match for 'operator=' in 'model = urdf::parseURDF(const string&)()' Tim Originally posted by Tardoe on ROS Answers with karma: 11 on 2016-09-14 Post score: 1 Original comments Comment by alienmon on 2016-09-14: open /home/pi/ros_install_ws/src/robot_model/urdf/src/model.cpp , what is it in line 174? You might want to check this Comment by Tardoe on 2016-09-14: I don't quite understand - parseURDF() is mean to return a boost::shared_ptr which is the same type as the model object. This is line 174: model = parseURDF(xml_string); The sig for parseURDL is: boost::shared_ptr parseURDF(const std::string &xml_string) Comment by alienmon on 2016-09-15: According to this bug report "We believe that the bug you reported is fixed in the latest version of ros-robot-model, which is due to be installed in the Debian FTP archive.". Did you install the atest version? Comment by nickxiang0306 on 2016-10-17: Hello, where did you add "set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")" ? Thanks in advanced Comment by ElGalloGringo on 2016-10-18: Did you ever get this figured out? I am running into the exact same problem. I have tried stopping by the IRC channel, but can't seem to get any help. I was hoping that you lack of follow-up indicated that you figured it out. Answer: I kept hammering away at this and found the solution (at least it got past that part). Apparently, ROS is moving away from boost::shared_ptr and moving toward using the std::shared_ptr available in c++11 and higher. Because the instructions at http://wiki.ros.org/ROSberryPi/Installing%20ROS%20Indigo%20on%20Raspberry%20Pi are a mix of installing packages and installing from source, this caused a discrepency where urdfdom_headers was a newer version that had the conversion from boost::shared_ptr to std::shared_ptr completed, but the rest of ROS Indigo is still expecting boost::shared_ptr. So, the solution was to go back and find the last commit at https://github.com/ros/urdfdom_headers/commits/master that still had the boost::shared_ptr in the types.h file. Interestingly enough, you and I both originally thought that we needed to add c++11 support because types.h was using std::shared_ptr. This solved the problems of the std::shared_ptr support, but introduced new ones where it was trying to assign an std::shared_ptr to a boost::shared_ptr and vice versa. The section should become liburdfdom-headers-dev: cd ~/ros_catkin_ws/external_src $ git clone https://github.com/ros/urdfdom_headers.git $ cd urdfdom_headers $ git reset --hard 9aed725 $ cmake . $ sudo checkinstall make install Here the 'git reset --hard 9aed725' moved the revision back to the point where boost::shared_ptr is still in there. This also eliminated the need to add c++11 support in the CMakeLists.txt file. Originally posted by ElGalloGringo with karma: 26 on 2016-10-18 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by 4ronie4 on 2017-04-26: I have replaced my ArchLinux system package urdfdom_headers by one compiled with the reverted commit, and then the build error has changed, but still build failing: it is still trying to assign std::shared_ptr to a boost::shared_ptr
{ "domain": "robotics.stackexchange", "id": 25761, "tags": "urdf, ros-indigo" }
Center of mass of equilateral triangle
Question: I'm trying find the center of mass ($R$) of a uniform density equilateral triangle without using symmetries to find the x coordinate of the vector $R$, but I can't get the expected $R_x = \frac{a}{2}$, do not matter what I try. Here is my attempt: With the base of triangle (=$a$) coinciding with the x axis and the origin coinciding with the origin of my coordinate system I know that $h = \frac{a \sqrt{3}}{2}$. My approach is split the triangle in two right triangles and then sum the $x$ coordinate of the center of mass of both of them. By definition $R_x=\frac{1}{M}\sum_{i}^{N}m_i r_{x_i}$. Then when $\lim_{N \to \infty}$$R_x$ becomes $\frac{1}{M}\int r_{x}dm$. Let $\frac{M}{A}$ be the mass per unity of area, then $dm=\frac{Mdydx}{A}$. Writing the height $y$ (of the first half of the triangle) in function of $x$ results in $y=\frac{2xh}{a}$, hence $ y=x\sqrt{3}$. Then, intending add the center of mass of both right triangles, I can write $\frac{1}{M}\int r_{x}dm$ as: $$\frac{1}{M}\int_{0}^{\frac{a}{2}} \int_{0}^{x\sqrt{3}} x \frac{M}{A}dydx + \frac{1}{M} \int_{\frac{a}{2}}^{a} \int_{x\sqrt{3}}^{0} x \frac{M}{A}dydx $$ $$=\frac{1}{A} {\left ( \int_{0}^{\frac{a}{2}} \int_{0}^{x\sqrt{3}} x dydx + \int_{\frac{a}{2}}^{a} \int_{x\sqrt{3}}^{0} x dydx\right )}.$$ From that point (which I suppose/hope not be wrong, except for the integration intervals) I've tried a plenty of changes and I got anything but $R_x = \frac{a}{2}$. Could someone please explain what is wrong and how to fix it? Answer: Check the bounds in the second integral, and plug in values at the extremes. When $x=a$, the far corner of the triangle, you should be integrating $y$ from $0$ to $0$, but the integral runs from $0$ to $a \sqrt{3}$. This is because you used the equation $y = \sqrt 3 x$ for the upper bound of the second half of the triangle, when it only applies to the first half. The second half of the triangle is bounded by a line with negative slope and an $x$ intercept at $a$, namely $y = -\sqrt 3 (x - a)$. Using that as your bound, your second integral would be $$ \int_{\frac a2}^{a}\int_0^{-\sqrt 3 (x - a)} x \, dy\,dx $$ which should provide the correct result.
{ "domain": "physics.stackexchange", "id": 20579, "tags": "homework-and-exercises, geometry" }
Do lone pairs on substituents (e.g. in aniline) count towards Hückel's rule?
Question: Why is aniline aromatic? Doesn't it have 8 π electrons including the lone pair on nitrogen, thereby violating Hückel's rule? The way I see it, there are 6 π electrons from the benzene ring, and an additional 2 from the $\mathrm{sp^2}$hybridized nitrogen. I am suspecting that I am mis-applying Hückel's rule. I asked an instructor today and the only rationalization he had to offer was that "we just don't count those electrons on the nitrogen" ... which is deeply unsatisfying. He still insisted that aniline is aromatic. Can someone offer an alternate, more through explanation? Answer: It is only the electrons from atoms in the ring that count when applying Huckel's rule. Electrons from substituents on the ring are only cross-conjugated with the aromatic π-system. When you think of it in terms of perturbational molecular orbital theory the substituent electrons take the part of a perturbation for the ring's aromatic system. The reason for only using the ring electrons traces back to the origin of aromaticity. Take a look at the π-orbital energy levels of benzene (figure taken from p 36 of Fleming, Molecular Orbitals and Organic Chemical Reactions, Reference Edition): Here, $\alpha$ is the energy of an isolated p-orbital and $\beta$ is the stabilization energy of the π-bond in ethylene. You can see that for benzene you get the best stabilization if you have 6 π electrons ($4n + 2$ for $n=1$) in the system because then you fill the stabilized MOs $\psi_1$, $\psi_2$ and $\psi_3$ which are lower in energy than an isolated p-orbital and gain the maximum stabilization energy possible. If you'd have more electrons in the system you would have to fill the destabilized MOs $\psi^{*}_{4}$, $\psi^{*}_{5}$ and $\psi^{*}_{6}$ which would cost a lot of energy because of the large energy gap between the bonding and antibonding MOs. Now, what happens if you add a cross-conjugating substituent to the ring. This can be seen in the (Huckel-)MO-diagram of styrene (Fleming, p 71). The main effects of the substituent are that the degeneracies of some orbitals are lifted and that the orbital energies shift a bit, especially those of HOMO and LUMO are of interest. In the case of styrene the HOMO lies higher in energy and the LUMO lies lower in energy compared to benzene. Now there are 4 stabilized energy levels instead of 3 - any electron that occupies one of those levels will lead to a stabilization of the molecule. But that is nice and well as you get 2 electrons from the substituent which will want to occupy one of the 4 levels and that leaves you with 3 levels that have to be occupied by ring electrons. Thus, your system will be most stable if the π-system of the ring contains 6 electrons, exactly like in benzene. Any more than 6 electrons will lead to destabilization as there is still quite a big HOMO–LUMO gap (even though it is smaller than in benzene). The argument I made for styrene will remain valid for other molecules with different substituents (like aniline). The MO scheme of aniline will look a little different but the essential features should be the same.
{ "domain": "chemistry.stackexchange", "id": 16447, "tags": "organic-chemistry, resonance, aromaticity" }
Text Based Blackjack
Question: I wanted to make a blackjack game to use while bored, but it turned out to be far more complicated than I thought. from random import choice, randint MASTER_DECK = ["A", "A", "A", "A", "2", "2", "2", "2", "3", "3", "3", "3", "4", "4", "4", "4", "5", "5", "5", "5", "6", "6", "6", "6", "7", "7", "7", "7", "8", "8", "8", "8", "9", "9", "9", "9", "10", "10", "10", "10", "J", "J", "J", "J", "Q", "Q", "Q", "Q", "K", "K", "K", "K"] def setup(deck): """Sets up all game variables""" # Initialize all of the hands player_hand, deck = pick_cards(deck) dealer_hand, deck = pick_cards(deck) return deck, player_hand, dealer_hand def pick_cards(deck): """Deals two random cards""" hand = [] if len(deck) <= 6: deck = MASTER_DECK.copy() for card in range(0, 2): chosen_card = choice(deck) hand.append(chosen_card) deck.remove(chosen_card) return hand, deck def print_ui(player_hand, dealer_hand, deck, game_state): """Prints out the display that tells the user there cards""" print() if game_state == "player_dealing": print("The dealer has these cards:\n_, " + ", ".join(dealer_hand[1:])) print() print("You have these cards:\n" + ", ".join(player_hand)) print() print(f"There are {len(deck)} cards left in the deck") elif game_state == "dealer_dealing": print("The dealer has these cards:\n" + ", ".join(dealer_hand)) print() print("You have these cards:\n" + ", ".join(player_hand)) print() if have_won(player_hand, dealer_hand): print("You have beaten the dealer.") else: print("You have not beaten the dealer.") else: print("Something has gone wrong") while True: pass def have_won(player_hand, dealer_hand): """Checks if the player has won""" numeric_player_hand = numeric_cards(player_hand.copy()) player_hand_total = 0 for card in numeric_player_hand: player_hand_total += card numeric_dealer_hand = numeric_cards(dealer_hand.copy()) dealer_hand_total = 0 for card in numeric_dealer_hand: dealer_hand_total += card if dealer_hand_total > 21: if player_hand_total > 21: return False return True if dealer_hand_total == 21: return False if dealer_hand_total < 21: if dealer_hand_total < player_hand_total <= 21: return True return False def betting_phase(tokens): """Takes the users bet""" print(f"You have {tokens} tokens.") while True: try: bet = int(input("Please enter you bet: ")) if int(bet) > 0: if (tokens - bet) >= 0: break print("Do not bet more than you have.") else: print("Please enter a number greater than zero.") except ValueError: print("Please enter a number.") return tokens - bet, bet def player_dealing(deck, player_hand, game_state): """Handles dealing to the player""" if not deck: print("As there are no more cards left, the round ends.") game_state = "dealer_dealing" else: while True: user_command = input("Would you like to hit or to stay? (H/S): ").lower() if user_command == "h": chosen_card = choice(deck) player_hand.append(chosen_card) deck.remove(chosen_card) break elif user_command == "s": game_state = "dealer_dealing" break else: print("Please only enter H for hit or S for stay.") return deck, player_hand, game_state def dealer_dealing(deck, dealer_hand): """Handles dealing to the dealer""" while True: if not deck: break numeric_dealer_hand = numeric_cards(dealer_hand.copy()) hand_total = 0 for card in numeric_dealer_hand: hand_total += card if hand_total < 16: chosen_card = choice(deck) dealer_hand.append(chosen_card) deck.remove(chosen_card) elif hand_total == 16: if randint(0, 1): chosen_card = choice(deck) dealer_hand.append(chosen_card) deck.remove(chosen_card) else: break elif 11 in numeric_dealer_hand and hand_total > 21: for card_number, card in enumerate(numeric_dealer_hand): if card == 11: numeric_dealer_hand[card_number] = 1 else: break return deck, dealer_hand def numeric_cards(hand): """Turns card letters into their number values""" for card_number, card in enumerate(hand): if card == "J" or card == "Q" or card == "K": hand[card_number] = 10 elif card == "A": hand[card_number] = 11 else: hand[card_number] = int(hand[card_number]) hand_total = 0 for card in hand: hand_total += card if hand_total > 21 and 11 in hand: for card_number, card in enumerate(hand): if card == 11: hand[card_number] = 1 return hand def play_again(): """Allows user to play again or quit""" while True: play_again = input("Do you want to play again? (Y/N): ").lower() if play_again == "y": break elif play_again == "n": quit() print("Please only enter a Y or N") deck = MASTER_DECK.copy() tokens = 200 while True: game_state = "betting" playing_game = True deck, player_hand, dealer_hand = setup(deck) while playing_game: if game_state == "betting": tokens, bet = betting_phase(tokens) game_state = "player_dealing" else: print_ui(player_hand, dealer_hand, deck, game_state) deck, player_hand, game_state = player_dealing(deck, player_hand, game_state) if game_state == "dealer_dealing": deck, dealer_hand = dealer_dealing(deck, dealer_hand) if have_won(player_hand, dealer_hand): tokens += 2 * bet print_ui(player_hand, dealer_hand, deck, game_state) playing_game = False if tokens: play_again() else: input("You have no more tokens to spend. Hit enter to quit.") quit() Originally I wanted to add some AI players as well as including the split and double functions, but the code got so complicated that I thought it would be better not to include those unless the code was cleaned up. Is there any way to clean this up and make it easier to add more features? As well, is there anything else that could be made better? Answer: K00lman, I wanted to chime in with my points too, some of these are covered by other answers. The most obvious problems that jump out with the code: Master deck No suits and the deck creation is literal - not programmatic. Deck Setup Lots of duplicated code Pick Cards Using .copy() to duplicate a deck when card count is low (bad structure) Using a loop to remove and append cards Printing card status Using print() instead of "\n" Using join and slice instead of a function Determining Winning hand Using .copy() adding values of the copied structure instead of the original using a function to convert strings to values numeric_dealer_hand repeatedly instead of once on a new card received early exit if the player busts, instead of confirming for dealer bust too (draw) -using only True and False to determine outcomes (draw, loss, win) Betting Phase not sanitising input/restricting input using break (spaghetti code) returning multiple values and undefined values (bet) in separate code paths Player dealing Using While True and break for logic control (spaghetti code) 3 variables in, 3 variables out - implies this function and the caller both need refactoring Dealer dealing: 2 variables in, 2 variables out (same comment as above - but why only 2 now? Using While True and break for logic control (spaghetti code) etc. more of the above. Let's run over a few of these to help you understand how you can improve your coding. The Card This is probably the start of the design issues, the intermixing of the physical cards verses intangible values associated with the card. To explain - a card is a card. Representing the card is important, but it has no value until you decide what game to play. The game determines the value of the cards, not the cards themselves. That might be confusing - but here's a perfect example: Aces. What are they? Are they 1 or 11? The answer is it depends on the game situation. And it's THERE where the value determination should take place, not with the card. When you program, most of the issues you encounter are due to bad design, data or functions crossing over each domain's barrier, or state (value of variables) changing where it shouldn't. Ensuring separating the domains is something you'll learn as you improve your craft. A book named "Code Complete" can help you to recognise common mistakes. It's good for beginners and intermediate coders. After that, "Clean Code" will begin the introduction to improving how you think about coding, but it's quite advanced. With that said, let’s construct a proper deck, starting with the card object. class Card: def __init__(self, rank, suit): self._rank, self._suit = rank, suit @property def rank(self): return self._rank @property def suit(self): return self._suit def __str__(self): return f"{self._rank}{self._suit}" As we can see, we create the card via the init method, and when the card is printed, it will display its rank and suit. Properties of the card, we can request these individually (.rank or .suit). So, we now think about the types of cards. Cards of rank 2-10 in suits Diamonds, Hearts, Clubs and Spades (D/H/C/S) are simple, Face cards are ranked as King (KD, KH, KC, KS), the Queen and the Jack. Lastly, 4 Ace cards, one per suit. Cards 2-10 will work fine as a Card object, but Face cards and Aces require small modifications. class Ace(Card): def __init__(self, rank, suit): super().__init__(rank, suit) class FaceCard(Card): def __init__(self, rank, suit): super().__init__(rank, suit) self._rank = ["K", "Q", "J"][self.rank] FaceCard(Card) looks different, but Ace(Card) you could reduce down to a simple Card - but we don’t - because Aces are seen as unique by us. It doesn’t cost us any more to represent them separately. Just a quick comment on the FaceCard ranking code: >>> ["K", "Q", "J"][1] 'Q' If that is still confusing, open python and change 1 to either 0 or 2 to understand the indexing. The Deck The deck will be made up of 52 cards. The deck has an action of dealing a card. Does a Blackjack deck have any other actions? Not that I can think of. Let's create the deck then. We'll call it BackjackDeck - but you could use this skeleton for different card games, and add Jokers too. from random import shuffle class BlackjackDeck: """ A set of cards suitable for a Blackjack game which deals a card already shuffled """ def __init__(self): self.cards = [] for suit in ["H", "D", "C", "S"]: self.cards += [Card(rank, suit) for rank in range(2, 11)] self.cards += [FaceCard(rank, suit) for rank in range(3)] self.cards += [Ace("A", suit)] shuffle(self.cards) def deal(self): for card in self.cards: yield card As mentioned previously, cards with face 2-10 are built easily, FaceCards and Aces override their classes when instantiated. Looking at Aces, we have only a single Ace per suit, which is why there's no range(x) for them. We create them independently using "A" to make it obvious. We could do some trick, but the next programmer who reads your code will need to mentally deconstruct your trick to understand your code. Tricks cost companies time and money, so try to make your code obvious and easy to read. If they don't understand your trick, it's likely they will rewrite your trick. If there are dependencies on the trick, there will be broken code somewhere downstream. The Game After creating the deck like I have, it's now incompatible with your existing code. We will resolve those issues mentioned at the top with changed code. Firstly, we have to modify you code to include the entry point. This is a basic step in Python, if it's not there, when another script loads your code, it will start playing automatically, rather than loading the objects (breaking the import system). if __name__=="__main__": deck = MASTER_DECK.copy() tokens = 200 while True: So, thinking about design. Who, What, When, Where, Why. Who plays the game? The player and the dealer. What wins the game? A better hand than the other players, 21. When is the hand calculated? After each card. Where? Why? Never mind those two. So, we need a player object which is the same - in real life the dealer has a few rules - but they're essentially the same. Winning is actually calculated after all the players have played their hands and the dealer finishes theirs - so we have a set method of playing. Calculating the hand should be done after a card is given, so we will put the logic for that into the Player.hit() function. Okay, with this design, we need a Player object, that can hit (and stand), can calculate, can track tokens obviously. What else? Well, as Player and Dealer are going to be the same but different, we need some way to know who is the dealer and who isn't. class Player: def __init__(self, name, tokens, is_dealer=None): self._name, self._tokens = name, tokens self._cards = [] self._hand_value = 0 self._is_dealer = True if is_dealer else False self._has_busted = False That looks pretty good. We have what we needed, and added a property if they've busted. Now, let's add the .hit() function - when we hit, we're given a new card, we calculate our hand value, and let's set our busted flag if we exceed 21: def hit(self, card): self._cards.append(card) self._hand_value = self._calc_hand_value() if self._hand_value > 21: self._has_busted = True We need a calculate function, the ._calc_hand_value() - def _calc_hand_value(self): total = 0 for card in self._cards: is_numeric = str(card.rank).isnumeric() if is_numeric: total += card.rank else: if card.rank in ("J", "Q", "K"): total += 10 for card in self._cards: if card.rank == "A": total += 11 if total + 11 <= 21 else 1 return total We know that the number cards and the face cards have static values, with the Ace card being variable. So, we calculate the value of Ace after the other cards. Admittedly there is lots more optimisation we can perform in this function, but for this, it's fine. But what about code that the dealer has, which the player doesn't? Primarily this comes down to the hidden card when looking at the hand. So, if we need to change the output, let's add a local variable current_cards and loop through them, hiding the first card if the player is a dealer: def show_cards(self): current_cards = [] for card in self._cards: if self._is_dealer: if not current_cards: current_cards.append("_") continue current_cards.append(str(card)) print(f"{self._name} currently has cards: {current_cards}") Which raises the point, when the dealer flips over his card - the other players are already finished with their hands. So we need a way to revert the dealer back into a player, to use all the standard logic. @property def is_dealer(self): return self._is_dealer @is_dealer.setter def is_dealer(self, value): if self._is_dealer and value == False: print("Dealer flips over his hidden card") self._is_dealer = value Here we have the "is this player a dealer?" property, and the "switch dealer to player" setter. Admittedly, you could design a class Player(Gambler): and a class Dealer(Gambler): to both inherit the parent class Gambler - but I'll leave that up to you (for when you create AI players). Now, during the game, when we set the dealer to: dealer.is_dealer = False dealer.show_cards() the dealer will "flip over the hidden card" and play out the hand like a regular player with the same logic. This is another design issue - when making games - it's important that as much behavior reuses code - except when it's very different. Such as Fighter(Character) and Mage(Character) all using .walk() and .run() from Character. When unique behavior is necessary, those should only appear in the Fighter or Mage objects. In this Blackjack instance, only a single flag differentiates a player from a dealer. The benefit is all the actions are the same, and if any bugs are there, they will appear quickly. Status This is getting rather long, so here we will give a quick status. We've covered the points raised in Master deck, Deck Setup, Pick Cards, and Printing card status. Next steps are Determining Winning hand, Betting Phase, Player dealing, Dealer dealing. Betting & Determining the Winning Hand Betting consists of asking how many tokens the player wishes to wager, and upon winning, return double, upon losing, return zero - but we also have the scenarios of drawing - both player and dealer having the same - or both busting. When we have multiple outcomes - you cannot use True or False. Boolean is for only a single outcome, yes, or no. from enum import Enum, auto class Outcome(Enum): Dealer = auto() Player = auto() Draw = auto() Unknown = auto() The Unknown scenario isn't YAGNI (You Ain't Gonna Need It), there were situations after adjusting your code where the outcome was indeterminate. Hence, it's been left in for that random cosmic ray flipping memory bits (even though I'm pretty sure all the outcomes are now covered). The winning hand is classified here from the status in both player/dealer objects. print("\nFinal Game State: ") print_game_state(player, dealer) if dealer.has_busted and player.has_busted: print("\t\t\t\tBoth Dealer and Player lost. Returning initial bet") return Outcome.Draw if dealer.has_busted: print("\t\t\t\tDealer Busted") return Outcome.Player if player.has_busted: print("\t\t\t\tPlayer busted") return Outcome.Dealer if player.hand_value > dealer.hand_value: print("\t\t\t\tPlayer beats dealer") return Outcome.Player if dealer.hand_value > player.hand_value: print("\t\t\t\tDealer beat the Player") return Outcome.Dealer if player.hand_value == dealer.hand_value: print("\t\t\t\tDealer draws the Player. Returning initial bet") return Outcome.Draw print("\t\t\t\t?Unknown scenario?") return Outcome.Unknown Now, onto the betting. Let's address user input firstly. We know that there are limited inputs, hit, stand, quit, token amount. Let's put these parameters into the choice function and get a value returned. def show_menu_in_game(tokens): separator() print(f"Your currently have {tokens} tokens remaining.") print("1. Play another hand") print("q. Exit to Main Menu") .... show_menu_in_game(tokens) choice = get_user_choice([1, "q"]) if choice == "q": do_play = False .... def get_bet_amount(tokens): print(f"How many tokens do you want to bet for this game of Blackjack? 1-{tokens}") return get_user_choice(range(1, tokens + 1), f"between 1 and {tokens}") def get_user_choice(params, display_params=""): is_a_valid_choice = False if display_params == "": display_params = params while not is_a_valid_choice: choice = input(f"\nPlease select a choice ({display_params}): ") if choice.isalpha(): choice = choice.lower() if choice.isnumeric(): choice = int(choice) if choice in params: return choice print(f"That's unfortunately not a selection you can make. Please select one of these: {display_params}") Admittedly we don't do anything with is_a_valid_choice - but it does make it clearer to the reader that we're going to continue to loop until the user selects only what is possible. Dealer/Player Dealing I've not bothered to optimise the start of the game (it's not pretty): def get_game_result(tokens): deck = BlackjackDeck() cards = deck.deal() player = Player("Player", tokens) dealer = Player("Dealer", 999, True) player.hit(next(cards)) player.hit(next(cards)) dealer.hit(next(cards)) dealer.hit(next(cards)) print_game_state(player, dealer) print(f"Player's hand calculates to {player.hand_value}") But if I was going to make it cleaner, there would be a list of players chosen from the menu (Human, AI, Dealer) and passed into the game function. From that, loop over each and allocate 2 cards from the deck. The player phase is much longer than the dealer, due to the inputs, so let's just look at the dealer: from time import sleep if dealer.hand_value < 17: playing = True while playing: print("Dealer takes another card...") sleep(2) dealer.hit(next(cards)) dealer.show_cards() print(f"Dealer's hand calculates to {dealer.hand_value}") if dealer.hand_value > 21: playing = False print(f"Dealer Busted with {dealer.hand_value}") else: if 17 <= dealer.hand_value <= 21: print(f"Dealer stands according to the rules with {dealer.hand_value}") playing = False Most casinos have rule that the dealer must hit on a certain value or less to keep the game interesting. I consulted Wikipedia and took the most discussed value, 17. Adding the time pauses in, gave it a little more excitement. Well, that covers the issues which jumped out at me - and through class objects and their methods being used, demonstrating how to perform other functionality such as hand calculations to reduce chunks of coding. I haven't pasted my whole rewrite of your code intentionally, because the last few pieces are reasonably easy to finish off, but I'm happy to answer questions if you do attempt this exercise. Keep coding!
{ "domain": "codereview.stackexchange", "id": 42043, "tags": "python, python-3.x, playing-cards" }
Getting the current IP from the command line
Question: I made a tiny little bash script, that prints the current IP Address to the command line. I checked the script with ShellCheck (0.4.6-1) on a local debian install and I get the following message: SC2026: This word is outside of quotes. Did you intend to 'nest '"'single quotes'"' instead'? What is wrong with the way I have been line-breaking the pipe, or what is proper practice? #!/bin/bash curl -s POST whatismyip.org\ | grep -A1 'Your IP Address'\ | awk -F '>' '{print $2}'\ | sed 's/<\/h2//g'\ | sed 's/<\/span//g' Answer: Your code is fine, except for the stray method. POST at that point isn't valid, and you will try to connect to a server called POST. Just try your curl call with -Iv, you will notice two connections: curl -Iv POST whatismyip.org If you want to use the method, you would have to write -x POST, but to just get information, you use -x GET (which is the default when you don't transfer any data). Also, you should specify the protocol: curl -s http://whatismyip.org That being said there are other sites such as ipecho.net that provide a direct method: curl -s -L http://ipecho.net/plain; echo By the way, whatismyip.org seems to be up for sale, so you might not get your IP in another few months, or the format might change. ipecho.net has a strange whois entry too, but at least its mentioned in a highly voted answer and returns your IP without the need to extract it from the HTML.
{ "domain": "codereview.stackexchange", "id": 26598, "tags": "beginner, bash, curl, ip-address" }
Why changing the mass of forth object does not change the center of mass in the given problem?
Question: The problem is as follows. Four masses of $1$ $kg$, $2$ $kg$, $3$ $kg$, and $4$ $kg$ are arranged in square shape. The side length of the square is $1$ $m$. Find the location of the center of mass of this system. I have found the solution to it to be ($1/2$, $3/10$) by representing the masses as points on the cortisone plane like here. However in the calculations, $4$ $kg$ is always multiplied by zero, so that made me wonder if the center of mass is the same even if the fourth object is $1000$ $kg$. Why is that? Answer: No, It would change. Note the expression $$\mathbf{r}_\text{cm}=\frac{\sum_i m_i\mathbf{r}_i}{\sum_im_i}=\frac{m_1\mathbf{r}_1+\sum_im_i\mathbf{r}_i}{m_1+\sum_i m_i}$$ If $\mathbf{r}_1=0$, $$\mathbf{r}_\text{cm}=\frac{\sum_{i=2}m_i\mathbf{r}_i}{m_1+\sum_{i=2}m_i}$$ Note $m_1$ in the denominator.
{ "domain": "physics.stackexchange", "id": 79138, "tags": "homework-and-exercises, newtonian-mechanics, mass" }
Is the observable universe homeomorphic to $B^3$?
Question: Is the observable universe homeomorphic to $B^3$? Where $$B^3=\{x\in \mathbb{R}^3 : |x|\leq 1 \}$$ Or is it even sensible to talk about space (rather than spacetime) as a 3 manifold? Answer: I think that "observable universe" is not defined precisely enough to make such statements about it. The spacetime events that we can see are the events on our past light cone. That light cone intersects the last-scattering surface (about 400,000 years after the big bang) in an approximate sphere. By convention the light cone is cut off there (because we can't see through the opaque plasma before last scattering—though future neutrino and gravitational-wave astronomy might change that). The matter passing through that sphere (which is also the boundary of the light cone) will, by the continuity equation, pass through the light cone at some point, while matter outside can't without exceeding the speed of light. The matter that passes through the sphere is called the observable universe. In a perfectly uniform zero-pressure universe described exactly by an FLRW metric, and in which the last-scattering time is precisely defined, the sphere will be exactly a sphere, and the locus of observable matter will be exactly a cylinder ($\mathbb B^3 \times \mathbb R$) in FLRW coordinates. The metric breaks spacetime symmetry, giving a natural separation into space and cosmological time, and a natural correspondence between spatial points at different cosmological times. You could think of this universe as a 3D space with a geometry that's time-invariant up to an overall conformal scale factor (the inflating-balloon analogy, sort of). The observable universe is topologically and even metrically a ball in that space. In reality, the universe went from almost opaque to almost transparent over some nonzero time, so there is an inherent ambiguity in the cutoff of the past light cone and the boundary of the observable universe. Also, the matter making up the observable universe does not stay in place relative to the FLRW "space". In the case of identical quantum particles, you can't trace the later motion of the matter that passed through the sphere even in principle. And since the geometry of spacetime is determined by the matter distribution, the FLRW metric is not exactly correct and there is no precisely defined FLRW "space". The observable universe is still a "fuzzy ball", but I don't think "fuzzy" can be given a precise mathematical definition.
{ "domain": "physics.stackexchange", "id": 15705, "tags": "cosmology, topology" }
Maximum and minimum integer number of latitude and longitude
Question: I'm developing an application where the user inputs latitude and longitude numbers in float type format. I have to validate that the format of the numbers are correct. My doubt is what is the maximun and minimum integer number on both latitude and longitude. Let me give an example: latitude: <max/min>.666666 longitude: -<max/min>.666666 What are the correct max and min in both cases? Answer: Latitude is easy: -90 to +90. Longitude can use one of two conventions: 0 to +360, or -180 to +180. You may want to handle both gracefully.
{ "domain": "earthscience.stackexchange", "id": 2334, "tags": "coordinate-system, gps" }
Doing navigation, while avoiding some region
Question: Hi All, I am using navigation stack with PR2 and turtlebot in a simulated home environment. Is it possible to constrain the path of the robot so that it can avoid some region. For example, say floor of the room has white tiles with few red tiles at center, and I want the robot, not to run over the red tiles. Or say there is some marking or carpeting on the path (having different color), and I need the robot to do navigation only on that path. Any suggestion or pointers on how I can achieve this will be greatly appreciated. I needed this for executing action for a command like : "Move to the chair, avoiding the red region." Originally posted by aknirala on ROS Answers with karma: 339 on 2013-01-15 Post score: 0 Answer: You would have to write a custom Cost function depending on your particular need!! In your case when you said that you wanted to avoid red tiles you certainly would term them as obstacles in your code..and then run a suitable path planner.. Originally posted by Karan with karma: 263 on 2013-01-15 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by aknirala on 2013-01-15: Hi Karan, Thanks for the reply. Is there any tutorial etc., on writing the custom cost function. Or can you please point me to specific file where I need to make the changes in. In my case the cost function will change depending on the command (and that is exactly what I want to do.) Thanks Comment by Karan on 2013-01-15: You would have to write a node yourself which would process the input image stream and mark the regions which turn out to be red. Then make a 2d grid of surrounding region which has those regions marked to be non traversable. Then run a suitable planner..
{ "domain": "robotics.stackexchange", "id": 12419, "tags": "navigation" }
Move_base Costmaps vs. NavFn Costmap
Question: When using standard PR2 Navigation (roslaunch pr2_2dnav pr2_2dnav.launch) there are three costmaps listed. $> rostopic list | grep /obstacles /move_base_node/NavfnROS/NavfnROS_costmap/obstacles /move_base_node/global_costmap/obstacles /move_base_node/local_costmap/obstacles Why does Navfn (which is the global planner) get its own costmap? Shouldn't it just use global_costmap? Originally posted by David Lu on ROS Answers with karma: 10932 on 2012-07-03 Post score: 1 Answer: Navfn uses the same global costmal as the local planner. The topics are just for publishing (e.g. for debugging), navfn does not get the costmap via that topic. By default navfn does not publish anything on that topic. You can imagine that it is possible to use navfn without move_base, having its own topics is for such cases, maybe. Originally posted by KruseT with karma: 7848 on 2012-07-03 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 10039, "tags": "navigation, costmap, costmap-2d" }
Pulling out a certain gene in a volcano plot
Question: I have generated a volcano plot with a differential expression file. Code for inputing file: macrophage_list <- read.table("differential_expression_macrophage.csv", header = T, sep = ",") library(tidyr) final_df <- df %>% pivot_longer(., -c(Feature.ID, Feature.Name), names_to = c("set",".value"), names_pattern = "(.+)_(.+)") # A tibble: 80 x 6 Feature.ID Feature.Name set Mean.Counts Log2.fold.change Adjusted.p.value <fct> <fct> <chr> <dbl> <dbl> <dbl> 1 a A Cluster.1 0.000961 0.292 1 2 a A Cluster.2 0.000902 0.793 1 3 a A Cluster.3 0.00181 1.46 0.758 4 a A Cluster.4 0.000642 0.269 1 5 b B Cluster.1 0.000320 1.95 0.910 6 b B Cluster.2 0.00180 4.77 0.154 7 b B Cluster.3 0 2.19 1 8 b B Cluster.4 0 1.66 1 9 c C Cluster.1 0.00128 -2.01 0.0467 10 c C Cluster.2 0.00632 0.352 1 # … with 70 more rows output of head(final_tumor) # A tibble: 6 x 6 Feature.ID Feature.Name set Mean.Counts Log2.fold.change Adjusted.p.value <fct> <fct> <chr> <dbl> <dbl> <dbl> 1 ENSG00000227232.5 WASH7P Cluster.1 0 1.50 1 2 ENSG00000227232.5 WASH7P Cluster.2 0 1.73 1 3 ENSG00000227232.5 WASH7P Cluster.3 0 1.77 1 4 ENSG00000227232.5 WASH7P Cluster.4 0.00114 4.30 0.293 5 ENSG00000227232.5 WASH7P Cluster.5 0 2.15 1 6 ENSG00000227232.5 WASH7P Cluster.6 0 1.22 1 output of tail(final_tumor) # A tibble: 6 x 6 Feature.ID Feature.Name set Mean.Counts Log2.fold.change Adjusted.p.value <fct> <fct> <chr> <dbl> <dbl> <dbl> 1 ENSG00000210196.2 MT-TP Cluster.6 0.0699 -0.202 0.790 2 ENSG00000210196.2 MT-TP Cluster.7 0.0801 0.0386 1 3 ENSG00000210196.2 MT-TP Cluster.8 0.0711 0.0875 1 4 ENSG00000210196.2 MT-TP Cluster.9 0.0152 -2.31 0.00127 5 ENSG00000210196.2 MT-TP Cluster.10 0.0147 -2.30 0.00612 6 ENSG00000210196.2 MT-TP Cluster.11 0.122 0.762 1 Code for generating volcano plot: library(ggplot2) library(ggrepel) ggplot(final_tumor, aes(x = Log2.fold.change,y = -log10(Adjusted.p.value), label = Feature.Name))+ geom_point()+ geom_text_repel(data = subset(final_tumor, Adjusted.p.value < 0.05), aes(label = Feature.Name)) Now, I want to pull out a certain gene, Casp14, from the list and box it on the plot. How do I do that? Answer: As you did for labeling genes with an adjusted p value below 0.05, you can subset your dataset for keeping only rows corresponding to "Casp14": library(ggplot2) library(ggrepel) ggplot(final_tumor, aes(x = Log2.fold.change,y = -log10(Adjusted.p.value), label = Feature.Name))+ geom_point()+ geom_text_repel(data = subset(final_tumor, Adjusted.p.value < 0.05), aes(label = Feature.Name))+ geom_text_repel(data = subset(final_tumor, Feature.Name == "Casp14"), aes(label = Feature.Name), color = "red") With this code, you should see now the labeling of the gene of interest (Casp14) in red on your volcano plot.
{ "domain": "bioinformatics.stackexchange", "id": 1247, "tags": "r, ggplot2" }
Using faster than light signals to synchronize clocks
Question: What would happen if one could communicate by non-luminous signals whose velocity of propagation differed from that of light? If, after having adjusted the watches by the optical procedure, we wished to verify the adjustment by the aid of these new signals, we should observe discrepancies which would render evident the common translation of the two stations. And are such signals inconceivable, if we admit with Laplace that universal gravitation is transmitted a million times more rapidly than light? [Poincare, 1904 address] So to eliminate possibility of measurements by faster than light signals, Poincare assumed that the gravity should not propagate at faster than light speed. He limited it to the speed of light. I think that Poincare is saying that if the clocks have been synchronized by light signals then using any faster-than-light signal would make it obvious that an inertial frame is moving. Assuming my interpretation is correct, why would it make it clear that inertial frame of reference is moving or why one "should observe discrepancies" if faster-than-light signals are used to synchronize the clocks? Answer: Assuming my interpretation is correct, why would it make it clear that inertial frame of reference is moving or why one "should observe discrepancies" if faster-than-light signals are used to synchronize the clocks? Your interpretation is correct, but in this case Poincare is wrong. Remember, if the date is correct then this address was given prior to Einstein’s paper On The Electrodynamics Of Moving Bodies. He would have been discussing Lorentz Aether Theory. If a signal propagated at v>c with respect to the Lotentz aether then it would be possible to detect the velocity of the aether using that signal. To make visualization easier let’s assume that the signal travels instantaneously with respect to the aether (similar conclusions follow for other propagation velocities). In that case, if you are at rest with respect to the aether and you synchronize your clocks optically and then emit this superluminal signal from midway between the synchronized clocks the signal will be received at the same time according to the optically synchronized clocks. On the other hand, if your clocks are optically synchronized while moving with respect to the aether then the clocks will disagree about the time that the superluminal signal is received due to the relativity of simultaneity for local time in Lorentz’s theory. Thus far Poincare is correct. A superluminal signal which traveled at a defined velocity with respect to the aether would violate the principle of relativity and allow the measurement of “absolute” velocity. However, where Poincare is incorrect is in thinking that a superluminal signal would necessarily travel at a fixed velocity with respect to the aether. Suppose, instead, that the superluminal signal travels at a fixed velocity with respect to the emitter. In that case, then the signals received at the optically synchronized clocks would be received simultaneously regardless of the velocity of the pair of clocks. Note, in this case, it is possible to show that a signal which travels at a fixed v>c with respect to the emitter can be used to violate causality (see https://pages.uoregon.edu/imamura/FPS/images/p263_1 ) whereas a signal which travels at a fixed v>c with respect to the aether cannot. Thus out of relativity, causality, and superluminal signals it is only possible to have two (see http://www.physicsmatt.com/blog/2016/8/25/why-ftl-implies-time-travel ). Most scientists believe that relativity and causality are correct so superluminal signals are impossible.
{ "domain": "physics.stackexchange", "id": 66616, "tags": "special-relativity, inertial-frames, faster-than-light" }
Machine Learning for medical researchers
Question: My friend is a medical researcher and he want to use machine learning for prediction. Is there any one who is not a computer science person and he learnt programming and machine learning in a very short time? And how? Answer: He can use no-code ML platforms such as: RapidMiner Studio, Google ML Kit, Orange, and BigML. Also, this article is very good article for learning RapidMiner https://medium.com/analytics-vidhya/machine-learning-for-programmers-and-non-programmers-f8568d357750
{ "domain": "datascience.stackexchange", "id": 8929, "tags": "machine-learning, python, deep-learning, data-mining, data-science-model" }
Proof techniques for string algorithms?
Question: I'm currently reading through the tome "Algorithms on Strings, Trees, and Sequences" by Dan Gusfield, and I find the proofs to be extremely case analysis heavy and full of finicky +-1s. This seems very error prone to program. I was hoping for a more "conceptual" way to build string algorithms, where we first construct a toolkit of basic objects that we then use. I was hoping that prefixes, suffixes, and the Z algorithm would be those. But they seem too level to construct something like Boyer-Moore or Aho-Corasick. As an analogy for some kind of abstract algebraic flavoured approach, like using matroids for greedy algorithms to capture the "hard" part of the analysis. So my question is, are there nice algebraic structures that govern string algorithms which can be used to present and implement them more elegantly? Answer: There is some work on developing an algebraic or grammar-based view of string algorithms, for example Robert Giegerich, Carsten Meyer, Peter Steffen: A discipline of dynamic programming over sequence data. Sci. Comput. Program. 51(3): 215-263 (2004) Robert Giegerich, Hélène Touzet: Modeling Dynamic Programming Problems over Sequences and Trees with Inverse Coupled Rewrite Systems. Algorithms 7(1): 62-144 (2014) These approaches deal with string problems that are solved by dynamic programming such as the computation of edit distance or local alignments or comparison of RNA secondary structures. These problems are more difficult in terms of the running time of the best algorithms than the exact pattern matching problem that is solved by Boyer-Moore. A more conceptual or programmer-friendly approach for exact pattern matching would be to use the right data structures such as suffix arrays or suffix trees. Many pattern matching algorithms become simpler when these data structures are used as a black box or augmented in some relatively easy way.
{ "domain": "cstheory.stackexchange", "id": 4929, "tags": "string-matching, string-search, analysis-of-algorithms" }
Local and non-local interactions
Question: When I look for the van der Waals interactions, they are defined as they are non-local interactions but no explanation for what they mean by non-locality. What would be the best way to understand this confusing local and non-local terms. Answer: Local interactions are interactions that are limited to a certain volume/distance. Let's examine the Coulomb repulsion between two particles. The physical interaction is $V(r_1, r_2) = q^2/|r_1-r_2|$ which is non-local. Two particles at very far positions will interact with each other via this interaction. Now, let's consider the case where we have many of these particles and they are embedded in a medium with opposite equal total charge homogeneously distributed. This is the case, for example, when considering electrons in a metal. They are free to move but in the background there are the positives charges of the atoms that make the metal, such that in total the charge is neutral. The effect is a screening effect, and two electrons far apart from one another will not really feel each other, as on average the positive charge between them will cancel the negative charge. However, two electrons close by will feel each other. So to approximate this behavior we can write $V(r_1, r_2) = U \delta(r_1-r_2)$. Now this is a local interaction. One can consider other local type of interactions: for example, on a lattice, where each electron can be at a position $(i,j)$ we can imagine that it can feel some of its neighbors, and let the interaction term run for $(i\pm n, j\pm n)$ for some finite $n$. This will still be considered local, as for distance larger than this $n$, the interaction is cut-off.
{ "domain": "physics.stackexchange", "id": 64852, "tags": "atomic-physics, interactions, molecules, orbitals, non-locality" }
How Hessian feature detector works?
Question: I know about Harris corner detector, and I understand the basic idea of its second moment matrix, $$M = \left[ \begin{array}{cc} I_x^2 & I_xI_y \\ I_xI_y & I_y^2 \end{array} \right]$$, edges and other unstable points can be removed via $M$. But about Hessian detector, it uses Hessian matrix to detect key points and remove edges, $$\mathcal{H} = \left[ \begin{array}{cc} I_{xx} & I_{xy} \\ I_{xy} & I_{yy} \end{array} \right]$$, and I don't understand how could $\mathcal{H}$ remove edge and detect stable points? What's the intuitive basic idea behind it? Answer: I will try to avoid math, because math and "how to do it" tutorials can be easily found. So, I start by pointing out one VERY important thing: One does not compute Harris for a single pixel, but for a vicinity (a patch of image) around that pixel! Let $I(i)_{xx}, I(i)_{xy} ...$ be your derivatives for a point $i_0$, then, $H = \left[ \begin{array}{cc} \sum_{i\in V}I(i)_{xx} w (i-i_0) & \sum_{i\in V}I(i)_{xy}w (i-i_0) \\ \sum_{i\in V}I(i)_{xy} w (i-i_0)& \sum_{i\in V}I(i)_{yy} w (i-i_0)\\ \end{array} \right] $ The $w(t)$ is a Gaussian kernel. The previous eq tells you to integrate the derivative values over vicinity $V$ around current pixel. Each value of the neighbors is multiplied with a value that shrinks as the distance increases. The law of decreasing follows a Gaussian, because $w(t)$ is Gaussian centred at $i_0$. And that's it with math. Now, back to the empirical observations. If you use solely the derivatives, and that pixel is part of a linear structure (edge), then, you get a strong response for the derivatives. On the other hand, if the pixel is at a corner (an intersection of two edges) then, the derivative responses will cancel themselves off. Saying that, the the Hessian is able to capture the local structure in that vicinity without "cancelling" effect. BUT very important, you have to integrate in order to get a proper Hessian. Having a Hessian, obtained using Harris method or by other means, one might want to extract information about the vicinity. There are methods to get numerical values on how likely is to have an edge at current pixel, a corner, etc. Check the corner detection theory. Now, about "stable points" or salient points. Picture that you are in a foreign town with no GPS and only with a good map. If you are "teleported" in a middle of a street, you might locate the street on the map, but you cannot tell where exactly are you on that street or in what direction you should go to move left or right (wrt to the map). Imagine now that you are at a intersection. Then, you precisely can point your position on the map!. (Of course, assume that two streets doesn't intersect more than once). Imagine now that you must match two images. One acts as a map, and the other as the city. You must find pixels that can be uniquely described, so you can do the matching. Check images on this post for example of matching. These points are called salient points. Moreover, the corner points tend not to change their 'cornerness' properties when the image is scaled, translated, rotated, skewed, etc. (affine transforms) This is why they are called "stable". Some points in the image allows you to uniquely identify them. These pixels are located at corners or at intersection of lines. Imagine that your vicinity $V$ is on a line. Except for the orientation of the line, you cannot find anything else from that vicinity. But if $V$ is on a corner, than, you can find out the directions of the lines that intersect, maybe the angle, etc. Not all corner points are salient, but only corner points have great chances of being salient. Hope it helps! p.s. How to find if a point is corner or not, take a look at Harris paper. p.p.s. More on matching, search for SIFT or SURF. p.p.p.s. There is a "generalization" of the Harris method, called Structure Tensor. Check Knutsson seminal work!
{ "domain": "dsp.stackexchange", "id": 9842, "tags": "image-processing, computer-vision, local-features, peak-detection" }
How do I determine whether two vectors have the same direction?
Question: Example E 38 Degrees N vs N 38 Degrees E I was taught that the angle is indicative of direction so is it useful to just ignore the E and the N components? Answer: Either you can describe the vector using angle and magnitude (30 degrees, 10 meters), or you can use the magnitude and a direction. The way you've written it seems to be a way of saying, 38 degrees east of north and 38 degrees north of east. These don't mean the same thing: if you look at purely the angles, the "of north" tells you the starting ray, and the "degrees east" tells you how many degrees and on which side of the ray. The following diagram should help you out.
{ "domain": "physics.stackexchange", "id": 66542, "tags": "homework-and-exercises, vectors, coordinate-systems, geometry" }
jQuery range slider change text based on hidden field value
Question: I am working on a jQuery ratings function that uses the Foundation range slider. My current implementation works correctly on 2 of 7 of my range sliders. However, I am quickly realising that my implementation will become bloated as I have to repeat myself for each of the 7 range sliders. HTML: <div class="row"> <div class="small-8 medium-8 columns"> <div class="range-slider" data-slider data-options="start: 1; end: 5;"> <span class="range-slider-handle" role="slider" tabindex="0"></span> <span class="range-slider-active-segment"></span> <%= f.hidden_field :quality %> </div> </div> <div class="small-4 medium-4 columns"> <span id="sliderOutput1">Average</span> </div> </div> <div class="row"> <div class="small-8 medium-8 columns"> <div class="range-slider" data-slider data-options="start: 1; end: 5;"> <span class="range-slider-handle" role="slider" tabindex="0"></span> <span class="range-slider-active-segment"></span> <%= f.hidden_field :communication %> </div> </div> <div class="small-4 medium-4 columns"> <span id="sliderOutput2">Average</span> </div> </div> jQuery: $(document).on("page:load ready", function() { function ratings(selector, callback) { var input = $(selector); var oldvalue = input.val(); setInterval(function(){ if (input.val()!= oldvalue){ oldvalue = input.val(); callback(); } }, 100); } ratings('input#agent_score_quality', function() { var inputValue = $('input#agent_score_quality').val(); if(inputValue === '1') { // Terrible $("span#sliderOutput1").fadeOut(function() { $(this).text('Terrible').fadeIn(); }); } if(inputValue === '2') { // Poor $("span#sliderOutput1").fadeOut(function() { $(this).text('Poor').fadeIn(); }); } if(inputValue === '3') { // Average $("span#sliderOutput1").fadeOut(function() { $(this).text('Average').fadeIn(); }); } if(inputValue === '4') { // Good $("span#sliderOutput1").fadeOut(function() { $(this).text('Good').fadeIn(); }); } if(inputValue === '5') { // Excellent $("span#sliderOutput1").fadeOut(function() { $(this).text('Excellent').fadeIn(); }); } }); ratings('input#agent_score_communication', function() { var inputValue = $('input#agent_score_communication').val(); if(inputValue === '1') { // Terrible $("span#sliderOutput2").fadeOut(function() { $(this).text('Terrible').fadeIn(); }); } if(inputValue === '2') { // Poor $("span#sliderOutput2").fadeOut(function() { $(this).text('Poor').fadeIn(); }); } if(inputValue === '3') { // Average $("span#sliderOutput2").fadeOut(function() { $(this).text('Average').fadeIn(); }); } if(inputValue === '4') { // Good $("span#sliderOutput2").fadeOut(function() { $(this).text('Good').fadeIn(); }); } if(inputValue === '5') { // Excellent $("span#sliderOutput2").fadeOut(function() { $(this).text('Excellent').fadeIn(); }); } }); }); How can I improve this code so I don't have to repeat the same thing over and over just with different selectors? My idea is to use a single selector for the sliderOutput and use the .closest() method to target each and insert my text, but I'm new to jQuery, so could I be way off? Answer: First of all you can dramatically simplify your callback function body structure. Instead of using a series of if (inputValue === ... you might define a simple array of your texts, like: var texts = [ 'Terrible', 'Poor', 'Average', 'Good', 'Excellent' ]; Then the callback function entire body becomes as simple as: // here the example of your first ratings() invocation $('span#sliderOutput1').fadeOut(function() { $(this).text(texts[$('input#agent_score_quality').val() - 1]).fadeIn(); }); Now to avoid rewriting such a callback function for each of your rows, you can make it a named, independent function, then invoke it from your `setInterval() callback, with the needed information about involved row, so the whole JS part looks like this: $(document).on("page:load ready", function() { var texts = [ 'Terrible', 'Poor', 'Average', 'Good', 'Excellent' ]; function ratings(selectorInput, selectorSlider) { var input = $(selectorInput); var slider = $(selectorSlider); var oldvalue = input.val(); setInterval(function(){ if (input.val()!= oldvalue){ oldvalue = input.val(); ratingsCallback(input, slider); } }, 100); } function ratingsCallback(input, slider) { $(slider).fadeOut(function() { $(this).text(texts[input.val() - 1]).fadeIn(); }); } ratings('input#agent_score_quality', 'span#sliderOutput1'); ratings('input#agent_score_communication', 'span#sliderOutput2'); // and so on... }); Finally better solution (but see also CAVEAT below) Thanks to combined suggestions from @Chococroc comments and @Margus answer, this is a yet much more minified version of the code. It only needs that each <span id="sliderOutput..."> element becomes <span class="slider-output">. Then the whole JS part can be reduced to: $(document).on("page:load ready", function() { var texts = [ 'Terrible', 'Poor', 'Average', 'Good', 'Excellent' ]; $('.row input').on('change', function() { var slider = $(this).closest('.row').find('.slider-output'); slider.fadeOut(function() { slider.text(texts[input.val() - 1]).fadeIn(); }); } }); Note that this solution only focuses on the initial question main issue. Beyond that, as noticed by @Chococroc, it could be enhanced in several ways, like: allow strict independency between logics and presentation, by using dedicated class names like js-slider-output, so CSS needs don't mix with JS needs ensure to avoid any potential conflict with other class="row" elements in the whole context, by affecting a supplemental class to each row, e.g. <div class="row js-slider">, so the change-event binding becomes $('.js-slider input')... CAVEAT Still focused on the main issue (how to reduce JS code) I didn't pay attention to a particular point in the given HTML example: here the source data for the range values resides in hidden elements. So as pointed out by the OP author, in this case the change event will not fire when value changes. So what? from the fact that these elements are hidden, we can assert that the changes don't come from user action then we can assume that some other JS part is involved to make this change to happen so we should have the availability to add <this element>.change(); somewhere in this JS part Hopefully not forgetting some other blocking point...
{ "domain": "codereview.stackexchange", "id": 14440, "tags": "javascript, jquery, html, form" }
What is the meaning of the third derivative printed on this T-shirt?
Question: Don't be a $\frac{d^3x}{dt^3}$ What does it all mean? Answer: It means don't be a jerk. The third derivative of position (i.e. the change in acceleration) is called "jerk", though it's a little used quantity. It's called jerk because a changing acceleration is felt as a "jerk" in that direction.
{ "domain": "physics.stackexchange", "id": 10298, "tags": "soft-question, differentiation, calculus, jerk" }
Convert time operator from momentum space to position space
Question: I'm trying to transform the time evolution operator from momentum space to position space. I know that $$ U(t) = e^{-iHt/h} = \int_{-\infty}^\infty e^{-ip^2t/2uh} | p \rangle \langle p | dp $$ and I'm trying to find the form of $$ \langle x | U(t) | x' \rangle $$ I'm given the hint (paraphrased): To evaluate this explicitly, use the Fourier transform of a Gaussian function, with imaginary $a$ I'm trying to apply the time operator to a momentum space wave function: $$ U(t)|\psi (p,0) \rangle = \int_{-\infty}^\infty e^{-ip^2t/2uh} \psi (p,0) | p \rangle dp $$ But I'm not sure how to simplify to a form where a fourier transform would be straightforward Answer: This appears in propagators, so you can google for any documents or look up any book on propagators. $$\begin{align} \langle x_2|e^{-\frac{ip^2t}{2m}}|x_1\rangle &=\int dp \langle x_2|p\rangle\langle p|e^{-\frac{ip^2t}{2m}}|x_1\rangle\\ &=\int dp \left( \frac{e^{ipx_2/\hbar}}{\sqrt[2]{2\pi \hbar}}\right)\langle p|e^{-\frac{ip^2t}{2m}}|x_1\rangle\\ &=\int dp \left( \frac{e^{ipx_2/\hbar}}{\sqrt[2]{2\pi \hbar}}\right)\left(e^{-\frac{ip^2t}{2m}} \right)\langle p|x_1\rangle\\ &=\int dp \left( \frac{e^{ipx_2/\hbar}}{\sqrt[2]{2\pi \hbar}}\right)\left(e^{-\frac{ip^2t}{2m}} \right)\left( \frac{e^{-ipx_1/\hbar}}{\sqrt[2]{2\pi \hbar}} \right)\\ &=\int dp \left( \frac{e^{ipx/\hbar}}{2\pi \hbar}\right)\left(e^{-\frac{ip^2t}{2m}} \right) \hspace{1.0cm}{(x=x_2-x_1)}\\ &=\int dp \left( \frac{1}{2\pi \hbar}\right)\left(e^{-\frac{it}{2m\hbar}\left( p-\frac{mx}{t}\right)^2+\frac{imx^2}{2\hbar t}} \right) \hspace{1.0cm}{(x=x_2-x_1)}\\ &=\sqrt[2]{\frac{m}{2\pi i \hbar t}}e^{\frac{imx^2}{2\hbar t}} \hspace{1.0cm}{(x=x_2-x_1)}\\ \end{align} $$
{ "domain": "physics.stackexchange", "id": 39695, "tags": "quantum-mechanics, homework-and-exercises, operators, fourier-transform, time-evolution" }
Is magnetic field inside a parallel plate capacitor same as outside due to cunduction current in wires.?
Question: For a circular parallel plate capacitor (when charging) with wires connected symmetrically, Is magnetic field at point M and N same? Why or why not? Answer: Point $N$ has a smaller distance $r$ from the axis than the radius $R$ of the parallel plate capacitor. Therefore the displacement current passing through the area on the circle defined by $r \lt R$ is not the total displacement current through the plate area with radius $R$. The conduction current in the wire has to be the same as the displacement current in the capacitor. Therefore the changing displacement current within the radius $r$ is smaller than the changing conduction current in the wire. Therefore the induced magnetic field at $r$ is smaller in the capacitor than at the wire.
{ "domain": "physics.stackexchange", "id": 46935, "tags": "electromagnetism, capacitance" }
Dust patterns inside electronic product - what causes this?
Question: (Note: another similar question from a few years ago yielded nought but speculation. I have at provided some detailed observations in the hope that the community can come up with something rigorous as an explanation...) We disassembled an electronic product and found this interesting pattern of dust on the plastic (ABS, HDPE or similar material)... Observations: The dust is easily wiped off. There is no cooling fan in the product and it was situated in an unventilated cupboard. Any airflow would be a result of thermal convection for the most part. It's an old broadband/DSL router which ran from a 12V wall wart ("double insulated" so no Earth-ground connection). The patterns seem to congregate around some of the minor injection-moulding features (known as "ejector marks", the 4 small circular features are almost co-planar with the main surface surrounding them, they are extremely shallow and have a slightly different surface finish to the rest of the plastic). The appearance of the features ("lightning" springs to mind) suggests to my mind that this is some sort of electromagnetic / electrostatic effect. The plastic is not coated with any kind of electrically conductive EMC coating. The circuitry housed within doesn't seem to indicate any particular correlation between the placement of electronic components and the locations of the dust patterns. The features of the plastic case seem to be a more significant catalyst for the formations. Questions: How do these interesting patterns form? What is the composition of the dust likely to be (e.g. metallic or something else?). Why does the shape of the plastic seem to catalyse these weird shapes? Why the lightning / fractalline appearance? Answer: I would suspect that either the plastic or the dust is statically charged. The charge interacts with the surroundings and will settle in the most stable position available. Once one particle is in place it adds it's charge to the system affecting how the next dust particle falls and so on. The patterns are the from the cumulative effect of the falling particle charges. As for where they start, the potential differential of the ridges or seams in the plastic probably attracted them.
{ "domain": "physics.stackexchange", "id": 40296, "tags": "electromagnetism, stress-strain, electronics" }
Combinations (repetition not allowed & order not important)
Question: How to compute a table of numbers (all possibilities), where repetition is not allowed and order is not important. Example: I have a set of prime numbers. In this example I have four: {3,5,7,11}, but it can be anything, and I want to choose every pair out of that set. To make things easier, I want to compute the indices to get those pairs of prime numbers. The set of indices is then {0,1,2,3}. We pick 2 out of 4 elements. So how do we compute the permutations or combinations: 0,1 (3,5) 0,2 (3,7) 0,3 (3,11) 1,2 (5,7) 1,3 (5,11) 2,3 (7,11) ? It was difficult to find examples on the web, because they either allowed repetitions or were order was important. Pls answer with pseudocode or c/c++ if you can. Answer: You seem interested in just pairs of indices. Then, if you have $n$ elements you can just generate all pairs of indices $(i,j)$ with $0 \le i < j < n$. For i=0,1,...,n-2: For j=i+1, i+2, ..., n-1: Output (i,j)
{ "domain": "cs.stackexchange", "id": 18029, "tags": "combinatorics, c, c++, pseudocode" }
Improving Java LinkList implementation
Question: I am trying to implement linklist data structure in java. Following is my implementation: // A singly Linked List class Node{ int value; Node next; Node(int n){ this.value = n; this.next = null; } public void printNode(){ System.out.println("Value: " + value); } } class LinkList{ private Node first; private Node last; public LinkList(){ first = null; last = null; } public boolean isEmpty(){ return first == null; } public void insertFirst(int n){ Node node = new Node(n); if(first == null){ System.out.println("1. Vlaue of n: "+n); first = node; last = node; }else{ System.out.println("2. Vlaue of n: "+ n); Node tmp = first; first = node; node.next = tmp; } } public void deleteFirst(){ if (!isEmpty()){ Node tmp = first; first = tmp.next; } } public void deleteLast(){ if(!isEmpty()){ Node tmp = first; Node oneBeforeLast = null; while (tmp.next != null){ oneBeforeLast = tmp; tmp = tmp.next; } oneBeforeLast.next = null; } } public void deleteNode(int value){ if (!isEmpty()) { Node tmp = first; Node oneBeforeLast = first; while (tmp.next != null) { if (tmp.value == value) { if (oneBeforeLast == first) { first = tmp.next; } else oneBeforeLast.next = tmp.next; } oneBeforeLast = tmp; tmp = tmp.next; System.out.println("Btmp: " + oneBeforeLast.value + " tmp: " + tmp.value); } if (tmp.next == null && tmp.value == value) { oneBeforeLast.next = null; } } } public void printList(){ Node tmp = first; System.out.println("\nPrinting List:"); while(tmp != null){ tmp.printNode(); tmp = tmp.next; } } } public class LinkListTest { public static void main(String[] args) { // TODO Auto-generated method stub LinkList linklist = new LinkList(); linklist.insertFirst(1); linklist.insertFirst(2); linklist.insertFirst(3); //linklist.insertFirst(3); linklist.insertFirst(4); linklist.printList(); //linklist.deleteFirst(); //linklist.printList(); //linklist.deleteLast(); //linklist.printList(); linklist.deleteNode(1); linklist.printList(); } } I would like to improve it further and check its efficiency for large inputs. Can anyone give some pointers on these two pointers, please? Answer: One area you could improve performance is by making it a doubly linked list. At the moment your deleteLast() method is an O(n) operation, meaning it needs to traverse the entire list to delete the last element. And since this is the case, there's no real reason for keeping a reference to it at all. If each Node had a next and a prev Node, your delete last could look something like, last.prev.next = null and let the garbage collector worry about the rest. You would of course still need to check that all of these values are okay to use like this before hand. LinkedLists are supposed to be good at adding and deleting elements from the start. And if it's a doubly linked list, also the end. As side note, I would prefer to see your Node class as a private class inside your linked list class. As this Node class probably shouldn't be used anywhere else. Your public interface is a bit unusual, for a list, I would expect to have public methods like list.add(element), list.removeAt(0) etc. list.insertFirst(data) is an implementation detail, this should be a private method that's called when the list is empty. if I wanted to insert an element at the head of the list, I would prefer list.insertAt(0) Another thing that stands out is your deleteNode method. This is definitely an implementation detail leaking out. A user of the list shouldn't know at all about Nodes. The method doesn't even take a node, it takes an int, I would prefer list.delete(element). Also, your list won't properly support duplicate values, if I have 3 nodes with the value if 5, and I try to delete 5, it will delete the first node with a value of 5 it finds. As another side, you should make as many variables private as possible.
{ "domain": "codereview.stackexchange", "id": 25435, "tags": "java, performance, unit-testing" }
Creature generator
Question: Originally posted this here, and it was helpfully suggested to post in this forum. In the past couple of years I've returned part time to programming after a 15 year gap. I was C/UNIX. So, I've picked up PHP, Java and C++ ok, but have struggled with JavaScript. Finally I think I've found a way to 'create' classes that can inherit and wondered if anyone would care to comment. Here is an example: <!doctype html> <head> <title>Basic</title> </head> <body> <div id="d1"></div> <script type="text/javascript"> function Base( options ) { var that = this; options = options || {}; Object.keys( options ).forEach( function( item ) { that[item] = options[item]; }); } function Creature( options ) { this.legs = 4; Base.call( this, options ); console.log("New creature"); } Creature.prototype.showNumberOfLegs = function() { console.log( "Number legs " + this.legs ); }; function Mammal( options ) { this.fur = true; Creature.call( this, options ); console.log("New mammal"); } Mammal.prototype = Object.create( Creature.prototype ); Mammal.prototype.showFur = function() { console.log( "Fur " + this.fur ); }; var c = new Creature(); c.showNumberOfLegs(); var m = new Mammal({ legs: 6, fur: false }); m.showNumberOfLegs(); m.showFur(); </script> </body> </html> Answer: but have struggled with JavaScript Welcome to JavaScript, where all things look fine but are actually half broken. :D Finally I think I've found a way to 'create' classes that can inherit I think there's a saying that goes like "Composition over inheritance". That's because composition is more flexible and doesn't impose hard taxonomies of classes or make you resort to multiple inheritance. See this video for a detailed comparison. Now let's go over to your code. Let's just say for now that I advocate inheritance. Let's do this! options = options || {}; Object.keys( options ).forEach( function( item ) { that[item] = options[item]; }); // to Object.keys(options || {}).forEach(function(key){ this[key] = options[key]; }, this); // to Object.assign(this, options || {}); You can streamline your assignment operation by inlining the defaulting of options to an empty object. The context (this) can also be provided to the callback of forEach via its second argument. If you can do ES6, there's Object.assign() which does the same thing in lesser code. It pretty much looks like jQuery's $.extend if you're familiar with it. Base.call( this, options ); // to Base.apply(this, arguments); The reason is that you don't actually know how many arguments you'll potentially provide the constructor. In your code, you assume your only argument is options, but that might change. apply allows you to pass in an array or array-like object as an argument, spreading it as the arguments on the receiving end. You are missing: Creature.prototype = Object.create(Base.prototype); Here's a checklist of what to do when doing prototypal inheritance manually: Inherit the parent properties (Parent.apply(this, arguments). Inherit the parent prototype (Child.prototype = Object.create(Parent.prototype)) Define child properties and their default values Define child methods
{ "domain": "codereview.stackexchange", "id": 16045, "tags": "javascript, object-oriented, inheritance" }
In a fluid flow, how do you tell if negative acceleration is deceleration or change in direction?
Question: When solving for the acceleration of a fluid flow, you use the equation: (∂u/∂t)+u(∂u/∂x)+v(∂u/∂y). If the solution at a given u and v are negative, how do you know if the flow has a negative acceleration or if it's changing direction? Answer: The acceleration of a fluid element $\mathbf a$ is the material derivative of the elements velocity $\mathbf u$: $$\frac{D\mathbf{a}}{Dt} = \frac{\partial \mathbf u}{\partial t} + \mathbf u \cdot\nabla\mathbf u.$$ In a two dimensional flow, this expression can be expanded to: $$\frac{D\mathbf{a}}{Dt} = \frac{\partial \mathbf u}{\partial t} + u\frac{\partial\mathbf u}{\partial x} + v\frac{\partial\mathbf u}{\partial y}$$ where $u$ and $v$ are respectively the $x$ and $y$-components of the velocity vector $\mathbf u$. Telling whether this acceleration will change the magnitude of the velocity vector or change its direction is quite simple, and is just like we do in classical particle mechanics: if $\mathbf a$ and $\mathbf u$ are parallel, then $\mathbf a$ only changes the magnitude of $\mathbf u$ (acceleration if $\mathbf a$ and $\mathbf u$ point in the same direction, and deceleration if they point in opposite directions). If $\mathbf a$ and $\mathbf u$ are not parallel to one another, i.e if $\mathbf a$ points in a different direction to that of $\mathbf u$, then we can decompose $\mathbf a$ into two vectors: one which is parallel to $\mathbf u$, called the tangential acceleration $\mathbf a_t$, and another which is normal to $\mathbf u$ and is thus called the normal acceleration $\mathbf a_n$. The total acceleration is the sum of these two components: $$\mathbf a = \mathbf a_t + \mathbf a_n,$$ and just like classical mechanics, $\mathbf a_t$ only changes the magnitude of $\mathbf u$ while $\mathbf a_n$ only changes its direction. If $\mathbf a$ is normal to $\mathbf u$, then $\mathbf a = \mathbf a_n$ and thus only the direction of $\mathbf u$ is changed.
{ "domain": "physics.stackexchange", "id": 95749, "tags": "fluid-dynamics, acceleration" }
Follow-up to tool for posting CodeReview questions
Question: Description This is a follow-up question to Tool for creating CodeReview questions. Things that has changed include: Removed replacing four spaces with one tab, all tabs and all spaces in the code itself is now left as-is. Added file extensions to the output. Switched order of lines and bytes as I feel that the number of lines of code is more interesting than the number of bytes. Support for command-line parameters to directly process a directory or a bunch of files, with the support for wildcards. If a directory or wildcard is used, files that don't pass an ASCII-content check gets skipped. If you have specified a file that has a lot of non-ASCII content it is processed anyway. I am asking for another review because of the things that I have added mostly, see the questions below. Class Summary (413 lines in 4 files, making a total of 12134 bytes) CountingStream.java: OutputStream that keeps track on the number of written bytes to it ReviewPrepareFrame.java: JFrame for letting user select files that should be up for review ReviewPreparer.java: The most important class, takes care of most of the work. Expects a List of files in the constructor and an OutputStream when called. TextAreaOutputStream.java: OutputStream for outputting to a JTextArea. Code The code can also be found on GitHub CountingStream.java: (27 lines, 679 bytes) /** * An output stream that keeps track of how many bytes that has been written to it. */ public class CountingStream extends FilterOutputStream { private final AtomicInteger bytesWritten; public CountingStream(OutputStream out) { super(out); this.bytesWritten = new AtomicInteger(); } @Override public void write(int b) throws IOException { bytesWritten.incrementAndGet(); super.write(b); } public int getBytesWritten() { return bytesWritten.get(); } } ReviewPrepareFrame.java: (112 lines, 3255 bytes) public class ReviewPrepareFrame extends JFrame { private static final long serialVersionUID = 2050188992596669693L; private JPanel contentPane; private final JTextArea result = new JTextArea(); /** * Launch the application. */ public static void main(String[] args) { if (args.length == 0) { EventQueue.invokeLater(new Runnable() { public void run() { try { new ReviewPrepareFrame().setVisible(true); } catch (Exception e) { e.printStackTrace(); } } }); } else ReviewPreparer.main(args); } /** * Create the frame. */ public ReviewPrepareFrame() { setTitle("Prepare code for Code Review"); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setBounds(100, 100, 450, 300); contentPane = new JPanel(); contentPane.setBorder(new EmptyBorder(5, 5, 5, 5)); contentPane.setLayout(new BorderLayout(0, 0)); setContentPane(contentPane); JPanel panel = new JPanel(); contentPane.add(panel, BorderLayout.NORTH); final DefaultListModel<File> model = new DefaultListModel<>(); final JList<File> list = new JList<File>(); panel.add(list); list.setModel(model); JButton btnAddFiles = new JButton("Add files"); btnAddFiles.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { JFileChooser dialog = new JFileChooser(); dialog.setMultiSelectionEnabled(true); if (dialog.showOpenDialog(ReviewPrepareFrame.this) == JFileChooser.APPROVE_OPTION) { for (File file : dialog.getSelectedFiles()) { model.addElement(file); } } } }); panel.add(btnAddFiles); JButton btnRemoveFiles = new JButton("Remove files"); btnRemoveFiles.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { for (File file : new ArrayList<>(list.getSelectedValuesList())) { model.removeElement(file); } } }); panel.add(btnRemoveFiles); JButton performButton = new JButton("Create Question stub with code included"); performButton.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent arg0) { result.setText(""); ReviewPreparer preparer = new ReviewPreparer(filesToList(model)); TextAreaOutputStream outputStream = new TextAreaOutputStream(result); preparer.createFormattedQuestion(outputStream); } }); contentPane.add(performButton, BorderLayout.SOUTH); contentPane.add(result, BorderLayout.CENTER); } public List<File> filesToList(DefaultListModel<File> model) { List<File> files = new ArrayList<>(); for (int i = 0; i < model.getSize(); i++) { files.add(model.get(i)); } return files; } } ReviewPreparer.java: (233 lines, 7394 bytes) public class ReviewPreparer { public static double detectAsciiness(File input) throws IOException { if (input.length() == 0) return 0; try (BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream(input)))) { int read; long asciis = 0; char[] cbuf = new char[1024]; while ((read = reader.read(cbuf)) != -1) { for (int i = 0; i < read; i++) { char c = cbuf[i]; if (c <= 0x7f) asciis++; } } return asciis / (double) input.length(); } } private final List<File> files; public ReviewPreparer(List<File> files) { this.files = new ArrayList<>(); for (File file : files) { if (file.getName().lastIndexOf('.') == -1) continue; if (file.length() < 10) continue; this.files.add(file); } } public int createFormattedQuestion(OutputStream out) { CountingStream counter = new CountingStream(out); PrintStream ps = new PrintStream(counter); outputHeader(ps); outputFileNames(ps); outputFileContents(ps); outputDependencies(ps); outputFooter(ps); ps.print("Question Length: "); ps.println(counter.getBytesWritten()); return counter.getBytesWritten(); } private void outputFooter(PrintStream ps) { ps.println("#Usage / Test"); ps.println(); ps.println(); ps.println("#Questions"); ps.println(); ps.println(); ps.println(); } private void outputDependencies(PrintStream ps) { List<String> dependencies = new ArrayList<>(); for (File file : files) { try (BufferedReader in = new BufferedReader(new InputStreamReader(new FileInputStream(file)))) { String line; while ((line = in.readLine()) != null) { if (!line.startsWith("import ")) continue; if (line.startsWith("import java.")) continue; if (line.startsWith("import javax.")) continue; String importStatement = line.substring("import ".length()); importStatement = importStatement.substring(0, importStatement.length() - 1); // cut the semicolon dependencies.add(importStatement); } } catch (IOException e) { ps.println("Could not read " + file.getAbsolutePath()); ps.println(); // more detailed handling of this exception will be handled by another function } } if (!dependencies.isEmpty()) { ps.println("#Dependencies"); ps.println(); for (String str : dependencies) ps.println("- " + str + ": "); } ps.println(); } private int countLines(File file) throws IOException { return Files.readAllLines(file.toPath(), StandardCharsets.UTF_8).size(); } private void outputFileContents(PrintStream ps) { ps.println("#Code"); ps.println(); ps.println("This code can also be downloaded from [somewhere](http://github.com repository perhaps?)"); ps.println(); for (File file : files) { try (BufferedReader in = new BufferedReader(new InputStreamReader(new FileInputStream(file)))) { int lines = -1; try { lines = countLines(file); } catch (IOException e) { } ps.printf("**%s:** (%d lines, %d bytes)", file.getName(), lines, file.length()); ps.println(); ps.println(); String line; int importStatementsFinished = 0; while ((line = in.readLine()) != null) { // skip package and import declarations if (line.startsWith("package ")) continue; if (line.startsWith("import ")) { importStatementsFinished = 1; continue; } if (importStatementsFinished >= 0) importStatementsFinished = -1; if (importStatementsFinished == -1 && line.trim().isEmpty()) // skip empty lines directly after import statements continue; importStatementsFinished = -2; ps.print(" "); // format as code for StackExchange, this needs to be four spaces. ps.println(line); } } catch (IOException e) { ps.print("> Unable to read " + file + ": "); // use a block-quote for exceptions e.printStackTrace(ps); } ps.println(); } } private void outputFileNames(PrintStream ps) { int totalLength = 0; int totalLines = 0; for (File file : files) { totalLength += file.length(); try { totalLines += countLines(file); } catch (IOException e) { ps.println("Unable to determine line count for " + file.getAbsolutePath()); } } ps.printf("###Class Summary (%d lines in %d files, making a total of %d bytes)", totalLines, files.size(), totalLength); ps.println(); ps.println(); for (File file : files) { ps.println("- " + file.getName() + ": "); } ps.println(); } private void outputHeader(PrintStream ps) { ps.println("#Description"); ps.println(); ps.println("- Add some [description for what the code does](http://meta.codereview.stackexchange.com/questions/1226/code-should-include-a-description-of-what-the-code-does)"); ps.println("- Is this a follow-up question? Answer [What has changed, Which question was the previous one, and why you are looking for another review](http://meta.codereview.stackexchange.com/questions/1065/how-to-post-a-follow-up-question)"); ps.println(); } public static boolean isAsciiFile(File file) { try { return detectAsciiness(file) >= 0.99; } catch (IOException e) { return true; // if an error occoured, we want it to be added to a list and the error shown in the output } } public static void main(String[] args) { List<File> files = new ArrayList<>(); if (args.length == 0) files.addAll(fileList(".")); for (String arg : args) { files.addAll(fileList(arg)); } new ReviewPreparer(files).createFormattedQuestion(System.out); } public static List<File> fileList(String pattern) { List<File> files = new ArrayList<>(); File file = new File(pattern); if (file.exists()) { if (file.isDirectory()) { for (File f : file.listFiles()) if (!f.isDirectory() && isAsciiFile(f)) files.add(f); } else files.add(file); } else { // extract path int lastSeparator = pattern.lastIndexOf('\\'); lastSeparator = Math.max(lastSeparator, pattern.lastIndexOf('/')); String path = lastSeparator < 0 ? "." : pattern.substring(0, lastSeparator); file = new File(path); // path has been extracted, check if path exists if (file.exists()) { // create a regex for searching for files, such as *.java, Test*.java String regex = lastSeparator < 0 ? pattern : pattern.substring(lastSeparator + 1); regex = regex.replaceAll("\\.", "\\.").replaceAll("\\?", ".?").replaceAll("\\*", ".*"); for (File f : file.listFiles()) { // loop through directory, skip directories and filenames that don't match the pattern if (!f.isDirectory() && f.getName().matches(regex) && isAsciiFile(f)) { files.add(f); } } } else System.out.println("Unable to find path " + file); } return files; } } TextAreaOutputStream.java: (41 lines, 806 bytes) public class TextAreaOutputStream extends OutputStream { private final JTextArea textArea; private final StringBuilder sb = new StringBuilder(); public TextAreaOutputStream(final JTextArea textArea) { this.textArea = textArea; } @Override public void flush() { } @Override public void close() { } @Override public void write(int b) throws IOException { if (b == '\n') { final String text = sb.toString() + "\n"; SwingUtilities.invokeLater(new Runnable() { public void run() { textArea.append(text); } }); sb.setLength(0); return; } sb.append((char) b); } } Usage / Test You can now use the tool directly by downloading the jar-file from GitHub and running it with one of the following options: java -jar ReviewPrepare.jar runs the Swing form to let you choose files using a GUI. java -jar ReviewPrepare.jar . runs the program in the current working directory and outputting to stdout. java -jar ReviewPrepare.jar . > out.txt runs the program in the current working directory and outputting to the file out.txt (I used this to create this question) java -jar ReviewPrepare.jar C:/some/path/*.java > out.txt runs the program in the specified directory, matching all *.java files and outputting to the file out.txt Questions My main concern currently is with the way I implemented the command line parameters, could it be done easier? (Preferably without using an external library as I would like my code to be independent if possible, although library suggestions for this is also welcome) Is there any common file-pattern-argument that I missed? I'm also a bit concerned with the extensibility of this, right now it feels not extensible at all. What if someone would want to add custom features for the way Python/C#/C++/etc. files are formatted? Then hard-coding the "scan for imports" in the way I have done it doesn't feel quite optimal. General reviews are also of course welcome. Answer: General Now that you have such neat postings, the answers are going to need to be neater too. GUI Bugs When I run the GUI, it does not let me select directories from the File Browser. It also starts in the 'Documents' directory, and it would be better to do one of two things: start in the current directory start in the last directory used (use java.util.pefs.Preferences ?) You should add: JFileChooser dialog = new JFileChooser(); dialog.setCurrentDirectory("."); dialog.setMultiSelectionEnabled(true); dialog.setFileSelectionMode(JFileChooser.FILES_AND_DIRECTORIES); Then you should also support expanding any directory results from the chooser. This will make the behaviour in the GUI match the commandline more closely. A second problem is in the JTextArea display. It should have scroll-bars so that you can inspect the results before copying/pasting them. While looking at those changes, I discovered that you were doing all your File IO on the event-dispatch thread... this is bad practice.... I had to do the following: // add a scrollPane.... private final JScrollPane scrollPane = new JScrollPane(result); ...... // Inside the constructor: final Runnable viewupdater = new Runnable() { public void run() { result.setText(""); ReviewPreparer preparer = new ReviewPreparer(filesToList(model)); TextAreaOutputStream outputStream = new TextAreaOutputStream(result); preparer.createFormattedQuestion(outputStream); outputStream.flush(); result.setCaretPosition(0); } }; JButton performButton = new JButton("Create Question stub with code included"); performButton.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent arg0) { Thread worker = new Thread(viewupdater); worker.setDaemon(true); worker.start(); } }); scrollPane.setAutoscrolls(true); scrollPane.setHorizontalScrollBarPolicy(ScrollPaneConstants.HORIZONTAL_SCROLLBAR_AS_NEEDED); scrollPane.setVerticalScrollBarPolicy(ScrollPaneConstants.VERTICAL_SCROLLBAR_AS_NEEDED); contentPane.add(scrollPane, BorderLayout.CENTER); contentPane.add(performButton, BorderLayout.SOUTH); As I was doing this change I noticed that you are not doing any best-practice closing of the TextAreaOutputStream instance, and, I looked in to the TextAreaOutputStream code, and, it's not the right solution. It is creating a new thread for every line from every file.... and it is horrible overkill. That whole class should be removed, and replaced with: final Runnable viewupdater = new Runnable() { public void run() { ReviewPreparer preparer = new ReviewPreparer(filesToList(model)); try (final StringWriter sw = new StringWriter()) { preparer.createFormattedQuestion(sw); SwingUtilities.invokeLater(new Runnable() { @Override public void run() { result.setText(sw.toString()); result.setCaretPosition(0); } }); } } }; Note how the above is changed to use a Writer instead of an OutputStream..... Using an OutputStream for text data is a broken model.... Readers and Writers for text, and Streams for binary. That's a good segway in to the non-gui code.... The Core engine The TextareaOutputStream made me realize that all of your methods are stream based, except for some parts that are buried in the ReviewPreparer. The PrintStream code should all be replaced with a StringBuilder..... you are limited to the size of a CR post anyway, and you are accumulating the data in to a TextArea... it's not like you will run out of memory. This is also an interesting segway to the CountingOutputStream. There is no need for that either.... you are not using it to count the file sizes, but the actual post length. This should be measured in characters, not bytes.... so, it's a broken class. Get rid of it. So, get rid of the PrintStream as well. PrintStream is a synchronized class, and is much, much slower than StringBuilder. Appending the data to StringBuilder also means you can get the character-length from the StringBuilder instead of the byte-length from the CountingOutputStream. One final observation....... inside the outputFileContents(PrintStream ps) method you do: try (BufferedReader in = new BufferedReader(new InputStreamReader( new FileInputStream(file)))) { int lines = -1; try { lines = countLines(file); } catch (IOException e) { } ps.printf("**%s:** (%d lines, %d bytes)", file.getName(), lines, file.length()); ps.println(); ps.println(); String line; int importStatementsFinished = 0; while ((line = in.readLine()) != null) { This is broken for a few reasons.... Firstly, you should not be using a FileInputStream, but a FileReader. Secondly, you have the support method countLines(File): private int countLines(File file) throws IOException { return Files.readAllLines(file.toPath(), StandardCharsets.UTF_8).size(); } This method fully-reads the file.... again .... Why don't you replace all the big code above with: try { List<String> filelines = Files.readAllLines(file.toPath(), StandardCharsets.UTF_8); sb.append(String.format("**%s:** (%d lines, %d bytes)", file.getName(), filelines.size(), file.length())); sb.append("\n\n"); int importStatementsFinished = 0; for (String line : filelines) { // skip package and import declarations This saves having to read each file twice..... Anway, that's enough for now.
{ "domain": "codereview.stackexchange", "id": 6035, "tags": "java, swing, file-system, formatting, stackexchange" }
Space travel to distant stars
Question: This is more of a hypothetical question. Say space travel at near light speed was possible, and I wanted to travel in my spaceship to some distant star many light years away. At the time and location of my departure, looking out at space, I could measure that star's coordinates and adjust my spaceship's initial direction of motion accordingly. However, because of the star's large distance, there would be a deviation between the coordinates I measured, and its "actual coordinates" (because of the amount of time it would take light to travel from the star to me). Even more so, in the time it would take me to reach the coordinates I have measured, the star would have moved even more from its original coordinates (supposing it has some velocity). This means my spaceship would just miss the star. The greater the distance of the star, the greater the deviation. Another factor to consider: because the spaceship is travelling at near light speed, it will have no way of receiving information from outside its frame of reference (relying on light for receiving information), so the spaceship could not adjust its direction of motion so as to not miss the star. In this case, near light-speed space travel to faraway stars would not be very practical. Is there something I am missing? Are there ways of solving the problems presented? Answer: The most precise way to measure the distance of a star is by parallax, measuring the angles to the star from two points in the Earth's orbit that are separated by two Astronomical Units (AU) six months apart. The distance to the star can be calculated from the tiny difference in angles. Astronomers have been inventing more and more precise techniques for measuring smaller and smaller angles over the last few centuries. And they will continue to do so for all the decades, centuries, or millennia it will take to invent spaceships capable of travelling at almost the speed of light. So when a space ship starts for a distant star, it will know fairly exactly how far away it was when the light reaching Earth at the time was emitted. And thus they will know fairly exactly how long that light was emitted and how long a time there has been for the relative positions of the two stars to change. In shooting there is technique called "leading the target", not aiming in the present direction to the target, but to where the target will be when the bullet or cannonshell arrives. By noting the shift in spectral lines in the spectrum of the star, astronomers measure how fast the star is getting closer to or farther from the Earth. By measuring changes in the direction to the star over time, astronomers will know how fast the star will be travelling sideways compared to Earth. And computer programs can easily calculate the past and future positions of stars compared to Earth, once enough data has been secured. Here is a link to a table of calculated past and future close passes between the Sun and other stars within a few million years of the present time. https://en.wikipedia.org/wiki/List_of_nearest_stars_and_brown_dwarfs#Distant_future_and_past_encounters And if a future society has spaceships which can travel almost as fast as light, they will send manned or unmanned observatories outside of he solar system to make parallax observations using a much wider baseline than the Earth's orbit, which has a maximum width of 2 AU. If a star is observed from two positions, each position 1 light year to the side of the line between Earth and the star, the two positions will be 63,241.077 times as many AU apart as in Earth based observations, so parallaxes taken with equally precise techniques would result in distances 63,2141.077 times as precise. If a star is observed from two positions, each position 1 parsec to the side of the line between Earth and the star, the two positions will be 648,000 times as many AU apart as in Earth based observations, so parallaxes taken with equally precise techniques would result in distances 206,264.81 times as precise. Thus it will be simple to aim the spaceship ahead of the star's current direction so that the spaceship will arrive at the future position of the star instead of the present position of the star. Furthermore, the velocity of a star is likely to be less than 1,000 kilometers per second relative to the Sun. Suppose the voyage takes 1,000 years. By definition, a light year is the distance travelled by light in 365.25 Earth days, so there are 31,557,600 light seconds in a light years. Thus at 1,000 kilometers per second, the star would move 31,557,600,000 kilometers in one year, or 0.00333564 of a light year. In 1,000 years the star would move at most 3.335640952 of a light year. If the starship has the ability to accelerate to almost the speed of light and then decelerate again, the ability to travel three more light years wouldn't be much of a problem, even if no adjustments were made for the velocity of the star before leaving Earth. In a much shorter voyage to star much nearer Earth, and travelling much slower relative to the Sun, the position error would be much smaller. And of course the course calculations for the voyage would take the future movements of the star into account, as I wrote above, instead of pointing the ship at the present position of the star. Furthermore the starship should be able to see the target star for most or all of the journey, and account for various relativistic effects to keep track of its position. Thus they should be able to make course corrections when and if needed.
{ "domain": "astronomy.stackexchange", "id": 6531, "tags": "observational-astronomy, distances, interstellar-travel" }
Why does the salt in the oceans not sink to the bottom?
Question: This is something that just occurred to me. If heavier elements sink, then how can the entire ocean be salty? Shouldn't the 'salt', because of its density, all sink to the bottom of the ocean? In theory, only the deepest parts of the ocean should be salty, while the top of the ocean is not. Yet, the only water in the world that isn't salty comes from rain and rivers. How can this be? Answer: When dissolved in water, salt breaks up into sodium and chlorine ions, which combine with water molecules so they cannot easily sink. However, there is a tendency for streams of fresh water to float on salt water and rise to the top. This caused problems for British submarines in the Dardanelles Straits during WW1. Moving from almost fresh water to the denser salt water, they suddenly became more buoyant and rose involuntarily to the surface, making them visible to Turkish gunners on the shore. There are also parts of the ocean where there are pools of very salty water lying on the bottom in such a way as to clearly show the pool to any diver who happens to see it, as though it were a pool on land, so in some circumstances very salty water can sink.
{ "domain": "earthscience.stackexchange", "id": 2282, "tags": "oceanography, water" }
Could a neural network detect primes?
Question: I am not looking for an efficient way to find primes (which of course is a solved problem). This is more of a "what if" question. So, in theory, could you train a neural network to predict whether or not a given number $n$ is composite or prime? How would such a network be laid out? Answer: Early success on prime number testing via artificial networks is presented in A Compositional Neural-network Solution to Prime-number Testing, László Egri, Thomas R. Shultz, 2006. The knowledge-based cascade-correlation (KBCC) network approach showed the most promise, although the practicality of this approach is eclipsed by other prime detection algorithms that usually begin by checking the least significant bit, immediately reducing the search by half, and then searching based other theorems and heuristics up to $floor(\sqrt{x})$. However the work was continued with Knowledge Based Learning with KBCC, Shultz et. al. 2006 There are actually multiple sub-questions in this question. First, let's write a more formal version of the question: "Can an artificial network of some type converge during training to a behavior that will accurately test whether the input ranging from $0$ to $2^n-1$, where $n$ is the number of bits in the integer representation, represents a prime number?" Can it by simply memorizing the primes over the range of integers? Can it by learning to factor and apply the definition of a prime? Can it by learning a known algorithm? Can it by developing a novel algorithm of its own during training? The direct answer is yes, and it has already been done according to 1. above, but it was done by over-fitting, not learning a prime number detection method. We know the human brain contains a neural network that can accomplish 2., 3., and 4., so if artificial networks are developed to the degree most think they can be, then the answer is yes for those. There exists no counter-proof to exclude any of them from the range of possibilities as of this answer's writing. It is not surprising that work has been done to train artificial networks on prime number testing because of the importance of primes in discrete mathematics, its application to cryptography, and, more specifically, to cryptanalysis. We can identify the importance of digital network detection of prime numbers in the research and development of intelligent digital security in works like A First Study of the Neural Network Approach in the RSA Cryptosystem, G.c. Meletius et. al., 2002. The tie of cryptography to the security of our respective nations is also the reason why not all of the current research in this area will be public. Those of us that may have the clearance and exposure can only speak of what is not classified. On the civilian end, ongoing work in what is called novelty detection is an important direction of research. Those like Markos Markou and Sameer Singh are approaching novelty detection from the signal processing side, and it is obvious to those that understand that artificial networks are essentially digital signal processors that have multi-point self tuning capabilities can see how their work applies directly to this question. Markou and Singh write, "There are a multitude of applications where novelty detection is extremely important including signal processing, computer vision, pattern recognition, data mining, and robotics." On the cognitive mathematics side, the development of a mathematics of surprise, such as Learning with Surprise: Theory and Applications (thesis), Mohammadjavad Faraji, 2016 may further what Ergi and Shultz began.
{ "domain": "ai.stackexchange", "id": 3511, "tags": "neural-networks, prediction, primality-test" }
Definition of Dye- Reduction Test?
Question: Can some one give a simple explanation or definition on what a dye-reduction test is. Answer: Dye reduction tests (and there seem to be loads of different ones) are simply assays in which a dye becomes decolourised to give you a visual indication of whether a certain process is occuring. Here you can find an example of a dye reduction test with Methylene Blue and Reazurin which indirectly measures the bacterial densities in milk and cream. See this google book for more info.
{ "domain": "biology.stackexchange", "id": 927, "tags": "biochemistry, genetics, pharmacology, microbiology, terminology" }
Carbanion Stability
Question: I was wondering that whether a Cross-Conjugated or a Extended-Conjugated carbanion is stable I have sort of memorise that a Cross-Conjugated one is more stable For example, Which of the Carbanion is more stable, I figure it must be the left one since the chlorine withdraws electron density. For reference let the Right one be 2 and left one be 1 Answer: As phrased, this question is not answerable. Carbanions 1 and 2, are resonance contributors of a delocalized carbanion. These two contributors and a third (3) are shown in the image below. A common misconception about resonance contributors is that each structure exists and the species alternates through them. This is not the case. If this were the case, we would be able to detect all three contributors using spectroscopic methods (though 1 and 3 would likely be indistinguishable). Carbanions 1 and 2 (and 3) all have the same response to all forms of spectroscopy. Each resonance contributor is an approximation of the true structure of the delocalized carbanion. Another way of looking at the structure of the delocalized carbanion is to draw a resonance hyrbid. This structure shows the delocalization. A similar representation uses partial charges to indicate the positions where the negative charge accumulates. Sometimes when we ask the question about the comparative stability of two resonance contributors, what we really mean is which is more important for approximately the true structure of the resonance hybrid. In other words, which of the three carbons in the hybrid has the most negative charge. We can apply the same reasoning to answer this question as we would to answer the stability question. Since the two structures are resonance contributors of the same hybrid, we can ignore resonance stabilization as a consideration. Both have the same degree of resonance. We then need to consider induction. There is an electronegative chlorine atom in the structure than can stabilize negative charge through induction. Inductive stabilization is through the sigma-bond network and decreases over distance. You should be able to use this information to judge which resonance contributor is more important (1/3 or 2). On a side note, the question of stability of these anions is complicated by the ease of elimination of the chloride anion to produce benzene, which is an aromatic compound, and the chloride anion.
{ "domain": "chemistry.stackexchange", "id": 11308, "tags": "organic-chemistry, stability" }
propagate userdata smach
Question: First, am I right in assuming input_keys and output_keys are the preferred method of propagating information throughout a SMACH State Machine? From what I can see, this requires a good deal of bookeeping, but if there's no other way, then that's that. I cannot get userdata passing between a State Machine and a Sub-Container. Here's what I would expect to work. class QuickTestMain( smach.State ): def __init__( self, name ): smach.State.__init__( self, outcomes = ['succeeded','aborted','preempted'] ) self.count = 0 self.name = name def execute( self, userdata ): r = rospy.Rate( 1 ) for i in xrange( 5 ): self.count += 1 rospy.logout( '%s at count %3.2f' % (self.name, self.count)) r.sleep() return 'succeeded' class PrintStr(smach.State): def __init__(self, ins = 'Hello'): smach.State.__init__(self, outcomes=['succeeded', 'aborted', 'preempted'], input_keys=['in_key'],output_keys=['print_data']) self.ins = ins def execute(self, userdata): userdata.print_data=userdata.in_key rospy.logout( 'Received input data: %s' % userdata.in_key ) return 'succeeded' def sms(): smc = smach.StateMachine( outcomes=['succeeded', 'aborted', 'preempted'], input_keys=['outtie']) print smc.userdata.outtie # Fails with smc: smach.StateMachine.add('MAIN2', QuickTestMain('main2'),transitions={'succeeded':'succeeded'}) return smc if __name__ == '__main__': rospy.init_node( 'tmp_test') sm = smach.StateMachine(outcomes=['succeeded','aborted','preempted'],output_keys=['five']) sm.userdata.five='5' print sm.userdata.five with sm: smach.StateMachine.add( 'PS1', PrintStr(), transitions={'succeeded':'SMS'},remapping={'in_key':'five'}) smach.StateMachine.add( 'SMS', sms(), transitions={'succeeded':'succeeded'},remapping={'outtie':'five'}) sis = IntrospectionServer( 'sm_test', sm, '/SM_TEST' ) sis.start() rospy.sleep( 5 ) sm.execute() sis.stop() I am attempting to pass userdata from a State Machine to a sub-SM with the remapping command, but apparently this doesn't work for Sub-SM ? Is there anywhere in the docs I could get more info on this? Thanks Originally posted by phil0stine on ROS Answers with karma: 682 on 2012-03-21 Post score: 2 Original comments Comment by Mehdi. on 2014-06-30: I don't remember seeing a declaration of a state as a function in the tutorials. Check the tutorial for nested states, it shows how to correctly declare a state as a substatemachine. Answer: Your sms() function is being called when you create the SMS state, not when the state is executed, so that is definitely going to fail. W.r.t. userdata remapping, that maps userdata keys to other userdata keys, it does not "set" the value of a userdata key. Originally posted by jbohren with karma: 5809 on 2014-07-03 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 8673, "tags": "ros, executive-smach, smach" }
How can a 32bit CPU have an addressable memory size of 16TB?
Question: Frequently, on a 32-bit CPU, each page-table entry is 4 bytes long, but that size can vary as well. A 32-bit entry can point to one of $2^{32}$ physical page frames. If frame size is 4 KB (212), then a system with 4-byte entries can address $2^{44}$ bytes (or 16 TB) of physical memory. The above statement is taken from the book "Operating System Principles" by Galvin. If all 32 bits in a 32-bit CPU are used to refer to pages , then we can have $2^{32}$ pages. But then no more bits will be left to point to memory inside a page of size $2^{12}$ bits since all 32-bits have been used up. How can we thus say that $2^{44}$ bytes are addressable? Answer: The $32$ bit page frame address acts as a base address and will be typically stored in an index register. An individual machine code instruction (e.g. a branch instruction) will then contain a $12$ byte bit offset. The offset is added to the base address to create the complete $44$ byte bit address.
{ "domain": "cs.stackexchange", "id": 14825, "tags": "operating-systems" }
Why computer science papers rarely use advanced mathematics?
Question: I am a M.Sc. student in computer science, working on information networks and recommender systems. This days when I went through the top-tier conferences papers in the field I have seen most of them use solely simple machine learning tools such as generalized linear model, Expectation Maximization, Maximum Likelihood, etc. One may argue that this simple methods work better and We shouldn't take the blame for the sake of simplicity. That is true but I have not even seen an interest to try compare their results with some more advanced mathematical methods. I am wondering why scientists don't try to explore more mathematics to involve in their research. Because there is not something better in exploding world of mathematics or because of its difficulty, etc? Edit: I mean the areas that are more related to continuous mathematics than CS theory. Answer: There are many reasons that you may not see a lot of complex mathematics in the papers you are reading. First, the tools used depend on the task at hand. If a task is simple, a simple tool might be adequate. Also, if the task runs on a simple system, a simple tool is often best. Second, usually in Computer Science, the goal is usually to make things usable to a layprogrammer; therefore, it would be detrimental to make something depend on an advanced or abstract concept that few people understand. Often, the proofs sections of algorithms papers have much more advanced mathematics than the rest of the paper; the assumption is that anyone who wants to verify the correctness of the algorithm or its complexity can invest the time in understanding the proof, but that that is not essential to use the results. Finally, some of this has to do with a lack of familiarity; people who spend their lives reading and writing computer science papers might never have the time to learn about new, highly complex mathematics. Actually, a great way to expand the field is to introduce a concept that many mathematicians understand in a way that it is accessible to and usable by computer scientists. (For instance, I have a friend whose PhD work applied known concepts of Control Theory to problems in Motion Planning. He did very little new work, but he did advance Motion Planning.)
{ "domain": "cs.stackexchange", "id": 4962, "tags": "research" }
autowareauto.git causes permission error during ADE installation. Are additional permission rights required?
Question: git clone --recurse-submodules git@gitlab.com:autowarefoundation/autoware.auto/autowareauto.git Causes error: Permission denied (publickey). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. During ADE installation on Ubuntu 18.10. with Docker 19.03.6 #autoware Originally posted by jpalo on ROS Answers with karma: 13 on 2020-07-16 Post score: 0 Answer: It seems a problem of ssh in gitlab. Please add the ssh key in your gitlab account. See https://docs.gitlab.com/ee/ssh/ Originally posted by TakaHoribe with karma: 181 on 2020-07-16 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by jpalo on 2020-07-17: Thank you for the response. The problem was solved by installing ade via http instead of ssh. It was indeed an ssh/gitlab problem. the ssh key should have worked, too. Comment by Josh Whitley on 2020-07-18: Please remember to accept the answer if your question was answered (the checkmark under the score buttons).
{ "domain": "robotics.stackexchange", "id": 35290, "tags": "ros2" }
Attach spotlights to robot links in Gazebo 1.9 + ROS Hydro?
Question: Is it possible to do anything like attach a light to robot and then have it move around with the robot? <gazebo reference="chassis"> <light type="spot" name="front_spot"> <pose>1 0 0 0 0 0</pose> <diffuse>0 1 0 1</diffuse> <attenuation> <range>10</range> <constant>0.2</constant> <linear>0.01</linear> </attenuation> <direction>1 0 0</direction> </light> </gazebo> (this is what I tried adding to my ros urdf file but I get this error I believe when the conversion to sdf is happening: Error [parser.cc:697] XML Element[light], child of element[link] not defined in SDF. Ignoring.[link] Update - one thing I've tried is use a texture projector, with a spotlight that fades from white to 100% transparent at the edges. This doesn't work because the projected texture looks to be purely additive, it'll produce a spot but not reveal the texture in the model it is being projected on. Originally posted by Lucas Walter on Gazebo Answers with karma: 115 on 2013-10-16 Post score: 0 Answer: This feature is currently not present. Here is an issue to help track its development. Originally posted by nkoenig with karma: 7676 on 2013-10-17 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 3496, "tags": "gazebo" }
Minimum Possible Test MSE
Question: I have a little confusion. What follows is from Introduction to Statistical Learning (2013) by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani. My understanding of what is going on is the following. The black curve is a function, let us say $y=f(x)$. We have a random variable $g$ which I write as $g=f+\varepsilon$, $f$ plus noise. The data points are a subset of the plane, $n$ trials of $g$, the (possibly multi)-set $T:=\{(x_i,g(x_i)):i=1,\dots,n\}$. I imagine (incorrectly as I know that splines are not polynomials) that the yellow curve is a degree one polynomial (flexibility two) that is the best LS fit to the the trials (the training), the blue is the best degree five polynomial (flexibility six), and the green is the best degree $(20+\alpha)$ polynomial (flexibility $20+\alpha+1$). In my head, the training should be the data points $T$, while the test should be $f$ (as in $f$ is the expectation of the test data). I understand that the grey line is telling me that increasing the flexibility (degree of the polynomials in my head), allows me to approximate better the set $T$. However, if I have duplicates of $x$ in $T$, say $x_j=x_k=x^*$, with different $g$ values (e.g. something like $(20,3),\,(20,5)$ both in $T$), then I cannot have a polynomial (or indeed spline or any function) $p$ that has $p(x_j)$ and $p(x_k)$ different: $p(x_j)=p(x_k)=p(x^*)$, single-valued. Therefore, if I have such duplications in the $x$ variable, I cannot reduce the MSE to zero. In turn, the red line shows, that when we overfit the data with too much flexibility, the fitted curve is (my usage) biased to the training and so will not model well $f$, and so we have this increasing. The problem I can't square (excuse the pun), is the dashed line. It says minimum possible test MSE over all models. Whether 'test' refers to $f$ or $T$ this does not make sense to me. If 'test' here means $f$, well surely this is zero? We can approximate $f$ arbitrarily well with a polynomial of large enough degree. If 'test' here means the data $T$, we must conclude that $T$ contains $x$ duplicates: otherwise we could fit a polynomial of degree $n+1$ through all the test points and get this to zero. Therefore there must be duplicates, and so, perhaps, this theoretically best fit goes through all the points which are not duplicated, and goes through the average $g(x_i)$ of the duplicated points... and the answer turns out to be one... but then the grey line should not go below this... Therefore I conclude that the dashed line is the best possible fit to $f$... but why isn't this zero? Questions: Am I right to be confused by this? Is the black $f$ the test or the training? Am I misunderstanding something else? Perhaps these (smoothing) splines cannot well-approximate as well as polynomials? Answer: Important disclaimer: I'm not a statistician and I'm not sure about my interpretation! I also thought at first about the duplicates, but I think the problem might be with this assumption: In my head, the training should be the data points , while the test should be (as in is the expectation of the test data). Specifically the last part: in principle the test set is made of points from the same distribution as the training data, with the same risk of noise. In other words, the test set $t$ is similar to $T$: $t=\{(x_i,g(x_i)):i=1,\dots,m\}$ (and not $t=\{(x_i,f(x_i)):i=1,\dots,m\}$). If 'test' here means the data , we must conclude that contains duplicates: otherwise we could fit a polynomial of degree +1 through all the test points and get this to zero. Importantly, the test set $t$ is different from the training set $T$, and the estimated function $\hat{f}$ is based only on the points in $T$. So this way it makes sense that even a perfect estimate $\hat{f}=f$ might not be able to predict the true (noisy) value for every point $x\in t$. That could explain the non-zero minimum test MSE.
{ "domain": "datascience.stackexchange", "id": 6101, "tags": "regression, predictive-modeling, overfitting" }
Are laws of Physics same in frames having zero relative acceleration?
Question: If two frames have the same acceleration, then they'll be moving with a uniform speed with respect to each other. Are laws of Physics the same in these two frames? Answer: The laws of physics are not frame dependent. What is free dependent is the equations of motion. And, indeed, we will find that the equations of motion will be the same in two frames that are accelerating the same. You can do the frame transform and find that all vectors transform by the identity transform (other than position, of course) This is true for "normal" equations of motion If you embed some effect (such as an electrostatic repulsion) in the equation of motion rather than thinking of it as its own force, you obviously will see different forces at the same numeric coordinate in different frames
{ "domain": "physics.stackexchange", "id": 81208, "tags": "special-relativity, reference-frames, inertial-frames, relativity" }
Lowest point of a loose cable with a pulley/mass hanging from it
Question: This is part of a homework question which I've been stuck on for several hours. I've tried googling "lowest point of rope", "lowest point of hanging cable", "lowest point of pulley", and a bunch of other combinations without luck. Cable ABC has a length of 5m. The cable is attached to a wall on the left at A, and attached to a wall on the right at C, 0.75m above the vertical position of A (so C is attached at a higher location on the wall). The distance between the walls is 3.5m. A 100kg sack is hanging by a pulley on this cable at equilibrium, at B. Find the horizontal distance x of the pulley from the left wall (neglect the size of the pulley). Intuitively I think the the pulley would hang at the location where it's closest to the ground (hence "lowest point of loose cable..."). However, I have no idea how to calculate it. We're only allowed scientific calculators (no graphing) and whenever I try to set up an equation it blows up. Once I figure out x it should be relatively easy to calculate the component forces for equilibrium. I tried looking for examples in the textbook and internet for something like this without luck. | ---- 3.5m -----| ---D-------------* <- C | | /| | <- 0.75m / | * <- A / | |\ | / F | \|/ | * <- B E | _______ | 100kg | |--| <- x Length of cable: 5m Here's a text diagram, as best as I could make it Update: Found a hint from the textbook - (3.5 - x)/cos(o) + x/cos(o) = 5. Not quite sure what to make of it, but it does kinda remind me of an ellipse at a slant... https://math.stackexchange.com/questions/108270/what-is-the-equation-of-an-ellipse-that-is-not-aligned-with-the-axis Update 2: Upon closer inspection of aufkag's angle-suggestion and the hint from the textbook, I believe he is correct about the angles being equal - the formula calculates the two segments of the rope from the adjacent sides x and 3.5 - x. By the way, how can it be explained or "proved" or what's the law that says the angles between AB and the wall and AC and the wall in a setup like this are equal? Update 3: (after solved, see comment for aufkag): Added D, E, F. ABD = BAE and CBD = BCF, but can anyone prove or point out the law that says ABD = CBD or BAE = BCF? Anyways, the steps are: o = angle AB and the horizontal or BC and the horizontal x / cos(o) + (3.5 - x) / cos(o) = 5 (sum of segments of rope is 5) tension in AB = tension in BC, therefore they share the same "load" of the mass, so we can calculate the tension in just one side 100 * 9.81 / 2 / sin(o) = 687N (approximately - first half of answer) 0.75 + xtan(o) = (3.5 - x)tan(o) (equal lengths for line segment BD) solve for x to get 1.38m Answer: aufkag pointed out the necessary parts for the solution, but didn't make an answer. This walks through the problem using his tips The key is to realize that angles BAE and BCF are equal. By geometric laws DBA and DBC are equal to those other two too (notice parallel lines AE, DB, and CF). Proof by aufkag (quote from his last comment) Because the sack is just hanging there, the horizontal forces must cancel. Therefore the horizontal components of the tensions must cancel.[1] Because the pulley is free, the tension in both sides of the cable must be equal.[2] Combining these two facts, the vertical components of the tensions must be equal.[3] Therefore (2+3), both tensions must make equal opposite angles with DB.[4] So let $\theta$ be the angle between AB and the horizontal (same as BC and the horizontal) $\frac{x}{\cos(\theta)} +\frac{3.5 - x}{\cos(\theta)} = 5$ (sum of the segments of the rope is 5) Since the tension in the rope is the same on both segments, their horizontal and vertical components must be the same. If their vertical components are the same, they each share one half of the "load" of the pulley/weight. $\frac{100 \cdot 9.81}{ 2 \sin(\theta)} = 687 \, \mathrm{N}$ (approximately). This is the first half of the answer (tension in the cable) $0.75 + x \cdot \tan(\theta) = (3.5 - x) \cdot \tan(\theta)$ (equal lengths for the segment BD, calculated using geometry from the left hand side and the right hand side). Solve for $x$ to get 1.38m.
{ "domain": "physics.stackexchange", "id": 9633, "tags": "homework-and-exercises, newtonian-mechanics, equilibrium" }
HMD suggestions for usage with ROS software
Question: Hello all, I'm searching for an HMD (head-mounted display) to use for stereoscopic vision and pan&tilt camera control using IMU readings. As such, the requirements are: Good resolution and stereocopic view; Compatibility with ubuntu; IMU incorporated. Also, if you have any experience with some HMD and you use ROS, can you point how you use sterescopic view from 2 different image sources in ROS? There are many methods and I'd like to know which ones you use. Thank you in advance, Filipe Jesus Originally posted by Filipe Jesus on ROS Answers with karma: 23 on 2012-11-22 Post score: 1 Answer: You might find this announcement of Occulus Rift support for rviz interesting. http://ros-users.122217.n3.nabble.com/Oculus-Rift-Integration-in-RViz-td4020193.html Originally posted by tfoote with karma: 58457 on 2013-07-17 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 11841, "tags": "ros, hardware" }
Implementing Peek to IEnumerator and IEnumerator
Question: Many of you might have come to the point and wished to have a Peek for IEnumerator and IEnumerator. I tried to implement it by cheating a bit and looking up the next element before the actual MoveNext call. So I ended up with some kind of wrapper. First of the extensions to convert default enumerators: public static class PeekableEnumeratorExtension { public static PeekableEnumerator ToPeekable(this IEnumerator enumerator) { return new PeekableEnumerator(enumerator); } public static PeekableEnumerator<T> ToPeekable<T>(this IEnumerator<T> enumerator) { return new PeekableEnumerator<T>(enumerator); } } And here is the non-generic PeekableEnumerator: public class PeekableEnumerator : IEnumerator { protected enum Status { Uninitialized, Starting, Started, Ending, Ended } protected IEnumerator enumerator; protected Status status; protected object current; protected object peek; public PeekableEnumerator(IEnumerator enumerator) { this.enumerator = enumerator; status = Status.Uninitialized; MoveNext(); } public object Current { get { if (Status.Starting == status) throw new InvalidOperationException("Enumeration has not started. Call MoveNext."); if (Status.Ended == status) throw new InvalidOperationException("Enumeration already finished."); return current; } } public object Peek { get { if (Status.Ending == status || Status.Ended == status) throw new InvalidOperationException("Enumeration already finished."); return peek; } } public bool MoveNext() { current = peek; switch (status) { case Status.Uninitialized: case Status.Starting: if (enumerator.MoveNext()) { status++; peek = enumerator.Current; } else status = Status.Ending; break; case Status.Started: if (enumerator.MoveNext()) peek = enumerator.Current; else status++; break; case Status.Ending: status++; break; } return Status.Ended != status; } public void Reset() { enumerator.Reset(); status = Status.Uninitialized; MoveNext(); } } And the very analog PeekableEnumerator: public class PeekableEnumerator<T> : IEnumerator<T> { protected enum Status { Uninitialized, Starting, Started, Ending, Ended } protected IEnumerator<T> enumerator; protected Status status; protected T current; protected T peek; public PeekableEnumerator(IEnumerator<T> enumerator) { this.enumerator = enumerator; status = Status.Uninitialized; MoveNext(); } public T Current { get { if (Status.Starting == status) throw new InvalidOperationException("Enumeration has not started. Call MoveNext."); if (Status.Ended == status) throw new InvalidOperationException("Enumeration already finished."); return current; } } object System.Collections.IEnumerator.Current { get { return Current; } } public T Peek { get { if (Status.Ending == status || Status.Ended == status) throw new InvalidOperationException("Enumeration already finished."); return peek; } } public bool MoveNext() { current = peek; switch (status) { case Status.Uninitialized: case Status.Starting: if (enumerator.MoveNext()) { status++; peek = enumerator.Current; } else status = Status.Ending; break; case Status.Started: if (enumerator.MoveNext()) peek = enumerator.Current; else status++; break; case Status.Ending: status++; break; } return Status.Ended != status; } public void Reset() { enumerator.Reset(); status = Status.Uninitialized; MoveNext(); } public void Dispose() { enumerator.Dispose(); } } Before you ask: Why are there 5 statuses? It is derived from the lifetime of Current and Peek: Status | Current | Peek | Comment --------------+-----------+-----------+----------------------------------- Uninitialized | n/a | n/a | Internal for constructor and Reset Starting | Exception | Available | Before first MoveNext Started | Available | Available | After first MoveNext Ending | Available | Exception | wrapped MoveNext returned false Ended | Exception | Exception | After enumeration finished Example usage: var a = new[] { 1, 2, 3 }.GetEnumerator().ToPeekable(); a.Current; // InvalidOperationException a.Peek; // 1 a.MoveNext(); // true a.Current; // 1 a.Peek; // 2 a.MoveNext(); // true a.Current; // 2 a.Peek; // 3 a.MoveNext(); // true a.Current; // 3 a.Peek; // InvalidOperationException a.MoveNext(); // false a.Current; // InvalidOperationException a.Peek; // InvalidOperationException Update Thanks to svick here is an alternative version using a Queue. It changes the basic usage from IEnumerator to ICollection as input but I can live with that. I need to keep an copy of the original collection for resetting. public class PeekableEnumerator : IEnumerator { protected ICollection collection; protected Queue queue; protected bool current_set; protected object current; protected bool peek_set; protected object peek; public object Current { get { if (!current_set) if (peek_set) throw new InvalidOperationException("Enumeration has not started. Call MoveNext."); else throw new InvalidOperationException("Enumeration already finished."); return current; } } public object Peek { get { if (!peek_set) throw new InvalidOperationException("Enumeration already finished."); return peek; } } public PeekableEnumerator(ICollection collection) { this.collection = collection; Reset(); } public bool MoveNext() { current_set = peek_set; current = peek; if (0 == queue.Count) { peek_set = false; return current_set; } else { peek_set = true; peek = queue.Dequeue(); return true; } } public void Reset() { queue = new Queue(collection); MoveNext(); } } Answer: I think your implementation is too complicated, and what nagged me was that you start enumerate in constructor. Here is my implementation which fix that. The state reduced to a boolean telling that the peek value has been fetched from the underlying enumerator or not. public class PeekEnumerator<T> : IEnumerator<T> { private IEnumerator<T> _enumerator; private T _peek; private bool _didPeek; public PeekEnumerator(IEnumerator<T> enumerator) { if (enumerator == null) throw new ArgumentNullException("enumerator"); _enumerator = enumerator; } #region IEnumerator implementation public bool MoveNext() { return _didPeek ? !(_didPeek = false) : _enumerator.MoveNext(); } public void Reset() { _enumerator.Reset(); _didPeek = false; } object IEnumerator.Current { get { return this.Current; } } #endregion #region IDisposable implementation public void Dispose() { _enumerator.Dispose(); } #endregion #region IEnumerator implementation public T Current { get { return _didPeek ? _peek : _enumerator.Current; } } #endregion private void TryFetchPeek() { if (!_didPeek && (_didPeek = _enumerator.MoveNext())) { _peek = _enumerator.Current; } } public T Peek { get { TryFetchPeek(); if (!_didPeek) throw new InvalidOperationException("Enumeration already finished."); return _peek; } } } My test to make sure it complies to your needed behaviour: var a = new PeekEnumerator<int>(new [] { 1, 2, 3 }.AsEnumerable().GetEnumerator()); Console.WriteLine(a.Peek); // 1 Console.WriteLine(a.MoveNext()); // true Console.WriteLine(a.Current); // 1 Console.WriteLine(a.Peek); // 2 Console.WriteLine(a.MoveNext()); // true Console.WriteLine(a.Current); // 2 Console.WriteLine(a.Peek); // 3 Console.WriteLine(a.MoveNext()); // true Console.WriteLine(a.Current); // 3 try { Console.WriteLine(a.Peek); // InvalidOperationException } catch (Exception e) { Console.WriteLine(e.GetType()); } Console.WriteLine(a.MoveNext()); // false try { Console.WriteLine(a.Current); // InvalidOperationException } catch (Exception e) { Console.WriteLine(e.GetType()); } try { Console.WriteLine(a.Peek); // InvalidOperationException } catch (Exception e) { Console.WriteLine(e.GetType()); }
{ "domain": "codereview.stackexchange", "id": 40546, "tags": "c#" }
Entanglement decay and the second law of thermodynamics
Question: Having read this very interesting question and its answers, I started to wonder about the following: It is known that entanglement is rather fragile. Due to interactions with the environment, an entangled state can easily loose its entanglement and evolve to become a separable state. This is the much studied effect of entanglement decay. Now if entangled states are numerous and separable states are so few. Then we see here a natural tendency for a system to move from the more numerous to the few. This is opposite to the situation described by the second law of thermodynamics where systems tend to move from high order (few states) to low order (numerous state). So here's the question: does entanglement decay violate the second law of thermodynamics? Answer: Quantum entropy is not a good entanglement measure, hence it is not so much a measure of available states with given entanglement content, but of pure states in a statistically equivalent ensemble representing a given quantum state $\rho$. So a decreasing number of available states under decay of entanglement need not imply a decrease in entropy at all, to the contrary, and it is easy to construct a simple counterexample. At the very least we can say that it all depends on the dynamics: decay of entanglement may well comply with the Second Law. For the counterexample, take two systems in local states $\rho_A$, $\rho_B$ and some reasonable entanglement measure E. In general there exists a continuum of entangled states $\rho$ with identical entanglement content E and local states $\rho_A = Tr_B\rho$, $\rho_B = Tr_A\rho$. As usual, entropy reads $S(\rho) = -Tr\rho\ln\rho$. Let the two systems undergo maximal disentanglement from some such $\rho$ into the completely uncorrelated product state, $\rho \rightarrow \rho_A \otimes \rho_B$. In terms of entanglement content the number of available states decreases from many to 1, but the entropy increases because $0 \le S(\rho) \le S(\rho_A) + S(\rho_B)$. Clarification on entropy vs. equivalent statistical ensembles (following request in comments): Maximum entropy occurs for the maximally mixed state (microcanonical state), which is proportional to the identity operator and so it is unique as a density matrix. But in terms of equivalent statistical ensembles, it is maximally undetermined or disordered: an equivalent ensemble can be generated using any orthonormal basis set and even non-orthogonal overcomplete sets; the number of ways in which the elements (system copies) of any given ensemble can be distributed on available pure states is maximal (equivalently, each element has the same probability of being in any ensemble pure state). No other density matrix has this property, nor maximal entropy. In fact, for any other mixed state the pure state sets that can realize equivalent ensembles are much more limited, although they generally still form a continuum (for example, in absence of degeneracies there is a unique orthonormal set that is not necessarily a basis) and/or there are fewer ways to distribute elements of an ensemble on available pure states.
{ "domain": "physics.stackexchange", "id": 33597, "tags": "quantum-mechanics, thermodynamics" }
Order of electrophilic substitution
Question: What will be the order of reactivity towards electrophilic substitution in case of the following compounds: benzene, ethyl benzene, isopropyl benzene, tert-butyl benzene The answer at the end of the book says that ethyl benzene will be most reactive (the book doesn't explain the cause though it's a MCQ) but as according to inductive effect I think the tert-butyl group will have more +I effect on the ring. Doesn't it? Answer: In your series, all of the alkyl benzenes will have roughly the same +I inductive effect. Where they differ is with regard to the resonance effect. Hyperconjugated structures such as those drawn below for ethylbenzene are often invoked to explain these differences. There are two hydrogens in ethylbenzene that are capable of donating electrons into the aromatic ring by hypercojugation. In isopropyl benzene there is only one and there are none in *tert-*butylbenzene. Hence ethylbenzene should be the most electron donating of the compounds in your series due to resonance involving hyperconjugation. Realize that these effects are relatively small, for example the relative rates for the electrophilic nitration of various aromatic compounds is: benzene=1, toluene=24, tert-butylbenzene=15.7 (reference, page 1060) Benzene has no substituent (other than hydrogen), so neither resonance or inductive effects play a role, it would be the slowest. The expected order in your series would therefore be: ethylbenzene > isopropylbenzene > tert-butylbenzene > benzene
{ "domain": "chemistry.stackexchange", "id": 2870, "tags": "organic-chemistry, reactivity" }
Predicting product ratio by rearrangement in Dehydration Reactions
Question: I came across this question recently, which would have a clear major product. The OP has given satisfactory explanation for the product ratio, but wasn't sure about the major product. When I was looking for reference for better explanation, I show the following question in undergraduate textbook (Ref.1): Dehydration of 2,2,4-trimethyl-3-pentanol with acid gives a complex mixture of the alkenes in the indicated percentages. Write a mechanism that accounts for each product. I: 2,3,4-trimethyl-1-pentene, 29% II: 2,4,4-trimethyl-1-pentene, 24% III: 3,3,4-trimethyl-1-pentene, 2% IV: 2,4,4-trimethyl-2-pentene, 24% V: 2,3,4-trimethyl-2-pentene, 18% VI: 2-isopropyl-3-methyl-1-butene, 3% I thought it would be a good practice for our readers to predict the mechanism for the obtained product ratio and give explanation for the major products. Can anybody give a reasonable mechanism for this product ratio and explanation for why $\bf{I} \gt \bf{II} \ge \bf{IV} \gt \bf{V}$, and why $\bf{III}$ and $\bf{VI}$ are in such small amounts? Late edit: It is evident that the textbook print had an error (Thanks Nisarg Bhavsar for the finding). The compound V is actually 2,3,4-trimethyl-2-pentene, not as 3,3,4-trimethyl-2-pentene in the print. I have corrected it now. Nonetheless, the question is still interesting. References: Robert J. Ouellette and J. David Rawn, “Chapter 9: Haloalkanes and alcohols Nucleophilic Substitution and Elimination Reactions,” In Organic Chemistry: Structure, Mechanism, Synthesis; Second Edition; Academic Press (an imprint of Elsevier): London, United Kingdom, 2019, pp. 255-298 (ISBN: 978-0-12-812838-1). Answer: Although this question is from textbook written by a professor in well established institute, it lacks completeness. For example, the question is written such a poor way that it even didn't give any conditions or even in which solvent the reaction has performed. So, it's safe to assume that the reaction has performed in thermodynamic control. We'll say it is an acid catalyzed dehydration reaction in a refluxing condition in a protic solvent. Thus, it'd be a $\mathrm{E1}$ elimination reaction with carbocation intermediate(s). Let's see the products and their yields: The amount of products with significant yields suggest that the reaction has gone through few relatively stable intermediates. Let's look at these possible intermediates: The original dehydration of the substrate gives secondary carbocation, intermediate 1. This intermediate can gives only one product, 2,4,4-trimethyl-2-pentene $(\bf{IV})$, which is Zaitsev product (there are no possibility to form a Hofmann product from this intermediate). Since $\bf{IV}$ is not the only product detected, it is fair to say rate of this product formation is slower than the carbocation (intermediate 1) rearrangement to give more stable tertiary carbocation(s). Intermediate 1 can be stabilized $(2^\circ \rightarrow 3^\circ)$ by either hydride shift (reaction path $b$) to give intermediate 2 or methide shift (reaction path $a$) to give intermediate 3. Intermediate 3 can be further rearranged by another hydride shift (reaction path $c$) to give intermediate 4, which can be less favorably $(3^\circ \rightarrow 2^\circ)$ rearranged to intermediate 5 by a methide shift (reaction path $d$). Note that this secondary carbocation, intermediate 5, can gives only one product, 3,3,4-trimethyl-1-pentene $(\bf{III})$ with the least yield percentage (2%) to justify. Let's see how would the products would formed by these five intermediate carbocations: Intermediates 2, 3, and 4 are tertiary carbocations with possibility to form both Zaitsev and Hofmann products. In one glance, one might think all would give favorable Zaitsev products under the conditions. However, in reality, they have given almost equal ratio of both Zaitsev and Hofmann products except for intermediate 5, which gave only 3% of the Hofmann product, 2-isopropyl-3-methyl-1-butene $(\bf{VI})$. This result is also justified by less steric hindrance on proton abstraction from either of two isopropyl groups to form the Zaitsev product, 2,3,4-trimethyl-2-pentene $(\bf{V})$, compared to other two intermediates: In intermediate 2, a proton must be abstracted from $\ce{C}$3, which has an enormous steric hindrance created by nearby tertiary-butyl group. Thus, Hofmann product, 2,4,4-trimethyl-1-pentene $(\bf{II})$ would be the major product from this intermediate (there are two methyl groups to give this product). However, its Zaitsev product, 2,4,4-trimethyl-2-pentene $(\bf{IV})$, is also produced by intermediate 1, and hence it could be expected that $\bf{IV}$ may have significant yield as well. In intermediate 3, a proton must be abstracted again from $\ce{C}$3, which has almost equally enormous steric hindrance created by nearby iso-propyl group in addition to methyl group on $\ce{C}$3. The two methyl groups on positively charged carbon will play a role as well. Thus, Hofmann product, 2,3,4-trimethyl-1-pentene $(\bf{I})$ would be the major product from this intermediate (there are two methyl groups to give this product). However, its Zaitsev product, 2,3,4-trimethyl-2-pentene $(\bf{V})$, is also produced by intermediate 4 as its major product, and hence it could be expected that $\bf{V}$ may have significant yield as well. It's worth noting that according to the product ratios, it is safe to say that intermediate 3 is the major contributor during this reaction. Even though hydride shift is faster to form intermediate 2, its relatively slow rate of double bond formation to give Zaitsev product due to the steric hindrance by tert-Butyl group may have coursed the slower methide shift to dominate at the end to get intermediate 3.
{ "domain": "chemistry.stackexchange", "id": 15591, "tags": "organic-chemistry, reaction-mechanism, molecular-structure, alcohols" }
Determine minimum distance between two symbols in 16-QAM using the average symbol energy
Question: If the probability of bit error for a square M-ary QAM is $P_M = (1-(1-P_\sqrt{M})^2)$ where $P_\sqrt{M} = 2(1-\frac{1}{\sqrt{M}})Q(\sqrt{\frac{3E_s}{(M-1)N_0}})$ and $E_s$ is the average symbol energy, can I assume that $E_s=10A^2$?. $2A$ is the minimum distance between two adjacent symbols. Answer: A symbol with coordinates $(x,y)$ has energy $x^2+y^2$. In 16-QAM, minimum distance $2A$ implies that the values of $x$ and $y$ are restricted to the set $\lbrace\pm A,\pm3A\rbrace$, and in consequence the possible symbol energies are $2A^2$, $10A^2$ and $18A^2$. Furthermore, there are 4 symbols with energy $2A^2$, 8 symbols with energy $10A^2$, and 4 symbols with energy $18A^2$. Calculating the averagey symbol energy, we conclude that $E_s=160A^2/16=10A^2$. Note that a similar procedure can be used to calculate the average symbol energy of any quadrature modulation, even if they're not square (or even rectangular).
{ "domain": "dsp.stackexchange", "id": 3107, "tags": "digital-communications, modulation, symbol-energy" }
NP-Completeness of modified decision problems
Question: Suppose we have a NP-Complete decision problem like independent set or finding matching in graph theory and we change the greatness or smallness of the condition of that problem i.e. change the direction of the inequality in the problem's definition ($ \ge$ or $\le $). For example in independent set problem when the condition is $|V'| \ge k$ and I change it into $|V'|\le k $. Are problems modified in this way still NP-complete? Answer: It depends on the problem. For, say, clique or independent set, the problems become trivial if you change the direction of the inequality: $\emptyset$ is always a clique/independent set of size $\leq k$. On the other hand, the travelling salesman problem remains NP-complete if you ask for a tour of length at least $d$ instead of at most $d$. To see this, suppose that you want a tour of graph $G$ of length at most $d$. Let $m$ be the length of the longest edge in $G$, and let $G'$ be the graph that's identical to $G$ except that, if an edge has length $\ell$ in $G$, then it has length $m-\ell$ in $G'$. Now, any tour of length $t$ in $G$ corresponds to a tour of length $nm-t$ in $G'$ (where $n$ is the number of vertices), so $G$ has a tour of length at most $d$ if, and only if, $G'$ has a tour of length at least $nm-d$.
{ "domain": "cs.stackexchange", "id": 8491, "tags": "complexity-theory, np-complete" }
Fourier transform for non-sinusoidal signal
Question: Using MATLAB's fft function, I would like to retrieve the amplitude and frequency of this particular signal : My data file (a) has 2 columns, first one for time ans the second for amplitude, I tried this script DT = (a(2,1)-a(1,1)); Fs = 1/DT; % sampling frequency NFFT = 2^nextpow2(n); xdft = fft(a(:,2),NFFT)/n; f = Fs/2*linspace(0,1,NFFT/2+1); plot(f,2*abs(xdft(1:NFFT/2+1))) The output is : which is not corresponding to my amplitude. Thanks a lot for your help. fft with all the harmonics : Answer: The Fourier Transform will decompose your non-sinusoidal signal into harmonics, dominantly odd harmonics since your distortion appears symmetrical, and the amplitude as you derive would be the amplitude of the relevant sinusoidal harmonic (so in your case it looks like the fundamental is shown, so we are seeing the amplitude of the first fundamental harmonic which is a sinusoidal signal). It will be less than the peak amplitude shown in the time domain plot, which is the composite of all the harmonics. Other factors that will effect the amplitude is spectral leakage due to your use of a rectangular window (so to the extent the harmonics fall into a sidelobe of the kernel for your rectangular window) and scalloping loss (to the extent the fundamental frequency is between an integer sub-multiple of your sampling frequency. Both of these effects are described in more detail in my favorite paper by fred harris: fred harris On the Use of Windowing We don’t see any evidence of these harmonics in your frequency plot but this may be because you are only showing a portion of all the frequencies or perhaps because the magnitude is not on a log scale we are not able to make out the harmonics. For example, there should definitely be a signal at three times the fundamental shown. The fundamental appears to be close to 0.04; meaning we should see a harmonic at 0.12; if your frequency of 0.1 represents the Nyquist boundary, then the image of this would be at 0.08 assuming a real signal as you have plotted. (Could you please udpdate your plot to have a log (dB) magnitude scale so we can be sure nothing else is astray?). That said, we do see that your amplitudes are actually quite close visually from the plot (approx 0.17 in the FFT vs approx 0.19 in the time domain plot), so you must be concerned with the small difference, which is accounted for from the effects described above (the 0.17 is the amplitude of the primary tone as modified by the spectral leakage of your rectangular window and any scalloping loss).
{ "domain": "dsp.stackexchange", "id": 5170, "tags": "matlab, fft" }
server parameters vs dynamic reconfigure?
Question: I used to use the server parameters and the dynamic_reconfigure. I know the server parameters (rosparm) used to send messages to the node, in which those messages do not change much. The question is that I really don't understand the advantage of the server parameters over the dynamic_reconfigure. I feel I'm missing something or I don't understand the goal of the server parameters Originally posted by emacsd on ROS Answers with karma: 194 on 2014-06-25 Post score: 1 Answer: As you rightly suggest server parameters are based on a polling system whereby the Parameter server is polled by nodes to retrieve information loaded on startup (or during execution). This may be inefficient if the information doesn't change. Dynamic reconfigure utilises a callback that is called when the parameters are changed via a gui, hence it is more efficient than polling if we are expecting values to change. As for why there are both. I'd expect that server parameters were developed first, and then dynamic reconfigure developed as an additional feature. Server parameters are useful particularly when parameters dont need to change during execution, and between runs, the config files can be updated without recompiling code. Additionally server parameters are a little easier to implement. Dynamic reconfigure is useful when tuning during execution. But does require another node, either rqt_reconfigure or a dynamic reconfigure client to be implemented in order to send the changed parameters. If a parameter changes so much perhaps it is more efficient and simpler to implement a serviceServer/client or subscriber/publisher linking information between two nodes. Otherwise if using rqt_reconfigure then it requires a human to drive the gui, which is usually not what is required in robotic implementations. Originally posted by PeterMilani with karma: 1493 on 2014-06-25 This answer was ACCEPTED on the original site Post score: 9 Original comments Comment by emacsd on 2014-06-25: Thank you so much. The last argument you mentioned is the one I will use when someone asks me about the difference.
{ "domain": "robotics.stackexchange", "id": 18393, "tags": "dynamic-reconfigure" }
Practical way to convert jupyter notebook to MS Word document?
Question: What would be a practical way to convert a Jupyter Notebook to a Word document (.doc) ? I am asking this in a professional context, so I'd like to avoid manual solutions, do it in an efficient way (fast), avoid third parties... etc. Something that works like Rmarkdown to produce .doc would be very welcome. Answer: The easiest way is probably using a method similarly to what is described in this answer, that is, convert the notebook to markdown and then use any of the tools available (such as Pandoc) to convert the markdown to a Word document.
{ "domain": "datascience.stackexchange", "id": 10545, "tags": "python, jupyter" }
Checking for intersection points
Question: The aim of the program is to find those points which comes under the intersection of at least 2 circles.(space is a 1000x1000 matrix) n=input() mat=[[0 for i in range(1005)] for i in range(1005)] circles=[] for i in range(n): circles.append(map(int,raw_input().split())) ans=0 for circle in circles: minx=circle[0]-circle[2] maxx=circle[0]+circle[2] miny=circle[1]-circle[2] maxy=circle[1]+circle[2] for i in range(minx,maxx+1): for j in range(miny,maxy+1): if mat[i][j]<=1: if ((i-circle[0])**2+(j-circle[1])**2)<=(circle[2]**2): mat[i][j]+=1 if mat[i][j]>1: ans+=1 print ans n denoted the number of circle circles contain circle center and radius in the format [x,y,r]. For example, let circles = [[3,2,4],[2,5,2]]. Then it contains two circles centered at (3,2) and (2,5) with radius 4 and 2 respectively. Is the logic correct? Will it trigger any exceptions? Answer: Bad things will happen if any part of any circle strays outside the 0-to-1005 bounds. It's up to you to decide whether error handling for straying out of bounds is essential. Mild rewrite Representing each circle as a list is not quite appropriate. Using indexes circle[0], circle[1], and circle.[2] to mean x, y, and r, respectively, is awkward. As a remedy, I strongly recommend namedtuple. from collections import namedtuple Circle = namedtuple('Circle', ['x', 'y', 'r']) n = int(raw_input()) circles = [] for i in range(n): circles.append(Circle(*map(int, raw_input().split()))) Initialization of a 1005 × 1005 grid is better written as: mat = [[0] * 1005] * 1005 The rest of the program is straightforward. You should avoid switching nomenclature from x, y to i, j. To reduce nesting, you can eliminate one nested if by using and. ans = 0 for circle in circles: minx, maxx = circle.x - circle.r, circle.x + circle.r miny, maxy = circle.y - circle.r, circle.y + circle.r for x in range(minx, maxx+1): for y in range(miny, maxy+1): if mat[x][y] <= 1 and (x-circle.x)**2 + (y-circle.y)**2 <= circle.r**2: mat[x][y] += 1 if mat[x][y] == 2: ans += 1 print ans Going further The nesting of for: for: for: if: if is still rather overwhelming. I think it would be beneficial to split out some of that complexity. class Circle (namedtuple('Circle', ['x', 'y', 'r'])): def contains(self, x, y): return (x - self.x)**2 + (y - self.y)**2 <= self.r**2 def grid_points(self): for x in xrange(self.x - self.r, self.x + self.r + 1): for y in xrange(self.y - self.r, self.y + self.r + 1): if self.contains(x, y): yield x, y def read_ints(): return map(int, raw_input().split()) n = int(raw_input()) circles = [Circle(*read_ints()) for _ in xrange(n)] mat = [[0] * 1005] * 1005 ans = 0 for circle in circles: for x, y in circle.grid_points(): mat[x][y] += 1 if mat[x][y] == 2: ans += 1 print ans The way I've written it, I've removed the optimization of skipping grid points that are already known to be in the intersection of previously analyzed circles. I think it's probably a worthwhile tradeoff in favour of readability.
{ "domain": "codereview.stackexchange", "id": 11231, "tags": "python, python-2.x, error-handling, computational-geometry" }
Schur's Lemma in Zee's Group Theory book for reducible representations
Question: Main question Schur's lemma says: $$D(g) A = A D(g) \Rightarrow A = \lambda I\tag{1}$$ if $D$ is irreducible. How can I use this to show that if $D$ is reducible and if $SDS^{-1}$ is a direct sum of irreducible representations, then: $$SAS^{-1} = \lambda_1 I_{d_1} \oplus ... \oplus \lambda_n I_{d_n}?\tag{2}$$ What I understand I want to understand a consequence of Schur's lemma as discussed in Anthony Zee's Group Theory book. In this book, the general theory of representations is avoided (rings, etc.), so answers that avoid this would be helpful. On page 102, he discusses Schur's lemma. I'll provide the statement of the theorem (paraphrased) to show the sorts of technical terms that are avoided: If D(g) where $g \in G$ is a set of matrices representing group G, and furthermore D is an irreducible representation, then $D(g) A = A D(g)$ for all $g \in G$ implies $A = \lambda I$ for some number $\lambda$. So far so good. What I do not understand At the end of this discussion, he says that if $D$ is reducible, so $D$ is block diagonal, and then in that basis $H$ is also block diagonal. Why? I know that in some basis, $SDS^{-1}$ is block diagonal because it is a sum of reducible representations. However, we are loking at $WDW^\dagger$, where $W$ is unrelated to $S$ because $W$ diagonalizes $H$. More details The problem is that if I follow through the proof of the theorem where $D$ is a direct sum of irreducible representations: We can take $A$ to be Hermitian, call it $H$. We can diagonalize $H$ to get $H' = WHW^\dagger$. Using the same basis, we get $D' = WDW^\dagger$. Now we have $D'(g)A' = A'D'(g)$. And the rest of the argument shows that if $D'$ is block diagonal, then in that basis $H'$ is also block diagonal ($H'$ is also diagonal because of step 2). But why is $D'$ block diagonal? Related questions: A previous step of the same proof Seems relevant but references Mashke's theorem and modules, which are not discussed in Zee's book Answer: Here is a counterexample to OP's eq. (2): Let the decomposable representation $D=\begin{pmatrix} D_1 & 0 \cr 0 & D_1 \end{pmatrix}$ be 2 copies of the same representation $D_1$, and let $A=\begin{pmatrix} 0 & \lambda {\bf 1} \cr 0 & 0 \end{pmatrix}$. Then $D$ and $A$ commute, but $A$ is not on block-diagonal form. What extra assumption should one make to secure OP's eq. (2)? Well, here's one approach. Given a decomposable representation $D:G\to GL(V,\mathbb{F})$ with vector space $V=V_1\oplus\ldots \oplus V_n$ such that $D$ and $A$ commute, then we can use Schur's lemma to conclude that $A|_{V_i}=\lambda_i {\bf 1}_{V_i}.$ Let us additionally assume that $A$ is diagonalizable $A=\oplus_{a=1}^m\mu_a {\bf 1}_{E_a}$ with eigenvalues $\mu_a$ and eigenspaces $E_a:={\rm Ker}(A-\mu_a{\bf 1}_V)$. The eigenspaces $E_a\subseteq V$ are $G$-invariant subspace, i.e. subrepresentations $D|_{E_a}$. Let us additionally assume that all reducible representations are decomposable. Then we may assume (possibly after a similarity transformation of $A$) that $E_a=\oplus_{j=1}^{r_a}V_{i_j}$. By restricting to an irreducible $V_i\subseteq E_a$, we conclude that $\mu_a=\lambda_i$. Altogether, it follows that $A=\oplus_{i=1}^n\lambda_i {\bf 1}_{V_i}$. $\Box$
{ "domain": "physics.stackexchange", "id": 96160, "tags": "mathematical-physics, group-theory, representation-theory, mathematics, linear-algebra" }
Generate all possible unique combinations of positive numbers these have sum equal to N
Question: This is my function to generate all possible unique combinations of positive numbers these have sum equal to N. For example: If the input is 4 The output should be: [ '4', '1+3', '1+1+2', '1+1+1+1', '2+2' ] You can try it online here const f=n=>{ // n is a positive number if(n==0) return ["0"] if(n==1) return ["1"] if(n==2) return ["0+2", "1+1"] const result = [n+"+0"] for(let i=n-1; i>=n/2|0; i--){ for(const x of (f(n-i) || [])) { for(const y of (f(i) || [])) { result.push(y + "+" + x) } } } // Remove duplicated records const map = result.map(v=>v.split`+`.filter(x=>+x>0).sort((m,n)=>m-n).join`+`) return [...new Set(map)] } //Testing a=f(8) console.log(a) My approach is using recursion, it works like that: If I can find all possible unique combinations of positive numbers these have sum equal to N. Then I can find all possible unique combinations of positive numbers these have sum equal to N + 1. For example: If all possible unique combinations of positive numbers these have sum equal to 3 are: ["3", "1+2", "1+1+1"] and all possible unique combinations of positive numbers these have sum equal to 2 are ["2", "1+1"] Then for 4 it should be: 4 + 0 or 4 All possible unique combinations of combinations of positive numbers these have sum equal to 3 and 1 // for 3 it's combinations is ["3", "1+2", "1+1+1"] // for 1 it is ["1"] All possible unique combinations of combinations of positive numbers these have sum equal to 2 and 2, // for 2 it's combinations is ["2", "1+1"] And I only do the loops to the integer of n/2 to avoid duplicatings. Could you please help me to review it? const f=n=>{ // n is a positive number if(n==0) return ["0"] if(n==1) return ["1"] if(n==2) return ["0+2", "1+1"] const result = [n+"+0"] for(let i=n-1; i>=n/2|0; i--){ for(const x of (f(n-i) || [])) { for(const y of (f(i) || [])) { result.push(y + "+" + x) } } } // Remove duplicated records const map = result.map(v=>v.split`+`.filter(x=>+x>0).sort((m,n)=>m-n).join`+`) return [...new Set(map)] } //Testing a=f(8) console.log(a) Answer: Review Your code is a good simple solution. The style is sloppy. The complexity is a bit high and the techniques used are negatively impacting performance. The template literal call Array.split`+` always throws me, but I like it; your code reminds me to use it more often. General points Delimit all code blocks. Eg if(n==0) return ["0"] better as if(n==0) { return ["0"] } Why? JavaScript, like most C style languages, does not require delimited blocks for single statement blocks; however when modifying code it is very easy to overlook the missing {}. Use semicolons or be thoroughly familiar with automatic Semicolon Insertion (ASI). Rather than use continue consider using the statement } else {. Why? continue breaks the use of indentation that visually helps you see flow in a glance. continue and its friend break should be avoided when possible. // Avoid using continue to skip code for (a of list) { if (foo) { ...do something... continue; } ...lots of code... } // Rather use an else statement for (a of list) { if (foo) { ...do something... } else { ...lots of code... } } Spaces between operators: i>=n/2|0 should be i >= n / 2 | 0. When using short circuit expressions (f(n) || []) use the Nullish coalescing operator ?? eg f(n) ?? [] in rather than logical OR ||. In the two inner loops you recurse with the call to (f(n) || []). The function f() always returns an array so there is no need for || []. In the innermost loop you recurse on f(i) for every x but f(i) is the same for every x. This is forcing a lot of redundant processing. Always move calculations to a level that is One = One, rather than One = Many to avoid unnecessary overhead. Your inner loop: for(let i=n; i>=n/2|0; i--){ if(i==n){ result.push(n + "+0") continue } for(const x of (f(n-i)||[])) { for(const y of (f(i) || [])) { // repeated call to f(i) result.push(y + "+" + x) } } } Example of moving the recursive call out of the inner loop: for (let i = n; i >= n / 2 | 0; i--) { if (i === n) { result.push(n + "+0"); } else { const solvedForI = f(i); // called once only for (const x of f(n - i)) { for (const y of solvedForI) { result.push(y + "+" + x) } } } } Tips Bit-wise divide and floor Using | 0 to floor Numbers is a handy short cut, but you can divide by a power of 2 and floor in one operation. Example n / 2 | 0 is the same as n >> 1. For every left shift you divide by 2 and for every right shift you multiply by two. (n / 2 | 0) === (n >> 1) (n / 4 | 0) === (n >> 2) (n / 8 | 0) === (n >> 3) (n / 256 | 0) === (n >> 8) Note that the conversion to uint32 happens before the shift, thus multiplying is not equivalent. Eg 1.5 << 1 === 2 and 1.5 * 2 | 0 === 3 Note Bitwise operations convert to unsigned int32 and thus should only be used for only for numbers in the range \$-(2^{31})\$ to \$2^{31} - 1\$ Cache You can use a cache to store the results of a function. For recursive functions this can save a lot of processing. Pseudo-code example of a cache For positive integer values you can use an Array. For other types of arguments you would use a Map. // n is a positive integer function solution(n) { // wrapper const cache = []; return recurser(n); // call recursive solution. function recurser(n) { // n is a positive integer var result; if (cache[n]) { return cache[n] } // Return cache if available while ( ) { ... recurser(n - val); /* Some complicated code that adds to result */ ... } return cache[n] = result; } } Complexity, Performance, & Example TL;DR The next part of the answer addresses performance and complexity and how both can be improved with example function. As the example is a completely different different approach it is not considered a review (rewrite); however some of it can be used in your solution. Complexity Your complexity is in the sub-exponential range \$O(n^{m log(n)})\$ where \$m\$ is some value >= 2. This is rather bad. The example reduces complexity by reducing the value of \$m\$. Performance Performance is indirectly related to complexity. You can increase performance without changing the complexity. The gain is achieved by using more efficient code, rather than a more efficient algorithm. Example The example is a completely different algorithm but some of the techniques can be applied to your solutions, such as the cache and moving the check for found combinations out of the recursing function. Addressing complexity I could not modify your algorithm to improve the complexity. This is not due to their not being a less complex algorithm based on your approach, just that I was unable to find it. Addressing performance There is a lot of room to improve performance via caching, strings, sorts, and stuff. Cache The example uses a cache to reduce calculations. See above Tips regarding cache. Note the cache is set up to contain the result of n 0 to 2 which is equivalent to your first 3 if statements. Strings To avoid duplicates you use a Set and because two arrays containing the same values are not the same, you convert the array to a string that can uniquely identify the array content. However you are manipulating the strings in the inner loops and convert from string to number and back each recursing iteration. Using the approach of wrapping the recursive function we can avoid the conversion within the main solutions and use the set to filter duplicates once, just before returning the final result. Sort Though the sort is not a major part of the complexity, it is where I started when doing the example. Each iteration adds only one value to the arrays being built. By maintaining the correct order as we go the sort can be avoided completely and we just build the array inserting the new element at the correct position. The innermost for (const v of sub) { loop does this inserting the new value to each the sub-arrays returned by the previous recursive solution. Code Comparison To gauge the performance and complexity I ran your code as the base and used its results to test the examples' correctness. I then added counters to both, counting every countable iteration, including under the hood iterations such as those performed by spreads ..., array map and reverse, string concats, sorts, etc. The results are as follows. Counted iterations per tested n value n value 7 8 9 10 11 12 13 ... 18 Your code 4,834 14,179 36,630 101,818 268,192 733,260 1,947,968 277,569,323 Example 333 718 1,584 3,418 7,445 16,018 34,528 1,503,242 Note The example results may not look that bad as n increases, however it is still in the same complexity range of \$O(n^{mlog(n)})\$. All I have managed to do is lower \$m\$ Note To match your result I had to add a Array.reverse to the final combinations. The reverse was counted but is not required. function combos(n) { const cache = [[], [[1]], [[2], [1, 1]]]; return [...(new Set([...combo(n).map(v=>v.reverse().join`+`)]))]; function combo(n) { var a = n - 1, b, insert; if (cache[n]) { return cache[n] } const res = n % 2 ? [[n]] : [[n], [n >> 1, n >> 1]]; while (a > n - a) { b = n - a; for (const sub of combo(a--)) { const subRes = []; insert = true; for (const v of sub) { v > b || !insert ? subRes.push(v) : (insert = false, subRes.push(b, v)); } insert && subRes.push(b); res.push(subRes); } } return cache[n] = res; } }
{ "domain": "codereview.stackexchange", "id": 41065, "tags": "javascript, algorithm, node.js, combinatorics" }
Harmonic perturbation in interaction of radiation with quantum system; making sense of approximation of the integral
Question: In the chapter "The interaction of quantum systems with radiation" (Quantum physics book by Bransden and Joachain, 2nd edition) section 11.2 "Perturbation Theory for harmonic perturbations and transition rates" there is this integral (in equation 11.45) $$\int_0^t\sin(\omega t'-\delta_{\omega})\exp(i\omega_{ba}t')dt'$$ where $\omega$ is the frequency of external radiation field and $\omega_{ba}=\frac{E_b - E_a}{\hbar}$. This integral is trivially calculated to $$\frac{1}{2}\exp(-i\delta_{\omega})\left[\frac{1-\exp[i(\omega_{ba}+\omega)t]}{\omega_{ba}+\omega}\right]-\frac{1}{2}\exp(i\delta_{\omega})\left[\frac{1-\exp[i(\omega_{ba}-\omega)t]}{\omega_{ba}-\omega}\right].$$ Upto here everything is fine. The problem is how they have approximated this term for different cases as given below: For transitions in the infrared $\left|\omega_{ba}\right|$ is of the order $10^{12}-10^{14}$ $s^{-1}$, and even larger in the visible and ultraviolet regions. ... The product ($\left|\omega_{ba}\right|t$) is much greater than unity, it follows that the first term in square brackets on the RHS is negligible unless $\omega_{ba}\approx-\omega$, and the second term in square brackets is negligible unless $\omega_{ba}\approx+\omega$. What is happening here? This $\omega_{ba}t$ term is contained in a complex exponential, which is oscillatoy in nature. So how it may matter if the term $(\omega_{ba}\sim\omega)t\rightarrow0$? They said "first term in square brackets on the RHS is negligible unless $\omega_{ba}\approx-\omega$". But all I can see is that than the denominator will tend to zero, blowing up the whole term. Further, if $\omega_{ba}±\omega$ is not close to zero, what may happen? We have an oscillatory term here, all we shall have is $±1$. What are they doing here? Answer: integrating over a slowly varying function multiplied by a very fast oscillatory function tends to zero, by Riemann-Lebesgue lemma. You can also understand this conceptually - the fast oscillating function averages the slowly varying one to zero. So the only way out of it in this case is if the oscillations cancel out, meaning that the sine function oscillates with the same frequency of the exponent, $\omega_{ba} \simeq \pm \omega$. You can see that also from the explicit result of the integral you wrote. The denominator goes to infinity if $\omega$ is not of the order of magnitude of $\omega_{ba}$ and the latter is very large (i.e. goes to infinity), while the numerator is bound by absolute value to be no more than 2. Physically speaking, we want the external EM radiation to be close to resonance of the quantum system in order to have finite transition amplitudes. By the way, your statement that But all I can see is that than the denominator will tend to zero, blowing up the whole term. is not accurate. Note that the numerator also tends to zero in that case. You need to expand to leading order and then get that the result is just finite $$ \lim_{\epsilon \to 0}\frac{1-e^{i\epsilon t}}{\epsilon} = -it$$ and it doesn't "blow up".
{ "domain": "physics.stackexchange", "id": 93503, "tags": "quantum-mechanics, radiation, perturbation-theory, approximations, fermis-golden-rule" }
Why do nebulae turn into accretion disks but planets do not?
Question: What are the factors that prevent planets from also turning into disks, like the stellar dust in a nebula does? The Earth is not a perfect sphere, but rather it's squished at the poles and it bulges at the Equator, owing to its rotation working to squish it into a disk. I'm tempted to say that the Earth couldn't become a disk because of the density of the material in it fighting back against being pressured into a disk in a way that in stellar gas you can't? But then, why isn't the gas around the gas giants a disk? Is it because of the solid nucleus of those planets fighting against that? Answer: They both form disks, both disks are transient, and both disks are small compared with the central body. The parallels between the Sun and a planet are closer than one may think! A collapsing nebula forms a star and a circumstellar disk. Nearly all of the angular momentum in the system is in the disk. The disk is unstable and much of the material condenses into small icy or rocky bodies which mostly combine to form planets or are expelled from the system by close encounters with larger bodies. After a few tens of millions of years, a planetary system is all that remains of the disk. (Note that even today, the planets contain 96% of the angular momentum in the Solar System.) When planets form it seems likely that they also have disks of material around them. Because the planets are mainly forming from planetoids and bits of rubble, the disks don't start out mostly as gas. (The best current theory of the Moon's formation is that it condensed out of a disk formed around Earth from a large, late collision.) These disks also dissipate, falling onto the planet through drag or being ejected from orbit by perturbing bodies. It's very unlikely that Saturn's (or Jupiter's or any of the others') rings are primordial, but are probably the result of the disruption of a icy body in the (comparatively) recent past. Note that neither the Sun nor the planets are rotating fast enough for their shape to be much affected: They're all pretty much spherical. In each case, there are mechanisms which carry off angular momentum from the central body and deposits it in the disk or into the bodies which condense from the disk.
{ "domain": "astronomy.stackexchange", "id": 3425, "tags": "gravity" }
Temperature Fluctuation in a Day
Question: I was looking at the temperature plots in Nashik, India and noticed an intriguing trend. What could explain this trend? [screenshot from WeatherUnderground forecast] Shouldn't the plot be smoother? Other cities have significantly smoother plots. To confirm that this was not an error in the app, result from Google Weather was tallied and matched. Answer: Many possible mechanisms can create temperature fluctuations on the scale of an hour or two. First there is the met site itself. Are there trees or buildings that could provide partial shade, especially when the sun is low in the sky? Are there buildings that could reflect sunlight directly onto the instrumentation? Then there are local anomalies. Are there gully winds? Do cloud banks form in the mornings, or high cumulus towers build up in the afternoons (quite common in the Western Ghats of India)? Is there a local temperature inversion, and does it disperse daily? What kind of local wind patterns are there? Is there a lake nearby that could create local cooling if the winds are in the right direction? Conversely, is there a source of local heat anomalies - factories, power stations? On the larger scale, what is the area topography, and what local air circulation does it create? Is there some interaction between monsoonal air flow and continental air masses? What turbulence does this create? Some bizarre anomalies are also possible, like birds roosting on the instrumentation! Really, there is no short-cut answer. The only solution is to go there with an unbiased and inquiring mind, and check out what's going on. I'm guessing that for such a consistent fluctuation it will be something obvious.
{ "domain": "earthscience.stackexchange", "id": 801, "tags": "weather-forecasting" }
IMU alignment methods
Question: I have an IMU that is outputting the following for its measurements: accelx= 0.000909228 (g's) accely= -0.000786797 (g's) accelz= -0.999432 (g's) rotx= 0.000375827 (radians/second) roty= -0.000894705 (radians/second) rotz= -0.000896965 (radians/second) I would like to calculate the roll, pitch and yaw and after that the orientation matrix of the body frame relative to the NED frame. So I do roll = atan2(-accely,-accelz); pitch =atan2(-accelx/sqrt(pow(accely,2)+pow(accelz,2))); sinyaw = -rotycos(roll)+rotzsin(roll); cosyaw = rotxcos(pitch)+rotysin(roll)sin(pitch)+rotzcos(roll)*sin(pitch); yaw = atan2(sinyaw,cosyaw); and I get: roll = 0.000787244 pitch = -0.000909744 yaw = 1.17206 in radians. However the IMU is also outputting what it calculates for roll, pitch and yaw. From the IMU, I get: roll: -0.00261682 pitch: -0.00310018 yaw: 2.45783 Why is there is a mismatch between my roll, pitch and yaw and that of the IMU's? Additionally, I found this formula for the initial orientation matrix. Which way of calculating the orientation matrix is more correct. R1(roll)*R2(pitch)*R3(yaw), or the form above? Answer: The roll and pitch angles that you calculate using the accelerometer measurements will only be correct if (1) the IMU is non-accelerating (e.g., stationary), and (2) the accelerometer measurements are perfect. Thus, they can only be used to initialize the tilt (roll and pitch) of the IMU, not to calculate roll and pitch during acceleration. An external measurement of yaw angle is required to initialize the yaw angle. See these answers: What information an IMU gives to a drone? Multicopter: What are Euler angles used for? for some background. Say the accelerometer measurements are $f_x$, $f_y$, and $f_z$; the gyro measurements are $\omega_x$, $\omega_y$, and $\omega_z$, and the magmetometer measuremenst are $b_x$, $b_y$, and $b_z$. The roll angle ($\phi$) and pitch ($\theta$) angle can be initialized if the IMU is not accelerating using \begin{eqnarray} \phi_0 &=& \tan^{-1}\left(f_y/f_z\right) \\ \theta_0 &=& \tan^{-1}\left(-f_x/\sqrt{f_y^2+f_z^2}\right) \end{eqnarray} The yaw angle ($\psi$) can be initialized using the magnetometer measurements. Given the roll and pitch angles, the magnetic heading ($\psi_m$) can be calculated from $b_x$, $b_y$, and $b_z$. Given the magnetic declination at the system location, the true heading (or initial yaw angle, $\psi_o$) can be calculated. Once the IMU is initalized in an unaccelerated state, the gyro measurements can be used to calculate the rates of change of the Euler angles while the IMU is moving: \begin{eqnarray} \dot\phi &=& \omega_x +\tan\theta\sin\phi\,\omega_y +\tan\theta\cos\phi\,\omega_z \\ \dot\theta &=& \cos\phi\,\omega_y -\sin\phi\,\omega_z \\ \dot\psi &=& \sec\theta\sin\phi\,\omega_y +\sec\theta\cos\phi\,\omega_z \end{eqnarray} The rates of change of the Euler angles are then numerically integrated to propagate the Euler angles. The coordinate transformation matrix at each instant of time can then be obtained from the Euler angles. This will work if your IMU never pitches up or down to $\pm90^\circ$. In that case it will be better to calculate and propagate quaternions instead of Euler angles. (Euler angles can always be calculated from the quaternions.) Alas, the gyros are not perfect. Say that the gyros have bias errors, then these bias errors will also be integrated with time to result in the Euler angled "drifting". For this reason, an extended Kalman filter is often used to calculate the orientation of the IMU, aided by other measurements (magnetometer, accelerometer, and a GPS, for example). But that's another topic :)
{ "domain": "robotics.stackexchange", "id": 1429, "tags": "imu" }
Collision lab results
Question: We took a car and rolled it down a ramp into the wall. We then measured the amount that it bounced back off the wall. Next, we made a cushion that would increase the time it took for the collision to occur. We placed the cushion on the wall, rolled the car down the ramp again and then measured the bounce back with the cushion. This is in an 8th grade middle school science class. We studied Newton's 3rd law, force-pairs, and we know that we need to increase the time in the implulse to decrease the force of the collision. Our goal is to have the car experience less force in the collision so it is safer for passengers. We related this to the real-world applications of airbags and crumple zones on cars. The question is: for our cushion to be effective, do we want to have MORE bounce back or less bounce back? I was assuming that having a cushion would reduce the bounce back by absorbing some of the momentum, but the examples given by the district show that the bumper car with more bounce back is safer than a car hitting a brick wall and coming to a complete stop really fast, so it seems like more bounce is better. When our test car bounced off the wall it bounced back an average of 85 cm. Most students are getting less bounce back with cushions than the control runs without. I have also seen air drops where the delievered item is surrounded by bouncy material to protect it and it makes sense that more bounce is better than a crash landing. How can we talk about the amount of bounce back correctly in terms of Newton's 3rd law and collisions with and without a cushion. Please help, I'm the teacher ;) Answer: The goal for safety is to reduce the maximum acceleration and also the maximum change in acceleration. The key is to lengthen the time over which the collision occurs. Imagine you're driving a car toward a brick wall and gently apply the brakes so that you just stop as the bumper touches the wall. Your total momentum change is the same as if you slammed into the wall and stopped (nearly) instantly, but the consequences are quite different. Since total momentum change is equal to the total impulse, the most important thing is to apply that impulse as slowly as possible, over a greater time and greater distance. If there's a bounce, the total impulse is actually bigger, since you go from an initial positive momentum to a negative momentum, as opposed to going from positive to zero. So for the cushion, it's important to think about how it works. If it's something really springy, like hard rubber, it could actually be worse than the brick wall. If the cushion is a long, weak spring, the collision might occur over a long enough time/distance that the higher total impulse doesn't matter. On the other hand, a very thick feather pillow that causes the collision to happen over some distance but results in no bounce could be best of all. In fact, as far as I understand, the idea of crumple zones is precisely to increase the distance over which the collision takes places without causing a bounce. It might also be useful to think about the air cushions used for stunt falls in movies. The stunt performers do not bounce when they hit them--the cushions collapse, slowing the person to a complete stop over some distance.
{ "domain": "physics.stackexchange", "id": 83848, "tags": "newtonian-mechanics, kinematics, momentum, collision" }
Parsing command line options
Question: Here's is the code I'd like reviewed: module Main where import Control.Monad ( when ) import System.Exit ( exitSuccess ) import Idris.AbsSyntax import Idris.Error import Idris.CmdOptions import Idris.Info import Idris.Info.Show import Idris.Package import Idris.Main import Util.System ( setupBundledCC ) processShowOptions :: [Opt] -> Idris () processShowOptions opts = runIO $ do when (ShowAll `elem` opts) $ showExitIdrisInfo when (ShowLoggingCats `elem` opts) $ showExitIdrisLoggingCategories when (ShowIncs `elem` opts) $ showExitIdrisFlagsInc when (ShowLibs `elem` opts) $ showExitIdrisFlagsLibs when (ShowLibdir `elem` opts) $ showExitIdrisLibDir when (ShowPkgs `elem` opts) $ showExitIdrisInstalledPackages check :: [Opt] -> (Opt -> Maybe a) -> ([a] -> Idris ()) -> Idris () check opts extractOpts action = do case opt extractOpts opts of [] -> return () fs -> do action fs runIO exitSuccess processClientOptions :: [Opt] -> Idris () processClientOptions opts = check opts getClient $ \fs -> case fs of (c:_) -> do setVerbose False setQuiet True case getPort opts of Just DontListen -> ifail "\"--client\" and \"--port none\" are incompatible" Just (ListenPort port) -> runIO $ runClient (Just port) c Nothing -> runIO $ runClient Nothing c processPackageOptions :: [Opt] -> Idris () processPackageOptions opts = do check opts getPkgCheck $ \fs -> runIO $ do mapM_ (checkPkg opts (WarnOnly `elem` opts) True) fs check opts getPkgClean $ \fs -> runIO $ do mapM_ (cleanPkg opts) fs check opts getPkgMkDoc $ \fs -> runIO $ do mapM_ (documentPkg opts) fs check opts getPkgTest $ \fs -> runIO $ do mapM_ (testPkg opts) fs check opts getPkg $ \fs -> runIO $ do mapM_ (buildPkg opts (WarnOnly `elem` opts)) fs check opts getPkgREPL $ \fs -> case fs of [f] -> replPkg opts f _ -> ifail "Too many packages" -- | The main function for the Idris executable. runIdris :: [Opt] -> Idris () runIdris opts = do runIO setupBundledCC processShowOptions opts -- Show information then quit. processClientOptions opts -- Be a client to a REPL server. processPackageOptions opts -- Work with Idris packages. idrisMain opts -- Launch REPL or compile mode. -- Main program reads command line options, parses the main program, and gets -- on with the REPL. main :: IO () main = do opts <- runArgParser runMain (runIdris opts) I'd like to improve it. There are two main related problems. This code uses exitSuccess for early exit. This then leads to a misleading piece of code in runIdris. I'd prefer runIdris to look something like: runIdris opts = do runIO setupBundledCC runIO execute (processShowOptions opts <|> processClientOptions opts <|> processPackageOptions opts <|> idrisMain opts) I think I've worked out a way forward but would love to hear your thoughts. Answer: This looks fine. There is some replication in processPackageOptions, but that's manageable. Your syntactical variant isn't possible, though, since Idris isn't an instance of Alternative. type Idris = StateT IState (ExceptT Err IO) would only be an instance of Alternative if Err was an instance of Monoid, which it isn't. You could wrap Idris in another short-circuiting monad, for example Either ExitCode (Idris a) or a transformer variant, but you have to replace all runIO with liftIdrisIO or similar. That might be too much. But I concur, the exitSuccess in check isn't that obvious and should be made more obvious at type level.
{ "domain": "codereview.stackexchange", "id": 30133, "tags": "haskell" }
Why is the electric field of an infinite insulated plane of charge perpendicular to the plane?
Question: I'm studying Gauss' Law, and I came across a section where we're supposed to find the electric field of various shapes (like an infinite line of charges, etc), and for an infinite plane with a uniform positive charge per area, it says here in my notes: Planar symmetry => Charge distribution doesn't change if we slide it in any direction parallel to the sheet => At each point, the field is perpendicular to the sheet, and it must have the same magnitude at any given distance on either side of the sheet. It's not clear to me why having a charge distribution that doesn't change will result in a field perpendicular to the sheet. Can anyone help me clarify? Answer: The answer by @BrianMoths is correct. It's worth learning the language used therein to help with your future studies. But as a primer, here's a simplified explanation. Start with your charge distribution and a "guess" for the direction of the electric field. As you can see, I made the guess have a component upward. We'll see shortly why this leads to a contradiction. Now do a "symmetry operation," which is a fancy phrase for "do something that leaves something else unchanged. In this case, I'm going to reflect everything about a horizontal line. I mean everything. The "top" of the sheet became the "bottom." This is just arbitrary labeling so you can tell I flipped the charge distribution. The electric field is flipped too. (Imagine looking at everything in a mirror, and you'll realize why things are flipped the way they are.) Hopefully, everything is okay so far. But now compare the original situation with the new inverted one. You have exactly the same charge distribution. You can't tell that I flipped it, except for my arbitrary labeling. But if you have the same charge distribution, you ought to also have the same electric field. As you can see, this is not the case, which means I made a mistake somewhere. The only direction for the electric field that does not lead to this contradiction is perpendicular to the sheet of charge.
{ "domain": "physics.stackexchange", "id": 70503, "tags": "electrostatics, symmetry, electric-fields, gauss-law" }
Outliers Approach
Question: Having a schema which the majority of the values are IDs. Like this example (this isn't my real data): ID SCHOOL_ID CLASSE_ID STUDENT_ID GRADE 1 1 1 1 17 2 1 1 2 10 3 1 1 3 4 4 1 2 19 11 5 1 2 21 8 ... ... ... ... ... Which one of this can be a better approach to detect outliers using SQL: - Standard Deviation + Average - Try to implement an clustering algorithm I'm a little bit confusing about this... Thanks Answer: Student ID (and ID) doesn't make sense as a column to cluster on because it's not continuous, and is unique and high cardinality too so isn't even usable as a categorical value. Clustering school and class ID could make a little sense if converted to a one-hot encoded value, but it's also probably high cardinality. I think you may need to question whether those are even meaningful dimensions to cluster on. You might just drop them.
{ "domain": "datascience.stackexchange", "id": 1319, "tags": "preprocessing, outlier" }
Finding pH of acid in water
Question: How would you find the pH of an acid dissolved in water? Would you need to take the fact that the pH of water is already 7 into account and go on from there? Say, for example, that you add 0.0500 mL of 0.00100 M HCl into 50 L of pure water, what will be the pH of the new solution? I made the mistake of just finding the moles of the HCl placed into the pure water, and finding the new concentration by dividing by the litres of water present. This gave me a pH of 9, which does not seem to make any sense as acids are meant to lower pH! Some help would be wonderful! Answer: In this solution, there are two sorts of $\ce{H+}$ ions, those coming from added $\ce{HCl}$, whose concentration is $10^{-9}\ \mathrm M$, and those coming from the water which are unknown, and so described by $x$. But the concentration of $\ce{OH-}$ is also $x$. So $K_\mathrm w$ may be written: $$K_\mathrm w = (x + 10^{-9})x = 10^{-14}$$ If you solve this equation, you find: $$x = 0.999\times10^{-7}$$ The total concentration of $\ce{H+}$ ion is $1.0001\times10^{-7}$; $\mathrm{pH} = 6.999$.
{ "domain": "chemistry.stackexchange", "id": 12686, "tags": "acid-base, water, ph" }
Verb conjugator for French
Question: I had written with very few PHP skills a conjugator script. It was not good, so I decided to start it from scratch with a help from a friend. I'm using the code formatter from eclipse. Now I need your reviews and improvement tips. I know all my long arrays with the all verbs and many more long arrays aren't a good way,but I have no clue how to put them all in a database and how then to use them. Now a specific question: word_stem.php In this file, the script gets the correct word_stem to building in conjugate.php function in the line: Compiling a conjugated verb $conjugated_verb = word_stem($infinitiveVerb, $person, $tense, $mood) . ending($person, $tense, $mood, $endingwith, $exceptionmodel, $infinitiveVerb); They are many exception rules for many verb groups. That's why I had to writte somethings long and complicated if conditions to find the needed changed mood-tense-person combinations, which should use the correct word_stem. Excerpt 1 from word_stem() if (in_array($exceptionmodel->getValue(), [ ExceptionModel::Eler_Ele, ExceptionModel::Eter_Ete ]) && (($mood->getValue() === Mood::Indicatif && $tense->getValue() === Tense::Present && in_array($person->getValue(), [ Person::FirstPersonSingular, Person::SecondPersonSingular, Person::ThirdPersonSingular, Person::ThirdPersonPlural ]) || $tense->getValue() === Tense::Futur) || ($mood->getValue() === Mood::Subjonctif && $tense->getValue() === Tense::Present && in_array($person->getValue(), [ Person::FirstPersonSingular, Person::SecondPersonSingular, Person::ThirdPersonSingular, Person::ThirdPersonPlural ])) || ($mood->getValue() === Mood::Conditionnel && $tense->getValue() === Tense::Present) || ($mood->getValue() === Mood::Imperatif && $tense->getValue() === Tense::Present && $person->getValue() === Person::SecondPersonSingular))) { $word_stem = substr_replace($word_stem, 'è', - 2, 1); } Excerpt 2 if ($exceptionmodel->getValue() === ExceptionModel::VALOIR && (($mood->getValue() === Mood::Indicatif && $tense->getValue() === Tense::Present && in_array($person->getValue(), [ Person::FirstPersonSingular, Person::SecondPersonSingular, Person::ThirdPersonSingular ]) || $tense->getValue() === Tense::Futur) || ($mood->getValue() === Mood::Conditionnel && $tense->getValue() === Tense::Present) || (($mood->getValue() === Mood::Imperatif && $tense->getValue() === Tense::Present && $person->getValue() === Person::SecondPersonSingular)))) { $word_stem = word_stem_length($infinitiveVerb, 4) . 'u'; } if ($exceptionmodel->getValue() === ExceptionModel::VALOIR && ($mood->getValue() === Mood::Subjonctif && $tense->getValue() === Tense::Present && in_array($person->getValue(), [ Person::FirstPersonSingular, Person::SecondPersonSingular, Person::ThirdPersonSingular, Person::ThirdPersonPlural ]))) { $word_stem = word_stem_length($infinitiveVerb, 4) . 'ill'; } Sometimes I need in one if condition all persons from a tense in a mood and sometimes only some parts froma tense in a mood. So looks for example a exception array for one Person in one time and mood. $endings[Mood::Indicatif][Tense::Present][Person::FirstPersonSingular] Feel free, to have a look at the other files and give me please tips. You can also become a constributor in this project. I will use the script for my german speaking learning website. Answer: Wrap conditions to be visible The first order of business when having long if conditions is to wrap lines at each and every && or ||, and in general to wrap the lines so that you are able to see the conditions without scrolling. If you have to scroll, you'll loose context straight away. This will render your code into the following: if ($exceptionmodel->getValue() === ExceptionModel::VALOIR && ( ($mood->getValue() === Mood::Indicatif && $tense->getValue() === Tense::Present && in_array($person->getValue(), [ Person::FirstPersonSingular, Person::SecondPersonSingular, Person::ThirdPersonSingular ])) || $tense->getValue() === Tense::Futur || ($mood->getValue() === Mood::Conditionnel && $tense->getValue() === Tense::Present) || ($mood->getValue() === Mood::Imperatif && $tense->getValue() === Tense::Present && $person->getValue() === Person::SecondPersonSingular ) ) ) { $word_stem = word_stem_length($infinitiveVerb, 4) . 'u'; } if ($exceptionmodel->getValue() === ExceptionModel::VALOIR && ($mood->getValue() === Mood::Subjonctif && $tense->getValue() === Tense::Present && in_array($person->getValue(), [ Person::FirstPersonSingular, Person::SecondPersonSingular, Person::ThirdPersonSingular, Person::ThirdPersonPlural ]) ) ) { $word_stem = word_stem_length($infinitiveVerb, 4) . 'ill'; } In addition to making them all visible, I've indented the different blocks as best I can. Possible bug: Do you have an parenthesis issue in the first if block after the in_array() where there seem to be missing an parenthesis? I've changed it here, but you need to reiterate it, to verify its correctness. In general if you wrap and get alternating && vs || you need to take care and verify correctness regarding ordering and grouping of conditionals. Also note how I moved the starting brace, {, to the next line to keep it aligned with both the if and the ending brace, }. This also gives a much needed vertical space to break between conditions and the statement to be executed. I've put the ending parenthesis, ), on a separate line as well, this comes somewhat down to personal taste, so do that if that pleases you. Or pack them together on the previous line. Just be consistent, and when dealing with long lines like in this block, try avoid having more than one condition on any line. Precompute condition parts Another good option can be to precompute the parts used in the condition. This has at least the following advantages: Shorten the condition statement in general Make it read more easily, as you clearly indicate the test condition Avoid computing values more than once, i.e. $tense->getValue() Simplify grouping and ordering of multiple conditions Reuse of condition part in following if statements Doing this you can end up with the following: // Precompute condition parts $exceptionIsValoir = $exceptionmodel->getValue() === ExceptionModel::VALOIR; $moodVal = $mood->getValue(); $moodIsIndicatif = $moodVal == Mood::Indicatif; $moodIsConditionnel = $moodVal === Mood::Conditionnel; $moodIsSubjonctif = $moodVal === Mood::Subjonctif; $tenseVal = $tense->getValue(); $tenseIsPresent = $tenseVal=== Tense::Present; $tenseIsFutur = $tenseVal === Tense::Futur; $personVal = $person->getValue(); $personIs_2S = $personVal == Person::SecondPersonSingular; $personIs_1S_2S_3S = in_array($personVal, [ Person::FirstPersonSingular, Person::SecondPersonSingular, Person::ThirdPersonSingular ]); $personIs_1S_2S_3S_3P = in_array($personVal, [ Person::FirstPersonSingular, Person::SecondPersonSingular, Person::ThirdPersonSingular, Person::ThirdPersonPlural ]); // The actual validations if ($exceptionIsValoir && ( ($moodIsIndicatif && $tenseIsPresent && $personIs_1S_2S_3S) || $tenseIsFutur || ($moodIsConditionnel && $tenseIsPresent) || ($moodIsImperatif && $tenseIsPresent && $personIs_2S))) { $word_stem = word_stem_length($infinitiveVerb, 4) . 'u'; } if ($exceptionIsValoir && ($moodIsSubjonctif && $tenseIsPresent && $personIs_1S_2S_3S_3P)) { $word_stem = word_stem_length($infinitiveVerb, 4) . 'ill'; } Early returns to reduce nesting levels The concept of early returns to reduce nesting levels, is that if you have a structure like the following with no else block on the first few levels: function something() { if first_level_condition { if second_level_condition { if third_level_condition and something { do something useful } else { do something else } } // Note missing else block on second level } // Note missing else block on first level } Then you can simplify this to: function something() { if not first_level_condition or not second_level_condition { return } if third_level_condition and something { do something useful } else { do something else } } Now you've moved your code of interest, that is level three, out two indentation levels, which gives you some more room to write code in. At first glance of your example, it seemed like you could do this in your code as the two original cases both had the common element of $exceptionmodel->getValue() === ExceptionModel::VALOIR, which could have been negated, and caused an early return. However in your added excerpt, and in the original code many more variants exists making this not so useful in your context. It is still a useful pattern to be aware of, when doing multiple if statements and nesting of these. (Optionally) Build condition groups Another option which I've used sparingly is to actually precompute the entire groups of condition statements, using constructions like the following: // Precompute condition parts ... as before ... // Build condition group $c = ($moodIsIndicatif && $tenseIsPresent && $personIs_1P_2P_3P) $c = $c || $tenseIsFutur $c = $c || ($moodIsConditionnel && $tenseIsPresent) $c = $c || ($moodIsImperatif && $tenseIsPresent && $personIs_2S) if ($exceptionIsValoir && $c) { $word_stem = word_stem_length($infinitiveVerb, 4) . 'u'; } ... This construct can be useful in some circumstances, but in your case I believe I would stop at precomputing the condition part using good condensed variable names, and do an early return if possible. Addendum 1: Automating generation of condition variables You ask if one can automate stuff like $exceptionIsValoir or $tenseIsPresent, and as I'm a bit rusty I php I posted my version as a question here: Dynamic variables in Php from enum. But it is possible to make dynamic variables in PHP, and one way of doing it is: $exceptionVal = $exceptionmodel->getValue(); foreach(ExceptionModel::getConstants() as $constName => $constValue) { ${'exceptionIs' . $constName} = $exceptionVal === $constValue; } $tenseVal = $tense->getValue(); foreach (Tense::getConstants() as $constName => $constValue) { ${'tenseIs' . $constName} = $tenseVal === $constValue; } This would allow you to use $exceptionIsALLOIR or $tenseIsPresent. You could optionally change the first one to use $exception_is_ALLOIR by changing the constant string prefix. With regards to the use of in_array() within the if conditions, I'm no personal fan of it. This is partly because it is lengthy and kind of confusing as you prefix the enums with full class name. I would rather precompute these manually, like in $personIs_1S_2S_3S, as this can shorten the text compared to the rather lengthy version of $personIsFirstPersonSingular || $personIsSecondPersonSingular || $personIsThirdPersonSingular. Regarding precomputing combinations of ExceptionModel, I would do it on a case by case basis. Possibly I there would use the or'ed combination as I don't see a neat way of shortening the names as is easily done for the Person enum. Addendum 2: Reconsider general concept of word_stem() When following your links to the original code I see that word_stem() is a rather long function consisting of above 40 disconnected if statements similar to the two above (or four in modified post), spanning over 300 lines. That is a heavy function, and raise some concerns: Can it be simplified by using other functions? Can it be simplified by connecting together the if's, so that you don't need to execute all of them each and every time? How do you test such a massive beast if it is correct at all? If you match on one of the if's, does that exclude all others? In other words, can you do return $word_stem directly instead of at end? Without looking into the functions too much, I would consider if you should make functions like word_stem_Exer_Exe and word_stem_xER and word_stem_ENVOYER, and let the code in word_stem() be: if ($exceptionIs_ELER_ELE_or_ETER_ETE) { $word_stem = word_stem_Exer_Exe($person, $tense, $mood); } if ($exceptionIsCER or $exceptionIsGER or $exceptionIsE_Akut_CER or $exceptionIsE_Akut_GER) { $word_stem = word_stem_xER($person, $tense, $mood); } if ($exceptionIsENVOYER) { $word_stem = word_stem_ENVOYER($person, $tense, $mood); } Or possibly use if ... else if ... else if which would stop evaluating the if sequences at first hit. In other words, this would require that only one of the 40++ if's matches. Using return directly would also eliminate evaluating further if's effectively. Using functions like word_stem_xxxx() would also ease the testing some, as you could specific test that exception group. With these functions you would then move the person, tense and mood tests inside of the function, and then they could possibly be clearer in general even without that much shortening. You could then possibly only use the first tip of $tenseVal = $tense->getValue() to shorten them sufficiently. Disclaimer: I don't know the entire context, and haven't thoroughly read the accompanying code, so consider this carefully before reimplementing into sub-functions or else-if's as that might have side effects not known to me. Then again, you might have a lot of side effects already in the massive beast that word_stem() is. Addendum 3: Automagical is<ENUM_VALUE>() methods An alternative approach to creating the temporary variables, is to use reflection and automatically define test function for equality to any given enum value. If extending the base Enum class, or the ExtensionModel class with the following function: function __call($func, $param) { $func_prefix = substr($func, 0, 2); $func_const = substr($func, 2); if ($func_prefix == "is") { $reflection = new ReflectionClass(get_class($this)); return $this->getValue() === $reflection->getConstant($func_const); } } Then it is legal to do stuff like in the following test function: function myFunction(ExceptionModel $exceptionModel, Tense $tense) { if ($exceptionModel->isALLER() && $tense->isPresent() ) { ... do something ... } given that both ExceptionModel and Tense inherits from the Enum class. In other words, now you can do $enumobject->is<ENUM_VALUE>() for any enum value inheriting from Enum. PS: After posting my original answer you added two more excerpts. I've not included those in my analysis, but the concept will be the same only with a few more lines added.
{ "domain": "codereview.stackexchange", "id": 16981, "tags": "php, natural-language-processing" }
When can we add a total time derivative of $f(q, \dot{q}, t)$ to a Lagrangian?
Question: The other day, I was listening to this lecture on the Lagrangian for a charged particle in an electromagnetic field, and at one point in the video, the lecturer mentions that we can add any total time derivative of a function $f(q, t)$ to the Lagrangian without altering its equations of motion. This is nothing new to me, and I understand it fully, but shortly afterwards (approximately two minutes after the linked starting point), he goes on to say that you can, in fact, add a total time derivative of a function $f(q, \dot{q}, t)$, given certain conditions. This definitely surprised me, and I would love to know more about it, but the lecturer quickly moves on, so my question is as follows: under what conditions can one add the total time derivative of a function which depends on the particle's generalized velocities in addition to its generalized coordinates and time without affecting the particle's equations of motion? Answer: I) In general, it is true that if we plug a local Lagrangian $$\tag{1} L\quad \longrightarrow \quad \tilde{L}~=~L+\frac{df}{dt}$$ modified with a total derivative term into the Euler-Lagrange expression $$\tag{2} \sum_{n} \left(-\frac{d}{dt}\right)^n \frac{\partial \tilde{L}}{\partial q^{(n)}}~=~\sum_{n} \left(-\frac{d}{dt}\right)^n \frac{\partial L}{\partial q^{(n)}}, $$ it would lead to identically the same Euler-Lagrange expression without any restrictions on $L$ and $f$. II) The caveat is that the Euler-Lagrange expression (2) is only$^1$ physically legitimate, if it has a physical interpretation as a variational/functional derivative of an action principle. However, existence of a variational/functional derivative is a non-trivial issue, which relies on well-posed boundary conditions for the variational problem. In plain English: Boundary conditions are needed in order to justify integration by parts. See also e.g. my related Phys.SE answers here & here. III) A Lagrangian $L(q,\dot{q},\ldots, q^{(N)},t)$ of order $N$ leads to equation of motion of order $\leq 2N$. Typically we require the Lagrangian $L(q,\dot{q},t)$ to be of first order $N=1$. See e.g. this and this Phys.SE posts. IV) Concretely, let us assume that we are given a first-order Lagrangian $L(q,\dot{q},t)$. If one redefines the Lagrangian with a total derivative $$\tag{3} \tilde{L}(q, \dot{q}, \ddot{q}, t)~=~L(q, \dot{q}, t)+\frac{d}{dt}f(q, \dot{q}, t), $$ where $f(q, \dot{q}, t)$ depends on velocity $\dot{q}$, then the new Lagrangian $\tilde{L}(q, \dot{q}, \ddot{q}, t)$ may also depend on acceleration $\ddot{q}$, i.e. be of higher order. V) With a higher-order $\tilde{L}(q, \dot{q}, \ddot{q}, t)$, we might have to impose additional boundary conditions in order to derive Euler-Lagrange equations from the principle of a stationary action by use of repeated integrations by parts. VI) It seems that Prof. V. Balakrishnan in the video has the issues IV and V in mind when he spoke of 'putting further conditions' on the system. Finally, OP may also find this Phys.SE post interesting. -- $^1$ Here we ignore derivations of Lagrange equations directly from Newton's laws, i.e. without the use of the principle of a stationary action, such as e.g. this Phys.SE post, because they usually don't involve redefinitions (3).
{ "domain": "physics.stackexchange", "id": 13723, "tags": "lagrangian-formalism, variational-principle, gauge-invariance" }
What does pandas describe() percentiles values tell about our data?
Question: Let say this is my dataframe x=[0.09, 0.95, 0.93, 0.93, 0.34, 0.29, 0.14, 0.23, 0.91, 0.31, 0.62, 0.29, 0.71, 0.26, 0.79, 0.3 , 0.1 , 0.73, 0.63, 0.61] x=pd.DataFrame(x) When we x.describe() this dataframe we get result as this >>> x.describe() 0 count 20.000000 mean 0.50800 std 0.30277 min 0.09000 25% 0.28250 50% 0.47500 75% 0.74500 max 0.95000 What is meant by 25,50, and 75 percentile values? Is it saying 25% of values in x is less than 0.28250? Answer: It describes the distribution of your data: 50 should be a value that describes „the middle“ of the data, also known as median. 25, 75 is the border of the upper/lower quarter of the data. You can get an idea of how skew your data is. Note that the mean is higher than the median, which means your data is right skewed. Try: import pandas as pd x=[1,2,3,4,5] x=pd.DataFrame(x) x.describe()
{ "domain": "datascience.stackexchange", "id": 5264, "tags": "python, pandas" }
Is there C++ code that takes infinite time to compile?
Question: Is C++ as a formal language recursively enumerable? If yes, is there any invalid C++ code that takes "infinite" time to compile? Answer: In theory this code should compile infinitely template<long long K> struct t { enum { value = (K&1) ? t<K+1>::value : t<K-1>::value}; }; int main() { int i = t<1>::value; } But in real life compilers are limiting template instantiation depth. Another thing is that long long is limited so you cannot represent all integers.
{ "domain": "cs.stackexchange", "id": 2794, "tags": "computability, compilers, semi-decidability" }
In CNN (Convolutional Neural Network), does the combination of previous layer's filters make next layer's filters?
Question: I know that the first layer uses a low-level filter to see the edge information. As the layer gets deeper, it will represent high-level (abstract) information. Is it because the combinations of filters used in the previous layer are used as filters in the next layer? ("Does the combination of the previous layer's filters make the next layer's filters?) If so, are the combinations determined in advance? Answer: I guess you are making mistake about the filters. After applying filters to the input of each layer, the output will be used as the input of the next layer. The first layer's filters try to find the edges in the image and their output show whether those edges exist in a specified position or not. Next layer filters try to find patterns in the outputs of the previous layer which shows the existence of edges. Due to the point that each filter is a window and specifies a receptive field on the input, it finds patterns in the input which are more abstract and more complicated than the previous layers' activations.
{ "domain": "datascience.stackexchange", "id": 3471, "tags": "machine-learning, neural-network, deep-learning, cnn, object-recognition" }
Check simple human fetus answers
Question: Based on this image and questions, could someone please check my answers? Question 1: At which one of the following points will the blood passing the point be more oxygenated after birth than before it? I would think at point P. Question 2: At which one of the following points will the blood passing the point be significantly less oxygenated after birth than before it? I think point L, because the blood should not lose any more oxygen as it travels through veins and the heart to the lungs. Answer: For question 1 I would agree with "P" For question 2 the answer would be "M" although an argument could be made for "O". Consider that before birth "M" should be the points were blood contains the most oxygen since it is coming directly form the oxygen exchange While after birth it contains the least amount of oxygen becasue that exchange is lost and it instead only receiving blood from the body where most of the oxygen has been extracted. So the change at that point is about as large a change as possible. I don't know how you got "L" it should have little to no change in oxygen content as both before and after birth it is receiving low oxygen blood from the body.
{ "domain": "biology.stackexchange", "id": 11232, "tags": "human-biology, homework, reproduction, sexual-reproduction" }
Does electrostatic force act at the centre of mass of a body?
Question: I was recently solving a question in which I was given a large object in a uniform electric field. I was able to solve it by assuming that the force acts from the centre of mass of the object drawing up an analogy to gravity. I am also aware of the fact that centre of mass and centre of gravity coincide only when the gravitational field is uniform. I feel the same should be true here. However, I have never heard about "Centre of Electric Field". So where does the Electric Field act on a large body in the most general case? Answer: Suppose you have a macroscopic body that's charged. We can assume that the charge will be distributed in some way into the body. In this situation, by definition, the total electric force acting on the body is the sum of all the little forces that the charged point like subsection of the body experience, but if we want to thing about the charged body as made of point like subsection then we get an infinite amount of infinitesimal subsection, this implies that we have to perform the integral instead of the sum. Long story short: the electric force experienced by a single point like subsection of the body is: $$d\vec{F}=\vec{E}\rho dV$$ where $\vec{E}$ is of course the electric field in that point and $\rho$ is the charge density. So to get the total force $\vec{F}$ acting on the body we have to perform the integral: $$\vec{F}=\int _V \rho(\vec{x}) \vec{E}(\vec{x})dV$$ This is it. However: A fundamental law of nature is that nobody wants to compute integrals! Can we find some handy shortcut? Well: if the electric field $\vec{E}$ is constant in space (or if we can approximate it this way ;)) there is a pretty nice simplification: $$\vec{F}=\int _V \rho(\vec{x}) \vec{E}(\vec{x})dV=\vec{E}\int _V \rho(\vec{x})dV=\vec{E}Q$$ where $Q$ is the total charge of the body, an analogous concept to the total mass, but of course not the same thing. Of course in this case the electric force is applied in the center of charge ($\vec{C}$), analogous concept to the center of mass: $$\vec{C}=\frac{1}{Q}\int _V\rho(\vec{x})\vec{x}dV$$ Even better: if the charge distribution is simmetrically distributed in some nice way we can easily guess the position of $\vec{C}$ most of the times, this way we don't have to perform the integral! Keep in mind: in some situation the center of mass and the center of charge share position, but this does not happen most of the times! In fact to calculate the position of the center of mass you integrate the mass density, but to calculate the position of the center of charge, as we saw, you have to integrate the charge density.
{ "domain": "physics.stackexchange", "id": 74271, "tags": "electromagnetism, electrostatics, electric-fields" }
Under what condition is P/poly equal to the class of languages having Turing machines running in polynomial length with polynomial advice?
Question: Sanjeev Arora and Boaz Barak show the following : $P/poly = \cup_{c,d} DTIME (n^c)/n^d$ where $DTIME(n^c)/n^d$ is a Turing machine which is given an advice of length $O(n^d)$ and runs in $O(n^c)$ time. I do follow the proof. But I feel the proof only holds if we assume that $\forall n$ the advice given to any two $n$ length strings $x$ and $y$ is same. But I am unable to see if the theorem still holds if the above condition if not applicable ? Answer: Since $P/Poly$ talks about circuit families for languages (different circuit $C_n$ for length $n$ inputs), it is natural to talk about an advice which is a function of the input's length alone. Different advice for each string will make the class too big. For any $L\subseteq \Sigma^*$, your advice for some $x\in\Sigma ^*$ can be 1 if $x\in L$ and 0 otherwise.
{ "domain": "cs.stackexchange", "id": 6232, "tags": "complexity-theory, circuits" }
Debian support for libaria
Question: I have groovy installed on a raspberry pi running raspbian and am currently tying to install ROSARIA. However when running 'rosdep install ROSARIA' (following the instructions here: ros.org/wiki/ROSARIA/Tutorials/How%20to%20use%20ROSARIA) I keep running into this error: ROSARIA: No definition of [libaria] for OS [debian] This isn't due to rosdep dependancies being missing for debian in general as I have already solved that, but looking at the base.yaml file there appears to only be support for libaria for ubuntu: libaria: ubuntu: source: alternate-uri: 'http://aus-www.rasip.fer.hr/libaria.rdmanifest' md5sum: f00cf4b5496b085a6b8ef9ce0389ec0b uri: 'http://amor-ros-pkg.googlecode.com/files/libaria.rdmanifest' How can I update this successfully for debian support? I tried just changing ubuntu for debian in the above text in the base.yaml file, and no longer got the "No definition of [libaria] for OS [debian]" but when performing "rosdep install ROSARIA" I start getting errors such as: strip: Unable to recognise the format of the input file `/usr/local/Aria/ArNetworking/examples/clientDemo' I have recompiled the files that cause this error many times (I downloaded the source code for Aria from the mobilerobots website for 32-bit machines and recompiled everything) so they are built for my system, so these errors make me think that my ubuntu to debian fix is the cause. If anyone has any insight into this problem it would be greatly appreciated. Thanks. EDIT: I have rebuilt the binaries many times. When I first tried to install Aria I got similar strip errors and deleted the binaries and rebuilt them and now running 'make install' inside the Aria directory itself leads to a successful compilation so I'm guessing all the files are compiled correctly now. However with after the base.yaml file I'm getting these strip errors when running rosdep install ROSARIA. The binary for the above 'clientDemo' has definitely been rebuild (I just did it again), yet I got the same errors again when running 'rodep install ROSARIA'. Originally posted by wombat_sdu on ROS Answers with karma: 31 on 2013-07-10 Post score: 0 Answer: This error is likely to happen if the lib you are trying to use is shipped with pre-built binaries. Remember that binaries built on a PC cannot run on the RPi, which has an ARM processor with a different instruction set. If that is the case, it has nothing to do with the fact that you are running debian instead of ubuntu. If you can find the entire source code for this lib, then you should be able to compile every last bit of it on the RPi (or on a qemu chroot), and install it normally. Otherwise, you are stuck with using a PC for any node using this lib, or not using the lib at all. Originally posted by po1 with karma: 411 on 2013-07-10 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 14875, "tags": "rosdep-install, raspberrypi, rosaria, debian, ros-groovy" }
fft and inverse fft error size and applying impulse response matlab
Question: I have a signal called noise. I am using the following command to calculate the mean squared error after fft and inverse fft process: sum((noise-real(ifft(fft(noise,2048),num_of_samples))).^2)/num_of_samples Where num_of_samples=2819519 is the number of samples in the original signal. I am getting an error of $\sim10^{-8}$. Is that reasonable and due to approximation or does that indicate something is not converted correctly? My final goal is to multiply the signal in the frequency domain with a $1052\times 4$ given impulse response. This represents a room with 4 microphones. Answer: I don't know your overall goal but if you want to test the MSE associated with FFT / IFFT then you should perform the following num_of_samples = 2819519 ; fft_len = 2^nextpow2(2819519) ; X = fft(signal, fft_len); y = real( ifft(X, fft_len) ); mse = sum( (signal - y(1:num_of_samples) ).^2 )/num_of_samples
{ "domain": "dsp.stackexchange", "id": 7603, "tags": "matlab, fft, ifft" }
Unexpected behaviour when using lfilter with initial filter delay multiple times
Question: I want to filter a very long signal in smaller parts, therefor I am currently using scipys lfilter with an initial filter delay doing multiple iterations of: (signal_part_filtered, filter_delay) = lfilter(b, a, signal_part, zi = filter_delay) The first few iterarions are going as expected (original is blue; filtered is red): But after a few iterations the inital filter delay keeps getting bigger: I think the issue might be the limited precision of the float datatype, is there any way to solve this? Answer: The problem is not in the filtering process but already at the design stage. Your specifications are very difficult to realize because your desired band is at very low frequencies. With these specs, the butter routine returns an unstable filter (i.e., with two pole pairs outside the unit circle of the complex plane). One thing you could try is reduce the filter order. Always check the maximum pole radius by computing the (magnitude of the) roots of the denominator polynomial. It could be that the design turns out to be OK for a lower filter order. Of course, your filter will be less steep. Another, better approach would be to downsample your signal before filtering, so your pass band is not anymore in such a low frequency range (compared to the sampling frequency). This will avoid numerical problems in the filter design routine.
{ "domain": "dsp.stackexchange", "id": 3636, "tags": "filters, python, infinite-impulse-response, butterworth" }
JQGrid with dynamic columns and server-side functionality
Question: I have a requirement where I need to build a grid with dynamic columns. I am dealing with a large dataset so I would like to use server-side paging, sorting, filtering logic so that I render only a page-size. I think I got the basic functionality working but just wanted to get my approach reviewed. An action route will be the JSON datasource for the jqgrid. Reference My approach: First make an ajax call to get the dynamic col model and other grid params (rows, page etc.) except data. Update grid params (url, datatype and mtype) to enable server-side paging, sorting etc. I am using a flag in the query string to determine if I need the col model (or) the data. Note: I set the async flag to false for the AJAX requests to make sure I do not run into timing issues. As you can see, I need to make two requests to set up the grid. One to get the col model and another one to get data and to update it to enable server-side interaction for subsequent requests. Is this ok? $.ajax({ url: firstFetchURL, //will hit an asp.net mvc action on a controller that returns json dataType: "json", type: 'POST', async: false, success: function (result) { if (result) { if (!result.Error) { var colD = result.data; var colM = result.colModelList; var colN = result.columnNames; $("#myGrid").jqGrid('GridUnload'); $("#myGrid").jqGrid({ datatype: 'local', colModel: colM, colNames: colN, data: colD, height: "auto", rowNum: 10, sortname: viewOptionText, sortorder: "desc", pager: '#myGridPager', caption: "Side-by-Side View", viewrecords: true, gridview: true }); //Update grid params so that subsequent interactions with the grid for sorting,paging etc. will be server-side $("#myGrid").jqGrid('setGridParam', { url: secondFetchURL, datatype: 'json', mtype: 'POST' }).trigger('reloadGrid'); } } }, error: function (xhr, ajaxOptions, thrownError) { if (xhr && thrownError) { alert('Status: ' + xhr.status + ' Error: ' + thrownError); } }, complete: function () { $("#loadingDiv").hide(); } }); I saw a related post here, but I am looking for some direction from experienced JQGrid users. Answer: Small recommendations: It seems to me that you can remove async: false parameter for the $.ajax call. You can remove result.data from the data returned by the ajax call. (After that you should and the line with var colD = result.data). The data will be not really used because you call trigger('reloadGrid'); immediately. On the other side the values for sortname and sortorder parameters should be included in the data model (as the properties of result). You can use url: secondFetchURL, datatype: 'json', mtype: 'POST' parameters directly in the jqGrid definition ( in $("#myGrid").jqGrid({/*here*/});. No trigger('reloadGrid') will be needed. UPDATED: Look at this and this answers. Probably the approach is what you need from the dynamic columns. You can take a look in the answer in case if you will need to use custom formatters.
{ "domain": "codereview.stackexchange", "id": 516, "tags": "javascript, jquery, ajax, asp.net-mvc, url-routing" }
Using a library created using rosbuild_add_library
Question: I have created a package in ROS which contains my own header files and I have created a library using these header files using rosbuild_add_library (add_library in case of Catkin) . I wish to make use of this library in another ROS package. Is it enough to add the package ( containing my library ) to the list of dependency in package.xml (or manifest.xml) of the current package and using target_link_libraries( my_target my_library) in CMakeLists.txt or do I have to specify the complete path to my library? In other words, do I have to use link_directories(library_directories) even after adding my former package to the package.xml of the current package? Pkg1 (package) contains a library foo. The library has been created using rosbuild_add_library on one computer and using add_library (for catkin) on another computer. Pkg2 is the package (created in both computers) in which I want to use the library foo. Originally posted by Hemu on ROS Answers with karma: 156 on 2014-10-06 Post score: 0 Original comments Comment by BennyRe on 2014-10-06: Does Pkg2 use rosbuild or catkin? Why do you have two versions of Pkg1 on two different computers? Answer: For fuerte you have to add to your manifest.xml of the lib package: <export> <cpp cflags="-I${prefix}/include" lflags="-Wl,-rpath,${prefix}/lib -L${prefix}/lib -lyour_lib"/> </export> if your headers are placed in the include subfolder for the lib package and the library you creates using rosbulid_add_library is named your_lib like: rosbuild_abb_library( your_lib src1.cpp src2.cpp ) Your your depenign packages will be automatically compiled against the headers and linked against the lib. Note: For catkin (groovy and above) the workaround is slightly different. You do not need this in the package.xml but have to use the catkin_package cmake comand in the CMakeLists.txt of your lib package......... Originally posted by Wolf with karma: 7555 on 2014-10-06 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Hemu on 2014-10-06: It worked. I will try using catkin_package command in CMakeLists for the packages built using catkin.
{ "domain": "robotics.stackexchange", "id": 19640, "tags": "ros, package.xml, manifest.xml, target-link-libraries, cmake" }
F# program to generate a Map of substrings frequencies
Question: I wrote the following F# program that takes an input txt file and create a Map that contains all possible substrings of size N and their frequency in the file. In other words if I call the program with a text file with the following content: James John Max Gary Jess Gilles Mary With a length of 2, the probabilityTable contains a value with a key of "ar" with a value of 0.054054. Here is the program: module NameGenerator open System open System.Collections.Generic // Open a file and read all lines into IEnumerable<string> let readInputFile (filePath:string) = System.IO.File.ReadAllText(filePath) // Parses a string and count the total number of occurrences of substrings of size length let rec countOccurrences (input:string) (occurrenceTable:Map<string, float>) length = let adjLen = length - 1 match input |> Seq.toList with | head :: tail when tail.Length >= adjLen -> let other = Seq.take adjLen tail |> Seq.toList let occurrence = (head :: other |> Array.ofList |> String) // add current occurrence to the occurrence table let updatedMap = match occurrenceTable.ContainsKey (occurrence) with | true -> occurrenceTable.Add(occurrence, occurrenceTable.[occurrence] + 1.0) | false -> occurrenceTable.Add(occurrence, 1.0) // call the function recursively with the rest of the string countOccurrences (tail |> Array.ofList |> String) updatedMap length | _ -> occurrenceTable // Given a map that contains the a collection of string and their respective counts, return a // frequency map that is obtained by count / total count let buildFrenquencyTable (occurrenceTable:Map<string, float>) = // fold occurence dict, count total let total = Map.fold (fun acc key value -> acc + value) 0.0 occurrenceTable // map over previous map and replace values with count / total count Map.map (fun key value -> Math.Round(value / total, 6)) occurrenceTable // Given an input file create a probability table for the different letters in the file let buildProbabilityTable (filePath:string) length = let input = readInputFile filePath let initialDictionary = Map.empty countOccurrences input initialDictionary length |> buildFrenquencyTable I would like a review on how I could do things better and use a more functional style. Answer: Let me also leave some comments. 1.You do not needed to use function readInputFile, because it performs the same thing as System.IO.File.ReadAllText. 2.As you already wrote @Henrik Hansen - better to remove the explicit dependency on file and use the string and further use formula for calculating the total : let buildProbabilityTable (input : string) length = let initialDictionary = Map.empty let total = input.Length - length + 1 |> float countOccurrences input initialDictionary length |> Map.map (fun _ value -> Math.Round(value / total, 6)) 3.In updatedMap you use the occurrenceTable.ContainsKey and then get item by key. Better use Map.tryFind (this will allow you not to specify the type for occurrenceTable): let updatedMap = occurrenceTable |> Map.tryFind occurrence |> defaultArg <| 0.0 |> fun x -> occurrenceTable.Add(occurrence, x + 1.0) If you don't like the defaultArg, you can use the usual pattern matching: let updatedMap = occurrenceTable |> Map.tryFind occurrence |> function | Some x -> occurrenceTable.Add(occurrence, x + 1.0) | None -> occurrenceTable.Add(occurrence, 1.0)
{ "domain": "codereview.stackexchange", "id": 23685, "tags": "f#" }
Does the expansion of spacetime add energy to matter?
Question: If I understand correctly when the universe expands the matter in it is expanded as well but the "rest length" of the matter that gets stretched stays the same. So let's say I have a crystal in the ground state meaning all its atoms are at the ideal distance apart. When this crystal sits in an expanding spacetime does that mean that the atoms will be displaced from their ideal resting position resulting in the crystal gaining energy? Does this mean all the matter in the universe is always gaining a tiny amount of energy from the expansion of spacetime? Answer: The crystal is a lump of matter of fixed size, and the expansion of the universe simply means that other lumps of matter are moving away from it on average over very large scales. There is no force or other effect due to the expansion that acts on the crystal, that would cause it to gain or lose energy. If the cosmological constant $Λ$ is nonzero then there is a constant force (outward if $Λ>0$) that makes the bond length of the crystal in the ground state slightly different than it would be in a universe with $Λ=0$, but that effect is independent of time, so it doesn't cause the crystal to gain or lose energy over time, and it isn't related to the expansion as such, since it doesn't depend on $a(t)$. It's easy to get confused about this because beginning cosmology (the FLRW metric and the Friedmann equations) assumes that matter is homogeneous at all scales. If you naively apply FLRW cosmology to this problem, you'll find a time-varying force on the crystal that depends on $a'(t)$ or $a''(t)$, but it's actually a spurious result due to the contradictory assumptions of perfect homogeneity and the existence of a lump of matter. I wrote about this in more detail in another answer.
{ "domain": "physics.stackexchange", "id": 85268, "tags": "general-relativity, energy, cosmology, spacetime, space-expansion" }
Piezoelectric powder solution to measure extremely high pressure
Question: Idea - While reading about piezoelectric materials, I thought about mixing piezoelectric powder with silicon oil to measure extremely high pressures (of up to 4000bar). Experiment - Consider a cylindrical container containing the "Si + Piezo powder" mixture. When an extremely high pressure, say 4000 bar, is applied from the top as shown, the solution would act as a solid. If there are enough piezo particles in the solution, then 1) would this solution act as a piezoelectric solid? 2) When yes, how can I measure the current generated in the solution? 3) could you also suggest some conceptual ideas or designs to measure pressure (up to 4000 bar) through the solution mentioned above. I am trying to measure the pressure without using any physical sensor (piezoelectric sensor, etc.) Answer: 1) would this solution act as a piezoelectric solid? It would not. Although crystals in each individual particle have piezoelectric properties, their dipole moments are randomly oriented and, therefore, would not produce any net voltage under pressure. To make a sensor, PZT powder has to be converted (through a series of steps) into a solid part (e.g., a disc with metallic contacts on the top and bottom surfaces) and polarized (poled) by the application of high DC voltage. As a result of poling, randomly oriented dipoles get aligned and will respond to the applied pressure by generating voltage between the contacts. I don't see any practical way of sensing pressure by just mixing the powder with oil.
{ "domain": "physics.stackexchange", "id": 53701, "tags": "pressure, sensor, piezoelectric" }
Loopwise expansion of effective action $\Gamma[\phi]$
Question: My question is about the loopwise expansion of the effective action $\Gamma(\varphi)$ up to 1-loop contributions. I've understood well the results for both $Z[J]$ and $W[J]$ functionals loopwise expansions. But then something is missing when I follow the path towards the expansion of the effective action. Following this excerpt from Zinn-Justin's "Quantum field theory and critical phenomena": I don't understand the statement: "Therefore a correction of order $\hbar$ to the relation between $J(x)$ and $\varphi(x)$ will produce a change of order $\hbar^2$ to the r.h.s. of equation (6.47)" Could someone please write down some more details? My undestanding at this stage is that the relation between $J(x)$ and $\varphi$ is fixed by the stationarity of the (6.47) (as for any Legendre transformation). But then, how to proceed? Answer: The problem is to evaluate $\Gamma[\phi]$ at fixed $\phi$ given an expansion of $W[J]$ in powers of $\hbar$, where $\Gamma[\phi]$ is the Legendre transform of $W[J]$. By definition, $$ \Gamma[\phi]=\sup_{J}\Big[\phi\cdot J-W[J]\Big]. $$ Suppose that the expression in the RHS attains its maximum at some $\hat{J}$. Then formally, we can expand $\phi\cdot J-W[J]$ about $\hat{J}$ to obtain $$ \phi\cdot (\hat{J}+\delta J)-W[\hat{J}+\delta J]=\Gamma[\phi]+\phi\cdot\delta J-\frac{\delta W}{\delta J}\Big|_{\hat{J}}\cdot\delta J+\mathcal{O}(\delta J^2). $$ Since a maximum is attained at $\hat{J}$, the linear term $(\phi-\delta_J W)\cdot\delta J$ vanishes for arbitrary $\delta J$, and we see that corrections start at order $\delta J^2$. Now we return to the expansion in $\hbar$. If $W[J]$ has an expansion, then it is reasonable that $\hat J$ has an expansion too after solving $\delta_J W=\phi$. In fact, if $\hat J$ has an expansion in $\hbar$ then the zeroth order term must be the solution to $\delta_J W_0 = \phi$, where $W=W_0+\hbar W_1+\dots$. Hence, we can write $\hat J_0=\hat J-\hbar \delta\hat J$, where $\delta \hat J$ includes all higher order corrections. From the above argument, evaluating $\phi\cdot J-W[J]$ at $\hat J_0$ will differ from the evaluation at $\hat J$ by terms of order $\hbar^2$. Hence, to order $\hbar$ we can use $\hat J_0$ in evaluating $\Gamma[\phi]$.
{ "domain": "physics.stackexchange", "id": 22801, "tags": "quantum-field-theory, path-integral, 1pi-effective-action" }
Why would these hydraulic cylinders have 3 large connections?
Question: I am trying to work out how a recently acquired Arburg 220-90-350 from 1982 works so that I can build a new control system and get it working. It's a direct clamping machine with no toggle. The injection side is quite simple, just 3 motors or cylinders each controlled by a separate valve. The clamp and ejector side has me confused though, there is a directional valve controlling each cylinder (simple enough) but each cylinder has a 3rd line the same size as the others connected to another single valve labelled 'high pressure'. So to summarise, 2 hydraulic cylinders each with 3 hoses, 3 directional control valves controlling the 2 cylinders. I can't begin to draw a schematic without further disassembling the machine so I am hoping someone can give me some reasons why a hydraulic cylinder would have 3 hoses of equal size, with 1 on each cylinder connected to the same valve. Edit: On further investigation, I can confirm that non of the 3 hoses are connected together, i.e. there is not two in parallel. Answer: So it turns out that the ejector cylinder is just a standard double-acting cylinder with 2 hoses, the remaining 4 hoses are all used for the clamping cylinder. Two of the large hoses do retract and return at low pressure, the two remaining hoses one large and one small do the high pressure clamping. This is standard for a 2 platten direct clamping machine. The platen closes under low pressure and when it is closed the high pressure gives it the final clamping force.
{ "domain": "engineering.stackexchange", "id": 3652, "tags": "hydraulics, molding, injection" }
Finding products of all other members of an array
Question: I had the following interview question: Given an array of integers, for each member of the array find the product of all the other members of the array. So for instance, if you have this array: {3, 1, 2, 0, 4} you should end up with this: {0 0 0 24 0} I wrote this code which uses two loops, but I couldn't come up with a solution with just one loop. Can anyone help me? public int[] findProducts(int[] arr) { int[] products = new int[arr.length]; for (int i = 0; i < arr.length; i++) { int product = 1; for (int j = 0; j < arr.length; j++) { if (i != j) { product *= arr[j]; } } products[i] = product; } return products; } Answer: This could be done with two consecutive loops (instead of nested loops), making it O(N) instead of O(N^2). In the first loop you calculate the total product of all non-zero elements (and the number of zeros). If there are more than one zero, return result as is, since all elements already default to zero. In the second you either set the result element to zero or to total product divided by the current element depending on the number of zeros in the data and the value of the current element. BTW, There are a lot of blog posts and even some research written about the pointlessness of this kind of interview questions.
{ "domain": "codereview.stackexchange", "id": 34421, "tags": "java, algorithm, array" }
Confusion with work done in isochoric process
Question: So recently I've come up with this thought In an isochoric process, many books state that the work done is zero because there is no change in volume i.e. the gas isn't expanding nor it is contracting. So by knowing that we could write an equation; $$\delta Q = dU$$ But since by the definition of heat capacity, we have that; $$\delta Q = C_pdT$$ $$dU = C_vdT$$ Substituting, and now our new equation becomes; $$C_pdT = C_vdT$$ $$C_p = C_v$$ Which is clearly not true because $C_p$ can never be equal to $C_v$ (knowing that $C_p$ $-$ $C_v$ = $R$). Am I missing something here? What I think that I'm missing is that even though there is no change of volume in the system, there still is a change in pressure and we know that $W$ is a function of $P$ and $V$. Therefore in an isochoric process, there should exist a work that is being done by the system (or an external work?) that is proportional to the change of pressure. Because from what I think is true; $$\delta Q = dU + \delta W$$ And since the integral of a derivative is basically the function itself; hence, $$\int d(\delta W) = \delta W$$ Substituting for $d(\delta W)$ where $P$ and $V$ are both not constants, $$\int (P \delta V + V \delta P) = \delta W$$ $$\delta Q = dU + \int_1^2 (P \delta V + V \delta P)$$ Again, no change in volume, $$\delta Q = dU + \int_1^2 V \delta P$$ And this is when the problem comes. I've tried finding the integral and it became, $$\int_1^2 V \delta P = nR \delta T \ln \frac{P_2}{P_1}$$ Change in temperature is not zero in an isochoric processes, hence the $\delta T$. I'm stuck in this equation. But then I read about isentropic processes and I found a similar equation where, $$W = VdP$$ My intuition tells me that the value of the integral(I'm talking about the $\int_1^2\ V \delta P$ one) is basically the product of the volume $V$, which is a constant, times all of the infinitesimally small $\delta P$s, hence it leads to the equation above(could be wrong though). So to sum up my question; is work done by the change of pressure? Just for you to know, I'm still new to this calculus notation. So apologies for the incorrection(s). Oh and quick question, does $dT$ means the change of $T$ with respect to a constant or does it simply means $\Delta T$? EDIT I've did the integral and found a mistake, here is what I did, $$\delta W = \int_1^2 V\delta P$$ Since $P = \frac{nRT}{V}$ and $\delta P = \frac{nR \delta T}{V}$ (accounting that $V$ is constant and $T$ is not). substituting we get, $$\delta W = \int_1^2 nR \delta T$$ $$\delta W = nRdT$$ Knowing that $T = \frac {PV}{nR}$ we get, $$\delta W = VdP$$(is this correct?) Answer: $Q = C_pdT$ is valid when the pressure is constant and the volume is changing (see here), so it is inconsistent that you equate the $dU$ and the $Q$ of two different processes.
{ "domain": "physics.stackexchange", "id": 64262, "tags": "homework-and-exercises, thermodynamics, work, calculus" }
Error in adding data in columns of R data frame
Question: This is my file df1 = read.csv("HSC_LSC_BLAST_karyotyope.txt",header = TRUE,sep = ",",row.names = 1) metadata <- data.frame(row.names = colnames(df1)) metadata$Group <- rep(NA, ncol(df1)) metadata$Group[seq(1,4,1)] <- 'HSC' metadata$Group[seq(5,15,1)] <- 'Blast' metadata$Group[seq(16,23,1)] <- 'LSC' #metadata$Group[seq(13,16,1)] <- 'Mono' metadata$Cytogenetics[seq(1,4,1)] <- 'Healthy' metadata$Cytogenetics[seq(5,21,1)] <- 'NK' metadata$Cytogenetics[seq(22,23,1)] <- 'Abnormal' >s metadata$Cytogenetics[seq(1,4,1)] <- 'Healthy' Error in `$<-.data.frame`(`*tmp*`, Cytogenetics, value = c("Healthy", : > replacement has 4 rows, data has 23 Im trying to add other metadata information based on clinical information apart from giving sample labeling , im certainly doing something wrong but im not sure what it is. As my idea is the another column named Cytogenetics which would have respective information status but im getting that error. Answer: I know this is not really an answer, but I strongly discourage you from doing what you do, it's dangerous to impute data manually because it's prone to human mistakes - that you might mess up which columns should carry with metadata. Actually, I think I have spotted a mistake already. This is your table using the corrected version of your code (with metadata$Cytogenetics <- NA, see Devon's answer): >metadata Group Cytogenetics HSC1 HSC Healthy HSC2 HSC Healthy HSC3 HSC Healthy HSC4 HSC Healthy Blast11 Blast NK Blast12 Blast NK Blast1 Blast NK Blast2 Blast NK Blast3 Blast NK Blast4 Blast NK Blast6 Blast NK Blast7 Blast NK Blast8 Blast NK Blast9 Blast NK LSC1 Blast NK LSC2 LSC NK LSC3 LSC NK LSC4 LSC NK LSC6 LSC NK LSC7 LSC NK LSC8 LSC NK Blast5 LSC Abnormal LSC5 LSC Abnormal I sort of think that the column LSC1 should be in the LSC group and visa versa the Blast5 should be in Blast group. In this case it seems that it's derived from name, you can actually write a code that will extract the group from the name. df1 = read.csv("HSC_LSC_BLAST_karyotyope.txt", header = TRUE, sep = ",", row.names = 1) metadata <- data.frame(row.names = colnames(df1)) metadata$Group <- sub("[^[:alpha:]]+", "", (colnames(df1))) However, I don't know how do you know which sample is Healthy, NK or Abnormal. I suppose you have a table where this is written, if it's the case, use that table, don't impute the data manually.
{ "domain": "bioinformatics.stackexchange", "id": 882, "tags": "r" }
Is there a chance for Earth to have an additional satellite someday?
Question: I am just an space enthusiast and not a physics professional so pardon me if this is not the right place to ask. As title says, is there a chance for our Earth to have additional satellite(s) like our Moon? I tried to google it, without luck, however I know here there are very smart people that can perhaps give me some references to read or can give me an answer :) Thanks Answer: It's very unlikely. Even if a large body passed near the Earth, it would be on a hyperbolic orbit -- that is, it would have too much kinetic energy to be captured into an elliptical orbit around the Earth. In order to get a new permanent satellite, you'd need a three-body interaction: something would have to come close to the Earth, then, via an interaction with another body, lose enough energy to stay around. In the early days of the Solar System, there was a lot of junk flying around, so events like this could happen. Nowadays, when things have been largely cleaned out, such a three-body interaction is very improbable.
{ "domain": "physics.stackexchange", "id": 635, "tags": "astronomy, space" }