rneu8z
What are some ways to describe the protagonist’s physical appearance from a first person perspective? I'm looking for advice on ways one can describe the physical appearance of the protagonist from their pov in first person, present tense (besides them getting dressed or looking in the mirror)? I am still trying to figure this out as I plan my chapter 1. Are scenes of them getting dressed or looking at a reflective surface the only ways?
As a reader, I notice descriptions but often don't remember them if they turn out to be irrelevant to the story. In first person, the thing I most hate is character observations of themself that no one ever makes. I once rage-quit a book wherein the MC described how her hair waved behind her as she swam, which isn't something you can observe for yourself in the moment. I suspect the author changed from 3rd to 1st person and failed to think about what that meant. As a writer, I drop in aspects as they become relevant. You know my MC has a beard because he habitually rubs his knuckles along his jaw as he thinks. You can drop in a huge amount of such details incidentally throughout your story and build a picture, instead of doing the 'data-dump' description of a reflection. If you have more than one point of view character, they'll of course observe each other sooner or later.
Comparison is the best way imho. That being said, I don’t think this is something you really need to do as much as you might think.
answer_1
rzek47
Any dentists here? Questions about zero waste toothpaste I'm writing an article on my blog about my experience with zero waste toothpaste and general dental hygiene. I've tried: oil pulling, dental tabs (with fluoride), Georganics dental mineral toothpaste, Georganics tooth powder, and DIY tooth powder. The one I've personally stuck with is the tabs because they taste the best, come with fluoride (my dentist tells me this is best) and I find them the easiest to use. So, from a professional's point of view: Do zero waste toothpastes work? Does toothpaste need fluoride? If yes - why? If no - what does it need instead? What should I be careful about with zero waste/eco toothpaste? Does "mineral" toothpaste/powder really help rebuild enamel etc? Mouthwash - important? Does it need to have fluoride? Manual or electric (or sonic) toothbrush - is a manual good enough (e.g. I can get a bamboo/wheatstraw manual brush; eco electric brushes are more tricky)? Zero waste dental routine - anything else to be aware of? I'd be really grateful for any insight! Thank you 😁
Dentistry had a huge wave of disposables, and there is now a movement to stop throwing so much stuff away, but we're also up against cross-contamination, so we just try: keep what we can, throw away what can't be cleaned.
https://youtu.be/2mUUrZJaHPU
answer_1
or48s3
How to deal with (dispose of) cat feces I adopted a cat last April and I've been using a bentonite clay clumping cat litter, as that was the litter suggested by people in the cat groups. Lately, accounting for all the trash I've produced because of the cat litter, I want to minimize or completely get rid of the waste I contribute to my community. I'm thinking of using soil and coco peat in his litter box, but the problem with that is our house doesn't have a garden or a backyard to bury the contaminated soil in. Our community doesn't have good enough waste disposal; I live in Metro Manila, Philippines. I'm thinking of dumping the contaminated soil in a storage box with a lid, but I am also concerned about the smell, the health risks it may cause, and the amount of space it'll use. And if I do that, what do I do next once the storage box is full? I am aware that I can't use it as fertilizer. All the best, Potates
Is your cat smart enough to be toilet trained? That might be a good low waste future option. You can train it with a template cutout over the toilet bowl.
Location? My council accepts animal faeces in the green bin pickup. Definitely check that out first. Do not toilet train your cat: no council recommends putting animal faeces into the waste water system.
answer_2
133809
What to do when you reach a conclusion and find out later on that someone else already did? <sep> Say you've thought hard about a specific issue in your research and have elaborated a possible answer, interpretation, etc., to tackle it. (I'm not thinking about huge research subjects, but rather small ideas that articulate a demonstration.) You then discover later on, while reading a new paper, that someone has thought about the exact same thing. How do you present your idea on the issue? On the one hand, you can't pretend that you haven't read what the other paper says about it, both for reasons of intellectual honesty and because the other author (or someone who read his paper) might think that you stole his idea. Citing the other paper is thus somewhat of an obligation. On the other hand, it doesn't feel right to dismiss your demonstration and just cite the other paper, since, after all, you figured out a solution on your own. Conversely, it seems somewhat pointless (and maybe arrogant), to write explicitly that you reached the given conclusion and only then found the other article. What to do in such circumstances?
This happens quite a lot if you are working in a field with a lot of current research interest. Things that you know are also known by others. People working parallel tracks can often come to the same insights at about the same time. If there is nothing novel in your work compared to the other, you just do what you would normally do and explore extensions and deeper results. You can't be denied the satisfaction of having discovered something, even if you don't get public acclaim for it. Write the next paper. But, if you think it worthwhile, you can also contact the other author, mentioning that you discovered the same thing independently and exploring whether it is worth working collaboratively. Often this can be a good way to expand your research "neighborhood."
I'd argue that this is pretty common in research. As a consequence, the right thing to do is just cite the paper. If, however, your derivation/interpretation/explanation is slightly different, you should both cite the paper and present your own work. It may feel unfair that you don't get credit for coming up with the same solution, but don't worry: if you came up with the same (presumably) correct solution, it shows that you are on the right track. You are having the right thoughts about good topics. That's good for you.
answer_2
7450
How many classroom hours does the typical university teacher teach per week? <sep> I am wondering, for full time university teachers (not those who also have research responsibilities), what is generally the number of hours per week that they teach? I currently teach 20 hours per week and find the load quite heavy, giving me little time to prep new modules with quality. Add to that the responsibility of marking, and it is not uncommon that I end up working more like 50-60 hours per week to teach 20. Are these numbers average? High? Low?
It depends on the type of institution. At my university, a post-1992 UK university in Newcastle, my colleagues and I teach on average 14 hours a week! And yes, you also have to research and engage in administration, including marking (lots of it), meeting students, and supervising both undergraduate and postgraduate students. I finally broke down; it was too much to bear, and I am currently on sick leave. Hopefully, my teaching hours will be reduced after this incident. In pre-1992 universities, I understand the typical teaching load is 6 hours a week.
It depends on different parameters, but universities generally expect each academic staff member to spend roughly 40% of their time on research, 40% on teaching, and 20% on committees and university meetings. Of course, different personalities have different interests and focus more on either research or teaching activities; that's why some take more courses than others. In addition to personal interests, the needs of the school are another issue. For instance, imagine one of the lecturers needs to stay in hospital after an injury in an accident: the head of school asks one of the academic staff to cover for the absent colleague.
answer_1
47411
Making latex sourcefiles publicly available by default <sep> Question: where can I find (legal/pragmatic) information about having latex source files for scientific articles publicly available on github? Or, how would you deal with it? Problem description: I like using git for collaborative tex'ing. So why not use it for scientific articles as well? Put up a github repo, collaborate, submit to arxiv while tagging the corresponding commit, incorporate reviewers' suggestions, etc. It also comes with the additional benefit that not only is a compiled preprint available on arxiv, but your latex source files are as well. This I think is desirable in terms of transparency, and others can easily reuse/extend complex latex bits you have in your articles. Most publishers don't have a problem with arxiv preprints. But what about github repos? They don't contain the pdf, but in general everyone would be able to compile the stuff from the source files on github. Would the source code on github then be considered the same as the compiled preprint on arxiv? I feel like the authors I know who use github might just use it without thinking about the potential legal consequences (yes, because it seems rather unlikely that someone searches through the github repo). Also, if one knows upfront where the article might be submitted, one can check the legal situation for that particular conference/journal. But sometimes one doesn't know beforehand, so one could end up having the latex source files publicly on the web for a publication where this is not allowed. I am looking for a pragmatic answer to this problem, as I think using github for scientific writing is just very efficient and good. Having the source files available in addition to the compiled preprint also seems desirable to me.
If you upload your papers to arXiv, then your Latex source is already publicly available there (click "other formats" and then "download source"). So the only difference is that the source would also be available on Github. There is no legal issue with this unless you sign an agreement that forbids it. In that case, the publisher could ask you to remove the manuscript from Github.
It is a very common practice for authors to include the LaTeX source files within an arXiv submission. I don't understand why a publisher who is okay with a preprint being published would not be okay with the LaTeX source being published with the rest on arXiv. Most information in a PDF file can be easily plagiarized (though things like extracting the actual data points in a plot may be tricky). Postscript files are even easier to plagiarize. The advantage of keeping the source code is that the document will remain editable and compilable forever. If a publisher places a blanket ban on publishing paper source code because of concerns such as plagiarism, well... they're doing it wrong.
answer_1
60897
Storage space running out. Some system functions may not work <sep> In my Android's notification bar I see the message <code>Storage space running out. Some system functions may not work</code>. When I check, I see <code>188 MB free</code> in device memory and around <code>10 GB</code> in USB storage. I have re-installed Android a few times. It helps for about a couple of months, then the problem returns. If I delete some applications or data, it helps for a few hours to a few days. Questions: Why do I get the <code>Storage space</code> error if there is a lot of available space? Can I join device and USB memory? Will it help, or at least delay the problem's return? Any other advice on how I can deal with this problem?
On Samsung phones, type in the dialer <code>*#9900#</code> then choose the second option: "Delete Dumpstate/logcat".
I had the exact same problem. I took the phone into the Samsung service location a number of times, where they performed a full software recovery twice and then replaced the mainboard, but the problem persisted. I realised that the memory logs which were filling up the log files seemed to be connected to a network issue, and decided to try a new SIM to see if this helped. I got the new SIM yesterday, and everything that used to kick off the log files and fill up the system memory seems to be ok. Previously, it might be ok for a few hours once I had cleared the log files (*#9900#, delete log files), but then all of a sudden the log files would start again. Within a few hours, my phone (Galaxy Note) would start becoming unresponsive, I would get the "Storage space running out" message, and my system memory would be down to 100 MB rather than 1.4 GB. After weeks and weeks of the same problem I am hesitant to say that things are completely resolved, but putting the new SIM in seems to have solved the problem for me.
answer_1
272038
MacBook Pro 2016: Keyboard key stuck how to remove key? <sep> Regularly, different keys on my new MacBook Pro (2016) get stuck. I assume little pieces of dirt or small crumbs interfere with the mechanics. With older Mac keyboards, I occasionally removed individual keys for cleaning. As the keyboard of the current model was redesigned, it seems much harder to do so now, and I can't find any tutorials. So: how do I remove (and reinstall) a key of a MacBook Pro (2016)?
Yes, it is possible to remove the keys safely and clean them. But first, it is important to know how it looks below the keycap in order not to damage anything: the key cap is attached to the mechanism by two claws and two hooks. The hooks (left side of the image) are at the bottom of the cap, and they would break if you lever up the cap from there. The top side is held by claws (right side of the image) which can be unclipped. The butterfly mechanism has a round surface on both sides that touches the whole button cap. That is where the problem comes from: since there is a big contact area, it can stick together easily (fluids) or become clogged. I actually spilled wine into my keyboard and many keys were affected, but I did not break a single one doing it as follows. You will have to lever the top side of this mechanism up, gently, to detach the keycap. So, just be careful and do exactly as described below! First, take a thin tool (a needle works fine, but I guess a thin plastic tool is better), squeeze it into the gap above the key and gently lever it up, until you can reach below the rim of the cap with a second tool or your fingernail. The first tool will likely be under the butterfly hinge, which is why you have to release that one and make sure you are only just under the cap with the second one (a short fingernail is the safest option). Now lever up (with the second one/your fingernail!) until it pops off with a slight clicking sound. Now you have to untangle the hooks at the bottom side. Hold up the top side just a bit and very gently wiggle the key around, left, right, up, down, until it is loose. Do this really gently! It should not make a clicking sound or anything, since it isn't really attached to the hinge! I mostly just cleaned the top side of the hinges with my finger or a soft cloth, but you can reach underneath the hinges, if necessary, with a fine brush. To reattach the keycap, slide the hooks underneath the cap back into place from bottom to top.
Then press the key (slightly upwards, that is in the direction of the screen, so that the hooks stay in place) until the top clips back in again. There you go.
I had the same issue with a stuck/spongy Enter key, and after reading through this Reddit thread I just did what people were suggesting there: blow air on it strongly while hammering the key at the same time. It fixed the issue; the key is nice and clicky again!
answer_1
725
Bootup on lower power not functioning <sep> I have a custom Arduino ATMega328 board that generally runs at 5V @ 8MHz (using the Arduino Pro 3.3V 8MHz profile and bootloader). The main reason I'm using this setup is so that I can put the board to sleep when main power is disconnected and it starts being run off of battery power (3V from a coin cell). The 5V and 3V sources are diode OR'ed together and the 5V input is tied to INT0. In code, when it detects that INT0 has fallen low, it initializes sleep mode and everything powers down with the exception of the watchdog timer, which keeps a 1Hz cycle to maintain an internal count and check if the chip should be woken back up. This works beautifully when 5V power is applied first, then the battery is inserted, then 5V is disconnected. It goes to sleep, and when 5V is brought back it wakes up and I can see it hasn't lost count. However, the problem comes when 3V is applied first. I'm honestly not sure if it's even booting. But what it is supposed to do is boot, check if INT0 (Digital 2) is low and, if so, go right to sleep. By watching the current draw I see that it powers up to a few mA for a couple seconds, then drops to about 0.3mA (still higher than it should be in sleep mode). But when I re-apply 5V, nothing. The power draw goes back up but it is unresponsive (over FTDI serial). Is there maybe something I'm missing such that it can't be booted on 3V? In theory it should run just fine. Update: I dropped an LED onto D13 and tried the blink sketch. It works fine when starting from 3V or 5V. However, when I run my firmware and start it from 3V, the LED just starts flashing wildly. I have no idea what's causing it, since I never even set up D13 as anything in my code. But it makes me think it's something to do with the bootloader...
Exactly what is happening and why cannot be determined based on the amount of info provided. However, I see at least one potential problem which would at least partially explain the symptoms described. You said that you are using diodes to select the voltage supply, and one supply is a 3V battery. If you are using standard diodes that drop ~0.6V, then the supply voltage to the MCU is only ~2.4V. If you are using Schottky diodes with a voltage drop between 0.15-0.45V, the supply voltage is potentially as low as 2.5V. You have the BOD voltage set to 2.7 volts, so in theory the MCU will never boot with the battery. As for why you can start it at 5v, drop to 3v, and bring it back up again - I'm not sure. You could be disabling the BOD in code... maybe. Not sure why it works, but it's likely not guaranteed to work. I set up a diode switch circuit with 5v and 3.3v to see what it looks like on my oscilloscope when the voltages switch. When at 3.3v switching up to 5v, the voltage oscillates quite a bit initially. This may potentially cause some problems when the MCU tries to come out of sleep. Putting a cap between VCC and GND smoothed the signal very nicely. When switching from 5v to 3.3v, there really wasn't any oscillating, just a clean drop. From this information, it seems that you should lower the BOD threshold or turn off the BOD, and put a decoupling cap between VCC and GND. You probably also should make sure you have a pull-down resistor on INT0, and read the MCU datasheet sections explaining the various sleep modes and all the considerations for sleeping and waking; it's pretty involved. Cheers
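The diode-drop arithmetic in that answer can be checked with a toy Python sketch. The 0.6 V standard-diode drop, the 0.45 V worst-case Schottky drop, and the 2.7 V BOD level are the approximate figures quoted in the answer, not measured values:

```python
# Toy check of the diode-OR supply arithmetic described above.
# All figures are the approximate ones quoted in the answer.
BOD_THRESHOLD_V = 2.7  # brown-out detector level assumed set in the fuses

def effective_vcc(supply_v, diode_drop_v):
    """Supply voltage the MCU sees after one series OR-ing diode."""
    return supply_v - diode_drop_v

for name, vin, drop in [
    ("5 V main rail, standard diode", 5.0, 0.6),
    ("3 V coin cell, standard diode", 3.0, 0.6),
    ("3 V coin cell, Schottky (worst case)", 3.0, 0.45),
]:
    vcc = effective_vcc(vin, drop)
    state = "above BOD, can boot" if vcc > BOD_THRESHOLD_V else "below BOD, held in reset"
    print(f"{name}: Vcc = {vcc:.2f} V ({state})")
```

On these numbers the coin cell never lifts Vcc above 2.7 V, which matches the answer's suggestion to lower or disable the BOD.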
I think the problem is with your bootloader, but not in the way you think. When an Arduino starts up, a bootloader performing several tasks, such as looking for a serial host or loading program data from flash, is going to take a lot more energy than a sleeping Arduino. I think the reason why your Arduino will not start is that the bootloader requires an accurate clock source, but applying 3v to it will mess up that clock and it will perhaps crash, or wait for 3.3v to be applied for a stable source. Looking at the datasheet: we see that current draw while waking up (running the bootloader) is probably going to be around 2mA, while sleeping current is about 0.8 microamps. This could certainly lead to a blackout on startup without the proper current. Perhaps you should only turn it on with 5v power, or maybe you need a higher current, higher voltage battery.
answer_1
65935
Why were the first airplanes "backwards"? <sep> In the question Is the location of an aircraft spoiler really that vital? the accepted answer states "Surfaces behind the CoG act as stabilisers, keeping the nose pointing forward. An aeroplane has vertical and horizontal tail surfaces at the back just for this purpose." I agree that this seems straightforward, to a layman (me). So why then were so many of the first aircraft built 'backwards'? Take as an example the Wright Flyer Image (C) Bay Images. There are many other examples from the earliest days of aviation. Why did many put the elevators up front, thereby destabilizing the whole thing?
Remember that the aviation pioneers were inventing the skills required to fly while refining their designs. It would be a great help to actually see the position of the elevator while trying to relate its movements to the results. We relate control pressures (which we sense in our hands and feet) to the aircraft movements to sense how we are doing and we learn that from instructors and through practice on well designed aircraft. The Wrights were trying to figure it all out as they went.
They were not backwards: they had a huge horizontal stabiliser at the aft section! Angular accelerations are relative to the CoG. If there is only one aerodynamic surface, it must be behind the CoG in order to self-stabilise. If there are two of them, as in the plane through the Y-axis, basically the same stipulation holds: the total centre of lift must be behind the CoG.
answer_1
1210
How are the categories for climbs decided? <sep> In cycle racing, there are five grades or categories for climbs - Category 4, 3, 2, 1 & Hors (Above Category or HC). How do they decide what is category 1 and what makes it so hard it is a HC?
As has been mentioned, the actual categories are fairly subjective. Things such as the fame of a climb, as well as how the organizers feel about giving out King of the Mountain points on a given stage, will affect rankings. That said, there are some general rules of thumb if you want to get an idea of how your local climb compares to a given ranked climb in the tour. There are always exceptions to climb rankings, but this should give you a basic list to start with. <code>Category 4: 2 km or so @ 6%; 4 km or so @ <4%. Category 3: 2-3 km @ 8% (or less on average, but with very steep pitches); 2-4 km @ 6%; 4-6 km @ 4%. Category 2: 5-10 km @ 5-7%; 10+ km @ 3-5%. Category 1: 5-10 km @ >8%; 10-15 km @ 6%. HC: often Category 1 climbs as the last climb of the day; 15+ km @ 8%+ (Alpe d'Huez, etc.); 20+ km @ anything uphill (Galibier is ~4% over 40 km, if I recall correctly).</code> As I mentioned though, you can find exceptions for any of these. Some examples: In 2006, the TdF included the Cauberg, a key climb of the Amstel Gold race. It covers about 1.5 km at an average of about 5% and was ranked Category 3; with a couple hundred meters @ 11% placed just before the finish, it shattered the peloton. In 2010, stage 12 finished just after the Col de la Croix Neuve. This was ranked as a Category 2 despite being only 3.1 km long; it averaged 10%, though, so hardly easy. Many of the Category 4 climbs in the early flat stages would be unranked on a hillier stage; they exist so there is excitement in the King of the Mountains classification early on.
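Those rules of thumb can be turned into a toy score of length times average gradient, just to compare climbs roughly. The thresholds below are my own hand-fitted approximation of the list above, not anything official; real rankings also weigh fame, stage position and KOM points, so exceptions abound:

```python
def categorize_climb(length_km, avg_gradient_pct):
    """Very rough, unofficial heuristic: score = length x gradient.

    Thresholds are hand-fitted to the rules of thumb in the answer;
    real rankings are a subjective call by the organizers, so expect
    exceptions (the Cauberg, for instance, scores 'uncategorized' here
    despite being ranked Category 3 in 2006).
    """
    score = length_km * avg_gradient_pct
    if score >= 100:
        return "HC"
    if score >= 45:
        return "Category 1"
    if score >= 24:
        return "Category 2"
    if score >= 14:
        return "Category 3"
    if score >= 8:
        return "Category 4"
    return "uncategorized"

# Examples drawn from the answer above:
print(categorize_climb(13.8, 8.1))  # Alpe d'Huez-like -> HC
print(categorize_climb(3.1, 10))    # Col de la Croix Neuve -> Category 2
print(categorize_climb(2, 6))       # early-stage bump -> Category 4
```

A single multiplicative score cannot capture that steepness matters more than length, which is exactly why the overlaps in the table, and the organizers' judgement, exist.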
The tour organizers rank them subjectively based on their steepness, length, and also where they occur in the stage (climbs near the finish garner a higher ranking). Another criterion which seldom makes a big difference is road condition. Some people feel that the ratings have been inconsistent over the years, or have been inflated in recent years. In short, there is no scientific way of rating the climbs, it's just a judgement call from the race organizers. Note, others have indeed tried to quantitatively rank the climbs. You could apply their methodology to climbs near where you live to figure out how a local climb might be ranked at the end of a tour stage.
answer_1
38319
Teaching a child to push off <sep> My four year old started riding a pedal bike a few days ago, and thanks to his balance bike took to it like a fish to water; he's going up and down the street or park like a pro already. He sometimes is able to push off to start, particularly downhill of course, but on straightaways he gets frustrated sometimes. I've taught him to adjust the pedals to the optimal starting spot (about 1 or 2 o'clock position, right?) and to try to get a little speed with his off foot, but sometimes it's just too hard. He's used to the balance bike, and can get plenty of speed there, but just doesn't seem to 'get it' with the regular bike. If it's relevant, he's on a 16"/40cm wheel bike with the seat in the lowest position, and is about 44"/110cm tall; the balance bike he used was a 12"/30cm wheel bike with the seat at the highest position. When he's on the seat his feet reach the ground, but I'm not sure if they get completely flat to the ground. Is there a proper way to teach a young child to push off? Anything I can do to help him other than just letting him keep trying? He's fairly persistent fortunately, so 'keep trying' is entirely reasonable, but if there's a better technique to teach I'd love to get him started right.
In my experience, kids don't realise that they have to push hard at the start. It seems obvious (even "intuitive") to us, but not to them. On the balance bike, one can just push along gently, but this new bike is bigger and heavier, and the gearing makes it harder. So my advice is to encourage him with an enthusiastic "push hard" call. Or go further and tell him to stand on the pedal. I solved this problem in a different way: I built my son's bike with a gear cluster at the back (no derailleur) with the chain on the easiest gear. As he got stronger, I gradually moved the chain across one cog at a time. The main thing is to have fun with them.
I actually just taught my 5 y/o niece to ride without training wheels last weekend. The first thing was that, coming from training wheels, she wasn't able to take off from her feet; she was used to the balance of the training wheels (which is easy). So baby step one was to hold her balanced on the bike during take off. My philosophy is/was to just keep her mind off the harder parts and let the instinct from knowing how to ride on training wheels take over. Once she figured out how to actually ride without me holding her and she was more comfortable (which only really took an afternoon), the rest just came to her throughout the weekend. Next was stopping, and eventually she just started to take off on her own too. I don't think there is really a right or wrong answer to this, but this is my 2 cents. I did as you did and got her feet at the correct position and told her that's how to start, and once she gets on just to GO and not think about it! Starting from standing to taking off: hold the seat, then hold it less and let her feel the balance shifting, continue to hold it some more as she gets a feel for the resulting weight difference, then ultimately just let her do it solo. Again, there really isn't a surefire way, just how the kid feels imo.
answer_1
19762
Why are there species instead of a continuum of various animals? <sep> As I understand it, various animal traits have to evolve gradually, but what happens to the species that are "neither here nor there"? To put it differently, if a species evolved from another, it did so because it's somehow better, right? So why are there examples of the original species not being extinct? What factors determine whether some species "stick"?
Typically when both new and old species still exist it is because evolution pushed the new one into a different habitat or role. As a hypothetical example reef fish vs. deep water fish and their relative size. Let's say deep water fish evolved into reef fish, but we still have deep water fish. So there were deep water fish that were a little smaller than the rest of the deep water fish, and this gave them access to a new place to hide from sharks, shallow waters near reefs. As time goes on this puts evolutionary pressure on the fish to shrink so as to better hide in the reef, those "neither here nor there" fish may have gotten some benefit from being near the reef but the smaller fish got even more benefit and eventually outcompeted the middle species. Vice versa for the deep water fish vs this middle species. It was not as good in deep water so it was outcompeted there as well. This continues until evolution has separated them into two new species. edit: <blockquote> What factors determine whether some species "stick"? </blockquote> Evolution optimizes for the current environment, as long as that environment is stable and the species is a good fit for it then there is little pressure to change. If the environment changes then a species will adapt to it. Here environment is everything relevant to the species: predators, food availability, weather, everything that impacts their life.
Short answer <blockquote> Why are there species rather than a long continuum? </blockquote> Three important reasons I could think of are sex, a non-uniform adaptive landscape and ancestry. Long answer I am not sure I'll answer your question, so let me know if I miss your point or if I help! To start with, you might want to read this answer on the semantic difficulties behind the concept of species. <blockquote> What factors determine whether some species "stick"? </blockquote> Natural selection is nothing but differential fitness (fitness is a measure of both reproductive success and survival) among genotypes within a population. Individuals having greater fitness will leave more offspring, and therefore the genes of these individuals increase in frequency in the population. There are few generalities to be made about what phenotypic traits are beneficial in a given population. For example, "white fur" is a very good trait for a polar bear but would be highly deleterious for a mealworm. However, there is a thing called species selection where, in a given lineage at least, it is possible to identify specific traits that seem to either reduce the extinction rate or increase the speciation rate. This is, for example, the case for polyploidy in angiosperms (Whitton and Otto, 2000). <blockquote> if a species evolved from another, it did so because it's somehow better, right? </blockquote> If you observe different extant species, you cannot say that any of these species evolved from any other one you can observe today. The correct way of looking at two species is that they share a common ancestor at a given point in the past. Therefore, looking at a cat and a blue tit, you cannot say that one species evolved from the other; you can only say that these two species share a common ancestor (just like any other pair of species) that was neither a cat nor a blue tit.
The example is obvious because cats and blue tits are "not so closely related" (everything is relative), but the same logic holds for any pair of species. <blockquote> Why are there species rather than a long continuum? </blockquote> Sex The simplest and most obvious reason why there are species, within which individuals are more similar to each other than to individuals from other species, is due to the definition itself of a species (the most common definition, because different definitions exist!). A species is a group of individuals that can interbreed. See this for more info on the concept of species. Take two originally different groups of individuals and allow them to interbreed. Their traits will mix up and you won't be able to tell two different groups apart. All individuals within the new mixed group are a mixture of the individuals from the two previous groups (under some circumstances this process has sometimes been called "reverse speciation"). Now take one single group of individuals. Split them into two groups in the sense that you don't allow individuals from group 1 to mate with individuals from group 2. You will see that after some evolutionary time, the individuals of group 1 will tend to resemble individuals of group 1 (their own group) much more than individuals of group 2. If you wait long enough so that these two groups of individuals become different enough that they can't interbreed any more, because they diverged too much, then you have what is called reproductive isolation, and under the common definition of species you can say that a speciation occurred (you may want to have a look at the wiki article for "speciation"), and therefore you have two new species instead of one ancestral species. You may wonder, "But why do the two groups tend to diverge through time?"
There are several processes that explain this divergence: Mutations - different mutations occur in the different groups (just by chance). Natural selection - the environments differ and the selection pressures differ, selecting for different traits in the two species. Also, the accumulation of different mutations affects the selection pressure at other loci. Genetic drift - briefly speaking, genetic drift is due to random events; different random events occur in the two populations. For more info about genetic drift, see this post. If you are not very familiar with these concepts, I recommend that you have a look at Understanding Evolution (UC Berkeley). Adaptive landscape Note also that there are other reasons explaining this pattern. One other reason is "because the adaptive landscape is not a flat function". What this means to the layman is that there are some combinations of traits that cannot really be beneficial. Ancestry Also, individual phenotypes are not independent of each other, and not only for ecological reasons but also because of shared ancestry. If you consider two families, you will easily accept not seeing a continuum of phenotypes but two distinct groups (maybe in one family curly hair is common while in the other they all have straight hair).
answer_2
19246
Is there an 'anti-virus'? <sep> A virus spreads around and usually attaches itself to the host, multiplies & causes diseases. But is there something like an anti-virus? A single celled entity that does the opposite: spreads around 'kills' other viruses and/or cures diseases. Has anybody discovered something like it or is there any research group working on synthesizing one? If so any links to their publications? Forgive me if I got my facts wrong, I am physical sciences person and know nothing about biology. :)
There is an "anti-virus", although many call it a virophage. In 2008, a paper was published in Nature about observations of a new strain of a virus known as Acanthamoeba polyphaga mimivirus. This virus mainly attacks amoebae. It was discovered in 1992 and was one of the biggest viruses ever found. Later, a related virus called the mamavirus was discovered. But, after observing this specimen under an electron microscope, scientists found tiny viral particles attacking the mamavirus. It was called Sputnik. The Sputnik virus hijacks the mamavirus's machinery and depends on the mamavirus to survive. It made scientists wonder whether the mamavirus is a living thing. Here is a picture of it attacking the mamavirus: You can see small subviral particles attacking the bigger virus. That is Sputnik. Note that it is a virus that coinfects other organisms but needs a virus infecting an organism to survive. According to Wikipedia: <blockquote> Sputnik virophage is a subviral agent that reproduces in amoeba cells that are already infected by a certain helper virus; Sputnik uses the helper virus's machinery for reproduction and inhibits replication of the helper virus. </blockquote> It seems that there are more species of virophages, including one that infects the marine phagotrophic flagellate Cafeteria roenbergensis in the presence of a second virus, Cafeteria roenbergensis virus, and another one known as the Organic Lake virophage, but not much detail is known about this virus. Here is the link to the Nature article on the Sputnik virophage, published in 2008: http://www.nature.com/nature/journal/v455/n7209/full/nature07218.html It may be possible to synthesize one in the future. However, using this to kill viruses would not make sense because the virophage technically still attacks the host of the virus it is hijacking; plus, from what I understand, it only uses the virus's machinery. Hopefully this was helpful. If you have questions, you can comment on this answer...
There is no anti-virus against all viruses, and no such anti-virus against even a single virus yet, but there is the immune response to viruses. How efficient the immune response is then depends on many things; there is no perfect immune response. Developing an antivirus that decreases viral load requires cooperation with the immune response. Development of such an antivirus is still at a very early stage, since we do not understand the fine regulation of many processes going on in viral pathogenesis, nor how to stop them. There exist in nature some viruses that attack other viruses; however, we do not know whether they attack just one or two viruses. See Abraham's good answer here for the latest publications in Nature. The immune response tries to eliminate the virus through antigen presentation and the cell-mediated immune response. The humoral immune response works at the local sites that the cell-mediated immune response does not reach. However, the immune response is sometimes (and often) insufficient to kill the virus. Developing a general anti-virus is difficult because of the variety of different viruses: RNA vs DNA; positive sense vs negative sense; single-stranded vs double-stranded; intracellular replication vs extracellular. Here is an example of antigen presentation for the HIV virus, deduced from this answer: HIV infects the antigen-presenting cells (APCs) (dendritic cells and macrophages) and monocytes, and replicates actively in the lymphatic circulation. Since the APCs are knocked out, it is difficult to kill the virus. Developing a general anti-virus against HIV would require very specific understanding of many things: probably iPS cells and the development of antigen-presenting cells. My conjecture is to develop an APC that has a receptor for the HIV virus and so can reach it. However, this is only theory. Gamma interferon should be included in the intersection between the innate and adaptive immune systems. 
Interferons may play a central role in the future development of such anti-viral drugs, because they are specific. For instance, the activation of IFN-gamma stimulates the phagocytosis of macrophages against Mycobacterium tuberculosis, which is facultatively intracellular (can be intracellular when necessary). The innate and adaptive immune systems are visualised on the plane in the figure; the humoral immunity then works around that plane as circles. With this I want to emphasize the local nature of the humoral immune system and how it extends the cell-mediated immune system. Any attack on the heart of this system, i.e. antigen presentation, will also put the humoral immunity at risk and thus cause fast progression of the disease. In summary, all measures used to decrease the viral load aim to get the immune system to find an efficient way of decreasing the viral load (killing is just one of them!). This can be done in many ways, most of which we do not know much about yet. iPS stem-cell research and interferon research may be good avenues for the development of good anti-virals. However, it will still take many years (probably at least 40-50 years) to gain enough control of the specific viral pathways.
answer_1
108052
Why don't viruses reach broad concentration outdoors in a city like allergens? <sep> Why don't airborne viruses reach concentrations outdoors in a city that infect most vulnerable people, the way an allergen causes inflammation in everyone sensitive to it? Both are (often) microscopic airborne biological particles produced by a bunch of scattered organisms. Obviously my assumptions are wrong. Even at the height of the Delta or Omicron Covid variants it was generally considered safe to be without a mask if you were alone and outside. The question is: which assumptions are wrong? Does particle size mean that allergens stay near the ground and dilute less? Are there just nowhere near enough organisms emitting at once to matter? Does the life cycle of a virus just prevent enough people from being infectious at once, the way a whole species of tree will start producing pollen? Since producing virions isn't a normal function of a body, are they just produced massively less than something like pollen from a tree? Is the immune system so powerful that it's easier to trigger an allergic response than for an infection to establish a foothold? Why don't people catch colds like allergies?
In another answer elsewhere on StackExchange, a poster estimated that there might be something like 100 g to 1 kg of SARS-CoV-2 virus worldwide, and that's an estimate of all the virus, including what remains in the bodies of infected individuals, not just what makes it to the outside world. I'm not certain of the accuracy of that estimate, but consider that a single pine tree can make a couple kilograms of pollen. Even if it's off by a couple orders of magnitude, the simple answer is there isn't nearly as much virus out there as there are environmental allergens. Anecdotally, there's enough pollen during the right season that if I leave my windows open, there is a light, greenish-yellow coating on surfaces near windows. The comparison just isn't anywhere close.
2, 4, 5, and 6 - 6 being that UV light (from the sun), fluctuations in temperature, humidity, wind etc. mean that the virions decay relatively rapidly for most virus species. To address (1): In general a virus such as SARS-CoV-2 or influenza is about 100 nanometres (0.1 micrometres (µm)) in diameter, whereas a pollen grain is about 10 µm (sizes in links for the respective viruses) - the virus is 100x smaller, but is largely spread through droplet transmission. Droplets are comprised of typical nasal secretions (i.e. snot) and/or saliva, and those of about 5-100 µm in size are relatively dense and fall to the floor rapidly. Those less than 5 µm can float for quite some time1 but rapidly dehydrate and lose virion integrity, so can't cause infection. (2) and (4): Infections produce a lot less virus than a tree does pollen. A typical allergenic tree like a Silver Birch (Betula papyrifera) can produce about 2 million pollen grains from a single catkin - up to about 2 billion grains per tree. Multiply that by the number of trees, and you'll get some idea of the number of pollen grains for that species alone. On the other hand, an infected person has about 7 million virions per millilitre (ml) of saliva. However, not all of each ml is turned into droplets when speaking, and it turns out that only about 37% of 50 µm droplets will contain a single virion, and that this drops to 0.37% for 10 µm droplets (see ref 1). This means that each infected person at their infectious peak is only putting a tiny proportion of the virus they contain into the air. (3) and (4): Sort of - plenty of people can be infected at once, as you will have seen during the waves of infection, but once sick they aren't out there walking around constantly emitting virus into the environment; they are in bed, at home or in a hospital (assuming they are following good public health advice). This also ties into the answer for (1) - the droplets just don't last like a pollen grain can. 
Pollen's purpose is to travel to find a new flower to fertilize, so trees that are wind-pollinated have a selective pressure to produce pollen that can last in the environment and still fertilize another tree. Viruses don't have the same selective pressure, because transmission relies on "close" contact of a mobile organism - not a tree that sits in the same place whose closest neighbour might be miles away. (5): How an infection takes place is multifactorial - you need the conditions to be just right for transmission to take place, and a big enough dose of virions to cause an infection (often for things like Influenza or Adenovirus this is in the 3-10 virion range); you then need it to hit the right tissue in the body and evade the immune system. All pollen has to do is hit a mucosal membrane (nose, mouth, eyes etc.) for there to be an immune cell there to activate the immune response. Long story short: infection is much harder to achieve, produces fewer virions and has a lower likelihood of happening. 1: Stadnytskyi V, Bax CE, Bax A, Anfinrud P. The airborne lifetime of small speech droplets and their potential importance in SARS-CoV-2 transmission. Proc Natl Acad Sci U S A. 2020 Jun 2;117(22):11875-11877. doi: 10.1073/pnas.2006874117. Epub 2020 May 13. PMID: 32404416; PMCID: PMC7275719.
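The 37% / 0.37% figures quoted above can be reproduced with a simple Poisson model: assume virions land in a droplet independently at the quoted saliva concentration, so the count per droplet is Poisson with mean = concentration × droplet volume. The concentration and droplet sizes come from the text; the Poisson model itself is my assumption about how such figures are typically derived:

```python
import math

# Figure from the text: ~7 million virions per millilitre of saliva.
VIRIONS_PER_ML = 7e6

def p_droplet_carries_virion(diameter_um):
    """Probability a spherical droplet of the given diameter (in micrometres)
    contains at least one virion, under a Poisson model."""
    radius_cm = diameter_um / 2 * 1e-4            # 1 um = 1e-4 cm
    volume_ml = (4 / 3) * math.pi * radius_cm**3  # 1 ml = 1 cm^3
    lam = VIRIONS_PER_ML * volume_ml              # expected virions per droplet
    return 1 - math.exp(-lam)

# Roughly 0.37 (37%) for 50 um droplets, roughly 0.0037 (0.37%) for 10 um:
print(p_droplet_carries_virion(50))
print(p_droplet_carries_virion(10))
```

The hundredfold drop between the two droplet sizes falls straight out of the cubic dependence of volume on diameter.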
answer_2
8
How do I develop mobile applications for Bitcoin? <sep> I would like to develop a mobile Bitcoin application for the Android and iPhone platforms. What libraries and resources can I use?
The Bitcoin client provides RPC calls in JSON; you could communicate via this RPC interface to build your app.
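For illustration, here is a minimal Python sketch of such a JSON-RPC call using only the standard library. The URL and port are placeholders, and a real bitcoind also requires HTTP basic auth with the rpcuser/rpcpassword credentials from bitcoin.conf, which are omitted here:

```python
import json
import urllib.request

def build_rpc_payload(method, params=None, request_id=0):
    # JSON-RPC 1.0 envelope, as expected by the Bitcoin client's RPC interface.
    return json.dumps({
        "jsonrpc": "1.0",
        "id": request_id,
        "method": method,
        "params": params or [],
    })

def call_bitcoind(method, params=None, url="http://127.0.0.1:8332"):
    # Placeholder endpoint: a real node also needs basic-auth credentials.
    req = urllib.request.Request(
        url,
        data=build_rpc_payload(method, params).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# Example payload for fetching the current block count:
payload = json.loads(build_rpc_payload("getblockcount"))
```

On a phone you would point the URL at a remote machine running the client rather than at localhost.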
Currently there are only a few mobile apps that use the bitcoin protocol itself. Most communicate with a bitcoin client on a remote machine using the JSON API. Those few that do, rely on BitcoinJ which is a "selfish" implementation of bitcoin that runs in Java (handily the native language for Android apps). It's got a few key modifications that bring its overhead low enough to run on embedded devices, most notably it does not download all blocks in the block-chain, only those blocks which relate to addresses in its own wallet (hence "selfish" client). Either method works, and either is as valid a starting point as the other, given the current state of bitcoin's mobile development.
answer_2
51871
Have "molecular clusters" for azeotropes been identified? <sep> A different question about azeotropes got me thinking about this point again. Azeotropes have a very specific composition, so it seems that the azeotrope ought to have some sort of physical structure. It seems to be a "molecular cluster" of some sort. The azeotrope for water and ethanol is about 95.5% ethanol by weight. A little fiddling and it seems that the ratio is 8 ethanol molecules to 1 water molecule. Is there some particular physical configuration of molecules to which this would correspond?
No, in the absence of extra data, there is no reason to suppose that there is any vapor-phase cluster formation. Cluster formation in the gas phase would demand very, very strong departures from ideal-gas behavior. To the contrary, the ideal gas law is an excellent descriptor of gas phase mixtures of ethanol and water. Check out a Wolfram Demonstration for the ethanol-water system. It says: <blockquote> You can vary the pressure $P$ to any value between 50 kPa and 200 kPa (i.e., low to moderate pressure so that the ideal gas-phase assumption holds). </blockquote> If the ideal-gas assumption holds, then there is no significant structure formation in the vapor phase. The "ideal" gas law describes negligibly small particles that have no attraction or repulsion to each other. Structure formation means that molecules must be strongly attracted to each other in order for arrangement into a persistent structure to occur. An "extended" form of Raoult's law that is valid for non-ideal vapor as well as non-ideal liquids, and thus is applicable to azeotropes, is $y_i \phi_i P = x_i \gamma_i p_{i,\mathrm{sat}}^{\star}$ Here, $\phi_i$ is the fugacity coefficient and takes into account vapor-phase non-idealities (i.e. deviations from the ideal gas law), and $\gamma_i$ is an activity coefficient and takes into account liquid-phase non-idealities. For many, many systems of interest, $\gamma_i$ is the driver of non-ideality, including azeotropic behavior. Fugacity coefficients $\phi_i$ are negligible (except at enormous pressures) a much higher percentage of the time than activity coefficients $\gamma_i$. This is because liquid phases are often far more dense than vapor phases, meaning that intermolecular forces govern behavior to a much stronger degree than in vapors.
In positive azeotropes, for which the boiling point is less than the boiling points of any of the constituents, the intermolecular interaction of the different molecules is weaker than in the pure liquid phase. Therefore it is not very likely that clusters involving the different molecules will be formed. In negative azeotropes, which have a higher boiling point than the constituents, cluster structures in the liquid phase are more likely. E.g. for concentrated hydrochloric acid the existence of clusters has been suggested1. 1 Agmon, N., Structure of Concentrated HCl Solutions, J. Phys. Chem. A 1998, 102, 192-199
answer_1
244590
CAN Bus testing <sep> I am creating a Python 3.8 script that executes a series of tests that read and write information to and from a CAN bus network. I'm using the python-can and cantools packages. I've managed to transmit and receive data with small functions individually without issue, but I feel I'm not creating the proper "Pythonic" script architecture that allows my script to use all functions, instances and variables between modules. Intended architecture goal: <code>main.py</code> - contains the CAN bus transmit and receive functions, performs initialization and cycles through each test case located in the <code>test_case.py</code> module. <code>test_case.py</code> - stores all test cases, where each test case is a stand-alone function. Each test case must be an isolated function so that if one test needs to be removed or a new test added, the script won't break. Additionally, there will likely be dozens, maybe hundreds, of test cases, so I'd like to keep them isolated to one module for code cleanliness. <code>test_thresholds.py</code> - keeps all the pass/fail threshold variables that each test case in <code>test_case.py</code> will refer to. Problems / Questions: <code>main.py</code> instantiates a CAN bus object <code>bus = can.Bus(bustype='pcan', channel='PCAN_USBBUS1', bitrate=500000)</code>; this object is required for the transmit and receive functions. Because the transmit and receive functions are in <code>main.py</code>, this wasn't a problem until I tried to execute a test case in the <code>test_case.py</code> module, which references the transmit and receive functions in <code>main.py</code>. Once I attempted to execute a test case, an error occurred: the <code>receive()</code> function called from the <code>test_case.py</code> module raised <code>NameError: name 'bus' is not defined</code>. I understand this as <code>test_case.py</code> not knowing what the <code>bus</code> instance is. 
This problem also occurs with my <code>can</code> instances. I have <code>from main import *</code> in my <code>test_case.py</code>. I know this is bad, but I am not sure how else <code>test_cases.py</code> will use the transmit and receive functions along with the <code>bus</code> and <code>can</code> instances. How can I share these instances between modules? What are the best practices here? I have tried to go over several posts on Stack Overflow regarding passing objects (I think that's what my problem is) but none of them seem to answer what I'm looking for. Is my architecture design acceptable? I'm new to designing larger scripts and I want to make sure I am doing it effectively/properly so that it can scale. Note: I've cut down a lot of my code to make it more readable here. It may not run if you try it.

<code>main.py</code>

<code>import can
import cantools
import test_cases.test_cases  # import all test cases
import time

# sending a single CAN message
def single_send(message):
    try:
        bus.send(message)
    except can.CanError:
        print("Message NOT sent")

# receive a message and decode payload
def receive(message, signal):
    _counter = 0
    try:
        while True:
            msg = bus.recv(1)
            try:
                if msg.arbitration_id == message.arbitration_id:
                    message_data = db.decode_message(msg.arbitration_id, msg.data)
                    signal_data = message_data.get(signal)
                    return signal_data
            except AttributeError:
                _counter += 1
                if _counter == 5:
                    print("CAN Bus InActive")
                    break
    finally:
        if _counter == 5:
            # reports false if message fails to be received
            return False

def main():
    for name, tests in test_cases.test_cases.__dict__.items():
        if name.startswith("tc") and callable(tests):
            tests()

if __name__ == "__main__":
    bus = can.Bus(bustype='pcan', channel='PCAN_USBBUS1', bitrate=500000)
    db = cantools.db.load_file('C:\\Users\\tw\\Desktop\\dbc_file.dbc')
    verbose_log = open("verbose_log.txt", "a")
    main()
    bus.shutdown()
    verbose_log.close()
</code>

<code>test_case.py</code>

<code>from test_thresholds.test_thresholds import *
from main import *  # to use the single_send and receive functions in main

def tc_1():
    ct = receive(0x300, 'ct_signal')  # this is where the issue occurs. receive expects the bus instance
    message = can.Message(arbitration_id=0x303, data=1)
    if (ct > ct_min) and (ct < ct_max):
        verbose_log.write("PASS")
    else:
        verbose_log.write("FAIL")
</code>

<code>test_thresholds.py</code>

<code>ct_min = 4.2
ct_max = 5.3
</code>
In-band error signalling

<code>return signal_data
# ...
# reports false if message fails to be received
return False
</code>

is problematic. You're forcing the caller of this code to understand that the return value has at least two different types: boolean or whatever "signal data" is. The Python way to approach this is to use exceptions. Rather than (say) re-throw <code>AttributeError</code>, it would probably make more sense to throw your own exception type. Also, the logic around retry counts is a little convoluted. You should be able to assume that if the loop has ended without returning, it has failed. Also, don't increment the counter yourself. In other words,

<code>for attempt in range(5):
    msg = bus.recv(1)
    try:
        if msg.arbitration_id == message.arbitration_id:
            message_data = db.decode_message(msg.arbitration_id, msg.data)
            signal_data = message_data.get(signal)
            return signal_data
    except AttributeError:
        pass
raise CANBusInactiveError()
</code>

I would go a step further. My guess is that <code>msg</code> - if it fails - does not have the <code>arbitration_id</code> attribute. So - rather than attempting to catch <code>AttributeError</code> - either: call <code>hasattr</code>, or (preferably) call <code>isinstance</code>.

Context management

Put this:

<code>verbose_log = open("verbose_log.txt", "a")
verbose_log.close()
</code>

in a <code>with</code>.

Hard-coded paths

<code>'C:\\Users\\tw\\Desktop\\dbc_file.dbc'
</code>

should - at least - go into a constant variable. Better would be to get it from a command-line argument, a conf file or an env var.
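As for the asker's core problem of sharing the <code>bus</code> instance between modules: one common alternative to <code>from main import *</code> is to pass the dependency in explicitly, so the test module never needs to import anything back from <code>main</code>. A minimal, self-contained sketch of the pattern (the names and the <code>FakeBus</code> stand-in are illustrative, not the real python-can API):

```python
import io

# --- test_case.py (sketch): each test takes what it needs as parameters ---
def tc_1(receive, log, ct_min=4.2, ct_max=5.3):
    ct = receive(0x300, 'ct_signal')
    log.write("PASS" if ct_min < ct < ct_max else "FAIL")

# --- main.py (sketch): create the bus once and hand it to every test ------
def run_tests(bus, tests, log):
    # Bind the shared bus into a receive callable once; tests stay
    # ignorant of how the bus was constructed.
    def receive(arbitration_id, signal):
        return bus.read_signal(arbitration_id, signal)
    for test in tests:
        test(receive, log)

# Stand-in for can.Bus so the sketch runs without hardware:
class FakeBus:
    def read_signal(self, arbitration_id, signal):
        return 4.8

log = io.StringIO()
run_tests(FakeBus(), [tc_1], log)
print(log.getvalue())  # PASS
```

Besides breaking the circular import, this makes the test cases trivially testable against a fake bus, exactly as done here.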
Protected variables

An underscore prefix is used to mark a variable as <code>protected</code> in Python classes, so

<code>_counter = 0
</code>

should be

<code>counter = 0
</code>

Chained comparisons are permitted in Python

<code>if (ct > ct_min) and (ct < ct_max):
</code>

can be

<code>if ct_min < ct < ct_max:
</code>
answer_1
19706
How do I clean a pasta maker? <sep> I am experimenting with a home pasta maker, and after all the fun comes the cleaning up. My worries come from tiny bits of dry dough I find when I clean the machine. There always seem to be more every time I shake it, and they of course contain raw egg. On the instructions, it clearly says not to wash it with water. What's the best practice in this case (besides disassembling the thing)?
A couple of things that might help on this one: If your machine has a few dried pasta crumbs on it, just leave it out to dry and knock / pick the dried dough out with a brush or a chopstick. Don't worry too much about any crumbs of dried egg dough making you sick. You are going to boil whatever noodles you make for at least 3 minutes, aren't you? If you washed your pasta machine with soap and water - like I did - just put it in a low oven (at 150 degrees F) for an hour, to gently dry the water out. Don't go any hotter, and don't try to do this with a plastic machine. The 3 hand-cranked machines that I have seen had screws holding on the covers at either end of the rollers. Open them up and brush a tiny bit of olive oil on the ends of the shafts and gears to keep them moving freely and to stop any rust.
I have a KitchenAid metal pasta maker! You cannot make pasta without some particles getting caught in the machine, no matter how careful you are! I have made ravioli for 45 years and cannot understand why a machine was made that you cannot take apart and clean the inside! We have tried a paper clip, straightened and shoved between the rollers. Air is not the best either! I get so frustrated every year with this darn thing!
answer_1
27745
Can food be boiled "extra fast/hard" in water? <sep> Once water is boiling you can either leave the heat on quite high, or turn it down a bit so that it just keeps boiling. Apart from extra water evaporating, does this have any effect on the taste of food you're boiling (meat, vegetables, eggs, etc.)? With just common sense we could get to the following reasoning: The liquid water is max 100C (right?); beyond that it should vaporize (right?). Water vapor could be hotter than 100C (but how much, in normal cooking conditions?). When boiling water, the vapor originates at the bottom of the pan. So technically the food could be "hit" by this vapor, thus being heated above 100C. Even if the above reasoning is correct, the questions would still be: would it matter how much you heat boiling water beyond 100C? Can you significantly change the taste of boiled food by "boiling it really hard" or "boiling it slowly"?
In my experience, the most likely impact of a gentle boil vs. a furious rolling boil is going to be on the texture of starchy foods, such as potatoes or other root vegetables, rather than flavor. I've found that a gentle simmer of potatoes will result in a mostly intact shape and consistent texture, whereas an aggressive boil without perfect timing can result in the outer layers of the potato breaking apart, sometimes before the center has time to cook fully. I've seen similar issues with stuffed parcels of pasta like ravioli or boiled won tons. I've also found that open-pot poached eggs have much nicer results with a gentle simmer than an aggressive boil, perhaps for related reasons. Since part of how we experience taste is texture, you could say that the "taste" is affected.
At normal atmospheric pressure, even the steam created by boiling will only be 100C. However, you will have to worry about the food touching the bottom of the pan, as that can, and will, get hotter than the water. So if what you're boiling is suspended or floating then no, it won't be any different. I figure it's also worth mentioning that if what you're boiling is sensitive to movement (like poaching an egg), then a more rapid boil can affect the structure due to larger, faster, "more violent" bubbles. I don't think it would change the flavor at all though.
answer_1
10918
How to prove the security of the PRNG? <sep> Are there any real tests or criteria that prove the security of a PRNG? What kinds of tests or criteria?
<blockquote> How to prove the security of the PRNG? </blockquote> My best advice would be to start with a statistical test suite like the one NIST describes in "A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications" (PDF). It's a battery of statistical tests to detect non-randomness in binary sequences constructed using random number generators and pseudo-random number generators utilized in cryptographic applications. The documentation and software are available at this page of the NIST website. (If NIST STS doesn't feel complete, you might want to know that other, more diverse test suites exist.) Those tests are useful as a first step in determining whether or not a generator is suitable for a particular cryptographic application. Yet, you have to keep in mind that no statistical test can certify a generator to be appropriate for any particular use. Simply said: statistical testing cannot serve as a substitute for cryptanalysis. For that, you'll have to dive into the cryptanalysis of random number generators. Cryptanalysis will help you check for potential weaknesses to several attacks (e.g.: input-based attacks, backtracking attacks, state compromise extension attacks, meet-in-the-middle attacks, etc.) and it can help you optimize the security of your individual RNG in case you detect a flaw which leads to a successful attack. If you don't know where to start with cryptanalysis, you might want to check out Cryptanalytic Attacks on Pseudorandom Number Generators (PDF). That paper provides some first insights on several attacks and provides some good examples by applying some of those attacks to real-world PRNGs. In the end, all that will not be able to prove that your PRNG is cryptographically secure, as (up to the time of writing this) no one has been able to prove that something like a cryptographically secure random number generator actually exists. 
Yet, if you do your statistical tests (and your RNG passes them) and if you invest a truckload of time to do a thorough cryptanalysis, you might be able to prove that your random number generator resists a (hopefully large) number of attacks - which is about as much as you can do to prove the cryptographic security of a random number generator.
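To make the "statistical test" idea concrete, here is a minimal sketch of the simplest test in the NIST suite, the frequency (monobit) test: map bits to ±1, sum them, and derive a p-value; p below 0.01 (the suite's default significance level) flags the sequence as non-random.

```python
import math

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test on a sequence of 0/1 bits."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)  # +1 for each 1-bit, -1 for each 0-bit
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# A heavily biased sequence fails; a perfectly balanced one passes this
# single test (which, as the answer stresses, proves nothing by itself --
# hence the full battery, and beyond that, cryptanalysis).
print(monobit_p_value([1] * 1000) < 0.01)    # True: all-ones fails
print(monobit_p_value([0, 1] * 500) > 0.01)  # True: balanced passes
```

Note that a trivially predictable sequence like `[0, 1] * 500` sails through this test, which is exactly why statistical testing cannot substitute for cryptanalysis.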
As mentioned, most proofs of PRNG security are really proofs of a protocol that uses some underlying construct. The proofs say, "If the construct can't be broken, then the protocol that uses it can't be broken any easier than that." That makes all these proofs subject to the assumption that the underlying construct (like factoring, quadratic residuosity, etc.) is hard to break. To address the last part of your question, PRNGs are evaluated in a number of different settings that are meant to mimic real-life attacks they face. Typically, this includes: known state attack, where you assume the attacker can peek at the internals of the PRG when output is generated, chosen state/input attack, where you assume the attacker can control the entropy updates or even the whole state of your PRG, and known key attack, where you assume the attacker knows the key (not the same as the seed, which is thought of more as the initial state) but not the state (current or past) In each of these settings, algorithms are evaluated for their ability to: predict the next output, guess previous outputs (a property known as security against a future compromise, or forward security), and/or recover the seed These papers give a formal treatment for security in PRGs: eprint.iacr.org/2005/029.pdf (Barak, Halevi, 2005) www.iacr.org/archive/eurocrypt2002/23320364/prng.pdf (Desai, et. al., 2002)
answer_1
53257
Paillier Homomorphic encryption to calculate the means <sep> Paillier homomorphic encryption supports addition and multiplication by a plaintext value. Can I use these properties to calculate the mean of ciphertext values? I try to use the following steps: Multiply the set of ciphertexts (to get the sum of their plaintext values). Raise the ciphertext calculated in step 1 to the power of $\dfrac{1}{c}$ (where $c$ is the number of ciphertexts) to get the average. The problem I have is that Paillier is defined in the integer domain $\mathbb{Z}$, thus $\dfrac{1}{c}$ is always $0$, so the final result is also $0$. Any help or suggestion?
The Paillier encryption of an integer $x_i$ is given by $c_i = (1+x_iN)r_i^N \bmod N^2$ for some random $0<r_i<N$. Given the encryption of $x_1, \dots, x_k$, the encrypted mean is defined as $$[\![\mu]\!] = \left(\prod_{i=1}^k c_i\right)^{k^{-1}\bmod N} r^N\bmod N^2$$ for some random $0<r<N$. If we now apply Paillier decryption procedure to $[\![\mu]\!]$, we get $$\mu = \frac{\sum_{i=1}^k x_i}{k} \bmod N$$ We assume $\sum_{i=1}^k x_i< \sqrt{N}$. Now an application of Lagrange-Gauss lattice-reduction algorithm yields $\mu$ as an element in $\mathbb{Q}$. Based on: <a href="https://ifca.ai/pub/fc02/10-Fostwa02.pdf">[FSW02] Pierre-Alain Fouque, Jacques Stern, and Jan-Geert Wackers. Cryptocomputing with rationals. In Financial Cryptography, volume 2357 of Lecture Notes in Computer Science, pages 136146. Springer, 2002.</a> Alternatively, instead of using Lagrange-Gauss algorithm, we can adapt the extended Euclidean algorithm: <code> [u1, u2] = [0, N]; [v1, v2] = [1, mu]; while (u2 > sqrt(N)) do Q = u2 div v2; [t1, t2] = [u1, u2] - [v1, v2]*Q; [u1, u2] = [v1, v2]; [v1, v2] = [t1, t2]; endwhile return u2/u1 </code> Here is a toy example with $p = 739$, $q = 839$, and $N = pq = 620021$. Suppose $x_1 = 97$, $x_2 = 74$ and $x_3 = 46$. We are given their respective encryptions: $c_1 = 206197787317$, $c_2 = 267770082390$, and $c_3 = 49804921902$. We have $k=3$ and $k^{-1} \bmod N = 206674$. We choose a random $r<N$, say $r = 559196$ and compute $$[\![ \mu]\!] = (c_1c_2c_3)^{k^{-1}\bmod N} \, r^N \bmod N^2 = 127639014845$$ The decryption of $[\![\mu]\!]$ yields $\mu = 206746 \pmod N$. Lagrange-Gauss algorithm then yields $206746 \equiv \frac{217}3 \pmod N$ and thus $\mu = 217/3 = 72.33$.
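The extended-Euclidean sketch above translates almost line for line into Python; run on the answer's toy example it recovers $217/3$ from $\mu = 206746 \bmod 620021$:

```python
import math

def reconstruct_rational(mu, N):
    """Recover a small rational x/y from mu = x * y^-1 mod N, following the
    extended-Euclidean pseudocode in the answer above."""
    u1, u2 = 0, N
    v1, v2 = 1, mu
    while u2 > math.isqrt(N):
        q = u2 // v2
        # [t1, t2] = [u1, u2] - [v1, v2] * Q, then shift the pairs down.
        u1, u2, v1, v2 = v1, v2, u1 - q * v1, u2 - q * v2
    return u2, u1  # numerator, denominator

# Toy example from the answer: N = 620021, decrypted mean mu = 206746.
print(reconstruct_rational(206746, 620021))  # (217, 3)
```

As a sanity check, $217 \cdot 3^{-1} \bmod 620021 = 217 \cdot 206674 \bmod 620021 = 206746$, matching the decryption of $[\![\mu]\!]$.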
<blockquote> The problem I have is that, paillier is defined in integer domain $\mathbb Z$ thus $\frac1c$ is always 0 so the final results is also 0. </blockquote> You are trying to use real-valued arithmetic here. You are in the wrong field for that. If you are using Paillier encryption you work in $\mathbb Z_n^*$. The basic operations of addition, subtraction, multiplication and division work differently there. You can compute e.g. $\operatorname{Enc}(\frac{a+b}2)$ given $\operatorname{Enc}(a)$ and $\operatorname{Enc}(b)$, but this is not $\operatorname{Enc}(2.5)$ for $a=2,b=3$: it is the arithmetic average using the operations of $\mathbb Z_n$, i.e. the operations that reduce $\bmod n$ after each step, and where division by $a$ works by finding $x$ such that $ax\equiv 1\bmod n$ and then multiplying by $x$.
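The point about modular division can be seen without any encryption at all; a tiny sketch with a made-up modulus $n = 11$ standing in for the Paillier modulus:

```python
n = 11                   # toy stand-in for the Paillier modulus
a, b = 2, 3
inv2 = pow(2, -1, n)     # "division by 2" in Z_n: inv2 = 6, since 2*6 = 12 = 1 (mod 11)
avg_mod = (a + b) * inv2 % n
print(avg_mod)           # 8, not 2.5 -- yet 8*2 = 16 = 5 = a+b (mod 11), as required
```

So the homomorphic "average" is perfectly well defined, it just lives in $\mathbb Z_n$ rather than in $\mathbb R$.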
answer_1
67135
What is a "contradiction" in constructive logic? <sep> In Practical Foundations for Programming Languages, Robert Harper says <blockquote> If for a proposition to be true means to have a proof of it, what does it mean for a proposition to be false? It means that we have a refutation of it, showing that it cannot be proved. That is, a proposition is false if we can show that the assumption that it is true (has a proof) contradicts known facts. </blockquote> But then this raises the question: what is a contradiction in constructive/intuitionistic logic? Is this meant in the sense of deriving $(\bot\text{ true})$ somehow? How would this happen in a sensible way? Would a judgment of the form $(A \supset \bot \text{ true})$ need to be introduced? Alternatively, is it perhaps meant in the sense of the reader using their discretion to informally label something as contradictory? For example, interpreting $a = b$ and $a \neq b$ as conflicting propositions.
It is immaterial whether we speak about constructive or classical logic in this situation. If you read your questions again, you will see that they apply to both kinds. The only difference that we need to take notice of is the presentation of negation $\lnot A$. It can be presented in several ways classically, but intuitionistically it is best to use it as an abbreviation for $A \Rightarrow \bot$ (which is precisely what Bob Harper is hinting at in the quoted paragraph). But let us not confuse negations and contradictions. In both cases, a contradiction is a situation in which we have managed to prove falsity $\bot$. How could we derive $\bot$ in a sensible way? Well, from an inconsistent set of hypotheses; that would be a sensible way to do it. You have no discretion to "declare" a contradiction. You must prove that a given set of hypotheses is contradictory by deriving $\bot$. For instance, if $a = b$ and $\lnot (a = b)$ then we may use the fact that $\lnot (a = b)$ is an abbreviation for $(a = b) \Rightarrow \bot$ and conclude $\bot$ by modus ponens.
A contradiction is usually represented as $A \land \lnot A$. It's typical in intuitionistic logic to define $\lnot A$ as $A \Rightarrow \bot$. It's clear we can derive $\bot$ from $A \land \lnot A$. Ultimately, a contradiction will be a hypothetical derivation of $\bot$, as the very definition of $\lnot$ suggests. It will be hypothetical because otherwise your logic is inconsistent. The point Harper is making is that to prove something is to have a proof, and to refute something is to have a proof that it implies $\bot$. However, you can easily be in the situation that you can (meta-logically) prove that you are unable to provide either a proof or a refutation. In such a situation, the proposition is neither constructively true nor false. A way to understand classical logic and contrast it with the above is the following (essentially Kolmogorov's double negation interpretation): we say a proposition is false if it implies a contradiction, i.e. it implies $\bot$. A proposition is true if we can prove that it can't be contradicted, i.e. we can show assuming it is false leads to a contradiction. In symbols, $A$ is false in this sense if $A \Rightarrow \bot$, as usual. $A$ is true in this sense if $\lnot A \Rightarrow \bot$, i.e. $\lnot \lnot A$ is provable. You can show that the Law of the Excluded Middle holds constructively if we interpret "true" and "false" in this sense. That is, you can prove that $\lnot \lnot (\lnot \lnot A \lor \lnot A)$ holds constructively. More compactly, you can show $\lnot \lnot \lnot A \Rightarrow \lnot A$. With this notion of "true" and "false", we can say that a proposition is true if we can prove that no refutation exists. By contrast, constructively a proposition can fail to be constructively true even if we can demonstrate within the system that no refutation can exist.
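The double-negated excluded middle mentioned above can be checked mechanically; a minimal Lean 4 sketch (the theorem name `nn_lem` is mine), using no classical axioms:

```lean
-- ¬¬(A ∨ ¬A) is provable constructively: from a hypothetical refutation h
-- of (A ∨ ¬A) we build ¬A, feed it back to h, and obtain ⊥.
theorem nn_lem (A : Prop) : ¬¬(A ∨ ¬A) :=
  fun h => h (Or.inr (fun a => h (Or.inl a)))
```

The proof term is exactly the hypothetical derivation of $\bot$ that the answer describes: the only use of the refutation $h$ is to turn assumed evidence into falsity.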
answer_1
80168
Why does this not prove $P\neq NP$? <sep> Fiorini, Massar, Pokutta, Tiwary and De Wolf (Exponential Lower Bounds for Polytopes in Combinatorial Optimization, Journal of the ACM 62(2):article 17, 2015; PDF, ArXiv) show that any linear program that solves travelling salesman needs super-polynomially many constraints. Suppose $P=NP$; then by 'some' method we can compute the optimal tour explicitly and trivially set up an LP that 'solves' the TSP problem. So $P=NP$ implies that TSP has a poly-size LP formulation. The contrapositive is that if TSP has no poly-size LP formulation, then $P\neq NP$. This paper shows TSP needs super-polynomially many constraints. So why doesn't this show that $P\neq NP$?
What Fiorini et al. show is the following: <blockquote> The TSP polytope $P_n$ over $n$ points is a polytope in $\binom{n}{2}$ dimensions whose vertices correspond to all Hamiltonian cycles in $K_n$ (the complete graph on $n$ vertices). (That is, it is the convex hull of the indicator vectors of all Hamiltonian cycles.) Suppose that $X_n$ is a polytope whose projection over the first $\binom{n}{2}$ dimensions is $P_n$, and let $d_n$ be the number of constraints needed to define $X_n$ (i.e., the number of facets of codimension 1). Then $d_n \geq f(n)$ for some function $f(n) = 2^{\Omega(\sqrt{n})}$. </blockquote> In other words, they show that TSP cannot be solved using LPs in one particular way. There could be some other way of using LPs to solve TSP which isn't ruled out by their result. For example, perhaps you could use iterative rounding to solve TSP, at each step solving an LP. This is consistent with the result of Fiorini et al. The method in your argument is likewise not ruled out by Fiorini et al.
What you're proposing isn't "a linear program for TSP", so it doesn't come into the scope of the proof. You've observed that, if $\mathrm{P=NP}$, then TSP can be reduced to polynomial-sized linear programs. You're using a polynomial-time Turing machine to perform a slightly more complicated version of the following reduction: if the input graph $G$ has a tour of length at most $\ell$, then output the program $x>1$; otherwise, output the program $x>1$ and $x<1$. A linear program that solves TSP is one whose only inputs are variables $x_{uv}$ giving the weight of every edge in the graph and $\ell$, giving the target distance. These variables must be instantiated with exactly the TSP instance you're trying to solve (not with some graph produced by a reduction) and the LP must output a valid tour if one exists. Fiorini et al. prove that any such LP must have super-polynomially many constraints.
answer_1
22828
Clustering with cosine similarity <sep> I have a large data set and a cosine similarity matrix between its items. I would like to cluster them using cosine similarity, putting similar objects together without needing to specify beforehand the number of clusters I expect. I read the sklearn documentation of DBSCAN and Affinity Propagation, both of which require a distance matrix (not a cosine similarity matrix). Really, I'm just looking for any algorithm that doesn't require a) a distance metric and b) a pre-specified number of clusters. Does anyone know of an algorithm that would do that?
All clustering methods use a distance metric of some sort, and remember that a distance is essentially a dissimilarity measure. So if you normalize your similarity between 0 and 1, your distance is simply 1 - similarity. As for algorithms that do not require a number of clusters to be specified, there are of course hierarchical clustering techniques, which essentially build a tree-like structure that you can "cut" wherever you please (you can use some performance metrics to do that automatically). X-means is a version of K-means which tries a certain number of values of K and picks the one that maximizes some evaluation function. Mean shift also "finds" a natural number of clusters, but is sensitive to other parameters such as the bandwidth, for instance.
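The "1 - similarity" conversion plus a threshold-cut hierarchical clustering can be sketched with scipy; the 3x3 cosine-similarity matrix below is made up for illustration, and no cluster count is given, only a distance threshold for cutting the tree:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

S = np.array([[1.0, 0.9, 0.1],       # toy cosine-similarity matrix
              [0.9, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
D = 1.0 - S                          # dissimilarity = 1 - similarity
# linkage wants a condensed distance vector, not the square matrix
Z = linkage(squareform(D, checks=False), method="average")
labels = fcluster(Z, t=0.5, criterion="distance")
print(labels)                        # items 0 and 1 share a cluster; item 2 is alone
```

Items 0 and 1 merge at distance 0.1, and that pair only merges with item 2 at average distance 0.85, so cutting at 0.5 yields two clusters.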
I'd use scipy's hierarchical clustering (with sklearn for the vectorizing) <code>from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer from scipy.cluster import hierarchy # Vectorizing X = CountVectorizer().fit_transform(docs) X = TfidfTransformer().fit_transform(X) # Clustering X = X.todense() threshold = 0.1 Z = hierarchy.linkage(X, "average", metric="cosine") C = hierarchy.fcluster(Z, threshold, criterion="distance") </code> <code>C</code> is your clustering of the documents <code>docs</code>. You can use other metrics instead of <code>cosine</code>, and use a different threshold than <code>0.1</code>.
answer_2
102620
Query the definition of a materialized view in Postgres <sep> I'm wondering how to query the definition of a materialized view in Postgres. For reference, what I hoped to do is very similar to what you can do for a regular view: <code>SELECT * FROM information_schema.views WHERE table_name = 'some_view'; </code> which gives you the following columns: <code>table_catalog table_schema table_name view_definition check_option is_updatable is_insertable_into is_trigger_updatable is_trigger_deletable is_trigger_insertable_into </code> Is this possible for materialized views? From my research so far, it appears that materialized views are deliberately excluded from information_schema, because <blockquote> The information_schema can only show objects that exist in the SQL standard. </blockquote> (http://www.postgresql.org/message-id/3794.1412980686@sss.pgh.pa.us) Since they appear to be entirely excluded from information_schema, I'm not sure how to go about this, but what I'd like to do is twofold: Query whether a particular materialized view exists. (So far the only way I've found to do this is to try creating a mat view with the same name and see if it blows up.) And then query the definition of the materialized view (similar to the <code>view_definition</code> column on <code>information_schema.views</code>).
Turns out this wasn't as complicated as I thought! (With just a little knowledge of pg_catalog...) Part 1: Query whether a materialized view exists: <code>SELECT count(*) > 0 FROM pg_catalog.pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace WHERE c.relkind = 'm' AND n.nspname = 'some_schema' AND c.relname = 'some_mat_view'; </code> Nice and easy. Part 2: Query the definition of a materialized view: In order to come up with a query to get the definition of the mat view, I first had to look up the definition of the <code>information_schema.views</code> view by running: <code>SELECT view_definition FROM information_schema.views WHERE table_schema = 'information_schema' AND table_name = 'views'; </code> Then I copied out the query and changed <code>c.relkind = 'v'::"char"</code> to <code>c.relkind = 'm'::"char"</code> in order to get mat views (instead of regular views). See the full query here: http://pastebin.com/p60xwfes At this point you could pretty easily add <code>AND c.relname = 'some_mat_view'</code> and run it to get the definition of <code>some_mat_view</code>. But you'd still have to do this all over again next time you want to look up the definition of a mat view... Bonus: Create a view to make this easier I opted to create a new view to make it easier to look up mat view definitions in the future. I basically just added <code>CREATE VIEW materialized_views AS</code> to the beginning of the query linked above to create the new view, and now I can query it like so: <code>SELECT * FROM materialized_views WHERE table_schema = 'some_schema' AND table_name = 'some_mat_view'; </code> Much better! I can also use this view to easily query whether a materialized view exists by changing <code>*</code> to <code>count(*) > 0</code>. Disclaimer: I don't know if the other columns in the query results are entirely correct, since materialized views are fundamentally different from standard views (I think they're right). But this does at least query the <code>table_schema</code>, <code>table_name</code> and <code>view_definition</code> correctly.
Looks like in 9.3 and up you can do: <code>select * from pg_matviews; select * from pg_matviews where matviewname = 'view_name'; </code> More info found here: https://stackoverflow.com/questions/29297296/postgres-see-query-used-to-create-materialized-view
answer_2
52244
A way to reference the ID in a multi-insert transaction? (postgres) <sep> Assuming the column "entity.eid" is auto-incrementing, I want to be able to reference the auto-increment value assigned to it later in the same transaction. The way I have been doing this is by doing multiple transactions, which I think is not optimal. <code>START TRANSACTION; INSERT INTO entity ...; INSERT INTO t2 (eid, ...) VALUES (?NEW EID REF HERE?, ...), (...), (...); COMMIT; </code>
You don't specify your Postgresql version, but if you are using 8.4+ you can use the <code>RETURNING</code> clause to return the id (or any column) that just got inserted. Docs: http://www.postgresql.org/docs/current/static/sql-insert.html Example: <code>INSERT INTO entity (...) VALUES (...) RETURNING eid;</code> If you are using Postgresql version 9.1+ you can also use <code>WITH</code> clauses (aka <code>Common Table Expressions</code>) to do the insert in one clause, then reference the values from the <code>RETURNING</code> clause to perform more actions (the WITH clauses can chain together). Docs on <code>WITH</code> clause: http://www.postgresql.org/docs/current/static/queries-with.html
There are different ways to do this. The easiest is to use the <code>lastval()</code> function, which returns the value most recently generated by a <code>nextval</code> call in the current session: <code>START TRANSACTION; INSERT INTO entity ...; INSERT INTO t2 (eid, ...) VALUES (lastval(), ...), (...), (...); COMMIT; </code> If you know the name of the sequence for the <code>entity</code> table you could also use the <code>currval</code> function: <code>START TRANSACTION; INSERT INTO entity ...; INSERT INTO t2 (eid, ...) VALUES (currval('entity_eid_seq'), ...), (...), (...); COMMIT; </code> This can be written in a more general way by using the <code>pg_get_serial_sequence()</code> function, avoiding hardcoding the sequence name: <code>START TRANSACTION; INSERT INTO entity ...; INSERT INTO t2 (eid, ...) VALUES (currval(pg_get_serial_sequence('entity', 'eid')), ...), (...); COMMIT; </code> For more details, please see the manual: http://www.postgresql.org/docs/current/static/functions-sequence.html
answer_2
215039
Why can I not assemble conduit around cable, but must pull it after assembly? <sep> I have seen a number of answers here from our esteemed electricians about running wiring through conduit, and many of them include a warning similar to: <blockquote> You're not allowed to piece the conduit together over the wires - they must be pulled only after the conduit is complete </blockquote> While I understand that it must be done this way because That's What The Code Says, my question is why is this specified in code? I understand that code is generally written based on objective laboratory testing and/or real-world experience. What real-world experiences could have led to this being codified? What is it about "pull wires through a piece of conduit, attach the conduit to the source box, pull wires through an elbow, secure elbow to existing conduit, pull wires through another piece of conduit, secure conduit to elbow, lather, rinse, repeat" that is inherently and/or potentially dangerous?
Guaranteed Reusability If you run the wire as you are putting together the conduit, there is a possibility, unless you are truly careful about all the details, that you could end up in a situation where your initial set of wires is perfectly fine, but pulling them out to replace them (or, more likely, pulling in new wires; "out" is the easy direction) will run into unexpected problems. If you assemble everything first, then the first time you pull wires through you will find and fix any problems. Plus you will be more careful (especially if you don't do this kind of thing every day) to play by the rules so that you will be able to pull that first set of wires without a problem. Remember, conduit serves three different functions: Physical protection Grounding (for metal conduit) Ease of use - i.e., add or replace wires as needed The first two, which obviously are the "safety" issues, will be the same whether you do conduit-then-wire or conduit-with-wire. But for the 3rd, it can make a big difference.
Far more opportunity to damage the wire insulation - either by mechanical damage from the exposed ends of the conduit/fittings being slid along the wires (metal or PVC), or from cement/primer if PVC. An assembled conduit (or duct) will have all the various ends joined (and de-burred to remove any internal sharp edges, if properly assembled). Wires or cables slide along and don't get torn up. In addition, there's increased opportunity for damage from the wires being draped around the workspace unprotected while you have them laid out but the conduit is not assembled.
answer_1
1881
Camera calibration/pin hole camera model and working out 3d position <sep> I have a calibrated camera and have the intrinsic parameters. I also have the extrinsic parameters relative to a point (the world origin) on a planar surface in the real world. This point I have set as the origin in the real world coordinates [0,0,0] with a normal of [0,0,1]. From these extrinsic parameters I can work out the camera position and rotation in the world plane 3d coordinates using this here: http://en.wikipedia.org/wiki/Camera_resectioning Now I have a second point which I have extracted the image coordinates for [x, y]. How do I now get the 3d position of this point in the world coordinate system? I think the intuition here is that I have to trace a ray that goes from the optical center of the camera (which I now have the 3D position for as described above), through the image plane [x,y] of the camera and then through my real world plane which I defined at the top. Now I can intersect a world coordinate 3d ray with a plane as I know normal and point on that plane. What I don't get is how I find out the 3d position and direction when it leaves the image plane through a pixel. It's the transformation through different coordinate systems that is confusing me.
If you have the extrinsics then it is very easy. Having the extrinsics is the same as having the "camera pose", and the same as having the homography. Check this post on stackoverflow. You have the extrinsics, also called the camera pose, which is described as a translation and a rotation: $\displaystyle Pose =\begin{bmatrix}R|t \end{bmatrix} = \begin{bmatrix}R_{11} &R_{12}&R_{13}&t_x\\R_{21}&R_{22}&R_{23}&t_y\\R_{31}&R_{32}&R_{33}&t_z \end{bmatrix} $ You can get the homography from the pose this way: $\displaystyle H = \frac{1}{t_z}\begin{bmatrix}{R_{1x}}&{R_{2x}}&{t_x}\\{R_{1y}}&{R_{2y}}&{t_y}\\{R_{1z}}&{R_{2z}}&{t_z}\end{bmatrix}$ Then you can map your 2D points to the corresponding 3D points by multiplying the homography by the points: $p_{2D}=\begin{bmatrix}x &y &1\end{bmatrix}\quad$ add $\quad z=1\quad$ to make them homogeneous $p_{3D}=H*p_{2D} $ $p= p / p(z)\quad$ normalize the points
You have two options: use back projection, or a projection between two planes (homography). With back projection you take the pseudo-inverse of your camera matrix $P$ and multiply the result with the homogeneous presentation of your image point: $$ P = K\begin{bmatrix}R & -R\textbf{C}\end{bmatrix} \\ \textbf{X}_{reprojected} = P^+\textbf{x} $$ Now you have a 3D line which travels through the camera center $\textbf{C}$ and the point $\textbf{X}$. If you want, you can convert this to a presentation that is easier to work with, for example one point and a direction vector (remember to normalize the homogeneous coordinates $\textbf{V} = \omega\begin{bmatrix}X & Y & Z & 1\end{bmatrix}^T$ such that the scale factor $\omega=1$ before the actual calculations): $$ \textbf{u} = \textbf{X}_{reprojected}-\textbf{C} \\ \textbf{v} = \frac{ \textbf{u} }{\|\textbf{u}\|} \\ \textbf{L}(t) = \textbf{C} + t\textbf{v} $$ If you have a plane $\Pi = \begin{bmatrix}\pi_1 & \pi_2 & \pi_3 & \pi_4\end{bmatrix}^T, \pi_1X + \pi_2Y + \pi_3Z + \pi_4 = 0$, you can solve the equation $\Pi^T\textbf{L}(t) = 0$ for $t$. If you decide to use a homography, you need to compute the $3\times3$ homography matrix $H$, which is defined as the projection between the imaged plane and the plane of the camera sensor: $$ \textbf{X}_{plane} = \begin{bmatrix}X & Y & 0 & 1\end{bmatrix}^T \\ \textbf{x} = P\textbf{X}_{plane} = H\begin{bmatrix}X & Y & 1\end{bmatrix}^T $$ Now if you know $\textbf{x}$: $$ \textbf{X}_{plane} = H^{-1}\textbf{x} $$ If you did not compute $H$ while calibrating the camera (probably using the direct linear transformation, DLT), you can use the following formulation: $$ H = R + \frac{1}{d}\textbf{T}\textbf{N}^T $$ where $d$ is the distance of the camera from the plane and $\textbf{T} = -R\textbf{C}$. (Ma, Soatto, Kosecka, Sastry - An Invitation to 3-D Vision: From Images to Geometric Models, p. 132)
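The ray-plane route can be sketched numerically. The intrinsics K and the pose below are made-up values (R = identity, camera centre 5 units from the plane z = 0), so the numbers are only illustrative:

```python
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed toy intrinsics (f=800, centre 320,240)
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # assumed pose: no rotation...
C = np.array([0.0, 0.0, -5.0])         # ...camera centre 5 units from the plane z = 0

def backproject_to_plane(u, v):
    # Viewing-ray direction of pixel (u, v), rotated into the world frame
    d = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Intersect the ray C + t*d with the plane z = 0:  C_z + t*d_z = 0
    t = -C[2] / d[2]
    return C + t * d

print(backproject_to_plane(320.0, 240.0))   # principal point hits the plane origin
```

The principal point maps to (0, 0, 0) and pixels offset from it land on the plane proportionally (80 px at f = 800 and depth 5 gives a 0.5-unit offset), which is a quick sanity check of the geometry.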
answer_1
34936
Time domain maximum from frequency domain data? <sep> Is it possible to calculate the maximum value of a time-domain signal from frequency-domain representation without performing an inverse transform?
Suppose that Alice has a vector $\mathrm x \in \mathbb R^n$. She computes the DFT of $\mathrm x$ $$\mathrm y := \mathrm F \mathrm x \in \mathbb C^n$$ where $\mathrm F \in \mathbb C^{n \times n}$ is a Fourier matrix. Alice then tells Bob what $\mathrm y$ is. Since the inverse of the Fourier matrix is $\mathrm F^{-1} = \frac 1n \, \mathrm F^*$, Bob can recover $\mathrm x$ via $$\mathrm x = \frac 1n \, \mathrm F^* \mathrm y$$ and then compute $\| \mathrm x \|_{\infty}$ to find the maximum absolute value of the entries of $\mathrm x$. What if computing matrix inverses and Hermitian transposes is not allowed? Bob can then write $\mathrm F$ and $\mathrm y$ as follows $$\mathrm F = \mathrm F_{\text{re}} + i \,\mathrm F_{\text{im}} \qquad \qquad \qquad \mathrm y = \mathrm y_{\text{re}} + i \,\mathrm y_{\text{im}}$$ and, since $\mathrm x \in \mathbb R^n$, the equation $\mathrm F \mathrm x = \mathrm y$ yields two equations over the reals, namely, $\mathrm F_{\text{re}} \, \mathrm x = \mathrm y_{\text{re}}$ and $\mathrm F_{\text{im}} \, \mathrm x = \mathrm y_{\text{im}}$. Bob can then solve the following linear program in $t \in \mathbb R$ and $\mathrm x \in \mathbb R^n$ $$\begin{array}{ll} \text{minimize} & t\\ \text{subject to} & - t 1_n\leq \mathrm x \leq t 1_n\\ & \begin{bmatrix} \mathrm F_{\text{re}}\\ \mathrm F_{\text{im}}\end{bmatrix} \mathrm x = \begin{bmatrix} \mathrm y_{\text{re}}\\ \mathrm y_{\text{im}}\end{bmatrix}\end{array}$$ which can be rewritten as follows $$\begin{array}{ll} \text{minimize} & \begin{bmatrix} 1\\ 0_n\end{bmatrix}^{\top} \begin{bmatrix} t\\ \mathrm x \end{bmatrix}\\ \text{subject to} & \begin{bmatrix} -1_n & \mathrm I_n\\ -1_n & -\mathrm I_n\end{bmatrix} \begin{bmatrix} t\\ \mathrm x \end{bmatrix} \leq \begin{bmatrix} 0_n\\ 0_n\end{bmatrix}\\ & \begin{bmatrix} 0_n & \mathrm F_{\text{re}}\\ 0_n & \mathrm F_{\text{im}}\end{bmatrix} \begin{bmatrix} t\\ \mathrm x \end{bmatrix} = \begin{bmatrix} \mathrm y_{\text{re}}\\ \mathrm y_{\text{im}}\end{bmatrix}\end{array}$$ and not only recover $\mathrm x$ but also obtain $t = \| \mathrm x \|_{\infty}$. However, is solving a linear program cheaper than computing a Hermitian transpose? MATLAB code The following MATLAB script <code>n = 8; % build n x n Fourier matrix F = dftmtx(n); % ----- % Alice % ----- % build vector x x = randn(n,1); % compute DFT of x y = F * x; % --- % Bob % --- % solve linear program c = eye(n+1,1); A_in = [-ones(n,1), eye(n); -ones(n,1),-eye(n)]; b_in = zeros(2*n,1); A_eq = [zeros(n,1), real(F); zeros(n,1), imag(F)]; b_eq = [real(y); imag(y)]; solution = linprog(c, A_in, b_in, A_eq, b_eq); % extract t and x t = solution(1); x_rec = solution(2:n+1); % check results disp('t = '); disp(t); disp('Infinity norm of x = '); disp(norm(x,inf)); disp('Reconstruction error = '); disp(x_rec - x); </code> produces the output <code>Optimization terminated. t = 2.2023 Infinity norm of x = 2.2023 Reconstruction error = 1.0e-013 * 0.0910 0.0711 0.0167 -0.1077 0.1049 0.0322 0.1130 0.2776 </code> The original vector is <code>>> x x = -1.1878 -2.2023 0.9863 -0.5186 0.3274 0.2341 0.0215 -1.0039 </code>
It's generally not possible to compute the exact maximum value, but you can compute a bound on the maximum value. Assuming your data are discrete-time, and you're using the discrete Fourier transform (DFT), you have the following relation between time domain and frequency domain: $$x[n]=\frac{1}{N}\sum_{k=0}^{N-1}X[k]e^{j2\pi kn/N}\tag{1}$$ where $N$ is the DFT length. From $(1)$ we can derive the following bound: $$|x[n]|=\frac{1}{N}\left|\sum_{k=0}^{N-1}X[k]e^{j2\pi kn/N}\right|\le\frac{1}{N}\sum_{k=0}^{N-1}\left|X[k]\right|\left| e^{j2\pi kn/N}\right|=\frac{1}{N}\sum_{k=0}^{N-1}\left|X[k]\right|\tag{2}$$ For other types of (Fourier) transforms (DTFT, CTFT), similar bounds can be derived in the same way.
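The bound is easy to sanity-check numerically; a sketch with an arbitrary random signal:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)        # arbitrary real time-domain signal
X = np.fft.fft(x)                  # its frequency-domain data
bound = np.abs(X).sum() / len(x)   # (1/N) * sum over k of |X[k]|
# The time-domain peak never exceeds the bound (triangle inequality):
print(np.abs(x).max() <= bound)    # True
```

Equality would require all terms $X[k]e^{j2\pi kn/N}$ to align in phase at some $n$, so for typical signals the bound is loose but valid.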
answer_1
1590
Probability distribution of windowed cross-correlation <sep> This question is in the context of time-delay estimation. Say I have a stationary Gaussian stochastic process $g$, and I know its autocorrelation function $R_g(\tau)$. To do time-delay estimation, I'm computing a windowed cross-correlation between $g$ and a delayed version of it. In other words, $$ g_1(x) = g(x-D) \\ \phi(\tau) = \int_{-T/2}^{T/2} g(x)\, g_1(x + \tau)\, \mathrm dx $$ and I'm going to determine the delay by finding the maximum of $\phi$. My question is: is it possible to get an expression for the probability distribution of $\phi$?
Expanding on my comment, $\{\phi(\tau)\}$ is a non-stationary non-Gaussian random process, and I doubt that there is any simple answer (or even a rather complicated one) for the probability density function of the random variable $\phi(\tau)$ for an arbitrary value of $\tau$. But the (time-varying) mean function of the process is easy to calculate. We have $$\begin{align*} E[\phi(\tau)] &= E\left[\int_{-T/2}^{T/2} g(t)g(t-D+\tau)\, \mathrm dt\right]\\ &= \int_{-T/2}^{T/2} E[g(t)g(t-D+\tau)]\, \mathrm dt\\ &= \int_{-T/2}^{T/2} R_g(\tau-D)\, \mathrm dt\\ &= T\cdot R_g(\tau-D) \end{align*}$$ where $R_g(\cdot)$ is the autocorrelation function of the input process $\{g(t)\}$. Note that it is not necessary that the input process be Gaussian for this to hold; wide-sense-stationarity is enough. Since autocorrelation functions have a peak at the origin, we see that $\phi(D)$ has the largest mean value. Also, the mean value decays away symmetrically about $D$: that is, $E[\phi(D+\epsilon)] = E[\phi(D-\epsilon)]$ and $$|E[\phi(\tau)]| \leq E[\phi(D)] = T\cdot R_g(0).$$ Finding the variance of $\phi(\tau)$ is a much messier calculation that may or may not be included in the paper cited by Charna (which is behind a paywall).
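The mean-function result $E[\phi(\tau)] = T\cdot R_g(\tau-D)$ can be checked with a discrete-time Monte-Carlo sketch. White Gaussian noise stands in for $g$ here (so $R_g$ is nonzero only at lag 0), and the averaged correlation should peak at $\tau = D$ with value close to $T$:

```python
import numpy as np

rng = np.random.default_rng(1)
T, D, trials = 64, 3, 2000                   # window length, true delay, runs
taus = np.arange(8)
acc = np.zeros(len(taus))
for _ in range(trials):
    g = rng.standard_normal(T + len(taus))   # white noise: R_g(0)=1, else 0
    g1 = np.roll(g, D)                       # delayed copy, g1[t] = g[t-D] (circular)
    for i, tau in enumerate(taus):
        acc[i] += np.dot(g[:T], np.roll(g1, -tau)[:T])
acc /= trials                                # Monte-Carlo estimate of E[phi(tau)]
print(int(np.argmax(acc)))                   # 3: the mean of phi peaks at tau = D
```

Individual realizations of $\phi(\tau)$ are very noisy (which is the point of the question), but the averaged curve reproduces the $T\cdot R_g(\tau-D)$ shape.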
I do not know the answer to your question, but perhaps this paper can help. I realize that you are not using single bit random waveforms, but the formulation on the distribution they calculated is fairly through. "Probability distribution of the crosscorrelation function of finite-duration single-bit random waveforms" Abstract The detection errors of a digital crosscorrelator, utilizing severely clipped, bandlimited Gaussian waveforms are investigated, and the probability distribution of the correlator output due to finite-duration waveforms and distortion by wideband Gaussian noise is derived and compared with experimental results. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4235873
answer_1
56727
What is the purpose of a resistor in the feedback path of a unity gain buffer? <sep> I often see unity-gain followers with a resistor in the feedback path. For an ideal op-amp, of course, there is no current into the input, and this resistor does nothing. What is its effect with a real op-amp, and how do I choose its value? What does R1 do in this circuit?
Here's an excerpt from the OP27 data sheet, showing that the answer is more involved than equalizing the impedances seen by the two inputs: And another example, from the AD797 data sheet:
One reason the feedback resistor may be used is to match the source impedance at Vin. Real op-amps have input bias current and input offset current. Take for example this representative circuit: Here, I've created a more realistic model of an op-amp by adding current sources which simulate the current flowing into a real op-amp's terminals. The difference between the two input currents is the input offset current. The voltage at the positive input terminal actually is: \begin{equation} Vin_{actual} = Vin - I_1 \cdot R_1 \end{equation} Through ideal op-amp action, the negative input terminal voltage is the same. We can then calculate the resultant output voltage: \begin{equation} Vout = Vin_{actual} + I_2 \cdot R_2\\ Vout = Vin - I_1 \cdot R_1 + I_2 \cdot R_2\\ \end{equation} By closely matching R1 and R2, the effect of input bias current is effectively nulled. Note that this doesn't solve input offset current, though. To solve both problems, ensure that the resistances of R1 and R2 are both small. This will solve both of the issues of input offset current and input bias current. With a small enough R1 there may not be any need for an actual discrete matched R2, though you will of course get better results if there is one.
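The nulling effect of matched resistors falls straight out of the Vout expression above; a quick numeric sketch with made-up bias-current values (an 80 nA average bias with a 10 nA offset, both purely illustrative):

```python
I1, I2 = 85e-9, 75e-9        # assumed input bias currents: avg Ib = 80 nA, Ios = 10 nA
R1 = R2 = 10e3               # matched source and feedback resistances (ohms)
verror = I2 * R2 - I1 * R1   # output error term from Vout = Vin - I1*R1 + I2*R2
print(verror)                # about -1e-4 V: only Ios*R remains, the 80 nA Ib cancels
```

Without matching (say R2 = 0), the full I1*R1 = 0.85 mV would appear at the output instead of the 0.1 mV left by the offset current, which is why the small residual then motivates keeping both resistances low as well.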
answer_1
32138
How do we test reliability and quality so as to minimize the risk of board failure in the field? <sep> We are developing a smart cable for a customer. The market potential is hundreds of thousands of units. The vendor who designs and supplies the boards (with firmware) that will be built into the cables is making prototypes now. We can easily test these for correct functionality, but as far as long-term reliability and quality, I'm not sure how best to reduce the risk of systemic or high incidence field failure, which in those quantities would be an absolute nightmare scenario for a small company like ours. How do we test prototypes and mass production first article samples to minimize such reliability and quality risk as much as possible?
There are several different ways to approach this problem. Typically one does testing where the device is operated under stressful conditions to reduce its lifetime. This can include elevated temperature, temperature cycling, vibration, humidity, etc. Sometimes the test protocol runs to failure. The failure may be repaired and the test resumed until the next failure, etc. Often many samples are run at the same time. For more information see: http://en.wikipedia.org/wiki/Highly_accelerated_life_test There are many companies which specialize in this type of testing service. I suggest that you contact one of them.
The first step is common sense. Does it look robustly designed? Are there obvious mechanical stress points? Is there proper strain relief wherever something flexes? Are all the datasheet limits carefully adhered to in all possible corners of normal operation with some reasonable margin? Does the design handle the obvious expected abuse? This is both mechanical, like someone yanking on it or stepping on any part of it, and electrical, like ESD shocks. Get someone who has done this before and has experience with what fails. This might actually be two someones, one for mechanical and the other for electrical. Take a few 10s of these things and abuse them. Do some deliberate stress tests with mechanical abuse, temperature and humidity cycling, ESD zapping, etc. Some of these will be beyond spec. The point is you want a bunch to fail so that you can see if there is a common trend to how they fail. Make that part more robust, rinse, and repeat. You also have to test for the things that it didn't occur to you to test for. Give some to the least technically skilled people you know. You want people that don't know what they're not supposed to do with a cable. Let a few four year olds play with them, and don't try to tell them what not to do or limit what they do. Assume the four year olds are more imaginative than you are. You can decide later that jumping rope with the cable or playing tug of war with the dog through a muddy puddle aren't things you are going to protect against, but you might uncover some interesting failure mechanisms anyway. And maybe a dog chewing on it isn't all that out of line compared to it lying on the floor and getting stepped on regularly. Don't expect people to treat your smart cable any better than an extension cord.
answer_1
7624
Can an FPGA design be mostly (or completely) asynchronous? <sep> We had a very short FPGA/Verilog course at university (5 years ago), and we always used clocks everywhere. I am now starting out with FPGAs again as a hobby, and I can't help but wonder about those clocks. Are they absolutely required, or can an FPGA-based design be completely asynchronous? Can one build a complex bunch of logic and have stuff ripple through it as fast as it can? I realise that there are a whole lot of gotchas with this, like knowing when the signal has propagated through all parts of the circuit and the output has stabilised. That's beside the point. It's not that I want to actually build a design that's entirely asynchronous, but just to improve my understanding of the capabilities. To my beginner eye, it appears that the only construct that absolutely requires a clock is a <code>reg</code>, and my understanding is that a typical FPGA (say, a Cyclone II) will have its flip-flops pre-wired to specific clock signals. Is this correct? Are there any other implicit clocks like this and can they typically be manually driven by the design?
A short answer would be: yes; a longer answer would be: it is not worth your time. An FPGA itself can run a completely asynchronous design no problem. The result you get is the problem since timing through any FPGA is not very predictable. The bigger problem is the fact that your timing and resultant design will almost definitely vary between different place and route sessions. You can put in constraints on individual asynchronous paths making sure that they do not take too long, but I'm not quite sure that you can specify a minimum delay. In the end it means that your design will be unpredictable and potentially completely variable with even a slight design change. You'd have to look through the entire timing report every time you change anything at all just to make sure that it would still work. On the other hand, if the design is synchronous, you just look for a pass or fail at the end of place and route (assuming your constraints are setup properly, which doesn't take long at all). In practice people aim for completely synchronous designs but if you need to simply buffer or invert a signal, you don't need to go through a flip flop as long as you constrain it properly. Hope this clears it up a bit.
Yes. If you have no process type constructs then it shouldn't do things like inferring registers. There will be things like onboard memory that require clocks, although if you really want to you could probably generate these asynchronously.
answer_1
24723
Any drawbacks to "low temp" lead-free solder paste? <sep> I am about to try my first "reflow skillet" soldering job, and as I look at the available types of solder paste I see there are lead-free pastes with much lower melting temperatures than others. For example, this one from ChipQuik. The advantages seem obvious, but somehow the marketing literature does not mention any drawbacks to this type of solder paste. In the quantities I would order the price seems about the same. Is there a reason this Sn42Bi58 formula hasn't become standard?
42/58 Tin / Bismuth is not unknown as a low temperature solder but has issues. While widely used for some very serious applications (see below) it is not a mainstream industry contender for general use. It is not obvious why not given its substantial use by eg IBM. Identical to the Bi58Sn42 solder you cite is: Indalloy 281, Indalloy 138, Cerrothru. Reasonable shear strength and fatigue properties. Combination with lead-tin solder may dramatically lower melting point and lead to joint failure. Low-temperature eutectic solder with high strength. Particularly strong, very brittle. Used extensively in through-hole technology assemblies in IBM mainframe computers where low soldering temperature was required. Can be used as a coating of copper particles to facilitate their bonding under pressure/heat and creating a conductive metallurgical joint. Sensitive to shear rate. Good for electronics. Used in thermoelectric applications. Good thermal fatigue performance. Established history of use. Expands slightly on casting, then undergoes very low further shrinkage or expansion, unlike many other low-temperature alloys which continue changing dimensions for some hours after solidification. Above attributes from the fabulous Wikipedia - link below. According to other references it has low thermal conductivity, low electrical conductivity, thermal embrittlement issues and potential for mechanical embrittlement. SO - it MAY work for you, but I'd be very very very cautious about relying on it without very substantial testing in a wide range of applications. It is well enough known, has obvious low temperature advantages, has been widely used in some niche applications (eg IBM mainframes) and yet has not been welcomed with open arms by industry in general, suggesting that its disadvantages outweigh its advantages except perhaps in areas where the low temperature aspect is overwhelmingly valuable.
Note that the chart below suggests that flux cored versions seem to be specifically unavailable either as wire or as preforms. Comparison chart: The above chart is from this superb report which however does not provide detailed comment on the above issues. Wikipedia notes Bismuth significantly lowers the melting point and improves wettability. In presence of sufficient lead and tin, bismuth forms crystals of Sn16Pb32Bi52 with melting point of only 95 C, which diffuses along the grain boundaries and may cause a joint failure at relatively low temperatures. A high-power part pre-tinned with an alloy of lead can therefore desolder under load when soldered with a bismuth-containing solder. Such joints are also prone to cracking. Alloys with more than 47% Bi expand upon cooling, which may be used to offset thermal expansion mismatch stresses. Retards growth of tin whiskers. Relatively expensive, limited availability. Motorola's patented Indalloy 282 is Bi57Sn42Ag1 . Wikipedia says Indalloy 282. Addition of silver improves mechanical strength. Established history of use. Good thermal fatigue performance. Patented by Motorola. Useful lead free solder report - 1995 - nothing to add on above subject.
The only thing that springs to mind is that some components may get hotter than the solder and melt it? It'd be quite rare for that to happen, but supposing you had a component which used some pins as a heatsink (some use ground pins as this), and it got hotter than the solder could cope with - the solder would melt, the connection would break down, the heat sink would fail, and the component would fry. - This is just my thoughts, so is probably completely wrong ;)
answer_1
5671
Is it worth getting a function generator? <sep> Is a function generator necessary for everyday lab use, or is it special purpose equipment? That is, does it have similar utility to an oscilloscope, or multimeter - would you use it regularly enough to justify its cost?
If you check eBay, you can find quite a few between cheap and $50. Frankly, I'd do that and put your extra money into an oscilloscope.
In the audio frequency range you can use your sound card. Use Google to find the software.
answer_2
189851
Is it wrong/illogical to say "... twisted open the door"? <sep> Example sentence: <blockquote> I twisted open the door. </blockquote> Some people argue that you can't twist a door. You twist a doorknob. However, some people have used this construction. What's the real answer? Or there isn't one?
To me it sounds very odd. As you said, twist a doorknob is good here.
TL;DR "twisted" is probably wrong and "wrenched" is probably right. To me it depends on how strong the subject of the sentence is. "Twisting the door open" could theoretically be an appropriate action if the subject is literally grabbing the door/doorframe and twisting so hard that the door is torn off of its hinges. In certain fantasy or sci fi contexts, this could be accurate. Given the mechanical difficulty of grabbing the edges of a door and forcefully twisting, this seems unlikely, especially when comparatively easier options like kicking a door down exist. If this is the case, an effective writer should provide some more illustration around the action itself like: "The enraged troll grabbed the edges of the door and twisted with such force that the hinges were torn from the frame." As other posters have said, the best word for more standard contexts would be "wrenched", which could easily be mistranslated or erroneously taken from a thesaurus. In many contexts, "wrench" and "twist" are synonymous, and a "wrench" is a tool used for twisting things.
answer_2
123174
She said shyly some things are not for sharing <sep> In our bathroom there is a bottle of shower gel (see picture), and whenever I see it I wonder if there is something wrong with me or with the text. The text says: <blockquote> Maybe I won[']t tell you she said shyly some things are not for sharing. </blockquote> What is meant here? In my eyes there are two possibilities: "Maybe I won't tell you she said shyly [that] some things are not for sharing." In this case, the word that can be omitted, so the text is grammatically correct, but does it make sense in this context? "Maybe I won't tell you she said shyly some things [that] are not for sharing." That would make more sense, but my grammatical feeling tells me that the word that cannot be omitted in this case. I am not a native speaker, so I might be totally misled by my grammatical feeling, so please comment...
If I rewrite this slightly to add proper punctuation, then it would sound like: <blockquote> ..."Maybe I won't tell you," she said shyly. "Some things are not for sharing" </blockquote> To put this in context, earlier she seemed to be describing a thought she was having and was about to explain it, but decided not to. <blockquote> "My moment of sweet calm... Just to think with carefree abandon about... Maybe I won't tell you," she said shyly. "Some things are not for sharing" </blockquote> In the first part of this, she is starting to describe her thoughts (or maybe just how she generally thinks): Just to think with carefree abandon about... Then her thoughts trail off, and she says, "Maybe I won't tell you, some things are not for sharing". The hidden subject in this is where their marketing strategy comes into play: the subject of her thoughts is 'that vanilla moment', which describes how the product makes you feel. If you read the script as a story it is much easier to understand.
This sentence structure seems cluttered. The phrase and clauses run on one another without sufficient punctuation. Also, there is a smidgen of poor grammatical construction. However, is it? Let's look at the whole sentence again. My moment of sweet calm ... just to think with carefree abandon ... maybe I won't tell you she said shyly some things are not for sharing. But a closer look reveals this is the work of creative writing. Whoever wrote this for the company did a good marketing job. Here is what the writer is trying to pass across to consumers (you in this case). With the ellipses, the writer is trying to draw your attention, to tantalise you. And he consummated this by teasing you. Let me try and paraphrase what he meant with the last part of the sentence. Maybe I shouldn't tell you about someone who already used this product, and would rather no one else use this product. You see, it's a marketing strategy!
answer_1
573576
Is there a word or phrase for "promises that can't be kept"? <sep> I know that renege is a word that could suit here. But as I understand, 'renege' describes the failure to keep a promise. But, sometimes, we make promises that we know all too well can't be kept for long. Is there any word or phrase for such promises?
If you make a promise you know you cannot keep, the word for that is lie. I'm not entirely satisfied with this because lie is a broader category; not every lie is a promise, other than in the guarantee (a fact) nuance of promise which appears in I promise you that this is true. However, it's usually clear from context that a reference to some lie is actually about a bad faith promise, rather than some other lie, such as a cover-up of events or failed responsibilities. I suspect that there might not exist a single verb which we can fill in for <blockquote> Bob ____ed that he will return the money </blockquote> where ____ed specifically means lied as he promised. However, note that in an example like this we do not need such a verb, because the complement "that he will return the money" establishes the context that a promise is being made, allowing us to just use the verb to lie: <blockquote> Bob lied that he will return the money. </blockquote>
Somebody who commits or attempts to do something they can't succeed at can be said to have bitten off more than they can chew.
answer_1
89831
How do I interpret the dot product of non-normalized vectors? <sep> I know that if you take the dot-product of two normalized vectors, you get the cosine of the angle between them. But if I take the dot-product of two non-normalized vectors (or one normalized, one not), how can I interpret the resulting scalar?
Others have pointed out how you can use the sign of the dot product to broadly determine the angle between two arbitrary vectors (positive: < 90, zero: = 90, negative: > 90), but there's another useful geometric interpretation if at least one of the vectors is of length 1. If you have one unit vector \$\hat U\$ and one arbitrary vector \$V\$, you can interpret the dot product as the length of the projection of \$V\$ onto \$\hat U\$: Equivalently, \$(\hat U \cdot V)\$ is the length of the component of \$V\$ pointing in the direction of \$\hat U\$. ie. You can break \$V\$ into a sum of two perpendicular vectors, \$V = (\hat U \cdot V) \hat U + P\$, where \$P\$ is some vector perpendicular to \$\hat U\$. This is helpful for rewriting a vector from one coordinate system in terms of a different basis, or for removing/reflecting the component of a vector that's parallel to a particular direction while keeping the perpendicular component intact. (eg. zeroing the component of a velocity that would take an object through a barrier, but allowing it to slide along that barrier, or rebounding it away) I'm not aware of a convenient geometric interpretation of the dot product when both vectors are of arbitrary length (other than using the sign to categorize the angle).
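The projection idea above is easy to make concrete. Here's a minimal Python sketch (my own illustration, not from the answer): it splits an arbitrary vector `V` into the part parallel to a unit vector `U_hat` and the part perpendicular to it, using only the dot product.

```python
# Decompose V into components parallel and perpendicular to a unit vector.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(v, s):
    return [x * s for x in v]

def normalize(v):
    length = math.sqrt(dot(v, v))
    return [x / length for x in v]

# U_hat points along +x; V is an arbitrary vector.
U_hat = normalize([1.0, 0.0])
V = [3.0, 4.0]

# Signed length of V's projection onto U_hat
# (negative if V points away from U_hat).
proj_len = dot(U_hat, V)                  # 3.0

# Split V into parallel + perpendicular parts: V = proj + perp.
proj = scale(U_hat, proj_len)             # [3.0, 0.0]
perp = [v - p for v, p in zip(V, proj)]   # [0.0, 4.0]

# perp is perpendicular to U_hat, so this dot product is zero.
print(proj, perp, dot(U_hat, perp))
```

Zeroing out `proj` and keeping `perp` is exactly the "slide along a barrier" trick the answer mentions.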
If the resulting scalar is 0, the 2 vectors are perpendicular to each other (angle difference 90 degrees). If the resulting scalar is > 0, the angle difference between them is less than 90 degrees. If the resulting scalar is < 0, the 2 vectors are facing opposite directions (or angle difference > 90 degrees). This can be useful in calculating backstabs, for example, or determining which quadrant one vector is in relative to the other.
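The backstab idea at the end can be sketched like this (the function name `is_backstab` and the scenario are hypothetical, not from any particular engine): a negative dot product between the victim's facing direction and the victim-to-attacker direction means the attacker is more than 90 degrees off the victim's facing, i.e. behind them.

```python
# Hypothetical backstab check using only the sign of the dot product.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_backstab(victim_facing, victim_pos, attacker_pos):
    # Direction from the victim to the attacker (not normalized:
    # only the sign of the dot product matters here).
    to_attacker = [a - v for a, v in zip(attacker_pos, victim_pos)]
    # Negative dot product => angle > 90 degrees => attacker is behind.
    return dot(victim_facing, to_attacker) < 0

# Victim faces +x; an attacker at (-2, 0) is directly behind.
print(is_backstab([1.0, 0.0], [0.0, 0.0], [-2.0, 0.0]))  # True
print(is_backstab([1.0, 0.0], [0.0, 0.0], [3.0, 1.0]))   # False
```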
answer_1
kg8jgu
Do planes have super-chargers and/or turbo-chargers like cars can? If not, why? I know nothing about planes beyond the simple rotary engine and I'm curious about this. It seems like they are operating at such a speed and scale that these additions could be perfect additions as long as it was designed to not add more drag and weight than its added worth. Even then, what if they flew at a slightly downward slope from a higher altitude? Or compensate with a design generating more lift? How would they roughly impact speed and fuel efficiency?
I won't beat /u/IsentropicFire for detail, so I'll go for brevity and sideways thinking: A jet engine is basically all turbocharger, and none of the rest of a car engine.
To summarize what's been said about turbos and superchargers so far: the more oxygen you can squeeze into a small space, the more fuel you can burn in that same space. More fuel equals more power. It's almost that simple. If there's not enough oxygen, the fuel doesn't combust completely. This is how some fire extinguishers do their job: blocking oxygen from combusting with the fuel. Same for a water hose, along with cooling everything down. When you burn something, you extract energy from it in the form of heat, which expands the stuff around it. How you *use* that energy is what the rest of your very interesting question is about. Piston engines use the extra oxygen to fly higher and/or lift heavier loads. Early on, we just added more pistons to get more power, but this doesn't solve the altitude issue and adds more weight. Adding a turbocharger and/or supercharger was a weight tradeoff. The propeller must be capable of absorbing the energy produced by the engine and transferring it to the air. We solved that quite nicely in WW2 with forged aluminum props and an automatic variable pitch hub. The latter freed up the pilot's attention for shooting stuff. The German fighter planes had a manual pitch adjustment, which increased the pilot's workload. When jet engines came along, they were used for speed and altitude. Speed came in part because there was no big propeller in front of the engine that needed to be pushed through the air. Altitude came because going fast at altitude pushed more air into a small space. Then someone said, "But wait! There's more!" What if we built a tiny jet engine with the same power output as a piston engine, and used it to spin a big prop? Not all aircraft need to go as fast as military jets or fly as high, and the turboprop was born. The engine is much lighter than a comparable piston engine, and has a longer time between overhauls, so it can fly more often and/or for longer periods.
Google "turboprop gearbox" and you'll see the main thing that makes turboprops possible. A propeller's tips cannot spin faster than the speed of sound or bad things will happen. The gearbox ensures the rpm's stay low enough to prevent that. It's a short mental walk to go from turboprop to turbofan, eh? Take a big jet engine, put in a secondary turbine at the butt end to capture a portion of the exhaust energy, send that energy through a nested shaft up to the front of the engine, and spin a big compressor section that pushes more of its air around the jet engine instead of through it like the normal compression section. Google "high bypass turbofan" and be amazed. The elegance of the high bypass turbofan is its simplicity; the speed of the bypass fan is not controlled by gearing, but by the design of the secondary turbine. This means less weight for the system. It's the perfect powerplant for commercial aircraft. Watch one start up and you'll hear the jet engine start first and see its exhaust, but that bypass fan is slowly spinning up. It's not mechanically attached to the axle that spins the jet engine core. It's a beautiful thing. BTW one constraint on this system is the surface speed of the *bearings* that hold the nested shafts apart. Airplanes are designed for a specific mission: going fast; carrying heavy stuff; landing on a short runway; flying very high; turning sharp; flying for a long time and over a really long distance; climbing fast; diving fast; being easy to fly. Maybe a few more. Google "Rutan Voyager circumnavigates the globe" to see one example of a plane built for one specific mission. A successful airplane design must accomplish several of these; i.e. it's not very useful to go as fast as possible if the design cannot land on a standard length runway. Aircraft design is entirely about the tradeoffs between all the mission requirements and physics. Start with a mission.
Determine which powerplant comes the closest to serving that mission (you may need more than one). Take that power to weight engine and design an airframe around it. WW1 planes had few choices of powerplant, and the power to weight ratio was total shit by standards available just 10 years later. But take a low powered, heavy engine, and put two short wings around it and a short fuselage, and you get a really quick, sharp turning fighter plane. Add machine guns and go fight. About that altitude thing: if I can fly higher in my fighter plane than you can fly in your fighter plane, I can attack you from above and disappear back into the sky. Nah, nah, ne-nah, nah! Since a lack of oxygen is the constraint, we can either carry an oxygen bottle (adds weight, doesn't last very long, both bad things) or push more air into the small space where the fuel will ignite (good idea). So what are the constraints on pushing more air into a small space? What are the tradeoffs? Once you leave the ground, the constraints can be reduced to just three. They are weight, power, and drag. Of these, weight is dominant; increase the weight and drag goes up, so the power requirement goes up to overcome that drag, which takes more fuel, which may mean adding more fuel to reach the intended range, which adds more weight at takeoff and climb, which adds drag... You seeing the picture? An aerospace engineer that worked on the first cruise missiles told me this story: A junior engineer, working on some of the control electronics, chose to place his little box of important stuff on the front bulkhead, just behind the blowy up parts. 
A senior engineer ripped him a new one, because that half a pound of weight positioned that far from the center of lift of the wings would cause a load on the trim of the aircraft to maintain level flight, which would increase drag, which would decrease the range of the missile, which means the ship firing that missile would have to be closer to the target, which is a dumb thing to do. The sum of small mistakes in a design that flies is a bad thing to let happen, unless there is a mission parameter that requires it. Since you have a healthy curiosity, google "airplane design tradeoffs", open a beer, and enjoy the many rabbit holes available to you. If you're still curious after google, you can search for books on engineering different parts of an aircraft. One of my faves was a very large and expensive book that only covered the design of naval aircraft landing gear. That's a whole career for that author, and it was just about the rolly parts that are only needed when the plane is not flying. And only planes based on aircraft carriers; you wouldn't waste adding the weight on land based planes. Then, if you're still curious, visit EAA.org. Visit aircraftspruce.com. Your curiosity can evolve into a lifelong hobby in which you learn to fly and build your own airplane in your garage, or restore a vintage warbird, or just hang out with other curious aero-types.:-D
answer_1
kg8jgu
Do planes have super-chargers and/or turbo-chargers like cars can? If not, why? I know nothing about planes beyond the simple rotary engine and I'm curious about this. It seems like they are operating at such a speed and scale that these additions could be perfect additions as long as it was designed to not add more drag and weight than its added worth. Even then, what if they flew at a slightly downward slope from a higher altitude? Or compensate with a design generating more lift? How would they roughly impact speed and fuel efficiency?
Yes, planes have both modes of forced induction, and there were attempts to feed compressed gases as well, like NOS. Another induction system was one called a power recovery turbine, a great system that unfortunately never reached its full potential due to high levels of back pressure. There were many attempts at a twin-charging system using both turbochargers and superchargers.
I won't beat /u/IsentropicFire for detail, so I'll go for brevity and sideways thinking: A jet engine is basically all turbocharger, and none of the rest of a car engine.
answer_2
kg8jgu
Do planes have super-chargers and/or turbo-chargers like cars can? If not, why? I know nothing about planes beyond the simple rotary engine and I'm curious about this. It seems like they are operating at such a speed and scale that these additions could be perfect additions as long as it was designed to not add more drag and weight than its added worth. Even then, what if they flew at a slightly downward slope from a higher altitude? Or compensate with a design generating more lift? How would they roughly impact speed and fuel efficiency?
About the efficiency: commercial jets try to operate just under transonic speeds (about mach 0.8 to 1.3) where drag rapidly increases as the airspeed rises, so it would most likely not be worth it for jets. Not that modern jets would need this for cruise flight anyway, as the highest thrust applied usually is at take-off and during climb. About the climbing and then continuously descending: as airplanes climb, their minimum speed and maximum speed come closer together, until they could theoretically hit the so-called ’coffin corner’. In reality they don’t fly at this altitude, but flying higher and then descending might require them to fly at such a high altitude that they can’t reach it. As the airplane descends, it would go back to altitudes where the air is thicker, which means that they both have to fly slower and experience more drag, so it’s probably more efficient to stay at cruising altitude longer. It also makes separating airplanes a lot easier for air traffic controllers. A design which generates more lift might have good consequences, but also a surprisingly bad consequence. One of the consequences of lift are so-called wing tip vortices (there are a few clips where you can see them pretty clearly when an airplane flies through a cloud). These create drag, and the more lift, the stronger the vortices. When an airplane is not accelerating up or down or turning, it’s theoretically producing the same amount of lift as it weighs. So a plane which produces more lift could carry more, but could also produce more drag, making things worse. Now there are ways to make these vortices less powerful, like having a larger wing span, which can increase weight. So when they design a new airplane, they need to consider all these things and decide what they want to favor.
There are already lots of good answers but I'll add mine in too. Yes, planes can be and sometimes are supercharged or turbocharged. Unlike with cars, though, you have better options than piston power, so it's not like this is gonna be the best producer of power. For smaller planes it makes sense to soup up the engine, but as you get bigger you'll start to use turbine power, which gives more power and has lots of other benefits. Also just FYI, the word you meant to say in your post is radial engine, not rotary. A rotary engine is a Wankel rotary engine; radial is the big circular engine you see on WW2 planes.
answer_2
vbj5sg
WHY does V chord want to resolve at the I? Can anyone explain WHY the V chord wants to resolve to the tonic? I’m sure there are reasons and I know my fellow Reddit kings and queens can school me on this. Any help would be greatly appreciated as I am a theory noob
Well, it actually doesn’t. No chord *wants* to resolve to any other particular chord - because chords aren’t sentient beings that have a mind of their own; it’s humans that have culturally engrained tastes and expectations of their own - and these obviously can vary quite a lot. For example, most of the music in existence (including most contemporary popular genres) doesn’t revolve around V-I cadences, and listeners of these kinds of music really don’t seem to mind; so I think it’d be an overgeneralization to say that any particular chord “wants” to resolve anywhere else. Music isn’t a universal language, so neither is music theory. However, that might not be the kind of answer you’re looking for. V-I motion (and similar sorts of tonal resolution) are obviously *very* historically important to the musical tradition of European Classical music (Mozart, Haydn, etc.), as well as related strains of music from Europe. In these styles of music, the sound of V going to I (especially scale degrees 7 and 4 resolving to 1 and 3) became a cornerstone of their musical vocabulary. If you’re looking to understand this in more depth, I’d recommend checking out any tonal harmony textbook, or a history book about tonal harmony in Europe.
Many centuries ago, medieval Europeans decided they really liked half-step resolutions, especially when they went up. It became very much A Thing to approach resolutions with one voice moving a half step in one direction, and the other moving a whole step in the opposite direction, arriving at a perfect interval--which was, most importantly for our purposes, often an octave. So if we wanted to resolve to octave As, that meant either G#-A going up combined with B-A going down, or G-A going up combine with Bb-A going down. The former pattern, with the half step rising rather than the half step falling, was preferred for resolutions that were really meant to be final and complete. So then this B-G# major sixth expanding to the A-A octave made it into the Renaissance, and people started liking to have an independent, leapy bass voice below these cadential voices, and one place that a bass worked particularly well beneath the B-G# major sixth was on the E below it. This E could either leap up an octave to another E, or up a fourth/down a fifth to an A. Either way, you end up with an E major chord resolving to an A sonority of some kind. Eventually, upward half-step resolution combined with the root motion by descending fifth took hold as Another Thing, and there's our V-I.
answer_2
vbj5sg
WHY does V chord want to resolve at the I? Can anyone explain WHY the V chord wants to resolve to the tonic? I’m sure there are reasons and I know my fellow Reddit kings and queens can school me on this. Any help would be greatly appreciated as I am a theory noob
One of my theory teachers liked to use the Harmonic/Overtone Series as part of the reason V wants to resolve to I. It also kind of relates to the jazz term "the V is the I." When you study the frequency of the waveform of a single note, you will find there are multiple frequencies resonating. If we play C2, which we call the fundamental tone, the first harmonic is C3 & the second harmonic is G3. The third, fourth, & fifth harmonics of C2 are C4, E4, & G4. If we play a G2, we will have the harmonics of G3, D4, G4, B4, & D5 as the loudest resonant frequencies. All of these D's & B's are leading tones for C. There are a whole host of other frequencies; some of the better-known ones, like the major triad, are more prevalent or more forward. But this is all happening when playing one singular note. All of these resonant frequencies make up the timbre of the sound, with some resonances made louder or quieter depending on the medium (which instrument) the sound is playing through. So we hear a lot of V's resonance every time we play the I no matter which instrument it's played through, since it's only the 2nd harmonic. Then we play several leading tones of resonance when we play the V, just playing the note & its resonances, regardless of whether we're playing an actual triad on top.
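The overtone claims above are easy to check numerically. Here's a rough Python sketch (my own, not from the answer): it takes idealized integer-multiple partials of C2 and snaps each to the nearest equal-tempered note name. Note the snapping is an approximation - the 5th partial lands on E4 even though the true harmonic is roughly 14 cents flat of equal-tempered E4.

```python
# Idealized partials of C2 mapped to nearest equal-tempered note names.
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq, a4=440.0):
    # MIDI note number of the nearest equal-tempered pitch (A4 = 69).
    midi = round(69 + 12 * math.log2(freq / a4))
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

C2 = 440.0 * 2 ** ((36 - 69) / 12)  # MIDI note 36, about 65.41 Hz
partials = [nearest_note(C2 * n) for n in range(1, 7)]
print(partials)  # ['C2', 'C3', 'G3', 'C4', 'E4', 'G4']
```

Running the same loop on G2 (MIDI note 43) gives G2, G3, D4, G4, B4, D5 - the D's and B's the answer points to.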
The more comprehensive and historically accurate answer is the one u/Zarlinosuke gave you: it's the result of a long process of cultural development of a specific melodic and harmonic language, the cumulative result of many small improvements and innovations that led to what we call nowadays the "tonal system", where the V chord (or V7) has a fundamental importance in harmonic motion. In general, many people just enjoyed the sound of that resolution when it was done skillfully, and it's left such a profound cultural mark that we still go back to it after so long. It's part of our heritage and our identity. But there's another answer, which is: ... *does it, really?* If you listen to a lot of contemporary pop, you'll more often hear the V go to the vi, or to the IV, or even to the ii. The traditional V-I resolution is only a common feature of music done by old farts like me, who still identify with Phil Collins and Air Supply, who like big transitions and ugly men with high voices belting out over digital synths. Nowadays, people tend to prefer chord loops that create this sort of trance-like repetition, where you get lost in the groove and just dig the vibe of it--so something like the i-v-iv-i of Dua Lipa's *Levitating*, or the iv-i-♭III-♭VII of the Weeknd's *Blinding Lights*, are much more appropriate. We wanna feel like the song's never gonna end. I mean, I'm exaggerating a little bit: you do hear V-I's once in a while today. However, that mantra that "V wants to go to I" is only repeated by braindead zombies who think we're still living in the 19th century, or who just mindlessly repeat what they've heard in school from some teacher who thinks everything that's been done after Coltrane's death is "garbage". V doesn't "want" to go anywhere, because it's *the musician* who wants it to go somewhere, and we, listeners, have certain expectations that may or may not be fulfilled.
answer_1
vbj5sg
WHY does the V chord want to resolve to the I? Can anyone explain WHY the V chord wants to resolve to the tonic? I’m sure there are reasons and I know my fellow Reddit kings and queens can school me on this. Any help would be greatly appreciated as I am a theory noob
One of my theory teachers liked to use the Harmonic/Overtone Series as part of the reason V wants to resolve to I. It also kind of relates to the jazz term "the V is the I." When you study the frequency of the waveform of a single note, you will find there are multiple frequencies resonating. If we play C2, which we call the fundamental tone, the first harmonic is C3 & the second harmonic is G3. The third, fourth, & fifth harmonics of C2 are C4, E4, & G4. If we play a G2, we will have the harmonics G3, D4, G4, B4, & D5 as the loudest resonant frequencies. All of these D's & B's are tendency tones toward C. There are a whole host of other frequencies; some of the better-known ones, like those of the major triad, are more prevalent or more forward. But this is all happening when playing one singular note. All of these resonant frequencies make up the timbre of the sound, with some resonances made louder or quieter depending on the medium (which instrument) the sound is playing through. So we hear a lot of V's resonance every time we play the I, no matter which instrument it's played through, since G is only the 2nd harmonic. Then we play several tendency tones of resonance when we play the V just by playing the note & its resonances, regardless of whether we're playing an actual triad on top.
Fundamentally the V-I thing is just a distraction; it is better to ask why any melodic tendencies and specific melodic patterns exist at all. The answer to that question is relatively simple: because that's what people before you have done. Where it gets kind of complicated, though, is that there's a set of people who think we should move away from that and enter new territory. But many of these people aren't aware *just how much* of their writing already uses ideas that were commonplace centuries ago, more or less. It's really hard to escape tendencies, because a lot of the time illogical musical phrases arise out of ignoring the tendencies, and that's how you get lines that you're just not satisfied with. And then there's contemporary art music which, uh, at worst tried to be very hostile toward existing tendencies, so instead you got serialism, abstract pitch classes and such (specifically referring to the Second Viennese School).
answer_1
vbj5sg
WHY does the V chord want to resolve to the I? Can anyone explain WHY the V chord wants to resolve to the tonic? I’m sure there are reasons and I know my fellow Reddit kings and queens can school me on this. Any help would be greatly appreciated as I am a theory noob
One of my theory teachers liked to use the Harmonic/Overtone Series as part of the reason V wants to resolve to I. It also kind of relates to the jazz term "the V is the I." When you study the frequency of the waveform of a single note, you will find there are multiple frequencies resonating. If we play C2, which we call the fundamental tone, the first harmonic is C3 & the second harmonic is G3. The third, fourth, & fifth harmonics of C2 are C4, E4, & G4. If we play a G2, we will have the harmonics G3, D4, G4, B4, & D5 as the loudest resonant frequencies. All of these D's & B's are tendency tones toward C. There are a whole host of other frequencies; some of the better-known ones, like those of the major triad, are more prevalent or more forward. But this is all happening when playing one singular note. All of these resonant frequencies make up the timbre of the sound, with some resonances made louder or quieter depending on the medium (which instrument) the sound is playing through. So we hear a lot of V's resonance every time we play the I, no matter which instrument it's played through, since G is only the 2nd harmonic. Then we play several tendency tones of resonance when we play the V just by playing the note & its resonances, regardless of whether we're playing an actual triad on top.
Wow, very interesting history behind this in the other comments. My thoughts were that if you listen to popular music, the V as the tension/far-away, need-to-come-home sound is very common and helps condition this. There is some basis to it: the 3rd of the V is scale degree 7, which is only a half step below the tonic, and that combines with the root movement of 5 to 1.
answer_1
vbj5sg
WHY does the V chord want to resolve to the I? Can anyone explain WHY the V chord wants to resolve to the tonic? I’m sure there are reasons and I know my fellow Reddit kings and queens can school me on this. Any help would be greatly appreciated as I am a theory noob
Fundamentally the V-I thing is just a distraction; it is better to ask why any melodic tendencies and specific melodic patterns exist at all. The answer to that question is relatively simple: because that's what people before you have done. Where it gets kind of complicated, though, is that there's a set of people who think we should move away from that and enter new territory. But many of these people aren't aware *just how much* of their writing already uses ideas that were commonplace centuries ago, more or less. It's really hard to escape tendencies, because a lot of the time illogical musical phrases arise out of ignoring the tendencies, and that's how you get lines that you're just not satisfied with. And then there's contemporary art music which, uh, at worst tried to be very hostile toward existing tendencies, so instead you got serialism, abstract pitch classes and such (specifically referring to the Second Viennese School).
Well, it actually doesn’t. No chord *wants* to resolve to any other particular chord, because chords aren’t sentient beings with minds of their own; it’s humans who have culturally ingrained tastes and expectations, and these obviously can vary quite a lot. For example, most of the music in existence (including most contemporary popular genres) doesn’t revolve around V-I cadences, and listeners of these kinds of music really don’t seem to mind; so I think it’d be an overgeneralization to say that any particular chord “wants” to resolve anywhere else. Music isn’t a universal language, so neither is music theory. However, that might not be the kind of answer you’re looking for. V-I motion (and similar sorts of tonal resolution) is obviously *very* historically important to the musical tradition of European Classical music (Mozart, Haydn, etc.), as well as related strains of music from Europe. In these styles of music, the sound of V going to I (especially scale degrees 7 and 4 resolving to 1 and 3) became a cornerstone of their musical vocabulary. If you’re looking to understand this in more depth, I’d recommend checking out any tonal harmony textbook, or a history book about tonal harmony in Europe.
answer_1
vbj5sg
WHY does the V chord want to resolve to the I? Can anyone explain WHY the V chord wants to resolve to the tonic? I’m sure there are reasons and I know my fellow Reddit kings and queens can school me on this. Any help would be greatly appreciated as I am a theory noob
Wow, very interesting history behind this in the other comments. My thoughts were that if you listen to popular music, the V as the tension/far-away, need-to-come-home sound is very common and helps condition this. There is some basis to it: the 3rd of the V is scale degree 7, which is only a half step below the tonic, and that combines with the root movement of 5 to 1.
The resolution relationship between root and dominant suggests to me something about the nature of the underlying fabric of the universe. The dominant can be considered a "quantized" measure from the root, as are the other notes of any given scale. In the Western 12-tone system, the essential unit from which all other relationships are derived is the half-step. These repeating mathematical relationships are mirrored in the interactions of particle physics; the word "quantum" is applied to the fixed energies that guide behavior in the subatomic realm, i.e., quantum physics. And much like a Mandelbrot set, the relationships in the tiny quantum world are reflected in the larger Newtonian world. The universe is indeed a symphony of unimaginable complexity, and I think of music as our window, our *primer*, if you will, for understanding not only the stuff of physics, but our own emotional relationships as well.
answer_2
so4j1u
Explain like I'm five years old : If we never manage to create a true absolute zero, how do we know that it sits exactly at -273.15 °C instead of, uh, -273.69 or something else?
You can think of temperature as the average speed/kinetic energy of each molecule in a gas. Just like you can’t be moving negative 1 mile per hour, absolute zero is the theoretical temperature at which these molecules come to a full stop. If they were moving “backwards,” that's still movement, and therefore higher than absolute zero. It’s just how absolute values work: you can’t move -3 feet, can’t throw -1 balls, and gas molecules can’t do any less than stay still. I think the reason this can be difficult for people to wrap their heads around is that they don’t realize cold is just the absence of heat, LITERALLY. It’s not some dumbing-down of a more complex subject: you can never “add cold” to something, only take away heat, and at some point (0 kelvin) there’s just no more heat to be taken away, which is why we can only approach absolute zero, but never get there.
In a word: math. It is possible to calculate absolute zero using several gas laws, namely the ideal gas law, PV = nRT. If you rearrange the ideal gas law as a linear equation as you would use on a graph, you can graph temperature against either volume or pressure. Since absolute zero is the temperature at which no particle motion occurs, pressure and volume are both zero at absolute zero. Thus, no matter which of those two graph options you use, the line denoted by the ideal gas law intersects the temperature axis (where pressure and volume = 0) at -273.15°C. Therefore, absolute zero, the temperature at which no particle motion occurs (in theory), is -273.15°C.
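That extrapolation can be sketched numerically with a least-squares line fit. This is a minimal illustration only; the volume figures below are idealized Charles's-law values for one mole of an ideal gas at 1 atm, not real measurements:

```python
# Extrapolate volume-vs-temperature data down to V = 0 to estimate
# absolute zero. Pairs are (temperature in degrees C, volume in litres);
# values are idealized from V = nRT/P for 1 mol at 1 atm.
data = [(0.0, 22.414), (25.0, 24.465), (50.0, 26.517), (100.0, 30.620)]

n = len(data)
sum_t = sum(t for t, v in data)
sum_v = sum(v for t, v in data)
sum_tv = sum(t * v for t, v in data)
sum_tt = sum(t * t for t, v in data)

# Least-squares slope m and intercept b for the line V = m*T + b.
m = (n * sum_tv - sum_t * sum_v) / (n * sum_tt - sum_t ** 2)
b = (sum_v - m * sum_t) / n

# Setting V = 0 gives T = -b / m: the extrapolated absolute zero.
abs_zero = -b / m
print(round(abs_zero, 2))  # roughly -273, close to the accepted -273.15
```

The small deviation from -273.15 °C here comes purely from rounding the input volumes to three decimals; with exact ideal-gas data, the intercept lands on -273.15 °C exactly.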
answer_1
so4j1u
Explain like I'm five years old : If we never manage to create a true absolute zero, how do we know that it sits exactly at -273.15 °C instead of, uh, -273.69 or something else?
It’s the same reason we can know the speed of light, even though we can’t reach it. It falls out of the math because it’s a fundamental feature of the universe we live in.
You can think of temperature as the average speed/kinetic energy of each molecule in a gas. Just like you can’t be moving negative 1 mile per hour, absolute zero is the theoretical temperature at which these molecules come to a full stop. If they were moving “backwards,” that's still movement, and therefore higher than absolute zero. It’s just how absolute values work: you can’t move -3 feet, can’t throw -1 balls, and gas molecules can’t do any less than stay still. I think the reason this can be difficult for people to wrap their heads around is that they don’t realize cold is just the absence of heat, LITERALLY. It’s not some dumbing-down of a more complex subject: you can never “add cold” to something, only take away heat, and at some point (0 kelvin) there’s just no more heat to be taken away, which is why we can only approach absolute zero, but never get there.
answer_2
9496212
Navigation, highlight current page <sep> I've got a parent layout and, derived from it, child sites. The parent layout has a navigation; each navigation point represents one child site. How do I highlight the currently viewed child site in the parent layout? What should the if-condition look like?
First set a variable in the template; it's more readable. <code>{% set page = app.request.get('_route') %}
<li class="nav-item">
    <a class="nav-link {% if page == 'welcome' %}active{% endif %}" href="{{ path('welcome') }}">Home <span class="sr-only">(current)</span></a>
</li>
</code>
Here's what I did: <code><a href='{{ path( 'products' ) }}'{% if app.request.attributes.get( '_route' ) starts with 'products' %} class='active'{% endif %}>Products</a>
<ul>
    <li><a href='{{ path( 'products_product1' ) }}'{% if app.request.attributes.get( '_route' ) == 'products_product1' %} class='active'{% endif %}>Product 1</a></li>
    <li><a href='{{ path( 'products_product2' ) }}'{% if app.request.attributes.get( '_route' ) == 'products_product2' %} class='active'{% endif %}>Product 2</a></li>
</ul>
</code>
answer_1
az9mmk
Should I undercook dried black beans if they will be cooked again when added to a recipe that calls for canned beans? I'm making a chili recipe that calls for canned black beans, however, I would like to start with dried black beans. Since canned beans are fully cooked, should I fully cook the dried beans first? Or should I undercook the dried beans so the 20 minutes of cooking that the chili requires completes the cooking? Or - another idea - fully cook the dried beans, but add them at the end of making chili rather than the beginning?
It kind of depends how you want your chili to come out. If you want intact perfectly cooked individual beans I would just add them nearer the end. If you want your beans to slightly break down and meld into the sauce add them earlier. Personally I enjoy a chili where the beans break down a bit and thicken the sauce if I'm using beans but it's personal preference.
Never undercook the beans regardless. If they end up undercooked, it pretty much ruins the whole dish. Overcooked, really no big deal in chili. Especially if the second cook is in chili: beans tend not to suck up too much water when the solution is acidic, so they won't take up much more in there.
answer_2
az9mmk
Should I undercook dried black beans if they will be cooked again when added to a recipe that calls for canned beans? I'm making a chili recipe that calls for canned black beans, however, I would like to start with dried black beans. Since canned beans are fully cooked, should I fully cook the dried beans first? Or should I undercook the dried beans so the 20 minutes of cooking that the chili requires completes the cooking? Or - another idea - fully cook the dried beans, but add them at the end of making chili rather than the beginning?
From my experience I would recommend pre-cooking the beans and if you feel like sweating it, do your best to stop at al dente. Let cool and then use as you would the canned beans. Sometimes when I cooked straight from dry it turned out fine, and sometimes they never seemed to finish cooking.
Canned beans are fully cooked. If you want to recreate a recipe that calls for fully cooked beans, fully cook your dried beans. If you want to increase the flavor mingling of the beans and chili without having exploded soft beans, you can either mix your fully cooked beans with the chili and let them marinate together overnight in the fridge (which makes fat removal easy), or you can stop your beans just before they are fully done and simmer them together until the beans are perfect. If you stop your beans at just done, you need to be sure that the texture of your beans is pretty darn close to what you want, because acidic environments interfere with softening, and lots of chili recipes for some reason include acidic tomatoes. Background.
answer_2
az9mmk
Should I undercook dried black beans if they will be cooked again when added to a recipe that calls for canned beans? I'm making a chili recipe that calls for canned black beans, however, I would like to start with dried black beans. Since canned beans are fully cooked, should I fully cook the dried beans first? Or should I undercook the dried beans so the 20 minutes of cooking that the chili requires completes the cooking? Or - another idea - fully cook the dried beans, but add them at the end of making chili rather than the beginning?
It would depend on the cook time of the final product to me. If you are simmering it for hours I might leave them slightly underdone. Canned beans are precooked. Nothing is worse than undercooked beans.
If you get some calcium chloride from the canning section of the grocery store it will prevent the beans from getting mushy. It's what canned beans have in them. It's just a type of salt and totally fine.
answer_2
az9mmk
Should I undercook dried black beans if they will be cooked again when added to a recipe that calls for canned beans? I'm making a chili recipe that calls for canned black beans, however, I would like to start with dried black beans. Since canned beans are fully cooked, should I fully cook the dried beans first? Or should I undercook the dried beans so the 20 minutes of cooking that the chili requires completes the cooking? Or - another idea - fully cook the dried beans, but add them at the end of making chili rather than the beginning?
It would depend on the cook time of the final product to me. If you are simmering it for hours I might leave them slightly underdone. Canned beans are precooked. Nothing is worse than undercooked beans.
Fully cook the beans. It's very difficult to overcook beans. Any additional cooking they get in the chili won't make a difference. Undercooked beans can also be mildly toxic. Beans naturally contain a toxin that gets broken down during cooking. How much depends on the bean variety. It's the reason you shouldn't eat a lot of raw green beans (though I eat them raw all the time without issues).
answer_2
az9mmk
Should I undercook dried black beans if they will be cooked again when added to a recipe that calls for canned beans? I'm making a chili recipe that calls for canned black beans, however, I would like to start with dried black beans. Since canned beans are fully cooked, should I fully cook the dried beans first? Or should I undercook the dried beans so the 20 minutes of cooking that the chili requires completes the cooking? Or - another idea - fully cook the dried beans, but add them at the end of making chili rather than the beginning?
From my experience I would recommend pre-cooking the beans and if you feel like sweating it, do your best to stop at al dente. Let cool and then use as you would the canned beans. Sometimes when I cooked straight from dry it turned out fine, and sometimes they never seemed to finish cooking.
If you get some calcium chloride from the canning section of the grocery store it will prevent the beans from getting mushy. It's what canned beans have in them. It's just a type of salt and totally fine.
answer_2
az9mmk
Should I undercook dried black beans if they will be cooked again when added to a recipe that calls for canned beans? I'm making a chili recipe that calls for canned black beans, however, I would like to start with dried black beans. Since canned beans are fully cooked, should I fully cook the dried beans first? Or should I undercook the dried beans so the 20 minutes of cooking that the chili requires completes the cooking? Or - another idea - fully cook the dried beans, but add them at the end of making chili rather than the beginning?
Are you using a pressure cooker? I usually add the cooked dry beans last and cook on high pressure for a few more minutes.
If you get some calcium chloride from the canning section of the grocery store it will prevent the beans from getting mushy. It's what canned beans have in them. It's just a type of salt and totally fine.
answer_2
az9mmk
Should I undercook dried black beans if they will be cooked again when added to a recipe that calls for canned beans? I'm making a chili recipe that calls for canned black beans, however, I would like to start with dried black beans. Since canned beans are fully cooked, should I fully cook the dried beans first? Or should I undercook the dried beans so the 20 minutes of cooking that the chili requires completes the cooking? Or - another idea - fully cook the dried beans, but add them at the end of making chili rather than the beginning?
It's not pasta, it's beans. They're mostly fiber, so they're difficult to overcook in the first place, and undercooking them is like biting rocks.
If you get some calcium chloride from the canning section of the grocery store it will prevent the beans from getting mushy. It's what canned beans have in them. It's just a type of salt and totally fine.
answer_2
az9mmk
Should I undercook dried black beans if they will be cooked again when added to a recipe that calls for canned beans? I'm making a chili recipe that calls for canned black beans, however, I would like to start with dried black beans. Since canned beans are fully cooked, should I fully cook the dried beans first? Or should I undercook the dried beans so the 20 minutes of cooking that the chili requires completes the cooking? Or - another idea - fully cook the dried beans, but add them at the end of making chili rather than the beginning?
Whilst we're on the subject of beans, is it possible to over-soak them? Last night I set some beans soaking for tonight's dinner; I wasn't really thinking properly and read 12 hrs as 24 hrs. Will they be alright?
Fully cook the beans. It's very difficult to overcook beans. Any additional cooking they get in the chili won't make a difference. Undercooked beans can also be mildly toxic. Beans naturally contain a toxin that gets broken down during cooking. How much depends on the bean variety. It's the reason you shouldn't eat a lot of raw green beans (though I eat them raw all the time without issues).
answer_2
az9mmk
Should I undercook dried black beans if they will be cooked again when added to a recipe that calls for canned beans? I'm making a chili recipe that calls for canned black beans, however, I would like to start with dried black beans. Since canned beans are fully cooked, should I fully cook the dried beans first? Or should I undercook the dried beans so the 20 minutes of cooking that the chili requires completes the cooking? Or - another idea - fully cook the dried beans, but add them at the end of making chili rather than the beginning?
The trick is to undercook the onions. Everybody is going to get to know each other in the pot.
Fully cook the beans. It's very difficult to overcook beans. Any additional cooking they get in the chili won't make a difference. Undercooked beans can also be mildly toxic. Beans naturally contain a toxin that gets broken down during cooking. How much depends on the bean variety. It's the reason you shouldn't eat a lot of raw green beans (though I eat them raw all the time without issues).
answer_2
ge5qwc
Why were Arabic borrowings so commonly taken with the definite article still attached? In the Ibero-Romance languages especially, it seems that more often than not, Arabic loanwords were borrowed with some form of the Arabic definite article still attached. Most commonly this is seen in words starting with /al/, as in ‘alfombra’ or ‘alquimia,’ but it’s also seen in words where the /l/ assimilated to a sun letter, as in ‘azeite’ and ‘açúcar.’ Why exactly was the definite article left attached for these words? Is there something about Arabic that might necessitate its usage more frequently than it would be seen in these other languages, thus creating the notion that it is a part of the word being borrowed? Also, if some type of confusion like that is the cause, does this speak to a lack of Arabic fluency among Iberian Romance speakers during the time of Arabic rule, since (from my very limited perspective) this would presumably be a fairly basic feature of Arabic morphology? Are there other languages that have borrowed heavily from Arabic that also maintain the article at the start of their words?
Haitian Creole keeps the French definite article in many words borrowed from French; for example, "la lune" (the moon) turns into the single word "lalin" in Haitian Creole. So I can at least say that the phenomenon is not specific to Arabic, although IIRC Arabic uses definite articles in more situations than French or English would. On the other hand, quickly searching for Arabic loanwords in both Hindi and Farsi (both borrowed lots of Arabic vocabulary) suggests that the definite article does get dropped. Even within Europe, it appears that Italian generally omitted the article when borrowing Arabic words, which is why the Italian word for sugar is "zucchero" (this got borrowed into French and then English). This Arabic -> Italian -> French -> English route is also why we call the cloth material "cotton" and not "alcotton". Sorry that I couldn't address the main question...
Hello! Thank you for posting your question to /r/asklinguistics. Please remember to flair your post. This is a reminder to ensure your recent submission follows all of our rules, which are visible in the sidebar. If it doesn't, your submission may be removed! ___ All top-level replies to this post must be academic and sourced where possible. Lay speculation, pop-linguistics, and comments that are not adequately sourced will be removed. ___ *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/asklinguistics) if you have any questions or concerns.*
answer_1
ge5qwc
Why were Arabic borrowings so commonly taken with the definite article still attached? In the Ibero-Romance languages especially, it seems that more often than not, Arabic loanwords were borrowed with some form of the Arabic definite article still attached. Most commonly this is seen in words starting with /al/, as in ‘alfombra’ or ‘alquimia,’ but it’s also seen in words where the /l/ assimilated to a sun letter, as in ‘azeite’ and ‘açúcar.’ Why exactly was the definite article left attached for these words? Is there something about Arabic that might necessitate its usage more frequently than it would be seen in these other languages, thus creating the notion that it is a part of the word being borrowed? Also, if some type of confusion like that is the cause, does this speak to a lack of Arabic fluency among Iberian Romance speakers during the time of Arabic rule, since (from my very limited perspective) this would presumably be a fairly basic feature of Arabic morphology? Are there other languages that have borrowed heavily from Arabic that also maintain the article at the start of their words?
In most languages, articles, even fully segmentable ones like in English or (for the most part) Spanish, really don't exist as separate phonological words. Obviously they're still syntactically separate, but phonologically the article acts as a kind of half-word called a clitic. Borrowing tends to happen at the phonological level, not the syntactic level. English is a little weird in that it tends to borrow morphology as well, which is why we have both the words stimulus and stimuli. But even we don't go for the whole shebang, which is why we don't have stimulum, stimulo, stimulorum, etc. So Spanish and Portuguese got a bunch of Arabic words but borrowed them phonetically, not morphosyntactically. The same happened with a number of languages they colonized, like the many languages that have lamesa or lamexa as their word for table.
Hello! Thank you for posting your question to /r/asklinguistics. Please remember to flair your post. This is a reminder to ensure your recent submission follows all of our rules, which are visible in the sidebar. If it doesn't, your submission may be removed! ___ All top-level replies to this post must be academic and sourced where possible. Lay speculation, pop-linguistics, and comments that are not adequately sourced will be removed. ___ *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/asklinguistics) if you have any questions or concerns.*
answer_1
ge5qwc
Why were Arabic borrowings so commonly taken with the definite article still attached? In the Ibero-Romance languages especially, it seems that more often than not, Arabic loanwords were borrowed with some form of the Arabic definite article still attached. Most commonly this is seen in words starting with /al/, as in ‘alfombra’ or ‘alquimia,’ but it’s also seen in words where the /l/ assimilated to a sun letter, as in ‘azeite’ and ‘açúcar.’ Why exactly was the definite article left attached for these words? Is there something about Arabic that might necessitate its usage more frequently than it would be seen in these other languages, thus creating the notion that it is a part of the word being borrowed? Also, if some type of confusion like that is the cause, does this speak to a lack of Arabic fluency among Iberian Romance speakers during the time of Arabic rule, since (from my very limited perspective) this would presumably be a fairly basic feature of Arabic morphology? Are there other languages that have borrowed heavily from Arabic that also maintain the article at the start of their words?
That’s only older loanwords though. Recent loanwords from Arabic don’t have any definite article.
Hello! Thank you for posting your question to /r/asklinguistics. Please remember to flair your post. This is a reminder to ensure your recent submission follows all of our rules, which are visible in the sidebar. If it doesn't, your submission may be removed! ___ All top-level replies to this post must be academic and sourced where possible. Lay speculation, pop-linguistics, and comments that are not adequately sourced will be removed. ___ *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/asklinguistics) if you have any questions or concerns.*
answer_1
k1oiq3
Explain like I'm five years old: Why can't you boil milk in a kettle? I've burnt out my kettle attempting this. My thinking was that milk is just thicker water. I thought it would just take longer to boil. I'm in hot water (pun intended) with the wife.
In addition to what everybody else is saying... how are you supposed to clean the kettle?
Kettles aren't well mixed vessels, so you have a single heating element at the bottom where you'll have a hot zone. This isn't a problem when the only thing you're heating is water molecules, but is a problem when you try to heat something which contains fats and proteins. You're going to end up burning anything near the heating element. Also, depending on the kettle in question and how it functions, it may put too much thermal energy in to heat milk to the desired temperature even if you did stir it. I deal with similar issues on a chemical plant, but we have control systems that aim for setpoints, so you wouldn't see the same issue. Likewise, we heat aqueous protein solutions in big tanks, but if you don't fill it up over the minimum stir volume, you won't mix the tank as it heats up and you'll end up with burnt product around the heating elements. TLDR: heating liquids introduces a certain amount of mixing, but not enough to prevent hot zones around the element in the kettle where the fat and protein in the milk will burn.
answer_1
k1oiq3
Explain like I'm five years old: Why can't you boil milk in a kettle? I've burnt out my kettle attempting this. My thinking was that milk is just thicker water. I thought it would just take longer to boil. I'm in hot water (pun intended) with the wife.
The milk fat burns on the heating element. You should steam milk instead, or heat up a pan of water and put the milk in a vessel that doesn't touch the bottom of the pan.
Kettles aren't well mixed vessels, so you have a single heating element at the bottom where you'll have a hot zone. This isn't a problem when the only thing you're heating is water molecules, but is a problem when you try to heat something which contains fats and proteins. You're going to end up burning anything near the heating element. Also, depending on the kettle in question and how it functions, it may put too much thermal energy in to heat milk to the desired temperature even if you did stir it. I deal with similar issues on a chemical plant, but we have control systems that aim for setpoints, so you wouldn't see the same issue. Likewise, we heat aqueous protein solutions in big tanks, but if you don't fill it up over the minimum stir volume, you won't mix the tank as it heats up and you'll end up with burnt product around the heating elements. TLDR: heating liquids introduces a certain amount of mixing, but not enough to prevent hot zones around the element in the kettle where the fat and protein in the milk will burn.
answer_2
74485
Why do speedometers (in the US, at least) go so high? <sep> Typically one doesn't drive much faster than 80 MPH. Even in an emergency passing situation, it would be extremely rare to drive more than 100 MPH. In fact, as far as I know many cars have governors built into the engines that prevent them from going much faster. Yet in the United States, most cars made in the last 20 years have speedometers that go up to 120 or 140 MPH. Why? It seems to me like it might encourage people to drive faster. Or does it make the car "seem" faster if normal cruising speed is a smaller percentage of implied max speed?
Actually, the US is one of the few places to have enforced a limit on the maximum speed shown on a speedometer (reportedly to stop people trying to "speed test" their vehicles). For vehicles produced from 1979 to 1981, you'd only see speedometers reading up to 85mph; the same law dictated the highlight at 55mph. http://en.wikipedia.org/wiki/National_Maximum_Speed_Law#Speedometers As others have mentioned, some countries require a speedo to show the maximum speed a vehicle is capable of (or have no, or much higher, speed limits); in other cases the manufacturer chooses to display higher values, possibly for marketing purposes. For example, the speedo on a Suzuki Hayabusa reads far past any road-legal speed. Then we have a typical BMW car speedo: most of their vehicles are limited to 155mph, yet the speedometers generally read up to 160mph (even on the models that don't produce enough power to ever achieve that speed). From a UX perspective, it's probably better to provide a display (a gauge in this case) capable of showing all the possible values. The use cases for this in terms of speed could be: differing local restrictions (driving to other states or different countries); use off the public highway (track days, testing, racing); changes to vehicle parameters (more power, improved aerodynamics) either during production or aftermarket. The downside of a gauge is that the wider the range you attempt to cater for, the less readable it is and the more difficult it becomes to determine your exact speed (as the needle rises, perspective can affect the "read" speed). Digital displays make it more difficult to read changing values or assess rate of change, but you only need to consider the appropriate number of digits (tens, hundreds, thousands etc.); a three-digit display should cater for most cars, though owners of vehicles incapable of 100mph/kmh might question why their speedos go so high too. :D
Interesting question. Looking around online, it seems to be a combination of marketing (makes the consumer think the engine is powerful) and manufacturing efficiency (can use the same speedometer in faster cars as well as minivans). http://mentalfloss.com/article/59478/why-do-car-speedometers-list-speeds-are-way-over-legal-limit
answer_1
7g6t1v
My flour tortillas never bubble and always go brittle The recipe I've been using is: 3 cups flour 1/3 cup Lard 1 1/2 cup water (hot) 1 tsp Baking Powder I have adjusted as many of the variables as I can and my tortillas always just sit in the pan flat as a board and come out brittle and dry. I've tried using more and less water/flour/baking powder/lard I've tried kneading from 5 minutes to 20, letting them rest from 10 minutes to 40, used every single pan I could in my kitchen, even bought a brand new cast iron skillet (which I have taken the proper steps to prepare) to attempt them with. Used everything from med-low all the way to the hottest my stovetop will go. I have also rolled them extremely thin and also relatively thick and still the same result I've adjusted the time to leave them in the pan to cook, I always rest them in a tea towel immediately as they come out of the pan and they just never seem to work. I've run out of ideas. I have also swapped the lard out for butter and multiple different oils. I need help :(
1.) use a scale 2.) if that doesn't work, use a food processor
Have you tried using cold water and cold lard instead? There's no yeast to be concerned about, so it's fine to do. It was tested on Serious Eats; apparently it makes the tortilla more tender (just like with a flaky pastry). And make sure you don't overcook it! 25-30 seconds on medium-high heat should be enough. (My grandma says to brush oil onto the top before flipping it over to keep it moist, just like with roti, but idk if that actually makes a difference for either.) Also, is your baking powder old? If you stir a 1/2 tsp into a cup of hot water, does it fizz? By the way...where's the salt in your recipe? Try adding a teaspoon; it won't affect the texture, so that's just a side note.
answer_2
5318342
what happens to NSLog info when running on a device? <sep> what happens to NSLog info when running on a device? Where does the text go? Does it get saved? Is it therefore a big overhead when running on a device, or does it effectively get sent to null?
It gets written to the Console log on the device. You can read it in the "Organizer" within Xcode when your device is connected.
The text is logged. You can view the logs in xcode via the (almost-invisible) disclosure triangle in the Devices window as described in another answer. You can also access the logs in a more usable view (including filtering) by opening Console.app on your connected Mac and selecting the device on the left. There will be some overhead with logging, but considering the volume of logging that's going on all the time in recent versions of iOS, unless you're generating an awful lot of output (or spending time generating the messages), it's unlikely to be significant.
answer_2
5318342
what happens to NSLog info when running on a device? <sep> what happens to NSLog info when running on a device? Where does the text go? Does it get saved? Is it therefore a big overhead when running on a device, or does it effectively get sent to null?
Your device will continue logging even when it's not connected to your mac. To see the logs, you need to open Xcode, click the 'Window' menu item, and then 'Organizer'. Then select your device and then select the 'Device Logs' tab. For some reason (for me at least) viewing the logs seems flaky, so if nothing shows up, you may need to completely quit Xcode and restart it.
It gets written to the Console log on the device. You can read it in the "Organizer" within Xcode when your device is connected.
answer_1
5318342
what happens to NSLog info when running on a device? <sep> what happens to NSLog info when running on a device? Where does the text go? Does it get saved? Is it therefore a big overhead when running on a device, or does it effectively get sent to null?
Your device will continue logging even when it's not connected to your mac. To see the logs, you need to open Xcode, click the 'Window' menu item, and then 'Organizer'. Then select your device and then select the 'Device Logs' tab. For some reason (for me at least) viewing the logs seems flaky, so if nothing shows up, you may need to completely quit Xcode and restart it.
It gets logged. You can retrieve it by connecting the device and looking in the Organizer in Xcode.
answer_1
5796764
From Photoshop actions to Photoshop scripting? <sep> I would like Photoshop to automatically execute the following task for a given folder: Load all PNG files in a given folder. Convert each file's mode to <code>RGB color</code> Add one layer to each file Save the files as PSD in the same folder I have been told that this can be done with Photoshop scripting, but I don't know how to get started since unfortunately I don't have much experience with JavaScript. One thing I know is that I can't run the task above using <code>Actions</code> because when I record the last step (4), Photoshop records the action to save the PSD files in the folder that I use when recording the macro (instead of the one used to load the original PNG files). In other words, it fixes the destination folder to the one used in the macro. This takes me to the following question: Is there a way to automatically generate the Photoshop Javascript code that runs a given action? If so, I wouldn't mind learning how to modify the script to fix the above folder problem.
Let me answer the question you actually asked in bold: there is a tool that automatically generates the JavaScript for the actions and events taking place in Photoshop. It is called the Script Listener. After using the Script Listener to record your actions, review the log and make your selective edits. To begin using the Script Listener: 1) Close Photoshop. 2) Copy the ScriptListener.8li file from the C:\Program Files\Adobe\Adobe Photoshop CS5\Scripting\Utilities folder. 3) Paste the file into the C:\Program Files\Adobe\Adobe Photoshop CS5\Plug-ins\Automate folder. 4) Run Photoshop and perform the actions you want to happen in your script. 5) Close Photoshop and delete the copy of the Script Listener from the Automate folder. 6) Edit the log file that the Script Listener places on your desktop. To get your newfangled script into Photoshop, place the file you've created (with a .jsx extension) into C:\Program Files\Adobe\Adobe Photoshop CS5\Presets\Scripts.
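For comparison with what the Script Listener records, a hand-written sketch of the batch task in Photoshop's ExtendScript dialect might look like the snippet below. It only runs inside Photoshop (File > Scripts), the folder path is a placeholder you would change, and it is an untested outline rather than a finished script.

```javascript
// Sketch: open every PNG in a folder, convert to RGB, add a layer, save as PSD.
// Runs only inside Photoshop's ExtendScript engine; adjust the path for your machine.
var folder = new Folder("C:/path/to/pngs");   // placeholder path
var files = folder.getFiles("*.png");

for (var i = 0; i < files.length; i++) {
    var doc = app.open(files[i]);

    // Step 2: convert the document's mode to RGB color.
    if (doc.mode != DocumentMode.RGB) {
        doc.changeMode(ChangeMode.RGB);
    }

    // Step 3: add one (empty) layer.
    doc.artLayers.add();

    // Step 4: save as PSD in the same folder, keeping the base file name.
    var psdFile = new File(files[i].fullName.replace(/\.png$/i, ".psd"));
    doc.saveAs(psdFile, new PhotoshopSaveOptions(), true);
    doc.close(SaveOptions.DONOTSAVECHANGES);
}
```

Because the destination is derived from each source file's own path, this avoids the fixed-folder problem you hit when recording the save step as an Action.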
Look for the file SaveAsDifferentFileType.jsx on your computer; I think you could use it as a starting point. There is no way that I know of to generate this code automatically. I think there is no way around learning how it works. Here is the documentation: http://www.adobe.com/devnet/photoshop/scripting.html And here is a tutorial that will tell you where to begin: http://morris-photographics.com/photoshop/tutorials/scripting1.html If you are using a Mac you could try the Automator Photoshop actions: http://www.completedigitalphotography.com/?p=339 They will let you do what you want, without any programming know-how.
answer_1
pvjx8z
What is the scientific consensus about the polygraph (lie detector)? I got a new employment where they sent me to a polygraph test in order to continue with the process, I was fine and got the job but keep wondering if that is scientifically accurate, or even if it is legal, I'm not in the US btw.
It is useful for measuring how nervous the subject is. It is useless for detecting lies by those who are not afraid of being caught lying. It is easy to make yourself nervous enough during the preliminary test establishing a baseline that the normal level of nervousness around lying won't register as elevated.
Everyone knows the polygraph is highly inaccurate. The point of the test is not to catch you lying with the test, but to scare you into telling the truth. If, for example, on a job application you said that you never did drugs when in actuality you used to smoke weed, they are hoping that they can scare you into admitting the truth on the polygraph (and it often works). This is why you must remember that you have two options when applying for work in a place that employs polygraph tests: 1. Be totally honest at all times or 2. Be perfectly consistent with your lies. It is immensely unlikely that you will be disqualified because the polygraph says you are lying. It is almost unquestionable, however, that you will be disqualified if you contradict previous statements when strapped to the machine.
answer_2
fvzw99
Are engines that are meant to be used at full throttle designed differently than engines that aren’t? For example most boats/jet skis are used at full throttle pretty much the whole time you’re on one, but for cars it’s not an everyday thing to put the pedal to the metal (for most people). With that, what design changes are made to the engine/powertrain to run more efficiently/reliably/safely at full throttle?
Boats and jet skis don’t run at full throttle; their throttles are dialed back, and usually the limiting factors are conditions around cavitation or slip, which can pose safety concerns like a runaway engine scenario or catastrophic vibrations. They can run at high loads because they are not cooled by air-cooled radiators: they are water cooled through and through, and heat transfer to water is much more efficient than through a radiator. Also don’t forget that throttle position != speed != load.
Not that the other answers are wrong, but a huge part of it also has to do with the amount of time expected to be put on the engine. Recreational boat engines are pretty much universally derived from car engines, but are expected to run just a fraction of the time. A recreational boat that sees 50 hours a year would be the exception, this is roughly equivalent to a car that drives 2,000 miles a year.
answer_2
fvzw99
Are engines that are meant to be used at full throttle designed differently than engines that aren’t? For example most boats/jet skis are used at full throttle pretty much the whole time you’re on one, but for cars it’s not an everyday thing to put the pedal to the metal (for most people). With that, what design changes are made to the engine/powertrain to run more efficiently/reliably/safely at full throttle?
Boats and jet skis don’t run at full throttle; their throttles are dialed back, and usually the limiting factors are conditions around cavitation or slip, which can pose safety concerns like a runaway engine scenario or catastrophic vibrations. They can run at high loads because they are not cooled by air-cooled radiators: they are water cooled through and through, and heat transfer to water is much more efficient than through a radiator. Also don’t forget that throttle position != speed != load.
Something everyone else missed. From a cam and induction standpoint, partial throttle performance is secondary. Racing engines for instance, even on a road course, commonly operate at an average throttle setting above 94%. The driver is expected to keep the speed up and keep it in the right gear to stay in the power band (another reason for 7-10 speed transmissions). If you are complaining about off-throttle performance you aren't driving the thing hard enough.
answer_2
fvzw99
Are engines that are meant to be used at full throttle designed differently than engines that aren’t? For example most boats/jet skis are used at full throttle pretty much the whole time you’re on one, but for cars it’s not an everyday thing to put the pedal to the metal (for most people). With that, what design changes are made to the engine/powertrain to run more efficiently/reliably/safely at full throttle?
As others have said, a lot of it has to do with heat management. For most marine engines, the cooling is considerably better than that of a typical automobile. And that's a good thing, as the demand on these engines is MUCH higher than most things on wheels...there is no coasting. There's a huge load at higher throttle just to get the boat on plane. Once it's there, the throttles are dialed back but the engines still have to work under higher than average load constantly to keep it on plane. That's why the fuel burn numbers are so high...2-3 mpg is in the ballpark for larger boats. Jet skis are considerably better due to their weight, but you could still burn through a 10 or 15 gallon tank in a day easily. For boats with diesels (or any boat, really), it's not so much about the throttle opening as it is about the engine operating where it makes power most efficiently. Diesels have a very narrow window of power, so it is critically important for them to be propped based on desired speed, load, etc. Overpropped and the engine will be lugging all the time; underpropped and it'll be running at too high of an rpm and out of its designed operational range. The same applies for gasoline marine engines...typically you'd load the boat as you would on a typical day and then do a WOT run. What you're looking for is that the engine reaches its designed max rpm...usually somewhere around 6,000. If it doesn't, you're overpropped and need to be saving for a new engine (it's lugging constantly). Outside of the marine consideration, yes, engines that are designed to operate at peak power (not necessarily WOT) at a constant duty cycle are designed differently. Diesels are built much stronger than a typical gasoline engine, partly because of the higher compression ratio and loads demanded, but also because they usually operate within a very narrow RPM range with a lot of load behind them.
This is true for marine, industrial, and OTR truck engines, and even "light duty" pickups with diesels. They all have stronger blocks, better cooling systems, etc. to handle the loads. Even gasoline gensets (the Ford 300 comes to mind) are much more industrially constructed. Even my F-250 with a 7.3L diesel reflects this. It runs best between 1800-2200 RPM. That's the sweet spot where the max torque and max hp more or less average out. Anything outside of that range and it falls off quickly. It's no coincidence that at 70 MPH in sixth gear (technically 5th, but first gear is a granny), the engine is turning exactly 2,000 rpm, dead in the middle of peak torque and peak hp. The engine actually runs BETTER and produces more power with a load behind it, the way it was designed, than it does empty. One thing to remember about diesels is that they aren't throttled. Engine RPM is controlled by the amount of fuel injected, and minor variations in timing, etc. That's why you can get a runaway if the engine has an unmetered fuel source (like a blown turbo pedestal seal).
Boats and jet skis don’t run at full throttle; their throttles are dialed back, and usually the limiting factors are conditions around cavitation or slip, which can pose safety concerns like a runaway engine scenario or catastrophic vibrations. They can run at high loads because they are not cooled by air-cooled radiators: they are water cooled through and through, and heat transfer to water is much more efficient than through a radiator. Also don’t forget that throttle position != speed != load.
answer_1
fvzw99
Are engines that are meant to be used at full throttle designed differently than engines that aren’t? For example most boats/jet skis are used at full throttle pretty much the whole time you’re on one, but for cars it’s not an everyday thing to put the pedal to the metal (for most people). With that, what design changes are made to the engine/powertrain to run more efficiently/reliably/safely at full throttle?
As others have said, a lot of it has to do with heat management. For most marine engines, the cooling is considerably better than that of a typical automobile. And that's a good thing, as the demand on these engines is MUCH higher than most things on wheels...there is no coasting. There's a huge load at higher throttle just to get the boat on plane. Once it's there, the throttles are dialed back but the engines still have to work under higher than average load constantly to keep it on plane. That's why the fuel burn numbers are so high...2-3 mpg is in the ballpark for larger boats. Jet skis are considerably better due to their weight, but you could still burn through a 10 or 15 gallon tank in a day easily. For boats with diesels (or any boat, really), it's not so much about the throttle opening as it is about the engine operating where it makes power most efficiently. Diesels have a very narrow window of power, so it is critically important for them to be propped based on desired speed, load, etc. Overpropped and the engine will be lugging all the time; underpropped and it'll be running at too high of an rpm and out of its designed operational range. The same applies for gasoline marine engines...typically you'd load the boat as you would on a typical day and then do a WOT run. What you're looking for is that the engine reaches its designed max rpm...usually somewhere around 6,000. If it doesn't, you're overpropped and need to be saving for a new engine (it's lugging constantly). Outside of the marine consideration, yes, engines that are designed to operate at peak power (not necessarily WOT) at a constant duty cycle are designed differently. Diesels are built much stronger than a typical gasoline engine, partly because of the higher compression ratio and loads demanded, but also because they usually operate within a very narrow RPM range with a lot of load behind them.
This is true for marine, industrial, and OTR truck engines, and even "light duty" pickups with diesels. They all have stronger blocks, better cooling systems, etc. to handle the loads. Even gasoline gensets (the Ford 300 comes to mind) are much more industrially constructed. Even my F-250 with a 7.3L diesel reflects this. It runs best between 1800-2200 RPM. That's the sweet spot where the max torque and max hp more or less average out. Anything outside of that range and it falls off quickly. It's no coincidence that at 70 MPH in sixth gear (technically 5th, but first gear is a granny), the engine is turning exactly 2,000 rpm, dead in the middle of peak torque and peak hp. The engine actually runs BETTER and produces more power with a load behind it, the way it was designed, than it does empty. One thing to remember about diesels is that they aren't throttled. Engine RPM is controlled by the amount of fuel injected, and minor variations in timing, etc. That's why you can get a runaway if the engine has an unmetered fuel source (like a blown turbo pedestal seal).
"Full throttle" is usually more dialed back than you think on those, because the engine is converting energy to push water instead of spin a wheel. The watercraft I've learned on also use the water around them for cooling, actually routing it in and around the engine, which helps dramatically in letting the engine run high.
answer_1
fvzw99
Are engines that are meant to be used at full throttle designed differently than engines that aren’t? For example most boats/jet skis are used at full throttle pretty much the whole time you’re on one, but for cars it’s not an everyday thing to put the pedal to the metal (for most people). With that, what design changes are made to the engine/powertrain to run more efficiently/reliably/safely at full throttle?
Oh yes. If you’re looking to build a real high-performance street motor, keep an eye open for a marine V8 to start with. Forged cranks, rods, and pistons are the norm because, yes, they are built to run full throttle all day long. Some blocks are stronger too, but that varies. If it was an old-school SBC V8 you could be assured it has 4-bolt main bearing caps instead of the vanilla 2-bolt caps. The accessories are quite different; water-cooled exhaust manifolds and such won’t carry over to street use. The fueling profile is not suitable for the street, the air cleaner is insufficient, etc.
Piston-driven aircraft are another topic that hasn't been touched on. Reliability is the big topic when it comes to recreational and other small aircraft. Piston aircraft engines have oiling systems that can often work regardless of the roll or pitch angle of the engine. This is not something you find on boats or stationary diesels or OTR diesel trucks or cars. This is executed in many different ways, but generically, dry sump oil systems are common, though wet sump aircraft engines still exist. For fueling, fuel injection is essentially the same, but if you're using a carburetor for fuel/air mixing then you will likely have a heater on the carburetor, or in the air stream prior to it, to prevent icing inside the carburetor and the resulting power loss in low temperature, high humidity conditions. There are redundant systems everywhere in an aircraft engine as well: redundant air intake, redundant spark, redundant fuel pumps and lines, redundant air/fuel mixing. You will find temperature sensors everywhere too: cylinder head temperature sensors (even on the air cooled engines), exhaust gas temperature, intake air temperature, oil temperature, etc. Even on many older analog engines. These sensors can give you a good idea of the load on the engine at any given time. A high sensor count is not limited to modern fuel injected engines.
answer_2
fvzw99
Are engines that are meant to be used at full throttle designed differently than engines that aren’t? For example most boats/jet skis are used at full throttle pretty much the whole time you’re on one, but for cars it’s not an everyday thing to put the pedal to the metal (for most people). With that, what design changes are made to the engine/powertrain to run more efficiently/reliably/safely at full throttle?
To actually answer your question, sometimes they are designed differently. Thinking specifically about generators and pressure washers, they usually run full throttle from startup to shut down, unless it's a newer style with an "idle down" feature. I've tried using an old generator engine on a go-kart and one of the problems was idle quality. It wouldn't stay running at an rpm below where the clutch would engage so I had to replace the entire carburettor.
Piston-driven aircraft are another topic that hasn't been touched on. Reliability is the big topic when it comes to recreational and other small aircraft. Piston aircraft engines have oiling systems that can often work regardless of the roll or pitch angle of the engine. This is not something you find on boats or stationary diesels or OTR diesel trucks or cars. This is executed in many different ways, but generically, dry sump oil systems are common, though wet sump aircraft engines still exist. For fueling, fuel injection is essentially the same, but if you're using a carburetor for fuel/air mixing then you will likely have a heater on the carburetor, or in the air stream prior to it, to prevent icing inside the carburetor and the resulting power loss in low temperature, high humidity conditions. There are redundant systems everywhere in an aircraft engine as well: redundant air intake, redundant spark, redundant fuel pumps and lines, redundant air/fuel mixing. You will find temperature sensors everywhere too: cylinder head temperature sensors (even on the air cooled engines), exhaust gas temperature, intake air temperature, oil temperature, etc. Even on many older analog engines. These sensors can give you a good idea of the load on the engine at any given time. A high sensor count is not limited to modern fuel injected engines.
answer_2
fvzw99
Are engines that are meant to be used at full throttle designed differently than engines that aren’t? For example most boats/jet skis are used at full throttle pretty much the whole time you’re on one, but for cars it’s not an everyday thing to put the pedal to the metal (for most people). With that, what design changes are made to the engine/powertrain to run more efficiently/reliably/safely at full throttle?
To actually answer your question, sometimes they are designed differently. Thinking specifically about generators and pressure washers, they usually run full throttle from startup to shut down, unless it's a newer style with an "idle down" feature. I've tried using an old generator engine on a go-kart and one of the problems was idle quality. It wouldn't stay running at an rpm below where the clutch would engage so I had to replace the entire carburettor.
"Full throttle" is usually more dialed back than you think on those, because the engine is converting energy to push water instead of spin a wheel. The watercraft I've learned on also use the water around them for cooling, actually routing it in and around the engine, which helps dramatically in letting the engine run high.
answer_1
fvzw99
Are engines that are meant to be used at full throttle designed differently than engines that aren’t? For example most boats/jet skis are used at full throttle pretty much the whole time you’re on one, but for cars it’s not an everyday thing to put it petal to the metal (for most people). With that, what design changes are made to the engine/powertrain to run more efficiently/reliably/safely at full throttle?
Lots of good answers, so I'm just going to share this: Pe**dal** to the metal. Think like accelerating a car. How do you do that? Hold the pedal down flat and it rests on the firewall (made of metal), hence pedal to the metal.
Sort of. The design and calibration of an engine aren't always done for a single use case, so in the case of a marine application it's likely that certain components will be changed, and a calibration will be developed for that application, but the fundamental architecture of the engine will remain mostly the same. If a 100% load duty cycle is an expected assumption with the design of an engine, specific components will be designed with that in mind: cam profiles, fuel injection maps, and spark timings will all be calibrated to meet the emission requirements at that condition, and deliver the best fuel efficiency at peak power instead of lower load conditions. As far as actual component design, most aspects should remain unchanged since a marine application will likely be pretty close to the 95th percentile use case that manufacturers design for, so other than validating any application specific parts, it should be pretty similar.
answer_2
zm7a25
why doesn't my sauce stick to my spaghetti? Making sauce from scratch tomatoes, tomato paste and using ground turkey in it. What am I not doing. Very amateur cook. It tastes good but I'm still bummed.
Don't rinse your pasta!
What kind of pasta are you using? A pasta with a lot of grooves will hold sauce better. If your sauce is especially chunky (like mine is) something with a lot of grooves or a hole in the middle like penne would be best.
answer_2
zm7a25
why doesn't my sauce stick to my spaghetti? Making sauce from scratch tomatoes, tomato paste and using ground turkey in it. What am I not doing. Very amateur cook. It tastes good but I'm still bummed.
I agree with finishing together but have to do separate due to someone’s tomato allergy… very light sauce
Buy high-end spaghetti. Cheap stuff you find at the supermarket (like Barilla, Dececco, store brand etc) is smooth like print paper. Sauce tends to slide off it. High end pasta is made in bronze machines and has many imperfections on it. But these imperfections create tiny dimples and crevices that add texture and hold your sauce better. It's worth shelling out a few more dollars for.
answer_2
zm7a25
why doesn't my sauce stick to my spaghetti? Making sauce from scratch tomatoes, tomato paste and using ground turkey in it. What am I not doing. Very amateur cook. It tastes good but I'm still bummed.
I agree with finishing together but have to do separate due to someone’s tomato allergy… very light sauce
Too much liquid; this happens especially if you use fresh tomatoes. Try cooking it a bit longer to boil off some liquid, or add a little bit of starch to it.
answer_2
zm7a25
why doesn't my sauce stick to my spaghetti? Making sauce from scratch tomatoes, tomato paste and using ground turkey in it. What am I not doing. Very amateur cook. It tastes good but I'm still bummed.
There is a good chance of it happening if you rinse your pasta or if you add oil to the water
Add a little cornflour to thicken it if it’s watery. When I say a little, I mean like a half tablespoon or something and see what happens.
answer_1
zm7a25
why doesn't my sauce stick to my spaghetti? Making sauce from scratch tomatoes, tomato paste and using ground turkey in it. What am I not doing. Very amateur cook. It tastes good but I'm still bummed.
Add a little cornflour to thicken it if it’s watery. When I say a little, I mean like a half tablespoon or something and see what happens.
Drain the pasta well. Then toss it with the hot sauce in the pan. After a minute or so it’ll stick. (Do not put plain pasta in a dish and then plop sauce on top.) A little parm can help, too.
answer_2
zm7a25
why doesn't my sauce stick to my spaghetti? Making sauce from scratch tomatoes, tomato paste and using ground turkey in it. What am I not doing. Very amateur cook. It tastes good but I'm still bummed.
Oil, comrade, you forgot to add oil to the sauce mixture. Tip: pour room-temperature water on the pasta after draining off the hot water, then pour out the room-temperature water and add the sauce.
If you're adding oil to your pasta during cooking, don't.
answer_2
zm7a25
why doesn't my sauce stick to my spaghetti? Making sauce from scratch tomatoes, tomato paste and using ground turkey in it. What am I not doing. Very amateur cook. It tastes good but I'm still bummed.
Are you putting oil in the water?
Oil, comrade, you forgot to add oil to the sauce mixture. Tip: pour room-temp water on the pasta after removing the hot water from it, then pour out the room-temp water and add the sauce.
answer_1
kbo99c
The Simpsons] How was Homer able to eat all of Hell's donuts with no ill effects, but nearly went blind from eating 64 slices of cheese? [Homer consuming all of Hell's donuts and having the audacity to ask for more Homer struggling to eat 64 slices of american cheese
Perhaps because he is dead he no longer has a real body or any actual feelings and is just a 'spirit'. Maybe it is only the way you perceive something in hell that makes it torture.
The devil's donuts are made of far less toxic ingredients than American cheese
answer_2
kbo99c
The Simpsons] How was Homer able to eat all of Hell's donuts with no ill effects, but nearly went blind from eating 64 slices of cheese? [Homer consuming all of Hell's donuts and having the audacity to ask for more Homer struggling to eat 64 slices of american cheese
Because Homer loves donuts
Couple of things. He was in _hell_, where the punishment was designed to make him sick of donuts. You can't get sick of donuts if you get full and no longer want to eat them. Second, hell-donuts aren't nearly as toxic as processed cheese products. It's not even a contest.
answer_2
kbo99c
The Simpsons] How was Homer able to eat all of Hell's donuts with no ill effects, but nearly went blind from eating 64 slices of cheese? [Homer consuming all of Hell's donuts and having the audacity to ask for more Homer struggling to eat 64 slices of american cheese
He's in Hell. How did they dice him up on the conveyor belt without killing him when he first went down? How do damned souls not die after getting tortured, etc.? It's Hell.
Because Homer loves donuts
answer_1
t1sp9m
Explain like I'm five years old what does a mathematician actually DO? Im not at all math savvy. In fact the opposite. I was having a conversation about math with a colleague and I realized that other than teaching i have no idea what someone with a math degree or a “Mathematician” actually does? Im curious now. Whats their day like? Who employs them?
Lots of industries and companies need to build models of complex systems: business analysts, stock traders, insurance actuaries, sports teams, engineering firms. Other people can also build such models, but mathematicians are the specialists at this. The tech giants employ lots of mathematicians to analyse human behaviour (you may have heard of big data). Then there are the complex systems that companies themselves need. The software that banks use to keep track of and move money around are naturally incredibly complex. Again a mathematician would have useful skills here.
A programmer once told me that as what your code is trying to do gets more and more complicated, programming and mathematics converge. A big part of it is that for a lot of things, making a rudimentary algorithm to solve a problem is pretty easy. But this rudimentary algorithm will require a lot of steps and computations. And each step costs money in hardware (computer parts don't last forever), energy (electricity ain't cheap), and time (you actually need a solution). Mathematics can be used to improve algorithms and make things faster, easier, and cheaper. And depending on time constraints, usable. The Alan Turing film about code breaking shows all of this. In the film Turing builds a machine and has a rudimentary calculation for breaking the Enigma code relatively early in the story. But the algorithm is rudimentary, and after computing for a week it's still not done; and since the code changes every day this is useless. So they spend all day every day improving the algorithm to the point where the machine can perform all the calculations in minutes. Lots of real world problems are like this. Logistics systems change in scope and in what products and locations are involved constantly, so you also want your calculations done ASAP.
answer_2
t1sp9m
Explain like I'm five years old what does a mathematician actually DO? Im not at all math savvy. In fact the opposite. I was having a conversation about math with a colleague and I realized that other than teaching i have no idea what someone with a math degree or a “Mathematician” actually does? Im curious now. Whats their day like? Who employs them?
Lots of industries and companies need to build models of complex systems: business analysts, stock traders, insurance actuaries, sports teams, engineering firms. Other people can also build such models, but mathematicians are the specialists at this. The tech giants employ lots of mathematicians to analyse human behaviour (you may have heard of big data). Then there are the complex systems that companies themselves need. The software that banks use to keep track of and move money around are naturally incredibly complex. Again a mathematician would have useful skills here.
Sometimes mathematicians are also useful in industry because of the way they learned to reason and solve problems. Math studies require the ability to focus on one problem and find the most efficient solution for it; they require rational thinking (as logic is also a great part of mathematics); and mathematicians are used to not being able to solve a problem on first sight, but to think and riddle about it until it's done. These qualities are useful in a lot of fields. E.g., I know a lot of managers and consultants who are mathematicians but obviously mainly use their soft skills instead of the math they have learned.
answer_2
t1sp9m
Explain like I'm five years old what does a mathematician actually DO? Im not at all math savvy. In fact the opposite. I was having a conversation about math with a colleague and I realized that other than teaching i have no idea what someone with a math degree or a “Mathematician” actually does? Im curious now. Whats their day like? Who employs them?
Mathematician here. My regular job is as an analyst. Analyzing large data sets to identify trends, writing formulas, and yes debating the proper way to calculate metrics with my colleagues. I do a lot of work related to Lean Six Sigma, which is optimization and problem solving. Mathematicians are excellent problem solvers! We use logical thinking and break down problems into smaller components to identify root causes. Also, some statistics is involved with creating control charts and calculating the confidence intervals of the analyses.
Lots of industries and companies need to build models of complex systems: business analysts, stock traders, insurance actuaries, sports teams, engineering firms. Other people can also build such models, but mathematicians are the specialists at this. The tech giants employ lots of mathematicians to analyse human behaviour (you may have heard of big data). Then there are the complex systems that companies themselves need. The software that banks use to keep track of and move money around are naturally incredibly complex. Again a mathematician would have useful skills here.
answer_1
t1sp9m
Explain like I'm five years old what does a mathematician actually DO? Im not at all math savvy. In fact the opposite. I was having a conversation about math with a colleague and I realized that other than teaching i have no idea what someone with a math degree or a “Mathematician” actually does? Im curious now. Whats their day like? Who employs them?
I interviewed a pure mathematician for a course in university. Very interesting gent. He said most of what he does has no real application. At the time he spent a lot of time thinking about the trefoil knot. His average work process was to think about different problems and things that interested him for 2-3 months, then he'd lock himself in a room and write a paper in 1 or 2 days. Rinse and repeat. But some of those solutions will have an application. Not Explain like I'm five years old, but the pathfinding solutions Google Maps uses have relations to geometric structures (paths and nodes), I believe. So all that work thinking about 2000-sided shapes does make sense when you try to map out all the possible houses in a city.
Lots of industries and companies need to build models of complex systems: business analysts, stock traders, insurance actuaries, sports teams, engineering firms. Other people can also build such models, but mathematicians are the specialists at this. The tech giants employ lots of mathematicians to analyse human behaviour (you may have heard of big data). Then there are the complex systems that companies themselves need. The software that banks use to keep track of and move money around are naturally incredibly complex. Again a mathematician would have useful skills here.
answer_1
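The "paths and nodes" remark above is usually made concrete with a shortest-path algorithm. Here is a minimal, self-contained sketch of Dijkstra's algorithm (the graph, place names, and weights are invented for illustration; this is not how Google Maps actually works, just the textbook idea behind it):

```python
import heapq

def shortest_path_cost(graph, start, goal):
    # Dijkstra's algorithm: repeatedly settle the cheapest
    # not-yet-visited node until we reach the goal.
    dist = {start: 0}
    heap = [(0, start)]          # (cost so far, node)
    visited = set()
    while heap:
        cost, node = heapq.heappop(heap)
        if node == goal:
            return cost
        if node in visited:
            continue
        visited.add(node)
        for nxt, weight in graph.get(node, []):
            new_cost = cost + weight
            if new_cost < dist.get(nxt, float("inf")):
                dist[nxt] = new_cost
                heapq.heappush(heap, (new_cost, nxt))
    return float("inf")          # goal unreachable

# A toy road network: node -> list of (neighbour, travel cost).
city = {
    "home": [("park", 2), ("mall", 5)],
    "park": [("mall", 1), ("office", 7)],
    "mall": [("office", 3)],
}

# Cheapest route home -> office is home -> park -> mall -> office, cost 6.
assert shortest_path_cost(city, "home", "office") == 6
```

The same paths-and-nodes abstraction scales from this toy grid to every street in a city, which is the connection the answer is gesturing at.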
v2z4kz
It drives me nuts when I ask why electrons don't smash into each other and someone says "the Pauli Exclusion principle" As if the electrons get on the phone and call up Pauli and he tells them they can't smash together.. so they go "ok then". In other words, its a non answer. Whats the real answer? What is the force that causes this principle? Or is it just that we have no idea and have just noticed that thats the way things behave?
Second one. It just follows from the way quantum mechanics works. If you write out the state where two electrons occupy the same position with the same spin, you find that it has probability 0 of existing.
In addition to what others have said, Pauli doesn't have much to do with "smashing into each other". Electrons can still scatter off each other. They just can't have identical orbitals.
answer_2
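The "probability 0" claim above can be made concrete with the standard two-fermion construction (a sketch; \(\varphi_a, \varphi_b\) denote single-particle states):

```latex
\psi(x_1, x_2) \;=\; \frac{1}{\sqrt{2}}
\bigl[\varphi_a(x_1)\,\varphi_b(x_2) \;-\; \varphi_b(x_1)\,\varphi_a(x_2)\bigr]
```

Setting \(a = b\) (both electrons in the same state) makes the bracket vanish identically, so \(\psi \equiv 0\) and \(|\psi|^2 = 0\): the state simply does not exist, which is the exclusion principle falling out of antisymmetry rather than any force.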
v2z4kz
It drives me nuts when I ask why electrons don't smash into each other and someone says "the Pauli Exclusion principle" As if the electrons get on the phone and call up Pauli and he tells them they can't smash together.. so they go "ok then". In other words, its a non answer. Whats the real answer? What is the force that causes this principle? Or is it just that we have no idea and have just noticed that thats the way things behave?
In addition to what others have said, Pauli doesn't have much to do with "smashing into each other". Electrons can still scatter off each other. They just can't have identical orbitals.
The Pauli Exclusion principle.
answer_1
v2z4kz
It drives me nuts when I ask why electrons don't smash into each other and someone says "the Pauli Exclusion principle" As if the electrons get on the phone and call up Pauli and he tells them they can't smash together.. so they go "ok then". In other words, its a non answer. Whats the real answer? What is the force that causes this principle? Or is it just that we have no idea and have just noticed that thats the way things behave?
They call it an “exchange force”, but it’s not really a force. It’s a property of the symmetries the wave function must satisfy, namely under particle exchange. Electrons are identical particles. That means that if you swapped the two electrons you would not be able to tell which one was in which state. Mathematically, the probability density must remain the same when you interchange the particles. The total wave function solutions consistent with that come in a plus and a minus version, where plus corresponds to bosons and minus to fermions. These are symmetric and antisymmetric wave functions. If we include spin, our total wave function becomes the product of spin and position wave functions. It turns out that fermions always have an antisymmetric total wave function (for reasons other people have commented on: spin statistics). To get an antisymmetric total, we need one symmetric part and one antisymmetric part. If the spatial one is symmetric, then the spin part has to be antisymmetric (aka same spatial state, opposite spins). If the spin part is symmetric, they have to be in antisymmetric spatial states (same spins = can’t occupy the same spatial state). Since they can't be in the same spatial state, they effectively are repelled. If two electron wave functions begin to overlap, they don’t like that and “push” apart.
Imo your issue is in still thinking of electrons as billiard-ball particles. Electrons are discrete excitations in quantum fields, and depending on the structure of the field they have certain properties. One of those properties is that they cannot 'overlap'. There isn't a *force* per se, it's just the nature of the field.
answer_2
v2z4kz
It drives me nuts when I ask why electrons don't smash into each other and someone says "the Pauli Exclusion principle" As if the electrons get on the phone and call up Pauli and he tells them they can't smash together.. so they go "ok then". In other words, its a non answer. Whats the real answer? What is the force that causes this principle? Or is it just that we have no idea and have just noticed that thats the way things behave?
You talk about "forces" as if they're an inherently more objective way to describe reality than things like the Pauli Exclusion Principle when in fact it's quite the opposite. But still, even if you ignore all quantum mechanics, electrons are charged particles, how would you smash them "into" each other?
Electrons are waves, they overlap with each other spatially. They interact and in some situations that interaction could seem like a smash, but in an atom they have found a stable way to be physically overlapping.
answer_1
v2z4kz
It drives me nuts when I ask why electrons don't smash into each other and someone says "the Pauli Exclusion principle" As if the electrons get on the phone and call up Pauli and he tells them they can't smash together.. so they go "ok then". In other words, its a non answer. Whats the real answer? What is the force that causes this principle? Or is it just that we have no idea and have just noticed that thats the way things behave?
Electrons are waves, they overlap with each other spatially. They interact and in some situations that interaction could seem like a smash, but in an atom they have found a stable way to be physically overlapping.
Often the answers in relativistic QM are purely maths; it's hard to give a physical picture of Clifford algebras and anticommutation relations to someone who hasn't dealt with the math. It's a trade-off at times.
answer_2