| qid (int64) | question (string) | date (string) | metadata (list) | response_j (string) | response_k (string) |
|---|---|---|---|---|---|
55,653 | I am looking to publish an audiobook on ACX, and I am 13 years old. **Is there any age restriction on publishing an audiobook on ACX?** For example, during registration, it asks me if I hold a US tax ID for the payments from ACX. | 2021/04/22 | [
"https://writers.stackexchange.com/questions/55653",
"https://writers.stackexchange.com",
"https://writers.stackexchange.com/users/49616/"
] | You must be 18 to publish on ACX.
But you can probably convince a parent/guardian to publish for you.
Amazon will also publish audiobooks, but you will still need a parent/guardian to publish for you since you are under 18. | There are pretty good reasons for most age restrictions.
In creating a book, an author cannot help but reveal a great deal about their personal worldview, their values, their beliefs and their dreams. At such a young age, most of that internal landscape is still evolving. The next five years of your life may include massive changes to any or all of those elements and the sum of those changes may leave you far from the person you currently are.
In publishing a book today, you may permanently associate your name, your pen-name and your talent, with enthusiasms and values which some future you might not share. If you are fortunate enough to succeed in earning a large following, that future you may resent being typecast by the words you've crafted today.
That you are already writing is spectacular! If you keep it up and keep refining your abilities, you will have a major advantage over your peers when you all finally reach legal publishing age. And more importantly, you will have five years of accumulated works, all meticulously reviewed, edited and finalized, ready all at once.
As a reader, compare the difference in experience between reading a trilogy in series over reading the first book of a trilogy and then waiting for years (or forever) for the second book. Use the next five years to create a complete trilogy (or several) so that you can give your readers that complete trilogy experience on day one.
Above all else, Keep Writing! |
55,653 | I am looking to publish an audiobook on ACX, and I am 13 years old. **Is there any age restriction on publishing an audiobook on ACX?** For example, during registration, it asks me if I hold a US tax ID for the payments from ACX. | 2021/04/22 | [
"https://writers.stackexchange.com/questions/55653",
"https://writers.stackexchange.com",
"https://writers.stackexchange.com/users/49616/"
] | You must be 18 to publish on ACX.
But you can probably convince a parent/guardian to publish for you.
Amazon will also publish audiobooks, but you will still need a parent/guardian to publish for you since you are under 18. | 18 (or the legal age of majority) is the minimum. Minors aren’t able to enter contracts, which is required for publishing.
There are a number of [steps required for publishing](https://www.acx.com/help/legal-contracts/200485430), and the first is [opening an account](https://www.acx.com/help/account-holder-agreement/201481940): “To open an account on ACX, you must be a resident of the United States, the United Kingdom, Canada or the Republic of Ireland and be at least 18 years old or the legal age of majority in the jurisdiction in which you reside.”
I believe you can have your parent/guardian get your book published as an audiobook for you by giving them the [“Authority to Enter into this Agreement”](https://www.acx.com/help/ZXZ8Q2SFT6NQA2E). Still, I suggest contacting ACX support to ensure there are no legal problems. |
275,526 | [Wikipedia](https://en.wikipedia.org/wiki/Electromagnetic_radiation) says that
>
> Classically, electromagnetic radiation consists of electromagnetic
> waves, which are **synchronized oscillations of electric and magnetic**
> fields that propagate at the speed of light through a vacuum. The
> oscillations of the two fields are perpendicular to each other and
> perpendicular to the direction of energy and wave propagation, forming
> a transverse wave.
>
>
>
The page also includes this image:
>
> [](https://i.stack.imgur.com/QbLEW.png)
>
>
>
which shows that.
But I find that sometimes the wave is represented with the B-field at its peak on the nodes of the E-field, like here:
>
> [](https://i.stack.imgur.com/FAKu0.png)
>
>
>
taken [from Wikimedia Commons](https://commons.wikimedia.org/wiki/File:Photon_Spin_%2B1.PNG), and it would make some sense, too, considering that the field grows with acceleration, which is maximal there.
Can you please say whether the second picture is wrong, and whether those representations are **both** merely pictorial, fictional, simplified, arbitrary representations of an EM wave?
Do you know if modern instruments are able to record with precision the oscillations of the electric and magnetic fields when detecting photons (now that we have lots of collimated photons in laser beams, can you detect the fields at the emitter or receiver)? | 2016/08/21 | [
"https://physics.stackexchange.com/questions/275526",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
] | Radiation from the sun follows a [black body spectrum more or less](https://en.wikipedia.org/wiki/File:Solar_Spectrum.png), and is not coherent, i.e. the phases between different slices of sunlight are not defined. The photons come from innumerable incoherent de-excitations from the plasma of the sun's surface.
It can be simulated by plane waves impinging at all the frequencies of its black body spectrum, which is your first plot. Those functions describe plane waves.
Incoherent electromagnetic waves can be made coherent when passed through small openings, a slit for example, that is why interference fringes appear at single slits. The appearance of fringes validates experimentally the [plane wave functions describing the electromagnetic wave.](https://en.wikipedia.org/wiki/Electromagnetic_wave_equation#Plane_wave_solutions)
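As a small numeric sketch of the plane-wave solutions linked above: in a vacuum plane wave the E and B fields share the same phase factor, with B = E/c (the wavelength and amplitude below are arbitrary example values):

```python
import math

# Vacuum plane wave: E and B oscillate in phase, with B = E/c.
c = 2.99792458e8          # speed of light, m/s
E0 = 1.0                  # V/m, arbitrary amplitude
wavelength = 500e-9       # m, example value
k = 2 * math.pi / wavelength
omega = c * k

def fields(z, t):
    # Both fields carry the identical cos(kz - wt) factor,
    # so they peak (and cross zero) at the same points.
    phase = math.cos(k * z - omega * t)
    return E0 * phase, (E0 / c) * phase  # (E, B)

E, B = fields(0.0, 0.0)   # both at their maximum simultaneously
```

This is the situation the first picture shows: in the far field the two components are in phase, not shifted by a quarter wavelength.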
>
> Do you know if modern instruments are able to record with precision the oscillations of the electric and magnetic field when detecting a photon?
>
>
>
The photon is a quantum mechanical elementary particle, and classical beams and their electric and magnetic fields emerge from a superposition of innumerable photons.
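A quick numeric sketch of the energy each individual photon carries, E = h·nu = h·c/lambda (the 633 nm HeNe laser line is an assumed example, not from the question):

```python
# Photon energy E = h*nu = h*c/lambda, for an example 633 nm laser line
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
wavelength = 633e-9  # m, assumed example (HeNe laser)

nu = c / wavelength  # frequency of the classical beam
E = h * nu           # ~3.14e-19 J, i.e. about 1.96 eV per photon
```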
Photons, when detected individually, appear as a single point on a screen, depositing energy h\*nu, where nu is the frequency of the classical beam that was built up by such photons; at most one can detect in its interactions the spin it carries, ±ħ along its direction of motion. No electric or magnetic fields, because the information about them is carried in the wavefunction describing the photon, which is a complex function and is not directly measurable. Only in the confluence of innumerable photons does one reach the classical regime where electric and magnetic fields can be detected. Yes, there are antennas which detect and measure electric fields from electromagnetic radiation. | The 90° shift between the maximum of the electric field component and the maximum of the magnetic field component is a natural view of how photons propagate in free space. First, this is the situation in the near field of a radiating antenna: an electric field induces a magnetic field, which in turn induces an electric field, and so on. Second, this shift conserves the energy content of the photon at every point of its movement through space.
The derivative of sin is cos, of cos is −sin, of −sin is −cos, and perhaps it is possible to transform Maxwell's equations in such a way, isn't it?
Perhaps it is possible to interpret the far field of radio waves as not shifted by 90°, but my question about measurement results supporting such an interpretation has not turned up any source for these measurements. |
393,933 | The BJT diagram is shown below:

The voltage source at the base side is increased incrementally from 1V to 10V, with the voltage source at the collector side being constant.
The Beta values are recorded in OrCAD PSPICE tool:
* 1V - 147
* 2V - 168
* 3V - 174
* 4V - 176
* 5V - 177
* 6V - 176.9
* 7V - 176
* 8V - 174
* 9V - 173
* 10V - 158
The beta value increases from 1V and reaches its peak around 5V, and it starts dropping from there till 10V. I want to find the most appropriate DC amplification factor from these values, which will mainly be used in the DC analysis of a common-emitter BJT amplifier circuit. Should I take the mean of all the beta values from 1V to 10V as the most appropriate DC amplification factor, or something else? I am a bit confused here. | 2018/09/02 | [
"https://electronics.stackexchange.com/questions/393933",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/197460/"
] | The answer is that using a fixed beta (gain) for analysis is not the right way to do BJT analysis. They are given as a range because they vary for all sorts of reasons. That's why BJT circuits that work are designed to be very insensitive to the BJT's gain - they need to still work over the whole range.
Often a BJT circuit is designed to work for a gain of **at least**, say, 50 or 100. Then you just make sure the gain of the BJT you choose can't be less than that value, and you're done. | Jd043 - If I understand your question right, you are asking for a certain beta-value that can satisfy specific voltage gain requirements, correct?
In this case, I consider it as important to know how a bipolar transistor really works.
Please note that there is one single parameter that really matters - as far as voltage amplification is concerned: the **transconductance gm=d(Ic)/d(Vbe)**. This parameter is identical to the slope of the transfer characteristic Ic=f(Vbe).
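A numeric sketch of the transconductance relation gm = Ic/Vth, where Vth is the thermal voltage kT/q (the 1 mA bias point and 4.7 kΩ collector resistor below are assumed example values, not taken from the question):

```python
# Transconductance of a BJT: gm = Ic / Vth, with Vth = kT/q (~25.85 mV at 300 K)
K_BOLTZMANN = 1.380649e-23    # J/K
Q_ELECTRON = 1.602176634e-19  # C
T = 300.0                     # K, room temperature
Vth = K_BOLTZMANN * T / Q_ELECTRON  # ~0.02585 V

Ic = 1e-3                     # A, example DC bias point
gm = Ic / Vth                 # ~38.7 mS, independent of beta

# Small-signal voltage gain magnitude of a common-emitter stage,
# |Av| = gm * Rc, with an example 4.7 kOhm collector resistor:
voltage_gain = gm * 4700
```

Note how beta never appears: doubling beta at the same Ic leaves gm, and thus the voltage gain, unchanged.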
The actual value of gm depends on the chosen DC collector current only (**gm=Ic/Vth**) and does NOT depend on the beta-values. The beta value (called "current gain") determines the base current (and, hence, the input resistance) only. |
I want to know how to set up a mail server like Postfix on a Google Cloud VM instance.
I'm running Ubuntu 16.04 (and a LAMP stack) and can't get the mail server to send email from my website.
I have installed postfix, and opened port 25, but no luck.
Any ideas on how to proceed?
Error logs: Network is unreachable and Connection timed out | 2018/06/12 | [
"https://serverfault.com/questions/916231",
"https://serverfault.com",
"https://serverfault.com/users/454946/"
] | According to <https://cloud.google.com/compute/docs/tutorials/sending-mail/>, you cannot set up a mail server the usual way, as ports 25, 465 and 587 are blocked for outbound connections on Google Cloud. Instead, you might take a look at relaying services such as [Mailgun](https://mailgun.com) or [SendGrid](https://sendgrid.net), which allow sending through port 2525 or an API instead. These services might cost a little bit of money, however. | Update to @XanderSmeets answer:
>
> Due to the risk of abuse, connections to destination TCP Port 25 are
> always blocked when the destination is external to your VPC network.
> This includes using SMTP relay with Google Workspace.
>
>
> Google Cloud does not place any restrictions on traffic sent to
> external destination IP addresses using destination TCP ports 587 or
> 465. The implied allow egress firewall rule allows this traffic unless you've created egress deny firewall rules that block it.
>
>
>
Source:
<https://cloud.google.com/compute/docs/tutorials/sending-mail/> |
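A hedged sketch of the Postfix relay setup such services typically require: the relay hostname below is a placeholder, so substitute the host and credentials your provider's documentation gives you (port 2525 is the unblocked alternative mentioned above), assuming the SASL password file has been created and hashed with `postmap`:

```
# /etc/postfix/main.cf fragment — relay outbound mail through an external
# service on port 2525, since GCP blocks direct outbound port 25.
# [smtp.relay.example] is a placeholder; use your provider's documented host.
relayhost = [smtp.relay.example]:2525
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

After editing, reload Postfix (`sudo systemctl reload postfix`) and watch `/var/log/mail.log` to confirm the relay connection succeeds.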
68,879 | I'd like to run a jousting tournament within a D&D game as a nonlethal (but nevertheless dangerous) sporting event, rather than as an actual combat. Specifically, two mounted contestants with lances should ride past one another in parallel lanes, attempting to strike one another's shields with lances, seeking to knock the opponent from the mount.
**How does this work within the 5e rules?**
---
Example of the sort of answer I'm seeking:
>
> * Each pass is considered a separate combat.
> * The duration between the passes comprising a match is not long enough for a short rest.
>
>
> While the rules of D&D determine what the contestants are *practically* capable of doing, the in-game rules of the joust add further *social* constraints. The contestants may choose to break these rules, but if detected by the judges/onlookers, they will be considered cheats/unsportsmanlike and subject to disqualification/scandal.
>
>
> * On a signal, the contestants are expected to charge at and past each other from opposite ends of the lists. They're not allowed to be "creative" about their movement.
> * The contestants are expected to use a Readied action to shove (PH p.195) the opponent prone with the lance, rather than making a melee weapon attack for damage. Being knocked prone while mounted requires a DC 10 Dexterity check to avoid falling off the mount (PH p.198).
>
>
> Contestants score 1 point from a pass in which the opponent is knocked prone but remains mounted, or 3 points (and the match ends) if the opponent falls.
>
>
>
Note that there's no homebrewed features added to the D&D rules here, just what people in-game think is acceptable during a joust. | 2015/09/21 | [
"https://rpg.stackexchange.com/questions/68879",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/22041/"
] | Just writing you a set of rules would be out of the scope of the site, so what I'm trying to do is to give you the tools to make up your own rules.
It must be fun
==============
**Combat** is a very dynamic and varied tactical event. It's fun because there are so many possibilities that the obvious choice is rarely known, and you have to think about what to do, coordinate with the other Players, try to foresee what the enemies will do, etc.
**Jousting** is… dull. It may be fun in real life because there is the real component, but its core is extremely simple, so when you condense it in an RPG there's not much to do: throw dice, stuff happens. Zero decisions. Extremely boring.
Compare it with an **arm wrestling** competition. You, uh, throw dice? How many decisions do you make? None. Arm wrestling competitions might be fun in real life because people are using their actual muscles to win, but in an RPG you need more: **Players need to make decisions, not just roll dice!**
Direct approach
---------------
You have to make up some rules to do that, which will allow Players to actually make decisions that influence the outcome of the match.
To do that, you might have to:
* study in depth how jousts *actually* worked, try to understand which were the factors involved
* try to abstract them in a way that would also be fun to play in an abstract form *(i.e. just the decisions and any associated dice rolls, without all the descriptions etc.)*
Indirect approach
-----------------
There is this jousting event, the Characters will have to somehow assist or contribute behind the scenes or whatever, but **they won't be directly involved**: **make it a pure storytelling Chapter**.
There are many interesting things that can happen within such a tournament, it might be really great! | If I understand correctly, points were scored in jousting by breaking your lance tip on the opponent and by dismounting your opponent.
Let's say it's 1 point for a lance break and 3 points for dismounting. Highest score at the end of 3 rounds wins.
I would have each contestant make an attack roll against their opponent's AC. On a hit, their lance breaks. Damage is not rolled. Each contestant then makes a dexterity saving throw against their opponent's attack roll to remain mounted. This saving throw is made with disadvantage if the opponent rolled a critical hit. As most participants will have military saddles, they should have advantage on the check except in the case of the crit as the disadvantage would cancel out the advantage.
Initiative is not rolled as both occur simultaneously.
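The pass mechanics above can be sketched as a quick simulation. The AC, modifiers, and scoring of 3 points replacing the 1 for a lance break on a dismount are assumptions for illustration, not part of the answer:

```python
import random

def d20():
    return random.randint(1, 20)

def pass_score(atk_mod=5, ac=18, dex_mod=2):
    """Score one jousting pass for the attacker: 1 point for a lance
    break (a hit), 3 points if the defender is also unhorsed (whether
    the points stack is an assumption left open by the scheme above)."""
    roll = d20()
    attack = roll + atk_mod
    if roll != 20 and attack < ac:
        return 0                            # miss: lance intact, no points
    # Defender saves against the attack roll to stay mounted. The military
    # saddle grants advantage; a crit imposes disadvantage, and on a crit
    # the two cancel to a straight roll.
    if roll == 20:
        save = d20() + dex_mod              # straight roll (adv + dis cancel)
    else:
        save = max(d20(), d20()) + dex_mod  # advantage from the saddle
    return 1 if save >= attack else 3

random.seed(1)
match = sum(pass_score() for _ in range(3))  # highest score after 3 passes wins
```

Running a few thousand matches with equal stats makes the critique below concrete: the win rate converges to a coin flip.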
Now here's the problem with my method. If two opponents had the same stats, you might as well flip a coin to see who wins. Class abilities aren't used, and only one action is possible. This is only going to be fun once or twice. After that, the players are just rolling more dice to do the same thing. In order to have an interesting system, you'd need a complete rewrite of the entire combat system. |
68,879 | I'd like to run a jousting tournament within a D&D game as a nonlethal (but nevertheless dangerous) sporting event, rather than as an actual combat. Specifically, two mounted contestants with lances should ride past one another in parallel lanes, attempting to strike one another's shields with lances, seeking to knock the opponent from the mount.
**How does this work within the 5e rules?**
---
Example of the sort of answer I'm seeking:
>
> * Each pass is considered a separate combat.
> * The duration between the passes comprising a match is not long enough for a short rest.
>
>
> While the rules of D&D determine what the contestants are *practically* capable of doing, the in-game rules of the joust add further *social* constraints. The contestants may choose to break these rules, but if detected by the judges/onlookers, they will be considered cheats/unsportsmanlike and subject to disqualification/scandal.
>
>
> * On a signal, the contestants are expected to charge at and past each other from opposite ends of the lists. They're not allowed to be "creative" about their movement.
> * The contestants are expected to use a Readied action to shove (PH p.195) the opponent prone with the lance, rather than making a melee weapon attack for damage. Being knocked prone while mounted requires a DC 10 Dexterity check to avoid falling off the mount (PH p.198).
>
>
> Contestants score 1 point from a pass in which the opponent is knocked prone but remains mounted, or 3 points (and the match ends) if the opponent falls.
>
>
>
Note that there's no homebrewed features added to the D&D rules here, just what people in-game think is acceptable during a joust. | 2015/09/21 | [
"https://rpg.stackexchange.com/questions/68879",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/22041/"
] | **A Simple Rule Set**
For simple Jousting rules with D&D origins that you can plug into your campaign, you can use [Chainmail, Third Edition](https://en.wikipedia.org/wiki/Chainmail_(game)). On pages 26 and 27 are easy-to-use rules for a jousting tournament. Appendix C, on page 42, provides a jousting combat table. *It is a diceless system that you can use in any campaign*, 5e included. Since 5e has no specific jousting rules, this fills that niche with no need for homebrew.
* The Jousting Table compares the attack and defensive stances each participant chooses, and provides the result of that combination. During play, the way I saw it work was for both combatants to choose their attack and defense in secret (written on a 3x5 card), then submit the card to the referee who then adjudicated the result. *Playability* is a strength of using this tool.
Victory/defeat: "make three (or X) passes and see who gets unhorsed." This may be enough to meet your needs.
**What this won't do by itself**
Apply Ability score, level/proficiency, or feat bonuses to attack and defense when both combatants are mounted. (You don't need them). The joust is "skill-versus-skill" in that "what is my best offense and defense combination for this pass?" becomes the character's decision point, as well as his opponent's.
**5e Mechanics Considerations**
Some granular details of the joust can be added with opposed Ability checks per 5e. A contestant succeeding on a Dexterity (or Athletics) check on top of the joust table result could avoid being unhorsed (from a raw result of "unhorsed"). This retains your desired non-lethal character, and provides some differentiation between contestants.
The risk: this will extend combat/the match considerably for each pair, and it will become increasingly difficult to unhorse anyone at higher character levels. This folding in of ability scores, while making slight differences for each jouster's chances, can lead to ...
**Potential Balance Problems**
Are you interested in unequal combats? That may fit your story, or it may not. Some knights are much, much better than others at the joust.
* In a joust, if a Fighter had a Proficiency in Mounted Combat, or a Mounted Combatant Feat (PHB p. 168), the results would be significantly skewed in one direction.
* Paladins on their summoned mounts (Find Steed) "fight as one." All other mounted combatants, and their steeds, [behave as two discrete creatures during combat](https://rpg.stackexchange.com/questions/63646/how-does-mounted-combat-work).
>
> "Your {Paladin} steed serves you as a mount, both in combat and out, and you have an instinctive bond with it that allows you to fight as a seamless unit." (From the 5e PHB *Find Steed* spell description)
>
>
>
**Do you want combat beyond being unhorsed?**
If both fighters are unhorsed during a given pass, or one combatant is unhorsed during a pass, the transition to standard 5e melee combat is the simple way to see who wins the fight if being unhorsed isn't the sole victory condition. To keep it non-lethal, default to the final blow being "knock out" per 5e rules.
>
> (PHB, p. 198): When an attacker reduces a creature to 0 hit points with a melee attack, the attacker can knock the creature out. The attacker can make this choice the instant the damage is dealt. The creature falls unconscious and is stable.
>
>
>
---
Notes:
(1) In re Chainmail, Third Edition: I still have my original copy of Chainmail. The .pdf I found on-line is of dubious provenance. Links to non-legit reproductions violate SE rules, so no link. (Not hard to find with a Google search). | If I understand correctly, points were scored in jousting by breaking your lance tip on the opponent and by dismounting your opponent.
Let's say it's 1 point for a lance break and 3 points for dismounting. Highest score at the end of 3 rounds wins.
I would have each contestant make an attack roll against their opponent's AC. On a hit, their lance breaks. Damage is not rolled. Each contestant then makes a dexterity saving throw against their opponent's attack roll to remain mounted. This saving throw is made with disadvantage if the opponent rolled a critical hit. As most participants will have military saddles, they should have advantage on the check except in the case of the crit as the disadvantage would cancel out the advantage.
Initiative is not rolled as both occur simultaneously.
Now here's the problem with my method. If two opponents had the same stats, you might as well flip a coin to see who wins. Class abilities aren't used, and only one action is possible. This is only going to be fun once or twice. After that, the players are just rolling more dice to do the same thing. In order to have an interesting system, you'd need a complete rewrite of the entire combat system. |
68,879 | I'd like to run a jousting tournament within a D&D game as a nonlethal (but nevertheless dangerous) sporting event, rather than as an actual combat. Specifically, two mounted contestants with lances should ride past one another in parallel lanes, attempting to strike one another's shields with lances, seeking to knock the opponent from the mount.
**How does this work within the 5e rules?**
---
Example of the sort of answer I'm seeking:
>
> * Each pass is considered a separate combat.
> * The duration between the passes comprising a match is not long enough for a short rest.
>
>
> While the rules of D&D determine what the contestants are *practically* capable of doing, the in-game rules of the joust add further *social* constraints. The contestants may choose to break these rules, but if detected by the judges/onlookers, they will be considered cheats/unsportsmanlike and subject to disqualification/scandal.
>
>
> * On a signal, the contestants are expected to charge at and past each other from opposite ends of the lists. They're not allowed to be "creative" about their movement.
> * The contestants are expected to use a Readied action to shove (PH p.195) the opponent prone with the lance, rather than making a melee weapon attack for damage. Being knocked prone while mounted requires a DC 10 Dexterity check to avoid falling off the mount (PH p.198).
>
>
> Contestants score 1 point from a pass in which the opponent is knocked prone but remains mounted, or 3 points (and the match ends) if the opponent falls.
>
>
>
Note that there's no homebrewed features added to the D&D rules here, just what people in-game think is acceptable during a joust. | 2015/09/21 | [
"https://rpg.stackexchange.com/questions/68879",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/22041/"
] | Unfortunately, like most sporting competitions, jousting is a test of skill, namely the skill of the two characters involved. What this means is that you can't, by definition, make the rules for this give agency to the player without taking some of that agency away from your character. These skills such as combat along with the conventional game-defined skills like Stealth are abstracted away because we, modern laypersons, generally don't know how to do any of that stuff correctly. You can certainly shoot for a more-involved lance combat system and that's fine, but I will write this post under the assumption that you won't choose that option, because it causes inconsistency in how the rules are designed (which again, isn't necessarily bad if your players are willing to handle that, just a choice).
I will try to explain with an analogy briefly in the next paragraph.
Imagine a "more detailed" combat system where you decide exactly how hard you swing your weapons. The system, besides being overbearing by many accounts, would remove some of the abstraction of rolling a die to hit. This abstraction is what "allows" your character to "make decisions" on their own, specifically the things that *they* are good at in place of *you* -- combat, for instance. In the example I'm using now, a player might say that they hit someone "with all their strength," to which the GM counters with "Haha! Now you've left yourself wide open for the enemy's attack," to which the player might ask "But aren't I an expert warrior?" By simply rolling the dice, you're leaving the "decision" up to your character, who is the "expert". Those "decisions" are luck-based with influence from your character's rated abilities, but that's just due to the limitations of the medium. You usually have a higher chance of success rolling the dice with your character's modifiers than knowing **exactly** what to do out-of-character.
**The Solution**
As @Lohoris mentions, decision-making is where all the fun of the game is, unless you really like rolling dice.
With all that in mind, the approach I recommend is one where all the decision-making happens *before* and *after* the joust, not during. Maybe allow the players to choose from different lengths of lance (shorter lances being easier to aim, while longer lances perhaps allow you to roll-to-hit first), different kinds of horse; maybe the house or nation they represent during the joust will cause the crowd to cheer differently, which sways the judges' opinions. However, the joust will still come down to either two rolls or an opposed check of some kind -- the details of that are mostly up to you and what accommodates the "meta-joust" rules better.
Within the joust itself, there isn't much of a decision to make, unless you're trying to decide whether or not you should kill your opponent and make it look like an accident, or whether or not you should take the fall for a bribe. Those things aren't actually involved with the game's rules, either -- they dictate one strategy, which is aim your lance at the small crest slightly below the opponent's shoulder and attempt to knock them off their horse. There is no decision there, only your skill in attempting the plan of action. Therefore the logical course of action is to avoid that part entirely and build rules *around* it, which is where the meta-game comes into play.
**Real-life Relation / Rationale**
Meta-game, despite being a bad thing in a role-playing game, is key to victory in normal sporting-type competitions. The suggestions I've provided focus on the player making decisions in the "joust meta-game," so-to-speak. They are elements that are not actually part of the competition itself, but have a substantial impact on it, whether intentionally by design or not. This layer of decision-making outside of the actual game is a good place for the player to make lots of decisions (obviously), hopefully adding depth to the game in a fun way. | If I understand correctly, points were scored in jousting by breaking your lance tip on the opponent and by dismounting your opponent.
Let's say it's 1 point for a lance break and 3 points for dismounting. Highest score at the end of 3 rounds wins.
I would have each contestant make an attack roll against their opponent's AC. On a hit, their lance breaks. Damage is not rolled. Each contestant then makes a dexterity saving throw against their opponent's attack roll to remain mounted. This saving throw is made with disadvantage if the opponent rolled a critical hit. As most participants will have military saddles, they should have advantage on the check except in the case of the crit as the disadvantage would cancel out the advantage.
Initiative is not rolled as both occur simultaneously.
Now here's the problem with my method. If two opponents had the same stats, you might as well flip a coin to see who wins. Class abilities aren't used, and only one action is possible. This is only going to be fun once or twice. After that, the players are just rolling more dice to do the same thing. In order to have an interesting system, you'd need a complete rewrite of the entire combat system. |
9,433 | In what language was the [first Zionist congress](http://en.wikipedia.org/wiki/First_Zionist_Congress) in Basel held?
Was it Yiddish, Hebrew, German, English? Were there translators? | 2013/07/06 | [
"https://history.stackexchange.com/questions/9433",
"https://history.stackexchange.com",
"https://history.stackexchange.com/users/2556/"
] | This is an excellent question and this answer is only the "easy" answer based on easily available sources, and should be used primarily as a jumping off point for more research on what is in fact more likely a more complicated reality.
The full PDFs of the stenographic protocols of the Zionist congresses from 1897-1935 are available here:
* <http://edocs.ub.uni-frankfurt.de/volltexte/2008/38038/pdf/ZionKon.html>
All of these transcriptions of the speeches at the congresses are in German, but the 1897 congress, alone among all them, contains the protocol in both Hebrew and German:
* <http://edocs.ub.uni-frankfurt.de/volltexte/2008/38038/original/1897b.pdf> (German)
* <http://edocs.ub.uni-frankfurt.de/volltexte/2008/38038/original/1897a.pdf> (Hebrew)
**I think it is safe to say, however, that the main language of this congress too, was German with a Hebrew translation of the protocols added.** Skimming through the protocol, the majority of the speakers in the congress are marked as coming from Zürich, Köln, Berlin, Bingen, Wien, Frankfurt, Prague etc. where German would be the primary language. Most of those who were not from a German speaking area, very likely knew German:
* Leo Motzkin - Kiew - from Russia but studied in Berlin
* Marcus (Mordecai) Ehrenpreis - Kiakovar - but studied in Berlin
A few others among the participants you might want to check on: Adam Rosenberg (New York), Shepsel Schaffer (Baltimore), Jacob Bernstein-Kohan (studied medicine in St. Petersburg; perhaps his letters to Weizmann will give a clue).
Also, the invitation card, and the programm for the conference were in German:
* <http://upload.wikimedia.org/wikipedia/commons/3/33/The_%22Basel_Program%22_at_the_First_Zionist_Congress_in_1897.jpg> (Program)
* <http://upload.wikimedia.org/wikipedia/en/0/04/Participant_card_at_the_First_Zionist_Congress.jpg> (Participant Card)
Also, the two most famous addresses, by Theodor Herzl and Max Nordau, are usually translated from the German, which would be unusual for such important documents if they were originally delivered in Hebrew or Yiddish.
**Probably More To This**
I think that even if the main language or official language was German, when you bring together something like 200 delegates from nearly two dozen countries, the actual experience was likely to be much more complex. Through a process of purification through editing, the language of the protocol very likely hid serious code-switching, the insertion of Yiddish or Hebrew phrases, and other linguistic mixing that is common in these kinds of settings.
Marcus Ehrenpreis gave a talk on the Hebrew language. He grew up writing Yiddish, and it wouldn't be surprising if Yiddish made its way into his speech. Jacob Berstein-Kohan may have used French while studying at St. Petersburg and he could probably assume, if a German word didn't come to mind, that dropping in a bit of French now and then would be fine. Of course, this doesn't come through in the record, but may come through in diaries or memoirs if you continue research.
One place to start would be the University of Basel, where there was a 1997 exhibition on the congress:
* Der Erste Zionistenkongress von 1897: Ursachen, Bedeutung, Aktualität: "... in Basel habe ich den Judenstaat gegründet. " Hg. von Heiko Haumann u.a. Basel 1997 [Begleitpublikation zur Ausstellung].
* <http://dg.philhist.unibas.ch/bereiche/osteuropaeische-geschichte/projekte-konferenzen-initiativen/ausstellungen/zionistenkongress/> | [The Encyclopedia of the Arab-Israeli Conflict: A Political, Social, and Military History](https://books.google.co.il/books?id=YAd8efHdVzIC&lpg=PA1127&ots=OTYleCo7dP&dq=kongressdeutch&pg=PA1127#v=onepage&q=kongressdeutch&f=false):
>
> The First Zionist Congress's official language, both spoken and
> written, was German, but many delegates also spoke Yiddish
> (Hebrew-German vernacular), the language of Ashkenazic Judaism, and a
> Yiddish-like German known as *Kongressdeutch*.
>
>
> |
9,433 | In what language was the [first Zionist congress](http://en.wikipedia.org/wiki/First_Zionist_Congress) in Basel held?
Was it Yiddish, Hebrew, German, English? Were there translators? | 2013/07/06 | [
"https://history.stackexchange.com/questions/9433",
"https://history.stackexchange.com",
"https://history.stackexchange.com/users/2556/"
] | This is an excellent question, and this answer is only the "easy" one, based on easily available sources; it should be used primarily as a jumping-off point for more research into what is in fact likely a more complicated reality.
The full PDFs of the stenographic protocols of the Zionist congresses from 1897-1935 are available here:
* <http://edocs.ub.uni-frankfurt.de/volltexte/2008/38038/pdf/ZionKon.html>
All of these transcriptions of the speeches at the congresses are in German, but the 1897 congress, alone among all them, contains the protocol in both Hebrew and German:
* <http://edocs.ub.uni-frankfurt.de/volltexte/2008/38038/original/1897b.pdf> (German)
* <http://edocs.ub.uni-frankfurt.de/volltexte/2008/38038/original/1897a.pdf> (Hebrew)
**I think it is safe to say, however, that the main language of this congress too, was German with a Hebrew translation of the protocols added.** Skimming through the protocol, the majority of the speakers in the congress are marked as coming from Zürich, Köln, Berlin, Bingen, Wien, Frankfurt, Prague etc. where German would be the primary language. Most of those who were not from a German speaking area, very likely knew German:
* Leo Motzkin - Kiew - from Russia but studied in Berlin
* Marcus (Mordecai) Ehrenpreis - Kiakovar - but studied in Berlin
A few others among the participants you might want to check on: Adam Rosenberg (New York), Shepsel Schaffer (Baltimore), Jacob Berstein-Kohan (studied medicine in St. Petersburg, perhaps his letters to Weissmann will give a clue).
Also, the invitation card and the program for the conference were in German:
* <http://upload.wikimedia.org/wikipedia/commons/3/33/The_%22Basel_Program%22_at_the_First_Zionist_Congress_in_1897.jpg> (Program)
* <http://upload.wikimedia.org/wikipedia/en/0/04/Participant_card_at_the_First_Zionist_Congress.jpg> (Participant Card)
Also, the two most famous addresses, by Theodor Herzl and Max Nordau, are usually translated from the German, which would be unusual for such important documents if they were originally delivered in Hebrew or Yiddish.
**Probably More To This**
I think that even if the main language or official language was German, when you bring together something like 200 delegates from nearly two dozen countries, the actual experience was likely to be much more complex. Through a process of purification through editing, the language of the protocol very likely hid serious code-switching, the insertion of Yiddish or Hebrew phrases, and other linguistic mixing that is common in these kinds of settings.
Marcus Ehrenpreis gave a talk on the Hebrew language. He grew up writing Yiddish, and it wouldn't be surprising if Yiddish made its way into his speech. Jacob Berstein-Kohan may have used French while studying at St. Petersburg and he could probably assume, if a German word didn't come to mind, that dropping in a bit of French now and then would be fine. Of course, this doesn't come through in the record, but may come through in diaries or memoirs if you continue research.
One place to start would be the University of Basel, where there was a 1997 exhibition on the congress:
* Der Erste Zionistenkongress von 1897: Ursachen, Bedeutung, Aktualität: "... in Basel habe ich den Judenstaat gegründet. " Hg. von Heiko Haumann u.a. Basel 1997 [Begleitpublikation zur Ausstellung].
* <http://dg.philhist.unibas.ch/bereiche/osteuropaeische-geschichte/projekte-konferenzen-initiativen/ausstellungen/zionistenkongress/> | Most delegates were Ashkenazi Jews, that is native Yiddish speakers (see the linguistic note below)... unless they were assimilated in German-speaking countries, in which case they would speak (a dialect of) German as their native language. They thus had no problem understanding each other, despite coming from countries with different majority languages.
Whether the language of Zionism should be Hebrew or German was the subject of a debate lasting a few decades, with Herzl notably advocating for German, and the [Technion teaching in German](https://en.wikipedia.org/wiki/War_of_the_Languages). The [10th Zionist Congress was the first Zionist congress where a session was held in Hebrew](https://www.jewishvirtuallibrary.org/first-to-twelfth-zionist-congress-1897-1921).
**Linguistic note on Yiddish and Yiddish speakers**
[Yiddish](https://en.wikipedia.org/wiki/Yiddish) is a *Germanic language* (more precisely a group of languages) - written in Hebrew script and with about 10-20% of its vocabulary borrowed from Hebrew, Slavic, and Romance languages. It is thus generally mutually understandable with German, to about the same extent as different German dialects are mutually understandable, or the German spoken in Germany vs. Swiss German/Alsatian.
An educated Yiddish speaker would typically speak Yiddish, the majority language of their country and Hebrew (which was learned as a part of the basic religious education, like Latin elsewhere, but was at the time of somewhat limited use for everyday communication). Even if they did not come from a German-speaking country, many would know German, since it came as an easy addition to Yiddish and, importantly, played the same role (alongside French) as English plays in the modern world. |
9,433 | In what language was the [first Zionist congress](http://en.wikipedia.org/wiki/First_Zionist_Congress) in Basel held?
Was it Yiddish, Hebrew, German, English? Were there translators? | 2013/07/06 | [
"https://history.stackexchange.com/questions/9433",
"https://history.stackexchange.com",
"https://history.stackexchange.com/users/2556/"
] | [The Encyclopedia of the Arab-Israeli Conflict: A Political, Social, and Military History](https://books.google.co.il/books?id=YAd8efHdVzIC&lpg=PA1127&ots=OTYleCo7dP&dq=kongressdeutch&pg=PA1127#v=onepage&q=kongressdeutch&f=false):
>
> The First Zionist Congress's official language, both spoken and
> written, was German, but many delegates also spoke Yiddish
> (Hebrew-German vernacular), the language of Ashkenazic Judaism, and a
> Yiddish-like German known as *Kongressdeutch*.
>
>
> | Most delegates were Ashkenazi Jews, that is native Yiddish speakers (see the linguistic note below)... unless they were assimilated in German-speaking countries, in which case they would speak (a dialect of) German as their native language. They thus had no problem understanding each other, despite coming from countries with different majority languages.
Whether the language of Zionism should be Hebrew or German was the subject of a debate lasting a few decades, with Herzl notably advocating for German, and the [Technion teaching in German](https://en.wikipedia.org/wiki/War_of_the_Languages). The [10th Zionist Congress was the first Zionist congress where a session was held in Hebrew](https://www.jewishvirtuallibrary.org/first-to-twelfth-zionist-congress-1897-1921).
**Linguistic note on Yiddish and Yiddish speakers**
[Yiddish](https://en.wikipedia.org/wiki/Yiddish) is a *Germanic language* (more precisely a group of languages) - written in Hebrew script and with about 10-20% of its vocabulary borrowed from Hebrew, Slavic, and Romance languages. It is thus generally mutually understandable with German, to about the same extent as different German dialects are mutually understandable, or the German spoken in Germany vs. Swiss German/Alsatian.
An educated Yiddish speaker would typically speak Yiddish, the majority language of their country and Hebrew (which was learned as a part of the basic religious education, like Latin elsewhere, but was at the time of somewhat limited use for everyday communication). Even if they did not come from a German-speaking country, many would know German, since it came as an easy addition to Yiddish and, importantly, played the same role (alongside French) as English plays in the modern world. |
4,836,296 | I want to create a desktop recorder that require very little HD space.
It should capture the current display into a buffer, compare it to the previous state, and save only the rectangles that differ from the previous state.
What API, function or library I have to use ? | 2011/01/29 | [
"https://Stackoverflow.com/questions/4836296",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/244413/"
] | Well, if you only want to save the differences from frame to frame, you could simply use a subtraction method. Subtract the color values of image(t+1) from image(t)... All parts that stay equal haven't changed... only the parts that differ will result in something non-zero. You can then extract the rectangles around those regions and save them. But be aware that more than one part may change at once, and you probably want to save each one instead of the big rectangle that contains all the changes...
You could use OpenCV for this... it has all the basic functions for image subtraction, rectangle fitting, cropping, ...
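As a rough illustration of the subtraction idea, here is a NumPy-only sketch (OpenCV's `absdiff` and `findContours` would give one rectangle per changed region; this simplified version computes a single bounding box of everything that changed):

```python
import numpy as np

def changed_rect(prev, curr, threshold=25):
    """Bounding box (x, y, w, h) of all pixels that differ between two
    frames, or None if nothing changed beyond the noise threshold."""
    # Per-pixel absolute difference; unchanged areas come out (near) zero.
    diff = np.abs(prev.astype(np.int16) - curr.astype(np.int16)).max(axis=-1)
    ys, xs = np.nonzero(diff > threshold)
    if len(ys) == 0:
        return None  # frames are (effectively) identical
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))
```

Only the cropped `changed_rect` region would then need to be written to disk for each frame, which is what keeps the recording small.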
Hope that helps... | Consider using Windows Media Screen Capture encoder for the task. You will feed your captured frames to it, and it will do the rest and create highly efficient wmv file for you. |
166,645 | Emigrant = someone who is leaving their country.
??? = the country from which the emigrant is departing.
I want to say Émigré country but I don't know if that makes sense.
Maybe country of emigration? But that's too wordy.
Is there *one word*? | 2014/04/28 | [
"https://english.stackexchange.com/questions/166645",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/63461/"
] | I think **home country** reflects well the idea of what you left by emigrating. | In most cases, it would be **homeland** or **motherland**.
But "**[old country](http://www.merriam-webster.com/dictionary/old%20country)**" is used also
>
> an emigrant's country of origin
>
>
>
---
Additionally, **"source country"** is used in immigration related or technical sources |
166,645 | Emigrant = someone who is leaving their country.
??? = the country from which the emigrant is departing.
I want to say Émigré country but I don't know if that makes sense.
Maybe country of emigration? But that's too wordy.
Is there *one word*? | 2014/04/28 | [
"https://english.stackexchange.com/questions/166645",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/63461/"
] | I would say "[**country of origin**](http://en.wikipedia.org/wiki/Immigration_to_the_United_States)." | In most cases, it would be **homeland** or **motherland**.
But "**[old country](http://www.merriam-webster.com/dictionary/old%20country)**" is used also
>
> an emigrant's country of origin
>
>
>
---
Additionally, **"source country"** is used in immigration related or technical sources |
166,645 | Emigrant = someone who is leaving their country.
??? = the country from which the emigrant is departing.
I want to say Émigré country but I don't know if that makes sense.
Maybe country of emigration? But that's too wordy.
Is there *one word*? | 2014/04/28 | [
"https://english.stackexchange.com/questions/166645",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/63461/"
] | [Native country](http://www.collinsdictionary.com/dictionary/english/native-country)
*the country someone is born in or native to*
>
> Born Carmela Drelano, in Spain, it was very many years since she had
> lived in her native country.
>
>
>
[Native land](http://dictionary.reference.com/browse/native%20land)
>
> When I think of my own native land, In a moment I seem to be there;
> But alas! recollection at hand Soon hurries me back to despair
>
>
>
[Native soil](http://en.wiktionary.org/wiki/native_soil)
*The country or geographical region where one was born or which one considers to be one's true homeland*
>
> Nawaz Sharif, two-time Prime Minister of Pakistan, had planned a
> triumphant return to his native soil nearly seven years after choosing
> exile.
>
>
>
If the OP wishes a similar one word expression still connected to *native*, then I suggest
[birthplace](http://dictionary.reference.com/browse/birthplace?&o=100074&s=t)
>
> * He knows what spot this is: the birthplace of their country.
> * At the age of 27, Arriaga **emigrated from his birthplace**, the port of
> Callao, Peru, to Canada. [source](http://marcosarriaga.com/promised-land/)
> * [Pulitzer](http://www.stlmediahistory.com/index.php/Print/PrintHOFDetail/pulitzer-joseph) emigrated from **his birthplace in Hungary** to New York in 1864 when he was 17.
>
>
> | In most cases, it would be **homeland** or **motherland**.
But "**[old country](http://www.merriam-webster.com/dictionary/old%20country)**" is used also
>
> an emigrant's country of origin
>
>
>
---
Additionally, **"source country"** is used in immigration related or technical sources |
166,645 | Emigrant = someone who is leaving their country.
??? = the country from which the emigrant is departing.
I want to say Émigré country but I don't know if that makes sense.
Maybe country of emigration? But that's too wordy.
Is there *one word*? | 2014/04/28 | [
"https://english.stackexchange.com/questions/166645",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/63461/"
] | This depends on your audience. A speech would be different than a novel.
In nonfiction writing or speech, "country of origin" is the most respectful and politically-sensitive way of phrasing this.
"Home country" doesn't work if the person has neither a home nor feels at home there.
"Native country" also doesn't work because it may not be where someone is actually from. Consider a refugee from Palestine who has emigrated from Egypt. While their native country may be Palestine/Israel, if they sought temporary political asylum in Egypt and then emigrated to England, their country of origin would be Egypt.
In fiction and informal writing, "homeland." | In most cases, it would be **homeland** or **motherland**.
But "**[old country](http://www.merriam-webster.com/dictionary/old%20country)**" is used also
>
> an emigrant's country of origin
>
>
>
---
Additionally, **"source country"** is used in immigration related or technical sources |
109,887 | My company is about to move to Subversion and the initial plan was to put a large amount of archive material in the repo, as well as our current work. The idea was that no one would ever want to checkout the archive but would instead browse it via their web browser.
We're concerned, however, that someone might accidentally checkout the whole archive (can't trust all of our users). Since our hosting agreement for the server has relatively low bandwidth limits, checking out the entire archive could blow the limit and cost us a lot of cash.
Is there any way of providing read-only access to the archive via a web browser while preventing anyone from checking it out? I had a look at the available repository hooks and couldn't find anything useful. Any other ideas about how we could achieve our goal? | 2010/02/05 | [
"https://serverfault.com/questions/109887",
"https://serverfault.com",
"https://serverfault.com/users/33907/"
] | Have you checked out [ViewVC](http://www.viewvc.org/)? It can provide a nicely-formatted read-only view of your repo, and is quite configurable. | One way of doing this is through path-based authorization. This way you could setup the archives to be viewed only by those people in your team. Here is a page from the [Subversion Red Book](http://svnbook.red-bean.com) about Path-Based Authorization:
<http://svnbook.red-bean.com/nightly/en/svn.serverconfig.pathbasedauthz.html>
You will be able to set the permissions using what's called an authz file. Just make sure to test this with different users. Play with the different settings. Once the authorization settings are to your liking, then you can open your firewall port to the WAN.
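A minimal sketch of such an authz file (the repository path and group names here are hypothetical, chosen to match the question's setup):

```ini
[groups]
team = alice, bob

# Everyone with access can read and write the current work
[repo:/trunk]
* = rw

# The archive is readable only by the trusted team, and read-only
[repo:/archive]
* =
@team = r
```

Note that path-based authorization distinguishes users, not operations, so it limits *who* can reach the archive rather than blocking checkouts specifically.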
Subversion has two servers: svnserve (the svn:// protocol) and Apache WebDAV (the http:// protocol). I recommend that you use [VisualSVN](http://visualsvn.com/server/), if you choose the HTTP protocol. I have a product that handles the svnserve server. It's called [PainlessSVN](http://painlesssvn.com).
There's a couple places where you may be able to get help.
[WanDisco Subversion Community](http://subversion.wandisco.com/)
[Subversion Forums](http://www.svnforum.org/) |
109,887 | My company is about to move to Subversion and the initial plan was to put a large amount of archive material in the repo, as well as our current work. The idea was that no one would ever want to checkout the archive but would instead browse it via their web browser.
We're concerned, however, that someone might accidentally checkout the whole archive (can't trust all of our users). Since our hosting agreement for the server has relatively low bandwidth limits, checking out the entire archive could blow the limit and cost us a lot of cash.
Is there any way of providing read-only access to the archive via a web browser while preventing anyone from checking it out? I had a look at the available repository hooks and couldn't find anything useful. Any other ideas about how we could achieve our goal? | 2010/02/05 | [
"https://serverfault.com/questions/109887",
"https://serverfault.com",
"https://serverfault.com/users/33907/"
] | Have you checked out [ViewVC](http://www.viewvc.org/)? It can provide a nicely-formatted read-only view of your repo, and is quite configurable. | In the Subversion contrib tools there is a mod\_dontdothat module that might do what you ask. It is an optional Apache module that allows denying operations like a checkout on the root of the repository. |
109,887 | My company is about to move to Subversion and the initial plan was to put a large amount of archive material in the repo, as well as our current work. The idea was that no one would ever want to checkout the archive but would instead browse it via their web browser.
We're concerned, however, that someone might accidentally checkout the whole archive (can't trust all of our users). Since our hosting agreement for the server has relatively low bandwidth limits, checking out the entire archive could blow the limit and cost us a lot of cash.
Is there any way of providing read-only access to the archive via a web browser while preventing anyone from checking it out? I had a look at the available repository hooks and couldn't find anything useful. Any other ideas about how we could achieve our goal? | 2010/02/05 | [
"https://serverfault.com/questions/109887",
"https://serverfault.com",
"https://serverfault.com/users/33907/"
] | Have you checked out [ViewVC](http://www.viewvc.org/)? It can provide a nicely-formatted read-only view of your repo, and is quite configurable. | Why not run your own 'svn export' and let Apache serve a copy? That way, they can't do a checkout because Subversion is completely out of the picture. |
107,788 | **The context:**
There is a population of people surviving on a lunar-analogue's surface, descended from the crew of a crashed spaceship. The rest of their society is not relevant to the question, but their technology includes cobbled together habitats and void suits that protect against some solar radiation but not all (an arbitrary amount that allows them to maintain a population, but not necessarily easily).
**The question:**
What skin colour would this select for? I'd initially say a pallid white given the lack of UV exposure, but I've recently stumbled upon research which suggests melanin provides at least some protection against gamma radiation: <https://www.news-medical.net/amp/news/20110824/Melanin-also-protects-from-ionizing-radiation.aspx>
The question is, what skintone would partial protection from gamma radiation on a longstanding permanent lunar culture select for?
For reference, this is for an art project where colour palette will be important, so injecting some realism into the skintone and working from there would be the way to go. | 2018/03/24 | [
"https://worldbuilding.stackexchange.com/questions/107788",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/48681/"
] | **Random.**
I am reminded of the teeth of the numbat.
[](https://i.stack.imgur.com/Z0AOx.jpg)
<http://animaldiversity.org/collections/contributors/anatomical_images/family_pages/dasyuromorphia/myrmecobiidae/>
The numbat has more teeth than any other land mammal. Tooth number and shape vary between individuals. It does not matter to the animal because none of the teeth are used at all. The numbat eats with its tongue exclusively.
>
> The variability in number and form of teeth, as well as the lack of
> significant tooth wear have been cited as. evidence that the teeth are
> used very little and so are not subject to intense selection pressure
> (Calaby 1960).
> <https://www.environment.gov.au/system/files/pages/a117ced5-9a94-4586-afdb-1f333618e1e3/files/22-ind.pdf>
>
>
>
So too skin color for your moon people. Skin color for earth humans is influenced by evolutionary pressures that have to do with UV damage / vitamin D synthesis. Absent selection pressures for or against given colors, skin color would evolutionarily drift, like the number and shape of teeth of the numbat. One could invoke this to explain why different individuals were colored differently one to the next: it is random.
Note that it has taken the numbat millions of years for its teeth to reach this state. But with a small population you could have evolution / genetic drift happen faster. | Since skin colour affects appearance, sexual selection comes into play. Whatever skin colour their culture finds most attractive is what will be selected for. |
107,788 | **The context:**
There is a population of people surviving on a lunar-analogue's surface, descended from the crew of a crashed spaceship. The rest of their society is not relevant to the question, but their technology includes cobbled together habitats and void suits that protect against some solar radiation but not all (an arbitrary amount that allows them to maintain a population, but not necessarily easily).
**The question:**
What skin colour would this select for? I'd initially say a pallid white given the lack of UV exposure, but I've recently stumbled upon research which suggests melanin provides at least some protection against gamma radiation: <https://www.news-medical.net/amp/news/20110824/Melanin-also-protects-from-ionizing-radiation.aspx>
The question is, what skintone would partial protection from gamma radiation on a longstanding permanent lunar culture select for?
For reference, this is for an art project where colour palette will be important, so injecting some realism into the skintone and working from there would be the way to go. | 2018/03/24 | [
"https://worldbuilding.stackexchange.com/questions/107788",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/48681/"
] | It won't differ much from the mixture available in the founders' pool.
The reason is simple: while our ancestors, moving out of Africa to colonize the world, faced a pressure from lower UV exposure that allowed for the selection of paler skins, I am pretty sure your colonists would be taking vitamin D supplements, removing any need for the body to adapt.
In case they were not taking vitamin D supplements, they would still keep the original mix for quite some time: it takes many generations for a trait to spread, and humans are not that fast at breeding. | A detail that heavily affects the answer is what temperature is maintained in the colony. Several complex factors will affect temperature, and you can basically choose whatever fits the story. Temperature will affect how much clothing is worn. The amount of clothing then affects melanin levels.
* hot->little clothing->high melanin to shield from uv
* cold->thick clothing->low melanin to allow vitamin D production in the little exposed skin
If you are assuming sufficient clothing to block UV, then the assumption that gamma radiation will have the dominating effect is flawed. If opaque clothing has a negligible effect on blocking gamma rays, then so will opaque skin. Since high energy gamma rays penetrate opaque clothing they will penetrate skin as well. Clothing will actually perform better than skin could. Based on the description of the environment, it seems likely that clothing will be made from animal and plant tissue. Many animals and plants will adapt to the environment faster than humans due to shorter life cycles and selective breeding. This means clothing will more quickly adapt to blocking any radiation that can be blocked than humans will.
There is one way melanin might have a greater shielding effect than clothing: if a much thicker layer of opaque tissue than just the skin could be used for shielding. This could lead to the possibility of pale, semi-translucent skin for vitamin D production with melanin-rich fat, muscle, and/or bone tissue. It sounds like you are after a very striking look and a way to justify it; this combination may be fitting.
107,788 | **The context:**
There is a population of people surviving on a lunar-analogue's surface, descended from the crew of a crashed spaceship. The rest of their society is not relevant to the question, but their technology includes cobbled together habitats and void suits that protect against some solar radiation but not all (an arbitrary amount that allows them to maintain a population, but not necessarily easily).
**The question:**
What skin colour would this select for? I'd initially say a pallid white given the lack of UV exposure, but I've recently stumbled upon research which suggests melanin provides at least some protection against gamma radiation: <https://www.news-medical.net/amp/news/20110824/Melanin-also-protects-from-ionizing-radiation.aspx>
The question is, what skintone would partial protection from gamma radiation on a longstanding permanent lunar culture select for?
For reference, this is for an art project where colour palette will be important, so injecting some realism into the skintone and working from there would be the way to go. | 2018/03/24 | [
"https://worldbuilding.stackexchange.com/questions/107788",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/48681/"
] | As already pointed out - there would be no real external evolutionary pressure, thus sexual selection would be the way to go.
Short term:
-Just mixing up whole gene pool. (so mixed color skin, dark eyes, dark hair etc)
Long term:
**Survival of the cutest.** (but it is based on assumption, that there would be either a lot of time or possibility to pick designer babies)
-light skin (setting all PC aside, it's not a recent phenomenon, but something more entrenched. According to records in East Asia, lighter skin was perceived as attractive even in times when Europeans were just considered distant barbarians; moreover, even in Europe there were periods when lead-based white makeup was top trendy)
-blue eyes, blond hair (those genes are recessive, so would not manifest easily)
-neonate features of Asian face
-tall (it's also favored in sexual selection, and in a low-gravity setting it would have fewer drawbacks) | I'm not sure if this is the convention, but after some more research and reading all these answers I think we may have come to something approaching an answer.
So, it seems we have a number of factors that influence skin colour:
1. Original population genetics
2. UV exposure
3. Vitamin D production
4. Sexual selection
5. Resource cost of producing melanin
6. Gamma radiation exposure
Original population genetics sets the starting point and pre-existing genetic variety, but we can split the rest into pale-selecting and dark-selecting pressures:
Pale:
1. Vitamin D production
2. Sexual selection
3. Resource cost of producing melanin
Dark:
1. UV exposure
2. Gamma ray exposure
From these, for our lunar population we can discount Vitamin D production (in order to protect from UV they'd have to avoid direct sun exposure, so vitamin D would likely be sourced from food). We can also probably discount the resource cost of producing melanin given that it's taken so long for numbats to lose their expensive-to-produce teeth (I'd like to find some other data points for that). Sexual selection is an interesting one, but considering the relative stability of skin colours and lack of sexual dimorphism it's probably pretty weak.
So, it basically comes down to comparing relative exposure: UV on the Earth's surface versus gamma radiation on the moon. If the radiation on the moon is equivalent to northern Europe we might see a gradual slow movement towards paler skin. If it's equivalent to Africa (or higher) then we will likely see a move towards darker skin (potentially rapidly).
Unfortunately, there's a maddening lack of studies comparing the relative damage of gamma ray and UV exposure. Closest I've come to finding something is [a load of people stating how difficult it is to compare them and one guy who's actually done something](https://www.researchgate.net/post/How_comparable_are_gamma_and_UV_radiation) and found that 6J/m² of UV exposure and 4 Grays of gamma exposure killed the same amount of chicken cells (conditions unknown so not the greatest test but it's all we've got).
From [this study](http://pubs.rsc.org/en/content/articlelanding/2016/pp/c5pp00419e#!divAbstract) we can see that in Europe we are around 200J/m² per day. In central Africa we are around 5000J/m² per day.
The highest figure I can find quoted for average radiation on the lunar surface is 120 millirem per day (others hover around 50 millirems), which converts to 0.0012 Grays of gamma radiation. Practically nothing. Wait, why are we scared of gamma radiation on the moon again? Unless they're quoting shielded figures, or the 6-to-4 ratio of that guy was for one layer of cells (so gets multiplied by each layer of cells the gamma rays reach that the UV rays don't).
The only thing I can see that would be a problem gamma-radiation-wise is the recommended [maximum radiation dose for fetuses](http://news.mit.edu/1994/safe-0105) (50 millirems *per month* plus the 25 millirems background). So, sod all effect on adults but very dangerous for kiddos, unless I'm missing anything major.
Oh, and apparently during an 18-month study on Mars there were 2 events which saw radiation increase to 2000 millirems per day (0.02 grays).
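For what it's worth, the unit juggling above can be sanity-checked in a few lines (a sketch only: the 6 J/m² ↔ 4 Gy chicken-cell equivalence and the dose figures are the rough numbers quoted above, not established conversion factors):

```python
# Back-of-envelope check of the figures quoted above.  All inputs are the
# rough numbers from this answer, not authoritative values.

MREM_PER_GRAY = 100_000  # 1 Gy = 100 rem = 100,000 millirem (gamma, quality factor 1)

lunar_mrem_per_day = 120                            # highest quoted lunar dose
lunar_gy_per_day = lunar_mrem_per_day / MREM_PER_GRAY

# Chicken-cell equivalence: 6 J/m^2 of UV did the same damage as 4 Gy of
# gamma, so 1 Gy is "worth" about 1.5 J/m^2 of UV.
uv_equivalent = lunar_gy_per_day * 6 / 4            # J/m^2 per day

europe_uv = 200   # J/m^2 per day at European latitudes
africa_uv = 5000  # J/m^2 per day in central Africa

print(f"lunar gamma: {lunar_gy_per_day:.4f} Gy/day")
print(f"UV-equivalent: {uv_equivalent:.4f} J/m^2/day "
      f"(Europe gets {europe_uv}, central Africa {africa_uv})")
```

Even taking the highest quoted dose, the UV-equivalent comes out roughly five orders of magnitude below the European daily UV load, which is why it looks like "practically nothing".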
So, all of that weighs out to a very slight selection pressure towards paler skin with a cultural trait of hiding pregnant women within rad-shielded bunkers, or a strong selection pressure towards jet-black skin in order to protect their unborn children.
Edit: apparently the safe level for radiation exposure under US legislation is 5000 millirems per year, or 13.7 millirems per day. Lower than the level our lunites will be receiving. So, leaning towards the jet black option of the two above... |
107,788 | **The context:**
There is a population of people surviving on a lunar-analogue's surface, descended from the crew of a crashed spaceship. The rest of their society is not relevant to the question, but their technology includes cobbled together habitats and void suits that protect against some solar radiation but not all (an arbitrary amount that allows them to maintain a population, but not necessarily easily).
**The question:**
What skin colour would this select for? I'd initially say a pallid white given the lack of UV exposure, but I've recently stumbled upon research which suggests melanin provides at least some protection against gamma radiation: <https://www.news-medical.net/amp/news/20110824/Melanin-also-protects-from-ionizing-radiation.aspx>
The question is, what skin tone would partial protection from gamma radiation select for in a long-standing permanent lunar culture?
For reference, this is for an art project where the colour palette will be important, so injecting some realism into the skin tone and working from there would be the way to go. | 2018/03/24 | [
"https://worldbuilding.stackexchange.com/questions/107788",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/48681/"
] | There would be no evolutionary pressure for a specific skin tone for living on the Moon, because -- even were we to colonize the Moon -- no one lives on the Moon like they live on the Earth.
That's because people will live **inside all the time**, getting their Vitamin D from either food or the interior lighting.
It's somewhat similar to white people living in Australia. You'd think that Europeans with darker skin would be more genetically successful, but they aren't. Why? Because clothes (and hats) shield them from the excess UV, while allowing enough to get to the exposed body parts. | I do not know if more dangerous rays can trigger the same reaction in the skin as UV light does, but I expect they don't.
These people would most probably suffer from a lack of melanin as well as vitamin D, so they would need some sort of artificial sunlight source. Expect European people to be a bit paler if they do not attend their artificial sunlight exposures, but otherwise there shouldn't be much difference. Tanning and skin color have very little to do with each other: one is a reaction of the skin to a dangerous environment and the other is a genetic predisposition. |
107,788 | **The context:**
There is a population of people surviving on a lunar-analogue's surface, descended from the crew of a crashed spaceship. The rest of their society is not relevant to the question, but their technology includes cobbled together habitats and void suits that protect against some solar radiation but not all (an arbitrary amount that allows them to maintain a population, but not necessarily easily).
**The question:**
What skin colour would this select for? I'd initially say a pallid white given the lack of UV exposure, but I've recently stumbled upon research which suggests melanin provides at least some protection against gamma radiation: <https://www.news-medical.net/amp/news/20110824/Melanin-also-protects-from-ionizing-radiation.aspx>
The question is, what skin tone would partial protection from gamma radiation select for in a long-standing permanent lunar culture?
For reference, this is for an art project where the colour palette will be important, so injecting some realism into the skin tone and working from there would be the way to go. | 2018/03/24 | [
"https://worldbuilding.stackexchange.com/questions/107788",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/48681/"
] | >
> ...Realism in the skin tone...
>
>
>
You have only two perspectives that would affect skin color.
1. The original ethnicity and/or races of the crew. In the 1960's this would have been white people. Today, there is better diversity. Tomorrow, better still.
2. Time. It takes time for skin color to change. Not years. Not centuries. Possibly not even millennia. It takes eons. The genetics of skin color takes a boatload of time.
If your intrepid crew's descendants haven't experienced at least tens to hundreds of thousands of years, then their location has ***nothing*** to do with their skin color. The politics and social mores of the society that launched them into space would have everything (as in 100%) to do with skin color. | Since skin colour affects appearance, sexual selection comes into play. Whatever skin colour their culture finds most attractive is what will be selected for. |
107,788 | **The context:**
There is a population of people surviving on a lunar-analogue's surface, descended from the crew of a crashed spaceship. The rest of their society is not relevant to the question, but their technology includes cobbled together habitats and void suits that protect against some solar radiation but not all (an arbitrary amount that allows them to maintain a population, but not necessarily easily).
**The question:**
What skin colour would this select for? I'd initially say a pallid white given the lack of UV exposure, but I've recently stumbled upon research which suggests melanin provides at least some protection against gamma radiation: <https://www.news-medical.net/amp/news/20110824/Melanin-also-protects-from-ionizing-radiation.aspx>
The question is, what skin tone would partial protection from gamma radiation select for in a long-standing permanent lunar culture?
For reference, this is for an art project where the colour palette will be important, so injecting some realism into the skin tone and working from there would be the way to go. | 2018/03/24 | [
"https://worldbuilding.stackexchange.com/questions/107788",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/48681/"
] | Gamma radiation would have no direct effect on skin color, since no pigment absorbs gamma better or worse than an equivalent mass of flesh -- or of water, for that matter. Gamma radiation is almost entirely due to *nuclear* energy level transitions, not electron energy level transitions (which are what produce color.)
Gamma absorption (absent resonances, which are not relevant to the broad-spectrum gamma you get in space) depends pretty much exclusively on how much matter is in the way, which translates pretty exactly to the *mass* of absorber. So to absorb significant gamma, your skin would need to get more massive (a *lot* more massive), not change color.
It could become thicker or, conceivably, become denser by somehow developing calcium deposits. But never forget that any evolutionary change incurs a fitness cost as well, and evolution would balance the fitness cost of thicker (and hence higher energy cost and also less flexible) skin against the gains from increased gamma radiation resistance.
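To put rough numbers on the "a *lot* more massive" claim, here is a sketch using the mass attenuation coefficient of water (about 0.0707 cm²/g at 1 MeV) as a stand-in for soft tissue:

```python
import math

# Fraction of ~1 MeV gamma absorbed by a slab of tissue of given thickness,
# using I = I0 * exp(-(mu/rho) * rho * x).  Water's mass attenuation
# coefficient (~0.0707 cm^2/g at 1 MeV) is a reasonable proxy for flesh.
MU_OVER_RHO = 0.0707  # cm^2/g
DENSITY = 1.0         # g/cm^3, roughly that of soft tissue

for thickness_cm in (0.2, 1.0, 10.0):  # thick skin, a deep layer, absurdity
    absorbed = 1 - math.exp(-MU_OVER_RHO * DENSITY * thickness_cm)
    print(f"{thickness_cm:4.1f} cm of tissue stops ~{100 * absorbed:.0f}% of the gamma")
```

Even ten centimetres of flesh only stops about half of it, which is why a pigment layer measured in micrograms per square centimetre is irrelevant and only sheer mass matters.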
The only effect that gamma exposure would have on the evolution of *pigmentation* is to potentially speed the process up by causing a higher rate of mutation. If there was selection pressure for a change in skin color, that process might well be sped up. But *where* the increased mutation rate took the people would depend on other things. | I'm not sure if this is the convention, but after some more research and reading all these answers I think we may have come to something approaching an answer.
So, it seems we have a number of factors that influence skin colour:
1. Original population genetics
2. UV exposure
3. Vitamin D production
4. Sexual selection
5. Resource cost of producing melanin
6. Gamma radiation exposure
Original population genetics sets the startpoint and pre-existing genetic variety, but we can split the rest into pale-selecting and dark-selecting pressures:
Pale:
1. Vitamin D production
2. Sexual selection
3. Resource cost of producing melanin
Dark:
1. UV exposure
2. Gamma ray exposure
From these, for our lunar population we can discount Vitamin D production (in order to protect from UV they'd have to avoid direct sun exposure, so vitamin D would likely be sourced from food). We can also probably discount the resource cost of producing melanin, given that it's taken so long for numbats to lose their expensive-to-produce teeth (I'd like to find some other data points for that). Sexual selection is an interesting one, but considering the relative stability of skin colours and the lack of sexual dimorphism it's probably pretty weak.
So, it basically comes down to relative exposure of UV on the earth's surface to gamma radiation on the moon. If the radiation on the moon is equivalent to northern Europe we might see a gradual slow movement towards paler skin. If it's equivalent to Africa (or higher) then we will likely see a move towards darker skin (potentially rapidly).
Unfortunately, there's a maddening lack of studies comparing the relative damage of gamma ray and UV exposure. The closest I've come to finding something is [a load of people stating how difficult it is to compare them and one guy who's actually done something](https://www.researchgate.net/post/How_comparable_are_gamma_and_UV_radiation) and found that 6J/m² of UV exposure and 4 Grays of gamma exposure killed the same number of chicken cells (conditions unknown, so not the greatest test, but it's all we've got).
From [this study](http://pubs.rsc.org/en/content/articlelanding/2016/pp/c5pp00419e#!divAbstract) we can see that in Europe we are around 200J/m² per day. In central Africa we are around 5000J/m² per day.
The highest figure I can find quoted for average radiation on the lunar surface is 120 millirem per day (others hover around 50 millirems), which converts to 0.0012 Grays of gamma radiation. Practically nothing. Wait, why are we scared of gamma radiation on the moon again? Unless they're quoting shielded figures, or the 6-to-4 ratio of that guy was for one layer of cells (so gets multiplied by each layer of cells the gamma rays reach that the UV rays don't).
The only thing I can see that would be a problem gamma-radiation-wise is the recommended [maximum radiation dose for fetuses](http://news.mit.edu/1994/safe-0105) (50 millirems *per month* plus the 25 millirems background). So, sod all effect on adults but very dangerous for kiddos, unless I'm missing anything major.
Oh, and apparently during an 18-month study on Mars there were 2 events which saw radiation increase to 2000 millirems per day (0.02 grays).
So, all of that weighs out to a very slight selection pressure towards paler skin with a cultural trait of hiding pregnant women within rad-shielded bunkers, or a strong selection pressure towards jet-black skin in order to protect their unborn children.
Edit: apparently the safe level for radiation exposure under US legislation is 5000 millirems per year, or 13.7 millirems per day. Lower than the level our lunites will be receiving. So, leaning towards the jet black option of the two above... |
107,788 | **The context:**
There is a population of people surviving on a lunar-analogue's surface, descended from the crew of a crashed spaceship. The rest of their society is not relevant to the question, but their technology includes cobbled together habitats and void suits that protect against some solar radiation but not all (an arbitrary amount that allows them to maintain a population, but not necessarily easily).
**The question:**
What skin colour would this select for? I'd initially say a pallid white given the lack of UV exposure, but I've recently stumbled upon research which suggests melanin provides at least some protection against gamma radiation: <https://www.news-medical.net/amp/news/20110824/Melanin-also-protects-from-ionizing-radiation.aspx>
The question is, what skin tone would partial protection from gamma radiation select for in a long-standing permanent lunar culture?
For reference, this is for an art project where the colour palette will be important, so injecting some realism into the skin tone and working from there would be the way to go. | 2018/03/24 | [
"https://worldbuilding.stackexchange.com/questions/107788",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/48681/"
] | **Random.**
I am reminded of the teeth of the numbat.
[](https://i.stack.imgur.com/Z0AOx.jpg)
<http://animaldiversity.org/collections/contributors/anatomical_images/family_pages/dasyuromorphia/myrmecobiidae/>
The numbat has more teeth than any other land mammal. Tooth number and shape vary between individuals. It does not matter to the animal because none of the teeth are used at all. The numbat eats with its tongue exclusively.
>
> The variability in number and form of teeth, as well as the lack of
> significant tooth wear have been cited as evidence that the teeth are
> used very little and so are not subject to intense selection pressure
> (Calaby 1960).
> <https://www.environment.gov.au/system/files/pages/a117ced5-9a94-4586-afdb-1f333618e1e3/files/22-ind.pdf>
>
>
>
So too skin color for your moon people. Skin color for earth humans is influenced by evolutionary pressures that have to do with UV damage / vitamin D synthesis. Absent selection pressures for or against given colors, skin color would evolutionarily drift, like the number and shape of teeth of the numbat. One could invoke this to explain why different individuals were colored differently one to the next: it is random.
Note that it has taken the numbat millions of years for its teeth to reach this state. But with a small population you could have evolution / genetic drift happen faster. | As already pointed out - there would be no real external evolutionary pressure, thus sexual selection would be the way to go.
Short term:
-Just mixing up whole gene pool. (so mixed color skin, dark eyes, dark hair etc)
Long term:
**Survival of the cutest.** (but this is based on the assumption that there would be either a lot of time or the possibility to pick designer babies)
-light skin (setting all PC aside, it's not a recent phenomenon, but something more entrenched. According to records in East Asia, lighter skin was perceived as attractive even in times when Europeans were just considered distant barbarians; moreover, even in Europe there were periods when lead-based white makeup was top trendy)
-blue eyes, blond hair (those genes are recessive, so would not manifest easily)
-neonate features of Asian face
-tall (it's also selected in sexual selection, and in low gravity setting it would have less drawbacks) |
107,788 | **The context:**
There is a population of people surviving on a lunar-analogue's surface, descended from the crew of a crashed spaceship. The rest of their society is not relevant to the question, but their technology includes cobbled together habitats and void suits that protect against some solar radiation but not all (an arbitrary amount that allows them to maintain a population, but not necessarily easily).
**The question:**
What skin colour would this select for? I'd initially say a pallid white given the lack of UV exposure, but I've recently stumbled upon research which suggests melanin provides at least some protection against gamma radiation: <https://www.news-medical.net/amp/news/20110824/Melanin-also-protects-from-ionizing-radiation.aspx>
The question is, what skin tone would partial protection from gamma radiation select for in a long-standing permanent lunar culture?
For reference, this is for an art project where the colour palette will be important, so injecting some realism into the skin tone and working from there would be the way to go. | 2018/03/24 | [
"https://worldbuilding.stackexchange.com/questions/107788",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/48681/"
] | There would be no evolutionary pressure for a specific skin tone for living on the Moon, because -- even were we to colonize the Moon -- no one lives on the Moon like they live on the Earth.
That's because people will live **inside all the time**, getting their Vitamin D from either food or the interior lighting.
It's somewhat similar to white people living in Australia. You'd think that Europeans with darker skin would be more genetically successful, but they aren't. Why? Because clothes (and hats) shield them from the excess UV, while allowing enough to get to the exposed body parts. | The selection pressures for skin colour are that dark skin copes better with exposure to ultraviolet light and light skin produces more vitamin D.
Your people are protected from UV so there's no pressure towards dark skin. If they're eating a balanced diet, there's also no particular pressure to produce more vitamin D. In that case, no skin colour would be preferred. The people's protection against UV exposure and sufficient vitamin D in their diet means that skin colour will have no influence on whether they live beyond child-bearing age. |
107,788 | **The context:**
There is a population of people surviving on a lunar-analogue's surface, descended from the crew of a crashed spaceship. The rest of their society is not relevant to the question, but their technology includes cobbled together habitats and void suits that protect against some solar radiation but not all (an arbitrary amount that allows them to maintain a population, but not necessarily easily).
**The question:**
What skin colour would this select for? I'd initially say a pallid white given the lack of UV exposure, but I've recently stumbled upon research which suggests melanin provides at least some protection against gamma radiation: <https://www.news-medical.net/amp/news/20110824/Melanin-also-protects-from-ionizing-radiation.aspx>
The question is, what skin tone would partial protection from gamma radiation select for in a long-standing permanent lunar culture?
For reference, this is for an art project where the colour palette will be important, so injecting some realism into the skin tone and working from there would be the way to go. | 2018/03/24 | [
"https://worldbuilding.stackexchange.com/questions/107788",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/48681/"
] | There would be no evolutionary pressure for a specific skin tone for living on the Moon, because -- even were we to colonize the Moon -- no one lives on the Moon like they live on the Earth.
That's because people will live **inside all the time**, getting their Vitamin D from either food or the interior lighting.
It's somewhat similar to white people living in Australia. You'd think that Europeans with darker skin would be more genetically successful, but they aren't. Why? Because clothes (and hats) shield them from the excess UV, while allowing enough to get to the exposed body parts. | As already pointed out - there would be no real external evolutionary pressure, thus sexual selection would be the way to go.
Short term:
-Just mixing up whole gene pool. (so mixed color skin, dark eyes, dark hair etc)
Long term:
**Survival of the cutest.** (but this is based on the assumption that there would be either a lot of time or the possibility to pick designer babies)
-light skin (setting all PC aside, it's not a recent phenomenon, but something more entrenched. According to records in East Asia, lighter skin was perceived as attractive even in times when Europeans were just considered distant barbarians; moreover, even in Europe there were periods when lead-based white makeup was top trendy)
-blue eyes, blond hair (those genes are recessive, so would not manifest easily)
-neonate features of Asian face
-tall (it's also selected in sexual selection, and in low gravity setting it would have less drawbacks) |
107,788 | **The context:**
There is a population of people surviving on a lunar-analogue's surface, descended from the crew of a crashed spaceship. The rest of their society is not relevant to the question, but their technology includes cobbled together habitats and void suits that protect against some solar radiation but not all (an arbitrary amount that allows them to maintain a population, but not necessarily easily).
**The question:**
What skin colour would this select for? I'd initially say a pallid white given the lack of UV exposure, but I've recently stumbled upon research which suggests melanin provides at least some protection against gamma radiation: <https://www.news-medical.net/amp/news/20110824/Melanin-also-protects-from-ionizing-radiation.aspx>
The question is, what skin tone would partial protection from gamma radiation select for in a long-standing permanent lunar culture?
For reference, this is for an art project where the colour palette will be important, so injecting some realism into the skin tone and working from there would be the way to go. | 2018/03/24 | [
"https://worldbuilding.stackexchange.com/questions/107788",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/48681/"
] | A detail that heavily affects the answer is what temperature is maintained in the colony. Several complex factors will affect temperature, and you can basically choose whatever fits the story. Temperature will affect how much clothing is worn. The amount of clothing then affects melanin levels.
* hot->little clothing->high melanin to shield from uv
* cold->thick clothing->low melanin to allow vitamin D production in the little exposed skin
If you are assuming sufficient clothing to block UV, then the assumption that gamma radiation will have the dominating effect is flawed. If opaque clothing has a negligible effect on blocking gamma rays, then so will opaque skin. Since high energy gamma rays penetrate opaque clothing they will penetrate skin as well. Clothing will actually perform better than skin could. Based on the description of the environment, it seems likely that clothing will be made from animal and plant tissue. Many animals and plants will adapt to the environment faster than humans due to shorter life cycles and selective breeding. This means clothing will more quickly adapt to blocking any radiation that can be blocked than humans will.
There is one way melanin might have a greater shielding effect than clothing: if a much thicker layer of opaque tissue could be used for shielding than just the skin. This could lead to the possibility of pale, semi-translucent skin for vitamin D with melanin-rich fat, muscle, and/or bone tissue. It sounds like you want a very striking look and a way to justify it; this combination may be fitting. | I'm not sure if this is the convention, but after some more research and reading all these answers I think we may have come to something approaching an answer.
So, it seems we have a number of factors that influence skin colour:
1. Original population genetics
2. UV exposure
3. Vitamin D production
4. Sexual selection
5. Resource cost of producing melanin
6. Gamma radiation exposure
Original population genetics sets the startpoint and pre-existing genetic variety, but we can split the rest into pale-selecting and dark-selecting pressures:
Pale:
1. Vitamin D production
2. Sexual selection
3. Resource cost of producing melanin
Dark:
1. UV exposure
2. Gamma ray exposure
From these, for our lunar population we can discount Vitamin D production (in order to protect from UV they'd have to avoid direct sun exposure, so vitamin D would likely be sourced from food). We can also probably discount the resource cost of producing melanin, given that it's taken so long for numbats to lose their expensive-to-produce teeth (I'd like to find some other data points for that). Sexual selection is an interesting one, but considering the relative stability of skin colours and the lack of sexual dimorphism it's probably pretty weak.
So, it basically comes down to relative exposure of UV on the earth's surface to gamma radiation on the moon. If the radiation on the moon is equivalent to northern Europe we might see a gradual slow movement towards paler skin. If it's equivalent to Africa (or higher) then we will likely see a move towards darker skin (potentially rapidly).
Unfortunately, there's a maddening lack of studies comparing the relative damage of gamma ray and UV exposure. The closest I've come to finding something is [a load of people stating how difficult it is to compare them and one guy who's actually done something](https://www.researchgate.net/post/How_comparable_are_gamma_and_UV_radiation) and found that 6J/m² of UV exposure and 4 Grays of gamma exposure killed the same number of chicken cells (conditions unknown, so not the greatest test, but it's all we've got).
From [this study](http://pubs.rsc.org/en/content/articlelanding/2016/pp/c5pp00419e#!divAbstract) we can see that in Europe we are around 200J/m² per day. In central Africa we are around 5000J/m² per day.
The highest figure I can find quoted for average radiation on the lunar surface is 120 millirem per day (others hover around 50 millirems), which converts to 0.0012 Grays of gamma radiation. Practically nothing. Wait, why are we scared of gamma radiation on the moon again? Unless they're quoting shielded figures, or the 6-to-4 ratio of that guy was for one layer of cells (so gets multiplied by each layer of cells the gamma rays reach that the UV rays don't).
The only thing I can see that would be a problem gamma-radiation-wise is the recommended [maximum radiation dose for fetuses](http://news.mit.edu/1994/safe-0105) (50 millirems *per month* plus the 25 millirems background). So, sod all effect on adults but very dangerous for kiddos, unless I'm missing anything major.
Oh, and apparently during an 18-month study on Mars there were 2 events which saw radiation increase to 2000 millirems per day (0.02 grays).
So, all of that weighs out to a very slight selection pressure towards paler skin with a cultural trait of hiding pregnant women within rad-shielded bunkers, or a strong selection pressure towards jet-black skin in order to protect their unborn children.
Edit: apparently the safe level for radiation exposure under US legislation is 5000 millirems per year, or 13.7 millirems per day. Lower than the level our lunites will be receiving. So, leaning towards the jet black option of the two above... |
18,524 | I am using Magento Community version 1.8.1. I am using FedEx for the shipping method.
I enabled several methods like 'First Overnight', '2 Day', 'Priority Overnight', 'Standard Overnight', 'Ground' and 'International Ground' from the admin.
But only four are appearing in the front end. 'Ground' and 'International Ground' are not appearing.
Can anyone help me with this? | 2014/04/17 | [
"https://magento.stackexchange.com/questions/18524",
"https://magento.stackexchange.com",
"https://magento.stackexchange.com/users/2506/"
] | I had the same issue. After turning just about every option on and off, I figured out that if the field for "Residential Delivery" is set to 'Yes', Ground is turned off, but you can select "Home Delivery".
**Solution!**
Switch "Residential Delivery" to "Off", this allowed the ground option to display on checkout and estimate page. Not sure if that is a Magento or FedEx API thing, I don't really care to explore it atm. Maybe one of you guys can look into it, for a better fix. | I had a similar problem, but mine wasn't related to 'Residential' being set or not - international address post-codes weren't passing validation.
USPS ignored this, but FedEx's API did not.
This could be one of many issues, and as Magento allows shipping methods to fail silently, I suggest enabling "Debug" for the shipping method, then checking var/log/shipping\_fedex.log and looking at the "result" object. |
18,524 | I am using Magento Community version 1.8.1. I am using FedEx for the shipping method.
I enabled several methods like 'First Overnight', '2 Day', 'Priority Overnight', 'Standard Overnight', 'Ground' and 'International Ground' from the admin.
But only four are appearing in the front end. 'Ground' and 'International Ground' are not appearing.
Can anyone help me with this? | 2014/04/17 | [
"https://magento.stackexchange.com/questions/18524",
"https://magento.stackexchange.com",
"https://magento.stackexchange.com/users/2506/"
] | I had the same issue. After turning just about every option on and off, I figured out that if the field for "Residential Delivery" is set to 'Yes', Ground is turned off, but you can select "Home Delivery".
**Solution!**
Switch "Residential Delivery" to "Off", this allowed the ground option to display on checkout and estimate page. Not sure if that is a Magento or FedEx API thing, I don't really care to explore it atm. Maybe one of you guys can look into it, for a better fix. | Turning *Residential Delivery* to *No* worked for me. |
8,677,951 | I intend to write a small webapp using the Play framework with Scala and CouchDB. Is there any Scala/CouchDB integration library available?
Thanks | 2011/12/30 | [
"https://Stackoverflow.com/questions/8677951",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/465594/"
] | The Play framework has its own [Scala branch](http://scala.playframework.org/).
There's a Scala/CouchDB interface on [GitHub](https://github.com/debasishg/scouchdb). | If you intend to use Play's asynchronous API, where your controller methods have to return futures, you might want to have a look at [my asynchronous library](https://github.com/KimStebel/sprouch). |
29,384 | I'm thinking about acquiring a NAS from Synology (or QNap); these are pretty cool NAS devices and do much more than just store files.
Some extras include:
* ssh, ftp, telnet, mail servers
* photo gallery (directly from the NAS!)
* mysql/php (can even run a website from there)
* and much more
On the specs side it looks awesome as a LAN server, but it's also possible to access those services from the Internet. And now I'm wondering how safe that is, considering you're actually putting multiple servers and, not to forget, all your personal files stored on that device, *on the Internet*.
I've done a bit of research, and ssh access could be hardened using 2-step authentication, but I'm not sure about the other services, unless they all have to go through ssh?
I would appreciate your view on this and other possible ways to make your home NAS secure. | 2013/01/20 | [
"https://security.stackexchange.com/questions/29384",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9692/"
] | A NAS is a computer. It has a "smaller" CPU (usually an ARM in the 200 MHz range) but it still runs a "normal" operating system (often a Linux derivative) with all its normal software and assorted vulnerabilities. When an SSH server has a buffer overflow, it is vulnerable, even if the outer box does not "look like" a computer.
To be considered secure, a NAS, just like any other computer, must be managed, with prompt installation of security fixes. This is where the problem lies: *contrary* to what happens with full-fledged desktop computers, NAS vendors rarely distribute security patches on a daily basis. There is an inherent latency, which means that when a vulnerability is found, attackers have a few weeks (or months!) of head start before the fix is packaged and installed in the majority of deployed devices. This is a rather big issue. It turns 0-day exploits into 0-month exploits.
My advice would be to refrain from putting such a device "on the Internet" unless you replace the OS with another one which you control, and offers low-latency security updates (and, of course, you *do* check them and install them with all due alacrity). For instance, you can install [Debian on QNAP NAS](http://wiki.qnap.com/wiki/Debian_Installation_On_QNAP). | Don't put it on the internet if you don't need to. If you do, make all the other services available only locally and allow only ssh from the internet. You can then make a tunnel through ssh and access your other services through that tunnel.
Do mind that it is a ***HOME*** NAS so better keep it on your LAN. |
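One of the answers above suggests keeping everything LAN-only and tunnelling over ssh. As a hedged sketch, the helper below only *builds* an `ssh -N -L` port-forward command line (the user `admin`, host `nas.local` and ports are made-up examples; running it would still require working key-based ssh access):

```python
# Build (but do not run) an `ssh -N -L` local port-forward command.
# User, host and ports here are hypothetical examples.

def tunnel_command(user, host, local_port, remote_port, remote_host="localhost"):
    """Forward local_port on this machine to remote_host:remote_port via `host`."""
    forward = f"{local_port}:{remote_host}:{remote_port}"
    return ["ssh", "-N", "-L", forward, f"{user}@{host}"]

# Example: reach the NAS photo gallery (port 80) on local port 8080.
cmd = tunnel_command("admin", "nas.local", 8080, 80)
print(" ".join(cmd))  # ssh -N -L 8080:localhost:80 admin@nas.local
```

Handing `cmd` to `subprocess.run` would open the tunnel; every service except ssh then stays bound to the LAN.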
29,384 | I'm thinking about acquiring a NAS from Synology (or QNap), these are pretty cool NAS and do much more than just storing files.
Some extras include:
* ssh, ftp, telnet, mail servers
* photo gallery (directly from the NAS!)
* mysql/php (can even run a website from there)
* and much more
On the specs side it looks awesome as a LAN server but it's possible to access those services also from the Internet. And now, I'm wondering how safe that is, considering you're actually putting multiple servers and, not to forget, all your personal files stored in that device, *on the Internet*.
I've done a bit of research, and ssh access could be hardened using 2-step authentication, but I'm not sure about the other services, unless they all have to go through ssh?
I would appreciate your view on this and other possible ways to make your home NAS secure. | 2013/01/20 | [
"https://security.stackexchange.com/questions/29384",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9692/"
] | A NAS is a computer. It has a "smaller" CPU (usually an ARM in the 200 MHz range) but it still runs a "normal" operating system (often a Linux derivative) with all its normal software and assorted vulnerabilities. When an SSH server has a buffer overflow, it is vulnerable, even if the outer box does not "look like" a computer.
To be considered secure, a NAS, just like any other computer, must be managed, with prompt installation of security fixes. This is where the problem lies: *contrary* to what happens with full-fledged desktop computers, NAS vendors rarely distribute security patches on a daily basis. There is an inherent latency which means that when a vulnerability is found, attackers have a few weeks (or months!) of head start before the fix is packaged and installed in the majority of deployed devices. This is a rather big issue. It turns 0-day exploits into 0-month exploits.
My advice would be to refrain from putting such a device "on the Internet" unless you replace the OS with another one which you control, and offers low-latency security updates (and, of course, you *do* check them and install them with all due alacrity). For instance, you can install [Debian on QNAP NAS](http://wiki.qnap.com/wiki/Debian_Installation_On_QNAP). | QNAP devices have a very limited ssh server installed as standard - you have to log in as admin (i.e. root access), which is a risk in itself.
You can replace it with OpenSSH - see instructions here: <http://wiki.qnap.com/wiki/How_To_Replace_SSH_Daemon_With_OpenSSH>
However, be very careful to change the guest account password. I learnt this to my cost when I noticed multiple unwelcome external users logged in as guest!
You can also harden the openssh config to deny password access, and only permit access with public key exchange. |
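The hardening mentioned above (deny passwords, allow only public-key logins) comes down to a few standard `sshd_config` directives; this is an illustrative fragment, and the file location (`/etc/ssh/sshd_config`) assumes a stock OpenSSH install:

```
# /etc/ssh/sshd_config (fragment) - key-only authentication
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
# No direct root logins
PermitRootLogin no
```

Restart the ssh daemon after editing, and test a key-based login from a second session before closing the one you edited from.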
29,384 | I'm thinking about acquiring a NAS from Synology (or QNap), these are pretty cool NAS and do much more than just storing files.
Some extras include:
* ssh, ftp, telnet, mail servers
* photo gallery (directly from the NAS!)
* mysql/php (can even run a website from there)
* and much more
On the specs side it looks awesome as a LAN server but it's possible to access those services also from the Internet. And now, I'm wondering how safe that is, considering you're actually putting multiple servers and, not to forget, all your personal files stored in that device, *on the Internet*.
I've done a bit of research, and ssh access could be hardened using 2-step authentication, but I'm not sure about the other services, unless they all have to go through ssh?
I would appreciate your view on this and other possible ways to make your home NAS secure. | 2013/01/20 | [
"https://security.stackexchange.com/questions/29384",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9692/"
] | A NAS is a computer. It has a "smaller" CPU (usually an ARM in the 200 MHz range) but it still runs a "normal" operating system (often a Linux derivative) with all its normal software and assorted vulnerabilities. When an SSH server has a buffer overflow, it is vulnerable, even if the outer box does not "look like" a computer.
To be considered secure, a NAS, just like any other computer, must be managed, with prompt installation of security fixes. This is where the problem lies: *contrary* to what happens with full-fledged desktop computers, NAS vendors rarely distribute security patches on a daily basis. There is an inherent latency which means that when a vulnerability is found, attackers have a few weeks (or months!) of head start before the fix is packaged and installed in the majority of deployed devices. This is a rather big issue. It turns 0-day exploits into 0-month exploits.
My advice would be to refrain from putting such a device "on the Internet" unless you replace the OS with another one which you control, and offers low-latency security updates (and, of course, you *do* check them and install them with all due alacrity). For instance, you can install [Debian on QNAP NAS](http://wiki.qnap.com/wiki/Debian_Installation_On_QNAP). | Do you intend to put your family photos, music library, ripped DVD collection and house documents onto the NAS?
If so, how much would it matter to you if:
* They were all over the internet.
* They were all erased.
If either of these matters to you, you'd be best not to poke a hole in your firewall exposing this NAS to the internet.
If these things just don't matter, can you isolate the NAS from the rest of your home devices, e.g. a 'home DMZ'? In that way, if the NAS is compromised, your internet access and other home devices are not at risk. |
29,384 | I'm thinking about acquiring a NAS from Synology (or QNap), these are pretty cool NAS and do much more than just storing files.
Some extras include:
* ssh, ftp, telnet, mail servers
* photo gallery (directly from the NAS!)
* mysql/php (can even run a website from there)
* and much more
On the specs side it looks awesome as a LAN server but it's possible to access those services also from the Internet. And now, I'm wondering how safe that is, considering you're actually putting multiple servers and, not to forget, all your personal files stored in that device, *on the Internet*.
I've done a bit of research, and ssh access could be hardened using 2-step authentication, but I'm not sure about the other services, unless they all have to go through ssh?
I would appreciate your view on this and other possible ways to make your home NAS secure. | 2013/01/20 | [
"https://security.stackexchange.com/questions/29384",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9692/"
] | Don't put it on the internet if you don't need to. If you do, make all the other services available only locally and allow only ssh from the internet. You can then make a tunnel through ssh and access your other services through that tunnel.
Do mind that it is a ***HOME*** NAS so better keep it on your LAN. | Do you intend to put your family photos, music library, ripped DVD collection and house documents onto the NAS?
If so, how much would it matter to you if:
* They were all over the internet.
* They were all erased.
If either of these matters to you, you'd be best not to poke a hole in your firewall exposing this NAS to the internet.
If these things just don't matter, can you isolate the NAS from the rest of your home devices, e.g. a 'home DMZ'? In that way, if the NAS is compromised, your internet access and other home devices are not at risk. |
29,384 | I'm thinking about acquiring a NAS from Synology (or QNap), these are pretty cool NAS and do much more than just storing files.
Some extras include:
* ssh, ftp, telnet, mail servers
* photo gallery (directly from the NAS!)
* mysql/php (can even run a website from there)
* and much more
On the specs side it looks awesome as a LAN server but it's possible to access those services also from the Internet. And now, I'm wondering how safe that is, considering you're actually putting multiple servers and, not to forget, all your personal files stored in that device, *on the Internet*.
I've done a bit of research, and ssh access could be hardened using 2-step authentication, but I'm not sure about the other services, unless they all have to go through ssh?
I would appreciate your view on this and other possible ways to make your home NAS secure. | 2013/01/20 | [
"https://security.stackexchange.com/questions/29384",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/9692/"
] | QNAP devices have a very limited ssh server installed as standard - you have to log in as admin (i.e. root access), which is a risk in itself.
You can replace it with OpenSSH - see instructions here: <http://wiki.qnap.com/wiki/How_To_Replace_SSH_Daemon_With_OpenSSH>
However, be very careful to change the guest account password. I learnt this to my cost when I noticed multiple unwelcome external users logged in as guest!
You can also harden the openssh config to deny password access, and only permit access with public key exchange. | Do you intend to put your family photos, music library, ripped DVD collection and house documents onto the NAS?
If so, how much would it matter to you if:
* They were all over the internet.
* They were all erased.
If either of these matters to you, you'd be best not to poke a hole in your firewall exposing this NAS to the internet.
If these things just don't matter, can you isolate the NAS from the rest of your home devices, e.g. a 'home DMZ'? In that way, if the NAS is compromised, your internet access and other home devices are not at risk. |
3,958,600 | I have deployed my web application (ASP.NET/C#) on the Server (Win 2003) to make it possible to merge word documents and open them.
Everything worked fine: a .doc document was added to the bottom of another .doc document (using Microsoft.Office.Interop.Word libraries) and then opened with the client MS Word.
Unfortunately in the weekend an automatic update has been done on the Server by another team and I do not know yet which kind of update was done (maybe an Office 2003 Update, but it might be even a Server patch).
Anyway after that I get the following Warning in the Event Viewer and the application hangs when it comes to use Microsoft.Office.Interop.Word libraries:
>
> Detection of product '{90110409-6000-11D3-8CFE-0150048383C9}',
> feature 'OfficeUserData', component '{4A31E933-6F67-11D2-AAA2-00A0C90F57B0}' failed.
>
> The resource 'HKEY\_CURRENT\_USER\Software\ODBC\ODBC.INI\MS Access Database\' does not exist.
>
>
>
We do not even use MS Access, but SQL Server instead.
What I fear is that registry keys have been changed and now the system does not work anymore as expected.
I set the "NETWORK SERVICE" user with privileges to access/launch word and it is the same user used also for the Default App Pool. We use IIS6, Windows Server 2003 e Word 2003.
Could you suggest any approach to a solution? (Even though I know that, without knowing which update has been done, it might be hard.)
Thanks | 2010/10/18 | [
"https://Stackoverflow.com/questions/3958600",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/479222/"
] | Using Office components (be it Office itself or the PIAs) on a server is to be avoided at all costs:
>
> Microsoft does not currently
> recommend, and does not support,
> Automation of Microsoft Office
> applications from any unattended,
> non-interactive client application or
> component (including ASP, ASP.NET,
> DCOM, and NT Services), because Office
> may exhibit unstable behavior and/or
> deadlock when Office is run in this
> environment.
>
>
>
[Microsoft KB article](http://support.microsoft.com/kb/257757)
Like many others, I am regularly confronted with this problem and there is no easy solution.
Either dedicate a little server somewhere to do your Office stuff (and be prepared to reboot it regularly) or use a third-party product that doesn't reference the PIAs.
Sorry to be the bringer of bad tidings... | You should be able to check under the Windows updates to see what updates were recently applied. Hopefully, you could roll them back until you find the culprit.
Paul's right. Deploying a solution using MSOffice apps on a server can be a +very+ dicey proposition. Be prepared, and make use of watchdog timers and code to enumerate and kill errant WINWORD.EXE processes when you think it might have dropped off the deep end. It's not elegant stuff, but it can be made to work.
As for 3rd-party tools, I've looked at A LOT of them. If you're building a doc from scratch, most are ok, though they have varying levels of implemented functionality.
If you can work with DOCX files exclusively, the Open XML SDK works quite nicely and is free. But it sucks to actually manipulate existing documents.
If you're planning on using existing documents as templates, your options become much narrower. You can do it with the OpenXML SDK, but it's VERY difficult. Windward Reports works, but can be $$$.
Most of the "Word Compatible" libraries for sale don't implement all the functionality you're likely to need.
You'll most likely find that using the Word Object Model API is the only route to give you the capability you'll need. But, as always YMMV. |
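The "watchdog" advice above can be sketched as a small selection step. This is a hedged illustration: the `(name, seconds_running)` pairs are hypothetical input, and in practice they would come from enumerating processes (e.g. via `tasklist` or a process library) before handing the survivors to a kill routine:

```python
# Pick Office processes that have "dropped off the deep end":
# WINWORD.EXE instances alive longer than the allowed time budget.

def errant_processes(procs, name="WINWORD.EXE", max_seconds=120):
    """Return (name, seconds_running) pairs that exceed the budget."""
    return [(p, age) for p, age in procs if p.upper() == name and age > max_seconds]

snapshot = [("WINWORD.EXE", 30), ("WINWORD.EXE", 600), ("notepad.exe", 999)]
print(errant_processes(snapshot))  # [('WINWORD.EXE', 600)]
```

Each returned entry would then be killed (on Windows, something like `taskkill`) and the conversion job retried.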
67,859 | Hi, I am new to Salesforce. I have tried using it, and it's giving me an error about the 1,000-record limit.
Please suggest a solution.
thanks in advance. | 2015/02/28 | [
"https://salesforce.stackexchange.com/questions/67859",
"https://salesforce.stackexchange.com",
"https://salesforce.stackexchange.com/users/4549/"
] | You can use the readOnly attribute to display more than 1,000 records. | You can also use RemoteActions. This is a starting point <http://blog.enree.co/2013/01/apex-javascript-what-is-remoteaction.html> |
67,859 | Hi, I am new to Salesforce. I have tried using it, and it's giving me an error about the 1,000-record limit.
Please suggest a solution.
thanks in advance. | 2015/02/28 | [
"https://salesforce.stackexchange.com/questions/67859",
"https://salesforce.stackexchange.com",
"https://salesforce.stackexchange.com/users/4549/"
] | You can use the readOnly attribute to display more than 1,000 records. | Jitendra Zaa has a good solution above, but I would also suggest that displaying more than 1,000 rows in a visualforce page can cause other problems. Depending on how the VF page is set up, for example, you can run into view state errors, or just general slowness. Users may also find it difficult to parse through this many records.
I would consider implementing pagination and/or some kind of search functionality with various filters, so that users can view a smaller subset of records (if, of course, this works for your needs). |
67,859 | Hi, I am new to Salesforce. I have tried using it, and it's giving me an error about the 1,000-record limit.
Please suggest a solution.
thanks in advance. | 2015/02/28 | [
"https://salesforce.stackexchange.com/questions/67859",
"https://salesforce.stackexchange.com",
"https://salesforce.stackexchange.com/users/4549/"
] | The repeater control has a limit of showing only 1,000 records.
You can use the REST API to fetch records and jQuery to render them. [This is a good article to give you a base for how to use the REST API in VF.](https://developer.salesforce.com/blogs/developer-relations/2013/06/calling-the-force-com-rest-api-from-visualforce-pages-revisited.html) | You can also use RemoteActions. This is a starting point <http://blog.enree.co/2013/01/apex-javascript-what-is-remoteaction.html> |
67,859 | Hi, I am new to Salesforce. I have tried using it, and it's giving me an error about the 1,000-record limit.
Please suggest a solution.
thanks in advance. | 2015/02/28 | [
"https://salesforce.stackexchange.com/questions/67859",
"https://salesforce.stackexchange.com",
"https://salesforce.stackexchange.com/users/4549/"
] | The repeater control has a limit of showing only 1,000 records.
You can use the REST API to fetch records and jQuery to render them. [This is a good article to give you a base for how to use the REST API in VF.](https://developer.salesforce.com/blogs/developer-relations/2013/06/calling-the-force-com-rest-api-from-visualforce-pages-revisited.html) | Jitendra Zaa has a good solution above, but I would also suggest that displaying more than 1,000 rows in a visualforce page can cause other problems. Depending on how the VF page is set up, for example, you can run into view state errors, or just general slowness. Users may also find it difficult to parse through this many records.
I would consider implementing pagination and/or some kind of search functionality with various filters, so that users can view a smaller subset of records (if, of course, this works for your needs). |
67,859 | Hi, I am new to Salesforce. I have tried using it, and it's giving me an error about the 1,000-record limit.
Please suggest a solution.
thanks in advance. | 2015/02/28 | [
"https://salesforce.stackexchange.com/questions/67859",
"https://salesforce.stackexchange.com",
"https://salesforce.stackexchange.com/users/4549/"
] | Jitendra Zaa has a good solution above, but I would also suggest that displaying more than 1,000 rows in a visualforce page can cause other problems. Depending on how the VF page is set up, for example, you can run into view state errors, or just general slowness. Users may also find it difficult to parse through this many records.
I would consider implementing pagination and/or some kind of search functionality with various filters, so that users can view a smaller subset of records (if, of course, this works for your needs). | You can also use RemoteActions. This is a starting point <http://blog.enree.co/2013/01/apex-javascript-what-is-remoteaction.html> |
19,458,504 | I currently have five tables that receive replicated data on a frequent basis. PL/SQL is used to transform, cleanse, and aggregate this data based on complicated business logic. The results are then placed in three reporting tables.
I am interested in using MongoDB instead of a relational database for this. In order to do so, all of the PL/SQL business logic must be translated into another language, such as Java.
However, it has been suggested to me that Hadoop will be a better alternative than MongoDB. I am now learning about Hadoop and Mapreduce, but I get the feeling that Hadoop is used for analyzing data (deriving statistical value from data, i.e. correlations between customer actions) not for transforming data based on business logic.
In order to replace the as-is system with Hadoop, I assume that all of the PL/SQL business logic will have to be translated into a mapreduce function. Is this assumption incorrect?
**Regardless of my personal situation, is there any business logic that Hadoop (and mapreduce) cannot handle, that PL/SQL or Java can?** | 2013/10/18 | [
"https://Stackoverflow.com/questions/19458504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1391717/"
] | First, MongoDB is a type of NoSQL database and Hadoop is a kind of parallel-processing framework.
If you really have terabytes/petabytes of data, it is justifiable to use a parallel processing framework.
And review your data to decide whether it is okay to go with a NoSQL database + parallel processing framework.
1. You can convert PL/SQL business logic into mapreduce functions
2. Hadoop can handle any business logic, though you lose indexing and centralized processing (for non-transactional systems)
Also, I suggest you look into Hive; it might help you. | I can confirm that you can use MongoDB+Hadoop. All you have to do is set the entire transformation logic from all packages aside, then identify input, output and flow (dependencies), then convert to map-reduce. The major task is to identify keys, values and rules from the transformation logic. I hope you can use Pig also. Use Sqoop for extraction. |
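As a hedged illustration of what "translating PL/SQL into map-reduce functions" means, here is a toy `SUM ... GROUP BY` (one of the simpler rules such business logic contains) expressed as a map step and a reduce step; the records and field names are made up, and the shuffle is simulated in plain Python rather than run on Hadoop:

```python
from collections import defaultdict

# Toy rows standing in for replicated source data (fields are hypothetical).
rows = [
    {"region": "east", "amount": 10},
    {"region": "west", "amount": 5},
    {"region": "east", "amount": 7},
]

def map_fn(row):
    # Emit (key, value) pairs: SELECT region, amount
    yield row["region"], row["amount"]

def reduce_fn(key, values):
    # Aggregate per key: SUM(amount) ... GROUP BY region
    return key, sum(values)

# Simulate the shuffle/group phase a framework like Hadoop performs.
grouped = defaultdict(list)
for row in rows:
    for k, v in map_fn(row):
        grouped[k].append(v)

result = dict(reduce_fn(k, vs) for k, vs in grouped.items())
print(result)  # {'east': 17, 'west': 5}
```

More intricate PL/SQL rules (joins, multi-step cleansing) become chains of such map/reduce pairs, which is exactly where tools like Hive or Pig save hand-written code.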
1,711 | At the risk of potentially asking an off-topic question, I'd like to better understand the maintenance of a septic tank. I've never had one before and am unfamiliar with how to manage one.
My primary question is: how often do septic tanks need to be pumped? We're looking to buy a 4/2 house with a concrete septic tank "supposedly" with a capacity of 2250.
My secondary question is: how should septic tanks be monitored? Just an annual scheduled service check-up? I don't want to spend money needlessly but also don't want to neglect something that needs regular attention. | 2010/09/10 | [
"https://diy.stackexchange.com/questions/1711",
"https://diy.stackexchange.com",
"https://diy.stackexchange.com/users/722/"
] | The pumping question depends on your local codes. In our area, a brand new tank is good for 5 years before it needs to be pumped. A non-new tank must be pumped every 3 years. This is pretty typical.
I would say that tank monitoring will vary depending on your system. A septic tank is a pretty simple system. You have a primary tank (some call it a settling tank) that settles any solids out of the wastewater, and where most of your breakdown occurs (a well operating system will have natural occurring bacteria that do a good job of breaking down "solid" waste). As an aside, this settled goo is the majority of what is pumped out when the pump guy comes.
Usually, there is a second tank that is gravity fed, where the now "grey water" flows. From here, the grey water sits for a while (more breakdown occurs here) and it either feeds by gravity to a drain field, where it filters through the ground, or, it is pumped to a mound/elevated drain field.
In a simple non-pump system, you want to watch out for clogging in the outlets to the secondary tank and to the drain field. This can be done pretty easily- when you have the manhole covers replaced, ask the guy doing the work for you to tell you how far down from the top of the manhole, the drain is. Now, as often as you want, put a stick down the manhole, and see how deep the water is. If the water is above the drain, you have a blockage, and it should be serviced.
If you have a system with a pump, I'd highly recommend getting an alarm if one isn't installed already. The alarms will alert you if the level of the water goes above a specified height, which indicates a clogged pump or filter.
My recommendation would be to do your own monitoring (checking water level) as often as makes you comfortable, and have a scheduled service every 2-3 years.
More important than monitoring however, is making sure you are diligent with prevention. Use as little toilet paper as is practical, and stay away from quilted TP. Additionally, nothing but toilet waste, water, and TP should EVER go down your drains (the exception to this is if you have a food disposer, but even then, I'd use the disposal sparingly). Also, try not to use too harsh of chemicals when doing cleaning of your sinks/toilets/showers, and NEVER use draino.. there are cleaning products that are septic friendly, and septic friendly drain de-cloggers. Use them. This keeps the bacterial ecosystem breaking down your waste, nice and happy, which is a good thing. | MarkD already gave a really good by the book answer, but let me answer from the side of a homeowner who has had septic tanks for around the last 20 years.
The only time I had any maintenance/inspection was one time when my yard was getting really swampy. I had a septic guy come out and he pumped it rather cheaply, no permanent damage, no big whoop. Those Rid-X commercials with the bathroom full of backed up waste seem like an exaggeration to me.
We try to go easy on the chemicals, but we aren't that diligent about it. Definitely no drain cleaners, but Clorox goes in the toilets regularly.
One thing we do about 2-3 times a year is flush some baker's yeast. It is effectively the same thing as Rid-X and much cheaper. I've also heard flushing raw hamburger (small quantities) is another way to get the bacteria going, but I'm a little nervous about that idea. I figure my septic tank gets enough hamburger the "regular" way if you know what I mean. |
1,711 | At the risk of potentially asking an off-topic question, I'd like to better understand the maintenance of a septic tank. I've never had one before and am unfamiliar with how to manage one.
My primary question is: how often do septic tanks need to be pumped? We're looking to buy a 4/2 house with a concrete septic tank "supposedly" with a capacity of 2250.
My secondary question is: how should septic tanks be monitored? Just an annual scheduled service check-up? I don't want to spend money needlessly but also don't want to neglect something that needs regular attention. | 2010/09/10 | [
"https://diy.stackexchange.com/questions/1711",
"https://diy.stackexchange.com",
"https://diy.stackexchange.com/users/722/"
] | The pumping question depends on your local codes. In our area, a brand new tank is good for 5 years before it needs to be pumped. A non-new tank must be pumped every 3 years. This is pretty typical.
I would say that tank monitoring will vary depending on your system. A septic tank is a pretty simple system. You have a primary tank (some call it a settling tank) that settles any solids out of the wastewater, and where most of your breakdown occurs (a well operating system will have natural occurring bacteria that do a good job of breaking down "solid" waste). As an aside, this settled goo is the majority of what is pumped out when the pump guy comes.
Usually, there is a second tank that is gravity fed, where the now "grey water" flows. From here, the grey water sits for a while (more breakdown occurs here) and it either feeds by gravity to a drain field, where it filters through the ground, or, it is pumped to a mound/elevated drain field.
In a simple non-pump system, you want to watch out for clogging in the outlets to the secondary tank and to the drain field. This can be done pretty easily- when you have the manhole covers replaced, ask the guy doing the work for you to tell you how far down from the top of the manhole, the drain is. Now, as often as you want, put a stick down the manhole, and see how deep the water is. If the water is above the drain, you have a blockage, and it should be serviced.
If you have a system with a pump, I'd highly recommend getting an alarm if one isn't installed already. The alarms will alert you if the level of the water goes above a specified height, which indicates a clogged pump or filter.
My recommendation would be to do your own monitoring (checking water level) as often as makes you comfortable, and have a scheduled service every 2-3 years.
More important than monitoring however, is making sure you are diligent with prevention. Use as little toilet paper as is practical, and stay away from quilted TP. Additionally, nothing but toilet waste, water, and TP should EVER go down your drains (the exception to this is if you have a food disposer, but even then, I'd use the disposal sparingly). Also, try not to use too harsh of chemicals when doing cleaning of your sinks/toilets/showers, and NEVER use draino.. there are cleaning products that are septic friendly, and septic friendly drain de-cloggers. Use them. This keeps the bacterial ecosystem breaking down your waste, nice and happy, which is a good thing. | If the tank and drain-field are correctly sized for the home and occupants, it is designed to be pumped about once every 5 years. If you have more than 2 people per bedroom living there, you may want to pump it more frequently. Fewer people than bedrooms and you may not need to pump it as much.
You should have it pumped and inspected when you move in. If the septic system is more than 10 years old though, you probably won't be able to depend on the guidelines above. In this case I'd recommend you pump it 5 years after you move in (or sooner if you have more people), and have them give you an idea of what the sludge level was. You can then decide how often to pump it based on the actual functioning of the field and system, and your own habits.
The sludge layer should be less than 30% of your tank volume. You can measure this yourself with a pole and some cheesecloth attached along the bottom length - place it in the tank, and move it back and forth and side to side gently, then pull it up and you'll see the depth of the sludge layer. If it's 30% of the total height of the tank, have the tank pumped. |
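The 30% rule above is easy to turn into a quick check against your stick-and-cheesecloth measurement; the depths below are illustrative numbers, not from the answer:

```python
def needs_pumping(sludge_depth_in, tank_height_in, threshold=0.30):
    """True when the sludge layer is at least `threshold` of the tank height."""
    return sludge_depth_in / tank_height_in >= threshold

print(needs_pumping(12, 48))  # False: 12/48 is 25%
print(needs_pumping(16, 48))  # True: 16/48 is about 33%
```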
1,711 | At the risk of potentially asking an off-topic question, I'd like to better understand the maintenance of a septic tank. I've never had one before and am unfamiliar with how to manage one.
My primary question is: how often do septic tanks need to be pumped? We're looking to buy a 4/2 house with a concrete septic tank "supposedly" with a capacity of 2250.
My secondary question is: how should septic tanks be monitored? Just an annual scheduled service check-up? I don't want to spend money needlessly but also don't want to neglect something that needs regular attention. | 2010/09/10 | [
"https://diy.stackexchange.com/questions/1711",
"https://diy.stackexchange.com",
"https://diy.stackexchange.com/users/722/"
] | MarkD already gave a really good by the book answer, but let me answer from the side of a homeowner who has had septic tanks for around the last 20 years.
The only time I had any maintenance/inspection was one time when my yard was getting really swampy. I had a septic guy come out and he pumped it rather cheaply, no permanent damage, no big whoop. Those Rid-X commercials with the bathroom full of backed up waste seem like an exaggeration to me.
We try to go easy on the chemicals, but we aren't that diligent about it. Definitely no drain cleaners, but Clorox goes in the toilets regularly.
One thing we do about 2-3 times a year is flush some baker's yeast. It is effectively the same thing as Rid-X and much cheaper. I've also heard flushing raw hamburger (small quantities) is another way to get the bacteria going, but I'm a little nervous about that idea. I figure my septic tank gets enough hamburger the "regular" way if you know what I mean. | If the tank and drain-field are correctly sized for the home and occupants, it is designed to be pumped about once every 5 years. If you have more than 2 people per bedroom living there, you may want to pump it more frequently. Fewer people than bedrooms and you may not need to pump it as much.
You should have it pumped and inspected when you move in. If the septic system is more than 10 years old though, you probably won't be able to depend on the guidelines above. In this case I'd recommend you pump it 5 years after you move in (or sooner if you have more people), and have them give you an idea of what the sludge level was. You can then decide how often to pump it based on the actual functioning of the field and system, and your own habits.
The sludge layer should be less than 30% of your tank volume. You can measure this yourself with a pole and some cheesecloth attached along the bottom length - place it in the tank, and move it back and forth and side to side gently, then pull it up and you'll see the depth of the sludge layer. If it's 30% of the total height of the tank, have the tank pumped. |
174,388 | The [ODO](https://en.oxforddictionaries.com/definition/us/bald-faced) has an example sentence:
>
> I lost track trying to count the sheer bald-faced, brazen, **well-swilled** out-and-out lies being palmed off as fact or being suggested for our consumption as reasonable readings of the text.
>
>
>
It appears *well-swilled* should be synonymous with other adjectives in the lineup, but I can't find this word in dictionaries. Another [example](https://www.christianitytoday.com/ct/1994/december12/4te003.html) I found:
>
> Our sixteenth-century forebears used adjectives such as "shameless, fat, **well-swilled**, stinking, papistical … ," as historian Timothy George wrote on our May 16 editorial page.
>
>
>
*Swill* means *to drink, wash, rinse*. But what does "well-swilled" mean here? Urban Dictionary has an entry that suggests it means "intoxicated". This definition does not appear to sufficiently fit the above sentences. | 2018/07/28 | [
"https://ell.stackexchange.com/questions/174388",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/61125/"
] | If someone swills their drink, they're sloshing it around (often in their own mouth). As gargling is to the throat, swilling is to the mouth. If a lie is "well-swilled", it's been rolled around in the liar's mouth thoroughly before being spat out on you. | In this case, I would say that **well-swilled** means "well-drunk."
It's being used metaphorically, and pairs with the later use of **consumption**.
In other words, the lies, like drink or food, have been accepted and taken as part of some people's intellectual "diets."
---
There is an idiom that says, **don't drink the Kool-Aid.** It's become popular as a way of saying that you shouldn't "blindly follow" someone or something.
It does have a serious origin, however, as described in the Mental Floss article ["The 35th Anniversary of the Jonestown Massacre"](http://mentalfloss.com/article/13015/jonestown-massacre-terrifying-origin-drinking-kool-aid):
>
> . . . Jim Jones ordered Temple members to create a fruity mix containing a cocktail of chemicals including cyanide, diazepam (aka Valium—an anti-anxiety medication), promethazine (aka Phenergan—a sedative), chloral hydrate (a sedative/hypnotic sometimes called "knockout drops"), and most interestingly . . . **Flavor Aid**—a grape-flavored beverage similar to Kool-Aid.
>
>
>
While it might be common to associate *well-swilled* with beer, there could also be this Kool-Aid association, and the author is making reference to a well-drunk Kool-Aid of lies. |
174,388 | The [ODO](https://en.oxforddictionaries.com/definition/us/bald-faced) has an example sentence:
>
> I lost track trying to count the sheer bald-faced, brazen, **well-swilled** out-and-out lies being palmed off as fact or being suggested for our consumption as reasonable readings of the text.
>
>
>
It appears *well-swilled* should be synonymous with other adjectives in the lineup, but I can't find this word in dictionaries. Another [example](https://www.christianitytoday.com/ct/1994/december12/4te003.html) I found:
>
> Our sixteenth-century forebears used adjectives such as "shameless, fat, **well-swilled**, stinking, papistical … ," as historian Timothy George wrote on our May 16 editorial page.
>
>
>
*Swill* means *to drink, wash, rinse*. But what does "well-swilled" mean here? Urban Dictionary has an entry that suggests it means "intoxicated". This definition does not appear to sufficiently fit the above sentences. | 2018/07/28 | [
"https://ell.stackexchange.com/questions/174388",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/61125/"
] | If someone swills their drink, they're sloshing it around (often in their own mouth). As gargling is to the throat, swilling is to the mouth. If a lie is "well-swilled", it's been rolled around in the liar's mouth thoroughly before being spat out on you. | I would suggest that "well-swilled" implies "well prepared", "well practiced" or "well used", as in the lies have been used before and roll off the tongue easily. |
2,957,027 | Is is possible to display html document in J2ME mobile app? Is it possible using for example WebKit? I was looking for some webkit's j2me implementation or some tutorial or advice in google but found nothing. | 2010/06/02 | [
"https://Stackoverflow.com/questions/2957027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/339865/"
] | [BOLT Webkit based Browser](http://boltbrowser.com/dnld.html)
I came across this and haven't had the time to try it myself, but the reviews seem to be reasonable.
It's closed source, but they provide later versions than the current one for download for free.
Hope this can shed some light | Try this Framework: [Fire](http://sourceforge.net/projects/fire-j2me/)
LWUIT now supports html and CSS in [LWUIT 1.4](http://lwuit.blogspot.com/2010/08/lwuit-14-finally-released.html). |
101,019 | I've asked a lot of questions about this lately, and I think I'll just quit beating around the bush...
Let's say I've got a storage system that is being used for a variety of applications with different I/O patterns. I've been collecting performance statistics on the system:
* transfers/sec
* sec/transfer
* bytes/transfer
* bytes/sec
* %idle
and I've calculated 95th percentile, average, and median for each. I also know what my average read / write ratio is.
I understand how to calculate potential average IOPS and throughput for a new system given disk, array, and average workload parameters.
I'm struggling to put this all together. Currently observed statistics are limited by the existing system which is struggling to keep up. Thus I might know that I require X IOPS, but this value may be low because of the current disk bottleneck, etc. (I know it's overworked because I'm seeing constant high disk usage and many multi-second periods of very high transfer times)
To be frank, I'm not doing anything hardcore and I can pretty much just buy some faster disks and configure my arrays better and it'll work out. But I'd like to understand how I might take a more formal approach to justifying an expense and to not over-buy. | 2010/01/08 | [
"https://serverfault.com/questions/101019",
"https://serverfault.com",
"https://serverfault.com/users/2189/"
] | I don't think you have enough data... what you also need to know is the utilisation of the other parts of the system that might become the bottleneck if the IO speeds up, so you can estimate how far you have to go on the IO before it becomes CPU, bus or network limited.
Some definition of 'fast enough' would help too. But it sounds like you want the long periods of waiting for IO to go away. Depending on exactly what you're doing, you may just not have enough memory to cache it properly. | I agree with Andrew McGregor's answer in principle, but the reality is that you probably don't have the luxury to perform any kind of benchmarking to see where the next bottlenecks are after you unclog the disk bottleneck. In a perfect world you'd either (a) have access to a faster disk subsystem in a "demo" capacity to play around with running your current workload on it, or (b) you could export trace data captured from the current environment and run it thru a "magic" mathematical model of your application software to plot the next bottlenecks.
It's not likely you can get demo hardware, and no mathematical models exist for "playing back" a live trace of your Exchange workloads (I read your other questions). Discovering what your next bottleneck will be in any objective way is going to be very difficult, at best. With that in mind, I'd do what you know will improve performance and plan for higher performance disk. There will be another bottleneck beyond that, but unless you can find a financially feasible or realistically possible way to predict it, you're really just guessing.
(I'm marking this "community wiki" because I'd really rather have left it as a comment on Andrew's answer but, obviously, one can't post a comment this long...) |
101,019 | I've asked a lot of questions about this lately, and I think I'll just quit beating around the bush...
Let's say I've got a storage system that is being used for a variety of applications with different I/O patterns. I've been collecting performance statistics on the system:
* transfers/sec
* sec/transfer
* bytes/transfer
* bytes/sec
* %idle
and I've calculated 95th percentile, average, and median for each. I also know what my average read / write ratio is.
I understand how to calculate potential average IOPS and throughput for a new system given disk, array, and average workload parameters.
I'm struggling to put this all together. Currently observed statistics are limited by the existing system which is struggling to keep up. Thus I might know that I require X IOPS, but this value may be low because of the current disk bottleneck, etc. (I know it's overworked because I'm seeing constant high disk usage and many multi-second periods of very high transfer times)
To be frank, I'm not doing anything hardcore and I can pretty much just buy some faster disks and configure my arrays better and it'll work out. But I'd like to understand how I might take a more formal approach to justifying an expense and to not over-buy. | 2010/01/08 | [
"https://serverfault.com/questions/101019",
"https://serverfault.com",
"https://serverfault.com/users/2189/"
] | I don't think you have enough data... what you also need to know is the utilisation of the other parts of the system that might become the bottleneck if the IO speeds up, so you can estimate how far you have to go on the IO before it becomes CPU, bus or network limited.
Some definition of 'fast enough' would help too. But it sounds like you want the long periods of waiting for IO to go away. Depending on exactly what you're doing, you may just not have enough memory to cache it properly. | Be sure to do what I call theoretical drive math to get your max IOPS and reduce by 20% or more based on real world usage. This varies by manufacturer, disk speed, etc. Then compare to your new drive configuration and make sure your peak loads fit well under this limit. |
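The "theoretical drive math" mentioned above can be sketched as a short calculation; the per-disk IOPS figure, the RAID 5 write penalty of 4, and the 20% derating are illustrative assumptions, not vendor numbers:

```python
# Per-disk IOPS, the RAID 5 write penalty of 4, and the 20% derating below
# are illustrative assumptions, not vendor figures.
def array_iops(n_disks, disk_iops, read_frac, raid_write_penalty, derate=0.20):
    """Usable random IOPS for an array, derated for real-world overhead."""
    raw = n_disks * disk_iops * (1 - derate)
    # Each logical write costs extra back-end operations depending on RAID
    # level (e.g. 4 for RAID 5: read data, read parity, write data, write parity).
    write_frac = 1 - read_frac
    return raw / (read_frac + write_frac * raid_write_penalty)

# 8 x 15k disks at ~175 IOPS each, a 70/30 read/write mix, RAID 5:
print(round(array_iops(8, 175, 0.70, 4)))  # 589
```

Compare this figure against your observed peak transfers/sec and leave headroom, since the observed numbers are already capped by the current bottleneck.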
101,019 | I've asked a lot of questions about this lately, and I think I'll just quit beating around the bush...
Let's say I've got a storage system that is being used for a variety of applications with different I/O patterns. I've been collecting performance statistics on the system:
* transfers/sec
* sec/transfer
* bytes/transfer
* bytes/sec
* %idle
and I've calculated 95th percentile, average, and median for each. I also know what my average read / write ratio is.
I understand how to calculate potential average IOPS and throughput for a new system given disk, array, and average workload parameters.
I'm struggling to put this all together. Currently observed statistics are limited by the existing system which is struggling to keep up. Thus I might know that I require X IOPS, but this value may be low because of the current disk bottleneck, etc. (I know it's overworked because I'm seeing constant high disk usage and many multi-second periods of very high transfer times)
To be frank, I'm not doing anything hardcore and I can pretty much just buy some faster disks and configure my arrays better and it'll work out. But I'd like to understand how I might take a more formal approach to justifying an expense and to not over-buy. | 2010/01/08 | [
"https://serverfault.com/questions/101019",
"https://serverfault.com",
"https://serverfault.com/users/2189/"
] | I agree with Andrew McGregor's answer in principle, but the reality is that you probably don't have the luxury to perform any kind of benchmarking to see where the next bottlenecks are after you unclog the disk bottleneck. In a perfect world you'd either (a) have access to a faster disk subsystem in a "demo" capacity to play around with running your current workload on it, or (b) you could export trace data captured from the current environment and run it thru a "magic" mathematical model of your application software to plot the next bottlenecks.
It's not likely you can get demo hardware, and no mathematical models exist for "playing back" a live trace of your Exchange workloads (I read your other questions). Discovering what your next bottleneck will be in any objective way is going to be very difficult, at best. With that in mind, I'd do what you know will improve performance and plan for higher performance disk. There will be another bottleneck beyond that, but unless you can find a financially feasible or realistically possible way to predict it, you're really just guessing.
(I'm marking this "community wiki" because I'd really rather have left it as a comment on Andrew's answer but, obviously, one can't post a comment this long...) | Be sure to do what I call theoretical drive math to get your max IOPS and reduce by 20% or more based on real world usage. This varies by manufacturer, disk speed, etc. Then compare to your new drive configuration and make sure your peak loads fit well under this limit. |
10,369,479 | I have read stemming harms precision but improves recall in text classification. How does that happen? When you stem you increase the number of matches between the query and the sample documents right? | 2012/04/29 | [
"https://Stackoverflow.com/questions/10369479",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/601357/"
] | It's always the same: if you raise recall, you're doing a generalisation, and because of that you're losing precision. Stemming merges words together.
>
> On the one hand, words which ought to be merged together (such as "adhere" and "adhesion") may remain distinct after stemming; on the other, words which are really distinct may be wrongly conflated (e.g., "experiment" and "experience"). These are known as understemming errors and overstemming errors respectively.
>
>
>
Overstemming lowers precision and understemming lowers recall. So, since no stemming at all means no over- but max understemming errors, you have a low recall there and a high precision.
Btw, precision means how many of the documents you found are ones you were looking for; recall means how many of all the correct documents you actually retrieved. | From the wikipedia entry on Query\_expansion:
>
> By stemming a user-entered term, more documents are matched, as the alternate word forms for a user entered term are matched as well, increasing the total recall. This comes at the expense of reducing the precision. By expanding a search query to search for the synonyms of a user entered term, the recall is also increased at the expense of precision. This is due to the nature of the equation of how precision is calculated, in that a larger recall implicitly causes a decrease in precision, given that factors of recall are part of the denominator. It is also inferred that a larger recall negatively impacts overall search result quality, given that many users do not want more results to comb through, regardless of the precision.
>
>
> |
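The tradeoff both answers describe can be illustrated with a toy example; the crude recursive suffix-stripper below is an assumed stand-in for a real stemmer such as Porter's, and the document set is invented for demonstration:

```python
# A toy illustration of the stemming tradeoff; the crude suffix-stripper
# below is an assumed stand-in for a real stemmer such as Porter's.
def crude_stem(word):
    for suffix in ("ence", "ment", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return crude_stem(word[: -len(suffix)])
    return word

docs = {"experiments", "experience", "experiment", "testing"}
relevant = {"experiments", "experiment"}     # what the searcher actually wants

# Query "experiment" without stemming: exact matches only.
retrieved_plain = {d for d in docs if d == "experiment"}
# With stemming: overstemming wrongly conflates "experience" with "experiment".
query_stem = crude_stem("experiment")
retrieved_stemmed = {d for d in docs if crude_stem(d) == query_stem}

def precision(retrieved, rel):
    return len(retrieved & rel) / len(retrieved)

def recall(retrieved, rel):
    return len(retrieved & rel) / len(rel)

print(precision(retrieved_plain, relevant), recall(retrieved_plain, relevant))      # 1.0 0.5
print(precision(retrieved_stemmed, relevant), recall(retrieved_stemmed, relevant))  # ~0.67 1.0
```

Stemming lifts recall from 0.5 to 1.0 but drops precision from 1.0 to about 0.67, which is exactly the generalisation effect described above.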
13,324,762 | I have deleted my main repository on my server accidentally. This is a personal server so it does not affect anyone else. I have a local check out (via svn co) of the files in tact on my desktop. I am wondering if it is possible to reconstruct the Svn server repository from a local check out ?
I am not a pro Svn user so I really do not know the details of svn much, but I know that unlike Git, Svn`s server repo is not same as the a local check out. So it is not as easy as copying it back, at least that is what I assume myself.
thanks | 2012/11/10 | [
"https://Stackoverflow.com/questions/13324762",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1814930/"
] | The best you can do is set up a new repo, delete the .svn directories in your working copy, then import the files into the new repo. You will then need to go to any existing working copies and switch them to the new repo. You'll probably need to do a switch --relocate. | First: having no backup of an SVN repo is a pain and should not happen.
But the same happened to me. I started to add thousands of files again, file by file, because I didn't want to check in temporary or other non-source files.
Nearly all history is lost. But **the last check-in state is not lost**!
I managed to get the last check-in state, so the new repo was similar to the old one and I also got the last diff!
This is my recipe with Tortoise:
1. backup your local checkout
2. clean up your local checkout.
3. in the cleanup dialog:
	1. uncheck: Clean up working copy status
	2. check: Delete unversioned
	3. check: Delete ignored
	4. check: Revert all
4. create new repo
5. checkout new repo into new empty folder
6. copy the old cleaned copy into new folder
7. add simply all files
8. commit it
9. overwrite your new checkout folder with the backup from 1.
Now you've got all files as before, with the last diff and only the files versioned, like before. |
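The cleanup step of the recipe above (stripping `.svn` metadata before re-importing) can be sketched in Python; the directory layout is a throwaway stand-in, and the commented `svnadmin`/`svn` commands with their paths are illustrative assumptions that must be run outside Python:

```python
import os
import shutil
import tempfile

# Build a throwaway stand-in for the old working copy (paths are illustrative).
wc = tempfile.mkdtemp()
os.makedirs(os.path.join(wc, "src", ".svn"))
with open(os.path.join(wc, "src", "main.c"), "w") as f:
    f.write("int main(){return 0;}\n")

# Strip every .svn metadata directory, bottom-up, so only plain files remain.
for root, dirs, _files in os.walk(wc, topdown=False):
    for d in dirs:
        if d == ".svn":
            shutil.rmtree(os.path.join(root, d))

# Then, outside Python, re-import into a fresh repository, e.g.:
#   svnadmin create /var/svn/newrepo
#   svn import <working-copy> file:///var/svn/newrepo/trunk -m "re-import"
#   svn switch --relocate <old-url> <new-url>    (in any other checkouts)

print(any(".svn" in ds for _r, ds, _f in os.walk(wc)))  # False: metadata gone
```

The source files survive untouched; only the repository metadata is removed, which is why only the last checked-in state (and not the history) can be recovered this way.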
94,209 | I was thinking about how we differ from crocodiles and sharks in terms of teeth. Now of course we don't have carnivore teeth but we also don't grow new teeth unless you are talking about a young child losing his/her baby teeth. But it seems like an omnivore diet, especially one that includes bones would be better suited to growth of new teeth than a carnivore diet.
I mean, there are all kinds of things that would wear down an omnivore's teeth. Here are just some of them:
* Hard fruits
* Nuts and Seeds
* Bones
And these 3 would wear an omnivore's teeth down the most, assuming no crispy processed foods are included.
And wouldn't wearing down of the teeth be a major factor into evolving the ability to grow new teeth?
So, since my Kepler Bb Humanoids are omnivores and they eat bones, especially during these circumstances:
* Pregnancy
* Breastfeeding
* Growth spurts
And they don't get more frail with old age, why wouldn't they evolve to grow new teeth? | 2017/10/07 | [
"https://worldbuilding.stackexchange.com/questions/94209",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/2238/"
] | **Growing new teeth endlessly is the norm**: everything with teeth can do it except mammals.
Mammals kinda traded out that ability for more complex teeth. More complex teeth are a big advantage for insectivores (insects need to be cut apart to get the most out of them), and all living mammals are descendants of the small insectivorous mammals around at the time of the dinosaurs. Being small and short-lived, the limited number of teeth was not much of a problem, so the mutation that swapped/transformed the genes for making more teeth into genes for creating more complex teeth was an advantage. Now we, as their descendants, are stuck with that evolutionary baggage. \*
I would not expect that weird fluke of evolution to occur again in an unrelated evolutionary history. **So your humanoids should grow new teeth continuously (polyphyodonty); it does not need an explanation, while having a limited number would be unlikely and would need one.**
Now if you are continuously growing new teeth it is a little harder to get them to fit tightly, since they are constantly dropping out and leaving gaps. But dinosaurs found a way around that: just grow lots of little tiny [interlocking](http://blogs.plos.org/paleocomm/2016/09/14/all-the-better-to-chew-you-with-my-dear/) teeth where you need a chewing surface, and normal-shaped teeth everywhere else. It is called a dental battery.
[](https://i.stack.imgur.com/EIy4o.jpg)
The few mammals that have more than one set of replacement teeth (elephant, kangaroo, and manatee) still only grow a limited number, and eventually stop making more. They are not truly polyphyodont; they do run out of teeth. They basically don't produce all their teeth in one or two rounds but make them two or four at a time till they run out. | I had an extra set of adult teeth (four front uppers) and later found out I also had two extra wisdom teeth (upper one on each side). It's called hyperdontia. |
94,209 | I was thinking about how we differ from crocodiles and sharks in terms of teeth. Now of course we don't have carnivore teeth but we also don't grow new teeth unless you are talking about a young child losing his/her baby teeth. But it seems like an omnivore diet, especially one that includes bones would be better suited to growth of new teeth than a carnivore diet.
I mean, there are all kinds of things that would wear down an omnivore's teeth. Here are just some of them:
* Hard fruits
* Nuts and Seeds
* Bones
And these 3 would wear an omnivore's teeth down the most, assuming no crispy processed foods are included.
And wouldn't wearing down of the teeth be a major factor into evolving the ability to grow new teeth?
So, since my Kepler Bb Humanoids are omnivores and they eat bones, especially during these circumstances:
* Pregnancy
* Breastfeeding
* Growth spurts
And they don't get more frail with old age, why wouldn't they evolve to grow new teeth? | 2017/10/07 | [
"https://worldbuilding.stackexchange.com/questions/94209",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/2238/"
] | **Growing new teeth endlessly is the norm**: everything with teeth can do it except mammals.
Mammals kinda traded out that ability for more complex teeth. More complex teeth are a big advantage for insectivores (insects need to be cut apart to get the most out of them), and all living mammals are descendants of the small insectivorous mammals around at the time of the dinosaurs. Being small and short-lived, the limited number of teeth was not much of a problem, so the mutation that swapped/transformed the genes for making more teeth into genes for creating more complex teeth was an advantage. Now we, as their descendants, are stuck with that evolutionary baggage. \*
I would not expect that weird fluke of evolution to occur again in an unrelated evolutionary history. **So your humanoids should grow new teeth continuously (polyphyodonty); it does not need an explanation, while having a limited number would be unlikely and would need one.**
Now if you are continuously growing new teeth it is a little harder to get them to fit tightly, since they are constantly dropping out and leaving gaps. But dinosaurs found a way around that: just grow lots of little tiny [interlocking](http://blogs.plos.org/paleocomm/2016/09/14/all-the-better-to-chew-you-with-my-dear/) teeth where you need a chewing surface, and normal-shaped teeth everywhere else. It is called a dental battery.
[](https://i.stack.imgur.com/EIy4o.jpg)
The few mammals that have more than one set of replacement teeth (elephant, kangaroo, and manatee) still only grow a limited number, and eventually stop making more. They are not truly polyphyodont; they do run out of teeth. They basically don't produce all their teeth in one or two rounds but make them two or four at a time till they run out. | If you're able to bend your humanoids' anatomy, you may consider giving them open-rooted teeth. Essentially they are teeth that don't stop growing, and must constantly be worn down. This is usually an adaptation seen in herbivores eating grass, leaves, and hay.
I think it's usually just front teeth with this trait, but it would probably help increase longevity of the rest of the teeth by grinding tough foods before the back molars get it. It would be a good reason for your humanoids to chew on bones, since failure to wear down teeth leads to painful dental problems. |
94,209 | I was thinking about how we differ from crocodiles and sharks in terms of teeth. Now of course we don't have carnivore teeth but we also don't grow new teeth unless you are talking about a young child losing his/her baby teeth. But it seems like an omnivore diet, especially one that includes bones would be better suited to growth of new teeth than a carnivore diet.
I mean, there are all kinds of things that would wear down an omnivore's teeth. Here are just some of them:
* Hard fruits
* Nuts and Seeds
* Bones
And these 3 would wear an omnivore's teeth down the most, assuming no crispy processed foods are included.
And wouldn't wearing down of the teeth be a major factor into evolving the ability to grow new teeth?
So, since my Kepler Bb Humanoids are omnivores and they eat bones, especially during these circumstances:
* Pregnancy
* Breastfeeding
* Growth spurts
And they don't get more frail with old age, why wouldn't they evolve to grow new teeth? | 2017/10/07 | [
"https://worldbuilding.stackexchange.com/questions/94209",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/2238/"
] | If you're able to bend your humanoids' anatomy, you may consider giving them open-rooted teeth. Essentially they are teeth that don't stop growing, and must constantly be worn down. This is usually an adaptation seen in herbivores eating grass, leaves, and hay.
I think it's usually just front teeth with this trait, but it would probably help increase the longevity of the rest of the teeth by grinding tough foods before the back molars get it. It would be a good reason for your humanoids to chew on bones, since failure to wear down teeth leads to painful dental problems. | I had an extra set of adult teeth (four front uppers) and later found out I also had two extra wisdom teeth (upper one on each side). It's called hyperdontia. |
2,057 | Electromagnetic Exposure wise, does Amateur radio (Particularly 1.8 to 440 megahertz) induce a health risk due to RF radiation?
Other than the Maximum Permissible Exposure (MPE) limits, which will cause RF burns etc., are there any studies that show long term usage of radio being bad for you? | 2014/08/14 | [
"https://ham.stackexchange.com/questions/2057",
"https://ham.stackexchange.com",
"https://ham.stackexchange.com/users/174/"
] | Sure, there are [plenty](http://www.dirtyelectricity.org/health-issues.shtml). Unfortunately, they all seem to be [selling](http://lessemf.com/) [something](http://rads.stackoverflow.com/amzn/click/145023822X).
[The](http://en.wikipedia.org/wiki/Electromagnetic_radiation_and_health#Radio_frequency_fields) [scientific](https://yourlogicalfallacyis.com/burden-of-proof) [consensus](http://www.cancer.org/cancer/cancercauses/radiationexposureandcancer/radiofrequency-radiation) [is](http://hps.org/hpspublications/articles/powerlines.html) [quite](http://www.epa.gov/radtown/power-lines.html) [clear:](http://www.niehs.nih.gov/health/topics/agents/emf/) [no](http://www.who.int/mediacentre/factsheets/fs193/en/) [known](http://en.wikipedia.org/wiki/Mobile_phone_radiation_and_health) [risk](http://transition.fcc.gov/oet/rfsafety/rf-faqs.html#Q5), beyond the obvious risk of being cooked which MPE limits are set to avoid. | There are some indications that RF has ill effects on humans, at least in certain configurations or environments. Here are a few references taken from the top search results from Google Scholar. It looks like there's a lot of debate on this issue, and support for both sides.
[aje.oxfordjournals.org/content/128/5/1175.short](http://aje.oxfordjournals.org/content/128/5/1175.short)
[onlinelibrary.wiley.com/doi/10.1002/bem.10162/full](http://onlinelibrary.wiley.com/doi/10.1002/bem.10162/full)
<http://link.springer.com/article/10.1007/BF00051295>
[aje.oxfordjournals.org/content/127/1/50.short](http://aje.oxfordjournals.org/content/127/1/50.short)
My personal *opinion* is that cell phones are safe, though I don't put mine in my hip pocket or otherwise in close proximity to the soft, permeable tissue of my testicles, nor do I sleep with a phone near my head. I also hold the *opinion* that amplifiers and things like that should be a meter or so away from the body, even the legs, where, though there are no major organs, blood constituents are produced in the bone. I also recommend using external antennas for even handhelds, if they're used more than once in a while. There was an old article on this topic written by a ham / medical doctor, I think in the ARRL magazine, but I don't remember the name of it.
Remember that RF decreases as the inverse square of the distance of propagation. Pushing the source a little away from you makes a big difference in the amount of energy that permeates your tissue. |
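The inverse-square point can be made concrete with a short sketch using the free-space power density formula; the 5 W transmit power and the dipole-like gain of 1.64 are illustrative assumptions, not safety figures:

```python
import math

# Free-space power density S = P * G / (4 * pi * r^2). The 5 W transmit
# power and a dipole-ish linear gain of 1.64 are illustrative assumptions.
def power_density(p_watts, gain_linear, r_meters):
    return p_watts * gain_linear / (4 * math.pi * r_meters ** 2)

near = power_density(5, 1.64, 0.05)  # antenna ~5 cm from your head
far = power_density(5, 1.64, 1.0)    # the same radio held 1 m away
print(round(near / far))             # 400: 20x the distance, 1/400th the exposure
```

This is only the free-space approximation; near-field effects dominate at distances much smaller than a wavelength, so treat it as a rough intuition, not an exposure calculation.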
207,434 | We have an outdated build of SQL Server 2008 R2(SP1) and therefore we are in need of patching(SP3).
Does this have any impact on SharePoint?
What measures should I take, besides backing everything up and testing before doing in production? | 2017/02/08 | [
"https://sharepoint.stackexchange.com/questions/207434",
"https://sharepoint.stackexchange.com",
"https://sharepoint.stackexchange.com/users/63517/"
] | SharePoint 2010 is supported on SQL Server 2008 R2 SP3, so no, there shouldn't be any issue installing the service pack onto your SQL Server.
I'd test (if at all possible) on a test system first, simply to ensure that any other solution installed within the SharePoint farm also continues to function as expected once the service pack is installed.
Obviously you should also take a set of backups of all of the databases on the SQL Server before you proceed (and test the backups!) in case you have any issues and need to be able to roll back. A SQL alias (as mentioned by Benny) is your friend here as you can run up a new server quickly, change the SQL alias(es) to point to the new server and you're good to go.
During the installation of the service pack there will be downtime to the farm as the database services will be unavailable for a period of time during the patching process. If at all possible, shut down the SharePoint server before starting the work. | I would install a new SQL Server at the level that you require, such as SQL Server 2008 **R2**. Then I would back up from the source SQL and restore to the destination SQL, with the same permissions and settings as the source SQL.
Then I would run [`cliconfg`](https://sqlandme.com/2011/05/05/create-sql-server-alias-cliconfg-exe/) on my SharePoint Server(s) and create an alias from the source SQL to the destination SQL and restart services and IIS.
That way you can test the SQL Server without losing the option to quickly switch back if something bad happens (by removing the SQL Alias). |
207,434 | We have an outdated build of SQL Server 2008 R2(SP1) and therefore we are in need of patching(SP3).
Does this have any impact on SharePoint?
What measures should I take, besides backing everything up and testing before doing in production? | 2017/02/08 | [
"https://sharepoint.stackexchange.com/questions/207434",
"https://sharepoint.stackexchange.com",
"https://sharepoint.stackexchange.com/users/63517/"
] | SharePoint 2010 is supported on SQL Server 2008 R2 SP3, so no, there shouldn't be any issue installing the service pack onto your SQL Server.
I'd test (if at all possible) on a test system first, simply to ensure that any other solution installed within the SharePoint farm also continues to function as expected once the service pack is installed.
Obviously you should also take a set of backups of all of the databases on the SQL Server before you proceed (and test the backups!) in case you have any issues and need to be able to roll back. A SQL alias (as mentioned by Benny) is your friend here as you can run up a new server quickly, change the SQL alias(es) to point to the new server and you're good to go.
During the installation of the service pack there will be downtime to the farm as the database services will be unavailable for a period of time during the patching process. If at all possible, shut down the SharePoint server before starting the work. | Upgrading/patching SQL Server is not as big an issue as patching the SharePoint server, but you have to plan it properly.
* First, you have to apply the patches in a lower environment and test the behavior of both SQL and SharePoint. Once satisfied, move to production.
**For Production**
* You have to schedule a downtime as it will interrupt the sharepoint. off hours always great.
* Backup the SQL Server (all the databases).
* Shut down the SharePoint servers. I do it this way, but if you don't like it, then stop the timer & admin services, IIS, and the Search service.
* Apply the patches to SQL server.
* Verify the patch installation.
* Bring the SharePoint server back.
* Test the farm; also check the event log & ULS logs for unusual activity.
* Run the Full backup of the SQL again. |
15,155,329 | I have searched the internet for a while now. And the worst thing is I have seen a testing rig before, but can't find it anymore.
Does anyone know a testing rig for mobile devices? Or should I build something myself. It is intended for testing an application on different devices (eg. iPad, Galaxy tab 10, iPhone, etc.)
Or when I had my eyes shut please pass me a link to the answer.
Thanks in advance.
Best regards | 2013/03/01 | [
"https://Stackoverflow.com/questions/15155329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1941180/"
] | Okay I have found my issue. I hope that posting it here might help someone else find the issue that I have been searching for so long.
In the code I have posted above the localDir points to a directory on my local machine where the projects are built to and then fetched with the MEF. The problem for me here was that there was a .dll to a different project that was still referencing an old version of CSLA and there `Save()`, was still being overridden although it is not allowed annymore.
So in short it was a .dll mismatch in my MEF directory, so be sure to check for something like that.
Hope this helps someone! | This happened to me as well, I was running my tests and was getting the same error message.
The problem was that I had an updated nuget package in one of the projects and in the Test project that nuget package was outdated and therefor generating this issue.
Updating the packages in all the projects fixed the issue. |
77,983 | I was going through annual HR training, and one thing jumped out at me. In the "diversity" section, an advice was given to the effect of "if you care about diversity, volunteer to participate in efforts... such as mentor someone".
Now, I have mentored people before, and see it as a wonderful opportunity to both help someone, and learn myself, and strengthen the company.
**But in this context, it seems to be Catch-22**!
* If I approach someone from a diverse background offering to mentor them, that singles them out and seems to *imply that they need mentoring*, which is kind of the opposite of the message that ought to be delivered and promotes less equal treatment environment.
* But if I don't, it's well known that people who would most benefit from mentoring tend to be shyer, and **are less likely to approach someone more senior and ask to be mentored**.
How can this catch-22 be resolved in a productive and appropriate way? | 2016/10/18 | [
"https://workplace.stackexchange.com/questions/77983",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/13655/"
] | Target someone who is struggling. That is the only way to avoid coming across as condescending and paternalistic. You have correctly identified the catch 22.
I have several disabilities. From my perspective, I would consider it highly offensive if someone were to decide to mentor me simply because I have these problems. However, if I were struggling and the difficulties were related to my situation, I would welcome the help.
THAT is how you make the distinction.
If the person needs mentoring due to their situation, then it's a green light, if not, then you can get into a good deal of trouble. | I find it odd that some of the answers/comments and somewhat the implication of this question is that mentoring is done to help people who are deficient in their skills. Mentoring is generally, in my experience, done to help people you think are capable of being promoted to higher level of responsibility and not the people you think are deficient at their current level. When someone is deficient, that is job training.
Mentoring is generally best done when it arises naturally. Mentor people who work for you or that you have reason to be in professional contact with anyway and don't exclude minorities or women from that mentoring. But don't exclude all white men either.
Don't go up to someone and say I want to mentor you. Pretty much anyone would find that offputting. Just start to provide advice as it is appropriate to the situation. Be someone who readily answers questions and is approachable. Then let people approach you for advice. In casual discussions start to talk about workplace issues of concern at a higher level than the person is to start to give them the senior perspective. I have mentored lots of people through the years, I have never once had to say to them, "Hey, let me mentor you." |
4,484 | I'm involved in a few StackExchange sites and moderate one not on the StackExchange platform.
I find the moderation on this particular site to be over-eager.
I know each SE community has its own norms and culture but of the four questions I've asked on here, three have been closed. One or two had reasonable cases for being closed, *perhaps*, but ...
... my most recent question:
[Are British English conventions in decline? [on hold]](https://english.stackexchange.com/questions/147645/are-british-english-conventions-in-decline)
... has been put on hold as being opinion-based, despite the question having (always) had this on the second line:
>
> In particular, I'm looking for the results of systematic studies on the topic or of strong evidence for a decline/lack of decline of British English usage (for example, the decline of a few key conventions over the past 5, 10 or 20 years).
>
>
>
This in contrast to the reason for closing:
>
> Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise.
>
>
>
(*EDIT*: To clarify, the question was closed by means of closed votes, not by moderators. When I talk about "overeager moderation", I refer not only to the actions of moderators, but to the use of any powers of moderation.)
Maybe my questions have not been suitable for the site or something, but with the current moderation:
1. I'm less inclined to ask further questions because they'll probably be closed.
2. I'm not precisely sure why my questions aren't suitable, meaning I cannot improve for next time.
As a result, I'm discouraged from using the site and I *assume* I'm not alone in this.
I understand very well the reason why certain questions need to be closed, and I understand the various entailments of the [broken windows theory](http://en.wikipedia.org/wiki/Broken_windows_theory), but in my view, the moderation of this site is overeager.
Specifically: could someone justify why my question above was put on hold as "primarily opinion-based".
More generally (and subjectively): is the current "level" of moderation healthy for the site?
EDIT: This is closely related to [this meta-question](https://english.meta.stackexchange.com/questions/4461/who-is-voting-to-close-all-these-questions). | 2014/01/27 | [
"https://english.meta.stackexchange.com/questions/4484",
"https://english.meta.stackexchange.com",
"https://english.meta.stackexchange.com/users/39922/"
] | I love this stack. I'm grateful for the answers people provide here, and I enjoy the culture. Personally, the claim that the moderation is 'overeager' is too vague for me to comment on it. But, I do wonder whether users would enjoy the stack more if the moderators closed duplicate questions less frequently than they currently do. I think one could make the argument that either, most answers on this stack involve repeating someone else's answer, or most answers on this stack are purely a matter of opinion. Most questions asked here are not answered with original research, so most answers here involve duplicating information. What does it matter if that information is on some old page of this stack or in an English usage book somewhere else? Taken to an extreme, one could argue we ought to close this stack and post a list of links to English usage texts. | I was one of the original closers. I can provide a small explanation of why *I* voted to close.
* "Decline" was a problem word because there really isn't any way definitively measure this aside from measuring the population of countries that speak BrE -- and I highly doubt that was the intended question.
* The primarly "opinion" I was reacting to is what actually constitutes "British English"; not whether it rose or declined. You offered two meager examples at the end of your post but aside from simply describing specific *changes* over the period this seems like a hopeless Question.
* "Looking for studies" is also subjective in nature and, at its core, a request for resources. I personally find this kind of completely off-topic as Too Broad *and* Primarily Opinion Based.
* The "looking for studies or strong evidence" clarification is actually unnecessary because a question of this nature would need to have that as part of the answer or it wouldn't be a good answer. The need to add such a clarification is a huge red flag and further evidence that this question is inappropriate for EL&U.
* Your third paragraph extended the question even *further* which easily pushed this into Too Broad territory.
So I voted to close and the debate was between "Too Broad" or "Primarily Opinion Based" and I don't remember which I chose. The majority chose "Primarily Opinion Based", as we can see in the post history.
Furthermore, even after all the edits and suggestions the question has received no useful answers and is unlikely to ever get answered on this site in a satisfactory manner. I feel justified in the closure and would vote to close again. I do not think the question is reasonably answerable in its current state.
---
That being said, I completely sympathize that you did not receive any meaningful feedback on why the question was voted. You did all you could to protect yourself from getting closed as Primarily Opinion Based which is why I think the best course of action is now close it as Too Broad.
My opinion on how to avoid that is to pick a specific phrase or habit and ask about its usage. This avoids all of the problems I, personally, had with the post. |
32,202 | Assume a university want to elect the student council. This was done via paper ballot and there is a need to reach more students for voting, so electronic voting is on the table.
While researching existing solutions I came across Helios: [helios voting](https://vote.heliosvoting.org/) and [agora voting](http://agoravoting.org/)
Both seem to implement somewhat peer-reviewed schemes to do a online vote, but my impression is that the devil might hide in the details.
Has anyone here experience using these schemes?
We plan to implement such an online voting scheme and would authenticate the voters via the university LDAP. Which cryptographic obstacles or security problems could we face? | 2016/01/27 | [
"https://crypto.stackexchange.com/questions/32202",
"https://crypto.stackexchange.com",
"https://crypto.stackexchange.com/users/30956/"
] | In short, my answer is no; keep paper ballot, they have essential virtues unmatched by electronic substitutes; in particular, giving voters confidence that the result of the vote is not grossly manipulated.
Full disclosure: I co-founded a (French) association towards citizen oversight of voting means, essentially opposing electronic voting for political elections. I take some pride in our contribution (however little) to an observed pause in the deployment of electronic voting machines in France since 2007.
Electronic voting has two different meanings:
1. voting in polling stations using dedicated electronic voting machines;
2. voting remotely by electronic means such as mobile phone, web browser.
The question seems to be about 2, and I'll focus on that. A major problem is that it does not propose any means to discourage vote selling or voting under duress. In most proposed systems, handing over one's credentials to vote (perhaps: during the closing of the polling time-frame) will do the trick.
Contrast with the traditional voting system used for political elections (in France, and many countries with a long history of voting, thus a long history of voting fraud, and fixes to the voting code to fight that). Voting takes places according to procedures carefully designed to discourage vote selling and threats to vote in some prescribed way, by making it hard (and prohibited) that anyone but the voter knows how the vote was cast (that goal is not reached for mail voting, instead this is purposely kept marginal by requiring formal prior declaration to police that normal voting can't be performed for some reason, like traveling). Towards that goal, complex measures have evolved over time:
* generally, making it illegal and difficult to act in any way such as showing how one voted, or transferring one's right to vote to another person (voting requires an official ID with photo, in all but very small towns).
* requiring the insertion of paper ballot materializing the vote in an opaque envelope in a voting booth (*"isoloir"* = isolating device) where no one but the voter is allowed;
* making paper ballots with a distinctive sign invalid (and not counted); this is a countermeasure to the practice of marking the paper ballots handed for vote under duress/pay with a distinctive sign, so that the bribed/intimidated voter can fear that if s/he does not use that paper, it will get noticed at vote counting;
* often, making the paper ballots available by multiple means (at the polling station, and by mail), so that voters can be seen not grabbing a ballot from the stack X at the polling station, but still actually vote for X by using a paper ballot for X that they brought secretly (ticking a choice with a pen would be superior to paper ballots in this regard, but may bring back the previous problem; also, separate availability of paper ballots at least helps the reading impaired).
---
Another issue with both forms of electronic voting is that it makes fraud by a very small group of persons conceivable, when the traditional voting system makes that impossible for large-scale voting with multiple independent polling stations (this argument thus does not apply to a small local student council with a single polling station):
* Votes are cast in urns which remain observable from vote start till end of counting (French urns are transparent, in reaction to fraud).
* Votes are counted locally at each polling stations; anyone is allowed to obverse counting and check what the tally at the polling station where s/he voted.
* The tally at each polling station is made publicly available in print, so that any observer can check that the tally at the polling station where s/he voted was not modified, and check the addition of counts thus the election result (there is a hierarchy of two levels of publication, but the principle remains valid).
Most importantly, a rational person/voter can be convinced that the traditional system does not allow centralized fraud; but electronic voting systems which manage to keep what one voted secret do not meet (or even have) that goal, to my knowledge. At best, *the organizers* of an electronic election can be convinced that there was no fraud; that's not the correct objective (and it is not even really met by any practical system that I have seen).
Again restricting to voting remotely by electronic means, some usually poorly mitigated risks include:
1. Browsers (e.g. on university computers) modified to vote as asked by the voter as far as the screen is concerned, when the vote is really cast differently on the network side; that's far from rocket science.
2. A server pretending to be the real voting server(s) to the voter's browser, performing Man-in-the-Middle attack at some point on the network; if the cryptographic defense is https with TLS as in normal web browsers, that's defeated with a copy of the private key of the true server (certificates emitted by certification authorities in breach of CPS, which abound, would also trick most voters, albeit with a risk of being caught by an observer comparing the certificate shown by the browser with the real thing obtained out-of-band). The MitM machine can go undetected to a real voting server (and even inquisitive client) scrutinizing IP address and routing info, if the MitM machine is appropriately inserted in the network near either the real voting server or the targeted browser, and competently programmed; a university network is ideal grounds for such attack.
3. In many systems, plain subversion of the machine(s) counting the votes. Having several counting machines run by multiple parties helps, but what should be the rule when they do not agree?
4. Denial of service; it's easy to prevent voting by attacking the voting server or network infrastructure, and conceivable (especially if observers are allowed) to create some ESD/EMP that zaps the server.
5. Loss of secrecy of individual votes, threatened by:
6. penetration of the central computer(s) running the election; that's a problem the industry hardly knows how to tackle when the computer operators are trusted; and in this situation, we'd like not to trust them!
7. compromise of the voter's device (this is somewhat mitigated by the diversity of devices)
8. brittleness of web security practices; e.g. consider an https connection used for a "please confirm your vote" page visually showing the ballot selected: if there's a jpg image shown and no special precaution is taken, it is likely that mere analysis of the length of TCP/IP packets reveals the choice made.
Note: there's a simple countermeasure, mitigating 1, 2, 5.2 and 5.3, that I have seldom seen proposed: the voter would key-in a few digits, received secretly, different according to the vote cast. That would largely remove the browser and network from the attack surface. This is not without its own security problems, but the real reason why some proponents of electronic voting dislike it is that it is low tech, and acknowledges the need to distrust high tech in matters of voting.
---
Addition: any voting system, electronic of not, must balance between two antagonist goals:
* Keeping individual votes secret, including to whoever runs the election; because otherwise, individual retaliation could ensue.
* Making the outcome convincingly representative of the intention of voters, to as many reasonable persons as feasible; because the elected needs legitimacy.
We can reach either goal by a sacrifice of the other (if all individual votes are made public along the name of the voter as the election goes, the outcome is verifiable; if we choose the election's winner by stone/paper/scissor, voting in one's mind is enough).
Electronic voting complicates both goals considerably, particularly when you consider that whoever runs the election is an adversary (in the sense of that in crypto) trying to breach vote secrecy and accurate vote counting. Simply put, I do not see an even mildly-satisfactory solution. | For starters: you should look up posts and papers related to "[LDAP Security](https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol#Further_reading)" using your favorite search engine.
I will say I like the solution that crypto has come up with in the form of not being able to spend the same BTC twice. Something involving that and user verification would be ideal for online voting although those two are very different technologies. Luckily math has the answer.
Now, whether that will be a sustainable solution without any security issues is probably not realistic but you bring up an important point and something that needs attention not just in your domain but in other domains as well. Maybe "quantum" DNA verification and multifactor authentication… |
32,202 | Assume a university want to elect the student council. This was done via paper ballot and there is a need to reach more students for voting, so electronic voting is on the table.
While researching existing solutions I came across Helios: [helios voting](https://vote.heliosvoting.org/) and [agora voting](http://agoravoting.org/)
Both seem to implement somewhat peer-reviewed schemes to do a online vote, but my impression is that the devil might hide in the details.
Has anyone here experience using these schemes?
We plan to implement such an online voting scheme and would authenticate the voters via the university LDAP. Which cryptographic obstacles or security problems could we face? | 2016/01/27 | [
"https://crypto.stackexchange.com/questions/32202",
"https://crypto.stackexchange.com",
"https://crypto.stackexchange.com/users/30956/"
] | You shouldn't use advanced crypto nor specialist algorithms here else you will be seriously over engineering and actually increasing the risks you face not reducing them. Seasoned security engineers would strongly recommend the "[KISS principle](https://en.m.wikipedia.org/wiki/KISS_principle)". It's better to do something simple with a low error rate than attempt something complex and maybe have a bug whereby there is a huge flaw in your approach that you miss and an attacker finds. With a real world public election the eligible voters are a class of citizens; registering them to vote and giving them secure credentials is hard. You know the exact population of eligible voters and they are already registered on and using the college network where they have set their own passwords already. So you have none of the complications of a real election and you have only a basic authentication problem; is the person casting the vote authenticated as a college student. This problem should already been solved by your college.
The most important outcome is confidence in the election results with maximum participation. That's not the same success criteria as a "perfect vote count". Let's say you use some complex techniques and some noisy looser of the election declares that the complex algorithm was badly implemented and led to a flawed outcome when infact it all worked perfectly: you will have a world of pain and a likely outcome is that you are fired and the election is rerun as a paper election. That may or may not vindicate you as the publicity around the claims that the original election was unfair may boost the vote of the candidate who claimed the original vote was bad. That's a form of social engineering attack. She can claim "victory" no matter what the actual facts of the matter and increase her vote share. So it is better that you do something "low tech" that people have confidence was properly implemented the do anything complex and novel.
Here is a list of ways to solve your problem:
1. If your college has a way that sudents can login and submit course work just have them login and submit the name of the candidate they are voting for; and you are done. Counting the votes is a bit more complex as they may be bad submissions but you can publish all the submissions anonymised and anyone can validate them or recount them and the outcome of the election won't be in dispute.
2. If your college has a system were users can login and answer multiple choice questions then use that.
3. If your college systems use a single sign-on API such as SAML you can use that to secure your own voting web page; students authenticate to the college system with their college password and your web page is secured via a standard authentication API.
4. If all students have a college email address create a unique random ID per student and email them a link to vote that uses their unique ID.p in the link. Ensure that page at the link only allows one vote to be cast per student.
Sure some students will have obtained the network password of other students (such is life) and can steal a few votes. Yet if a noisy looser says "the vote was stolen" when you used the colleges authentication system they are challenging the integrity of the college authentication systems and ability to stop cheating etc so the college will robustly investigate and defend the outcome. The cheater risks being expelled from college for subverting the college authentication systems so that's a deterrent; but they may think it's a safe game to attack a custom system you built. If caught they can claim that they are a hero white hat hacker improving the world by testing your system and that they always intended to tell the world that they rigged the vote after the results were declared giving you longer to discover their playful attack. | For starters: you should look up posts and papers related to "[LDAP Security](https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol#Further_reading)" using your favorite search engine.
I will say I like the solution that crypto has come up with in the form of not being able to spend the same BTC twice. Something involving that and user verification would be ideal for online voting although those two are very different technologies. Luckily math has the answer.
Now, whether that will be a sustainable solution without any security issues is probably not realistic but you bring up an important point and something that needs attention not just in your domain but in other domains as well. Maybe "quantum" DNA verification and multifactor authentication… |
32,202 | Assume a university want to elect the student council. This was done via paper ballot and there is a need to reach more students for voting, so electronic voting is on the table.
While researching existing solutions I came across Helios: [helios voting](https://vote.heliosvoting.org/) and [agora voting](http://agoravoting.org/)
Both seem to implement somewhat peer-reviewed schemes to do a online vote, but my impression is that the devil might hide in the details.
Has anyone here experience using these schemes?
We plan to implement such an online voting scheme and would authenticate the voters via the university LDAP. Which cryptographic obstacles or security problems could we face? | 2016/01/27 | [
"https://crypto.stackexchange.com/questions/32202",
"https://crypto.stackexchange.com",
"https://crypto.stackexchange.com/users/30956/"
] | In short, my answer is no; keep paper ballot, they have essential virtues unmatched by electronic substitutes; in particular, giving voters confidence that the result of the vote is not grossly manipulated.
Full disclosure: I co-founded a (French) association towards citizen oversight of voting means, essentially opposing electronic voting for political elections. I take some pride in our contribution (however little) to an observed pause in the deployment of electronic voting machines in France since 2007.
Electronic voting has two different meanings:
1. voting in polling stations using dedicated electronic voting machines;
2. voting remotely by electronic means such as mobile phone, web browser.
The question seems to be about 2, and I'll focus on that. A major problem is that it does not propose any means to discourage vote selling or voting under duress. In most proposed systems, handing over one's credentials to vote (perhaps: during the closing of the polling time-frame) will do the trick.
Contrast with the traditional voting system used for political elections (in France, and many countries with a long history of voting, thus a long history of voting fraud, and fixes to the voting code to fight that). Voting takes places according to procedures carefully designed to discourage vote selling and threats to vote in some prescribed way, by making it hard (and prohibited) that anyone but the voter knows how the vote was cast (that goal is not reached for mail voting, instead this is purposely kept marginal by requiring formal prior declaration to police that normal voting can't be performed for some reason, like traveling). Towards that goal, complex measures have evolved over time:
* generally, making it illegal and difficult to act in any way such as showing how one voted, or transferring one's right to vote to another person (voting requires an official ID with photo, in all but very small towns).
* requiring the insertion of paper ballot materializing the vote in an opaque envelope in a voting booth (*"isoloir"* = isolating device) where no one but the voter is allowed;
* making paper ballots with a distinctive sign invalid (and not counted); this is a countermeasure to the practice of marking the paper ballots handed for vote under duress/pay with a distinctive sign, so that the bribed/intimidated voter can fear that if s/he does not use that paper, it will get noticed at vote counting;
* often, making the paper ballots available by multiple means (at the polling station, and by mail), so that voters can be seen not grabbing a ballot from the stack X at the polling station, but still actually vote for X by using a paper ballot for X that they brought secretly (ticking a choice with a pen would be superior to paper ballots in this regard, but may bring back the previous problem; also, separate availability of paper ballots at least helps the reading impaired).
---
Another issue with both forms of electronic voting is that it makes fraud by a very small group of persons conceivable, when the traditional voting system makes that impossible for large-scale voting with multiple independent polling stations (this argument thus does not apply to a small local student council with a single polling station):
* Votes are cast in urns which remain observable from vote start till end of counting (French urns are transparent, in reaction to fraud).
* Votes are counted locally at each polling stations; anyone is allowed to obverse counting and check what the tally at the polling station where s/he voted.
* The tally at each polling station is made publicly available in print, so that any observer can check that the tally at the polling station where s/he voted was not modified, and check the addition of counts thus the election result (there is a hierarchy of two levels of publication, but the principle remains valid).
Most importantly, a rational person/voter can be convinced that the traditional system does not allow centralized fraud; but electronic voting systems which manage to keep what one voted secret do not meet (or even have) that goal, to my knowledge. At best, *the organizers* of an electronic election can be convinced that there was no fraud; that's not the correct objective (and it is not even really met by any practical system that I have seen).
Again restricting to voting remotely by electronic means, some usually poorly mitigated risks include:
1. Browsers (e.g. on university computers) modified to vote as asked by the voter as far as the screen is concerned, when the vote is really cast differently on the network side; that's far from rocket science.
2. A server pretending to be the real voting server(s) to the voter's browser, performing Man-in-the-Middle attack at some point on the network; if the cryptographic defense is https with TLS as in normal web browsers, that's defeated with a copy of the private key of the true server (certificates emitted by certification authorities in breach of CPS, which abound, would also trick most voters, albeit with a risk of being caught by an observer comparing the certificate shown by the browser with the real thing obtained out-of-band). The MitM machine can go undetected to a real voting server (and even inquisitive client) scrutinizing IP address and routing info, if the MitM machine is appropriately inserted in the network near either the real voting server or the targeted browser, and competently programmed; a university network is ideal grounds for such attack.
3. In many systems, plain subversion of the machine(s) counting the votes. Having several counting machines run by multiple parties helps, but what should be the rule when they do not agree?
4. Denial of service; it's easy to prevent voting by attacking the voting server or network infrastructure, and conceivable (especially if observers are allowed) to create some ESD/EMP that zaps the server.
5. Loss of secrecy of individual votes, threatened by:
	1. penetration of the central computer(s) running the election; that's a problem the industry hardly knows how to tackle when the computer operators are trusted; and in this situation, we'd like not to trust them!
	2. compromise of the voter's device (this is somewhat mitigated by the diversity of devices)
	3. brittleness of web security practices; e.g. consider an https connection used for a "please confirm your vote" page visually showing the ballot selected: if there's a jpg image shown and no special precaution is taken, it is likely that mere analysis of the length of TCP/IP packets reveals the choice made.
Note: there's a simple countermeasure, mitigating 1, 2, 5.2 and 5.3, that I have seldom seen proposed: the voter would key-in a few digits, received secretly, different according to the vote cast. That would largely remove the browser and network from the attack surface. This is not without its own security problems, but the real reason why some proponents of electronic voting dislike it is that it is low tech, and acknowledges the need to distrust high tech in matters of voting.
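As a concrete illustration of that low-tech countermeasure, here is a minimal sketch (Python, with hypothetical names; not any deployed system): the authority pre-generates, for each voter, one secret code per candidate and delivers the codebook out of band, so the browser and network only ever carry an opaque code.

```python
import secrets

def issue_codes(voters, candidates, digits=6):
    """Pre-generate one secret code per (voter, candidate) pair.

    The per-voter codebook is delivered out of band (e.g. on paper);
    the lookup table stays with the tallying authority.
    """
    codebook, lookup = {}, {}
    for voter in voters:
        codebook[voter] = {}
        for candidate in candidates:
            code = "".join(secrets.choice("0123456789") for _ in range(digits))
            while code in lookup:  # regenerate on the (rare) collision
                code = "".join(secrets.choice("0123456789") for _ in range(digits))
            codebook[voter][candidate] = code
            lookup[code] = candidate
    return codebook, lookup

def cast(code, lookup, tally):
    """The untrusted browser/network carries only the code itself."""
    candidate = lookup.get(code)
    if candidate is None:
        return False  # mistyped or forged code
    tally[candidate] = tally.get(candidate, 0) + 1
    return True
```

A compromised browser or a man-in-the-middle sees only an opaque code and can neither tell nor change which candidate it maps to. A real scheme would still need to handle double voting and secure code distribution, which is exactly the residual problem this note acknowledges.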
---
Addition: any voting system, electronic or not, must balance between two antagonistic goals:
* Keeping individual votes secret, including to whoever runs the election; because otherwise, individual retaliation could ensue.
* Making the outcome convincingly representative of the intention of voters, to as many reasonable persons as feasible; because the elected needs legitimacy.
We can reach either goal by sacrificing the other (if all individual votes are made public along with the name of the voter as the election goes, the outcome is verifiable; if we choose the election's winner by rock/paper/scissors, voting in one's mind is enough).
Electronic voting complicates both goals considerably, particularly when you consider that whoever runs the election is an adversary (in the cryptographic sense) trying to breach vote secrecy and accurate vote counting. Simply put, I do not see an even mildly satisfactory solution. | You shouldn't use advanced crypto or specialist algorithms here, else you will be seriously over-engineering and actually increasing the risks you face, not reducing them. Seasoned security engineers would strongly recommend the "[KISS principle](https://en.m.wikipedia.org/wiki/KISS_principle)". It's better to do something simple with a low error rate than attempt something complex and maybe have a bug whereby there is a huge flaw in your approach that you miss and an attacker finds. With a real-world public election the eligible voters are a class of citizens; registering them to vote and giving them secure credentials is hard. You know the exact population of eligible voters, and they are already registered on and using the college network, where they have set their own passwords. So you have none of the complications of a real election, only a basic authentication problem: is the person casting the vote authenticated as a college student? This problem should already be solved by your college.
The most important outcome is confidence in the election results with maximum participation. That's not the same success criterion as a "perfect vote count". Let's say you use some complex techniques and some noisy loser of the election declares that the complex algorithm was badly implemented and led to a flawed outcome when in fact it all worked perfectly: you will have a world of pain, and a likely outcome is that you are fired and the election is rerun as a paper election. That may or may not vindicate you, as the publicity around the claims that the original election was unfair may boost the vote of the candidate who claimed the original vote was bad. That's a form of social-engineering attack. She can claim "victory" no matter the actual facts of the matter and increase her vote share. So it is better to do something "low tech" that people have confidence was properly implemented than anything complex and novel.
Here is a list of ways to solve your problem:
1. If your college has a way for students to log in and submit coursework, just have them log in and submit the name of the candidate they are voting for, and you are done. Counting the votes is a bit more complex, as there may be bad submissions, but you can publish all the submissions anonymised, and anyone can validate or recount them, so the outcome of the election won't be in dispute.
2. If your college has a system where users can log in and answer multiple-choice questions, then use that.
3. If your college systems use a single sign-on API such as SAML you can use that to secure your own voting web page; students authenticate to the college system with their college password and your web page is secured via a standard authentication API.
4. If all students have a college email address, create a unique random ID per student and email them a link to vote that includes their unique ID in the link. Ensure the page at the link only allows one vote to be cast per student.
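A minimal sketch of option 4 above (Python, hypothetical names; the mailing step is omitted): issue one unguessable single-use token per registered student and accept at most one vote per token.

```python
import secrets

class TokenBallotBox:
    def __init__(self, students):
        # One unguessable token per student; the token-to-student mapping
        # stays server-side, and only the token goes into the emailed link.
        self.tokens = {secrets.token_urlsafe(16): s for s in students}
        self.used = set()
        self.tally = {}

    def vote(self, token, candidate):
        if token not in self.tokens or token in self.used:
            return False  # unknown token, or this student already voted
        self.used.add(token)
        self.tally[candidate] = self.tally.get(candidate, 0) + 1
        return True
```

Keeping the recorded votes separate from the token-to-student mapping preserves a degree of ballot secrecy for publication and recounting, though whoever runs the server must still be trusted.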
Sure, some students will have obtained the network password of other students (such is life) and can steal a few votes. Yet if a noisy loser says "the vote was stolen" when you used the college's authentication system, they are challenging the integrity of the college authentication systems and their ability to stop cheating, so the college will robustly investigate and defend the outcome. The cheater risks being expelled from college for subverting the college authentication systems, so that's a deterrent; but they may think it's a safe game to attack a custom system you built. If caught, they can claim that they are a hero white-hat hacker improving the world by testing your system and that they always intended to tell the world that they rigged the vote after the results were declared, giving you longer to discover their playful attack. |
3,988,748 | Has to be free.
Has to support all versions of Excel files.
Has to have C# .NET API.
I need to do all of the specified actions (reading/creating/updating).
Has anyone used any library of this kind successfully?
Update:
I read a lot of bad things about Ole DB, and Interop is not an option since this is a web application running on a server. | 2010/10/21 | [
"https://Stackoverflow.com/questions/3988748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/483153/"
] | Try to use OleDB [Reading Excel files from C#](https://stackoverflow.com/questions/15828/reading-excel-files-from-c) | From a [previous answer](https://stackoverflow.com/questions/3416722/better-option-for-hosting-ms-office-documents-in-custom-app/3458960#3458960) on a different question:
* You might consider using the Excel object model and COM interop to read the data from the Excel file into your application. Granted, this includes a dependency on Excel being installed, but it is a possibility. [This article](http://dotnetperls.com/excel-interop) has some great code for getting started with reading Excel files in this way.
* A better way might be to use a library that doesn't have a dependency on Excel being installed on the local system. [This answer](https://stackoverflow.com/questions/3449541/exporting-excel-to-c/3449613#3449613) suggests using the [Excel Data Reader library](http://exceldatareader.codeplex.com/), available on CodePlex.
Like I pointed out in [my other answer](https://stackoverflow.com/questions/3416722/better-option-for-hosting-ms-office-documents-in-custom-app/3458960#3458960), there are paid third-party libraries that will likely do *exactly* what you are looking for. I understand you want something that is free, but in my experience with free Excel libraries, you tend to need to do a good bit of extra work to get it to play the right way. |
3,988,748 | Has to be free.
Has to support all versions of Excel files.
Has to have C# .NET API.
I need to do all of the specified actions (reading/creating/updating).
Has anyone used any library of this kind successfully?
Update:
I read a lot of bad things about Ole DB, and Interop is not an option since this is a web application running on a server. | 2010/10/21 | [
"https://Stackoverflow.com/questions/3988748",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/483153/"
] | Try to use OleDB [Reading Excel files from C#](https://stackoverflow.com/questions/15828/reading-excel-files-from-c) | I'm a big fan of [Aspose Cells](http://www.aspose.com/categories/.net-components/aspose.cells-for-.net/default.aspx). It does all you want but it isn't free. I don't know any other products that can fit all your needs (All Excel Versions, C# Api, Read/Write, etc) |
5,636 | I want to organise a podcast with some friends, with a gradual release schedule. We're about 6 people who meet up personally, but we can't get productive enough in our meetups, so we decided to use online solutions for sharing our speech's sketches and similar things, all regarding the podcast.
Until today, the only thing we have seriously tried is Google's collaborative tools, but they seem to be really distracting, and way too complex for the kind of organisation we need, which is rather simple.
What software should I use to interact with all the collaborators, including occasional guests (keeping in mind that not all of them are tech-savvy)? | 2012/05/06 | [
"https://pm.stackexchange.com/questions/5636",
"https://pm.stackexchange.com",
"https://pm.stackexchange.com/users/3902/"
] | During the last couple of years (enterprise and startup), Google Docs + Skype was the best combination I've used and seen. | A shared **Dropbox** (or similar service) might be an alternative. With the desktop client, it syncs between the service and each individual's computer, so everybody has all info readily available. |
14,399 | So, basically, I've developed an application that gets the data from the database, processes it and then displays it on a webpage, and now I need to integrate it with Joomla. I've read about MVC components and it looks like there are two MVCs available: new and legacy. Plus there is FOF. There are almost no manuals on how to work with the new MVC, and the last stable FOF version is like 6 months old, so I am kinda afraid that it won't be compatible with the new Joomla version. So how should I proceed? Should I follow the official manual on the Joomla docs and use the legacy MVC, try the new MVC, or use FOF? | 2015/11/22 | [
"https://joomla.stackexchange.com/questions/14399",
"https://joomla.stackexchange.com",
"https://joomla.stackexchange.com/users/7314/"
] | Legacy MVC will be supported until EOL of Joomla 3 (at least). I would just go ahead and use that. You will find many tutorials for legacy MVC and you will probably get the most help for that here or in the forums. Also (almost) all core components are developed with legacy MVC so you have many examples at hand.
The new MVC is already usable, but you will have to implement a lot of functionality by yourself. I think it's easier to start with legacy, but if you want an example of how to use the new MVC, you can look at the code of com\_install, because it has already been refactored.
FOF can probably help you develop your extensions more easily. I haven't had a chance to use it yet, but IMHO I would learn the legacy MVC first. | You should never use legacy for new development, as it is due to be phased out but was left in to allow developers to adapt from legacy. FOF is Akeeba's Framework on Framework.
See:
<https://docs.joomla.org/J3.x:Developing_an_MVC_Component/Developing_a_Basic_Component> |
284,975 | I tried to find a thunderbird plugin which can do this:
Display a button on the top of the message (while reading, not composing) "move message to folder '...'".
The plugin checks the "from" header of the mail and remembers (or searches) where the last mails of this sender were moved to. In about 95% of my cases the plugin could guess the correct folder. The algorithm is quite easy: move the mail to the folder, which was used the last time for this from-address.
This could help to keep the inbox clean.
I know a lot of people who have a huge inbox folder because they are too lazy (or busy) to move mails into subfolders.
I don't like the manual way: right click, move message to folder "...". And the archive feature of thunderbird does not help me. I want to keep mails separated.
Any idea how to keep the mail inbox clean by moving mails to subfolders in a way that is easy and fast?
**Update**
Filters don't solve my problem: I want to see all mails in my Inbox. After reading and handling a mail, I want to move it to another folder. | 2013/04/24 | [
"https://askubuntu.com/questions/284975",
"https://askubuntu.com",
"https://askubuntu.com/users/42348/"
] | Try my addon [quickFilters](https://addons.mozilla.org/en-US/thunderbird/addon/quickfilters/). You switch it to learning mode, then move the email manually once. It will bring up a wizard so you can teach a filter to do this automatically from then on. It also integrates well with my other addon, QuickFolders. I also highly recommend using the "[Copy Sent to Current](https://addons.mozilla.org/en-US/thunderbird/addon/copy-sent-to-current)" Addon to keep filtered conversations together and the [N] shortcut for jumping to the next unread mail.
As regards the Update, maybe you can define your filters not to run when "Checking mail" but only run manually; you still have the problem that they will be applied to all mails in the folder, so one would have to add a mechanism to avoid filtering unread emails. | You can try an addon called [archive-this](https://addons.mozilla.org/en-US/thunderbird/addon/archive-this/)
It works like this:
1. create a message filter (e.g. is from contact abc)
2. define the action (move to specific folder)
3. create preset keyboard shortcuts for triggering the action
I hope it helps :) |
284,975 | I tried to find a thunderbird plugin which can do this:
Display a button on the top of the message (while reading, not composing) "move message to folder '...'".
The plugin checks the "from" header of the mail and remembers (or searches) where the last mails of this sender were moved to. In about 95% of my cases the plugin could guess the correct folder. The algorithm is quite easy: move the mail to the folder, which was used the last time for this from-address.
This could help to keep the inbox clean.
I know a lot of people who have a huge inbox folder because they are too lazy (or busy) to move mails into subfolders.
I don't like the manual way: right click, move message to folder "...". And the archive feature of thunderbird does not help me. I want to keep mails separated.
Any idea how to keep the mail inbox clean by moving mails to subfolders in a way that is easy and fast?
**Update**
Filters don't solve my problem: I want to see all mails in my Inbox. After reading and handling a mail, I want to move it to another folder. | 2013/04/24 | [
"https://askubuntu.com/questions/284975",
"https://askubuntu.com",
"https://askubuntu.com/users/42348/"
] | You can try an addon called [archive-this](https://addons.mozilla.org/en-US/thunderbird/addon/archive-this/)
It works like this:
1. create a message filter (e.g. is from contact abc)
2. define the action (move to specific folder)
3. create preset keyboard shortcuts for triggering the action
I hope it helps :) | I get you. I use Gmail and have filters in Gmail that tag certain mails with specific tags. So the usual thing I do is just click archive and it's out of my inbox.
Currently, using Thunderbird works fine. Those emails that are already tagged are present in another folder, and clicking archive removes them from my inbox.
If that's not your case, you can just click & drag the mail to a folder (I have "Unread" folders turned on). You can create a custom folder and archive all emails to it. |
284,975 | I tried to find a thunderbird plugin which can do this:
Display a button on the top of the message (while reading, not composing) "move message to folder '...'".
The plugin checks the "from" header of the mail and remembers (or searches) where the last mails of this sender were moved to. In about 95% of my cases the plugin could guess the correct folder. The algorithm is quite easy: move the mail to the folder, which was used the last time for this from-address.
This could help to keep the inbox clean.
I know a lot of people who have a huge inbox folder because they are too lazy (or busy) to move mails into subfolders.
I don't like the manual way: right click, move message to folder "...". And the archive feature of thunderbird does not help me. I want to keep mails separated.
Any idea how to keep the mail inbox clean by moving mails to subfolders in a way that is easy and fast?
**Update**
Filters don't solve my problem: I want to see all mails in my Inbox. After reading and handling a mail, I want to move it to another folder. | 2013/04/24 | [
"https://askubuntu.com/questions/284975",
"https://askubuntu.com",
"https://askubuntu.com/users/42348/"
] | Try my addon [quickFilters](https://addons.mozilla.org/en-US/thunderbird/addon/quickfilters/). You switch it to learning mode, then move the email manually once. It will bring up a wizard so you can teach a filter to do this automatically from then on. It also integrates well with my other addon, QuickFolders. I also highly recommend using the "[Copy Sent to Current](https://addons.mozilla.org/en-US/thunderbird/addon/copy-sent-to-current)" Addon to keep filtered conversations together and the [N] shortcut for jumping to the next unread mail.
As regards the Update, maybe you can define your filters not to run when "Checking mail" but only run manually; you still have the problem that they will be applied to all mails in the folder, so one would have to add a mechanism to avoid filtering unread emails. | I get you. I use Gmail and have filters in Gmail that tag certain mails with specific tags. So the usual thing I do is just click archive and it's out of my inbox.
Currently, using Thunderbird works fine. Those emails that are already tagged are present in another folder, and clicking archive removes them from my inbox.
If that's not your case, you can just click & drag the mail to a folder (I have "Unread" folders turned on). You can create a custom folder and archive all emails to it. |
30,475,944 | I'm trying neo4j with 20 million nodes. When I call a simple match statement like "MATCH (n:MYLABEL {Id:5}) RETURN n", it takes about 2 minutes from the web interface and also from the console. Is there something wrong about taking that much time? | 2015/05/27 | [
"https://Stackoverflow.com/questions/30475944",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4943564/"
] | Have you tried to create an index on this label and property? | The problem was caused by the configuration of my windows machine. I tried it with a mac and the performance becomes very good. I can match an indexed attribute within 0.1 second in a database of 10 millions nodes. Thank you all for suggestions. |
236,956 | [](https://i.stack.imgur.com/jjfUJ.gif)
Above is a transformer with its primary and secondary windings. I have briefly written down my understanding of a transformer before my question:
Neglecting losses, we can write the voltage and power unity equations as:
* Vs = (Ns/Np) \* Vp
* Vp \* Ip = Vs \* Is
It seems like; as long as the Vp and (Ns/Np) ratio are the same, whatever the load R is Vs will be the same. Only the current drawn will change.
And if the above argument is true the power dissipated in secondary part is:
Ps = (Vs^2)/R
And if R goes to zero or should I say the secondary winding is shorted, Ps goes to infinity which means this secondary winding would burn.
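A quick numeric check of the ideal-transformer relations above (illustrative values only: a 10:1 step-down from 240 V into a 12 Ω load):

```python
# Ideal-transformer relations, neglecting losses (values are illustrative).
Vp, Np, Ns, R = 240.0, 1000, 100, 12.0

Vs = (Ns / Np) * Vp   # secondary voltage: 24 V, fixed by Vp and the turns ratio
Is = Vs / R           # load current: 2 A
Ip = (Vs * Is) / Vp   # primary current: 0.2 A, from Vp*Ip = Vs*Is
Ps = Vs**2 / R        # power in the load: 48 W

# Halving R doubles Is and Ps while Vs stays the same; as R -> 0 both
# diverge, which is the short-circuit case described above.
```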
I have the following questions:
1-) Since there is power unity i.e. Pp = Ps; would that mean if the secondary winding is shorted, would the primary winding burn as well?(I'm asking because the interaction between the windings is electromagnetic which could be a different phenomenon)
2-) If the conclusion is that the primary winding would burn as well, is it enough to add a fuse only before the primary winding and not the secondary winding? | 2016/05/28 | [
"https://electronics.stackexchange.com/questions/236956",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/16307/"
] | Under real-world conditions no transformer is 100% efficient at converting power. If the secondary is shorted then the primary 'sees' more watts being dissipated than the secondary. In fact the primary always 'sees' more power dissipated regardless of the load.
Many fused transformers have fuses on the primary only, and they are usually the slow-blow type because of inrush currents when the transformer is turned on. Most 50 Hz to 60 Hz transformers are only 40% to 60% efficient at converting power, so for a given known maximum continuous load the transformer is likely over-rated by 50%.
Some transformers have short-circuit protection, usually those called 'wall-packs'. So called power transformers and industrial transformers have a fused primary.
Those with extremely high power/high voltage may have secondary fuses as well. I have seen pole-mounted transformers explode from a lightning strike maybe 50 yards from me, only to see a huge fuse explode about 100 yards away on another pole. For major power distribution, it pays to fuse both sides of a transformer. | For any transformer (or any other load connected to the voltage source), always place the fuse before the unit.
In step-down transformers, shorting the secondary stresses the primary winding more than the secondary. The reason is that, since both windings are in tight magnetic coupling, a step increment of current in the secondary produces a step increment of current in the primary, following the inverse of the Np/Ns ratio.
But since in the step-down transformer the primary is made from longer and thinner wire, its resistance is higher, so it dissipates more power and heats up more quickly. Thus, if your transformer is the step-down type, protecting the primary is sufficient to prevent it from catching fire.
If your transformer is the step-up type, it's more appropriate to have fuses on both sides of the transformer. Derating the fuse according to the winding current limits will protect the transformer. A fuse on the secondary is normally used to protect the load connected to the transformer, not the transformer itself. |
236,956 | [](https://i.stack.imgur.com/jjfUJ.gif)
Above is a transformer with its primary and secondary windings. I have briefly written down my understanding of a transformer before my question:
Neglecting losses, we can write the voltage and power unity equations as:
* Vs = (Ns/Np) \* Vp
* Vp \* Ip = Vs \* Is
It seems like; as long as the Vp and (Ns/Np) ratio are the same, whatever the load R is Vs will be the same. Only the current drawn will change.
And if the above argument is true the power dissipated in secondary part is:
Ps = (Vs^2)/R
And if R goes to zero or should I say the secondary winding is shorted, Ps goes to infinity which means this secondary winding would burn.
I have the following questions:
1-) Since there is power unity i.e. Pp = Ps; would that mean if the secondary winding is shorted, would the primary winding burn as well?(I'm asking because the interaction between the windings is electromagnetic which could be a different phenomenon)
2-) If the conclusion is that the primary winding would burn as well, is it enough to add a fuse only before the primary winding and not the secondary winding? | 2016/05/28 | [
"https://electronics.stackexchange.com/questions/236956",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/16307/"
] | For any transformer (or any other load connected to the voltage source), always place the fuse before the unit.
In step-down transformers, shorting the secondary stresses the primary winding more than the secondary. The reason is that, since both windings are in tight magnetic coupling, a step increment of current in the secondary produces a step increment of current in the primary, following the inverse of the Np/Ns ratio.
But since in the step-down transformer the primary is made from longer and thinner wire, its resistance is higher, so it dissipates more power and heats up more quickly. Thus, if your transformer is the step-down type, protecting the primary is sufficient to prevent it from catching fire.
If your transformer is the step-up type, it's more appropriate to have fuses on both sides of the transformer. Derating the fuse according to the winding current limits will protect the transformer. A fuse on the secondary is normally used to protect the load connected to the transformer, not the transformer itself. | The maximum amount of current a transformer can handle depends on several things: the resistance of the windings and the *reactance* of the windings (current through an inductor cannot change instantaneously, and eventually you reach a point where the current just can't ramp up fast enough to supply the load).
The current is also dependent on the quality of the coupling between the coils; see, there is a magnetic equivalent of resistance (called reluctance), which acts like a series inductor that limits the current (old-fashioned fluoro ballasts are just a series inductor). Old arc welders would control the peak output current by changing the coupling between the primary and secondary windings (they'd be moved away from each other; the greater separation increases the reluctance of the magnetic circuit). |
236,956 | [](https://i.stack.imgur.com/jjfUJ.gif)
Above is a transformer with its primary and secondary windings. I have briefly written down my understanding of a transformer before my question:
Neglecting losses, we can write the voltage and power unity equations as:
* Vs = (Ns/Np) \* Vp
* Vp \* Ip = Vs \* Is
It seems like; as long as the Vp and (Ns/Np) ratio are the same, whatever the load R is Vs will be the same. Only the current drawn will change.
And if the above argument is true the power dissipated in secondary part is:
Ps = (Vs^2)/R
And if R goes to zero or should I say the secondary winding is shorted, Ps goes to infinity which means this secondary winding would burn.
I have the following questions:
1-) Since there is power unity i.e. Pp = Ps; would that mean if the secondary winding is shorted, would the primary winding burn as well?(I'm asking because the interaction between the windings is electromagnetic which could be a different phenomenon)
2-) If the conclusion is that the primary winding would burn as well, is it enough to add a fuse only before the primary winding and not the secondary winding? | 2016/05/28 | [
"https://electronics.stackexchange.com/questions/236956",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/16307/"
] | For any transformer (or any other load connected to the voltage source), always place the fuse before the unit.
In step-down transformers, shorting the secondary stresses the primary winding more than the secondary. The reason is that, since both windings are in tight magnetic coupling, a step increment of current in the secondary produces a step increment of current in the primary, following the inverse of the Np/Ns ratio.
But since in the step-down transformer the primary is made from longer and thinner wire, its resistance is higher, so it dissipates more power and heats up more quickly. Thus, if your transformer is the step-down type, protecting the primary is sufficient to prevent it from catching fire.
If your transformer is the step-up type, it's more appropriate to have fuses on both sides of the transformer. Derating the fuse according to the winding current limits will protect the transformer. A fuse on the secondary is normally used to protect the load connected to the transformer, not the transformer itself. | With any ideal transformer, or one with low leakage inductance and resistance, you will need a fuse or your transformer will burn with a shorted secondary.
A transformer which is inherently protected against short circuits (very few are) has enough leakage inductance to limit the current to a safe level which it can withstand continuously. |
236,956 | [](https://i.stack.imgur.com/jjfUJ.gif)
Above is a transformer with its primary and secondary windings. I have briefly written down my understanding of a transformer before my question:
Neglecting losses, we can write the voltage and power unity equations as:
* Vs = (Ns/Np) \* Vp
* Vp \* Ip = Vs \* Is
It seems like; as long as the Vp and (Ns/Np) ratio are the same, whatever the load R is Vs will be the same. Only the current drawn will change.
And if the above argument is true the power dissipated in secondary part is:
Ps = (Vs^2)/R
And if R goes to zero or should I say the secondary winding is shorted, Ps goes to infinity which means this secondary winding would burn.
I have the following questions:
1-) Since there is power unity i.e. Pp = Ps; would that mean if the secondary winding is shorted, would the primary winding burn as well?(I'm asking because the interaction between the windings is electromagnetic which could be a different phenomenon)
2-) If the conclusion is that the primary winding would burn as well, is it enough to add a fuse only before the primary winding and not the secondary winding? | 2016/05/28 | [
"https://electronics.stackexchange.com/questions/236956",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/16307/"
] | Under real-world conditions no transformer is 100% efficient at converting power. If the secondary is shorted then the primary 'sees' more watts being dissipated than the secondary. In fact the primary always 'sees' more power dissipated regardless of the load.
Many fused transformers have fuses on the primary only, and they are usually the slow-blow type because of inrush currents when the transformer is turned on. Most 50 Hz to 60 Hz transformers are only 40% to 60% efficient at converting power, so for a given known maximum continuous load the transformer is likely over-rated by 50%.
Some transformers have short-circuit protection, usually those called 'wall-packs'. So called power transformers and industrial transformers have a fused primary.
Those with extremely high power/high voltage may have secondary fuses as well. I have seen pole-mounted transformers explode from a lightning strike maybe 50 yards from me, only to see a huge fuse explode about 100 yards away on another pole. For major power distribution, it pays to fuse both sides of a transformer. | The maximum amount of current a transformer can handle depends on several things: the resistance of the windings and the *reactance* of the windings (current through an inductor cannot change instantaneously, and eventually you reach a point where the current just can't ramp up fast enough to supply the load).
The current is also dependent on the quality of the coupling between the coils; see, there is a magnetic equivalent of resistance (called reluctance), which acts like a series inductor that limits the current (old-fashioned fluoro ballasts are just a series inductor). Old arc welders would control the peak output current by changing the coupling between the primary and secondary windings (they'd be moved away from each other; the greater separation increases the reluctance of the magnetic circuit). |
236,956 | [](https://i.stack.imgur.com/jjfUJ.gif)
Above is a transformer with its primary and secondary windings. I have briefly written down my understanding of a transformer before my question:
Neglecting losses, we can write the voltage and power unity equations as:
* Vs = (Ns/Np) \* Vp
* Vp \* Ip = Vs \* Is
It seems like; as long as the Vp and (Ns/Np) ratio are the same, whatever the load R is Vs will be the same. Only the current drawn will change.
And if the above argument is true the power dissipated in secondary part is:
Ps = (Vs^2)/R
And if R goes to zero or should I say the secondary winding is shorted, Ps goes to infinity which means this secondary winding would burn.
I have the following questions:
1-) Since there is power unity, i.e. Pp = Ps, does that mean that if the secondary winding is shorted, the primary winding would burn as well? (I'm asking because the interaction between the windings is electromagnetic, which could be a different phenomenon.)
2-) If the conclusion is that the primary winding would burn as well, is it enough to add a fuse only before the primary winding but not the secondary winding? | 2016/05/28 | [
"https://electronics.stackexchange.com/questions/236956",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/16307/"
] | Under real-world conditions no transformer is 100% efficient at converting power. If the secondary is shorted then the primary 'sees' more watts being dissipated than the secondary. In fact the primary always 'sees' more power dissipated regardless of the load.
Many fused transformers have fuses on the primary only, and they are usually the slow-blow type because of inrush currents when the transformer is turned on. Most 50 Hz to 60 Hz transformers are only 40% to 60% efficient at converting power, so for a given known maximum continuous load the transformer is likely over-rated by 50%.
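A back-of-envelope sketch of that sizing logic (the efficiency figure comes from the 40%–60% range above; the load, primary voltage, and inrush margin are illustrative assumptions only; real fuse selection should follow the transformer's datasheet):

```python
def primary_fuse_rating(load_w, v_primary, efficiency=0.5, inrush_margin=1.5):
    """Rough estimate of a slow-blow primary fuse rating in amps.

    The primary 'sees' more power than the secondary delivers, so the
    input power is the load divided by the conversion efficiency; the
    margin leaves headroom for turn-on inrush current.
    """
    input_w = load_w / efficiency      # power drawn at the primary
    i_primary = input_w / v_primary    # continuous primary current
    return inrush_margin * i_primary

# e.g. a 60 W continuous load on a 120 V primary at 50% efficiency:
print(primary_fuse_rating(60.0, 120.0))   # 1.5 (amps)
```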
Some transformers have short-circuit protection, usually those called 'wall-packs'. So-called power transformers and industrial transformers have a fused primary.
Those with extremely high power/high voltage may have secondary fuses as well. I have seen pole-mounted transformers explode from a lightning strike maybe 50 yards from me, only to see a huge fuse explode about 100 yards away on another pole. For major power distribution, it pays to fuse both sides of a transformer. | With any ideal transformer, or one with low leakage inductance and resistance, you will need a fuse or your transformer will burn with a shorted secondary.
If you have a transformer which is inherently protected against short circuit (very few are), it has enough leakage inductance to limit the current to a safe level which the transformer can withstand continuously. |
89,986 | The city I'm designing had a 100-year flood in the year 1907. The city has a population of 90,000 inhabitants and is situated on a confluence of two navigable rivers.
River "Q" is about 400m wide; it has significant wetlands (about 200 to 400m on each bank) which absorb the more frequent flooding events. River "V" (which flows into "Q") is only 200m wide without protection. The city is a provincial capital with significant commerce and industry.
The "flooding event" had the waters rise 10ft (3m) above the normal water level. This flooded 40% to 50% of the built-up area.
My on-line research only yielded that the term '100-year flood' is misleading. It actually means a 1% probability of such an event happening in any given year. Those odds increase to 63% over any 100-year period. I also read most of the report on the UK flood of London in 2001 or thereabouts. None of these sources answer my question.
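The 1% / 63% figures can be checked directly: the chance of at least one 100-year flood over a span of years is one minus the chance of none at all.

```python
p_annual = 0.01                                   # '100-year flood' = 1% chance per year
p_in_100_years = 1.0 - (1.0 - p_annual) ** 100    # chance of at least one in 100 years
print(f"{p_in_100_years:.1%}")                    # ~63.4%
```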
Further 'technical' information: the planet is Earth-like, with Earth gravity, physics and size. There is no magic. The city is at a latitude of 9°30'S, over 2000 km from the ocean, and about half-way down the course of the river. The source is glacial, from a 30,000 ft peak (roughly equatorial). The city is in an unexplained natural jungle clearing a few hundred km in either direction; farmers took advantage of this fact and grow their crops / livestock there. The terrain is relatively flat, with the 20 ft elevation line being between 600 and 1000 m from the bank.
**The question is: Do I protect the city against this event with infrastructure?**
[](https://i.stack.imgur.com/RYFLh.png) | 2017/08/25 | [
"https://worldbuilding.stackexchange.com/questions/89986",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/6500/"
] | Model it after a real flood, and the measures taken after that.
===============================================================
**The city of Dayton, Ohio, flooded in 1913.**
>
> The Dayton flood of March 1913 was caused by a series of severe winter rain storms that hit the Midwest in late March. Within three days, 8–11 inches (200–280 mm) of rain fell throughout the Great Miami River watershed on already saturated soil, resulting in more than 90 percent runoff. The river and its tributaries overflowed. The existing levees failed, and downtown Dayton was flooded up to 20 feet (6.1 m) deep. This flood is still the flood of record for the Great Miami River watershed. By comparison, the volume of water that passed through the river channel during this storm equals the monthly flow over Niagara Falls.[source](https://en.wikipedia.org/wiki/Great_Dayton_Flood)
>
>
>
*(There are markers on some buildings still standing that indicate where the flood waters reached. Seeing such markers on a second story of a building is... unsettling.)*
The source describes the flood timeline, the relief efforts, casualties, property damage, and of interest to you, the **flood control efforts** that began after this flood. The TL;DR version boils down to:
1. 10-day fundraiser (raised about $2 million in 1913 US dollars)
2. Hired engineers to conduct an analysis and design a plan
3. Borrowed from a plan used in Loire Valley in France
4. Began implementing a plan consisting of
1. Five earthen dams
2. modifications to river channel through Dayton
3. conduits to release limited amounts of water from dams
4. widened river channel with series of levees
5. flood storage areas behind dams (used as farmland)
6. Relocation of any businesses inside the river channel boundaries of a 1,000-year flood
Their design goal was to contain 140% of the 1913 flood waters.
This also required passing state laws to let local governments define conservancy districts for flood control, raise money for civil engineering through taxes, and use eminent domain to take land for dams, basins, and flood plains. | Of course the city will build dikes to protect against flooding, it is well within their economic and technological capabilities to protect against once-every-100-years floods. Water management on the scale of single cities has been done since ancient times.
You'd do well to build one dike at the winter level of your river. That keeps the waters navigable even when the glacial melt water is lowest. Then you build a much bigger secondary dike at the summer river levels, when the glacial melt is highest. If you build this dike decently it can withstand your flood event, or you can build a third dike specifically for your floods if they can be predicted and are regular.
The space between the 1st and 2nd dike is excellent land for cattle to graze. If you build the 3rd dike, the urban poor can live between the 2nd and 3rd.
But realistically 2 dikes should be enough to protect against even major floods. Building a third for such a rare event is unlikely. It's much easier to just make your second dike strong enough to withstand a flood. |
89,986 | The city I'm designing had a 100-year flood in the year 1907. The city has a population of 90,000 inhabitants and is situated on a confluence of two navigable rivers.
River "Q" is about 400m wide; it has significant wetlands (about 200 to 400m on each bank) which absorb the more frequent flooding events. River "V" (which flows into "Q") is only 200m wide without protection. The city is a provincial capital with significant commerce and industry.
The "flooding event" had the waters rise 10ft (3m) above the normal water level. This flooded 40% to 50% of the built-up area.
My on-line research only yielded that the term '100-year flood' is misleading. It actually means a 1% probability of such an event happening in any given year. Those odds increase to 63% over any 100-year period. I also read most of the report on the UK flood of London in 2001 or thereabouts. None of these sources answer my question.
Further 'technical' information: the planet is Earth-like, with Earth gravity, physics and size. There is no magic. The city is at a latitude of 9°30'S, over 2000 km from the ocean, and about half-way down the course of the river. The source is glacial, from a 30,000 ft peak (roughly equatorial). The city is in an unexplained natural jungle clearing a few hundred km in either direction; farmers took advantage of this fact and grow their crops / livestock there. The terrain is relatively flat, with the 20 ft elevation line being between 600 and 1000 m from the bank.
**The question is: Do I protect the city against this event with infrastructure?**
[](https://i.stack.imgur.com/RYFLh.png) | 2017/08/25 | [
"https://worldbuilding.stackexchange.com/questions/89986",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/6500/"
] | Model it after a real flood, and the measures taken after that.
===============================================================
**The city of Dayton, Ohio, flooded in 1913.**
>
> The Dayton flood of March 1913 was caused by a series of severe winter rain storms that hit the Midwest in late March. Within three days, 8–11 inches (200–280 mm) of rain fell throughout the Great Miami River watershed on already saturated soil, resulting in more than 90 percent runoff. The river and its tributaries overflowed. The existing levees failed, and downtown Dayton was flooded up to 20 feet (6.1 m) deep. This flood is still the flood of record for the Great Miami River watershed. By comparison, the volume of water that passed through the river channel during this storm equals the monthly flow over Niagara Falls.[source](https://en.wikipedia.org/wiki/Great_Dayton_Flood)
>
>
>
*(There are markers on some buildings still standing that indicate where the flood waters reached. Seeing such markers on a second story of a building is... unsettling.)*
The source describes the flood timeline, the relief efforts, casualties, property damage, and of interest to you, the **flood control efforts** that began after this flood. The TL;DR version boils down to:
1. 10-day fundraiser (raised about $2 million in 1913 US dollars)
2. Hired engineers to conduct an analysis and design a plan
3. Borrowed from a plan used in Loire Valley in France
4. Began implementing a plan consisting of
1. Five earthen dams
2. modifications to river channel through Dayton
3. conduits to release limited amounts of water from dams
4. widened river channel with series of levees
5. flood storage areas behind dams (used as farmland)
6. Relocation of any businesses inside the river channel boundaries of a 1,000-year flood
Their design goal was to contain 140% of the 1913 flood waters.
This also required passing state laws to let local governments define conservancy districts for flood control, raise money for civil engineering through taxes, and use eminent domain to take land for dams, basins, and flood plains. | Water is a tricky devil, and will usually get its way.
Dikes and levees can temporarily stop flooding in a region by building up the riverbanks.
HOWEVER. Excess water will still come, and it has to go somewhere. Usually, that involves the flood being shifted *upstream* of the existing dikes/levees, which kinda stinks for anyone living in that area who just got shafted by a downstream construction project.
They are usually then motivated to build dikes, which shifts the problems upstream, where the residents build dikes, which shifts... you see where this is going.
Perhaps this will be fine for a 1-in-68 year flood, but I would assume the river would marginally overflow its banks more often than the flood described.
Dikes and levees may still be the best option, but be aware of the long-term costs involved. |
89,986 | The city I'm designing had a 100-year flood in the year 1907. The city has a population of 90,000 inhabitants and is situated on a confluence of two navigable rivers.
River "Q" is about 400m wide; it has significant wetlands (about 200 to 400m on each bank) which absorb the more frequent flooding events. River "V" (which flows into "Q") is only 200m wide without protection. The city is a provincial capital with significant commerce and industry.
The "flooding event" had the waters rise 10ft (3m) above the normal water level. This flooded 40% to 50% of the built-up area.
My on-line research only yielded that the term '100-year flood' is misleading. It actually means a 1% probability of such an event happening in any given year. Those odds increase to 63% over any 100-year period. I also read most of the report on the UK flood of London in 2001 or thereabouts. None of these sources answer my question.
Further 'technical' information: the planet is Earth-like, with Earth gravity, physics and size. There is no magic. The city is at a latitude of 9°30'S, over 2000 km from the ocean, and about half-way down the course of the river. The source is glacial, from a 30,000 ft peak (roughly equatorial). The city is in an unexplained natural jungle clearing a few hundred km in either direction; farmers took advantage of this fact and grow their crops / livestock there. The terrain is relatively flat, with the 20 ft elevation line being between 600 and 1000 m from the bank.
**The question is: Do I protect the city against this event with infrastructure?**
[](https://i.stack.imgur.com/RYFLh.png) | 2017/08/25 | [
"https://worldbuilding.stackexchange.com/questions/89986",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/6500/"
] | Model it after a real flood, and the measures taken after that.
===============================================================
**The city of Dayton, Ohio, flooded in 1913.**
>
> The Dayton flood of March 1913 was caused by a series of severe winter rain storms that hit the Midwest in late March. Within three days, 8–11 inches (200–280 mm) of rain fell throughout the Great Miami River watershed on already saturated soil, resulting in more than 90 percent runoff. The river and its tributaries overflowed. The existing levees failed, and downtown Dayton was flooded up to 20 feet (6.1 m) deep. This flood is still the flood of record for the Great Miami River watershed. By comparison, the volume of water that passed through the river channel during this storm equals the monthly flow over Niagara Falls.[source](https://en.wikipedia.org/wiki/Great_Dayton_Flood)
>
>
>
*(There are markers on some buildings still standing that indicate where the flood waters reached. Seeing such markers on a second story of a building is... unsettling.)*
The source describes the flood timeline, the relief efforts, casualties, property damage, and of interest to you, the **flood control efforts** that began after this flood. The TL;DR version boils down to:
1. 10-day fundraiser (raised about $2 million in 1913 US dollars)
2. Hired engineers to conduct an analysis and design a plan
3. Borrowed from a plan used in Loire Valley in France
4. Began implementing a plan consisting of
1. Five earthen dams
2. modifications to river channel through Dayton
3. conduits to release limited amounts of water from dams
4. widened river channel with series of levees
5. flood storage areas behind dams (used as farmland)
6. Relocation of any businesses inside the river channel boundaries of a 1,000-year flood
Their design goal was to contain 140% of the 1913 flood waters.
This also required passing state laws to let local governments define conservancy districts for flood control, raise money for civil engineering through taxes, and use eminent domain to take land for dams, basins, and flood plains. | Until you mentioned the jungle it sounded a bit like my hometown. (Hint: don't build an Arch.) The founders here realized that right at the confluence of two big rivers was a bad place to build a city; it was just too marshy. They chose a limestone rise a bit south of the confluence. Other small towns along the Mississippi found they had to move uphill or face constant flooding.
Other answers covered levees. Your people will also need to handle water that falls within the city. Fill in standing ponds and low areas (they are a cholera risk anyway). If the water is higher outside the levees, you may need to develop a pump system, a bucket brigade, or the world's most amazing drainage system. |
3,859 | OS X doesn't allow me to cut files. Are there any applications to enable this? | 2010/11/09 | [
"https://apple.stackexchange.com/questions/3859",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/211/"
If you can pay a little, you can use [Moveaddict](http://www.apple.com/downloads/macosx/system_disk_utilities/moveaddict.html):

And also I found "[this solution](http://lifehacker.com/5622046/cut-and-paste-files-in-os-xs-finder-with-automator-services)" but I didn't test it myself. | [Path Finder](http://www.cocoatech.com/) supports cutting & pasting files, though I can't think of a case where I'd want to cut & paste that drag & drop wouldn't work. |
74,964 | Can anyone explain to me the difference between "walks," "walked," and "will walk"?
Example:
>
> Martha loves exercising so she **walks/walked/will walk** everywhere she goes.
>
>
> | 2015/12/05 | [
"https://ell.stackexchange.com/questions/74964",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/27295/"
] | Those are three different tenses.
>
> **walks** (third person singular present tense; regular action or habit)
>
> — *She **walks** every day.*
>
>
> **walked** (past tense; completed action at a specific time)
>
> — *She **walked** yesterday.*
>
>
> **will walk** (future tense; actions planned for the future)
>
> — *She **will walk** within two days.*
>
>
> | *walks* - present simple tense (3rd person singular)
*will walk* - used to refer to future time (often called the future tense)
*walked* - past simple tense (it's not grammatical in your sentence, since *Martha loves exercising* is in the present tense.)
For more information, I highly suggest you read a basic grammar site about verb tenses. You can start [here on this page](https://www.englishclub.com/grammar/verb-tenses.htm) at the English Club. |
30,882 | I am interested in **sources** (reference author, work, paragraph/fragment) describing the view ancient Greeks held about people of African descent.
By that I mean those whom they called *Αιθίοπες* (Aithíopes), not the Egyptians or other North-Africans.
Are there attestations of racism (strictly because of skin color) or admiration in ancient Greek works?
Regarding admiration we have one account from Herodotos (3.20) who
states the following:
>
> These Ethiopians, to whom Cambyses sent them, are said to be the
> tallest and most handsome of all men. Their way of choosing kings is
> different from that of all others, as (it is said) are all their laws;
> they consider that man worthy to be their king whom they judge to be
> tallest and to have strength proportional to his stature.
>
>
> <http://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1999.01.0126:book=3:chapter=20>
>
>
> | 2016/07/14 | [
"https://history.stackexchange.com/questions/30882",
"https://history.stackexchange.com",
"https://history.stackexchange.com/users/5150/"
] | See
<http://department.monm.edu/classics/Courses/CLAS240/Africa/homeronethiopians.htm>
for a collection of quotations from Homer about the Aethiopians [which basically meant black, sub-Saharan Africans], who are referred to respectfully e.g.:
"Iliad 1.423-4 (Thetis is speaking to Achilles.)
Only yesterday Zeus went off to the Ocean River
to feast with the Aethiopians, loyal, lordly men,
and all of the gods went with him."
Another Example:
While Ovid was a Roman, his collection of stories the 'Metamorphoses' was written based on the Greek myths and shows the considerable influence of Greek culture that was normal among educated Romans. In the story of Phaeton, Ovid says that the Aethiopians became the colour they are because the sun once veered dangerously close to the Earth and permanently charred them black. This suggests he and his audience regarded being black-skinned as something sufficiently different from what they knew as the norm to potentially require explaining, but he does not suggest that blacks were intrinsically inferior to whites. | Aristotle mentions some descriptions in [Physiognomonica](https://archive.org/stream/worksaristotle00arisuoft/worksaristotle00arisuoft_djvu.txt) (look for Ethiopian. I'm not going to quote it... wow.)
Otherwise, I don't recall any mention of physical descriptions of Africans (non-Northern Africans, i.e., Libyans, Egyptians, Carthaginians, etc.) in Thucydides, Xenophon or Plato, apart from the reference from Herodotus you had. I don't see any reference to descriptions of physical attributes, but Herodotus also discusses the [Persian invasion of Kush (Sudan)](http://www.livius.org/he-hg/herodotus/logos3_07.html). |
3,137,868 | I'm trying to develop a website using *Visualforce* with *Apex*.
But I'm unable to.
Please help me with documents and websites regarding them; I'm unable to find docs regarding *Visualforce*. | 2010/06/29 | [
"https://Stackoverflow.com/questions/3137868",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/309721/"
] | You always have stack trace information--that's a runtime feature unrelated to the build mode--but line numbers and source file names are normally unavailable in release build stack traces.
You can get both line numbers and source file names in release build stack traces (including in exceptions) by altering the build configuration to create full program database (.pdb) files. To do so in Visual Studio:
1. open your project's property pages
2. select the Release configuration
3. go to the Build tab, then click the Advanced button
4. select "full" in the Debug Info dropdown.
Note that this will only help if the .pdb files are deployed alongside your application. | If you are referring to stack traces in the context of Exceptions, then yes, stack trace information is still available in release mode. What you lose in release mode is full debug symbols, which provide source code sequence point information to stack traces. This allows the stack trace to identify the specific line of code that a particular stack trace entry refers to.
Additionally, in release mode with optimizations enabled, code may be inlined, changing how the runtime code is structured. While definitely more optimal, runtime release code has less of a relationship with the lines of code that were actually written. |
17,095 | I have done some basic object-oriented programming with C++ (creating a B-Tree, hashing algorithms, doubly linked lists) and I have done small projects in C (like making a scientific calculator, etc.)
**How different is hardware programming (specifically for micro controllers) from software/object oriented programming in terms of the mindset and "thinking" that the programmer has to have?**
Is one usually considered harder than the other by most people?
With my background (as described above) would I need a lot of preparation for going into hardware programming or can I dive straight in without too much preparation? | 2011/07/19 | [
"https://electronics.stackexchange.com/questions/17095",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/4711/"
] | You will have to completely abandon the object-oriented paradigm when dealing with most microcontrollers.
Microcontrollers are generally register- and RAM-limited, with slow clock rates and no pipelining / parallel code paths. You can forget about Java on a PIC, for example.
You have to get into an assembly-language mindset, and write procedurally.
You have to keep your code relatively flat and avoid recursion, as RAM limitations can often lead to stack issues.
You have to learn how to write interrupt service routines which are efficient (usually in assembly language).
You may have to refactor parts of the code manually, in assembly language, to implement functionality that the compiler doesn't support (or supports poorly).
You have to write mathematical code that takes into account the word size and lack of FPU capabilities of most microcontrollers (i.e. doing 32-bit multiplication on an 8-bit micro = evil).
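To see why multi-word math is expensive on a small core, here is a sketch (in Python, purely for illustration) of a 32-bit multiply decomposed into 8-bit limbs: the sixteen partial products, shifts, and carries that an 8-bit CPU has to grind through one byte at a time:

```python
def mul32_via_8bit_limbs(a, b):
    """Multiply two 32-bit values using only 8-bit partial products,
    mimicking what an 8-bit CPU must do for a 32x32 multiply."""
    a_limbs = [(a >> (8 * i)) & 0xFF for i in range(4)]  # little-endian bytes
    b_limbs = [(b >> (8 * i)) & 0xFF for i in range(4)]
    acc = 0
    for i in range(4):
        for j in range(4):
            # 16 single-byte multiplies, each shifted into place and accumulated
            acc += (a_limbs[i] * b_limbs[j]) << (8 * (i + j))
    return acc & 0xFFFFFFFF  # keep the low 32 bits, as the hardware register would

assert mul32_via_8bit_limbs(0xDEADBEEF, 0x12345678) == (0xDEADBEEF * 0x12345678) & 0xFFFFFFFF
```

On a real 8-bit micro each of those partial products and carry propagations is several instructions, which is why compilers and programmers try hard to avoid wide arithmetic there.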
It is a different world. To me, having a computer science or professional programming background can be as much of a hindrance as having no knowledge at all when dealing with microcontrollers. | Regarding microcontroller vs OOP programming, they are not opposites. It is true that all vendor libraries are in plain C, but all platforms support C++ OOP too. Developers can build and do build C++ high-level libraries and device firmware on top of that. A good example is the Arduino libraries, official and user-built - mostly C++ classes. Maybe not all OOP advantages can be fully utilized in an embedded environment, but well-known C++ vs C advantages are valid here too.
Regarding the mindset and thinking, as noted in other answers, microcontrollers are very resource-constrained platforms (especially in RAM, less so in speed) - things like dynamic memory allocation and C++ exceptions are usually ruled out. Given that the right hardware is chosen, it is easy to adapt to these limitations and use other techniques (also widely used on other platforms).
In my view, the harder challenge might be an extra dimension found in embedded programming - timing. This is because embedded software usually deals a lot with real-time events, strictly timed protocols to drive peripheral hardware, and the general task itself (there are some parallels on other "high level" platforms too, like multi-threaded applications).
Be prepared to read a lot of datasheets when dealing with new hardware - I guess this could be related to the "mindset" part of your question :) Surely some EE and hardware knowledge would be needed.
Also I would like to note that these days embedded software development doesn't require assembly language. In fact Java (which, BTW, is OOP by default) is already here and getting stronger (at least for some classes of embedded devices, for example IoT devices, it could have a very bright future). |
17,095 | I have done some basic object-oriented programming with C++ (creating a B-Tree, hashing algorithms, doubly linked lists) and I have done small projects in C (like making a scientific calculator, etc.)
**How different is hardware programming (specifically for micro controllers) from software/object oriented programming in terms of the mindset and "thinking" that the programmer has to have?**
Is one usually considered harder than the other by most people?
With my background (as described above) would I need a lot of preparation for going into hardware programming or can I dive straight in without too much preparation? | 2011/07/19 | [
"https://electronics.stackexchange.com/questions/17095",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/4711/"
] | I'm presuming your C++ experience is PC-based.
An often made error by programmers moving from PC to microcontroller is that they don't realize how **limited resources** can be. On a PC nobody will stop you when you create a table with 100 000 entries, or write a program which compiles to 1MB of machine code.
There *are* microcontrollers which have a wealth of memory resources, especially in the high end, but it's still a far cry from what you'll be used to. For a hobby project you probably can always go for the maximum, but in a professional project you'll often be forced to work with the **smaller device** because it's **cheaper**.
On one project I was working with a TI [MSP430F1101](http://focus.ti.com/lit/ds/symlink/msp430f1101.pdf). 1KB of program memory, 128 bytes of configuration Flash, 128 bytes of RAM. The program didn't fit in the 1K, so I had to write a 23-byte function in the configuration Flash. With these small controllers you **calculate by the byte**. On another occasion the program memory was 4 bytes too small. Boss wouldn't let me use the controller with more memory, but instead I had to optimize already optimized machine code (it was already written in assembler) to fit the extra 4 bytes in. You get the picture.
Depending on the platform you're working on you'll have to deal with very **low level I/O**. Some development environments have functions to write to an LCD, but on others you're on your own, and will have to read the LCD's **datasheet** from start to finish to know how to control it.
You may have to control a relay; that's easier than an LCD, but it will require you to go to the register level of the microcontroller. Again a datasheet or user manual. You'll have to get to know the microcontroller's structure, which you'll find in a block diagram, again in the datasheet. In the microprocessor days we talked about a **programming model**, which was basically a lineup of the processor's registers. Today's microcontrollers are so complex that a description of all the registers can take the best part of a 100-page datasheet. IIRC just the description of the clock module for the MSP430 was 25 pages long.
You'll often have to deal with **real time** event handling. For example, an **interrupt** may have to be handled within 10\$\mu\$s, and during that time another interrupt may arrive which requires the same timing accuracy.
Microcontrollers are often **programmed in C**. C++ is rather resource hungry, so that's usually out. (Most C++ implementations for microcontrollers offer a limited subset of C++.) Like I said, depending on the platform you may have an extensive **library of functions** available which could save you quite some development time. It's worth taking some time to study it; it may save you a lot of time later on if you know what's available. | "hardware programming" can mean a lot of things. Programming a very small chip (think 10F200, 512 instructions, a few bytes of RAM) can be almost like designing an electronic circuit. On the other hand, programming a big Cortex microcontroller (1 MB FLASH, 64 kB RAM) can be a lot like PC/GUI programming, using a big GUI toolkit. IMHO a good embedded/real-time programmer needs skills both from the software engineering side and from the circuit design side. For the bigger uC, C++ is a good language choice; for the very small ones C might be the only choice. Assembly knowledge can be handy, but I would not recommend doing serious projects entirely in assembly.
I have done serious embedded work with people from both (SWI and EE) sides. I generally prefer the SWI people, provided that they have some experience with multi-threaded programming.
Your question sounds like you want to dive into embedded programming. By all means do so. For the low-level aspects (interfacing the peripherals in your chip and the hardware around it) you will need to learn some new skills, but it is just a lot of work without many new concepts. For the higher layers of your projects you can draw on your existing knowledge. |
17,095 | I have done some basic object-oriented programming with C++ (creating a B-Tree, hashing algorithms, doubly linked lists) and I have done small projects in C (like making a scientific calculator, etc.)
**How different is hardware programming (specifically for micro controllers) from software/object oriented programming in terms of the mindset and "thinking" that the programmer has to have?**
Is one usually considered harder than the other by most people?
With my background (as described above) would I need a lot of preparation for going into hardware programming or can I dive straight in without too much preparation? | 2011/07/19 | [
"https://electronics.stackexchange.com/questions/17095",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/4711/"
] | I'm presuming your C++ experience is PC-based.
An often made error by programmers moving from PC to microcontroller is that they don't realize how **limited resources** can be. On a PC nobody will stop you when you create a table with 100 000 entries, or write a program which compiles to 1MB of machine code.
There *are* microcontrollers which have a wealth of memory resources, especially in the high end, but it's still a far cry from what you'll be used to. For a hobby project you can probably always go for the maximum, but in a professional project you'll often be forced to work with the **smaller device** because it's **cheaper**.
On one project I was working with a TI [MSP430F1101](http://focus.ti.com/lit/ds/symlink/msp430f1101.pdf). 1KB of program memory, 128 bytes of configuration Flash, 128 bytes of RAM. The program didn't fit in the 1K, so I had to write a 23-byte function in the configuration Flash. With these small controllers you **calculate by the byte**. On another occasion the program memory was 4 bytes too small. The boss wouldn't let me use the controller with more memory, so instead I had to optimize already-optimized machine code (it was already written in assembler) to fit the extra 4 bytes in. You get the picture.
Depending on the platform you're working on you'll have to deal with very **low level I/O**. Some development environments have functions to write to an LCD, but on others you're on your own, and will have to read the LCD's **datasheet** from start to finish to know how to control it.
You may have to control a relay; that's easier than an LCD, but it will require you to go down to the register level of the microcontroller. Again, a datasheet or user manual. You'll have to get to know the microcontroller's structure, which you'll find in a block diagram, again in the datasheet. In the microprocessor days we talked about a **programming model**, which was basically a lineup of the processor's registers. Today's microcontrollers are so complex that a description of all the registers can take the best part of a 100-page datasheet. IIRC just the description of the clock module for the MSP430 was 25 pages long.
You'll often have to deal with **real time** event handling: an **interrupt** you have to handle within 10\$\mu\$s, for example, while another interrupt arrives that requires the same timing accuracy.
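The usual way to meet such deadlines is to keep the handler itself tiny and defer the slow work to the main loop. A hedged, host-compilable sketch of that flag-based pattern follows; on real hardware the ISR would take no argument (it would read a peripheral register) and would carry a vendor-specific vector attribute, both simplified here so the code compiles anywhere:

```c
#include <stdint.h>

/* Sketch of the "short ISR" pattern: the handler only captures data and
 * raises a flag; the slow work runs in the main loop.  adc_isr taking a
 * parameter is a host-side simplification. */
static volatile uint16_t latest_sample;
static volatile uint8_t  sample_ready;

void adc_isr(uint16_t adc_register_value)
{
    latest_sample = adc_register_value;  /* store the reading...        */
    sample_ready  = 1;                   /* ...raise flag, return fast  */
}

/* Main-loop side: returns 1 and writes the sample if one is pending. */
int poll_sample(uint16_t *out)
{
    if (!sample_ready)
        return 0;
    sample_ready = 0;
    *out = latest_sample;
    return 1;
}
```

The `volatile` qualifiers matter: they stop the compiler from caching the flag across the ISR/main-loop boundary.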
Microcontrollers are often **programmed in C**. C++ is rather resource-hungry, so that's usually out. (Most C++ implementations for microcontrollers offer a limited subset of C++.) Like I said, depending on the platform you may have an extensive **library of functions** available which could save you considerable development time. It's worth taking some time to study it; it may save you a lot of time later on if you know what's available. | For every arduino library method that you call there is a wealth of C/C++ code that makes it possible; it's simply packaged nicely for you to use as an API. Take a look at the arduino source code under the directory hardware/arduino/\* and you'll see all the C/C++ written for you which interacts directly with the AVR microcontroller's registers. If your objective is to learn how to write stuff like this (directly for the hardware) then there is a lot to cover. If your objective is to get something to work using their libraries then there might not be much to talk about, as most of the hard work is done for you and their libraries and development environment are very easy to use.
Some rules of thumb, though, when working with resource-constrained devices, which could apply to the arduino environment or others:
Be aware of how much memory you are using. Both code size (which goes to flash memory) and static RAM usage (constants in your code that will always exist in RAM). I would argue that static RAM usage is a bit more important starting out, as it is easy to overlook. It's not uncommon for you to have only 1000 bytes to work with for your stack, heap, and constants. Be wise in how you spend it, so avoid things like long arrays of integers (4 bytes each) when bytes or unsigned chars (1 byte each) will suffice. Another answer here covers some other important points very well, so I'll stop here; I mainly wanted to get the point across that there is *a lot* to cover if you're not using the arduino library and are writing your *own* C libraries.
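A small sketch of that element-width point, assuming a hypothetical 64-entry lookup table whose values happen to fit in a byte; shrinking the element type cuts the static RAM cost fourfold:

```c
#include <stdint.h>

/* Illustrative only: the same 64-entry lookup table stored two ways.
 * If every value fits in 0..255, the 8-bit version costs a quarter of
 * the RAM of the 32-bit version. */
#define TABLE_LEN 64

static int32_t table_wide[TABLE_LEN];   /* 64 * 4 = 256 bytes of RAM */
static uint8_t table_narrow[TABLE_LEN]; /* 64 * 1 =  64 bytes of RAM */

unsigned ram_cost_wide(void)   { return (unsigned)sizeof table_wide; }
unsigned ram_cost_narrow(void) { return (unsigned)sizeof table_narrow; }
```

Using the exact-width types from `<stdint.h>` rather than plain `int` also keeps the cost predictable across 8-, 16- and 32-bit targets.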
17,095 | I have done some basic object-oriented programming with C++ (creating a B-Tree, Hashing Algorithms, Doubly Linked Lists) and I have done small projects in C (like making a scientific calculator etc.)
**How different is hardware programming (specifically for micro controllers) from software/object oriented programming in terms of the mindset and "thinking" that the programmer has to have?**
Is one usually considered harder than the other by most people?
With my background (as described above) would I need a lot of preparation for going into hardware programming or can I dive straight in without too much preparation? | 2011/07/19 | [
"https://electronics.stackexchange.com/questions/17095",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/4711/"
] | "hardware programming" can mean a lot of things. Programming a very small chip (think 10F200, 512 instructions, a few bytes of RAM) can be almost like designing an electronic circuit. On the other side, programming a big Cortex microcontroller (1 MB FLASH, 64 kB RAM) can be a lot like PC/GUI programming, using a big GUI toolkit. IMHO a good embedded/real-time programmer needs skills both from the software engineering side and from the circuit design side. For the bigger uC C++ is a good language choice; for the very small ones C might be the only choice. Assembly knowledge can be handy, but I would not recommend doing serious projects entirely in assembly.
I have done serious embedded work with people from both (SWI and EE) sides. I generally prefer the SWI people, provided that they have some experience with multi-threaded programming.
Your question sounds like you want to dive into embedded programming. By all means do so. For the low-level aspects (interfacing the peripherals in your chip and the hardware around it) you will need to learn some new skills, but it is just a lot of work without many new concepts. For the higher layers of your projects you can draw on your existing knowledge. | Regarding microcontroller vs OOP programming, they are not opposites. It is true that all vendor libraries are in plain C, but all platforms support C++ OOP too. Developers can and do build C++ high-level libraries and device firmware on top of that. A good example is the Arduino libraries, official and user-built - mostly C++ classes. Maybe not all OOP advantages can be fully utilized in an embedded environment, but the well-known C++ vs C advantages are valid here too.
Regarding the mindset and thinking, as noted in other answers, microcontrollers are very resource-constrained platforms (especially in RAM, less so in speed) - stuff like dynamic memory allocation and C++ exceptions is usually ruled out. Given that the right hardware is chosen, it is easy to adapt to these limitations and use other techniques (also widely used on other platforms).
In my view, the harder challenge might be one extra dimension found in embedded programming - timing. This is because embedded software usually deals a lot with real-time events, strictly timed protocols to drive peripheral hardware, and the timing of the overall task itself (there are some parallels on other "high level" platforms too, like multi-threaded applications).
Be prepared to read a lot of datasheets when dealing with new hardware - I guess this relates to the "mindset" part of the question :) Surely some EE and hardware knowledge will be needed.
Also I would like to note that these days embedded software development doesn't require assembly language. In fact Java (which is OOP by default, BTW) is already here and getting stronger (at least for some classes of embedded devices, IoT devices for example, it could have a very bright future). |
17,095 | I have done some basic object-oriented programming with C++ (creating a B-Tree, Hashing Algorithms, Doubly Linked Lists) and I have done small projects in C (like making a scientific calculator etc.)
**How different is hardware programming (specifically for micro controllers) from software/object oriented programming in terms of the mindset and "thinking" that the programmer has to have?**
Is one usually considered harder than the other by most people?
With my background (as described above) would I need a lot of preparation for going into hardware programming or can I dive straight in without too much preparation? | 2011/07/19 | [
"https://electronics.stackexchange.com/questions/17095",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/4711/"
] | I do both, so here's my view.
I think the most important skill by far in embedded is your debugging ability. The required mindset is much different in that so much more can go wrong, and you must be very open to considering all the different ways what you are trying to do can go wrong.
This is the single biggest issue for new embedded developers. PC people tend to have it rougher, as they're used to so much just working for them. They'll tend to waste a lot of time searching for tools to do things for them instead (hint: there aren't many). There's a lot of banging heads into walls over and over, not knowing what else to do. If you feel you're getting stuck, step back and figure out if you can identify what all might be going wrong. Systematically go through narrowing your potential problems list until you figure it out. It follows directly from this process that you should limit the scope of problems by not changing too much at once.
Experienced embedded people tend to take debugging for granted... most of the people who can't do it well don't last long (or work in large companies that simply accept "firmware is hard" as an answer for why a certain feature is years late)
You're working on code that runs on an external system to your development system, with varying degrees of visibility into your target from platform to platform. If under your control, push for development aids to help increase this visibility into your target system. Use debug serial ports, bit banging debug output, the famous blinking light, etc. Certainly at a minimum learn how to use an oscilloscope and use pin I/O with the 'scope to see when certain functions enter/exit, ISRs fire, etc. I've watched people struggle for literally years longer than necessary simply because they never bothered to set up/learn how to use a proper JTAG debugger link.
It's much more important to be very aware of exactly what resources you have relative to a PC. Read the datasheets carefully. Consider the resource 'cost' of anything you are trying to do. Learn resource-oriented debugging tricks like filling stack space with a magic value to track stack usage.
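The stack-painting trick mentioned above can be sketched on a host as follows; `STACK_SIZE`, the magic byte, and the buffer standing in for the real stack region are all hypothetical, and on a real target the paint loop would run from the startup code over the linker-defined stack area:

```c
#include <stdint.h>
#include <stddef.h>

/* "Stack painting": fill the stack region with a magic byte at startup,
 * run the program, then count surviving magic bytes at the far end to
 * find the stack high-water mark. */
#define STACK_SIZE  256
#define STACK_MAGIC 0xA5

static uint8_t fake_stack[STACK_SIZE];  /* stand-in for the real stack */

void stack_paint(void)
{
    for (size_t i = 0; i < STACK_SIZE; i++)
        fake_stack[i] = STACK_MAGIC;
}

/* Simulate the program having consumed `used` bytes from the top. */
void stack_consume(size_t used)
{
    for (size_t i = STACK_SIZE - used; i < STACK_SIZE; i++)
        fake_stack[i] = 0x00;
}

/* Headroom = contiguous magic bytes still intact at the bottom. */
size_t stack_headroom(void)
{
    size_t n = 0;
    while (n < STACK_SIZE && fake_stack[n] == STACK_MAGIC)
        n++;
    return n;
}
```

Dumping `stack_headroom()` over a debug serial port after a long soak run gives a cheap worst-case stack-usage measurement.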
While some degree of debugging skill is required for both PC and embedded software, it's much more important with embedded. | Regarding microcontroller vs OOP programming, they are not opposites. It is true that all vendor libraries are in plain C, but all platforms support C++ OOP too. Developers can and do build C++ high-level libraries and device firmware on top of that. A good example is the Arduino libraries, official and user-built - mostly C++ classes. Maybe not all OOP advantages can be fully utilized in an embedded environment, but the well-known C++ vs C advantages are valid here too.
Regarding the mindset and thinking, as noted in other answers, microcontrollers are very resource-constrained platforms (especially in RAM, less so in speed) - stuff like dynamic memory allocation and C++ exceptions is usually ruled out. Given that the right hardware is chosen, it is easy to adapt to these limitations and use other techniques (also widely used on other platforms).
In my view, the harder challenge might be one extra dimension found in embedded programming - timing. This is because embedded software usually deals a lot with real-time events, strictly timed protocols to drive peripheral hardware, and the timing of the overall task itself (there are some parallels on other "high level" platforms too, like multi-threaded applications).
Be prepared to read a lot of datasheets when dealing with new hardware - I guess this relates to the "mindset" part of the question :) Surely some EE and hardware knowledge will be needed.
Also I would like to note that these days embedded software development doesn't require assembly language. In fact Java (which is OOP by default, BTW) is already here and getting stronger (at least for some classes of embedded devices, IoT devices for example, it could have a very bright future). |
17,095 | I have done some basic object-oriented programming with C++ (creating a B-Tree, Hashing Algorithms, Doubly Linked Lists) and I have done small projects in C (like making a scientific calculator etc.)
**How different is hardware programming (specifically for micro controllers) from software/object oriented programming in terms of the mindset and "thinking" that the programmer has to have?**
Is one usually considered harder than the other my most people?
With my background (as described above) would I need a lot of preparation for going into hardware programming or can I dive straight in without too much preparation? | 2011/07/19 | [
"https://electronics.stackexchange.com/questions/17095",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/4711/"
] | You will have to completely abandon the object-oriented paradigm when dealing with most microcontrollers.
Microcontrollers are generally register- and RAM-limited, with slow clock rates and no pipelining / parallel code paths. You can forget about Java on a PIC, for example.
You have to get into an assembly-language mindset, and write procedurally.
You have to keep your code relatively flat and avoid recursion, as RAM limitations can often lead to stack issues.
You have to learn how to write interrupt service routines which are efficient (usually in assembly language).
You may have to refactor parts of the code manually, in assembly language, to implement functionality that the compiler doesn't support (or supports poorly).
You have to write mathematical code that takes into account the word size and lack of FPU capabilities of most microcontrollers (e.g. doing 32-bit multiplication on an 8-bit micro = evil).
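As an illustration of FPU-less math, here is a minimal fixed-point sketch; the Q8.8 format choice is arbitrary. The 32-bit intermediate in the multiply is exactly the kind of operation that gets expensive on an 8-bit core, where the compiler must synthesize it from 8-bit multiplies, which is why word widths are chosen so carefully:

```c
#include <stdint.h>

/* Q8.8 fixed point: 8 integer bits, 8 fractional bits, in an int16_t. */
typedef int16_t q8_8;

#define Q(x) ((q8_8)((x) * 256))   /* convert a constant to Q8.8 */

/* Multiply: widen to 32 bits, then shift the extra fraction bits out. */
q8_8 fp_mul(q8_8 a, q8_8 b)
{
    return (q8_8)(((int32_t)a * b) >> 8);
}

/* Addition needs no widening as long as the result stays in range. */
q8_8 fp_add(q8_8 a, q8_8 b)
{
    return (q8_8)(a + b);
}
```

For example, `fp_mul(Q(1.5), Q(2.0))` yields `Q(3.0)` using only integer operations.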
It is a different world. To me, having a computer science or professional programming background can be as much of a hindrance as having no knowledge at all when dealing with microcontrollers. | I do both, so here's my view.
I think the most important skill by far in embedded is your debugging ability. The required mindset is much different in that so much more can go wrong, and you must be very open to considering all the different ways what you are trying to do can go wrong.
This is the single biggest issue for new embedded developers. PC people tend to have it rougher, as they're used to so much just working for them. They'll tend to waste a lot of time searching for tools to do things for them instead (hint: there aren't many). There's a lot of banging heads into walls over and over, not knowing what else to do. If you feel you're getting stuck, step back and figure out if you can identify what all might be going wrong. Systematically go through narrowing your potential problems list until you figure it out. It follows directly from this process that you should limit the scope of problems by not changing too much at once.
Experienced embedded people tend to take debugging for granted... most of the people who can't do it well don't last long (or work in large companies that simply accept "firmware is hard" as an answer for why a certain feature is years late)
You're working on code that runs on an external system to your development system, with varying degrees of visibility into your target from platform to platform. If under your control, push for development aids to help increase this visibility into your target system. Use debug serial ports, bit banging debug output, the famous blinking light, etc. Certainly at a minimum learn how to use an oscilloscope and use pin I/O with the 'scope to see when certain functions enter/exit, ISRs fire, etc. I've watched people struggle for literally years longer than necessary simply because they never bothered to set up/learn how to use a proper JTAG debugger link.
It's much more important to be very aware of exactly what resources you have relative to a PC. Read the datasheets carefully. Consider the resource 'cost' of anything you are trying to do. Learn resource-oriented debugging tricks like filling stack space with a magic value to track stack usage.
While some degree of debugging skill is required for both PC and embedded software, it's much more important with embedded. |
17,095 | I have done some basic object-oriented programming with C++ (creating a B-Tree, Hashing Algorithms, Doubly Linked Lists) and I have done small projects in C (like making a scientific calculator etc.)
**How different is hardware programming (specifically for micro controllers) from software/object oriented programming in terms of the mindset and "thinking" that the programmer has to have?**
Is one usually considered harder than the other my most people?
With my background (as described above) would I need a lot of preparation for going into hardware programming or can I dive straight in without too much preparation? | 2011/07/19 | [
"https://electronics.stackexchange.com/questions/17095",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/4711/"
] | You need to think about several things:
* You will use C as the language
* You can still create a feeling of object orientation using function pointers so that you can override functions etc. I have used this method in past and current projects and it works very well. So OO is partially there, but not in the C++ sense.
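The function-pointer technique above can be sketched as a small driver "interface" struct; the sensor names and return values here are invented for illustration:

```c
#include <stdint.h>

/* A driver "interface" filled in by concrete drivers, giving
 * virtual-dispatch behaviour without C++. */
typedef struct {
    int (*init)(void);
    int (*read)(void);
} sensor_driver;

/* Two made-up concrete drivers. */
static int fake_temp_init(void)  { return 0; }
static int fake_temp_read(void)  { return 25; }
static int fake_light_init(void) { return 0; }
static int fake_light_read(void) { return 700; }

static const sensor_driver temp_sensor  = { fake_temp_init,  fake_temp_read };
static const sensor_driver light_sensor = { fake_light_init, fake_light_read };

/* Application code sees only the interface, so drivers can be swapped
 * (or overridden with test doubles) without touching it. */
int sample_sensor(const sensor_driver *drv)
{
    if (drv->init() != 0)
        return -1;
    return drv->read();
}
```

Because the structs are `const`, many compilers can place the dispatch tables in flash rather than RAM.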
There are other limitations that will come into play, such as limited speed and memory. So as a general guideline, I avoid:
* Using the heap: if there is a way to solve the problem without malloc, I do that. For example, I preallocate buffers and just use them.
* I intentionally reduce the stack size in compiler settings to surface stack-size issues early on, and optimize that carefully.
* I assume every single line of code will be interrupted by an event, so I avoid non-reentrant code.
* I assume even interrupts are nested, so I write that code accordingly.
* I avoid using an OS unless it is necessary. 70% of embedded projects don't really need an OS. If I must use an OS, I only use something with source code available (FreeRTOS etc.).
* If I am using an OS, I almost always abstract things so that I can change the OS in a matter of hours.
* For drivers etc. I will only use the libraries provided by the vendor; I never directly fiddle with the bits unless I have no other choice. This makes the code readable and improves debugging.
* I look at the loops and other stuff, especially in ISRs, to make sure they are fast enough.
* I always keep a few GPIOs handy to measure stuff, context switching, ISR run time etc.
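The "preallocate buffers instead of malloc" rule from the list above can be sketched as a fixed buffer pool; the pool and buffer sizes here are made-up numbers:

```c
#include <stdint.h>
#include <stddef.h>

/* A fixed pool of message buffers handed out and returned with no heap
 * involved: allocation cost is bounded and fragmentation is impossible. */
#define POOL_SLOTS 4
#define BUF_BYTES  32

static uint8_t pool[POOL_SLOTS][BUF_BYTES];
static uint8_t in_use[POOL_SLOTS];

uint8_t *buf_alloc(void)
{
    for (size_t i = 0; i < POOL_SLOTS; i++) {
        if (!in_use[i]) {
            in_use[i] = 1;
            return pool[i];
        }
    }
    return NULL;   /* pool exhausted: caller must cope, no heap fallback */
}

void buf_free(uint8_t *p)
{
    for (size_t i = 0; i < POOL_SLOTS; i++)
        if (p == pool[i])
            in_use[i] = 0;
}
```

The `NULL` return on exhaustion forces the out-of-memory case to be handled explicitly at design time rather than discovered as a heap failure in the field.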
The list goes on; I am probably below average in terms of software programming, and I am sure there are better practices. | Regarding microcontroller vs OOP programming, they are not opposites. It is true that all vendor libraries are in plain C, but all platforms support C++ OOP too. Developers can and do build C++ high-level libraries and device firmware on top of that. A good example is the Arduino libraries, official and user-built - mostly C++ classes. Maybe not all OOP advantages can be fully utilized in an embedded environment, but the well-known C++ vs C advantages are valid here too.
Regarding the mindset and thinking, as noted in other answers, microcontrollers are very resource-constrained platforms (especially in RAM, less so in speed) - stuff like dynamic memory allocation and C++ exceptions is usually ruled out. Given that the right hardware is chosen, it is easy to adapt to these limitations and use other techniques (also widely used on other platforms).
In my view, the harder challenge might be one extra dimension found in embedded programming - timing. This is because embedded software usually deals a lot with real-time events, strictly timed protocols to drive peripheral hardware, and the timing of the overall task itself (there are some parallels on other "high level" platforms too, like multi-threaded applications).
Be prepared to read a lot of datasheets when dealing with new hardware - I guess this relates to the "mindset" part of the question :) Surely some EE and hardware knowledge will be needed.
Also I would like to note that these days embedded software development doesn't require assembly language. In fact Java (which is OOP by default, BTW) is already here and getting stronger (at least for some classes of embedded devices, IoT devices for example, it could have a very bright future). |
17,095 | I have done some basic object-oriented programming with C++ (creating a B-Tree, Hashing Algorithms, Doubly Linked Lists) and I have done small projects in C (like making a scientific calculator etc.)
**How different is hardware programming (specifically for micro controllers) from software/object oriented programming in terms of the mindset and "thinking" that the programmer has to have?**
Is one usually considered harder than the other by most people?
With my background (as described above) would I need a lot of preparation for going into hardware programming or can I dive straight in without too much preparation? | 2011/07/19 | [
"https://electronics.stackexchange.com/questions/17095",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/4711/"
] | I do both, so here's my view.
I think the most important skill by far in embedded is your debugging ability. The required mindset is much different in that so much more can go wrong, and you must be very open to considering all the different ways what you are trying to do can go wrong.
This is the single biggest issue for new embedded developers. PC people tend to have it rougher, as they're used to so much just working for them. They'll tend to waste a lot of time searching for tools to do things for them instead (hint: there aren't many). There's a lot of banging heads into walls over and over, not knowing what else to do. If you feel you're getting stuck, step back and figure out if you can identify what all might be going wrong. Systematically go through narrowing your potential problems list until you figure it out. It follows directly from this process that you should limit the scope of problems by not changing too much at once.
Experienced embedded people tend to take debugging for granted... most of the people who can't do it well don't last long (or work in large companies that simply accept "firmware is hard" as an answer for why a certain feature is years late)
You're working on code that runs on an external system to your development system, with varying degrees of visibility into your target from platform to platform. If under your control, push for development aids to help increase this visibility into your target system. Use debug serial ports, bit banging debug output, the famous blinking light, etc. Certainly at a minimum learn how to use an oscilloscope and use pin I/O with the 'scope to see when certain functions enter/exit, ISRs fire, etc. I've watched people struggle for literally years longer than necessary simply because they never bothered to set up/learn how to use a proper JTAG debugger link.
It's much more important to be very aware of exactly what resources you have relative to a PC. Read the datasheets carefully. Consider the resource 'cost' of anything you are trying to do. Learn resource-oriented debugging tricks like filling stack space with a magic value to track stack usage.
While some degree of debugging skill is required for both PC and embedded software, it's much more important with embedded. | For every arduino library method that you call there is a wealth of C/C++ code that makes it possible; it's simply packaged nicely for you to use as an API. Take a look at the arduino source code under the directory hardware/arduino/\* and you'll see all the C/C++ written for you which interacts directly with the AVR microcontroller's registers. If your objective is to learn how to write stuff like this (directly for the hardware) then there is a lot to cover. If your objective is to get something to work using their libraries then there might not be much to talk about, as most of the hard work is done for you and their libraries and development environment are very easy to use.
Some rules of thumb, though, when working with resource-constrained devices, which could apply to the arduino environment or others:
Be aware of how much memory you are using. Both code size (which goes to flash memory) and static RAM usage (constants in your code that will always exist in RAM). I would argue that static RAM usage is a bit more important starting out, as it is easy to overlook. It's not uncommon for you to have only 1000 bytes to work with for your stack, heap, and constants. Be wise in how you spend it, so avoid things like long arrays of integers (4 bytes each) when bytes or unsigned chars (1 byte each) will suffice. Another answer here covers some other important points very well, so I'll stop here; I mainly wanted to get the point across that there is *a lot* to cover if you're not using the arduino library and are writing your *own* C libraries.
17,095 | I have done some basic object-oriented programming with C++ (creating a B-Tree, Hashing Algorithms, Doubly Linked Lists) and I have done small projects in C (like making a scientific calculator etc.)
**How different is hardware programming (specifically for micro controllers) from software/object oriented programming in terms of the mindset and "thinking" that the programmer has to have?**
Is one usually considered harder than the other by most people?
With my background (as described above) would I need a lot of preparation for going into hardware programming or can I dive straight in without too much preparation? | 2011/07/19 | [
"https://electronics.stackexchange.com/questions/17095",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/4711/"
] | You will have to completely abandon the object-oriented paradigm when dealing with most microcontrollers.
Microcontrollers are generally register- and RAM-limited, with slow clock rates and no pipelining / parallel code paths. You can forget about Java on a PIC, for example.
You have to get into an assembly-language mindset, and write procedurally.
You have to keep your code relatively flat and avoid recursion, as RAM limitations can often lead to stack issues.
You have to learn how to write interrupt service routines which are efficient (usually in assembly language).
You may have to refactor parts of the code manually, in assembly language, to implement functionality that the compiler doesn't support (or supports poorly).
You have to write mathematical code that takes into account the word size and lack of FPU capabilities of most microcontrollers (e.g. doing 32-bit multiplication on an 8-bit micro = evil).
It is a different world. To me, having a computer science or professional programming background can be as much of a hindrance as having no knowledge at all when dealing with microcontrollers. | I'm presuming your C++ experience is PC-based.
A common error made by programmers moving from PC to microcontroller is not realizing how **limited resources** can be. On a PC nobody will stop you when you create a table with 100 000 entries, or write a program which compiles to 1 MB of machine code.
There *are* microcontrollers which have a wealth of memory resources, especially in the high end, but it's still a far cry from what you'll be used to. For a hobby project you can probably always go for the maximum, but in a professional project you'll often be forced to work with the **smaller device** because it's **cheaper**.
On one project I was working with a TI [MSP430F1101](http://focus.ti.com/lit/ds/symlink/msp430f1101.pdf). 1KB of program memory, 128 bytes of configuration Flash, 128 bytes of RAM. The program didn't fit in the 1K, so I had to write a 23-byte function in the configuration Flash. With these small controllers you **calculate by the byte**. On another occasion the program memory was 4 bytes too small. The boss wouldn't let me use the controller with more memory, so instead I had to optimize already-optimized machine code (it was already written in assembler) to fit the extra 4 bytes in. You get the picture.
Depending on the platform you're working on you'll have to deal with very **low level I/O**. Some development environments have functions to write to an LCD, but on others you're on your own, and will have to read the LCD's **datasheet** from start to finish to know how to control it.
You may have to control a relay; that's easier than an LCD, but it will require you to go down to the register level of the microcontroller. Again, a datasheet or user manual. You'll have to get to know the microcontroller's structure, which you'll find in a block diagram, again in the datasheet. In the microprocessor days we talked about a **programming model**, which was basically a lineup of the processor's registers. Today's microcontrollers are so complex that a description of all the registers can take the best part of a 100-page datasheet. IIRC just the description of the clock module for the MSP430 was 25 pages long.
You'll often have to deal with **real time** event handling: an **interrupt** you have to handle within 10\$\mu\$s, for example, while another interrupt arrives that requires the same timing accuracy.
Microcontrollers are often **programmed in C**. C++ is rather resource-hungry, so that's usually out. (Most C++ implementations for microcontrollers offer a limited subset of C++.) Like I said, depending on the platform you may have an extensive **library of functions** available which could save you considerable development time. It's worth taking some time to study it; it may save you a lot of time later on if you know what's available.
17,095 | I have done some basic object-oriented programming with C++ (creating a B-Tree, Hashing Algorithms, Doubly Linked Lists) and I have done small projects in C (like making a scientific calculator etc.)
**How different is hardware programming (specifically for micro controllers) from software/object oriented programming in terms of the mindset and "thinking" that the programmer has to have?**
Is one usually considered harder than the other by most people?
With my background (as described above) would I need a lot of preparation for going into hardware programming or can I dive straight in without too much preparation? | 2011/07/19 | [
"https://electronics.stackexchange.com/questions/17095",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/4711/"
] | You will have to completely abandon the object-oriented paradigm when dealing with most microcontrollers.
Microcontrollers are generally register- and RAM-limited, with slow clock rates and no pipelining / parallel code paths. You can forget about Java on a PIC, for example.
You have to get into an assembly-language mindset, and write procedurally.
You have to keep your code relatively flat and avoid recursion, as RAM limitations can often lead to stack issues.
You have to learn how to write interrupt service routines which are efficient (usually in assembly language).
You may have to refactor parts of the code manually, in assembly language, to implement functionality that the compiler doesn't support (or supports poorly).
You have to write mathematical code that takes into account the word size and lack of FPU capabilities of most microcontrollers (e.g. doing 32-bit multiplication on an 8-bit micro = evil).
It is a different world. To me, having a computer science or professional programming background can be as much of a hindrance as having no knowledge at all when dealing with microcontrollers. | "hardware programming" can mean a lot of things. Programming a very small chip (think 10F200, 512 instructions, a few bytes of RAM) can be almost like designing an electronic circuit. On the other side, programming a big Cortex microcontroller (1 MB FLASH, 64 kB RAM) can be a lot like PC/GUI programming, using a big GUI toolkit. IMHO a good embedded/real-time programmer needs skills both from the software engineering side and from the circuit design side. For the bigger uC C++ is a good language choice; for the very small ones C might be the only choice. Assembly knowledge can be handy, but I would not recommend doing serious projects entirely in assembly.
I have done serious embedded work with people from both (SWI and EE) sides. I generally prefer the SWI people, provided that they have some experience with multi-threaded programming.
Your question sounds like you want to dive into embedded programming. By all means do so. For the low-level aspects (interfacing the peripherals in your chip and the hardware around it) you will need to learn some new skills, but it is just a lot of work without many new concepts. For the higher layers of your projects you can draw on your existing knowledge. |
535,179 | Dithering is used to add virtual resolution to an oversampling ADC, and for instance [this discussion](https://electronics.stackexchange.com/questions/69748/using-noise-to-increase-effective-resolution-of-adc) involves some of the math involved, and the criteria.
Take for instance an Arduino with 10 bit sampling. The unit has the ability to set a digital pin to a random value, or a PWM output to a random value. It also has the ability to define a different Vref for ADC conversions. Would a simple circuit that drove the Vref with a filtered random digital signal between 5V and 1 LSB below 5V be a good solution for low-cost dithering?
The thought is that if the Vref were being altered, then the oversampling benefit would be available on all channels, not just one at a time. I can see there will be an over-read of the raw voltage of 0.5 LSB to be corrected for but this is a simple operation.
[](https://i.stack.imgur.com/sQ7l9.png)
(I would add a schematic but the plugin seems to not work at present) | 2020/12/02 | [
"https://electronics.stackexchange.com/questions/535179",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/38456/"
] | Altering Vref would be a *multiplicative* scaling; that's not really useful across the range of inputs for walking values across code thresholds a statistical fraction of the time, as dithering attempts to do.
For dithering you want *additive* noise, not *multiplicative distortion* that is functionally *gain variation*.
Eg, you want to *add* various fractions of an LSB so that sometimes you hit the higher of the closest two codes and sometimes the lower, and then if you analyze the comparative rate of the two and assume your noise is random, you can over time calculate a more precise intermediate value to a fraction of an LSB.
In contrast, dithering by gain variation would only really work usefully for a small range of inputs, in the upper part of the allowable input range; lower in the input range the effect would be proportionally reduced. | No, while it could work on DC, the capacitors would shunt the PWM output directly. However Atmel/Microchip has an application note AVR121 which explains the use of dither.
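The additive-dither idea from the first answer is easy to check numerically. A minimal Python simulation, assuming an idealized floor-quantizer standing in for the ADC (all names here are illustrative):

```python
import random

random.seed(42)  # deterministic run for the example

def adc(v):
    # Idealized ADC: floor to the code below; 1 LSB = 1.0
    return int(v // 1.0)

def dithered_read(v, n=10000):
    # Add uniform noise of +/- 0.5 LSB before each conversion, then
    # average; the +0.5 corrects the floor-quantizer's fixed offset.
    total = sum(adc(v + random.uniform(-0.5, 0.5)) for _ in range(n))
    return total / n + 0.5

# A single conversion of 3.3 always returns code 3; the dithered
# average converges on a sub-LSB estimate close to 3.3.
```

Note that the noise is *added* to the signal rather than multiplied into the reference, which is exactly the distinction the answer is making.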
256,928 | Most of the week I live in the city where I have a typical broadband connection, but most weekends I'm out of town and only have access to a satellite connection. Trying to work over SSH on a satellite connection, while possible, is hardly desirable due to the high latency (> 1 second).
**My question is this:**
Is there any software that will do something like buffering keystrokes on my local machine before they're sent over SSH to help make the lag on individual keystrokes a little bit more transparent? Essentially I'm looking for something that would reduce the effects of the high latency for everything except for commands (e.g., opening files, changing to a new directory, etc.).
I've already discovered that vim can open remote files locally and rewrite them remotely, but, while this is a huge help, it is not quite what I'm looking for since it only works when editing files, and requires opening a connection every time a read/write occurs.
(*For anyone who may not know how to do this and is curious, just use this command: `vim scp://host/file/path/here`)* | 2011/03/13 | [
"https://superuser.com/questions/256928",
"https://superuser.com",
"https://superuser.com/users/71494/"
] | It simply means the percentage of your processor's normal maximum speed.
With speed step, power saving and everything else disabled, this should always read 100%.
If you have power saving on your laptop that underclocks your CPU compared to the stock speed, it will report a lower percentage.
If you have turbo boost or similar, it will report a higher percentage.
So, again, this is the current maximum percentage your processor can run at, compared against its reported normal speed.
I am not 100% sure, but my guess is that if you overclock, the overclocked amount would be the "base" speed to Windows and overclocking by 20% would not show a 120% maximum frequency - this is just guessing, I have no way to test. | According to an answer [here](http://social.technet.microsoft.com/Forums/en-US/w7itproui/thread/19ec3423-2112-4a44-9dd8-eacd097bc920/):
>
> Maximum Frequency in Resource Monitor is the same as the Processor Performance \ % of Maximum Frequency counter in Performance Monitor.
>
>
> For example if you have a 2.5 ghz processor which is running at 800 mhz then % of Maximum Frequency = 800/2500 = 32%. So the processor is running at 32%, or 800 mhz, of the processor's maximum frequency of 2500 mhz (2.5 ghz).
>
>
> The "best" percentage of maximum frequency is subjective. Basically, you want the CPU running at a frequency that is fast enough to do what you want while using the least amount of power so it doesn't drain your battery or increase your electric bill unnecessarily.
>
>
> Your power plan in Windows is part of what determines the frequency as well as settings in the computer's BIOS.
>
>
> Take a look at the section Processor power management (PPM) may cause CPU utilization to appear artificially high in this article: [Interpreting CPU Utilization for Performance Analysis](http://blogs.technet.com/b/winserverperformance/archive/2009/08/06/interpreting-cpu-utilization-for-performance-analysis.aspx)
>
>
> |
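The arithmetic in the quoted answer is simple enough to sketch (the function is illustrative, not any Windows API):

```python
def percent_of_max(current_mhz, base_mhz):
    # Resource Monitor's "Maximum Frequency": the current clock as a
    # percentage of the processor's stock (base) clock, so values above
    # 100% indicate Turbo Boost or an overclock.
    return 100.0 * current_mhz / base_mhz

print(percent_of_max(800, 2500))   # 2.5 GHz part throttled to 800 MHz -> 32.0
print(percent_of_max(4400, 3400))  # 3.4 GHz part boosted to 4.4 GHz -> ~129.4
```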
256,928 | Most of the week I live in the city where I have a typical broadband connection, but most weekends I'm out of town and only have access to a satellite connection. Trying to work over SSH on a satellite connection, while possible, is hardly desirable due to the high latency (> 1 second).
**My question is this:**
Is there any software that will do something like buffering keystrokes on my local machine before they're sent over SSH to help make the lag on individual keystrokes a little bit more transparent? Essentially I'm looking for something that would reduce the effects of the high latency for everything except for commands (e.g., opening files, changing to a new directory, etc.).
I've already discovered that vim can open remote files locally and rewrite them remotely, but, while this is a huge help, it is not quite what I'm looking for since it only works when editing files, and requires opening a connection every time a read/write occurs.
(*For anyone who may not know how to do this and is curious, just use this command: `vim scp://host/file/path/here`)* | 2011/03/13 | [
"https://superuser.com/questions/256928",
"https://superuser.com",
"https://superuser.com/users/71494/"
] | It simply means the percentage of your processor's normal maximum speed.
With speed step, power saving and everything else disabled, this should always read 100%.
If you have power saving on your laptop that underclocks your CPU compared to the stock speed, it will report a lower percentage.
If you have turbo boost or similar, it will report a higher percentage.
So, again, this is the current maximum percentage your processor can run at, compared against its reported normal speed.
I am not 100% sure, but my guess is that if you overclock, the overclocked amount would be the "base" speed to Windows and overclocking by 20% would not show a 120% maximum frequency - this is just guessing, I have no way to test. | Very late reply, but I just noticed that my percentage in Resource Monitor for CPU frequency is 129%, which corresponds with my overclock. I have a 3.4 GHz Intel i5 that is overclocked to 4.4, which is a (1000/3400) \* 100 = 29.411% increase over stock speed. Turbo Boost for my processor (the factory boost to frequency) was 3.8 GHz, but this also showed above 100%. Basically, the frequency your processor is listed at on the box and in CPU-Z at its maximum stock frequency (without Turbo Boost) is what Resource Monitor takes to be 100%. |