Dataset fields: qid (int64, values 1 to 74.7M); question (string, 12 to 33.8k chars); date (string, 10 chars); metadata (list); response_j (string, 0 to 115k chars); response_k (string, 2 to 98.3k chars).
99,179
Just watched [*TNG: The Dauphin*](https://memory-alpha.fandom.com/wiki/The_Dauphin_(episode)). In it the following exchange occurs when receiving a powerful transmission:

> **Data:** Sir, sensors indicate the communication originated from a tera-Watt source on the planet
>
> **Riker:** That's more power than our entire ship can generate!

This seems silly.

* There is currently [a hydro power station that produces 22.5GW](https://en.wikipedia.org/wiki/Three_Gorges_Dam). 50 times that seems very large to us today, but when you consider that the biggest nuclear weapons can release 0.5TWh, the numbers don't seem *that* extreme.
* Shields! Phasers! 150 kW defensive laser systems exist. A 1PW laser accelerator [is a thing](https://en.wikipedia.org/wiki/BELLA_%28laser%29). Those on the Enterprise must —by virtue of needing to reach much further and charge much faster— need more power.
* Transporters! Replicators! The holodeck! Surely the conversion of energy into matter, and arranging it at a distance, must pull a lot of power. And it's happening all over the ship, all the time.
* Impulse engines. Even in a vacuum, shifting 4.5 megatonnes must take serious power. To accelerate by 1 m/s over a second, you're looking at 45GW, and they seem to do things much faster, all the way up to 75,000,000 m/s. Full impulse seems to take a few seconds... That makes my calculator cry, with numbers around 8 × 10^24 W... That's way over.
* Warp.
* Computer.

All of that, while keeping the lights on and running other day-to-day things.

Was Riker just off by a unit, or is there something I'm not factoring into these points? How much energy could the USS Enterprise NCC-1701-D produce at peak output? Are there listed energy requirements for the components above that I've been guesstimating?
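As a quick sanity check of the impulse arithmetic above, here is a small PHP sketch. It treats the ship as a point mass, uses plain Newtonian kinetic energy, and the few-second spin-up time is purely an assumption; it lands in the same 10^24 W ballpark as the figure above.

```php
<?php
// Back-of-envelope check of the impulse figures above (non-relativistic,
// ship treated as a point mass; the spin-up time is a guess).
$mass        = 4.5e9;   // kg  (4.5 million metric tonnes)
$fullImpulse = 7.5e7;   // m/s (0.25c, the figure quoted above)
$spinUpTime  = 3.0;     // s   (assumed "a few seconds")

// Gentle manoeuvre: 0 -> 1 m/s over one second.
$keGentle = 0.5 * $mass * 1.0 ** 2;          // joules
printf("0 -> 1 m/s in 1 s: %.2e W\n", $keGentle / 1.0);

// Full impulse from rest in a few seconds.
$keFull = 0.5 * $mass * $fullImpulse ** 2;   // joules
printf("0 -> 0.25c in %.0f s: %.2e J, %.2e W average\n",
       $spinUpTime, $keFull, $keFull / $spinUpTime);
```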
2015/08/13
[ "https://scifi.stackexchange.com/questions/99179", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/32863/" ]
[Memory Alpha](https://memory-alpha.fandom.com/wiki/Galaxy_class) explains that

> The warp core was one of the most powerful in Starfleet, generating approximately **12.75 billion gigawatts** of power. (TNG: "True Q")

The exact quote is:

> AMANDA: It's hard to imagine how much energy is being harnessed in there.
>
> DATA: Imagination is not necessary. The scale is readily quantifiable. **We are presently generating twelve point seven five billion gigawatts** per
>
> (an alarm goes off)

([Source](http://www.chakoteya.net/NextGen/232.htm))

So, that's 12.75 *million* terawatts that the *Enterprise-D* warp core was capable of producing! It also seems that's not the maximum amount. As per *Relics*:

> SCOTT: Geordi, the shields will hold. Don't worry about that. **I can get a few extra gigawatts out of these babies.**

([Source](http://www.chakoteya.net/NextGen/230.htm))

Now, I know that's for the shields, but it does seem to indicate that the power output could be slightly more, though probably not by a huge amount. Regarding what Riker was on about:

> In 2365, the command headquarters of Daled IV utilized a communication system that originated from a terawatt source, which was necessary to penetrate the planet's atmosphere. According to Commander William T. Riker, "that's more power than our entire ship could generate," meaning that **they lacked the ability to respond to the communique**. (TNG: "The Dauphin")

([Source](https://memory-alpha.fandom.com/wiki/Gigawatt))

That is, the **communication system** of the *entire ship* couldn't produce a terawatt. The excerpt from the script below seems to confirm this:

> DATA: Sir, sensors indicate the communication originated from a terawatt source on the planet.
>
> RIKER: That's more power than our entire ship can generate.
>
> DATA: It is what is needed to penetrate the atmosphere.
>
> RIKER: Which means we lack the ability to respond, sir.

([Source](http://www.chakoteya.net/NextGen/136.htm))

Judging by this quote, which puts Riker's line into context, it does seem that by 'entire ship' he meant 'the entire ship's communication system'. It would be pretty poor if a First Officer didn't know the energy output of the ship!

[This site](http://www.ditl.org/ship-page.php?ClassID=fedgalaxy&ListID=Ships&ListOption=fed), citing the *[Star Trek: The Next Generation Technical Manual](http://www.ditl.org/book-page.php?BookID=4&ListID=Ships&ListOption=fed)*, says the *Galaxy Class* had a

> total output 50,000 TeraWatts

for the phasers. Now, that's a *separate* system with an upper limit nowhere near the *total* energy produced by the warp core.

As for the out-of-universe reason, as suggested by Stan's comment below, bear in mind that *The Dauphin* aired well before *True Q* and the TNG Technical Manual had yet to be released, so, **from an out-of-universe perspective, Riker probably *was* referring to the entire ship's output as being about a terawatt**. From an in-universe perspective this is later contradicted in *True Q* with the more realistic figure of 12.75 billion gigawatts, so we resolve this contradiction by assuming that, in-universe, Riker was referring to the communication system alone.
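Just to spell the unit conversion out, a tiny PHP check that 12.75 billion gigawatts is indeed 12.75 million terawatts (about 12.75 exawatts):

```php
<?php
// Unit check: 12.75 billion gigawatts expressed in terawatts and exawatts.
$gigawatts = 12.75e9;          // "twelve point seven five billion gigawatts"
$watts     = $gigawatts * 1e9; // 1 GW = 1e9 W
printf("%.3e W = %.2f million TW = %.2f EW\n",
       $watts, $watts / 1e12 / 1e6, $watts / 1e18);
// Prints: 1.275e+19 W = 12.75 million TW = 12.75 EW
```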
Data mentioned '12.75 billion gigawatts per...' and got cut off by the alarm at that point. The script was supposed to say 'per second', however.

Power generation has been a little inconsistent in Trek (OK, by quite a bit, probably because at the time the show was created the writers thought those would be very big numbers, forgetting about exponential advancement), but it is more or less reconcilable when you factor in that Starfleet and the Federation (and most Trek spacefaring species) rely on subspace technology and other implementations of exotic matter in all their systems. We've seen on more than one occasion that the application of subspace technology can radically increase efficiency, power output, etc. So when we hear 'joules', 'megajoules' and so on, they may be baseline figures which do NOT take into account the application of subspace technology, which gives you a total output that could easily be orders of magnitude higher.

Also, *The Dauphin* is Season 2, episode 10, while 'True Q' is Season 6, episode 6... basically four years later. The Enterprise also underwent a major refit (with upgrades to its weapons, shields, and energy generation) following the Borg incursion into Federation space ('The Best of Both Worlds'). It stands to reason that the Enterprise-D's power generation capabilities and all other systems would have been progressively enhanced over the course of TNG (in the Season 1 episode with the Bynars, for example, they received a major refit of their computer core).

It was also mentioned in the Star Trek: Voyager episode 'Night' (Season 5, episode 1) that Federation ships (or at least Voyager) use a transkinetic chamber and radiometric converters, among other things:

*TORRES: The residual anti-matter is then processed in the transkinetic chamber, where it's broken down on the subatomic level.*

*EMCK: What about the theta radiation?*

*TORRES: Oh, it's absorbed by a series of radiometric converters. We recycle the energy, use it to power everything from life support to replicators.*

*EMCK: We don't have this kind of conversion technology.*

Essentially, she described how 'waste energy' is re-absorbed into the system and used for power generation. In this sense, Starfleet ships emit very little or no waste byproducts to begin with (which makes sense, because they ARE focusing on technical efficiency and EVERYTHING is recycled as much as it possibly can be). When you have a closed system like a starship, it makes sense. Even the NX-01 used recycling to a very large extent (such as converting human waste into edible food - a process which we can do today fairly easily as well, though the NX-01 had a molecular sequencer on board which made things easier - and actually, we have also had molecular manufacturing technology since 2015, and AI-controlled atomic-scale manufacturing since 2018).

Also, anti-matter warheads like the ones in photon torpedoes are probably a lot more powerful than the 64 megatons some people imply. Sure, they may carry 1.5 kg of matter and 1.5 kg of antimatter, but the yields are highly variable, and these weapons also carry subspace technology and anti-deuterium, which increase overall explosive yields into the 690 gigaton range. Here's a quote from Memory Alpha:

**The second type warhead was loaded with a maximum yield of only 1.5 kilograms of antideuterium. Due to the premixed reactants, the released energy per unit time was greater than in a rupture of a storage pod containing 100 cubic meters of antideuterium. The torpedo had a dry mass of 247.5 kilograms. (pp. 129 & 68) By using standard physics calculations, a payload of 1.5 kilograms was equal to about 64.4 megatons. The second type, at maximum yield, generated the destructive effects greater than in an antimatter pod rupture. Antimatter was stored as liquid or slush on starships. (p. 69) Density of mere liquid antideuterium was around 160 kilograms per cubic meter. According to this comparison, the high annihilation rate energy release would be comparable to the effects in a 690 gigaton explosion. For the sake of plausibility the affected blast area at these intensities might be extremely small. Visual effects on-screen would seem to confirm this. See this antimatter calculator for more information.**

That's how you can get gigaton (and, later on, teraton) level firepower outputs from both directed energy weapons and torpedoes. Standard conversion metrics don't apply, because people usually fail to take into account subspace and other materials which can (when mixed together appropriately) produce massively larger effects. Taking that into account, it is reasonable that Trek ships employ technologies that on one end seem to require minuscule amounts of power (relative to what we see today), but end up with massive outputs (many orders of magnitude greater) once you factor in subspace technology, anti-deuterium, etc.

Also, from TNG to DS9 (at least by the episode 'The Die is Cast', Season 3, episode 21), there is more than enough time for exponential evolution of weapon (and other) technologies... more than enough to result in the teraton-level outputs that have been estimated for the Romulan/Cardassian fleet which managed to destroy 30% of a planetary crust in an opening volley.

My guess is that warp drive could also be described as a 'brute force' method of FTL, as it progressively requires more and more energy the faster you go. That's also why speed and energy requirements increase practically exponentially beyond Warp 9.9 with every increment (and why no ships were actually seen using anything faster than Warp 9.9 - not even Voyager, as their on-screen dialogue actually mentions 9.75 as a maximum sustainable speed, not 9.975 - and this also makes sense, as it is a notch above Warp 9.6, which was the absolute maximum for the Enterprise-D - at least until the USS Prometheus entered the scene, which managed Warp 9.9 without effort as a sustainable speed).

So, one can say that power generation in Trek is quite potent, and the Enterprise-D is quite a lot more powerful than what some people claimed - and it wouldn't necessarily be 'unrealistic'. Internal storage volume matters only up to a point... even if the Ent-D stores 3,000 m^3 of anti-deuterium, the USAGE of this substance will be relatively low over time while still producing mind-boggling amounts of power once you factor in all the power-enhancement technologies in place (like subspace) and other things. Factor in recycling of energy, and you basically end up with massive amounts of power. And the Federation demonstrated it can STORE massive amounts of energy for later use (such as when replicating objects... if they recycle it, that energy can later be used to create something else - and it's not inconceivable that they would actually be converting energy into matter just as the dialogue explains).
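For what it's worth, the 64.4 megaton and roughly 690 gigaton figures in that quote can be reproduced with plain E = mc^2 arithmetic. A small PHP sketch, assuming complete annihilation of equal masses of matter and antimatter and the usual 4.184 x 10^15 J per megaton of TNT:

```php
<?php
// Reproduce the 64.4 Mt and ~690 Gt figures from the quote via E = m*c^2.
$c     = 2.998e8;   // m/s
$mtTnt = 4.184e15;  // joules per megaton of TNT

// Torpedo warhead: 1.5 kg antimatter + 1.5 kg matter annihilate completely.
$warheadMass = 1.5 + 1.5;                  // kg
$warheadJ    = $warheadMass * $c ** 2;     // joules
printf("Warhead: %.1f Mt\n", $warheadJ / $mtTnt);         // ~64.4 Mt

// Pod rupture: 100 m^3 of liquid antideuterium at 160 kg/m^3, plus equal matter.
$podMass = 2 * 100 * 160;                  // kg
$podJ    = $podMass * $c ** 2;             // joules
printf("Pod rupture: %.0f Gt\n", $podJ / $mtTnt / 1000);  // ~690 Gt
```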
This is not specifically an answer to the question; it was going to be a comment on N\_soong's wonderful answer but ended up being too long and halfway to an answer itself.

Communications equipment is not something you can just throw more power at. If an antenna is not tuned to the power and frequency of the broadcast, you will have some major issues due to what is known as reflected power. When the antenna is not tuned to the transmitter, not all of the power goes out of the antenna; any power that does not go out comes back at the transmitter. In the electronics world this is characterised by SWR, the standing wave ratio, which relates forward power (what makes it out of the antenna) to reflected power. At lower power levels you can get away with an antenna that is not exactly tuned to the rest of the system, but as the power goes up you have to be much more precise about what a specific antenna does, because the transmitter can only take so much power coming back before it gets fried.

10% of 1 megawatt is 100 kilowatts. Star Trek equipment could probably handle that, although that is more than most FM radio stations put out in total. However, 10% of 1 terawatt is 100 gigawatts. That is an astounding amount of energy to be feeding back into the system.
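To put numbers on that, here is a small PHP sketch of how much power comes back for a given SWR, using the standard relations |Γ| = (SWR - 1)/(SWR + 1) and reflected fraction = |Γ|^2; the 1 MW and 1 TW transmitter powers are just the figures from the paragraph above.

```php
<?php
// Fraction of power reflected back at the transmitter for a given SWR,
// using |gamma| = (SWR - 1) / (SWR + 1) and reflected fraction = |gamma|^2.
function reflectedFraction(float $swr): float
{
    $gamma = ($swr - 1.0) / ($swr + 1.0);
    return $gamma * $gamma;
}

foreach ([1.5, 2.0, 3.0] as $swr) {
    $frac = reflectedFraction($swr);
    printf("SWR %.1f:1 -> %4.1f%% reflected, i.e. %.1e W of a 1 MW signal, %.1e W of a 1 TW signal\n",
           $swr, 100 * $frac, $frac * 1e6, $frac * 1e12);
}
// An SWR of about 2:1 already reflects ~11% of the power, in line with the
// "10% of a terawatt" example above.
```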
Here's another approach to this question:

From [How long can a Galaxy class starship last before it needs servicing?](https://scifi.stackexchange.com/questions/53181/how-long-can-a-galaxy-class-starship-last-before-it-needs-servicing?rq=1), the Enterprise-D can carry 3,000 m^3 of anti-deuterium, which is enough to keep the ship running for three years of normal operation (source: Rick Sternbach and Michael Okuda's *Star Trek TNG Technical Manual*).

Based on data from the [Brookhaven National Laboratory](https://www.bnl.gov/magnets/staff/gupta/cryogenic-data-handbook/Section4.pdf), I'll estimate the maximum density of deuterium (in liquid or solid form) at *~ 0.2 g/cm^3 = 200 kg/m^3* (since it would be a liquid or solid, even vastly increasing the pressure wouldn't change the density much). Since anti-deuterium should have the same density:

*200 kg/m^3 x 3,000 m^3 = 600,000 kg anti-deuterium*

Now let's assume the Enterprise-D's engines could convert that anti-deuterium to energy with 90% efficiency (the manual specifies a minimum efficiency of 88% up to warp 7.0), by combining it with an equal amount of normal matter. Using *E = mc^2*, that would give us a total of:

*1,200,000 kg x (3.0 x 10^8 m/s)^2 x 0.9 = 1.0 x 10^23 kg m^2/s^2 = 1.0 x 10^23 J of energy* [J = joules]

And that total energy output, sustained over a 3-year period, would give us an average power output (for propulsion, which should be the main power consumer; the total will be more, since they'll also be running the stereo and A/C) of:

*1.0 x 10^23 J / (94,608,000 s) = 1.0 x 10^15 J/s = 1.0 x 10^15 watts = 1000 terawatts*

[There are 94,608,000 seconds in 3 years.]

[By comparison, total current worldwide energy generation (all sources -- coal, gas, oil, nuclear, hydroelectric, wind, solar, geothermal, etc.) is about 15 terawatts = 2 kilowatts/person.]

We can compare this figure to one that can be estimated from the power usage chart for the engines (Fig. 5.1.1, p. 55), and the accompanying explanatory text, in the same *Star Trek TNG Technical Manual*. On p. 57, Sternbach and Okuda say the Enterprise is able to cruise for an unlimited amount of time (until its fuel is depleted) at warp 6, so let's assume that's our average cruising speed. Now, according to Fig. 5.1.1, the power usage (for propulsion) at warp 6 is 3 x 10^6 MJ/cochrane. Of course, those are the wrong units; since it's power, it should be MW/cochrane (MW = megawatts), so let's make that correction. They further say that a warp 6 field bubble has a field strength of 392 cochranes. Thus the power required for propulsion at warp 6 is:

*3 x 10^6 MW/cochrane x 10^6 W/MW x 392 cochranes = 1.2 x 10^15 watts = 1200 terawatts*

This is nearly the same as the first value we calculated! [This is probably serendipitous :).] Of course, as mentioned above, there are other power consumers besides propulsion, but I'm assuming that's the big one, at least for sustained operation.

We can also use the graph and figures in the technical manual to estimate a maximum power output. At its [maximum theoretical speed of warp 9.8](https://en.wikipedia.org/wiki/USS_Enterprise_(NCC-1701-D)), we have:

*8 x 10^9 MW/cochrane x 10^6 W/MW x 2 x 10^3 cochranes = 1.6 x 10^19 watts = 16 million terawatts = 16 exawatts*

This is very close to the 12.75 million terawatt (= 13 exawatt) figure quoted by Data (though I don't know how fast the ship was travelling at the time).

At the same time, the 13 and 16 exawatt figures seem a little silly to me, even for 24th-century technology, since they're roughly 75 to 90 times the power the Earth receives from the Sun (174 petawatts)! Furthermore, at a 90% conversion efficiency, the engines would need to dissipate over 1 exawatt of heat, i.e., nearly 10 times the power the Earth receives from the Sun! [Additionally, according to the tech manual, conversion efficiency tends to decrease at high warp speeds.] Though I suppose they could deal with this by saying the heat is dissipated into subspace...

Interestingly, the Wikipedia article referenced above says the Enterprise-D can maintain emergency warp, 9.6, for 12 hours. Using the same sort of estimates given above, that would require 11 exawatts of power. However, at that power output, the ship would use up its total fuel capacity in 3 hours. So clearly there's not perfect consistency among these different specifications.

Finally:

> A 1PW laser accelerator is a thing. Those on the enterprise must —by virtue of needing to go much further and charge much faster— need more power.

It's important not to conflate peak power output capabilities (of things like lasers) with sustained power output. Today we are capable of building a pair of lasers with a combined peak power output of [20 petawatts = 20,000 terawatts](https://www.laserfocusworld.com/articles/2017/11/the-extreme-light-infrastructure-takes-off.html). But this device will put out that power for only 150 femtoseconds = 1.5 x 10^-13 s, thus delivering a total energy of 3000 joules. It can do one shot per minute, so, as impressive as its peak power output is, its sustained power output is only:

*3000 J/min x 1 min/(60 s) = 50 J/s = 50 W*

And remember that when we are talking about the power output of the Enterprise-D's warp engines, we're referring to sustained power output. [Notably, the peak power output of these lasers is >1000x the current 15 terawatt sustained power output of human civilization!]
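Here is the same estimate as a small PHP script, so the arithmetic can be tweaked; every input is an assumption taken from the answer above (the Technical Manual figures plus the assumed anti-deuterium density and conversion efficiency), not a canon constant.

```php
<?php
// PHP version of the estimate above. All inputs are the answer's assumptions.
$volume     = 3000;      // m^3 of anti-deuterium carried
$density    = 200;       // kg/m^3, assumed liquid/solid deuterium density
$efficiency = 0.9;       // matter/antimatter -> usable energy
$c          = 3.0e8;     // m/s
$threeYears = 94608000;  // seconds

$fuelMass  = $volume * $density;            // kg of anti-deuterium
$totalMass = 2 * $fuelMass;                 // plus an equal mass of matter
$totalJ    = $totalMass * $c ** 2 * $efficiency;
printf("Total energy over 3 years: %.1e J\n", $totalJ);
printf("Average power:             %.1e W (~%.0f TW)\n",
       $totalJ / $threeYears, $totalJ / $threeYears / 1e12);

// Cross-check against the warp power chart, using the figures as read above.
$warp6  = 3.0e6 * 1e6 * 392;   // MW/cochrane -> W, times 392 cochranes
$warp98 = 8.0e9 * 1e6 * 2e3;   // MW/cochrane -> W, times ~2000 cochranes
printf("Warp 6 propulsion:   %.1e W (~%.0f TW)\n", $warp6, $warp6 / 1e12);
printf("Warp 9.8 propulsion: %.1e W (~%.0f million TW)\n", $warp98, $warp98 / 1e18);
```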
17,812
I know that when plucked, as in pizzicato, the violin produces a muted sound. However, I was surprised to find out when I checked with my tuner that if I tune my violin by plucking, it seems somewhat flat when I double-check it by bowing! So there's this *slight* discrepancy between the two. Why is that? Is the bowed version correct (which I'm assuming)? If so, then why? Also, is it true for any instrument when you play it muted?
2014/05/27
[ "https://music.stackexchange.com/questions/17812", "https://music.stackexchange.com", "https://music.stackexchange.com/users/9943/" ]
Many things can enter in. Bow pressure can force a string out of tune. Try this: tune the open string bowed, then play the string with excessively heavy bow pressure. You'll go out of tune. Depending on the quality of the instrument, the bridge and soundpost setup, and the phases of Jupiter's moons, you may find that a perfectly tuned (bowed) open string will decay slightly out of tune after you remove the bow. The behavior of a freely oscillating string differs from that of a string driven with a bow. From a physics standpoint, the resonant frequency of a string under tension, for real-world strings (not those infamous massless ones :-) ), can change with amplitude. Add to that the fact that a plucked string resonates freely, while a bowed string is being caught and released by the bow hairs at a very high rate, and the math gets well-nigh impossible. I fear I don't have any info as to what the magnitudes of these various effects are. Perhaps someone can chime in with some references.
The reason the bow produces sound is that sliding friction of the bow on the string is lower than static friction. What happens when bowing is that the bow initially starts out grabbing the string and stretching it until the force of the string exceeds the static friction of the bow. At that point, the string will slip and vibrate for one cycle, during which the bow will be applying force in its direction of motion. After almost a complete cycle, the velocity of the string will be close enough to that of the bow that the bow will again grab the string, thus increasing the amount of force that can be imparted to it, but shortly after that the string will be released and the cycle will repeat. What will happen essentially is that the bow will grab the string for part of the cycle when it would have been slowing down and make it move slightly faster there than it would have done otherwise. This will have the effect of reducing the time per oscillation cycle, thus raising the pitch.
When plucking the string, it vibrates against the resistance of the air. When bowing the string, it vibrates in contact with the intentionally sticky bow. While the bow does a good job continually supplying energy to the string, the "free" movement of the string would be faster than when it swings and sticks.
You notice it more on a guitar: when you pluck the string, the pitch will slowly decline as it fades - not enough for you to notice if you had not been told. A violin bow will keep the pitch steady as the bow activates the string.
32,698
Last year, I built three raised beds and they were quite successful. But at the end of the season, when I cleaned them out, I noticed small, thin roots growing throughout, which I assume are coming from my neighbor's fir trees on the other side of my fence. This week, I dug out all of the soil from one of the boxes and placed a barrier between the ground and the box (I realize this might only be a fix for a season or two). I want to shovel the soil back in, but my question is: do I have to remove all the small roots? I am removing as many by hand as possible, but some are so small and singular, I'm not sure what to do. Is it okay to leave them in there?
2017/04/18
[ "https://gardening.stackexchange.com/questions/32698", "https://gardening.stackexchange.com", "https://gardening.stackexchange.com/users/17235/" ]
If you mean loose root pieces in the soil you want to put back in the beds, don't worry about small bits, just remove the larger more obvious ones, especially any in clumps. The small, broken root pieces are not going to grow, assuming they're not from some pernicious weed - from your description, they do sound like the fine roots put out by trees or large shrubs.
Are you sure they're not from your vegetable plants? If they were from a fir tree then I think that would be obvious - they would be connected to the tree by some more substantial roots. I don't know what pernicious weeds you have in Oregon, but personally I'd remove on sight anything thistle- or bindweed-like (white, fairly uniformly 4mm thick and cord-like), dandelion, dock and cinquefoil (orange tapering taproot) or buttercup (tufts of relatively short white roots).
4,690,854
Sorry, I've forgotten an important word here. What is the most *forgotten word* way to perform MySQL Queries using PHP? I read *somewhere*, that instead of using the old mysql\_connect/mysql\_query() statements, we should be using something else now! And the person who wrote that made it sound like we all should have known this by now. I'm no expert on this stuff, but I really can't find anything about this. I just remembered the word: *efficient*. Any help at all is much appreciated. Any links/tuts/articles/code examples are very welcome. :) Thank you!
2011/01/14
[ "https://Stackoverflow.com/questions/4690854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Improved MySQL (mysqli) or PDO?
I would highly recommend using [PDO](http://php.net/pdo) over `mysql_*`. There are plenty of tutorials out there, e.g. [this one](http://www.pixel2life.com/publish/tutorials/1378/an_introduction_to_pdo/).
4,690,854
Sorry, I've forgotten an important word here. What is the most *forgotten word* way to perform MySQL Queries using PHP? I read *somewhere*, that instead of using the old mysql\_connect/mysql\_query() statements, we should be using something else now! And the person who wrote that made it sound like we all should have known this by now. I'm no expert on this stuff, but I really can't find anything about this. I just remembered the word: *efficient*. Any help at all is much appreciated. Any links/tuts/articles/code examples are very welcome. :) Thank you!
2011/01/14
[ "https://Stackoverflow.com/questions/4690854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
[mysqli](http://www.php.net/manual/en/book.mysqli.php) and [PDO](http://www.php.net/manual/en/book.pdo.php) are recommended nowadays, mainly because they support parameterized queries which, if used properly, eliminate the risk of SQL injection.
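For readers who want to see what parameterized queries actually buy you, here is a minimal sketch. It is written in Python with the built-in sqlite3 module purely to keep the example self-contained and runnable; PDO and mysqli in PHP expose the same placeholder-plus-bound-values mechanism. The table and values below are made up for illustration.

```python
import sqlite3

# Throwaway in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# Hostile input: concatenated into the SQL string this could rewrite the query,
# but as a bound parameter it is only ever treated as a literal value.
user_supplied = "alice'; DROP TABLE users; --"

rows = conn.execute(
    "SELECT id, name FROM users WHERE name = ?",  # placeholder, no string concatenation
    (user_supplied,),
).fetchall()

print(rows)                                                    # [] -- the injection attempt matches nothing
print(conn.execute("SELECT COUNT(*) FROM users").fetchone())   # (1,) -- table still intact
```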
I would highly recommend using [PDO](http://php.net/pdo) over `mysql_*`. There are plenty of tutorials out there, e.g. [this one](http://www.pixel2life.com/publish/tutorials/1378/an_introduction_to_pdo/).
4,690,854
Sorry, I've forgotten an important word here. What is the most *forgotten word* way to perform MySQL Queries using PHP? I read *somewhere*, that instead of using the old mysql\_connect/mysql\_query() statements, we should be using something else now! And the person who wrote that made it sound like we all should have known this by now. I'm no expert on this stuff, but I really can't find anything about this. I just remembered the word: *efficient*. Any help at all is much appreciated. Any links/tuts/articles/code examples are very welcome. :) Thank you!
2011/01/14
[ "https://Stackoverflow.com/questions/4690854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Nowadays most applications are developed using an ORM like Doctrine or Propel. Internally, most of them use PDO.
I would highly recommend using [PDO](http://php.net/pdo) over `mysql_*`. There are plenty of tutorials out there, e.g. [this one](http://www.pixel2life.com/publish/tutorials/1378/an_introduction_to_pdo/).
21,062
After I read [this](https://worldbuilding.stackexchange.com/a/2489/11092) answer it got me thinking. Let's consider a *dead blow mace* like [this](http://img03.deviantart.net/5c26/i/2007/150/a/4/midevil_flail_by_the_pwnisher.jpg) one, but with a smooth ball instead of a spiked one and a hollow head filled with sand-like material. * How effective would it be in combat? * What kind of effects might it have against full metal plating compared to a normal ball and chain?
2015/07/22
[ "https://worldbuilding.stackexchange.com/questions/21062", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/11092/" ]
**You won't gain a benefit when hitting soft targets.** The advantage of the dead blow hammer is the distribution of the energy over a longer period of time. This significantly helps prevent rebound when striking a *rigid* surface. Essentially the dead blow hammer turns your hammer strike into a really solid shove. This is not what you want when fighting someone in platemail. You want to dent the plate and potentially crease it. The plate is already trying to smooth out the impact from blows and a dead blow hammer or mace would only aid the plate in this. There is a reason [dead blow hammers are used in body shops](https://en.wikipedia.org/wiki/Dead_blow_hammer#Applications) for chassis work, they don't damage the sheet metal. However, if you wanted to make a sparring weapon that wouldn't damage the plate too much, but would still simulate blows, a dead blow hammer or mace would be an excellent start.
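A back-of-the-envelope way to see the trade-off described above is the impulse-momentum relation; the numbers below are purely illustrative, not taken from any real weapon.

```latex
% The head's change of momentum is fixed by its mass and swing speed, so the
% average force scales inversely with how long the impact lasts:
\[
  F_{\mathrm{avg}} = \frac{\Delta p}{\Delta t} = \frac{m\,v}{\Delta t}
\]
% Illustrative numbers: a 2 kg head arriving at 10 m/s and stopping in 1 ms
% gives F_avg of about 20 000 N, while a dead-blow fill that stretches the
% impact to 5 ms gives about 4 000 N. The same momentum is delivered, but the
% peak force is far lower, which is exactly what armour plate "wants".
```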
Against enemies with armor you'd imagine it would be fairly ineffective unless you hit them in the head. Even then, the weight it would need to stun them enough for you to follow up with another weapon would make the mace fairly unwieldy. I would vote to keep the spikes.
21,062
After I read [this](https://worldbuilding.stackexchange.com/a/2489/11092) answer it got me thinking. Let's consider a *dead blow mace* like [this](http://img03.deviantart.net/5c26/i/2007/150/a/4/midevil_flail_by_the_pwnisher.jpg) one, but with a smooth ball instead of a spiked one and a hollow head filled with sand-like material. * How effective would it be in combat? * What kind of effects might it have against full metal plating compared to a normal ball and chain?
2015/07/22
[ "https://worldbuilding.stackexchange.com/questions/21062", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/11092/" ]
Edit: I̶ ̶d̶o̶n̶'̶t̶ ̶t̶h̶i̶n̶k̶ ̶i̶t̶ ̶s̶o̶ ̶m̶u̶c̶h̶ ̶m̶a̶t̶t̶e̶r̶s̶ ̶w̶h̶a̶t̶'̶s̶ ̶i̶n̶ ̶t̶h̶e̶ ̶b̶a̶l̶l̶ ̶a̶s̶ ̶h̶o̶w̶ ̶h̶e̶a̶v̶y̶ ̶i̶t̶ ̶i̶s̶.̶ ̶S̶a̶n̶d̶ ̶i̶s̶ ̶l̶i̶g̶h̶t̶e̶r̶ ̶t̶h̶a̶n̶ ̶m̶e̶t̶a̶l̶,̶ ̶b̶u̶t̶ ̶y̶o̶u̶ ̶c̶o̶u̶l̶d̶ ̶c̶o̶m̶p̶e̶n̶s̶a̶t̶e̶ ̶w̶i̶t̶h̶ ̶a̶ ̶l̶a̶r̶g̶e̶r̶ ̶b̶a̶l̶l̶.̶ A proper dead-blow head spreads the force over time and reduces its peak force, which in general would tend to reduce damage effects. As for not having spikes, there are historical ball & chain weapons like that too. I don't think it's a huge difference in effectiveness, but of course the spiked version will cause shallow puncture wounds if they get through whatever the target is wearing, it looks a little nastier, and it focuses force on points. I'm not sure, but I think in the case where a hit doesn't penetrate armor, the plain ball might have more effect at the same weight, as I think it would more directly concentrate the impact on one point, instead of two or more spikes splitting the energy.
Against enemies with armor you'd imagine it would be fairly ineffective unless you hit them in the head. Even then, the weight it would need to stun them enough for you to follow up with another weapon would make the mace fairly unwieldy. I would vote to keep the spikes.
21,062
After I read [this](https://worldbuilding.stackexchange.com/a/2489/11092) answer it got me thinking. Let's consider a *dead blow mace* like [this](http://img03.deviantart.net/5c26/i/2007/150/a/4/midevil_flail_by_the_pwnisher.jpg) one, but with a smooth ball instead of a spiked one and a hollow head filled with sand-like material. * How effective would it be in combat? * What kind of effects might it have against full metal plating compared to a normal ball and chain?
2015/07/22
[ "https://worldbuilding.stackexchange.com/questions/21062", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/11092/" ]
In general, as armour got heavier and more effective, slashing swords went out of fashion as it was too difficult to cut through armour. Polearms with greater leverage, "smashing" weapons like hammers and stabbing swords like rapiers evolved to negate the protective attributes of armour. The problem with a "dead blow" weapon is the force is distributed in both time and space, and diffused over a wide enough area that you will not be able to deliver a blow that would take out the opponent. The best you could hope for is to knock them down (and maybe follow up with a dagger), or if you are lucky, a blow to the head might stun them long enough to subdue them, or give them a concussion and put them out of the fight. Of course a knight or man at arms is wearing a great helm, balanced on a padded ring (a primitive suspension system, much like modern helmets have padding or straps to keep the helmet proper away from the head), a layer of chain mail (the coif) and possibly a leather skull cap as well, so it is easy to see why war hammers or halberds were favoured. Even a mace was usually made from multiple triangular "blades" around a central shaft with the points out to concentrate the force of the blow. The best use of such a weapon as a dead blow flail would be if capturing the lower ranked levies is somehow important. A spiked flail such as the one pictured would cause lethal damage to the peasant levy called up into battle (generally unarmoured and trying to fight you with a pitchfork or billhook), so a dead blow flail would knock them flat with maybe broken bones or concussions, allowing you to scoop them up as captives. A team of people would be involved, one armoured person to wade into the mob and start knocking them down with the dead blow flail, while the rest of the team rushed in and grabbed the captives.
Against enemies with armor you'd imagine it would be fairly ineffective unless you hit them in the head. Even then, the weight it would need to stun them enough for you to follow up with another weapon would make the mace fairly unwieldy. I would vote to keep the spikes.
21,062
After I read [this](https://worldbuilding.stackexchange.com/a/2489/11092) answer it got me thinking. Let's consider a *dead blow mace* like [this](http://img03.deviantart.net/5c26/i/2007/150/a/4/midevil_flail_by_the_pwnisher.jpg) one, but with a smooth ball instead of a spiked one and a hollow head filled with sand-like material. * How effective would it be in combat? * What kind of effects might it have against full metal plating compared to a normal ball and chain?
2015/07/22
[ "https://worldbuilding.stackexchange.com/questions/21062", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/11092/" ]
**You won't gain a benefit when hitting soft targets.** The advantage of the dead blow hammer is the distribution of the energy over a longer period of time. This significantly helps prevent rebound when striking a *rigid* surface. Essentially the dead blow hammer turns your hammer strike into a really solid shove. This is not what you want when fighting someone in platemail. You want to dent the plate and potentially crease it. The plate is already trying to smooth out the impact from blows and a dead blow hammer or mace would only aid the plate in this. There is a reason [dead blow hammers are used in body shops](https://en.wikipedia.org/wiki/Dead_blow_hammer#Applications) for chassis work, they don't damage the sheet metal. However, if you wanted to make a sparring weapon that wouldn't damage the plate too much, but would still simulate blows, a dead blow hammer or mace would be an excellent start.
What's the context for this? How heavy is it? If I had one on top of a wall it would probably be great against people trying to climb up ladders. Against less nimble foes it could also be effective, since it would be hard for them to get out of the way. However, against someone or something very agile, if you missed you would run the risk of being thrown off balance by the weight. The recovery time for something like that is also probably quite slow, much longer than quick slashes of a short sword or some other more agile weapon. So if a short-sword-wielding, agile opponent was able to easily dodge your swipe, a stabbing is probably in your near future. As far as damage against armor, that depends heavily on the weight of the flaily bit and the composition of the armor. If there is any kind of significant padding between the metal and the body of the wearer, the effectiveness could be greatly reduced. If it's just bare metal and fits relatively close to the body, it could certainly break bones, provided the flaily part was heavy enough.
21,062
After I read [this](https://worldbuilding.stackexchange.com/a/2489/11092) answer it got me thinking. Let's consider a *dead blow mace* like [this](http://img03.deviantart.net/5c26/i/2007/150/a/4/midevil_flail_by_the_pwnisher.jpg) one, but with a smooth ball instead of a spiked one and a hollow head filled with sand-like material. * How effective would it be in combat? * What kind of effects might it have against full metal plating compared to a normal ball and chain?
2015/07/22
[ "https://worldbuilding.stackexchange.com/questions/21062", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/11092/" ]
Edit: I̶ ̶d̶o̶n̶'̶t̶ ̶t̶h̶i̶n̶k̶ ̶i̶t̶ ̶s̶o̶ ̶m̶u̶c̶h̶ ̶m̶a̶t̶t̶e̶r̶s̶ ̶w̶h̶a̶t̶'̶s̶ ̶i̶n̶ ̶t̶h̶e̶ ̶b̶a̶l̶l̶ ̶a̶s̶ ̶h̶o̶w̶ ̶h̶e̶a̶v̶y̶ ̶i̶t̶ ̶i̶s̶.̶ ̶S̶a̶n̶d̶ ̶i̶s̶ ̶l̶i̶g̶h̶t̶e̶r̶ ̶t̶h̶a̶n̶ ̶m̶e̶t̶a̶l̶,̶ ̶b̶u̶t̶ ̶y̶o̶u̶ ̶c̶o̶u̶l̶d̶ ̶c̶o̶m̶p̶e̶n̶s̶a̶t̶e̶ ̶w̶i̶t̶h̶ ̶a̶ ̶l̶a̶r̶g̶e̶r̶ ̶b̶a̶l̶l̶.̶ A proper dead-blow head spreads the force over time and reduces its peak force, which in general would tend to reduce damage effects. As for not having spikes, there are historical ball & chain weapons like that too. I don't think it's a huge difference in effectiveness, but of course the spiked version will cause shallow puncture wounds if they get through whatever the target is wearing, it looks a little nastier, and it focuses force on points. I'm not sure, but I think in the case where a hit doesn't penetrate armor, the plain ball might have more effect at the same weight, as I think it would more directly concentrate the impact on one point, instead of two or more spikes splitting the energy.
What's the context for this? How heavy is it? If I had one on top of a wall it would probably be great against people trying to climb up ladders. Against less nimble foes it could also be effective, since it would be hard for them to get out of the way. However, against someone or something very agile, if you missed you would run the risk of being thrown off balance by the weight. The recovery time for something like that is also probably quite slow, much longer than quick slashes of a short sword or some other more agile weapon. So if a short-sword-wielding, agile opponent was able to easily dodge your swipe, a stabbing is probably in your near future. As far as damage against armor, that depends heavily on the weight of the flaily bit and the composition of the armor. If there is any kind of significant padding between the metal and the body of the wearer, the effectiveness could be greatly reduced. If it's just bare metal and fits relatively close to the body, it could certainly break bones, provided the flaily part was heavy enough.
21,062
After I read [this](https://worldbuilding.stackexchange.com/a/2489/11092) answer it got me thinking. Let's consider a *dead blow mace* like [this](http://img03.deviantart.net/5c26/i/2007/150/a/4/midevil_flail_by_the_pwnisher.jpg) one, but with a smooth ball instead of a spiked one and a hollow head filled with sand-like material. * How effective would it be in combat? * What kind of effects might it have against full metal plating compared to a normal ball and chain?
2015/07/22
[ "https://worldbuilding.stackexchange.com/questions/21062", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/11092/" ]
In general, as armour got heavier and more effective, slashing swords went out of fashion as it was too difficult to cut through armour. Polearms with greater leverage, "smashing" weapons like hammers and stabbing swords like rapiers evolved to negate the protective attributes of armour. The problem with a "dead blow" weapon is the force is distributed in both time and space, and diffused over a wide enough area that you will not be able to deliver a blow that would take out the opponent. The best you could hope for is to knock them down (and maybe follow up with a dagger), or if you are lucky, a blow to the head might stun them long enough to subdue them, or give them a concussion and put them out of the fight. Of course a knight or man at arms is wearing a great helm, balanced on a padded ring (a primitive suspension system, much like modern helmets have padding or straps to keep the helmet proper away from the head), a layer of chain mail (the coif) and possibly a leather skull cap as well, so it is easy to see why war hammers or halberds were favoured. Even a mace was usually made from multiple triangular "blades" around a central shaft with the points out to concentrate the force of the blow. The best use of such a weapon as a dead blow flail would be if capturing the lower ranked levies is somehow important. A spiked flail such as the one pictured would cause lethal damage to the peasant levy called up into battle (generally unarmoured and trying to fight you with a pitchfork or billhook), so a dead blow flail would knock them flat with maybe broken bones or concussions, allowing you to scoop them up as captives. A team of people would be involved, one armoured person to wade into the mob and start knocking them down with the dead blow flail, while the rest of the team rushed in and grabbed the captives.
What's the context for this? How heavy is it? If I had one on top of a wall it would probably be great against people trying to climb up ladders. Against less nimble foes it could also be effective, since it would be hard for them to get out of the way. However, against someone or something very agile, if you missed you would run the risk of being thrown off balance by the weight. The recovery time for something like that is also probably quite slow, much longer than quick slashes of a short sword or some other more agile weapon. So if a short-sword-wielding, agile opponent was able to easily dodge your swipe, a stabbing is probably in your near future. As far as damage against armor, that depends heavily on the weight of the flaily bit and the composition of the armor. If there is any kind of significant padding between the metal and the body of the wearer, the effectiveness could be greatly reduced. If it's just bare metal and fits relatively close to the body, it could certainly break bones, provided the flaily part was heavy enough.
21,062
After I read [this](https://worldbuilding.stackexchange.com/a/2489/11092) answer it got me thinking. Let's consider a *dead blow mace* like [this](http://img03.deviantart.net/5c26/i/2007/150/a/4/midevil_flail_by_the_pwnisher.jpg) one, but with a smooth ball instead of a spiked one and a hollow head filled with sand-like material. * How effective would it be in combat? * What kind of effects might it have against full metal plating compared to a normal ball and chain?
2015/07/22
[ "https://worldbuilding.stackexchange.com/questions/21062", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/11092/" ]
**You won't gain a benefit when hitting soft targets.** The advantage of the dead blow hammer is the distribution of the energy over a longer period of time. This significantly helps prevent rebound when striking a *rigid* surface. Essentially the dead blow hammer turns your hammer strike into a really solid shove. This is not what you want when fighting someone in platemail. You want to dent the plate and potentially crease it. The plate is already trying to smooth out the impact from blows and a dead blow hammer or mace would only aid the plate in this. There is a reason [dead blow hammers are used in body shops](https://en.wikipedia.org/wiki/Dead_blow_hammer#Applications) for chassis work, they don't damage the sheet metal. However, if you wanted to make a sparring weapon that wouldn't damage the plate too much, but would still simulate blows, a dead blow hammer or mace would be an excellent start.
Edit: I̶ ̶d̶o̶n̶'̶t̶ ̶t̶h̶i̶n̶k̶ ̶i̶t̶ ̶s̶o̶ ̶m̶u̶c̶h̶ ̶m̶a̶t̶t̶e̶r̶s̶ ̶w̶h̶a̶t̶'̶s̶ ̶i̶n̶ ̶t̶h̶e̶ ̶b̶a̶l̶l̶ ̶a̶s̶ ̶h̶o̶w̶ ̶h̶e̶a̶v̶y̶ ̶i̶t̶ ̶i̶s̶.̶ ̶S̶a̶n̶d̶ ̶i̶s̶ ̶l̶i̶g̶h̶t̶e̶r̶ ̶t̶h̶a̶n̶ ̶m̶e̶t̶a̶l̶,̶ ̶b̶u̶t̶ ̶y̶o̶u̶ ̶c̶o̶u̶l̶d̶ ̶c̶o̶m̶p̶e̶n̶s̶a̶t̶e̶ ̶w̶i̶t̶h̶ ̶a̶ ̶l̶a̶r̶g̶e̶r̶ ̶b̶a̶l̶l̶.̶ A proper dead-blow head spreads the force over time and reduces its peak force, which in general would tend to reduce damage effects. As for not having spikes, there are historical ball & chain weapons like that too. I don't think it's a huge difference in effectiveness, but of course the spiked version will cause shallow puncture wounds if they get through whatever the target is wearing, it looks a little nastier, and it focuses force on points. I'm not sure, but I think in the case where a hit doesn't penetrate armor, the plain ball might have more effect at the same weight, as I think it would more directly concentrate the impact on one point, instead of two or more spikes splitting the energy.
21,062
After I read [this](https://worldbuilding.stackexchange.com/a/2489/11092) answer it got me thinking. Let's consider a *dead blow mace* like [this](http://img03.deviantart.net/5c26/i/2007/150/a/4/midevil_flail_by_the_pwnisher.jpg) one, but with a smooth ball instead of a spiked one and a hollow head filled with sand-like material. * How effective would it be in combat? * What kind of effects might it have against full metal plating compared to a normal ball and chain?
2015/07/22
[ "https://worldbuilding.stackexchange.com/questions/21062", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/11092/" ]
**You won't gain a benefit when hitting soft targets.** The advantage of the dead blow hammer is the distribution of the energy over a longer period of time. This significantly helps prevent rebound when striking a *rigid* surface. Essentially the dead blow hammer turns your hammer strike into a really solid shove. This is not what you want when fighting someone in platemail. You want to dent the plate and potentially crease it. The plate is already trying to smooth out the impact from blows and a dead blow hammer or mace would only aid the plate in this. There is a reason [dead blow hammers are used in body shops](https://en.wikipedia.org/wiki/Dead_blow_hammer#Applications) for chassis work, they don't damage the sheet metal. However, if you wanted to make a sparring weapon that wouldn't damage the plate too much, but would still simulate blows, a dead blow hammer or mace would be an excellent start.
In general, as armour got heavier and more effective, slashing swords went out of fashion as it was too difficult to cut through armour. Polearms with greater leverage, "smashing" weapons like hammers and stabbing swords like rapiers evolved to negate the protective attributes of armour. The problem with a "dead blow" weapon is the force is distributed in both time and space, and diffused over a wide enough area that you will not be able to deliver a blow that would take out the opponent. The best you could hope for is to knock them down (and maybe follow up with a dagger), or if you are lucky, a blow to the head might stun them long enough to subdue them, or give them a concussion and put them out of the fight. Of course a knight or man at arms is wearing a great helm, balanced on a padded ring (a primitive suspension system, much like modern helmets have padding or straps to keep the helmet proper away from the head), a layer of chain mail (the coif) and possibly a leather skull cap as well, so it is easy to see why war hammers or halberds were favoured. Even a mace was usually made from multiple triangular "blades" around a central shaft with the points out to concentrate the force of the blow. The best use of such a weapon as a dead blow flail would be if capturing the lower ranked levies is somehow important. A spiked flail such as the one pictured would cause lethal damage to the peasant levy called up into battle (generally unarmoured and trying to fight you with a pitchfork or billhook), so a dead blow flail would knock them flat with maybe broken bones or concussions, allowing you to scoop them up as captives. A team of people would be involved, one armoured person to wade into the mob and start knocking them down with the dead blow flail, while the rest of the team rushed in and grabbed the captives.
27,823,193
I'm new to VBA and would really appreciate some help. I want to filter a column that has comma separated values using multiple criteria. At the moment, if I put more than one word in a cell, my filtering options/criteria are all of the words in the cell rather than coming up as discrete words/criteria. Example: I have documented pets that each family has in a village. I want the filter to show separate criteria of *'dog'* and/or *'cat'* and/or *'horse'*, and not *'dog; cat; horse'* or *'dog; cat'*. If I wanted to know which families have a dog and I searched for *'dog'* in Excel, it would only show me the dogs in families who don't own any other pets, since families that own other pets would come under a filter category of, say, *'dog; cat'*. I also want to have the option of filtering more than one column in this way (with multiple criteria) so that I can search which pets are in the village, which hobbies each child has, and which professions are in the family. For example, I might want to search in the pet column A for all the cats and/or dogs, in the child hobby column B for all the children that play basketball and/or chess, and in professions column C for all the architects and/or chefs and/or newsreaders. I would want my spreadsheet only to display all the families (rows) that fit all of these criteria. Does anyone know how I can achieve this using VBA? Many thanks
2015/01/07
[ "https://Stackoverflow.com/questions/27823193", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4420493/" ]
[ZingChart](http://www.zingchart.com/docs/features/export/) exports to PDF from canvas, SVG, and VML. I'm on the team, so if you have any questions about implementation or other features, just reach out!
You might be able to convert the HTML to a CANVAS image (<http://html2canvas.hertzen.com/>), and then save via jsPDF (<https://github.com/MrRio/jsPDF>) What have *you* tried?
14,203
I would like to be able to not just centrally **monitor** but also **filter** any organizational data moving out our edge routers, **regardless** of the sender application and **regardless** of the protocol/port used by the sender application. For example, the sender application could be an ssh/sftp client, a browser (http/s), email client(??), or even a from-scratch handwritten TCP socket-based client/server program. If the outgoing data happens to be encrypted (say, as in case of https/ssl, ssh/sftp), I would still like to be able to decode it via an MITM-like pattern (as employed by a program like Squid) and reject or allow the data to pass through based on the decoded content. For example, if a large file was sent out by a user, I would like to be able to extract and assemble the file into a single logical unit (from the IP packets?) and then be able to run further checks on this assembled file to decide whether or not to allow it to pass through. Given a Linux-based environment, which (FOSS) tools and techniques should I employ to achieve this? I'm new to this area, and don't know how to proceed further. Hence the question.
2012/04/26
[ "https://security.stackexchange.com/questions/14203", "https://security.stackexchange.com", "https://security.stackexchange.com/users/9375/" ]
First you need to decide how you will identify this data - this is probably going to be the biggest issue, and it is not necessarily technical. * Do you aim to tag all documents and files with a rating? This works for organisations that do tag every single piece of data, as the gateway then just looks for the tag and acts accordingly. * Do you set a blacklist of data to block? For example, sequences of 16 digits can be blocked as they may be credit card numbers (a toy version of that kind of check is sketched below). You'll get false positives, but it can work in some environments. If you have that sorted, the gateway aspect is relatively simple - force all devices to connect through a proxy that acts as an SSL endpoint. As you stated, this is effectively a MITM. Many proxies, including Squid, will let you prevent or allow traffic based upon various criteria, but in the FOSS space you may be limited in the complexity level possible. Those two options above should work though. You may find that trying to run deep analysis of packet streams in real time may be beyond the reach of FOSS (have a look at next-gen firewalls such as Palo Alto for some options here).
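As an illustration of the blacklist idea, here is a toy Python check that flags 16-digit runs which also pass the Luhn checksum, which cuts down on false positives from arbitrary 16-digit numbers. It is only a sketch of the pattern-matching step, not a complete DLP rule (real card numbers vary in length and are often written with spaces or dashes).

```python
import re

def luhn_ok(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def looks_like_card_number(text: str) -> bool:
    """Flag any bare 16-digit run that also passes the Luhn check."""
    return any(luhn_ok(m) for m in re.findall(r"\b\d{16}\b", text))

print(looks_like_card_number("order ref 4111111111111111"))  # True  (well-known test PAN)
print(looks_like_card_number("serial 1234567890123456"))     # False (fails Luhn)
```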
[SSL Bump](http://wiki.squid-cache.org/Features/SslBump), to be integrated into a Squid proxy, can decrypt outgoing SSL sessions, subject to some conditions: * A specific CA certificate must be added to the "trust store" of clients. SSL Bump works by creating a fake certificate for the target server, and doing a [man-in-the-middle attack](http://en.wikipedia.org/wiki/Man-in-the-middle_attack). * The client browsers must be convinced to use your proxy. This can be enforced at the network level, depending on the level of control you have on the firewalls and routers. * This breaks certificate-based client authentication. Client certificates are rare on the Web, but some banks issue certificates to their clients. * The users will be able to see it. Most human users tend to react poorly to the discovery of such filtering of their supposedly "secure" connections. You'd better warn them proactively. Transparency is a great asset for employer/employee relations.
14,203
I would like to be able to not just centrally **monitor** but also **filter** any organizational data moving out our edge routers, **regardless** of the sender application and **regardless** of the protocol/port used by the sender application. For example, the sender application could be an ssh/sftp client, a browser (http/s), email client(??), or even a from-scratch handwritten TCP socket-based client/server program. If the outgoing data happens to be encrypted (say, as in case of https/ssl, ssh/sftp), I would still like to be able to decode it via an MITM-like pattern (as employed by a program like Squid) and reject or allow the data to pass through based on the decoded content. For example, if a large file was sent out by a user, I would like to be able to extract and assemble the file into a single logical unit (from the IP packets?) and then be able to run further checks on this assembled file to decide whether or not to allow it to pass through. Given a Linux-based environment, which (FOSS) tools and techniques should I employ to achieve this? I'm new to this area, and don't know how to proceed further. Hence the question.
2012/04/26
[ "https://security.stackexchange.com/questions/14203", "https://security.stackexchange.com", "https://security.stackexchange.com/users/9375/" ]
Sounds like you're looking for Data Loss Prevention (DLP) solutions. Although there are plenty of commercial tools, I recall Google, Snort, and a few other organizations offer FOSS DLP capabilities. Commercial DLP vendors like Symantec (Vontu) and Websense offer complete solutions. The last I checked, FOSS DLP solutions were still quite lacking in features/functionality when compared to commercial DLP solutions. HTH. I'm not aware of any FOSS solution that provides all the features you're looking for. Even commercial DLP solutions aren't able to peek into ssh sessions (https is much easier since you can just employ an L7 proxy architecture). The challenge is building meaningful rules. Signature-based network devices operate at L3/L4 and they don't rebuild packets into conversations. Even then, network-only data-in-motion detectors are limited to seeing plaintext traffic (with the exception of an inline L7 proxy for http/https/ftp) without implementing some sort of endpoint solution as well. Can you explain in a bit more detail what type of data you're trying to protect? Perhaps a feature-rich data-in-motion FOSS network-based DLP might not be available, but you may be able to apply protection to the data itself. Otherwise, your challenge becomes protecting information assets without an appropriate budget, which is almost saying the data isn't worth protecting in the first place.
[SSL Bump](http://wiki.squid-cache.org/Features/SslBump), to be integrated into a Squid proxy, can decrypt outgoing SSL sessions, subject to some conditions: * A specific CA certificate must be added to the "trust store" of clients. SSL Bump works by creating a fake certificate for the target server, and doing a [man-in-the-middle attack](http://en.wikipedia.org/wiki/Man-in-the-middle_attack). * The client browsers must be convinced to use your proxy. This can be enforced at the network level, depending on the level of control you have on the firewalls and routers. * This breaks certificate-based client authentication. Client certificates are rare on the Web, but some banks issue certificates to their clients. * The users will be able to see it. Most human users tend to react poorly to the discovery of such filtering of their supposedly "secure" connections. You'd better warn them proactively. Transparency is a great asset for employer/employee relations.
1,107,802
Usually when I am working with the computer's internals, I have to speak clearly and loudly so that whoever I'm working with can hear me, with my head pointing down. However, sometimes this causes me to spit, and I don't want saliva getting into my computer. We all know that saliva is composed of 99.5% water plus electrolytes, mucus, enzymes, and blood cells. I'm worried that if I accidentally spat on the motherboard, it could result in a short circuit, destroying the whole motherboard. What can I do to prevent this, or do I even need to be concerned about it?
2016/08/02
[ "https://superuser.com/questions/1107802", "https://superuser.com", "https://superuser.com/users/278985/" ]
As you yourself said, most of saliva is water. That should give you the information you need. This question boils down to how water damages electronics and what you can do about it if you get water on electronics. Water does two things: * If the water comes in contact with a piece of metal (a trace on the board, a pin, anything that's electrified) while it's powered on, the water dramatically changes the electrical characteristics of the circuit. Because water conducts electricity fairly well, the other risk is that water can effectively bridge the gap between two circuits that are ordinarily separate on one of the components, causing it to short out. This can cause components to be "fried", which means that due to the altered resistance properties of the new circuit of "circuit #1 plus water plus circuit #2", a component received far more current than it was designed to handle, which caused it to fail. * If the water comes in contact with circuitry while it's powered *off* (not in a "warm" state but completely **off**), there's the possibility that it will cause **corrosion**. Now, according to [this answer](https://electronics.stackexchange.com/a/14205/17323) on EE.SE, it's not so much the water itself that causes corrosion, but the impurities in it (other chemicals that are more corrosive than water). Corrosion is effectively adding oxygen (usually O2 or O3) to molecules that don't need to have an oxygen bond on them; the typical example of iron is that "rust" -- corroded iron -- is iron oxide. Anything that's a strong corroding agent that's mixed with the water will do this, moreso than the water itself. So, while it's safe to get perfectly pure water on powered-down electronics, it's not safe to use tap water, or even purified drinking water, because there are still a lot of mineral salts left in the water that are corrosive to the metals used in electronics. If you're worried about accidentally getting water -- er, saliva, but same thing -- on your electronics, then obviously using a mask is a good solution. And if you *do* get water on your equipment, the rice method doesn't work. [Here](https://www.youtube.com/watch?v=yPeITOz2_YM) is a video explaining why; **Warning: strong language in video**.
In the strictest sense, yes it can. If you were to land a glob perfectly between two contacts, you could cause a short. If it were to stay in there, it could corrode the contacts. If the machine is turned off, then you won't cause any shorting unless the saliva is still there when the device is turned on. I work with laptops myself, repairing them, and I can't say I've damaged one yet (I'm quite the sneezer!). I would say not to worry. An anti-static wristband is a good idea; a mask I would hold off on unless you're drenching the computer with some severe Looney Tunes style duck globbing.
267,036
I have phrased another question similarly, about how physicists knew that two charges exist, positive and negative. The purpose of the question is not necessarily to educate me historically. It's just that I wish to know about classical subjects without making the atomic assumption. I know that electrons (elementary negatively charged particles) move in contrast with protons (elementary positively charged particles) because electrons have small mass and orbit the nucleus while protons are stuck in the nucleus of atoms. Roughly at least! Charging by induction works because of the transfer of electrons, i.e. negative charge, between a conductive sphere and the ground. But it could very well be explained by the transfer of either positive or negative charge. Yet only the latter happens. Was there any (thought) experiment to show that only negative charges happen to move/transfer?
2016/07/08
[ "https://physics.stackexchange.com/questions/267036", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/52261/" ]
Physicists *don't* know that only negatively-charged particles move. We can create ion currents on demand in many environments. We *do* know that the current flowing *in a metal wire* is negatively charged particles in motion. As for how to determine that, you do a [Hall effect](https://en.wikipedia.org/wiki/Hall_effect) measurement. The measurement works by subjecting a current in a relatively wide bar to a magnetic field perpendicular to both the current and the width of the bar and then measuring the potential difference across the width. In this era of turn-key precision voltage measurements, it is easy enough to do in a high school laboratory if the students can follow the underlying arguments.
Initially, when glass rods were first being systematically rubbed, the "charging" phenomenon was observed. The electric charges were hypothesized to be positive and negative, and the pioneer (Franklin? forgot the name...) pretty much arbitrarily decided to call one positive and the other negative. Further experiments helped him deduce that two like charges repel and opposite charges attract. At that point, nothing is said about which charge is the moving one. The original assumption was that the positive charges moved, and the mathematical formalism reflected this. Later on came the experiment that determines which charge is the carrier of current, positive or negative. The experiment is called the Hall effect. Essentially, you apply a magnetic field directed such that it is perpendicular to a current flowing in a conductor. The rules of electromagnetism are such that, in this situation, the negative charges pile up on one side of the conductor and the positives on the other. By arranging your setup, you can say: if the left side is negative, then that must be the moving charge. (Or vice versa.) This of course doesn't yet finish the picture. Atomic discoveries established that the protons are stuck in the nucleus, but the Hall effect can clearly demonstrate that there are positive moving charges. What gives? The explanation is that of an imbalance of charges, where the positive charge carriers are holes, absences of electrons in a material that shuffle around. It's still the electrons that are moving, but the net motion is that of an electron hole when a material is of the positive-charge-carrying type.
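For reference, the quantity both answers rely on is the Hall voltage. For a flat conducting strip of thickness t carrying current I in a perpendicular magnetic field B, with carrier density n and carrier charge q:

```latex
\[
  V_H = \frac{I\,B}{n\,q\,t}
\]
% The sign of V_H, i.e. which edge of the strip ends up at the higher potential,
% follows the sign of q. That sign flip is what lets the measurement distinguish
% electron conduction from hole (effective positive-carrier) conduction.
```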
82,516
I perfectly understand yaw, pitch and roll. I'm wondering what a few special terms are for other unusual movements, such as those executed by a helicopter, drone or even perhaps something like a Harrier jump jet. * If "*climb*" refers to a straight up vertical motion along the z-axis, what is the name for the opposite? One might think *fall*, but I believe (keep me honest) in aviation, a "fall" is akin to a crash-landing type scenario, where the vehicle is out of control. Another word might be *descent*, but to me, that implies a downward change in **pitch** so that the vehicle gradually starts to lower its altitude over time and distance; instead I just want the opposite of a helicopter "climb" * I understand *yaw* means a change in left/right direction of the vehicle, but what motions describe a 90 degree, perpendicular change in motion to the left or right? Meaning, vehicle doesn't yaw, pitch or roll, it just "slides" 90 degrees to the left or right? Any ideas as to what the correct terms are here?
2020/11/28
[ "https://aviation.stackexchange.com/questions/82516", "https://aviation.stackexchange.com", "https://aviation.stackexchange.com/users/53432/" ]
The opposite of climb is "descend" (that's an easy one). The second one can be "translate" or possibly "slew" but I think the most appropriate word is "slip" as in sideslip. Translate is a controlled movement from one place to another, and slew is an uncontrolled one according to Oxford, but when you work the "slew" switch for a directional gyro system, it's a controlled action, so there you go. However, those terms apply to a stationary object that starts to move sideways, "slip" describes lateral non-turning movement of an airplane sideways when it's already moving forward, so this is probably the best definition.
"Descent" is the opposite of "climb". "Translation" describes the second case.
318,298
I couldn't find anything about Ubuntu acting as a [Miracast](https://en.wikipedia.org/wiki/Miracast) receiver or sender. * Can it work at all? * Are there hardware prerequisites? * Is WiFi a requirement or can it work over LAN or another kind of network connection? * WiFi direct seems to be a necessary requirement, is it a sufficient one? (i.e. if a system supports WiFi direct does that mean it supports Miracast?) * Are there differences in support between receiving/sending? * How is the latency? (compared to the competition, i.e. VNC, commercial Miracast devices, etc.) * How do I actually use it, if it's difficult? Specifically, I plan to use it together with an Android phone (4.x Jelly Bean).
2013/07/09
[ "https://askubuntu.com/questions/318298", "https://askubuntu.com", "https://askubuntu.com/users/37213/" ]
*OpenWFD* is dead and now superseded by **[MiracleCast](https://github.com/albfan/miraclecast)**: > > MiracleCast is an open-source implementation of the Miracast technology (also: Wifi-Display (WFD)). It is based on the OpenWFD research project and will supercede it. We focus on proper and tight integration into existing Linux-Desktop systems, compared to OpenWFD which was meant as playground for fast-protoyping. > > > Despite its name and origin, the project itself is not limited to Miracast. We can support any kind of display-streaming with just a minimal amount of additional work. However, Miracast will remain the main development target due to its level of awareness. > > > It's still early in its development cycle. Currently it seems like it can do the linking, but won't do the actual video streaming. The **OpenWFD** demo at FOSDEM 2014 also did the streaming bit, but as I understand it, *MiracleCast* is a *do it right* project, whereas the code he showed at FOSDEM "will probably only work on this machine".
The Google Cast extension for Chromium works in Ubuntu (to cast Chromium pages/browsing to your TV using a ChromeCast at 720p which looks just fine, though a bit lagged). It doesn't cast the YUV (video overlay) space well though, even on 802.11n. (Testing in 12.04 LTS and 13.10, with latest Chromium) Having said that, casting YouTube from my Android 4.3 (Galaxy Nexus) phone works beautifully. (The ChromeCast dongle takes over the download+display, so it's not dependent on your phone/laptop once you've hit Play). I've not found any Miracast sender apps (eg. EZ Air) for Ubuntu yet unfortunately (for eBay HK/China generic HDMI Miracast dongles). So the 5 metre HDMI cable (also from eBay) is still our solution for ondemand TV at full-screen 1080p.
318,298
I couldn't find anything about Ubuntu acting as a [Miracast](https://en.wikipedia.org/wiki/Miracast) receiver or sender. * Can it work at all? * Are there hardware prerequisites? * Is WiFi a requirement or can it work over LAN or another kind of network connection? * WiFi direct seems to be a necessary requirement, is it a sufficient one? (i.e. if a system supports WiFi direct does that mean it supports Miracast?) * Are there differences in support between receiving/sending? * How is the latency? (compared to the competition, i.e. VNC, commercial Miracast devices, etc.) * How do I actually use it, if it's difficult? Specifically, I plan to use it together with an Android phone (4.x Jelly Bean).
2013/07/09
[ "https://askubuntu.com/questions/318298", "https://askubuntu.com", "https://askubuntu.com/users/37213/" ]
The Google Cast extension for Chromium works in Ubuntu (to cast Chromium pages/browsing to your TV using a ChromeCast at 720p which looks just fine, though a bit lagged). It doesn't cast the YUV (video overlay) space well though, even on 802.11n. (Testing in 12.04 LTS and 13.10, with latest Chromium) Having said that, casting YouTube from my Android 4.3 (Galaxy Nexus) phone works beautifully. (The ChromeCast dongle takes over the download+display, so it's not dependent on your phone/laptop once you've hit Play). I've not found any Miracast sender apps (eg. EZ Air) for Ubuntu yet unfortunately (for eBay HK/China generic HDMI Miracast dongles). So the 5 metre HDMI cable (also from eBay) is still our solution for ondemand TV at full-screen 1080p.
I got inspired to hunt a little more, and indeed, there isn't much on Miracast. However, I did find [this post](http://forum.xda-developers.com/showthread.php?t=2445314) from a few months ago that claims Android doesn't even have it yet, so I suspect it's still being worked on. Because of this I'm going to take some liberty and discuss DLNA / UPnP, as it is almost the same (minus the direct connection and exact screen mirroring). Apparently, in KDE there is a media KIO-slave called [kio-upnp-ms](http://download.kde.org/download.php?url=stable/kio-upnp-ms/0.8.0/src/kio-upnp-ms-0.8.0.tar.gz) that I saw announced [here](http://blog.nikhilism.com/2010/10/upnp-mediaserver-kio-slave-is-out.html). Moreover, there seems to be a fair amount of other UPnP and DLNA options, such as [XBMC](http://xbmc.org/), as listed [here](http://elinux.org/DLNA_Open_Source_Projects) and [here](http://coherence.beebits.net/wiki/Resources). Also, searching for 'upnp' in Synaptic will give you many GNOME options; for example, Rygel is well integrated in GNOME and easy to use.
318,298
I couldn't find anything about Ubuntu acting as a [Miracast](https://en.wikipedia.org/wiki/Miracast) receiver or sender. * Can it work at all? * Are there hardware prerequisites? * Is WiFi a requirement or can it work over LAN or another kind of network connection? * WiFi direct seems to be a necessary requirement, is it a sufficient one? (i.e. if a system supports WiFi direct does that mean it supports Miracast?) * Are there differences in support between receiving/sending? * How is the latency? (compared to the competition, i.e. VNC, commercial Miracast devices, etc.) * How do I actually use it, if it's difficult? Specifically, I plan to use it together with an Android phone (4.x Jelly Bean).
2013/07/09
[ "https://askubuntu.com/questions/318298", "https://askubuntu.com", "https://askubuntu.com/users/37213/" ]
The Google Cast extension for Chromium works in Ubuntu (to cast Chromium pages/browsing to your TV using a ChromeCast at 720p which looks just fine, though a bit lagged). It doesn't cast the YUV (video overlay) space well though, even on 802.11n. (Testing in 12.04 LTS and 13.10, with latest Chromium) Having said that, casting YouTube from my Android 4.3 (Galaxy Nexus) phone works beautifully. (The ChromeCast dongle takes over the download+display, so it's not dependent on your phone/laptop once you've hit Play). I've not found any Miracast sender apps (eg. EZ Air) for Ubuntu yet unfortunately (for eBay HK/China generic HDMI Miracast dongles). So the 5 metre HDMI cable (also from eBay) is still our solution for ondemand TV at full-screen 1080p.
On the receiver side (sink) the already mentioned [MiracleCast](https://github.com/albfan/miraclecast) seems to be the best choice. There is also [work](https://github.com/albfan/miraclecast/issues/4) going on to support sending streams (source). [Gnome-Network-Displays](https://gitlab.gnome.org/GNOME/gnome-network-displays) (formerly [Gnome-Screencast](https://blogs.gnome.org/benzea/2019/01/30/gnome-screencast/)) is a new (2019) effort to support Miracast streaming (source) in GNU/Linux.
318,298
I couldn't find anything about Ubuntu acting as a [Miracast](https://en.wikipedia.org/wiki/Miracast) receiver or sender. * Can it work at all? * Are there hardware prerequisites? * Is WiFi a requirement or can it work over LAN or another kind of network connection? * WiFi direct seems to be a necessary requirement, is it a sufficient one? (i.e. if a system supports WiFi direct does that mean it supports Miracast?) * Are there differences in support between receiving/sending? * How is the latency? (compared to the competition, i.e. VNC, commercial Miracast devices, etc.) * How do I actually use it, if it's difficult? Specifically, I plan to use it together with an Android phone (4.x Jelly Bean).
2013/07/09
[ "https://askubuntu.com/questions/318298", "https://askubuntu.com", "https://askubuntu.com/users/37213/" ]
[Miracast](http://en.wikipedia.org/wiki/Miracast) is [based](http://www.wi-fi.org/knowledge-center/faq/how-miracast-related-wi-fi-direct) on [WiFi Direct](http://en.wikipedia.org/wiki/WiFi_Direct), which as far as I can tell requires a wireless card with hardware support for the standard. Sender ------ I think [Intel Wireless Display](http://www.intel.com/content/www/us/en/architecture-and-technology/intel-wireless-display.html) is the way to send a laptop screen to a Miracast receiver. However, [as far as I can tell](https://askubuntu.com/questions/341907/using-intel-wireless-display-widi-in-ubuntu) Ubuntu currently has no support for Wireless Display cards. Receiver -------- For receiving content from a Miracast transmitter (like your phone), you can buy Miracast receiver dongles that will output to any HDMI input: [Rocketfish™ - Miracast Video Receiver](http://www.bestbuy.com/site/Rocketfish%26%23153%3B---Miracast-Video-Receiver/7511057.p?id=1218851304131&skuId=7511057) There is also [Chromecast](http://www.google.com/intl/en/chrome/devices/chromecast/), but it [only receives content sent from a Chrome browser](http://caflib.blogspot.co.uk/2013/07/chromecast-isnt-miracastwifi.html), rather than from an entire display. I don't know if either device has Ubuntu drivers. If anyone can confirm, or suggest another device with Ubuntu drivers, that would be great.
You can try out the [gnome-screencast](https://github.com/benzea/gnome-screencast) project. More info in this [blogpost](https://blogs.gnome.org/benzea/2019/01/30/gnome-screencast/). It appeared recently and therefore lacks documentation, looks buggy, and seems intended mostly for Fedora users (see the issue about [installing on Ubuntu](https://github.com/benzea/gnome-screencast/issues/19)). But at least it's a step in the right direction.
318,298
I couldn't find anything about Ubuntu acting as a [Miracast](https://en.wikipedia.org/wiki/Miracast) receiver or sender. * Can it work at all? * Are there hardware prerequisites? * Is WiFi a requirement or can it work over LAN or another kind of network connection? * WiFi direct seems to be a necessary requirement, is it a sufficient one? (i.e. if a system supports WiFi direct does that mean it supports Miracast?) * Are there differences in support between receiving/sending? * How is the latency? (compared to the competition, i.e. VNC, commercial Miracast devices, etc.) * How do I actually use it, if it's difficult? Specifically, I plan to use it together with an Android phone (4.x Jelly Bean).
2013/07/09
[ "https://askubuntu.com/questions/318298", "https://askubuntu.com", "https://askubuntu.com/users/37213/" ]
[Miracast](http://en.wikipedia.org/wiki/Miracast) is [based](http://www.wi-fi.org/knowledge-center/faq/how-miracast-related-wi-fi-direct) on [WiFi Direct](http://en.wikipedia.org/wiki/WiFi_Direct), which as far as I can tell requires a wireless card with hardware support for the standard. Sender ------ I think [Intel Wireless Display](http://www.intel.com/content/www/us/en/architecture-and-technology/intel-wireless-display.html) is the way to send a laptop screen to a Miracast receiver. However, [as far as I can tell](https://askubuntu.com/questions/341907/using-intel-wireless-display-widi-in-ubuntu) Ubuntu currently has no support for Wireless Display cards. Receiver -------- For receiving content from a Miracast transmitter (like your phone), you can buy Miracast receiver dongles that will output to any HDMI input: [Rocketfish™ - Miracast Video Receiver](http://www.bestbuy.com/site/Rocketfish%26%23153%3B---Miracast-Video-Receiver/7511057.p?id=1218851304131&skuId=7511057) There is also [Chromecast](http://www.google.com/intl/en/chrome/devices/chromecast/), but it [only receives content sent from a Chrome browser](http://caflib.blogspot.co.uk/2013/07/chromecast-isnt-miracastwifi.html), rather than from an entire display. I don't know if either device has Ubuntu drivers. If anyone can confirm, or suggest another device with Ubuntu drivers, that would be great.
On the receiver side (sink) the already mentioned [MiracleCast](https://github.com/albfan/miraclecast) seems to be the best choice. There is also [work](https://github.com/albfan/miraclecast/issues/4) going on to support sending streams (source). [Gnome-Network-Displays](https://gitlab.gnome.org/GNOME/gnome-network-displays) (formerly [Gnome-Screencast](https://blogs.gnome.org/benzea/2019/01/30/gnome-screencast/)) is a new (2019) effort to support Miracast streaming (source) in GNU/Linux.
318,298
I couldn't find anything about Ubuntu acting as a [Miracast](https://en.wikipedia.org/wiki/Miracast) receiver or sender. * Can it work at all? * Are there hardware prerequisites? * Is WiFi a requirement or can it work over LAN or another kind of network connection? * WiFi direct seems to be a necessary requirement, is it a sufficient one? (i.e. if a system supports WiFi direct does that mean it supports Miracast?) * Are there differences in support between receiving/sending? * How is the latency? (compared to the competition, i.e. VNC, commercial Miracast devices, etc.) * How do I actually use it, if it's difficult? Specifically, I plan to use it together with an Android phone (4.x Jelly Bean).
2013/07/09
[ "https://askubuntu.com/questions/318298", "https://askubuntu.com", "https://askubuntu.com/users/37213/" ]
*OpenWFD* is dead and now superseded by **[MiracleCast](https://github.com/albfan/miraclecast)**: > > MiracleCast is an open-source implementation of the Miracast technology (also: Wifi-Display (WFD)). It is based on the OpenWFD research project and will supercede it. We focus on proper and tight integration into existing Linux-Desktop systems, compared to OpenWFD which was meant as playground for fast-protoyping. > > > Despite its name and origin, the project itself is not limited to Miracast. We can support any kind of display-streaming with just a minimal amount of additional work. However, Miracast will remain the main development target due to its level of awareness. > > > It's still early in its development cycle. Currently it seems like it can do the linking, but won't do the actual video streaming. The **OpenWFD** demo at FOSDEM 2014 also did the streaming bit, but as I understand it, *MiracleCast* is a *do it right* project, whereas the code he showed at FOSDEM "will probably only work on this machine".
On the receiver side (sink) the already mentioned [MiracleCast](https://github.com/albfan/miraclecast) seems to be the best choice. There is also [work](https://github.com/albfan/miraclecast/issues/4) going on to support sending streams (source). [Gnome-Network-Displays](https://gitlab.gnome.org/GNOME/gnome-network-displays) (formerly [Gnome-Screencast](https://blogs.gnome.org/benzea/2019/01/30/gnome-screencast/)) is a new (2019) effort to support Miracast streaming (source) in GNU/Linux.
318,298
I couldn't find anything about Ubuntu acting as a [Miracast](https://en.wikipedia.org/wiki/Miracast) receiver or sender. * Can it work at all? * Are there hardware prerequisites? * Is WiFi a requirement or can it work over LAN or another kind of network connection? * WiFi direct seems to be a necessary requirement, is it a sufficient one? (i.e. if a system supports WiFi direct does that mean it supports Miracast?) * Are there differences in support between receiving/sending? * How is the latency? (compared to the competition, i.e. VNC, commercial Miracast devices, etc.) * How do I actually use it, if it's difficult? Specifically, I plan to use it together with an Android phone (4.x Jelly Bean).
2013/07/09
[ "https://askubuntu.com/questions/318298", "https://askubuntu.com", "https://askubuntu.com/users/37213/" ]
[Miracast](http://en.wikipedia.org/wiki/Miracast) is [based](http://www.wi-fi.org/knowledge-center/faq/how-miracast-related-wi-fi-direct) on [WiFi Direct](http://en.wikipedia.org/wiki/WiFi_Direct), which as far as I can tell requires a wireless card with hardware support for the standard. Sender ------ I think [Intel Wireless Display](http://www.intel.com/content/www/us/en/architecture-and-technology/intel-wireless-display.html) is the way to send a laptop screen to a Miracast receiver. However, [as far as I can tell](https://askubuntu.com/questions/341907/using-intel-wireless-display-widi-in-ubuntu) Ubuntu currently has no support for Wireless Display cards. Receiver -------- For receiving content from a Miracast transmitter (like your phone), you can buy Miracast receiver dongles that will output to any HDMI input: [Rocketfish™ - Miracast Video Receiver](http://www.bestbuy.com/site/Rocketfish%26%23153%3B---Miracast-Video-Receiver/7511057.p?id=1218851304131&skuId=7511057) There is also [Chromecast](http://www.google.com/intl/en/chrome/devices/chromecast/), but it [only receives content sent from a Chrome browser](http://caflib.blogspot.co.uk/2013/07/chromecast-isnt-miracastwifi.html), rather than from an entire display. I don't know if either device has Ubuntu drivers. If anyone can confirm, or suggest another device with Ubuntu drivers, that would be great.
I got inspired to hunt a little more, and indeed, there isn't much on Miracast; however, I did find [this post](http://forum.xda-developers.com/showthread.php?t=2445314) from a few months ago that claims Android doesn't even have it yet, so I suspect it's still being worked on. Because of this I'm going to take some liberty and discuss DLNA / UPnP, as it is almost the same (minus the direct connection and exact screen mirroring). Apparently, in KDE there is a media KIO slave called [kio-upnp-ms](http://download.kde.org/download.php?url=stable/kio-upnp-ms/0.8.0/src/kio-upnp-ms-0.8.0.tar.gz) that I saw announced [here](http://blog.nikhilism.com/2010/10/upnp-mediaserver-kio-slave-is-out.html). Moreover, there seems to be a fair amount of other UPnP and DLNA options, such as [XBMC](http://xbmc.org/), as listed [here](http://elinux.org/DLNA_Open_Source_Projects) and [here](http://coherence.beebits.net/wiki/Resources). Also, searching for 'upnp' in Synaptic will give you many GNOME options; for example, Rygel is well integrated into GNOME and easy to use.
318,298
I couldn't find anything about Ubuntu acting as a [Miracast](https://en.wikipedia.org/wiki/Miracast) receiver or sender. * Can it work at all? * Are there hardware prerequisites? * Is WiFi a requirement or can it work over LAN or another kind of network connection? * WiFi direct seems to be a necessary requirement, is it a sufficient one? (i.e. if a system supports WiFi direct does that mean it supports Miracast?) * Are there differences in support between receiving/sending? * How is the latency? (compared to the competition, i.e. VNC, commercial Miracast devices, etc.) * How do I actually use it, if it's difficult? Specifically, I plan to use it together with an Android phone (4.x Jelly Bean).
2013/07/09
[ "https://askubuntu.com/questions/318298", "https://askubuntu.com", "https://askubuntu.com/users/37213/" ]
The Google Cast extension for Chromium works in Ubuntu (to cast Chromium pages/browsing to your TV using a ChromeCast at 720p, which looks just fine, though a bit lagged). It doesn't cast the YUV (video overlay) space well though, even on 802.11n. (Tested in 12.04 LTS and 13.10, with the latest Chromium.) Having said that, casting YouTube from my Android 4.3 (Galaxy Nexus) phone works beautifully. (The ChromeCast dongle takes over the download and display, so it's not dependent on your phone/laptop once you've hit Play.) I've not found any Miracast sender apps (e.g. EZ Air) for Ubuntu yet unfortunately (for eBay HK/China generic HDMI Miracast dongles). So the 5 metre HDMI cable (also from eBay) is still our solution for on-demand TV at full-screen 1080p.
You can try out the [gnome-screencast](https://github.com/benzea/gnome-screencast) project. More info is in this [blog post](https://blogs.gnome.org/benzea/2019/01/30/gnome-screencast/). It appeared recently, so it lacks documentation, looks buggy, and is intended mostly for Fedora users (see the issue about [installing on Ubuntu](https://github.com/benzea/gnome-screencast/issues/19)). But at least it's a step in the right direction.
318,298
I couldn't find anything about Ubuntu acting as a [Miracast](https://en.wikipedia.org/wiki/Miracast) receiver or sender. * Can it work at all? * Are there hardware prerequisites? * Is WiFi a requirement or can it work over LAN or another kind of network connection? * WiFi direct seems to be a necessary requirement, is it a sufficient one? (i.e. if a system supports WiFi direct does that mean it supports Miracast?) * Are there differences in support between receiving/sending? * How is the latency? (compared to the competition, i.e. VNC, commercial Miracast devices, etc.) * How do I actually use it, if it's difficult? Specifically, I plan to use it together with an Android phone (4.x Jelly Bean).
2013/07/09
[ "https://askubuntu.com/questions/318298", "https://askubuntu.com", "https://askubuntu.com/users/37213/" ]
*OpenWFD* is dead and now superseded by **[MiracleCast](https://github.com/albfan/miraclecast)**: > > MiracleCast is an open-source implementation of the Miracast technology (also: Wifi-Display (WFD)). It is based on the OpenWFD research project and will supercede it. We focus on proper and tight integration into existing Linux-Desktop systems, compared to OpenWFD which was meant as playground for fast-protoyping. > > > Despite its name and origin, the project itself is not limited to Miracast. We can support any kind of display-streaming with just a minimal amount of additional work. However, Miracast will remain the main development target due to its level of awareness. > > > It's still early in its development cycle. Currently it seems like it can do the linking, but won't do the actual video streaming. The **OpenWFD** demo at FOSDEM 2014 also did the streaming bit, but as I understand it, *MiracleCast* is a *do it right* project, whereas the code he showed at FOSDEM "will probably only work on this machine".
I got inspired to hunt a little more, and indeed, there isn't much on Miracast; however, I did find [this post](http://forum.xda-developers.com/showthread.php?t=2445314) from a few months ago that claims Android doesn't even have it yet, so I suspect it's still being worked on. Because of this I'm going to take some liberty and discuss DLNA / UPnP, as it is almost the same (minus the direct connection and exact screen mirroring). Apparently, in KDE there is a media KIO slave called [kio-upnp-ms](http://download.kde.org/download.php?url=stable/kio-upnp-ms/0.8.0/src/kio-upnp-ms-0.8.0.tar.gz) that I saw announced [here](http://blog.nikhilism.com/2010/10/upnp-mediaserver-kio-slave-is-out.html). Moreover, there seems to be a fair amount of other UPnP and DLNA options, such as [XBMC](http://xbmc.org/), as listed [here](http://elinux.org/DLNA_Open_Source_Projects) and [here](http://coherence.beebits.net/wiki/Resources). Also, searching for 'upnp' in Synaptic will give you many GNOME options; for example, Rygel is well integrated into GNOME and easy to use.
318,298
I couldn't find anything about Ubuntu acting as a [Miracast](https://en.wikipedia.org/wiki/Miracast) receiver or sender. * Can it work at all? * Are there hardware prerequisites? * Is WiFi a requirement or can it work over LAN or another kind of network connection? * WiFi direct seems to be a necessary requirement, is it a sufficient one? (i.e. if a system supports WiFi direct does that mean it supports Miracast?) * Are there differences in support between receiving/sending? * How is the latency? (compared to the competition, i.e. VNC, commercial Miracast devices, etc.) * How do I actually use it, if it's difficult? Specifically, I plan to use it together with an Android phone (4.x Jelly Bean).
2013/07/09
[ "https://askubuntu.com/questions/318298", "https://askubuntu.com", "https://askubuntu.com/users/37213/" ]
*OpenWFD* is dead and now superseded by **[MiracleCast](https://github.com/albfan/miraclecast)**: > > MiracleCast is an open-source implementation of the Miracast technology (also: Wifi-Display (WFD)). It is based on the OpenWFD research project and will supercede it. We focus on proper and tight integration into existing Linux-Desktop systems, compared to OpenWFD which was meant as playground for fast-protoyping. > > > Despite its name and origin, the project itself is not limited to Miracast. We can support any kind of display-streaming with just a minimal amount of additional work. However, Miracast will remain the main development target due to its level of awareness. > > > It's still early in its development cycle. Currently it seems like it can do the linking, but won't do the actual video streaming. The **OpenWFD** demo at FOSDEM 2014 also did the streaming bit, but as I understand it, *MiracleCast* is a *do it right* project, whereas the code he showed at FOSDEM "will probably only work on this machine".
[Miracast](http://en.wikipedia.org/wiki/Miracast) is [based](http://www.wi-fi.org/knowledge-center/faq/how-miracast-related-wi-fi-direct) on [WiFi Direct](http://en.wikipedia.org/wiki/WiFi_Direct), which as far as I can tell requires a wireless card with hardware support for the standard. Sender ------ I think [Intel Wireless Display](http://www.intel.com/content/www/us/en/architecture-and-technology/intel-wireless-display.html) is the way to send a laptop screen to a Miracast receiver. However, [as far as I can tell](https://askubuntu.com/questions/341907/using-intel-wireless-display-widi-in-ubuntu) Ubuntu currently has no support for Wireless Display cards. Receiver -------- For receiving content from a Miracast transmitter (like your phone), you can buy Miracast receiver dongles that will output to any HDMI input: [Rocketfish™ - Miracast Video Receiver](http://www.bestbuy.com/site/Rocketfish%26%23153%3B---Miracast-Video-Receiver/7511057.p?id=1218851304131&skuId=7511057) There is also [Chromecast](http://www.google.com/intl/en/chrome/devices/chromecast/), but it [only receives content sent from a Chrome browser](http://caflib.blogspot.co.uk/2013/07/chromecast-isnt-miracastwifi.html), rather than from an entire display. I don't know if either device has Ubuntu drivers. If anyone can confirm, or suggest another device with Ubuntu drivers, that would be great.
48,353
I'm curious as to how so many words with the 'ch' sound have the silent 't' in them. *Catch, itch, retch, hatchet, botch* etc. The list is huge. They all have different origins, and yet they have the silent 't'. But words like *achieve, lecherous, spinach* don't have the silent 't'. Can anyone see any phonological patterns that might have led to this?
2011/11/15
[ "https://english.stackexchange.com/questions/48353", "https://english.stackexchange.com", "https://english.stackexchange.com/users/11605/" ]
I side with onomatomaniak — I pronounce *botch* differently to *leech*. I think looking at the etymologies of the words shows where the *t* comes from: * Batch [from O.E. \*bæcce](http://www.etymonline.com/index.php?term=batch) * Hatch ["opening," O.E. hæc (gen. hæcce)](http://www.etymonline.com/index.php?term=hatch) * Itch [from O.E. gicce](http://www.etymonline.com/index.php?term=itch) * Thatch [from O.E. þeccan](http://www.etymonline.com/index.php?term=thatch) Compared to * Church [from O.E. cirice](http://www.etymonline.com/index.php?term=church) * -arch [from Gk. arkh-](http://www.etymonline.com/index.php?term=arch-) * Finch [from O.E. finc, from P.Gmc. \*finkiz](http://www.etymonline.com/index.php?term=finch) * Leech [from O.E. læce](http://www.etymonline.com/index.php?term=leech) (All of the above are taken from [Etymonline](http://etymonline.com)) I think there is a pattern that words that originally have *cc* become *tch*, whereas words with a single *c* or *k* become *ch*. Obviously there will be exceptions, but I think that this is where the majority come from.
Well, the T is not entirely silent. The words that do include the T have a kind of T-sound starting the 'ch' off. If you look at a word where the ch does not include the T, such as "bachelor", the sound is not so sharp (more like a D-sound than a T-sound). The difference is very small, but it is there.
48,353
I'm curious as to how so many words with the 'ch' sound have the silent 't' in them. *Catch, itch, retch, hatchet, botch* etc. The list is huge. They all have different origins, and yet they have the silent 't'. But words like *achieve, lecherous, spinach* don't have the silent 't'. Can anyone see any phonological patterns that might have led to this?
2011/11/15
[ "https://english.stackexchange.com/questions/48353", "https://english.stackexchange.com", "https://english.stackexchange.com/users/11605/" ]
It seems to me that the 'tch' behaves in English spelling the way a doubled consonant would, and the 'ch' the way a single consonant would. That is, 'tch' is more likely to occur after short vowels, so you see *patch, botch,* and *crutch,* but *beach, roach,* and *pooch*. As with any English spelling rule, there are numerous exceptions.
Well, the T is not entirely silent. The words that do include the T have a kind of T-sound starting the 'ch' off. If you look at a word where the ch does not include the T, such as "bachelor", the sound is not so sharp (more like a D-sound than a T-sound). The difference is very small, but it is there.
48,353
I'm curious as to how so many words with the 'ch' sound have the silent 't' in them. *Catch, itch, retch, hatchet, botch* etc. The list is huge. They all have different origins, and yet they have the silent 't'. But words like *achieve, lecherous, spinach* don't have the silent 't'. Can anyone see any phonological patterns that might have led to this?
2011/11/15
[ "https://english.stackexchange.com/questions/48353", "https://english.stackexchange.com", "https://english.stackexchange.com/users/11605/" ]
The words you mention have been spelt in many different ways over the centuries. To take just two examples, *hatchet* has also appeared as *hachet, acchett, hachit, hachytt, hachette and hatchette* and *achieve* as *acheui acheeve, achyeue, atcheue, acheue, acheve, achieue, achyue, achieve, achiue, ascheve, atcheive, atchive, atchieue, atchiue, atchive, atchieve, acheive, atcheeue; acheive, acheue, atcheve, achieve and acheive.* Make of that what you will.
Well, the T is not entirely silent. The words that do include the T have a kind of T-sound starting the 'ch' off. If you look at a word where the ch does not include the T, such as "bachelor", the sound is not so sharp (more like a D-sound than a T-sound). The difference is very small, but it is there.
48,353
I'm curious as to how so many words with the 'ch' sound have the silent 't' in them. *Catch, itch, retch, hatchet, botch* etc. The list is huge. They all have different origins, and yet they have the silent 't'. But words like *achieve, lecherous, spinach* don't have the silent 't'. Can anyone see any phonological patterns that might have led to this?
2011/11/15
[ "https://english.stackexchange.com/questions/48353", "https://english.stackexchange.com", "https://english.stackexchange.com/users/11605/" ]
It seems to me that the 'tch' behaves in English spelling the way a doubled consonant would, and the 'ch' the way a single consonant would. That is, 'tch' is more likely to occur after short vowels, so you see *patch, botch,* and *crutch,* but *beach, roach,* and *pooch*. As with any English spelling rule, there are numerous exceptions.
I side with onomatomaniak — I pronounce *botch* differently to *leech*. I think looking at the etymologies of the words shows where the *t* comes from: * Batch [from O.E. \*bæcce](http://www.etymonline.com/index.php?term=batch) * Hatch ["opening," O.E. hæc (gen. hæcce)](http://www.etymonline.com/index.php?term=hatch) * Itch [from O.E. gicce](http://www.etymonline.com/index.php?term=itch) * Thatch [from O.E. þeccan](http://www.etymonline.com/index.php?term=thatch) Compared to * Church [from O.E. cirice](http://www.etymonline.com/index.php?term=church) * -arch [from Gk. arkh-](http://www.etymonline.com/index.php?term=arch-) * Finch [from O.E. finc, from P.Gmc. \*finkiz](http://www.etymonline.com/index.php?term=finch) * Leech [from O.E. læce](http://www.etymonline.com/index.php?term=leech) (All of the above are taken from [Etymonline](http://etymonline.com)) I think there is a pattern that words that originally have *cc* become *tch*, whereas words with a single *c* or *k* become *ch*. Obviously there will be exceptions, but I think that this is where the majority come from.
48,353
I'm curious as to how so many words with the 'ch' sound have the silent 't' in them. *Catch, itch, retch, hatchet, botch* etc. The list is huge. They all have different origins, and yet they have the silent 't'. But words like *achieve, lecherous, spinach* don't have the silent 't'. Can anyone see any phonological patterns that might have led to this?
2011/11/15
[ "https://english.stackexchange.com/questions/48353", "https://english.stackexchange.com", "https://english.stackexchange.com/users/11605/" ]
The words you mention have been spelt in many different ways over the centuries. To take just two examples, *hatchet* has also appeared as *hachet, acchett, hachit, hachytt, hachette and hatchette* and *achieve* as *acheui acheeve, achyeue, atcheue, acheue, acheve, achieue, achyue, achieve, achiue, ascheve, atcheive, atchive, atchieue, atchiue, atchive, atchieve, acheive, atcheeue; acheive, acheue, atcheve, achieve and acheive.* Make of that what you will.
I side with onomatomaniak — I pronounce *botch* differently to *leech*. I think looking at the etymologies of the words shows where the *t* comes from: * Batch [from O.E. \*bæcce](http://www.etymonline.com/index.php?term=batch) * Hatch ["opening," O.E. hæc (gen. hæcce)](http://www.etymonline.com/index.php?term=hatch) * Itch [from O.E. gicce](http://www.etymonline.com/index.php?term=itch) * Thatch [from O.E. þeccan](http://www.etymonline.com/index.php?term=thatch) Compared to * Church [from O.E. cirice](http://www.etymonline.com/index.php?term=church) * -arch [from Gk. arkh-](http://www.etymonline.com/index.php?term=arch-) * Finch [from O.E. finc, from P.Gmc. \*finkiz](http://www.etymonline.com/index.php?term=finch) * Leech [from O.E. læce](http://www.etymonline.com/index.php?term=leech) (All of the above are taken from [Etymonline](http://etymonline.com)) I think there is a pattern that words that originally have *cc* become *tch*, whereas words with a single *c* or *k* become *ch*. Obviously there will be exceptions, but I think that this is where the majority come from.
48,353
I'm curious as to how so many words with the 'ch' sound have the silent 't' in them. *Catch, itch, retch, hatchet, botch* etc. The list is huge. They all have different origins, and yet they have the silent 't'. But words like *achieve, lecherous, spinach* don't have the silent 't'. Can anyone see any phonological patterns that might have led to this?
2011/11/15
[ "https://english.stackexchange.com/questions/48353", "https://english.stackexchange.com", "https://english.stackexchange.com/users/11605/" ]
I side with onomatomaniak — I pronounce *botch* differently to *leech*. I think looking at the etymologies of the words shows where the *t* comes from: * Batch [from O.E. \*bæcce](http://www.etymonline.com/index.php?term=batch) * Hatch ["opening," O.E. hæc (gen. hæcce)](http://www.etymonline.com/index.php?term=hatch) * Itch [from O.E. gicce](http://www.etymonline.com/index.php?term=itch) * Thatch [from O.E. þeccan](http://www.etymonline.com/index.php?term=thatch) Compared to * Church [from O.E. cirice](http://www.etymonline.com/index.php?term=church) * -arch [from Gk. arkh-](http://www.etymonline.com/index.php?term=arch-) * Finch [from O.E. finc, from P.Gmc. \*finkiz](http://www.etymonline.com/index.php?term=finch) * Leech [from O.E. læce](http://www.etymonline.com/index.php?term=leech) (All of the above are taken from [Etymonline](http://etymonline.com)) I think there is a pattern that words that originally have *cc* become *tch*, whereas words with a single *c* or *k* become *ch*. Obviously there will be exceptions, but I think that this is where the majority come from.
Maybe it's because the 'ch' sound used to be pronounced as in the Scottish 'loch', so the 't' is added to indicate the harder sound?
48,353
I'm curious as to how so many words with the 'ch' sound have the silent 't' in them. *Catch, itch, retch, hatchet, botch* etc. The list is huge. They all have different origins, and yet they have the silent 't'. But words like *achieve, lecherous, spinach* don't have the silent 't'. Can anyone see any phonological patterns that might have led to this?
2011/11/15
[ "https://english.stackexchange.com/questions/48353", "https://english.stackexchange.com", "https://english.stackexchange.com/users/11605/" ]
It seems to me that the 'tch' behaves in English spelling the way a doubled consonant would, and the 'ch' the way a single consonant would. That is, 'tch' is more likely to occur after short vowels, so you see *patch, botch,* and *crutch,* but *beach, roach,* and *pooch*. As with any English spelling rule, there are numerous exceptions.
Maybe it's because the 'ch' sound used to be pronounced as in the Scottish 'loch', so the 't' is added to indicate the harder sound?
48,353
I'm curious as to how so many words with the 'ch' sound have the silent 't' in them. *Catch, itch, retch, hatchet, botch* etc. The list is huge. They all have different origins, and yet they have the silent 't'. But words like *achieve, lecherous, spinach* don't have the silent 't'. Can anyone see any phonological patterns that might have led to this?
2011/11/15
[ "https://english.stackexchange.com/questions/48353", "https://english.stackexchange.com", "https://english.stackexchange.com/users/11605/" ]
The words you mention have been spelt in many different ways over the centuries. To take just two examples, *hatchet* has also appeared as *hachet, acchett, hachit, hachytt, hachette and hatchette* and *achieve* as *acheui acheeve, achyeue, atcheue, acheue, acheve, achieue, achyue, achieve, achiue, ascheve, atcheive, atchive, atchieue, atchiue, atchive, atchieve, acheive, atcheeue; acheive, acheue, atcheve, achieve and acheive.* Make of that what you will.
Maybe it's because the 'ch' sound used to be pronounced as in the Scottish 'loch', so the 't' is added to indicate the harder sound?
746,898
I am configuring a VPS on Windows Server 2012 R2 using MailEnable as the email server. As port 25 is blocked by the ISP, I use port 587 instead.[![enter image description here](https://i.stack.imgur.com/B3HMG.png)](https://i.stack.imgur.com/B3HMG.png) When configuring the email client (Outlook or Thunderbird), all tests passed and I am able to receive a test message sent from Outlook. [![enter image description here](https://i.stack.imgur.com/T5mLt.png)](https://i.stack.imgur.com/T5mLt.png) Below are my Outlook settings: [![enter image description here](https://i.stack.imgur.com/KL4Sc.png)](https://i.stack.imgur.com/KL4Sc.png) **However, I am not able to receive any test email when I send from Gmail or Hotmail, etc.** I checked the server firewall and port 25 is open. Can anyone please help with this? Thank you.
2016/01/05
[ "https://serverfault.com/questions/746898", "https://serverfault.com", "https://serverfault.com/users/279128/" ]
TCP port 587 is for mail submission from **clients**. Receiving email from other mail servers **requires** TCP port 25. You'll need to either move your server elsewhere or get your ISP to open that port. You will also want to un-check the "authentication required" option, as remote mail servers have no way of authenticating themselves to your server.
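A quick way to confirm whether the ISP block described above is still in effect is to test reachability of ports 25 and 587 from a machine outside your network. The sketch below is a minimal check; `mail.example.com` is a placeholder for your server's public hostname or IP, and a timeout or refusal on port 25 means other mail servers cannot deliver to you.

```python
# Minimal reachability check for the ports discussed above.
# "mail.example.com" is a placeholder; substitute your server's public
# hostname or IP. Run this from outside the hosting network.
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (25, 587):
    print(f"port {port} reachable:", port_open("mail.example.com", port))
```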
1. Change the SMTP port from 587 back to 25. (This is for remote servers sending you e-mail.) 2. Untick the authentication requirement there. 3. Go to the Submission port settings and enable listening on the alternate port 587. (This is for your own clients sending email through your server.) 4. Tick the authentication requirement here so unauthorized senders cannot relay through you.
4,443,546
In light of the recent [Gawker Media password leak](http://blogs.wsj.com/digits/2010/12/13/the-top-50-gawker-media-passwords/), I've realized that many users share the same passwords. To help encourage stronger passwords, **would it be helpful if passwords are constrained to be unique among all users**? One immediate downside I could think of (besides account creation performance?) is being able to know that someone is using a given string as a password. This knowledge, combined with a list of users, could be quite dangerous. Is there a way to mitigate that downside while retaining the alleged benefits of not allowing repeat passwords? It's kind of like the [XKCD kick bot](http://blog.xkcd.com/2008/01/14/robot9000-and-xkcd-signal-attacking-noise-in-chat/) where you aren't allowed to repeat short, unoriginal sentences like "yah" or "lol". Edit^2: I thought you could unique-ify against a hash, but as someone pointed out, with varying salts, this would not have the intended effect. Good eye!
2010/12/14
[ "https://Stackoverflow.com/questions/4443546", "https://Stackoverflow.com", "https://Stackoverflow.com/users/40725/" ]
Absolutely *not*. It is critical that no information about passwords be available to users outside the system. If they can easily guess which passwords are in use, by discovering that a password is unavailable, then they can try those passwords against known usernames and get a good shot at gaining access. An alternative is to find some kind of *common passwords* database, and prevent any user from using them.
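As a rough illustration of the "common passwords database" alternative mentioned above, here is a minimal sketch that rejects any candidate password found in a published breach/common-password list at registration time. The file name `common-passwords.txt` (one password per line) is a placeholder, not a specific recommended list.

```python
# Sketch of the "common passwords database" idea: reject any candidate
# password that appears in a published common-password or breach list.
# "common-passwords.txt" (one password per line) is a placeholder path.
def load_blocklist(path="common-passwords.txt"):
    with open(path, encoding="utf-8") as fh:
        return {line.strip().lower() for line in fh if line.strip()}

def is_acceptable(candidate, blocklist):
    # Reject passwords that attackers will try first.
    return candidate.lower() not in blocklist

if __name__ == "__main__":
    blocklist = load_blocklist()
    print(is_acceptable("password123", blocklist))
```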
I would suggest the following, as you have already mentioned the disadvantage of using "unique" passwords for all: 1. Educate the users about strong passwords. 2. Ask users to change passwords regularly. 3. Keep a "Password strength" meter while they type in the password (a naive sketch of such a meter is shown below).
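To illustrate item 3, here is a deliberately naive strength-score sketch; real meters (such as zxcvbn) also use dictionaries and pattern detection, so treat this only as a starting point.

```python
# Deliberately naive sketch of a password-strength score for item 3 above;
# real meters (such as zxcvbn) use dictionaries and pattern detection.
import string

def strength_score(password: str) -> int:
    score = 0
    if len(password) >= 12:
        score += 2
    elif len(password) >= 8:
        score += 1
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    # One point per character class actually used.
    score += sum(any(ch in cls for ch in password) for cls in classes)
    return score  # 0 (weak) .. 6 (reasonably strong)
```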
4,443,546
In light of the recent [Gawker Media password leak](http://blogs.wsj.com/digits/2010/12/13/the-top-50-gawker-media-passwords/), I've realized that many users share the same passwords. To help encourage stronger passwords, **would it be helpful if passwords are constrained to be unique among all users**? One immediate downside I could think of (besides account creation performance?) is being able to know that someone is using a given string as a password. This knowledge, combined with a list of users, could be quite dangerous. Is there a way to mitigate that downside while retaining the alleged benefits of not allowing repeat passwords? It's kind of like the [XKCD kick bot](http://blog.xkcd.com/2008/01/14/robot9000-and-xkcd-signal-attacking-noise-in-chat/) where you aren't allowed to repeat short, unoriginal sentences like "yah" or "lol". Edit^2: I thought you could unique-ify against a hash, but as someone pointed out, with varying salts, this would not have the intended effect. Good eye!
2010/12/14
[ "https://Stackoverflow.com/questions/4443546", "https://Stackoverflow.com", "https://Stackoverflow.com/users/40725/" ]
Absolutely *not*. It is critical that no information about passwords be available to users outside the system. If they can easily guess which passwords are in use, by discovering that a password is unavailable, then they can try those passwords against known usernames and get a good shot at gaining access. An alternative is to find some kind of *common passwords* database, and prevent any user from using them.
**eeeuh** I might be misreading your question, but I hope you do not store the actual password? You should hash the password with a random salt. That way, *there is no way for you to ever tell if one or more users have the same password.* If your system, in any way, allows you to determine whether two or more users have the same password, you are storing the passwords the wrong way.
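To make the "hash with a random salt" advice concrete, here is a minimal sketch using only the Python standard library. The iteration count and key-derivation choice (PBKDF2 with SHA-256) are illustrative assumptions, not something the answer itself specifies; store the salt and digest, never the password.

```python
# Minimal salted-hash sketch using only the standard library; the
# iteration count and PBKDF2/SHA-256 choice are illustrative values.
import hashlib
import hmac
import os

def hash_password(password: str):
    salt = os.urandom(16)                      # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest                        # store both, never the password

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

Because each user gets a fresh random salt, two identical passwords produce different stored digests, which is exactly why a system built this way cannot enforce "unique passwords across users" in the first place.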
4,443,546
In light of the recent [Gawker Media password leak](http://blogs.wsj.com/digits/2010/12/13/the-top-50-gawker-media-passwords/), I've realized that many users share the same passwords. To help encourage stronger passwords, **would it be helpful if passwords are constrained to be unique among all users**? One immediate downside I could think of (besides account creation performance?) is being able to know that someone is using a given string as a password. This knowledge, combined with a list of users, could be quite dangerous. Is there a way to mitigate that downside while retaining the alleged benefits of not allowing repeat passwords? It's kind of like the [XKCD kick bot](http://blog.xkcd.com/2008/01/14/robot9000-and-xkcd-signal-attacking-noise-in-chat/) where you aren't allowed to repeat short, unoriginal sentences like "yah" or "lol". Edit^2: I thought you could unique-ify against a hash, but as someone pointed out, with varying salts, this would not have the intended effect. Good eye!
2010/12/14
[ "https://Stackoverflow.com/questions/4443546", "https://Stackoverflow.com", "https://Stackoverflow.com/users/40725/" ]
Absolutely *not*. It is critical that no information about passwords be available to users outside the system. If they can easily guess which passwords are in use, by discovering that a password is unavailable, then they can try those passwords against known usernames and get a good shot at gaining access. An alternative is to find some kind of *common passwords* database, and prevent any user from using them.
Really, don’t. As long as you have salts, the password won’t be stored the same way anyway. If you want to ensure password security: 1. Pick a good hash (SHA-256, Blowfish, etc.) 2. Use salts 3. Snap in a password meter with a minimum threshold 4. A lot of those can be bundled with wordlists. (A short salted-hash sketch using bcrypt follows below.) Check out a post I made about it on reddit: <http://www.reddit.com/r/netsec/comments/ektb8/in_the_light_of_recent_gawker_breakout_lets_talk/>
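Since this answer mentions Blowfish, here is a short sketch with the third-party `bcrypt` package (installed via `pip install bcrypt`), which is the Blowfish-derived scheme commonly used for this and which generates and embeds the salt for you. The choice of library is my assumption, not something the answer specifies.

```python
# Sketch of the "good hash + salt" advice using the third-party `bcrypt`
# package (pip install bcrypt); bcrypt embeds the salt in the stored hash.
import bcrypt

def hash_password(password: str) -> bytes:
    # gensalt() picks a random salt and a work factor; store the result as-is.
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

def check_password(password: str, stored: bytes) -> bool:
    return bcrypt.checkpw(password.encode("utf-8"), stored)
```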
4,443,546
In light of the recent [Gawker Media password leak](http://blogs.wsj.com/digits/2010/12/13/the-top-50-gawker-media-passwords/), I've realized that many users share the same passwords. To help encourage stronger passwords, **would it be helpful if passwords are constrained to be unique among all users**? One immediate downside I could think of (besides account creation performance?) is being able to know that someone is using a given string as a password. This knowledge, combined with a list of users, could be quite dangerous. Is there a way to mitigate that downside while retaining the alleged benefits of not allowing repeat passwords? It's kind of like the [XKCD kick bot](http://blog.xkcd.com/2008/01/14/robot9000-and-xkcd-signal-attacking-noise-in-chat/) where you aren't allowed to repeat short, unoriginal sentences like "yah" or "lol". Edit^2: I thought you could unique-ify against a hash, but as someone pointed out, with varying salts, this would not have the intended effect. Good eye!
2010/12/14
[ "https://Stackoverflow.com/questions/4443546", "https://Stackoverflow.com", "https://Stackoverflow.com/users/40725/" ]
Absolutely *not*. It is critical that no information about passwords be available to users outside the system. If they can easily guess which passwords are in use, by discovering that a password is unavailable, then they can try those passwords against known usernames and get a good shot at gaining access. An alternative is to find some kind of *common passwords* database, and prevent any user from using them.
If password management is done correctly, the only person who should know their password is the user who created it in the first place. On my web sites, I never store the password in any form. I store a cryptographic hash (SHA-1 or some variant) of that password that is manipulated with some sort of unique "salt" padding. Essentially, if two people did have identical passwords, there would be no way to tell. Most of the passwords on that link you gave are all easily guessed dictionary passwords. Very weak, and easy to brute force. They would all be disallowed by any system with rudimentary password checking.
4,443,546
In light of the recent [Gawker Media password leak](http://blogs.wsj.com/digits/2010/12/13/the-top-50-gawker-media-passwords/), I've realized that many users share the same passwords. To help encourage stronger passwords, **would it be helpful if passwords are constrained to be unique among all users**? One immediate downside I could think of (besides account creation performance?) is being able to know that someone is using a given string as a password. This knowledge, combined with a list of users, could be quite dangerous. Is there a way to mitigate that downside while retaining the alleged benefits of not allowing repeat passwords? It's kind of like the [XKCD kick bot](http://blog.xkcd.com/2008/01/14/robot9000-and-xkcd-signal-attacking-noise-in-chat/) where you aren't allowed to repeat short, unoriginal sentences like "yah" or "lol". Edit^2: I thought you could unique-ify against a hash, but as someone pointed out, with varying salts, this would not have the intended effect. Good eye!
2010/12/14
[ "https://Stackoverflow.com/questions/4443546", "https://Stackoverflow.com", "https://Stackoverflow.com/users/40725/" ]
**eeeuh** I might be misreading your question, but I hope you do not store the actual password? You should hash the password with a random salt. That way, *there is no way for you to ever tell if one or more users have the same password.* If your system, in any way, allows you to determine whether two or more users have the same password, you are storing the passwords the wrong way.
I would suggest the following, as you have already mentioned the disadvantage of using "unique" passwords for all: 1. Educate the users about strong passwords. 2. Ask users to change passwords regularly. 3. Keep a "Password strength" meter while they type in the password.
4,443,546
In light of the recent [Gawker Media password leak](http://blogs.wsj.com/digits/2010/12/13/the-top-50-gawker-media-passwords/), I've realized that many users share the same passwords. To help encourage stronger passwords, **would it be helpful if passwords are constrained to be unique among all users**? One immediate downside I could think of (besides account creation performance?) is being able to know that someone is using a given string as a password. This knowledge, combined with a list of users, could be quite dangerous. Is there a way to mitigate that downside while retaining the alleged benefits of not allowing repeat passwords? It's kind of like the [XKCD kick bot](http://blog.xkcd.com/2008/01/14/robot9000-and-xkcd-signal-attacking-noise-in-chat/) where you aren't allowed to repeat short, unoriginal sentences like "yah" or "lol". Edit^2: I thought you could unique-ify against a hash, but as someone pointed out, with varying salts, this would not have the intended effect. Good eye!
2010/12/14
[ "https://Stackoverflow.com/questions/4443546", "https://Stackoverflow.com", "https://Stackoverflow.com/users/40725/" ]
**eeeuh** I might be misreading your question, but I hope you do not store the actual password? You should hash the password with a random salt. That way, *there is no way for you to ever tell if one or more users have the same password.* If your system, in any way, allows you to determine whether two or more users have the same password, you are storing the passwords the wrong way.
Really, don’t. As long as you have salts, the password won’t be stored the same way anyway. If you want to ensure password security: 1. Pick a good hash (SHA-256, Blowfish, etc.) 2. Use salts 3. Snap in a password meter with a minimum threshold 4. A lot of those can be bundled with wordlists. Check out a post I made about it on reddit: <http://www.reddit.com/r/netsec/comments/ektb8/in_the_light_of_recent_gawker_breakout_lets_talk/>
4,443,546
In light of the recent [Gawker Media password leak](http://blogs.wsj.com/digits/2010/12/13/the-top-50-gawker-media-passwords/), I've realized that many users share the same passwords. To help encourage stronger passwords, **would it be helpful if passwords are constrained to be unique among all users**? One immediate downside I could think of (besides account creation performance?) is being able to know that someone is using a given string as a password. This knowledge, combined with a list of users, could be quite dangerous. Is there a way to mitigate that downside while retaining the alleged benefits of not allowing repeat passwords? It's kind of like the [XKCD kick bot](http://blog.xkcd.com/2008/01/14/robot9000-and-xkcd-signal-attacking-noise-in-chat/) where you aren't allowed to repeat short, unoriginal sentences like "yah" or "lol". Edit^2: I thought you could unique-ify against a hash, but as someone pointed out, with varying salts, this would not have the intended effect. Good eye!
2010/12/14
[ "https://Stackoverflow.com/questions/4443546", "https://Stackoverflow.com", "https://Stackoverflow.com/users/40725/" ]
**eeeuh** I might be misreading your question, but I hope you do not store the actual password? You should hash the password with a random salt. That way, *there is no way for you to ever tell if one or more users have the same password.* If your system, in any way, allows you to determine whether two or more users have the same password, you are storing the passwords the wrong way.
If password management is done correctly, the only person who should know their password is the user who created it in the first place. On my web sites, I never store the password in any form. I store a cryptographic hash (SHA-1 or some variant) of that password that is manipulated with some sort of unique "salt" padding. Essentially, if two people did have identical passwords, there would be no way to tell. Most of the passwords on that link you gave are all easily guessed dictionary passwords. Very weak, and easy to brute force. They would all be disallowed by any system with rudimentary password checking.
1,544,624
I have a Windows 10 laptop from my company that is in the company domain and where I'm local admin. When I connect the laptop to my home network I can connect to anything on the internet (http/ftp/ssh/etc.) but, even though I can ping my home machines from the laptop, I can't connect to them or see their shares. I also can't print to my network printer. Any ideas of what is blocking this? Is there a network monitor with which I can see where connections are blocked?
2020/04/22
[ "https://superuser.com/questions/1544624", "https://superuser.com", "https://superuser.com/users/1167016/" ]
Set up regular folder sharing. Home Group is gone, SMBv1 is gone, and Browsing is unreliable also. The following instructions enable folder sharing between two Windows 10 machines: 1. Make sure Network Discovery and File / Print Sharing are enabled on both computers. 2. Make sure password protected sharing is enabled on both computers. 3. If you wish to share by computer name instead of IP address, put an entry in the HOSTS file of the computer you are connecting from with the name and IP address of the main computer. 4. Make sure both computers are in the same WORKGROUP and make sure wireless connections are Private, not Public. 5. This next step depends on computer user names and passwords. If both computers use the same username and password, you can skip this step, restart both and test. If the user names are different, do the following. Create a username on the main computer that matches the user name and password of the computer you are connecting from. Use this for permissions on the folders on the main computer you wish to share. It is normally quite difficult to share USER folders because Home Group was removed - security concerns. Use a neutral folder for sharing. Again, after all the above changes, restart and test. On the computer you are connecting from, open a command prompt and type: `NET USE X: \\nameofothercomputer\folder` Press Enter and then authenticate with the user name and password credentials.
Solved my problem. In the route table there was no entry for 192.168.1.0 mask 255.255.255.0, so every packet was sent to the gateway, which somehow couldn't route the packets correctly!
867,074
I setup an AWS Elasticsearch Domain recently but I didn't see a way to stop it (like you can with an EC2 instance), which means I'm continuously billed. At this stage I just need to do some testing and don't require a full-time cluster. The only option I see is to delete the domain, am I missing something?
2017/08/06
[ "https://serverfault.com/questions/867074", "https://serverfault.com", "https://serverfault.com/users/88135/" ]
You will have to delete the cluster for billing to stop. However, if you want to back up the data for later experiments, you can take [manual snapshots](https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains.html#es-managedomains-snapshots) (link rotten, check [archived page here](https://web.archive.org/web/20170810131300/https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains.html#es-managedomains-snapshots)) of the indices to your S3 buckets. The next time you spawn a cluster, just restore the snapshot :)
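For reference, the manual-snapshot workflow has roughly the shape below, shown here with the `requests` library. The endpoint, bucket and role names are placeholders, and note that on the AWS Elasticsearch service the repository-registration call must be signed (SigV4) by an IAM identity allowed to pass the snapshot role, which this simplified sketch omits.

```python
# Rough shape of the manual-snapshot calls described above (placeholders
# throughout; AWS requires SigV4-signed requests, omitted here for brevity).
import requests

ES = "https://search-mydomain.us-east-1.es.amazonaws.com"  # placeholder endpoint

# 1) Register an S3 repository for snapshots.
requests.put(f"{ES}/_snapshot/my-repo", json={
    "type": "s3",
    "settings": {"bucket": "my-snapshot-bucket",
                 "region": "us-east-1",
                 "role_arn": "arn:aws:iam::123456789012:role/es-snapshot-role"},
})

# 2) Take a snapshot of all indices; once it completes, the domain can be deleted.
requests.put(f"{ES}/_snapshot/my-repo/snapshot-1?wait_for_completion=true")

# 3) Later, on a fresh domain pointed at the same repository, restore it.
requests.post(f"{ES}/_snapshot/my-repo/snapshot-1/_restore")
```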
There is no way to stop the cluster today. What I did to reduce my bill was that I edited the cluster to reduce the instance type to a t2.small instance which is significantly cheaper than the previous instance. Then when I needed to resume testing I changed the instance type back to what I required.
75,795
Why does the Star Trek franchise (produced in the USA) use [Celsius](http://en.wikipedia.org/wiki/Celsius) for temperature and other units from [SI](http://en.wikipedia.org/wiki/International_System_of_Units), rather than [Fahrenheit](http://en.wikipedia.org/wiki/Fahrenheit) and units from [the imperial system](http://en.wikipedia.org/wiki/Imperial_units) (still widely used in USA)? Eventually, as per Paul D. Waite's [comment](https://scifi.stackexchange.com/questions/75795/why-does-star-trek-use-celsius-si-system-not-fahrenheit-the-imperial-system#comment157654_75795), the question can be, *why don’t they use Kelvins*?
2014/12/14
[ "https://scifi.stackexchange.com/questions/75795", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/7885/" ]
The Federation is a utopian society derived from Earth. Such a utopian future world would use a consistent and planned measurement system. Thus they use Celsius, because it is logical and simple. One Celsius degree is the same as one kelvin, which is 1/100th of the total range from the freezing point to the boiling point of water (at 1 atmosphere pressure). Fahrenheit is a much more complicated scale (see [here](http://en.wikipedia.org/wiki/Fahrenheit)). Kelvins are much more unwieldy at the temperatures that we are accustomed to: a warm summer day is 298 K, or 25 degrees C. Note, the size of the kelvin is set to be the same as the degree Celsius; the Kelvin scale just starts at 'absolute zero', which is -273.15 deg C. So Celsius was probably chosen because it is consistent, logical, simple, yet relatable by the average audience (Americans in the Sixties), and it was also 'futuristic' to non-scientists at the time ST was invented.
For scientific purposes metric is the accepted standard, so for those aboard the Enterprise (all of whom have some degree of scientific expertise) it would simply be natural. On top of that, astronomical units of measurement are based on the metric system: for example, we use km to measure near-planetary distances; it's only when we get up to interplanetary distances that we start to measure in AU (the distance from the Earth to the Sun), but gigametres are also interchangeable here. As we continue increasing in scale beyond parsecs we have the kiloparsec and megaparsec, which use metric prefixes. As for why they don't use Kelvin - Kelvin has a straight conversion, 0 degrees Celsius being 273.15 K, so the two are interchangeable. We can assume that they use the two interchangeably as the scientific community does, so for ambient temperatures they will refer to 25 C rather than 298.15 K, but when referring to the very cold they might use Kelvin. So they probably do use Kelvin, but it's all based on context; for example, I don't say that my walk to the kitchen is 0.003 km from my sitting room, instead I just say it's 3 m.
75,795
Why does the Star Trek franchise (produced in the USA) use [Celsius](http://en.wikipedia.org/wiki/Celsius) for temperature and other units from [SI](http://en.wikipedia.org/wiki/International_System_of_Units), rather than [Fahrenheit](http://en.wikipedia.org/wiki/Fahrenheit) and units from [the imperial system](http://en.wikipedia.org/wiki/Imperial_units) (still widely used in USA)? Eventually, as per Paul D. Waite's [comment](https://scifi.stackexchange.com/questions/75795/why-does-star-trek-use-celsius-si-system-not-fahrenheit-the-imperial-system#comment157654_75795), the question can be, *why don’t they use Kelvins*?
2014/12/14
[ "https://scifi.stackexchange.com/questions/75795", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/7885/" ]
Short answer? [People feel that the future is the metric system](http://tvtropes.org/pmwiki/pmwiki.php/Main/TheMetricSystemIsHereToStay). It's more endorsed by the scientific community. Many nations have adopted it as a universal measure. Thus, in a farflung science-heavy future, the assumption is that people will be using metric units exclusively, the same reason futurists thought people would all be speaking Esperanto in the future as seen in the Harry Harrison books.
Gene Roddenberry was a visionary. I think he foresaw that future generations would be more likely to use metric units, which are already used by the scientific community (and by almost every nation on Earth outside the U.S.).
75,795
Why does the Star Trek franchise (produced in the USA) use [Celsius](http://en.wikipedia.org/wiki/Celsius) for temperature and other units from [SI](http://en.wikipedia.org/wiki/International_System_of_Units), rather than [Fahrenheit](http://en.wikipedia.org/wiki/Fahrenheit) and units from [the imperial system](http://en.wikipedia.org/wiki/Imperial_units) (still widely used in USA)? Eventually, as per Paul D. Waite's [comment](https://scifi.stackexchange.com/questions/75795/why-does-star-trek-use-celsius-si-system-not-fahrenheit-the-imperial-system#comment157654_75795), the question can be, *why don’t they use Kelvins*?
2014/12/14
[ "https://scifi.stackexchange.com/questions/75795", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/7885/" ]
Here and there both systems are used - sometimes, I think, just because 'miles' and 'inches' are easier to grasp in the mind and feel more human. However, look how metricated the whole mythos is at its core - from stardates to coordinates. Therefore it is very logical they should use Celsius as well. Most importantly of all, Trek represents a utopian future where mankind has joined together without negative nationalism or bigoted jingoism. In this single culture the sheer number of humans who do measure things in tens would massively outweigh those who don't. Logic - and therefore the metric system - would prevail through democracy just as @Royal says. Metric is also the measurement system of science, and Trek is a high technocracy.
Your question isn't entirely accurate. Star Trek uses imperial measurements. :) In Star Trek, the original series, they use imperial. E.g. Spock tells Kirk a temperature in Fahrenheit, and at some point they both look at Mudd's data file and it gives his height in feet. They also use metric, sometimes in the [exact same episode](http://themetricmaven.com/?p=719) for the same measurements (e.g. distance). It's a big mix. From a production standpoint this is presumably because the writers at the time didn't put much thought in and just wrote what they know.
75,795
Why does the Star Trek franchise (produced in the USA) use [Celsius](http://en.wikipedia.org/wiki/Celsius) for temperature and other units from [SI](http://en.wikipedia.org/wiki/International_System_of_Units), rather than [Fahrenheit](http://en.wikipedia.org/wiki/Fahrenheit) and units from [the imperial system](http://en.wikipedia.org/wiki/Imperial_units) (still widely used in USA)? Eventually, as per Paul D. Waite's [comment](https://scifi.stackexchange.com/questions/75795/why-does-star-trek-use-celsius-si-system-not-fahrenheit-the-imperial-system#comment157654_75795), the question can be, *why don’t they use Kelvins*?
2014/12/14
[ "https://scifi.stackexchange.com/questions/75795", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/7885/" ]
You answered your question in the question - because Celsius is an SI unit (well it's not really, Kelvin is, but Celsius is just a constant offset so it is for the purposes of this question). There's no logic to a scientific organization in the future using anything other than what the scientific community use (Nasa used imperial for a while because it was US based). Other people have mentioned the fact most of the world use Celsius, but this is irrelevant. While it's sensible for countries to use SI, even if no one used Celsius, it would still be adopted by any scientific organization. An example is acceleration, where no country (as far as I know) would quote acceleration in m/s^2 but that's what science uses.
I think the "universal translator" takes care of it, as does the specialized translators used for ships log entries etc. If Spock were to use a cultural reference in his Officer's Log, and speak of "a hundred twenty eight *squelm*" in FedStandard (which is decendent from and rendered as English in the show) the metadata would automatically note the standard value in kelvin, and later when a sulfer-breathing admeral from [Sarr](https://en.m.wikipedia.org/wiki/Iceworld) reads it, it will be in his native language with the value in kelvin and a footnote explaining that the author likened it to the desert mesa whatever blooms are triggered, with links. Or, it may show a notation mapping to the normalized clement range of the author, so he knows without distraction if that is supposed to be *hot* or *bitter cold* or whatever. In the case of a human reading, Fahrenheit might be one of the configurable options of the normalized clemency perception scale. Since Starfleet is primarily founded and organized by Terran and Vulcan world governments, whose to say SI is the end-all/be-all of measurements? They might use Vulcan-based Interplanetary Standard units.
75,795
Why does the Star Trek franchise (produced in the USA) use [Celsius](http://en.wikipedia.org/wiki/Celsius) for temperature and other units from [SI](http://en.wikipedia.org/wiki/International_System_of_Units), rather than [Fahrenheit](http://en.wikipedia.org/wiki/Fahrenheit) and units from [the imperial system](http://en.wikipedia.org/wiki/Imperial_units) (still widely used in USA)? Eventually, as per Paul D. Waite's [comment](https://scifi.stackexchange.com/questions/75795/why-does-star-trek-use-celsius-si-system-not-fahrenheit-the-imperial-system#comment157654_75795), the question can be, *why don’t they use Kelvins*?
2014/12/14
[ "https://scifi.stackexchange.com/questions/75795", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/7885/" ]
Short answer? [People feel that the future is the metric system](http://tvtropes.org/pmwiki/pmwiki.php/Main/TheMetricSystemIsHereToStay). It's more endorsed by the scientific community. Many nations have adopted it as a universal measure. Thus, in a farflung science-heavy future, the assumption is that people will be using metric units exclusively, the same reason futurists thought people would all be speaking Esperanto in the future as seen in the Harry Harrison books.
Your question isn't entirely accurate. Star Trek uses imperial measurements. :) In Star Trek, the original series, they use imperial. E.g. Spock tells Kirk a temperature in Fahrenheit, and at some point they both look at Mudd's data file and it gives his height in feet. They also use metric, sometimes in the [exact same episode](http://themetricmaven.com/?p=719) for the same measurements (e.g. distance). It's a big mix. From a production standpoint this is presumably because the writers at the time didn't put much thought in and just wrote what they know.
75,795
Why does the Star Trek franchise (produced in the USA) use [Celsius](http://en.wikipedia.org/wiki/Celsius) for temperature and other units from [SI](http://en.wikipedia.org/wiki/International_System_of_Units), rather than [Fahrenheit](http://en.wikipedia.org/wiki/Fahrenheit) and units from [the imperial system](http://en.wikipedia.org/wiki/Imperial_units) (still widely used in USA)? Eventually, as per Paul D. Waite's [comment](https://scifi.stackexchange.com/questions/75795/why-does-star-trek-use-celsius-si-system-not-fahrenheit-the-imperial-system#comment157654_75795), the question can be, *why don’t they use Kelvins*?
2014/12/14
[ "https://scifi.stackexchange.com/questions/75795", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/7885/" ]
You answered your question in the question - because Celsius is an SI unit (well it's not really, Kelvin is, but Celsius is just a constant offset so it is for the purposes of this question). There's no logic to a scientific organization in the future using anything other than what the scientific community use (Nasa used imperial for a while because it was US based). Other people have mentioned the fact most of the world use Celsius, but this is irrelevant. While it's sensible for countries to use SI, even if no one used Celsius, it would still be adopted by any scientific organization. An example is acceleration, where no country (as far as I know) would quote acceleration in m/s^2 but that's what science uses.
Your question isn't entirely accurate. Star Trek uses imperial measurements. :) In Star Trek, the original series, they use imperial. E.g. Spock tells Kirk a temperature in Fahrenheit, and at some point they both look at Mudd's data file and it gives his height in feet. They also use metric, sometimes in the [exact same episode](http://themetricmaven.com/?p=719) for the same measurements (e.g. distance). It's a big mix. From a production standpoint this is presumably because the writers at the time didn't put much thought in and just wrote what they know.
75,795
Why does the Star Trek franchise (produced in the USA) use [Celsius](http://en.wikipedia.org/wiki/Celsius) for temperature and other units from [SI](http://en.wikipedia.org/wiki/International_System_of_Units), rather than [Fahrenheit](http://en.wikipedia.org/wiki/Fahrenheit) and units from [the imperial system](http://en.wikipedia.org/wiki/Imperial_units) (still widely used in USA)? Eventually, as per Paul D. Waite's [comment](https://scifi.stackexchange.com/questions/75795/why-does-star-trek-use-celsius-si-system-not-fahrenheit-the-imperial-system#comment157654_75795), the question can be, *why don’t they use Kelvins*?
2014/12/14
[ "https://scifi.stackexchange.com/questions/75795", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/7885/" ]
Short answer? [People feel that the future is the metric system](http://tvtropes.org/pmwiki/pmwiki.php/Main/TheMetricSystemIsHereToStay). It's more endorsed by the scientific community. Many nations have adopted it as a universal measure. Thus, in a farflung science-heavy future, the assumption is that people will be using metric units exclusively, the same reason futurists thought people would all be speaking Esperanto in the future as seen in the Harry Harrison books.
You answered your question in the question - because Celsius is an SI unit (well, it's not really, the kelvin is, but Celsius is just a constant offset from it, so it counts for the purposes of this question). There's no logic to a scientific organization in the future using anything other than what the scientific community uses (NASA used imperial for a while because it was US-based). Other people have mentioned the fact that most of the world uses Celsius, but this is irrelevant. While it's sensible for countries to use SI, even if no one used Celsius, it would still be adopted by any scientific organization. An example is acceleration, where no country (as far as I know) would quote acceleration in m/s^2, but that's what science uses.
75,795
Why does the Star Trek franchise (produced in the USA) use [Celsius](http://en.wikipedia.org/wiki/Celsius) for temperature and other units from [SI](http://en.wikipedia.org/wiki/International_System_of_Units), rather than [Fahrenheit](http://en.wikipedia.org/wiki/Fahrenheit) and units from [the imperial system](http://en.wikipedia.org/wiki/Imperial_units) (still widely used in USA)? Eventually, as per Paul D. Waite's [comment](https://scifi.stackexchange.com/questions/75795/why-does-star-trek-use-celsius-si-system-not-fahrenheit-the-imperial-system#comment157654_75795), the question can be, *why don’t they use Kelvins*?
2014/12/14
[ "https://scifi.stackexchange.com/questions/75795", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/7885/" ]
Today, countries making up about 95% of the world's population use the metric system: ![world map with USA, Liberia and Myanmar highlighted](https://i.stack.imgur.com/PwIln.png) The [holdouts](http://www.zmescience.com/other/map-of-countries-officially-not-using-the-metric-system/) are the USA, Liberia, and Myanmar. If the Earth is peacefully united and sends missions to the stars -- as is the case in Star Trek -- the overwhelming majority of people would be metric users. Simple democracy would lead to the metric system being adopted.
Here and there both systems are used - sometimes, I think, simply because 'miles' and 'inches' are easier to grasp in the mind and feel more human. However, look how metricated the whole mythos is at its core - from stardates to coordinates. Therefore it is very logical that they should use Celsius as well. Most importantly of all, Trek represents a utopian future where mankind has joined together without negative nationalism or bigoted jingoism. In this single culture the sheer number of humans who do measure things in tens would massively outweigh those who don't. Logic - and therefore the metric system - would prevail through democracy, just as @Royal says. Metric is also the measurement system of science, and Trek is a high technocracy.
75,795
Why does the Star Trek franchise (produced in the USA) use [Celsius](http://en.wikipedia.org/wiki/Celsius) for temperature and other units from [SI](http://en.wikipedia.org/wiki/International_System_of_Units), rather than [Fahrenheit](http://en.wikipedia.org/wiki/Fahrenheit) and units from [the imperial system](http://en.wikipedia.org/wiki/Imperial_units) (still widely used in USA)? Eventually, as per Paul D. Waite's [comment](https://scifi.stackexchange.com/questions/75795/why-does-star-trek-use-celsius-si-system-not-fahrenheit-the-imperial-system#comment157654_75795), the question can be, *why don’t they use Kelvins*?
2014/12/14
[ "https://scifi.stackexchange.com/questions/75795", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/7885/" ]
Today, countries making up about 95% of the world's population use the metric system: ![world map with USA, Liberia and Myanmar highlighted](https://i.stack.imgur.com/PwIln.png) The [holdouts](http://www.zmescience.com/other/map-of-countries-officially-not-using-the-metric-system/) are the USA, Liberia, and Myanmar. If the Earth is peacefully united and sends missions to the stars -- as is the case in Star Trek -- the overwhelming majority of people would be metric users. Simple democracy would lead to the metric system being adopted.
I think the "universal translator" takes care of it, as do the specialized translators used for ship's log entries etc. If Spock were to use a cultural reference in his Officer's Log, and speak of "a hundred twenty eight *squelm*" in FedStandard (which is descended from English and rendered as English in the show), the metadata would automatically note the standard value in kelvin, and later, when a sulfur-breathing admiral from [Sarr](https://en.m.wikipedia.org/wiki/Iceworld) reads it, it will be in his native language with the value in kelvin and a footnote explaining that the author likened it to the desert mesa temperature at which whatever blooms are triggered, with links. Or, it may show a notation mapping to the normalized clement range of the author, so he knows without distraction whether that is supposed to be *hot* or *bitter cold* or whatever. In the case of a human reader, Fahrenheit might be one of the configurable options of the normalized clemency perception scale. Since Starfleet is primarily founded and organized by the Terran and Vulcan world governments, who's to say SI is the end-all/be-all of measurements? They might use Vulcan-based Interplanetary Standard units.
75,795
Why does the Star Trek franchise (produced in the USA) use [Celsius](http://en.wikipedia.org/wiki/Celsius) for temperature and other units from [SI](http://en.wikipedia.org/wiki/International_System_of_Units), rather than [Fahrenheit](http://en.wikipedia.org/wiki/Fahrenheit) and units from [the imperial system](http://en.wikipedia.org/wiki/Imperial_units) (still widely used in USA)? Eventually, as per Paul D. Waite's [comment](https://scifi.stackexchange.com/questions/75795/why-does-star-trek-use-celsius-si-system-not-fahrenheit-the-imperial-system#comment157654_75795), the question can be, *why don’t they use Kelvins*?
2014/12/14
[ "https://scifi.stackexchange.com/questions/75795", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/7885/" ]
The Federation is a utopian society derived from Earth. Such a utopian future world would use a consistent and planned measurement system. Thus they use Celsius, because it is logical and simple. One Celsius degree is the same size as one kelvin, which is 1/100th of the range from the freezing point to the boiling point of water (at 1 atmosphere of pressure). Fahrenheit is a much more complicated scale (see [here](http://en.wikipedia.org/wiki/Fahrenheit)). Kelvins are much more unwieldy at the temperatures that we are accustomed to: a warm summer day is 298 K, or 25 degrees C. Note that the size of the kelvin is the same as that of the degree Celsius; the Kelvin scale just starts at 'absolute zero', which is -273.15 deg C. So Celsius was probably chosen because it is consistent, logical, and simple, yet relatable to the average audience (Americans in the Sixties), and it was also 'futuristic' to non-scientists at the time ST was invented.
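To make the scale relationships above concrete, here is a minimal Python sketch of the conversions. The 25 degrees C / 298 K figure and the -273.15 offset are the ones quoted in the answer; the Fahrenheit formula is the standard one, added here only for comparison.

```python
def celsius_to_kelvin(c):
    # The kelvin and the Celsius degree are the same size;
    # the Kelvin scale just starts at absolute zero (-273.15 deg C).
    return c + 273.15

def celsius_to_fahrenheit(c):
    # Fahrenheit needs both a scale factor and an offset,
    # which is part of why it is the more awkward scale.
    return c * 9 / 5 + 32

if __name__ == "__main__":
    warm_summer_day = 25.0  # degrees Celsius, as quoted in the answer
    print(celsius_to_kelvin(warm_summer_day))      # 298.15, the "298 K" above
    print(celsius_to_fahrenheit(warm_summer_day))  # 77.0
```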
I think the "universal translator" takes care of it, as do the specialized translators used for ship's log entries etc. If Spock were to use a cultural reference in his Officer's Log, and speak of "a hundred twenty eight *squelm*" in FedStandard (which is descended from English and rendered as English in the show), the metadata would automatically note the standard value in kelvin, and later, when a sulfur-breathing admiral from [Sarr](https://en.m.wikipedia.org/wiki/Iceworld) reads it, it will be in his native language with the value in kelvin and a footnote explaining that the author likened it to the desert mesa temperature at which whatever blooms are triggered, with links. Or, it may show a notation mapping to the normalized clement range of the author, so he knows without distraction whether that is supposed to be *hot* or *bitter cold* or whatever. In the case of a human reader, Fahrenheit might be one of the configurable options of the normalized clemency perception scale. Since Starfleet is primarily founded and organized by the Terran and Vulcan world governments, who's to say SI is the end-all/be-all of measurements? They might use Vulcan-based Interplanetary Standard units.
122,125
I accepted a "full-time" freelance gig. It's freelance since it's only 6 months, it's remote, and there's no need to go to the office. But I'm the only one who's gonna do their designs. They asked me for my rate and it was tricky for me since I'm going to be paid monthly like it's a full-time job. I asked the frequency and scope of designs. I took the job and 3 weeks in, they're making me do vouchers and the frequency was more than I expected. Now they want me to do business cards. The scope of work was only posters and social media posts. Honestly the rate wasn't THAT bad but I really wasn't expecting it to be this much. And I'm shy to confront them since honestly I'm thankful that they hired me. I'm not that experienced yet with design and it's hard to get freelance clients. This one is fixed for 6 months. No hassle to source clients for me. Here is the exact email for the scope of work:

> 
> Layouts per brand (THREE BRANDS)
> 
> 
> 1. Menu editing – 1x every quarter (price revisions, removal of slow moving items, additional new items)
> 2. Promo Posters – 1-2x monthly; resize for menu insert (optional) resize for social media, resize for tent cards, resize for creative
> standee, resize for poster
> 3. New Branches – lamp post banners, soon to open posters, board up collateral
> 4. Social media for posts – 1-2x per week
> 
> 
>
2018/11/05
[ "https://workplace.stackexchange.com/questions/122125", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/92969/" ]
If you have a written agreement outlining the scope it's easy. You just include that with a message saying that the extra work is out of scope and giving a costing for the extra work. This is normal procedure, so do it confidently and professionally. Outline the costs and ask what timeframes they need it done in as if it was an entirely different job. Then you can move forwards when they reply. If you don't have a written agreement, then you do the same thing. At the end of the day, you're a freelancer, not an employee. Any scope creep weakens your present and future negotiations and status. You haven't had a payment yet and they're trying to maximise returns on their money. At this point they haven't invested much in you. If you're really nervous about losing them as a client, then wait until you have received your first payment. Once money has changed hands there is more of an investment which means you have a stronger negotiating/dispute base. > > Do you think gift cards and business cards shouldn't be included on my work? > > > That is entirely up to your interpretation, as a freelancer you are your own boss. If something is not clear you can interpret it however you want, they can negotiate. But just taking it on the chin is a bad idea.
Since the scope has changed, this is a normal point of negotiation during freelance work. You should politely indicate that this is outside the initial scope of work and propose a few options for the client to decide between. Your options, generally, are more time, more money, or removing/deprioritizing other tasks. Tactics aside, you should work out your own accounting strategy and proceed accordingly to decide the outcome you would prefer. Since you did not do this already, this is a retroactive exercise. For example, a simplified decision tree:
1. Hours Billed. You charged what you actually expected to work at your rate. Agreeing to a fixed periodic payment is common, but it doesn't change your calculus.
a. You estimated the amount of time needed for the tasks described and the rate you wanted to receive, assuming the proposed scope.
b. This was in line with what was offered on monthly terms.
c. If the work scope changes, you adjust the billable hours accordingly and provide new estimates.
2. Retainer. You commit/reserve a certain percentage of your time to complete the tasks the client sends you.
a. You had an understanding of the commitment requested, for example 100% of your time on average over a 6-month term.
b. Based upon the percentage and your rate, you accept a monthly payment, understanding that *on average* your load will not exceed this commitment.
c. A fluctuating workload on a weekly basis is part of the deal, knowing there are down weeks and up weeks.
d. If the work scope changes permanently, you adjust total time or task priority; alternatively, you bill the overage based on additional hours worked.
In all cases you should have a general understanding of the hours required and the rate for each task (the rates can be different for different tasks, e.g. a lower rate for asset processing vs. creative work); a small arithmetic sketch of the overage case is shown below. So depending on how you see yourself and the relationship with this client, I believe your options are pretty clear cut:
1. Ask for more money (Bill hours)
2. Ask for more time (extend contract term)
3. Absorb the extra work if it is still worth your time (Appeasement)
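Here is a minimal Python sketch of the hours-billed/retainer overage arithmetic described above. Every number in it (monthly fee, hourly rate, hours) is a made-up placeholder for illustration, not a figure from the question.

```python
def overage_to_bill(monthly_fee, hourly_rate, hours_worked):
    """Return the extra amount to invoice when out-of-scope work
    pushes the month's hours past what the fixed fee covers."""
    covered_hours = monthly_fee / hourly_rate        # hours the retainer already pays for
    extra_hours = max(0.0, hours_worked - covered_hours)
    return extra_hours * hourly_rate

if __name__ == "__main__":
    # Hypothetical figures, purely for illustration.
    monthly_fee = 1200.0   # agreed fixed monthly payment
    hourly_rate = 15.0     # the rate the fee was based on
    hours_worked = 95.0    # actual hours once vouchers and business cards were added
    print(overage_to_bill(monthly_fee, hourly_rate, hours_worked))  # 225.0
```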
122,125
I accepted a "full-time" freelance gig. It's freelance since it's only 6 months, it's remote, and there's no need to go to the office. But I'm the only one who's gonna do their designs. They asked me for my rate and it was tricky for me since I'm going to be paid monthly like it's a full-time job. I asked the frequency and scope of designs. I took the job and 3 weeks in, they're making me do vouchers and the frequency was more than I expected. Now they want me to do business cards. The scope of work was only posters and social media posts. Honestly the rate wasn't THAT bad but I really wasn't expecting it to be this much. And I'm shy to confront them since honestly I'm thankful that they hired me. I'm not that experienced yet with design and it's hard to get freelance clients. This one is fixed for 6 months. No hassle to source clients for me. Here is the exact email for the scope of work:

> 
> Layouts per brand (THREE BRANDS)
> 
> 
> 1. Menu editing – 1x every quarter (price revisions, removal of slow moving items, additional new items)
> 2. Promo Posters – 1-2x monthly; resize for menu insert (optional) resize for social media, resize for tent cards, resize for creative
> standee, resize for poster
> 3. New Branches – lamp post banners, soon to open posters, board up collateral
> 4. Social media for posts – 1-2x per week
> 
> 
>
2018/11/05
[ "https://workplace.stackexchange.com/questions/122125", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/92969/" ]
If you have a written agreement outlining the scope it's easy. You just include that with a message saying that the extra work is out of scope and giving a costing for the extra work. This is normal procedure, so do it confidently and professionally. Outline the costs and ask what timeframes they need it done in as if it was an entirely different job. Then you can move forwards when they reply. If you don't have a written agreement, then you do the same thing. At the end of the day, you're a freelancer, not an employee. Any scope creep weakens your present and future negotiations and status. You haven't had a payment yet and they're trying to maximise returns on their money. At this point they haven't invested much in you. If you're really nervous about losing them as a client, then wait until you have received your first payment. Once money has changed hands there is more of an investment which means you have a stronger negotiating/dispute base. > > Do you think gift cards and business cards shouldn't be included on my work? > > > That is entirely up to your interpretation, as a freelancer you are your own boss. If something is not clear you can interpret it however you want, they can negotiate. But just taking it on the chin is a bad idea.
Well, *is* it "freelance," or is it effectively "full-time work?" (Be wary of *"statutory employee"* territory!) If possible, do the work, but immediately have these discussions with your client/employer. If the quality of the work that you could do, or the timeliness with which you are able to do it, would suffer, then they need to know this. Maybe they're just so happy with what you're doing that they want to give you even more to do! But – these are discussions that you need to be having directly, very soon, with *them,* not StackExchange.
45,290
Can I claim my daughter as a dependent on my 2014 tax return even though she got married in August? She did not live at home but was a full-time student for at least 5 months out of the year, and we paid for her tuition.
2015/03/06
[ "https://money.stackexchange.com/questions/45290", "https://money.stackexchange.com", "https://money.stackexchange.com/users/26184/" ]
Depends on whether or not she files a joint return. If not, you can claim her; if she does, you cannot. See the link below for more info on whether she counts as a "Qualifying Child" in various situations. [http://www.irs.gov/uac/A-“Qualifying-Child”](http://www.irs.gov/uac/A-%E2%80%9CQualifying-Child%E2%80%9D)
From [reading this document](http://www.irs.gov/uac/A-%E2%80%9CQualifying-Child%E2%80%9D), she would have had to: * Live at home for more than six months out of the year. * Be between the ages of 19 and 24 **and** be a full-time student - check her transcripts for the year to see if her credit-hours per semester would have made her a full-time student. * Not file jointly with her spouse. If she's not stayed with you at home for at least that long, then she cannot be claimed. If you're concerned about the tuition, there are [available tax forms](http://www.irs.gov/uac/Tax-Benefits-for-Education:-Information-Center) which can reduce your taxable income by the amount of tuition you've paid, or even give you a bonus. Her educational institution should have provided her with a Form 1098-T, so be sure that you get a copy of them before you do your taxes.
35,625
I am attempting to replace an old PLCC32 part that was directly soldered to the board with a new part of undecided form. We will definitely need an adapter as we have not been able to find a PLCC32 part that does what we need. I cannot use a PLCC adapter plug because there are also height restrictions. We are considering building a two-sided adapter board that has pads on the bottom side that match the PLCC32 layout on the current board, with the new layout on top. Theoretically, the adapter board would be soldered directly to the old board and the new chip on top of the adapter. However, I have not seen any examples of soldering two PCBs directly together in this manner, which makes me think it is a likely to be a bad idea. Can anyone comment on this sort of custom adapter?
2012/07/12
[ "https://electronics.stackexchange.com/questions/35625", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5694/" ]
Soldering a small PCB flat onto a larger PCB is possible. In fact, that's how many of the embedded radio modules are mounted ([example](http://www.rovingnetworks.com/products/RN_41), [example](http://www.anaren.com/sites/default/files/Part-Datasheets/A2500R24A_EM1.pdf)). The pads can be on the edge of the board (a via cut to form a half-cylinder\*), or the SMT pads can be directly underneath. \* see also [photo in stevenh's answer](https://electronics.stackexchange.com/a/35628/7036). Such a feature is called *castellation* (thanks, The Photon). Look also at [Aries Correct-a-Chip](http://www.arieselec.com/products/correct.htm) adapters. Some of them ([like this one](http://www.arieselec.com/Web_Data_Sheets/18045/18045.htm)) go from one SMT footprint to another SMT footprint. There are also companies that specialize in making custom adapters. [adapters-Plus](http://www.adapt-plus.com/), for example.
I'd consider a Ball Grid Array (BGA) IC package to be close to an example of that. It comes with solder-balls preplaced on the "component" PCB. Assembly is tricky, usually done via automated placement and hot air, frequently with preheat from below too. In your case you presumably would only have contacts around the periphery so inspection would be a bit easier. However you probably won't have the preformed solder balls. You might look at rework solutions for re-balling BGAs. There is also some similarity to a QFN package, which is usually soldered by depositing paste with a stencil and then using a similar external area heat source, however you won't have the metalization up the edge thickness which many QFN's have to aid filleting (and incidentally give you a limited ability to do rework with an extremely fine-tip iron) If your PCB house will do it, the plated through holes cut in half by the board outline idea seen on some recent chip-carrier modules might be an interesting idea, as that would give you metalization up the thickness. I think you might have a fair shot of soldering that on with an iron or an air pencil.
35,625
I am attempting to replace an old PLCC32 part that was directly soldered to the board with a new part of undecided form. We will definitely need an adapter as we have not been able to find a PLCC32 part that does what we need. I cannot use a PLCC adapter plug because there are also height restrictions. We are considering building a two-sided adapter board that has pads on the bottom side that match the PLCC32 layout on the current board, with the new layout on top. Theoretically, the adapter board would be soldered directly to the old board and the new chip on top of the adapter. However, I have not seen any examples of soldering two PCBs directly together in this manner, which makes me think it is a likely to be a bad idea. Can anyone comment on this sort of custom adapter?
2012/07/12
[ "https://electronics.stackexchange.com/questions/35625", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5694/" ]
No problem. I had to look for a picture that illustrates the technique: ![enter image description here](https://i.stack.imgur.com/URLw8.jpg) You make a PCB with plated through holes on the PLCC's pads, so at a 1.27 mm pitch, and mill the four sides so that you get the half holes like in the picture. These are easily solderable on the old PLCC footprint, it's an often used technique, called *castellation*. A picture of a complete board: ![enter image description here](https://i.stack.imgur.com/dYnNg.jpg) and another one: ![enter image description here](https://i.stack.imgur.com/Ohlgh.jpg) or this one from a question posted 1 minute ago: ![enter image description here](https://i.stack.imgur.com/HJpXL.jpg) You get the idea. You'll have to find a part which fits inside this small PCB, but given the miniaturization of the last years that may not be a problem. **edit** 2012-07-15 *QuestionMan* suggested to make the PCB a bit larger so that the PLCC's solder pads are under it. For BGAs the solder balls are also under the IC, but that's solid solder balls, not paste, and I don't know how solder paste will behave when squeezed between two PCBs. But today I bumped into this IC package: ![enter image description here](https://i.stack.imgur.com/v4QEO.png) It's the "Staggered Dual-row MicroLeadFrame® Package (MLF)" of the [ATMega8HVD](http://media.digikey.com/pdf/Data%20Sheets/Atmel%20PDFs/ATMEGA4HVD,8HVD.pdf), and it has pins under the IC as well. This is 3.5 mm x 6.5 mm, and weighs a lot less than the small PCB. That may be important, because thanks to the low weight capillary forces of the molten solder paste can pull the IC to its exact position. I'm not sure if that will also be the case for that PCB, and then positioning may be a problem.
I'd consider a Ball Grid Array (BGA) IC package to be close to an example of that. It comes with solder-balls preplaced on the "component" PCB. Assembly is tricky, usually done via automated placement and hot air, frequently with preheat from below too. In your case you presumably would only have contacts around the periphery so inspection would be a bit easier. However you probably won't have the preformed solder balls. You might look at rework solutions for re-balling BGAs. There is also some similarity to a QFN package, which is usually soldered by depositing paste with a stencil and then using a similar external area heat source, however you won't have the metalization up the edge thickness which many QFN's have to aid filleting (and incidentally give you a limited ability to do rework with an extremely fine-tip iron) If your PCB house will do it, the plated through holes cut in half by the board outline idea seen on some recent chip-carrier modules might be an interesting idea, as that would give you metalization up the thickness. I think you might have a fair shot of soldering that on with an iron or an air pencil.
35,625
I am attempting to replace an old PLCC32 part that was directly soldered to the board with a new part of undecided form. We will definitely need an adapter as we have not been able to find a PLCC32 part that does what we need. I cannot use a PLCC adapter plug because there are also height restrictions. We are considering building a two-sided adapter board that has pads on the bottom side that match the PLCC32 layout on the current board, with the new layout on top. Theoretically, the adapter board would be soldered directly to the old board and the new chip on top of the adapter. However, I have not seen any examples of soldering two PCBs directly together in this manner, which makes me think it is a likely to be a bad idea. Can anyone comment on this sort of custom adapter?
2012/07/12
[ "https://electronics.stackexchange.com/questions/35625", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5694/" ]
They make adapters for just about every footprint to any other footprint. And if it's not made, there are companies that will make one custom for you. But they are usually pretty expensive and, as you mentioned, tall. ![enter image description here](https://i.stack.imgur.com/jf536.jpg) Another option is deadbugging the chip. But looking at your other question, you have a production run of ~70K units, so this solution would seem impractical. The chances of a wire being placed incorrectly or a solder joint not holding (especially if subjected to vibration) are probably too great on a run that size. And when you factor in technician time, it is also pretty expensive. ![enter image description here](https://i.stack.imgur.com/uO2up.jpg) They do make BGA adapters, so something that is more solid than deadbugging and shorter than a normal adapter is possible. In order to accept another PLCC32, the board would probably need to be bigger than the original PLCC32 footprint and soldered using solder paste on the original pads and a reflow oven, like a BGA component would be. Then the new PLCC32 would be soldered on the adapter's pads. Again, expensive. ![enter image description here](https://i.stack.imgur.com/eG1uE.jpg) Your best bet would be to consider using a new chip with a smaller footprint, then having a small board made up that is the size of a PLCC32 with similar pins. I've seen something similar for 8051 ICEs. I couldn't find a good picture though. For a production run of the size you're talking about, I would at least price out respinning the board. Compared to the cost of a custom adapter plus technician time to install, the respin may be cheaper in the long run.
I'd consider a Ball Grid Array (BGA) IC package to be close to an example of that. It comes with solder-balls preplaced on the "component" PCB. Assembly is tricky, usually done via automated placement and hot air, frequently with preheat from below too. In your case you presumably would only have contacts around the periphery so inspection would be a bit easier. However you probably won't have the preformed solder balls. You might look at rework solutions for re-balling BGAs. There is also some similarity to a QFN package, which is usually soldered by depositing paste with a stencil and then using a similar external area heat source, however you won't have the metalization up the edge thickness which many QFN's have to aid filleting (and incidentally give you a limited ability to do rework with an extremely fine-tip iron) If your PCB house will do it, the plated through holes cut in half by the board outline idea seen on some recent chip-carrier modules might be an interesting idea, as that would give you metalization up the thickness. I think you might have a fair shot of soldering that on with an iron or an air pencil.
35,625
I am attempting to replace an old PLCC32 part that was directly soldered to the board with a new part of undecided form. We will definitely need an adapter as we have not been able to find a PLCC32 part that does what we need. I cannot use a PLCC adapter plug because there are also height restrictions. We are considering building a two-sided adapter board that has pads on the bottom side that match the PLCC32 layout on the current board, with the new layout on top. Theoretically, the adapter board would be soldered directly to the old board and the new chip on top of the adapter. However, I have not seen any examples of soldering two PCBs directly together in this manner, which makes me think it is a likely to be a bad idea. Can anyone comment on this sort of custom adapter?
2012/07/12
[ "https://electronics.stackexchange.com/questions/35625", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5694/" ]
No problem. I had to look for a picture that illustrates the technique: ![enter image description here](https://i.stack.imgur.com/URLw8.jpg) You make a PCB with plated through holes on the PLCC's pads, so at a 1.27 mm pitch, and mill the four sides so that you get the half holes like in the picture. These are easily solderable on the old PLCC footprint, it's an often used technique, called *castellation*. A picture of a complete board: ![enter image description here](https://i.stack.imgur.com/dYnNg.jpg) and another one: ![enter image description here](https://i.stack.imgur.com/Ohlgh.jpg) or this one from a question posted 1 minute ago: ![enter image description here](https://i.stack.imgur.com/HJpXL.jpg) You get the idea. You'll have to find a part which fits inside this small PCB, but given the miniaturization of the last years that may not be a problem. **edit** 2012-07-15 *QuestionMan* suggested to make the PCB a bit larger so that the PLCC's solder pads are under it. For BGAs the solder balls are also under the IC, but that's solid solder balls, not paste, and I don't know how solder paste will behave when squeezed between two PCBs. But today I bumped into this IC package: ![enter image description here](https://i.stack.imgur.com/v4QEO.png) It's the "Staggered Dual-row MicroLeadFrame® Package (MLF)" of the [ATMega8HVD](http://media.digikey.com/pdf/Data%20Sheets/Atmel%20PDFs/ATMEGA4HVD,8HVD.pdf), and it has pins under the IC as well. This is 3.5 mm x 6.5 mm, and weighs a lot less than the small PCB. That may be important, because thanks to the low weight capillary forces of the molten solder paste can pull the IC to its exact position. I'm not sure if that will also be the case for that PCB, and then positioning may be a problem.
Soldering a small PCB flat onto a larger PCB is possible. In fact, that's how many of the embedded radio modules are mounted ([example](http://www.rovingnetworks.com/products/RN_41), [example](http://www.anaren.com/sites/default/files/Part-Datasheets/A2500R24A_EM1.pdf)). The pads can be on the edge of the board (a via cut to form a half-cylinder\*), or the SMT pads can be directly underneath. \* see also [photo in stevenh's answer](https://electronics.stackexchange.com/a/35628/7036). Such a feature is called *castellation* (thanks, The Photon). Look also at [Aries Correct-a-Chip](http://www.arieselec.com/products/correct.htm) adapters. Some of them ([like this one](http://www.arieselec.com/Web_Data_Sheets/18045/18045.htm)) go from one SMT footprint to another SMT footprint. There are also companies that specialize in making custom adapters. [adapters-Plus](http://www.adapt-plus.com/), for example.
35,625
I am attempting to replace an old PLCC32 part that was directly soldered to the board with a new part of undecided form. We will definitely need an adapter as we have not been able to find a PLCC32 part that does what we need. I cannot use a PLCC adapter plug because there are also height restrictions. We are considering building a two-sided adapter board that has pads on the bottom side that match the PLCC32 layout on the current board, with the new layout on top. Theoretically, the adapter board would be soldered directly to the old board and the new chip on top of the adapter. However, I have not seen any examples of soldering two PCBs directly together in this manner, which makes me think it is a likely to be a bad idea. Can anyone comment on this sort of custom adapter?
2012/07/12
[ "https://electronics.stackexchange.com/questions/35625", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/5694/" ]
No problem. I had to look for a picture that illustrates the technique: ![enter image description here](https://i.stack.imgur.com/URLw8.jpg) You make a PCB with plated through holes on the PLCC's pads, so at a 1.27 mm pitch, and mill the four sides so that you get the half holes like in the picture. These are easily solderable on the old PLCC footprint, it's an often used technique, called *castellation*. A picture of a complete board: ![enter image description here](https://i.stack.imgur.com/dYnNg.jpg) and another one: ![enter image description here](https://i.stack.imgur.com/Ohlgh.jpg) or this one from a question posted 1 minute ago: ![enter image description here](https://i.stack.imgur.com/HJpXL.jpg) You get the idea. You'll have to find a part which fits inside this small PCB, but given the miniaturization of the last years that may not be a problem. **edit** 2012-07-15 *QuestionMan* suggested to make the PCB a bit larger so that the PLCC's solder pads are under it. For BGAs the solder balls are also under the IC, but that's solid solder balls, not paste, and I don't know how solder paste will behave when squeezed between two PCBs. But today I bumped into this IC package: ![enter image description here](https://i.stack.imgur.com/v4QEO.png) It's the "Staggered Dual-row MicroLeadFrame® Package (MLF)" of the [ATMega8HVD](http://media.digikey.com/pdf/Data%20Sheets/Atmel%20PDFs/ATMEGA4HVD,8HVD.pdf), and it has pins under the IC as well. This is 3.5 mm x 6.5 mm, and weighs a lot less than the small PCB. That may be important, because thanks to the low weight capillary forces of the molten solder paste can pull the IC to its exact position. I'm not sure if that will also be the case for that PCB, and then positioning may be a problem.
They make adapters for just about every footprint to any other footprint. And if it's not made, there are companies that will make one custom for you. But they are usually pretty expensive and, as you mentioned, tall. ![enter image description here](https://i.stack.imgur.com/jf536.jpg) Another option is deadbugging the chip. But looking at your other question, you have a production run of ~70K units, so this solution would seem impractical. The chances of a wire being placed incorrectly or a solder joint not holding (especially if subjected to vibration) are probably too great on a run that size. And when you factor in technician time, it is also pretty expensive. ![enter image description here](https://i.stack.imgur.com/uO2up.jpg) They do make BGA adapters, so something that is more solid than deadbugging and shorter than a normal adapter is possible. In order to accept another PLCC32, the board would probably need to be bigger than the original PLCC32 footprint and soldered using solder paste on the original pads and a reflow oven, like a BGA component would be. Then the new PLCC32 would be soldered on the adapter's pads. Again, expensive. ![enter image description here](https://i.stack.imgur.com/eG1uE.jpg) Your best bet would be to consider using a new chip with a smaller footprint, then having a small board made up that is the size of a PLCC32 with similar pins. I've seen something similar for 8051 ICEs. I couldn't find a good picture though. For a production run of the size you're talking about, I would at least price out respinning the board. Compared to the cost of a custom adapter plus technician time to install, the respin may be cheaper in the long run.
40,078
*My question is a bit broad and opinion based so I think I first have to add some of my research and analyzes in order for you to answer. I am not looking for an absolute answer, but more a "good guess" or speculations.* Most people knows about the [seven ahruf](https://islam.stackexchange.com/q/30508/15201), and as I generally understand it, is that the differences are pretty small and only grammatical, like for instance (2:85) "يَعْمَلُونَ" in Warsh and "تَعْمَلُونَ" in Hafs. In the reading of Warsh, the word: "كَثِيرًا" (kathiran) is used in a verse while Hafs has another word: "كَبِيرًا" (kabiran). These almost mean the same thing and the message wouldn't really be changed, but still, they are different words. In the [tafsir of tabari, verse 49:6](http://altafasir.com/al-quran/surat/49/al-hudjurat/6/al-tabari), it is mentioned that the word "فَتَبَيَّنُوا" (fatabayyano) was read "فَتَثَبَّتُوا" (fatathabbato) by the most reciters in Medina. Both words is argued to have the same meaning: > > واختلفت القرّاء في قراءة قوله: { فَتَبَيَّنُوا } فقرأ ذلك عامة قرّاء أهل المدينة «فَتَثَبَّتُوا» بالثاء، وذُكر أنها في مصحف عبد الله منقوطة بالثاء. وقرأ ذلك بعض القرّاء فتبيَّنوا بالباء، بمعنى: أمهلوا حتى تعرفوا صحته، لا تعجلوا بقبوله، وكذلك معنى «فَتَثَبَّتُوا». > > > --- The differences found in the [Sana'a manuscript](https://en.wikipedia.org/wiki/Sana%27a_manuscript), seems generally to be the same, however it seems other words are used more frequently and some words are added/deleted. For instance: > > ؛{يَـٰزَكَرِيَّا إِنَّا} قَد وَهَبْنَا لَكَ غُلٰماً زَكِيَّاً ۝ وَبَشَّرْنٰهُ {بِيَحْيیٰ لَمْ نَجْعَل ﻟَّ}ﻪُ مِن قَبْلُ سَمِيًّا > > > Sana'a manuscript (19:7) > > > In Hafs we have: > > يَا زَكَرِيَّا إِنَّا نُبَشِّرُكَ بِغُلَامٍ اسْمُهُ يَحْيَىٰ لَمْ نَجْعَل لَّهُ مِن قَبْلُ سَمِيًّا > > > In the Sana'a script, the wording differs a lot while the message still is the same. It also seems that some extra detailed words are added which doesn't exist in our texts today, lets look at one other verse in surat Maryam: > > فَنٰدٮٰهَا مِن تَحْتِهَـ/ـا مَلَكٌ/ أَلَّا تَحْزَنِى > > > This is what we read today (in Hafs): > > فَنَادَاهَا مِن تَحْتِهَا أَلَّا تَحْزَنِي > > But he called her from below her, "Do not grieve;..." > > > So in Sana'a, the word "مَلَكٌ" (malakon) is added, i.e given the meaning "The angel called her from below her". In the tafsirs, most scholars seems to say that it was either Jesus or an Angel (Jibril) who called her. If the manuscript was or is accepted, then the conclusion could be drawn that it indeed was an angel who called, not Jesus. But I also think that by accepting it, it would force us to rethink lots of things that we use while deriving conclusions. More similar verses mentioned; "صَوْماً وَصُمْتاً" while we say "صَوْماً" (19:26) ... Most commentators do explain though that sawman here means "sawtan". An example of a removed word is "وَعَلَّمْنٰهُ الْحُكْمَ" while we say: "وَآتَيْنَاهُ الْحُكْمَ **صَبِيًّا**". You find more examples [here](https://en.wikipedia.org/wiki/Sana'a_manuscript). **Is it likely that the sana'a manuscript was an accepted reading of the Quran?**
2017/05/27
[ "https://islam.stackexchange.com/questions/40078", "https://islam.stackexchange.com", "https://islam.stackexchange.com/users/15201/" ]
Please refer to the answers to the question [What are the readings (qira'at) of Quran?](https://islam.stackexchange.com/questions/2676/what-are-the-readings-qiraat-of-quran) and the papers [The Codex Of A Companion Of The Prophet](https://archive.org/stream/130854520TheCodexOfACompanionOfTheProphetSAWBenhamSadeghiBergmann/130854520-The-codex-of-a-companion-of-the-Prophet-SAW-Benham-Sadeghi-Bergmann#page/n0/mode/2up) and [Sanaa And The Origins Of The Quran](https://archive.org/stream/110978941Sanaa1AndTheOriginsOfTheQurAn/110978941-Sanaa-1-and-the-Origins-of-the-Qur-An#page/n0/mode/2up). In the paper Sanaa And The Origins Of The Quran, it is suggested that the Sanaa manuscript does not completely fit a (now) known Qiraat, though some variations are shared with known readings and the variations in general are similar to the variations documented of known readings. > > The C-1 type shares a number of variants with those reported for the > codices of Abdallah b. Masud and Ubayy b. Kaab, and these are listed > in Appendix 1. These constitute a minority among its variants, as C-1 > does not share the vast majority of its variants with these codices. > Nor are most of their variants found in C-1. Thus, C-1 represents a > text type of its own, a distinct “Companion codex.” > > > C-1 confirms the reliability of much of what has been reported about > the other Companion codices not only because it shares some variants > with them, but also because its variants are of the same kinds as > those reported for those codices. > > > ... > > > The fact that all these features are found both in the codex of Ibn > Masud, as described by al-Amash, and in C-1 establishes that the > literary sources preserve information about codices that actually > existed. > > > Pages [116-122](https://archive.org/stream/110978941Sanaa1AndTheOriginsOfTheQurAn/110978941-Sanaa-1-and-the-Origins-of-the-Qur-An#page/n115/mode/2up) carry a list of differences that match with known Qiraat variants. Similarly, in the other paper it is noted that: > > In terms of wording, the lower text also agrees with reported > non-Utm̠anic variants in a few cases, as shown in Table 4; however, > as a rule, reported non-Utm̠anic variants do not appear in C-1, nor > are the variants of C-1 reported in the sources. Thus C-1 should not > be identified with the codices whose variants have been described in > the literary sources (Ibn Masud or Ubayy b. Kaʿb); it represents an > independent codex, text type, and textual tradition. > > > ... > > > In general, every type of variant found in C-1 is found also in Ibn > Masud. However, Ibn Masud also has some higher-tier types not found > in C-1. > > > **Is it likely that the sana'a manuscript was an accepted reading of the Quran?** The manuscript is dated to the time of the Sahabah[\*](https://archive.org/stream/130854520TheCodexOfACompanionOfTheProphetSAWBenhamSadeghiBergmann/130854520-The-codex-of-a-companion-of-the-Prophet-SAW-Benham-Sadeghi-Bergmann#page/n9/mode/2up), and the variations are similar to what Islamic tradition ascribes to some of their copies. Whether it was "accepted" by the majority or how much of it was approved by the Prophet and how much is due to scribal lapses is unknowable without further finds.
It seems to be someone's personal notes: he adds some words to aid understanding, and in some places removes words he doesn't find important enough to be written. Another possibility is that he may simply have forgotten to write them. The lower text isn't the Quran itself, but it does include parts of the Quran.
15,427
The title largely sums up my question, what does happen if you either x-ray an x-ray, or point two x-ray generators at each other?
2011/10/06
[ "https://physics.stackexchange.com/questions/15427", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4880/" ]
X-rays are electromagnetic waves, just as light rays are. The difference is in the wavelength (and thus the frequency and energy ![Spectrum](https://i.stack.imgur.com/1sQp0.png)). So your question has the same answer as "What happens if you shine light on light" or "What happens if you point a light ray at a light ray". Classically, you will see the same effects you see with usual light rays: interference, diffraction, etc. On a quantum level, you will even be able to see direct interaction (light with light, or analogously x-ray with x-ray), as described in quantum electrodynamics (QED).
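For reference, the wavelength-frequency-energy link mentioned above is the standard photon relation (textbook physics, not quoted from the original answer):

```latex
% Photon energy in terms of frequency \nu and wavelength \lambda
E = h\nu = \frac{hc}{\lambda},
\qquad \lambda = 0.1\,\text{nm} \;\Rightarrow\; E \approx 12.4\,\text{keV (an X-ray)},
\qquad \lambda = 500\,\text{nm} \;\Rightarrow\; E \approx 2.5\,\text{eV (visible light)}.
```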
X-rays can interfere; that is the basis of Bragg's law. For that, however, the two x-rays have to be coherent, and for that they have to come from the same source, the source has to be small and far away, etc. People have also done different types of double-slit experiments with x-rays. Again, the x-rays have to come from the same source, and that source has to fulfill a few other criteria to be coherent. Grating interferometry is becoming a more and more popular technique. <http://www.psi.ch/lmn/grating-based-x-ray-interferometry> If you point two x-ray generators at each other, then nothing happens. Photon-photon scattering can happen, but the cross section is extremely small. To the best of my knowledge nobody has ever observed direct experimental evidence for this. <http://en.wikipedia.org/wiki/Two-photon_physics>
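For completeness, the Bragg's law referred to above is the standard condition for constructive interference of X-rays scattered from crystal planes with spacing d (textbook form, added here for clarity):

```latex
% Bragg condition: n-th order constructive interference at glancing angle \theta
n\lambda = 2d\sin\theta, \qquad n = 1, 2, 3, \dots
```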
105,775
Shown below are four country flags: Brazil, Australia, Philippines and Montenegro There is a very small thing wrong with each of the flags. It is not the Flag dimensions or colors. Something very obvious.:) Can you point it out? No partial answers please [![enter image description here](https://i.stack.imgur.com/jUzFP.png)](https://i.stack.imgur.com/jUzFP.png) [![enter image description here](https://i.stack.imgur.com/fb14G.png)](https://i.stack.imgur.com/fb14G.png) [![enter image description here](https://i.stack.imgur.com/0oeTA.png)](https://i.stack.imgur.com/0oeTA.png) [![enter image description here](https://i.stack.imgur.com/O735D.png)](https://i.stack.imgur.com/O735D.png)
2020/12/17
[ "https://puzzling.stackexchange.com/questions/105775", "https://puzzling.stackexchange.com", "https://puzzling.stackexchange.com/users/34419/" ]
All the other answers seem to be focusing on the flags in the pictures, themselves, but the question says to look for something *obvious* that is the same (I believe) throughout all four images. If you place all images against a black background, you can see what's obviously wrong: > > [![flags](https://i.stack.imgur.com/fLS3C.png)](https://i.stack.imgur.com/fLS3C.png) > > > > > They all have a white border that is not present in the actual countries' flags. Pretty sneaky considering they just look like margins when viewed on a white background, like a PSE question...unless they are meant to be margins and I've got this all wrong. > > >
In the Australian flag: > > One of the stars is actually a five-point star. > > > In the Brazilian flag: > > The blue ellipse is oriented wrong > > >
105,775
Shown below are four country flags: Brazil, Australia, Philippines and Montenegro There is a very small thing wrong with each of the flags. It is not the Flag dimensions or colors. Something very obvious.:) Can you point it out? No partial answers please [![enter image description here](https://i.stack.imgur.com/jUzFP.png)](https://i.stack.imgur.com/jUzFP.png) [![enter image description here](https://i.stack.imgur.com/fb14G.png)](https://i.stack.imgur.com/fb14G.png) [![enter image description here](https://i.stack.imgur.com/0oeTA.png)](https://i.stack.imgur.com/0oeTA.png) [![enter image description here](https://i.stack.imgur.com/O735D.png)](https://i.stack.imgur.com/O735D.png)
2020/12/17
[ "https://puzzling.stackexchange.com/questions/105775", "https://puzzling.stackexchange.com", "https://puzzling.stackexchange.com/users/34419/" ]
Here are the mistakes:

> 
> [![enter image description here](https://i.stack.imgur.com/lHteN.jpg)](https://i.stack.imgur.com/lHteN.jpg)
> 
> 

Brazil:

> 
> the “E” is larger [![enter image description here](https://i.stack.imgur.com/zbrbF.jpg)](https://i.stack.imgur.com/zbrbF.jpg)
> 
> 

Australia:

> 
> the smallest star should have five points [![enter image description here](https://i.stack.imgur.com/0NzMd.jpg)](https://i.stack.imgur.com/0NzMd.jpg)
> 
> 

Philippines:

> 
> the stars should be pointing in different directions [![enter image description here](https://i.stack.imgur.com/SXZYv.jpg)](https://i.stack.imgur.com/SXZYv.jpg)
> 
> 

Montenegro:

> 
> the tongues should only be outlined in gold, and the crown should have a different shape (as circled) [![enter image description here](https://i.stack.imgur.com/u1Rgm.jpg)](https://i.stack.imgur.com/u1Rgm.jpg)
> 
> 
In the Australian flag: > > One of the stars is actually a five-point star. > > > In the Brazilian flag: > > The blue ellipse is oriented wrong > > >
105,775
Shown below are four country flags: Brazil, Australia, Philippines and Montenegro There is a very small thing wrong with each of the flags. It is not the Flag dimensions or colors. Something very obvious.:) Can you point it out? No partial answers please [![enter image description here](https://i.stack.imgur.com/jUzFP.png)](https://i.stack.imgur.com/jUzFP.png) [![enter image description here](https://i.stack.imgur.com/fb14G.png)](https://i.stack.imgur.com/fb14G.png) [![enter image description here](https://i.stack.imgur.com/0oeTA.png)](https://i.stack.imgur.com/0oeTA.png) [![enter image description here](https://i.stack.imgur.com/O735D.png)](https://i.stack.imgur.com/O735D.png)
2020/12/17
[ "https://puzzling.stackexchange.com/questions/105775", "https://puzzling.stackexchange.com", "https://puzzling.stackexchange.com/users/34419/" ]
Brazil:

> 
> the image has the wrong resolution. Also, the word "E" is enlarged.
> 
> 

Australia:

> 
> the small star should have only 5 points
> 
> 

Philippines:

> 
> the small stars should have a point pointing at the nearest corner of the white triangle
> 
> 

Montenegro:

> 
> The crown is the wrong shape.
> 
> 
Here are the mistakes:

> 
> [![enter image description here](https://i.stack.imgur.com/lHteN.jpg)](https://i.stack.imgur.com/lHteN.jpg)
> 
> 

Brazil:

> 
> the “E” is larger [![enter image description here](https://i.stack.imgur.com/zbrbF.jpg)](https://i.stack.imgur.com/zbrbF.jpg)
> 
> 

Australia:

> 
> the smallest star should have five points [![enter image description here](https://i.stack.imgur.com/0NzMd.jpg)](https://i.stack.imgur.com/0NzMd.jpg)
> 
> 

Philippines:

> 
> the stars should be pointing in different directions [![enter image description here](https://i.stack.imgur.com/SXZYv.jpg)](https://i.stack.imgur.com/SXZYv.jpg)
> 
> 

Montenegro:

> 
> the tongues should only be outlined in gold, and the crown should have a different shape (as circled) [![enter image description here](https://i.stack.imgur.com/u1Rgm.jpg)](https://i.stack.imgur.com/u1Rgm.jpg)
> 
> 
105,775
Shown below are four country flags: Brazil, Australia, Philippines and Montenegro There is a very small thing wrong with each of the flags. It is not the Flag dimensions or colors. Something very obvious.:) Can you point it out? No partial answers please [![enter image description here](https://i.stack.imgur.com/jUzFP.png)](https://i.stack.imgur.com/jUzFP.png) [![enter image description here](https://i.stack.imgur.com/fb14G.png)](https://i.stack.imgur.com/fb14G.png) [![enter image description here](https://i.stack.imgur.com/0oeTA.png)](https://i.stack.imgur.com/0oeTA.png) [![enter image description here](https://i.stack.imgur.com/O735D.png)](https://i.stack.imgur.com/O735D.png)
2020/12/17
[ "https://puzzling.stackexchange.com/questions/105775", "https://puzzling.stackexchange.com", "https://puzzling.stackexchange.com/users/34419/" ]
Brazil ------ > > The E in the national motto has been made larger. Normally the E should be smaller than the other letters. > > > Australia --------- > > The smallest star has seven points when it should have five. > > > Philippines ----------- > > The stars are all pointing in the same direction when they should all be pointing towards the central sun. > > > Montenegro ---------- > > The tongues of the eagles are colored in (gold), when only the outline should be visible (the interior should thus be red like the background). > > >
Your version vs the original flag > > [![enter image description here](https://i.stack.imgur.com/F1QZ9.png)](https://i.stack.imgur.com/F1QZ9.png) > > > > > On the Montenegro flag, there seem to be many differences (I don't know if this is due to different types of the same flag or not) > > >
105,775
Shown below are four country flags: Brazil, Australia, Philippines and Montenegro There is a very small thing wrong with each of the flags. It is not the Flag dimensions or colors. Something very obvious.:) Can you point it out? No partial answers please [![enter image description here](https://i.stack.imgur.com/jUzFP.png)](https://i.stack.imgur.com/jUzFP.png) [![enter image description here](https://i.stack.imgur.com/fb14G.png)](https://i.stack.imgur.com/fb14G.png) [![enter image description here](https://i.stack.imgur.com/0oeTA.png)](https://i.stack.imgur.com/0oeTA.png) [![enter image description here](https://i.stack.imgur.com/O735D.png)](https://i.stack.imgur.com/O735D.png)
2020/12/17
[ "https://puzzling.stackexchange.com/questions/105775", "https://puzzling.stackexchange.com", "https://puzzling.stackexchange.com/users/34419/" ]
Brazil ------ > > The E in the national motto has been made larger. Normally the E should be smaller than the other letters. > > > Australia --------- > > The smallest star has seven points when it should have five. > > > Philippines ----------- > > The stars are all pointing in the same direction when they should all be pointing towards the central sun. > > > Montenegro ---------- > > The tongues of the eagles are colored in (gold), when only the outline should be visible (the interior should thus be red like the background). > > >
Brazil > > Hard to tell due to the resolution but I think there is at least one four-pointed star (fourth from left) where the original Brazilian flag has all five-pointed stars. > > > Australia > > is supposed to have one five-pointed star on it, while in the image above, all stars have seven points > > > Philippines > > The three yellow-pointed stars are supposed to be rotated slightly with respect to their positions in the image above > > > Montenegro > > The number of feathers extending out on either wing seems to be 11 in the picture above where it should be 13. > > >
105,775
Shown below are four country flags: Brazil, Australia, Philippines and Montenegro There is a very small thing wrong with each of the flags. It is not the Flag dimensions or colors. Something very obvious.:) Can you point it out? No partial answers please [![enter image description here](https://i.stack.imgur.com/jUzFP.png)](https://i.stack.imgur.com/jUzFP.png) [![enter image description here](https://i.stack.imgur.com/fb14G.png)](https://i.stack.imgur.com/fb14G.png) [![enter image description here](https://i.stack.imgur.com/0oeTA.png)](https://i.stack.imgur.com/0oeTA.png) [![enter image description here](https://i.stack.imgur.com/O735D.png)](https://i.stack.imgur.com/O735D.png)
2020/12/17
[ "https://puzzling.stackexchange.com/questions/105775", "https://puzzling.stackexchange.com", "https://puzzling.stackexchange.com/users/34419/" ]
Brazil:

> 
> the image has the wrong resolution. Also, the word "E" is enlarged.
> 
> 

Australia:

> 
> the small star should have only 5 points
> 
> 

Philippines:

> 
> the small stars should have a point pointing at the nearest corner of the white triangle
> 
> 

Montenegro:

> 
> The crown is the wrong shape.
> 
> 
Brazil > > Hard to tell due to the resolution but I think there is at least one four-pointed star (fourth from left) where the original Brazilian flag has all five-pointed stars. > > > Australia > > is supposed to have one five-pointed star on it, while in the image above, all stars have seven points > > > Philippines > > The three yellow-pointed stars are supposed to be rotated slightly with respect to their positions in the image above > > > Montenegro > > The number of feathers extending out on either wing seems to be 11 in the picture above where it should be 13. > > >
105,775
Shown below are four country flags: Brazil, Australia, Philippines and Montenegro There is a very small thing wrong with each of the flags. It is not the Flag dimensions or colors. Something very obvious.:) Can you point it out? No partial answers please [![enter image description here](https://i.stack.imgur.com/jUzFP.png)](https://i.stack.imgur.com/jUzFP.png) [![enter image description here](https://i.stack.imgur.com/fb14G.png)](https://i.stack.imgur.com/fb14G.png) [![enter image description here](https://i.stack.imgur.com/0oeTA.png)](https://i.stack.imgur.com/0oeTA.png) [![enter image description here](https://i.stack.imgur.com/O735D.png)](https://i.stack.imgur.com/O735D.png)
2020/12/17
[ "https://puzzling.stackexchange.com/questions/105775", "https://puzzling.stackexchange.com", "https://puzzling.stackexchange.com/users/34419/" ]
Brazil ------ > > The E in the national motto has been made larger. Normally the E should be smaller than the other letters. > > > Australia --------- > > The smallest star has seven points when it should have five. > > > Philippines ----------- > > The stars are all pointing in the same direction when they should all be pointing towards the central sun. > > > Montenegro ---------- > > The tongues of the eagles are colored in (gold), when only the outline should be visible (the interior should thus be red like the background). > > >
Here are the mistakes: > > [![enter image description here](https://i.stack.imgur.com/lHteN.jpg)](https://i.stack.imgur.com/lHteN.jpg) > > > Brazil: > > the “E” is larger [![enter image description here](https://i.stack.imgur.com/zbrbF.jpg)](https://i.stack.imgur.com/zbrbF.jpg) > > > Australia: > > the smallest star should have five points [![enter image description here](https://i.stack.imgur.com/0NzMd.jpg)](https://i.stack.imgur.com/0NzMd.jpg) > > > Philippines: > > the stars should be pointing in different directions [![enter image description here](https://i.stack.imgur.com/SXZYv.jpg)](https://i.stack.imgur.com/SXZYv.jpg) > > > Montenegro: > > the tongues should only be outlined in gold, and the crown should have a different shape (as circled) [![enter image description here](https://i.stack.imgur.com/u1Rgm.jpg)](https://i.stack.imgur.com/u1Rgm.jpg) > > >
105,775
Shown below are four country flags: Brazil, Australia, Philippines and Montenegro There is a very small thing wrong with each of the flags. It is not the Flag dimensions or colors. Something very obvious.:) Can you point it out? No partial answers please [![enter image description here](https://i.stack.imgur.com/jUzFP.png)](https://i.stack.imgur.com/jUzFP.png) [![enter image description here](https://i.stack.imgur.com/fb14G.png)](https://i.stack.imgur.com/fb14G.png) [![enter image description here](https://i.stack.imgur.com/0oeTA.png)](https://i.stack.imgur.com/0oeTA.png) [![enter image description here](https://i.stack.imgur.com/O735D.png)](https://i.stack.imgur.com/O735D.png)
2020/12/17
[ "https://puzzling.stackexchange.com/questions/105775", "https://puzzling.stackexchange.com", "https://puzzling.stackexchange.com/users/34419/" ]
Brazil ------ > > The E in the national motto has been made larger. Normally the E should be smaller than the other letters. > > > Australia --------- > > The smallest star has seven points when it should have five. > > > Philippines ----------- > > The stars are all pointing in the same direction when they should all be pointing towards the central sun. > > > Montenegro ---------- > > The tongues of the eagles are colored in (gold), when only the outline should be visible (the interior should thus be red like the background). > > >
All the other answers seem to be focusing on the flags in the pictures, themselves, but the question says to look for something *obvious* that is the same (I believe) throughout all four images. If you place all images against a black background, you can see what's obviously wrong: > > [![flags](https://i.stack.imgur.com/fLS3C.png)](https://i.stack.imgur.com/fLS3C.png) > > > > > They all have a white border that is not present in the actual countries' flags. Pretty sneaky considering they just look like margins when viewed on a white background, like a PSE question...unless they are meant to be margins and I've got this all wrong. > > >
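If you'd rather confirm the hidden border programmatically than by squinting against a black background, a minimal sketch along these lines would do it. The filename and the Pillow dependency are assumptions for illustration, not part of the puzzle:

```python
# Minimal sketch (assumed local filename, requires Pillow): report whether an
# image is framed by white pixels along all four edges.
from PIL import Image

def has_white_border(path, tolerance=10):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    px = img.load()
    # Walk every pixel on the top, bottom, left and right edges.
    edges = (
        [(x, 0) for x in range(w)] + [(x, h - 1) for x in range(w)] +
        [(0, y) for y in range(h)] + [(w - 1, y) for y in range(h)]
    )
    # A pixel counts as white if every channel is close to 255.
    return all(all(c >= 255 - tolerance for c in px[x, y]) for x, y in edges)

print(has_white_border("jUzFP.png"))  # hypothetical local copy of the Brazil image
```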
105,775
Shown below are four country flags: Brazil, Australia, Philippines and Montenegro There is a very small thing wrong with each of the flags. It is not the Flag dimensions or colors. Something very obvious.:) Can you point it out? No partial answers please [![enter image description here](https://i.stack.imgur.com/jUzFP.png)](https://i.stack.imgur.com/jUzFP.png) [![enter image description here](https://i.stack.imgur.com/fb14G.png)](https://i.stack.imgur.com/fb14G.png) [![enter image description here](https://i.stack.imgur.com/0oeTA.png)](https://i.stack.imgur.com/0oeTA.png) [![enter image description here](https://i.stack.imgur.com/O735D.png)](https://i.stack.imgur.com/O735D.png)
2020/12/17
[ "https://puzzling.stackexchange.com/questions/105775", "https://puzzling.stackexchange.com", "https://puzzling.stackexchange.com/users/34419/" ]
Brazil > > Hard to tell due to the resolution but I think there is at least one four-pointed star (fourth from left) where the original Brazilian flag has all five-pointed stars. > > > Australia > > is supposed to have one five-pointed star on it, while in the image above, all stars have seven points > > > Philippines > > The three yellow-pointed stars are supposed to be rotated slightly with respect to their positions in the image above > > > Montenegro > > The number of feathers extending out on either wing seems to be 11 in the picture above where it should be 13. > > >
In the Australian flag: > > One of the stars is actually a five-point star. > > > In the Brazilian flag: > > The blue ellipse is oriented wrong > > >
105,775
Shown below are four country flags: Brazil, Australia, Philippines and Montenegro There is a very small thing wrong with each of the flags. It is not the Flag dimensions or colors. Something very obvious.:) Can you point it out? No partial answers please [![enter image description here](https://i.stack.imgur.com/jUzFP.png)](https://i.stack.imgur.com/jUzFP.png) [![enter image description here](https://i.stack.imgur.com/fb14G.png)](https://i.stack.imgur.com/fb14G.png) [![enter image description here](https://i.stack.imgur.com/0oeTA.png)](https://i.stack.imgur.com/0oeTA.png) [![enter image description here](https://i.stack.imgur.com/O735D.png)](https://i.stack.imgur.com/O735D.png)
2020/12/17
[ "https://puzzling.stackexchange.com/questions/105775", "https://puzzling.stackexchange.com", "https://puzzling.stackexchange.com/users/34419/" ]
Your version vs the original flag > > [![enter image description here](https://i.stack.imgur.com/F1QZ9.png)](https://i.stack.imgur.com/F1QZ9.png) > > > > > On the Montenegro flag, there seem to be many differences (I don't know if this is due to different types of the same flag or not) > > >
In the Australian flag: > > One of the stars is actually a five-point star. > > > In the Brazilian flag: > > The blue ellipse is oriented wrong > > >
343,806
I want to play Minecraft but I have one controller only. I do have a PC as well. Can I play PC and PS4 Minecraft?
2018/12/22
[ "https://gaming.stackexchange.com/questions/343806", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/224150/" ]
With Minecraft Java Edition, no. With Minecraft Bedrock (on Switch, Windows 10, Xbox One), not yet, but maybe it will happen in a future update :)
I think it could be possible if you use that PS4 connection thing. I don't know the name, but Sony made it themselves, and with it you'll be able to play on your PS4 using your PC. Maybe you can then both play on the same PS4, one using the PC and one the actual PS4. I'm not sure if this works though, I haven't tested it yet.
343,806
I want to play Minecraft but I have one controller only. I do have a PC as well. Can I play PC and PS4 Minecraft?
2018/12/22
[ "https://gaming.stackexchange.com/questions/343806", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/224150/" ]
No, but you can cross-play with Switch or Xbox. Sony is hoping to bring cross-platform play within an update coming to Better Together soon.
Simply put, no. Minecraft is unfortunately not cross-platform, even though it's something we've all wanted for quite some time. Or at least, that's what I've experienced.
343,806
I want to play Minecraft but I have one controller only. I do have a PC as well. Can I play PC and PS4 Minecraft?
2018/12/22
[ "https://gaming.stackexchange.com/questions/343806", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/224150/" ]
With Minecraft Java Edition, no. With Minecraft Bedrock (on Switch, Windows 10, Xbox One), not yet, but maybe it will happen in a future update :)
Unfortunately, if you are on Java Edition then no, but on Bedrock Edition, yes, absolutely! Cross-platform play works 100% between Xbox and PC, but you might be able to play with PS4 if you sign in with your Microsoft account.
343,806
I want to play Minecraft but I have one controller only. I do have a PC as well. Can I play PC and PS4 Minecraft?
2018/12/22
[ "https://gaming.stackexchange.com/questions/343806", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/224150/" ]
With Minecraft Java Edition, no. With Minecraft Bedrock (on Switch, Windows 10, Xbox One), not yet, but maybe it will happen in a future update :)
You have to download a special proxy called GeyserMC (<https://geysermc.org/>). You can search for YouTube videos on how to set it up ^-^ hope I helped a little!
343,806
I want to play Minecraft but I have one controller only. I do have a PC as well. Can I play PC and PS4 Minecraft?
2018/12/22
[ "https://gaming.stackexchange.com/questions/343806", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/224150/" ]
No, but you can cross-play with Switch or Xbox. Sony is hoping to bring cross-platform play within an update coming to Better Together soon.
Unfortunately, if you are on Java Edition then no, but on Bedrock Edition, yes, absolutely! Cross-platform play works 100% between Xbox and PC, but you might be able to play with PS4 if you sign in with your Microsoft account.
343,806
I want to play Minecraft but I have one controller only. I do have a PC as well. Can I play PC and PS4 Minecraft?
2018/12/22
[ "https://gaming.stackexchange.com/questions/343806", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/224150/" ]
With Java, you can use a program called GeyserMC to connect some Bedrock clients to Java. Not sure if it works for PS4, though.
I think it could be possible if you use that PS4 connection thing. I don't know the name, but Sony made it themselves, and with it you'll be able to play on your PS4 using your PC. Maybe you can then both play on the same PS4, one using the PC and one the actual PS4. I'm not sure if this works though, I haven't tested it yet.
343,806
I want to play Minecraft but I have one controller only. I do have a PC as well. Can I play PC and PS4 Minecraft?
2018/12/22
[ "https://gaming.stackexchange.com/questions/343806", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/224150/" ]
Unfortunately, if you are on Java Edition then no, but on Bedrock Edition, yes, absolutely! Cross-platform play works 100% between Xbox and PC, but you might be able to play with PS4 if you sign in with your Microsoft account.
You have to download a special proxy called GeyserMC (<https://geysermc.org/>). You can search for YouTube videos on how to set it up ^-^ hope I helped a little!
343,806
I want to play Minecraft but I have one controller only. I do have a PC as well. Can I play PC and PS4 Minecraft?
2018/12/22
[ "https://gaming.stackexchange.com/questions/343806", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/224150/" ]
I think it could be possible if you use that PS4 connection thing. I don't know the name, but Sony made it themselves, and with it you'll be able to play on your PS4 using your PC. Maybe you can then both play on the same PS4, one using the PC and one the actual PS4. I'm not sure if this works though, I haven't tested it yet.
You have to download a special proxy called GeyserMC (<https://geysermc.org/>). You can search for YouTube videos on how to set it up ^-^ hope I helped a little!
343,806
I want to play Minecraft but I have one controller only. I do have a PC as well. Can I play PC and PS4 Minecraft?
2018/12/22
[ "https://gaming.stackexchange.com/questions/343806", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/224150/" ]
Yes === Yes, if the person on PC is playing Windows 10 Edition, and not Java Edition.
You have to download a special proxy called GeyserMC (<https://geysermc.org/>). You can search for YouTube videos on how to set it up ^-^ hope I helped a little!
343,806
I want to play Minecraft but I have one controller only. I do have a PC as well. Can I play PC and PS4 Minecraft?
2018/12/22
[ "https://gaming.stackexchange.com/questions/343806", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/224150/" ]
With Java, you can use a program called GeyserMC to connect some Bedrock clients to Java. Not sure if it works for PS4, though.
Unfortunately, if you are on Java Edition then no, but on Bedrock Edition, yes, absolutely! Cross-platform play works 100% between Xbox and PC, but you might be able to play with PS4 if you sign in with your Microsoft account.
749,669
Two interfaces for a reporting engine are possible: 1. a SQL-based interface for SQL-savvy users, and 2. a non-SQL interface for normal users who are not comfortable with SQL. The database is very large, so how do I go about designing option 2, the non-SQL interface? What would it look like?
2009/04/14
[ "https://Stackoverflow.com/questions/749669", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
If you're using SQL Server 2005 or higher, you may want to consider the [ReportBuilder](http://msdn.microsoft.com/en-us/library/ms155933.aspx) supplied as part of [Reporting Services](http://www.microsoft.com/sqlserver/2008/en/us/reporting.aspx). You just need to build a 'business friendly' schema (known as a 'DataSource View') then auto-build a [Report Model](http://www.mssqltips.com/tip.asp?tip=1115) on top. The users just connect to the Report Model using the Report Builder tool and they can create their own reports. If you already have SQL Server, then the additional costs would be minimal.
You need an easy way to build SQL queries. Look at the wizards in all the desktop databases, but something that isn't paged might be more intuitive, e.g. <http://ruleeditor.googlecode.com/svn/wiki/NSRuleEditor_Tiger.png> (not affiliated)
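To make the non-SQL option more concrete, here is a minimal sketch of what such a wizard might do behind the scenes: the user picks a table, columns and filters from dropdowns, and the tool assembles a parameterized query. The table and column names are hypothetical, and this is only an illustration of the idea, not a specific product's behaviour:

```python
# Minimal sketch: turn point-and-click selections into a parameterized SQL query.
# Table and column names are hypothetical; identifiers are whitelisted and values
# are bound as parameters, so the non-SQL user never types raw SQL.
ALLOWED = {"orders": {"id", "customer", "total", "created_at"}}

def build_query(table, columns, filters):
    if table not in ALLOWED or not set(columns) <= ALLOWED[table]:
        raise ValueError("unknown table or column")
    where, params = [], []
    for col, op, value in filters:
        if col not in ALLOWED[table] or op not in {"=", "<", ">", "LIKE"}:
            raise ValueError("disallowed filter")
        where.append(f"{col} {op} ?")
        params.append(value)
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if where:
        sql += " WHERE " + " AND ".join(where)
    return sql, params

print(build_query("orders", ["customer", "total"], [("total", ">", 100)]))
# -> ('SELECT customer, total FROM orders WHERE total > ?', [100])
```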
143,059
I have a question regarding the weapon feature "Brace". Some weapons have the special weapon feature "Brace", for example a simple [spear](https://www.d20pfsrd.com/equipment/weapons/weapon-descriptions/spear/). How to use it (how I understand it) ----------------------------------- On your turn, you take the standard action "[Ready](https://www.d20pfsrd.com/Gamemastering/Combat/#TOC-Ready)". > > To do so, specify the action you will take and the conditions under which you will take it. > > > You specify the action (I will ready my spear) and the condition (I am attacked by a [Charge](https://www.d20pfsrd.com/gamemastering/combat#TOC-Charge)). Then, you wait until the condition happens and take your action (before the triggering action is resolved). You can now attack a charging enemy with a standard action (so no multiple attacks, if you are able to do so), but deal double damage. If you manage to kill the charging enemy, it does not get to do damage against you (because you interrupted its action). If not, you still deal double damage but receive the charge / the melee attack normally. Questions --------- 1. Do I understand readying and charging correctly? 2. Main Question: Isn't it a bit awkward playing out in a real-life (haha) fight situation? The player has to assume that they are being charged this round; otherwise they would have wasted their turn. The GM, playing the monsters, has to decide whether she lets her monster run into the brace or not. Does it boil down to the monster strategy "During combat" as written in the monster description? Is there a check a monster can make, or fail, to notice whether a PC has braced a weapon against a charge? (And vice versa?) Thank you all!
2019/03/13
[ "https://rpg.stackexchange.com/questions/143059", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/29132/" ]
1. Yes. 2. Yes. I have literally never seen this option used. In fact, the only real use of it I can imagine is, basically, the real-life one: an army of low-level mooks using it to make the charge itself suicidal. At low levels, charging into three double-damage attacks (from your target and from each mook on either side of them), probably plus three attacks of opportunity (from the same people after their readied action) will likely get you killed, so you probably won’t do it, so it can protect the army as a whole. By low-mid levels, though, there are just so many ways of breaking that formation that it becomes meaningless. Even at low levels, you could just *walk* up to the wall instead of charging, negating the effect. Three attacks of opportunity could be rough, but if you are facing an entire *army* presumably your defenses are far greater than their offenses (or else you shouldn’t be trying to solo that army). Outside of a formation like that, though, there just isn’t really any reason to even try it. Certainly, every single PC ever should have something better to do with their turn, just about *every* turn. You can imagine some really contrived scenarios where it becomes a more conceivable choice, but like I said, I’ve never seen any of those actually *happen*. Maybe some kind of 1st-level bodyguard for a squishy mage, so you stay adjacent and let threats come to you? With a chokepoint so they can’t just go around you and charge your ward. Charges are kind of dangerous, so it’s something. Just really hard to do without an army around you, and at an immense opportunity cost. Ultimately, though, D&D 3.5e and Pathfinder don’t always have rules because the rules are supposed to be good ideas or useful options. A lot of times, they have rules just because this is a tactic you ought to be able to do, so it should have rules for doing it. Often, those rules are implemented in a way that just kind of passes a “gut check,” seems to make sense to the authors, and no rigorous analysis of whether or not it’s a fair choice is ever made. So the question itself is kind of off-base with its very premise: there is no particular promise made that any given option offered by the rules is going to be worth using. For things that cost resources to get (feats, spells, magic items, and so on), there is *supposed to be* more of a guarantee that it’ll be worth something, but the reality is that 90% of them are crap and aren’t actually worth their cost even when the game says they are or should be (and in at least a few cases, Paizo has explicitly said that things *aren’t* supposed to be worth their cost—exotic weapons, for a notorious example). For “free” stuff like this, the game doesn’t even pretend to say that.
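To put a rough number on the "suicidal charge" claim above, here is a back-of-the-envelope sketch. Every figure in it (the mooks' attack bonus, damage die and the charger's AC) is an assumption chosen purely for illustration; the answer above does not specify them:

```python
# Rough sketch of the "wall of braced mooks" math. Assumed numbers: 1st-level
# spear mooks at +2 to hit for 1d8+1 (average 5.5), charger with AC 14.
hit_chance = (21 - (14 - 2)) / 20        # +2 attack vs AC 14 hits on 12+, i.e. 0.45
avg_spear = 5.5                          # average of 1d8+1

braced = 3 * hit_chance * 2 * avg_spear  # three readied brace attacks at double damage
aoo    = 3 * hit_chance * avg_spear      # plus three normal attacks of opportunity
print(round(braced + aoo, 1))            # 22.3 expected damage, enough to drop most 1st-level PCs
```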
I think there is some misunderstanding of the technique based on word choice. When you brace a spear or pike you do not "attack" anybody. You are simply holding the weapon in place and allowing them to impale themself on it as they attack you. There is an attack roll involved for targeting because you need to keep the weapon angled correctly so that it actually goes into the attacker instead of just being pushed aside. If you've watched any movies or TV shows that involved cavalry charging into infantry, such as the "Spoils of War" episode of Game of Thrones when the Dothraki attack the Lannister supply column, then you have seen spears and pikes being braced against a charging enemy. No human is strong enough to hold a spear or pike in place as a strong animal charges at them, so bracing is the only realistic option in this situation. Otherwise the spear gets pushed back into the second and third rows of infantry and disrupts the line at a critical moment. As for whether or not the charging creature continues their attack and impales themself or turns away when they see the weapon, that depends on several factors, starting with whether or not they are able to see the weapon and understand what it is. The term "blind rage" comes to mind. Even if they do see the weapon, they also need to be able to stop or turn away before getting there. A skilled defender who waits until the last minute to brace the weapon may not give the attacker a chance to halt their attack. One thing to note: a weapon that is braced should not get any damage bonus from the strength of the person holding the weapon. The whole point of the technique is that you are holding the weapon against a solid object so you don't depend on your own strength. You are using their own strength and speed against them. What should offer a bonus to damage is the speed and mass of the creature charging at you. But I have never spent the time to work up or search for a formula for calculating this damage bonus. The truth is that hardly any of my players use spears or pikes, so it has never been a major concern.
143,059
I have a question regarding the weapon feature "Brace". Some weapons have the special weapon feature "Brace", for example a simple [spear](https://www.d20pfsrd.com/equipment/weapons/weapon-descriptions/spear/). How to use it (how I understand it) ----------------------------------- On your turn, you take the standard action "[Ready](https://www.d20pfsrd.com/Gamemastering/Combat/#TOC-Ready)". > > To do so, specify the action you will take and the conditions under which you will take it. > > > You specify the action (I will ready my spear) and the condition (I am attacked by a [Charge](https://www.d20pfsrd.com/gamemastering/combat#TOC-Charge)). Then, you wait until the condition happens and take your action (before the triggering action is resolved). You can now attack a charging enemy with a standard action (so no multiple attacks, if you are able to do so), but deal double damage. If you manage to kill the charging enemy, it does not get to do damage against you (because you interrupted its action). If not, you still deal double damage but receive the charge / the melee attack normally. Questions --------- 1. Do I understand readying and charging correctly? 2. Main Question: Isn't it a bit awkward playing out in a real-life (haha) fight situation? The player has to assume that they are being charged this round; otherwise they would have wasted their turn. The GM, playing the monsters, has to decide whether she lets her monster run into the brace or not. Does it boil down to the monster strategy "During combat" as written in the monster description? Is there a check a monster can make, or fail, to notice whether a PC has braced a weapon against a charge? (And vice versa?) Thank you all!
2019/03/13
[ "https://rpg.stackexchange.com/questions/143059", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/29132/" ]
Yes, your understanding of Charge and Brace is correct ------------------------------------------------------ Your PC has to anticipate the charge and set their weapon against it. It is usable in many common scenarios (especially at low levels) ---------------------------------------------------------------- While 'bracing' won't be useful in every situation, there are plenty of common situations where it is. In practice you need 2 things. 1. The monster can only realistically target you. 2. The monster wants to charge. 2 is easy to satisfy. If you're fighting anything with [pounce](https://www.d20pfsrd.com/bestiary/rules-for-monsters/universal-monster-rules#TOC-Pounce-Ex-), or just anything that lacks ranged attacks and is more than 1 move away, you can realistically expect the monster to charge. This includes most low level non-humanoid monsters (especially the various beasts). 1 is harder to make happen, but is achievable in hallways, caves, alleyways, mountain passes, any place where you can be out in front of the party with no easy way around you. Even if it's not awkward to use, it isn't very rewarding ======================================================== The main problem is that you are giving up the guarantee of an action for the *chance* of a single stronger attack. Obviously this means that the effectiveness is dependent on the damage you output and the probability that the monster will charge you (which can be manipulated using the above), but there are some broad trends we can call out. If you have more than 1 attack, bracing probably isn't worth it, same for if you can cast spells (unless you are out of slots). These, combined with the tendency for high level monsters to get spells and ranged attacks, means bracing will most often be available to use at low levels. I personally have only used the Brace action once, on a mid-level cleric, and even then because both the situation was perfect for it and I had carried this trident with me all game and I was going to *use it*, dangit! I was in a hallway with the rest of the party behind me, the monster was clearly going to charge (only had melee attacks, was more than 40ft away, was a rage monster so it wouldn't think to try something else), and (due to the unique situation not allowing me to regain spellslots on a rest) I wanted to conserve my spells. It worked perfectly, but even then I probably would have been better off using my ranged weapon or a spell. But was it cool? Yes, yes it was.
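The opportunity-cost point in the last section can also be put into numbers. The figures below (hit chance, average damage, probability of being charged) are purely illustrative assumptions; the break-even point shifts with the actual build and monster:

```python
# Sketch of the trade-off: a guaranteed attack now vs. a readied brace that only
# pays off if the monster actually charges. Assumed: 60% hit chance, 1d8+3 damage.
hit, avg_dmg = 0.60, 7.5

def expected(p_charge):
    attack_now  = hit * avg_dmg                  # just attack on your turn
    ready_brace = p_charge * hit * 2 * avg_dmg   # double damage, but only if charged
    return attack_now, ready_brace

for p in (0.3, 0.5, 0.8):
    print(p, *(round(x, 2) for x in expected(p)))
# Under these assumptions, bracing only pulls ahead once a charge is more likely than not.
```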
I think there is some misunderstanding of the technique based on word choice. When you brace a spear or pike you do not "attack" anybody. You are simply holding the weapon in place and allowing them to impale themself on it as they attack you. There is an attack roll involved for targeting because you need to keep the weapon angled correctly so that it actually goes into the attacker instead of just being pushed aside. If you've watched any movies or TV shows that involved cavalry charging into infantry, such as the "Spoils of War" episode of Game of Thrones when the Dothraki attack the Lannister supply column, then you have seen spears and pikes being braced against a charging enemy. No human is strong enough to hold a spear or pike in place as a strong animal charges at them, so bracing is the only realistic option in this situation. Otherwise the spear gets pushed back into the second and third rows of infantry and disrupts the line at a critical moment. As for whether or not the charging creature continues their attack and impales themself or turns away when they see the weapon, that depends on several factors, starting with whether or not they are able to see the weapon and understand what it is. The term "blind rage" comes to mind. Even if they do see the weapon, they also need to be able to stop or turn away before getting there. A skilled defender who waits until the last minute to brace the weapon may not give the attacker a chance to halt their attack. One thing to note: a weapon that is braced should not get any damage bonus from the strength of the person holding the weapon. The whole point of the technique is that you are holding the weapon against a solid object so you don't depend on your own strength. You are using their own strength and speed against them. What should offer a bonus to damage is the speed and mass of the creature charging at you. But I have never spent the time to work up or search for a formula for calculating this damage bonus. The truth is that hardly any of my players use spears or pikes, so it has never been a major concern.
26,207
In the US, does a person photographing private property (houses, farms etc.) while standing on public ground (road, park etc.) commit any offence? If they do not, will they commit any offence by publishing the photos (think [Streisand effect](https://en.wikipedia.org/wiki/Streisand_effect))? There are a couple of similar questions here ([one](https://law.stackexchange.com/questions/660/do-people-generally-have-the-right-not-to-be-photographed-on-private-property), [two](https://law.stackexchange.com/questions/659/how-do-laws-affect-photography-of-non-humans-in-public-when-people-may-be-in-the)) but those are for Australia. Please also consider these variations: * The owner of the property (or security staff etc.) comes out and asks to stop (or even demands to delete the photos) — can the photographer legally ignore them? * Telephoto lens and tripod is used — potentially capable of zooming into details of what is inside the property. To avoid digging too deep into this let's assume that if something really private is caught on the camera (e.g. couple having sex), the photographer only *keeps* the pictures but never publishes them; * People are in the frame, e.g. a man mowing his lawn; * Special property (e.g. military base, power plant, railways etc.) is in the frame. If the answer varies greatly from state to state, please focus on Tennessee, North/South Carolina, Georgia and Florida.
2018/02/20
[ "https://law.stackexchange.com/questions/26207", "https://law.stackexchange.com", "https://law.stackexchange.com/users/2682/" ]
> > **In the US, does a person photographing private property (houses, farms etc.) while standing on public ground (road, park etc.) commit any offence?** > > > No. In general, while standing on public land, it is legal for your eyes to glance onto everything around you. You cannot be arrested and imprisoned for allowing your gaze to pass over your neighbour's lawn. It is legal for you to take out a tripod, canvas and paintbrushes and paint the general scene, even if it includes, for example, a tree standing on private land. Instead of a paintbrush, you may use a camera to create a picture of the scene. There are a few exceptions: * Some military installations * Some installations operated by the Department of Energy (e.g. some nuclear power stations) * You cannot photograph people where they have a "reasonable expectation of privacy" - Note that this is not dependent on how the people feel about it. You can photograph a couple kissing at a bus stop, but you probably can't legally point a telephoto lens at their bedroom window through a broken privacy-fence. > > **will they commit any offence by publishing the photos** > > > They *may* need copyright permission from the owners of any identifiable works of art included and may need model releases from identifiable people included. There are specific exceptions allowing the publishing of photographs of sculptures and buildings that are visible from public spaces. --- See [The Photographer's Right](http://www.krages.com/phoright.htm)
So for your scenarios as given: 1. Yes, you can ignore them. Even if they ask you, even if they demand. There is some quibble over minor details of this, but a generic shot of a private building taken from a publicly accessible location is not illegal. 2. Now we have entered the quibble. While the above is true, this situation violates reasonable expectations of privacy, if not explicitly spelt out in the law (Some states make the Bedroom, Bathroom, and Hotel Rooms explicit. Others do not.). This can be further quibbled by the definitions of normal photographic equipment (telephoto lenses count?) and if the photographer disabled measures to prevent snooping like this (if the blinds are open and the distance from the public area). 3. This is legal. While still on private property, the man is in public view. If you are not on his private property, you can take as many pictures as you wish without legal intervention. 4. The general rule is that photography of these installations is a bit more controlled and there might be some tricks to it as well. For all publicly accessible private property, the rule is you can take photographs unless it is explicitly stated that you cannot (so for your railway, unless you see a sign or a railway worker tells you otherwise, snap away). For your power plants and military bases, you can take pictures from public areas, but be careful. These locations often have legal tricks that allow them to stop you from taking pictures, even when outside of the gates. Most military bases are built so the fence is set some distance inside the property line. This means for some distance before you are barred entry onto the base, you are still "on the base" as far as the law is concerned. If the base authorities tell you to stop taking pictures, you are probably on the base already and just didn't know. Typically they have signs on the real property line saying "you are about to enter the installation and photography is prohibited". These are placed at the exact legal edge of the property. THE SIGN IS NOT LYING TO YOU. If you can read it, you have not yet entered the property. If you cannot, you are no longer ABOUT to enter the property because you are either walking away from the property (your back is towards it) OR you have entered the property (in which case you are no longer "ABOUT TO" enter the property. You already have.). A famous example of this is the Area 51 complex, which is some distance into the desert away from the property lines. The closest a member of the public can get to the fence is also well into the property, and the base security has a reputation for following anyone on the property at a distance (they drive white SUVs and are referred to as "Camo Men"). This allows the base some discretion with figuring out if they are just tourists (it does happen) or if they are a bit more of a threat... or both (it does happen) and address it properly. They may know you're taking pictures and they may even know you were told not to do so beyond a certain point... but they won't care because you aren't looking at what they do not want you looking at. In our Area 51 example, filming the Camo Men is enough to get them to come down and tell you to stop because you are on the property. Unauthorized photography from an employee is a good way to get yourself fired in the happiest of cases. TL;DR: The same rules apply to government facilities, sensitive power plants, and other installations.
They just tend to give you enough space that you are already technically trespassing before they try to stop you from taking pictures. They might even let you take them just because... but if they tell you to stop, they know exactly where you are, and they will ask you to leave before trespassing gets charged.