qid int64 1 74.7M | question stringlengths 12 33.8k | date stringlengths 10 10 | metadata list | response_j stringlengths 0 115k | response_k stringlengths 2 98.3k |
|---|---|---|---|---|---|
613,361 | I'm trying to understand the reason why USB and PCIe (considering a single lane) can achieve higher data rates than e.g. SPI, I2C, UART.
The reason may be better handling of signal impairments at the PHY level, so they can work at higher clock rates.
Furthermore, USB and PCIe are sometimes referred to as analog serial interfaces, referring to how the actual physical transmission takes place. From the interface perspective all of these interfaces are digital; "analog" refers to the actual inter-chip transmission.
Why is SPI not classified as an analog transmission the way USB and PCIe are?
Context for where I get this classification (digital vs. analog): I work on mobile platforms, and experts in inter-IC communication tend to refer to the low-speed serial interfaces as digital and to USB/PCIe as analog; as said, it looks like they are referring to the need for a PHY that covers the aspects highlighted by some of the replies below (signal conditioning, differential transmission, ...)
Could you please share your understanding? | 2022/03/25 | [
"https://electronics.stackexchange.com/questions/613361",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/310074/"
] | There are a number of improvements that need to be made to get from the MHz range of SPI/I2C to the GHz range of PCIe/SATA/HDMI.
The low speed interfaces assume full-swing logic level signals. When the speed becomes too high for the medium to maintain a good pulse shape, they give up. The high speed signals are small swing and assumed to be degraded by the transmission medium, and both transmitter and receiver take steps to mitigate this (pre-emphasis and equalisation respectively).
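As a toy illustration of the pre-emphasis the answer mentions (a sketch of mine, not part of the answer; the filter form and its `boost` coefficient are illustrative assumptions): a first-order FIR filter overshoots each transition so the channel's high-frequency loss is partially cancelled at the receiver.

```python
def pre_emphasize(samples, boost=0.5):
    """Toy first-order FIR pre-emphasis: y[n] = x[n] + boost * (x[n] - x[n-1]).

    Each transition gets an extra kick, partially cancelling the
    channel's high-frequency roll-off. Assumes a non-empty input.
    """
    out = []
    prev = samples[0]
    for x in samples:
        out.append(x + boost * (x - prev))
        prev = x
    return out

# A 0 -> 1 step gets an overshoot on the transition bit, then settles:
print(pre_emphasize([0, 0, 1, 1, 1, 0, 0]))
# [0.0, 0.0, 1.5, 1.0, 1.0, -0.5, 0.0]
```

Real transmitters implement this in analog circuitry per lane; the sketch only shows the shape of the waveform adjustment.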
The low speed interfaces are single-ended, the high speed ones differential. At high speeds, you can't rely on a well controlled ground at the far end, you need to look at the difference in voltage between two signal lines.
The low speed interfaces have the clock generated from just one end. In read mode, that requires an out and back round trip of the transmission distance, severely limiting the data rate. The high speed interfaces are source synchronous, clock and data (sometimes) or data with clock/timing embedded in it (more common) are sent from whichever node is transmitting.
Some interfaces make use of some of these improvements, but it really takes all of them to increase speeds significantly. UART, for instance, uses timing embedded in the data. | All interfaces do "require" a PHY; this is nearly a matter of definition. Moving from the low-level, microscopic internal digital domain to heavily loaded long wires or board traces always requires some sort of a DIFFERENT set of transistors inside an IC. In simple protocols these are usually called a "driver". Some interfaces just have more complex circuitry for the transition to the PHYSICAL domain.
In simple cases the "driver" could be just a circuit that matches the impedance of the attached traces (transmission lines), again with varying levels of complexity. This is already a challenging task, and most IC pads already use hundreds of integrated transistors.
In cases of USB/PCIe etc., the digital data are arranged in packets. Before entering the physical world, these packets are wrapped with extra framing: sync preambles, extra bits to balance the bitstream, special end-of-packet symbols, etc. Finally, the PHY enhances the signal edges so the waveform looks less distorted when it arrives at a far-away receiver. On the receiving end, the PHY usually compensates for non-uniform signal deterioration, performs clock-data recovery (CDR), etc. Moreover, receivers (and transceivers, as in USB4) undergo "link training" on every individual connection. All these operations repeat the same routine for every data packet, so it is natural to offload them to a separate IP block, called the PHY. All these enhancements allow much higher data rates compared to the simple bit toggling of I2C or SPI. |
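As a toy illustration of why the "extra bits to balance the bitstream" matter (a sketch of mine, not part of either answer): DC-balanced line codes such as 8b/10b keep the running disparity (ones minus zeros seen so far) within a small bound so that AC-coupled links do not drift.

```python
def running_disparity(bits):
    """Track the running disparity (count of 1s minus count of 0s) of a
    bitstream, and the worst-case excursion from zero.

    Balanced encodings keep the worst-case excursion small; this toy
    checker just measures it for a given bit sequence.
    """
    disparity = 0
    worst = 0
    for b in bits:
        disparity += 1 if b else -1
        worst = max(worst, abs(disparity))
    return disparity, worst

# A balanced pattern stays near zero...
print(running_disparity([1, 0] * 8))   # (0, 1)
# ...while a long run of 1s drifts, which is exactly what the
# encoder's balancing bits are there to prevent.
print(running_disparity([1] * 16))     # (16, 16)
```

An 8b/10b encoder chooses between two codewords for each byte so that this disparity never strays beyond a fixed bound.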
184,218 | Can a [swashbuckler](https://www.d20pfsrd.com/classes/hybrid-classes/swashbuckler/) parry an attack from an invisible creature or creature they cannot see if they can make AOOs while flatfooted via [combat reflexes](https://www.d20pfsrd.com/feats/combat-feats/combat-reflexes-combat/) and have [blindfight](https://www.d20pfsrd.com/feats/combat-feats/blind-fight-combat/)? | 2021/04/22 | [
"https://rpg.stackexchange.com/questions/184218",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/32546/"
] | No.
===
**Can a swashbuckler parry an attack from an invisible creature, or creature they cannot see?**
Breaking it down/apart (at the request of Erudaki).
**Precondition:** Swashbuckler class is specifically mentioned to limit the scope of "parry" to the use of the class Deed "Opportune Parry and Riposte (Ex)" (OPaR). Other forms of parry are not considered for this answer.
**Precondition:** swashbuckler has an AoO available (i.e. has not used all AoO they are entitled to).
**Precondition:** swashbuckler has a panache point to spend to use the OPaR Deed.
**Precondition:** invisible creature is in melee with swashbuckler as OPaR only applies to melee attacks against the swashbuckler as explicitly stated in the OPaR description.
**Possible assistance:** Combat Reflexes allow AoO while flat-footed. (Turns out this doesn't come into play in this specific situation.)
**Possible assistance:** Blind-Fight allows rerolls on misses against concealed opponents and prevents the loss of Dexterity bonus to AC against invisible opponents and prevents the +2 bonus to hit given to an invisible creature attacking you in melee. (Turns out this doesn't come into play in this specific situation.)
Looking at the Deed: "[Opportune Parry and Riposte (Ex)](https://www.d20pfsrd.com/classes/hybrid-classes/swashbuckler/)" (OPaR)
1. Requirement: 1st Level swashbuckler = granted,
2. Requirement: 1 panache point to spend = granted,
3. Requirement: AoO available to spend = granted, all PCs have at least one per round.
4. Action: Make an attack roll AS IF IT WERE AN AoO\* = granted,
(\* I think this wording is used so that the AoO expended by OPaR cannot qualify for other adjustments AoOs might get from other abilities or feats the PC may possess. It does everything a normal AoO does but doesn't qualify as a prerequisite for anything.)
5. The invisible creature is in melee with swashbuckler therefore is in a threatened square.
(I cannot find any text in invisibility references to say the square isn't threatened, only that the swashbuckler wouldn't know which square the invisible attacker is in, or if it were in one at all... unless they pass a DC40 Perception check to pinpoint the square containing the invisible creature.)
It turns out it is irrelevant to the result whether the Perception check is passed or not, as the attacker remains invisible. If the Perception check is passed then the 5' square is known, but according to invisibility:
> *"If a character tries to attack an invisible creature whose location he has pinpointed, he attacks normally, but the invisible creature still benefits from full concealment (and thus a 50% miss chance)."*
Full Concealment = Total Concealment (Sloppy English equivalency notwithstanding)
Under Concealment, subtopic Total Concealment, last sentence
> *"You can’t execute an attack of opportunity against an opponent with total concealment, even if you know what square or squares the opponent occupies."*
Therefore you cannot use an AoO against an invisible creature even if it is pinpointed.
OPaR states:
> *"The swashbuckler makes an attack roll as if she were making an attack of opportunity;"*
These two rules clearly answer this question with a resounding NO.
Even if that didn't end the question...
6. Invisible creature attacks swashbuckler
-- Here is where English language causes issues between description and rules --
OPaR says
> *"The swashbuckler must declare the use of this ability after the creature’s attack is announced, but before its attack roll is made."*
-- I contest that "announced" does not mean the invisible creature says "I stab at thee!". An invisible attack is NOT "announced" in any way other than for rules purposes, it is at best a meta-announcement. The swashbuckler has no inkling IF an attack is coming, let alone WHEN. It makes no logical sense to be able to block something you cannot see... unless you know... magic. However OPaR is (Ex) not (Sp) not even (Su) so it is NOT magic.
> *"Extraordinary abilities are non-magical. They are, however, not something that just anyone can do or even learn to do without extensive training. Effects or areas that suppress or negate magic have no effect on extraordinary abilities."*
If the swashbuckler cannot know when an attack is coming before it hits them, and potentially never knows if an attack misses them, then they have no cue to declare anything. OPaR cannot even be triggered in this situation! | **Yes, he can...**
First of all, the invisible condition is an effect that you GAIN, not a malus that you APPLY to someone else:
> Invisible: Invisible creatures are visually undetectable. An invisible creature gains a +2 bonus on attack rolls against a sighted opponent, and ignores its opponent’s Dexterity bonus to AC (if any). See the invisibility special ability.
As we can see the condition itself does two things: 1) you gain an attack bonus and 2) your target has no dex bonus to AC... the target **is not** flat-footed. Flat-footed is another condition.
Other than this the invisibility special ability says:
* While they can’t be seen, invisible creatures can be heard, smelled, or felt
* Invisibility makes a creature undetectable by vision, including darkvision
* Not immune to critical hits, **but** immune to extra damage (ranger favored enemy, sneak attack... swashbuckler precision damage too, I think)
* If the invisible creature is adjacent to the attacked creature it gains total concealment; otherwise the attacked creature needs to find a way to pinpoint the invisible creature
* Many other things... (the invisible creature leaves tracks, displaces liquids, and so on)
Now, moving on to the swashbuckler parry/riposte:
> Opportune Parry and Riposte (Ex): At 1st level, when an opponent makes a melee attack against the swashbuckler, she can spend 1 panache point and expend a use of an attack of opportunity to attempt to parry that attack. The swashbuckler makes an attack roll as if she were making an attack of opportunity. [...]
Now we can see different things:
* Opportune Parry and Riposte **is not** an attack of opportunity. It costs a use of an attack of opportunity, and you attack *as if* it were an attack of opportunity... because it isn't. (I hope this part is clear enough.)
* Nothing specifies conditions about the state of the attacker. This means that we don't know if it's required to see the attacker or to hear the attacker and so on.
And that's it.
Nothing actually prevents you from using opportune parry and riposte against an invisible attacker... and you do not need those two feats at all.
You do not need the feats at all, since when you are fighting an invisible enemy you **are not** flat-footed (so Combat Reflexes isn't needed), and Blind-Fight gives you more AC (since you retain your Dex bonus to AC), but that's it.
...**but**...
What happens if your GM treats opportune parry and riposte **as a real attack of opportunity**?
Well... in this case **you cannot** use opportune parry and riposte, since you cannot make an attack of opportunity against an opponent with total concealment:
> [...] You can’t execute an attack of opportunity against an opponent with total concealment, even if you know what square or squares the opponent occupies.
So your only chances in this case are:
* see invisibility (in some way)
* remove your dependency on eyesight (echolocation, for example)
* [Greater Blind-Fight](https://www.d20pfsrd.com/feats/combat-feats/greater-blind-fight-combat) (or similar effect that decreases or removes the total concealment from your opponent)
I hope it's enough, good game!
Marco. |
44,144 | I wonder what the following violin technique is called; it happens in the following video at around 4:50. It seems that it is some sort of slapping motion with the bow, and also something that resembles pull-offs. | 2016/05/04 | [
"https://music.stackexchange.com/questions/44144",
"https://music.stackexchange.com",
"https://music.stackexchange.com/users/7306/"
] | Looks to me like the player is interspersing conventionally bowed notes with left-hand pizzicato notes. This is similar to the pull-off (*ligado*) technique on guitar, but has a quite different sound.
I only watched the clip once, but it appears that the passages that combine bowed and L.H. pizzicato are executed as follows: a note is fingered with the little finger on the L.H. and bowed conventionally; then, the L.H. little finger plucks the string it is on, sounding a note fingered by another L.H. finger, which then plucks the string again, sounding either another fingered note or an open string. This is then repeated on different strings.
Of course, this is all executed very rapidly, giving a unique sound not achievable using a conventional pizzicato.
There are violin techniques that involve "slapping" the bow against the strings, but I can't see them in the passage you give the time of. These techniques include:
* *spiccato*, where the bow bounces on the string.
* *col legno battuto*, where the wooden side of the bow is "tapped" against the string. | It's a combination of spiccato bowing (bouncing the bow off the strings) alternating with left-hand pizzicato.
Of course the *real* Paganini did this sort of party trick having slashed three of the violin strings with a knife, and then holding the violin upside down, if some of the stories about him are to be believed. |
44,144 | I wonder what the following violin technique is called; it happens in the following video at around 4:50. It seems that it is some sort of slapping motion with the bow, and also something that resembles pull-offs. | 2016/05/04 | [
"https://music.stackexchange.com/questions/44144",
"https://music.stackexchange.com",
"https://music.stackexchange.com/users/7306/"
] | Looks to me like the player is interspersing conventionally bowed notes with left-hand pizzicato notes. This is similar to the pull-off (*ligado*) technique on guitar, but has a quite different sound.
I only watched the clip once, but it appears that the passages that combine bowed and L.H. pizzicato are executed as follows: a note is fingered with the little finger on the L.H. and bowed conventionally; then, the L.H. little finger plucks the string it is on, sounding a note fingered by another L.H. finger, which then plucks the string again, sounding either another fingered note or an open string. This is then repeated on different strings.
Of course, this is all executed very rapidly, giving a unique sound not achievable using a conventional pizzicato.
There are violin techniques that involve "slapping" the bow against the strings, but I can't see them in the passage you give the time of. These techniques include:
* *spiccato*, where the bow bounces on the string.
* *col legno battuto*, where the wooden side of the bow is "tapped" against the string. | While I didn't notice the col legno battuto anywhere, the rest of the above information is correct. Also noteworthy is the incorporation of ricochet bowing in the phrasing of the beginning of this piece. The spiccato that appears in other parts of the piece, you will notice, uses the middle and lower half of the bow. The spiccato used to facilitate and shape the left-hand pizzicato motives uses the upper fourth of the bow, helping to match its attack and dynamics with the relatively weak sound produced by left-hand pizz on the violin. While the stronger spiccato allowed the bow to draw more life for each note, the upper-bow spiccato resulted in little slaps with the end of the bow, breathing just enough life into the string for the left-hand technique to continue the phrase and manage the wave in the string. |
109,381 | I have a content type that has a mix of user-editable fields and auto-populated fields (from an external source). I would like to mark those fields as disabled right when I create them so the user cannot overwrite them. I know there are readonly modules and ways to do this in hook\_form\_alter but I'm curious if I can short circuit having to do this on the fly and just have it disabled by default. Thanks. | 2014/04/07 | [
"https://drupal.stackexchange.com/questions/109381",
"https://drupal.stackexchange.com",
"https://drupal.stackexchange.com/users/25639/"
] | The simple bulk action support in Drupal 8 core does not currently support this feature. | The issue about [Port Views Bulk Operations to Drupal 8](https://www.drupal.org/project/views_bulk_operations/issues/1823572) is closed (completed) now.
If somebody has a problem with it, try using /admin/yourpath in Views; then you are using the admin theme and the select/deselect-all button should work. |
109,381 | I have a content type that has a mix of user-editable fields and auto-populated fields (from an external source). I would like to mark those fields as disabled right when I create them so the user cannot overwrite them. I know there are readonly modules and ways to do this in hook\_form\_alter but I'm curious if I can short circuit having to do this on the fly and just have it disabled by default. Thanks. | 2014/04/07 | [
"https://drupal.stackexchange.com/questions/109381",
"https://drupal.stackexchange.com",
"https://drupal.stackexchange.com/users/25639/"
] | The simple bulk action support in Drupal 8 core does not currently support this feature. | If you need "select all" you can use the module [VBO](https://www.drupal.org/project/views_bulk_operations).
Then you can replace the field "Node: Views bulk operation" with the "Global: Views bulk operations" which supports selecting all rows. |
10,772,530 | I am curious about this. I must learn Prolog for my course, but the applications I have seen are mostly written in C++, C# or Java. Applications written in Prolog seem, to me, very rare.
So, I wonder how Prolog is used to implement real-world applications? | 2012/05/27 | [
"https://Stackoverflow.com/questions/10772530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1326261/"
] | I once asked my supervisor a similar question when he was giving us a Prolog lecture.
He told me that people do not really use Prolog to implement a whole huge system. Instead, people write the main part in another language (which is more sane and trivial) and link it to a "decision procedure" or something similar written in Prolog.
I'm not sure about other Prolog implementations; we were using BProlog, and it provides a C/Java interface. | According to the Tiobe Software Index, Prolog is currently #36: between Haskell and FoxPro:
<http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html>
What's it used for?
I first heard of it with respect to Japan's (now defunct) "Fifth Generation" project:
<http://en.wikipedia.org/wiki/Fifth_generation_computer>
Frankly, I'm not really aware of anybody using Prolog for any serious commercial development. |
10,772,530 | I am curious about this. I must learn Prolog for my course, but the applications I have seen are mostly written in C++, C# or Java. Applications written in Prolog seem, to me, very rare.
So, I wonder how Prolog is used to implement real-world applications? | 2012/05/27 | [
"https://Stackoverflow.com/questions/10772530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1326261/"
] | * **Microsoft Windows NT Networking Installation and Configuration applet**
One of the notorious and in a way notable examples is Microsoft Windows NT OS network interface configuration code that involved a Small Prolog interpreter built in. Here is a [link](http://www.drdobbs.com/cpp/184409294) to the story written by David Hovel for Dr. Dobbs. (*The often cited Microsoft Research link seems to be gone.*)
* **Expert systems**
Once Prolog was considered as THE language for a class of software systems called *Expert Systems*. These were interactive knowledge management systems often with a relational database backend.
* **Beyond Prolog**
In general rule-based programming, resolution and different automated reasoning systems are widely used beyond Prolog. | According to the Tiobe Software Index, Prolog is currently #36: between Haskell and FoxPro:
<http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html>
What's it used for?
I first heard of it with respect to Japan's (now defunct) "Fifth Generation" project:
<http://en.wikipedia.org/wiki/Fifth_generation_computer>
Frankly, I'm not really aware of anybody using Prolog for any serious commercial development. |
10,772,530 | I am curious about this. I must learn Prolog for my course, but the applications I have seen are mostly written in C++, C# or Java. Applications written in Prolog seem, to me, very rare.
So, I wonder how Prolog is used to implement real-world applications? | 2012/05/27 | [
"https://Stackoverflow.com/questions/10772530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1326261/"
] | SWI-Prolog website is served from... SWI-prolog, using just a small subset of the libraries available.
Well, it's not a commercial application, but it's rather [real world](http://www.swi-prolog.org/).
Much effort was required to make the runtime capable of 24x7 service (mainly garbage collection) and of the required performance scalability (among other things, multithreading).
Several libraries were developed, driven by real-world application needs. | According to the Tiobe Software Index, Prolog is currently #36: between Haskell and FoxPro:
<http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html>
What's it used for?
I first heard of it with respect to Japan's (now defunct) "Fifth Generation" project:
<http://en.wikipedia.org/wiki/Fifth_generation_computer>
Frankly, I'm not really aware of anybody using Prolog for any serious commercial development. |
10,772,530 | I am curious about this. I must learn Prolog for my course, but the applications I have seen are mostly written in C++, C# or Java. Applications written in Prolog seem, to me, very rare.
So, I wonder how Prolog is used to implement real-world applications? | 2012/05/27 | [
"https://Stackoverflow.com/questions/10772530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1326261/"
] | SWI-Prolog website is served from... SWI-prolog, using just a small subset of the libraries available.
Well, it's not a commercial application, but it's rather [real world](http://www.swi-prolog.org/).
Much effort was required to make the runtime capable of 24x7 service (mainly garbage collection) and of the required performance scalability (among other things, multithreading).
Several libraries were developed, driven by real-world application needs. | I once asked my supervisor a similar question when he was giving us a Prolog lecture.
He told me that people do not really use Prolog to implement a whole huge system. Instead, people write the main part in another language (which is more sane and trivial) and link it to a "decision procedure" or something similar written in Prolog.
I'm not sure about other Prolog implementations; we were using BProlog, and it provides a C/Java interface. |
10,772,530 | I am curious about this. I must learn Prolog for my course, but the applications I have seen are mostly written in C++, C# or Java. Applications written in Prolog seem, to me, very rare.
So, I wonder how Prolog is used to implement real-world applications? | 2012/05/27 | [
"https://Stackoverflow.com/questions/10772530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1326261/"
] | SWI-Prolog website is served from... SWI-prolog, using just a small subset of the libraries available.
Well, it's not a commercial application, but it's rather [real world](http://www.swi-prolog.org/).
Much effort was required to make the runtime capable of 24x7 service (mainly garbage collection) and of the required performance scalability (among other things, multithreading).
Several libraries were developed, driven by real-world application needs. | * **Microsoft Windows NT Networking Installation and Configuration applet**
One of the notorious and in a way notable examples is Microsoft Windows NT OS network interface configuration code that involved a Small Prolog interpreter built in. Here is a [link](http://www.drdobbs.com/cpp/184409294) to the story written by David Hovel for Dr. Dobbs. (*The often cited Microsoft Research link seems to be gone.*)
* **Expert systems**
Once Prolog was considered as THE language for a class of software systems called *Expert Systems*. These were interactive knowledge management systems often with a relational database backend.
* **Beyond Prolog**
In general rule-based programming, resolution and different automated reasoning systems are widely used beyond Prolog. |
52,485,689 | I am new to Android; I have finished some Android app development courses and now I am trying to apply what I learned. I've chosen a news app for it. It will extract news from 5-10 sources and display them in a RecyclerView.
I recognized that the course materials I used is outdated. I've used AsynctaskLoader to handle internet connection issues but now in official Android documentation it says *"Loaders have been deprecated as of Android P (API 28). The recommended option for dealing with loading data while handling the Activity and Fragment lifecycles is to use a combination of ViewModels and LiveData."*
My question is: should I convert my code to use ViewModels and LiveData, or would AsyncTask handle my task (or any other suggestion)? As I mentioned, I only want to extract news data from a couple of sources and display it in the app. It seems I don't need a data storage feature. But for now I have added two news sources and the app seems to load news data a little late. Does this latency have something to do with using Loaders? Would using ViewModels speed up the news-loading task (especially when there are lots of news sources)? | 2018/09/24 | [
"https://Stackoverflow.com/questions/52485689",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10392737/"
] | If you've already written it with Loaders there's no reason to rush to change it. Deprecated doesn't mean gone. And no, Loaders don't add significant performance penalty- any perf issues would be elsewhere in your app. | Loaders have been deprecated as of Android P (API 28). The recommended option for dealing with loading data while handling the Activity and Fragment lifecycles is to use a combination of ViewModels and LiveData. ViewModels survive configuration changes like Loaders but with less boilerplate. LiveData provides a lifecycle-aware way of loading data that you can reuse in multiple ViewModels. You can also combine LiveData using MediatorLiveData , and any observable queries, such as those from a Room database, can be used to observe changes to the data. ViewModels and LiveData are also available in situations where you do not have access to the LoaderManager, such as in a Service. Using the two in tandem provides an easy way to access the data your app needs without having to deal with the UI lifecycle. |
52,485,689 | I am new to Android; I have finished some Android app development courses and now I am trying to apply what I learned. I've chosen a news app for it. It will extract news from 5-10 sources and display them in a RecyclerView.
I recognized that the course materials I used is outdated. I've used AsynctaskLoader to handle internet connection issues but now in official Android documentation it says *"Loaders have been deprecated as of Android P (API 28). The recommended option for dealing with loading data while handling the Activity and Fragment lifecycles is to use a combination of ViewModels and LiveData."*
My question is: should I convert my code to use ViewModels and LiveData, or would AsyncTask handle my task (or any other suggestion)? As I mentioned, I only want to extract news data from a couple of sources and display it in the app. It seems I don't need a data storage feature. But for now I have added two news sources and the app seems to load news data a little late. Does this latency have something to do with using Loaders? Would using ViewModels speed up the news-loading task (especially when there are lots of news sources)? | 2018/09/24 | [
"https://Stackoverflow.com/questions/52485689",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10392737/"
] | Loaders are good because of their ability to handle the lifecycle, but they are not as efficient as LiveData and ViewModel. If you care about performance, speed, and staying current, use Android Architecture Components (LiveData, ViewModel). Also, you don't have to stick to the old way of doing things: you can write a simple AsyncTask and wrap it with ViewModel and LiveData. It works like magic and better than Loaders. For information on how to wrap AsyncTask in LiveData and ViewModel, visit <https://medium.com/androiddevelopers/lifecycle-aware-data-loading-with-android-architecture-components-f95484159de4> | Loaders have been deprecated as of Android P (API 28). The recommended option for dealing with loading data while handling the Activity and Fragment lifecycles is to use a combination of ViewModels and LiveData. ViewModels survive configuration changes like Loaders but with less boilerplate. LiveData provides a lifecycle-aware way of loading data that you can reuse in multiple ViewModels. You can also combine LiveData using MediatorLiveData, and any observable queries, such as those from a Room database, can be used to observe changes to the data. ViewModels and LiveData are also available in situations where you do not have access to the LoaderManager, such as in a Service. Using the two in tandem provides an easy way to access the data your app needs without having to deal with the UI lifecycle. |
72,047 | I would like to build an electronic flatulence (fart) detector. I was thinking of methane because detectors are readily available, but I read <http://en.wikipedia.org/wiki/Flatulence> and it says:
> However, not all humans produce flatus that contains methane. For example, in one study of the feces of nine adults, only five of the samples contained archaea capable of producing methane.
Oxygen, nitrogen, carbon dioxide are listed but I think they would be too common in normal air. That seems to leave:
* Hydrogen
* Hydrogen sulfide
* Methyl mercaptan
* Dimethyl sulfide
* Dimethyl trisulfide
Does anyone know if practical sensors are available that detect those gases or have other ideas? I think somewhere around the $20 or less mark would be good, so I wasn't really seeking a full professional solution like gas chromatography that may normally be used.
The application is for under office-type chairs, so maybe heat detection could be used, although I'm not sure it would be possible to tell the difference between the desired event and someone just sitting down on a cold chair; a pressure sensor could perhaps be used along with some filtering to not trigger until the temperature had stabilized a bit. | 2013/06/08 | [
"https://electronics.stackexchange.com/questions/72047",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/24932/"
] | It looks like hydrogen is the major component: [Normal Flatus](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1378885/). 360 mL per day; how much per fart will take some closer reading.
Here is an Arduino flammable gas detector; it probably can sense Hydrogen:
[LM393 MQ-9](http://dx.com/p/lm393-mq-9-flammable-gas-detection-sensor-module-for-arduino-red-black-151069), say, at 10ppm. (Some shopping legwork for a Hydrogen leak detector or flammable gas sensor is in order.) So a 36mL bolus of Hydrogen (I just guessed what volumes are emitted throughout the day to make up that 360ml, and guessed 1/10 of the total) must diffuse into a volume of 3600 Liters before it is below detection level of 10ppm. Your 10ppm sensor must be within about 100 centimeters. Looks like the under-the-seat location is the right spot. | Weird project. Chairs that detect farts? No thanks.
Anyway I would suggest you look at an off-the-shelf propane sensor. Propane (C3H8) and methane (CH4) are very similar. In fact many of them are described as Propane Methane sensors. They are cheap and made in the thousands for RVs and Boats. A friend of mine always said if he was too close to the sensor the alarm would go off. |
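The dilution estimate in the first answer above (36 mL bolus, 10 ppm threshold, "within about 100 centimeters") can be checked numerically. This is a sketch of that back-of-the-envelope calculation; the bolus size and threshold are the answer's own guesses, not measured values:

```python
import math

def detection_radius_m(bolus_ml: float, threshold_ppm: float) -> float:
    """Radius of the sphere into which a gas bolus can dilute
    before its concentration falls below a sensor's threshold."""
    # Volume (in litres) at which the bolus is exactly at threshold:
    # (bolus_ml / 1000) litres diluted down to threshold_ppm parts per million.
    dilution_volume_l = (bolus_ml / 1000.0) / (threshold_ppm * 1e-6)
    volume_m3 = dilution_volume_l / 1000.0
    # Radius of a sphere with that volume: V = (4/3) * pi * r^3
    return (3.0 * volume_m3 / (4.0 * math.pi)) ** (1.0 / 3.0)

r = detection_radius_m(bolus_ml=36.0, threshold_ppm=10.0)
print(f"{r:.2f} m")  # roughly 0.95 m, i.e. "within about 100 centimeters"
```

The 36 mL bolus dilutes to 3600 L (3.6 m³) at 10 ppm, and a sphere of that volume has a radius of about 0.95 m, which is consistent with the answer's under-the-seat conclusion.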
72,047 | I would like to build an electronic flatulence (fart) detector. I was thinking of methane because detectors are readily available, but I read <http://en.wikipedia.org/wiki/Flatulence> and it says:
>
> However, not all humans produce flatus that contains methane. For example, in one study of the feces of nine adults, only five of the samples contained archaea capable of producing methane.
>
>
>
Oxygen, nitrogen, carbon dioxide are listed but I think they would be too common in normal air. That seems to leave:
* Hydrogen
* Hydrogen sulfide
* Methyl mercaptan
* Dimethyl sulfide
* Dimethyl trisulfide
Does anyone know if practical sensors are available that detect those gases or have other ideas? I think somewhere around the $20 or less mark would be good, so I wasn't really seeking a full professional solution like gas chromatography that may normally be used.
The application is for under office-type chairs, so maybe heat detection could be used, although I'm not sure it would be possible to tell the difference between the desired event and someone just sitting down on a cold chair; maybe a pressure sensor could be used along with some filtering to not trigger until the temperature had stabilized a bit. | 2013/06/08 | [
"https://electronics.stackexchange.com/questions/72047",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/24932/"
] | Methane was what another guy used in his office chair to detect his own flatus, but if that source is real, Futurlec seems to offer [dozens of gas sensors](http://www.futurlec.com/Gas_Sensors.shtml) for whatever gas you like.
As an example: [The Twittering Office Chair](http://www.instructables.com/id/The-Twittering-Office-Chair/) and [its Twitter account](https://twitter.com/officechair) (apparently died of asphyxiation back in '09). | Weird project. Chairs that detect farts? No thanks.
Anyway I would suggest you look at an off-the-shelf propane sensor. Propane (C3H8) and methane (CH4) are very similar. In fact many of them are described as Propane Methane sensors. They are cheap and made in the thousands for RV's and Boats. A friend of mine always said if he was too close the the sensor the alarm would go off. |
6,646,509 | What are the data types available in the Asterisk server dial plan?
How do I check the data type? | 2011/07/11 | [
"https://Stackoverflow.com/questions/6646509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/809719/"
] | Asterisk dial plan language is weakly-typed (has no 'types' as such). All values are strings, but can be treated as numbers in some context (e.g. arithmetic expressions). The only way to check 'the type' is to use a regular expression to check the value.
There are some functions like [HASH](http://www.voip-info.org/wiki/view/Asterisk+func+hash) or [SORT](http://www.voip-info.org/wiki/view/Asterisk+func+sort) which seem to operate on some complex data types, but these are not core features of the language, rather helpers for specific use cases. | No, in dialplan everything is a string. You can use AGI to check dates or compare various things.
What exactly do you need? I'm guessing but maybe this will be useful: [How to include contexts based on time and date](http://www.voip-info.org/wiki/view/Asterisk+tips+openhours) |
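The "everything is a string, so use a regular expression to check the type" idea from the first answer can be illustrated outside of dialplan. The following is a Python sketch of that idea (an analogy, not Asterisk code; the function name and patterns are illustrative):

```python
import re

def dialplan_type(value: str) -> str:
    """Classify a dialplan-style value, which is always stored as a string,
    by matching it against regular expressions (the only 'type check' available)."""
    if re.fullmatch(r"-?\d+", value):
        return "integer-like"
    if re.fullmatch(r"\d{4}-\d{2}-\d{2}", value):
        return "date-like (YYYY-MM-DD)"
    return "plain string"

print(dialplan_type("42"))          # integer-like
print(dialplan_type("2011-07-11"))  # date-like (YYYY-MM-DD)
print(dialplan_type("hello"))       # plain string
```

In real Asterisk dialplan the equivalent check would be done with the `REGEX` expression operator or by handing the value to an AGI script, as both answers suggest.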
6,646,509 | What are the data types available in the Asterisk server dial plan?
How do I check the data type? | 2011/07/11 | [
"https://Stackoverflow.com/questions/6646509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/809719/"
] | Asterisk dial plan language is weakly-typed (has no 'types' as such). All values are strings, but can be treated as numbers in some context (e.g. arithmetic expressions). The only way to check 'the type' is to use a regular expression to check the value.
There are some functions like [HASH](http://www.voip-info.org/wiki/view/Asterisk+func+hash) or [SORT](http://www.voip-info.org/wiki/view/Asterisk+func+sort) which seem to operate on some complex data types, but these are not core features of the language, rather helpers for specific use cases. | There is no type in dialplan; everything can be treated as a string variable.
But if required, you can always use your favorite programming language using AGI. |
6,646,509 | What are the data types available in the Asterisk server dial plan?
How do I check the data type? | 2011/07/11 | [
"https://Stackoverflow.com/questions/6646509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/809719/"
] | No, in dialplan everything is a string. You can use AGI to check dates or compare various things.
What exactly do you need? I'm guessing, but maybe this will be useful: [How to include contexts based on time and date](http://www.voip-info.org/wiki/view/Asterisk+tips+openhours) | There is no type in dialplan; everything can be treated as a string variable.
But if required, you can always use your favorite programming language using AGI. |
5,609,727 | Can you please suggest some books on Software Architecture that talk about how to design software at the module level and how those modules will interact? There are numerous books that talk about design patterns, which are mostly low-level details. I know low-level details are also important, but I want a list of good design/architecture books.
Please also suggest some books that discuss case studies of software architecture. | 2011/04/10 | [
"https://Stackoverflow.com/questions/5609727",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2302649/"
] | I *think* this is the book that came to mind when I first read this question. It talks about various architectural styles like pipes-and-filters, blackboard systems, etc. It's an oldie, and I'll let you judge whether it's a 'goodie'.
[Pattern Oriented Software Architecture](https://rads.stackoverflow.com/amzn/click/com/0471958697)
I also particularly like these two, especially the first. The second starts to dig into lower level design patterns, but it's still awesome in various spots:
[Enterprise Integration Patterns](https://rads.stackoverflow.com/amzn/click/com/0321200683)
[Patterns of Enterprise Application Architecture](https://rads.stackoverflow.com/amzn/click/com/0321127420)
I hope these are what you had in mind. | I'm not familiar with books that detail architectures rather than design patterns. I mostly use the design books to get an understanding of how I would build such a system, and I use sources such as [highscalability](http://highscalability.com/) to learn about the architecture of various companies; just look at the "all time favorites" tab on the right and you will see posts regarding the architecture of youtube, twitter, google, amazon, flickr and even [this site](http://highscalability.com/blog/2011/3/3/stack-overflow-architecture-update-now-at-95-million-page-vi.html)... |
5,609,727 | Can you please suggest some books on Software Architecture that talk about how to design software at the module level and how those modules will interact? There are numerous books that talk about design patterns, which are mostly low-level details. I know low-level details are also important, but I want a list of good design/architecture books.
Please also suggest some books that discuss case studies of software architecture. | 2011/04/10 | [
"https://Stackoverflow.com/questions/5609727",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2302649/"
] | Where can you get knowledge about software architecture? One place is your experience building systems. Another is conversations with other developers or reading their code. Yet another place is books. I am the author of a book on software architecture ([Just Enough Software Architecture](http://rhinoresearch.com/content/software-architecture-book)) but let me instead point you to some classics:
* [Software Architecture in Practice (Bass, Clements, Kazman)](https://rads.stackoverflow.com/amzn/click/com/0321154959). This book from the Software Engineering Institute (SEI) describes how architects should think about problems. It describes the importance of quality attributes (performance, security, modifiability, etc.) and how to make tradeoffs between them, since you cannot maximize all of them.
* [Documenting Software Architectures (lots of SEI/CMU authors)](https://rads.stackoverflow.com/amzn/click/com/0321552687). The title of this book is a bit scary, because many people are trying to avoid writing shelfware documents. But the wonderful thing about the book is that it describes the standard architectural styles / patterns, notations for describing structure and behavior, and a conceptual model of understanding architectures. All these are valuable even if you only ever sketch on a whiteboard.
* [Software Systems Architecture (Rosanski and Woods)](https://rads.stackoverflow.com/amzn/click/com/0321112296). Goes into detail about how to think about a system from multiple perspectives (views). What I like particularly is that it gives checklists for ensuring that a particular concern (say security) has been handled.
* [Essential Software Architecture (Gorton)](https://rads.stackoverflow.com/amzn/click/com/3540287132). Small, straightforward book on IT architecture. Covers the different kinds of things you'll see (databases, event busses, app servers, etc.)
That's just a short list, and just because I didn't list something doesn't mean it's a bad book. If you are looking for something free to read immediately, I have [three chapters of my book](http://rhinoresearch.com/files/Just_Enough_Software_Architecture__Fairbanks_2010-demo.pdf) available for download on my website. | I'm not familiar with books that detail architectures rather than design patterns. I mostly use the design books to get an understanding of how I would build such a system, and I use sources such as [highscalability](http://highscalability.com/) to learn about the architecture of various companies; just look at the "all time favorites" tab on the right and you will see posts regarding the architecture of youtube, twitter, google, amazon, flickr and even [this site](http://highscalability.com/blog/2011/3/3/stack-overflow-architecture-update-now-at-95-million-page-vi.html)... |
5,609,727 | Can you please suggest some books on Software Architecture that talk about how to design software at the module level and how those modules will interact? There are numerous books that talk about design patterns, which are mostly low-level details. I know low-level details are also important, but I want a list of good design/architecture books.
Please also suggest some books that discuss case studies of software architecture. | 2011/04/10 | [
"https://Stackoverflow.com/questions/5609727",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2302649/"
] | Where can you get knowledge about software architecture? One place is your experience building systems. Another is conversations with other developers or reading their code. Yet another place is books. I am the author of a book on software architecture ([Just Enough Software Architecture](http://rhinoresearch.com/content/software-architecture-book)) but let me instead point you to some classics:
* [Software Architecture in Practice (Bass, Clements, Kazman)](https://rads.stackoverflow.com/amzn/click/com/0321154959). This book from the Software Engineering Institute (SEI) describes how architects should think about problems. It describes the importance of quality attributes (performance, security, modifiability, etc.) and how to make tradeoffs between them, since you cannot maximize all of them.
* [Documenting Software Architectures (lots of SEI/CMU authors)](https://rads.stackoverflow.com/amzn/click/com/0321552687). The title of this book is a bit scary, because many people are trying to avoid writing shelfware documents. But the wonderful thing about the book is that it describes the standard architectural styles / patterns, notations for describing structure and behavior, and a conceptual model of understanding architectures. All these are valuable even if you only ever sketch on a whiteboard.
* [Software Systems Architecture (Rosanski and Woods)](https://rads.stackoverflow.com/amzn/click/com/0321112296). Goes into detail about how to think about a system from multiple perspectives (views). What I like particularly is that it gives checklists for ensuring that a particular concern (say security) has been handled.
* [Essential Software Architecture (Gorton)](https://rads.stackoverflow.com/amzn/click/com/3540287132). Small, straightforward book on IT architecture. Covers the different kinds of things you'll see (databases, event busses, app servers, etc.)
That's just a short list, and just because I didn't list something doesn't mean it's a bad book. If you are looking for something free to read immediately, I have [three chapters of my book](http://rhinoresearch.com/files/Just_Enough_Software_Architecture__Fairbanks_2010-demo.pdf) available for download on my website. | I *think* this is the book that came to mind when I first read this question. It talks about various architectural styles like pipes-and-filters, blackboard systems, etc. It's an oldie, and I'll let you judge whether it's a 'goodie'.
[Pattern Oriented Software Architecture](https://rads.stackoverflow.com/amzn/click/com/0471958697)
I also particularly like these two, especially the first. The second starts to dig into lower level design patterns, but it's still awesome in various spots:
[Enterprise Integration Patterns](https://rads.stackoverflow.com/amzn/click/com/0321200683)
[Patterns of Enterprise Application Architecture](https://rads.stackoverflow.com/amzn/click/com/0321127420)
I hope these are what you had in mind. |
325,201 | As per the title.
I received this from a security guard when attending a developer conference, well, a bit overdressed (wearing a suit where all the other nerds just appeared in t-shirts, jeans and sneakers).
But I've also heard it used elsewhere (not directed at me; the above anecdote was my only personal experience).
Well, I somehow felt that it was not a compliment, but a subtle insult / belittlement.
I'm not a native speaker. Can someone explain the background of that phrase, or what the joke in it is? | 2016/05/12 | [
"https://english.stackexchange.com/questions/325201",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/85265/"
] | It is impossible to tell from the minimal description of the circumstances surrounding the guard's comment what his intentions were in saying "Nice shoes." On the one hand, there is at least a possibility that the intention was sincerely to compliment you on your shoes. After all, some consultants recommend it as an ingratiating strategy. From David Topus, [*Talk to Strangers: How Everyday, Random Encounters Can Expand Your Business, Career, Income, and Life*](https://books.google.com/books?id=4aBCioax4UEC&pg=PA69&lpg=PA69&dq=%22nice+shoes%22+%22nice+tie%22&source=bl&ots=5lEl8XZp1l&sig=yH_TUisS2LIOoqZEtV7WJ5QVKss&hl=en&sa=X&ved=0ahUKEwi3v_CFjtXMAhVM_WMKHWgRCM4Q6AEIRjAL#v=onepage&q=%22nice%20shoes%22%20%22nice%20tie%22&f=false) (2012):
>
> Show sensitivity to and awareness of the other person as much as possible. Any positive comment you can make about the other person allows you to accomplish this. You can never go wrong with a compliment: nice suit, **nice shoes**, nice tie, nice purse, nice ring, nice briefcase, and so on. These will get you going in a great conversation direction.
>
>
>
On the other hand, the guard probably wasn't trying to butter you up in order to expand his business, career, income, and life—and you note that you were dressed rather more formally than other attendees at the conference—so it's possible that he was calling out a difference that you were already feeling a bit awkward or self-conscious about, as a form of teasing.
Disapproving references (couched as compliments) to unconventional dress have a long history in business and social settings. I remember reading an article years ago about the extreme rigidity of the unspoken dress code at a well-established San Francisco law firm. The author recounted how he had once arrived at work wearing a yellow, long-sleeve button-down shirt, instead of the standard white, long-sleeve button-down shirt that the unspoken code insisted upon—and one of the partners at the firm, who rarely had anything to say to him, said in passing, "Nice shirt." The author says that he immediately recognized the comment as a rebuke: to have one's clothing choices mentioned at all at the firm was a form of indirect criticism.
Something similar happens in Virginia Woolf's short story, "[The New Dress](https://books.google.com/books?id=o1AbAgAAQBAJ&pg=PT2078&dq=%22Mabel%27s+got+a+new+dress%22&hl=en&sa=X&ved=0ahUKEwiVutXajdXMAhVR52MKHUs4DrwQ6AEIPDAF#v=onepage&q=%22Mabel%27s%20got%20a%20new%20dress%22&f=false)," where a character named Mabel Waring convinces herself to alter an old-fashioned yellow dress and wear it to a fancy party at Mrs. Dalloway's house. Feeling more and more like a fly trapped and liable to drown in a saucer of milk, she intercepts a well-tailored acquaintance, trying to put herself at ease:
>
> "It's so old-fashioned," she said to Charles Burt, making him stop (which by itself he hated) on his way to talk to someone else.
>
>
> She meant, or she tried to make herself think that she meant, that it was the picture [on the wall] and not her dress, that was old-fashioned. And one word of praise, one word of affection from Charles would have made all the difference for her at that moment. If he had only said, "Mabel, you're looking charming tonight!" it would have changed her life. But then she ought to have been truthful and direct. Charles said nothing of the kind, of course. He was malice itself. He saw through one, especially if one were feeling particularly mean, paltry, or feeble-minded.
>
>
> "Mabel's got a new dress!" he said, and the poor fly was absolutely shoved into the middle of the saucer [of milk].
>
>
>
---
In a comment above, NVZ cites an entry for "[nice shoes](https://www.urbandictionary.com/define.php?term=Nice%20Shoes)" at Urban Dictionary indicating that the phrase may be used as pick-up line—a sexual come-on. But Urban Dictionary also has the [following entry](http://www.urbandictionary.com/define.php?term=resplect), which uses "nice shoes" as a straightforward compliment without ulterior meaning:
>
> **resplect.** When you reflect the respect. [Example 1:] "Hey man you're wearing a nice tie today" "No dude, I like your tie." Tie resplect [Example 2:] "**Nice shoes**" "No you got nice shoes" Shoes resplect
>
>
>
In short, the guard may have intended "Nice shoes" as a simple compliment, or he may have said it to discomfit you because you were not dressed like most of the other conference attendees. It is highly unlikely that he was trying to proposition you. | This compliment was likely genuine but likely also meant as a humorous, slightly sarcastic understatement.
It's a stereotype almost to the point of cliché in business that you can tell who really has money by looking not at their suit, but at their shoes. The same mentality is also behind the term "well-heeled" meaning wealthy; shoes typically have a pretty hard life as clothing, and it's very tempting to cut corners and wear a more durable or simply a cheaper pair. Expensive shoes in good condition are a mark of someone with enough money to keep them that way and enough attention to detail to care, while someone more practically minded or less attentive might try to get away with a more durable or cheaper and less dressy shoe. The compliment, in this context, carries the hidden meaning of "I have noticed that you are well-put-together, head to foot, and I know what that means".
However, as you noticed, you've been showing up significantly overdressed relative to the others in the room. That's because the Internet Age has ushered in a new well-to-do, in the vein of "I'm wealthy, important or otherwise valuable enough that I don't have to impress you with my clothes". In this context the compliment is meant as a humorous understatement more than anything else: "you are so well-dressed compared to your peers that I'm going to single out the least important element of your ensemble to compliment". The doorman could just as easily have said, "nice pocket square" if you happened to be sporting one. |
84,679 | Is there a 'psychic mode' plugin for kopete?
Psychic mode is a Pidgin plugin that opens up the chat dialog as soon as someone starts talking to you, before the message is sent. I'm looking for the same functionality in Kopete. | 2009/12/17 | [
"https://superuser.com/questions/84679",
"https://superuser.com",
"https://superuser.com/users/17980/"
] | Here you go: [kopete psyko 0.1](http://opendesktop.org/content/show.php?content=121585),
Same plugin [download link 2](http://linux.softpedia.com/get/Communications/Chat/kopete-psyko-55357.shtml)
Hope this helps! | In Configuration->Behavior->General, there is "Message Handling": "Open messages instantly". |
60,593 | Looking for movie featuring Dragons vs. Navy (or Army, but the trailer featured battleship trying to shoot the dragons down).
Trailer also showed a dogfight between helicopters and dragons, capping (on the trailer) with a dragon setting a Blackhawk's blades on fire. | 2014/07/03 | [
"https://scifi.stackexchange.com/questions/60593",
"https://scifi.stackexchange.com",
"https://scifi.stackexchange.com/users/28220/"
] | Sounds similar to [Dragon Wars : D-War](https://en.wikipedia.org/wiki/D-War), although I haven't found a specific trailer (of which there are quite a few) with a Blackhawk's blades on fire. Plenty of dragons destroying helicopters in other ways though. Here is a [short trailer](http://www.imdb.com/video/screenplay/vi3466789145), and here is a [longer trailer](https://www.youtube.com/watch?v=YnQXvQ1R4gg). Some of the dialogue in the longer trailer is Korean, but the movie itself is mostly in English.
 | This sounds very like the British film [Reign of Fire](http://en.wikipedia.org/wiki/Reign_of_Fire_%28film%29), which included several scenes of fire-breathing dragons versus helicopters.
The trailer is [here](https://www.youtube.com/watch?v=Wg7bjwEXp7Y) and although there are various helicopter shots there's nothing specifically with the blades on fire. |
60,593 | Looking for movie featuring Dragons vs. Navy (or Army, but the trailer featured battleship trying to shoot the dragons down).
Trailer also showed a dogfight between helicopters and dragons, capping (on the trailer) with a dragon setting a Blackhawk's blades on fire. | 2014/07/03 | [
"https://scifi.stackexchange.com/questions/60593",
"https://scifi.stackexchange.com",
"https://scifi.stackexchange.com/users/28220/"
] | I saw that trailer too and then could not find it again, but stumbled across it today; it is called Crimson Skies.
The only other info I could find on this is that it was originally called Dragon Siege, but your guess is as good as mine as to whether it's a movie or a game. | Sounds similar to [Dragon Wars : D-War](https://en.wikipedia.org/wiki/D-War), although I haven't found a specific trailer (of which there are quite a few) with a Blackhawk's blades on fire. Plenty of dragons destroying helicopters in other ways though. Here is a [short trailer](http://www.imdb.com/video/screenplay/vi3466789145), and here is a [longer trailer](https://www.youtube.com/watch?v=YnQXvQ1R4gg). Some of the dialogue in the longer trailer is Korean, but the movie itself is mostly in English.
 |
60,593 | Looking for movie featuring Dragons vs. Navy (or Army, but the trailer featured battleship trying to shoot the dragons down).
Trailer also showed a dogfight between helicopters and dragons, capping (on the trailer) with a dragon setting a Blackhawk's blades on fire. | 2014/07/03 | [
"https://scifi.stackexchange.com/questions/60593",
"https://scifi.stackexchange.com",
"https://scifi.stackexchange.com/users/28220/"
] | Sounds similar to [Dragon Wars : D-War](https://en.wikipedia.org/wiki/D-War), although I haven't found a specific trailer (of which there are quite a few) with a Blackhawk's blades on fire. Plenty of dragons destroying helicopters in other ways though. Here is a [short trailer](http://www.imdb.com/video/screenplay/vi3466789145), and here is a [longer trailer](https://www.youtube.com/watch?v=YnQXvQ1R4gg). Some of the dialogue in the longer trailer is Korean, but the movie itself is mostly in English.
 | MainStay Productions released a trailer on YouTube last year, about a movie project they are working on with BluFire Studios, ostensibly called Crimson Skies.
The premise is that a volcanic island erupts, releasing thousands of dragons from millenia-long slumber, that attack a small fleet of Navy vessels.
I checked on both MainStay and BluFire's websites, neither of which make any mention of the project, which makes me think that the trailer may simply be a "pitch demo" to try and find a studio willing to underwrite it. There is no other mention of the movie, other than the video, that I can find.
Of course, this may also be just another elaborate YouTube hoax. |
60,593 | Looking for movie featuring Dragons vs. Navy (or Army, but the trailer featured battleship trying to shoot the dragons down).
Trailer also showed a dogfight between helicopters and dragons, capping (on the trailer) with a dragon setting a Blackhawk's blades on fire. | 2014/07/03 | [
"https://scifi.stackexchange.com/questions/60593",
"https://scifi.stackexchange.com",
"https://scifi.stackexchange.com/users/28220/"
] | I saw that trailer too and then could not find it again, but stumbled across it today; it is called Crimson Skies.
The only other info I could find on this is that it was originally called Dragon Siege, but your guess is as good as mine as to whether it's a movie or a game. | This sounds very like the British film [Reign of Fire](http://en.wikipedia.org/wiki/Reign_of_Fire_%28film%29), which included several scenes of fire-breathing dragons versus helicopters.
The trailer is [here](https://www.youtube.com/watch?v=Wg7bjwEXp7Y) and although there are various helicopter shots there's nothing specifically with the blades on fire. |
60,593 | Looking for movie featuring Dragons vs. Navy (or Army, but the trailer featured battleship trying to shoot the dragons down).
Trailer also showed a dogfight between helicopters and dragons, capping (on the trailer) with a dragon setting a Blackhawk's blades on fire. | 2014/07/03 | [
"https://scifi.stackexchange.com/questions/60593",
"https://scifi.stackexchange.com",
"https://scifi.stackexchange.com/users/28220/"
] | This sounds very like the British film [Reign of Fire](http://en.wikipedia.org/wiki/Reign_of_Fire_%28film%29), which included several scenes of fire-breathing dragons versus helicopters.
The trailer is [here](https://www.youtube.com/watch?v=Wg7bjwEXp7Y) and although there are various helicopter shots there's nothing specifically with the blades on fire. | MainStay Productions released a trailer on YouTube last year, about a movie project they are working on with BluFire Studios, ostensibly called Crimson Skies.
The premise is that a volcanic island erupts, releasing thousands of dragons from a millennia-long slumber, which attack a small fleet of Navy vessels.
I checked on both MainStay and BluFire's websites, neither of which make any mention of the project, which makes me think that the trailer may simply be a "pitch demo" to try and find a studio willing to underwrite it. There is no other mention of the movie, other than the video, that I can find.
Of course, this may also be just another elaborate YouTube hoax. |
60,593 | Looking for movie featuring Dragons vs. Navy (or Army, but the trailer featured battleship trying to shoot the dragons down).
Trailer also showed a dogfight between helicopters and dragons, capping (on the trailer) with a dragon setting a Blackhawk's blades on fire. | 2014/07/03 | [
"https://scifi.stackexchange.com/questions/60593",
"https://scifi.stackexchange.com",
"https://scifi.stackexchange.com/users/28220/"
] | I saw that trailer too and then could not find it again, but stumbled across it today; it is called Crimson Skies.
The only other info I could find on this is that it was originally called Dragon Siege, but your guess is as good as mine as to whether it's a movie or a game. | MainStay Productions released a trailer on YouTube last year, about a movie project they are working on with BluFire Studios, ostensibly called Crimson Skies.
The premise is that a volcanic island erupts, releasing thousands of dragons from a millennia-long slumber, which attack a small fleet of Navy vessels.
I checked on both MainStay and BluFire's websites, neither of which make any mention of the project, which makes me think that the trailer may simply be a "pitch demo" to try and find a studio willing to underwrite it. There is no other mention of the movie, other than the video, that I can find.
Of course, this may also be just another elaborate YouTube hoax. |
364,027 | In the image, suppose A is an observer. A box of length 1 light-second is moving away from A with a velocity of half the speed of light. A laser is shot from the front side of the box (C) to the opposite side of the box, meaning that the light is going in the direction opposite to the velocity of the box.
If there is an observer in the box, he will see the light reach the end of the box (B) in one second. But in the case of observer A, will the time taken by the light be shorter than 1 second, or longer? It seems like it would be shorter; according to special relativity, should it be longer? [](https://i.stack.imgur.com/rmVKx.jpg) | 2017/10/20 | [
"https://physics.stackexchange.com/questions/364027",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/136782/"
] | Before I start, we have to agree on what Huygens’ principle says. The Wikipedia page you cite does a poor job at presenting Huygens’ original idea, instead entirely focusing on Fresnel’s addition to it (the article is titled Huygens-Fresnel principle after all). You may know what I am going to explain but it may be still be useful to clarify for other readers.
If we go down the route of Fresnel modifications, it is rather pointless to stop there as Fresnel’s theory is just a bunch of heuristic guesses to make Huygens’ idea work for diffraction: instead we should go directly to [Kirchhoff’s formula](https://en.wikipedia.org/wiki/Kirchhoff%27s_diffraction_formula) which is grounded in the rigorous treatment of the wave equation. As for Huygens’ original principle, it states
1. that every point on the wave front of a propagating wave (the primary wave) is the source of secondary spherical wavelets; and
2. that the envelope of the wave fronts of all those secondary waves is the wave front of the primary wave.
A key point is that all those waves propagate at the same velocity. This is clearer on a drawing.
[](https://i.stack.imgur.com/Jhf5O.png)
The blue circle is the primary wave front, the dotted circles are the wave fronts of the secondary wavelets and the red circle is the envelope. With this idea, Huygens was able to explain the laws of reflection and refraction but this method fails at explaining diffraction, not without Fresnel’s additions at least.
However we have a big problem: you considered surface waves but Huygens’ principle is not true in 2 dimensions! It is only true in odd dimensions. The only explanation of that I know of requires to study the spherical wave equation. I won’t elaborate on that but a nice exposition can be found on Kevin Brown’s [famous mathpages](http://www.mathpages.com/home/kmath242/kmath242.htm). At least the first half is understandable with a bit of knowledge in calculus. There was also this [question](https://physics.stackexchange.com/questions/129324/why-is-huygens-principle-only-valid-in-an-odd-number-of-spatial-dimensions) on our beloved physics.stackexchange, with answers at various levels of math.
Thus, I shall consider sound waves propagating in the atmosphere instead, in the presence of a steady uniform wind, so that I can work in 3D. In effect, since I will not write any math, it won’t make much difference to the exposition but at least you can feel assured that this is correct!
Now let’s consider an observer, moving at the same velocity as the wind, who claps his hands. By the principle of relativity, what he sees should then be the same as what he would have seen standing still on the ground if there was no wind: the wave fronts are concentric sphere expanding outward and the original Huygens' principle is valid. But now what we are interested in is what another observer standing still on the ground will see.
The essential point is that a wave front of the primary wave is still a sphere but one that is moving with the velocity of the wind. Similarly, the wave front of the secondary wavelets emitted on the primary wave front are also still spheres but moving with the wind speed. This is just a simple Galilean transformation from the frame of the observer moving along with the wind to the frame of the ground. As a result, the envelope of those secondary wave fronts will be the primary wave front further away, and we can conclude that the original Huygens’ principle hold for that observer standing still on the ground.
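Although the argument above is purely geometric, it is easy to verify numerically. The following illustrative sketch (wind speed, emission time and observation time are arbitrary choices, not from the original exposition) checks that the outermost point of each wind-drifting secondary wavelet lands exactly on the advected primary front:

```python
import math
import random

c = 1.0                       # sound speed relative to the air
w = (0.3, 0.1, 0.0)           # steady uniform (subsonic) wind velocity
t1, t2 = 1.0, 1.5             # wavelet emission time, later observation time

def add(a, b):   return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def norm(a):     return math.sqrt(sum(x * x for x in a))

random.seed(0)
for _ in range(1000):
    g = [random.gauss(0.0, 1.0) for _ in range(3)]
    u = scale(g, 1.0 / norm(g))                   # random direction (3D, as in the text)
    p = add(scale(w, t1), scale(u, c * t1))       # point on the primary front at t1
    center = add(p, scale(w, t2 - t1))            # the wavelet's center drifts with the wind
    outer = add(center, scale(u, c * (t2 - t1)))  # outermost point of that wavelet at t2
    # It must lie on the advected primary front: a sphere of radius c*t2 about w*t2.
    assert abs(norm(add(outer, scale(w, -t2))) - c * t2) < 1e-9

print("envelope check passed")
```

Algebraically the same thing falls out immediately: the outer point is w·t1 + c·t1·u + w·(t2−t1) + c·(t2−t1)·u = w·t2 + c·t2·u, which sits at distance c·t2 from the drifted center w·t2.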
Owing to the discussion at the beginning of my answer, there is still the question of whether the improved Huygens' principles hold in the presence of the wind, specifically, whether Kirchhoff's formula does. The answer is that the standard form of it does not but that a tweaked version of it can be proven to work [Mor30].
[Mor30] W.R. Morgans. XIV. the Kirchhoff formula extended to a moving surface. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 9(55):141–161, 1930. | The wind basically adds new sources for waves that also have to be considered regarding Huygens' principle. These new wave sources will look different depending on the wind you choose. Maybe you can imagine it as if you would throw a lot of little stone everywhere the wind hits the water. After the wind stops the principle will again hold true with the new formed shapes of wavefronts ect. |
1,886,580 | I am in an early stage of a project, graphically modelling the system structure.
Is there any widely accepted graphical notation for showing interface "bundles"?
Interface bundles would be a collection of several separate interfaces (belonging together) which are aggregated in order to reduce figure complexity.
Example would be to visualize a
* direct debit interface,
* voucher interface,
* credit card interface and
* prepaid interface
as one aggregated payment interface with hinting that the actual implementation consists of several interfaces. I am looking for ways to illustrate the "hinting". | 2009/12/11 | [
"https://Stackoverflow.com/questions/1886580",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/74168/"
] | So, it all depends on what you are trying to say from a modeling perspective. Options 3 could be your hinting, but there are other options.
1. Use packages for grouping and add a keyword of 'group'/'interface set', plus name the package after what the grouping should be called. Not my personal favorite, but common, because it is easy, and people overuse packages for the wrong meaning.
2. Make one large/grouped interface and have it realize the others. This would be very explicit from an inheritance perspective. It would work nicely during behavior modeling (sequence diagrams) because the child interface methods would actually be available.
3. Like 2, but instead use the use/depends line and add a keyword to it; you can put a 'hint' keyword like contains, groups, include (this is standard).
4. You could use association, aggregation, or composition lines between a large/grouped interface and the contained interfaces, like in option 3.
5. If it is more important that they are grouped at the realization or component level, you can put multiple interfaces on one port (i.e., lollipop) coming off a component.
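As a minimal code-level sketch of option 2 (the interface names are hypothetical, borrowed from the payment example in the question), the grouped interface simply realizes the contained ones:

```python
from abc import ABC, abstractmethod

class DirectDebit(ABC):
    @abstractmethod
    def debit(self, amount: int) -> None: ...

class Voucher(ABC):
    @abstractmethod
    def redeem(self, code: str) -> None: ...

# The large/grouped interface realizes (inherits) the contained interfaces,
# so all of their methods are available on it during behavior modeling.
class Payment(DirectDebit, Voucher, ABC):
    pass

print(issubclass(Payment, DirectDebit), issubclass(Payment, Voucher))
```

Any concrete payment implementation then realizes `Payment` and is forced to provide every contained interface's operations.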
If these don't work there might be more complex or behavior-related UML modeling options. You could even try to leverage the IBM services profile, but it is more about grouping interfaces (services) for deployment grouping, kind of like 5. | If you're using UML, I think the relevant diagram is a component diagram, which you could use to easily capture components ("bundles") and describe their interfaces.
example:
[Component diagram example](http://images.google.com/imgres?imgurl=http://images.devshed.com/ds/stories/Introducing%2520UML/image%25204.jpg&imgrefurl=http://www.devshed.com/c/a/Practices/Introducing-UMLObjectOriented-Analysis-and-Design/4/&usg=__H2GdmQEszl1l5Yut-If0aC8X36I=&h=393&w=478&sz=19&hl=en&start=4&um=1&tbnid=TVe9tx3B2rs2mM:&tbnh=106&tbnw=129&prev=/images%3Fq%3Dcomponent%2Bdiagram%26hl%3Den%26client%3Dfirefox-a%26rlz%3D1R1GGGL_en___NL355%26sa%3DX%26um%3D1) |
1,886,580 | I am in an early stage of a project, graphically modelling the system structure.
Is there any widely accepted graphical notation for showing interface "bundles"?
Interface bundles would be a collection of several separate interfaces (belonging together) which are aggregated in order to reduce figure complexity.
Example would be to visualize a
* direct debit interface,
* voucher interface,
* credit card interface and
* prepaid interface
as one aggregated payment interface with hinting that the actual implementation consists of several interfaces. I am looking for ways to illustrate the "hinting". | 2009/12/11 | [
"https://Stackoverflow.com/questions/1886580",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/74168/"
] | So, it all depends on what you are trying to say from a modeling perspective. Options 3 could be your hinting, but there are other options.
1. Use packages for grouping and add a keyword of 'group'/'interface set', plus name the package after what the grouping should be called. Not my personal favorite, but common, because it is easy, and people overuse packages for the wrong meaning.
2. Make one large/grouped interface and have it realize the others. This would be very explicit from an inheritance perspective. It would work nicely during behavior modeling (sequence diagrams) because the child interface methods would actually be available.
3. Like 2, but instead use the use/depends line and add a keyword to it; you can put a 'hint' keyword like contains, groups, include (this is standard).
4. You could use association, aggregation, or composition lines between a large/grouped interface and the contained interfaces, like in option 3.
5. If it is more important that they are grouped at the realization or component level, you can put multiple interfaces on one port (i.e., lollipop) coming off a component.
If these don't work there might be more complex or behavior-related UML modeling options. You could even try to leverage the IBM services profile, but it is more about grouping interfaces (services) for deployment grouping, kind of like 5. | Sounds like component diagrams would be useful here. Check out the following example:
[](https://i.stack.imgur.com/HeVkP.png)
**UML Component Diagrams: Reference**: <http://msdn.microsoft.com/en-us/library/dd409390%28VS.100%29.aspx> |
15,710 | Was travelling on an Airbus A320 (night flight) and during the safety briefing noticed that the fluorescent floor lighting strip referred to in the evacuation instructions was totally absent. Both sides of the aisle.
I could see the marks on the floor carpet where it must have been earlier tacked on. Was removed and went unreplaced for whatever reason.
Is it OK for an aircraft to fly without this item? Seemed pretty essential to me for a safe evacuation in the dark.
Edit: I did try & take a photo but unfortunately didn't come out so well.
 | 2015/06/11 | [
"https://aviation.stackexchange.com/questions/15710",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/7611/"
] | According to [this document](http://fsims.faa.gov/wdocs/mmel/a-320%20r21.pdf) the answer would appear to be no if all lights were missing.

The 1/1 columns indicate the number installed and the number required for dispatch. | Those lights don't have to be mounted to the floor. They can also be seat-mounted.
<http://www.bruceind.com/index.php?option=com_k2&view=item&id=90:escape-path-lighting-systems&Itemid=189>
<http://www.astronics.com/_images/aircraft-safety/EPM%20033010.pdf>

This is the AC that provides guidance on the requirements of the system. <http://www.faa.gov/documentLibrary/media/Advisory_Circular/AC25.812-1A.pdf>
The regulation is [14CFR 25.812](http://www.ecfr.gov/cgi-bin/text-idx?c=ecfr&SID=67a8813bf9d9da0aa64e74e2e5ced957&rgn=div8&view=text&node=14:1.0.1.3.11.4.178.62&idno=14 "14CFR25.812") |
25,284 | Why do summary routes get a lower AD than other routing protocols? For example: The AD of EIGRP is 90, whereas the AD of a summary route is 5. | 2015/12/17 | [
"https://networkengineering.stackexchange.com/questions/25284",
"https://networkengineering.stackexchange.com",
"https://networkengineering.stackexchange.com/users/21408/"
] | Since the summarized route means that a router advertising it has knowledge of the individual routes within the summarized prefix, it is more trustworthy than the same (summarized) prefix being advertised as an individual route without the knowledge of the individual routes which make up the summary.
This doesn't mean that the summarized route is more preferred than one of the routes in the summary, since the longest match will be more preferred than the summary. It simply means that if the same route is advertised as both a summary and a non-summary, the summary route is more trustworthy. | The AD of the EIGRP summary route is 5 only on the router that has the summary route configured. When the summary is advertised to other routers it has an AD of 90.
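To make the longest-match point concrete, here is a toy sketch (a hypothetical route table, not actual router code): the more-specific prefix wins even against a lower-AD summary, and AD only matters between routes to the same prefix:

```python
import ipaddress

# Hypothetical candidate routes: (prefix, administrative distance)
routes = [
    (ipaddress.ip_network("10.0.0.0/8"), 5),    # local EIGRP summary (AD 5)
    (ipaddress.ip_network("10.1.1.0/24"), 90),  # specific EIGRP route (AD 90)
]

def best_route(dest, routes):
    """Longest prefix match first; AD only breaks ties between equal prefixes."""
    matches = [(net, ad) for net, ad in routes if dest in net]
    return min(matches, key=lambda m: (-m[0].prefixlen, m[1]))

net, ad = best_route(ipaddress.ip_address("10.1.1.5"), routes)
print(net, ad)  # the /24 wins despite its higher AD, because it is the longest match
```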
The reason for the low AD is to ensure that the summary route (to Null0) is preferred, to prevent routing loops. |
700 | I know someone who bought earphones that shine light in your ears. According to what he was told, there are neurons that sense light and then make you feel wide awake when activated, which seemed like snake oil to me. Apparently the pineal gland may be able to sense light, and it does secrete melatonin - a sleep-regulating hormone. I'm still sceptical though, as it's stuck in the middle of your brain. Would shining lights in your ears be able to have any effect on how awake you feel? | 2012/01/17 | [
"https://biology.stackexchange.com/questions/700",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/368/"
] | There is no known mechanism for light detection through the ears in humans, as far as I know. It is certainly true that the pineal gland is part of the system that regulates the circadian rhythm (briefly, the daily sleep-wake cycle). However, while the pineal gland in birds and other non-mammalian vertebrates is directly sensitive to light, the mammalian pineal gland is not (see, for review, [Doyle and Menaker, 2007](http://www.ncbi.nlm.nih.gov/pubmed/18419310) and [Csernus, 2006](http://www.ncbi.nlm.nih.gov/pubmed/16687306)).
In all animals, the circadian rhythm is regulated by a photoperiod cue and therefore requires light detection. In mammals, the light sensors are found exclusively in the retina, the sensory portion of the eye. There are two classes of light detecting cells in the retina. First, rod and cone photoreceptors mediate vision in the usual sense of the word. These cells contain proteins called opsins that absorb photons of light and thereby excite the photoreceptors that contain them, informing the brain that light was detected.
A second class of photosenstive cells in the retina are called intrinsically photosensitive retinal ganglion cells (ipRGCs) (see [Do and Yau, 2010](http://www.ncbi.nlm.nih.gov/pubmed/20959623) for review). These cells mediate "non-image-forming" vision and are an important part of the circadian rhythm pathway. They also contain an opsin called *melanopsin* which is a photosensitive pigment. This is not to be confused with *melatonin*, which is the sleep hormone released by the pineal gland. The ipRGCs in the retina send the photoperiod cue to a brain area called the suprachiasmatic nucleus (SCN). The SCN then signals to the pineal gland.
If we are generous and assume that these light-emitting headphones are the result of misunderstandings, we can guess that the confusion arises from (1) the fact that some animals have a directly photosensitive pineal gland, but not mammals and (2) that the pineal gland secretes melatonin but not the photosensitive pigment melanopsin.
---
**Update**: From a bit of research, it turns out that the company selling the headphones is not "confused" as I politely offered. I don't think this site is the appropriate forum to refute their research or claims. Suffice to say that the retina is the only part of the human brain shown to be photosensitive. | I believe there are light sensors ([TRPV3](http://en.wikipedia.org/wiki/TRPV3)) in the skin for infrared light (heat), that convey that information back to the brain from the skin. This is kind of light detection, but it is not direct detection like the rhodopsins in the eye.
By the way, without passing information on to neurons, cells probably have a lot of sensors they may use to respond to their local environment. This recent article talks about how [olfactory receptors can be found in lung and gut cells](http://the-scientist.com/2011/12/01/taste-in-the-mouth-gut-and-airways/). So it's quite possible that the conventional light-detecting genes (rhodopsins) would be found in skin cells, but they may not convey information to neurons. |
806,506 | Where is it written that my hard disk is SSD or HDD?
I have tried searching:
* msinfo32
* Device Manager
* Disk Management
I need to see the words solid state drive or hard disk drive in Windows 7.
It may be either through CLI or GUI.
I found the same information for Windows 8 here.
Right-click on C drive-> *Properties*-> *Tools*-> *Optimize/Defragment now* -> Here you should see the disk listed with its media type. | 2014/09/03 | [
"https://superuser.com/questions/806506",
"https://superuser.com",
"https://superuser.com/users/303024/"
] | 1. Find the drive in Device Manager (devmgmt.msc).
2. Look up the model number in Google.
Example:

[KINGSTON SH103S3120G](http://lmgtfy.com/?q=kingston+sh103s3120g&l=1) - Kingston 120 GB SSD
[ST1000LM014-1EJ164-SSHD](http://lmgtfy.com/?q=ST1000LM014-1EJ164-SSHD) - Seagate 1 TB SSHD
---
So far, every search I've done to find a proper solution for this seems to indicate that one doesn't exist. Every Windows 7 solution I've found has been either a hack based on finding some string like "SSD" in the model number (which is horribly unreliable, as demonstrated by my Kingston above) or testing read/write performance and comparing it against some threshold.
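To see why the model-string "hack" is unreliable, here is a minimal sketch of it (the Samsung string is an illustrative addition; the other two model numbers come from the examples above):

```python
def looks_like_ssd(model: str) -> bool:
    """Naive heuristic: trust the model string -- shown only to illustrate why it fails."""
    return "SSD" in model.upper()

print(looks_like_ssd("Samsung SSD 850 EVO"))      # True: caught only because the vendor says so
print(looks_like_ssd("KINGSTON SH103S3120G"))     # False: yet this drive IS an SSD
print(looks_like_ssd("ST1000LM014-1EJ164-SSHD"))  # False: a hybrid drive with onboard flash
```

The heuristic only works when the vendor happens to put "SSD" in the name; it misses the Kingston SSD entirely and says nothing useful about hybrids.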
The fact of the matter is, the OS really has little reason to actually care what type of physical media resides within the hard drive. All the physical reading and writing is done by the hard drive controller, which translates the (generally media-agnostic) commands given to it from the OS via its drivers. Effectively, the OS only needs to worry about declaring what data it needs read/written and the controller handles the how and where of reading/writing it. (Yes, the OS knows a "where" too - but that's a *logical* location defined in software, not a *physical* one that's hardware-dependent.)
Windows 8, and the newer devices it supports, has a bit more intelligence built-in. However, these features appear to not have been back-ported to Windows 7. | I'm not completely clear on your question; however, in My Computer, right click on drive, select properties, select Hardware tab. In my case it shows *Patriot Pyro SSd SATA Disk Device*. |
806,506 | Where is it written that my hard disk is SSD or HDD?
I have tried searching:
* msinfo32
* Device Manager
* Disk Management
I need to see the words solid state drive or hard disk drive in Windows 7.
It may be either through CLI or GUI.
I found the same information for Windows 8 here.
Right-click on C drive-> *Properties*-> *Tools*-> *Optimize/Defragment now* -> Here you should see the disk listed with its media type. | 2014/09/03 | [
"https://superuser.com/questions/806506",
"https://superuser.com",
"https://superuser.com/users/303024/"
] | 1. Find the drive in Device Manager (devmgmt.msc).
2. Look up the model number in Google.
Example:

[KINGSTON SH103S3120G](http://lmgtfy.com/?q=kingston+sh103s3120g&l=1) - Kingston 120 GB SSD
[ST1000LM014-1EJ164-SSHD](http://lmgtfy.com/?q=ST1000LM014-1EJ164-SSHD) - Seagate 1 TB SSHD
---
So far, every search I've done to find a proper solution for this seems to indicate that one doesn't exist. Every Windows 7 solution I've found has been either a hack based on finding some string like "SSD" in the model number (which is horribly unreliable, as demonstrated by my Kingston above) or testing read/write performance and comparing it against some threshold.
The fact of the matter is, the OS really has little reason to actually care what type of physical media resides within the hard drive. All the physical reading and writing is done by the hard drive controller, which translates the (generally media-agnostic) commands given to it from the OS via its drivers. Effectively, the OS only needs to worry about declaring what data it needs read/written and the controller handles the how and where of reading/writing it. (Yes, the OS knows a "where" too - but that's a *logical* location defined in software, not a *physical* one that's hardware-dependent.)
Windows 8, and the newer devices it supports, has a bit more intelligence built-in. However, these features appear to not have been back-ported to Windows 7. | Go to the Control Panel -> System -> and find the Device Manager; click on it to get a listing of all devices present.
It should list the storage media, as in model **WD 500000000-XYZ.abc**. You then check what that model# refers to by googling the exact model# provided. Once done, it explains the specs of that storage device. |
68,761 | *[Beyond Libertarianism: Interpretations of Mill's Harm Principle and the Economic Implications Therein](https://scholarworks.gsu.edu/cgi/viewcontent.cgi?article=1051&context=political_science_theses#page=26)*
>
> The harm principle does not stipulate strict rights of the individual, applied uniformly.
>
>
> To justify a system of redistribution via Mill’s harm principle, we must first grant that taxation, in a general, nonspecific guise is a legitimate action of the state.1
>
>
>
I am trying to make the goal of reducing inequality and providing for social security and insurance compatible with the harm principle.
Adhering to the harm principle, the state should only act (restrict people's free will, coerce them into doing something) in order to prevent harm and safeguard third persons' rights. Further, the presumption in favor of liberty (in dubio pro libertate) makes a liberal state do so only when the harm (or danger, since the probability of harm is a harm in itself) to third persons' rights is actually known and proven, not presumed.
I can't see how amassing wealth (in itself, when it is devoid of any enriching actions that have a negative externality) can harm third persons. I can't see how I am harming anyone by inheriting, or by dying and having my inheritance passed to my heirs only (obviously there is an exclusion of the general public and any other person).
People who have been infected transmit SARS-CoV-2 probabilistically; the specific vaccines, after more than one year of testing, have finally been proven to reduce transmission. I appreciate that not preventing (reducing) a harm (danger) to others that you know of (or should and could know of) is in itself a harm (i.e., states that coerce their subjects/citizens into getting vaccinated are not illiberal).
I don't feel that reducing inequality and reducing the transmission of SARS-CoV-2 are of the same nature. One is clearly a harm, while I find it difficult to accept that the other is a harm.
I don't feel taxes are illiberal, but they seem to go against the harm principle.
**How could we adjust the Harm Principle so as to allow Taxation not to infringe upon it?**
I obviously don't mean by simply adding a perfunctory exemption (e.g., excluding taxes or excluding reasonable burdens) but by essentially and substantially altering the Harm Principle's content, without allowing obviously tyrannical and despotic state actions either.
---
1Towery, Matthew A., "Beyond Libertarianism: Interpretations of Mill's Harm Principle and the Economic Implications Therein." Thesis, Georgia State University, 2012. <https://scholarworks.gsu.edu/political_science_theses/45> | 2021/09/14 | [
"https://politics.stackexchange.com/questions/68761",
"https://politics.stackexchange.com",
"https://politics.stackexchange.com/users/39834/"
] | This is where the difference between [intrinsic goals and instrumental goals](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value) is important to discuss.
Mill is expressing an intrinsic goal in the Harm Principle, that is, a thing which self-justifies. Any moral philosophy that results in a self-defeating decision tree is incoherent and therefore not one we need to bother ourselves with. The principle of philosophical charity holds that when considering a philosopher's argument, one should do so in the terms that represent it as strongly as possible - so it is not reasonable for us to offer any interpretation of the Harm Principle (that governments should act to protect citizens from harm, including infringements on their liberties - even from the gov't itself) which leads us to recommend a government incapable of protecting anyone.
Without taxation, governments have no resources and no capacity to enact policies or offer protection to anyone. Taxation is, therefore, an instrumental goal in support of the intrinsic goal of the preservation of liberty. You can't have the one without the other, period. Even if your soldiers/police work for free, they still need equipment, training, facilities, and other things which must be paid for.
It's not that wealth is itself harmful, it's that the whole political/moral philosophy becomes incoherent if you insist that the government should protect your person and freedoms... but may not have any resources with which to do so.
Similarly, redistributive policies are not punishments against wealth but protective measures to stave off ruinous poverty (the harms from which are obvious and well recorded). Taxing the poor only to turn around and provide for them is as incoherent as insisting that laws be enforced without resources. Taxing the wealthy to fund these protections is an instrumental goal that serves the intrinsic goal of protection.
No modification of Mill's Harm Principle is needed, per se. Only the context within which it is being contemplated. If you try to apply the Harm Principle as if every act happens in a vacuum, wholly independent of all other acts, then you wind up with a Harm Principle that only permits total anarchy - since in order for a government to act at all it must have the resources and capacity to do so. Considering taxation as an independent act, as you discuss in the comments, does result in the HP proscribing against it, thus a government may never have resources or capacity, and thus a government may never rightfully exist.
John Stuart Mill, however, was not an anarchist - and this line of argument is absurd on its face besides.
Therefore we *must* consider the Harm Principle in the context of the interdependent nature of acts, which forces the acknowledgement of the existence of the intrinsic vs. the instrumental. An instrumental act is justified by the ends it is made in pursuit of.
This means that insofar as a government's intrinsic acts are solely to prevent harm, all instrumental acts *necessary to that end* are similarly permitted by the Harm Principle. An interesting consequence here is that if the final, intrinsic end, is NOT to prevent harm (or has elements besides the prevention of harm) then the *entire* chain of acts is now in violation of the Harm Principle, without exception.
If you read the rest of Mill's body of work, however, you'll find that he (at the least) flirted with the beginnings of what became Rule Utilitarianism - which permits actors to make errors, so long as they have evidence to support their conclusions that their acts are *likely* in furtherance of greater utility to the greatest number. | The text of the 'harm principle', as given in the linked document, reads as follows:
>
> That principle is, that the sole end for which mankind are warranted,
> individually or collectively, in interfering with the liberty of
> action of any of their number is self-protection. That the only
> purpose for which power can be rightfully exercised over any member of
> a civilized community, against his will, is to prevent harm to others.
>
>
>
There are actually *two* formulations here, and it's useful to consider the difference. The two formulations are:
* The 'prevention of harm', from the second sentence, and from which we get the common name of the principle, and...
* The activity of 'self-protection', from the first sentence.
I suspect people focus on the concept of *harm* because harm seems like a quantifiable, measurable, objective concept. Intuitively, pinching someone does less harm than punching them, which does less harm than hitting them with a baseball bat, and we like to think that we can extend that intuitive rank-ordering to any sort of harm whatsoever. Obviously this suffers serious problems in practice — I mean, is living with the lingering effects of 300 years of slavery and oppression more harm or less harm than getting hit with a baseball bat? — but it is difficult to shake that intuition completely.
On the other hand, the principle of 'self-protection' is intuitive in a different, more subjective sense. We all know more or less what we want to protect ourselves *from*, and there is a broad range of events and activities that most of us would agree everyone wants to protect themselves from. This also changes the nature of our relationship to government. Instead of government being an aloof, paternalistic entity that determines what is objectively harmful and tasks itself with preventing it from happening, government becomes a tool that we actively use for collective self-protection. The question is no longer that ambiguous determination of what is and is not objectively harmful, with all the caveats and pitfalls that entails; it is a more Kantian question of what things we collectively decide that we collectively want to protect ourselves against.
This naturally changes our perspectives on the issue. We no longer try to measure (say) the harm of taxation against the harm of poverty (which are deeply incommensurate metrics in any case). Now we concern ourselves with the idea that people in general want to protect themselves against abject poverty (not to mention heritable poverty), and we take the least invasive approach to ensure that people can protect themselves against abject poverty. We no longer care about the wealth divide, or how wealthy any individual gets, so long as everyone can protect themselves against falling into poverty.
It's worth noting that this move is inherent in Marxism. Marx shifted the metric away from harm to *property* and towards harm to *labor*; one must protect the effort one expends towards producing a good, because one must live by the profits of the labor one expends. And if we follow the Marxist thread all the way, we find that the ultimate *harm* (in his view) is the segregation of people into 'groups' or 'classes' that are treated differentially under government and law. This leads us straight into social democratic and left-Libertarian principles, where the creation of 'others', of second-class citizens and excluded groups, is the root of all structural harm within society.
175,794 | I've heard and read enough programmers firmly advocating automatic tests. According to many, tests are themselves part of a code's functionality, untested code is broken and/or legacy by definition, long-term manual testing is more time-consuming and provides far weaker guarantees against failures than automatic testing... etc.
I'm trying, as a hobby, to develop a turn-based Pokemon-like game. While talking to a far more experienced developer I said that when I add a new attack I run my game, go into 1v1 combat, use this move and see if it does what it's supposed to do. His answer was: *Why won't you instead write a test that goes into a 1v1 combat, uses this move and checks if it does what it's supposed to do? Doing what you're doing manually may be faster than writing tests if you do it once, twice, thrice. But after 10 times? One hundred times?*
I'm thinking about what he said. I'm thinking and I still don't feel convinced. About any other kind of project, perhaps. But a game?
Assume I'm adding lifesteal to a monster's attack. The problem is that I'll have to boot my game and play it **anyway**! To see if it feels right, if it plays right, if for nothing else.
Tests, on the other hand, add maintenance cost: for example, once I (for balancing reasons) change the lifesteal coefficient, I'll have to reflect that change in the tests. (Or I won't have to reflect this change in the tests if I copy & paste the formula, parametrized by attack strength, lifesteal coefficient, etc., but doesn't this defeat the purpose of testing?)
Automatic tests are supposed to replace manual tests; but given I have to manually play my game anyway, isn't this duplication of work?
Or is my thinking wrong? What am I missing? | 2019/09/25 | [
"https://gamedev.stackexchange.com/questions/175794",
"https://gamedev.stackexchange.com",
"https://gamedev.stackexchange.com/users/101389/"
] | Testing is an investment in your future. While an individual test might duplicate some aspect of manual testing you are about to do in any given run of the game, a robust suite of tests can in the long run cover far more scenarios than that small bit of overlapping work (manual testing can also more effectively test more complex scenarios where writing automatic tests might be too difficult).
Writing tests is an overhead, yes. So is maintaining them when the things you are testing change. It's also much harder to retrofit tests onto a big project not built from scratch with that goal in mind, as there are some technical design choices that are less compatible with easy, isolated testing than others.
Whether or not that overhead is worth the benefit depends on you and the scale and scope of your project.
The payoff is that automated tests can be run automatically and require far, far less work on your part than equivalent manual testing coverage would. In your example, sure, you manually run the game and ad-hoc test your change.
But do you do it on every platform you are shipping on? On every build configuration? Automated tests can do so easily, which means you can catch errors that only manifest on one or two platforms/configurations that you don't regularly test more quickly.
Similarly, while you may ad-hoc test that one ability change, do you manually go back and ad-hoc test *every other ability*? Probably not, because you're operating under the assumption that you didn't make a change that could impact those other abilities. But you are a programmer, and therefore you make mistakes ("bugs"), and therefore you *could have* accidentally made your change in such a way as to unintentionally break something you were not expecting. An automated test suite could catch that.
Granted, the value of the test suite is dependent on what you test and how, and there is a point of diminishing returns in the investment (100% test coverage is often impractical, for example).
For example, it's probably worthwhile to write a test to validate the results of a math function that presumes a left- or right-handed convention, as a change in that function to prefer the other convention will probably destabilize a lot of code and should be flagged. But the outcome of some damage calculation may not be a good candidate for a test, at least not early on when you are iterating rapidly on game balance. That sort of test is perhaps best added later, once balance has hardened a little, to alert you that you may have made a change that has balance implications. | First off, 100% test coverage is not a realistic goal, and games especially tend to incorporate elements that are difficult to test reliably, for example when aspects like timing, physics or (nontrivial) AI become involved. In addition, automated tests are neither infallible nor by definition superior to manual testing, so your friend's stance appears a little... overzealous to me.
However, that does not mean that games can't massively benefit from automated tests. A common approach in test automation is to prioritize areas that would benefit the most and work your way down the list until you reach a point where the efforts outweigh the advantages. Typical criteria are:
* How often does this code **change** / how likely is it to introduce a bug here?
* How **severe** are the consequences if it breaks?
* How **easy** is it to test?
* How **obvious** (or not) would a bug be during manual tests?
I'd recommend the following starting points:
### Sanity checks
If game stats and actions depend on complicated and/or frequently tweaked formulae or functions, it's rarely necessary to check for minor deviations (unless your game is really balanced on a knife's edge), and updating the test cases every time can take a lot of work. What you **do** want to guard against, however, are **game-breaking** numbers and unexpected interactions. Say, an ability that does 20 times the expected damage on a critical hit, but only against a specific enemy type.
Instead of setting up a test case that checks the exact damage output and secondary effects of every attack, I would write a test that goes through all attacks and checks if the numbers are **within a specific range** that's appropriate for their power level. Tweaking an ability by 10% for balance reasons shouldn't cause any tests to fail, but extreme outliers (which are probably unintended) should.
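A minimal sketch of such a range check, with made-up attack data and per-tier damage bounds:

```python
# Hypothetical per-tier damage bounds: wide enough that a 10% balance
# tweak never trips them, tight enough to flag a 20x outlier.
TIER_BOUNDS = {1: (5, 25), 2: (20, 60), 3: (50, 120)}

ATTACKS = [
    {"name": "Tackle", "tier": 1, "damage": 12},
    {"name": "Flame Burst", "tier": 2, "damage": 45},
    {"name": "Hyper Beam", "tier": 3, "damage": 110},
]

def damage_outliers(attacks, bounds):
    """Return names of attacks whose damage escapes their tier's range."""
    return [
        a["name"]
        for a in attacks
        if not bounds[a["tier"]][0] <= a["damage"] <= bounds[a["tier"]][1]
    ]

def test_no_gamebreaking_damage():
    assert damage_outliers(ATTACKS, TIER_BOUNDS) == []

test_no_gamebreaking_damage()
```

Tweaking an ability by 10% stays inside its band, while a 20x outlier immediately shows up in the returned list.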
### Core rules and algorithms
There will probably be pieces of code (say, the save/load mechanism, network code, pathfinding, core game rules...) that are critical to your game working as intended. They are also likely to interact with a lot of different modules, and thus be easily affected by changes to those. You might, for example, introduce a new enemy type, forget to serialize one of their stats and mess up the savegame format, which only becomes apparent during a longer playthrough, which you can't do after every little change.
For these central elements, automated tests are usually worth the effort because you absolutely don't want them to break.
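For instance, a save/load round-trip test fails the moment a newly added stat is left out of serialization; the `Monster` class and JSON format here are invented for the sketch, not taken from any actual codebase:

```python
import json

class Monster:
    """Toy monster with explicit, hand-written (de)serialization."""

    def __init__(self, name, hp, attack):
        self.name, self.hp, self.attack = name, hp, attack

    def to_dict(self):
        # Forgetting a newly added field here is exactly the bug
        # the round-trip test catches.
        return {"name": self.name, "hp": self.hp, "attack": self.attack}

    @classmethod
    def from_dict(cls, data):
        return cls(data["name"], data["hp"], data["attack"])

def test_save_load_round_trip():
    original = Monster("Sparkfox", hp=40, attack=12)
    payload = json.dumps(original.to_dict())           # "save"
    restored = Monster.from_dict(json.loads(payload))  # "load"
    assert restored.__dict__ == original.__dict__

test_save_load_round_trip()
```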
### Rapid iteration
Few finished games look anything like their early versions. You will probably make some sweeping changes that affect most of the existing content in one way or another, for example by adding a new stat or game mechanic. This, essentially, resets the "tested and mature" status of all your code and assets and the amount of extra testing required will grow with the size of your game.
You can't just skip manual tests, of course. There's no replacement for actually playing your game (and having others play it as well). But having a battery of tests ready to go, even if they're imperfect and incomplete, can allow you to vet and tweak these sweeping changes *much* faster, and find many inconsistencies the second they are introduced.
You're not limited to testing for correctness, by the way. If you're worried about balance, a script that runs a battery of different combinations, items, matchups or whatever can yield a lot more valuable (and reliable) data points than a handful of manual tests. |
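A sketch of such a script, using a deliberately toy win model in place of a real combat engine (the roster, stats and noise model are all assumptions):

```python
import itertools
import random

# Hypothetical roster: name -> attack stat. Real data would come from
# the game's content files.
ROSTER = {"Bruiser": 50, "Glass Cannon": 70, "Tank": 35}

def simulate_battle(attack_a, attack_b, rng):
    """Toy model: higher attack wins more often, with gaussian noise."""
    return attack_a + rng.gauss(0, 15) >= attack_b + rng.gauss(0, 15)

def win_rates(roster, trials=2000, seed=0):
    """Estimated win rate for every ordered matchup in the roster."""
    rng = random.Random(seed)
    return {
        (a, b): sum(simulate_battle(roster[a], roster[b], rng)
                    for _ in range(trials)) / trials
        for a, b in itertools.permutations(roster, 2)
    }

for matchup, rate in sorted(win_rates(ROSTER).items()):
    print(f"{matchup[0]:>12} vs {matchup[1]:<12} {rate:.2f}")
```

The printed win rates give a quick first look at balance; swapping `simulate_battle` for the real combat code would turn this into a genuine balance harness.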
175,794 | I've heard and read enough programmers firmly advocating automatic tests. According to many, tests are themselves part of a code's functionality, untested code is broken and/or legacy by definition, long-term manual testing is more time-consuming and provides far weaker guarantees against failures than automatic testing... etc.
I'm trying to develop a turn-based Pokemon-like game as a hobby. While talking to a far more experienced developer, I said that when I add a new attack I run my game, go into 1v1 combat, use this move and see if it does what it's supposed to do. His answer was: *Why don't you instead write a test that goes into a 1v1 combat, uses this move and checks if it does what it's supposed to do? Doing what you're doing manually may be faster than writing tests if you do it once, twice, thrice. But after 10 times? One hundred times?*
I'm thinking about what he said. I'm thinking and I still don't feel convinced. About any other kind of project, perhaps. But a game?
Assume I'm adding lifesteal to a monster's attack. The problem is that I'll have to boot my game and play it **anyway**! To see if it feels right, if it plays right, if for nothing else.
Tests, on the other hand, add maintenance cost: for example, once I change the lifesteal coefficient (for balance reasons), I'll have to reflect that change in the tests. (Or I won't have to reflect this change in the tests if I copy and paste the formula, parametrized by attack strength, lifesteal coefficient, etc., but doesn't this defeat the purpose of testing?)
Automatic tests are supposed to replace manual tests; but given that I have to play my game manually anyway, isn't this a duplication of work?
Or is my thinking wrong? What am I missing? | 2019/09/25 | [
"https://gamedev.stackexchange.com/questions/175794",
"https://gamedev.stackexchange.com",
"https://gamedev.stackexchange.com/users/101389/"
] | Testing is an investment in your future. While an individual test might duplicate some aspect of manual testing you are about to do in any given run of the game, a robust suite of tests can in the long run cover far more scenarios than that small bit of overlapping work (manual testing can also more effectively test more complex scenarios where writing automatic tests might be too difficult).
Writing tests is an overhead, yes. So is maintaining them when the things you are testing change. It's also much harder to retrofit tests onto a big project not built from scratch with that goal in mind, as there are some technical design choices that are less compatible with easy, isolated testing than others.
Whether or not that overhead is worth the benefit depends on you and the scale and scope of your project.
The payoff is that automated tests can be run automatically and require far, far less work on your part than equivalent manual testing coverage would. In your example, sure, you manually run the game and ad-hoc test your change.
But do you do it on every platform you are shipping on? On every build configuration? Automated tests can do so easily, which means you can catch errors that only manifest on one or two platforms/configurations that you don't regularly test more quickly.
Similarly, while you may ad-hoc test that one ability change, do you manually go back and ad-hoc test *every other ability*? Probably not, because you're operating under the assumption that you didn't make a change that could impact those other abilities. But you are a programmer, and therefore you make mistakes ("bugs"), and therefore you *could have* accidentally made your change in such a way as to unintentionally break something you were not expecting. An automated test suite could catch that.
Granted, the value of the test suite is dependent on what you test and how, and there is a point of diminishing returns in the investment (100% test coverage is often impractical, for example).
For example, it's probably worthwhile to write a test to validate the results of a math function that presumes a left- or right-handed convention, as a change in that function to prefer the other convention will probably destabilize a lot of code and should be flagged. But the outcome of some damage calculation may not be a good candidate for a test, at least not early on when you are iterating rapidly on game balance. That sort of test is perhaps best added later, once balance has hardened a little, to alert you that you may have made a change that has balance implications. | >
> The problem is that I'll have to boot my game and play it anyway! To see if it feels right, if it plays right, if for nothing else.
>
>
>
Yes, you need to do that while you are iterating on that specific situation you are implementing right now. But think ahead a couple of years in the future. You might be working on a completely different aspect of the game and accidentally break this situation you thought you had finished. Are you going to test every single thing in your game after every change? Certainly not manually, all by yourself. So how long will it take you to notice the bug? Will you then be able to easily connect it to that specific change you made? But if you have an automated test suite, you *can* easily test your entire codebase after every change.
>
> Tests, on the other hand, add maintenance cost: for example, once I (for balance reasons) change the lifesteal coefficient, I'll have to reflect that change in the tests.
>
>
>
Yes, and that's a good thing, because now you have to be aware of every ramification of your balance change.
Does the tutorial still play out the way you intended? Is that one boss fight still winnable with the intended strategy? Is that other boss still immune to life steal? In order to detect those problems you might have to play through your whole game from start to finish. But an automated test can tell you within seconds to minutes.
So an automated test allows you to go through every situation where the outcome is affected and confirm one by one that this is still the outcome you intended.
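For example, a regression test for the "boss immune to life steal" case might look like this (the rule, coefficient and function are hypothetical):

```python
# Assumed balance knob: fraction of dealt damage returned as healing.
LIFESTEAL_COEFF = 0.25

def lifesteal_amount(damage_dealt, target_immune):
    """HP the attacker recovers; immune targets grant nothing."""
    if target_immune:
        return 0
    return damage_dealt * LIFESTEAL_COEFF

def test_boss_immunity_still_holds():
    # Guards the intended boss strategy against future balance changes.
    assert lifesteal_amount(50, target_immune=True) == 0
    assert lifesteal_amount(50, target_immune=False) == 12.5

test_boss_immunity_still_holds()
```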
>
> Automatic tests are supposed to replace manual tests; but given that I have to play my game manually anyway, isn't this a duplication of work?
>
>
>
Automatic tests are not supposed to replace manual tests. They are supposed to augment them. While it can not replace the "how does it feel?" playtesting, it can save you a ton of work in regression testing (testing again and again if the things which used to work still work). |
58,525 | **Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?**
I am interested in understanding this from a usability perspective.
Let's say on a site such as this network, a user has both a username as well as a photo/avatar.
* On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar**
* On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username**
Which of these is more recognisable/memorable?1
i.e. If **DVK** were to change his picture2, would this throw users off (not quickly recognising who the post is from), or would there be more of an issue if **Jon Skeet** changed his username?3
---
**1.** Let's ignore rep points, as they are not relevant to my question.
**2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example let's ignore that.
**3. Please reference research where possible.** | 2014/06/06 | [
"https://ux.stackexchange.com/questions/58525",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/4430/"
] | Remembering or recalling words or images is highly individual and depends on which hemisphere of the brain is dominant. My right half of the brain is slightly more dominant than the left, which makes me remember a face rather than a name. Sometimes I'm embarrassed when meeting people on the street: I recognize the person, but don't recall their name. If we start talking, it takes a while before I remember the name because I need other details as well.
That's why avatar and username together are important to support both memory styles.
>
> Some people are just better at remembering in a certain way—even identical twins may vary in that regard—and it can relate to which hemisphere of the brain is dominant. Visual memory (which we call episodic memory) relates to the right hemisphere of the brain, which is associated with intuition. Verbal (semantic) memory is primarily a function of the left hemisphere, which we link with analytical thinking. The difference between the two kinds of memory becomes most obvious when it comes to recalling a deeply emotional event—9/11 or the day JFK was shot, for example. People with a strong semantic memory recall headlines, quotes, and phrases; those inclined toward visual memory retain the pictures and images of the event more vividly.
>
>
>
Reference: [Ask Dr. Gupta: Why Do I Recall Words Better Than Pictures?](http://www.prevention.com/health/brain-games/dr-sanjay-gupta-visual-memory) | As most people have said, it depends on the user.
For me personally it depends entirely on the context.
* On forums I'm heavily dependent on usernames for identifying people. I tend to remember people by their usernames on forums (and here). I think this is because forum users tend to use avatars detached from their identity such as cartoon characters or memes.
* On Twitter and Facebook I'm heavily reliant on the avatars of users because those people tend to represent themselves by their own face. I either learn what they look like (if I don't know them personally) or, if it's someone I already know from the real world, I know their face.
So my answer would be to think about the context of use and base your decision on that. |
58,525 | **Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?**
I am interested in understanding this from a usability perspective.
Let's say on a site such as this network, a user has both a username as well as a photo/avatar.
* On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar**
* On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username**
Which of these is more recognisable/memorable?1
i.e. If **DVK** were to change his picture2, would this throw users off (not quickly recognising who the post is from), or would there be more of an issue if **Jon Skeet** changed his username?3
---
**1.** Let's ignore rep points, as they are not relevant to my question.
**2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example let's ignore that.
**3. Please reference research where possible.** | 2014/06/06 | [
"https://ux.stackexchange.com/questions/58525",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/4430/"
] | To the other answers I would add accessibility concerns. For many people with poor sight, a username is much more easily used, either with zoomed text or some form of text reading. Even for those who don't need screen enhancements (e.g. me), avatars can be hard to tell apart: on Twitter, for example, where people use photos, many are difficult to identify because the photos are not well taken, or the user is trying to be clever and shows something which works at a reasonable size but not as a thumbnail.
Given that last thought, I would add a concern about general differentiability: it is much easier to make two avatars very nearly the same without it being noticed than two usernames, if users are trying to confuse people - which some will do. | Certain avatar images are very memorable and distinct. Some are almost indistinguishable. Likewise with usernames. Someone who was asked to remember the username "Jon Skeet" and was asked a day later to identify it from a list of the ten most similar usernames might have a good chance at identifying it, while someone who was shown a generic gravatar and asked to identify it from a list including nine randomly-selected ones would have a relatively poor chance even five minutes later. On the other hand, someone asked to remember a username written in an unfamiliar script [e.g. a hypothetical user "絕對沒有"] would have a harder time than someone asked to remember the gravatar of Jon Skeet (compared with the ten most similar gravatars, assuming nobody copied Mr. Skeet's picture). The differences in memorability between different usernames and avatars would seem to outweigh any general advantage usernames have over avatars or vice versa. Incidentally, without looking back at the previous hypothetical username, can one remember whether it was "絕對伏特加", "絕對沒有", or "絕對值" [Chinese characters from Google Translate]?
58,525 | **Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?**
I am interested in understanding this from a usability perspective.
Let's say on a site such as this network, a user has both a username as well as a photo/avatar.
* On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar**
* On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username**
Which of these is more recognisable/memorable?1
i.e. If **DVK** were to change his picture2, would this throw users off (not quickly recognising who the post is from), or would there be more of an issue if **Jon Skeet** changed his username?3
---
**1.** Let's ignore rep points, as they are not relevant to my question.
**2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example let's ignore that.
**3. Please reference research where possible.** | 2014/06/06 | [
"https://ux.stackexchange.com/questions/58525",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/4430/"
] | It depends not only on the person, but also on the service.
If you need to type the username often, or see it in your own messages, then the username is more memorable (e.g. on Twitter).
Making the avatar very small also hinders its recognition.
Some services allow non-Latin letters in usernames, which can make them look strange.
Speaking of strange: it also depends on the chosen username and avatar.
A long username is hard to memorize; a photo of a user's face in a 32x32 avatar is hard to remember or recognize.
In conclusion, it depends very much on the service where both of them are going to be used. | Although it is not exactly about avatars and usernames, there is a research paper about distinctive file icons called [VisualIDs: Automatic Distinctive Icons for Desktop Interfaces](http://scribblethink.org/Work/VisualIDs/visualids.html) in SIGGRAPH 2004. Visual distinctiveness is unsurprisingly useful for both short-term memory tasks (browsing for a specific file) and long-term memory tasks (sketching and describing icons two days later). One interesting principle is that arbitrarily unique icons can be recalled regardless of their contents or meanings to the user.
Note that this research assumes a priori that "Search and memory for images is known to be generally faster and more robust than search and memory for words" with a reference to [Data Mountain (UIST 1998)](http://research.microsoft.com/apps/pubs/default.aspx?id=64329). Despite no direct comparison to textual memory, I believe that this research can supplement that visual memory is *stickier* i.e. changing a distinctive and registered avatar would throw users off more than changing a username. |
58,525 | **Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?**
I am interested in understanding this from a usability perspective.
Let's say on a site such as this network, a user has both a username as well as a photo/avatar.
* On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar**
* On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username**
Which of these is more recognisable/memorable?1
i.e. If **DVK** were to change his picture2, would this throw users off (not quickly recognising who the post is from), or would there be more of an issue if **Jon Skeet** changed his username?3
---
**1.** Let's ignore rep points, as they are not relevant to my question.
**2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example let's ignore that.
**3. Please reference research where possible.** | 2014/06/06 | [
"https://ux.stackexchange.com/questions/58525",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/4430/"
] | Standard reminder: graphics are often of little or no use to folks who are reading the screen through assistive technology. The simplest answer -- as here on Stack Exchange -- is to display *both*. | Although it is not exactly about avatars and usernames, there is a research paper about distinctive file icons called [VisualIDs: Automatic Distinctive Icons for Desktop Interfaces](http://scribblethink.org/Work/VisualIDs/visualids.html) in SIGGRAPH 2004. Visual distinctiveness is unsurprisingly useful for both short-term memory tasks (browsing for a specific file) and long-term memory tasks (sketching and describing icons two days later). One interesting principle is that arbitrarily unique icons can be recalled regardless of their contents or meanings to the user.
Note that this research assumes a priori that "Search and memory for images is known to be generally faster and more robust than search and memory for words" with a reference to [Data Mountain (UIST 1998)](http://research.microsoft.com/apps/pubs/default.aspx?id=64329). Despite no direct comparison to textual memory, I believe that this research can supplement that visual memory is *stickier* i.e. changing a distinctive and registered avatar would throw users off more than changing a username. |
58,525 | **Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?**
I am interested in understanding this from a usability perspective.
Let's say on a site such as this network, a user has both a username as well as a photo/avatar.
* On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar**
* On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username**
Which of these is more recognisable/memorable?1
i.e. If **DVK** were to change his picture2, would this throw users off (not quickly recognising who the post is from), or would there be more of an issue if **Jon Skeet** changed his username?3
---
**1.** Let's ignore rep points, as they are not relevant to my question.
**2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example let's ignore that.
**3. Please reference research where possible.** | 2014/06/06 | [
"https://ux.stackexchange.com/questions/58525",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/4430/"
] | Some people have a great memory for words, other people a great memory for faces. Some have both or neither.
Some avatars can be completely generic and difficult to remember, such as Gravatar's autogenerated avatars.

Others can be very unique and memorable. Your DVK example is a good one.
Some usernames can be completely generic, such as this site's "user3216857". Others can be very unique and memorable. This is also very individual, since topics or references that impress me might not impress someone else (e.g. the username Gandalf wouldn't be especially memorable to someone unfamiliar with LoTR, but it's safe to assume that more SO newcomers would remember the name Gandalf than Jon Skeet - which is only memorable because he is Jon Skeet).
People process images faster than written words, even in their native language. Also, images contain more information and they are much more diverse. If you squint a little, all words will look pretty much the same, while you can still tell apart your average avatars. So they're usually easier to identify. This is separate from memorability. | Standard reminder: Graphics are often of little or no use to folks who are reading the screen through assistive technology. The simplest answer -- as here in Stack Exchange -- is to display *both*. |
58,525 | **Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?**
I am interested in understanding this from a usability perspective.
Let's say on a site such as this network, a user has both a username as well as a photo/avatar.
* On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar**
* On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username**
Which of these is more recognisable/memorable?1
i.e. If **DVK** were to change his picture2, would this throw users off (not quickly recognising who the post is from), or would there be more of an issue if **Jon Skeet** changed his username?3
---
**1.** Let's ignore rep points, as they are not relevant to my question.
**2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example let's ignore that.
**3. Please reference research where possible.** | 2014/06/06 | [
"https://ux.stackexchange.com/questions/58525",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/4430/"
] | As most people have said, it depends on the user.
For me personally it depends entirely on the context.
* On forums I'm heavily dependent on usernames for identifying people. I tend to remember people by their usernames on forums (and here). I think this is because forum users tend to use avatars detached from their identity such as cartoon characters or memes.
* On Twitter and Facebook I'm heavily reliant on the avatars of users because those people tend to represent themselves by their own face. I either learn what they look like (if I don't know them personally) or, if it's someone I already know from the real world, I know their face.
So my answer would be to think about the context of use and base your decision on that. | Although it is not exactly about avatars and usernames, there is a research paper about distinctive file icons called [VisualIDs: Automatic Distinctive Icons for Desktop Interfaces](http://scribblethink.org/Work/VisualIDs/visualids.html) in SIGGRAPH 2004. Visual distinctiveness is unsurprisingly useful for both short-term memory tasks (browsing for a specific file) and long-term memory tasks (sketching and describing icons two days later). One interesting principle is that arbitrarily unique icons can be recalled regardless of their contents or meanings to the user.
Note that this research assumes a priori that "Search and memory for images is known to be generally faster and more robust than search and memory for words" with a reference to [Data Mountain (UIST 1998)](http://research.microsoft.com/apps/pubs/default.aspx?id=64329). Despite no direct comparison to textual memory, I believe that this research can supplement that visual memory is *stickier* i.e. changing a distinctive and registered avatar would throw users off more than changing a username. |
58,525 | **Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?**
I am interested in understanding this from a usability perspective.
Let's say on a site such as this network, a user has both a username as well as a photo/avatar.
* On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar**
* On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username**
Which of these is more recognisable/memorable?1
i.e. If **DVK** were to change his picture2, would this throw users off (not quickly recognising who the post is from), or would there be more of an issue if **Jon Skeet** changed his username?3
---
**1.** Let's ignore rep points, as they are not relevant to my question.
**2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example let's ignore that.
**3. Please reference research where possible.** | 2014/06/06 | [
"https://ux.stackexchange.com/questions/58525",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/4430/"
] | It depends on the person.
A bit of an extreme example: a dyslexic person might struggle to tell apart John Skeet and Jonno Teeks, whereas a color-blind person might not be able to tell two people apart who have combinations of certain colors in their avatars.
In general though, avatars tend to offer a wider variety of options. You can use letters, words, colors, shapes, etc. whereas usernames can only do a certain amount of characters, mostly in a single color.
Then again, usernames are often unique, and people won't be able to change them.
Combining those characteristics: avatars serve as a great "first glance" recognition but aren't set in stone, whereas usernames are good for specifics and certainty.
So in general, **avatars are more recognizable, but less authoritative**. | It depends not only on the person, but also on the service.
If you need to type the username often, or see it in your own messages, then the username is more memorable (e.g. on Twitter).
Making the avatar very small also hinders its recognition.
Some services allow non-Latin letters in usernames, which can make them look strange.
Speaking of strange: it also depends on the chosen username and avatar.
A long username is hard to memorize; a photo of a user's face in a 32x32 avatar is hard to remember or recognize.
In conclusion, it depends very much on the service where both of them are going to be used. |
58,525 | **Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?**
I am interested in understanding this from a usability perspective.
Let's say on a site such as this network, a user has both a username as well as a photo/avatar.
* On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar**
* On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username**
Which of these is more recognisable/memorable?1
i.e. If **DVK** were to change his picture2, would this throw users off (not quickly recognising who the post is from), or would there be more of an issue if **Jon Skeet** changed his username?3
---
**1.** Lets ignore rep points, as it is not relevant to my question.
**2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example lets ignore that.
**3. Please reference research where possible.** | 2014/06/06 | [
"https://ux.stackexchange.com/questions/58525",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/4430/"
] | It depends on the person.
A bit of an extreme example, but a dyslexic for example might struggle telling apart John Skeet and Jonno Teeks, whereas a color-blind person might not be able to tell two people apart that have combinations of certain colors in their avatar.
In general though, avatars tend to offer a wider variety of options. You can use letters, words, colors, shapes, etc. whereas usernames can only do a certain amount of characters, mostly in a single color.
Then again, usernames are often unique, and people won't be able to change them.
Combining those characteristics: avatars serve as a great "first glance" recognition but aren't set in stone, whereas usernames are good for specifics and certainty.
So in general, **avatars are more recognizable, but less authoritative**. | If it's only about recognizable or memorable, then it's avatar.
The [Wikipedia page on Avatar](http://en.wikipedia.org/wiki/Avatar_%28computing%29) states this (too bad no research or article backs it up):
>
> ...the avatar is placed in order for other users to easily identify who has written the post without having to read their username.
>
>
>
That implies an avatar is indeed easier to recognize, because you can catch a glimpse of an image without having to focus on it (or is it just me?), whereas a username has to be read.
And that statement more or less matches my experience. In a forum with big avatars, the avatar is the first thing I recognize. I opened many threads and saw many posts. After some time, I realized, "eyyy that ava again," and that's where I 'realize' the username, and if I'm lucky, remember it.
I also found that if user X uses, say, a Son Goku avatar for a long time, or changes the avatar but still to a Son Goku image that clearly displays the face, then before realizing it, every time I see a Son Goku avatar in the forum I will immediately associate it with that user. |
58,525 | **Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?**
I am interested in understanding this from a usability perspective.
Lets say on a site such as this network, a user has both a username as well as a photo/avatar.
* On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar**
* On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username**
Which of these is more recognisable/memorable?1
i.e. If **DVK** was to change his picture2, would this throw users off (not quickly recognising who the post is from) or would there be more of an issue if **Jon Skeet** changed his username?3
---
**1.** Lets ignore rep points, as it is not relevant to my question.
**2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example lets ignore that.
**3. Please reference research where possible.** | 2014/06/06 | [
"https://ux.stackexchange.com/questions/58525",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/4430/"
] | Some people have a great memory for words, other people a great memory for faces. Some have both or neither.
Some avatars can be completely generic and difficult to remember, such as Gravatar's autogenerated avatars.

Others can be very unique and memorable. Your DVK example is a good one.
Some usernames can be completely generic, such as this site's "user3216857". Others can be very unique and memorable. This is also very individual, since topics or references that impress me might not impress someone else (e.g. the username Gandalf wouldn't be especially memorable to someone unfamiliar with LoTR, but it's safe to assume that more SO newcomers would remember the name Gandalf than Jon Skeet - which is only memorable because he is Jon Skeet).
People process images faster than written words, even in their native language. Also, images contain more information and they are much more diverse. If you squint a little, all words will look pretty much the same, while you can still tell apart your average avatars. So they're usually easier to identify. This is separate from memorability. | If it's only about recognizable or memorable, then it's avatar.
The [Wikipedia page on Avatar](http://en.wikipedia.org/wiki/Avatar_%28computing%29) states this (too bad no research or article backs it up):
>
> ...the avatar is placed in order for other users to easily identify who has written the post without having to read their username.
>
>
>
That implies an avatar is indeed easier to recognize, because you can catch a glimpse of an image without having to focus on it (or is it just me?), whereas a username has to be read.
And that statement more or less matches my experience. In a forum with big avatars, the avatar is the first thing I recognize. I opened many threads and saw many posts. After some time, I realized, "eyyy that ava again," and that's where I 'realize' the username, and if I'm lucky, remember it.
I also found that if user X uses, say, a Son Goku avatar for a long time, or changes the avatar but still to a Son Goku image that clearly displays the face, then before realizing it, every time I see a Son Goku avatar in the forum I will immediately associate it with that user. |
58,525 | **Is there any evidence which shows whether users are more able to recognise another user's Photo over their Username or vice versa?**
I am interested in understanding this from a usability perspective.
Lets say on a site such as this network, a user has both a username as well as a photo/avatar.
* On *Sci-Fi.se* [DVK](https://scifi.stackexchange.com/users/976/dvk) has a very recognisable **avatar**
* On *StackOverflow.se* [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet) has a very memorable **username**
Which of these is more recognisable/memorable?1
i.e. If **DVK** was to change his picture2, would this throw users off (not quickly recognising who the post is from) or would there be more of an issue if **Jon Skeet** changed his username?3
---
**1.** Lets ignore rep points, as it is not relevant to my question.
**2.** The caveat here is that 'DVK' is also a memorable username, but for the sake of this example lets ignore that.
**3. Please reference research where possible.** | 2014/06/06 | [
"https://ux.stackexchange.com/questions/58525",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/4430/"
] | It depends not only on the person, but also on the service.
If you need to type the username often (or see it in your own messages), then the username is more memorable (e.g. Twitter).
Making the avatar very small also hinders recognition of the avatar.
Some services allow non-Latin letters in usernames, which can make them look strange.
Speaking of strange: it also depends on the chosen username and avatar.
A long username is hard to memorize, and a photo of a user's face in a 32x32 avatar is hard to remember or recognize.
In conclusion, it depends very much on the service where both of them are going to be used. | Certain avatar images are very memorable and distinct. Some are almost indistinguishable. Likewise with usernames. Someone who was asked to remember the username "Jon Skeet" and was asked a day later to identify it from a list of the ten most similar usernames might have a good chance at identifying it, while someone who was shown a generic gravatar and asked to identify it from a list including nine randomly-selected ones would have a relatively poor chance even five minutes later. On the other hand, someone asked to remember a username written in an unfamiliar script [e.g. a hypothetical user "絕對沒有"] would have a harder time than someone asked to remember the gravatar of Jon Skeet (compared with the ten most similar gravatars, assuming nobody copied Mr. Skeet's picture). The differences in memorability between different usernames and avatars would seem to outweigh any general advantage usernames would have over avatars or vice versa. Incidentally, without looking back at the previous hypothetical username, can one remember whether it was "絕對伏特加", "絕對沒有", or "絕對值"? [Chinese characters from Google Translate] |
60,558 | I had my first session as Dungeon Master (DM) yesterday playing pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the sessions went well, a good crash course for everyone and everyone seemed to enjoy themselves.
My question is a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, finding traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: What do I do with all the "useless" rolls?
Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest".
Is there a better way of approaching this? Do I make something up?
I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea. | 2015/05/04 | [
"https://rpg.stackexchange.com/questions/60558",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/22662/"
] | Have the players roll a bunch of perception rolls at the beginning; list them all and cross them off as you go. They know they rolled them, but have no idea if they see nothing because of a bad roll or if there is truly nothing to see. | Perception rolls, like Stealth rolls, should be rolled by the GM out of sight of the players. The players should not know if they rolled high or low, just what they find. |
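The pre-rolled-list technique from the answer above can be sketched as a small helper script. This is a hypothetical illustration (the function names and the d20 + modifier convention are assumptions, not from the answer): generate a hidden batch of d20 rolls per player at session start, then consume them in order, so players can't infer anything from *when* a check is called for.

```python
import random

def preroll_perception(players, n=10, seed=None):
    """Generate n hidden d20 perception rolls per player, up front."""
    rng = random.Random(seed)  # seedable for reproducibility
    return {name: [rng.randint(1, 20) for _ in range(n)] for name in players}

def next_roll(prerolls, name, modifier=0):
    """Cross off the next pre-rolled d20 for a player, adding their skill modifier."""
    return prerolls[name].pop(0) + modifier

# Usage sketch: roll 3 checks each for two players, then spend one of Anna's.
rolls = preroll_perception(["Anna", "Bo"], n=3, seed=42)
total = next_roll(rolls, "Anna", modifier=5)  # a value between 6 and 25
```

Popping from the front preserves the order the rolls were made in, matching the "cross them off as you go" idea.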
60,558 | I had my first session as Dungeon Master (DM) yesterday playing pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the sessions went well, a good crash course for everyone and everyone seemed to enjoy themselves.
My question is a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, finding traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: What do I do with all the "useless" rolls?
Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest".
Is there a better way of approaching this? Do I make something up?
I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea. | 2015/05/04 | [
"https://rpg.stackexchange.com/questions/60558",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/22662/"
] | When I GM, there are generally four ways a perception check plays out.
1) There is something to be discovered. If I know that there is a trap or an ambush in the room, then the players might spot that thing. Depending on the amount of success on the perception roll, the players may get different levels of information. Maybe they realise that there is a pit trap in the middle of the corridor. Or maybe they can just see that the floor looks weird, like it doesn't fit in with the flagstones around that spot.
2) If you have nothing interesting prepared, there might be something to be discovered. If I haven't prepared anything in the room, I'll quickly consider if there still might be something the players can discover. Are there monsters in other rooms nearby that might be heard? Tracks or other signs of activity? Murals or furniture that have hints about the history or nature of the place? Like with 1, the level of success determines the amount of detail. Could be vague, like a sound from the next room over that might just be the ruins crumbling, or could be something alive moving around. Or explicit, like a mural showing the ritual used to open the magically sealed door elsewhere. Or inconsequential, such as destroyed machinery that reveals this to have been a torture room.
3/4) The perception check might fail, or there might be nothing that the players can discover. In both cases my answer is something along the lines of "Well.. there doesn't ***seem*** to be anything in the room." My players quickly learn that my suggestive tone means nothing in this case. I generally find it easier to always make it sound like the group missed something interesting, instead of trying to always make it sound like there was nothing interesting. The important thing is that your reaction is the same in both cases, so the players get no clue from you.
The way I play it, perception checks are used for finding interesting stuff. I don't go into minute detail about the architecture or furniture or how the air smells, unless that detail is either useful or interesting for what it reveals. My players won't care about those minute details, and I see no reason for punishing them for looking around. I want them to look around. So they can find all the interesting stuff. And when there is no interesting stuff to find, I get it over with quickly with the simple words "Well.. there doesn't ***seem*** to be anything in the room." So we can quickly move onwards with the action. | I would say, allow a character an opportunity to roll a perception check...
And treat it as a "Gut feeling" type of thing about a room. If, they want to go in more depth into a room, treat it as a Search attempt (X amount of time at a DC of Y) and do the whole DM trick of secretly rolling for random encounters or dual perception checks (do your characters notice the guards patrolling and do the guards notice the characters searching where they shouldn't be)
However, I would distinguish between, "You don't see anything obvious..." and "There is nothing here..." -- by that, I mean, EXPLICITLY, make it obvious that additional checks are not going to find anything so don't waste time rolling.
Maybe in the first case, they failed the roll or there wasn't anything to find...but, I would definitely try to make it obvious when additional checking is not going to be productive.
Though, I agree, you should wean the players off Suggesting checks and more into describing actions/intentions... |
60,558 | I had my first session as Dungeon Master (DM) yesterday playing pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the sessions went well, a good crash course for everyone and everyone seemed to enjoy themselves.
My question is a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, finding traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: What do I do with all the "useless" rolls?
Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest".
Is there a better way of approaching this? Do I make something up?
I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea. | 2015/05/04 | [
"https://rpg.stackexchange.com/questions/60558",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/22662/"
] | We assume that players are by default going around being perceptive at an ordinary level. If there's an interesting detail that might escape the players' notice, the DM will call for the relevant characters (maybe everyone, maybe the guy in front, maybe the characters with darkvision, etc...) to make a perception check. If everyone who was given the opportunity fails it, there's no opportunity for anyone else to metagame and make the check. The downside is, if everyone rolls a low number and the DM says "ok, you don't notice anything interesting" then the players know that they have missed something. Sometimes there are immediate consequences (the orcs jump out from behind the bushes and surprise you, you fall into the pit trap you failed to notice, etc...) but other times not, and you're left wondering if you missed something important.
A partial antidote to that is to sometimes tell them they found something, but it's not the thing that you were offering the perception check for. If they were close, maybe they get a hint as to what they failed to perceive. Otherwise they "perceive" something totally different, and unimportant. You can also sometimes offer a perception check to find some small detail that isn't plot relevant, like "You notice some initials carved in that tree over there." This has the advantage of not "giving away" that something is significant, just because the players rolled a high perception check and found something. (If you're going to play this way, you should warn your players that not everything they perceive is going to be important to the plot, or they may waste too much time on red herrings.)
To answer the question of what to do when players want to examine something, and roll high, but there's nothing to find, you can occasionally also insert irrelevant details. You notice that the second drawer of the dresser sticks when you try to open it. You notice scuff marks on the floor near the window. And so on. So yes, I would sometimes "make stuff up". I think it's fine to also say, "nothing appears exceptional or unusual in this room" a fair amount of the time. | Perception rolls, like Stealth rolls, should be rolled by the GM out of sight of the players. The players should not know if they rolled high or low, just what they find. |
60,558 | I had my first session as Dungeon Master (DM) yesterday playing pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the sessions went well, a good crash course for everyone and everyone seemed to enjoy themselves.
My question is a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, finding traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: What do I do with all the "useless" rolls?
Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest".
Is there a better way of approaching this? Do I make something up?
I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea. | 2015/05/04 | [
"https://rpg.stackexchange.com/questions/60558",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/22662/"
] | So, what's the downside of saying, "nope, you find nothing"?
You're committed to letting players make their own rolls (which is perfectly fine, though not everyone plays that way), so they already know that they hit a DC of 23 or less (or whatever). There is no need to punish yourself or them by pretending otherwise.
It's not even particularly bad in terms of meta-gaming, if you are willing to decide that characters have some sense of how effective their perception rolls have been. If you don't like that, then by letting the players roll they are taking responsibility for not using knowledge of the die roll to meta-game. (This is often why people advocate having the GM do perception rolls in secret).
I've had some success as a GM by using the "PCs search an empty room" situation to advantage to add some realism and depth of involvement, by having things that aren't really important but which fit the location, or that I think are cool. So things like utensils that fell behind something, a damaged straw doll, a broken weapon, dice, pottery. (Yes, you do run the risk of a player spending an hour on "the mystery of the rusty spoon").
That also helps a bit with the "the GM mentioned it so it must be part of the plot" meta-gaming.
Ultimately, if you have something that contributes to your game when there is a perception check, use it. If you don't, move on fast to get to things that do. | Have the players roll a bunch of perception rolls at the beginning; list them all and cross them off as you go. They know they rolled them, but have no idea if they see nothing because of a bad roll or if there is truly nothing to see. |
60,558 | I had my first session as Dungeon Master (DM) yesterday playing pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the sessions went well, a good crash course for everyone and everyone seemed to enjoy themselves.
My question is a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, finding traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: What do I do with all the "useless" rolls?
Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest".
Is there a better way of approaching this? Do I make something up?
I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea. | 2015/05/04 | [
"https://rpg.stackexchange.com/questions/60558",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/22662/"
] | **There is ALWAYS something to be found!**\* *What* the players find, of course, may not be at all relevant or useful to the story or to the characters' progress.
But in fact, a high roll when searching or observing an area is a great chance to use some creativity to both enhance the overall experience, **and also to make the players think more carefully about their use of pointless random checks.**
A five-minute discourse on the detailed state of the area should do the trick! Consider this possible response:
>
> There's clearly nothing of significant interest here to be seen -- the room is completely empty. But, feeling keenly observant and abnormally curious, you inspect the area with intense scrutiny anyway. You notice the roughness of each stony brick, and the slight decay of the grout between them.
>
>
> Then you observe that your shoes make a satisfying *clop, clop, clop* sound as you walk, and you find yourself considering the subtle unevenness of the floor -- the sandy but firm texture of the sandstone, the slim gap between each tile. You doubt you could force a sheet of good paper between them. Indeed, despite the slight lip where some tiles have sunken a millimeter or two on one edge or another, the mason did their job well -- there's no way you could ever work a tile out of place without inflicting tremendous damage. What effort it must have taken him or her to cut the tiles from some hillside afar off, to apply the mortar and then painstakingly lay the tiles out perfectly adjacent to one another and in their proper order. You ponder the craftsman's trade a moment longer, and then turn your attention to the ceiling... *and so on.*
>
>
>
If you know your world, this isn't as hard as it may sound. Just be creative with it!
---
\* *Unless you have a PC who happens to be floating isolated and disembodied in an absolute void. But then I'd think it's unlikely to matter.* | Have the players roll a bunch of perception rolls at the beginning; list them all and cross them off as you go. They know they rolled them, but have no idea if they see nothing because of a bad roll or if there is truly nothing to see. |
60,558 | I had my first session as Dungeon Master (DM) yesterday playing pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the sessions went well, a good crash course for everyone and everyone seemed to enjoy themselves.
My question is a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, finding traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: What do I do with all the "useless" rolls?
Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest".
Is there a better way of approaching this? Do I make something up?
I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea. | 2015/05/04 | [
"https://rpg.stackexchange.com/questions/60558",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/22662/"
] | **There is ALWAYS something to be found!**\* *What* the players find, of course, may not be at all relevant or useful to the story or to the characters' progress.
But in fact, a high roll when searching or observing an area is a great chance to use some creativity to both enhance the overall experience, **and also to make the players think more carefully about their use of pointless random checks.**
A five-minute discourse on the detailed state of the area should do the trick! Consider this possible response:
>
> There's clearly nothing of significant interest here to be seen -- the room is completely empty. But, feeling keenly observant and abnormally curious, you inspect the area with intense scrutiny anyway. You notice the roughness of each stony brick, and the slight decay of the grout between them.
>
>
> Then you observe that your shoes make a satisfying *clop, clop, clop* sound as you walk, and you find yourself considering the subtle unevenness of the floor -- the sandy but firm texture of the sandstone, the slim gap between each tile. You doubt you could force a sheet of good paper between them. Indeed, despite the slight lip where some tiles have sunken a millimeter or two on one edge or another, the mason did their job well -- there's no way you could ever work a tile out of place without inflicting tremendous damage. What effort it must have taken him or her to cut the tiles from some hillside afar off, to apply the mortar and then painstakingly lay the tiles out perfectly adjacent to one another and in their proper order. You ponder the craftsman's trade a moment longer, and then turn your attention to the ceiling... *and so on.*
>
>
>
If you know your world, this isn't as hard as it may sound. Just be creative with it!
---
\* *Unless you have a PC who happens to be floating isolated and disembodied in an absolute void. But then I'd think it's unlikely to matter.* | We assume that players are by default going around being perceptive at an ordinary level. If there's an interesting detail that might escape the players' notice, the DM will call for the relevant characters (maybe everyone, maybe the guy in front, maybe the characters with darkvision, etc...) to make a perception check. If everyone who was given the opportunity fails it, there's no opportunity for anyone else to metagame and make the check. The downside is, if everyone rolls a low number and the DM says "ok, you don't notice anything interesting" then the players know that they have missed something. Sometimes there are immediate consequences (the orcs jump out from behind the bushes and surprise you, you fall into the pit trap you failed to notice, etc...) but other times not, and you're left wondering if you missed something important.
A partial antidote to that is to sometimes tell them they found something, but it's not the thing that you were offering the perception check for. If they were close, maybe they get a hint as to what they failed to perceive. Otherwise they "perceive" something totally different, and unimportant. You can also sometimes offer a perception check to find some small detail that isn't plot relevant, like "You notice some initials carved in that tree over there." This has the advantage of not "giving away" that something is significant, just because the players rolled a high perception check and found something. (If you're going to play this way, you should warn your players that not everything they perceive is going to be important to the plot, or they may waste too much time on red herrings.)
To answer the question of what to do when players want to examine something, and roll high, but there's nothing to find, you can occasionally also insert irrelevant details. You notice that the second drawer of the dresser sticks when you try to open it. You notice scuff marks on the floor near the window. And so on. So yes, I would sometimes "make stuff up". I think it's fine to also say, "nothing appears exceptional or unusual in this room" a fair amount of the time. |
60,558 | I had my first session as Dungeon Master (DM) yesterday playing pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the sessions went well, a good crash course for everyone and everyone seemed to enjoy themselves.
My question is a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, finding traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: What do I do with all the "useless" rolls?
Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest".
Is there a better way of approaching this? Do I make something up?
I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea. | 2015/05/04 | [
"https://rpg.stackexchange.com/questions/60558",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/22662/"
] | Have the players roll a bunch of perception rolls at the beginning; list them all and cross them off as you go. They know they rolled them, but have no idea if they see nothing because of a bad roll or if there is truly nothing to see. | If they fail, make them perceive something wrong. A misperception can happen anytime, especially if they're really looking for something. Characters can misinterpret, misunderstand, and become very obsessed with something, so they can be completely wrong about it. This will get them in trouble and give you plenty of ideas.
Example: If someone's in a room in which there's nothing to see but fails a perception test, tell them they have a gut feeling this room is important. If they succeed, tell them it's not important.
And during this time they could be discovered, attacked, or lose a good opportunity. This would give you time to move NPCs, create situations, and everything else. It depends on your story.
60,558 | I had my first session as Dungeon Master (DM) yesterday playing Pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the session went well, a good crash course for everyone, and everyone seemed to enjoy themselves.
My question is a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, finding traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: What do I do with all the "useless" rolls?
Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest".
Is there a better way of approaching this? Do I make something up?
I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea. | 2015/05/04 | [
"https://rpg.stackexchange.com/questions/60558",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/22662/"
] | We assume that players are by default going around being perceptive at an ordinary level. If there's an interesting detail that might escape the players' notice, the DM will call for the relevant characters (maybe everyone, maybe the guy in front, maybe the characters with darkvision, etc...) to make a perception check. If everyone who was given the opportunity fails it, there's no opportunity for anyone else to metagame and make the check. The downside is, if everyone rolls a low number and the DM says "ok, you don't notice anything interesting" then the players know that they have missed something. Sometimes there are immediate consequences (the orcs jump out from behind the bushes and surprise you, you fall into the pit trap you failed to notice, etc...) but other times not, and you're left wondering if you missed something important.
A partial antidote to that is to sometimes tell them they found something, but it's not the thing that you were offering the perception check for. If they were close, maybe they get a hint as to what they failed to perceive. Otherwise they "perceive" something totally different, and unimportant. You can also sometimes offer a perception check to find some small detail that isn't plot relevant, like "You notice some initials carved in that tree over there." This has the advantage of not "giving away" that something is significant, just because the players rolled a high perception check and found something. (If you're going to play this way, you should warn your players that not everything they perceive is going to be important to the plot, or they may waste too much time on red herrings.)
To answer the question of what to do when players want to examine something, and roll high, but there's nothing to find, you can occasionally also insert irrelevant details. You notice that the second drawer of the dresser sticks when you try to open it. You notice scuff marks on the floor near the window. And so on. So yes, I would sometimes "make stuff up". I think it's fine to also say, "nothing appears exceptional or unusual in this room" a fair amount of the time. | I would say, allow a character an opportunity to roll a perception check...
And treat it as a "Gut feeling" type of thing about a room. If, they want to go in more depth into a room, treat it as a Search attempt (X amount of time at a DC of Y) and do the whole DM trick of secretly rolling for random encounters or dual perception checks (do your characters notice the guards patrolling and do the guards notice the characters searching where they shouldn't be)
However, I would distinguish between "You don't see anything obvious..." and "There is nothing here..." -- by that, I mean, EXPLICITLY, make it obvious that additional checks are not going to find anything, so don't waste time rolling.
Maybe in the first case, they failed the roll or there wasn't anything to find...but, I would definitely try to make it obvious when additional checking is not going to be productive.
Though, I agree, you should wean the players off suggesting checks and more into describing actions/intentions...
60,558 | I had my first session as Dungeon Master (DM) yesterday playing Pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the session went well, a good crash course for everyone, and everyone seemed to enjoy themselves.
My question is a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, finding traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: What do I do with all the "useless" rolls?
Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest".
Is there a better way of approaching this? Do I make something up?
I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea. | 2015/05/04 | [
"https://rpg.stackexchange.com/questions/60558",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/22662/"
] | Have the players roll a bunch of perception rolls at the beginning, list them all and cross them off as you go. They know they rolled them, but have no idea if they see nothing because of a bad roll, or if there is truly nothing to see. | The situation should be handled exactly as if there were something interesting to find but they did not meet the DC for finding it. That is effectively what happened; it's just that in this case the DC is infinite, as there is nothing to be found.
Explicitly telling the players that there is nothing to be found should be avoided - if you get in the habit of telling them that there is nothing to be found, they will know when they have merely failed the roll. |
60,558 | I had my first session as Dungeon Master (DM) yesterday playing Pathfinder ("First Steps In Lore" from the "Pathfinder Society" series) with a group of first-time tabletop RPG players (myself included). Overall the session went well, a good crash course for everyone, and everyone seemed to enjoy themselves.
My question is a problem I personally had while DMing. I taught the players that they can roll for perception to check things out, finding traps and secrets, etc. The players got the idea and started saying "roll for perception to check out the room", etc., and they found a couple of traps that way, and that's fine. The problem I had was: What do I do with all the "useless" rolls?
Most of the time they were rolling when there was nothing to find. Even on a natural 20 I kind of just described the room in more detail while saying something like "but there is nothing of interest".
Is there a better way of approaching this? Do I make something up?
I'd like to add: I would prefer my players roll their perception checks when they are actively looking. I am not a fan of the hidden-DM-rolls-for-all-perception idea. | 2015/05/04 | [
"https://rpg.stackexchange.com/questions/60558",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/22662/"
] | We assume that players are by default going around being perceptive at an ordinary level. If there's an interesting detail that might escape the players' notice, the DM will call for the relevant characters (maybe everyone, maybe the guy in front, maybe the characters with darkvision, etc...) to make a perception check. If everyone who was given the opportunity fails it, there's no opportunity for anyone else to metagame and make the check. The downside is, if everyone rolls a low number and the DM says "ok, you don't notice anything interesting" then the players know that they have missed something. Sometimes there are immediate consequences (the orcs jump out from behind the bushes and surprise you, you fall into the pit trap you failed to notice, etc...) but other times not, and you're left wondering if you missed something important.
A partial antidote to that is to sometimes tell them they found something, but it's not the thing that you were offering the perception check for. If they were close, maybe they get a hint as to what they failed to perceive. Otherwise they "perceive" something totally different, and unimportant. You can also sometimes offer a perception check to find some small detail that isn't plot relevant, like "You notice some initials carved in that tree over there." This has the advantage of not "giving away" that something is significant, just because the players rolled a high perception check and found something. (If you're going to play this way, you should warn your players that not everything they perceive is going to be important to the plot, or they may waste too much time on red herrings.)
To answer the question of what to do when players want to examine something, and roll high, but there's nothing to find, you can occasionally also insert irrelevant details. You notice that the second drawer of the dresser sticks when you try to open it. You notice scuff marks on the floor near the window. And so on. So yes, I would sometimes "make stuff up". I think it's fine to also say, "nothing appears exceptional or unusual in this room" a fair amount of the time. | Have the players roll a bunch of perception rolls at the beginning, list them all and cross them off as you go. They know they rolled them, but have no idea if they see nothing because of a bad roll, or if there is truly nothing to see.
42,105 | >
> Immanuel Kant has been born in Europe.
>
>
>
I heard you can't use the Present Perfect for dead people, so can I use it to indirectly state how influential Kant was (as if he were immortal, or that he still lives today through his ideas)? Are there instances where authors used the Present Perfect in this manner? What do you think about it? | 2019/02/10 | [
"https://writers.stackexchange.com/questions/42105",
"https://writers.stackexchange.com",
"https://writers.stackexchange.com/users/36239/"
] | >
> can I use it to indirectly state how influential Kant was (as if he
> were Immortal, or that he still lives today through his ideas)?
>
>
>
This is how I interpreted it. So, **yes. Maybe.**
However, it was awkward and I had to pause to decide what it meant. | That grammatically incorrect phrase connotes something other than you intend. Unless you want to once and future king him, just stick with was. People are born once and then live their lives, your construction seems to trap him at birth.
Why choose Europe? His birthplace is well documented and giving him an entire continent as his birthplace just seems a trifle bizarre.
When I read your sample, I think he has been born, shall be born and shall always be born in Germany.
If you wish to imply that a philosophical giant has far reaching influence beyond the span of his years, that does not do it.
Keep trying. For this reader, it seems a bit contorted, which will damage immersion.
We disobey the rules of grammar with respect and try to do it rarely. They do exist for a reason, as grammatically correct text is easier to comprehend. |
42,105 | >
> Immanuel Kant has been born in Europe.
>
>
>
I heard you can't use the Present Perfect for dead people, so can I use it to indirectly state how influential Kant was (as if he were immortal, or that he still lives today through his ideas)? Are there instances where authors used the Present Perfect in this manner? What do you think about it? | 2019/02/10 | [
"https://writers.stackexchange.com/questions/42105",
"https://writers.stackexchange.com",
"https://writers.stackexchange.com/users/36239/"
] | >
> can I use it to indirectly state how influential Kant was (as if he
> were Immortal, or that he still lives today through his ideas)?
>
>
>
This is how I interpreted it. So, **yes. Maybe.**
However, it was awkward and I had to pause to decide what it meant. | >
> (as if he were Immortal, or that he still lives today through his ideas)?
>
>
>
The approach of subverting grammar to make your point will not go smoothly with **everyone** who will read your article. How about proving through your writing that Kant's ideas are immortal or that he 'still lives on' through them? You don't need to subvert grammar for that. In my opinion, this writing approach will open a window for creativity. You can use metaphors and all sorts of tools a writer has to explain and explore her idea.
126,185 | I came to my current place of employment as a contracted IT support technician of the most generic variety about a year ago. I had only an associate's degree to my name, and most of what I knew, both as a support technician and as a tech professional in general, was self-taught. In a short time, I showed myself to be useful enough to warrant hiring, and furthermore, worthy of more duties. I soon found myself working as a DevOps / Automation Engineer. Lots of scripting.
Not too long after this transition I was tasked to build a web application for the enterprise that, in the long run, would serve as a central hub for day-to-day IT operations, with a focus on automation. Even without knowing the complete ins and outs of designing, prototyping, building, and maintaining a project of such scale, I knew this would be a lot for one person, but did not want to throw up the white flag just yet.
To recount all the time I took teaching myself basic web building principles, C#, JavaScript, various web application frameworks, etc. would take a movie montage, but suffice to say, I somehow managed to get the project off the ground and it is somewhat functional and not completely hideous. I am, however, feeling pretty burned out. I don't sleep much and am constantly worried I will fall flat on my face.
I am a part of a dev team of just three people, one of whom has their own monolithic project, though not quite as large as my own, and the other of whom doesn't know enough about the technologies we use to adequately help us. I am approaching the UAT phase and have a growing backlog. Part of me wants to pack a bag and disappear to a place without electricity. More rationally, though, I am thinking about talking to my boss about the workload and my struggle to carry it.
What would be the best way to approach this situation to relieve the stress/pressure levels?
Edit: I realized I forgot to mention a key part of our development cycle. We have 3 week iterations (sprints) with catchups / demos at the end. We did not always have this in place. At the start, I was developing in a bubble with little to no feedback. | 2019/01/10 | [
"https://workplace.stackexchange.com/questions/126185",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/97769/"
] | I say listen to your body. I have been burned out twice. Don't let it happen to you. Did you know that burning out can have [lasting damaging effects on the brain](https://www.psychologicalscience.org/observer/burnout-and-the-brain)? When work affects your sleep, you need to take a step back and address the things that make you stressed out.
There are many [causes of stress](https://www.webmd.com/balance/guide/causes-of-stress). The two main ones that I have experienced are *being in a situation you can't control*, and *lack of sleep*. You have to take control (by removing features, moving deadlines, etc) and you have to wind down so you can sleep.
Talk to your manager. Tell them you will not be able to meet the deadline. Say that work has started to affect your sleep. If you are working overtime, cease that immediately. Missing sleep means you underperform anyway since you need your brain for work. Why miss sleep and work overtime if you produce more when you are well rested and within the work hours?
When you come home in the evening, do something you love. Do not think about work. You need multiple hours of free time that is yours to do what you want. Eat healthy, go for a walk, meet people, do what you want. Then go to bed (at the same time every night), leaving the phone out of the bedroom. Possibly talk to a doctor if you need temporary medication.
Missing the deadline is ultimately not your main concern; your health is. Realize that the project is just money. And a good manager should be able to help you adjust the project to minimize the company's losses. Your company will learn an expensive lesson, but so will you. This means you will actually become MORE valuable to them, not less, because you got some experience and have already made some mistakes. Why would they hire someone new to replace you just so that person makes the same mistakes again, the mistakes you already learned how to avoid?
Your reputation and success will depend entirely on you being communicative here. So take charge of your situation, don't try to do the impossible, and begin replanning and reprioritizing the project. Help set the right expectations. Ask for help. Show what you have learned. Try to prevent this from happening again, and try to see the signs early so you can raise the flags in time. | ***Don't just sit there, watching the wall coming closer, take control and steer away before you smash into it.***
>
> Was I stupid to take such a job in the first place?
>
>
>
Ill-advised, to put it very politely.
You need to know your abilities and shortcomings very well!
Only take on assignments that you're confident to finish, even if you need to acquire additional skills in the process.
>
> Should I quit now before I am buried by it all?
>
>
>
No. Your reputation would take a severe hit, and bridges would not only be burnt but annihilated.
Avoid leaving mid project.
**Assess the state of your project and the list of what needs to be done until the deadline hits.**
If you conclude you won't be able to finish the project on time, inform your manager about this with a short list of reasons and, if possible, propose a plan of action.
For instance: additional team members, moving the deadline, lower complexity and prioritized features where some might be "good to have" for launch but not necessary to implement in v1.0, or even the inclusion of third-party libraries to take off some of the burden.
126,185 | I came to my current place of employment as a contracted IT support technician of the most generic variety about a year ago. I had only an associate's degree to my name, and most of what I knew, both as a support technician and as a tech professional in general, was self-taught. In a short time, I showed myself to be useful enough to warrant hiring, and furthermore, worthy of more duties. I soon found myself working as a DevOps / Automation Engineer. Lots of scripting.
Not too long after this transition I was tasked to build a web application for the enterprise that, in the long run, would serve as a central hub for day-to-day IT operations, with a focus on automation. Even without knowing the complete ins and outs of designing, prototyping, building, and maintaining a project of such scale, I knew this would be a lot for one person, but did not want to throw up the white flag just yet.
To recount all the time I took teaching myself basic web building principles, C#, JavaScript, various web application frameworks, etc. would take a movie montage, but suffice to say, I somehow managed to get the project off the ground and it is somewhat functional and not completely hideous. I am, however, feeling pretty burned out. I don't sleep much and am constantly worried I will fall flat on my face.
I am a part of a dev team of just three people, one of whom has their own monolithic project, though not quite as large as my own, and the other of whom doesn't know enough about the technologies we use to adequately help us. I am approaching the UAT phase and have a growing backlog. Part of me wants to pack a bag and disappear to a place without electricity. More rationally, though, I am thinking about talking to my boss about the workload and my struggle to carry it.
What would be the best way to approach this situation to relieve the stress/pressure levels?
Edit: I realized I forgot to mention a key part of our development cycle. We have 3 week iterations (sprints) with catchups / demos at the end. We did not always have this in place. At the start, I was developing in a bubble with little to no feedback. | 2019/01/10 | [
"https://workplace.stackexchange.com/questions/126185",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/97769/"
] | ***Don't just sit there, watching the wall coming closer, take control and steer away before you smash into it.***
>
> Was I stupid to take such a job in the first place?
>
>
>
Ill-advised, to put it very politely.
You need to know your abilities and shortcomings very well!
Only take on assignments that you're confident to finish, even if you need to acquire additional skills in the process.
>
> Should I quit now before I am buried by it all?
>
>
>
No. Your reputation would take a severe hit, and bridges would not only be burnt but annihilated.
Avoid leaving mid project.
**Assess the state of your project and the list of what needs to be done until the deadline hits.**
If you conclude you won't be able to finish the project on time, inform your manager about this with a short list of reasons and, if possible, propose a plan of action.
For instance: additional team members, moving the deadline, lower complexity and prioritized features where some might be "good to have" for launch but not necessary to implement in v1.0, or even the inclusion of third-party libraries to take off some of the burden.
Find your balance and get help (i.e. more devs or contractors for areas you feel weaker at) |
126,185 | I came to my current place of employment as a contracted IT support technician of the most generic variety about a year ago. I had only an associate's degree to my name, and most of what I knew, both as a support technician and as a tech professional in general, was self-taught. In a short time, I showed myself to be useful enough to warrant hiring, and furthermore, worthy of more duties. I soon found myself working as a DevOps / Automation Engineer. Lots of scripting.
Not too long after this transition I was tasked to build a web application for the enterprise that, in the long run, would serve as a central hub for day-to-day IT operations, with a focus on automation. Even without knowing the complete ins and outs of designing, prototyping, building, and maintaining a project of such scale, I knew this would be a lot for one person, but did not want to throw up the white flag just yet.
To recount all the time I took teaching myself basic web building principles, C#, JavaScript, various web application frameworks, etc. would take a movie montage, but suffice to say, I somehow managed to get the project off the ground and it is somewhat functional and not completely hideous. I am, however, feeling pretty burned out. I don't sleep much and am constantly worried I will fall flat on my face.
I am a part of a dev team of just three people, one of whom has their own monolithic project, though not quite as large as my own, and the other of whom doesn't know enough about the technologies we use to adequately help us. I am approaching the UAT phase and have a growing backlog. Part of me wants to pack a bag and disappear to a place without electricity. More rationally, though, I am thinking about talking to my boss about the workload and my struggle to carry it.
What would be the best way to approach this situation to relieve the stress/pressure levels?
Edit: I realized I forgot to mention a key part of our development cycle. We have 3 week iterations (sprints) with catchups / demos at the end. We did not always have this in place. At the start, I was developing in a bubble with little to no feedback. | 2019/01/10 | [
"https://workplace.stackexchange.com/questions/126185",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/97769/"
] | I say listen to your body. I have been burned out twice. Don't let it happen to you. Did you know that burning out can have [lasting damaging effects on the brain](https://www.psychologicalscience.org/observer/burnout-and-the-brain)? When work affects your sleep, you need to take a step back and address the things that make you stressed out.
There are many [causes of stress](https://www.webmd.com/balance/guide/causes-of-stress). The two main ones that I have experienced are *being in a situation you can't control*, and *lack of sleep*. You have to take control (by removing features, moving deadlines, etc) and you have to wind down so you can sleep.
Talk to your manager. Tell them you will not be able to meet the deadline. Say that work has started to affect your sleep. If you are working overtime, cease that immediately. Missing sleep means you underperform anyway since you need your brain for work. Why miss sleep and work overtime if you produce more when you are well rested and within the work hours?
When you come home in the evening, do something you love. Do not think about work. You need multiple hours of free time that is yours to do what you want. Eat healthy, go for a walk, meet people, do what you want. Then go to bed (at the same time every night), leaving the phone out of the bedroom. Possibly talk to a doctor if you need temporary medication.
Missing the deadline is ultimately not your main concern; your health is. Realize that the project is just money. And a good manager should be able to help you adjust the project to minimize the company's losses. Your company will learn an expensive lesson, but so will you. This means you will actually become MORE valuable to them, not less, because you got some experience and have already made some mistakes. Why would they hire someone new to replace you just so that person makes the same mistakes again, the mistakes you already learned how to avoid?
Your reputation and success will depend entirely on you being communicative here. So take charge of your situation, don't try to do the impossible, and begin replanning and reprioritizing the project. Help set the right expectations. Ask for help. Show what you have learned. Try to prevent this from happening again, and try to see the signs early so you can raise the flags in time. | IMHO,
Find your balance and get help (i.e. more devs or contractors for areas you feel weaker at) |
126,185 | I came to my current place of employment as a contracted IT support technician of the most generic variety about a year ago. I had only an associate's degree to my name, and most of what I knew, both as a support technician and as a tech professional in general, was self-taught. In a short time, I showed myself to be useful enough to warrant hiring, and furthermore, worthy of more duties. I soon found myself working as a DevOps / Automation Engineer. Lots of scripting.
Not too long after this transition I was tasked to build a web application for the enterprise that, in the long run, would serve as a central hub for day-to-day IT operations, with a focus on automation. Even without knowing the complete ins and outs of designing, prototyping, building, and maintaining a project of such scale, I knew this would be a lot for one person, but did not want to throw up the white flag just yet.
To recount all the time I took teaching myself basic web building principles, C#, JavaScript, various web application frameworks, etc. would take a movie montage, but suffice to say, I somehow managed to get the project off the ground and it is somewhat functional and not completely hideous. I am, however, feeling pretty burned out. I don't sleep much and am constantly worried I will fall flat on my face.
I am a part of a dev team of just three people, one of whom has their own monolithic project, though not quite as large as my own, and the other of whom doesn't know enough about the technologies we use to adequately help us. I am approaching the UAT phase and have a growing backlog. Part of me wants to pack a bag and disappear to a place without electricity. More rationally, though, I am thinking about talking to my boss about the workload and my struggle to carry it.
What would be the best way to approach this situation to relieve the stress/pressure levels?
Edit: I realized I forgot to mention a key part of our development cycle. We have 3 week iterations (sprints) with catchups / demos at the end. We did not always have this in place. At the start, I was developing in a bubble with little to no feedback. | 2019/01/10 | [
"https://workplace.stackexchange.com/questions/126185",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/97769/"
] | I say listen to your body. I have been burned out twice. Don't let it happen to you. Did you know that burning out can have [lasting damaging effects on the brain](https://www.psychologicalscience.org/observer/burnout-and-the-brain)? When work affects your sleep, you need to take a step back and address the things that make you stressed out.
There are many [causes of stress](https://www.webmd.com/balance/guide/causes-of-stress). The two main ones that I have experienced are *being in a situation you can't control*, and *lack of sleep*. You have to take control (by removing features, moving deadlines, etc) and you have to wind down so you can sleep.
Talk to your manager. Tell them you will not be able to meet the deadline. Say that work has started to affect your sleep. If you are working overtime, cease that immediately. Missing sleep means you underperform anyway since you need your brain for work. Why miss sleep and work overtime if you produce more when you are well rested and within the work hours?
When you come home in the evening, do something you love. Do not think about work. You need multiple hours of free time that is yours to do what you want. Eat healthy, go for a walk, meet people, do what you want. Then go to bed (at the same time every night), leaving the phone out of the bedroom. Possibly talk to a doctor if you need temporary medication.
Missing the deadline is ultimately not your main concern; your health is. Realize that the project is just money. And a good manager should be able to help you adjust the project to minimize the company's losses. Your company will learn an expensive lesson, but so will you. This means you will actually become MORE valuable to them, not less, because you got some experience and have already made some mistakes. Why would they hire someone new to replace you just so that person makes the same mistakes again, the mistakes you already learned how to avoid?
Your reputation and success will depend entirely on you being communicative here. So take charge of your situation, don't try to do the impossible, and begin replanning and reprioritizing the project. Help set the right expectations. Ask for help. Show what you have learned. Try to prevent this from happening again, and try to see the signs early so you can raise the flags in time. | Software projects are always overpromised and underdelivered; that's more or less a fact of life.
Step 1: Mention to your manager you are understaffed relative to the workload. Estimate (realistically) how long it will take for various milestones in the project to be ready, even if it was just you working on them, assuming 8-hour work days, and report that to your manager. Make your manager aware that the farther out the deadline, the more inaccurate it may be; if you say it will take 3 weeks to complete a milestone in 6 months from now, set expectations that it may take 2.5 weeks, or 3.5 weeks, and 3 weeks is just a fuzzy estimate.
Don't be afraid of the response. What you are likely to hear is disappointment. Don't take it personally. Basically you are telling your manager "no", and no manager wants to ever hear "no", but that's what you have to do. Your manager will likely respond in one of a few ways:
1) "Can't you do it faster?": No, sorry, I can't. One person only has so much time in a day, and this is how long it will take. If you want more man-hours, hire more men (not specifically "men", etc, you get the point).
2) "Can you work overtime to do it?": It's at this point you should mention your health issues. Explain that you are having trouble sleeping, you are constantly stressed, etc. Be prepared to hand in your resignation letter on the spot if your manager does not take this response with the gravity it deserves. Those who have read my other comments on Workplace SE know that I am very very much opposed to leaving a current paying job without a backup plan (I have done so before and it was hellish let me tell you), but in this case I will shelve my normal reticence and tell you to just get out of there. In this case you may want to consult legal counsel for a case of [constructive dismissal](https://en.wikipedia.org/wiki/Constructive_dismissal).
3) "Can we negotiate this?": No. This is the absolute minimum amount of time it will take. It is non-negotiable.
Step 2: Stop working overtime. When you leave work for the day, *leave work for the day*. Go home, watch TV, relax, play some video games, exercise, whatever makes you happy. The work will get done on schedule, eventually. Get it into your head that you work 8-hour days, no more, no less.
Step 3: Encourage your company to expand your team. Explain to your company the concept of the [bus factor](https://en.wikipedia.org/wiki/Bus_factor) and why the current situation puts them at great risk. In addition to making your workload much lighter, it will also protect the company from catastrophic failure in the case of, well, you getting hit by a bus. |
126,185 | I came to my current place of employment as a contracted IT support technician of the most generic variety about a year ago. I had only an associate's degree to my name, and most of what I knew, both as a support technician and as a tech professional in general, was self-taught. In short time, I showed myself to be useful enough to warrant hiring and, furthermore, worthy of more duties. I soon found myself working as a DevOps / Automation Engineer. Lots of scripting.
Not too long after this transition, I was tasked to build a web application for the enterprise that, in the long run, would serve as a central hub for day-to-day IT operations, with a focus on automation. Even without knowing the complete ins and outs of designing, prototyping, building, and maintaining a project of such scale, I knew this would be a lot for one person, but did not want to throw up the white flag just yet.
To recount all the time I took teaching myself basic web building principles, C#, JavaScript, various web application frameworks, etc. would take a movie montage, but suffice to say, I somehow managed to get the project off the ground and it is somewhat functional and not completely hideous. I am, however, feeling pretty burned out. I don't sleep much and am constantly worried I will fall flat on my face.
I am a part of a dev team of just three people, one of whom has their own monolithic project, though not quite as large as my own, and the other who doesn't know enough about the technologies we use to adequately help us. I am approaching the UAT phase and have a growing backlog. Part of me wants to pack a bag and disappear to a place without electricity. More rationally, though, I am thinking about talking to my boss about the workload and my struggle to carry it.
What would be the best way to approach this situation to relieve the stress/pressure levels?
Edit: I realized I forgot to mention a key part of our development cycle. We have 3 week iterations (sprints) with catchups / demos at the end. We did not always have this in place. At the start, I was developing in a bubble with little to no feedback. | 2019/01/10 | [
"https://workplace.stackexchange.com/questions/126185",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/97769/"
] | I say listen to your body. I have been burned out twice. Don't let it happen to you. Did you know that burning out can have [lasting damaging effects on the brain](https://www.psychologicalscience.org/observer/burnout-and-the-brain)? When work affects your sleep, you need to take a step back and address the things that make you stressed out.
There are many [causes of stress](https://www.webmd.com/balance/guide/causes-of-stress). The two main ones that I have experienced are *being in a situation you can't control*, and *lack of sleep*. You have to take control (by removing features, moving deadlines, etc) and you have to wind down so you can sleep.
Talk to your manager. Tell them you will not be able to meet the deadline. Say that work has started to affect your sleep. If you are working overtime, cease that immediately. Missing sleep means you underperform anyway since you need your brain for work. Why miss sleep and work overtime if you produce more when you are well rested and within the work hours?
When you come home in the evening, do something you love. Do not think about work. You need multiple hours of free time that is yours to do what you want. Eat healthy, go for a walk, meet people, do what you want. Then go to bed (at the same time every night), leaving the phone out of the bedroom. Possibly talk to a doctor if you need temporary medication.
Missing the deadline is ultimately not your main concern; your health is. Realize that the project is just money. And a good manager should be able to help you adjust the project to minimize the company's losses. Your company will learn an expensive lesson, but so will you. This means you will actually become MORE valuable to them, not less, because you got some experience and have already made some mistakes. Why would they hire someone new to replace you just so that person can make the same mistakes again, the mistakes you already learned how to avoid?
Your reputation and success will depend entirely on you being communicative here. So take charge of your situation, don't try to do the impossible, and begin replanning and reprioritizing the project. Help set the right expectations. Ask for help. Show what you have learned. Try to prevent this from happening again, and try to see the signs early so you can raise the flags in time. | To quote from one of your comments:
>
> The funding isn't really there to hire an additional resource and, to be honest, I don't even get paid that much. They definitely are not paying me as a full stack developer, which I believe is essentially the role I am playing.
>
>
>
An important thing to realize is that if a company cannot afford to pay (adequately) for software, they aren't entitled to get software for "free". Because it isn't free, it comes at a high cost to folks like you who are overworked, over-stressed, underappreciated and underpaid. **There is no reason to burn yourself out, to damage your health, psyche, and relationships, to make someone else rich, or to save them money, or to fix their mistakes, etc., unless you are being handsomely rewarded for it.** (Some might say there is no reward worth burnout, but I leave that as a personal choice.)
So what should you do in this situation? First, congratulations, because the fact that you
>
> somehow managed to get the project off the ground and it is somewhat functional and not completely hideous
>
>
>
is an amazing accomplishment that you should be proud of. I'm a professional software engineer of more than 20 years, with all kinds of higher education, and that is still what I aim to accomplish.
So now let's figure out how to lift some of the stress, and get rewarded.
Given that you've now shown some real progress, I think you should start by taking a week off. Go away, or just sleep in, and read a book or catch up on TV or meet some friends for dinner or whatever. You'll be amazed at what a week away will do for you.
Second, when you return, you need to throttle back to a sustainable workload. For every sprint, or however you want to organize it, only attempt to do work that you estimate will fill about 2/3 of the time (of a 40 hour week) available in that sprint. Do not work more than 40 hours a week. If you complete all that work within your time limit, great, grab something else off the queue. But if you don't, **that's okay**, it doesn't mean that you are a bad engineer, or developer, etc., it just means that it was hard to estimate how long that piece of work would take.
Regarding getting rewarded:
As you progress, take a mental note of what you've learned, both from books, and from making mistakes and needing to redo stuff. Spend a bit of your work time reading about different architectures, coding styles, etc. relevant to this project, and try to incorporate them, and learn lessons from them. See if there are some local development meetups, and attend them. Take pride in what you've learned, and what you've built.
Now, set a date six months from now in your calendar. When that date rolls around, start applying for jobs. You deserve a job which pays you what you're worth! Good luck! |
126,185 | I came to my current place of employment as a contracted IT support technician of the most generic variety about a year ago. I had only an associate's degree to my name, and most of what I knew, both as a support technician and as a tech professional in general, was self-taught. In short time, I showed myself to be useful enough to warrant hiring and, furthermore, worthy of more duties. I soon found myself working as a DevOps / Automation Engineer. Lots of scripting.
Not too long after this transition, I was tasked to build a web application for the enterprise that, in the long run, would serve as a central hub for day-to-day IT operations, with a focus on automation. Even without knowing the complete ins and outs of designing, prototyping, building, and maintaining a project of such scale, I knew this would be a lot for one person, but did not want to throw up the white flag just yet.
To recount all the time I took teaching myself basic web building principles, C#, JavaScript, various web application frameworks, etc. would take a movie montage, but suffice to say, I somehow managed to get the project off the ground and it is somewhat functional and not completely hideous. I am, however, feeling pretty burned out. I don't sleep much and am constantly worried I will fall flat on my face.
I am a part of a dev team of just three people, one of whom has their own monolithic project, though not quite as large as my own, and the other who doesn't know enough about the technologies we use to adequately help us. I am approaching the UAT phase and have a growing backlog. Part of me wants to pack a bag and disappear to a place without electricity. More rationally, though, I am thinking about talking to my boss about the workload and my struggle to carry it.
What would be the best way to approach this situation to relieve the stress/pressure levels?
Edit: I realized I forgot to mention a key part of our development cycle. We have 3 week iterations (sprints) with catchups / demos at the end. We did not always have this in place. At the start, I was developing in a bubble with little to no feedback. | 2019/01/10 | [
"https://workplace.stackexchange.com/questions/126185",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/97769/"
] | I say listen to your body. I have been burned out twice. Don't let it happen to you. Did you know that burning out can have [lasting damaging effects on the brain](https://www.psychologicalscience.org/observer/burnout-and-the-brain)? When work affects your sleep, you need to take a step back and address the things that make you stressed out.
There are many [causes of stress](https://www.webmd.com/balance/guide/causes-of-stress). The two main ones that I have experienced are *being in a situation you can't control*, and *lack of sleep*. You have to take control (by removing features, moving deadlines, etc) and you have to wind down so you can sleep.
Talk to your manager. Tell them you will not be able to meet the deadline. Say that work has started to affect your sleep. If you are working overtime, cease that immediately. Missing sleep means you underperform anyway since you need your brain for work. Why miss sleep and work overtime if you produce more when you are well rested and within the work hours?
When you come home in the evening, do something you love. Do not think about work. You need multiple hours of free time that is yours to do what you want. Eat healthy, go for a walk, meet people, do what you want. Then go to bed (at the same time every night), leaving the phone out of the bedroom. Possibly talk to a doctor if you need temporary medication.
Missing the deadline is ultimately not your main concern; your health is. Realize that the project is just money. And a good manager should be able to help you adjust the project to minimize the company's losses. Your company will learn an expensive lesson, but so will you. This means you will actually become MORE valuable to them, not less, because you got some experience and have already made some mistakes. Why would they hire someone new to replace you just so that person can make the same mistakes again, the mistakes you already learned how to avoid?
Your reputation and success will depend entirely on you being communicative here. So take charge of your situation, don't try to do the impossible, and begin replanning and reprioritizing the project. Help set the right expectations. Ask for help. Show what you have learned. Try to prevent this from happening again, and try to see the signs early so you can raise the flags in time. | One step that hasn't been touched on much: update your resume. You have, from a very unpromising start, created an application that should be beyond your abilities. Emphasize that. You are almost certainly not going to be paid what you're worth where you are, and you definitely can't keep that pace up.
If you fall back to an effort level you can maintain without serious personal harm, you may get fired. Clearly, your management doesn't understand the situation. You need to be able to move somewhere else, even if everything suddenly goes right where you are. |
126,185 | I came to my current place of employment as a contracted IT support technician of the most generic variety about a year ago. I had only an associate's degree to my name, and most of what I knew, both as a support technician and as a tech professional in general, was self-taught. In short time, I showed myself to be useful enough to warrant hiring and, furthermore, worthy of more duties. I soon found myself working as a DevOps / Automation Engineer. Lots of scripting.
Not too long after this transition, I was tasked to build a web application for the enterprise that, in the long run, would serve as a central hub for day-to-day IT operations, with a focus on automation. Even without knowing the complete ins and outs of designing, prototyping, building, and maintaining a project of such scale, I knew this would be a lot for one person, but did not want to throw up the white flag just yet.
To recount all the time I took teaching myself basic web building principles, C#, JavaScript, various web application frameworks, etc. would take a movie montage, but suffice to say, I somehow managed to get the project off the ground and it is somewhat functional and not completely hideous. I am, however, feeling pretty burned out. I don't sleep much and am constantly worried I will fall flat on my face.
I am a part of a dev team of just three people, one of whom has their own monolithic project, though not quite as large as my own, and the other who doesn't know enough about the technologies we use to adequately help us. I am approaching the UAT phase and have a growing backlog. Part of me wants to pack a bag and disappear to a place without electricity. More rationally, though, I am thinking about talking to my boss about the workload and my struggle to carry it.
What would be the best way to approach this situation to relieve the stress/pressure levels?
Edit: I realized I forgot to mention a key part of our development cycle. We have 3 week iterations (sprints) with catchups / demos at the end. We did not always have this in place. At the start, I was developing in a bubble with little to no feedback. | 2019/01/10 | [
"https://workplace.stackexchange.com/questions/126185",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/97769/"
] | Software projects are always overpromised and underdelivered, that's more or less a fact of life.
Step 1: Mention to your manager you are understaffed relative to the workload. Estimate (realistically) how long it will take for various milestones in the project to be ready, even if it was just you working on them, assuming 8-hour work days, and report that to your manager. Make your manager aware that the farther out the deadline, the more inaccurate it may be; if you say it will take 3 weeks to complete a milestone in 6 months from now, set expectations that it may take 2.5 weeks, or 3.5 weeks, and 3 weeks is just a fuzzy estimate.
Don't be afraid of the response. What you are likely to hear is disappointment. Don't take it personally. Basically you are telling your manager "no", and no manager wants to ever hear "no", but that's what you have to do. Your manager will likely respond in one of a few ways:
1) "Can't you do it faster?": No, sorry, I can't. One person only has so much time in a day, and this is how long it will take. If you want more man-hours, hire more men (not specifically "men", etc, you get the point).
2) "Can you work overtime to do it?": It's at this point you should mention your health issues. Explain that you are having trouble sleeping, you are constantly stressed, etc. Be prepared to hand in your resignation letter on the spot if your manager does not take this response with the gravity it deserves. Those who have read my other comments on Workplace SE know that I am very very much opposed to leaving a current paying job without a backup plan (I have done so before and it was hellish let me tell you), but in this case I will shelve my normal reticence and tell you to just get out of there. In this case you may want to consult legal counsel for a case of [constructive dismissal](https://en.wikipedia.org/wiki/Constructive_dismissal).
3) "Can we negotiate this?": No. This is the absolute minimum amount of time it will take. It is non-negotiable.
Step 2: Stop working overtime. When you leave work for the day, *leave work for the day*. Go home, watch TV, relax, play some video games, exercise, whatever makes you happy. The work will get done on schedule, eventually. Get it into your head that you work 8-hour days, no more, no less.
Step 3: Encourage your company to expand your team. Explain to your company the concept of the [bus factor](https://en.wikipedia.org/wiki/Bus_factor) and why the current situation puts them at great risk. In addition to making your workload much lighter, it will also protect the company from catastrophic failure in the case of, well, you getting hit by a bus. | IMHO,
Find your balance and get help (i.e. more devs or contractors for areas you feel weaker at) |
126,185 | I came to my current place of employment as a contracted IT support technician of the most generic variety about a year ago. I had only an associate's degree to my name, and most of what I knew, both as a support technician and as a tech professional in general, was self-taught. In short time, I showed myself to be useful enough to warrant hiring and, furthermore, worthy of more duties. I soon found myself working as a DevOps / Automation Engineer. Lots of scripting.
Not too long after this transition, I was tasked to build a web application for the enterprise that, in the long run, would serve as a central hub for day-to-day IT operations, with a focus on automation. Even without knowing the complete ins and outs of designing, prototyping, building, and maintaining a project of such scale, I knew this would be a lot for one person, but did not want to throw up the white flag just yet.
To recount all the time I took teaching myself basic web building principles, C#, JavaScript, various web application frameworks, etc. would take a movie montage, but suffice to say, I somehow managed to get the project off the ground and it is somewhat functional and not completely hideous. I am, however, feeling pretty burned out. I don't sleep much and am constantly worried I will fall flat on my face.
I am a part of a dev team of just three people, one of whom has their own monolithic project, though not quite as large as my own, and the other who doesn't know enough about the technologies we use to adequately help us. I am approaching the UAT phase and have a growing backlog. Part of me wants to pack a bag and disappear to a place without electricity. More rationally, though, I am thinking about talking to my boss about the workload and my struggle to carry it.
What would be the best way to approach this situation to relieve the stress/pressure levels?
Edit: I realized I forgot to mention a key part of our development cycle. We have 3 week iterations (sprints) with catchups / demos at the end. We did not always have this in place. At the start, I was developing in a bubble with little to no feedback. | 2019/01/10 | [
"https://workplace.stackexchange.com/questions/126185",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/97769/"
] | To quote from one of your comments:
>
> The funding isn't really there to hire an additional resource and, to be honest, I don't even get paid that much. They definitely are not paying me as a full stack developer, which I believe is essentially the role I am playing.
>
>
>
An important thing to realize is that if a company cannot afford to pay (adequately) for software, they aren't entitled to get software for "free". Because it isn't free, it comes at a high cost to folks like you who are overworked, over-stressed, underappreciated and underpaid. **There is no reason to burn yourself out, to damage your health, psyche, and relationships, to make someone else rich, or to save them money, or to fix their mistakes, etc., unless you are being handsomely rewarded for it.** (Some might say there is no reward worth burnout, but I leave that as a personal choice.)
So what should you do in this situation? First, congratulations, because the fact that you
>
> somehow managed to get the project off the ground and it is somewhat functional and not completely hideous
>
>
>
is an amazing accomplishment that you should be proud of. I'm a professional software engineer of more than 20 years, with all kinds of higher education, and that is still what I aim to accomplish.
So now let's figure out how to lift some of the stress, and get rewarded.
Given that you've now shown some real progress, I think you should start by taking a week off. Go away, or just sleep in, and read a book or catch up on TV or meet some friends for dinner or whatever. You'll be amazed at what a week away will do for you.
Second, when you return, you need to throttle back to a sustainable workload. For every sprint, or however you want to organize it, only attempt to do work that you estimate will fill about 2/3 of the time (of a 40 hour week) available in that sprint. Do not work more than 40 hours a week. If you complete all that work within your time limit, great, grab something else off the queue. But if you don't, **that's okay**, it doesn't mean that you are a bad engineer, or developer, etc., it just means that it was hard to estimate how long that piece of work would take.
Regarding getting rewarded:
As you progress, take a mental note of what you've learned, both from books, and from making mistakes and needing to redo stuff. Spend a bit of your work time reading about different architectures, coding styles, etc. relevant to this project, and try to incorporate them, and learn lessons from them. See if there are some local development meetups, and attend them. Take pride in what you've learned, and what you've built.
Now, set a date six months from now in your calendar. When that date rolls around, start applying for jobs. You deserve a job which pays you what you're worth! Good luck! | IMHO,
Find your balance and get help (i.e. more devs or contractors for areas you feel weaker at) |
126,185 | I came to my current place of employment as a contracted IT support technician of the most generic variety about a year ago. I had only an associate's degree to my name, and most of what I knew, both as a support technician and as a tech professional in general, was self-taught. In short time, I showed myself to be useful enough to warrant hiring and, furthermore, worthy of more duties. I soon found myself working as a DevOps / Automation Engineer. Lots of scripting.
Not too long after this transition, I was tasked to build a web application for the enterprise that, in the long run, would serve as a central hub for day-to-day IT operations, with a focus on automation. Even without knowing the complete ins and outs of designing, prototyping, building, and maintaining a project of such scale, I knew this would be a lot for one person, but did not want to throw up the white flag just yet.
To recount all the time I took teaching myself basic web building principles, C#, JavaScript, various web application frameworks, etc. would take a movie montage, but suffice to say, I somehow managed to get the project off the ground and it is somewhat functional and not completely hideous. I am, however, feeling pretty burned out. I don't sleep much and am constantly worried I will fall flat on my face.
I am a part of a dev team of just three people, one of whom has their own monolithic project, though not quite as large as my own, and the other who doesn't know enough about the technologies we use to adequately help us. I am approaching the UAT phase and have a growing backlog. Part of me wants to pack a bag and disappear to a place without electricity. More rationally, though, I am thinking about talking to my boss about the workload and my struggle to carry it.
What would be the best way to approach this situation to relieve the stress/pressure levels?
Edit: I realized I forgot to mention a key part of our development cycle. We have 3 week iterations (sprints) with catchups / demos at the end. We did not always have this in place. At the start, I was developing in a bubble with little to no feedback. | 2019/01/10 | [
"https://workplace.stackexchange.com/questions/126185",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/97769/"
] | One step that hasn't been touched on much: update your resume. You have, from a very unpromising start, created an application that should be beyond your abilities. Emphasize that. You are almost certainly not going to be paid what you're worth where you are, and you definitely can't keep that pace up.
If you fall back to an effort level you can maintain without serious personal harm, you may get fired. Clearly, your management doesn't understand the situation. You need to be able to move somewhere else, even if everything suddenly goes right where you are. | IMHO,
Find your balance and get help (i.e. more devs or contractors for areas you feel weaker at) |
73,920 | I found this [cool website](http://gamemechanicexplorer.com/#easing-1) for game development and it has a list of easing functions:

Although the site contains a description of what they're for, it goes over my head. What are easing functions and what are they used for?
Update
------
I found a better example of the functions themselves from [Phaser.io's source code](https://github.com/photonstorm/phaser/blob/master/src/tween/Easing.js). These functions are much simpler than the answers here would suggest. They only take one parameter, `k`. As part of the answer, I'd like to know how to use these. | 2014/04/23 | [
"https://gamedev.stackexchange.com/questions/73920",
"https://gamedev.stackexchange.com",
"https://gamedev.stackexchange.com/users/31177/"
] | Easing functions are used for interpolation, typically (but not necessarily) in animation / kinematic motion. Linear interpolation (lerp) is something you may have heard of. Let's say you lerp a smiley face from one corner of the screen to another (much as per your image). This means the smiley will move at a steady velocity from point A to point B. If you were to apply this to movement of a limb, it would look very robotic and unnatural -- the actuators / servos that robots use, perform this way. Obviously, human limbs move in a very different way. And most motion that you will see in nature will have interesting motion curves, rather than the steady, unchanging velocity seen in linear interpolation.
Enter easing. Easing motion means the velocity is *not* constant. What this achieves is a more realistic look. Watch people, watch different animals, watch plants bending in the wind, or even how falling rain changes direction on a gusty day. Watch the velocity of a ball as you throw it up in the air and it comes back down again. Watch the motion of a guitar string as you pluck it. Each of these types of motion has a different curve describing velocity.
I suggest you play with [GreenSock's GSAP online](http://www.greensock.com/jump-start-js/) to get a feel for what the different types of easing curves produce in terms of motion. It's one of those things where it takes time and practice to map a particular named curve to the sort of motion you imagine you want. But once you have grasped the basics, you'll have a lot of fun.
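To make that concrete with the single-parameter functions mentioned in the question: an easing function takes normalized progress `k` in [0, 1] and returns an eased value, which you then feed into a plain lerp between your start and end numbers. A minimal sketch with illustrative names of my own (not Phaser's actual API):

```javascript
// Quadratic ease-in: starts slow, accelerates toward the end.
function easeInQuad(k) {
  return k * k;
}

// Linear interpolation between a and b by t in [0, 1].
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// Compute a tweened property value: normalize elapsed time into k,
// run it through the easing function, then lerp the actual range.
function positionAt(start, end, elapsed, duration, ease) {
  const k = Math.min(Math.max(elapsed / duration, 0), 1); // clamp to [0, 1]
  return lerp(start, end, ease(k));
}

// Halfway through the tween, an ease-in-quad motion has only
// covered a quarter of the distance:
console.log(positionAt(0, 100, 500, 1000, easeInQuad)); // 25
```

Swapping in a different one-parameter easing function (ease-out, elastic, bounce, ...) changes only how `k` is reshaped; the lerp and the start/end values stay the same.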
P.S. As I said, easing is not only used for animation. It may be used for sound panning, for effecting skeletal motion at the logical / model level, or anything else you can think of that might need specific smooth variation over time. | Easing functions serve to change a value during a time period, from a starting number to an end number.
You use that value to animate a property of an object in your game, such as position, rotation, scale, changing colors and other properties that use a value.
The different easing functions determine the "feel" of the animation, or how the value changes over time.
On the website you posted, the graph shows the value changing over time from a start to an end, so it doesn't mean the object you are animating will follow the path of the ball in the graph. |
62,670 | In *Harry Potter and the Prisoner of Azkaban*, Prof. Lupin turns into a werewolf and tries to kill Harry, and just a few days later Harry talks nicely with Prof. Lupin.
Why would Harry do that? | 2016/11/02 | [
"https://movies.stackexchange.com/questions/62670",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/42753/"
] | Because he knows that Lupin is a werewolf because of Hermione.
>
> **Hermione:** He's a werewolf! That's why he's been missing classes.
>
>
> **Lupin:** How long have you known?
>
>
> **Hermione:** Since Professor Snape set the essay.
>
>
>
A werewolf can hurt his dear ones without knowing it, and transforming into a werewolf is very painful too. Since Harry knew Lupin was just having a rough time as a werewolf, he talks to Lupin normally.
Obviously, if someone's dear ones hurt them during a difficult time, and they know the harm wasn't intentional, they would still talk to them normally and help them.
Hermione and Harry later in the forest after they freed Buckbeak:
>
> **Hermione:** Poor Professor Lupin's having a really tough night.
>
>
> | Because a werewolf has no control over themselves if they turn.
However, some of the worst effects can be mitigated by consuming Wolfsbane Potion, which allows a werewolf to retain his or her human mind while transformed, thus freeing him or her from the worry of harming other humans or themselves. But Lupin had not taken his potion that day. So, he lost control.
Lupin was a good person and he was turned into a werewolf by Fenrir Greyback at a very young age, that wasn't his fault.
Moreover he was very close friends with Harry's parents. |
20,508 | In my company, to access the application I want to record, I need an internet connection with an automatic proxy (e.g. <http://autoproxy.xx.xx>). How and where can I set this proxy in the **HTTPS Test Script Recorder**? Thanks. | 2016/07/14 | [
"https://sqa.stackexchange.com/questions/20508",
"https://sqa.stackexchange.com",
"https://sqa.stackexchange.com/users/19291/"
] | It seems JMeter doesn't support PAC files. Try an alternative recording solution, e.g. the [JMeter Chrome Extension](https://guide.blazemeter.com/hc/en-us/articles/206732579-Chrome-Extension). | I did a Google search for "JMeter automatic proxy". The first search result said this:
>
> PAC fiels[sic] contain javascript which is executed by the browser to decide which proxy URL they want to use. JMeter is not a browser so it does not run this code. The solution is simply to resolve which proxy this script returns and input this value into JMeter directly, you can do this using the dev tools on most browsers, or just ask the IT dept. that maintains the thing to tell you the direct address.
>
>
> |
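Following the quoted advice, the practical workaround is to resolve which proxy the PAC script returns and enter that host and port into JMeter's recorder settings directly. One quick way to spot candidates is to fetch the PAC file and scan it for `PROXY` directives; a rough sketch in Python (the PAC content below is a made-up example, not your company's real file):

```python
import re

def extract_proxies(pac_text):
    # PAC scripts return strings like "PROXY host:port; DIRECT".
    # Collect every host:port that appears in a PROXY directive.
    return re.findall(r'PROXY\s+([\w.\-]+:\d+)', pac_text)

# Hypothetical PAC file contents, for illustration only.
sample_pac = """
function FindProxyForURL(url, host) {
    if (shExpMatch(host, "*.internal.example"))
        return "DIRECT";
    return "PROXY proxy.corp.example:8080; DIRECT";
}
"""

candidates = extract_proxies(sample_pac)
```

Whatever the script returns for the hosts you need to record is what goes into JMeter's proxy settings (or, as the quoted answer suggests, just ask the IT department for the direct address).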
26,576 | I've read all the books and have seen all the movies (at least 3 times), but I've been wondering: **Is there ever a time when Harry wears contacts instead of glasses?** My memory is pretty good, but it's not *that* good ;) I'm so used to seeing him in glasses that I've never really thought about alternatives. | 2012/11/06 | [
"https://scifi.stackexchange.com/questions/26576",
"https://scifi.stackexchange.com",
"https://scifi.stackexchange.com/users/10667/"
] | There is nothing in the books to suggest Harry wore contact lenses at any point during the series.
However, in both the movie and the book *Harry Potter and the Deathly Hallows* (part 2 in terms of the movies), during the *King's Cross* scene (where Harry is "killed" by Voldemort) he becomes conscious of *not* wearing or needing glasses while in that state of stasis.
>
> He sat up. His body appeared unscathed. He touched his face. He was not wearing glasses any more.
>
>
> *Deathly Hallows* - page 565 - Bloomsbury - chapter 35, *King's Cross*
>
>
>

*Harry Potter and the Deathly Hallows - Part 2* - Warner Bros.
Moving on, I don't think J.K. Rowling would have wanted Harry to wear contact lenses; the fact that her hero wore glasses had personal significance to J.K. Rowling:
>
> **Eun Ji An for Raincoast.com, Canada:** I was wondering why Harry had glasses?
>
>
> **J.K. Rowling:** Because I had glasses all through my childhood and I was sick and tired of the person in the books who wore the glasses was always the brainy one and it really irritated me and I wanted to read about a hero wearing glasses. **It also has a symbolic function, Harry is the eyes on to the books in the sense that it is always Harry's point of view, so there was also that, you know, facet of him wearing glasses.**
>
>
> [**CBBC Newsround -- Interview with J.K. Rowling**](http://news.bbc.co.uk/cbbcnews/hi/newsid_4690000/newsid_4690800/4690885.stm) -- 07.18.05
>
>
>
As late in the series as *Deathly Hallows*, Harry is still wearing glasses, as evidenced in the chapter *The Seven Potters*:
>
> ‘Harry, your eyesight really is awful,’ said Hermione, as she put on glasses.
>
>
> *Deathly Hallows* - page 49 - Bloomsbury - chapter 4, *The Seven Potters*
>
>
>
As well, in the wizarding world, some physical ailments seem unable to be fixed by magic. One could make a case for utilizing Muggle treatments (which contact lenses would be), as Arthur Weasley accepted Muggle stitches for Nagini's bite when the wizarding treatments the St. Mungo's staff was giving him didn't work as well as they should have. Molly Weasley had a fit over this; however, Harry himself was raised in the Muggle world and might have been more amenable to contact lenses than purebloods or half-bloods who grew up in the wizarding world. This is just one point of view to consider.
To reiterate, there is no *canon* evidence that Harry Potter ever wore contact lenses or had Lasik surgery or fixed his eyesight magically or anything like that. | **Harry is never mentioned as wearing contacts**, including where he would be the most likely to do so (playing Quidditch). To the contrary, the book stresses that he wore regular glasses, since Hermione needed to magick his eyeglasses to be rainproof when playing in the rain..
The "never" comes from my search of all the relevant sources:
* "electronically searched softcopy text of all 7 books and didn't find a single instance of 'contact' or 'contacts' associated with eyewear"
* Googling for "Harry Potter" + contacts/"contact lenses"
* Search of accio quotes for JKR tidbits (same search strings). |
26,021 | Is it possible to achieve stereo sound totally without the "center", with only extreme right and left pan (e.g. choirs)?
If so, please tell me how. I'm using the iZotope Ozone plug-in. | 2013/12/19 | [
"https://sound.stackexchange.com/questions/26021",
"https://sound.stackexchange.com",
"https://sound.stackexchange.com/users/6561/"
] | Not practically. In stereo sound, the phantom center is the 'illusion'. Exactly center is whatever is exactly identical between the two channels. One can generalize and say that the width of the image is proportional to how 'different' the two sounds are.
But it actually turns out this is a mathematical property of sound. Ignoring the few other subtle cues humans pick up on that determine direction, when you are talking about a stereo pan you are mainly talking about a phase difference.
So if the center is when the left and right channels are completely in phase, then completely eliminating the center would be when the left and right are 100% out of phase.
And you can try that out: take a mono signal and invert the phase of one side, and it sounds about as "wide" as you'll get. But there's a problem: since sound sums additively, if you sum the two channels they subtract and disappear completely. There are other problems, but that is the most drastic, and it is what happens if you try to completely eliminate the center.
Doing any kind of stereo pan effect is just a balance between how wide you want it to sound and how much crap it becomes when it (inevitably) becomes much narrower on everyone's laptops (or can be summed to mono completely for radio).
[Ozone has M/S mode](http://www.izotope.com/support/help/ozone/pages/mid_side_processing.htm) for basically everything now. A pretty obvious choice is to raise a high shelf on the sides. | One can achieve such an effect by adding a plug-in to a channel that can phase-invert either the left or right stereo channel. This is a great way to make a pad fill up an entire audio space. |
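The out-of-phase trick and its mono-sum problem described above can be demonstrated on raw sample lists in a few lines of Python (a toy sketch, not a substitute for Ozone's M/S processing):

```python
# A mono signal duplicated to both channels sits dead center.
mono = [0.5, -0.25, 0.8, -0.1]
left = mono
right = [-s for s in mono]  # phase-invert one side: maximum apparent "width"

# Summing to mono (what radio, or a single laptop speaker, effectively does)
# cancels the signal completely: every sample comes out as zero.
summed = [l + r for l, r in zip(left, right)]

# The mid/side view of the same idea: "mid" is what the two channels share,
# "side" is what differs between them.
mid = [(l + r) / 2 for l, r in zip(left, right)]
side = [(l - r) / 2 for l, r in zip(left, right)]
```

Here `mid` comes out as all zeros (no center at all) and `side` carries the whole signal, which is exactly why the fully out-of-phase version disappears when summed; raising a high shelf on the sides in an M/S mode is the gentler version of the same move.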
34,266 | What I'm trying to say is: if Light had told Teru that killing from a scrap of Death Note paper is possible, and Teru had kept a real Death Note page with him 24/7, then surely in the final showdown whoever's names were written on that piece of paper would die, wouldn't they?
That way you wouldn't even need to worry about the notebook-switching scheme. | 2016/07/15 | [
"https://anime.stackexchange.com/questions/34266",
"https://anime.stackexchange.com",
"https://anime.stackexchange.com/users/26187/"
] | The whole point of Teru was to be a decoy. If the SPK had seen Mikami using a separate page of the real Death Note, it would have been a giveaway. For example, instead of visiting the bank to retrieve the real Death Note, Mikami might have used the page, which would still give him away.
Light made sure that Mikami had NO access to the real Death Note and explicitly told him to only take it out on D-Day.
This is basically a speculative question containing what-if scenarios. You can plausibly say that something might've worked out, but as I have just said above, chances are it would have blown up more easily: the SPK would have been dead if Mikami had trusted Light and not gone to retrieve the real Death Note from the bank.
EDIT: I get that the question was why Mikami didn't keep a separate piece of real Death Note paper for the final D-Day instead of the notebook. I answered this what-if scenario with a plausible explanation that Mikami might have used it to kill Takada. Then I went on to explain that this was why Light didn't want ANY piece of the real Death Note with Mikami, to keep the plan foolproof.
**Tl;dr: Mikami didn't keep a Death Note page because Light told him not to. Light, just like Near, wanted to play a complete and foolproof game, but lost to the external factors of Mello and Mikami.** | This is the email X-Kira should have received prior to Jan 28th...
In exactly 1 month and 29 days from today, the 26th of January: do not go to the bank to pick up the note. Instead, and I suggest sometime in December, make 2 copies (not one) of the notebook. Keep one in your briefcase and the other in the bank vault. Keep the original but bury it, using a name to do so for you; that way you cannot be tracked. Then keep a page on you, and on the day of 28th January at 1pm, use this page to rest on the fake that the SPK will use. Write down Nate River first. Then kill the rest except Light Yagami.
-LY ;] |
96,396 | A bit of background to the question: I'm a junior developer, and we have recently been given flexitime, which so far is working well as I get to start at 8am and leave at 4:30pm.
Today I arrived at 7:30 (I like to be early so I can have a coffee in the office before I start my work); however, no one arrived to open the office until 9am.
So my question is, should I leave at the normal time (5:30 pm) or when I would usually leave (4:30 pm)? | 2017/08/02 | [
"https://workplace.stackexchange.com/questions/96396",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/53342/"
] | Today, work till 5:30. Yes it's not your fault that you couldn't start at 8 - but you still need to ensure you're working your contracted hours.
Make sure today though that you discuss with your boss what happens in future. If the company is offering flexible hours - they need to ensure you can start work when you want to (because that's the whole point). | This is a tricky one.
The basic answer is: you were at the employer's disposal during the time agreed upon with him; that you were not able to work due to factors external to you does not change that. It is no different from, say, sitting idle at work for an hour because the computers are down due to a power outage.
One small "but" is that it may be difficult to prove that you were there at 8 am; but since you state you are there every day, that should not be much of an issue. Any message contacting your coworkers to report the issue with the doors should also help. Check with your boss that he knows you were there at 8:00 am.
The big "but" is that you have just started with flex time, which means that the system still has to be worked out and your employer is still evaluating it. If the employer comes to the conclusion that the system leads to employees working fewer hours (even if it is not the employees' fault), he may want to return to strict hours. I am not saying that would happen over just one incident, but if it happens more often it could be decisive.
96,396 | A bit of background to the question: I'm a junior developer, and we have recently been given flexitime, which so far is working well as I get to start at 8am and leave at 4:30pm.
Today I arrived at 7:30 (I like to be early so I can have a coffee in the office before I start my work); however, no one arrived to open the office until 9am.
So my question is, should I leave at the normal time (5:30 pm) or when I would usually leave (4:30 pm)? | 2017/08/02 | [
"https://workplace.stackexchange.com/questions/96396",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/53342/"
] | Unless there was a pre-existing conversation with your boss that you were coming in at 8 and that someone would be there to open the office then I'd say that this time you'll just have to take it on the chin and work till 5:30.
What I would do though is explain to your boss (in a non-confrontational, non-accusatory way) that there was no-one to open the office when you arrived and ask if you need to be letting people know about any plans to arrive "early" to ensure a keyholder opens up, or if the office *won't* be opening pre-9:00 on a day you were planning to come in early that you can at least adjust your plans.
Most likely this is just a blip that is down to the organisation adjusting to the flexitime (you mention it is a recent development). | Today, work till 5:30. Yes it's not your fault that you couldn't start at 8 - but you still need to ensure you're working your contracted hours.
Make sure today though that you discuss with your boss what happens in future. If the company is offering flexible hours - they need to ensure you can start work when you want to (because that's the whole point). |
96,396 | A bit of background to the question: I'm a junior developer, and we have recently been given flexitime, which so far is working well as I get to start at 8am and leave at 4:30pm.
Today I arrived at 7:30 (I like to be early so I can have a coffee in the office before I start my work); however, no one arrived to open the office until 9am.
So my question is, should I leave at the normal time (5:30 pm) or when I would usually leave (4:30 pm)? | 2017/08/02 | [
"https://workplace.stackexchange.com/questions/96396",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/53342/"
] | Today, work till 5:30. Yes it's not your fault that you couldn't start at 8 - but you still need to ensure you're working your contracted hours.
Make sure today though that you discuss with your boss what happens in future. If the company is offering flexible hours - they need to ensure you can start work when you want to (because that's the whole point). | Just ask your manager.
Don't assume anything and simply ask. Something odd happened today and you want to know how to proceed as well as set up an example for future instances should it happen again. This is the exact situation where the only correct thing to do is ask your manager.
You shouldn't "just do the work" without asking because depending on your company guidelines that could push you into overtime hours or the like, which may or may not be "ok".
You shouldn't assume the opposite either, since you don't want to short-change your workplace work-hours.
Just ask. |
96,396 | A bit of background to the question: I'm a junior developer, and we have recently been given flexitime, which so far is working well as I get to start at 8am and leave at 4:30pm.
Today I arrived at 7:30 (I like to be early so I can have a coffee in the office before I start my work); however, no one arrived to open the office until 9am.
So my question is, should I leave at the normal time (5:30 pm) or when I would usually leave (4:30 pm)? | 2017/08/02 | [
"https://workplace.stackexchange.com/questions/96396",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/53342/"
] | Unless there was a pre-existing conversation with your boss that you were coming in at 8 and that someone would be there to open the office then I'd say that this time you'll just have to take it on the chin and work till 5:30.
What I would do though is explain to your boss (in a non-confrontational, non-accusatory way) that there was no-one to open the office when you arrived and ask if you need to be letting people know about any plans to arrive "early" to ensure a keyholder opens up, or if the office *won't* be opening pre-9:00 on a day you were planning to come in early that you can at least adjust your plans.
Most likely this is just a blip that is down to the organisation adjusting to the flexitime (you mention it is a recent development). | This is a tricky one.
The basic answer is: you were at the employer's disposal during the time agreed upon with him; that you were not able to work due to factors external to you does not change that. It is no different from, say, sitting idle at work for an hour because the computers are down due to a power outage.
One small "but" is that it may be difficult to prove that you were there at 8 am; but since you state you are there every day, that should not be much of an issue. Any message contacting your coworkers to report the issue with the doors should also help. Check with your boss that he knows you were there at 8:00 am.
The big "but" is that you have just started with flex time, which means that the system still has to be worked out and your employer is still evaluating it. If the employer comes to the conclusion that the system leads to employees working fewer hours (even if it is not the employees' fault), he may want to return to strict hours. I am not saying that would happen over just one incident, but if it happens more often it could be decisive.
96,396 | A bit of background to the question: I'm a junior developer, and we have recently been given flexitime, which so far is working well as I get to start at 8am and leave at 4:30pm.
Today I arrived at 7:30 (I like to be early so I can have a coffee in the office before I start my work); however, no one arrived to open the office until 9am.
So my question is, should I leave at the normal time (5:30 pm) or when I would usually leave (4:30 pm)? | 2017/08/02 | [
"https://workplace.stackexchange.com/questions/96396",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/53342/"
] | With all implementations of flextime that I know of, there is a set of rules, in particular **core and maximum work hours**. You need to be there at least during core hours (e.g. 10am-2pm), and you may only work during a certain window (e.g. 7am-7pm). Was that not communicated?
If not, then talk to your manager and ask them to set this policy. Then everyone will know what is or is not possible with flextime.
As to today: If you normally start at 8 and leave at 4:30, I don't see why you cannot leave at 4:30 today, too. The time from 7:30 to 8 is probably on you, because it was not clear whether you can start before 8, but if your normal day starts at 8 am, and the door is locked, that's not your fault. | This is a tricky one.
The basic answer is: you were at the employer's disposal during the time agreed upon with him; that you were not able to work due to factors external to you does not change that. It is no different from, say, sitting idle at work for an hour because the computers are down due to a power outage.
One small "but" is that it may be difficult to prove that you were there at 8 am; but since you state you are there every day, that should not be much of an issue. Any message contacting your coworkers to report the issue with the doors should also help. Check with your boss that he knows you were there at 8:00 am.
The big "but" is that you have just started with flex time, which means that the system still has to be worked out and your employer is still evaluating it. If the employer comes to the conclusion that the system leads to employees working fewer hours (even if it is not the employees' fault), he may want to return to strict hours. I am not saying that would happen over just one incident, but if it happens more often it could be decisive.
96,396 | A bit of background to the question: I'm a junior developer, and we have recently been given flexitime, which so far is working well as I get to start at 8am and leave at 4:30pm.
Today I arrived at 7:30 (I like to be early so I can have a coffee in the office before I start my work); however, no one arrived to open the office until 9am.
So my question is, should I leave at the normal time (5:30 pm) or when I would usually leave (4:30 pm)? | 2017/08/02 | [
"https://workplace.stackexchange.com/questions/96396",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/53342/"
] | Unless there was a pre-existing conversation with your boss that you were coming in at 8 and that someone would be there to open the office then I'd say that this time you'll just have to take it on the chin and work till 5:30.
What I would do though is explain to your boss (in a non-confrontational, non-accusatory way) that there was no-one to open the office when you arrived and ask if you need to be letting people know about any plans to arrive "early" to ensure a keyholder opens up, or if the office *won't* be opening pre-9:00 on a day you were planning to come in early that you can at least adjust your plans.
Most likely this is just a blip that is down to the organisation adjusting to the flexitime (you mention it is a recent development). | With all implementations of flextime that I know of, there is a set of rules, in particular **core and maximum work hours**. You need to be there at least during core hours (e.g. 10am-2pm), and you may only work during a certain window (e.g. 7am-7pm). Was that not communicated?
If not, then talk to your manager and ask them to set this policy. Then everyone will know what is or is not possible with flextime.
As to today: If you normally start at 8 and leave at 4:30, I don't see why you cannot leave at 4:30 today, too. The time from 7:30 to 8 is probably on you, because it was not clear whether you can start before 8, but if your normal day starts at 8 am, and the door is locked, that's not your fault. |
96,396 | A bit of background to the question: I'm a junior developer, and we have recently been given flexitime, which so far is working well as I get to start at 8am and leave at 4:30pm.
Today I arrived at 7:30 (I like to be early so I can have a coffee in the office before I start my work); however, no one arrived to open the office until 9am.
So my question is, should I leave at the normal time (5:30 pm) or when I would usually leave (4:30 pm)? | 2017/08/02 | [
"https://workplace.stackexchange.com/questions/96396",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/53342/"
] | Unless there was a pre-existing conversation with your boss that you were coming in at 8 and that someone would be there to open the office then I'd say that this time you'll just have to take it on the chin and work till 5:30.
What I would do though is explain to your boss (in a non-confrontational, non-accusatory way) that there was no-one to open the office when you arrived and ask if you need to be letting people know about any plans to arrive "early" to ensure a keyholder opens up, or if the office *won't* be opening pre-9:00 on a day you were planning to come in early that you can at least adjust your plans.
Most likely this is just a blip that is down to the organisation adjusting to the flexitime (you mention it is a recent development). | Just ask your manager.
Don't assume anything and simply ask. Something odd happened today and you want to know how to proceed as well as set up an example for future instances should it happen again. This is the exact situation where the only correct thing to do is ask your manager.
You shouldn't "just do the work" without asking because depending on your company guidelines that could push you into overtime hours or the like, which may or may not be "ok".
You shouldn't assume the opposite either, since you don't want to short-change your workplace work-hours.
Just ask. |
96,396 | A bit of background to the question: I'm a junior developer, and we have recently been given flexitime, which so far is working well as I get to start at 8am and leave at 4:30pm.
Today I arrived at 7:30 (I like to be early so I can have a coffee in the office before I start my work); however, no one arrived to open the office until 9am.
So my question is, should I leave at the normal time (5:30 pm) or when I would usually leave (4:30 pm)? | 2017/08/02 | [
"https://workplace.stackexchange.com/questions/96396",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/53342/"
] | With all implementations of flextime that I know of, there is a set of rules, in particular **core and maximum work hours**. You need to be there at least during core hours (e.g. 10am-2pm), and you may only work during a certain window (e.g. 7am-7pm). Was that not communicated?
If not, then talk to your manager and ask them to set this policy. Then everyone will know what is or is not possible with flextime.
As to today: If you normally start at 8 and leave at 4:30, I don't see why you cannot leave at 4:30 today, too. The time from 7:30 to 8 is probably on you, because it was not clear whether you can start before 8, but if your normal day starts at 8 am, and the door is locked, that's not your fault. | Just ask your manager.
Don't assume anything and simply ask. Something odd happened today and you want to know how to proceed as well as set up an example for future instances should it happen again. This is the exact situation where the only correct thing to do is ask your manager.
You shouldn't "just do the work" without asking because depending on your company guidelines that could push you into overtime hours or the like, which may or may not be "ok".
You shouldn't assume the opposite either, since you don't want to short-change your workplace work-hours.
Just ask. |
420,361 | My Exchange Server 2003 (running on Windows Server 2003) fails at around 04:54 on a regular basis, though not necessarily every day.
By "fails" I mean that my colleagues try and check their emails and Outlook says "outlook is not connected to exchange." Since outlook tries to update the email every 3 minutes or so, and it records the time when the folder was last updated, it is possible to see the time of failure.
It is impossible to download emails until the server is restarted. Thereupon everything works well.
I have looked in Scheduled Tasks and can't see anything pertinent.
Does anyone have any ideas? | 2012/08/23 | [
"https://serverfault.com/questions/420361",
"https://serverfault.com",
"https://serverfault.com/users/110917/"
] | **Question:**
a) What version of Exchange 2003 are you using? Standard or Enterprise?
b) Is this part of SBS 2003 or a standalone Exchange?
**Suggestions:**
1) Can you navigate to this path in registry.
HKEY\_LOCAL\_MACHINE\System\CurrentControlSet\Services\MSExchangeIS\Server name\Private-Mailbox Store GUID
Check if you have a key called
*Database Size Limit in Gb*
and what's the value there.
2) What's the DB size on disk?
Default priv1.edb path is c:\Program Files\Exchsrvr\MDBDATA
**Possible Causes:**
a) The Exchange 2003 DB is dismounting because of the 18 GB hard limit for Exchange 2003 Standard.
This is usually resolved by increasing the db size limit to 75GB for Ex03 Standard.
ref:
<http://support.microsoft.com/kb/912375>
White space / offline defrags etc., to reclaim space:
<http://www.msexchange.org/tutorials/exchange-isinteg-eseutil.html> | Try looking in the Event Viewer to see if something occurs around the time it fails.
Hit Start -> Run -> type eventvwr in the Run box to open the Event Viewer, check the events under System and Application logs.
In addition, if you have a monitoring application such as HP OpenView or Centerity, you can create a business service which includes all the components required for the server to run, such as disk, CPU and memory monitors as well as networking, storage and application monitors; that way you will be able to identify the source of the failure at the specific time when the server fails. |
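If the dismount-at-size-limit cause from the first answer is suspected, a quick check is to compare the store file's size on disk against the configured limit; a rough sketch (the default path and the 18 GB figure are taken from the answer above and apply to Exchange 2003 Standard):

```python
import os

def check_store_size(edb_path, limit_gb=18):
    # Compare the Exchange database file size against the store size limit.
    # 18 GB is the assumed hard limit for Exchange 2003 Standard; the
    # "Database Size Limit in Gb" registry value can raise it (up to 75 GB).
    size_gb = os.path.getsize(edb_path) / (1024 ** 3)
    return size_gb, size_gb >= limit_gb

# e.g. check_store_size(r"C:\Program Files\Exchsrvr\MDBDATA\priv1.edb")
```

If the returned size is at or near the limit, that matches the symptom of the store dismounting until the server is restarted.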
420,361 | My Exchange Server 2003 (running on Windows Server 2003) fails at around 04:54 on a regular basis, though not necessarily every day.
By "fails" I mean that my colleagues try and check their emails and Outlook says "outlook is not connected to exchange." Since outlook tries to update the email every 3 minutes or so, and it records the time when the folder was last updated, it is possible to see the time of failure.
It is impossible to download emails until the server is restarted. Thereupon everything works well.
I have looked in Scheduled Tasks and can't see anything pertinent.
Does anyone have any ideas? | 2012/08/23 | [
"https://serverfault.com/questions/420361",
"https://serverfault.com",
"https://serverfault.com/users/110917/"
] | Try looking in the Event Viewer to see if something occurs around the time it fails.
Hit Start -> Run -> type eventvwr in the Run box to open the Event Viewer, check the events under System and Application logs.
In addition, if you have a monitoring application such as HP OpenView or Centerity, you can create a business service which includes all the components required for the server to run, such as disk, CPU and memory monitors as well as networking, storage and application monitors; that way you will be able to identify the source of the failure at the specific time when the server fails. | Yeah, SP2 might do it. If you still have issues, Google for "exchange 2003 sp2 hotfix" and see if any of them are relevant to your problem. Another workaround is to set the Exchange Information Store to automatically restart after a failure in services.msc (if that's the service that's failing), but this doesn't fix the root cause of the problem. |
420,361 | My Exchange Server 2003 (running on Windows Server 2003) fails at around 04:54 on a regular basis, though not necessarily every day.
By "fails" I mean that my colleagues try and check their emails and Outlook says "outlook is not connected to exchange." Since outlook tries to update the email every 3 minutes or so, and it records the time when the folder was last updated, it is possible to see the time of failure.
It is impossible to download emails until the server is restarted. Thereupon everything works well.
I have looked in Scheduled Tasks and can't see anything pertinent.
Does anyone have any ideas? | 2012/08/23 | [
"https://serverfault.com/questions/420361",
"https://serverfault.com",
"https://serverfault.com/users/110917/"
] | **Question:**
a) What version of Exchange 2003 are you using? Standard or Enterprise?
b) Is this part of SBS 2003 or a standalone Exchange?
**Suggestions:**
1) Can you navigate to this path in registry.
HKEY\_LOCAL\_MACHINE\System\CurrentControlSet\Services\MSExchangeIS\Server name\Private-Mailbox Store GUID
Check if you have a key called
*Database Size Limit in Gb*
and what's the value there.
2) What's the DB size on disk?
Default priv1.edb path is c:\Program Files\Exchsrvr\MDBDATA
**Possible Causes:**
a) The Exchange 2003 DB is dismounting because of the 18 GB hard limit for Exchange 2003 Standard.
This is usually resolved by increasing the db size limit to 75GB for Ex03 Standard.
ref:
<http://support.microsoft.com/kb/912375>
White space / offline defrags etc., to reclaim space:
<http://www.msexchange.org/tutorials/exchange-isinteg-eseutil.html> | Are you still able to ping the box after this happens or does it become completely unresponsive? Any backups running during that time? Is this server a dedicated Exchange server or does it have other functions?
Otherwise, I agree with Itai: you should cross-reference the event logs with the time at which your machine goes down. You may also need to get a dump of your system and post any pertinent information here. If it is just Exchange that is failing and you still have access to your system, check Task Manager to see what processes are running at that time. |
420,361 | My Exchange Server 2003 (running on Windows Server 2003) fails at around 04:54 on a regular basis, though not necessarily every day.
By "fails" I mean that my colleagues try and check their emails and Outlook says "outlook is not connected to exchange." Since outlook tries to update the email every 3 minutes or so, and it records the time when the folder was last updated, it is possible to see the time of failure.
It is impossible to download emails until the server is restarted. Thereupon everything works well.
I have looked in Scheduled Tasks and can't see anything pertinent.
Does anyone have any ideas? | 2012/08/23 | [
"https://serverfault.com/questions/420361",
"https://serverfault.com",
"https://serverfault.com/users/110917/"
] | Are you still able to ping the box after this happens or does it become completely unresponsive? Any backups running during that time? Is this server a dedicated Exchange server or does it have other functions?
Otherwise, I agree with Itai: you should cross-reference the event logs with the time at which your machine goes down. You may also need to get a dump of your system and post any pertinent information here. If it is just Exchange that is failing and you still have access to your system, check Task Manager to gather what processes are running during that time.
420,361 | My Exchange Server 2003 (running on Windows Server 2003) fails at around 04:54 on a regular basis, though not necessarily every day.
By "fails" I mean that my colleagues try and check their emails and Outlook says "outlook is not connected to exchange." Since outlook tries to update the email every 3 minutes or so, and it records the time when the folder was last updated, it is possible to see the time of failure.
It is impossible to download emails until the server is restarted. Thereupon everything works well.
I have looked in Scheduled Tasks and can't see anything pertinent.
Does anyone have any ideas? | 2012/08/23 | [
"https://serverfault.com/questions/420361",
"https://serverfault.com",
"https://serverfault.com/users/110917/"
] | **Question:**
a) What version of Exchange 2003 are you using? Standard or Enterprise?
b) Is this part of SBS2003 or a stand-alone Exchange?
**Suggestions:**
1) Can you navigate to this path in the registry:
HKEY\_LOCAL\_MACHINE\System\CurrentControlSet\Services\MSExchangeIS\Server name\Private-Mailbox Store GUID
Check if you have a key called
*Database Size Limit in Gb*
and what's the value there.
2) What's the DB size on disk?
Default priv1.edb path is c:\Program Files\Exchsrvr\MDBDATA
**Possible Causes:**
a) The Exchange 2003 DB is dismounting because of the 18GB hard limit for Exchange 2003 Standard.
This is usually resolved by increasing the db size limit to 75GB for Ex03 Standard.
ref:
<http://support.microsoft.com/kb/912375>
White-space / offline defrags etc., to reclaim space:
<http://www.msexchange.org/tutorials/exchange-isinteg-eseutil.html> | Yeah, SP2 might do it. If you still have issues, Google for "exchange 2003 sp2 hotfix" and see if any of them are relevant to your problem. Another workaround is to set the Exchange Information Store to automatically restart after a failure in services.msc (if that's the service that's failing), but this doesn't fix the root cause of the problem. |
6,176,725 | I need to query Sql Server and get an ID that identifies the machine on which SQL Server is installed. An ID that works well also in some complex scenarios like failover clusters, or similar architectures.
I need this for license check, i bind my licence key to a ID, currently i am creting "my id" using a combination of database creation date and server name, anyway this is not very good, expecially because it is a Database ID, not a Server ID.
In the past I used to read the harddisk serial number with an extended stored procedure, but isn't it there (at least in sql server 2008) a simple way to get this id?
I don't want to use a CLR stored procedure. | 2011/05/30 | [
"https://Stackoverflow.com/questions/6176725",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193655/"
] | >
> get an ID that identifies the machine on which SQL Server is installed. An ID that works well
> also in some complex scenarios like failover clusters, or similar architectures.
>
>
>
So what do you want? Identify the MACHINE, or identify the CLUSTER and handle enterprise scenarios?
In general this is not possible without extended stored procedures or CLR. Period. You rule out all approaches, so there are none left. And the fact that you already read the hard disk serial number implies that you are NOT prepared to handle larger installs. What do you read if there is no hard disk (as a disk), but a LUN on a SAN? What do you do in case the hard disk sits behind a hardware RAID controller? The approach you took from the start was pretty much not working, but you failed to see it because you don't deal with enterprise customers ;) Which also means you don't handle complex scenarios (clusters etc.) ;)
As gbn says, don't bother enterprise customers with childish copy protections; put a license agreement in place. In general, enterprise software has few technical limitations, to allow enterprises to provide good service without jumping through hoops to get another licensing key just for a lab install. | It's called a "licensing agreement"
I wouldn't want some app shutting down because it thinks it is running on a different box.
If we have a DR failover in a log shipping or mirroring setup, these are 2 separate SQL Server installs. And what kind of server runs a single hard disk anyway?
Other SO questions: <https://stackoverflow.com/search?q=software+licensing> |
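For contrast with the hardware-serial approach the question rules out, here is a minimal Python sketch (hypothetical helper name, not a SQL Server API) of the kind of name-based fingerprint one might bind a license key to. The answers' point stands: any such value shifts under clustering, failover, or virtualization, so treat this as illustration of the fragility, not a recommended implementation:

```python
import hashlib

def machine_fingerprint(machine_name, instance_name="MSSQLSERVER"):
    """Hash the server and instance names into a short license-binding token.
    Fragile by design: the machine name changes on failover to another node,
    silently invalidating the token."""
    raw = f"{machine_name}\\{instance_name}".upper()  # names are case-insensitive
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]
```

The upper-casing mirrors the fact that Windows machine names and SQL instance names are case-insensitive; without it, the same box could yield two different "IDs".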
10 | I purchased a Prius many years ago, but many of my friends have put forth claims that the batteries are very toxic, and thus that I may in fact be creating more harm than good by using a hybrid, and that hybrids are a less sustainable solution than non-hybrids.
How can one objectively measure the total footprint of a hybrid vehicle to determine whether the use of a hybrid over a traditional non-hybrid gasoline powered vehicle is lesser or greater? Has much research gone into the entire picture? For example, when the car reaches the end of its useful life, what happens to the batteries, can they be recycled or are they thrown away? What about the manufacture of the batteries, what negative effects come into play during manufacture? Are there other externalities that I've missed? | 2013/01/29 | [
"https://sustainability.stackexchange.com/questions/10",
"https://sustainability.stackexchange.com",
"https://sustainability.stackexchange.com/users/20/"
] | So the short answer is to look at '[life-cycle assessment](http://en.wikipedia.org/wiki/Life-cycle_assessment)' studies (LCA).
The longer answer is to ask what you mean by 'better', and then look at a bunch of LCAs and figure out what impact categories you care about most.
In either case, the goal of LCA is to collect all the different inputs and outputs for a product for all stages - not just use, but manufacturing and mining and disposal, etc. These impacts are collected into a smaller number of interpretable impact categories. Some impact categories to think about: carbon footprint/global warming potential, energy consumption, ecotoxicity, human health, water consumption. All of these are pretty commonly used. There is a lot of research on LCA methods, both on how to collect the basic data and how to transform it into these impact categories. Two common methods include the EPA's [TRACI](http://www.epa.gov/nrmrl/std/traci/traci.html) or the European [ReCiPe](http://www.lcia-recipe.net/). Ideally, LCAs provide a quantitative and clear answer, but that's rarely true - they are quantitative though, which is really important.
For some of your later questions, particularly around batteries, the answers will depend on the study. The technical terms here are allocation and boundary conditions. Do we model battery recycling? who gets the benefits of it? what technologies are we looking at, and what other products do those produce? What impacts or processes do we include or exclude? There are generally a lot of assumptions, which need to be made and defended to move forward - though often they need to be revised for new technology.
To answer your initial question, there was a poorly done report from years ago that purported to show that older Priuses were 'worse' than a Hummer in terms of energy usage. The report was [debunked](http://thinkprogress.org/climate/2007/08/27/201795/prius-easily-beats-hummer-in-life-cycle-energy-use-dust-to-dust-report-has-no-basis-in-fact/?mobile=nc) all over the place, but it is true that we've gotten better at making batteries over time. New hybrids have much lower impacts than the originals, and keeping the car longer will definitely decrease the relative impact (one of the assumptions is about how long cars last). Check out a newer study [here](http://pubs.acs.org/doi/abs/10.1021/es702178s). In terms of toxicity, here's an [older](http://scholar.google.com/scholar?cluster=8885172735616357524&hl=en&as_sdt=0,39) and [newer](http://pubs.acs.org/doi/abs/10.1021/es103607c) study on the topic - it can take some study to get a reasonable answer, and it's rarely clear-cut. There are almost always tradeoffs. | In this case [Life Cycle Assessment](https://www.thinkstep.com/life-cycle-assessment) is indeed the way to go. A Life Cycle Assessment is the systematic analysis of the environmental impact of a product during its entire life cycle. So to determine whether hybrid cars are more sustainable than traditional cars, one would have to collect and assess all inputs and outputs during the entire life cycle of a hybrid car. |
10 | I purchased a Prius many years ago, but many of my friends have put forth claims that the batteries are very toxic, and thus that I may in fact be creating more harm than good by using a hybrid, and that hybrids are a less sustainable solution than non-hybrids.
How can one objectively measure the total footprint of a hybrid vehicle to determine whether the use of a hybrid over a traditional non-hybrid gasoline powered vehicle is lesser or greater? Has much research gone into the entire picture? For example, when the car reaches the end of its useful life, what happens to the batteries, can they be recycled or are they thrown away? What about the manufacture of the batteries, what negative effects come into play during manufacture? Are there other externalities that I've missed? | 2013/01/29 | [
"https://sustainability.stackexchange.com/questions/10",
"https://sustainability.stackexchange.com",
"https://sustainability.stackexchange.com/users/20/"
] | So the short answer is to look at '[life-cycle assessment](http://en.wikipedia.org/wiki/Life-cycle_assessment)' studies (LCA).
The longer answer is to ask what you mean by 'better', and then look at a bunch of LCAs and figure out what impact categories you care about most.
In either case, the goal of LCA is to collect all the different inputs and outputs for a product for all stages - not just use, but manufacturing and mining and disposal, etc. These impacts are collected into a smaller number of interpretable impact categories. Some impact categories to think about: carbon footprint/global warming potential, energy consumption, ecotoxicity, human health, water consumption. All of these are pretty commonly used. There is a lot of research on LCA methods, both on how to collect the basic data and how to transform it into these impact categories. Two common methods include the EPA's [TRACI](http://www.epa.gov/nrmrl/std/traci/traci.html) or the European [ReCiPe](http://www.lcia-recipe.net/). Ideally, LCAs provide a quantitative and clear answer, but that's rarely true - they are quantitative though, which is really important.
For some of your later questions, particularly around batteries, the answers will depend on the study. The technical terms here are allocation and boundary conditions. Do we model battery recycling? who gets the benefits of it? what technologies are we looking at, and what other products do those produce? What impacts or processes do we include or exclude? There are generally a lot of assumptions, which need to be made and defended to move forward - though often they need to be revised for new technology.
To answer your initial question, there was a poorly done report from years ago that purported to show that older Priuses were 'worse' than a Hummer in terms of energy usage. The report was [debunked](http://thinkprogress.org/climate/2007/08/27/201795/prius-easily-beats-hummer-in-life-cycle-energy-use-dust-to-dust-report-has-no-basis-in-fact/?mobile=nc) all over the place, but it is true that we've gotten better at making batteries over time. New hybrids have much lower impacts than the originals, and keeping the car longer will definitely decrease the relative impact (one of the assumptions is about how long cars last). Check out a newer study [here](http://pubs.acs.org/doi/abs/10.1021/es702178s). In terms of toxicity, here's an [older](http://scholar.google.com/scholar?cluster=8885172735616357524&hl=en&as_sdt=0,39) and [newer](http://pubs.acs.org/doi/abs/10.1021/es103607c) study on the topic - it can take some study to get a reasonable answer, and it's rarely clear-cut. There are almost always tradeoffs. | I find the site [CarbonCounter.com](http://carboncounter.com), developed by MIT, useful in approaching this question. You can play around with some of the assumptions on e.g electricity supply, battery manufacture emissions and vehicle lifetime. Under most reasonable assumptions, for vehicles of comparable size, hybrids are better than pure ICE, and EVs better than hybrids (unless the electricity is mainly coal). So while yes 'it depends', after playing with the options for a bit, a rough rule of thumb for US average electricity is that hybrids are around 2/3 the lifecycle co2 of an ICE, and full electric are around 50% lifecycle co2 of an ICE. (Very roughly) |
10 | I purchased a Prius many years ago, but many of my friends have put forth claims that the batteries are very toxic, and thus that I may in fact be creating more harm than good by using a hybrid, and that hybrids are a less sustainable solution than non-hybrids.
How can one objectively measure the total footprint of a hybrid vehicle to determine whether the use of a hybrid over a traditional non-hybrid gasoline powered vehicle is lesser or greater? Has much research gone into the entire picture? For example, when the car reaches the end of its useful life, what happens to the batteries, can they be recycled or are they thrown away? What about the manufacture of the batteries, what negative effects come into play during manufacture? Are there other externalities that I've missed? | 2013/01/29 | [
"https://sustainability.stackexchange.com/questions/10",
"https://sustainability.stackexchange.com",
"https://sustainability.stackexchange.com/users/20/"
] | So the short answer is to look at '[life-cycle assessment](http://en.wikipedia.org/wiki/Life-cycle_assessment)' studies (LCA).
The longer answer is to ask what you mean by 'better', and then look at a bunch of LCAs and figure out what impact categories you care about most.
In either case, the goal of LCA is to collect all the different inputs and outputs for a product for all stages - not just use, but manufacturing and mining and disposal, etc. These impacts are collected into a smaller number of interpretable impact categories. Some impact categories to think about: carbon footprint/global warming potential, energy consumption, ecotoxicity, human health, water consumption. All of these are pretty commonly used. There is a lot of research on LCA methods, both on how to collect the basic data and how to transform it into these impact categories. Two common methods include the EPA's [TRACI](http://www.epa.gov/nrmrl/std/traci/traci.html) or the European [ReCiPe](http://www.lcia-recipe.net/). Ideally, LCAs provide a quantitative and clear answer, but that's rarely true - they are quantitative though, which is really important.
For some of your later questions, particularly around batteries, the answers will depend on the study. The technical terms here are allocation and boundary conditions. Do we model battery recycling? who gets the benefits of it? what technologies are we looking at, and what other products do those produce? What impacts or processes do we include or exclude? There are generally a lot of assumptions, which need to be made and defended to move forward - though often they need to be revised for new technology.
To answer your initial question, there was a poorly done report from years ago that purported to show that older Priuses were 'worse' than a Hummer in terms of energy usage. The report was [debunked](http://thinkprogress.org/climate/2007/08/27/201795/prius-easily-beats-hummer-in-life-cycle-energy-use-dust-to-dust-report-has-no-basis-in-fact/?mobile=nc) all over the place, but it is true that we've gotten better at making batteries over time. New hybrids have much lower impacts than the originals, and keeping the car longer will definitely decrease the relative impact (one of the assumptions is about how long cars last). Check out a newer study [here](http://pubs.acs.org/doi/abs/10.1021/es702178s). In terms of toxicity, here's an [older](http://scholar.google.com/scholar?cluster=8885172735616357524&hl=en&as_sdt=0,39) and [newer](http://pubs.acs.org/doi/abs/10.1021/es103607c) study on the topic - it can take some study to get a reasonable answer, and it's rarely clear-cut. There are almost always tradeoffs. | Here is a report from Union of Concerned Scientists on the total benefits of hybrid vehicles:
<https://www.ucsusa.org/sites/default/files/attach/2017/11/cv-report-ev-savings.pdf>
It was quoted in this general interest article:
<https://www.nytimes.com/interactive/2021/01/15/climate/electric-car-cost.html>
It discusses a full range of aspects of this question, including state-by-state electricity costs and fuel savings; and vehicle costs for a range of vehicles.
But the other question in my mind is how much the entire petroleum economy costs society as a whole, which is perhaps outside the scope of this question.
10 | I purchased a Prius many years ago, but many of my friends have put forth claims that the batteries are very toxic, and thus that I may in fact be creating more harm than good by using a hybrid, and that hybrids are a less sustainable solution than non-hybrids.
How can one objectively measure the total footprint of a hybrid vehicle to determine whether the use of a hybrid over a traditional non-hybrid gasoline powered vehicle is lesser or greater? Has much research gone into the entire picture? For example, when the car reaches the end of its useful life, what happens to the batteries, can they be recycled or are they thrown away? What about the manufacture of the batteries, what negative effects come into play during manufacture? Are there other externalities that I've missed? | 2013/01/29 | [
"https://sustainability.stackexchange.com/questions/10",
"https://sustainability.stackexchange.com",
"https://sustainability.stackexchange.com/users/20/"
] | So the short answer is to look at '[life-cycle assessment](http://en.wikipedia.org/wiki/Life-cycle_assessment)' studies (LCA).
The longer answer is to ask what you mean by 'better', and then look at a bunch of LCAs and figure out what impact categories you care about most.
In either case, the goal of LCA is to collect all the different inputs and outputs for a product for all stages - not just use, but manufacturing and mining and disposal, etc. These impacts are collected into a smaller number of interpretable impact categories. Some impact categories to think about: carbon footprint/global warming potential, energy consumption, ecotoxicity, human health, water consumption. All of these are pretty commonly used. There is a lot of research on LCA methods, both on how to collect the basic data and how to transform it into these impact categories. Two common methods include the EPA's [TRACI](http://www.epa.gov/nrmrl/std/traci/traci.html) or the European [ReCiPe](http://www.lcia-recipe.net/). Ideally, LCAs provide a quantitative and clear answer, but that's rarely true - they are quantitative though, which is really important.
For some of your later questions, particularly around batteries, the answers will depend on the study. The technical terms here are allocation and boundary conditions. Do we model battery recycling? who gets the benefits of it? what technologies are we looking at, and what other products do those produce? What impacts or processes do we include or exclude? There are generally a lot of assumptions, which need to be made and defended to move forward - though often they need to be revised for new technology.
To answer your initial question, there was a poorly done report from years ago that purported to show that older Priuses were 'worse' than a Hummer in terms of energy usage. The report was [debunked](http://thinkprogress.org/climate/2007/08/27/201795/prius-easily-beats-hummer-in-life-cycle-energy-use-dust-to-dust-report-has-no-basis-in-fact/?mobile=nc) all over the place, but it is true that we've gotten better at making batteries over time. New hybrids have much lower impacts than the originals, and keeping the car longer will definitely decrease the relative impact (one of the assumptions is about how long cars last). Check out a newer study [here](http://pubs.acs.org/doi/abs/10.1021/es702178s). In terms of toxicity, here's an [older](http://scholar.google.com/scholar?cluster=8885172735616357524&hl=en&as_sdt=0,39) and [newer](http://pubs.acs.org/doi/abs/10.1021/es103607c) study on the topic - it can take some study to get a reasonable answer, and it's rarely clear-cut. There are almost always tradeoffs. | There are several environmental concerns when driving a hybrid.
One is the rare earth metals used in permanent magnet motors such as neodymium. However, it has been already established that [electric cars end up being more sustainable than gasoline cars](https://sustainability.stackexchange.com/questions/612/are-electric-cars-as-environmentally-friendly-as-we-think-they-are), and also the electric motors in hybrid vehicles aren't bigger than those in electric cars (actually in some cases they may even be smaller), so this is not an issue.
Another is the battery. However, a non-plug-in hybrid car has typically 1-2 kWh battery, whereas a plug-in hybrid car has 10-20 kWh battery and an electric car has 50-90 kWh battery. So you can see from this that the amount of metals in a non-plug-in hybrid is insignificant, because the battery is so small.
However, a non-plug-in hybrid very often uses a different battery chemistry. Prius uses nickel metal-hydride whereas electric cars practically all use lithium-ion today. In Prius, the main concern is lanthanum (10-15 kg in battery) whereas in electric vehicles the main concern is cobalt (10 kg in battery). World cobalt reserves are 7.1 megatonnes, and world rare earth reserves are 120 megatonnes. If we assume 25% of rare earth metals obtained from a mine is lanthanum, that would mean we have 30 megatonnes of reserves. So it can be seen that cobalt is limiting electric vehicles more than lanthanum is limiting non-plug-in hybrids. By the way, the lanthanum reserves would allow 2-3 billion hybrid vehicles using nickel metal-hydride batteries. That's more than the number of cars on the roads today.
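The reserve arithmetic above can be checked in a few lines (all input figures are the answer's own estimates, not authoritative data):

```python
# Back-of-envelope check of the lanthanum reserve estimate.
rare_earth_reserves_t = 120e6   # world rare earth reserves, tonnes
lanthanum_share = 0.25          # assumed lanthanum fraction of mine output
la_per_battery_kg = 12.5        # midpoint of the 10-15 kg NiMH pack range

lanthanum_reserves_t = rare_earth_reserves_t * lanthanum_share  # 30 megatonnes
hybrid_packs = lanthanum_reserves_t * 1000 / la_per_battery_kg  # ~2.4 billion
```

With 15 kg per pack the count is 2.0 billion and with 10 kg it is 3.0 billion, which is exactly the "2-3 billion hybrid vehicles" range quoted above.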
Also it's possible to consider that battery recycling is very efficient and nearly all of the metals in batteries can be recycled. The battery can be treated as a deposit of valuable metals far richer than the deposits found in mines.
Overall, because gasoline cars use oil at a far greater rate than Prius uses oil, and because lanthanum isn't consumed by hybrid vehicles, it's probably better to reduce oil use by putting the lanthanum inside hybrid vehicle batteries and then at end of life recycle the lanthanum, than to leave the lanthanum in the ground and continue using oil at a great rate.
In the end, the main concern of non-plug-in hybrid vehicles is that they aren't good enough. We shouldn't reduce oil use and carbon dioxide emissions by 30% (which is what Prius does), but by at least 90% if not more (which is what electric vehicles in a clean electricity grid do).
Disclosure: I drive a Toyota non-plug-in hybrid, but my next car will be a fully electric car. The only reason I drive a non-plug-in hybrid was that back when I bought the car, the price of reasonable comparable electric cars was at least twice of what it is today, and plug-in-hybrids had concerns of battery wear and weren't available from Toyota back then (I only buy Toyotas because I like reliable cars). Also back then I didn't have a realistic possibility to charge a plug-in-hybrid, because the only electrical outlet available was behind a 2-hour clock switch. |
10 | I purchased a Prius many years ago, but many of my friends have put forth claims that the batteries are very toxic, and thus that I may in fact be creating more harm than good by using a hybrid, and that hybrids are a less sustainable solution than non-hybrids.
How can one objectively measure the total footprint of a hybrid vehicle to determine whether the use of a hybrid over a traditional non-hybrid gasoline powered vehicle is lesser or greater? Has much research gone into the entire picture? For example, when the car reaches the end of its useful life, what happens to the batteries, can they be recycled or are they thrown away? What about the manufacture of the batteries, what negative effects come into play during manufacture? Are there other externalities that I've missed? | 2013/01/29 | [
"https://sustainability.stackexchange.com/questions/10",
"https://sustainability.stackexchange.com",
"https://sustainability.stackexchange.com/users/20/"
] | I find the site [CarbonCounter.com](http://carboncounter.com), developed by MIT, useful in approaching this question. You can play around with some of the assumptions on e.g electricity supply, battery manufacture emissions and vehicle lifetime. Under most reasonable assumptions, for vehicles of comparable size, hybrids are better than pure ICE, and EVs better than hybrids (unless the electricity is mainly coal). So while yes 'it depends', after playing with the options for a bit, a rough rule of thumb for US average electricity is that hybrids are around 2/3 the lifecycle co2 of an ICE, and full electric are around 50% lifecycle co2 of an ICE. (Very roughly) | In this case [Life Cycle Assessment](https://www.thinkstep.com/life-cycle-assessment) is indeed the way to go. A Life Cycle Assessment is the systematic analysis of the environmental impact of a product during its entire life cycle. So to determine whether hybrid cars are more sustainable than traditional cars, one would have to collect and assess all inputs and outputs during the entire life cycle of a hybrid car. |
10 | I purchased a Prius many years ago, but many of my friends have put forth claims that the batteries are very toxic, and thus that I may in fact be creating more harm than good by using a hybrid, and that hybrids are a less sustainable solution than non-hybrids.
How can one objectively measure the total footprint of a hybrid vehicle to determine whether the use of a hybrid over a traditional non-hybrid gasoline powered vehicle is lesser or greater? Has much research gone into the entire picture? For example, when the car reaches the end of its useful life, what happens to the batteries, can they be recycled or are they thrown away? What about the manufacture of the batteries, what negative effects come into play during manufacture? Are there other externalities that I've missed? | 2013/01/29 | [
"https://sustainability.stackexchange.com/questions/10",
"https://sustainability.stackexchange.com",
"https://sustainability.stackexchange.com/users/20/"
] | There are several environmental concerns when driving a hybrid.
One is the rare earth metals used in permanent magnet motors such as neodymium. However, it has been already established that [electric cars end up being more sustainable than gasoline cars](https://sustainability.stackexchange.com/questions/612/are-electric-cars-as-environmentally-friendly-as-we-think-they-are), and also the electric motors in hybrid vehicles aren't bigger than those in electric cars (actually in some cases they may even be smaller), so this is not an issue.
Another is the battery. However, a non-plug-in hybrid car has typically 1-2 kWh battery, whereas a plug-in hybrid car has 10-20 kWh battery and an electric car has 50-90 kWh battery. So you can see from this that the amount of metals in a non-plug-in hybrid is insignificant, because the battery is so small.
However, a non-plug-in hybrid very often uses a different battery chemistry. Prius uses nickel metal-hydride whereas electric cars practically all use lithium-ion today. In Prius, the main concern is lanthanum (10-15 kg in battery) whereas in electric vehicles the main concern is cobalt (10 kg in battery). World cobalt reserves are 7.1 megatonnes, and world rare earth reserves are 120 megatonnes. If we assume 25% of rare earth metals obtained from a mine is lanthanum, that would mean we have 30 megatonnes of reserves. So it can be seen that cobalt is limiting electric vehicles more than lanthanum is limiting non-plug-in hybrids. By the way, the lanthanum reserves would allow 2-3 billion hybrid vehicles using nickel metal-hydride batteries. That's more than the number of cars on the roads today.
Also it's possible to consider that battery recycling is very efficient and nearly all of the metals in batteries can be recycled. The battery can be treated as a deposit of valuable metals far richer than the deposits found in mines.
Overall, because gasoline cars use oil at a far greater rate than Prius uses oil, and because lanthanum isn't consumed by hybrid vehicles, it's probably better to reduce oil use by putting the lanthanum inside hybrid vehicle batteries and then at end of life recycle the lanthanum, than to leave the lanthanum in the ground and continue using oil at a great rate.
In the end, the main concern of non-plug-in hybrid vehicles is that they aren't good enough. We shouldn't reduce oil use and carbon dioxide emissions by 30% (which is what Prius does), but by at least 90% if not more (which is what electric vehicles in a clean electricity grid do).
Disclosure: I drive a Toyota non-plug-in hybrid, but my next car will be a fully electric car. The only reason I drive a non-plug-in hybrid was that back when I bought the car, the price of reasonable comparable electric cars was at least twice of what it is today, and plug-in-hybrids had concerns of battery wear and weren't available from Toyota back then (I only buy Toyotas because I like reliable cars). Also back then I didn't have a realistic possibility to charge a plug-in-hybrid, because the only electrical outlet available was behind a 2-hour clock switch. | In this case [Life Cycle Assessment](https://www.thinkstep.com/life-cycle-assessment) is indeed the way to go. A Life Cycle Assessment is the systematic analysis of the environmental impact of a product during its entire life cycle. So to determine whether hybrid cars are more sustainable than traditional cars, one would have to collect and assess all inputs and outputs during the entire life cycle of a hybrid car. |
10 | I purchased a Prius many years ago, but many of my friends have put forth claims that the batteries are very toxic, and thus that I may in fact be creating more harm than good by using a hybrid, and that hybrids are a less sustainable solution than non-hybrids.
How can one objectively measure the total footprint of a hybrid vehicle to determine whether the use of a hybrid over a traditional non-hybrid gasoline powered vehicle is lesser or greater? Has much research gone into the entire picture? For example, when the car reaches the end of its useful life, what happens to the batteries, can they be recycled or are they thrown away? What about the manufacture of the batteries, what negative effects come into play during manufacture? Are there other externalities that I've missed? | 2013/01/29 | [
"https://sustainability.stackexchange.com/questions/10",
"https://sustainability.stackexchange.com",
"https://sustainability.stackexchange.com/users/20/"
] | I find the site [CarbonCounter.com](http://carboncounter.com), developed by MIT, useful in approaching this question. You can play around with some of the assumptions on e.g. electricity supply, battery manufacture emissions and vehicle lifetime. Under most reasonable assumptions, for vehicles of comparable size, hybrids are better than pure ICE, and EVs better than hybrids (unless the electricity is mainly coal). So while yes 'it depends', after playing with the options for a bit, a rough rule of thumb for US average electricity is that hybrids are around 2/3 the lifecycle CO2 of an ICE, and full electric are around 50% lifecycle CO2 of an ICE. (Very roughly) | Here is a report from Union of Concerned Scientists on the total benefits of hybrid vehicles:
<https://www.ucsusa.org/sites/default/files/attach/2017/11/cv-report-ev-savings.pdf>
It was quoted in this general interest article:
<https://www.nytimes.com/interactive/2021/01/15/climate/electric-car-cost.html>
It discusses a full range of aspects of this question, including state-by-state electricity costs and fuel savings; and vehicle costs for a range of vehicles.
But the other question in my mind is how much the entire petroleum economy costs - to society as a whole. Which is perhaps outside the scope of this question. |
10 | I purchased a Prius many years ago, but many of my friends have put forth claims that the batteries are very toxic and thus I may in fact be creating more harm than good in using a hybrid and that hybrids are a less sustainable solution than non-hybrids?
How can one objectively measure the total footprint of a hybrid vehicle to determine whether the use of a hybrid over a traditional non-hybrid gasoline powered vehicle is lesser or greater? Has much research gone into the entire picture? For example, when the car reaches the end of its useful life, what happens to the batteries, can they be recycled or are they thrown away? What about the manufacture of the batteries, what negative effects come into play during manufacture? Are there other externalities that I've missed? | 2013/01/29 | [
"https://sustainability.stackexchange.com/questions/10",
"https://sustainability.stackexchange.com",
"https://sustainability.stackexchange.com/users/20/"
] | I find the site [CarbonCounter.com](http://carboncounter.com), developed by MIT, useful in approaching this question. You can play around with some of the assumptions on e.g. electricity supply, battery manufacture emissions and vehicle lifetime. Under most reasonable assumptions, for vehicles of comparable size, hybrids are better than pure ICE, and EVs better than hybrids (unless the electricity is mainly coal). So while yes 'it depends', after playing with the options for a bit, a rough rule of thumb for US average electricity is that hybrids are around 2/3 the lifecycle CO2 of an ICE, and full electric are around 50% lifecycle CO2 of an ICE. (Very roughly) | There are several environmental concerns when driving a hybrid.
One is the rare earth metals used in permanent magnet motors such as neodymium. However, it has been already established that [electric cars end up being more sustainable than gasoline cars](https://sustainability.stackexchange.com/questions/612/are-electric-cars-as-environmentally-friendly-as-we-think-they-are), and also the electric motors in hybrid vehicles aren't bigger than those in electric cars (actually in some cases they may even be smaller), so this is not an issue.
Another is the battery. However, a non-plug-in hybrid car has typically 1-2 kWh battery, whereas a plug-in hybrid car has 10-20 kWh battery and an electric car has 50-90 kWh battery. So you can see from this that the amount of metals in a non-plug-in hybrid is insignificant, because the battery is so small.
However, a non-plug-in hybrid very often uses a different battery chemistry. Prius uses nickel metal-hydride whereas electric cars practically all use lithium-ion today. In Prius, the main concern is lanthanum (10-15 kg in battery) whereas in electric vehicles the main concern is cobalt (10 kg in battery). World cobalt reserves are 7.1 megatonnes, and world rare earth reserves are 120 megatonnes. If we assume 25% of rare earth metals obtained from a mine is lanthanum, that would mean we have 30 megatonnes of reserves. So it can be seen that cobalt is limiting electric vehicles more than lanthanum is limiting non-plug-in hybrids. By the way, the lanthanum reserves would allow 2-3 billion hybrid vehicles using nickel metal-hydride batteries. That's more than the number of cars on the roads today.
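The reserve arithmetic above can be sanity-checked quickly. This is only a sketch using the figures quoted in the answer itself; the 25% lanthanum share of mined rare earths is the answer's own assumption, not an established figure:

```python
# Figures quoted in the answer above.
rare_earth_reserves_t = 120e6   # world rare earth reserves, tonnes
lanthanum_share = 0.25          # assumed fraction of mined rare earths that is lanthanum

# Implied lanthanum reserves: 30 megatonnes.
lanthanum_reserves_t = rare_earth_reserves_t * lanthanum_share

# Lanthanum per NiMH hybrid battery, kg (range quoted for the Prius).
kg_low, kg_high = 10, 15

# Number of hybrid vehicles the reserves could supply (1 tonne = 1000 kg).
vehicles_high = lanthanum_reserves_t * 1000 / kg_low    # optimistic: 10 kg per car
vehicles_low = lanthanum_reserves_t * 1000 / kg_high    # pessimistic: 15 kg per car

print(f"{vehicles_low/1e9:.1f} to {vehicles_high/1e9:.1f} billion hybrid vehicles")
# → 2.0 to 3.0 billion hybrid vehicles
```

This reproduces the "2-3 billion hybrid vehicles" claim, which is indeed more than the roughly 1.5 billion cars on the roads today.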
Also it's possible to consider that battery recycling is very efficient and nearly all of the metals in batteries can be recycled. The battery can be treated as a deposit of valuable metals far richer than the deposits found in mines.
Overall, because gasoline cars use oil at a far greater rate than Prius uses oil, and because lanthanum isn't consumed by hybrid vehicles, it's probably better to reduce oil use by putting the lanthanum inside hybrid vehicle batteries and then at end of life recycle the lanthanum, than to leave the lanthanum in the ground and continue using oil at a great rate.
In the end, the main concern of non-plug-in hybrid vehicles is that they aren't good enough. We shouldn't reduce oil use and carbon dioxide emissions by 30% (which is what Prius does), but by at least 90% if not more (which is what electric vehicles in a clean electricity grid do).
Disclosure: I drive a Toyota non-plug-in hybrid, but my next car will be a fully electric car. The only reason I drive a non-plug-in hybrid was that back when I bought the car, the price of reasonable comparable electric cars was at least twice of what it is today, and plug-in-hybrids had concerns of battery wear and weren't available from Toyota back then (I only buy Toyotas because I like reliable cars). Also back then I didn't have a realistic possibility to charge a plug-in-hybrid, because the only electrical outlet available was behind a 2-hour clock switch. |
10 | I purchased a Prius many years ago, but many of my friends have put forth claims that the batteries are very toxic and thus I may in fact be creating more harm than good in using a hybrid and that hybrids are a less sustainable solution than non-hybrids?
How can one objectively measure the total footprint of a hybrid vehicle to determine whether the use of a hybrid over a traditional non-hybrid gasoline powered vehicle is lesser or greater? Has much research gone into the entire picture? For example, when the car reaches the end of its useful life, what happens to the batteries, can they be recycled or are they thrown away? What about the manufacture of the batteries, what negative effects come into play during manufacture? Are there other externalities that I've missed? | 2013/01/29 | [
"https://sustainability.stackexchange.com/questions/10",
"https://sustainability.stackexchange.com",
"https://sustainability.stackexchange.com/users/20/"
] | There are several environmental concerns when driving a hybrid.
One is the rare earth metals used in permanent magnet motors such as neodymium. However, it has been already established that [electric cars end up being more sustainable than gasoline cars](https://sustainability.stackexchange.com/questions/612/are-electric-cars-as-environmentally-friendly-as-we-think-they-are), and also the electric motors in hybrid vehicles aren't bigger than those in electric cars (actually in some cases they may even be smaller), so this is not an issue.
Another is the battery. However, a non-plug-in hybrid car has typically 1-2 kWh battery, whereas a plug-in hybrid car has 10-20 kWh battery and an electric car has 50-90 kWh battery. So you can see from this that the amount of metals in a non-plug-in hybrid is insignificant, because the battery is so small.
However, a non-plug-in hybrid very often uses a different battery chemistry. Prius uses nickel metal-hydride whereas electric cars practically all use lithium-ion today. In Prius, the main concern is lanthanum (10-15 kg in battery) whereas in electric vehicles the main concern is cobalt (10 kg in battery). World cobalt reserves are 7.1 megatonnes, and world rare earth reserves are 120 megatonnes. If we assume 25% of rare earth metals obtained from a mine is lanthanum, that would mean we have 30 megatonnes of reserves. So it can be seen that cobalt is limiting electric vehicles more than lanthanum is limiting non-plug-in hybrids. By the way, the lanthanum reserves would allow 2-3 billion hybrid vehicles using nickel metal-hydride batteries. That's more than the number of cars on the roads today.
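The reserve arithmetic above can be sanity-checked quickly. This is only a sketch using the figures quoted in the answer itself; the 25% lanthanum share of mined rare earths is the answer's own assumption, not an established figure:

```python
# Figures quoted in the answer above.
rare_earth_reserves_t = 120e6   # world rare earth reserves, tonnes
lanthanum_share = 0.25          # assumed fraction of mined rare earths that is lanthanum

# Implied lanthanum reserves: 30 megatonnes.
lanthanum_reserves_t = rare_earth_reserves_t * lanthanum_share

# Lanthanum per NiMH hybrid battery, kg (range quoted for the Prius).
kg_low, kg_high = 10, 15

# Number of hybrid vehicles the reserves could supply (1 tonne = 1000 kg).
vehicles_high = lanthanum_reserves_t * 1000 / kg_low    # optimistic: 10 kg per car
vehicles_low = lanthanum_reserves_t * 1000 / kg_high    # pessimistic: 15 kg per car

print(f"{vehicles_low/1e9:.1f} to {vehicles_high/1e9:.1f} billion hybrid vehicles")
# → 2.0 to 3.0 billion hybrid vehicles
```

This reproduces the "2-3 billion hybrid vehicles" claim, which is indeed more than the roughly 1.5 billion cars on the roads today.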
Also it's possible to consider that battery recycling is very efficient and nearly all of the metals in batteries can be recycled. The battery can be treated as a deposit of valuable metals far richer than the deposits found in mines.
Overall, because gasoline cars use oil at a far greater rate than Prius uses oil, and because lanthanum isn't consumed by hybrid vehicles, it's probably better to reduce oil use by putting the lanthanum inside hybrid vehicle batteries and then at end of life recycle the lanthanum, than to leave the lanthanum in the ground and continue using oil at a great rate.
In the end, the main concern of non-plug-in hybrid vehicles is that they aren't good enough. We shouldn't reduce oil use and carbon dioxide emissions by 30% (which is what Prius does), but by at least 90% if not more (which is what electric vehicles in a clean electricity grid do).
Disclosure: I drive a Toyota non-plug-in hybrid, but my next car will be a fully electric car. The only reason I drive a non-plug-in hybrid was that back when I bought the car, the price of reasonable comparable electric cars was at least twice of what it is today, and plug-in-hybrids had concerns of battery wear and weren't available from Toyota back then (I only buy Toyotas because I like reliable cars). Also back then I didn't have a realistic possibility to charge a plug-in-hybrid, because the only electrical outlet available was behind a 2-hour clock switch. | Here is a report from Union of Concerned Scientists on the total benefits of hybrid vehicles:
<https://www.ucsusa.org/sites/default/files/attach/2017/11/cv-report-ev-savings.pdf>
It was quoted in this general interest article:
<https://www.nytimes.com/interactive/2021/01/15/climate/electric-car-cost.html>
It discusses a full range of aspects of this question, including state-by-state electricity costs and fuel savings; and vehicle costs for a range of vehicles.
But the other question in my mind is how much the entire petroleum economy costs - to society as a whole. Which is perhaps outside the scope of this question. |