Dataset fields:

* qid: int64 (values 1 to 74.7M)
* question: string (12 to 33.8k characters)
* date: string (10 characters)
* metadata: list
* response_j: string (0 to 115k characters)
* response_k: string (2 to 98.3k characters)
126,513
My team has a need to publish documentation internally. At the moment, it's spread all over the place and this means we often have to search everywhere to find something. We'd like to publish everything in one place. The main thing that stops us is access control - the wikis in place don't belong to us and we can't do it. What is the best tool for publishing docs, ideally fitting these requirements:

* web front end - readers access docs using browser
* single place to put docs
* access control by individual doc or by sets of docs (folders, branch of 'site', ...)
* if you don't have access to a doc, you don't see the link to that page/doc/folder.
* either built-in editor or something my users are familiar with (e.g. Word)
* built-in version control would be nice

Also, can you think of other criteria I should've specified?
2010/03/26
[ "https://serverfault.com/questions/126513", "https://serverfault.com", "https://serverfault.com/users/14439/" ]
Probably best to say what platform you're using, but if it's Windows Server then Windows SharePoint Services (WSS) will do nearly all of the above. It's also *free* as a download from Microsoft. There is a full paid version (Microsoft Office SharePoint Server), but we're using WSS here and it fulfils all our needs. We have a scenario very similar to the one you detail above, including versioning (with full check-out/check-in functionality if you're using MS Office), fairly granular control over permissions and a decent wiki. I think the only area where it wouldn't work for you is the 'if you don't have access to a doc, you don't see the link to that page/doc/folder' requirement. You can download it from Microsoft [here.](http://technet.microsoft.com/en-us/windowsserver/sharepoint/bb400747.aspx)
We use [Jive SBS](http://www.jivesoftware.com/products) (previously named Clearspace) for all of our documentation needs. I believe it matches every one of your bullet points, and its permissioning is very strong: if you don't have access to something, it won't show up in searches, feeds, or anywhere else.
Have a look at [Alfresco Share](https://www.alfresco.com/ecm-software/enterprise-collaboration).

* It has access control by individual doc or by sets of docs, for users or groups.
* If you don't have access to a doc, you don't see the link to that page/doc/folder.
* It has a built-in editor for HTML.
* It has version control.
* You can see an online preview of documents, with no need to download them. This saves a lot of time over opening a PDF reader or an office suite.
* Other collaboration features can be easily enabled/disabled: wikis, calendars, issue trackers, data lists, blogs, forums.
* Your dashboard shows all recent activity, for instance who recently uploaded what.

[![Alfresco Share](https://i.stack.imgur.com/UK1Ki.png)](https://i.stack.imgur.com/UK1Ki.png) (source: [alfresco.com](http://docs.alfresco.com/4.0/topic/com.alfresco.enterprise.doc/images/DocLibrary-Preview.png))

It is free and open source ([download](https://hub.alfresco.com/t5/alfresco-content-services-hub/download-and-install-alfresco/ba-p/290690)). If you want support, you can [subscribe as an Enterprise customer](https://www.alfresco.com/services/support-services).
I use <http://www.dokuwiki.org/> because it is easy to set up, has a lot of features (my favourite is "Downloadable Code Blocks"), allows the use of ACLs, and stores pages as .txt files which don't depend on any Office version and are easy to back up. It has everything you asked for except "if you don't have access to a doc, you don't see the link to that page/doc/folder". If you are using single .doc(x) files at the moment, I would say it is totally worth the extra effort to move to DokuWiki.
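For illustration, DokuWiki's page and namespace ACLs live in a plain text file (conf/acl.auth.php), usually managed through the ACL admin screen. This is a minimal sketch only; the namespaces and group names below are hypothetical placeholders, so check the DokuWiki ACL documentation before relying on it.

```
# conf/acl.auth.php  (hypothetical example)
# scope             user or @group   permission (0=none, 1=read, 2=edit, 4=create, 8=upload, 16=delete)
*                   @ALL             0
*                   @staff           1
docs:public:*       @ALL             1
docs:internal:*     @engineering     8
docs:internal:*     @ALL             0
```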
It used to have a free version (under the name OpenGoo); I am not sure if it still does: <http://www.fengoffice.com/web/index.php> I think it provides everything you want.
19,574,655
I am new to the world of Tibco... I have been asked to create a VB.Net application to do a couple of things:

1. Update the value of a column in a database (which then generates a message in TIBCO EMS).
2. My application then needs to read this message from TIBCO, determine whether the message contains a particular word, and display the result as Pass or Fail.

I have already written the first piece of the task; however, I have no clue how to proceed on the second one. I am hoping to get some kind of help/guidance on how to proceed! Any suggestions? Thanks, NewTibcoUser
2013/10/24
[ "https://Stackoverflow.com/questions/19574655", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2917261/" ]
This can be done easily depending on which Tibco tools you own. If you have BW and ADB (Active Database Adapter) then you can use that.

Option 1:

---

If you don't have ADB you can mimic it by doing something like the following (ADB isn't magical; it's pretty straightforward):

1) Create a mirror of the table that is being monitored for changes. (You could just put in the column you want to monitor plus the key.) Columns: the key, the column you want to monitor, the transaction type (adb\_opCode), the time it happened (adb\_timestamp), and the delivery status (ADB\_L\_DeliveryStatus).
2) Create a trigger on the monitored table that inserts a record into this mirror table.
3) Write a .Net process that checks the mirror table every 5 or 10 seconds, or whatever (make it configurable): `select * from tableX where DeliveryStatus = 'N' order by transactionTime`.
4) Place the message on the EMS queue, or make a service call to your .Net app.

---

Option 2:

1) Create a trigger on the table and write the event to a SQL Server Service Broker queue.
2) Write a .Net app that reads from that Service Broker queue and converts it into an EMS message.

Some design considerations:

* Try not to continually query (i.e. poll) your main table for changes (to prevent blocking).
* If your app is not running while DB changes are happening, ensure that you have a message expiry time, so that when your app starts it doesn't have to process thousands of messages off the queue (depending on whether you need the messages or not).
* If you do need the messages, you may want to set the queue to be persistent to disk so you don't lose messages. Also, client acknowledgement in your .Net app would be a good idea, rather than just auto-ack.
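A rough sketch of the polling step from option 1, using ADO.NET against SQL Server. The table and column names (ChangeLog, DeliveryStatus, Payload, TransactionTime), the connection string and the interval are hypothetical placeholders, and the EMS publish is left as a stub since the exact client API depends on your TIBCO installation.

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient; // classic .NET Framework provider; adjust if you use Microsoft.Data.SqlClient
using System.Threading;

class ChangePoller
{
    // Hypothetical connection string and polling interval; adjust for your environment.
    const string ConnStr = "Server=.;Database=MyDb;Integrated Security=true";
    const int PollSeconds = 10;

    static void Main()
    {
        while (true)
        {
            using (var conn = new SqlConnection(ConnStr))
            {
                conn.Open();

                // Pick up rows the trigger inserted that we have not delivered yet.
                var pending = new List<Tuple<int, string>>();
                var select = new SqlCommand(
                    "SELECT Id, Payload FROM ChangeLog WHERE DeliveryStatus = 'N' ORDER BY TransactionTime",
                    conn);
                using (SqlDataReader reader = select.ExecuteReader())
                {
                    while (reader.Read())
                        pending.Add(Tuple.Create(reader.GetInt32(0), reader.GetString(1)));
                }

                foreach (var row in pending)
                {
                    PublishToEms(row.Item2); // stub: put the message on the EMS queue here

                    // Mark the row so it is not delivered again.
                    var update = new SqlCommand(
                        "UPDATE ChangeLog SET DeliveryStatus = 'Y' WHERE Id = @id", conn);
                    update.Parameters.AddWithValue("@id", row.Item1);
                    update.ExecuteNonQuery();
                }
            }
            Thread.Sleep(TimeSpan.FromSeconds(PollSeconds));
        }
    }

    static void PublishToEms(string payload)
    {
        // Placeholder: in a real implementation, send via the TIBCO EMS client or a service call.
        Console.WriteLine("Would publish: " + payload);
    }
}
```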
As you mention, the first point is already done (perhaps with ADB or a custom program reacting to the DB insert). So your problem is strictly the "react to the content of an EMS message from VB.Net" part. I see two possibilities:

1. If you have EMS, ADB and BW, make a custom adapter subscriber (a BW config) that changes the DB in some way in reaction to messages on the bus. Your VB application can then simply query the DB to get the response status.
2. If you don't have that many products from the TIBCO stack, you should write a simple C# EMS client program (see the examples provided with the EMS docs). This client can then signal your VB application (some kind of .NET internal signalling, maybe; I am not an expert myself) or write the response status to the DB.
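For the second possibility, a minimal sketch of an EMS consumer that checks a message body for a keyword. It assumes the TIBCO.EMS .NET client assembly and uses the JMS-style names found in the EMS .NET samples (ConnectionFactory, Session, MessageConsumer); the server URL, credentials, queue name and keyword are placeholders, and the exact signatures should be checked against the EMS documentation for your version.

```csharp
using System;
using TIBCO.EMS; // assumed: the TIBCO EMS .NET client assembly shipped with EMS

class EmsKeywordChecker
{
    static void Main()
    {
        // Placeholder connection details.
        var factory = new ConnectionFactory("tcp://emshost:7222");
        Connection connection = factory.CreateConnection("user", "password");

        // Older samples use Session.AUTO_ACKNOWLEDGE; newer client versions expose SessionMode.AutoAcknowledge.
        Session session = connection.CreateSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.CreateConsumer(session.CreateQueue("sample.queue"));
        connection.Start();

        // Block until a message arrives, then check its text for the expected word.
        Message msg = consumer.Receive();
        var text = msg as TextMessage;
        bool pass = text != null && text.Text.Contains("EXPECTED_WORD");
        Console.WriteLine(pass ? "Pass" : "Fail");

        connection.Close();
    }
}
```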
518,427
I used to work in a Ubuntu environment. Machines would be pointed at one of three repos: unstable, testing, or stable. To quote the [Ubuntu FAQ](https://wiki.ubuntu.com/Debian/ForUbuntuDevelopers#What.27s_stable.2C_testing.2C_unstable.3F):

> The “stable” distribution contains the latest officially released distribution of Debian. The “testing” distribution contains packages that haven't been accepted into a “stable” release yet, but they are in the queue for that. The main advantage of using this distribution is that it has more recent versions of software. The “unstable” (codenamed sid) distribution is where active development of Debian occurs. Generally, this distribution is run by developers and those who like to live on the edge.

Even more details are here in the [Debian Release page](http://www.debian.org/releases/). Now I work in a CentOS environment. Does CentOS use similar names or have an entirely different methodology? The reason I'm asking is that I want to set up a repo for home-grown packages and I want to have the benefits of unstable/testing/stable but I want to do it in a CentOS-friendly way.
2013/06/25
[ "https://serverfault.com/questions/518427", "https://serverfault.com", "https://serverfault.com/users/6472/" ]
Everything in the default RHEL and CentOS repos is stable. There is the EPEL repo, which is sort of like Universe in Ubuntu and is maintained by the Fedora project. If you want something like unstable/testing you should use Fedora.
There is the RHN [FasTrack](http://www.redhat.com/rhn/rhndetails/fastrack/) channel. I would consider this to be very close in definition to Ubuntu's Testing. It provides packages which are considered production ready, but are slated for release during the next minor update to the OS. As said in the other answers, this is an enterprise product and does not have a public development (unstable) channel.
CentOS does not have an unstable branch, because it aims to be binary-compatible with upstream. CentOS's upstream is Red Hat Enterprise Linux. RHEL is a server distribution intended for a long (2-year) release cycle and a very long (now 13-year) support lifecycle, so it's all stable. Red Hat's development is all internal; there is no public "bleeding edge" repo like Debian's "testing" or "unstable", although Red Hat will usually do a public beta of RHEL shortly before release.

Red Hat's upstream is the Fedora project, which is a desktop distribution with a shorter (6-month) release cycle and a ~13-month support lifecycle. A significant number of the Fedora developers are Red Hat employees and Red Hat provides the project with infrastructure, but decisions about the distro ultimately come down to the steering committee's ballot, and Red Hat has a vote just like other contributors, no greater. The point I'm trying to make is that Fedora is its own distro; it's not simply "unstable RHEL", although neither is the Fedora project ignorant of the fact that its software will eventually be consumed by the enterprise distributions. Anyway, Fedora has a "Rawhide" branch which can be seen as somewhat equivalent to Debian's "unstable". Rawhide is supposed to be a stable rolling-release testbed, but it breaks often and is fixed often, as any testbed should be.

Fedora packages very recent upstream versions of software, and is not afraid to change a version number mid-release. It is almost always on top of the most recent kernel too. For example, the current F18 released with kernel 3.6, the currently available Fedora kernel is 3.9.6, and the latest stable available from kernel.org is 3.9.8. Personally I think Fedora is the most stable way to get the latest versions of a lot of software. Arch probably packages more recent versions, but I feel Arch requires *a lot* more maintenance to use day-to-day than Fedora does.

The EPEL repository is exactly what the acronym stands for: "Extra Packages for Enterprise Linux". These are additional packages built for EL (RHEL, CentOS, Scientific, etc.) and maintained by the Fedora Project. It contains packages which Red Hat doesn't package for RHEL but which the community feels are useful to have available anyway. It's not in any way an unstable or development branch; it's just community additions to the stable codebase. Packages in EPEL are usually intentionally structured *not* to interfere with the original underlying OS.
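For the home-grown-packages part of the question, a common CentOS-friendly pattern is to publish one yum repository per tier and enable only the stable one by default. This is a sketch only; the repo names, URLs and GPG key below are hypothetical placeholders, and each tree would be generated with createrepo.

```ini
# /etc/yum.repos.d/homegrown.repo  (hypothetical example)

[homegrown-stable]
name=Home-grown packages - stable
baseurl=http://repo.example.com/homegrown/stable/el$releasever/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.example.com/RPM-GPG-KEY-homegrown

[homegrown-testing]
name=Home-grown packages - testing
baseurl=http://repo.example.com/homegrown/testing/el$releasever/$basearch/
enabled=0
gpgcheck=1
gpgkey=http://repo.example.com/RPM-GPG-KEY-homegrown
```

A test machine can opt in with `yum --enablerepo=homegrown-testing update`, and a package is "promoted" by copying its RPM from the testing tree to the stable tree and re-running createrepo.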
18,594
My HYLO eyedrops are dispensed by Pump Action; it doesn't need squeezing or gravity. [![Bottle of eyedrops](https://i.stack.imgur.com/iaU15m.jpg "Click photo for full size.")](https://i.stack.imgur.com/iaU15.jpg) I usually must instill eyedrops myself; nobody can help me. But I miss too often.

### Method 1: Vertical drop in eyes, while staring at the ceiling and lying on my back.

1.1. I wash my hands.
1.2. I lie flat on my back (on the couch or ground). I then stare at the dispenser's opening.
1.3. [My right eye (OS) is dominant](https://www.diyphotography.net/a-neat-trick-to-determine-your-dominant-eye/). So I instill left eye first (OD), while keeping open OS: [![Instruction diagram](https://i.stack.imgur.com/U3Z8f.jpg)](https://i.stack.imgur.com/U3Z8f.jpg)
1.4. After closing both eyes for 5 mins, I keep OS open when instilling OD.
1.5. Difficulty: The drop lands on an eyelash, or the Upper or Lower Eyelid.

### Method 2: Squirt in eyes, while standing and in front of mirror.

2.1. I stand in front of a mirror to witness my aim.
2.2. Same as 1.3. I aim the dispenser at the caruncle: [![Image of actual application of drops](https://i.stack.imgur.com/wTu8mm.png "Click photo for full size.")](https://i.stack.imgur.com/wTu8m.png)
2.3. Difficulty: Because I'm standing, most of the drop falls out the eye.
2018/05/09
[ "https://lifehacks.stackexchange.com/questions/18594", "https://lifehacks.stackexchange.com", "https://lifehacks.stackexchange.com/users/-1/" ]
I usually apply eyedrops while standing, and looking up. No need to lie down. For the left eye:

1. Tilt your head back as far as you can.
2. Use your thumb to hold down the lower eyelid, and the forefinger to hold the upper eyelid. This keeps you from blinking and provides a visual reference.
3. Use your right hand to hold the eyedropper.
4. Position the eyedropper: when looking straight up, you should be looking along the nozzle of the eyedropper. Your left hand gives a visual reference: the eyedropper should be just above your left thumb. Note the distance between your thumb and the nozzle. Try a few times until you get the distance right, then remember this distance for next time.
I don't use eyedrops, but it doesn't make sense to aim at the caruncle - this is the place where tears discharge, and if you put the drop there it will naturally fall out immediately. (Keep in mind that the tear gland is located at the far upper side of the eye, and you want to mimic that as much as possible when you apply eyedrops.) So I'd suggest you stand in front of the mirror, draw your lower eyelid away from your eye and aim, in the pocket formed, at the side furthest from the caruncle (so that the drops spread well before discharging at the caruncle). Tilt your head back right after this if you think it's necessary.
As a contact lens wearer, I modified the method for inserting a contact lens for this purpose. For me it works pretty great.

1. Pull your eyelids wide open with your forefinger and middle finger, with your palm in the plane of your face. (In my experience this is much more effective and convenient than using your thumb and forefinger.)
2. Face the sky with your head (no need to lie down), and stare straight forward, so your iris is around the center of your eye.
3. Position the eyedropper where its tip is (blurredly) seen at the center of your field of vision. This means it's right above your iris. Bring it as close as you feel comfortable with. With some experience this can be pretty close - centimeter close.
4. Apply the drop.
5. ???
6. Profit.
18,812
All of the sites I looked at said "heavy" or "a lot" or "42.3 pounds." Mjolnir was forged from a collapsed star (a white dwarf or a neutron star). This means it is insanely dense. Worthiness aside, given its size (as seen in the movie) how much should it weigh? Perhaps I should clarify. Given the average density of a collapsed star, if you took enough material to construct the hammer, how much would that item weigh on Earth?
2012/06/21
[ "https://scifi.stackexchange.com/questions/18812", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/1148/" ]
The magic of the hammer makes its weight arbitrary. Consider that weight is the force on an object due to gravity; that means we have to take into account the mass of the hammer and gravity (a constant). However, the hammer doesn't adhere to physics as we know it. At one moment it can be wielded by Thor, set on a table without breaking the structure, and fly through the air. At the next moment the Hulk himself, who we know to be exceedingly strong, can't lift it. If we attribute these instances to its weight changing, then it can obviously be any arbitrary weight necessary. (Technically we don't know if there is an upper bound, but as it's **magic** it is safe to say there is, insofar as it is convenient to the plot.) If we attribute them to some force other than a change in weight, the weight still becomes inconsequential, as the hammer still behaves the same way as if it had no weight.

To reflect on your edit, from the [transcript](http://www.imsdb.com/scripts/Thor.html):

> Forged in the heart of a dying star, from the sacred metal of Uru...

I think this is saying that the **location** where the hammer was forged is the "heart of a dying star", not that it was forged **from** the dying star. It is a common idea in a lot of literature that an object (sword, spear, shield, hammer) is stronger when forged in a hotter furnace (not being a blacksmith I can't attest to this, but it makes sense). I imagine that this is just a matter of Odin indicating that the heat required to forge it came from a dying star.
Mjolnir, Thor's hammer, was crafted from uru, an element native only to Thor's realm, Asgard. It being a heavy metal, logistically speaking I'd say it weighs at least a good 50 pounds or so, but its weight is not the key. Mjolnir, from the day it was begotten, had a spell cast on it by the All-Father Odin (literally the spell from the movie *Thor*), stating that only the worthy may wield the hammer, and that the holder will have the power of Thor. So, staying precise, in truth no one besides Thor himself can lift Mjolnir, no matter its weight.
If Mjolnir was forged from the material of a dying star or neutron star, then below is an approximate weight. 5 millilitres of neutron star material has a mass of approximately 5,500,000,000 tons. The dimensions of Thor's hammer have been estimated at 4x4x8 in, a volume of 128 cubic inches, which is roughly 2,100 millilitres. At that density, Mjolnir would have an estimated mass of roughly 2,300,000,000,000 tons.
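As a worked check of that estimate, using the volume and density figures assumed above:

```latex
\rho \approx \frac{5.5\times10^{9}\ \text{tons}}{5\ \text{ml}} = 1.1\times10^{9}\ \text{tons/ml},
\qquad
V = 4\times4\times8\ \text{in}^3 = 128\ \text{in}^3 \approx 2.1\times10^{3}\ \text{ml},
\qquad
m = \rho V \approx 2.3\times10^{12}\ \text{tons}.
```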
The details of the dying star are irrelevant. In order to calculate the mass (or weight) of the hammer, you would need to at least know the chemical properties of its metal (Uru). The properties (including its density) are unknown because most samples are "heavily shielded by magic". The density of a metal is determined by its atomic structure. The dying star is relevant to Mjolnir because of the massive amount of heat required to deform it. Being forged in a dying star would not increase the density of Uru (or the weight of Mjolnir). It might ensure that there are no impurities or bubbles (to ensure pure solid Uru), but the density of Uru would remain the same, be destroyed, or become another element entirely. Either way, the best we could do to imagine the heaviest possible hammer is to use the density of a hammer made of our own heaviest metal (without impurities). We cannot know anything about the weight of Mjolnir without knowing more about Uru. All we know is that it is a highly unmalleable heavy metal. It might be as dense as lead or iron for all we know. Like others have said, the real truth of Mjolnir lies in its magic.
### Mjolnir, as displayed in the Avengers, would probably weigh 50-60 pounds.

We are, for the moment, setting aside any magical enchantments that make it unliftable, immovable, or able to alter its effective weight via the Odin-given worthiness enchantment. Its head is twice the size of an ordinary sledgehammer's (whose head is slightly smaller than the average brick), which weighs about 18-20 pounds. Since it is supposedly made of a metal more durable than, and at least as heavy as, iron or steel, we can approximate its unaltered, non-magical specifications.

![enter image description here](https://i.stack.imgur.com/88cR6.jpg)

Thor's hammer is supposed to be a long-handled warhammer. As the legends have it, during its manufacture its handle was somehow reduced in length (likely some trick or machination by Loki), so its head is disproportionate to its handle length. Given Thor's incredible strength that didn't seem to matter much. Sledgehammers for human use rarely weigh more than 10-20 pounds, since generally they are used as tools. Scaling one up to the size of Mjolnir's head would make its physical weight approximately 50-60 pounds, and make it impossible for an unaltered human being to wield effectively no matter how strong they were! Even if you were to remove the enchantments for worthiness, its physical weight and very short handle would preclude normal humans from doing much more than picking it up.

**EDIT**

With the new request of the hammer being made of collapsed star matter... Astrophysicist Neil deGrasse Tyson says on Twitter:

> * If Thor's hammer is made of neutron-star matter, implied by legend, then it weighs as much as a herd of 300 billion elephants

[![enter image description here](https://i.stack.imgur.com/WiZgJ.jpg)](https://i.stack.imgur.com/WiZgJ.jpg)

* Let's assume everything said about Mjolnir was true, and not Asgardian hyperbole at best. If it were, there is no place on Earth that could effectively support it.
* If Mjolnir WAS actually made of collapsed star matter, then there is nowhere Thor could set it down that it would not crush (like the helicarrier, for instance) like a tin can. A teaspoon of neutron star can weigh 900 times the mass of the Great Pyramid of Giza. For the record, the Great Pyramid of Giza weighs 5.9 million tons.
* We can assume it may be more massive than its apparent weight, but if it were truly made ONLY of collapsed stellar matter, even the lightest such neutronium-like material would make the hammer untenable in any human environment.
* A neutron star the size of Mjolnir (which is impossible; a real neutron star is only tens of kilometres across) would emit a devastating amount of electromagnetic radiation, i.e. gamma rays and other assorted deadly EM phenomena. Thor would need completely radiation-proof friends.
* More importantly, if you could get that much neutron material into a tiny space such as Mjolnir, its gravitational field would rip the Earth asunder due to the [gravitational imbalance](http://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gravitation) between the Earth and the hammer. It wouldn't be pretty.
* Since we have seen Mjolnir sitting on tables, lying on city streets, held by Thor standing in office buildings, and sitting someplace in the SHIELD helicarrier without Thor holding it up, we can safely assume it does not weigh more than the Great Pyramid of Giza.
Or more likely fifty Pyramids of Giza, given the density of neutron star material...

### Apocrypha

Marvel produced a trading card which specified Mjolnir's weight as 42.3 lbs. Given its dimensions, that would make it very light; its density would be lower than that of aluminum.\*

[![enter image description here](https://i.stack.imgur.com/i6bwg.jpg)](https://i.stack.imgur.com/i6bwg.jpg)

\*Excerpt from Matt Shipman's North Carolina State [blog interview](http://web.ncsu.edu/abstract/science/wms-mjolnir/) with materials scientist Suveen Mathaudhu:

> * Using the dimensions and weight on Marvel's trading card, Mathaudhu estimates that the density of Mjolnir is about 2.13 grams (g) per cubic centimeter (cc). That makes it even lighter than aluminum, which has a density of 2.71 g/cc.
The minimum mass of a neutron star is about 1-2 solar masses. This is, however, all condensed into a sphere only a few tens of kilometres across. Such a huge sphere needn't all be used in the forging process, so let's assume only a cubic foot is used. This still constitutes a mass of about 10^12 metric tons. Clearly this cannot be the case, otherwise the force exerted would be a million times greater than Earth's gravity, and enemies, friends and buildings would be sucked into the hammer. So what could it be? Mjolnir was forged from Uru in a dying star. Rather than a star dying 'peacefully' and becoming neutronium, perhaps it was forged in the wake of a supernova? All elements more massive than iron-56 can only be made via supernovas. This leads me to believe that the material is an isotope of iron. Why [iron](http://en.wikipedia.org/wiki/Iron_in_folklore)? Iron has a rich history of being involved in folklore and magic, being resistant to magic but also often enchanted, as Uru is. As such, going by the pictures of it, I'd estimate it weighs about twice as much as a standard sledgehammer.
There are two problems answering this question. One: what is the volume? Two: what form of collapsed matter are we talking about? The difference between the density of a white dwarf and neutronium is about 7 orders of magnitude (a factor of 10,000,000).

So, for the sake of making it possible to compute a number, let's assume that the weight of 42.3 pounds was computed for some volume of mostly iron. 42.3 lb is 19.187 kg, which would be 2.437 l of iron (density listed as 7.874 g/ml). 2.437 l of white dwarf material (approximate density 1,000,000 g/ml) would be 2,437,000 kg, or about 2,686 US tons. The [Wikipedia article on white dwarfs](http://en.wikipedia.org/wiki/White_dwarf) lists a range of densities; using the low end for neutronium (8.4e13 g/ml) would give 225.6 billion US tons.

Obviously, as was pointed out, the hammer is routinely placed on top of ordinary objects that are not immediately crushed, as they would be if the hammer weighed that much. So either:

1. The hammer was forged of ordinary material, and its weight and/or mass are adjusted in real time by some magic to produce the desired effects.
2. The hammer really does have a massive weight, but its effective weight and/or mass are adjusted in real time by some magic to produce the desired effects.

The effect is the same, so it really doesn't matter, though one could suppose that the magic to reduce the weight of an object might be simpler than the magic to magnify it (or vice versa). If something like that were true, then my best guess is that Uru is something formed in the heart of a white dwarf and the actual weight is in the neighborhood of 2,000 tons.
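The same figures as a worked calculation, using the densities this answer assumes:

```latex
V = \frac{m}{\rho_{\text{iron}}} = \frac{19.187\ \text{kg}}{7.874\ \text{g/ml}} \approx 2{,}437\ \text{ml}
```
```latex
m_{\text{white dwarf}} = \rho V \approx 10^{6}\ \text{g/ml} \times 2{,}437\ \text{ml}
  \approx 2.44\times10^{6}\ \text{kg} \approx 2{,}686\ \text{US tons}
```
```latex
m_{\text{neutronium}} \approx 8.4\times10^{13}\ \text{g/ml} \times 2{,}437\ \text{ml}
  \approx 2.05\times10^{14}\ \text{kg} \approx 2.26\times10^{11}\ \text{US tons}
```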
As I pointed out in my answer [here](https://scifi.stackexchange.com/a/54581/20774), you're misreading the passage. The blacksmith [Eitri](http://marvel.wikia.com/Mjolnir) and his assistants Brok and Buri used the core of a dying star as **the heat source** to "*forge a mold with which he birthed Mjolnir*". This "heart" (which presumably also served to heat the [Uru](http://marvel.wikia.com/Uru) from which the hammer is constructed) was not **inside the star** at the time but rather had **already been ejected** from the star as it exploded or collapsed. There's an image of them hard at work in [Thor Vol 2 #80](http://marvel.wikia.com/Thor_Vol_2_80) ![enter image description here](https://i.stack.imgur.com/sOIxm.png) Given that Uru is a relatively dense metal, we can assume that the weight of the hammer (sans enchantments) is around a couple of hundred(ish) pounds.
What if the hammer (Mjölnir) is not just a hunk of metal, but rather a concentration of Higgs bosons, with the "hammer" really being a containment device? I base this on the interaction between Captain America's vibranium shield and Mjölnir. Since vibranium can absorb (or cancel out?) kinetic energy, and the Higgs boson gives something its mass ("kinetic energy"), the reaction of those two meeting would be an incredible force indeed. If it is a variable Higgs boson containment device, then it would be able to rest on normal objects. Maybe people are unable to lift it not because it's heavy, but because it increases the strong nuclear force and the hammer literally becomes nuclearly bonded to whatever surface it's in contact with. Modifying the strong nuclear force would be fairly easy if it's already modifying the Higgs field with impunity. And to keep continuity, the Higgs bosons could have been collected from a dying star, where the increase in mass would allow for easy harvesting of the bosons. :)
It's only logical that it was made from a dying star, since if it were not extremely heavy it could not be a planet-busting weapon - as it is in the comics. My explanation is that it is made of a dying star, and Odin's enchantments allow it to weigh only 42 lbs for the wielder but still hit with enough force to bust planets. Also, the weight is controlled by Thor's mind, so when he is holding back he can hit normal people without killing them, but when fighting gods and beings like Galactus he can hit with planetary force.
There is a pretty good Vsauce video on this. To summarize: Thor's hammer wasn't made OF a dying star, just IN a dying star. However, if it were made of dying (in this case neutron) star material, it would weigh about 10 quadrillion pounds; given its density and size, anything within 100 ft of the hammer would be pulled in at almost the speed of sound and torn apart by gravitational tidal forces in the process. If dropped, the hammer would cause an explosion equal to about 1.3 million Tsar Bomba nukes, killing off a good chunk of humanity, and would sink through the Earth to rest at the Earth's core.
What if the hammer (Mjölnir) is not just a hunk of metal, but rather a concentration of Higgs Bosons with the "hammer" really being a containment device. I base this on the interaction between Captain America's vibranium shield vs Mjölnir. Since vibranium can absorb (or cancel out?) kinetic energy, and Higgs Boson gives something its mass, ("kinetic energy") then the reaction of those two meeting would be an incredible force indeed. If it is a Higgs Boson variable containment device, then it would be able to rest on normal object. Maybe people are unable to lift not because it's heavy, but rather it increases the nuclear strong force and the hammer literally becomes nuclearly bonded to whatever surface it's in contact with. Modifying the strong nuclear force would be fairly easy if it's modifying the Higgs field with impunity. And in order to keep continuity, the Higgs Bosons could have been collected from a dying star, where the increase in mass would allow for easy harvesting of the bosons. :)
18,812
All of the sites I looked at said "heavy" or "a lot" or "42.3 pounds." Mjolnir was forged from a collapsed star (a white dwarf or a neutron star). This means it is insanely dense. Worthiness aside, given its size (as seen in the movie) how much should it weigh? Perhaps I should clarify. Given the average density of a collapsed star, if you took enough material to construct the hammer, how much would that item weigh on Earth?
2012/06/21
[ "https://scifi.stackexchange.com/questions/18812", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/1148/" ]
As I pointed out in my answer [here](https://scifi.stackexchange.com/a/54581/20774), you're misreading the passage. The blacksmith [Eitri](http://marvel.wikia.com/Mjolnir) and his assistants Brok and Buri used the core of a dying star as **the heat source** to "*forge a mold with which he birthed Mjolnir*". This "heart" (which presumably also served to heat the [Uru](http://marvel.wikia.com/Uru) from which the hammer is constructed) was not **inside the star** at the time but rather had **already been ejected** from the star as it exploded or collapsed. There's an image of them hard at work in [Thor Vol 2 #80](http://marvel.wikia.com/Thor_Vol_2_80) ![enter image description here](https://i.stack.imgur.com/sOIxm.png) Given that Uru is a relatively dense metal, we can assume that the weight of the hammer (sans enchantments) is around a couple of hundred(ish) pounds.
If Mjolnir was forged from a dying star or neutron star, then below is an approximate weight. 5 millilitres of neutron star material has a mass of approximately 5,500,000,000 tons. The dimensions of Thor's hammer head have been estimated at 4x4x8 in, i.e. a volume of 128 cubic inches, which is about 2,100 millilitres. At that density, Mjolnir would have an estimated mass of roughly 2,300,000,000,000 tons.
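As a quick back-of-the-envelope check of the figures above, here is a small calculation sketch. It uses the answer's own inputs (a 4x4x8 in hammer head and roughly 5.5 billion tons per 5 millilitres of neutron-star material); only the unit conversion is standard.

```python
# Rough check of the estimate above -- all inputs are the answer's assumptions.
ML_PER_CUBIC_INCH = 16.387                 # standard conversion factor
TONS_PER_ML = 5_500_000_000 / 5            # "5 mL of neutron-star material ~ 5.5e9 tons"

volume_cubic_inches = 4 * 4 * 8            # assumed 4 x 4 x 8 in hammer head
volume_ml = volume_cubic_inches * ML_PER_CUBIC_INCH

mass_tons = volume_ml * TONS_PER_ML
print(f"Volume: {volume_ml:,.0f} mL")            # ~2,098 mL
print(f"Estimated mass: {mass_tons:,.0f} tons")  # ~2.3 trillion tons
```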
134,691
In conversation, when someone says they appreciate my brain, I need an effective comeback. I was going to say: > > "I hope that you are not turning into a zombie with your love for my brain." > > > But I feel that *"turning into"* is not a sufficiently **vivid** verb to use. What verb should I use when someone turns into (or possibly might turn into) a zombie? I want the verb to tell a bit of the story by describing the what and why of zombies.
2013/11/07
[ "https://english.stackexchange.com/questions/134691", "https://english.stackexchange.com", "https://english.stackexchange.com/users/9787/" ]
A few words come to mind: * transform * convert * evolve * devolve * grow * mature * adapt
Why, when you tell me, "I love you for your brain," do I only feel fear? Fear, like, seriously? Have you undergone zombification[1]? [1] <http://www.realclearscience.com/blog/2012/10/zombies-voodoo-and-pufferfish.html>
134,691
In conversation, when someone says they appreciate my brain, I need an effective comeback. I was going to say: > > "I hope that you are not turning into a zombie with your love for my brain." > > > But I feel that *"turning into"* is not a sufficiently **vivid** verb to use. What verb should I use when someone turns into (or possibly might turn into) a zombie? I want the verb to tell a bit of the story by describing the what and why of zombies.
2013/11/07
[ "https://english.stackexchange.com/questions/134691", "https://english.stackexchange.com", "https://english.stackexchange.com/users/9787/" ]
"Mutate" seems appropriate to me.
Undergoing [zombification](https://www.google.com/search?q=zombification&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla%3aen-US%3aofficial&client=firefox-a) is quite popular, however some prefer the slightly higher-browed term [zombifaction](https://www.google.com/search?safe=off&client=firefox-a&hs=S35&rls=org.mozilla%3aen-US%3aofficial&q=zombifaction&oq=zombifaction&gs_l=serp.12..0i10j0i7i30j0i10l8.7162.7162.0.10261.1.1.0.0.0.0.541.541.5-1.1.0....0...1c..31.serp..0.1.541.54K5tsx_pAk).
134,691
In conversation, when someone says they appreciate my brain, I need an effective comeback. I was going to say: > > "I hope that you are not turning into a zombie with your love for my brain." > > > But I feel that *"turning into"* is not a sufficiently **vivid** verb to use. What verb should I use when someone turns into (or possibly might turn into) a zombie? I want the verb to tell a bit of the story by describing the what and why of zombies.
2013/11/07
[ "https://english.stackexchange.com/questions/134691", "https://english.stackexchange.com", "https://english.stackexchange.com/users/9787/" ]
OED has a first citation from *NY Times, 1950* for... > > **zombify**: *to transform into a zombie.* > > > ...so I don't see why in OP's context he shouldn't say... > > *"I hope that you're not being/becoming **zombified** by your love of my brain."* > > >
I would use 'descending'. 'I hope you are not descending into a zombie'.
134,691
In conversation, when someone says they appreciate my brain, I need an effective comeback. I was going to say: > > "I hope that you are not turning into a zombie with your love for my brain." > > > But I feel that *"turning into"* is not a sufficiently **vivid** verb to use. What verb should I use when someone turns into (or possibly might turn into) a zombie? I want the verb to tell a bit of the story by describing the what and why of zombies.
2013/11/07
[ "https://english.stackexchange.com/questions/134691", "https://english.stackexchange.com", "https://english.stackexchange.com/users/9787/" ]
OED has a first citation from *NY Times, 1950* for... > > **zombify**: *to transform into a zombie.* > > > ...so I don't see why in OP's context he shouldn't say... > > *"I hope that you're not being/becoming **zombified** by your love of my brain."* > > >
Undergoing [zombification](https://www.google.com/search?q=zombification&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla%3aen-US%3aofficial&client=firefox-a) is quite popular, however some prefer the slightly higher-browed term [zombifaction](https://www.google.com/search?safe=off&client=firefox-a&hs=S35&rls=org.mozilla%3aen-US%3aofficial&q=zombifaction&oq=zombifaction&gs_l=serp.12..0i10j0i7i30j0i10l8.7162.7162.0.10261.1.1.0.0.0.0.541.541.5-1.1.0....0...1c..31.serp..0.1.541.54K5tsx_pAk).
134,691
In conversation, when someone says they appreciate my brain, I need an effective comeback. I was going to say: > > "I hope that you are not turning into a zombie with your love for my brain." > > > But I feel that *"turning into"* is not a sufficiently **vivid** verb to use. What verb should I use when someone turns into (or possibly might turn into) a zombie? I want the verb to tell a bit of the story by describing the what and why of zombies.
2013/11/07
[ "https://english.stackexchange.com/questions/134691", "https://english.stackexchange.com", "https://english.stackexchange.com/users/9787/" ]
A more vivid verb that describes the what and why of zombies. Ambitious! O.K. How about **transmute**? That hasn't been suggested by anyone yet. Looking into the history of transmutation, I discovered that the word was used in the 19th century several years before the advent of Charles Darwin's theory of evolution, *On the Origin of Species* (1859). Before that, it had been adopted by alchemists to describe the transformation of ordinary metal into gold. The promoter of the transmutation theory, the Frenchman Jean-Baptiste Lamarck, author of *Philosophie Zoologique* (1809), claimed... > > that simple forms of life were created continuously by *spontaneous generation*. He also believed that an innate life force, which he sometimes described as a nervous fluid, **drove species to become more complex over time**, advancing up a linear ladder of complexity that was related to the great chain of being. > > > [Online Etymology](http://www.etymonline.com/index.php?term=transmute) has this to say: *(v.) early 15c., from Latin transmutare, "change from one condition to another," from trans- "thoroughly" + mutare "to change." Related: Transmuted; transmuting*
Why, when you tell me, "I love you for your brain," do I only feel fear? Fear, like, seriously? Have you undergone zombification[1]? [1] <http://www.realclearscience.com/blog/2012/10/zombies-voodoo-and-pufferfish.html>
134,691
In conversation, when someone says they appreciate my brain, I need an effective comeback. I was going to say: > > "I hope that you are not turning into a zombie with your love for my brain." > > > But I feel that *"turning into"* is not a sufficiently **vivid** verb to use. What verb should I use when someone turns into (or possibly might turn into) a zombie? I want the verb to tell a bit of the story by describing the what and why of zombies.
2013/11/07
[ "https://english.stackexchange.com/questions/134691", "https://english.stackexchange.com", "https://english.stackexchange.com/users/9787/" ]
How about ***[transmogrify](http://www.merriam-webster.com/dictionary/transmogrify)***? > > to change or alter greatly and often with grotesque or humorous effect > > >
**zombie** 'I hope you are not **zombieing**' <= **to zombie**, inchoative, *to become a zombie*. To me, zombify is a causative, appropriate when you cause someone else to become a zombie. source: [urbandictionary.com](http://www.urbandictionary.com/define.php?term=zombieing)
134,691
In conversation, when someone says they appreciate my brain, I need an effective comeback. I was going to say: > > "I hope that you are not turning into a zombie with your love for my brain." > > > But I feel that *"turning into"* is not a sufficiently **vivid** verb to use. What verb should I use when someone turns into (or possibly might turn into) a zombie? I want the verb to tell a bit of the story by describing the what and why of zombies.
2013/11/07
[ "https://english.stackexchange.com/questions/134691", "https://english.stackexchange.com", "https://english.stackexchange.com/users/9787/" ]
I don't think there is anything wrong with saying *turn*, in this context. See <http://dictionary.cambridge.org/dictionary/british/turn_4> , which has the following definition of the word: **to (cause to) become, change into, or come to be something**. It's good enough for zombie films and television programmes, where characters talk of other people who have *turned*, meaning that they became a zombie.
A few words come to mind: * transform * convert * evolve * devolve * grow * mature * adapt
134,691
In conversation, when someone says they appreciate my brain, I need an effective comeback. I was going to say: > > "I hope that you are not turning into a zombie with your love for my brain." > > > But I feel that *"turning into"* is not a sufficiently **vivid** verb to use. What verb should I use when someone turns into (or possibly might turn into) a zombie? I want the verb to tell a bit of the story by describing the what and why of zombies.
2013/11/07
[ "https://english.stackexchange.com/questions/134691", "https://english.stackexchange.com", "https://english.stackexchange.com/users/9787/" ]
OED has a first citation from *NY Times, 1950* for... > > **zombify**: *to transform into a zombie.* > > > ...so I don't see why in OP's context he shouldn't say... > > *"I hope that you're not being/becoming **zombified** by your love of my brain."* > > >
I don't think there is anything wrong with saying *turn*, in this context. See <http://dictionary.cambridge.org/dictionary/british/turn_4> , which has the following definition of the word: **to (cause to) become, change into, or come to be something**. It's good enough for zombie films and television programmes, where characters talk of other people who have *turned*, meaning that they became a zombie.
134,691
In conversation, when someone says they appreciate my brain, I need an effective comeback. I was going to say: > > "I hope that you are not turning into a zombie with your love for my brain." > > > But I feel that *"turning into"* is not a sufficiently **vivid** verb to use. What verb should I use when someone turns into (or possibly might turn into) a zombie? I want the verb to tell a bit of the story by describing the what and why of zombies.
2013/11/07
[ "https://english.stackexchange.com/questions/134691", "https://english.stackexchange.com", "https://english.stackexchange.com/users/9787/" ]
A few words come to mind: * transform * convert * evolve * devolve * grow * mature * adapt
Morphing has already been mentioned, but one can also try to metamorphose into a zombie (just as larvae metamorphose into butterflies). The Free Dictionary has a suitable usage example: <http://www.thefreedictionary.com/metamorphose>
134,691
In conversation, when someone says they appreciate my brain, I need an effective comeback. I was going to say: > > "I hope that you are not turning into a zombie with your love for my brain." > > > But I feel that *"turning into"* is not a sufficiently **vivid** verb to use. What verb should I use when someone turns into (or possibly might turn into) a zombie? I want the verb to tell a bit of the story by describing the what and why of zombies.
2013/11/07
[ "https://english.stackexchange.com/questions/134691", "https://english.stackexchange.com", "https://english.stackexchange.com/users/9787/" ]
OED has a first citation from *NY Times, 1950* for... > > **zombify**: *to transform into a zombie.* > > > ...so I don't see why in OP's context he shouldn't say... > > *"I hope that you're not being/becoming **zombified** by your love of my brain."* > > >
How about ***[transmogrify](http://www.merriam-webster.com/dictionary/transmogrify)***? > > to change or alter greatly and often with grotesque or humorous effect > > >
234,773
Can a Slow Post HTTP attack be done from a single client? The following source cites this line: <https://medium.com/fantageek/understanding-socket-and-port-in-tcp-2213dc2e9b0c> > > Ports are 16-bit numbers, therefore the maximum number of connections > any given client can have to any given host port is 64K. > > > But since the max. number of connections a server can handle is usually much greater than the above limit (around ~300k but scalable), I assume that the Slow Post HTTP attack can only be done as a DDoS attack (using more than 1 client). But I can also see descriptions about Slow Post HTTP attack as DoS attack. For eg. <https://blog.qualys.com/securitylabs/2011/11/02/how-to-protect-against-slow-http-attacks> states > > Slow HTTP attacks are denial-of-service (DoS) attacks in which the > attacker sends HTTP requests in pieces slowly, one at a time to a Web > server. > > > Please let me know if I am missing something.
2020/07/17
[ "https://security.stackexchange.com/questions/234773", "https://security.stackexchange.com", "https://security.stackexchange.com/users/238520/" ]
Yes, a server can handle a lot of requests, but it is not handling just the attacker's requests. It is handling its normal load, and these attacks come on top of that load. It certainly is possible for a single attacker to exhaust the *remaining* server resources, which means that this type of attack is correctly described as a plain (non-distributed) DoS attack. Yes, it might be amplified by multiple nodes using this attack, but that is not a fundamental requirement.
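For illustration of the mechanism described here, the sketch below shows what a single slow-POST connection looks like from the client side. The host, port, body size, and timing values are assumptions, and it should only ever be pointed at a test server you own.

```python
import socket
import time

# Assumed test target -- point this only at a server you control.
HOST, PORT = "localhost", 8080

conn = socket.create_connection((HOST, PORT))

# Declare a large body up front so the server keeps the request open...
conn.sendall(
    b"POST /form HTTP/1.1\r\n"
    b"Host: localhost\r\n"
    b"Content-Type: application/x-www-form-urlencoded\r\n"
    b"Content-Length: 1000000\r\n"
    b"\r\n"
)

# ...then drip the body one byte at a time, tying up a worker thread or
# socket buffer for as long as the server's body-read timeout allows.
for _ in range(1_000_000):
    conn.sendall(b"a")
    time.sleep(10)
```

An attacker repeats this over thousands of parallel connections from one machine, which is why aggressive read/body timeouts and per-client connection limits are the usual server-side defence.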
Yes, it can. In fact, that type of attack basically tries to consume the internal memory that handles buffering of the POST operation. In general, backends have parameters to handle this type of situation.
5,741
Both of them are used: **visualization** and **visualisation**. *Visualisation* is more commonly used in UK. But, *visualization* is more common in US. What is the history of this word? Which is officially correct?
2013/04/24
[ "https://ell.stackexchange.com/questions/5741", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/862/" ]
"Visualization" is the only correct spelling in American English. "Visualisation" and "visualization" are both acceptable in British English although it is a common misconception that "visualization" is an Americanism and therefore incorrect. Oxford resolutely prefers "ize" forms - see [visualize](http://oxforddictionaries.com/definition/english/visualize?q=visualization#visualize__19) in the British English Oxford Dictionary - and a lot of academic writing follows their lead but most mass market publications prefer "ise". There's more information in [Wikipedia](http://en.wikipedia.org/wiki/American_and_British_English_spelling_differences#-ise.2C_-ize_.28-isation.2C_-ization.29) and an [interesting article](http://www.metadyne.co.uk/ize.html) with further references.
You've answered your own question. In the UK the following spelling is correct: > > Visualisation > > > In the US, the following spelling is correct: > > Visualization > > > This is one of a number of different spellings between British English and American English. Neither is "more correct" than the other - British English spellings are correct in the UK and not in the US. As a learner, you know better than anyone whether knowing British English or American English is more suitable for you. If you need to learn US English, learn the US spellings. If your learning means British English is more useful to you, learn the British English ones.
20,284
You don't have the option to "hold it till maturity", so you are invariably exposed to interest rate risk, and as a consequence, you will probably gain or lose on your principal in the end. Isn't it better to invest in individual bonds, where if things go sour with the interest rate, you can hold them till maturity and recover your full principal?
2013/01/15
[ "https://money.stackexchange.com/questions/20284", "https://money.stackexchange.com", "https://money.stackexchange.com/users/6855/" ]
According to the U.S. Bureau of Engraving and Printing, if you have clearly more than one-half of the current bill remaining, you should be able to take it to your bank and exchange it. But if for some reason your bank will not take it, you can submit it to Bureau of Engraving and Printing Office of Currency Standards. Question asked on <https://www.moneyfactory.gov/resources/faqs.html> > > **I have some currency that was damaged. My bank will not exchange it for undamaged currency. What can I do?** The Bureau of Engraving and > Printing's Office of Currency Standards processes all requests for > reimbursement for damaged United States currency. They decide the > redemption value of torn or otherwise unfit currency by measuring the > portions of the notes submitted. Generally, they reimburse the full > face value if clearly more than one-half of the original note remains. > Currency fragments measuring less than one-half are not redeemable. Go > to the [Damaged Money section of our website](https://www.moneyfactory.gov/services/currencyredemption.html) for additional information > and the procedures to redeem mutilated currency. > > > However take notice of this: > > Any badly soiled, dirty, defaced, disintegrated, limp, torn, worn, out > currency note that is CLEARLY MORE than one-half of the original note, > and does not require special examination to determine its value. These > notes should be exchanged through your local bank. > > >
There is usually contact information for the owner of the machine printed somewhere on it. Call that number. If it is in a business you could always try the clerk. Whether you get your money back is up to that person, I suppose.
6,882
For many days I have been trying to achieve this, but I can only delete/clear all recently dialed and received phone numbers. Is there an option/hack to delete only one number at a time on my iPhone? I have an iPhone 3GS.
2011/01/24
[ "https://apple.stackexchange.com/questions/6882", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/832/" ]
If your iPhone is jailbroken you can install the package [Ixtension](http://ixtension.com/) (it's $9.99 from the BigBoss repo). Disclaimer: I'm in no way affiliated with the developers.
I doubt iOS allows you to enumerate recent calls through standard iOS apps. I would suspect that through jail-breaking you could make an app yourself that could do it. Other than that, I doubt it's possible at all.
6,882
For many days I have been trying to achieve this, but I can only delete/clear all recently dialed and received phone numbers. Is there an option/hack to delete only one number at a time on my iPhone? I have an iPhone 3GS.
2011/01/24
[ "https://apple.stackexchange.com/questions/6882", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/832/" ]
You will have to wait for iOS 5 to be able to do what you are asking. I'm running iOS 5 and I'm able to do this without a jailbreak. ![iOS 5 Developer Preview](https://i.stack.imgur.com/NYCEh.png)
If your iPhone is jailbroken you can install the package [Ixtension](http://ixtension.com/) (it's $9.99 from the BigBoss repo). Disclaimer: I'm in no way affiliated with the developers.
6,882
For many days I have been trying to achieve this, but I can only delete/clear all recently dialed and received phone numbers. Is there an option/hack to delete only one number at a time on my iPhone? I have an iPhone 3GS.
2011/01/24
[ "https://apple.stackexchange.com/questions/6882", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/832/" ]
You will have to wait for iOS 5 to be able to do what you are asking. I'm running iOS 5 and I'm able to do this without a jailbreak. ![iOS 5 Developer Preview](https://i.stack.imgur.com/NYCEh.png)
I doubt iOS allows you to enumerate recent calls through standard iOS apps. I would suspect that through jail-breaking you could make an app yourself that could do it. Other than that, I doubt it's possible at all.
6,882
For many days I have been trying to achieve this, but I can only delete/clear all recently dialed and received phone numbers. Is there an option/hack to delete only one number at a time on my iPhone? I have an iPhone 3GS.
2011/01/24
[ "https://apple.stackexchange.com/questions/6882", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/832/" ]
Go to 'Recents' on the phone and tap 'Edit' in the top right-hand corner of the screen. You will then see a no-entry-style symbol to the left of each number in Recents. Tap the symbol next to the number you wish to delete; a Delete button will appear to the right of that number. Tap it to delete the entry.
I doubt iOS allows you to enumerate recent calls through standard iOS apps. I would suspect that through jail-breaking you could make an app yourself that could do it. Other than that, I doubt it's possible at all.
6,882
For many days I have been trying to achieve this, but I can only delete/clear all recently dialed and received phone numbers. Is there an option/hack to delete only one number at a time on my iPhone? I have an iPhone 3GS.
2011/01/24
[ "https://apple.stackexchange.com/questions/6882", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/832/" ]
You will have to wait for iOS 5 to be able to do what you are asking. I'm running iOS 5 and I'm able to do this without a jailbreak. ![iOS 5 Developer Preview](https://i.stack.imgur.com/NYCEh.png)
Go to 'Recents' on the phone and tap 'Edit' in the top right-hand corner of the screen. You will then see a no-entry-style symbol to the left of each number in Recents. Tap the symbol next to the number you wish to delete; a Delete button will appear to the right of that number. Tap it to delete the entry.
165,193
I asked <https://stackoverflow.com/q/14541047/1888017> and for some reason it's closed. I can't figure out why. I read the FAQ several times, and I know that it is possible to make a concrete answer, because it's just about differences between protocols and their performance. And it's also not broad, like how to build a rocket or a million-dollar business. So please explain to me what's wrong with it.
2013/01/26
[ "https://meta.stackexchange.com/questions/165193", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/209229/" ]
It's impossibly broad. A person could write a book attempting to give a comprehensive answer to the several questions you pose. The FAQ specifically states that 'it would take a book' is too broad.
It's enormously wide. It's a great example of a question that could be answered by a book. Consider asking which is supported by more platforms/libraries/programming languages (pick ONE) or which is fastest for a particular kind of use (be detailed) or which is easiest to learn, or easiest to hire developers who already know, or easiest to debug, or capable of handling your particular payload (large, or oddly formatted, or whatever) or most secure, or .... ONE question. Not "tell me about the differences". It would be like asking whether I should live in Canada or Europe. Unanswerable in general, and you've provided no details to take it away from the general.
165,193
I asked <https://stackoverflow.com/q/14541047/1888017> and for some reason it's closed. I can't figure out why. I read the FAQ several times, and I know that it is possible to make a concrete answer, because it's just about differences between protocols and their performance. And it's also not broad, like how to build a rocket or a million-dollar business. So please explain to me what's wrong with it.
2013/01/26
[ "https://meta.stackexchange.com/questions/165193", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/209229/" ]
There's no concrete, specific answer with the information you provided. (Even with specifics, I find it unlikely there would be an answer that's "concrete and specific" in a meaningful way.) Why are there so many protocols? Why are there so many sodas? Beers? Sausages? Each one has something going for it. Each one scratches a particular itch. And sausage is delicious. What specific features do each provide? This is documented in each implementation's docs. Do they have differences in performance? Of course they do. *What* those differences are, and whether or not they *matter* in your case, depend on your case, and we have zero information about that. The documentation of each, and performance testing under your specific use case, will be the specific, concrete answer you seek.
It's enormously wide. It's a great example of a question that could be answered by a book. Consider asking which is supported by more platforms/libraries/programming languages (pick ONE) or which is fastest for a particular kind of use (be detailed) or which is easiest to learn, or easiest to hire developers who already know, or easiest to debug, or capable of handling your particular payload (large, or oddly formatted, or whatever) or most secure, or .... ONE question. Not "tell me about the differences". It would be like asking whether I should live in Canada or Europe. Unanswerable in general, and you've provided no details to take it away from the general.
104,403
I know that it sounds like a very stupid question, but if Shor's algorithm has a complexity of roughly $n^3$, why can't we just increase the bit size until the time for the algorithm to run is unfeasible on a quantum computer? Or would it just take too much memory and too much computation for RSA/ECC to be worth it?
2023/02/27
[ "https://crypto.stackexchange.com/questions/104403", "https://crypto.stackexchange.com", "https://crypto.stackexchange.com/users/107047/" ]
> > if shor's algorithm has a complexity of roughly n^3 why cant we just increase the bit size until the time for the algorithm to run is unfeasible on a quantum computer > > > The problem is that the amount of work needed by the honest parties is also roughly $n^3$, hence we don't have that much advantage over an attacker. To be fair, we do have a few advantages: * For RSA, there are optimizations available for the honest parties that aren't there for the attacker, for example, the CRT optimization on the private operation, and short exponents on the public one [1] * The attacker has to run his operation on a Quantum Computer, which will likely be a large constant times as expensive as a classical one. On the other hand, both these advantages don't add up to enough (especially given that we also assume the adversary has considerably more computational resources at his disposal than we do). --- [1]: I believe that, when targeting a discrete log problem, Shor's can take advantage of knowledge that the exponent is small, and so that discrete log optimization doesn't help us.
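To make the "not much advantage" argument concrete, here is a toy cost comparison. The models are deliberately crude assumptions: roughly $n^3$ bit operations for an honest RSA private operation, roughly $n^3$ gates times an assumed constant overhead of $10^9$ for Shor's algorithm, and the standard GNFS asymptotic formula for classical factoring.

```python
import math

def gnfs_cost(n_bits):
    """Rough classical factoring cost (general number field sieve asymptotic)."""
    ln_n = n_bits * math.log(2)  # ln N for an n-bit modulus
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def shor_cost(n_bits, overhead=1e9):
    """Toy quantum cost: ~n^3 gates times an assumed constant overhead per gate."""
    return overhead * n_bits ** 3

def honest_cost(n_bits):
    """Toy cost of one RSA private operation: ~n^3 bit operations."""
    return n_bits ** 3

for n in (2048, 4096, 8192, 16384):
    print(f"{n:6d}-bit key | classical attacker/defender gap: "
          f"{gnfs_cost(n) / honest_cost(n):.1e} | quantum gap: "
          f"{shor_cost(n) / honest_cost(n):.1e}")
```

Under these assumed models the classical gap explodes as the key grows, which is why bigger keys defeat classical attackers, while the quantum gap stays at the assumed constant, so growing the key mostly just slows down the honest parties.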
That's what CNSA 1.0 did: minimum RSA key length is 3072, minimum AES key length is 256, and minimum SHA length is 384. [CNSA 2.0](https://media.defense.gov/2022/Sep/07/2003071834/-1/-1/0/CSA_CNSA_2.0_ALGORITHMS_.PDF) goes beyond that with some new algorithms.
13,916
I would like to try the drop handlebar on my touring bicycle that has V-Brakes, and Shimano 9x3 speeds. Are there drop handlebar levers (integrated shifters would be great), that have correct pull ratio for V-brakes? If there are brake levers but no shifters, where would be a good place to put my existing (deore shifters) on the drop handlebar?
2013/01/08
[ "https://bicycles.stackexchange.com/questions/13916", "https://bicycles.stackexchange.com", "https://bicycles.stackexchange.com/users/5563/" ]
Sounds like you want to use your current V-brakes, but if you're getting new brakes, TRP makes mini V-brakes that are compatible with STI levers (integrated brake/shift levers, sometimes called "brifters"). Most V-brakes have a different pull ratio which is not compatible with brifters. The [TRP CX9](http://rads.stackoverflow.com/amzn/click/B004C966DK) is for Shimano STI levers. The [TRP CX8.4](http://rads.stackoverflow.com/amzn/click/B009IJAPG8) is for SRAM and Campagnolo levers.
Tektro makes relatively inexpensive brake levers for drop handlebars. They have a [RL-340](http://www.tektro.com/_english/01_products/01_prodetail.php?pid=67&sortname=Lever&sort=1&fid=3) model which works with caliper and cantilever brakes, and a [RL-520](http://www.tektro.com/_english/01_products/01_prodetail.php?pid=10&sortname=Lever&sort=1&fid=3) model that works with V-brakes. I found this the hard way: I bought a Genesis single-speed cyclocross bike online, and it came equipped with mini V-brakes and the wrong levers. The biggest surprise was finding out that it wasn't an assembly error: it was like that in the specs. I ended up buying some RL-520s for $20 and replacing them myself. I am not aware of any V-brake lever with an integrated shifter. I have never tried one, but the standard in many touring setups seems to be bar-end shifters, as the picture below. ![enter image description here](https://i.stack.imgur.com/TcG1n.jpg)
13,916
I would like to try the drop handlebar on my touring bicycle that has V-Brakes, and Shimano 9x3 speeds. Are there drop handlebar levers (integrated shifters would be great), that have correct pull ratio for V-brakes? If there are brake levers but no shifters, where would be a good place to put my existing (deore shifters) on the drop handlebar?
2013/01/08
[ "https://bicycles.stackexchange.com/questions/13916", "https://bicycles.stackexchange.com", "https://bicycles.stackexchange.com/users/5563/" ]
Sounds like you want to use your current V-brakes, but if you're getting new brakes, TRP makes mini V-brakes that are compatible with STI levers (integrated brake/shift levers, sometimes called "brifters"). Most V-brakes have a different pull ratio which is not compatible with brifters. The [TRP CX9](http://rads.stackoverflow.com/amzn/click/B004C966DK) is for Shimano STI levers. The [TRP CX8.4](http://rads.stackoverflow.com/amzn/click/B009IJAPG8) is for SRAM and Campagnolo levers.
I have a Surly Long Haul Trucker which came stock with drop bars and Tektro 992 "Oryx" cantilever brakes and levers. I have now swapped those cantilever brakes for Avid Single Digit 7 V-brakes and, to specifically answer your question, fitted Cane Creek Drop V brake levers. These are not integrated shifter levers, so I still have my bar-end shifters. ![enter image description here](https://i.stack.imgur.com/HndLQ.jpg) For me this update has significantly improved the braking on the bike.
13,916
I would like to try the drop handlebar on my touring bicycle that has V-Brakes, and Shimano 9x3 speeds. Are there drop handlebar levers (integrated shifters would be great), that have correct pull ratio for V-brakes? If there are brake levers but no shifters, where would be a good place to put my existing (deore shifters) on the drop handlebar?
2013/01/08
[ "https://bicycles.stackexchange.com/questions/13916", "https://bicycles.stackexchange.com", "https://bicycles.stackexchange.com/users/5563/" ]
Tektro makes relatively inexpensive brake levers for drop handlebars. They have a [RL-340](http://www.tektro.com/_english/01_products/01_prodetail.php?pid=67&sortname=Lever&sort=1&fid=3) model which works with caliper and cantilever brakes, and a [RL-520](http://www.tektro.com/_english/01_products/01_prodetail.php?pid=10&sortname=Lever&sort=1&fid=3) model that works with V-brakes. I found this the hard way: I bought a Genesis single-speed cyclocross bike online, and it came equipped with mini V-brakes and the wrong levers. The biggest surprise was finding out that it wasn't an assembly error: it was like that in the specs. I ended up buying some RL-520s for $20 and replacing them myself. I am not aware of any V-brake lever with an integrated shifter. I have never tried one, but the standard in many touring setups seems to be bar-end shifters, as the picture below. ![enter image description here](https://i.stack.imgur.com/TcG1n.jpg)
I have a Surly Long Haul Trucker which came stock with drop bars and Tektro 992 "Oryx" cantilever brakes and levers. I have now swapped those cantilever brakes for Avid Single Digit 7 V-brakes and, to specifically answer your question, fitted Cane Creek Drop V brake levers. These are not integrated shifter levers, so I still have my bar-end shifters. ![enter image description here](https://i.stack.imgur.com/HndLQ.jpg) For me this update has significantly improved the braking on the bike.
216,791
So I thought I'd give Hyper-V on Windows Server 2008 R2 Enterprise a try on my home server (yes, it's legit... got it from MSDNAA). The first thing to throw at it was my firewall, which runs IPFire. This distribution currently uses kernel version 2.6.32.15 and comes with the Hyper-V drivers. So I enabled them, and at first they work just fine, but after a few minutes they just fail. No packets go in or out anymore until I reboot the VM, but sometimes even that won't work and the VM just keeps "Stopping" forever. Emulated networking works fine but is slow and uses more CPU. That way my firewall routes slower than when running under VirtualBox on an Atom N270. My server has an E6750; the VM is limited to 25%, but that should still outperform that Atom CPU, especially since it's never going anywhere near 100% CPU load, so give me a break! A quick Google search led me to people having the same problem (even with other distributions and kernel versions that include those drivers) but no solution yet... I already found [this](http://projectdream.org/wordpress/2010/02/17/hyper-v-2008-r2-and-linux-guests/) but I can't quite follow the author on the part where he solved the issue, especially since I need two virtual NICs for my firewall distro to work (obviously one internal and one external). What am I missing here?
2010/12/28
[ "https://serverfault.com/questions/216791", "https://serverfault.com", "https://serverfault.com/users/52395/" ]
I also had a lot of issues with the mainline kernel Hyper-V (hv) drivers, from 2.6.33 up to the recent 2.6.36.1. It seems that either you are one of the lucky ones and get working Linux VMs, or, as is more often seen, you hit network losses and crashing storage modules. I hope things improve, but at the moment I still have a bad feeling. I strongly suspect the hv modules were primarily developed and tested with kernels from RHEL 5 (not 6) and SLES 10. I had to switch to CentOS 5 to get a stable running Linux. See the problems that were reported here: <http://www.panterlo.com/2010/10/10/ubuntu-10-10-and-hyper-v-r2/#comment-84>
See this list and apply anything that is relevant; and/or, if this is not production, download the beta of W2K8R2 SP1, which solves lots of little issues! <http://technet.microsoft.com/en-us/library/ff394763(WS.10).aspx>
216,791
So I thought I'd give Hyper-V on Windows Server 2008 R2 Enterprise a try on my home server (yes, it's legit... got it from MSDNAA). The first thing to throw at it was my firewall, which runs IPFire. This distribution currently uses kernel version 2.6.32.15 and comes with the Hyper-V drivers. So I enabled them, and at first they work just fine, but after a few minutes they just fail. No packets go in or out anymore until I reboot the VM, but sometimes even that won't work and the VM just keeps "Stopping" forever. Emulated networking works fine but is slow and uses more CPU. That way my firewall routes slower than when running under VirtualBox on an Atom N270. My server has an E6750; the VM is limited to 25%, but that should still outperform that Atom CPU, especially since it's never going anywhere near 100% CPU load, so give me a break! A quick Google search led me to people having the same problem (even with other distributions and kernel versions that include those drivers) but no solution yet... I already found [this](http://projectdream.org/wordpress/2010/02/17/hyper-v-2008-r2-and-linux-guests/) but I can't quite follow the author on the part where he solved the issue, especially since I need two virtual NICs for my firewall distro to work (obviously one internal and one external). What am I missing here?
2010/12/28
[ "https://serverfault.com/questions/216791", "https://serverfault.com", "https://serverfault.com/users/52395/" ]
I also had a lot of issues with the mainline kernel Hyper-V (hv) drivers, from 2.6.33 up to the recent 2.6.36.1. It seems that either you are one of the lucky ones and get working Linux VMs, or, as is more often seen, you hit network losses and crashing storage modules. I hope things improve, but at the moment I still have a bad feeling. I strongly suspect the hv modules were primarily developed and tested with kernels from RHEL 5 (not 6) and SLES 10. I had to switch to CentOS 5 to get a stable running Linux. See the problems that were reported here: <http://www.panterlo.com/2010/10/10/ubuntu-10-10-and-hyper-v-r2/#comment-84>
Just installed IPFire with the latest patches on Hyper-V VM (W2K8R2) with the SP1 RC installed on the host. When running a speedtest (speedtest.net) it just crashes within 10-15 seconds - but for a brief moment it reaches 103.7 Mbps! :o)
10,300,069
When I convert PSDs to XHTML+CSS, the most common problem I run into is the crispness of font edges and the size unit. In Photoshop the font size unit is pt, and in CSS it's px. The smoothness in Photoshop is awesome, but when I move to XHTML I can't achieve the same smoothness, neither for common fonts like Myriad Pro nor for rare fonts, for which I use the @font-face property. Recently I converted a PSD to XHTML; the font used was GothamHTF, and I used @font-face to make this font available on the site, but the issue is I can't achieve the same quality as I did in Photoshop for the same text, font, and size. Did anyone face the same issue?
2012/04/24
[ "https://Stackoverflow.com/questions/10300069", "https://Stackoverflow.com", "https://Stackoverflow.com/users/585748/" ]
Photoshop has an engine in place for rendering typefaces. Each browser also has its own engine in place for rendering typefaces. These engines are different, and as a result typefaces are not always displayed the same way (or, sometimes, very well). The best way to ensure quality control in select places is to use an image of your headline or text. Otherwise, it is best to understand that one font does not display the same across all screen media.
It is not advisable to use images for titles or pieces of text. If you do, Google or any other search engine will not index the text you're presenting as an image. NOT SEO FRIENDLY.
32,477
Suppose I want to reduce risk of a circuit breaker malfunctioning and failing to disconnect a shorted circuit. What if I install two identical breakers sequentially (one after another) so that if one of them fails the other one steps in and disconnects the circuit? Is there any problem with such installation?
2013/10/07
[ "https://diy.stackexchange.com/questions/32477", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/807/" ]
There is no problem here. In fact, this is often how subpanels are configured; the main panel has a breaker that feeds the subpanel, and the subpanel has individual circuits protected with their own breakers. The subpanel will have an amp rating. The minimum of the subpanel rating and the ampacity rating of the wire feeding the subpanel will dictate the appropriate breaker for the main panel. Appliances that require their own local disconnects (such as air conditioners, water heaters, etc.) frequently use a small enclosure with space for just one circuit breaker. This could be a good option for whatever application you are thinking of.
"Suppose I want to reduce risk of a circuit breaker malfunctioning and failing to disconnect a shorted circuit." If you have a good quality panel... Square D, Eaton, Siemens, etc.. you really have no reason to worry about a circuit breaker malfunctioning in such a way that it doesn't trip if your load is greater than its ampacity. I have never seen this happen to any manufacturers circuit breakers not named Zinsco. In short, circuit breakers work and they work well you dont have to be afraid of them failing. If you are really nervous, install a arc fault circuit interrupter breaker in the existing ones place... any arcing and it will trip. Or an arc fault/ GFI combo breaker that will protect you from every angle.
32,477
Suppose I want to reduce risk of a circuit breaker malfunctioning and failing to disconnect a shorted circuit. What if I install two identical breakers sequentially (one after another) so that if one of them fails the other one steps in and disconnects the circuit? Is there any problem with such installation?
2013/10/07
[ "https://diy.stackexchange.com/questions/32477", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/807/" ]
Most generator manual transfer switch-over panels require you to do just this. The wire coming out of the breaker and (normally) going to a load is disconnected and channeled over to a second panel with another breaker, and then the bitter end coming out of the second panel is tied back to the load. In short, two breakers in series.
I have to agree: Connecting two circuit breakers of the same frame size and trip rating in series could be dangerous, depending on the application. When two circuit breakers of the same trip rating are connected in series it is impossible to predict which circuit breaker will trip first on a fault condition or on overload. The proper operation of the circuit breakers and clearing time would be affected. Connecting two circuit breakers of the same frame size and trip rating is simply not a good practice.
32,477
Suppose I want to reduce risk of a circuit breaker malfunctioning and failing to disconnect a shorted circuit. What if I install two identical breakers sequentially (one after another) so that if one of them fails the other one steps in and disconnects the circuit? Is there any problem with such installation?
2013/10/07
[ "https://diy.stackexchange.com/questions/32477", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/807/" ]
"Suppose I want to reduce risk of a circuit breaker malfunctioning and failing to disconnect a shorted circuit." If you have a good quality panel... Square D, Eaton, Siemens, etc.. you really have no reason to worry about a circuit breaker malfunctioning in such a way that it doesn't trip if your load is greater than its ampacity. I have never seen this happen to any manufacturers circuit breakers not named Zinsco. In short, circuit breakers work and they work well you dont have to be afraid of them failing. If you are really nervous, install a arc fault circuit interrupter breaker in the existing ones place... any arcing and it will trip. Or an arc fault/ GFI combo breaker that will protect you from every angle.
Putting two breakers of the same size and rating in series can be dangerous. Manufacturers give the short-circuit breaking current for ONE device. Modern circuit breakers open using the magnetic force generated during the short circuit. This magnetic repulsion of the contacts is linked to the energy generated during the short circuit. The breakers are designed to open according to this energy; the opening time and speed are defined according to it. If two breakers are in series, it is impossible to predict the energy at the level of the two devices. The energy is distributed over the two devices, and the operating time and opening speed can be slower than with only one breaker. Putting two breakers of the same size and same rating in series can be dangerous.
32,477
Suppose I want to reduce risk of a circuit breaker malfunctioning and failing to disconnect a shorted circuit. What if I install two identical breakers sequentially (one after another) so that if one of them fails the other one steps in and disconnects the circuit? Is there any problem with such installation?
2013/10/07
[ "https://diy.stackexchange.com/questions/32477", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/807/" ]
Most generator manual transfer switch-over panels require you to do just this. The wire coming out of the breaker and (normally) going to a load is disconnected and channeled over to a second panel with another breaker, and then the bitter end coming out of the second panel is tied back to the load. In short, two breakers in series.
Two breakers in series would reduce the risk of one failing in a way that leaves it stuck in a closed (conducting) position. But modern breakers have an extremely low risk of this happening. If I were that paranoid, I'd put a breaker and a fuse in series. But I'm not.
32,477
Suppose I want to reduce risk of a circuit breaker malfunctioning and failing to disconnect a shorted circuit. What if I install two identical breakers sequentially (one after another) so that if one of them fails the other one steps in and disconnects the circuit? Is there any problem with such installation?
2013/10/07
[ "https://diy.stackexchange.com/questions/32477", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/807/" ]
Most generator manual transfer switch-over panels require you to do just this. The wire coming out of the breaker and (normally) going to a load is disconnected and channeled over to a second panel with another breaker, and then the bitter end coming out of the second panel is tied back to the load. In short, two breakers in series.
Putting two breakers of the same size and rating in series can be dangerous. Manufacturers give the short-circuit breaking current for ONE device. Modern circuit breakers open using the magnetic force generated during the short circuit. This magnetic repulsion of the contacts is linked to the energy generated during the short circuit. The breakers are designed to open according to this energy; the opening time and speed are defined according to it. If two breakers are in series, it is impossible to predict the energy at the level of the two devices. The energy is distributed over the two devices, and the operating time and opening speed can be slower than with only one breaker. Putting two breakers of the same size and same rating in series can be dangerous.
32,477
Suppose I want to reduce risk of a circuit breaker malfunctioning and failing to disconnect a shorted circuit. What if I install two identical breakers sequentially (one after another) so that if one of them fails the other one steps in and disconnects the circuit? Is there any problem with such installation?
2013/10/07
[ "https://diy.stackexchange.com/questions/32477", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/807/" ]
There is no problem here. In fact, this is often how subpanels are configured; the main panel has a breaker that feeds the subpanel, and the subpanel has individual circuits protected with their own breakers. The subpanel will have an amp rating. The minimum of the subpanel rating and the ampacity rating of the wire feeding the subpanel will dictate the appropriate breaker for the main panel. Appliances that require their own local disconnects (such as air conditioners, water heaters, etc.) frequently use a small enclosure with space for just one circuit breaker. This could be a good option for whatever application you are thinking of.
Most generator manual transfer switch-over panels require you to do just this. The wire coming out of the breaker and (normally) going to a load is disconnected and channeled over to a second panel with another breaker, and then the bitter end coming out of the second panel is tied back to the load. In short, two breakers in series.
32,477
Suppose I want to reduce risk of a circuit breaker malfunctioning and failing to disconnect a shorted circuit. What if I install two identical breakers sequentially (one after another) so that if one of them fails the other one steps in and disconnects the circuit? Is there any problem with such installation?
2013/10/07
[ "https://diy.stackexchange.com/questions/32477", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/807/" ]
There is no problem here. In fact, this is often how subpanels are configured; the main panel has a breaker that feeds the subpanel, and the subpanel has individual circuits protected with their own breakers. The subpanel will have an amp rating. The minimum of the subpanel rating and the ampacity rating of the wire feeding the subpanel will dictate the appropriate breaker for the main panel. Appliances that require their own local disconnects (such as air conditioners, water heaters, etc.) frequently use a small enclosure with space for just one circuit breaker. This could be a good option for whatever application you are thinking of.
I have to agree: Connecting two circuit breakers of the same frame size and trip rating in series could be dangerous, depending on the application. When two circuit breakers of the same trip rating are connected in series it is impossible to predict which circuit breaker will trip first on a fault condition or on overload. The proper operation of the circuit breakers and clearing time would be affected. Connecting two circuit breakers of the same frame size and trip rating is simply not a good practice.
32,477
Suppose I want to reduce risk of a circuit breaker malfunctioning and failing to disconnect a shorted circuit. What if I install two identical breakers sequentially (one after another) so that if one of them fails the other one steps in and disconnects the circuit? Is there any problem with such installation?
2013/10/07
[ "https://diy.stackexchange.com/questions/32477", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/807/" ]
There is no problem here. In fact, this is often how subpanels are configured; the main panel has a breaker that feeds the subpanel, and the subpanel has individual circuits protected with their own breakers. The subpanel will have an amp rating. The minimum of the subpanel rating and the ampacity rating of the wire feeding the subpanel will dictate the appropriate breaker for the main panel. Appliances that require their own local disconnects (such as air conditioners, water heaters, etc.) frequently use a small enclosure with space for just one circuit breaker. This could be a good option for whatever application you are thinking of.
Logically, this makes some sense. Practically, there is no breaker panel configured to let you do this. You cannot physically modify the panel without invalidating the UL certification. Also, [system availability goes down for mechanisms in series](http://www.eventhelix.com/RealtimeMantra/FaultHandling/system_reliability_availability.htm#Availability%20in%20Series), because you multiply the individual availabilities.
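As a rough illustration of the trade-off the answers above are debating, here is a small probability sketch. The per-breaker numbers are made-up assumptions chosen only to show the arithmetic, and it assumes the two breakers fail independently (real breakers in the same panel can share failure causes).

```python
# Illustrative, assumed per-breaker figures -- not real reliability data.
p_fail_to_trip = 1e-4      # chance a breaker fails to open on a fault
availability   = 0.99999   # chance a breaker is closed/healthy when it should conduct

# Two independent breakers in series:
# a fault goes uncleared only if BOTH fail to trip...
p_both_fail_to_trip = p_fail_to_trip ** 2
# ...but the circuit carries power only if BOTH are working, so availability drops.
series_availability = availability ** 2

print(f"P(fault not cleared), one breaker   : {p_fail_to_trip:.1e}")
print(f"P(fault not cleared), two in series : {p_both_fail_to_trip:.1e}")
print(f"Availability, one breaker   : {availability:.6f}")
print(f"Availability, two in series : {series_availability:.6f}")
```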
32,477
Suppose I want to reduce risk of a circuit breaker malfunctioning and failing to disconnect a shorted circuit. What if I install two identical breakers sequentially (one after another) so that if one of them fails the other one steps in and disconnects the circuit? Is there any problem with such installation?
2013/10/07
[ "https://diy.stackexchange.com/questions/32477", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/807/" ]
"Suppose I want to reduce risk of a circuit breaker malfunctioning and failing to disconnect a shorted circuit." If you have a good quality panel... Square D, Eaton, Siemens, etc.. you really have no reason to worry about a circuit breaker malfunctioning in such a way that it doesn't trip if your load is greater than its ampacity. I have never seen this happen to any manufacturers circuit breakers not named Zinsco. In short, circuit breakers work and they work well you dont have to be afraid of them failing. If you are really nervous, install a arc fault circuit interrupter breaker in the existing ones place... any arcing and it will trip. Or an arc fault/ GFI combo breaker that will protect you from every angle.
Two breakers in series would reduce the risk of one failing in a way that stays in a closed (conducting) position. But modern breakers today have an extreme low risk of this happening. If I were that paranoid, I'd put a breaker and a fuse in series. But I'm not.
32,477
Suppose I want to reduce risk of a circuit breaker malfunctioning and failing to disconnect a shorted circuit. What if I install two identical breakers sequentially (one after another) so that if one of them fails the other one steps in and disconnects the circuit? Is there any problem with such installation?
2013/10/07
[ "https://diy.stackexchange.com/questions/32477", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/807/" ]
Two breakers in series would reduce the risk of one failing in a way that leaves it stuck in a closed (conducting) position. But modern breakers have an extremely low risk of this happening. If I were that paranoid, I'd put a breaker and a fuse in series. But I'm not.
Logically, this makes some sense. Practically, there is no breaker panel configured to let you do this. You cannot physically modify the panel without invalidating the UL certification. Also, [system availability goes down for mechanisms in series](http://www.eventhelix.com/RealtimeMantra/FaultHandling/system_reliability_availability.htm#Availability%20in%20Series), because you multiply the individual availabilities.
3,111
A user who created an account to ask a [fairly trollish question](https://politics.stackexchange.com/questions/23025/is-israel-worried-about-the-escape-of-its-citizens) about rockets hitting a Jewish-majority country chose a user name of "vonbraun". Wernher von Braun is the main person listed on the English language Wikipedia's [disambiguation page for Von Braun](https://en.wikipedia.org/wiki/Von_Braun) (several of the other entries are his relatives, or things named after him, though the chemistry things aren't him). Von Braun was a rocket scientist who developed rockets for [Nazi Germany](https://en.wikipedia.org/wiki/Wernher_von_Braun#German_career) (and the US) which were used in anger. I flagged this username as inappropriate, and my flag was not merely not acted on, but declined. I seriously doubt that the user involved just happens to have a surname of von braun - if they did, they would have chosen to use their first name as well, or only their first name. This user's account name should be deleted.
2017/07/23
[ "https://politics.meta.stackexchange.com/questions/3111", "https://politics.meta.stackexchange.com", "https://politics.meta.stackexchange.com/users/270/" ]
I agree with Carpetsmoker. There is not a strong case that the username is meant to invoke Nazism, and it is equally or more likely that it is not intended to invoke Nazism at all. 1. The user's post doesn't even seem to be anti-Israel, let alone anti-Semitic. 2. Judging from his Wikipedia page, Von Braun appears to be more notable as a scientist than as a Nazi. Using the name VonBraun as a way to invoke Nazism seems like an ineffectual choice.
The community as a whole (not necessarily everyone, but many of them) does not regard this as an inappropriate username.
258,904
I have an induction cooktop/stove in a California condo built in 2011-2012. Recently, I had to pull the cooktop electrical and discovered that the stove has a 3 wire harness (red, black, yellow/green). The building has 4 wires (red, black, white/neutral, and ground). The builder connected the stove's yellow/green to the building's neutral and left the building's ground unconnected. Per the [manual](https://www.manualslib.com/manual/372320/Whirlpool-Gci3061x.html?page=6#manual), I believe they should have connected the stove's yellow/green wire to ground. However, the manual indicates it would be a bare wire and it's actually a yellow/green. What's the correct way to wire the stove up to code? [![wiring](https://i.stack.imgur.com/sZFCy.jpg)](https://i.stack.imgur.com/sZFCy.jpg)
2022/10/19
[ "https://diy.stackexchange.com/questions/258904", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/129801/" ]
This appears to be a combination of errors: * Neutral vs. Ground The installer is used to typical US 240V/120V stoves which *require* neutral. So they naturally connected the "not black, not red" wire to neutral. Incorrectly, because that is actually a ground wire and they just didn't know what they were looking at. What they should have done was connected ground wires together and capped the neutral because it is not needed for this stove. * Yellow/Green I'll bet it isn't yellow/black but rather yellow/dark green. Ground can be bare, green or *yellow/green*. * 240V-only A typical US stove uses 240V for heating elements and 120V for lights and controls. But there is no reason these days to not use 240V for everything (except inertia), as controls are generally running off of a conversion to low voltage DC (so 120V or 240V as the starting point doesn't matter much) and lights are LEDs (again, low voltage DC, not line voltage AC). So a savvy manufacturer may choose, particularly with a new design such as an induction cooktop, to skip any 120V parts altogether. Which conveniently means the same exact cooktop can be used around the world (e.g., 208V - 250V) with no changes needed.
According to the linked manual, you are correct and the installer should not have connected anything to the condo's neutral wire. The red and black cooktop wires are for the 240V supply and the yellow w/green stripe is the equipment ground. Correct installation would be: * Cooktop Red > Branch Circuit Red * Cooktop Black > Branch Circuit Black * Cooktop Yellow w/Green Stripe > Branch Circuit Bare There is no neutral wire on the cooktop so the condo white should be capped off and left in the box.
258,904
I have an induction cooktop/stove in a California condo built in 2011-2012. Recently, I had to pull the cooktop electrical and discovered that the stove has a 3 wire harness (red, black, yellow/green). The building has 4 wires (red, black, white/neutral, and ground). The builder connected the stove's yellow/green to the building's neutral and left the building's ground unconnected. Per the [manual](https://www.manualslib.com/manual/372320/Whirlpool-Gci3061x.html?page=6#manual), I believe they should have connected the stove's yellow/green wire to ground. However, the manual indicates it would be a bare wire and it's actually a yellow/green. What's the correct way to wire the stove up to code? [![wiring](https://i.stack.imgur.com/sZFCy.jpg)](https://i.stack.imgur.com/sZFCy.jpg)
2022/10/19
[ "https://diy.stackexchange.com/questions/258904", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/129801/" ]
This appears to be a combination of errors: * Neutral vs. Ground The installer is used to typical US 240V/120V stoves which *require* neutral. So they naturally connected the "not black, not red" wire to neutral. Incorrectly, because that is actually a ground wire and they just didn't know what they were looking at. What they should have done was connected ground wires together and capped the neutral because it is not needed for this stove. * Yellow/Green I'll bet it isn't yellow/black but rather yellow/dark green. Ground can be bare, green or *yellow/green*. * 240V-only A typical US stove uses 240V for heating elements and 120V for lights and controls. But there is no reason these days to not use 240V for everything (except inertia), as controls are generally running off of a conversion to low voltage DC (so 120V or 240V as the starting point doesn't matter much) and lights are LEDs (again, low voltage DC, not line voltage AC). So a savvy manufacturer may choose, particularly with a new design such as an induction cooktop, to skip any 120V parts altogether. Which conveniently means the same exact cooktop can be used around the world (e.g., 208V - 250V) with no changes needed.
In 1966, when they required grounding for everything else, appliance makers got an exception allowing **hot-hot-neutral** 3-wire connections for ranges and dryers. "What could go wrong?" 4-wire connections were strongly encouraged, then mandated in NEC 1996. As such, appliance installers generally view the world as 2 types of connections: 3-wire hot-hot-neutral, or 4-wire hot-hot-neutral-ground. Because the induction cooktop doesn't have an *oven*, it doesn't need an *oven light* and has no need for neutral. Therefore it is hot-hot-*ground*, which the installer was unfamiliar with. Feel free to correct it. **Cap off the neutral securely** as it can be hot in some conditions. That's why it's insulated.
575,647
According to [Merriam-Webster](https://www.merriam-webster.com/words-at-play/sympathy-empathy-difference): > > In general, 'sympathy' is when you share the feelings of another; 'empathy' is when you understand the feelings of another but do not necessarily share them. > > > This seems at odds with the information given in the answers to [How can empathy be distinguished from sympathy?](https://english.stackexchange.com/questions/343402/how-can-empathy-be-distinguished-from-sympathy), which states that: > > With sympathy, you feel sorry that someone else has experienced something bad even if you have no idea how they feel. With empathy, if they are sorrowful, you feel their sorrow. > > > Is Merriam-Webster wrong?
2021/09/26
[ "https://english.stackexchange.com/questions/575647", "https://english.stackexchange.com", "https://english.stackexchange.com/users/97984/" ]
Merriam-Webster's view of how *sympathy* and *empathy* differ has evolved over time. In the past eighty years, MW has attempted on three occasions (that I'm aware of) to distinguish between the two terms, and each time it has thoroughly revamped its explanation. --- ***'Sympathy' and 'empathy' in Webster's Dictionary of Synonyms (1942)*** MW's *Webster's Dictionary of Synonyms* (1942) addresses *empathy* and *sympathy* in a bundle of synonyms headed by *pity*: > > **Pity, compassion, commiseration, ruth, condolence, sympathy, empathy, bowels** agree in meaning a feeling for for the suffering, distress, or unhappiness of another. ... **Sympathy** (etymologically, suffering with) is often used in place of *pity* or *compassion* (as, his plight aroused her *sympathy*) or in place of condolence (as, to offer one's sympathy to a bereaved friend), but in its precise meaning, it implies a power to enter into another's emotions or experiences, whether of a sorrowful or joyful nature, as by sharing them, by truly understanding them, or by being equally affected by them; as, "a boy goes for *sympathy* and companionship to his mother and sisters, not often to his father (*A. C. Benson*); "the rebel, as a human type entitled to respect and often to *sympathy*" (*R. E. N. Dodge*); "Amid the various feelings she was aware of arousing, she let me see that *sympathy*, in the sense of a moved understanding, had always been lacking" (*E. Wharton*); "Ah, then that was it! He was a lonely old man, who didn't want to live in constant reminder of happy times past. ... Tony ... felt a quick *sympathy* with him" (*Arch. Marshall*). *Sympathy* is also applicable to to anything that engages one's interest, sometimes because one is in agreement with its aims, accomplishments, principles, or tenets, and is attached to it (as, "the stepfather was a moderate Pompeian in *sympathies*"—*Buchan*), but more often because one has the imaginative capacity to enter into it and understand it in its true nature (as, "a creative writer can do his best only with what lies within the range and character of his deepest *sympathies*"—*Cather*). **Empathy** applies to the imaginative power which enables a person, especially an artist, to understand the emotions and experiences of others and to sympathize with them. "The active power of *empathy* which makes the creative artist, or the passive power of *empathy* which makes the appreciator of art" (*Rebecca West*). > > > A fair-minded observer may see a great deal of overlap between *sympathy* as used in the quotation from Willa Cather and *empathy* as defined immediately afterward by Merriam-Webster. --- ***'Sympathy' and 'empathy' in Merriam-Webster's Dictionary of Synonyms (1984)*** Many of the entries in *Merriam-Webster's Dictionary of Synonyms* (1984) are virtually identical to their counterparts in the 1942 edition of the dictionary, aside from a smattering of replaced or additional quotations illustrating how writers use a particular word. But that is not the case with *sympathy* and *empathy*. The first change is that the synonym bundle now appears under *sympathy* instead of *pity*. But the wording of the discussions of both *sympathy* and *empathy* are completely different in 1984 from what they were in 1942: > > **Sympathy, pity, compassion, condolence, ruth, empathy** are comparable though often not interchangeable when they mean a feeling for the suffering or distress of another. 
**Sympathy** is the most general term, ranging in meaning from friendly interest or agreement in taste or opinion to emotional identification, often accompanied by deep tenderness {*sympathy* with my desire to increase my ... knowledge—*Fairchild*} {*sympathies* were ... with the Roman Stoics—*Ellis*} {satire has its roots not in hatred but in *sympathy*—*Perry*} ... **Empathy**, of all the terms here discussed, has the least emotional content; it describes a gift, often a cultivated gift, for vicarious feeling, but the feeling need not be one of sorrow; thus *empathy* is often used as a synonym for some senses of *sympathy* as well as in distinction from *sympathy* {what he lacks is not *sympathy* but *empathy*, the ability to put himself in the other fellow's place—*G. W. Johnson*} *Empathy* is frequently employed with reference to a nonhuman object (as a literary character or an idea, culture, or work of art) {a fundamental component of the aesthetic attitude is *sympathy*, or—more *accurately*—empathy. In the presence of any work of art ... the recipient ... must surrender his independent and outstanding personality, to identify himself with the form or action presented by the artist—*Read*} > > > In my opinion, the change in MW's understanding of *empathy* between 1942 and 1984 reflects the professionalization of empathy in psychology and psychiatry. From being (in 1942) primarily a power of sympathetic imagination that is most common, in its active form, in creative artists, *empathy* becomes (in 1984) primarily a mental orientation to maximize analytical insight—a cultivated gift of vicarious feeling (one thinks of a psychotherapist trained to glean the feelings of a patient without indulging in them, or of an art appreciator passively identifying with the artist without fully internalizing the artist's emotional onslaught). --- ***'Sympathy' and 'empathy' in Merriam-Webster Online (undated)*** MW's undated online article, [What's the difference between 'sympathy' and 'empathy'?](https://www.merriam-webster.com/words-at-play/sympathy-empathy-difference), if anything, doubles down on the 1984 synonym dictionary's effort to distance *empathy* from a core sense of shared sympathy: > > **Sympathy vs. Empathy Difference** > > > The difference in meaning is usually explained with some variation of the following: sympathy is when you share the feelings of another; empathy is when you understand the feelings of another but do not necessarily share them. > > > ... > > > **Empathy is Understanding [whereas 'Sympathy is Sharing']** > > > *Empathy* suggests the notion of projection. You have empathy for a person when you can imagine how they might feel based on what you know about that person, despite not having those feelings explicitly communicated[.] > > > ... > > > *Empathy* can be contrasted with *sympathy* in terms of a kind of remove, or emotional distance: > > > > > > > The act or capacity of entering into or sharing the feelings of another is known as **sympathy**. **Empathy**, on the other hand, not only is an identification of sorts but also connotes an awareness of one's separateness from the observed. One of the most difficult tasks put upon man is reflective commitment to another's problem while maintaining his own identity. —*Journal of the American Medical Association*, 24 May 1958 > > > > > > > > > It is not by chance that the quotation that MW has chosen to highlight this proposed distinction comes from the official periodical of the AMA. 
It would hardly do for a clinician to dispense with emotional distance and perform some sort of Vulcan mind meld with a patient who is suffering emotional trauma or other mental turmoil. --- ***'Empathy' in Merriam-Webster's Eleventh Collegiate Dictionary (2003)*** It seems fair to ask whether everyday nonspecialists use *empathy* in a way that rigidly sequesters it from *sympathy* as the AMA recommends. My sense is that they do not. Still, *Merriam-Webster's Eleventh Collegiate Dictionary* (2003) studiously avoids mentioning the word *sympathy* in either of its two definitions of *empathy*: > > **empathy** *n* (1850) **1 :** the imaginative projection of a subjective state into an object so that the object appears to be infused with it **2 :** the action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts, and experience of another of either the past or present without having the feelings, thoughts, and experience fully communicated in an objectively explicit manner; *also* : the capacity for this > > > I suspect that the vast majority of lay people do not use *empathy* in either of these senses. The first definition is so abstract as to be almost opaque, and the second comes across as stilted, artificial, and clinical. --- ***'Sympathy' and 'empathy' in the American Heritage Dictionary (2010)*** The entry for *empathy* in *The American Heritage Dictionary of the English Language*, fifth edition (2010) does a much better job of approximating what real-world non-experts have in mind when they use the word: > > **empathy** *n.* **1.** The ability to identify with or understand another's situation or feelings: *Empathy is a distinctly human capability.* ... **2.** The attribution of one's feelings to an object: *They have empathy for the evacuees who were displaced by the flood.* > > > In a discussion of a synonym bundle that includes *sympathy* and *empathy*, AHDEL writes as follows: > > *pity, compassion, sympathy, empathy, commiseration, condolence.* These nouns signify kindly concern aroused by the misfortune, affliction, or suffering of another. ... *Sympathy* denotes the act of or capacity for sharing in the sorrows or troubles of another: "*They had little sympathy to spare for their unfortunate enemies*" (William Hickling Prescott). *Empathy* is an identification with and understanding of another's situation, feelings, and motives: *Having changed schools several times as a child, I feel empathy for the transfer students.* > > > Although AHDEL supports the same basic split between "sympathy = sharing" and "empathy = understanding" that MW identifies—and although it, too, avoids using the word *sympathy* in its definitions of empathy—it does so in a way that permits readers to see more easily that *empathy* can involve a strong identification with another person's feelings—what one might be tempted to call *sympathy*. --- ***Conclusion*** Merriam-Webster's approach to the definitions it composes can be startlingly inconsistent. In many instances, it adopts the populist, descriptivist view that the definitions it gives for a word ought to reflect how people use that word in the real world. But in some instances, it adopts a narrow, specialist-friendly, prescriptivist view of the proper definitions of a word, as though that word existed only in a milieu where all users were aware of and respected its precise, complicated, and nuanced technical meaning.
This, I think, is what MW has done in its handling of *empathy* in recent decades, and as a result the meanings it endorses seem poorly matched to the ways in which people in the wild actually use it.
The explanation provided by the Cambridge Dictionary suggests that yes, MW is not correct. [Sympathy](https://dictionary.cambridge.org/us/dictionary/english/sympathy) --------------------------------------------------------------------------- (UNDERSTANDING) B2 [U] **(an expression of) understanding and care for someone else's suffering:** *The president has sent a message of sympathy to the relatives of the dead soldiers.* *I don't have much sympathy for her - I think she's brought her troubles on herself.* It offers the following link in the Compare section: [Empathy](https://dictionary.cambridge.org/us/dictionary/english/empathy) ------------------------------------------------------------------------- C2 [U] **the ability to share someone else’s feelings or experiences by imagining what it would be like to be in that person’s situation:** And it has this example in the Intermediate English section: *He loves children and has a certain empathy with them.* The way the words are defined, *empathy* carries the meaning of shared feelings/experiences, while *sympathy* is more on the understanding level, not necessarily coming from empathy. You can have sympathy coming from empathy, or from something else, like social convention (see the first example for sympathy).
575,647
According to [Merriam-Webster](https://www.merriam-webster.com/words-at-play/sympathy-empathy-difference): > > In general, 'sympathy' is when you share the feelings of another; 'empathy' is when you understand the feelings of another but do not necessarily share them. > > > This seems at odds with the information given in the answers to [How can empathy be distinguished from sympathy?](https://english.stackexchange.com/questions/343402/how-can-empathy-be-distinguished-from-sympathy), which states that: > > With sympathy, you feel sorry that someone else has experienced something bad even if you have no idea how they feel. With empathy, if they are sorrowful, you feel their sorrow. > > > Is Merriam-Webster wrong?
2021/09/26
[ "https://english.stackexchange.com/questions/575647", "https://english.stackexchange.com", "https://english.stackexchange.com/users/97984/" ]
Merriam-Webster's view of how *sympathy* and *empathy* differ has evolved over time. In the past eighty years, MW has attempted on three occasions (that I'm aware of) to distinguish between the two terms, and each time it has thoroughly revamped its explanation. --- ***'Sympathy' and 'empathy' in Webster's Dictionary of Synonyms (1942)*** MW's *Webster's Dictionary of Synonyms* (1942) addresses *empathy* and *sympathy* in a bundle of synonyms headed by *pity*: > > **Pity, compassion, commiseration, ruth, condolence, sympathy, empathy, bowels** agree in meaning a feeling for for the suffering, distress, or unhappiness of another. ... **Sympathy** (etymologically, suffering with) is often used in place of *pity* or *compassion* (as, his plight aroused her *sympathy*) or in place of condolence (as, to offer one's sympathy to a bereaved friend), but in its precise meaning, it implies a power to enter into another's emotions or experiences, whether of a sorrowful or joyful nature, as by sharing them, by truly understanding them, or by being equally affected by them; as, "a boy goes for *sympathy* and companionship to his mother and sisters, not often to his father (*A. C. Benson*); "the rebel, as a human type entitled to respect and often to *sympathy*" (*R. E. N. Dodge*); "Amid the various feelings she was aware of arousing, she let me see that *sympathy*, in the sense of a moved understanding, had always been lacking" (*E. Wharton*); "Ah, then that was it! He was a lonely old man, who didn't want to live in constant reminder of happy times past. ... Tony ... felt a quick *sympathy* with him" (*Arch. Marshall*). *Sympathy* is also applicable to to anything that engages one's interest, sometimes because one is in agreement with its aims, accomplishments, principles, or tenets, and is attached to it (as, "the stepfather was a moderate Pompeian in *sympathies*"—*Buchan*), but more often because one has the imaginative capacity to enter into it and understand it in its true nature (as, "a creative writer can do his best only with what lies within the range and character of his deepest *sympathies*"—*Cather*). **Empathy** applies to the imaginative power which enables a person, especially an artist, to understand the emotions and experiences of others and to sympathize with them. "The active power of *empathy* which makes the creative artist, or the passive power of *empathy* which makes the appreciator of art" (*Rebecca West*). > > > A fair-minded observer may see a great deal of overlap between *sympathy* as used in the quotation from Willa Cather and *empathy* as defined immediately afterward by Merriam-Webster. --- ***'Sympathy' and 'empathy' in Merriam-Webster's Dictionary of Synonyms (1984)*** Many of the entries in *Merriam-Webster's Dictionary of Synonyms* (1984) are virtually identical to their counterparts in the 1942 edition of the dictionary, aside from a smattering of replaced or additional quotations illustrating how writers use a particular word. But that is not the case with *sympathy* and *empathy*. The first change is that the synonym bundle now appears under *sympathy* instead of *pity*. But the wording of the discussions of both *sympathy* and *empathy* are completely different in 1984 from what they were in 1942: > > **Sympathy, pity, compassion, condolence, ruth, empathy** are comparable though often not interchangeable when they mean a feeling for the suffering or distress of another. 
**Sympathy** is the most general term, ranging in meaning from friendly interest or agreement in taste or opinion to emotional identification, often accompanied by deep tenderness {*sympathy* with my desire to increase my ... knowledge—*Fairchild*} {*sympathies* were ... with the Roman Stoics—*Ellis*} {satire has its roots not in hatred but in *sympathy*—*Perry*} ... **Empathy**, of all the terms here discussed, has the least emotional content; it describes a gift, often a cultivated gift, for vicarious feeling, but the feeling need not be one of sorrow; thus *empathy* is often used as a synonym for some senses of *sympathy* as well as in distinction from *sympathy* {what he lacks is not *sympathy* but *empathy*, the ability to put himself in the other fellow's place—*G. W. Johnson*} *Empathy* is frequently employed with reference to a nonhuman object (as a literary character or an idea, culture, or work of art) {a fundamental component of the aesthetic attitude is *sympathy*, or—more *accurately*—empathy. In the presence of any work of art ... the recipient ... must surrender his independent and outstanding personality, to identify himself with the form or action presented by the artist—*Read*} > > > In my opinion, the change in MW's understanding of *empathy* between 1942 and 1984 reflects the professionalization of empathy in psychology and psychiatry. From being (in 1942) primarily a power of sympathetic imagination that is most common, in its active form, in creative artists, *empathy* becomes (in 1984) primarily a mental orientation to maximize analytical insight—a cultivated gift of vicarious feeling (one thinks of a psychotherapist trained to glean the feelings of a patient without indulging in them, or of an art appreciator passively identifying with the artist without fully internalizing the artist's emotional onslaught). --- ***'Sympathy' and 'empathy' in Merriam-Webster Online (undated)*** MW's undated online article, [What's the difference between 'sympathy' and 'empathy'?](https://www.merriam-webster.com/words-at-play/sympathy-empathy-difference), if anything, doubles down on the 1984 synonym dictionary's effort to distance *empathy* from a core sense of shared sympathy: > > **Sympathy vs. Empathy Difference** > > > The difference in meaning is usually explained with some variation of the following: sympathy is when you share the feelings of another; empathy is when you understand the feelings of another but do not necessarily share them. > > > ... > > > **Empathy is Understanding [whereas 'Sympathy is Sharing']** > > > *Empathy* suggests the notion of projection. You have empathy for a person when you can imagine how they might feel based on what you know about that person, despite not having those feelings explicitly communicated[.] > > > ... > > > *Empathy* can be contrasted with *sympathy* in terms of a kind of remove, or emotional distance: > > > > > > > The act or capacity of entering into or sharing the feelings of another is known as **sympathy**. **Empathy**, on the other hand, not only is an identification of sorts but also connotes an awareness of one's separateness from the observed. One of the most difficult tasks put upon man is reflective commitment to another's problem while maintaining his own identity. —*Journal of the American Medical Association*, 24 May 1958 > > > > > > > > > It is not by chance that the quotation that MW has chosen to highlight this proposed distinction comes from the official periodical of the AMA. 
It would hardly do for a clinician to dispense with emotional distance and perform some sort of Vulcan mind meld with a patient who is suffering emotional trauma or other mental turmoil. --- ***'Empathy' in Merriam-Webster's Eleventh Collegiate Dictionary (2003)*** It seems fair to ask whether everyday nonspecialists use *empathy* in a way that rigidly sequesters it from *sympathy* as the AMA recommends. My sense is that they do not. Still, *Merriam-Webster's Eleventh Collegiate Dictionary* (2003) studiously avoids mentioning the word *sympathy* in either of its two definitions of *empathy*: > > **empathy** *n* (1850) **1 :** the imaginative projection of a subjective state into an object so that the object appears to be infused with it **2 :** the action of understanding, being aware of, being sensitive to, and vicariously experiencing the feelings, thoughts, and experience of another of either the past or present without having the feelings, thoughts, and experience fully communicated in an objectively explicit manner; *also* : the capacity for this > > > I suspect that the vast majority of lay people do not use *empathy* in either of these senses. The first definition is so abstract as to be almost opaque, and the second comes across as stilted, artificial, and clinical. --- ***'Sympathy' and 'empathy' in the American Heritage Dictionary (2010)*** The entry for *empathy* in *The American Heritage Dictionary of the English Language*, fifth edition (2010) does a much better job of approximating what real-world non-experts have in mind when they use the word: > > **empathy** *n.* **1.** The ability to identify with or understand another's situation or feelings: *Empathy is a distinctly human capability.* ... **2.** The attribution of one's feelings to an object: *They have empathy for the evacuees who were displaced by the flood.* > > > In a discussion of a synonym bundle that includes *sympathy* and *empathy*, AHDEL writes as follows: > > *pity, compassion, sympathy, empathy, commiseration, condolence.* These nouns signify kindly concern aroused by the misfortune, affliction, or suffering of another. ... *Sympathy* denotes the act of or capacity for sharing in the sorrows or troubles of another: "*They had little sympathy to spare for their unfortunate enemies*" (William Hickling Prescott). *Empathy* is an identification with and understanding of another's situation, feelings, and motives: *Having changed schools several times as a child, I feel empathy for the transfer students.* > > > Although AHDEL supports the same basic split between "sympathy = sharing" and "empathy = understanding" that MW identifies—and although it, too, avoids using the word *sympathy* in its definitions of empathy—it does so in a way that permits readers to see more easily that *empathy* can involve a strong identification with another person's feelings—what one might be tempted to call *sympathy*. --- ***Conclusion*** Merriam-Webster's approach to the definitions it composes can be startlingly inconsistent. In many instances, it adopts the populist, descriptivist view that the definitions it gives for a word ought to reflect how people use that word in the real world. But in some instances, it adopts a narrow, specialist-friendly, prescriptivist view of the proper definitions of a word, as though that word existed only in a milieu where all users were aware of and respected its precise, complicated, and nuanced technical meaning.
This, I think, is what MW has done in its handling of *empathy* in recent decades, and as a result the meanings it endorses seem poorly matched to the ways in which people in the wild actually use it.
I have seen these contradictory definitions in many places over many years, to the point that the words "sympathy" and "empathy" are commonly used to mean each other, and it is very difficult to say what the "correct" definition of each word is anymore.
2,301,588
Why is the design of TCP servers mostly such that whenever it accepts a connection, a new process is invoked to handle it? But why, in the case of UDP servers, is there mostly only a single process that handles all client requests?
2010/02/20
[ "https://Stackoverflow.com/questions/2301588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/243312/" ]
When you talk with a client via a TCP connection, you maintain a TCP session. So when a new connection is established, you need a separate process (or thread; it doesn't matter how it is implemented or what OS is used) to maintain that conversation. But when you use UDP, you just receive a datagram (and you will be informed of the sender's IP and port); in the common case you do not keep any per-client connection state for it.
Your generalization is too general. This is a pattern you might see with a Unix-based server, where process creation is inexpensive. A .NET-based service will use a new thread from the thread pool instead of creating a new process.
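Not part of the original answer: a rough Python analogue of the thread-pool pattern this answer describes for .NET. The port, worker count, and echo behaviour are illustrative assumptions, not anything taken from the question.

```python
# Sketch: one accept loop, each connection handed to a fixed-size thread pool
# (reusing pooled threads instead of creating a new process per client).
import socket
from concurrent.futures import ThreadPoolExecutor

def handle(conn, addr):
    # Echo whatever the client sends until it closes the connection.
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)

def serve(host="127.0.0.1", port=5000, workers=8):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv, \
         ThreadPoolExecutor(max_workers=workers) as pool:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            pool.submit(handle, conn, addr)  # pooled worker services the client

if __name__ == "__main__":
    serve()
```

The point is simply that accepting stays in one loop while a bounded pool of worker threads, rather than freshly spawned processes, services each connection.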
2,301,588
Why is the design of TCP servers mostly such that whenever it accepts a connection, a new process is invoked to handle it? But why, in the case of UDP servers, is there mostly only a single process that handles all client requests?
2010/02/20
[ "https://Stackoverflow.com/questions/2301588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/243312/" ]
The main difference between TCP and UDP is, as stated before, that UDP is connectionless. A program using UDP has only one socket where it receives messages, so there's no problem if you just block and wait for a message. If using TCP, you get one socket for every client that connects. Then you can't just block and wait for ONE socket to receive something, because there are other sockets which must be processed at the same time. So you have two options: either use nonblocking methods or use threads. Code is usually much simpler when you don't have one while loop which has to handle every client, so threading is often preferred. You can also save some CPU time by using blocking methods.
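A minimal Python sketch (not from the original answer) of the contrast just described: one blocking loop on a single UDP socket versus one extra socket, handled here by a thread, per TCP client. The ports and the echo behaviour are assumptions for illustration only.

```python
import socket
import threading

def udp_server(port=9000):
    # One socket, one blocking loop: every client's datagram arrives here.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("0.0.0.0", port))
        while True:
            data, addr = sock.recvfrom(2048)
            sock.sendto(data, addr)  # reply to whichever client sent it

def tcp_client_handler(conn):
    # Each connected client gets its own socket and its own thread.
    with conn:
        while True:
            data = conn.recv(2048)
            if not data:
                break
            conn.sendall(data)

def tcp_server(port=9001):
    # One listening socket, plus one additional socket (and thread) per client.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=tcp_client_handler,
                             args=(conn,), daemon=True).start()
```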
Your generalization is too general. This is a pattern you might see with a Unix-based server, where process creation is inexpensive. A .NET-based service will use a new thread from the thread pool instead of creating a new process.
2,301,588
Why is the design of TCP servers mostly such that whenever it accepts a connection, a new process is invoked to handle it? But why, in the case of UDP servers, is there mostly only a single process that handles all client requests?
2010/02/20
[ "https://Stackoverflow.com/questions/2301588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/243312/" ]
First of all, the classic Unix server paradigm is filter based. For example, various network services can be configured in /etc/services and a program like inetd listens on all of the TCP and UDP sockets for incoming connections and datagrams. When a connection/DG arrives it forks, redirects stdin, stdout and stderr to the socket using the dup2 system call, and then execs the server process. [You can take any program which reads from stdin and writes to stdout and turn it into a network service, such as grep](https://stackoverflow.com/questions/2216374/why-is-writing-a-closed-tcp-socket-worse-than-reading-one/2218360#2218360). According to Stevens in "[Unix Network Programming](http://www.unpbook.com)", there are five kinds of server I/O models (pg. 154): 1. blocking 2. non-blocking 3. multiplexing (select and poll) 4. signal-driven 5. asynchronous (POSIX aio\_ functions) In addition, the servers can be either iterative or concurrent. You ask why TCP servers are typically concurrent, while UDP servers are typically iterative. The UDP side is easier to answer. Typically UDP apps follow a simple request-response model where a client sends a short request followed by a reply, with each pair constituting a stand-alone transaction. UDP servers are the only ones which use signal-driven I/O, and even then only rarely. TCP is a bit more complicated. Iterative servers can use any of the I/O models above, except #4. The fastest servers on a single processor are actually iterative servers using non-blocking I/O. However, these are considered relatively complex to implement, and that, plus the Unix filter idiom, were traditionally the primary reasons for use of the concurrent model with blocking I/O, whether multiprocess or multithreaded. Now, with the advent of common multicore systems, the concurrent model also has the performance advantage.
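As an illustration of model 3 in the list above (multiplexing with select/poll), here is a hedged Python sketch of a single-process, single-thread iterative TCP echo server built on the standard selectors module. The port and the echo behaviour are illustrative assumptions, not something taken from the original answer.

```python
# One process, one thread: the selector tells us which sockets are readable,
# and we handle them one at a time in the same loop (I/O model 3 above).
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(srv):
    conn, _ = srv.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(2048)          # called only when the socket is readable
    if data:
        conn.sendall(data)          # echo back; a simplification for a sketch
    else:
        sel.unregister(conn)
        conn.close()

def serve(port=9002):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    srv.setblocking(False)
    sel.register(srv, selectors.EVENT_READ, accept)
    while True:
        for key, _ in sel.select():  # blocks until some registered socket is ready
            key.data(key.fileobj)    # invoke the callback stored at registration

if __name__ == "__main__":
    serve()
```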
Your generalization is too general. This is a pattern you might see with a Unix-based server, where process creation is inexpensive. A .NET-based service will use a new thread from the thread pool instead of creating a new process.
2,301,588
Why is the design of TCP servers mostly such that whenever it accepts a connection, a new process is invoked to handle it? But why, in the case of UDP servers, is there mostly only a single process that handles all client requests?
2010/02/20
[ "https://Stackoverflow.com/questions/2301588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/243312/" ]
The main difference between TCP and UDP is, as stated before, that UDP is connectionless. A program using UDP has only one socket where it receives messages, so there's no problem if you just block and wait for a message. If using TCP, you get one socket for every client that connects. Then you can't just block and wait for ONE socket to receive something, because there are other sockets which must be processed at the same time. So you have two options: either use nonblocking methods or use threads. Code is usually much simpler when you don't have one while loop which has to handle every client, so threading is often preferred. You can also save some CPU time by using blocking methods.
When you talk with a client via a TCP connection, you maintain a TCP session. So when a new connection is established, you need a separate process (or thread; it doesn't matter how it is implemented or what OS is used) to maintain that conversation. But when you use UDP, you just receive a datagram (and you will be informed of the sender's IP and port); in the common case you do not keep any per-client connection state for it.
2,301,588
Why is the design of TCP servers mostly such that whenever it accepts a connection, a new process is invoked to handle it? But why, in the case of UDP servers, is there mostly only a single process that handles all client requests?
2010/02/20
[ "https://Stackoverflow.com/questions/2301588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/243312/" ]
When you talk with a client via a TCP connection, you maintain a TCP session. So when a new connection is established, you need a separate process (or thread; it doesn't matter how it is implemented or what OS is used) to maintain that conversation. But when you use UDP, you just receive a datagram (and you will be informed of the sender's IP and port); in the common case you do not keep any per-client connection state for it.
Programs that can continue to do useful work while they are waiting for I/O will often be multithreaded. Programs that do lots of computation which can be neatly divided into separate sections can benefit from multithreading, if there are multiple processors. Programs that service lots of network requests can sometimes benefit by having a pool of available threads to service requests. GUI programs that also need to perform computation can benefit from multithreading, because it allows the main thread to continue to service GUI events. That's why we use TCP as an internet protocol.
2,301,588
Why is the design of TCP servers mostly such that whenever it accepts a connection, a new process is invoked to handle it? But why, in the case of UDP servers, is there mostly only a single process that handles all client requests?
2010/02/20
[ "https://Stackoverflow.com/questions/2301588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/243312/" ]
The main difference between TCP and UDP is, as stated before, that UDP is connectionless. A program using UDP has only one socket where it receives messages, so there's no problem if you just block and wait for a message. If using TCP, you get one socket for every client that connects. Then you can't just block and wait for ONE socket to receive something, because there are other sockets which must be processed at the same time. So you have two options: either use nonblocking methods or use threads. Code is usually much simpler when you don't have one while loop which has to handle every client, so threading is often preferred. You can also save some CPU time by using blocking methods.
First of all, the classic Unix server paradigm is filter based. For example, various network services can be configured in /etc/services and a program like inetd listens on all of the TCP and UDP sockets for incoming connections and datagrams. When a connection/DG arrives it forks, redirects stdin, stdout and stderr to the socket using the dup2 system call, and then execs the server process. [You can take any program which reads from stdin and writes to stdout and turn it into a network service, such as grep](https://stackoverflow.com/questions/2216374/why-is-writing-a-closed-tcp-socket-worse-than-reading-one/2218360#2218360). According to Stevens in "[Unix Network Programming](http://www.unpbook.com)", there are five kinds of server I/O models (pg. 154): 1. blocking 2. non-blocking 3. multiplexing (select and poll) 4. signal-driven 5. asynchronous (POSIX aio\_ functions) In addition, the servers can be either iterative or concurrent. You ask why TCP servers are typically concurrent, while UDP servers are typically iterative. The UDP side is easier to answer. Typically UDP apps follow a simple request-response model where a client sends a short request followed by a reply, with each pair constituting a stand-alone transaction. UDP servers are the only ones which use signal-driven I/O, and even then only rarely. TCP is a bit more complicated. Iterative servers can use any of the I/O models above, except #4. The fastest servers on a single processor are actually iterative servers using non-blocking I/O. However, these are considered relatively complex to implement, and that, plus the Unix filter idiom, were traditionally the primary reasons for use of the concurrent model with blocking I/O, whether multiprocess or multithreaded. Now, with the advent of common multicore systems, the concurrent model also has the performance advantage.
2,301,588
Why is the design of TCP servers mostly such that whenever it accepts a connection, a new process is invoked to handle it? But why, in the case of UDP servers, is there mostly only a single process that handles all client requests?
2010/02/20
[ "https://Stackoverflow.com/questions/2301588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/243312/" ]
The main difference between TCP and UDP is, as stated before, that UDP is connectionless. A program using UDP has only one socket where it receives messages, so there's no problem if you just block and wait for a message. If using TCP, you get one socket for every client that connects. Then you can't just block and wait for ONE socket to receive something, because there are other sockets which must be processed at the same time. So you have two options: either use nonblocking methods or use threads. Code is usually much simpler when you don't have one while loop which has to handle every client, so threading is often preferred. You can also save some CPU time by using blocking methods.
Programs that can continue to do useful work while they are waiting for I/O will often be multithreaded. Programs that do lots of computation which can be neatly divided into separate sections can benefit from multithreading, if there are multiple processors. Programs that service lots of network requests can sometimes benefit by having a pool of available threads to service requests. GUI programs that also need to perform computation can benefit from multithreading, because it allows the main thread to continue to service GUI events. That's why we use TCP as an internet protocol.
2,301,588
Why is the design of TCP servers mostly such that whenever it accepts a connection, a new process is invoked to handle it? But why, in the case of UDP servers, is there mostly only a single process that handles all client requests?
2010/02/20
[ "https://Stackoverflow.com/questions/2301588", "https://Stackoverflow.com", "https://Stackoverflow.com/users/243312/" ]
First of all, the classic Unix server paradigm is filter based. For example, various network services can be configured in /etc/services and a program like inetd listens on all of the TCP and UDP sockets for incoming connections and datagrams. When a connection/DG arrives it forks, redirects stdin, stdout and stderr to the socket using the dup2 system call, and then execs the server process. [You can take any program which reads from stdin and writes to stdout and turn it into a network service, such as grep](https://stackoverflow.com/questions/2216374/why-is-writing-a-closed-tcp-socket-worse-than-reading-one/2218360#2218360). According to Stevens in "[Unix Network Programming](http://www.unpbook.com)", there are five kinds of server I/O models (pg. 154): 1. blocking 2. non-blocking 3. multiplexing (select and poll) 4. signal-driven 5. asynchronous (POSIX aio\_ functions) In addition, the servers can be either iterative or concurrent. You ask why TCP servers are typically concurrent, while UDP servers are typically iterative. The UDP side is easier to answer. Typically UDP apps follow a simple request-response model where a client sends a short request followed by a reply, with each pair constituting a stand-alone transaction. UDP servers are the only ones which use signal-driven I/O, and even then only rarely. TCP is a bit more complicated. Iterative servers can use any of the I/O models above, except #4. The fastest servers on a single processor are actually iterative servers using non-blocking I/O. However, these are considered relatively complex to implement, and that, plus the Unix filter idiom, were traditionally the primary reasons for use of the concurrent model with blocking I/O, whether multiprocess or multithreaded. Now, with the advent of common multicore systems, the concurrent model also has the performance advantage.
Programs that can continue to do useful work while they are waiting for I/O will often be multithreaded. Programs that do lots of computation which can be neatly divided into separate sections can benefit from multithreading, if there are multiple processors. Programs that service lots of network requests can sometimes benefit by having a pool of available threads to service requests. GUI programs that also need to perform computation can benefit from multithreading, because it allows the main thread to continue to service GUI events. That's why we use TCP as an internet protocol.
31,221,919
I have a problem with this new unified Windows Dev Center. In the old version, there was no problem deleting or hiding an app from the Windows Phone Store. But in this new interface, I cannot find that option. <https://dev.windows.com/en-us/overview?from=UHF> Of course, I tried to find some information on how to do it - I found [this](https://msdn.microsoft.com/pl-pl/en-en/library/windows/apps/jj193601.aspx), but it is totally useless. > > This topic applies to the earlier Windows Store dashboard and not the new unified Windows Dev Center dashboard. > > > Nice one, very user-friendly. To conclude - how do I delete, hide, or just unpublish my app from the Windows Phone Store using the new Windows Dev Center?
2015/07/04
[ "https://Stackoverflow.com/questions/31221919", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1503095/" ]
Select your app from the dashboard, go to submissions, and click update to create a new submission (yes, counterintuitive). Then go to pricing and availability, scroll down to Distribution and visibility, and click on show options. Then select hide this app and stop selling, save it, and submit it.
The new dev center is in preview mode (read-only), so I think that you will have this feature when the account migrations are finished. Until then, you also can't publish new apps. You can find more info here: <http://blogs.windows.com/buildingapps/2015/06/05/dev-center-account-migration-begins-next-week/>
13,984
Card sleeves are probably a no-brainer for board game collectors who want to keep their games in pristine condition for as long as possible, or for CCGs where an individual card may be very difficult to replace. For non-collectors playing games like Agricola, 7 Wonders, Mage Knight, Pandemic, etc., when does it make sense to use card sleeves and when doesn't it make sense?
2013/12/30
[ "https://boardgames.stackexchange.com/questions/13984", "https://boardgames.stackexchange.com", "https://boardgames.stackexchange.com/users/6469/" ]
Sleeves are good for games that have you shuffle a *lot*. Roborally, for example, shuffles a single deck of cards many times every game, and my cards were significantly worn after a year or two of playing occasionally. Similarly, if the cards just don't seem to be very durable, sleeves might be a good idea. They're usually mostly pointless for games that don't shuffle much (once per game). The cards shouldn't wear that fast, so unless you prefer shuffling with sleeves or are very protective, there's not really much need. And sleeves are counterproductive for extremely large decks of cards (think Arkham Horror with expansions) where the stack of sleeved cards would be way too high to be stable.
As card sleeves are quite cheap, it's just a matter of personal preference. Some people like sleeves (I do), others don't (some of my friends). One case I could argue for sleeves is when you have a card game and plan on maybe buying an expansion later. The cards from the base set would be worn from use, while the expansion would be pristine and shiny. It will be visible. With sleeves, you just put all the cards into new sleeves and voila, every card looks the same.
13,984
Card sleeves are probably a no-brainer for board game collectors who want to keep their games in pristine condition for as long as possible, or for CCGs where an individual card may be very difficult to replace. For non-collectors playing games like Agricola, 7 Wonders, Mage Knight, Pandemic, etc., when does it make sense to use card sleeves and when doesn't it make sense?
2013/12/30
[ "https://boardgames.stackexchange.com/questions/13984", "https://boardgames.stackexchange.com", "https://boardgames.stackexchange.com/users/6469/" ]
As card sleeves are quite cheap, it's just a matter of personal preference. Some people like sleeves (I do), others don't (some of my friends). One case I could argue for sleeves is when you have a card game and plan on maybe buying an expansion later. The cards from the base set would be worn from use, while the expansion would be pristine and shiny. It will be visible. With sleeves, you just put all the cards into new sleeves and voila, every card looks the same.
I think sleeves are only really needed for a game where the cards will be shuffled a lot, and you expect to play the game often. (How often ‘often’ is depends on the amount of shuffling.) If there are going to be expansions, or there are cards where a subset gets shuffled into the main deck, you might want to get sleeves just to prevent the cards from having different amounts of wear. (So, I should probably get sleeves for my Cosmic Encounter set, even though I don’t get to play very often.)
13,984
Card sleeves are probably a no-brainer for board game collectors who want to keep their games in pristine condition for as long as possible, or for CCGs where an individual card may be very difficult to replace. For non-collectors playing games like Agricola, 7 Wonders, Mage Knight, Pandemic, etc., when does it make sense to use card sleeves and when doesn't it make sense?
2013/12/30
[ "https://boardgames.stackexchange.com/questions/13984", "https://boardgames.stackexchange.com", "https://boardgames.stackexchange.com/users/6469/" ]
There are a few situations where you want to consider sleeving your cards * You want to prevent marking caused by uneven use (ever play Euchre with a normal deck of cards?) * You want to prevent marking caused by tears, folds, scratches and nicks (especially in games with a lot of shuffling or playing cards repeatedly) * You want to protect the value of the cards (for collectible card games, such as *MtG*) * You want to protect the condition of the cards (because you want them to remain in a nice state) * The cards are difficult or expensive to replace And a few reasons you might want to not sleeve your cards * They will not fit nicely into the original packaging you have with sleeves * The cards no longer stack nicely with sleeves * It doesn't have much impact (if any) on the game if cards become marked (resource cards in *Settlers of Catan*, for example) * There is information on both sides of the card that needs to remain visible (you can, however, buy transparent sleeves) * The cards are easily replaceable (such as a normal deck of playing cards) * And, of course, sleeves cost money So, in short, it's really a matter of personal preference. I sleeve my cards when playing *MtG* as some of the cards are valuable and marking can be an issue. I don't sleeve my cards for *Arkham Horror* as (although they are very worn) it isn't very important to keep information secret. The cards also come in various sizes and sleeving them all would be annoying. For another game, *Citadels*, some information is very important to keep secret (roles) so I sleeve these cards to prevent marking, but other cards (buildings) are not as important to keep secret so I do not bother sleeving them. The same idea applies with *Battlestar Galactica*.
As card sleeves are quite cheap, it's just a matter of personal preference. Some people like sleeves (I do), others don't (some of my friends). One case I could argue for sleeves is when you have a card game and plan on maybe buying an expansion later. The cards from the base set would be worn from use, while the expansion would be pristine and shiny. It will be visible. With sleeves, you just put all the cards into new sleeves and voila, every card looks the same.
13,984
Card sleeves are probably a no-brainer for board game collectors who want to keep their games in pristine condition for as long as possible, or for CCGs where an individual card may be very difficult to replace. For non-collectors playing games like Agricola, 7 Wonders, Mage Knight, Pandemic, etc., when does it make sense to use card sleeves and when doesn't it make sense?
2013/12/30
[ "https://boardgames.stackexchange.com/questions/13984", "https://boardgames.stackexchange.com", "https://boardgames.stackexchange.com/users/6469/" ]
As card sleeves are quite cheap, it's just a matter of personal preference. Some people like sleeves (I do), others don't (some of my friends). One case I could argue for sleeves is when you have a card game and plan on maybe buying an expansion later. The cards from the base set would be worn from use, while the expansion would be pristine and shiny. It will be visible. With sleeves, you just put all the cards into new sleeves and voila, every card looks the same.
Another, less obvious reason to sleeve your cards is when you suspect they could be easily damaged. It can happen when you are going to play some multiplayer game like Dixit Odyssey on your lake holiday with slightly drunk friends, when everyone will probably have dirty hands; not to mention the few times sleeves saved my cards when someone knocked over candles or glasses of juice or coke. Then you are really happy that you bought sleeves.
13,984
Card sleeves are probably a no-brainer for board game collectors who want to keep their games in pristine condition for as long as possible, or for CCGs where an individual card may be very difficult to replace. For non-collectors playing games like Agricola, 7 Wonders, Mage Knight, Pandemic, etc., when does it make sense to use card sleeves and when doesn't it make sense?
2013/12/30
[ "https://boardgames.stackexchange.com/questions/13984", "https://boardgames.stackexchange.com", "https://boardgames.stackexchange.com/users/6469/" ]
Sleeves are good for games that have you shuffle a *lot*. Roborally, for example, shuffles a single deck of cards many times every game, and my cards were significantly worn after a year or two of playing occasionally. Similarly, if the cards just don't seem to be very durable, sleeves might be a good idea. They're usually mostly pointless for games that don't shuffle much (once per game). The cards shouldn't wear that fast, so unless you prefer shuffling with sleeves or are very protective, there's not really much need. And sleeves are counterproductive for extremely large decks of cards (think Arkham Horror with expansions) where the stack of sleeved cards would be way too high to be stable.
I think sleeves are only really needed for a game where the cards will be shuffled a lot, and you expect to play the game often. (How often ‘often’ is depends on the amount of shuffling.) If there are going to be expansions, or there are cards where a subset gets shuffled into the main deck, you might want to get sleeves just to prevent the cards from having different amounts of wear. (So, I should probably get sleeves for my Cosmic Encounter set, even though I don’t get to play very often.)
13,984
Card sleeves are probably a no-brainer for board game collectors who want to keep their games in pristine condition for as long as possible, or for CCGs where an individual card may be very difficult to replace. For non-collectors playing games like Agricola, 7 Wonders, Mage Knight, Pandemic, etc., when does it make sense to use card sleeves and when doesn't it make sense?
2013/12/30
[ "https://boardgames.stackexchange.com/questions/13984", "https://boardgames.stackexchange.com", "https://boardgames.stackexchange.com/users/6469/" ]
There are a few situations where you want to consider sleeving your cards * You want to prevent marking caused by uneven use (ever play Euchre with a normal deck of cards?) * You want to prevent marking caused by tears, folds, scratches and nicks (especially in games with a lot of shuffling or playing cards repeatedly) * You want to protect the value of the cards (for collectible card games, such as *MtG*) * You want to protect the condition of the cards (because you want them to remain in a nice state) * The cards are difficult or expensive to replace And a few reasons you might want to not sleeve your cards * They will not fit nicely into the original packaging you have with sleeves * The cards no longer stack nicely with sleeves * It doesn't have much impact (if any) on the game if cards become marked (resource cards in *Settlers of Catan*, for example) * There is information on both sides of the card that needs to remain visible (you can, however, buy transparent sleeves) * The cards are easily replaceable (such as a normal deck of playing cards) * And, of course, sleeves cost money So, in short, it's really a matter of personal preference. I sleeve my cards when playing *MtG* as some of the cards are valuable and marking can be an issue. I don't sleeve my cards for *Arkham Horror* as (although they are very worn) it isn't very important to keep information secret. The cards also come in various sizes and sleeving them all would be annoying. For another game, *Citadels*, some information is very important to keep secret (roles) so I sleeve these cards to prevent marking, but other cards (buildings) are not as important to keep secret so I do not bother sleeving them. The same idea applies with *Battlestar Galactica*.
Sleeves are good for games that have you shuffle a *lot*. Roborally, for example, shuffles a single deck of cards many times every game, and my cards were significantly worn after a year or two of playing occasionally. Similarly, if the cards just don't seem to be very durable, sleeves might be a good idea. They're usually mostly pointless for games that don't shuffle much (once per game). The cards shouldn't wear that fast, so unless you prefer shuffling with sleeves or are very protective, there's not really much need. And sleeves are counterproductive for extremely large decks of cards (think Arkham Horror with expansions) where the stack of sleeved cards would be way too high to be stable.
13,984
Card sleeves are probably a no-brainer for board game collectors who want to keep their games in pristine condition for as long as possible, or for CCGs where an individual card may be very difficult to replace. For non-collectors playing games like Agricola, 7 Wonders, Mage Knight, Pandemic, etc, when does it make sense to use card sleeves and when doesn't it make sense?
2013/12/30
[ "https://boardgames.stackexchange.com/questions/13984", "https://boardgames.stackexchange.com", "https://boardgames.stackexchange.com/users/6469/" ]
Sleeves are good for games that have you shuffle a *lot*. Roborally, for example, shuffles a single deck of cards many times every game, and my cards were significantly worn after a year or two of playing occasionally. Similarly, if the cards just don't seem to be very durable, sleeves might be a good idea. They're usually mostly pointless for games that don't shuffle much (once per game). The cards shouldn't wear that fast, so unless you prefer shuffling with sleeves or are very protective, there's not really much need. And sleeves are counterproductive for extremely large decks of cards (think Arkham Horror with expansions) where the stack of sleeved cards would be way too high to be stable.
Another, less obvious reason to sleeve your cards is when you suspect they could be easily damaged. That can happen when you play multiplayer games like Dixit Odyssey on a lake holiday with slightly drunk friends, when everyone probably has dirty hands, not to mention the few times sleeves saved my cards when someone knocked over candles or glasses of juice or coke. Then you are really happy that you bought sleeves.
13,984
Card sleeves are probably a no-brainer for board game collectors who want to keep their games in pristine condition for as long as possible, or for CCGs where an individual card may be very difficult to replace. For non-collectors playing games like Agricola, 7 Wonders, Mage Knight, Pandemic, etc, when does it make sense to use card sleeves and when doesn't it make sense?
2013/12/30
[ "https://boardgames.stackexchange.com/questions/13984", "https://boardgames.stackexchange.com", "https://boardgames.stackexchange.com/users/6469/" ]
There are a few situations where you want to consider sleeving your cards * You want to prevent marking caused by uneven use (ever play Euchre with a normal deck of cards?) * You want to prevent marking caused by tears, folds, scratches and nicks (especially in games with a lot of shuffling or playing cards repeatedly) * You want to protect the value of the cards (for collectible card games, such as *MtG*) * You want to protect the condition of the cards (because you want them to remain in a nice state) * The cards are difficult or expensive to replace And a few reasons you might want to not sleeve your cards * They will not fit nicely into the original packaging you have with sleeves * The cards no longer stack nicely with sleeves * It doesn't have much impact (if any) on the game if cards become marked (resource cards in *Settlers of Catan*, for example) * There is information on both sides of the card that needs to remain visible (you can, however, buy transparent sleeves) * The cards are easily replaceable (such as a normal deck of playing cards) * And, of course, sleeves cost money So, in short, it's really a matter of personal preference. I sleeve my cards when playing *MtG* as some of the cards are valuable and marking can be an issue. I don't sleeve my cards for *Arkham Horror* as (although they are very worn) it isn't very important to keep information secret. The cards also come in various sizes and sleeving them all would be annoying. For another game, *Citadels*, some information is very important to keep secret (roles) so I sleeve these cards to prevent marking, but other cards (buildings) are not as important to keep secret so I do not bother sleeving them. The same idea applies with *Battlestar Galactica*.
I think sleeves are only really needed for a game where the cards will be shuffled a lot, and you expect to play the game often. (What counts as ‘often’ depends on the amount of shuffling.) If there are going to be expansions, or there are cards where a subset gets shuffled into the main deck, you might want to get sleeves just to prevent the cards from having different amounts of wear. (So, I should probably get sleeves for my Cosmic Encounter set, even though I don’t get to play very often.)
13,984
Card sleeves are probably a no-brainer for board game collectors who want to keep their games in pristine condition for as long as possible, or for CCGs where an individual card may be very difficult to replace. For non-collectors playing games like Agricola, 7 Wonders, Mage Knight, Pandemic, etc, when does it make sense to use card sleeves and when doesn't it make sense?
2013/12/30
[ "https://boardgames.stackexchange.com/questions/13984", "https://boardgames.stackexchange.com", "https://boardgames.stackexchange.com/users/6469/" ]
There are a few situations where you want to consider sleeving your cards * You want to prevent marking caused by uneven use (ever play Euchre with a normal deck of cards?) * You want to prevent marking caused by tears, folds, scratches and nicks (especially in games with a lot of shuffling or playing cards repeatedly) * You want to protect the value of the cards (for collectible card games, such as *MtG*) * You want to protect the condition of the cards (because you want them to remain in a nice state) * The cards are difficult or expensive to replace And a few reasons you might want to not sleeve your cards * They will not fit nicely into the original packaging you have with sleeves * The cards no longer stack nicely with sleeves * It doesn't have much impact (if any) on the game if cards become marked (resource cards in *Settlers of Catan*, for example) * There is information on both sides of the card that needs to remain visible (you can, however, buy transparent sleeves) * The cards are easily replaceable (such as a normal deck of playing cards) * And, of course, sleeves cost money So, in short, it's really a matter of personal preference. I sleeve my cards when playing *MtG* as some of the cards are valuable and marking can be an issue. I don't sleeve my cards for *Arkham Horror* as (although they are very worn) it isn't very important to keep information secret. The cards also come in various sizes and sleeving them all would be annoying. For another game, *Citadels*, some information is very important to keep secret (roles) so I sleeve these cards to prevent marking, but other cards (buildings) are not as important to keep secret so I do not bother sleeving them. The same idea applies with *Battlestar Galactica*.
Another, less obvious reason to sleeve your cards is when you suspect they could be easily damaged. That can happen when you play multiplayer games like Dixit Odyssey on a lake holiday with slightly drunk friends, when everyone probably has dirty hands, not to mention the few times sleeves saved my cards when someone knocked over candles or glasses of juice or coke. Then you are really happy that you bought sleeves.
115,474
I came across a bar that sounds like this while transcribing a piece for keyboard. Is there a correct/better way to write this? [![it sounds like this](https://i.stack.imgur.com/3R2AN.png)](https://i.stack.imgur.com/3R2AN.png)
2021/06/21
[ "https://music.stackexchange.com/questions/115474", "https://music.stackexchange.com", "https://music.stackexchange.com/users/79727/" ]
Just based off of the span of these pitches, most players can't play all of these pitches with one hand (assuming either a B♭–D or B–D♯ span). As such, I would argue that pedaling is the best bet. Just write the pitches as normal eighth notes, but indicate to the player to hold down the pedal, thereby sustaining the pitches until the pedal sign changes: [![enter image description here](https://i.stack.imgur.com/YClKl.png)](https://i.stack.imgur.com/YClKl.png) It's true that pedaling could muddy up the melody notes in the right hand (depending on how active they are), but an accomplished pianist should be able to use pedal without sacrificing melodic clarity. If the right hand is too much for the pedal, you could also clear up the notation by using simple eighth notes (as in the above example) but with a text indicator to the performer: "sustain all left-hand pitches for the duration of the measure." PS: How funny that [When piano pedalling seems to conflict with the original score](https://music.stackexchange.com/q/115473/21766) was asked almost simultaneously with this one, since it connects with my answer exactly.
That's the way I've seen it. Obviously a wavy line won't do the job, a player would play the notes too quickly in succession. The other way would be to write the lowest note as full value, and write as SATB, using rests before each of the other notes, which would then have their own appropriate values, losing a quaver each time in your example. But that's even messier. Or - write out the first 3 quavers consecutively, each tied across to those same notes to, say, a minim chord to fill the bar. It's not technically correct, but tidies it all up, and the expected result is more than apparent.
115,474
I came across a bar that sounds like this while transcribing a piece for keyboard. Is there a correct/better way to write this? [![it sounds like this](https://i.stack.imgur.com/3R2AN.png)](https://i.stack.imgur.com/3R2AN.png)
2021/06/21
[ "https://music.stackexchange.com/questions/115474", "https://music.stackexchange.com", "https://music.stackexchange.com/users/79727/" ]
A broken chord like this is notated with **double stems for each note**. One set of stems corresponds to the eighth-note onsets of each pitch. The second set of stems corresponds to the duration of each pitch and shows the tie to the final, full chord. [![Notated arpeggio with double note-heads](https://i.stack.imgur.com/9Ve5v.png)](https://i.stack.imgur.com/9Ve5v.png) It is **acceptable to use shared note heads** in this kind of situation. For example, the first note could have a half-note head, but an eighth-note stem. [![Shared note head arpeggio](https://i.stack.imgur.com/Fw3Dt.png)](https://i.stack.imgur.com/Fw3Dt.png) Here is a similar example from the left hand, final two measures of Brahms's "Intermezzo" Op. 119, No. 2. Note here that Brahms clearly expects the fingers to sustain the pitches. The pedal marking is not for the sustain, but rather for the "warmth" of the overall sound. [![Brahms Op. 119, No. 2, LH, final two bars](https://i.stack.imgur.com/6Ugx0.png)](https://i.stack.imgur.com/6Ugx0.png) A second example, from Chopin, shows **the importance of the ties versus a simple pedal indication**.1 This first image is the first four measures of Chopin's Prelude in C major, Op. 28, No. 1, from the Paderewski edition (i.e., this is Paderewski's interpretation of how this should be played). Note the one-pedal-per-measure scheme, which is marked "Ped. simile" ("keep pedaling in the same way") in the fourth measure. [![Chopin Prelude Op. 28, No. 1, mm. 1–4](https://i.stack.imgur.com/8X9Xs.png)](https://i.stack.imgur.com/8X9Xs.png) However, this notation clearly does not mean to hold all notes of the chords. The sound would be extremely muddy. Consider this notation in light of the final three measures (mm. 32–34), shown below. Measure 32 is played as shown in mm. 1–4, but measures 33 and 34 are explicitly tied to illustrate the intention of a slowly arpeggiated block chord. (Also, the **absence of double note heads is another alternative notation** that is sometimes used.) [![Chopin Op. 28, No. 1, mm. 32–34](https://i.stack.imgur.com/p8xFD.png)](https://i.stack.imgur.com/p8xFD.png)
That's the way I've seen it. Obviously a wavy line won't do the job, a player would play the notes too quickly in succession. The other way would be to write the lowest note as full value, and write as SATB, using rests before each of the other notes, which would then have their own appropriate values, losing a quaver each time in your example. But that's even messier. Or - write out the first 3 quavers consecutively, each tied across to those same notes to, say, a minim chord to fill the bar. It's not technically correct, but tidies it all up, and the expected result is more than apparent.
115,474
I came across a bar that sounds like this while transcribing a piece for keyboard. Is there a correct/better way to write this? [![it sounds like this](https://i.stack.imgur.com/3R2AN.png)](https://i.stack.imgur.com/3R2AN.png)
2021/06/21
[ "https://music.stackexchange.com/questions/115474", "https://music.stackexchange.com", "https://music.stackexchange.com/users/79727/" ]
A broken chord like this is notated with **double stems for each note**. One set of stems corresponds to the eighth-note onsets of each pitch. The second set of stems corresponds to the duration of each pitch and shows the tie to the final, full chord. [![Notated arpeggio with double note-heads](https://i.stack.imgur.com/9Ve5v.png)](https://i.stack.imgur.com/9Ve5v.png) It is **acceptable to use shared note heads** in this kind of situation. For example, the first note could have a half-note head, but an eighth-note stem. [![Shared note head arpeggio](https://i.stack.imgur.com/Fw3Dt.png)](https://i.stack.imgur.com/Fw3Dt.png) Here is a similar example from the left hand, final two measures of Brahms's "Intermezzo" Op. 119, No. 2. Note here that Brahms clearly expects the fingers to sustain the pitches. The pedal marking is not for the sustain, but rather for the "warmth" of the overall sound. [![Brahms Op. 119, No. 2, LH, final two bars](https://i.stack.imgur.com/6Ugx0.png)](https://i.stack.imgur.com/6Ugx0.png) A second example, from Chopin, shows **the importance of the ties versus a simple pedal indication**.1 This first image is the first four measures of Chopin's Prelude in C major, Op. 28, No. 1, from the Paderewski edition (i.e., this is Paderewski's interpretation of how this should be played). Note the one-pedal-per-measure scheme, which is marked "Ped. simile" ("keep pedaling in the same way") in the fourth measure. [![Chopin Prelude Op. 28, No. 1, mm. 1–4](https://i.stack.imgur.com/8X9Xs.png)](https://i.stack.imgur.com/8X9Xs.png) However, this notation clearly does not mean to hold all notes of the chords. The sound would be extremely muddy. Consider this notation in light of the final three measures (mm. 32–34), shown below. Measure 32 is played as shown in mm. 1–4, but measures 33 and 34 are explicitly tied to illustrate the intention of a slowly arpeggiated block chord. (Also, the **absence of double note heads is another alternative notation** that is sometimes used.) [![Chopin Op. 28, No. 1, mm. 32–34](https://i.stack.imgur.com/p8xFD.png)](https://i.stack.imgur.com/p8xFD.png)
Just based off of the span of these pitches, most players can't play all of these pitches with one hand (assuming either a B♭–D or B–D♯ span). As such, I would argue that pedaling is the best bet. Just write the pitches as normal eighth notes, but indicate to the player to hold down the pedal, thereby sustaining the pitches until the pedal sign changes: [![enter image description here](https://i.stack.imgur.com/YClKl.png)](https://i.stack.imgur.com/YClKl.png) It's true that pedaling could muddy up the melody notes in the right hand (depending on how active they are), but an accomplished pianist should be able to use pedal without sacrificing melodic clarity. If the right hand is too much for the pedal, you could also clear up the notation by using simple eighth notes (as in the above example) but with a text indicator to the performer: "sustain all left-hand pitches for the duration of the measure." PS: How funny that [When piano pedalling seems to conflict with the original score](https://music.stackexchange.com/q/115473/21766) was asked almost simultaneously with this one, since it connects with my answer exactly.
1,400,814
I have an issue installing Ubuntu 20.04 on my Dell XPS 15 9550. Here are my actions: 1. Created a boot drive using Rufus in MBR mode 2. Disabled Intel Rapid Storage controller 3. Chose my USB stick from UEFI boot section 4. After GRUB I got a black screen 5. Attempted to fix the issue by changing quiet splash to nomodeset (didn't work) 6. Attempted to boot from legacy boot section (got me into the live image) 7. Upon trying to install Ubuntu Windows 10 was not detected (theory: Ubuntu can't see Windows as the live image was launched in legacy mode) 8. Attempted to partition the drive manually and install Ubuntu that way, however GRUB failed to install to its partition 9. Formatted the USB drive from Windows using Rufus again, but this time using GPT partition scheme 10. The drive wasn't detected as a bootable device At this point I am not sure what to do. I'm guessing that I need to change some of my BIOS settings, but I don't know which ones or to what values. The strange thing is that I managed to install Fedora dual boot previously on this machine, but after removing it I wasn't able to install Ubuntu. (I removed both the partition and the GRUB bootloader to get a clean start) Any advice will be welcome! edit 1: So I added another EFI boot option by using the BOOTX64.EFI file in my BIOS; however, I am met with a black screen after GRUB. I tried nomodeset again but that did not help.
2022/04/04
[ "https://askubuntu.com/questions/1400814", "https://askubuntu.com", "https://askubuntu.com/users/1578630/" ]
You probably cannot select the SD card as a boot medium (like USB, CD or LAN) in your BIOS. If it is not available for boot, I don't think you can use it for an OS. You might stick the SD card in a USB card reader and see if a default dual-boot install or boot-repair will work.
It depends on the type of Microsoft Surface Pro you have. Model 3 can boot from SD, model 4 can't. Please take a look at the Microsoft Community, e.g. <https://answers.microsoft.com/en-us/surface/forum/all/surface-pro-4-boot-from-sd-card/b0eeb516-50b3-4460-8492-6ebb4edd9648>. It does seem to be able to boot from USB, so maybe a USB SD card reader is an option. I'm not sure though. Boot from USB: <https://support.microsoft.com/en-us/surface/boot-surface-from-a-usb-device-fe7a7323-8d1d-823d-be17-9aec89c4f9f5>.
1,400,814
I have an issue installing Ubuntu 20.04 on my Dell XPS 15 9550. Here are my actions: 1. Created a boot drive using Rufus in MBR mode 2. Disabled Intel Rapid Storage controller 3. Chose my USB stick from UEFI boot section 4. After GRUB I got a black screen 5. Attempted to fix the issue by changing quiet splash to nomodeset (didn't work) 6. Attempted to boot from legacy boot section (got me into the live image) 7. Upon trying to install Ubuntu Windows 10 was not detected (theory: Ubuntu can't see Windows as the live image was launched in legacy mode) 8. Attempted to partition the drive manually and install Ubuntu that way, however GRUB failed to install to its partition 9. Formatted the USB drive from Windows using Rufus again, but this time using GPT partition scheme 10. The drive wasn't detected as a bootable device At this point I am not sure what to do. I'm guessing that I need to change some of my BIOS settings, but I don't know which ones or to what values. The strange thing is that I managed to install Fedora dual boot previously on this machine, but after removing it I wasn't able to install Ubuntu. (I removed both the partition and the GRUB bootloader to get a clean start) Any advice will be welcome! edit 1: So I added another EFI boot option by using the BOOTX64.EFI file in my BIOS; however, I am met with a black screen after GRUB. I tried nomodeset again but that did not help.
2022/04/04
[ "https://askubuntu.com/questions/1400814", "https://askubuntu.com", "https://askubuntu.com/users/1578630/" ]
You probably cannot select the SD card as a boot medium (like USB, CD or LAN) in your BIOS. If it is not available for boot, I don't think you can use it for an OS. You might stick the SD card in a USB card reader and see if a default dual-boot install or boot-repair will work.
First of all, thank you for your suggestions! I found a solution: I followed (mostly) this [guide](https://blog.hackdesk.com/running-ubuntu-on-micro-sd-card-on-surface-pro-4-dfe9e38e17e1). I changed: * Step 21: I've chosen "erase disk and install ubuntu" * Step 22&23: Choose the SD-Card * before Step 28: check the names of your drives with "sudo fdisk -l" (see Zhihao Wu's [Comment](https://medium.com/@zhihaowu208/thank-you-for-writing-this-comprehensive-tutorial-6588e4e0cdf)) * Step 30: Don't drag "ubuntu" to the top (See MegaKyurem's [comment](https://www.reddit.com/r/SurfaceLinux/comments/9lrf6e/how_do_i_dual_boot_from_a_micro_sd_card_on_a/)). My Boot-order is: **USB** - Ubuntu - Windows Reboot and it should work! You can even turn "secure boot" in UEFI back on (if you don't want the read lock to appear on your boot-screen).
1,400,814
I have an issue installing Ubuntu 20.04 on my Dell XPS 15 9550. Here are my actions: 1. Created a boot drive using Rufus in MBR mode 2. Disabled Intel Rapid Storage controller 3. Chose my USB stick from UEFI boot section 4. After GRUB I got a black screen 5. Attempted to fix the issue by changing quiet splash to nomodeset (didn't work) 6. Attempted to boot from legacy boot section (got me into the live image) 7. Upon trying to install Ubuntu Windows 10 was not detected (theory: Ubuntu can't see Windows as the live image was launched in legacy mode) 8. Attempted to partition the drive manually and install Ubuntu that way, however GRUB failed to install to its partition 9. Formatted the USB drive from Windows using Rufus again, but this time using GPT partition scheme 10. The drive wasn't detected as a bootable device At this point I am not sure what to do. I'm guessing that I need to change some of my BIOS settings, but I don't know which ones or to what values. The strange thing is that I managed to install Fedora dual boot previously on this machine, but after removing it I wasn't able to install Ubuntu. (I removed both the partition and the GRUB bootloader to get a clean start) Any advice will be welcome! edit 1: So I added another EFI boot option by using the BOOTX64.EFI file in my BIOS; however, I am met with a black screen after GRUB. I tried nomodeset again but that did not help.
2022/04/04
[ "https://askubuntu.com/questions/1400814", "https://askubuntu.com", "https://askubuntu.com/users/1578630/" ]
First of all, thank you for your suggestions! I found a solution: I followed (mostly) this [guide](https://blog.hackdesk.com/running-ubuntu-on-micro-sd-card-on-surface-pro-4-dfe9e38e17e1). I changed: * Step 21: I've chosen "erase disk and install ubuntu" * Step 22&23: Choose the SD-Card * before Step 28: check the names of your drives with "sudo fdisk -l" (see Zhihao Wu's [Comment](https://medium.com/@zhihaowu208/thank-you-for-writing-this-comprehensive-tutorial-6588e4e0cdf)) * Step 30: Don't drag "ubuntu" to the top (See MegaKyurem's [comment](https://www.reddit.com/r/SurfaceLinux/comments/9lrf6e/how_do_i_dual_boot_from_a_micro_sd_card_on_a/)). My Boot-order is: **USB** - Ubuntu - Windows Reboot and it should work! You can even turn "secure boot" in UEFI back on (if you don't want the read lock to appear on your boot-screen).
It depends on the type of Microsoft Surface Pro you have. Model 3 can boot from SD, model 4 can't. Please take a look at the Microsoft Community, e.g. <https://answers.microsoft.com/en-us/surface/forum/all/surface-pro-4-boot-from-sd-card/b0eeb516-50b3-4460-8492-6ebb4edd9648>. It does seem to be able to boot from USB, so maybe a USB SD card reader is an option. I'm not sure though. Boot from USB: <https://support.microsoft.com/en-us/surface/boot-surface-from-a-usb-device-fe7a7323-8d1d-823d-be17-9aec89c4f9f5>.
135,237
I am currently finishing my masters in Germany and I will start my PhD soon. I will be having a TV-L contract, however, I haven't acquired my official diploma yet. In case that the bureaucratic stuff are not finished before my starting day, would I still be able to sign the contract by proving I have submitted my thesis (or getting something like 4,0 bescheinigung)? Or do I have to wait until I am officially graduated?
2019/08/26
[ "https://academia.stackexchange.com/questions/135237", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/102107/" ]
I know some people who signed their contract with a "4,0 Bescheinigung" and got the final mark afterwards. As far as I know, submitting your thesis is not enough; you have to prove that you passed your masters. But it would be better if you ask your supervisor or the people in HR.
To sign a TV-L 13 contract, you definitely need proof that you have a master's degree (or something equivalent). I think a 4.0-Bescheinigung should suffice, but this might also depend on your university's policies. As pointed out already, go check with your university's HR department.
135,237
I am currently finishing my masters in Germany and I will start my PhD soon. I will be having a TV-L contract, however, I haven't acquired my official diploma yet. In case that the bureaucratic stuff are not finished before my starting day, would I still be able to sign the contract by proving I have submitted my thesis (or getting something like 4,0 bescheinigung)? Or do I have to wait until I am officially graduated?
2019/08/26
[ "https://academia.stackexchange.com/questions/135237", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/102107/" ]
There is no general answer I guess. I know how my university handles this (in Bavaria): If you have no official diploma yet, you will be employed within TV-L 12. Once you provide the diploma, you will be "upgraded" to TV-L 13. A 4,0-Bescheinigung is not sufficient. I'm sure you can find out quickly by calling someone from HR.
To sign a TV-L 13 contract, you definitely need proof that you have a master's degree (or something equivalent). I think a 4.0-Bescheinigung should suffice, but this might also depend on your university's policies. As pointed out already, go check with your university's HR department.
453,405
I've seen many people add logotypes and other graphics in the copper layers of PCBs, often also removing the solder mask in these places to expose the bare (or rather plated) copper. Is there anything electrically, mechanically or chemically that makes this a bad idea?
2019/08/17
[ "https://electronics.stackexchange.com/questions/453405", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4303/" ]
Nope, there really aren't many downsides, especially with solder mask openings. The solder mask is electrically quite insignificant. If you had the opening on an impedance-controlled trace it would have a slight effect, but you wouldn't put it there anyway, because it would be ugly. If you have an ENIG finish on the board, there's added cost, because there will be gold on the logo as well. You can avoid the cost by placing the logo somewhere with no copper under the solder mask, but then it isn't as nice and flashy. You can't make fancy or small graphics with solder mask, because thin lines don't stick to the board well. Etching the logo into copper might have some effect on EMC, depending on what's underneath. If there's a ground plane on the next layer, then you most likely don't have any issues. Modern etching processes are quite impressive, so with copper you can get a bit more detail than with solder mask, but it might come with added cost if the board doesn't otherwise have thin traces on that layer. And there's also the added cost of the extra gold with an ENIG finish.
We do this in our designs so that we can save money by totally avoiding silkscreen printing. The text as such will be small, isolated from all other sections of the circuitry, and doesn't create any trouble with EMI/EMC. Of course, this wouldn't be placed somewhere close to impedance-matched tracks, but rather in a place which is easy to read without harming any other sensitive sections.
453,405
I've seen many people add logotypes and other graphics in the copper layers of PCBs, often also removing the solder mask in these places to expose the bare (or rather plated) copper. Is there anything electrically, mechanically or chemically that makes this a bad idea?
2019/08/17
[ "https://electronics.stackexchange.com/questions/453405", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4303/" ]
The answer here is common sense. If the logo is placed in some previously empty space and its presence does not require any component placement or routing change, there is no harm in doing it. If you're already using ENIG, by all means open the solder mask and make the logo shiny. I doubt that the PCB fab house will charge you more for the extra gold used. If there is no plating, refrain from leaving exposed copper - that's never a good thing. If the logo disrupts a ground that is critical for shielding or for controlled-impedance traces, or if the logo's presence risks creating an undesired coupling path, resist the temptation to do it. Functionality is way more important!
We do this in our designs so that we can save money by totally avoiding silkscreen printing. The text as such will be small, isolated from all other sections of the circuitry, and doesn't create any trouble with EMI/EMC. Of course, this wouldn't be placed somewhere close to impedance-matched tracks, but rather in a place which is easy to read without harming any other sensitive sections.
453,405
I've seen many people add logotypes and other graphics in the copper layers of PCBs, often also removing the solder mask in these places to expose the bare (or rather plated) copper. Is there anything electrically, mechanically or chemically that makes this a bad idea?
2019/08/17
[ "https://electronics.stackexchange.com/questions/453405", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/4303/" ]
Nope, there really aren't many downsides, especially with solder mask openings. The solder mask is electrically quite insignificant. If you had the opening on an impedance-controlled trace it would have a slight effect, but you wouldn't put it there anyway, because it would be ugly. If you have an ENIG finish on the board, there's added cost, because there will be gold on the logo as well. You can avoid the cost by placing the logo somewhere with no copper under the solder mask, but then it isn't as nice and flashy. You can't make fancy or small graphics with solder mask, because thin lines don't stick to the board well. Etching the logo into copper might have some effect on EMC, depending on what's underneath. If there's a ground plane on the next layer, then you most likely don't have any issues. Modern etching processes are quite impressive, so with copper you can get a bit more detail than with solder mask, but it might come with added cost if the board doesn't otherwise have thin traces on that layer. And there's also the added cost of the extra gold with an ENIG finish.
The answer here is common sense. If the logo is placed in some previously empty space and its presence does not require any component placement or routing change, there is no harm in doing it. If you're already using ENIG, by all means open the solder mask and make the logo shiny. I doubt that the PCB fab house will charge you more for the extra gold used. If there is no plating, refrain from leaving exposed copper - that's never a good thing. If the logo disrupts a ground that is critical for shielding or for controlled-impedance traces, or if the logo's presence risks creating an undesired coupling path, resist the temptation to do it. Functionality is way more important!
39,470,651
We had a period of latency in our application that was directly correlated with latency in DynamoDB and we are trying to figure out what caused that latency. During that time, the consumed reads and consumed writes for the table were normal (much below the provisioned capacity) and the number of throttled requests was also 0 or 1. The only thing that increased was the SuccessfulRequestLatency. The high latency occurred during a period where we were doing a lot of automatic writes. In our use case, writing to dynamo also includes some reading (to get any existing records). However, we often write the same quantity of data in the same period of time without causing any increased latency. Is there any way to understand what contributes to an increase in SuccessfulRequest latency where it seems that we have provisioned enough read capacity? Is there any way to diagnose the latency caused by this set of writes to dynamodb?
2016/09/13
[ "https://Stackoverflow.com/questions/39470651", "https://Stackoverflow.com", "https://Stackoverflow.com/users/277659/" ]
Hi, you missed the quotes for 'False'.
The problem lies with your vehicleid, this has to have a value and '0' is an incorrect number for a record/field. Change this to '1' or more.
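As an illustration of the kind of diagnosis the DynamoDB question above asks about: a minimal sketch, assuming boto3 and a placeholder table name and time window (none of which come from the original post), of pulling the SuccessfulRequestLatency metric per operation from CloudWatch so the latency spike can be correlated with a specific call type (for example, the reads performed before each write).

```python
# Hedged sketch: query CloudWatch for DynamoDB SuccessfulRequestLatency,
# broken down by operation, over a recent window. "my-table" and the
# six-hour window are placeholders, not values from the original question.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=6)

for operation in ("GetItem", "Query", "PutItem", "UpdateItem"):
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/DynamoDB",
        MetricName="SuccessfulRequestLatency",
        Dimensions=[
            {"Name": "TableName", "Value": "my-table"},   # placeholder
            {"Name": "Operation", "Value": operation},
        ],
        StartTime=start,
        EndTime=end,
        Period=300,                       # 5-minute buckets
        Statistics=["Average", "Maximum"],
        Unit="Milliseconds",
    )
    for point in sorted(resp["Datapoints"], key=lambda d: d["Timestamp"]):
        print(operation, point["Timestamp"], point["Average"], point["Maximum"])
```

If the maximum spikes only for the read operations, the reads done before each write are the likely contributor; if it spikes across all operations, the cause is more likely on the table or network side.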
29,154
I am not a pilot and have come to this site to ask a question about the measurement of height. This is a result of the comments made on a question in the Physics Stack Exchange forum ["Does barometer improve gps accuracy?"](https://physics.stackexchange.com/questions/261185/does-barometer-improve-gps-position-accuracy/261240#261240) I feel that there will be more of you on this site who have real-world experience with flying and so can answer my question, which is: *Which is more accurate for height measurement: GPS or barometer?* I have noted that those engaged in geocaching often have GPS units with built-in barometers to improve height accuracy. One of my friends who is a pilot responded as follows: > > A light aircraft relies on a barometer that is set to the standard > 1013 millibars when in cruise at height (or the regional QNH when > lower) but they set to the local reading for the airfield (QFE) when > planning their descent. Commercial aircraft also have ground proximity > radar but they also have a standard set of old fashioned flight > instruments in case the electronics fail. My old aircraft GPS did > indeed read out height as well but you were told never to rely on it! > > >
2016/06/09
[ "https://aviation.stackexchange.com/questions/29154", "https://aviation.stackexchange.com", "https://aviation.stackexchange.com/users/15334/" ]
Directly comparing the accuracy of GPS, barometric altimeter and radio altimeter makes little sense, because they each measure a **different quantity**. * *GPS* measures ‘geometric altitude’. This is the actual height above the reference ellipsoid, in units of length. It is fairly accurate, but there are external reasons it might be unavailable. * The *barometric altimeter* measures ‘pressure altitude’. It is given in units of length, but it really is just barometric pressure expressed on a funny scale. It only corresponds to altitude when the atmospheric conditions match the “International Standard Atmosphere,” which they of course usually don't. The biggest error is due to pressure, which might be corrected using the altimeter setting. With this, the altimeter reading is a better approximation of actual altitude, but since temperature also changes the rate of pressure change with altitude, it still only matches the geometric altitude exactly on the ground at the airport the altimeter setting is for. * The *radio altimeter* measures ‘height’ above ground level. Now in aircraft, each of these quantities has a different use: * The geometric altitude from GPS is, together with a topographic map, used in EGPWS to alert pilots that they are flying too low. Compared to older GPWS that only used the radio altimeter, this has the advantage that a warning can be given when the aircraft is high enough above the ground directly below, but approaching higher ground where it won't be. GPS is good for this, because when it works, it always gives the geometric altitude without needing a setting that might be incorrect. However, it is a new technology that not all aircraft have and it might be unavailable for external reasons (that can't be mitigated by adding redundancy). * The barometric altimeter is used for aircraft separation. Here, the exact altitude does not matter; what is important is whether the aircraft are at the same altitude or not. The barometric altimeter is excellent for this, because it is old technology, so all aircraft since the early days have it, and it is simple technology, so it is reliable. Above a certain altitude (the ‘transition altitude’, 18,000 ft in the USA, but it varies in other parts of the world), all aircraft have the altimeter set to the standard value of 29.92 inHg/1013 hPa. This high, there is no concern about separation from terrain, so everybody just uses the same setting to keep things simpler and errors less likely. The pressure altitude can easily differ from the geometric by a couple thousand feet, but nobody cares, because it is only important to know whether the other plane is above, below or at the same altitude, and this serves that purpose well. Below the transition altitude, the altimeter setting from the nearest airport is used, so the altimeter reading better approximates the geometric altitude for the purpose of separation from terrain. However, the altitudes still only match at the altitude of the airport the setting is from. Above (and below) that, the error increases. In cold weather, your height above ground can easily be 10% lower than the altimeter says. This is simply handled by adding sufficient margins to the published minimum altitudes. * The height measured by the radio altimeter is used in the GPWS and EGPWS systems to warn the pilot of flying too close to the ground, and for announcing the height during final approach so the pilot can better judge the landing without having to look at the instrument. However, due to terrain irregularities, radio height is not much use outside these two specific cases.
Also, radio altimeters usually only indicate up to 1,500 ft–2,500 ft, if installed at all (all airliners have one, but most GA planes don't). The radio altimeter is accurate to a few feet. The barometric altimeter has to be accurate, IIRC, to 75 ft. Even a simple GPS receiver should be able to do that too, but since the values can easily be off by a few *thousand* feet at high altitudes, they can't be mixed. So barometric altitude is always used for traffic control.
To amend the already existing answers: very close to the ground, GPS altitude is unusable for aviation purposes. The vertical error of a GPS position is roughly 1.5 times the lateral error. An unaugmented GPS signal might have a position error of over 10 meters, giving an even larger vertical error. For that reason, vertical GPS position is not used for approaches if no augmentation method is available. Even then, GPS altitude is not used to determine approach minima, which are defined as barometric altitudes. There is also the problem of momentary degradation of GPS accuracy due to the satellite constellation or interference from buildings, terrain, weather, jamming etc., which would be unacceptable if the signal were used for terrain clearance. During takeoff and landing, barometric altitude is referenced to local air pressure, making it very accurate locally at low altitude. In cruise, as mentioned, the accuracy does not matter as much, since aircraft maintain suitable clearance from the terrain in the area, so even in case of a lateral navigation error there will not be a collision. The benefits of barometric altitude are separation from other aircraft, as any aircraft arriving from distant corners of the world are using the same barometric reference of 1013 hPa (29.92 inHg) during cruise, and the matter of aircraft performance. Aircraft performance depends on local air pressure and temperature, so fixing the altitude to the barometric reference reduces aircraft performance to a single-variable problem, i.e. temperature.
29,154
I am not a pilot and have come to this site to ask a question about the measurement of height. This is a result of the comments made on a question in the Physics Stack Exchange forum ["Does barometer improve gps accuracy?"](https://physics.stackexchange.com/questions/261185/does-barometer-improve-gps-position-accuracy/261240#261240) I feel that there will be more of you on this site who have real-world experience with flying and so can answer my question, which is: *Which is more accurate for height measurement: GPS or barometer?* I have noted that those engaged in geocaching often have GPS units with built-in barometers to improve height accuracy. One of my friends who is a pilot responded as follows: > > A light aircraft relies on a barometer that is set to the standard > 1013 millibars when in cruise at height (or the regional QNH when > lower) but they set to the local reading for the airfield (QFE) when > planning their descent. Commercial aircraft also have ground proximity > radar but they also have a standard set of old fashioned flight > instruments in case the electronics fail. My old aircraft GPS did > indeed read out height as well but you were told never to rely on it! > > >
2016/06/09
[ "https://aviation.stackexchange.com/questions/29154", "https://aviation.stackexchange.com", "https://aviation.stackexchange.com/users/15334/" ]
For surveying glacier movement, or a mountain range's growth/decay, GPS is more accurate. For airplanes, refresh rate, reliability, and simplicity matter more. When it comes down to accuracy, ground-augmented GPS provides ILS CAT I accuracy using [LPV](https://en.wikipedia.org/wiki/Localizer_performance_with_vertical_guidance). So alongside refresh rate and accuracy comes the question of application. For airplanes in cruise, where separation matters, reliable devices are best. GPS signals can be jammed, as can be seen in the news lately in [California](https://www.aopa.org/news-and-media/all-news/2016/june/08/navy-cancels-planned-gps-outage-in-southern-california) and [Egypt](http://www.mirror.co.uk/news/world-news/fresh-terror-fears-hackers-trying-8074849), and the GPS signal can be lost due to ionospheric interference. So any equipment that doesn't rely on external sources is safer.
Most likely GPS is more accurate; after all, it can calculate your location in three-dimensional space within a few feet. However, it's not really that usable, since GPS measures altitude from the [geoid](https://en.wikipedia.org/wiki/Geoid). That said, new GPS receivers should be able to correct for the difference between the geoid and the actual earth surface. Baro-altimeters are also quite accurate, but only when the atmosphere is at [ISA](https://en.wikipedia.org/wiki/International_Standard_Atmosphere) conditions. Baro-altimeters are calibrated to the ISA atmosphere, and when any condition varies from it, you get an error. Some of those errors, for example the temperature error, can be compensated for, but removing all potential errors is a tedious task. If I were geocaching and wanted the most accurate elevation information for my location, I would use an old-fashioned geographic map with elevation information.
29,154
I am not a pilot and have come to this site to ask a question about the measurement of height. This is a result of the comments made on a question in the Physics Stack Exchange forum ["Does barometer improve gps accuracy?"](https://physics.stackexchange.com/questions/261185/does-barometer-improve-gps-position-accuracy/261240#261240) I feel that there will be more of you on this site who have real-world experience with flying and so can answer my question, which is: *Which is more accurate for height measurement: GPS or barometer?* I have noted that those engaged in geocaching often have GPS units with built-in barometers to improve height accuracy. One of my friends who is a pilot responded as follows: > > A light aircraft relies on a barometer that is set to the standard > 1013 millibars when in cruise at height (or the regional QNH when > lower) but they set to the local reading for the airfield (QFE) when > planning their descent. Commercial aircraft also have ground proximity > radar but they also have a standard set of old fashioned flight > instruments in case the electronics fail. My old aircraft GPS did > indeed read out height as well but you were told never to rely on it! > > >
2016/06/09
[ "https://aviation.stackexchange.com/questions/29154", "https://aviation.stackexchange.com", "https://aviation.stackexchange.com/users/15334/" ]
This depends on the accuracy of the barometer and on the weather, since the weather influences the air pressure. So even if you're standing still at the same altitude, the barometer will show you different readings. This is why a pilot constantly has to adjust his altimeter. An altimeter is designed to measure the altitude above a fixed level. This is why pilots have to calibrate their altimeters before take-off with ATC. The GPS system, on the other hand, works with an accurate clock and the known positions of the satellites. Each satellite sends out packets which are time-stamped. The receiver receives these time-stamped packets and uses its own calibrated clock to determine the travel times. Comparing these to the known positions of the satellites allows the use of triangulation formulas to solve for the position of the receiver. The GPS system is owned by the US military and they have this graph about the accuracy improvements over the years. [![source: http://www.gps.gov/systems/gps/performance/accuracy/URE.pdf](https://i.stack.imgur.com/EwrZD.png)](https://i.stack.imgur.com/EwrZD.png) So to answer your question, it is determined by the equipment you use, but cost-wise it is cheaper to get an accurate reading from a GPS, since it is also less influenced by the weather.
To amend the already existing answers: very close to the ground, GPS altitude is unusable for aviation purposes. The vertical error of a GPS position is roughly 1.5 times the lateral error. An unaugmented GPS signal might have a position error of over 10 meters, giving an even larger vertical error. For that reason, vertical GPS position is not used for approaches if no augmentation method is available. Even then, GPS altitude is not used to determine approach minima, which are defined as barometric altitudes. There is also the problem of momentary degradation of GPS accuracy due to the satellite constellation or interference from buildings, terrain, weather, jamming etc., which would be unacceptable if the signal were used for terrain clearance. During takeoff and landing, barometric altitude is referenced to local air pressure, making it very accurate locally at low altitude. In cruise, as mentioned, the accuracy does not matter as much, since aircraft maintain suitable clearance from the terrain in the area, so even in case of a lateral navigation error there will not be a collision. The benefits of barometric altitude are separation from other aircraft, as any aircraft arriving from distant corners of the world are using the same barometric reference of 1013 hPa (29.92 inHg) during cruise, and the matter of aircraft performance. Aircraft performance depends on local air pressure and temperature, so fixing the altitude to the barometric reference reduces aircraft performance to a single-variable problem, i.e. temperature.
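A minimal sketch of the position calculation described in the answer above. The satellite coordinates and pseudoranges below are invented for illustration (nothing here comes from the original answer), and the receiver position is found by iterative least squares; in practice the receiver's own clock error is estimated alongside the position, which is why the receiver clock itself does not need to be highly accurate.

```python
# Hedged sketch: solve receiver position (and a clock-bias term) from
# pseudoranges to satellites at known positions, via Gauss-Newton least
# squares. All numbers below are made up for illustration.
import numpy as np

def solve_position(sat_positions, pseudoranges, iterations=10):
    sats = np.asarray(sat_positions, dtype=float)
    rho = np.asarray(pseudoranges, dtype=float)
    state = np.zeros(4)                        # [x, y, z, c*clock_bias], metres
    for _ in range(iterations):
        diff = state[:3] - sats                # satellite-to-receiver vectors
        ranges = np.linalg.norm(diff, axis=1)  # geometric distances
        predicted = ranges + state[3]          # pseudorange model
        # Jacobian: unit vectors towards the receiver, plus a column of ones
        # for the clock-bias term.
        H = np.hstack([diff / ranges[:, None], np.ones((len(sats), 1))])
        correction, *_ = np.linalg.lstsq(H, rho - predicted, rcond=None)
        state = state + correction
    return state[:3], state[3]

if __name__ == "__main__":
    true_pos = np.array([6_371_000.0, 0.0, 0.0])    # receiver on the surface
    true_bias = 300.0                               # c * clock error, metres
    sats = np.array([
        [20_200_000.0, 10_000_000.0,   5_000_000.0],
        [18_000_000.0, -8_000_000.0,  12_000_000.0],
        [22_000_000.0,  3_000_000.0,  -9_000_000.0],
        [19_500_000.0, -2_000_000.0, -15_000_000.0],
    ])
    measured = np.linalg.norm(sats - true_pos, axis=1) + true_bias
    pos, bias = solve_position(sats, measured)
    print(pos, bias)   # converges back to true_pos and true_bias
```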
29,154
I am not a pilot and have come to this site to ask a question about the measurement of height. This is a result of the comments made on a question in the Physics Stack Exchange forum ["Does barometer improve gps accuracy?"](https://physics.stackexchange.com/questions/261185/does-barometer-improve-gps-position-accuracy/261240#261240) I feel that there will be more of you on this site who have real-world experience with flying and so can answer my question, which is: *Which is more accurate for height measurement: GPS or barometer?* I have noted that those engaged in geocaching often have GPS units with built-in barometers to improve height accuracy. One of my friends who is a pilot responded as follows: > > A light aircraft relies on a barometer that is set to the standard > 1013 millibars when in cruise at height (or the regional QNH when > lower) but they set to the local reading for the airfield (QFE) when > planning their descent. Commercial aircraft also have ground proximity > radar but they also have a standard set of old fashioned flight > instruments in case the electronics fail. My old aircraft GPS did > indeed read out height as well but you were told never to rely on it! > > >
2016/06/09
[ "https://aviation.stackexchange.com/questions/29154", "https://aviation.stackexchange.com", "https://aviation.stackexchange.com/users/15334/" ]
There is a pretty comprehensive description of the issues at [this link](http://www.borgeltinstruments.com/GPSvsPressurealtitude.pdf). Briefly - the conversion of pressure to altitude makes certain assumptions about the "standard atmosphere" - that is, it works best when the temperature and the lapse rate (the change in temperature with altitude) exactly match those from the model. In the real world, this is almost never the case. For example, on a hot day the air will be less dense, so the weight of a column of air will be smaller - and so when the pressure seems to have dropped by, say, 200 hPa, you will underestimate your altitude. The example calculation in the link shows that when the surface temperature on a dry day is 42°C, the error at 10,000 feet could be as much as 800 ft - which is considerably bigger than the typical error in a GPS (although GPS is less accurate in the vertical direction, you should routinely be able to get height within about 50 feet when you have an unobstructed view of the sky and ionospheric activity is not unusually high). In principle it is possible to make corrections for these things - but a simple mechanical pressure-based altimeter probably doesn't. On the other hand - if everybody else is flying by barometer, you might be "right" about your GPS-based altitude, but "wrong" compared to other planes in the vicinity. So it is advisable to use the same method as everyone else, so you don't crash into each other. On the other hand, when it comes to your final glide, the GPS is more likely to get you home safely on a hot day.
Most likely GPS is more accurate; after all, it can calculate your location in three-dimensional space within a few feet. However, it's not really that usable, since GPS measures altitude from the [geoid](https://en.wikipedia.org/wiki/Geoid). That said, new GPS receivers should be able to correct for the difference between the geoid and the actual earth surface. Baro-altimeters are also quite accurate, but only when the atmosphere is at [ISA](https://en.wikipedia.org/wiki/International_Standard_Atmosphere) conditions. Baro-altimeters are calibrated to the ISA atmosphere, and when any condition varies from it, you get an error. Some of those errors, for example the temperature error, can be compensated for, but removing all potential errors is a tedious task. If I were geocaching and wanted the most accurate elevation information for my location, I would use an old-fashioned geographic map with elevation information.
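A minimal sketch of the pressure-to-altitude conversion and the warm-day error described in the answer above. The constants are the standard ISA values; the 42°C surface temperature is the example figure quoted there, and the simple "same lapse rate, same sea-level pressure" warm-day model is my assumption, so the result only roughly reproduces the order of magnitude of the quoted 800 ft error.

```python
# Hedged sketch: ISA pressure-altitude conversion and the warm-day error.
# Constants are standard ISA values; the warm-day model (ISA lapse rate and
# sea-level pressure, but a 42 degC surface temperature) is an assumption
# made for illustration only.
T0_ISA = 288.15      # ISA sea-level temperature, K (15 degC)
LAPSE = 0.0065       # ISA temperature lapse rate, K/m
P0 = 1013.25         # ISA sea-level pressure, hPa
EXPONENT = 0.190263  # R*L/(g*M) for the ISA troposphere

def pressure_altitude_m(p_hpa: float) -> float:
    """Altitude an ISA-calibrated altimeter indicates for static pressure p."""
    return (T0_ISA / LAPSE) * (1.0 - (p_hpa / P0) ** EXPONENT)

def rough_true_altitude_m(p_hpa: float, surface_temp_k: float) -> float:
    """Approximate geometric altitude for the same pressure when the surface
    temperature differs from ISA (same lapse rate and sea-level pressure)."""
    return (surface_temp_k / LAPSE) * (1.0 - (p_hpa / P0) ** EXPONENT)

if __name__ == "__main__":
    p = 696.8  # hPa, roughly the ISA pressure at 10,000 ft
    indicated_ft = pressure_altitude_m(p) / 0.3048
    true_ft = rough_true_altitude_m(p, 273.15 + 42.0) / 0.3048
    print(f"indicated {indicated_ft:,.0f} ft, true about {true_ft:,.0f} ft, "
          f"difference {true_ft - indicated_ft:,.0f} ft")
```

With these numbers the true height comes out roughly 900 ft above the indicated 10,000 ft, the same order of magnitude as the figure quoted from the linked calculation.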
29,154
I am not a pilot and have come to this site to ask a question about the measurement of height. This is a result of the comments made on a question in the Physics Stack Exchange forum ["Does barometer improve gps accuracy?"](https://physics.stackexchange.com/questions/261185/does-barometer-improve-gps-position-accuracy/261240#261240) I feel that there will be more of you on this site who have real-world experience with flying and so can answer my question, which is: *Which is more accurate for height measurement: GPS or barometer?* I have noted that those engaged in geocaching often have GPS units with built-in barometers to improve height accuracy. One of my friends who is a pilot responded as follows: > > A light aircraft relies on a barometer that is set to the standard > 1013 millibars when in cruise at height (or the regional QNH when > lower) but they set to the local reading for the airfield (QFE) when > planning their descent. Commercial aircraft also have ground proximity > radar but they also have a standard set of old fashioned flight > instruments in case the electronics fail. My old aircraft GPS did > indeed read out height as well but you were told never to rely on it! > > >
2016/06/09
[ "https://aviation.stackexchange.com/questions/29154", "https://aviation.stackexchange.com", "https://aviation.stackexchange.com/users/15334/" ]
There is a pretty comprehensive description of the issues at [this link](http://www.borgeltinstruments.com/GPSvsPressurealtitude.pdf). Briefly - the conversion of pressure to altitude makes certain assumptions about the "standard atmosphere" - that is, it works best when the temperature and the lapse rate (the change in temperature with altitude) exactly match those from the model. In the real world, this is almost never the case. For example, on a hot day the air will be less dense, so the weight of a column of air will be smaller - and so when the pressure seems to have dropped by, say, 200 hPa, you will underestimate your altitude. The example calculation in the link shows that when the surface temperature on a dry day is 42°C, the error at 10,000 feet could be as much as 800 ft - which is considerably bigger than the typical error in a GPS (although GPS is less accurate in the vertical direction, you should routinely be able to get height within about 50 feet when you have an unobstructed view of the sky and ionospheric activity is not unusually high). In principle it is possible to make corrections for these things - but a simple mechanical pressure-based altimeter probably doesn't. On the other hand - if everybody else is flying by barometer, you might be "right" about your GPS-based altitude, but "wrong" compared to other planes in the vicinity. So it is advisable to use the same method as everyone else, so you don't crash into each other. On the other hand, when it comes to your final glide, the GPS is more likely to get you home safely on a hot day.
To amend the already existing answers: very close to the ground, GPS altitude is unusable for aviation purposes. The vertical error of a GPS position is roughly 1.5 times the lateral error, and an unaugmented GPS fix can be off by more than 10 metres laterally, giving an even larger vertical error. For that reason, GPS altitude is not used for approaches unless some augmentation method is available, and even then it is not used to determine the approach minima, which are defined as barometric altitudes. There is also the problem of momentary degradation of GPS accuracy due to the satellite constellation or to interference from buildings, terrain, weather, jamming, etc., which would be unacceptable if the signal were used for terrain clearance. During takeoff and landing, barometric altitude is referenced to the local air pressure, making it very accurate locally at low altitude. In cruise, as mentioned, absolute accuracy matters less because aircraft maintain a suitable clearance from the terrain in the area, so even a lateral navigation error will not cause a collision. The benefits of barometric altitude are separation from other aircraft, as aircraft arriving from distant corners of the world all use the same barometric reference of 1013 hPa (29.92 inHg) in cruise, and aircraft performance. Performance depends on local air pressure and temperature, so fixing the altitude to the barometric reference reduces the performance calculation to a single-variable problem, i.e. temperature.
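To illustrate why the shared barometric reference matters for separation, here is a small sketch (Python, using the rough 27-feet-per-hectopascal rule of thumb near sea level; the constant is approximate and purely for illustration):

```python
# Sketch: the same physical pressure level reads as different altitudes
# depending on the altimeter setting. Uses the approximate 27 ft-per-hPa
# sensitivity near sea level, a rule of thumb rather than an exact figure.

FT_PER_HPA = 27.0
STANDARD_HPA = 1013.25

def indicated_altitude_ft(pressure_altitude_ft: float, setting_hpa: float) -> float:
    """Altitude shown with a given subscale setting instead of the standard 1013.25 hPa."""
    return pressure_altitude_ft + (setting_hpa - STANDARD_HPA) * FT_PER_HPA

# Two aircraft at the same physical level (pressure altitude 5,000 ft):
print(indicated_altitude_ft(5000.0, 1013.25))  # standard setting -> 5000 ft
print(indicated_altitude_ft(5000.0, 1003.0))   # regional QNH 1003 -> about 4723 ft
```

As long as every aircraft in the same airspace uses the same reference, their indicated altitudes all shift by the same amount, so the vertical separation between them is preserved - which is exactly the point of the common 1013 hPa setting in cruise.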
29,154
I am not a pilot and have come to this site to ask a question about the measurement of height. This is a result of the comments made on a question in the Physics Stack Exchange forum, ["Does barometer improve gps accuracy?"](https://physics.stackexchange.com/questions/261185/does-barometer-improve-gps-position-accuracy/261240#261240) I feel that there will be more of you on this site who have real-world experience with flying and so can answer my question, which is: *Which is more accurate for height measurement: GPS or barometer?* I have noted that those engaged in geocaching often have GPS units with built-in barometers to improve height accuracy. One of my friends, who is a pilot, responded as follows: > > A light aircraft relies on a barometer that is set to the standard > 1013 millibars when in cruise at height (or the regional QNH when > lower) but they set to the local reading for the airfield (QFE) when > planning their descent. Commercial aircraft also have ground proximity > radar but they also have a standard set of old-fashioned flight > instruments in case the electronics fail. My old aircraft GPS did > indeed read out height as well but you were told never to rely on it! > > >
2016/06/09
[ "https://aviation.stackexchange.com/questions/29154", "https://aviation.stackexchange.com", "https://aviation.stackexchange.com/users/15334/" ]
Most likely GPS is more accurate; after all, it can calculate your location in three-dimensional space to within a few metres. However, the raw figure is not directly usable as an altitude, since GPS natively measures height above the WGS84 reference ellipsoid rather than above mean sea level (which is approximated by the [geoid](https://en.wikipedia.org/wiki/Geoid)). That said, modern GPS receivers apply a geoid model to convert the ellipsoidal height into a height above mean sea level. Baro-altimeters are also quite accurate, but only when the atmosphere is at [ISA](https://en.wikipedia.org/wiki/International_Standard_Atmosphere) conditions. Baro-altimeters are calibrated to the ISA atmosphere, and whenever the real atmosphere deviates from it, you get an error. Some of those errors, for example the temperature error, can be compensated for, but removing all potential errors is a tedious task. If I were geocaching and wanted the most accurate elevation for my location, I would use an old-fashioned topographic map with elevation contours.
To amend the already existing answers: very close to the ground, GPS altitude is unusable for aviation purposes. The vertical error of a GPS position is roughly 1.5 times the lateral error, and an unaugmented GPS fix can be off by more than 10 metres laterally, giving an even larger vertical error. For that reason, GPS altitude is not used for approaches unless some augmentation method is available, and even then it is not used to determine the approach minima, which are defined as barometric altitudes. There is also the problem of momentary degradation of GPS accuracy due to the satellite constellation or to interference from buildings, terrain, weather, jamming, etc., which would be unacceptable if the signal were used for terrain clearance. During takeoff and landing, barometric altitude is referenced to the local air pressure, making it very accurate locally at low altitude. In cruise, as mentioned, absolute accuracy matters less because aircraft maintain a suitable clearance from the terrain in the area, so even a lateral navigation error will not cause a collision. The benefits of barometric altitude are separation from other aircraft, as aircraft arriving from distant corners of the world all use the same barometric reference of 1013 hPa (29.92 inHg) in cruise, and aircraft performance. Performance depends on local air pressure and temperature, so fixing the altitude to the barometric reference reduces the performance calculation to a single-variable problem, i.e. temperature.