97,856
The novel [Daemon](https://en.wikipedia.org/wiki/Daemon_(novel_series)) is frequently praised for being realistic in its portrayal rather than just mashing buzzwords. However, this struck me as unrealistic:

> Gragg's e-mail contained a poisoned JPEG of the brokerage logo. JPEGs were compressed image files. When the user viewed the e-mail, the operating system ran a decompression algorithm to render the graphic on-screen; it was this decompression algorithm that executed Gragg's malicious script and let him slip inside the user's system—granting him full access. There was a patch available for the decompression flaw, but older, rich folks typically had no clue about security patches.

Is there such a thing? Is this description based on some real exploit? This was published in December 2006. Is it sensible to say "the operating system" was decompressing the image to render it?

---

Note this has nothing to do with the security of PHP image-uploading scripts. I'm asking about the *decoding process of displaying a JPEG*, not scripts taking input from remote users, nor files misnamed as `.jpeg`. The duplicate flagging I'm responding to looks poor even for a buzzword match; really nothing alike other than mentioning image files.
2015/08/26
[ "https://security.stackexchange.com/questions/97856", "https://security.stackexchange.com", "https://security.stackexchange.com/users/80219/" ]
Agreeing with others that yes, this is totally possible, but also adding an interesting anecdote: Joshua Drake (@jduck) discovered a bug based on a very similar concept (images being interpreted by the OS), which ended up being named "Stagefright" and affected a [ridiculous number of Android devices](http://www.androidcentral.com/stagefright). He also discovered a similar image-based bug in [libpng](http://libpng.org/pub/png/libpng.html) that would cause certain devices to crash. He tweeted an example of the exploit, basically saying "Hey, check out this cool malicious PNG I made, it'll probably crash your device", without realising that Twitter had added automatic rendering of inline images. Needless to say, a lot of his followers started having their machines crash the instant the browser tried to load the image thumbnail in their feed.
Unrealistic? There was a recent critical bug in font-definition parsing: <https://technet.microsoft.com/en-us/library/security/ms15-078.aspx>, and the libjpeg change notes are full of security advisories. Parsing files[1] is hard: overflows, underflows, out-of-bounds access. Recently, many fuzzing tools have been developed for semi-automatic detection of inputs that can cause a crash.

[1] Or network packets, XML, or even SQL queries.
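As a concrete sketch of what those fuzzing tools hook into: coverage-guided fuzzers such as libFuzzer call a single entry point over and over with mutated inputs and watch for crashes. Only the `LLVMFuzzerTestOneInput` signature below is the actual libFuzzer convention; `parse_header` is a made-up stand-in for whatever decoder you want to fuzz.

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in parser for this sketch: checks for the JPEG SOI marker
 * (0xFF 0xD8).  A real fuzz target would be libjpeg, libpng, etc. */
int parse_header(const uint8_t *data, size_t size) {
    if (size < 2)
        return -1;                      /* too short to be valid */
    if (data[0] != 0xFF || data[1] != 0xD8)
        return -1;                      /* not a JPEG SOI marker */
    return 0;                           /* looks like a JPEG */
}

/* libFuzzer entry point: the fuzzer calls this repeatedly with
 * mutated byte strings; any crash or sanitizer report here means
 * the parser mishandled some input. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_header(data, size);
    return 0;
}
```

Built with something like `clang -g -fsanitize=fuzzer,address harness.c`, the fuzzer generates and mutates the inputs on its own; the harness itself contains no test cases.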
As others have pointed out, such attacks usually exploit buffer overflows. Regarding the nuts and bolts of how: it's called a stack-smashing attack. It involves corrupting the [call stack](https://en.wikipedia.org/wiki/Call_stack) and overwriting a return address that points at legitimate code with an address pointing at attacker-supplied code, which gets executed instead. You can find details at [insecure.org/stf/smashstack.html](http://insecure.org/stf/smashstack.html).
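To make the mechanism concrete, here is a minimal sketch (all names hypothetical) of the kind of bug that enables it: a decoder that trusts a length field taken from the file itself. The unsafe version copies however many bytes the file claims into a fixed stack buffer; a file lying about its length overwrites the saved return address.

```c
#include <stdint.h>
#include <string.h>

#define BUF_SIZE 64

/* Unsafe: chunk_len comes straight from the file.  A value larger
 * than BUF_SIZE smashes the stack, which is the exploit primitive
 * described above.  (Never call this with untrusted input.) */
int decode_chunk_unsafe(const uint8_t *data, uint32_t chunk_len) {
    uint8_t buf[BUF_SIZE];
    memcpy(buf, data, chunk_len);       /* no bounds check: the bug */
    return buf[0];
}

/* Fixed: validate the attacker-controlled length before copying. */
int decode_chunk_safe(const uint8_t *data, uint32_t chunk_len) {
    uint8_t buf[BUF_SIZE];
    if (chunk_len > BUF_SIZE)
        return -1;                      /* reject malformed chunk */
    memcpy(buf, data, chunk_len);
    return buf[0];
}
```

The one-line check is the entire fix; the "patch available for the decompression flaw" in the novel would plausibly be exactly this sort of change.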
12,152
IRC is a handy tool at all sorts of levels:

* Team communication
* User community participation
* Building customer service chat bots on an established protocol

For these reasons and more I've come to love irssi+screen. Why do hosting companies love to hate IRC? If it's just DDoS and warez they're worried about, surely those specific activities are already banned and easily detected?
2009/05/26
[ "https://serverfault.com/questions/12152", "https://serverfault.com", "https://serverfault.com/users/919/" ]
IRC is regularly used in command-and-control systems for various internet worms, malware, and spyware. I guess the overhead of determining valid use of IRC is too high compared to the cost of being black-hole routed if their IP block is suspected of hosting a botnet.
They view IRC networks as a haven for illegal activity such as software/music piracy, botnets, credit card fraud, etc. Probably their biggest concern is their liability if a malicious user connects a botnet to their server to launch attacks from, or simply eats up a lot of bandwidth.
Speaking as a former IRC op: as productive as IRC can be for the well-mannered person, it's also the sort of place that tends to collect people with poor self-control. Younger people, the disaffected, the mentally unbalanced, etc. When angered by some real or imagined slight, these folks tend to love launching DoS or DDoS attacks against the IRC servers and/or those IRC clients whose IP address is visible via a /WHOIS command.

A person determined to be a nuisance to your IRC server will continue being a nuisance for as long as he/she wishes. Many [proxy lists](http://www.google.com/search?q=proxy+lists) are available to the determined nuisance, so blocking/banning by IP address becomes a constant manual hassle. There's no sure way to automate it; these script kiddies can and do vary their scripted or manual attacks quite often. Long term, the only way to win is to have a team of people with a longer attention span than the average angry teenager.

An IRC server of any popularity at all is nearly guaranteed one or more such attacks per month; dealing with these attacks requires time, expertise, and tools that many low-cost hosting providers just don't have available and don't see any profit in acquiring.

I stay on IRC a fair amount of the time, via irssi/screen running at [Silence Is Defeat](http://silenceisdefeat.net/) (SiD). It costs a buck to sign up, and there are no recurring costs. But the quality of service has been going downhill lately; the SiD server is often a tad overloaded and can lag a few seconds at times. So you might check out a few other [free shell providers](http://www.red-pill.eu/freeunix.shtml). If all you need is an irssi/screen host and the occasional ping/traceroute/nmap, you'll soon find one that works well for you. rsync your homedir to your local machine every so often though; of course, being free, you're not exactly gonna get an ironclad SLA!
24,845
Someone just mentioned in a question that having *all* the lights on your starship flash red was a good way to tell people to find somewhere safe whilst the ship fights Borg or whatever the alien of the week is. But what do people actually do during red alert besides run frantically down corridors? Is there a safe spot for civilians onboard a Federation starship? Or are they 'press ganged' into helping out?
2012/10/17
[ "https://scifi.stackexchange.com/questions/24845", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/3804/" ]
On a contemporary naval ship, all crew will have a designated action station ("battle stations" sounds cooler, but is not strictly correct) in addition to their regular duty station. Even "non-combative" crewmen, such as cooks and orderlies, will have an action station: as part of a damage control team, first aid, etc. Even on a civilian merchant ship, crew would have been trained in fire-fighting and damage-control roles.

The USS Enterprise (NCC-1701) is a bit different, in that its primary role is non-military (and many of its crew are scientists rather than combatants), it carries a large number of civilians, and many systems are at least semi-automated. Starfleet probably uses Red Alert to mean a call to action stations (again, because it sounds more dramatic). Still, there are references in several episodes to designated shelter areas for civilians; presumably these have extra shielding, more structure around them, and rapid access to the lifepods.

As for Starfleet personnel: every serving crewman would have been trained in an "action" role (damage control, medical aid, etc.) regardless of their primary role on the ship. Even semi-automated systems can (and do) fail in the 24th century.
A Red Alert is a call for all crew members to go to a higher state of readiness or "alertness". A call to battle stations simultaneously calls a red alert and *requires* the crew to man their assigned battle stations, which would include, but not be limited to: manning weapons, assembling armed security teams, and repair stations and medical personnel preparing for casualties. During a red alert, civilians are sometimes directed to remain in designated areas or their quarters. The fact that this distinction is not made on every alert indicates that it may not be required behavior.
This is addressed in the [TNG: Technical Manual](https://en.wikipedia.org/wiki/Star_Trek:_The_Next_Generation_Technical_Manual).

Non-Bridge Personnel
--------------------

> During Red Alert situations, crew and attached personnel from all three duty shifts are informed via alarm klaxons and annunciator lights. Key second shift personnel are ordered to report immediately to their primary duty stations, while other second shift personnel report to their secondary duty stations. Key third shift personnel (who are presumably on their sleep cycle) are ordered to report to their secondary duty stations (or special assignment stations) in fifteen minutes.
>
> * Level 4 automatic diagnostic series run on all ship's primary and tactical systems at five-minute intervals. Bridge given immediate notification of any significant change in ship's readiness status.
> * If presently off-line, warp power core to be brought to full operating condition and maintained at 75% power output. Level 3 diagnostics conducted on warp propulsion systems at initiation of Red Alert status, Level 4 series repeated at five-minute intervals.
> * Main impulse propulsion system is brought to full operating condition. All operational backup reactor units are brought to hot standby. In actual or potential combat situations, Saucer Module impulse propulsion system is brought to full operating status.
> * All tactical and long-range sensor arrays are brought to full operational status. Secondary mission use of sensor elements is discontinued, except with approval of Ops.
> * Deflector systems are automatically brought to tactical configuration unless specifically overridden by the Tactical Officer. All available secondary and backup deflector generators are brought to hot standby.
> * Phaser banks are energized to full standby. Power conduits are enabled, targeting scanners are activated. Level 3 diagnostics are performed to confirm operational status.
> * Photon torpedo launchers are brought to full standby. One torpedo device in each launcher is energized to full launch readiness and primed with a standard antimatter charge of 1.5 kg.
> * The Battle Bridge is brought to full standby status and backup bridge crews are notified for possible duty in the event of possible Saucer sep maneuvers.
> * All three shuttlebays are brought to launch readiness. Two shuttlecraft are brought to launch minus thirty seconds' readiness.
> * Onboard sensors record the location of all personnel and alert Security of any anomalous activity. Location and activity information is recorded for post-mission analysis.
> * Level 4 automated diagnostics are performed to verify readiness of autonomous survival and recovery vehicle systems (lifeboats). Readiness of ejection initiator servos is verified through a partial Level 3 semi-automated check. Security officers are assigned to insure that all passageways to lifeboat accesses are clear.
>
> Red Alert Mode operating rules require two additional shuttles to be brought to urgent standby, and all nine remaining operational vehicles to be maintained at immediate standby.

and

> In the event a set of casings is loaded, and the ship then stands down from Red Alert, the warhead fuels are off-loaded and returned to storage, and the launcher system is powered down.

What happens on the Bridge
--------------------------

> ***Red Alert.***
>
> *During Red Alert condition, all bridge stations are automatically brought to Full Enable Mode. Tactical systems are placed on full alert and, if unoccupied, the duty security chief will occupy the bridge Tactical station.*

and

> Yellow and Red Alert operating rules generally require the presence of at least two command personnel [on the Bridge], in addition to Conn and Ops.
1,168,088
I am impressed by the way we programmatically access lists in SharePoint. I perceive it as a data access layer, while modeling the database is as simple as defining the columns in the list. I am looking for a tool or an application that would give me a similar interface to a database. Basically, for some reason I cannot use SharePoint, and I don't wish to take up the responsibility of modeling, deploying, and maintaining the database. I find the SharePoint way of persistence management acceptable and exciting. Can anyone suggest something even close to this? BTW, my application is on ASP.NET and my preferred RDBMS is MS SQL Server.
2009/07/22
[ "https://Stackoverflow.com/questions/1168088", "https://Stackoverflow.com", "https://Stackoverflow.com/users/143191/" ]
If you don't want the overhead and expense of a full SharePoint installation, 90% of the time all you really need is WSS 3.0 (free with a Windows Server license).
For auto-generated entity classes you can use LINQ to SharePoint (SPMetal). For hand-written POCO entities you can try the [SharepointCommon](http://sharepointcommon.codeplex.com/) ORM.
Use a NoSQL database like MongoDB or CouchDB. They are schemaless, allowing you to freely add fields to JSON documents without having to define a schema first.
195,853
What are the basic differences between a semaphore & spin-lock? When would we use a semaphore over a spin-lock?
2008/10/12
[ "https://Stackoverflow.com/questions/195853", "https://Stackoverflow.com", "https://Stackoverflow.com/users/24813/" ]
Very simply, a semaphore is a "yielding" synchronisation object; a spinlock is a "busy-wait" one. (There's a little more to semaphores, in that they can synchronise several threads, unlike a mutex or guard or monitor or critical section, which restricts a code region to a single thread.)

You'd use a semaphore in most circumstances, but use a spinlock where you are going to hold the lock for a very short time; there is a cost to locking, especially if you lock a lot. In such cases it can be more efficient to spin for a little while waiting for the protected resource to become unlocked. Obviously there is a performance hit if you spin for too long. Typically, if you would spin for longer than a thread quantum, you should use a semaphore.
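A quick illustration of the "synchronises several threads" point: a counting semaphore hands out N permits rather than one. The `sem_*` calls below are standard POSIX; only the wrapper function name and the permit count of 2 are made up for this sketch.

```c
#include <semaphore.h>
#include <assert.h>

/* A counting semaphore initialised with 2 permits: two threads could
 * hold it simultaneously; a third blocks (or fails, with trywait). */
int demo_counting_semaphore(void) {
    sem_t s;
    sem_init(&s, 0, 2);                 /* 0 = not shared across processes */

    assert(sem_trywait(&s) == 0);       /* first permit acquired */
    assert(sem_trywait(&s) == 0);       /* second permit acquired */
    int third = sem_trywait(&s);        /* no permits left: fails */

    sem_post(&s);                       /* release the permits */
    sem_post(&s);
    sem_destroy(&s);
    return third;                       /* -1: third acquisition failed */
}
```

`sem_wait` (unlike `sem_trywait`) would put the caller to sleep until a permit is posted, which is exactly the "yielding" behaviour described above.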
Over and above what Yoav Aviram and gbjbaanb said, the other key point used to be that you would never use a spin-lock on a single-CPU machine, whereas a semaphore would make sense on such a machine. Nowadays, you are frequently hard-pressed to find a machine without multiple cores, or hyperthreading, or equivalent, but in the circumstances that you have just a single CPU, you should use semaphores. (I trust the reason is obvious. If the single CPU is busy waiting for something else to release the spin-lock, but it is running on the only CPU, the lock is unlikely to be released until the current process or thread is preempted by the O/S, which might take a while and nothing useful happens until the preemption occurs.)
From Linux Device Drivers by Rubini > > Unlike semaphores, spinlocks may be used in code that cannot sleep, > such as interrupt handlers > > >
195,853
What are the basic differences between a semaphore & spin-lock? When would we use a semaphore over a spin-lock?
2008/10/12
[ "https://Stackoverflow.com/questions/195853", "https://Stackoverflow.com", "https://Stackoverflow.com/users/24813/" ]
Spinlock refers to an implementation of inter-thread locking using machine-dependent assembly instructions (such as test-and-set). It is called a spinlock because the thread simply waits in a loop ("spins"), repeatedly checking until the lock becomes available (busy waiting). Spinlocks are used as a substitute for mutexes, which are a facility supplied by operating systems (not the CPU), because spinlocks perform better if locked for a short period of time. A semaphore is a facility supplied by operating systems for IPC, therefore its main purpose is inter-process communication. Being a facility supplied by the operating system, its performance will not be as good as that of a spinlock for inter-thread locking (although possible). Semaphores are better for locking for longer periods of time. That said, implementing spinlocks in assembly is tricky, and not portable.
Spinlock is used if and only if you are pretty certain that your expected result will happen very shortly, before your thread's execution time slice expires. Example: in a device driver module, the driver writes "0" into hardware register R0 and now it needs to wait for that R0 register to become 1. The H/W reads R0, does some work, and writes "1" into R0. This is generally quick (in microseconds). Now spinning is much better than going to sleep and being interrupted by the H/W. Of course, while spinning, the H/W failure condition needs to be taken care of! There is absolutely no reason for a user application to spin. It doesn't make sense: you would be spinning for some event to happen, and that event needs to be completed by another user-level application, which is never guaranteed to happen within a quick time frame. So I would not spin at all in user mode; it is better to sleep(), mutex lock(), or semaphore lock() in user mode.
195,853
What are the basic differences between a semaphore & spin-lock? When would we use a semaphore over a spin-lock?
2008/10/12
[ "https://Stackoverflow.com/questions/195853", "https://Stackoverflow.com", "https://Stackoverflow.com/users/24813/" ]
Very simply, a semaphore is a "yielding" synchronisation object, while a spinlock is a "busy-wait" one. (There's a little more to semaphores in that they can synchronise several threads, unlike a mutex, guard, monitor, or critical section, which admits only a single thread to a code region.) You'd use a semaphore in more circumstances, but use a spinlock where you are going to lock for a very short time: there is a cost to locking, especially if you lock a lot. In such cases it can be more efficient to spinlock for a little while, waiting for the protected resource to become unlocked. Obviously there is a performance hit if you spin for too long. Typically, if you spin for longer than a thread quantum, you should use a semaphore.
Spinlock is used if and only if you are pretty certain that your expected result will happen very shortly, before your thread's execution time slice expires. Example: in a device driver module, the driver writes "0" into hardware register R0 and now it needs to wait for that R0 register to become 1. The H/W reads R0, does some work, and writes "1" into R0. This is generally quick (in microseconds). Now spinning is much better than going to sleep and being interrupted by the H/W. Of course, while spinning, the H/W failure condition needs to be taken care of! There is absolutely no reason for a user application to spin. It doesn't make sense: you would be spinning for some event to happen, and that event needs to be completed by another user-level application, which is never guaranteed to happen within a quick time frame. So I would not spin at all in user mode; it is better to sleep(), mutex lock(), or semaphore lock() in user mode.
195,853
What are the basic differences between a semaphore & spin-lock? When would we use a semaphore over a spin-lock?
2008/10/12
[ "https://Stackoverflow.com/questions/195853", "https://Stackoverflow.com", "https://Stackoverflow.com/users/24813/" ]
Spinlock and semaphore differ mainly in four things: **1. What they are** A *spinlock* is one possible implementation of a lock, namely one that is implemented by busy waiting ("spinning"). A semaphore is a generalization of a lock (or, the other way around, a lock is a special case of a semaphore). Usually, *but not necessarily*, spinlocks are only valid within one process whereas semaphores can be used to synchronize between different processes, too. A lock works for mutual exclusion, that is **one** thread at a time can acquire the lock and proceed with a "critical section" of code. Usually, this means code that modifies some data shared by several threads. A *semaphore* has a counter and will allow itself being acquired by **one or several** threads, depending on what value you post to it, and (in some implementations) depending on what its maximum allowable value is. Insofar, one can consider a lock a special case of a semaphore with a maximum value of 1. **2. What they do** As stated above, a spinlock is a lock, and therefore a mutual exclusion (strictly 1 to 1) mechanism. It works by repeatedly querying and/or modifying a memory location, usually in an atomic manner. This means that acquiring a spinlock is a "busy" operation that possibly burns CPU cycles for a long time (maybe forever!) while it effectively achieves "nothing". The main incentive for such an approach is the fact that a context switch has an overhead equivalent to spinning a few hundred (or maybe thousand) times, so if a lock can be acquired by burning a few cycles spinning, this may overall very well be more efficient. Also, for realtime applications it may not be acceptable to block and wait for the scheduler to come back to them at some far away time in the future. A semaphore, by contrast, either does not spin at all, or only spins for a very short time (as an optimization to avoid the syscall overhead). 
If a semaphore cannot be acquired, it blocks, giving up CPU time to a different thread that is ready to run. This may of course mean that a few milliseconds pass before your thread is scheduled again, but if this is no problem (usually it isn't) then it can be a very efficient, CPU-conservative approach. **3. How they behave in presence of congestion** It is a common misconception that spinlocks or lock-free algorithms are "generally faster", or that they are only useful for "very short tasks" (ideally, no synchronization object should be held for longer than absolutely necessary, ever). The one important difference is how the different approaches behave *in presence of congestion*. A well-designed system normally has low or no congestion (this means not all threads try to acquire the lock at the exact same time). For example, one would normally *not* write code that acquires a lock, then loads half a megabyte of zip-compressed data from the network, decodes and parses the data, and finally modifies a shared reference (append data to a container, etc.) before releasing the lock. Instead, one would acquire the lock only for the purpose of accessing the *shared resource*. Since this means that there is considerably more work outside the critical section than inside it, naturally the likelihood for a thread being inside the critical section is relatively low, and thus few threads are contending for the lock at the same time. Of course every now and then two threads will try to acquire the lock at the same time (if this *couldn't* happen you wouldn't need a lock!), but this is rather the exception than the rule in a "healthy" system. In such a case, a spinlock *greatly* outperforms a semaphore because if there is no lock congestion, the overhead of acquiring the spinlock is a mere dozen cycles as compared to hundreds/thousands of cycles for a context switch or 10-20 million cycles for losing the remainder of a time slice. 
On the other hand, given high congestion, or if the lock is being held for lengthy periods (sometimes you just can't help it!), a spinlock will burn insane amounts of CPU cycles for achieving nothing. A semaphore (or mutex) is a much better choice in this case, as it allows a different thread to run *useful* tasks during that time. Or, if no other thread has something useful to do, it allows the operating system to throttle down the CPU and reduce heat / conserve energy. Also, on a single-core system, a spinlock will be quite inefficient in presence of lock congestion, as a spinning thread will waste its complete time waiting for a state change that cannot possibly happen (not until the releasing thread is scheduled, which *isn't happening* while the waiting thread is running!). Therefore, given *any* amount of contention, acquiring the lock takes around 1 1/2 time slices in the best case (assuming the releasing thread is the next one being scheduled), which is not very good behaviour. **4. How they're implemented** A semaphore will nowadays typically wrap `sys_futex` under Linux (optionally with a spinlock that exits after a few attempts). A spinlock is typically implemented using atomic operations, and without using anything provided by the operating system. In the past, this meant using either compiler intrinsics or non-portable assembler instructions. Meanwhile both C++11 and C11 have atomic operations as part of the language, so apart from the general difficulty of writing provably correct lock-free code, it is now possible to implement lock-free code in an entirely portable and (almost) painless way.
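The counting behaviour described in point 1 above (a lock is a semaphore with a maximum value of 1; a semaphore may admit several threads) can be demonstrated with Python's `threading.Semaphore`. A sketch with illustrative names:

```python
import threading
import time

# A semaphore initialized to 2 admits at most two holders at once;
# a lock is the special case with a maximum value of 1.
slots = threading.Semaphore(2)

active = 0   # threads currently inside the "critical" region
peak = 0     # highest concurrency observed
guard = threading.Lock()

def worker():
    global active, peak
    with slots:                 # blocks (yields the CPU) if both slots are taken
        with guard:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)        # pretend to use the shared resource
        with guard:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 2, despite six competing threads
```

A waiting thread here is suspended by the scheduler rather than spinning, which is the CPU-conservative behaviour the answer describes.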
A "mutex" (or "mutual exclusion lock") is a signal that two or more asynchronous processes can use to reserve a shared resource for exclusive use. The first process that obtains ownership of the "mutex" also obtains ownership of the shared resource. Other processes must wait for the first process to release its ownership of the "mutex" before they may attempt to obtain it. The most common locking primitive in the kernel is the spinlock. The spinlock is a very simple single-holder lock. If a process attempts to acquire a spinlock and it is unavailable, the process will keep trying (spinning) until it can acquire the lock. This simplicity creates a small and fast lock.
195,853
What are the basic differences between a semaphore & spin-lock? When would we use a semaphore over a spin-lock?
2008/10/12
[ "https://Stackoverflow.com/questions/195853", "https://Stackoverflow.com", "https://Stackoverflow.com/users/24813/" ]
Over and above what Yoav Aviram and gbjbaanb said, the other key point used to be that you would never use a spin-lock on a single-CPU machine, whereas a semaphore would make sense on such a machine. Nowadays, you are frequently hard-pressed to find a machine without multiple cores, or hyperthreading, or equivalent, but in the circumstances that you have just a single CPU, you should use semaphores. (I trust the reason is obvious. If the single CPU is busy waiting for something else to release the spin-lock, but it is running on the only CPU, the lock is unlikely to be released until the current process or thread is preempted by the O/S, which might take a while and nothing useful happens until the preemption occurs.)
A "mutex" (or "mutual exclusion lock") is a signal that two or more asynchronous processes can use to reserve a shared resource for exclusive use. The first process that obtains ownership of the "mutex" also obtains ownership of the shared resource. Other processes must wait for the first process to release its ownership of the "mutex" before they may attempt to obtain it. The most common locking primitive in the kernel is the spinlock. The spinlock is a very simple single-holder lock. If a process attempts to acquire a spinlock and it is unavailable, the process will keep trying (spinning) until it can acquire the lock. This simplicity creates a small and fast lock.
195,853
What are the basic differences between a semaphore & spin-lock? When would we use a semaphore over a spin-lock?
2008/10/12
[ "https://Stackoverflow.com/questions/195853", "https://Stackoverflow.com", "https://Stackoverflow.com/users/24813/" ]
Over and above what Yoav Aviram and gbjbaanb said, the other key point used to be that you would never use a spin-lock on a single-CPU machine, whereas a semaphore would make sense on such a machine. Nowadays, you are frequently hard-pressed to find a machine without multiple cores, or hyperthreading, or equivalent, but in the circumstances that you have just a single CPU, you should use semaphores. (I trust the reason is obvious. If the single CPU is busy waiting for something else to release the spin-lock, but it is running on the only CPU, the lock is unlikely to be released until the current process or thread is preempted by the O/S, which might take a while and nothing useful happens until the preemption occurs.)
I am not a kernel expert, but here are a few points: Even a uniprocessor machine can use spin-locks if kernel preemption is enabled while compiling the kernel. If kernel preemption is disabled, then a spin-lock (perhaps) expands to a *void* statement. Also, when we are trying to compare semaphores vs spin-locks, I believe "semaphore" refers to the one used in the kernel - NOT the one used for IPC (userland). Basically, a spin-lock shall be used if the critical section is small (smaller than the overhead of sleep/wake-up) and the critical section does not call anything that can sleep! A semaphore shall be used if the critical section is bigger and it can sleep. Raman Chalotra.
195,853
What are the basic differences between a semaphore & spin-lock? When would we use a semaphore over a spin-lock?
2008/10/12
[ "https://Stackoverflow.com/questions/195853", "https://Stackoverflow.com", "https://Stackoverflow.com/users/24813/" ]
I am not a kernel expert, but here are a few points: Even a uniprocessor machine can use spin-locks if kernel preemption is enabled while compiling the kernel. If kernel preemption is disabled, then a spin-lock (perhaps) expands to a *void* statement. Also, when we are trying to compare semaphores vs spin-locks, I believe "semaphore" refers to the one used in the kernel - NOT the one used for IPC (userland). Basically, a spin-lock shall be used if the critical section is small (smaller than the overhead of sleep/wake-up) and the critical section does not call anything that can sleep! A semaphore shall be used if the critical section is bigger and it can sleep. Raman Chalotra.
A "mutex" (or "mutual exclusion lock") is a signal that two or more asynchronous processes can use to reserve a shared resource for exclusive use. The first process that obtains ownership of the "mutex" also obtains ownership of the shared resource. Other processes must wait for the first process to release its ownership of the "mutex" before they may attempt to obtain it. The most common locking primitive in the kernel is the spinlock. The spinlock is a very simple single-holder lock. If a process attempts to acquire a spinlock and it is unavailable, the process will keep trying (spinning) until it can acquire the lock. This simplicity creates a small and fast lock.
195,853
What are the basic differences between a semaphore & spin-lock? When would we use a semaphore over a spin-lock?
2008/10/12
[ "https://Stackoverflow.com/questions/195853", "https://Stackoverflow.com", "https://Stackoverflow.com/users/24813/" ]
From Linux Device Drivers by Rubini > > Unlike semaphores, spinlocks may be used in code that cannot sleep, > such as interrupt handlers > > >
I would like to add my observations, more general and not very Linux-specific. Depending on the memory architecture and the processor capabilities, you might need a spin-lock in order to implement a semaphore on a multi-core or a multiprocessor system, because in such systems a race condition might occur when two or more threads/processes want to acquire a semaphore. Yes, if your memory architecture offers the locking of a memory section by one core/processor delaying all other accesses, and if your processors offers a test-and-set, you may implement a semaphore without a spin-lock (but very carefully!). However, as simple/cheap multi-core systems are designed (I'm working in embedded systems), not all memory architectures support such multi-core/multiprocessor features, only test-and-set or equivalent. Then an implementation could be as follows: * acquire the spin-lock (busy waiting) * try to acquire the semaphore * release the spin-lock * if the semaphore was not successfully acquired, suspend the current thread until the semaphore is released; otherwise continue with the critical section Releasing the semaphore would need to be implemented as follows: * acquire the spin-lock * release the semaphore * release the spin-lock Yes, and for simple binary semaphores on an OS-level it would be possible to use only a spin-lock as replacement. But only if the code-sections to be protected are really very small. As said before, if and when you implement your own OS, make sure to be careful. Debugging such errors is fun (my opinion, not shared by many), but mostly very tedious and difficult.
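The acquire/release recipe in the bullet points above can be sketched in Python. This is an illustration of the algorithm's structure only (a real implementation would use CPU atomics and an OS wait queue, not `threading` primitives); all names are my own:

```python
import threading

class SpinLock:
    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        while not self._flag.acquire(blocking=False):
            pass                          # busy wait

    def release(self):
        self._flag.release()

class SpinBackedSemaphore:
    """Counting semaphore whose counter is protected by a spin-lock,
    following the acquire/release steps listed in the answer above."""
    def __init__(self, count):
        self._spin = SpinLock()
        self._count = count
        self._wakeup = threading.Condition(threading.Lock())

    def acquire(self):
        while True:
            self._spin.acquire()          # 1. acquire the spin-lock
            ok = self._count > 0          # 2. try to acquire the semaphore
            if ok:
                self._count -= 1
            self._spin.release()          # 3. release the spin-lock
            if ok:
                return                    # continue with the critical section
            with self._wakeup:            # 4. suspend until the semaphore is
                self._wakeup.wait(timeout=0.01)  # released (timeout avoids a
                                                 # lost-wakeup hang in this sketch)

    def release(self):
        self._spin.acquire()              # acquire the spin-lock
        self._count += 1                  # release the semaphore
        self._spin.release()              # release the spin-lock
        with self._wakeup:
            self._wakeup.notify()

sem = SpinBackedSemaphore(1)
result = []

def job(tag):
    sem.acquire()
    result.append(tag)
    sem.release()

ts = [threading.Thread(target=job, args=(i,)) for i in range(4)]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(sorted(result))  # [0, 1, 2, 3]
```

Note how the spin-lock is held only for the few instructions that touch the counter, which is the point of the answer: the busy wait is kept extremely short, while long waits are handled by suspending the thread.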
352,020
I know almost nothing about category theory (I have just skimmed the first chapters of Aluffi's [algebra book](http://rads.stackoverflow.com/amzn/click/0821847813)), reading [this question](https://math.stackexchange.com/questions/21128/when-to-learn-category-theory) got me thinking... why should someone mostly interested in combinatorics/graph theory learn category theory? What I am asking for is examples of how knowledge of category theory might be beneficial for someone doing combinatorics.
2013/04/05
[ "https://math.stackexchange.com/questions/352020", "https://math.stackexchange.com", "https://math.stackexchange.com/users/12053/" ]
If you are interested in becoming a pioneer in a new area of mathematics that involves combinatorics then there is *Combinatorial category theory.* László Lovász talks about this in a video interview starting at 1m59s in <https://simonsfoundation.org/science_lives_video/laszlo-lovasz/?chapter=22> Also in his book *Large networks and graph limits*, chapter 23, <http://www.cs.elte.hu/~lovasz/bookxx/hombook-oct-2012.pdf>, there is a section on categories in which he says: "In graph theory, the use of categories (as a language and also as guide for asking question in a certain way) has been practiced mainly by the Prague school, and has lead to many valuable results; see e.g. the book by Hell and Nešetřil [2004]." (Graphs and Homomorphisms)
You might find interesting [Extensive categories and the size of an orbit](http://tac.mta.ca/tac/volumes/30/17/30-17abs.html) by Ernie Manes (TAC, 2015).
132,992
Stewie Griffin has a time machine, which he built that he uses to go back in time and cause trouble (perhaps not his main aim, but it often ends up that way). **What are all the episodes where he uses a time machine he built?** I've found [this](http://familyguy.wikia.com/wiki/Time_Machine) list of all uses of a time machine, but I don't believe it's a complete list, nor is it restricted to just Stewie's time machines.
2016/06/26
[ "https://scifi.stackexchange.com/questions/132992", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/3804/" ]
The time machine is originally created in Season 1 Episode 4, ["Mind Over Murder"](http://familyguy.wikia.com/wiki/Mind_Over_Murder), in an attempt to avoid the pain caused by teething. In Season 4 Episode 30, ["Stu & Stewie's Excellent Adventure"](http://familyguy.wikia.com/wiki/Stu_%26_Stewie%27s_Excellent_Adventure) Stewie goes back in time to save himself from being crushed by a lifeguard's chair, which caused him to become "disgusting" as his baby self puts it. In Season 7 Episode 3, ["Road to Germany"](http://familyguy.wikia.com/wiki/Road_to_Germany), Mort stumbles into the time machine and ends up in WW2 Germany. As such Stewie and Brian have to go in after him to bring him back. In Season 9 Episode 16, ["The Big Bang Theory"](http://familyguy.wikia.com/wiki/The_Big_Bang_Theory), Stewie goes back in time to stop Bertram from killing Stewie's ancestor Leonardo da Vinci, as Stewie created the universe with his time machine. In killing Leonardo the universe would cease to exist. In Season 10 Episode 5, ["Back to the Pilot"](http://familyguy.wikia.com/wiki/Back_to_the_Pilot), Stewie and Brian use the time machine to go back to the first episode, and then lots of future versions of them end up going back to stop the event changes that happened. In Season 10 Episode 22, ["Family Guy Viewer Mail No. 2"](http://familyguy.wikia.com/wiki/Family_Guy_Viewer_Mail_No._2), Stewie uses his time machine to prevent Kurt Cobain from killing himself. In Season 11 Episode 4, ["Yug Ylimaf"](http://familyguy.wikia.com/wiki/Yug_Ylimaf), Brian is using the time machine to pick up girls and ends up reversing time. Stewie then uses the time machine to set everything straight again. In Season 12 Episode 8, ["Christmas Guy"](http://familyguy.wikia.com/wiki/Christmas_Guy), Stewie uses the time machine return pad from a past Stewie, who had travelled forward in time, to go back in time and save Brian. 
In Season 12 Episode 21, ["Chap Stewie"](http://familyguy.wikia.com/wiki/Chap_Stewie), Stewie uses a time machine to change himself to be born to a British family. After realising he prefers his previous life he builds another rudimentary time machine to change things back again. In Season 13 Episode 7, ["Stewie, Chris & Brian's Excellent Adventure"](http://familyguy.wikia.com/wiki/Stewie,_Chris_%26_Brian%27s_Excellent_Adventure), Brian and Stewie take Chris on a journey through time to try and teach him enough to pass his history test.
1. Season 1 Episode 4 Mind Over Murder 2. Season 3 Episode 6 Death Lives 3. Season 4 Episode 30 Stu & Stewie’s Excellent Adventure 4. Season 5 Episode 18 Meet The Quagmires 5. Season 7 Episode 3 Road To Germany 6. Season 9 Episode 10 Friends of Peter G. 7. Season 9 Episode 16 Big Bang Theory 8. Season 10 Episode 5 Back to The Pilot 9. Season 10 Episode 22 Family Guy Viewer Mail 2 10. Season 10 Episode 23 Internal Affairs 11. Season 11 Episode 4 Yug Yilmaf 12. Season 11 Episode 12 Valentines Day 13. Season 12 Episode 6 Life of Brian 14. Season 12 Episode 8 Christmas Guy 15. Season 12 Episode 21 Chap Stewie 16. Season 13 Episode 7 Stewie, Chris, & Brian’s Excellent Adventure
39,750
I'm a level 1... human, I assume? I have a lovely little sword that does 6 points of damage (1+5), and have 450 hit points. The first enemy I face is a "Level 1 Shade". He has a dark, ominous-looking sword that does 20 points of damage. (Ok, that's cool, I have free will and a lot of patience.) **But he has 4000 hit points.** Let me once again point out that my sword does 6 points damage. While it's possibly a nod to a shade's demonic nature that I must strike him **666** times to kill him, I can't help but wonder if something is amiss. To put it in perspective, if you assume you get about 4 strikes in each time you have a "break", *you need to break 166 rounds of attacks to beat him*. (Yeah, yeah, less scratches and stabs, but C'MON.) Did I somehow inadvertently activate the "Wish you'd never bought an iPad" difficulty setting? **UPDATE**: It was basically a glitch. Perhaps I quit the app at a bad time, but what happened was that I'd been converted to the weak, post-tutorial character with 400 HP and no infinity blade, but I was still facing the tutorial characters who have 4000 HP like you do in the intro. **SOLUTION**: You have to create an all new character in the options menu under "character slots". Resetting your existing character won't fix it.
2011/12/02
[ "https://gaming.stackexchange.com/questions/39750", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/16/" ]
No, it's not supposed to have that many hit points: in the first rebirth, no enemy exceeds 600 hit points (with most clocking in around 400-500). In the first *Infinity Blade*, you could unlock the negative bloodline by doing things in a specific way. You may have inadvertently unlocked an ultra-hardcore mode based on your actions during the first area. You may want to try creating a new character and noting what you do. If you remember what specifically you did the first time (like, say, losing to a specific foe), try doing things differently.
From my experience, the first enemy after the tutorial is not supposed to have that many hitpoints. It was an easily beatable enemy, although I can't remember who or what *exactly* it was.
123,580
I'm testing a circuit for creating an unregulated "HV"DC source. The circuit is like this: ![schematic](https://i.stack.imgur.com/eGOBC.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2feGOBC.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) The example I'm using uses a Variac with 120V. I'm limited to the mains supply in Europe of 230V. The description in the example uses 1500uF caps; I'm using 220uF caps since I don't have any higher. Using the 120VAC of the transformer, the example circuit outputs 370VDC. Using 230VAC, I get 330VDC from my circuit between 1 and 2. A couple of things are still unclear to me. What is causing the difference between the outputs of the two circuits? Is it merely the difference between capacitors? I would expect 240VDC and 500VDC respectively (minus some voltage drop) as output. Strangely, the polarity is also reversed from what I expected: I would expect line 1 to be positive and 2 to be negative, but I have to use my meter's positive probe on 2 and negative probe on 1 to read a positive voltage. Secondly, is it required to connect the AC neutral between the caps? My knowledge of AC is limited and I'm trying to learn it, but although with AC the voltage is alternating, I would expect the neutral carrying the near-earth potential to be negative for cap 1 and positive for cap 2, and reversing this polarity to make the caps puff or, in the worst case, explode.
2014/07/31
[ "https://electronics.stackexchange.com/questions/123580", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/50467/" ]
I'd say you can go as slow as you want. Communication in an optic fiber is achieved by pulsing a light, usually of a "single" wavelength, and detecting the pulses on the other side. Speed is limited by the speed of the emitter and the receiver, and of course by the lowpass response of the fiber. There is no lower limit to bandwidth, though; this sort of connection even allows DC: if you leave the emitter on, the receiver will detect a steady ON. It might be that if you buy a module it includes some sort of circuitry that may or may not limit the bandwidth on the low side. If you just buy the connector with the emitter/receiver inside and hook it directly to the micro, you are good to go. If you really need to avoid current flow, optic fiber is a great idea; I'd like to know why you have such a specification, because maybe a differential/twisted pair is suitable too. You can insulate the devices using optocouplers and call it a day.
LVDS and RS-422 signaling standards work very well in this type of scenario. Driver chips for the two standards are readily available as well. Both standards work very well to support multi-MB data rates and below. They solve the problem by providing a differential pair of signals, which means that the voltage and current transmitted on the positive side is always matched by the negative side, such that there is no net current from one side of the interface to the other.
123,580
I'm testing a circuit for creating an unregulated "HV"DC source. The circuit is like this: ![schematic](https://i.stack.imgur.com/eGOBC.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2feGOBC.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) The example I'm using uses a Variac with 120V. I'm limited to the mains supply in Europe of 230V. The description in the example uses 1500uF caps; I'm using 220uF caps since I don't have any higher. Using the 120VAC of the transformer, the example circuit outputs 370VDC. Using 230VAC, I get 330VDC from my circuit between 1 and 2. A couple of things are still unclear to me. What is causing the difference between the outputs of the two circuits? Is it merely the difference between capacitors? I would expect 240VDC and 500VDC respectively (minus some voltage drop) as output. Strangely, the polarity is also reversed from what I expected: I would expect line 1 to be positive and 2 to be negative, but I have to use my meter's positive probe on 2 and negative probe on 1 to read a positive voltage. Secondly, is it required to connect the AC neutral between the caps? My knowledge of AC is limited and I'm trying to learn it, but although with AC the voltage is alternating, I would expect the neutral carrying the near-earth potential to be negative for cap 1 and positive for cap 2, and reversing this polarity to make the caps puff or, in the worst case, explode.
2014/07/31
[ "https://electronics.stackexchange.com/questions/123580", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/50467/" ]
The answer is: it depends! It seems that most or all Toslink ***transmitters*** have bandwidth right down to DC. On the other hand, most Toslink ***receivers*** require a minimum of ***100kHz*** for the modulation frequency for the part to function. That's too bad, because Toslink is a very practical medium for electrically isolated communication, and the TX or RX hardware, as well as the fiber optic cables, are pretty inexpensive. I did find one RX by Toshiba that is specified for operation down to DC, the TORX1952. Here is a link to the part at Mouser (the US supplier) and the datasheet: [Toshiba TORX1952(6M,F) at Mouser.com](http://www.mouser.com/ProductDetail/Toshiba/TORX19526MF/?qs=sGAEpiMZZMvAL21a%2FDhxMmgrh7HB08WlZQWHgVIcxss%3D) [DATASHEET](http://www.toshiba.com/taec/Catalog/taec/components2/Datasheet_Sync/201205/DST_TORX1952%28F%29-TDE_EN_29752.pdf) It's not exactly cheap at $10 each for QTY 1, but it should work for low speeds.
LVDS and RS-422 signaling standards work very well in this type of scenario. Driver chips for the two standards are readily available as well. Both standards work very well to support multi-MB data rates and below. They solve the problem by providing a differential pair of signals, which means that the voltage and current transmitted on the positive side is always matched by the negative side, such that there is no net current from one side of the interface to the other.
123,580
I'm testing a circuit for creating an unregulated "HV"DC source. The circuit is like this: ![schematic](https://i.stack.imgur.com/eGOBC.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2feGOBC.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) The example I'm using uses a Variac with 120V. I'm limited to the mains supply in Europe of 230V. The description in the example uses 1500uF caps; I'm using 220uF caps since I don't have any higher. Using the 120VAC of the transformer, the example circuit outputs 370VDC. Using 230VAC, I get 330VDC from my circuit between 1 and 2. A couple of things are still unclear to me. What is causing the difference between the outputs of the two circuits? Is it merely the difference between capacitors? I would expect 240VDC and 500VDC respectively (minus some voltage drop) as output. Strangely, the polarity is also reversed from what I expected: I would expect line 1 to be positive and 2 to be negative, but I have to use my meter's positive probe on 2 and negative probe on 1 to read a positive voltage. Secondly, is it required to connect the AC neutral between the caps? My knowledge of AC is limited and I'm trying to learn it, but although with AC the voltage is alternating, I would expect the neutral carrying the near-earth potential to be negative for cap 1 and positive for cap 2, and reversing this polarity to make the caps puff or, in the worst case, explode.
2014/07/31
[ "https://electronics.stackexchange.com/questions/123580", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/50467/" ]
LVDS and RS-422 signaling standards work very well in this type of scenario. Driver chips for the two standards are readily available as well. Both standards support multi-megabit data rates and below. They solve the problem by providing a differential pair of signals, which means that the voltage and current transmitted on the positive side is always matched by the negative side, such that there is no net current from one side of the interface to the other.
The simplest solution would be to use signaling via current rather than voltage. In current loop signaling, you signal a one by sending 5 mA (for example) around a loop. On the other end, the loop is fed directly through the LED side of an optocoupler and not electrically connected to the receiver at all. There is no common ground, it is immune to interference, and since the current is the same around the entire loop it is independent of the length of the cable: no voltage drop. This is exactly the method MIDI uses to avoid ground loops in musical equipment. The only bad part is that it is simplex (one direction only), so you need two loops with 4 wires total for bidirectional traffic, but it is dead simple to implement. There are a variety of MIDI interface circuits online that you can directly use.
123,580
I'm testing a circuit for creating an unregulated "HV"DC source. The circuit is like this: ![schematic](https://i.stack.imgur.com/eGOBC.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2feGOBC.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) The example I'm using uses a Variac with 120V. I'm limited to the mains supply in Europe of 230V. The description in the example uses 1500uF caps; I'm using 220uF caps since I don't have any higher. Using the 120VAC of the transformer, the example circuit outputs 370VDC. Using 230VAC, I get 330VDC from my circuit between 1 and 2. A couple of things are still unclear to me. What is causing the difference between the outputs of both circuits? Is it merely the difference between capacitors? I would expect 240VDC and 500VDC respectively (minus some voltage drop) as output. Strangely, the polarity is also reversed from what I expected. I would expect line 1 to be positive and 2 to be negative, but I have to use my meter's positive probe on 2 and negative probe on 1 to read a positive voltage. Secondly, is it required to connect the AC neutral between the caps? My knowledge of AC is limited and I'm trying to learn it, but although with AC the voltage is alternating, I would expect the neutral carrying the near-earth potential to be negative for cap 1 and positive for cap 2, and reversing this polarity to make the caps puff or, in the worst case, explode.
2014/07/31
[ "https://electronics.stackexchange.com/questions/123580", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/50467/" ]
The answer is: it depends! It seems that most or all Toslink ***transmitters*** have bandwidth right down to DC. On the other hand, most Toslink ***receivers*** require a minimum of ***100kHz*** for the modulation frequency for the part to function. That's too bad, because Toslink is a very practical medium for electrically isolated communication and the TX or RX hardware, as well as the fiber optic cables, are pretty inexpensive. I did find one RX by Toshiba that is specified for operation down to DC, the TORX1952. Here is a link to the part at Mouser (the US supplier) and the datasheet: [Toshiba TORX1952(6M,F) at Mouser.com](http://www.mouser.com/ProductDetail/Toshiba/TORX19526MF/?qs=sGAEpiMZZMvAL21a%2FDhxMmgrh7HB08WlZQWHgVIcxss%3D) [DATASHEET](http://www.toshiba.com/taec/Catalog/taec/components2/Datasheet_Sync/201205/DST_TORX1952%28F%29-TDE_EN_29752.pdf) It's not exactly cheap at $10 each for QTY 1, but it should work for low speeds.
I'd say you can go as slow as you want. Communication in an optic fiber is achieved by pulsing a light, usually of a "single" wavelength, and detecting the pulses on the other side. Speed is limited by the speed of the emitter and the receiver, and of course by the lowpass response of the fiber. There is no lower limit to bandwidth though; this sort of connection even allows DC: if you leave the emitter on, the receiver would detect a steady ON. It might be that if you buy a module it includes some sort of circuitry that may or may not limit the bandwidth on the low side. If you just buy the connector with the emitter/receiver inside and hook it directly to the micro, you are good to go. If you really need to avoid current flow, optic fiber is a great idea; I'd like to know why you have such a specification, because maybe a differential/twisted pair is suitable too. You can insulate the devices using optocouplers and call it a day.
123,580
I'm testing a circuit for creating an unregulated "HV"DC source. The circuit is like this: ![schematic](https://i.stack.imgur.com/eGOBC.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2feGOBC.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) The example I'm using uses a Variac with 120V. I'm limited to the mains supply in Europe of 230V. The description in the example uses 1500uF caps; I'm using 220uF caps since I don't have any higher. Using the 120VAC of the transformer, the example circuit outputs 370VDC. Using 230VAC, I get 330VDC from my circuit between 1 and 2. A couple of things are still unclear to me. What is causing the difference between the outputs of both circuits? Is it merely the difference between capacitors? I would expect 240VDC and 500VDC respectively (minus some voltage drop) as output. Strangely, the polarity is also reversed from what I expected. I would expect line 1 to be positive and 2 to be negative, but I have to use my meter's positive probe on 2 and negative probe on 1 to read a positive voltage. Secondly, is it required to connect the AC neutral between the caps? My knowledge of AC is limited and I'm trying to learn it, but although with AC the voltage is alternating, I would expect the neutral carrying the near-earth potential to be negative for cap 1 and positive for cap 2, and reversing this polarity to make the caps puff or, in the worst case, explode.
2014/07/31
[ "https://electronics.stackexchange.com/questions/123580", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/50467/" ]
I'd say you can go as slow as you want. Communication in an optic fiber is achieved by pulsing a light, usually of a "single" wavelength, and detecting the pulses on the other side. Speed is limited by the speed of the emitter and the receiver, and of course by the lowpass response of the fiber. There is no lower limit to bandwidth though; this sort of connection even allows DC: if you leave the emitter on, the receiver would detect a steady ON. It might be that if you buy a module it includes some sort of circuitry that may or may not limit the bandwidth on the low side. If you just buy the connector with the emitter/receiver inside and hook it directly to the micro, you are good to go. If you really need to avoid current flow, optic fiber is a great idea; I'd like to know why you have such a specification, because maybe a differential/twisted pair is suitable too. You can insulate the devices using optocouplers and call it a day.
The simplest solution would be to use signaling via current rather than voltage. In current loop signaling, you signal a one by sending 5 mA (for example) around a loop. On the other end, the loop is fed directly through the LED side of an optocoupler and not electrically connected to the receiver at all. There is no common ground, it is immune to interference, and since the current is the same around the entire loop it is independent of the length of the cable: no voltage drop. This is exactly the method MIDI uses to avoid ground loops in musical equipment. The only bad part is that it is simplex (one direction only), so you need two loops with 4 wires total for bidirectional traffic, but it is dead simple to implement. There are a variety of MIDI interface circuits online that you can directly use.
123,580
I'm testing a circuit for creating an unregulated "HV"DC source. The circuit is like this: ![schematic](https://i.stack.imgur.com/eGOBC.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2feGOBC.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) The example I'm using uses a Variac with 120V. I'm limited to the mains supply in Europe of 230V. The description in the example uses 1500uF caps; I'm using 220uF caps since I don't have any higher. Using the 120VAC of the transformer, the example circuit outputs 370VDC. Using 230VAC, I get 330VDC from my circuit between 1 and 2. A couple of things are still unclear to me. What is causing the difference between the outputs of both circuits? Is it merely the difference between capacitors? I would expect 240VDC and 500VDC respectively (minus some voltage drop) as output. Strangely, the polarity is also reversed from what I expected. I would expect line 1 to be positive and 2 to be negative, but I have to use my meter's positive probe on 2 and negative probe on 1 to read a positive voltage. Secondly, is it required to connect the AC neutral between the caps? My knowledge of AC is limited and I'm trying to learn it, but although with AC the voltage is alternating, I would expect the neutral carrying the near-earth potential to be negative for cap 1 and positive for cap 2, and reversing this polarity to make the caps puff or, in the worst case, explode.
2014/07/31
[ "https://electronics.stackexchange.com/questions/123580", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/50467/" ]
The answer is: it depends! It seems that most or all Toslink ***transmitters*** have bandwidth right down to DC. On the other hand, most Toslink ***receivers*** require a minimum of ***100kHz*** for the modulation frequency for the part to function. That's too bad, because Toslink is a very practical medium for electrically isolated communication and the TX or RX hardware, as well as the fiber optic cables, are pretty inexpensive. I did find one RX by Toshiba that is specified for operation down to DC, the TORX1952. Here is a link to the part at Mouser (the US supplier) and the datasheet: [Toshiba TORX1952(6M,F) at Mouser.com](http://www.mouser.com/ProductDetail/Toshiba/TORX19526MF/?qs=sGAEpiMZZMvAL21a%2FDhxMmgrh7HB08WlZQWHgVIcxss%3D) [DATASHEET](http://www.toshiba.com/taec/Catalog/taec/components2/Datasheet_Sync/201205/DST_TORX1952%28F%29-TDE_EN_29752.pdf) It's not exactly cheap at $10 each for QTY 1, but it should work for low speeds.
The simplest solution would be to use signaling via current rather than voltage. In current loop signaling, you signal a one by sending 5 mA (for example) around a loop. On the other end, the loop is fed directly through the LED side of an optocoupler and not electrically connected to the receiver at all. There is no common ground, it is immune to interference, and since the current is the same around the entire loop it is independent of the length of the cable: no voltage drop. This is exactly the method MIDI uses to avoid ground loops in musical equipment. The only bad part is that it is simplex (one direction only), so you need two loops with 4 wires total for bidirectional traffic, but it is dead simple to implement. There are a variety of MIDI interface circuits online that you can directly use.
20,036
I have been using MiKTeX for a couple of years. I don't know whether it is "wise" to insist on using it. Could you give me a list of advantages of TeX Live over MiKTeX?
2011/06/06
[ "https://tex.stackexchange.com/questions/20036", "https://tex.stackexchange.com", "https://tex.stackexchange.com/users/2099/" ]
I've covered some of this before on [my blog](https://www.texdev.net/2016/12/18/tex-on-windows-tex-live-versus-miktex-revisited/), so some of this is a rehash! In recent versions, the differences between MiKTeX and TeX Live have narrowed. Package coverage between the two is similar, as is the ability to do on-line updates. I guess here you want differences: * Only MiKTeX can do 'on the fly' package installation, as TeX Live is more focussed on having a system that works well on multi-user systems. * TeX Live defaults to installing *everything*, which means that if you want everything it's (marginally) easier to use TeX Live than MiKTeX. (MiKTeX has different installers, one of which installs everything, whereas for TeX Live you have one installer and make the choices within it.) For most users, it's largely down to 'personal opinion' or 'what you try first'!
In addition to what Ulrike Fischer has mentioned, the additional advantages of Miktex are: 1. Miktex has both 32 bit (stable) and 64 bit (experimental). It is a pity that TeX Live for Windows is available only for 32 bit. 2. Deciding [the install location of your own packages and classes is easier on MiKTeX](https://tex.stackexchange.com/questions/1137/where-do-i-place-my-own-sty-files-to-make-them-available-to-all-my-tex-files/20121#20121). Installing [them is just as easy on TeX Live if you use one of the predefined locations](https://tex.stackexchange.com/q/20160/2099).
20,036
I have been using MiKTeX for a couple of years. I don't know whether it is "wise" to insist on using it. Could you give me a list of advantages of TeX Live over MiKTeX?
2011/06/06
[ "https://tex.stackexchange.com/questions/20036", "https://tex.stackexchange.com", "https://tex.stackexchange.com/users/2099/" ]
(Not meant to be a complete answer, just an addition to others.) TeX Live provides more secure defaults than MiKTeX and probably pays more attention to security in general. For example, section 3 of [this paper](http://cseweb.ucsd.edu/~hovav/papers/csr10.html) describes a simple way to make document (or bibtex database, or package) viruses which would almost make MS-Word look like a secure alternative ;-) This attack doesn't work with TeX Live's default settings, regardless of the platform (Windows or other). Not completely unrelated, TeX Live is designed to support multi-user systems, including being installed on a server and used on network clients, possibly with mixed architectures and OSes. (Which may be totally irrelevant to the OP, but mentioned only for information.)
The advantages of miktex: 1. Supports (more or less) only windows, which means that it can concentrate on windows problems and windows "look and feel". 2. On-the-fly installation of missing packages. 3. Supports more packages and its packages are more complete, as it doesn't restrict itself to "free software". 4. Miktex updates binaries also between releases, so its binaries often were newer than the ones in TeXLive (nowadays you can update TeXLive binaries via tlcontrib, so it can also be the other way round). 5. Regarding Joseph's claim that TeXLive has more command line tools: I wouldn't bet on it.
20,036
I have been using MiKTeX for a couple of years. I don't know whether it is "wise" to insist on using it. Could you give me a list of advantages of TeX Live over MiKTeX?
2011/06/06
[ "https://tex.stackexchange.com/questions/20036", "https://tex.stackexchange.com", "https://tex.stackexchange.com/users/2099/" ]
In addition to what Ulrike Fischer has mentioned, the additional advantages of Miktex are: 1. Miktex has both 32 bit (stable) and 64 bit (experimental). It is a pity that TeX Live for Windows is available only for 32 bit. 2. Deciding [the install location of your own packages and classes is easier on MiKTeX](https://tex.stackexchange.com/questions/1137/where-do-i-place-my-own-sty-files-to-make-them-available-to-all-my-tex-files/20121#20121). Installing [them is just as easy on TeX Live if you use one of the predefined locations](https://tex.stackexchange.com/q/20160/2099).
The disadvantage of TeXLive over MikTeX: Updating is frozen for several months before the new one is released. Very bad feature!
20,036
I have been using MiKTeX for a couple of years. I don't know whether it is "wise" to insist on using it. Could you give me a list of advantages of TeX Live over MiKTeX?
2011/06/06
[ "https://tex.stackexchange.com/questions/20036", "https://tex.stackexchange.com", "https://tex.stackexchange.com/users/2099/" ]
(Not meant to be a complete answer, just an addition to others.) TeX Live provides more secure defaults than MiKTeX and probably pays more attention to security in general. For example, section 3 of [this paper](http://cseweb.ucsd.edu/~hovav/papers/csr10.html) describes a simple way to make document (or bibtex database, or package) viruses which would almost make MS-Word look like a secure alternative ;-) This attack doesn't work with TeX Live's default settings, regardless of the platform (Windows or other). Not completely unrelated, TeX Live is designed to support multi-user systems, including being installed on a server and used on network clients, possibly with mixed architectures and OSes. (Which may be totally irrelevant to the OP, but mentioned only for information.)
In addition to what Ulrike Fischer has mentioned, the additional advantages of Miktex are: 1. Miktex has both 32 bit (stable) and 64 bit (experimental). It is a pity that TeX Live for Windows is available only for 32 bit. 2. Deciding [the install location of your own packages and classes is easier on MiKTeX](https://tex.stackexchange.com/questions/1137/where-do-i-place-my-own-sty-files-to-make-them-available-to-all-my-tex-files/20121#20121). Installing [them is just as easy on TeX Live if you use one of the predefined locations](https://tex.stackexchange.com/q/20160/2099).
51,676
> > 1. Known as number one > 2. Found on the edge of Trinidad and Denmark > 3. Tool for filling six pockets > > > What's missing? Hint 1: > > Alone, I am considered worthless, but together with my brothers and sisters, we can be a small fortune. > > > Hint 2: > > I'm kept safely in objects of all shapes and sizes, but my most commonly known home is inside the belly of an animal. > > >
2017/05/10
[ "https://puzzling.stackexchange.com/questions/51676", "https://puzzling.stackexchange.com", "https://puzzling.stackexchange.com/users/35972/" ]
Known as number one > > I (roman numeral) > > > Found on the edge of Trinidad and Denmark > > D (Trinida**d**, **D**enmark) > > > Tool for filling six pockets > > Q (Cue from Pool) > > > Potentially leaving > > P, for the [IDQP](http://www.iata.org/whatwedo/safety/audit/Pages/idqp.aspx)? > > >
Known as number one > > Letter A? > > > #2 and #3 have already been found, which gives us > > ADQ . > > > I have no idea where this leads, but the title of this puzzle makes me think of > > smithing (sound of the hammer on the anvil). > This, together with humn's answer, makes me think of "Between the hammer and the anvil" from Judas Priest, but probably no link with the solution. > > > But I don't know if that is the right path.
51,676
> > 1. Known as number one > 2. Found on the edge of Trinidad and Denmark > 3. Tool for filling six pockets > > > What's missing? Hint 1: > > Alone, I am considered worthless, but together with my brothers and sisters, we can be a small fortune. > > > Hint 2: > > I'm kept safely in objects of all shapes and sizes, but my most commonly known home is inside the belly of an animal. > > >
2017/05/10
[ "https://puzzling.stackexchange.com/questions/51676", "https://puzzling.stackexchange.com", "https://puzzling.stackexchange.com/users/35972/" ]
We know, based on humn's answer that: > > The letters are P,D,Q > > > And from the hint that: > > Together they are worth a small fortune > > > So what is missing is: > > N for nickel. The letters stand for Penny, Dime, Quarter, Nickel, a small fortune. > > > Edit, finally figured out the last clue, LOL. > > Piggy bank. Nice one. > > >
Known as number one > > I (roman numeral) > > > Found on the edge of Trinidad and Denmark > > D (Trinida**d**, **D**enmark) > > > Tool for filling six pockets > > Q (Cue from Pool) > > > Potentially leaving > > P, for the [IDQP](http://www.iata.org/whatwedo/safety/audit/Pages/idqp.aspx)? > > >
51,676
> > 1. Known as number one > 2. Found on the edge of Trinidad and Denmark > 3. Tool for filling six pockets > > > What's missing? Hint 1: > > Alone, I am considered worthless, but together with my brothers and sisters, we can be a small fortune. > > > Hint 2: > > I'm kept safely in objects of all shapes and sizes, but my most commonly known home is inside the belly of an animal. > > >
2017/05/10
[ "https://puzzling.stackexchange.com/questions/51676", "https://puzzling.stackexchange.com", "https://puzzling.stackexchange.com/users/35972/" ]
We know, based on humn's answer that: > > The letters are P,D,Q > > > And from the hint that: > > Together they are worth a small fortune > > > So what is missing is: > > N for nickel. The letters stand for Penny, Dime, Quarter, Nickel, a small fortune. > > > Edit, finally figured out the last clue, LOL. > > Piggy bank. Nice one. > > >
Known as number one > > Letter A? > > > #2 and #3 have already been found, which gives us > > ADQ . > > > I have no idea where this leads, but the title of this puzzle makes me think of > > smithing (sound of the hammer on the anvil). > This, together with humn's answer, makes me think of "Between the hammer and the anvil" from Judas Priest, but probably no link with the solution. > > > But I don't know if that is the right path.
114,107
The **short version of my question** is: Assuming the definition of the minor scale in Kostka's *Tonal Harmony* textbook (Eighth edition, chapter 4), which is that: > > there is, in a sense, one minor scale that has two scale steps 6 and > 7, that are variable. That is, there are two versions of 6 and 7, and > both versions will usually appear in a piece in the minor mode. > > > how is a step vs. a skip/leap then defined? Is, for example, 6 to raised 6 (raised here means in comparison to natural minor) a step, or does step mean the scale degree has to change: e.g., from 6 to 7 (I assume the latter is correct [more details below])? Is "7 to raised 6" considered a step? Some **more information** from the respective section (example numbers from the Kostka book are given in brackets): In an example (Nr. 4-2) the note material is written out for the root E as: E(1),F#(2),G(3),A(4),B(5),C(6),C#(raised 6),D(7),D#(raised 7),E(1). This example is **not** called a 'scale' by the book, and there is a bracket above 6 and raised 6 (same for 7 and raised 7). One could thus infer that these are the options you have for scale degrees 6 and 7, rather than it being a 10 note scale (which of course would be a strange way to view it). Now a section follows stating that: > > Melodically, the most graceful thing for raised 6 and raised 7 to do > is to ascend by step, whereas 6 and 7 [meaning natural minor scale > degrees] tend naturally to descend by step > > > Then some examples are given (Nr. 4-3 and 4-4, from Bach's *Well Tempered Clavier*), where, e.g., the sequences "7,6,5" or "raised 7, 1 (above previous note)" occur (which is perfectly in line with the above statement). Then it is stated that > > If a 6 or 7 is left by leap instead of by step, there will generally > be an eventual stepwise goal for that scale degree, and the 6 and 7 > will probably be raised or left unaltered according to the direction > of that goal.
> > > and then some examples (also from Bach - *Well Tempered Clavier*) follow that gave me some trouble: One is the sequence "raised 7,5,raised 6, raised 7, 1 (above previous note)". It is stated that the first note (raised 7) is left by leap (which would more precisely be a skip, I think?) and thus the question if the 7 gets raised or not depends on the "*eventual stepwise goal*". Now this is indicated to be the 1 at the end of the sequence, thus the initial 7 is raised. Enclosed in this sequence there is a "raised 6, raised 7" (going to 1 (above previous note)). Why is the raised 6 not the *eventual stepwise goal* for the first 7 (which would thus be unaltered)? Is it because unaltered 7 to raised 6 is not a step? Does this mean that it's only a step between 6 and 7 if both are raised/unaltered? While writing all this down things got a lot clearer, but the question of what exactly the conditions for a step are is not completely clear to me. The book also does not give much explanation for the statements regarding this example. I, from what I tried to figure out, would assume that indeed both 6 and 7 have to be raised/unaltered the same way. This would however forbid the unaltered 7 in a sequence "7,raised 6, raised 7, 1 (above previous note)" since it is not leaving by "step" (since one is unaltered and the other one is raised) and thus would target the 1 and would be raised (because of the 1 at the end of the sequence), even though for "(raised) 7, raised 6" it is then also not a leap/skip, which in the example given initiates the conditioning of the raised/unaltered option to the "*eventual stepwise goal*"? This seems to me to be a special case not mentioned in the book. The section in the book makes it clear that the mechanism explained above is not a hard rule (in fact some examples from famous pieces are given where this "rule is broken").
Nonetheless, it points to the given example and makes the statements I described above, so I assume there is some logic to it.
2021/04/30
[ "https://music.stackexchange.com/questions/114107", "https://music.stackexchange.com", "https://music.stackexchange.com/users/51044/" ]
Moving from scale degree 6 to scale degree 7, or from 7 to 6, is always a "step" regardless any alterations to one or both of those notes. "Step" is a shorthand for "the interval of a second", and "skip" is shorthand for "any interval larger than a second". Examples: * raised 6 to flat 7 is a "minor second", thus "a step". * raised 6 to raised 7 is a "major second", also "a step". * flat 6 to raised 7 is an "augmented second", so, too, "a step". The melodic minor scale is a model for how minor is sometimes used in compositions: 6 and 7 raised when the melody is ascending; 6 and 7 not raised when the melody is descending. For more on the melodic minor scale, see [Why does the melodic minor scale turn into natural minor when descending?](https://music.stackexchange.com/questions/48302/why-does-the-melodic-minor-scale-turn-into-natural-minor-when-descending).
> > how is then a step vs a skip/leap defined?" > > > Not necessarily strictly. It's reasonable to call the interval between the lowered sixth degree and the raised seventh degree a leap, and it is reasonable to call it a step. It depends on the context. Some of those steps are also half steps, and some are whole steps. Moving from the raised sixth degree to the lowered, or vice versa, is also a half step. You may have to disambiguate by indicating whether you are talking about steps on the diatonic scale or not. But one thing you will generally never find in common practice period music is that interval in a melodic context, so the question is not really pertinent to the analysis. Any voice that has the lower sixth degree in one chord will generally resolve down by half step in the next, and any voice with the higher seventh degree will generally resolve up by half step. If they go somewhere else, it won't be to move by an augmented second.
25,461
Cannot seem to probe virtual guests from the virtual host. **These guests can be probed from other devices on the same LAN/Network**, but not the host. I can understand why it might be struggling, but I am wondering if anyone ever found a way to make it work. HOST: OSX 10.6 GUEST: FreeBSD 8 (two of them) Edit: Adding some finer details, I have networking set to "bridged", I can ping and regularly consume services running on TCP/IP on both guests. All of my nmap probe attempts are done from the root account on the host.
2011/11/27
[ "https://unix.stackexchange.com/questions/25461", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12338/" ]
You can't if you run the `NAT` network between VM and host. Switch to a host-only adapter, i.e. vboxnet0 (you might need to create one in the Preferences dialog). **EDIT** You can use two interfaces anyway, I always do that
Updated: As you can ping, the next thing to find out is what happens when you use nmap set to do just a ping sweep. As this uses the very same ICMP connection, it should give you a result. If it doesn't, have a look at the permissions of the account you are using to run nmap - on Windows, the user you are logged in as must have administrative privileges on the computer (the user should be a member of the administrators group).
25,461
Cannot seem to probe virtual guests from the virtual host. **These guests can be probed from other devices on the same LAN/Network**, but not the host. I can understand why it might be struggling, but I am wondering if anyone ever found a way to make it work. HOST: OSX 10.6 GUEST: FreeBSD 8 (two of them) Edit: Adding some finer details, I have networking set to "bridged", I can ping and regularly consume services running on TCP/IP on both guests. All of my nmap probe attempts are done from the root account on the host.
2011/11/27
[ "https://unix.stackexchange.com/questions/25461", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12338/" ]
The KEXT for the network card somehow had gotten corrupt. Reloaded from the install CD and then I was able to hit the guests with various NMAP probes.
Updated: As you can ping, the next thing to find out is what happens when you use nmap set to do just a ping sweep. As this uses the very same ICMP connection, it should give you a result. If it doesn't, have a look at the permissions of the account you are using to run nmap - on Windows, the user you are logged in as must have administrative privileges on the computer (the user should be a member of the administrators group).
6,343,791
I am new to cocos2d game development. I want to know how to get started. Is any special installation needed for it?
2011/06/14
[ "https://Stackoverflow.com/questions/6343791", "https://Stackoverflow.com", "https://Stackoverflow.com/users/251304/" ]
Check out [the cocos2d-iPhone website](http://www.cocos2d-iphone.org/). You can download cocos2d-iPhone [here](http://www.cocos2d-iphone.org/download). Finally, [here is](http://www.cocos2d-iphone.org/wiki/doku.php/) the documentation.
I suggest you check out Kobold2D here: <http://www.kobold2d.com/display/KKSITE/Home> To get started, there is nothing wrong with the Cocos2D project templates (there is a shell script in the official download that you can use to install them). But once you want to upgrade your Cocos2D version in such a project, you will notice that it's getting quite complicated. Kobold2D should make this much simpler; I think it has an updater tool. I also can recommend the book by Steffen Itterheim, which is also featured on the Kobold2D website. I don't know if the second edition is out yet; if it isn't, I suggest you wait for that. The official Cocos2D forum also is a great place to get started. Check out this thread: <http://www.cocos2d-iphone.org/forum/topic/737>
47,863,710
How do I add the ReportViewer control in Visual Studio 2017? The controls are installed, but I can't add them to VB.NET application forms; the control only shows below the form. [enter image description here](https://i.stack.imgur.com/ycTwr.png)
2017/12/18
[ "https://Stackoverflow.com/questions/47863710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3354514/" ]
If the method above still cannot add the report viewer control to the toolbox, then try to drag the DLL directly to the toolbox: 1. Open Windows Explorer and navigate to the DLL. 2. Drag the DLL and drop it on Visual Studio in the Toolbox, exactly where you want your components to appear. {Solution Directory} means the working project folder, e.g. C:\Users\xxxx\xxxx\repos\WindowsApp4. If that is the project location, get into the following path: C:\Users\xxxx\xxxx\repos\WindowsApp4\packages\Microsoft.ReportingServices.ReportViewerControl.Winforms.140.340.80\lib\net40 Under this folder is the following DLL: *Microsoft.ReportViewer.WinForms.dll*. Drag and drop the above mentioned DLL to the toolbox.
Step 1: Run the following command in the Package Manager Console: `Install-Package Microsoft.ReportingServices.ReportViewerControl.Winforms -Version 140.340.80`. Step 2: Remove the ReportViewer control that is listed in the toolbox (the control with a version of 12.x). Step 3: Right-click anywhere in the toolbox and then select Choose Items.... Step 4: On the .NET Framework Components tab, select Browse. Step 5: Select the Microsoft.ReportViewer.WinForms.dll from the NuGet package you installed. The NuGet package is installed in the solution directory of your project; the path to the DLL will be similar to the following: {Solution Directory}\packages\Microsoft.ReportingServices.ReportViewerControl.Winforms.{version}\lib\net40. Step 6: The new control should now display within the toolbox.
237,796
When I was testing the movement of the bones of the whole body, I moved the head bone (rotated it on the X axis, to be exact), and when I rotated it the head of the character looked as if it had lengthened in the back, and the top of the head looked flattened. What can I do so it won't happen again? [![enter image description here](https://i.stack.imgur.com/IHaPJ.png)](https://i.stack.imgur.com/IHaPJ.png)
2021/09/11
[ "https://blender.stackexchange.com/questions/237796", "https://blender.stackexchange.com", "https://blender.stackexchange.com/users/132077/" ]
Go into *Object* mode, select your armature, open the *N* panel, and under *Item > Transform > Scale* look at the values: you've scaled your armature on Z, which explains why it's so deformed. To bring it back to a 1:1:1 scale, press `Ctrl`+`A` > *Apply Scale*. You'll also need to select the arms, go into the *Child Of* constraint, and click *Set Inverse* (if you plan to keep these constraints; I'm not sure they're useful compared to simple parenting, though). If you want to scale your bones, do it in *Edit* mode. [![enter image description here](https://i.stack.imgur.com/PhrGG.jpg)](https://i.stack.imgur.com/PhrGG.jpg)
Have you tried weight painting these areas by hand? You could also try to move the neck bone up a bit and apply a fresh automatic weight paint.
25,905
I am not sure if this is possible and can't seem to find any information, but here is what I would like to know: I am in the UK and have Exchange Server 2007. I have remote users in Germany. They are not that keen on OWA, and I would like to know if they can link to a public folder on our Exchange server from their Outlook client. Is this possible at all? Any links / advice would be appreciated. Thanks, Barry
2009/06/15
[ "https://serverfault.com/questions/25905", "https://serverfault.com", "https://serverfault.com/users/9537/" ]
It sounds like they don't have another Exchange Server computer there. If they do, there's a different set of questions and answers for your issue. Assuming they do not, you can use HTTP-over-RPC (aka "Outlook Anywhere") to give them "full blown Outlook" access to your Exchange Server computer, or you can use a VPN. Have a look at: <http://technet.microsoft.com/en-us/library/aa998934.aspx>
Certainly. If they're currently using Outlook, then in the folders view they'll see a top-level choice of "All Public Folders". They can drill down into that tree view and select the folders they want. They may want to add some of those folders as their favorites, which can be set up to synchronize public folders so they can be worked on offline.
25,905
I am not sure if this is possible and can't seem to find any information, but here is what I would like to know: I am in the UK and have Exchange Server 2007. I have remote users in Germany. They are not that keen on OWA, and I would like to know if they can link to a public folder on our Exchange server from their Outlook client. Is this possible at all? Any links / advice would be appreciated. Thanks, Barry
2009/06/15
[ "https://serverfault.com/questions/25905", "https://serverfault.com", "https://serverfault.com/users/9537/" ]
It sounds like they don't have another Exchange Server computer there. If they do, there's a different set of questions and answers for your issue. Assuming they do not, you can use HTTP-over-RPC (aka "Outlook Anywhere") to give them "full blown Outlook" access to your Exchange Server computer, or you can use a VPN. Have a look at: <http://technet.microsoft.com/en-us/library/aa998934.aspx>
Are the German users using the same mailbox server as your UK users, or do they have their own server? If they've got their own, is it in the same AD domain, or a separate one? And finally, if they're in a separate domain, is there any kind of trust relationship between it and yours?
25,905
I am not sure if this is possible and can't seem to find any information, but here is what I would like to know: I am in the UK and have Exchange Server 2007. I have remote users in Germany. They are not that keen on OWA, and I would like to know if they can link to a public folder on our Exchange server from their Outlook client. Is this possible at all? Any links / advice would be appreciated. Thanks, Barry
2009/06/15
[ "https://serverfault.com/questions/25905", "https://serverfault.com", "https://serverfault.com/users/9537/" ]
It sounds like they don't have another Exchange Server computer there. If they do, there's a different set of questions and answers for your issue. Assuming they do not, you can use HTTP-over-RPC (aka "Outlook Anywhere") to give them "full blown Outlook" access to your Exchange Server computer, or you can use a VPN. Have a look at: <http://technet.microsoft.com/en-us/library/aa998934.aspx>
This is possible if they are part of your network. As long as they can browse to the folder with Windows Explorer, this will work. We run Outlook at my office and do not use Exchange, but we do host the PSTs in a network folder. I accomplished this by setting up Outlook on each machine and then moving the PST to the network. When Outlook is started again it will prompt you to browse for the location of the PST. If the remote users aren't part of your domain this may not be possible... or it may be possible with something like Hamachi (which is free).
423,941
For STM32F405 the max system clock frequency is 168 MHz. I want to run it at marginally lower frequency only for the sake of safety and reliability because my application will run non-stop through out a year without going into low power sleep mode. Should I select a value for SYSCLK or HCLK as a power of 2 (example 128 MHz) or can I select any value (example 150 MHz or 148 MHz etc)? [![enter image description here](https://i.stack.imgur.com/MlRys.png)](https://i.stack.imgur.com/MlRys.png) Edit: Jitter info added below: [![enter image description here](https://i.stack.imgur.com/JGcy1.png)](https://i.stack.imgur.com/JGcy1.png)
2019/02/23
[ "https://electronics.stackexchange.com/questions/423941", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/208358/" ]
Any frequency is allowed; CubeMX won't allow illegal values. However, if you use a peripheral that needs the 48 MHz clock (such as USB), then that must be set correctly too. What may be important is the input range to the PLL: with some values there is less jitter. Read the data sheet about the PLL input.
The PLL appears to be performing an integer divide, so the only significant parameter is the ratio between the reference clock and the output. Unless you're concerned about high-bandwidth communication, the level of jitter which you can expect should be irrelevant to your application. Unless the datasheet says otherwise, device ageing is not significantly affected by operating frequency. Supply voltage, heat dissipation - yes; a 10% clock speed backoff, not so much. Even overclocked, you would be unlikely to observe a significant degradation over time in achievable frequency. Clocking of these devices is delay limited, not thermally limited. To expand on the latter point, timing between registers is the limiting factor for power/area-optimised designs. So in the M4 CPU, there are maybe up to 25-35 levels of logic (nand2) between typical pipeline registers. Depending on the technology, this determines fmax through propagation delay. Faster cores have longer pipelines (less logic in each stage), and do the same work over more clock cycles (trading throughput for latency).
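As a quick illustration of why almost any target frequency works, here is a sketch of the STM32F4 PLL arithmetic. The crystal value and divider settings below are illustrative assumptions (a common 8 MHz HSE with the reference-manual example dividers), not values taken from the question:

```python
# STM32F405 clock tree sketch: SYSCLK = HSE / PLLM * PLLN / PLLP,
# and the 48 MHz domain (USB OTG FS, SDIO) = HSE / PLLM * PLLN / PLLQ.
HSE = 8_000_000                      # assumed 8 MHz external crystal
PLLM, PLLN, PLLP, PLLQ = 8, 336, 2, 7

vco_input = HSE / PLLM               # keep in the 1-2 MHz range for lower jitter
vco_output = vco_input * PLLN
sysclk = vco_output / PLLP           # 168 MHz, the F405 maximum
usbclk = vco_output / PLLQ           # must be exactly 48 MHz if USB is used

print(sysclk, usbclk)                # 168000000.0 48000000.0
```

Note that the constraint driving a "backed-off" choice is usually the 48 MHz domain, not powers of two: for example, PLLN = 288 with PLLQ = 6 gives a 144 MHz SYSCLK while still hitting exactly 48 MHz for USB.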
52,540
In the 2004 film, [The Village](http://www.imdb.com/title/tt0368447/), the villagers wear yellow as a "safe" color to protect themselves against the creatures that live in the woods beyond the borders of their village. The color yellow is to show the creatures that the villagers mean them no harm and that peace is to be maintained between them. Ivy Walker (who is blind) needs to breach the borders and go into the woods. It is revealed to her by her father that the creatures actually do not exist, but they were simply used as a scare tactic to prevent the villagers from venturing into the woods and going to "the towns." Ivy still wears her yellow cloak. She falls into a hole and gets her yellow cloak covered with mud. Realizing that something is following her, she gets extremely nervous and frantically tries to wipe the mud off of her cloak to reveal that she is wearing yellow. [![enter image description here](https://i.stack.imgur.com/E1FfO.jpg)](https://i.stack.imgur.com/E1FfO.jpg) If Ivy knew that the creatures were not real, why was she trying to reveal the color of her cloak?
2016/05/03
[ "https://movies.stackexchange.com/questions/52540", "https://movies.stackexchange.com", "https://movies.stackexchange.com/users/22792/" ]
The Yellow clothes are a type of 'Elder Endorsement', a ratification or 'pass'. The village guardians who dress up as the beasts know that if someone is dressed in yellow they have been given the Yellow cloak under the supervision of a village elder, so it is a sanctioned activity. If someone is found wandering without one, it will be assumed they're not supposed to be there. It's unlikely the 'beasts' would harm her, but they'd certainly interfere and try and scare her off/back to the village. Ivy possibly doesn't know what they're capable of. Of course, Ivy *could* at this point tell them she knows they're not really beasts and knows what they're doing, but within the narrative of the film her maintaining her cloak's yellow lustre removes that possibility.
I know I'm late to the party, but I think I have a better answer. When her father showed her that the monsters aren't real, he did say that they based them on old rumors that there had been monsters in those woods. Before Noah attacks dressed as a monster, this line of dialogue is actually heard again (representing the words in her head). At this point, Ivy is afraid that the monsters are real in spite of the fact that the elders had been faking it. She has reason to believe this too, as her father said the costumes were based on past rumors, and she is being chased at that moment. Of course, she is wrong, because the monster attacking her is later revealed to be just Noah. This is why she was trying to reveal the cloak's color: if the monsters were real, maybe it would help her.
3,068,548
After deployment of a new version of our ASP.NET 2.0 application, it started to raise a security exception: „System.Security.SecurityException: Request for the permission of type 'System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.“ After quick research on the internet we were able to resolve this issue by setting „Load User Profile“ to True in the IIS 7.5 application pool. This solution is also mentioned several times here on Stack Overflow: * [Strange ASP.NET error !](https://stackoverflow.com/questions/1385999/strange-asp-net-error) * [System.Web.AspNetHostingPermission Exception on New Deployment](https://stackoverflow.com/questions/2242039/system-web-aspnethostingpermission-exception-on-new-deployment) * [Running a asp.net web application project on IIS7 throws exception](https://stackoverflow.com/questions/697429/running-a-asp-net-web-application-project-on-iis7-throws-exception) However, we were unable to find the reason why it has to be True. We reviewed all changes in the new version (gladly there were only a few), but didn't find anything suspicious (no access to the registry or temp data as some articles suggested, etc.). Could anybody give us hints on when an ASP.NET application hosted in IIS 7.5 needs the „Load User Profile“ option set to True? Details: * Application pool: .NET 2.0; Managed Pipeline Mode – Classic; Identity – custom domain account * In IIS 6.0 (W2K3): the old and new versions of the application work fine * In IIS 7.5 (W2K8-R2): the old version of the application works fine; the new version raises the security exception – it starts to work after setting „Load User Profile“ to True Thank you! **EDIT:** We have finally found the cause of this problem! Our admin used a different technique to copy the new version of the application from the staging environment to the production environment. He used a web server as intermediary.
After downloading the zipped release build artifacts to the production environment and then unzipping the files, they were still marked as "blocked" because they came from a different computer. See also <https://superuser.com/questions/38476/this-file-came-from-another-computer-how-can-i-unblock-all-the-files-in-a>. **ASP.NET then logically executes these binaries in partial trust instead of full trust, and that was actually causing the mentioned security exceptions in our application**. If "Load User Profile" is set to False, then our application (not our code; maybe some .NET BCL or external assembly) tries to query basic info about the directory "C:\Windows\System32\config\systemprofile\AppData\Local\Microsoft\Windows\Temporary Internet Files", which the identity of the application pool is not allowed to do: * With full trust: access denied on this query operation doesn't raise any exception * With partial trust: access denied on this query operation raises a security exception If "Load User Profile" is set to True, then a temporary profile in the Users directory is created every time the application pool starts. Our application then tries to query info about the "Temporary Internet Files" directory of this profile, which the identity of the application pool is allowed to do. Thus no exception is raised even with partial trust. Really nice troubleshooting session! :)
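The "blocked" flag mentioned in the edit is stored as an NTFS alternate data stream named `Zone.Identifier` attached to each downloaded file. As a minimal illustrative sketch (Windows/NTFS only; on other filesystems the stream simply won't exist), deleting that stream unblocks the file:

```python
import os

def unblock(path: str) -> bool:
    """Delete the Zone.Identifier alternate data stream that marks a
    downloaded file as 'blocked' on NTFS. Returns True if a stream was
    removed, False if the file was not blocked (or the filesystem has
    no alternate data streams at all)."""
    try:
        os.remove(path + ":Zone.Identifier")
        return True
    except OSError:
        return False
```

In practice the usual approaches are right-clicking the file and choosing Properties > Unblock, or PowerShell's `Unblock-File` cmdlet, which does the same thing in bulk.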
2010/06/18
[ "https://Stackoverflow.com/questions/3068548", "https://Stackoverflow.com", "https://Stackoverflow.com/users/370169/" ]
One more example where the "Load User Profile" setting could help you is the usage of temporary files. Sometimes this usage can be indirect; SQL Express, for example, can do this in some situations. So my advice: switch off "Load User Profile" and examine %TEMP%. Then try to give the domain account used for the application pool full access (or change access) to the directory from %TEMP%. That will probably fix your problem. One more piece of advice is to use Process Monitor (see <http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx>) to locate which parts of the user profile are used (or receive an "access denied" error) at the moment you receive the "System.Security.SecurityException: Request for the permission of type 'System.Web.AspNetHostingPermission" exception.
Another area where LoadUserProfile might help is when configuring a trusted MSMQ binding in WCF. If the app pool is running under a trusted account, this won't load the SID unless the Application pool load user profile setting is set to true, and hence authentication will fail.
3,068,548
After deployment of a new version of our ASP.NET 2.0 application, it started to raise a security exception: „System.Security.SecurityException: Request for the permission of type 'System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.“ After quick research on the internet we were able to resolve this issue by setting „Load User Profile“ to True in the IIS 7.5 application pool. This solution is also mentioned several times here on Stack Overflow: * [Strange ASP.NET error !](https://stackoverflow.com/questions/1385999/strange-asp-net-error) * [System.Web.AspNetHostingPermission Exception on New Deployment](https://stackoverflow.com/questions/2242039/system-web-aspnethostingpermission-exception-on-new-deployment) * [Running a asp.net web application project on IIS7 throws exception](https://stackoverflow.com/questions/697429/running-a-asp-net-web-application-project-on-iis7-throws-exception) However, we were unable to find the reason why it has to be True. We reviewed all changes in the new version (gladly there were only a few), but didn't find anything suspicious (no access to the registry or temp data as some articles suggested, etc.). Could anybody give us hints on when an ASP.NET application hosted in IIS 7.5 needs the „Load User Profile“ option set to True? Details: * Application pool: .NET 2.0; Managed Pipeline Mode – Classic; Identity – custom domain account * In IIS 6.0 (W2K3): the old and new versions of the application work fine * In IIS 7.5 (W2K8-R2): the old version of the application works fine; the new version raises the security exception – it starts to work after setting „Load User Profile“ to True Thank you! **EDIT:** We have finally found the cause of this problem! Our admin used a different technique to copy the new version of the application from the staging environment to the production environment. He used a web server as intermediary.
After downloading the zipped release build artifacts to the production environment and then unzipping the files, they were still marked as "blocked" because they came from a different computer. See also <https://superuser.com/questions/38476/this-file-came-from-another-computer-how-can-i-unblock-all-the-files-in-a>. **ASP.NET then logically executes these binaries in partial trust instead of full trust, and that was actually causing the mentioned security exceptions in our application**. If "Load User Profile" is set to False, then our application (not our code; maybe some .NET BCL or external assembly) tries to query basic info about the directory "C:\Windows\System32\config\systemprofile\AppData\Local\Microsoft\Windows\Temporary Internet Files", which the identity of the application pool is not allowed to do: * With full trust: access denied on this query operation doesn't raise any exception * With partial trust: access denied on this query operation raises a security exception If "Load User Profile" is set to True, then a temporary profile in the Users directory is created every time the application pool starts. Our application then tries to query info about the "Temporary Internet Files" directory of this profile, which the identity of the application pool is allowed to do. Thus no exception is raised even with partial trust. Really nice troubleshooting session! :)
2010/06/18
[ "https://Stackoverflow.com/questions/3068548", "https://Stackoverflow.com", "https://Stackoverflow.com/users/370169/" ]
One more example where the "Load User Profile" setting could help you is the usage of temporary files. Sometimes this usage can be indirect; SQL Express, for example, can do this in some situations. So my advice: switch off "Load User Profile" and examine %TEMP%. Then try to give the domain account used for the application pool full access (or change access) to the directory from %TEMP%. That will probably fix your problem. One more piece of advice is to use Process Monitor (see <http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx>) to locate which parts of the user profile are used (or receive an "access denied" error) at the moment you receive the "System.Security.SecurityException: Request for the permission of type 'System.Web.AspNetHostingPermission" exception.
I also ran into the same problem and could resolve it by setting Load User Profile = true. However, I have since reverted Load User Profile to false and restarted the app pool, and now I don't get any exception. I have gone through all the relevant posts on Stack Overflow and also on the ASP.NET and IIS forum pages.
183,471
Imagine a planet that's identical in pretty much every aspect (including Homo Sapiens) to our own current Earth, with only a single exception - the continental crust has, for whatever reason, much less iron in it. What's in the core/mantle is not all that important, as long as it provides the conditions above. I read that life could develop similarly with very little *mineable* iron ([Can an earth-like world lack mineable iron?](https://worldbuilding.stackexchange.com/questions/102915/can-an-earth-like-world-lack-mineable-iron)) so I'm happy with identical life forms. What I wonder about, is how would human civilisations develop if there were 10 times less iron accessible (with ancient and industrial revolution era technology) than there was in the Earth's actual history. What would be the major differences? Would iron age happen at all? Would industrial revolution (leaving socio-economic readiness aside) happen at all, or how much later? How would it all change if there was 100 or 1000 times less iron available?
2020/08/10
[ "https://worldbuilding.stackexchange.com/questions/183471", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/29103/" ]
You'd barely notice a difference -------------------------------- ... mainly because iron is so incomprehensibly common. Iron is about 1000 times as common as copper and 20,000 times as common as tin in the Earth's crust. SEE: <https://periodictable.com/Properties/A/CrustAbundance.v.html> So, even with 10-1000 times less of it, iron/steel would still be easy to make in significant amounts. At 10-100 times less common, you would probably not notice much of a difference in our world's history. Steel would be a bit more expensive, but still a pretty universally accessible metal. 1000 times less common, and it would probably extend the period of time where iron and bronze were used interchangeably much longer, but not really prevent the use of steel, since it is still the better option a lot of the time. Instead of iron replacing bronze, you'd just see each used based on when one is better than the other, or when one is more available in your region. At 10-100 times less common, no industry would be significantly impeded from developing, but some places in the world might be iron scarce. This could affect the balance of power between economies throughout history such that some empires may rise instead of others, and from this you could speculate all sorts of butterfly effects, but by and large there would be enough places to acquire iron in that world that no country would be fully cut off from having it as a readily available resource. At 1000 times though, you may also start to see the industrial revolution impacted, but not really prevented. There would still be plenty enough steel for mechanized production machines to remain economically viable. So early industrialization like grain mills, textile factories, etc. would still happen. But... things like trains, automobiles, and high-rises use a LOT of steel, and this is where you would see the scarcity bottleneck maybe start to affect you.
Just like copper started becoming scarce when we decided to wire and pipe up our entire world with the stuff, iron might become scarce if we tried laying down too many railroad tracks, making too many car engines, or making a bunch of steel-framed high-rises. There would still be enough viable ore to go around to get you started, but you'd often have to go farther to find it, which would make it more expensive. The transportation revolution would seem to be in jeopardy because of this, except for a very important discovery that happened in 1888 called the Bayer Process, which is where we learned to mass-produce aluminium. Aluminium is even more common in the crust than iron; so the Bayer Process opened up a virtually limitless supply of metal even without abundant steel. By replacing most of our bulk steel with aluminium, the industrial revolution could stay more or less on track. Aluminum, like copper, needs to be alloyed to become comparable to steel in strength, but instead of needing something rare like tin, it normally alloys with trace amounts of magnesium, silicon, and/or zinc, which were all easily isolated elements by 1888; so by the time the automotive and high-rise construction industries really start to take off enough for steel supplies to be a problem, we'd already have enough access to aluminium alloys to pretty much replace steel even at very large scales. The bottleneck would be cost: new aluminum costs about 3x as much as steel per pound to refine but is twice as strong for its weight. This would make things like skyscrapers much more expensive at first, because you would need a similar weight of aluminum as steel to hold up the weight of all that concrete, but things like automobiles would be less affected because you are only engineering to the weight of the chassis. Either way, the cost of aluminum would cause an initial adoption issue for a few years, but would not stay a lot more expensive for long.
Aluminum is much cheaper to recycle, form, and transport than it is to refine. So recycled aluminum products are only about half the cost of steel; as your civilization starts to have enough old stock to blend with the new stock, the cost of aluminum will drop to prices that could be comparable to steel today. All this would really mean for anyone today is that cars would be a bit more expensive, maybe making public transportation a bit more common, and high-rises would not be as high. But on the surface, society and our history would still be pretty recognizable.
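The abundance ratios cited in this answer can be sanity-checked with rough crustal concentrations. The ppm-by-mass figures below are approximate round numbers for illustration, not values copied from the linked table:

```python
# Approximate crustal abundance by mass (parts per million); illustrative values.
abundance_ppm = {"iron": 56_000, "copper": 60, "tin": 2.3}

iron_vs_copper = abundance_ppm["iron"] / abundance_ppm["copper"]
iron_vs_tin = abundance_ppm["iron"] / abundance_ppm["tin"]

print(round(iron_vs_copper))  # 933
print(round(iron_vs_tin))     # 24348
```

These come out near the "about 1000 times" (copper) and "20,000 times" (tin) figures quoted above, which is why even a 1000-fold reduction still leaves iron roughly as accessible as copper is on our Earth.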
Ten times less iron would make iron more valuable, but overall many things would look similar; iron is very common. But if we assume iron is made *scarce* by these changes, then the world does begin to look different. Brass, bronze, and various formulations would largely replace iron ------------------------------------------------------------------ While iron is more generally useful than bronze, copper, and so on, there are still smithing and smelting techniques that can improve the qualities of other metals beyond the level the Bronze Age is generally assumed to have reached (especially considering many issues with various bronzes were caused by a lack of tin or proper equipment). Even so, the relative advantage of these metals is lower - they are rarer, and harder to produce - so wooden or even stone weapons would not be considered quite so primitive. Clubs and maces would potentially be more common, and more warriors might go armourless. Simple tools that rely on size or force might still be used alongside smaller metal tools that rely on hardness. Skilled woodcrafting and wood treatment would be used in large-scale constructions and everyday goods for longer ---------------------------------------------------------------------------------------------------------------- While in many cases iron replaced wood for use in household items and such, the relatively higher scarcity of copper would likely mean that things like lanterns or shutters, latches, locks etc. would use treated hardwoods more often rather than metal. Likewise, lacquering would probably be more popular, as well as using high-labour-cost, expensive but tough fabrics.
Supply issues, technological issues, and social issues led some areas of Asia (notably Japan, but also Korea and many Pacific islands) to reserve iron for warlike uses and use more of other materials in construction and household goods - those are good references for the kind of workarounds in pottery, stone, cloth, and timber that humans will use when they don't have iron to spare. Good steel would be more of a semi-mythical metal than an everyday item ----------------------------------------------------------------------- How to make good steel was non-obvious. It took a very long time of constant and widespread use for people to figure out how to create stronger iron. If iron was rarer, there would be less opportunity for that kind of development - however, if it was rare, perhaps more effort would be spent on refining it (iron was largely improved when people figured out safe techniques to *cheaply* improve its quality - there were very labour-intensive methods to improve the quality, but those were typically only used to produce luxury items aimed at the upper class), resulting in a kind of 'Damascus steel' where a labour-intensive (and secret) method is used to make good-quality iron or steel. A well-made steel sword would give an immense advantage in a fight over a bronze sword or a club, as would steel armour over (heavier and usually not stronger) bronze, leather, or wooden armour. There would potentially be religious or social rules over the use of iron or steel, which did exist in some areas of the globe, but to a greater extent due to the rarity than existed in our world. Industrialization would proceed significantly slower ---------------------------------------------------- Methods to produce strong materials that are not based on iron do exist, but are often more difficult in terms of mass production.
They rely on harder-to-produce raw materials and/or processing methods that take longer or require more complex factories and machinery that were developed later in the industrial revolution than industrial furnaces/smelters, and in many cases were powered by or relied upon iron equipment that would be more expensive to reproduce in bronze, wood, or stone. The discovery and utilization of aluminium would be a far bigger deal --------------------------------------------------------------------- Although it is actually quite hard to get usable aluminium metal out of naturally occurring minerals, in an iron-poor world having a sudden source of an abundant and useful metal would be absolutely game-changing, and could potentially kick off a harder and faster industrial age than had existed previously.
183,471
Imagine a planet that's identical in pretty much every aspect (including Homo Sapiens) to our own current Earth, with only a single exception - the continental crust has, for whatever reason, much less iron in it. What's in the core/mantle is not all that important, as long as it provides the conditions above. I read that life could develop similarly with very little *mineable* iron ([Can an earth-like world lack mineable iron?](https://worldbuilding.stackexchange.com/questions/102915/can-an-earth-like-world-lack-mineable-iron)) so I'm happy with identical life forms. What I wonder about, is how would human civilisations develop if there were 10 times less iron accessible (with ancient and industrial revolution era technology) than there was in the Earth's actual history. What would be the major differences? Would iron age happen at all? Would industrial revolution (leaving socio-economic readiness aside) happen at all, or how much later? How would it all change if there was 100 or 1000 times less iron available?
2020/08/10
[ "https://worldbuilding.stackexchange.com/questions/183471", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/29103/" ]
Ten times less iron would make iron more valuable, but overall many things would look similar; iron is very common. But if we assume iron is made *scarce* by these changes, then the world does begin to look different. Brass, bronze, and various formulations would largely replace iron ------------------------------------------------------------------ While iron is more generally useful than bronze, copper, and so on, there are still smithing and smelting techniques that can improve the qualities of other metals beyond the level the Bronze Age is generally assumed to have reached (especially considering many issues with various bronzes were caused by a lack of tin or proper equipment). Even so, the relative advantage of these metals is lower - they are rarer, and harder to produce - so wooden or even stone weapons would not be considered quite so primitive. Clubs and maces would potentially be more common, and more warriors might go armourless. Simple tools that rely on size or force might still be used alongside smaller metal tools that rely on hardness. Skilled woodcrafting and wood treatment would be used in large-scale constructions and everyday goods for longer ---------------------------------------------------------------------------------------------------------------- While in many cases iron replaced wood for use in household items and such, the relatively higher scarcity of copper would likely mean that things like lanterns or shutters, latches, locks etc. would use treated hardwoods more often rather than metal. Likewise, lacquering would probably be more popular, as well as using high-labour-cost, expensive but tough fabrics.
Supply issues, technological issues, and social issues led some areas of Asia (notably japan but also many pacific islands, korea) to reserve iron for warlike uses and use more of other materials in construction and household goods - those are good references for the kind of workarounds in pottery, stone, cloth, and timber that humans will use when they don't have iron to spare. Good steel would be more of a semi-mythical metal than an everyday item ----------------------------------------------------------------------- How to make good steel was non-obvious. It took a very long time of constant and widespread use for people to figure out how to create stronger iron. If iron was rarer, there would be less opportunity for that kind of development - however, if it was rare, perhaps more effort would be spent on refining it (iron was largely improved when people figured out safe techniques to *cheaply* improve its quality - there were very labour intensive methods to improve the quality but those were typically only used to produce luxury items aimed at the upper class) resulting in a kind of 'damascus steel' where a labour-intensive (and secret) method is used to make good quality iron or steel. A well-made steel sword would give an immense advantage in a fight over a bronze sword or a club, as would steel armour over (heavier and usually not stronger) bronze, leather or wooden armour. There would potentially be religious or social rules over the use of iron or steel, which did exist in some areas of the globe, but to a greater extent due to the rarity than existed in our world. Industrialization would proceed significantly slower ---------------------------------------------------- Methods to produce strong materials that are not based on iron do exist but are often more difficult in terms of mass production. 
They rely on harder-to-produce raw materials and/or processing methods that take longer or require more complex factories and machinery that were produced later in the industrial revolution than industrial furnaces/smelteries and in many cases were powered by or relied upon iron equipment that would be more expensive to reproduce in bronze, wood, or stone. The discovery and utilization of aluminium would be a far bigger deal --------------------------------------------------------------------- Although it is actually quite hard to get a usable metal aluminium out of naturally occurring minerals, in an iron-poor world having a sudden source of an abundant and useful metal would be absolutely game-changing and potentially kick off a harder and faster industrial age than had existed previously.
Although clearly less available than today, even at one-thousandth the quantity we have there might still be substantial ore available, because there is so much of it. This might mean it was available to some nations and not others, and it might run low before the iron age took off. But assuming a worst-case situation where iron is no longer available in concentrated form, it would change the course of history: the Bronze Age would last longer, and instead of an iron age there would have been a nickel age or similar.
Many civilisations have developed with little to no iron. Maoris, for one. You would get a lot of fishing, wooden boats, sails etc but nothing very strong or long-lasting. Everything would need to be created multiple times and constantly repaired.
You'd barely notice a difference
--------------------------------

... mainly because iron is so incomprehensibly common. Iron is about 1,000 times as common as copper and 20,000 times as common as tin in the Earth's crust (see <https://periodictable.com/Properties/A/CrustAbundance.v.html>). So, even with 10-1000 times less of it, iron/steel would still be easy to make in significant amounts.

At 10-100 times less common, you would probably not notice much of a difference in our world's history. Steel would be a bit more expensive, but still a pretty universally accessible metal. At 1000 times less common, the period where iron and bronze were used interchangeably would probably extend much longer, but the use of steel would not really be prevented, since it is still the better option a lot of the time. Instead of iron replacing bronze, you'd just see each used based on when one is better than the other, or on which is more available in your region.

At 10-100 times less common, no industry would be significantly impeded from developing, but some places in the world might be iron-scarce. This could affect the balance of power between economies throughout history, such that some empires may rise instead of others, and from this you could speculate all sorts of butterfly effects; but by and large, there would be enough places to acquire iron that no country would be fully cut off from having it as a readily available resource.

At 1000 times, though, you may also start to see the industrial revolution impacted, but not really prevented. There would still be plenty of steel for mechanized production machines to remain economically viable, so early industrialization like grain mills, textile factories, etc. would still happen. But... things like trains, automobiles, and high-rises use a LOT of steel, and this is where the scarcity bottleneck might start to bite. Just as copper started becoming scarce when we decided to wire and pipe up our entire world with the stuff, iron might become scarce if we tried laying down too many railroad tracks, making too many car engines, or building a bunch of steel-framed high-rises. There would still be enough viable ore to go around to get you started, but you'd often have to go farther to find it, which would make it more expensive.

The transportation revolution would seem to be in jeopardy because of this, except for a very important discovery that happened in 1888 called the Bayer process, which is how we learned to mass-produce aluminium. Aluminium is even more common in the crust than iron, so the Bayer process opened up a virtually limitless supply of metal even without abundant steel. By replacing most of our bulk steel with aluminium, the industrial revolution could stay more or less on track. Aluminium, like copper, needs to be alloyed to become comparable to steel in strength, but instead of needing something rare like tin, it normally alloys with trace amounts of magnesium, silicon, and/or zinc, all of which were easily isolated elements by 1888. So by the time the automotive and high-rise construction industries really started to take off enough for steel supplies to be a problem, we'd already have enough access to aluminium alloys to pretty much replace steel even at very large scales.

The bottleneck would be cost: new aluminium costs about 3x as much as steel per pound to refine, but is twice as strong for its weight. This would make things like skyscrapers much more expensive at first, because you would need a similar weight of aluminium as steel to hold up the weight of all that concrete; things like automobiles would be less affected, because there you are only engineering to the weight of the chassis. Either way, the cost of aluminium would cause an initial adoption issue for a few years, but it would not stay a lot more expensive for long. Aluminium is much cheaper to recycle, form, and transport than it is to refine - recycled aluminium products are only about 1/2 the cost of steel - so as your civilization starts to have enough old stock to blend with new stock, the cost of aluminium will drop to prices comparable to steel today.

All this would really mean for anyone today is that cars would be a bit more expensive, maybe making public transportation a bit more common, and high-rises would not be as high. But on the surface, society and our history would still be pretty recognizable.
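The cost comparison above can be sketched numerically. This is a back-of-envelope illustration using only the answer's own rough figures (3x refining cost per pound, 2x strength-to-weight), not real market data:

```python
# Rough comparison of structural cost, using the illustrative figures
# from the answer above (steel refining cost normalized to 1).

STEEL_COST_PER_LB = 1.0
NEW_AL_COST_PER_LB = 3.0   # "about 3x as much as steel per pound to refine"
AL_STRENGTH_RATIO = 2.0    # aluminium ~2x as strong as steel for its weight

def cost_for_strength(cost_per_lb, strength_per_lb, required_strength):
    """Material cost to deliver a given total strength."""
    pounds_needed = required_strength / strength_per_lb
    return pounds_needed * cost_per_lb

# Strength-limited design (e.g. a car chassis): aluminium's
# strength-to-weight advantage halves the pounds needed.
steel = cost_for_strength(STEEL_COST_PER_LB, 1.0, 100)
new_al = cost_for_strength(NEW_AL_COST_PER_LB, AL_STRENGTH_RATIO, 100)
print(f"strength-limited: steel {steel:.0f}, new aluminium {new_al:.0f}")

# Weight-limited design (e.g. a frame sized by the dead load of concrete):
# you need a similar *weight* of metal either way, so the full 3x
# refining premium applies.
print(f"weight-limited:   steel {100 * STEEL_COST_PER_LB:.0f}, "
      f"new aluminium {100 * NEW_AL_COST_PER_LB:.0f}")
```

Under these assumed figures, aluminium carries a 50% premium where strength governs the design, but a 200% premium where weight governs it, which matches the answer's skyscraper-versus-automobile distinction.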
32,491,761
My development team and I are traditional .NET developers writing and maintaining many applications structured as C# .NET Web Forms (aspx pages) and, more recently, MVC 5. We've recently stood up a new MVC-based API which acts as a web service, serving up parameter-requested data in JSON in a consumable form. I'm very interested in pursuing lightweight web applications going forward, where all of the heavy lifting would take place in the API and the application itself would be a collection of basic web pages. We went down the path of using an MVC solution written in Visual Studio to consume the API, but we started bumping up against our old habits: writing business logic in the app itself, fragmenting each application, etc. My question to you all is: what types of functionality are fully lost when moving to a lightweight front-end web application (HTML pages) consuming a JSON-producing web API? I imagine security would be a challenge, since today our C# apps rely on the Active Directory account of the web user. What other items will stand in our way? Our web applications are relatively basic - that is, calling and displaying lists of data objects, viewing/editing/creating individual objects, reporting, etc. We're trying to get away from the complex re-work that comes with each new web application (we have over 60 currently). We're trying to centralize, and I love the idea of APIs and lightweight web front ends. It would be great to use something like AngularJS to consume the service, deserialize the objects, and write them to modern Bootstrapped HTML pages. Ideally we wouldn't even maintain classes and objects in the web app (of course they would be in the API); we would just expect a defined list of object attributes and would place them on the page where needed. And for creates and updates we could post data back by serializing the data from the HTML form and pushing it back to the API. Is this possible? What stands in our way?
What will we miss most from the world of C#, code-behinds, ASPX, and cshtml/HTML helpers? What is not possible in this basic HTML + Web API platform? Thank you!!
2015/09/10
[ "https://Stackoverflow.com/questions/32491761", "https://Stackoverflow.com", "https://Stackoverflow.com/users/715724/" ]
This is a big question. It sounds like your dev team is welcoming in some new technology with open arms, and you like what you see. Good first step (: But it's clear that there are still plenty of hurdles to overcome, and ways to improve the development process (an ongoing battle for all teams).

You mentioned:

1. Business logic in the presentation layer
2. Fragmentation + general bad habits

These seem like a tangent from your main question, but they are good questions. Have you considered [Code Reviews](https://en.wikipedia.org/wiki/Code_review) as part of the workflow for accepting pull requests into your version control? Do you have code standards such as DRY, KISS, etc. that can be used to objectively measure those pull requests? And do you have a clearly defined architecture in place (like [Onion Architecture](http://jeffreypalermo.com/blog/the-onion-architecture-part-1/)) so that all code has a clear place where it lives?

Back to your main questions:

1. Security (namely Active Directory)
2. Reusable code (mostly lists and data objects [basic CRUD])
3. Not wanting to maintain classes/objects in the web app
4. Limitations? Can we do what we want?

I am by no means a senior .NET developer, so maybe someone else can offer better thoughts, but here are mine. I think you will find very few limitations, but it will require you and your team to tackle new problems that many of you may not have experience with. You mention not using models (just properties), but I would push you in the other direction. Applications I work with generally have the following:

* Domain objects (modeling the DB)
* Data transfer objects (DTOs)
* View models (comprised of DTOs and view-model properties)

All three are maintained in the code behind, with the mapping from Domain -> DTO and back facilitated by AutoMapper. We also recognized that the majority of our apps were comprised of basic CRUD operations on key domain objects, so we created a base view model with a List and a DetailDto (form/single object), where each object we operate CRUD on acts as a SPA.

For security, we use [authorization filters and the identity project](http://www.asp.net/web-api/overview/security/authentication-and-authorization-in-aspnet-web-api). This gets us the level of security we need, but it sounds like you have additional concerns that I'm woefully unqualified to answer.

We still use helpers, especially for complex components, but we've built them in such a way that they work well with our front-end data-binding framework.

A bit rambly. Feel free to ask for clarification or additional questions. Hope this helps at least a little!
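The domain-object/DTO split described above can be sketched language-agnostically. Here is a minimal Python illustration of the same layering (with hypothetical `CustomerEntity`/`CustomerDto` names; in .NET, the hand-rolled mapping step is what AutoMapper automates):

```python
from dataclasses import dataclass, asdict

# Domain object: mirrors the database row (never sent over the wire).
@dataclass
class CustomerEntity:
    id: int
    name: str
    password_hash: str   # internal-only field, must not leak to clients

# DTO: the shape the JSON API exposes -- only safe, client-facing fields.
@dataclass
class CustomerDto:
    id: int
    name: str

def to_dto(entity: CustomerEntity) -> CustomerDto:
    """Hand-rolled Domain -> DTO mapping (AutoMapper's role in .NET)."""
    return CustomerDto(id=entity.id, name=entity.name)

entity = CustomerEntity(id=1, name="Ada", password_hash="x")
payload = asdict(to_dto(entity))   # what the API would serialize to JSON
print(payload)                     # {'id': 1, 'name': 'Ada'}
```

The point of the extra layer is that the front end only ever binds to DTO attributes, so the web app never needs to know about the domain classes at all.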
One thing that comes to my mind is reusability. With MVC you can have partial views to organize your repeating code blocks. Though you could achieve something similar with Angular, it may not be fully possible with plain HTML pages.
28,256
I am trying to compute treasury yields (with different data) similar to what has been done by [bloomberg](http://www.bloomberg.com/markets/rates-bonds/government-bonds/us), [yahoo finance](http://finance.yahoo.com/bonds), [msn money](http://www.msn.com/en-us/money/markets), and [wall street](http://quotes.wsj.com/bond/BX/TMUBMUSD06M?mod=DNH_S). I find that the data reported by these sources are not the same, and also do not match that of the [US treasury](https://www.treasury.gov/resource-center/data-chart-center/interest-rates/Pages/TextView.aspx?data=yield). Please let me know how I should proceed (references to academic articles are also welcome).
2016/07/25
[ "https://quant.stackexchange.com/questions/28256", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/22693/" ]
There are standard software packages for yield calculations according to "Street" conventions. One is FICALC, endorsed by SIFMA: <http://www.tipsinc.com/ficalc/calc.tips> It is probably the closest you can get to an industry standard, and is widely used by itself or to check the correctness of other software.
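Street conventions involve exact day counts, settlement rules, and compounding frequencies, which is why dedicated packages like FICALC exist. The core price-to-yield calculation underneath, though, is just root-finding on the bond pricing formula. A deliberately simplified sketch (annual coupons, whole periods; real Treasury math uses semiannual coupons and actual/actual day counts):

```python
def bond_price(ytm, face, coupon_rate, years):
    """Price of an annual-pay bond discounted at a flat yield."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    pv_face = face / (1 + ytm) ** years
    return pv_coupons + pv_face

def yield_to_maturity(price, face, coupon_rate, years, tol=1e-10):
    """Solve price = bond_price(ytm) for ytm by bisection."""
    lo, hi = -0.99, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bond_price(mid, face, coupon_rate, years) > price:
            lo = mid   # model price too high -> yield must rise
        else:
            hi = mid
    return (lo + hi) / 2

# Sanity check: a bond priced at par yields its coupon rate.
ytm = yield_to_maturity(price=100.0, face=100.0, coupon_rate=0.05, years=10)
print(round(ytm, 6))   # 0.05
```

Two sources running this same calculation on slightly different input prices (bid vs. last trade, 3:30pm vs. 5pm snapshot) will report slightly different yields, which is exactly the discrepancy the question observes.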
By virtue of being OTC instruments, you'll get different yields from different sources. At the very same point in time, Goldman and JPM might quote slightly different yields (although the difference is virtually nonexistent for on-the-run Treasuries). The biggest differences in this case are:

1) Timing: the US Treasury computes its yields using bid-side prices at around 3:30pm EST, while financial media usually report the last traded yield (as of 5pm).

2) Methodology: financial media report the yields of on-the-run issues, while the US Treasury reports hypothetical, constant-maturity par yields calculated using a cubic spline model.
19,640
In [this help center article](https://askubuntu.com/help/reviews-intro), there is some information about *triage* and *help and improvement review queues*. > > * [Triage](https://stackoverflow.com/help/review-triage) – Stack Overflow only - 500 reputation. The primary goal of Triage is to quickly sort questions into groups: good as is, needs community editing, or should be closed/flagged. > * [Help and improvement](https://stackoverflow.com/help/review-help-and-improvement) - Stack Overflow only - 2,000 reputation. This queue lets you edit questions that were flagged in Triage as needing community editing. > > > Since the aforementioned review queues are specific to Stack Overflow, I don't understand why the above quoted information exists in [Ask Ubuntu help center](https://askubuntu.com/help). Why does this information exist in Ask Ubuntu help center? Shouldn't it be removed?
2021/06/28
[ "https://meta.askubuntu.com/questions/19640", "https://meta.askubuntu.com", "https://meta.askubuntu.com/users/1103140/" ]
The [answer Tinkeringbell wrote on MSE](https://meta.stackexchange.com/a/367009/284336) is pretty much the explanation for this but I'll throw a bit of additional information out there. As mentioned in [Workflow changes for review queues](https://meta.stackexchange.com/questions/363552/workflow-changes-for-review-queues) - Help & Improvement will be retired in the near future - so that's one of them removed. Triage will not be retired but it may be added to additional sites that wish to have it, so it will be applicable in more places (though we'll likely have to change "Stack Overflow only" to "where applicable"). These pages can't be specialized for each site without making them have to be custom edited on every site. This is why moderators can't edit these pages - we need to have them be identical so that they can be updated. They work just like regular posts but we do have some ability to reference tables to pull in information - that's how the reputation levels for different privileges are custom to the site. As to changing the order so that the Triage option is at the bottom - it's *possible* but the items are (generally) in order based on when someone unlocks the queue - though this doesn't work on beta sites or MSE where the rep levels are different because (again) we can't adjust the content that appears on that level on a per-site basis.
I think it's quite nice that it's there - it's like we're overhearing something not exactly meant for us but that may nonetheless be of interest. Like, if our site became immensely busier for some reason, we might think about requesting the addition of those queues. Or, we might see this stuff in the Help Center and feel a surge of generous interest in helping the poor overworked Stack Overflow reviewers.
2,112,936
Is it possible to add new nodes to Hadoop after it is started? I know that you can remove nodes (as the master tends to keep tabs on the node state).
2010/01/21
[ "https://Stackoverflow.com/questions/2112936", "https://Stackoverflow.com", "https://Stackoverflow.com/users/80701/" ]
You can add new nodes by just booting up a new one with a proper hadoop-site.xml (one that points back to the master namenode and jobtracker). That said, removing nodes is a really bad idea without first rebalancing your HDFS file blocks, to avoid removing every replica of a given block. If you drop three datanodes, you could lose all the replicas of a given block (at the default replication factor of 3), thus corrupting the file the block belongs to. Removing two nodes could leave you with a single replica, and that copy could itself be corrupt (known to happen with dying disks).
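As a rough sketch of what "a proper hadoop-site.xml" means here (hostnames and ports are placeholders, and on newer Hadoop releases this single file is split into core-site.xml and mapred-site.xml), the new node's config just needs to point back at the master:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- HDFS: where the namenode lives -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master-host:9000</value>
  </property>
  <!-- MapReduce: where the jobtracker lives -->
  <property>
    <name>mapred.job.tracker</name>
    <value>master-host:9001</value>
  </property>
</configuration>
```

With that in place, starting the datanode (and tasktracker) daemons on the new machine should be enough for it to register itself with the running cluster.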
I think as long as you don't use them as datanodes you wouldn't have an issue... of course, the data locality aspect of Hadoop is gone at that point.
149,442
The homepages of several banks are available over http. Some have search engines linked to http. This, it seems to me, makes their users trivially vulnerable to MITM attacks if they visit the bank over e.g. public wifi. This, it further seems to me, means that bank account security is depending on “I *hope* they only access their bank accounts from secure home networks” *<crosses fingers>*. It looks to me that for an outlay of (1) a wifipineapple or other Evil AP device and (2) a $fewhundred to pay someone to customise SSLStrip with fake versions of 3 or 4 specific bank sites, I can: * Go sit in a shopping mall for a couple of days with the pineapple * Collect entire traffic user<->bank * Take over sessions and start making payments i.e: easy-to-mitm-public-wifi + easy-to-spoof-http-homepage => easy to hack. What have I missed—it can't be that easy and cheap to hack bank accounts, can it? ———————————————————————————————————————— PS I raised a complaint with my bank and they called back to point out that the login page asks me to install <https://www.trusteer.com/ProtectYourMoney> (an IBM product) which apparently addresses this kind of attack by closing down the browser. Also that I should not use the search engine to find their website. But I suppose once you've stolen a public AP, DNS poisoning is simple too
2017/01/25
[ "https://security.stackexchange.com/questions/149442", "https://security.stackexchange.com", "https://security.stackexchange.com/users/54992/" ]
On websites that experience large amounts of traffic (e.g. consumer sites, like banks) administrators have decided to use mixed plaintext HTTP and HTTPS implementations. Typically, HTTP is used for public information, and the switch to HTTPS occurs when accessing private resources such as a login screen. This practice largely came from the days when SSL (now TLS) encryption/decryption was costlier (1990s and early 2000s) from a CPU perspective, and could severely slow down page load times. Now, with faster CPUs more adept at using hash algorithms like SHA256 and encryption ciphers like AES, this problem is less obvious. However, if you're running a large-scale web server receiving thousands of hits a minute, the administrator will surely notice TLS traffic vs non-TLS traffic in terms of CPU utilized. The plaintext site will require less CPU and IT maintenance overhead, and in turn cause a savings to the organization in terms of hosting expense. Additionally as pointed out in a comment by @mgjk, the costs of certificate management between bank IT departments (business versus personal, trading vs other commerce, WAF and DDoS mitigation layers that need to be able to do SSL decryption, etc) is also significant and cumbersome in a large organization, and can lead to a reluctance on the part of management to roll out a TLS-only site. Thus many banks continue to leverage mixed HTTP and HTTPS web sites. New best practices are being developed to control for problems and attack vectors present in this setup, notably "SSLStrip" style MiTM attacks. Control measures include use of HSTS headers, and are making headway to banks -- but have yet to achieve wide-scale commercial implementation from these large, generally risk and change averse entities. As far as the specific situation you describe with a WiFi Pineapple as an Evil AP, this would be possible if you stage an "SSLStrip" attack, or another man-in-the-middle attack. 
However, many banks take measures to protect against session hijack attacks -- notably expiring the session quickly. If the device had visited the site before and it sent HSTS headers, an error (perhaps enough to scare the average user) would surely result. So, the attack scenario you describe is a bit simplistic and would require a bit more sophistication before being viable against modern bank web site implementations. However, yes, when banks don't force the use of TLS with HSTS and certificate pinning, a large attack vector is opened up similar to what you describe -- and this practice should be phased out!
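The HSTS headers mentioned above are simple to inspect. As a toy sketch (the header name and directives are as defined in RFC 6797, but the parsing here is simplified), this is roughly the check an HSTS-aware client performs on a response:

```python
def hsts_policy(headers):
    """Parse a Strict-Transport-Security header (RFC 6797) out of a
    dict of response headers; return None if the site sent no HSTS."""
    value = headers.get("Strict-Transport-Security")
    if value is None:
        return None
    policy = {"max_age": 0, "include_subdomains": False}
    for directive in value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            # Number of seconds the browser must remember "HTTPS only"
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
    return policy

# A bank that sends HSTS pins browsers to HTTPS for max-age seconds:
print(hsts_policy({"Strict-Transport-Security": "max-age=31536000; includeSubDomains"}))
# A plain-HTTP site sends nothing, leaving its users strippable:
print(hsts_policy({"Content-Type": "text/html"}))
```

A site that never sends this header gives the browser nothing to remember, which is exactly what keeps SSLStrip-style downgrades viable.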
While I consider the bulk of Herringbone Cat's answer to be very misleading, he/she is correct in saying that the situation you describe is far from ideal. But there are steps which a site provider can take to mitigate the attack in other ways. SSLStripping has been a well known attack method [since 2009](https://moxie.org/software/sslstrip/). The most effective solution to date is [HSTS](https://www.rfc-editor.org/rfc/rfc6797) where a capable browser will remember (or be told) which sites exclusively use HTTPS. However: * this is keyed to the domain name - you can't mix HTTP and HTTPS on the same domain name * it is only very recently that MSIE (Edge 11) has implemented HSTS support (a long time after Chrome, Firefox and Opera) There are other solutions based on [detecting the connection type](http://stupentest.net/detecting-sslstrip-using-css.html) in the browser at run time using server-supplied logic - but the obvious limitation is that this logic is more susceptible to tampering. There are also other more subtle methods for detecting fraud which can be implemented server-side and are applicable to other types of attack; split sessions, unusual navigation patterns, bot detection, patterns of transactions... The secrecy of such methods is rather important to their efficacy - hence the banks will not be publishing information about what these controls are. **But we don't know if the sites in question are using such protections.** Another point to bear in mind, is that the attack need not be initiated on the bank's website itself (nor is the attack restricted to bank websites) but on any web page which has links to the site you wish to strip. Microsoft's bing.com site is still served up over HTTP by default. IMHO this is dereliction of their duty of care to their customers. However, even in the absence of any protection, for your attack to be successful you would need to be MITM'ing a device at the same time as the user happened to log into their bank. 
I suspect that a shopping mall would not give a particularly good yield. So, yes, it *may* be that easy.
2,467
Right now, we have [professionalism](https://softwareengineering.stackexchange.com/questions/tagged/professionalism "show questions tagged 'professionalism'") and [profession](https://softwareengineering.stackexchange.com/questions/tagged/profession "show questions tagged 'profession'"), but I'm not sure that they should be two separate tags. Between them, there are only 32 questions (18 in professionalism and 14 in profession). I, personally, prefer professionalism - it includes a variety of topics ranging from the development software engineering as a [profession](http://en.wikipedia.org/wiki/Profession) to ethics and can be used to denote things that are of interest exclusively to professionals in a work environment. I think that [professionalism](https://softwareengineering.stackexchange.com/questions/tagged/professionalism "show questions tagged 'professionalism'"), by nature, includes everything in [profession](https://softwareengineering.stackexchange.com/questions/tagged/profession "show questions tagged 'profession'"). But thoughts?
2011/10/25
[ "https://softwareengineering.meta.stackexchange.com/questions/2467", "https://softwareengineering.meta.stackexchange.com", "https://softwareengineering.meta.stackexchange.com/users/4/" ]
"Profession" seems like a meta-tag. After all, all questions here are ostensibly related to the software development profession. I haven't looked closely at the professionalism tag yet, but I think that it makes no sense to keep "profession" regardless.
If profession is indeed a Meta tag as Anna suggests, I would recommend merging profession->professionalism and then manually reviewing the 30 questions, or reviewing the 14 profession questions and then merging. Either way, it's a small enough number for a manual cleanup followed by a merge or a merge followed by a manual cleanup.
2,467
Right now, we have [professionalism](https://softwareengineering.stackexchange.com/questions/tagged/professionalism "show questions tagged 'professionalism'") and [profession](https://softwareengineering.stackexchange.com/questions/tagged/profession "show questions tagged 'profession'"), but I'm not sure that they should be two separate tags. Between them, there are only 32 questions (18 in professionalism and 14 in profession). I, personally, prefer professionalism - it includes a variety of topics ranging from the development software engineering as a [profession](http://en.wikipedia.org/wiki/Profession) to ethics and can be used to denote things that are of interest exclusively to professionals in a work environment. I think that [professionalism](https://softwareengineering.stackexchange.com/questions/tagged/professionalism "show questions tagged 'professionalism'"), by nature, includes everything in [profession](https://softwareengineering.stackexchange.com/questions/tagged/profession "show questions tagged 'profession'"). But thoughts?
2011/10/25
[ "https://softwareengineering.meta.stackexchange.com/questions/2467", "https://softwareengineering.meta.stackexchange.com", "https://softwareengineering.meta.stackexchange.com/users/4/" ]
### Profession [profession](https://softwareengineering.stackexchange.com/questions/tagged/profession "show questions tagged 'profession'") is a meta tag and just needs to be burninated from all the questions that have it. It has no semantic value, and it's not the same thing as [professionalism](https://softwareengineering.stackexchange.com/questions/tagged/professionalism "show questions tagged 'professionalism'"): it should not be merged into it. ### Professionalism I looked at the [professionalism](https://softwareengineering.stackexchange.com/questions/tagged/professionalism "show questions tagged 'professionalism'") tag and didn't find a question there that needed the tag and was on-topic here: many of the questions were general workplace professionalism questions, which are off-topic here (probably going to be okay on [Professional Matters](http://area51.stackexchange.com/proposals/30887/professional-matters?referrer=TXS6wD4QBeaJS6wd42Y-3w2)). Some had professionalism seemingly as an afterthought. Others mention professionalism, but are really about something else. I went ahead and retagged most of the questions that were on-topic, and [professionalism](https://softwareengineering.stackexchange.com/questions/tagged/professionalism "show questions tagged 'professionalism'") can now be burninated. ### Professional There's also [professional](https://softwareengineering.stackexchange.com/questions/tagged/professional "show questions tagged 'professional'"): there was only one question there that was actually on-topic and not a "please tell me what I should do" advice question. This should be burninated from all the questions that have it.
--- We should be able to get SE to just drop the [profession](https://softwareengineering.stackexchange.com/questions/tagged/profession "show questions tagged 'profession'"), [professional](https://softwareengineering.stackexchange.com/questions/tagged/professional "show questions tagged 'professional'"), and [professionalism](https://softwareengineering.stackexchange.com/questions/tagged/professionalism "show questions tagged 'professionalism'") tags so we don't wind up retagging and bumping a few dozen off-topic questions for no reason.
"Profession" seems like a meta-tag. After all, all questions here are ostensibly related to the software development profession. I haven't looked closely at the professionalism tag yet, but I think that it makes no sense to keep "profession" regardless.
2,467
Right now, we have [professionalism](https://softwareengineering.stackexchange.com/questions/tagged/professionalism "show questions tagged 'professionalism'") and [profession](https://softwareengineering.stackexchange.com/questions/tagged/profession "show questions tagged 'profession'"), but I'm not sure that they should be two separate tags. Between them, there are only 32 questions (18 in professionalism and 14 in profession). I, personally, prefer professionalism - it includes a variety of topics ranging from the development software engineering as a [profession](http://en.wikipedia.org/wiki/Profession) to ethics and can be used to denote things that are of interest exclusively to professionals in a work environment. I think that [professionalism](https://softwareengineering.stackexchange.com/questions/tagged/professionalism "show questions tagged 'professionalism'"), by nature, includes everything in [profession](https://softwareengineering.stackexchange.com/questions/tagged/profession "show questions tagged 'profession'"). But thoughts?
2011/10/25
[ "https://softwareengineering.meta.stackexchange.com/questions/2467", "https://softwareengineering.meta.stackexchange.com", "https://softwareengineering.meta.stackexchange.com/users/4/" ]
### Profession [profession](https://softwareengineering.stackexchange.com/questions/tagged/profession "show questions tagged 'profession'") is a meta tag and just needs to be burninated from all the questions that have it. It has no semantic value, and it's not the same thing as [professionalism](https://softwareengineering.stackexchange.com/questions/tagged/professionalism "show questions tagged 'professionalism'"): it should not be merged into it. ### Professionalism I looked at the [professionalism](https://softwareengineering.stackexchange.com/questions/tagged/professionalism "show questions tagged 'professionalism'") tag and didn't find a question there that needed the tag and was on-topic here: many of the questions were general workplace professionalism questions, which are off-topic here (probably going to be okay on [Professional Matters](http://area51.stackexchange.com/proposals/30887/professional-matters?referrer=TXS6wD4QBeaJS6wd42Y-3w2)). Some had professionalism seemingly as an afterthought. Others mention professionalism, but are really about something else. I went ahead and retagged most of the questions that were on-topic, and [professionalism](https://softwareengineering.stackexchange.com/questions/tagged/professionalism "show questions tagged 'professionalism'") can now be burninated. ### Professional There's also [professional](https://softwareengineering.stackexchange.com/questions/tagged/professional "show questions tagged 'professional'"): there was only one question there that was actually on-topic and not a "please tell me what I should do" advice question. This should be burninated from all the questions that have it.
--- We should be able to get SE to just drop the [profession](https://softwareengineering.stackexchange.com/questions/tagged/profession "show questions tagged 'profession'"), [professional](https://softwareengineering.stackexchange.com/questions/tagged/professional "show questions tagged 'professional'"), and [professionalism](https://softwareengineering.stackexchange.com/questions/tagged/professionalism "show questions tagged 'professionalism'") tags so we don't wind up retagging and bumping a few dozen off-topic questions for no reason.
If profession is indeed a Meta tag as Anna suggests, I would recommend merging profession->professionalism and then manually reviewing the 30 questions, or reviewing the 14 profession questions and then merging. Either way, it's a small enough number for a manual cleanup followed by a merge or a merge followed by a manual cleanup.
300,287
I have been casually playing minecraft, and I recently went from 1.8.9 to 1.11.2. For a couple days, everything was okay just like normal, but when I come on to play today, Minecraft is being weird. It can fullscreen, but it shows the taskbar of my computer. Because I am OCD, it drives me nuts. Any ideas to why this happened/how to fix it? EDIT: Screenshot: [![enter image description here](https://i.stack.imgur.com/ec604.png)](https://i.stack.imgur.com/ec604.png)
2017/02/11
[ "https://gaming.stackexchange.com/questions/300287", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/179249/" ]
I did a little digging, and it seems it might not be JUST a Minecraft issue. It would seem, after reading some of these posts, that there are a number of applications that will let the taskbar appear on top of them. This is what I found in one post: **"Right click on the task bar. Choose Properties, check box for "Hide toolbar" Whenever you move the mouse to the bottom of the screen the taskbar (toolbar) will re-appear"** Also, here is a forum thread where a lot of people solved their problem with several different solutions: <http://www.avsforum.com/forum/26-home-theater-computers/1314371-windows-7-taskbar-visible-during-full-screen-videos.html>
Step 1: open Task Manager (Ctrl+Shift+Esc, or Ctrl+Alt+Delete and then click Task Manager). Step 2: click "More details". Step 3: scroll down to Windows Explorer (it should have a File Explorer icon). Step 4: right-click it, then click Restart. See [this](http://blog.drivethelife.com/how-to/fix-windows-10-taskbar-not-hiding-full-screen.html) post.
300,287
I have been casually playing minecraft, and I recently went from 1.8.9 to 1.11.2. For a couple days, everything was okay just like normal, but when I come on to play today, Minecraft is being weird. It can fullscreen, but it shows the taskbar of my computer. Because I am OCD, it drives me nuts. Any ideas to why this happened/how to fix it? EDIT: Screenshot: [![enter image description here](https://i.stack.imgur.com/ec604.png)](https://i.stack.imgur.com/ec604.png)
2017/02/11
[ "https://gaming.stackexchange.com/questions/300287", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/179249/" ]
I did a little digging, and it seems it might not be JUST a Minecraft issue. It would seem, after reading some of these posts, that there are a number of applications that will let the taskbar appear on top of them. This is what I found in one post: **"Right click on the task bar. Choose Properties, check box for "Hide toolbar" Whenever you move the mouse to the bottom of the screen the taskbar (toolbar) will re-appear"** Also, here is a forum thread where a lot of people solved their problem with several different solutions: <http://www.avsforum.com/forum/26-home-theater-computers/1314371-windows-7-taskbar-visible-during-full-screen-videos.html>
Try F11. It seems simple, but a random click worked for me.
300,287
I have been casually playing minecraft, and I recently went from 1.8.9 to 1.11.2. For a couple days, everything was okay just like normal, but when I come on to play today, Minecraft is being weird. It can fullscreen, but it shows the taskbar of my computer. Because I am OCD, it drives me nuts. Any ideas to why this happened/how to fix it? EDIT: Screenshot: [![enter image description here](https://i.stack.imgur.com/ec604.png)](https://i.stack.imgur.com/ec604.png)
2017/02/11
[ "https://gaming.stackexchange.com/questions/300287", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/179249/" ]
Step 1: open Task Manager (Ctrl+Shift+Esc, or Ctrl+Alt+Delete and then click Task Manager). Step 2: click "More details". Step 3: scroll down to Windows Explorer (it should have a File Explorer icon). Step 4: right-click it, then click Restart. See [this](http://blog.drivethelife.com/how-to/fix-windows-10-taskbar-not-hiding-full-screen.html) post.
Try F11. It seems simple, but a random click worked for me.
5,118,658
Is it possible to generate dalvik bytecode at runtime using a language such as python, perl or something similar? Thanks
2011/02/25
[ "https://Stackoverflow.com/questions/5118658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/577909/" ]
I think the best way is to create a smali-compatible file and assemble it with smali: <http://code.google.com/p/smali/>
Take a look at [android-scripting](http://code.google.com/p/android-scripting/), a toolkit for running Python, Perl, JRuby, Lua, BeanShell, JavaScript, Tcl, and shell.
43,993
I'm relatively new to my Nikon D5100, and I have to photograph moths on a sheet for my job. I've begun to notice that my camera doesn't seem to let me take more than a few pictures in a row anymore, without my having to turn it off and on again (and sometimes I have to do this several times). Is this a common problem? Do any other D5100 users have suggestions for a solution?
2013/11/07
[ "https://photo.stackexchange.com/questions/43993", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/22632/" ]
I would do two things: 1. Buy a high-performance memory card. Something that's specifically rated for 45MB/s (megaBYTES, not megaBITS) or faster should make a huge difference if the problem is that you're filling up the buffer. 2. Buy an external hotshoe flash. If you can, borrow one first to test whether it makes a difference. Some cameras **do** prevent you from taking subsequent shots while the popup flash is still recharging, especially consumer-grade cameras. For testing purposes, you might also try removing some additional barriers to isolate the culprit: 1. Set the lens to manual focus 2. Turn the flash off, or use it in manual mode at its lowest power setting 3. Use manual exposure and see if the shutter fires, regardless of the resulting picture
I have experienced similar problems with my Nikon D7000 when I first bought it. However, the problem with my camera was that I was taking pictures in High Burst mode and the camera didn't have enough time to record the images to the SD card. In the end I never had to switch it off; giving it some more time was enough. So my questions are: * are you shooting in High Burst mode? * are you shooting in RAW? * what kind of SD card are you using (what is its data transfer rate)? My suggestions are: * shoot in JPEG and see if the problem persists * try another SD card * switch to Slow Burst or deactivate Burst mode entirely If the problem persists, I think the camera is faulty and you should have it serviced. Hope that helps. PS. Sometimes when the camera struggles with autofocus it won't let you take pictures. You mentioned shooting moths, so I presume you are using a macro lens. This is sometimes the case with autofocus in macro photography.
43,993
I'm relatively new to my Nikon D5100, and I have to photograph moths on a sheet for my job. I've begun to notice that my camera doesn't seem to let me take more than a few pictures in a row anymore, without my having to turn it off and on again (and sometimes I have to do this several times). Is this a common problem? Do any other D5100 users have suggestions for a solution?
2013/11/07
[ "https://photo.stackexchange.com/questions/43993", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/22632/" ]
I have experienced similar problems with my Nikon D7000 when I first bought it. However, the problem with my camera was that I was taking pictures in High Burst mode and the camera didn't have enough time to record the images to the SD card. In the end I never had to switch it off; giving it some more time was enough. So my questions are: * are you shooting in High Burst mode? * are you shooting in RAW? * what kind of SD card are you using (what is its data transfer rate)? My suggestions are: * shoot in JPEG and see if the problem persists * try another SD card * switch to Slow Burst or deactivate Burst mode entirely If the problem persists, I think the camera is faulty and you should have it serviced. Hope that helps. PS. Sometimes when the camera struggles with autofocus it won't let you take pictures. You mentioned shooting moths, so I presume you are using a macro lens. This is sometimes the case with autofocus in macro photography.
I use a D5600 and I take 100 - 200 pictures every day, almost 365 days of the year. I'm usually in manual and using flash. In 60 °F weather or warmer, I can run off about 15 pics before the camera stops, and it takes time for the flash to recover. If I switch to auto the camera will continue taking pics IF the flash is not required. I always have fresh batteries, so I suspect it may be a temperature issue. When the weather is cold I rarely have the problem. Nikon told me to get better memory but I found no difference in operation. I've been through three D5600s and they all did the same thing. So the external hotshoe answer may be the right one. I haven't spent the cash and tried it yet.
43,993
I'm relatively new to my Nikon D5100, and I have to photograph moths on a sheet for my job. I've begun to notice that my camera doesn't seem to let me take more than a few pictures in a row anymore, without my having to turn it off and on again (and sometimes I have to do this several times). Is this a common problem? Do any other D5100 users have suggestions for a solution?
2013/11/07
[ "https://photo.stackexchange.com/questions/43993", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/22632/" ]
I would do two things: 1. Buy a high-performance memory card. Something that's specifically rated for 45MB/s (megaBYTES, not megaBITS) or faster should make a huge difference if the problem is that you're filling up the buffer. 2. Buy an external hotshoe flash. If you can, borrow one first to test whether it makes a difference. Some cameras **do** prevent you from taking subsequent shots while the popup flash is still recharging, especially consumer-grade cameras. For testing purposes, you might also try removing some additional barriers to isolate the culprit: 1. Set the lens to manual focus 2. Turn the flash off, or use it in manual mode at its lowest power setting 3. Use manual exposure and see if the shutter fires, regardless of the resulting picture
I use a D5600 and I take 100 - 200 pictures every day, almost 365 days of the year. I'm usually in manual and using flash. In 60 °F weather or warmer, I can run off about 15 pics before the camera stops, and it takes time for the flash to recover. If I switch to auto the camera will continue taking pics IF the flash is not required. I always have fresh batteries, so I suspect it may be a temperature issue. When the weather is cold I rarely have the problem. Nikon told me to get better memory but I found no difference in operation. I've been through three D5600s and they all did the same thing. So the external hotshoe answer may be the right one. I haven't spent the cash and tried it yet.
111,627
I deleted work I was doing and I didn't add it to SVN, I have a performance review tomorrow and I already mentioned that I am stressed, thus slow progress. Now with this problem, should I tell my manager? I can do it, but if I cannot, do I have to report it in the daily scrum? How do I explain to the manager what happened?
2018/05/03
[ "https://workplace.stackexchange.com/questions/111627", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/-1/" ]
This is a very unfortunate situation but since you can't change it anymore, make it a valuable experience. > > Do I have to report in daily scrum? > > > **Yes!** If you are scared of telling this to your team, start with your scrum master or product owner. Transparency and honesty are among the key elements of Scrum. You are responsible to the team and the team is responsible to you. Discuss the issue with your team; they need to re-evaluate the schedule and story points and can help you clean up the mess. > > Do I have to tell my manager? > > > Never lie to your manager. If she/he asks, give the honest answer. **But** make sure you have a follow-up answer ready. How are you (and your team) going to fix this? How will a similar problem be avoided in the future?
It just happened to me recently, on something that was needed by a deadline. It was a new project, so I should have added it to Git but didn't. Thankfully it was a Maven project with a distribution folder and the Java WAR was there, so I de-compiled it and remade the project. I did tell the PM, but also told him I had solved it. Two days of productivity can (sometimes) be replicated in less time, so don't worry that much, as you have already done it once. (Edit) What I failed to mention before: come in clean. If you have deleted something and there is a possibility that the work will be delayed, then it's better to come clean and notify your manager. I would suggest first trying to find out how long it will take to replicate your work, so you have both a problem (lost code/rework needed) and a possible solution.
111,627
I deleted work I was doing and I didn't add it to SVN, I have a performance review tomorrow and I already mentioned that I am stressed, thus slow progress. Now with this problem, should I tell my manager? I can do it, but if I cannot, do I have to report it in the daily scrum? How do I explain to the manager what happened?
2018/05/03
[ "https://workplace.stackexchange.com/questions/111627", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/-1/" ]
This is a very unfortunate situation but since you can't change it anymore, make it a valuable experience. > > Do I have to report in daily scrum? > > > **Yes!** If you are scared of telling this to your team, start with your scrum master or product owner. Transparency and honesty are among the key elements of Scrum. You are responsible to the team and the team is responsible to you. Discuss the issue with your team; they need to re-evaluate the schedule and story points and can help you clean up the mess. > > Do I have to tell my manager? > > > Never lie to your manager. If she/he asks, give the honest answer. **But** make sure you have a follow-up answer ready. How are you (and your team) going to fix this? How will a similar problem be avoided in the future?
I find it usually best to own up to your mistakes, both to your team in the daily standup meeting and to your manager. The performance review comes at a bit of a bad time, but if this is the first time you have made a mistake like this, I don't think it will be a huge problem. I would do my best (and communicate) to make up for the lost work ASAP, since it is your fault, and make a habit of committing to SVN *regularly*.
111,627
I deleted work I was doing and I didn't add it to SVN, I have a performance review tomorrow and I already mentioned that I am stressed, thus slow progress. Now with this problem, should I tell my manager? I can do it, but if I cannot, do I have to report it in the daily scrum? How do I explain to the manager what happened?
2018/05/03
[ "https://workplace.stackexchange.com/questions/111627", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/-1/" ]
I find it usually best to own up to your mistakes, both to your team in the daily standup meeting and to your manager. The performance review is somewhat badly timed now, but if this is the first time you have made a mistake like this, I don't think it will be a huge problem. I would do my best, though (and communicate), to make up for the lost work ASAP, since it is your fault, and make a habit of saving to SVN *regularly*.
It's time you learn one of the most powerful tools any employee can ever use. **Asking your co-workers for help**. I understand the fear - you made a big mistake, and you're worried about the repercussions. And you don't know if there's any way to recover from this mistake. Which is why you absolutely *need* to report this issue as soon as possible - and as thoroughly as possible - to your supervisor, and ask for assistance. Don't just say "I made a mistake and now the work is gone, help!" because that looks awful. Instead, lay out exactly what you did, and don't make excuses. Explain: * What you were doing (whatever work you were attempting to do) * What you did just before you lost the work (this is *very* important) * The steps you took to *try* to recover it (You must state these so that you can show you made a good faith effort to resolve this yourself) * A request for help (If you know someone who is 'good' with this sort of thing, you could ask for their help by name) By explaining what happened and how, you're also helping to solve the issue by giving that information to whoever is going to help you solve it. And by explaining your steps to fix it yourself, you're showing that you made an honest attempt to correct it. Most importantly, by admitting you made a mistake, you show that you are mature enough to look past embarrassment about mistakes and seek out the answer to the problem. That's not just good work ethic - that's the essence of maturity.
111,627
I deleted work I was doing, and I didn't add it to SVN. I have a performance review tomorrow, and I have already mentioned that I am stressed, hence the slow progress. Now, with this problem, should I tell my manager? I can do it, but if I cannot, do I have to report it in the daily scrum? How do I explain to the manager what happened?
2018/05/03
[ "https://workplace.stackexchange.com/questions/111627", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/-1/" ]
I find it usually best to own up to your mistakes, both to your team in the daily standup meeting and to your manager. The performance review is somewhat badly timed now, but if this is the first time you have made a mistake like this, I don't think it will be a huge problem. I would do my best, though (and communicate), to make up for the lost work ASAP, since it is your fault, and make a habit of saving to SVN *regularly*.
I would fess up - **"I deleted 2 days of work, but I think I can recover it in 1 day".** I think you should be able to recover it in one day; we have all deleted work by mistake (especially when under pressure), and it usually takes less time to type it out again than the first time, as less thinking is involved. There may be some automatic backup that you are unaware of that enables your manager to recover your work (unlikely with SVN, but you never know). Your performance review is based on (I am assuming) a minimum of 6 months' work, so I really wouldn't worry about it. Also, performance reviews are (unfortunately) only vaguely related to pay raises!
111,627
I deleted work I was doing, and I didn't add it to SVN. I have a performance review tomorrow, and I have already mentioned that I am stressed, hence the slow progress. Now, with this problem, should I tell my manager? I can do it, but if I cannot, do I have to report it in the daily scrum? How do I explain to the manager what happened?
2018/05/03
[ "https://workplace.stackexchange.com/questions/111627", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/-1/" ]
I would fess up - **"I deleted 2 days of work, but I think I can recover it in 1 day".** I think you should be able to recover it in one day; we have all deleted work by mistake (especially when under pressure), and it usually takes less time to type it out again than the first time, as less thinking is involved. There may be some automatic backup that you are unaware of that enables your manager to recover your work (unlikely with SVN, but you never know). Your performance review is based on (I am assuming) a minimum of 6 months' work, so I really wouldn't worry about it. Also, performance reviews are (unfortunately) only vaguely related to pay raises!
**It's just another problem to solve; show that you are solving it** > > should I tell my manager? How do I explain to the manager [or anyone] what happened? > > > Depends on the manager. My manager wouldn't want to hear about details like that. If your manager does want to hear about things like this, you should tell him. Keep it technical-- you didn't "screw up," you had a "code management issue" that you are resolving, with process enhancements to avoid the issue in the future. The issue was accidental deletion. It happens. You don't have to make it a big deal, and he probably won't. Just make sure you have a concrete and credible plan to avoid it in the future. > > I can do it, but if I cannot, do I have to report it in the daily scrum? > > > In your daily scrum, you have to report what you did yesterday, what you are working on, and if you have any impediments. Yesterday you were coding the feature. Today you will continue to work on it. You don't have an impediment-- you can move forward without outside help. You aren't really obligated to say *why* it's taking you so long, unless someone asks. When I lead a team, I like to give the developers a bit of privacy in this respect, since sh\*t does happen, and as long as you hit your targets and don't need help or coaching with a chronic problem, the rest of the team doesn't need to know the details. Your lead may be different, so follow his lead.
111,627
I deleted work I was doing, and I didn't add it to SVN. I have a performance review tomorrow, and I have already mentioned that I am stressed, hence the slow progress. Now, with this problem, should I tell my manager? I can do it, but if I cannot, do I have to report it in the daily scrum? How do I explain to the manager what happened?
2018/05/03
[ "https://workplace.stackexchange.com/questions/111627", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/-1/" ]
This is a very unfortunate situation, but since you can't change it anymore, make it a valuable experience. > > Do I have to report in daily scrum? > > > **Yes!** If you are scared of telling this to your team, start with your scrum master or product owner. Transparency and honesty are key elements of scrum. You are responsible for the team and the team is responsible for you. Discuss the issue with your team; they need to re-evaluate the schedule and story points, and they can help you clean up the mess. > > Do I have to tell my manager? > > > Never lie to your manager. If she/he asks, give the honest answer. **But** make sure you have a follow-up answer ready. How are you (and your team) going to fix this? How will a similar problem be avoided in the future?
I would fess up - **"I deleted 2 days of work, but I think I can recover it in 1 day".** I think you should be able to recover it in one day, we have all deleted work by mistake (especially when under pressure) and it usually does take less time to type it out again than the first time, as less thinking is involved. There may be some automatic backup that you are unaware of that enables your manager to recover your work (unlikely with git but you never know.) Your performance review is based on (I am assuming) minimum 6 months work so I really wouldn't worry about it. Also, performance reviews are (unfortunately) only vaguely related to pay raises!
111,627
I deleted work I was doing, and I didn't add it to SVN. I have a performance review tomorrow, and I have already mentioned that I am stressed, hence the slow progress. Now, with this problem, should I tell my manager? I can do it, but if I cannot, do I have to report it in the daily scrum? How do I explain to the manager what happened?
2018/05/03
[ "https://workplace.stackexchange.com/questions/111627", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/-1/" ]
This is a very unfortunate situation, but since you can't change it anymore, make it a valuable experience. > > Do I have to report in daily scrum? > > > **Yes!** If you are scared of telling this to your team, start with your scrum master or product owner. Transparency and honesty are key elements of scrum. You are responsible for the team and the team is responsible for you. Discuss the issue with your team; they need to re-evaluate the schedule and story points, and they can help you clean up the mess. > > Do I have to tell my manager? > > > Never lie to your manager. If she/he asks, give the honest answer. **But** make sure you have a follow-up answer ready. How are you (and your team) going to fix this? How will a similar problem be avoided in the future?
Once the problem is solved: **it looks like a good time to review your backup plan on a larger scale.** A wise man once told me: > > "In a good infrastructure, you have to do the elevator test." > > > (Said with the sound of an elevator braking from a 12-floor fall.) In most countries, regulation of elevators is strict. An elevator's safety is basically built in layers: detector > electrical safety > manual safety. Testing an elevator is really simple: you test the safety features one by one, trying to make the car touch the ground (the spring at the bottom has to be tested too), with the following tests. Free fall at full speed, fully loaded -> the detector triggers. Free fall at full speed, fully loaded, with the detector disabled -> the electrical safety triggers. Free fall at full speed, fully loaded, with the electrical safety disabled -> the manual safety triggers. Free fall at full speed, fully loaded, with no safety at all -> it's cranked. The same with overspeed -> into the spring. How many of us have ever tested a backup after a full wipe, just to be sure the backup process is OK? When backing up work, there are a few layers you can follow: back up the project with SVN etc., back up the working computer with DPM etc., and keep backups off-site. I have seen companies survive floods, fires, and malice; able to save someone's work even if he had a heart attack; able to recover from a dev going on a rampage through TFS/SVN. It's not about mistakes: you should simply not be able to lose work, even if you wanted to. At the end of the day, committed or not, the machine's state must be safe.
111,627
I deleted work I was doing, and I didn't add it to SVN. I have a performance review tomorrow, and I have already mentioned that I am stressed, hence the slow progress. Now, with this problem, should I tell my manager? I can do it, but if I cannot, do I have to report it in the daily scrum? How do I explain to the manager what happened?
2018/05/03
[ "https://workplace.stackexchange.com/questions/111627", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/-1/" ]
It's time you learn one of the most powerful tools any employee can ever use. **Asking your co-workers for help**. I understand the fear - you made a big mistake, and you're worried about the repercussions. And you don't know if there's any way to recover from this mistake. Which is why you absolutely *need* to report this issue as soon as possible - and as thoroughly as possible - to your supervisor, and ask for assistance. Don't just say "I made a mistake and now the work is gone, help!" because that looks awful. Instead, lay out exactly what you did, and don't make excuses. Explain: * What you were doing (whatever work you were attempting to do) * What you did just before you lost the work (this is *very* important) * The steps you took to *try* to recover it (You must state these so that you can show you made a good faith effort to resolve this yourself) * A request for help (If you know someone who is 'good' with this sort of thing, you could ask for their help by name) By explaining what happened and how, you're also helping to solve the issue by giving that information to whoever is going to help you solve it. And by explaining your steps to fix it yourself, you're showing that you made an honest attempt to correct it. Most importantly, by admitting you made a mistake, you show that you are mature enough to look past embarrassment about mistakes and seek out the answer to the problem. That's not just good work ethic - that's the essence of maturity.
I would fess up - **"I deleted 2 days of work, but I think I can recover it in 1 day".** I think you should be able to recover it in one day; we have all deleted work by mistake (especially when under pressure), and it usually takes less time to type it out again than the first time, as less thinking is involved. There may be some automatic backup that you are unaware of that enables your manager to recover your work (unlikely with SVN, but you never know). Your performance review is based on (I am assuming) a minimum of 6 months' work, so I really wouldn't worry about it. Also, performance reviews are (unfortunately) only vaguely related to pay raises!
111,627
I deleted work I was doing, and I didn't add it to SVN. I have a performance review tomorrow, and I have already mentioned that I am stressed, hence the slow progress. Now, with this problem, should I tell my manager? I can do it, but if I cannot, do I have to report it in the daily scrum? How do I explain to the manager what happened?
2018/05/03
[ "https://workplace.stackexchange.com/questions/111627", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/-1/" ]
This is a very unfortunate situation, but since you can't change it anymore, make it a valuable experience. > > Do I have to report in daily scrum? > > > **Yes!** If you are scared of telling this to your team, start with your scrum master or product owner. Transparency and honesty are key elements of scrum. You are responsible for the team and the team is responsible for you. Discuss the issue with your team; they need to re-evaluate the schedule and story points, and they can help you clean up the mess. > > Do I have to tell my manager? > > > Never lie to your manager. If she/he asks, give the honest answer. **But** make sure you have a follow-up answer ready. How are you (and your team) going to fix this? How will a similar problem be avoided in the future?
Programming is a lot about planning and understanding code and the underlying specs. Even if you spent 2 days writing it, you should be able to complete it again in a lot less time. Own up to it, and make sure to explain that you will need a lot less time and that you're willing to do overtime to compensate, if necessary.
111,627
I deleted work I was doing, and I didn't add it to SVN. I have a performance review tomorrow, and I have already mentioned that I am stressed, hence the slow progress. Now, with this problem, should I tell my manager? I can do it, but if I cannot, do I have to report it in the daily scrum? How do I explain to the manager what happened?
2018/05/03
[ "https://workplace.stackexchange.com/questions/111627", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/-1/" ]
It's time you learn one of the most powerful tools any employee can ever use. **Asking your co-workers for help**. I understand the fear - you made a big mistake, and you're worried about the repercussions. And you don't know if there's any way to recover from this mistake. Which is why you absolutely *need* to report this issue as soon as possible - and as thoroughly as possible - to your supervisor, and ask for assistance. Don't just say "I made a mistake and now the work is gone, help!" because that looks awful. Instead, lay out exactly what you did, and don't make excuses. Explain: * What you were doing (whatever work you were attempting to do) * What you did just before you lost the work (this is *very* important) * The steps you took to *try* to recover it (You must state these so that you can show you made a good faith effort to resolve this yourself) * A request for help (If you know someone who is 'good' with this sort of thing, you could ask for their help by name) By explaining what happened and how, you're also helping to solve the issue by giving that information to whoever is going to help you solve it. And by explaining your steps to fix it yourself, you're showing that you made an honest attempt to correct it. Most importantly, by admitting you made a mistake, you show that you are mature enough to look past embarrassment about mistakes and seek out the answer to the problem. That's not just good work ethic - that's the essence of maturity.
Once the problem is solved: **it looks like a good time to review your backup plan on a larger scale.** A wise man once told me: > > "In a good infrastructure, you have to do the elevator test." > > > (Said with the sound of an elevator braking from a 12-floor fall.) In most countries, regulation of elevators is strict. An elevator's safety is basically built in layers: detector > electrical safety > manual safety. Testing an elevator is really simple: you test the safety features one by one, trying to make the car touch the ground (the spring at the bottom has to be tested too), with the following tests. Free fall at full speed, fully loaded -> the detector triggers. Free fall at full speed, fully loaded, with the detector disabled -> the electrical safety triggers. Free fall at full speed, fully loaded, with the electrical safety disabled -> the manual safety triggers. Free fall at full speed, fully loaded, with no safety at all -> it's cranked. The same with overspeed -> into the spring. How many of us have ever tested a backup after a full wipe, just to be sure the backup process is OK? When backing up work, there are a few layers you can follow: back up the project with SVN etc., back up the working computer with DPM etc., and keep backups off-site. I have seen companies survive floods, fires, and malice; able to save someone's work even if he had a heart attack; able to recover from a dev going on a rampage through TFS/SVN. It's not about mistakes: you should simply not be able to lose work, even if you wanted to. At the end of the day, committed or not, the machine's state must be safe.
601,972
I am curious as to why the position at which I apply a force on an object matters. What is happening at the molecular level such that force is only transmitted along the axis where it is applied? Does force not spread radially to other areas of the same object? This question is asked somewhere on here, but I was not really satisfied with the answer. Please, I am not looking for an explanation that involves derivation of formulas. I am looking for a more intuitive, non-mathematical reason for this phenomenon. Thank you.
2020/12/21
[ "https://physics.stackexchange.com/questions/601972", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/135978/" ]
When you touch an object, the charged particles (electrons and protons) in your hand exert an electrostatic repulsive force on the charged particles in the object. This force is predominantly exerted on the atoms and molecules on the surface of the object, causing them to be displaced slightly from their previous positions. This displacement deforms the chemical bonds within the solid, causing the bonds to exert a force on the surface molecules' neighbors. These neighbors are displaced, which causes them to exert a force on *their* neighbors, and this continues on and on. This is the mechanism by which force is "transmitted" through solid objects (though that term is not really an accurate description of what's happening).
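The neighbor-to-neighbor mechanism described above can be sketched numerically. The following toy model (my illustration, not part of the original answer; all parameter values are made up) treats the solid as a 1-D chain of unit masses joined by springs: displacing the first "surface" particle exerts a force on its neighbor, which pushes its neighbor, and so on, until the disturbance reaches the far end.

```python
# Toy sketch: 1-D chain of unit masses joined by springs of stiffness k.
# Only the first particle is displaced; the springs then transmit the
# disturbance down the chain, neighbor by neighbor.
k, dt, n, steps = 1.0, 0.05, 5, 200
x = [0.0] * n          # displacements from rest positions
v = [0.0] * n          # velocities
x[0] = 1.0             # push the "surface" particle only
far_end_max = 0.0      # largest displacement seen at the far end
for _ in range(steps):
    a = []
    for i in range(n):
        f = 0.0
        if i > 0:
            f -= k * (x[i] - x[i - 1])   # spring to the left neighbor
        if i < n - 1:
            f -= k * (x[i] - x[i + 1])   # spring to the right neighbor
        a.append(f)                       # unit mass, so a = f
    v = [vi + ai * dt for vi, ai in zip(v, a)]  # semi-implicit Euler step
    x = [xi + vi * dt for xi, vi in zip(x, v)]
    far_end_max = max(far_end_max, abs(x[-1]))
# far_end_max ends up well above zero: the push "arrived" at the far end
# even though we never touched any particle but the first.
```

Nothing is literally "transmitted" here other than each spring acting on its two endpoints, which is the point the answer makes.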
In physics a free vector is defined as having a direction and a magnitude; location does not matter. See Symon, Mechanics. This definition is necessary to consider the transformation of the components of a vector between different coordinate systems, one coordinate system in translation and rotation with respect to the other. Of course the location of a vector does determine its effect; for example, the force vector has to be localized at a particle to act on the particle. Some developments (mostly mechanical engineering texts) go to great lengths to discuss bound (or localized) and sliding vectors. I prefer how Davis approaches this in his book Introduction to Vector Analysis, where he explains that physicists regard a vector as a free vector, recognizing nevertheless that the effect of the vector may depend on where it is applied. Regarding torque as a vector, the torque vector explicitly depends on the origin about which the torque is calculated; using a different origin produces a different torque. Torque addresses the rotation with respect to the origin, produced by a force acting at a specific point, either at a single particle, at a specific position on a rigid body, or at a specific particle in a system of particles. A force effectively acting at the center of mass (such as gravity) does produce a torque if the origin is taken as some point other than the center of mass where the cross product of distance and force is not zero; if the center of mass is taken as the origin, this force produces no torque. Just as the external forces depend on the definition of our system, the torque depends on the origin used. For a body not constrained to have a specific point fixed, the torque is typically taken about the center of mass, as that simplifies the evaluation. Conversely, if a point is fixed, that point is taken as the origin for evaluating torque. See any basic physics text such as the one by Halliday and Resnick.
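The origin-dependence of torque mentioned above is easy to make concrete. This small sketch (my illustration, with made-up numbers) computes the torque of the same force, applied at the same point, about three different origins:

```python
# Sketch: torque tau = r x F depends on the origin used to compute r.
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

F = (0.0, 1.0, 0.0)        # the force, acting at point p
p = (2.0, 0.0, 0.0)

for origin in [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]:
    r = tuple(pi - oi for pi, oi in zip(p, origin))  # lever arm from origin
    print(origin, cross(r, F))   # torques: (0,0,2), (0,0,1), (0,0,0)
```

The same force gives torque magnitude 2, 1, or 0 depending only on which origin is chosen, exactly as the answer states.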
The mechanics relationships using vectors for force, torque, etc. were developed and shown to be consistent with Newton's laws of motion, which in turn agree with experimental observations. Actually, vectors were first used for electrodynamics and later applied to classical mechanics. The concept is that a force acts at a point and affects the motion depending on its magnitude and direction, regardless of the specific microscopic characteristics of the force, whether it be electrodynamic or gravity (the strong and weak nuclear forces are not addressed in classical mechanics and electrodynamics). Other constructs besides vectors, such as quaternions, can also be used to evaluate the motion. Bottom line: vectors are a mathematical construct that explains the dynamics of motion, regardless of the specific microscopic phenomena involved.
545,191
How can I boot a VM off USB in VMWare Workstation 9? I don't see the USB I have attached in the bios boot order. It's clearly attaching before boot, as this shows up before the VM even starts running: ![enter image description here](https://i.stack.imgur.com/PhDxN.png) ![enter image description here](https://i.stack.imgur.com/roFn0.png)
2013/02/01
[ "https://superuser.com/questions/545191", "https://superuser.com", "https://superuser.com/users/146694/" ]
There is a workaround: in your virtual machine settings, add a new "hard disk", and when prompted to select the type, choose "physical disk", then point it to the USB stick. This way, the USB stick will appear as a real drive to the virtual machine.
You can use "Plop Boot Manager". See its features [here](http://www.plop.at/en/bootmanager/features.html "Plop Boot Manager Website!"). > > * USB boot without BIOS support (UHCI, OHCI and EHCI) > * CD/DVD boot without BIOS support (IDE) > > > ![](https://i.stack.imgur.com/kD9tY.jpg) > > >
2,143
This is part of the second step into an attempted reboot of this community: [Rebooting Cognitive Sciences: a Suggested Approach](https://cogsci.meta.stackexchange.com/q/2102/21) --- This site is now several years in beta, and it is about time we revisit and more concretely define which scientific disciplines are in scope, and how this should be reflected in the site name. The discussion on these issues is summarized in [the recent community review](https://cogsci.meta.stackexchange.com/q/2097/21): > > [Scope:](https://cogsci.meta.stackexchange.com/a/2099/21) Which fields of study can ask questions here? How to deal with overlap with other sites? > --------------------------------------------------------------------------------------------------------------------------------------------------- > > > * We decided to welcome any cognitive science, [in line with the definition of > Wikipedia](https://en.wikipedia.org/wiki/Cognitive_science). This > includes any field which tackles the mind or its processes (behavior), > [including animals](https://cogsci.meta.stackexchange.com/a/28/21). > E.g., [Human-Computer > Interaction](https://cogsci.meta.stackexchange.com/a/2/21), > [Neurobiology](https://cogsci.meta.stackexchange.com/q/23/21), [Applied > Psychology](https://cogsci.meta.stackexchange.com/q/35/21), [Social > Psychology](https://cogsci.meta.stackexchange.com/q/627/21), > [Sociology](https://cogsci.meta.stackexchange.com/a/255/21), > [Neuroinformatics](https://cogsci.meta.stackexchange.com/a/272/21). > * This implies an overlap with sites like, e.g., Programmers, Biology, UX, Cross Validated, and Skeptics. However, when questions *do not* > pertain to the mind or behavior, they are off topic here: e.g. some > questions about [HCI](https://cogsci.meta.stackexchange.com/a/7/21), > [statistics](https://cogsci.meta.stackexchange.com/q/192/21), [coding > experiments](https://cogsci.meta.stackexchange.com/q/471/21). 
> * However, generally we do want to have a scientific focus (whether we accept laymen or not). E.g. [the Autism > proposal](https://cogsci.meta.stackexchange.com/a/354/21) was not > deemed a good fit for this site. > * Concrete guidelines for overlap with specific sites is desirable: e.g., [Biology](https://cogsci.meta.stackexchange.com/q/119/21), in > [particular neurobiology seems to repeatedly > overlap](https://cogsci.meta.stackexchange.com/q/389/21). > * What to do with [questions about highly specific tools and tool requests](https://cogsci.meta.stackexchange.com/q/293/21) is still > undecided. In particular, [we have many questions on neuroscience > software](https://cogsci.meta.stackexchange.com/q/1034/21). > * There is [some discussion on whether or not psychiatry should remain in scope](https://cogsci.meta.stackexchange.com/q/630/21). > * [Bias-laden questions are off-topic](https://cogsci.meta.stackexchange.com/a/103/11318), e.g., > [improbably human > conditions](https://cogsci.meta.stackexchange.com/q/713/21). > * Our [about (help section) of the site should clearly communicate what is in scope and what is > not](https://cogsci.meta.stackexchange.com/q/514/21). > > > [Site name:](https://cogsci.meta.stackexchange.com/a/2101/21) We have noticed our site name might lead to confusion on what this site is, what new name should we use? > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- > > > * The name "Cognitive Sciences" [can be interpreted to only welcome cognitive science and cognitive > psychology](https://cogsci.meta.stackexchange.com/q/101/21), thus > excluding, e.g., non-cognitive sub-disciplines of psychology. We [have > collected evidence](https://cogsci.meta.stackexchange.com/q/390/21) > showcasing that psychology and neuroscience is often perceived to be > excluded. 
> * There [is a strong majority favoring a name change](https://cogsci.meta.stackexchange.com/q/283/21). > * When choosing a new name, the [url prefix needs to be considered as well](https://cogsci.meta.stackexchange.com/a/701/21). > * Many popular suggestions follow the format "A and B": [Cognitive Science and > Psychology](https://cogsci.meta.stackexchange.com/q/290/21), > [Psychology and > Neuroscience](https://cogsci.meta.stackexchange.com/a/296/21), [Mind > and Brain](https://cogsci.meta.stackexchange.com/q/297/21). > * An attempt at getting an overview of consensus resulted in two favorites ([Psychology and Neuroscience, and Mind and > Brain](https://cogsci.meta.stackexchange.com/q/537/21)), with > [Psychology and > Neuroscience](https://cogsci.meta.stackexchange.com/a/1052/21) as a > clear winner. > * Unfortunately these suggestions [go against the naming standards of SE](https://cogsci.meta.stackexchange.com/a/2077/21), which 'suggest' > avoiding "X and B" like names. > * Regardless, this is a topic which keeps resurfacing. The name keeps causing confusion [even for active researchers in the > fields](https://cogsci.meta.stackexchange.com/q/1038/21). > > > The next step now is to collect *concrete* proposals for a potential different scope and site name, keeping in mind **the issues mentioned above**. A proposal would do well by including the following: 1. An exhaustive list of scientific disciplines we want to target explicitly. 2. A list of scientific disciplines we potentially do not want to consider, including a motivation why. 3. A suggestion on how to handle overlap with specific other sites. 4. A suggestion on how to handle questions only tangentially related to the mind, which do not have a home on other sites ([given the recent events of the closed neuroscience proposal](https://cogsci.meta.stackexchange.com/q/2136/21)). 5. A suggested site name and url prefix, reflecting the earlier described scope, including a motivation.
2016/09/01
[ "https://cogsci.meta.stackexchange.com/questions/2143", "https://cogsci.meta.stackexchange.com", "https://cogsci.meta.stackexchange.com/users/21/" ]
Science ------- Before starting to list all the bullet points, I want to stress the importance of the word **science**. That is the beauty of the current site name; it is immediately clear we want to discuss science. Although I actually enjoy the current name, I will suggest a new one at bullet point 5. So, here goes: The proposal ------------ 1. 1. Neuro-, social-, industrial/organizational-, sports-, developmental-, and behavioral psychology. The **science** behind human factors/HCI, psychiatry, human movement, sociology, and linguistics. 2. Besides theories and facts, conducting scientific research is also incredibly important. Therefore, methodological questions are also on topic. These methods should be somewhat related to behavior or the brain, including behavioral experimental paradigms and [neuroscientific methods](https://cogsci.meta.stackexchange.com/questions/2136/we-should-be-more-welcoming-to-technical-neuroscience-questions). 2. 1. Pseudoscience (let's consider this a science for the moment). The reason is clear. 2. Topics described in bullet point 1 that are not related to science (e.g. usability issues, parenting tips, psychiatric tips or diagnoses). These are generally self-help questions, opinion-based, or clearly belong on another SE site (e.g. UX, Parenting, and Health, respectively). 3. Everything else that is not related to behavior or the brain (e.g. physics, philosophy, English language, worldbuilding, etc.). 3. 1. Biology: Everything that is not related to the brain or behavior should be considered off-topic. This includes (but is not limited to) ecology, genetics, non-animal biology, etc. All things close to the brain, (networks of) neurons, muscles, and sensory systems are on-topic. 2. Cross Validated: There are only a few statistical methods that are specific to the behavioral sciences, and many such questions should thus be migrated, even if they are posed within a behavioral-science context.
We should welcome such questions only if they are relevant to our site and conceptual in nature (i.e. not about mathematics or "how do you calculate"). 3. Stack Overflow: I believe there is little overlap here. If someone posts a line of code in which we must find the incorrect use of a for-loop, then it is definitely off-topic here. If the question is conceptual and about the behavioral sciences (e.g. a FieldTrip/EEGLAB function), then it will be off-topic at Stack Overflow and should be on-topic here. 4. Each new 'tangentially' related topic **should be discussed in a meta post**, as is happening now. We cannot decide beforehand what we would accept or not, and we need the acceptance of the current user base. If we see a new Area 51 proposal or a trend in new questions asked here, we can start the discussion. 5. The new name would become **Psychological Sciences**. I believe it is incredibly important to keep the word science in the name, to clearly show we are about science and not pseudoscience or self-help questions. Psychological, as opposed to Cognitive, should invite more users to come here and will make it easier to find the website. As discussed here as well ([Why is this stack exchange called 'Cognitive Sciences'?](https://cogsci.meta.stackexchange.com/questions/1038/why-is-this-stack-exchange-called-cognitive-sciences?noredirect=1&lq=1)), people are not likely to search for Cognitive. The URL prefix will become **pscience.stackexchange.com**. I think it is a nice play on words and not a cumbersome abbreviation. Alternatively, I would also accept **psych.stackexchange.com**. **I do want to express my fear that using the name Psychological Sciences will likely attract many laymen who ask off-topic, self-help, and opinionated questions. Psychology is (or sounds) more accessible than Cognition, since the term is better known. If we choose Psychological Sciences, we have to stay vigilant and close each of these questions without mercy.**
I suggest we make a Venn diagram of all the related disciplines, choose the one or two biggest circles that cover most of the areas we want, then name the site accordingly. It will be intuitive, and I think it will settle all the debates. If there is any question about the site's scope in the future, we just need to consult the diagram again. For example, this is a discipline diagram for UX: ![http://www.kickerstudio.com/2008/12/the-disciplines-of-user-experience](https://i.stack.imgur.com/WHtSx.jpg)
492,782
Is "things equal to the same thing are equal to each other" geometry or algebra? Thank you in advance. Joseph Whelan, CPA, Bedford, NY
2013/09/13
[ "https://math.stackexchange.com/questions/492782", "https://math.stackexchange.com", "https://math.stackexchange.com/users/94710/" ]
> > *Things which equal the same thing also equal one another.* > > > It's what Euclid (in his book *Elements, Book I*) considered a ["common notion"](http://aleph0.clarku.edu/~djoyce/java/elements/bookI/bookI.html), and the "things" he was referring to were geometric "magnitudes", but it is generally accepted in every branch of mathematics and, fundamentally, in logic. A "common notion" states propositions which cannot be otherwise defined or proven, but are propositions no one (sane) would object to (i.e., things we find so obviously true that they "go without saying", so to speak).
This is an axiom of equality. As Umberto P. suggests, it is closely related to the definition of equality, wherein we say that equality is a predicate which satisfies reflexivity, symmetry and transitivity. If you say algebra is the study of functions and operations and geometry is the study of relations, this might be close to, but not in, geometry. It is far more fundamental than geometry.
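In modern notation, Euclid's common notion is just an instance of the equality axioms mentioned above: it follows from symmetry and transitivity alone.

```latex
% Euclid's first common notion, stated formally:
a = c \;\wedge\; b = c \;\Longrightarrow\; a = b
% Derivation from the standard equality axioms:
%   b = c                          (given)
%   c = b                          (symmetry)
%   a = c \text{ and } c = b \Rightarrow a = b   (transitivity)
```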
617,829
We have an enterprise application written in asp.net c# (3.5) and SQL server that we want to bundle and release for customers. However, several have expressed concerns that it **requires** a Microsoft server due to the costs. Yes, I know... Therefore, we are considering porting it to a LAMP stack, with the "P" referring to php. **What challenges can we expect to face developing on a LAMP stack coming from a Visual Studio asp.net environment?** The issues I am sure of are: 1. Debugging: Visual Studio is awesome for both client and server side debugging. 2. Framework: The code behind model works great, and the MVC framework is nice. 3. Maintenance: We would like the feature set to be common on both platforms. 4. Database layer: Code is loosely coupled to the mssql data types. If you've been through this exciting process, I'd love to know what it was like with some recommendations/tips. *As a side to this, is there any way for us to run this code as is? Mono? Others?*
2009/03/06
[ "https://Stackoverflow.com/questions/617829", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11703/" ]
I have more experience with .NET than the \*AMP stacks, but based on my experience with XAMPP, I would offer the following observations 1. Debugging: Visual Studio is awesome for both client and server side debugging. [Eclipse PDT](http://www.eclipse.org/pdt/) works great for design, development, and debugging. I've heard good things about Zend Studio but haven't worked with it. 2. Framework: The code behind model works great, and the MVC framework is nice. There are frameworks to allow you to separate presentation from logic (e.g. [Smarty](http://www.smarty.net/rightforme.php) ) and at least one MVC framework is available (e.g. [CakePHP](http://cakephp.org/)) 3. Maintenance: We would like the feature set to be common on both platforms. If you exclude Windows specific functionality (Windows Integrated Security, etc) there shouldn't be much you **can't** do in both stacks, but if you have to reproduce controls like the gridview it will be labor intensive. 4. Database layer: Code is loosely coupled to the mssql data types. I am not aware of any data types that cannot be mapped between mysql and sql server and there is good documentation for [handling migrations](http://dev.mysql.com/doc/migration-toolkit/en/index.html) Mono might decrease the amount of time required to port your solution, but I am unaware of any way you could re-use all of your code "as is".
I have an ASP.NET background myself and have been researching open source frameworks for the last few months. I still haven't made up my mind. I've recently been looking at Grails. It seems to have the best of both worlds - a scripted, easy to use, open source RAD MVC framework on an enterprise platform. It uses the Groovy scripting language (Ruby-like) but runs on the JVM, so you can use the full Java framework if you like. There are tons of prewritten Java components out there to tap into. This thing is pretty cool. You'd be able to port your existing app pretty quickly. You'll need a Tomcat webhost though. If you need PHP, straight PHP performs pretty well, but most of the frameworks are poor performers. If you go with straight PHP there's no MVC. You'd be using the traditional page-based model. But you'll feel more at home. You can roll your own DAL with PDO and use stored procedures. You'll need a templating system though. Stay away from Smarty, which uses its own templating language. It's slow, and why do you need to learn a separate templating language? I never got that. Use Savant instead: <http://phpsavant.com/>. It uses PHP as the template language and is fast. You can mimic code-behind with this too by creating a template page for each site page. As far as MVC, there's a new PHP framework called Yii (<http://www.yiiframework.com/>) that claims to have the best performance out there for PHP frameworks. It's well documented too. It's probably the best PHP framework out there if you're coming from .NET. It feels enterprisey like Zend but without the poor performance. Most of the others are toy-ish or really slow, like Symfony and Cake. PHP works great with Apache. Not a lot of tuning or maintenance, unlike Rails and Django. Next you need an IDE. Go with NetBeans. Use the PHP version and install <http://www.xdebug.org/>. It will feel inferior to VS, but it's not bad. For a DB, MySQL is the sexy pick but Postgres is superior. It has one db engine that does it all. 
With MySQL, some features you want are in InnoDB and some are in MyISAM. If you need foreign keys and transactions you have to use InnoDB. Use MyISAM for fulltext search and faster read performance. Postgres performance has greatly improved with the version 8 release (on par with MySQL now) and it finally has a nice Windows installer.
617,829
We have an enterprise application written in asp.net c# (3.5) and SQL server that we want to bundle and release for customers. However, several have expressed concerns that it **requires** a Microsoft server due to the costs. Yes, I know... Therefore, we are considering porting it to a LAMP stack, with the "P" referring to php. **What challenges can we expect to face developing on a LAMP stack coming from a Visual Studio asp.net environment?** The issues I am sure of are: 1. Debugging: Visual Studio is awesome for both client and server side debugging. 2. Framework: The code behind model works great, and the MVC framework is nice. 3. Maintenance: We would like the feature set to be common on both platforms. 4. Database layer: Code is loosely coupled to the mssql data types. If you've been through this exciting process, I'd love to know what it was like with some recommendations/tips. *As a side to this, is there any way for us to run this code as is? Mono? Others?*
2009/03/06
[ "https://Stackoverflow.com/questions/617829", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11703/" ]
Another PHP IDE that you can consider is NetBeans. Having been a .NET, Java, and LAMP developer at one point or another, I found the biggest change was largely cultural. For example, PHP has a legacy of not using OO principles, whereas ASP.NET started off as a .NET framework with full OO support. This basic difference leads to significant issues, such as PHP's long list of reserved keywords and so forth.
I have an ASP.NET background myself and have been researching open source frameworks for the last few months. I still haven't made up my mind. I've recently been looking at Grails. It seems to have the best of both worlds - a scripted, easy to use, open source RAD MVC framework on an enterprise platform. It uses the Groovy scripting language (Ruby-like) but runs on the JVM, so you can use the full Java framework if you like. There are tons of prewritten Java components out there to tap into. This thing is pretty cool. You'd be able to port your existing app pretty quickly. You'll need a Tomcat webhost though. If you need PHP, straight PHP performs pretty well, but most of the frameworks are poor performers. If you go with straight PHP there's no MVC. You'd be using the traditional page-based model. But you'll feel more at home. You can roll your own DAL with PDO and use stored procedures. You'll need a templating system though. Stay away from Smarty, which uses its own templating language. It's slow, and why do you need to learn a separate templating language? I never got that. Use Savant instead: <http://phpsavant.com/>. It uses PHP as the template language and is fast. You can mimic code-behind with this too by creating a template page for each site page. As far as MVC, there's a new PHP framework called Yii (<http://www.yiiframework.com/>) that claims to have the best performance out there for PHP frameworks. It's well documented too. It's probably the best PHP framework out there if you're coming from .NET. It feels enterprisey like Zend but without the poor performance. Most of the others are toy-ish or really slow, like Symfony and Cake. PHP works great with Apache. Not a lot of tuning or maintenance, unlike Rails and Django. Next you need an IDE. Go with NetBeans. Use the PHP version and install <http://www.xdebug.org/>. It will feel inferior to VS, but it's not bad. For a DB, MySQL is the sexy pick but Postgres is superior. It has one db engine that does it all. 
With MySQL, some features you want are in InnoDB and some are in MyISAM. If you need foreign keys and transactions you have to use InnoDB. Use MyISAM for fulltext search and faster read performance. Postgres performance has greatly improved with the version 8 release (on par with MySQL now) and it finally has a nice Windows installer.
617,829
We have an enterprise application written in asp.net c# (3.5) and SQL server that we want to bundle and release for customers. However, several have expressed concerns that it **requires** a Microsoft server due to the costs. Yes, I know... Therefore, we are considering porting it to a LAMP stack, with the "P" referring to php. **What challenges can we expect to face developing on a LAMP stack coming from a Visual Studio asp.net environment?** The issues I am sure of are: 1. Debugging: Visual Studio is awesome for both client and server side debugging. 2. Framework: The code behind model works great, and the MVC framework is nice. 3. Maintenance: We would like the feature set to be common on both platforms. 4. Database layer: Code is loosely coupled to the mssql data types. If you've been through this exciting process, I'd love to know what it was like with some recommendations/tips. *As a side to this, is there any way for us to run this code as is? Mono? Others?*
2009/03/06
[ "https://Stackoverflow.com/questions/617829", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11703/" ]
Other MVC frameworks: * CodeIgniter * Kohana * Yii (Just found out about Yii. [Here's an article](http://www.beyondcoding.com/2009/03/02/choosing-a-php-framework-round-2-yii-vs-kohana-vs-codeigniter/) that compares them.) There are probably a half-dozen more out there, as well.
I have an ASP.NET background myself and have been researching open source frameworks for the last few months. I still haven't made up my mind. I've recently been looking at Grails. It seems to have the best of both worlds - a scripted, easy to use, open source RAD MVC framework on an enterprise platform. It uses the Groovy scripting language (Ruby-like) but runs on the JVM, so you can use the full Java framework if you like. There are tons of prewritten Java components out there to tap into. This thing is pretty cool. You'd be able to port your existing app pretty quickly. You'll need a Tomcat webhost though. If you need PHP, straight PHP performs pretty well, but most of the frameworks are poor performers. If you go with straight PHP there's no MVC. You'd be using the traditional page-based model. But you'll feel more at home. You can roll your own DAL with PDO and use stored procedures. You'll need a templating system though. Stay away from Smarty, which uses its own templating language. It's slow, and why do you need to learn a separate templating language? I never got that. Use Savant instead: <http://phpsavant.com/>. It uses PHP as the template language and is fast. You can mimic code-behind with this too by creating a template page for each site page. As far as MVC, there's a new PHP framework called Yii (<http://www.yiiframework.com/>) that claims to have the best performance out there for PHP frameworks. It's well documented too. It's probably the best PHP framework out there if you're coming from .NET. It feels enterprisey like Zend but without the poor performance. Most of the others are toy-ish or really slow, like Symfony and Cake. PHP works great with Apache. Not a lot of tuning or maintenance, unlike Rails and Django. Next you need an IDE. Go with NetBeans. Use the PHP version and install <http://www.xdebug.org/>. It will feel inferior to VS, but it's not bad. For a DB, MySQL is the sexy pick but Postgres is superior. It has one db engine that does it all. 
With MySQL, some features you want are in InnoDB and some are in MyISAM. If you need foreign keys and transactions you have to use InnoDB. Use MyISAM for fulltext search and faster read performance. Postgres performance has greatly improved with the version 8 release (on par with MySQL now) and it finally has a nice Windows installer.
15,302,841
I am trying to implement a solution to a problem using integer linear programming (ILP). As the problem is NP-hard, I am wondering if the solution provided by the Simplex Method would be optimal. Can anyone comment on the optimality of ILP using the Simplex Method or point to some source? Is there any other algorithm that can provide an optimal solution to the ILP problem? EDIT: I am looking for a yes/no answer about the optimality of the solution obtained by any of the algorithms (Simplex Method, branch and bound, and cutting planes) for ILP.
2013/03/08
[ "https://Stackoverflow.com/questions/15302841", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1020149/" ]
The Simplex Method doesn't handle the constraint that you want integers. Simply rounding the result is not guaranteed to give an optimal solution. Using the Simplex Method to solve an ILP problem does work if the constraint matrix is [totally dual integral](http://en.wikipedia.org/wiki/Total_dual_integrality). Some algorithms that solve ILP (not constrained to totally dual integral constraint matrices) are [Branch and Bound](http://en.wikipedia.org/wiki/Branch_and_bound), which is simple to implement and generally works well if the costs are reasonably uniform (very non-uniform costs make it try many attempts that look promising at first but turn out not to be), and [Cutting Plane](http://en.wikipedia.org/wiki/Cutting-plane_method), which I honestly don't know much about, but it's probably good because people are using it.
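To see why rounding the LP relaxation's result can fail, here is a small illustrative sketch in Python on a hypothetical toy instance (not from the question): maximize x + y subject to 2x + 2y ≤ 3 with x, y ≥ 0 integers. The LP relaxation's optimum is 1.5 (e.g. at x = y = 0.75), but the rounded point (1, 1) is infeasible; brute-force enumeration, the conceptually simplest exact ILP method, finds the true integer optimum.

```python
# Toy ILP: maximize x + y  subject to  2x + 2y <= 3,  x, y >= 0 integers.
# The LP relaxation's optimum is 1.5 (e.g. x = y = 0.75); rounding to (1, 1)
# violates the constraint, so rounding does not yield the ILP optimum.

def feasible(x, y):
    return 2 * x + 2 * y <= 3

def solve_ilp_by_enumeration(bound=5):
    # Exhaustive search over a bounded integer box: exact, but exponential
    # in general (the problem is NP-hard); fine for tiny instances.
    best = None
    for x in range(bound + 1):
        for y in range(bound + 1):
            if feasible(x, y) and (best is None or x + y > best[0]):
                best = (x + y, x, y)
    return best

value, x, y = solve_ilp_by_enumeration()
print(value, x, y)     # the true ILP optimum
print(feasible(1, 1))  # the rounded LP solution (1, 1) is infeasible
```

Branch and bound is essentially a smarter version of this search: it solves LP relaxations of subproblems and prunes any branch whose LP bound cannot beat the best integer solution found so far.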
The solution set for a linear programming problem is optimal by definition. Linear programming is a class of algorithms known as "constraint satisfaction". Once you have satisfied the constraints you have solved the problem and there is no "better" solution, because by definition the best outcome is to satisfy the constraints. If you have not completely modeled the problem, however, then obviously some other type of solution may be better. --- Clarification: When I write above "satisfy the constraints", I am including maximization of objective function. The cutting plane algorithm is essentially an extension of the simplex algorithm.
64,436
I just have a thought on the new link badges: * **Announcer** - Shared a link to a question that was visited by 50 unique IP addresses in 2 days * **Booster** - Shared a link to a question that was visited by 400 unique IP addresses in 3 days * **Publicist** - Shared a link to a question that was visited by 1,000 unique IP addresses in 4 days **Scenario:** A user shares a link to a question on StackOverflow.com in his blog. The link is visited by 50 unique IP addresses within two days, which should give him an Announcer badge. But on the third day, suddenly the link is visited by another 350 unique addresses (so 400 in total), which should give him the Booster badge. Then lastly on the fourth day, (as you might already guess what I wanted to say) the link is visited by another 600 unique addresses (so 1,000 in total), which should give him the Publicist badge. **Question:** I know each badge can only be earned on a different question (based on [Jeff's blog](https://blog.stackoverflow.com/2010/09/announcer-booster-and-publicist-badges/)), so the user above won't get all three badges. The questions I want to ask are: * Would the user get a bronze Announcer badge or a gold Publicist badge for the above scenario? * If the answer above is bronze, does it mean that users have to get the low-level badge (bronze) before getting the corresponding high-level badge (gold)? * When is the badge awarded? After the 2nd day (bronze)? After the 4th day (gold)?
2010/09/14
[ "https://meta.stackexchange.com/questions/64436", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/151244/" ]
First, some facts: 1. Yesterday, Jeff [got a Booster badge (silver)](https://english.stackexchange.com/badges/55/booster) for promoting a question on English Language and Usage. 2. At that time, the question was only one (1) day old. Jeff's promotional link must be even younger, by definition. 3. Jeff [doesn't have a Publicist badge (gold)](https://english.stackexchange.com/badges/56/publicist) on that site yet. 4. Jeff [does already have an Announcer badge (bronze)](https://english.stackexchange.com/badges/54/announcer) for a different question on that site. So, obviously, the system does not wait for a full 4 days before deciding which badge to hand out. (Theoretically, Jeff could still become eligible for a Publicist badge for that question.) **Edit:** now that Jeff himself has [chimed in](https://meta.stackexchange.com/questions/64436/sequence-behind-awarding-link-badges-announcer-booster-publicist/64794#64794), I have deleted my original speculation, because it is now proven to be *blatantly wrong*. Jeff's answer is somewhat cryptic on its own, but when taken together with the four facts I provided and the links NullUserException offered in his comment below, it answers the question perfectly. I'll sum it up here once again, but really, you should be upvoting Jeff. **You get the Announcer, Booster, and Publicist badges *in that order*. In other words, you can only get the Announcer badge (but not Booster or Publicist) for your first promotional link, the Booster badge (but not Publicist) for the next one, and the Publicist badge only after that.**
As it says in the blog post: > > Each badge can be earned only once, and **each must be earned on a different question**. Also, the tracked IPs must originate from outside our existing network. > > > <https://blog.stackoverflow.com/2010/09/announcer-booster-and-publicist-badges/>
64,436
I just have a thought on the new link badges: * **Announcer** - Shared a link to a question that was visited by 50 unique IP addresses in 2 days * **Booster** - Shared a link to a question that was visited by 400 unique IP addresses in 3 days * **Publicist** - Shared a link to a question that was visited by 1,000 unique IP addresses in 4 days **Scenario:** A user shares a link to a question on StackOverflow.com in his blog. The link is visited by 50 unique IP addresses within two days, which should give him an Announcer badge. But on the third day, suddenly the link is visited by another 350 unique addresses (so 400 in total), which should give him the Booster badge. Then lastly on the fourth day, (as you might already guess what I wanted to say) the link is visited by another 600 unique addresses (so 1,000 in total), which should give him the Publicist badge. **Question:** I know each badge can only be earned on a different question (based on [Jeff's blog](https://blog.stackoverflow.com/2010/09/announcer-booster-and-publicist-badges/)), so the user above won't get all three badges. The questions I want to ask are: * Would the user get a bronze Announcer badge or a gold Publicist badge for the above scenario? * If the answer above is bronze, does it mean that users have to get the low-level badge (bronze) before getting the corresponding high-level badge (gold)? * When is the badge awarded? After the 2nd day (bronze)? After the 4th day (gold)?
2010/09/14
[ "https://meta.stackexchange.com/questions/64436", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/151244/" ]
First, some facts: 1. Yesterday, Jeff [got a Booster badge (silver)](https://english.stackexchange.com/badges/55/booster) for promoting a question on English Language and Usage. 2. At that time, the question was only one (1) day old. Jeff's promotional link must be even younger, by definition. 3. Jeff [doesn't have a Publicist badge (gold)](https://english.stackexchange.com/badges/56/publicist) on that site yet. 4. Jeff [does already have an Announcer badge (bronze)](https://english.stackexchange.com/badges/54/announcer) for a different question on that site. So, obviously, the system does not wait for a full 4 days before deciding which badge to hand out. (Theoretically, Jeff could still become eligible for a Publicist badge for that question.) **Edit:** now that Jeff himself has [chimed in](https://meta.stackexchange.com/questions/64436/sequence-behind-awarding-link-badges-announcer-booster-publicist/64794#64794), I have deleted my original speculation, because it is now proven to be *blatantly wrong*. Jeff's answer is somewhat cryptic on its own, but when taken together with the four facts I provided and the links NullUserException offered in his comment below, it answers the question perfectly. I'll sum it up here once again, but really, you should be upvoting Jeff. **You get the Announcer, Booster, and Publicist badges *in that order*. In other words, you can only get the Announcer badge (but not Booster or Publicist) for your first promotional link, the Booster badge (but not Publicist) for the next one, and the Publicist badge only after that.**
They've obviously changed this recently, because yesterday I gained the Publicist (gold) badge, and I logged in today to discover I was awarded the Booster (silver) badge, all on the same question! <https://android.stackexchange.com/users/3868/dunhamzzz?tab=badges&sort=class>
64,436
I just have a thought on the new link badges: * **Announcer** - Shared a link to a question that was visited by 50 unique IP addresses in 2 days * **Booster** - Shared a link to a question that was visited by 400 unique IP addresses in 3 days * **Publicist** - Shared a link to a question that was visited by 1,000 unique IP addresses in 4 days **Scenario:** A user shares a link to a question on StackOverflow.com in his blog. The link is visited by 50 unique IP addresses within two days, which should give him an Announcer badge. But on the third day, suddenly the link is visited by another 350 unique addresses (so 400 in total), which should give him the Booster badge. Then lastly on the fourth day, (as you might already guess what I wanted to say) the link is visited by another 600 unique addresses (so 1,000 in total), which should give him the Publicist badge. **Question:** I know each badge can only be earned on a different question (based on [Jeff's blog](https://blog.stackoverflow.com/2010/09/announcer-booster-and-publicist-badges/)), so the user above won't get all three badges. The questions I want to ask are: * Would the user get a bronze Announcer badge or a gold Publicist badge for the above scenario? * If the answer above is bronze, does it mean that users have to get the low-level badge (bronze) before getting the corresponding high-level badge (gold)? * When is the badge awarded? After the 2nd day (bronze)? After the 4th day (gold)?
2010/09/14
[ "https://meta.stackexchange.com/questions/64436", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/151244/" ]
As it says in the blog post: > > Each badge can be earned only once, and **each must be earned on a different question**. Also, the tracked IPs must originate from outside our existing network. > > > <https://blog.stackoverflow.com/2010/09/announcer-booster-and-publicist-badges/>
They've obviously changed this recently, because yesterday I gained the Publicist (gold) badge, and I logged in today to discover I was awarded the Booster (silver) badge, all on the same question! <https://android.stackexchange.com/users/3868/dunhamzzz?tab=badges&sort=class>
20,825,275
My question is related to Java but is quite general. When making things such as calculators, I see people store the operator as a char as opposed to a string. Surely a string is easier to work with? In said scenario, are there any advantages to using char over string?
2013/12/29
[ "https://Stackoverflow.com/questions/20825275", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1849663/" ]
A char explicitly holds exactly one character. It can't hold two, or zero, or even a null. This increases type safety where that requirement is appropriate. Also, using char is slightly faster than using String in Java.
Also, the Character class has special methods to check whether a char is a digit, and so on. A char takes less memory than a String because char is a value type and not a reference type. The char value can sit on the stack instead of the heap, meaning faster reads and writes and better performance.
34,438,130
I noticed that there is no default directory called "Services" or anything like that in the default Web API template. I come from a Grails background, where I learned to use service classes to handle application logic and model processing to move it away from the controllers. Because there is no default directory for such classes (that I know of) in Web API, it leads me to believe that this isn't best practice here. Anyone care to shed some knowledge on this?
2015/12/23
[ "https://Stackoverflow.com/questions/34438130", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3351924/" ]
Well, "Service" is a misnomer in that regard; you could as well call it a "Manager". Web API does not dictate any project layout. It just utilizes storing models and controllers; the rest of the layout is yours to decide. You can as well rename the Controllers and Models directories if you'd like. Of course it's recommended to develop your business logic in a separate class library altogether, which promotes abstraction, separation of concerns and hence testability.
Keeping application logic in a "Services" directory, separate from the controllers, is a good approach regardless of technology stack. Such an approach is used by many in the .NET world as well. Default templates are not perfect, so feel free to structure the application the way you are used to.
20,714,745
I am trying to learn more about Java web development. I am mainly focused on trying to understand how data that a user enters, maybe through the course of filling out a multipage form, is managed as the user moves from page to page. From what I have gathered, you can store data within the session on the server side. I am also learning about cookies, which are stored within the browser. Is there a general rule used to determine what data should be stored in a cookie vs. when you should store data in a session (session.setAttribute), or are these completely different concepts? Thanks
2013/12/21
[ "https://Stackoverflow.com/questions/20714745", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1154644/" ]
The basics of sessions/cookies are like this. A session is typically a way for a server to store data about a user. This can be done in a variety of ways, from memory and files to a database. This session can be used by you to store pretty much anything you need as the user bounces around your site. It is assigned an ID (the session ID) which you don't usually need to worry about too much. In most web languages you can easily access the user session with a few functions without dealing with IDs. Now, since the web is stateless - meaning there is really no way to know that the user who visited page A is the same as the one who visited page B - we want to make sure that the user carries their session ID with them. This can be done in a variety of ways, but the most common one is through the use of a session cookie, which is a special cookie automatically set by the server that is solely there for passing the session around. It can also be passed in the URL (I'm sure you've seen things like index.php?sessid=01223..) as well as headers and so on. When most people talk about adding info to a cookie they are not talking about session cookies but about a custom cookie that you specifically set. The only reason that you would want to do that is if you needed to store info beyond the life of the session (which ends when the browser is closed). A good example of that is the "remember me" feature of many sites. So use sessions unless you need to have something last a long time.
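The division of labor described above - the cookie carries only an opaque session ID while the actual data lives on the server - can be sketched in a few lines of framework-agnostic Python. This is an illustration of the concept only, not any particular servlet API; all names here are made up.

```python
import secrets

# Server-side session store: session ID -> per-user data dictionary.
SESSIONS = {}

def create_session():
    """Start a session; only the returned ID travels to the browser in a cookie."""
    session_id = secrets.token_hex(16)  # unguessable opaque identifier
    SESSIONS[session_id] = {}
    return session_id

def set_attribute(session_id, key, value):
    # Analogous in spirit to session.setAttribute(key, value):
    # the data itself never leaves the server.
    SESSIONS[session_id][key] = value

def get_attribute(session_id, key):
    return SESSIONS[session_id].get(key)

# The browser only ever holds the ID, e.g. in a header like
# "Cookie: JSESSIONID=<id>", and sends it back on every request.
sid = create_session()
set_attribute(sid, "cart_items", 3)
print(get_attribute(sid, "cart_items"))
```

A custom cookie, by contrast, would store the value itself in the browser, which is why it survives browser restarts but should never hold anything sensitive.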
Yes. There are a few rules actually. For one, cookie data is sent by the browser on every request; session data is kept on the server (and not re-transmitted on every request). However, usually the session ID is carried in a cookie. This enables the server to identify the client.
96,826
I have two 3-ft 4x4 pieces of wood and I would like to join them securely so that I have one 6-ft 4x4 piece of wood. What's the best way of doing this? Thanks.
2016/08/05
[ "https://diy.stackexchange.com/questions/96826", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/53603/" ]
Slice them up into 1x4s then laminate them with a 1 1/2 foot overlap on alternating pieces. [![enter image description here](https://i.stack.imgur.com/IVYjK.png)](https://i.stack.imgur.com/IVYjK.png) That will give you a 6 foot 4x4 from two 3 foot 4x4s. You'll need lots of wood glue, many clamps and either a good table saw or something equivalent. The only other ways I can think of doing this involve spending more on hardware than a 6 foot 4x4 probably costs.
Much of it is a matter of what you're trying to do but my first off the cuff suggestion is a "half lap" joint...
10,450,969
I am building an application that is supposed to extract a mission for the user from a finite mission pool. The thing is that I want: 1. that the user won't get the same mission twice, 2. that the user won't get the same missions as his friends (in the application) until some time has passed. To summarize my problem, I need to extract the **least common mission** out of the pool. Can someone please refer me to known algorithms for finding the least common something (LFU)? I also need the theoretical aspect, so if someone knows some articles or research papers about this (from known magazines like Scientific American) that would be great.
2012/05/04
[ "https://Stackoverflow.com/questions/10450969", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1331124/" ]
For getting the least frequently used mission, simply give every mission a counter that counts how many times it was used. Then search for the mission with the lowest counter value. For getting the mission that was least frequently used by a group of friends, you can store for every user the missions he/she has done (and the number of times). This information is probably useful anyway. Then when a new mission needs to be chosen for a user, a (temporary) combined list of used missions and their frequencies by the users and all his friends can easily be created and sorted by frequency. This is not very expensive.
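The counter-based approach above is only a few lines in practice. Here is a sketch (mission names and data shapes are illustrative assumptions) that merges the usage counts of a user and their friends and picks the least-used mission:

```python
from collections import Counter

# Hypothetical finite mission pool.
MISSIONS = ["rescue", "escort", "defend", "scout"]

def pick_least_used(user_counts, friends_counts):
    """Pick the mission used least often by the user and their friends combined.

    user_counts maps mission name -> times completed by the user;
    friends_counts is a list of such mappings, one per friend.
    """
    combined = Counter(user_counts)
    for counts in friends_counts:
        combined.update(counts)
    # Missions never used by anyone have an implicit count of 0.
    return min(MISSIONS, key=lambda m: combined.get(m, 0))

user = {"rescue": 2, "escort": 1}
friends = [{"rescue": 1, "defend": 3}, {"escort": 2}]
print(pick_least_used(user, friends))  # "scout" - never used by anyone
```

Building the combined counter is linear in the number of friends and missions, which matches the answer's point that this is not very expensive.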
A good start is Edmonds' blossom algorithm (e.g. the Blossom V implementation) for a minimum-weight perfect matching in a general graph. If you have a bipartite graph, you can look at the Floyd–Warshall algorithm to find shortest paths. Maybe you can also use a topological search, but I don't know, because these algorithms are really hard to learn.
10,450,969
I am building an application that is supposed to extract a mission for the user from a finite mission pool. The thing is that I want: 1. that the user won't get the same mission twice, 2. that the user won't get the same missions as his friends (in the application) until some time has passed. To summarize my problem, I need to extract the **least common mission** out of the pool. Can someone please point me to known algorithms for finding the least common item (LFU, least frequently used)? I also need the theoretical aspect, so if someone knows articles or research papers about this (from known magazines like Scientific American), that would be great.
2012/05/04
[ "https://Stackoverflow.com/questions/10450969", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1331124/" ]
For getting the least frequently used mission, simply give every mission a counter that counts how many times it was used. Then search for the mission with the lowest counter value. For getting the mission that was least frequently used by a group of friends, you can store for every user the missions he/she has done (and the number of times). This information is probably useful anyway. Then when a new mission needs to be chosen for a user, a (temporary) combined list of used missions and their frequencies by the users and all his friends can easily be created and sorted by frequency. This is not very expensive.
Based on your 2 requirements, I don't see what the "least" used mission has to do with this. You said you want non-repeating missions. **OPTION 1:** What container do you use to hold all the missions? Assume it's a list: when you or a friend chooses a mission, move that mission to the end of the list (swap it with the mission there). You have now split your initial list into 2 sublists. The first part holds unused missions, and the second part holds used missions. Keep track of the pivot/index which separates the 2 lists. Now every time you or your friends choose a new mission, it is chosen from the first sublist, then moved into the second sublist and the pivot is updated. **OPTION 2:** If you repeat missions eventually, but first choose the ones which have been chosen the least number of times, then you can make your container a min-heap. Add a usage counter to each mission and add them to the heap based on that. Extract a mission, increment its usage counter, then put it back into the heap. This is a good solution, but depending on how simple your program is, you could even use a circular buffer. It would be nice to know more about what you're building :)
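OPTION 1 above (a list split by a pivot into unused and used sublists) can be sketched like this; the class and method names are invented for the example:

```python
import random

class MissionPool:
    """Missions before `pivot` are unused; missions after it are used."""

    def __init__(self, missions):
        self.missions = list(missions)
        self.pivot = len(self.missions)  # everything before pivot is unused

    def choose(self):
        if self.pivot == 0:
            raise IndexError("all missions have been used")
        i = random.randrange(self.pivot)   # pick from the unused sublist
        self.pivot -= 1
        # Swap the chosen mission to the front of the used region.
        self.missions[i], self.missions[self.pivot] = (
            self.missions[self.pivot], self.missions[i])
        return self.missions[self.pivot]
```

Each choice is O(1), and no mission repeats until the pool is exhausted, which matches requirement 1 exactly.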
10,450,969
I am building an application that is supposed to extract a mission for the user from a finite mission pool. The thing is that I want: 1. that the user won't get the same mission twice, 2. that the user won't get the same missions as his friends (in the application) until some time has passed. To summarize my problem, I need to extract the **least common mission** out of the pool. Can someone please point me to known algorithms for finding the least common item (LFU, least frequently used)? I also need the theoretical aspect, so if someone knows articles or research papers about this (from known magazines like Scientific American), that would be great.
2012/05/04
[ "https://Stackoverflow.com/questions/10450969", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1331124/" ]
For getting the least frequently used mission, simply give every mission a counter that counts how many times it was used. Then search for the mission with the lowest counter value. For getting the mission that was least frequently used by a group of friends, you can store for every user the missions he/she has done (and the number of times). This information is probably useful anyway. Then when a new mission needs to be chosen for a user, a (temporary) combined list of used missions and their frequencies by the users and all his friends can easily be created and sorted by frequency. This is not very expensive.
I think the structure you need is a [min-heap](http://en.wikipedia.org/wiki/Heap_%28data_structure%29). It allows extraction of the minimum in O(log n), and it allows you to increase the value of an item in O(log n) too.
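A minimal sketch of the min-heap approach using Python's `heapq` (the class name is invented for the example). Rather than an in-place increase-key, which `heapq` does not expose, the popped entry is simply re-pushed with an incremented counter:

```python
import heapq

class LFUMissions:
    """Min-heap of (use_count, mission): the root is a least-used mission."""

    def __init__(self, missions):
        self.heap = [(0, m) for m in missions]
        heapq.heapify(self.heap)             # O(n) build

    def next_mission(self):
        count, mission = heapq.heappop(self.heap)        # O(log n) extract-min
        heapq.heappush(self.heap, (count + 1, mission))  # O(log n) re-insert
        return mission
```

Pop-and-re-push costs the same O(log n) as an increase-key would, and keeps the code simple.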
10,450,969
I am building an application that is supposed to extract a mission for the user from a finite mission pool. The thing is that I want: 1. that the user won't get the same mission twice, 2. that the user won't get the same missions as his friends (in the application) until some time has passed. To summarize my problem, I need to extract the **least common mission** out of the pool. Can someone please point me to known algorithms for finding the least common item (LFU, least frequently used)? I also need the theoretical aspect, so if someone knows articles or research papers about this (from known magazines like Scientific American), that would be great.
2012/05/04
[ "https://Stackoverflow.com/questions/10450969", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1331124/" ]
Based on your 2 requirements, I don't see what the "least" used mission has to do with this. You said you want non-repeating missions. **OPTION 1:** What container do you use to hold all the missions? Assume it's a list: when you or a friend chooses a mission, move that mission to the end of the list (swap it with the mission there). You have now split your initial list into 2 sublists. The first part holds unused missions, and the second part holds used missions. Keep track of the pivot/index which separates the 2 lists. Now every time you or your friends choose a new mission, it is chosen from the first sublist, then moved into the second sublist and the pivot is updated. **OPTION 2:** If you repeat missions eventually, but first choose the ones which have been chosen the least number of times, then you can make your container a min-heap. Add a usage counter to each mission and add them to the heap based on that. Extract a mission, increment its usage counter, then put it back into the heap. This is a good solution, but depending on how simple your program is, you could even use a circular buffer. It would be nice to know more about what you're building :)
A good start is Edmonds' blossom algorithm (e.g. the Blossom V implementation) for a minimum-weight perfect matching in a general graph. If you have a bipartite graph, you can look at the Floyd–Warshall algorithm to find shortest paths. Maybe you can also use a topological search, but I don't know, because these algorithms are really hard to learn.
10,450,969
I am building an application that is supposed to extract a mission for the user from a finite mission pool. The thing is that I want: 1. that the user won't get the same mission twice, 2. that the user won't get the same missions as his friends (in the application) until some time has passed. To summarize my problem, I need to extract the **least common mission** out of the pool. Can someone please point me to known algorithms for finding the least common item (LFU, least frequently used)? I also need the theoretical aspect, so if someone knows articles or research papers about this (from known magazines like Scientific American), that would be great.
2012/05/04
[ "https://Stackoverflow.com/questions/10450969", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1331124/" ]
Based on your 2 requirements, I don't see what the "least" used mission has to do with this. You said you want non-repeating missions. **OPTION 1:** What container do you use to hold all the missions? Assume it's a list: when you or a friend chooses a mission, move that mission to the end of the list (swap it with the mission there). You have now split your initial list into 2 sublists. The first part holds unused missions, and the second part holds used missions. Keep track of the pivot/index which separates the 2 lists. Now every time you or your friends choose a new mission, it is chosen from the first sublist, then moved into the second sublist and the pivot is updated. **OPTION 2:** If you repeat missions eventually, but first choose the ones which have been chosen the least number of times, then you can make your container a min-heap. Add a usage counter to each mission and add them to the heap based on that. Extract a mission, increment its usage counter, then put it back into the heap. This is a good solution, but depending on how simple your program is, you could even use a circular buffer. It would be nice to know more about what you're building :)
I think the structure you need is a [min-heap](http://en.wikipedia.org/wiki/Heap_%28data_structure%29). It allows extraction of the minimum in O(log n), and it allows you to increase the value of an item in O(log n) too.