304,651
[![enter image description here](https://i.stack.imgur.com/liZpX.png)](https://i.stack.imgur.com/liZpX.png) ![](https://i.stack.imgur.com/NHn5D.png) I installed High Sierra a few months ago, and there now seems to be a problem with System Information.app: it always reports disk usage incorrectly. For example, it shows System at 40GB but Photos at 0KB, even though I have 19GB worth of photos, and DaisyDisk reports the correct disk usage. I have tried booting into safe mode and rebooting; the problem persists after restarts. Screenshots: [![enter image description here](https://i.stack.imgur.com/BwsKp.png)](https://i.stack.imgur.com/BwsKp.png) [![enter image description here](https://i.stack.imgur.com/MbKKj.png)](https://i.stack.imgur.com/MbKKj.png) I ran this scan as the root user, not an admin. So how do I fix the missing photo sizes in System Information.app?
2017/11/05
[ "https://apple.stackexchange.com/questions/304651", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/-1/" ]
You need to reindex your Spotlight cache. To do this: 1. Go to System Preferences → Spotlight → Privacy. 2. Click the plus symbol and add the macOS hard drive that is reporting the wrong status/info. 3. After adding the drive, quit System Preferences and wait a few seconds. 4. Open System Preferences → Spotlight → Privacy again, and this time remove the drive you just added. What this does is force Spotlight to reindex and recreate the .Spotlight-V100 files on your hard drive. There are also sudo commands that perform the same step, but for now try this!
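For reference, the `sudo` route the answer alludes to is a sketch using macOS's `mdutil` (note this erases and rebuilds the index for the boot volume, so expect Spotlight to be busy for a while):

```shell
# Erase the Spotlight index on the boot volume; it rebuilds automatically
sudo mdutil -E /

# Check indexing status afterwards
mdutil -s /
```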
I solved this issue by simply relaunching Finder: click the Apple logo → Force Quit → select Finder and click Relaunch. When I reopened Finder, it showed the correct sizes.
304,651
[![enter image description here](https://i.stack.imgur.com/liZpX.png)](https://i.stack.imgur.com/liZpX.png) ![](https://i.stack.imgur.com/NHn5D.png) I installed High Sierra a few months ago, and there now seems to be a problem with System Information.app: it always reports disk usage incorrectly. For example, it shows System at 40GB but Photos at 0KB, even though I have 19GB worth of photos, and DaisyDisk reports the correct disk usage. I have tried booting into safe mode and rebooting; the problem persists after restarts. Screenshots: [![enter image description here](https://i.stack.imgur.com/BwsKp.png)](https://i.stack.imgur.com/BwsKp.png) [![enter image description here](https://i.stack.imgur.com/MbKKj.png)](https://i.stack.imgur.com/MbKKj.png) I ran this scan as the root user, not an admin. So how do I fix the missing photo sizes in System Information.app?
2017/11/05
[ "https://apple.stackexchange.com/questions/304651", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/-1/" ]
I had a similar problem just now: Finder, Get Info, and Disk Utility were all reporting 18GB free on a 250GB SSD that should have had plenty of free space. Under About This Mac → Storage, it reported 130GB of "System" usage. None of the space was listed as "purgeable", and macOS started complaining about lack of disk space. I found a solution in [this Apple discussion thread](https://discussions.apple.com/message/32537764#message32537764). In my case, the problem was fixed by going to Time Machine preferences and removing an old backup drive (which had long since died) from the list of backup destinations. Immediately, the available disk space on my SSD was correctly reported as 120GB.
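If you prefer the command line, the same cleanup can be sketched with macOS's `tmutil` (the destination ID below is hypothetical; substitute the one printed by the first command):

```shell
# List configured Time Machine destinations and their IDs
tmutil destinationinfo

# Remove a stale destination by its ID (example ID is made up)
sudo tmutil removedestination 5EFD64D3-4AAA-4B5C-8E12-123456789ABC
```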
You need to reindex your Spotlight cache. To do this: 1. Go to System Preferences → Spotlight → Privacy. 2. Click the plus symbol and add the macOS hard drive that is reporting the wrong status/info. 3. After adding the drive, quit System Preferences and wait a few seconds. 4. Open System Preferences → Spotlight → Privacy again, and this time remove the drive you just added. What this does is force Spotlight to reindex and recreate the .Spotlight-V100 files on your hard drive. There are also sudo commands that perform the same step, but for now try this!
340,068
~~I discovered a potential bug~~ I'm having a bizarre problem, which I originally thought was environmental to my computer, but I have now been able to reproduce it on two other machines. It's possible it's environmental to my company's network, but I've never seen anything like this before. Attempting to enter tags causes [external] Internet connection loss. Yes, I'll repeat that, because it sounds insane: "Attempting to enter tags causes the computer to lose its Internet connection." The local network works without issue: I can connect to any other machine, use our DFS shares within the LAN, and ping the local gateway or any other machine. However, attempting to access any other website or use any web service fails. The interesting part is that DNS will still resolve IP addresses in a ping request, even after you flush your DNS cache. It appears to happen only if I copy and paste a question I attempted to ask from Notepad into the question box. Steps to reproduce: First, download this text file from my Google Drive, which contains the question I tried to save when I realized I had lost my Internet connection: <https://drive.google.com/file/d/0B9k0kTfjwe8Ibk1UY3AtNzBPVnM/view?usp=sharing> 1. Create a new question and write a title. DO NOT ENTER ANY TAGS YET. 2. Copy the entire contents of the text file. 3. Paste the contents into the question text area. 4. Start typing into the tag field. At step 4, if you experience the same behavior as me, you'll see a JSON parsing error in the console and no tags will load; however, the loading dots will continue to flash as if it were still searching for tags. Here's a screenshot of the error that shows up in the console: [![Error in console screenshot](https://i.stack.imgur.com/XjhmH.png)](https://i.stack.imgur.com/XjhmH.png) This has been experienced on Windows 7 64-bit Pro and Windows 10 64-bit Pro, using both Google Chrome 55 and Mozilla Firefox 50.
The antivirus software installed on these machines is the corporate version of AV Defender: Security Manager AV Defender 5.3.32.780 by N-able Technologies. It appears that running ComboFix corrects the issue on the Windows 7 machines, but we're still trying to figure out a fix for the Windows 10 machine. I was very doubtful that Stack Overflow / Stack Exchange was the problem at first, because this sounds insane. I've never had an issue like this before, but I am now able to reproduce the issue in multiple places, and following the steps above keeps triggering the problem. Please be careful testing this issue; it's a PITA to fix. PS: Anyone know the answer to my question? =)
2016/12/21
[ "https://meta.stackoverflow.com/questions/340068", "https://meta.stackoverflow.com", "https://meta.stackoverflow.com/users/2359643/" ]
We periodically get reports from folks whose local network has some sort of filter intended to block SQL injection attacks, which kicks in when they try to ask questions about SQL. I'd bet that's what's happening here. The blocking of all other requests is something I haven't seen before; if you do get to the bottom of this, please update us on what exactly was running and what you had to do to calm it down.
Seeing a connection timeout sounds like you're being blacklisted from external access by a transparent proxy which drops your packets instead of denying them (one more step to avoid informing a malicious program that it has been blocked). The feature is called Data Leak Prevention or Data Loss Prevention (DLP). The main idea is to act as a proxy, intercepting communications between your browser and the website while routing the traffic to the Internet, and to inspect your POST data. If something trips the detection, it could be malicious software on your workstation trying to steal your data. To stop the leak, some firewalls will blacklist you, permanently or temporarily (long enough for an admin to assess the risk), depending on the score of the detected leak (more or less what an anti-spam filter does when deciding to flag or delete a message). Others will just drop the request. For those thinking SSL would prevent this kind of thing, here is how Stack Overflow's certificate is signed when I visit it through our company firewall; compare it with your own view: [![enter image description here](https://i.stack.imgur.com/qO8mx.png)](https://i.stack.imgur.com/qO8mx.png) The firewall just performs a man-in-the-middle interception, so it can see the content of the requests in clear text and do its URL filtering and interception. It's clever enough to detect SSH on port 443 and block it (no stunnel option); in fact, it verifies that the exchange is valid HTTP.
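One way to check whether such a middlebox is rewriting TLS for you (a sketch; the hostname is just an example) is to inspect the certificate issuer yourself. Behind an intercepting firewall, this prints the corporate CA rather than the site's real certificate authority:

```shell
# Show the issuer of the certificate actually presented to this machine
openssl s_client -connect stackoverflow.com:443 -servername stackoverflow.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer
```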
377,577
I'm just curious whether it's possible to power 2 devices at different voltages with one set of batteries, while making sure that the 2 devices make use of all the batteries. I have an illustration here of the connection that I would like to get verified. [![Each battery is rated ***n*** volts](https://i.stack.imgur.com/Q846m.jpg)](https://i.stack.imgur.com/Q846m.jpg) Each battery is rated ***n*** volts. I used dashed lines for the 2n-volt connection for ease of viewing. I want to power one device with 4n volts and the other with 2n volts. [![An Alternative Illustration](https://i.stack.imgur.com/YEZa5.jpg)](https://i.stack.imgur.com/YEZa5.jpg) An alternative illustration^ The solution I propose is to connect 4 batteries in series first to power one device. Then, since there are already 2 sets of batteries in series, I can connect these sets in parallel so that the voltage does **not** add up. However, just by looking at the circuit, I think some portions are actually shorted. Any help would be appreciated. *Please don't be harsh if this is a dumb question.*
2018/06/01
[ "https://electronics.stackexchange.com/questions/377577", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/-1/" ]
No. Your attempt to power a 4n load and a 2n load from balanced batteries shorts out each 2n battery. You have two (or three) options. a) Power the 4n load, and a 4n→2n converter to power the 2n load. This could be a linear regulator if the power level is low enough that you don't worry about the inefficiency and heat; better, though, would be a buck regulator. b) Configure the batteries as 2n, power the 2n load, and use a 2n→4n boost converter for the 4n load. Choose (a) or (b) depending on which load draws more power, or on the convenience of having a buck or boost converter to hand. c) Power the 2n load from a battery tap, and tolerate the consequences of the imbalance (replacing batteries earlier, or charging them differently). d) Power the 2n load from a switchable battery tap, so you spend 50% of the time powered from the 'top' batteries and 50% from the 'bottom' ones. Obviously you cannot common the grounds when using it like this. e) Use a 2n set and a 4n set of batteries. You need more batteries, but you get complete freedom. OK, so 5 options.
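To see numerically why the paralleled strings short each other, here is a small sketch; the cell voltage and internal resistance are assumed values (AA-style cells), not figures from the question:

```python
# Assumed cell parameters, purely for illustration
n_volts = 1.5   # volts per cell
r_cell = 0.1    # ohms of internal resistance per cell

v_4n = 4 * n_volts          # four-cell string: 6.0 V
v_2n = 2 * n_volts          # two-cell string: 3.0 V
r_loop = (4 + 2) * r_cell   # all six cells sit in the shorted loop

# Paralleling unequal-voltage strings drops the 3 V difference across
# nothing but the cells' internal resistance, so a large circulating
# current flows even with no load connected:
i_circulating = (v_4n - v_2n) / r_loop
print(f"{i_circulating:.1f} A")  # roughly 5 A through the "shorted" cells
```

With realistic internal resistances this current cooks the lower string, which is exactly the short the answer describes.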
Wouldn't adding a pair of diodes solve the issue? If the current isn't something massive, I think it would be fine, no? ![Professional drawing](https://i.stack.imgur.com/e1qvi.jpg)
29,260
I've asked a few questions relating to schemes for various security-related functions, and posited schemes to accomplish those goals. In the responses, I see a conflict between two fundamental principles of IT security; "defense in depth" (make an attacker break not one, but many layers of information security) and "Complexity is the enemy of security" (the KISS principle). The question IMO is obvious; when does one of these two take priority over the other? You can't have both in the extreme; adding layers necessarily increases complexity of the system, while simplifying security typically "thins" it. There's a spectrum in between "ideal" complete simplicity and "ideal" infinite depth, and thus a balance to be struck. So, which of these, when they conflict, is generally to be preferred in the design of an information security scheme?
2013/01/17
[ "https://security.stackexchange.com/questions/29260", "https://security.stackexchange.com", "https://security.stackexchange.com/users/8281/" ]
This is written from the perspective of a software developer and project manager, who often needs to deal with sensitive data in apps that I am involved in creating. Defense in depth is not *necessarily* at odds with the principle of simplicity. [Simplicity is difficult](http://timelessrepo.com/simplicity-is-difficult), and it's not what you'd think. Simplicity doesn't necessarily mean lack of effort. It can (and in this case, I think it usually does) mean not using complex, home-grown mechanisms when established practices/tools exist. It can also mean keeping your systems simple to reduce the attack surface area, or making them less attractive targets by not storing the type of data that people want to steal. There is a true art, as well as science, in finding the right balance of simplicity vs. functionality, but in my experience, simplicity in some aspects ***is*** one of the keys to defense-in-depth. The essence of what I'm trying to get at here is that the amount of time you spend defending your application should be proportional to the sensitivity of the data contained within and the size of the attack surface area. Here are two ways that simplicity ***adds*** to defense in depth by subtracting something. 1. [Carefully deciding whether you should store sensitive data in the first place](https://donedesk.com/security) * One of the most blindingly obvious truths about protecting sensitive data is that if you don't have sensitive data, you don't need to protect it. * When developing systems/software, the most basic thing you can do to increase the relative security of your system is to limit the sensitive data in the first place. * By deciding not to store unnecessary sensitive data, you are making the system simpler while exercising defense-in-depth, by treating defense as part of your initial requirements-gathering phase.
2. [Reducing the attack surface area](https://www.owasp.org/index.php/Minimize_attack_surface_area) * Similar to the above, but this time looking at features. Every single user input control - text-box, drop-down list, etc. - is a potential "window" that, if unprotected, can be an entry point for an attacker. If you're failing to validate input or failing to sanitize output, that control may be vulnerable to any number of well-known attacks. As a developer, I can tell you that when creating large, complex sites/forms, it's easy to miss a validation control here or validate something incorrectly there. * The business might want a fancy interface with all sorts of bells and whistles, but the benefit of having that interface/feature should be weighed against the cost of securing it and the downside of the increased attack surface area. Also, keeping security simple doesn't necessarily mean limiting yourself to fewer defenses. Keeping security simple simply means not making it harder than necessary. Ways you can keep security simple ***and*** still have defense-in-depth include: * Having secure defaults. Establish your normal defenses. You can have fifty layers of defenses, but if you know what your baseline is, you are still keeping it simple by just following the normal routine. * Using established best practices. Similar to the above: for almost every type of I.T. activity, there is already an established set of best practices. Simply following those instead of coming up with wild schemes of your own keeps things simple. * Using established, trusted tools. Of course, no tool is foolproof, but if you're adding layers of defense, using established tools instead of your own creations or lesser-known, unsupported tools keeps things simpler in the long run. You'll have better documentation, a wider user community for support, and a greater likelihood that when there's a problem, it'll get patched quickly. Defense-in-depth is about layers of security.
Keeping each layer as simple as possible is the key to applying the principle of keeping it simple to a defense-in-depth strategy.
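As a concrete sketch of the validate-input / sanitize-output point above (the username policy and function names are invented for illustration, not an established API):

```python
import html
import re

# Assumed policy: usernames are 3-20 word characters
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def validate_username(raw: str) -> str:
    """Allowlist validation on input: reject anything outside the policy."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def render_comment(comment: str) -> str:
    """Sanitize on output: neutralize markup before it reaches the page."""
    return html.escape(comment)
```

The point is not these particular functions, but that every input "window" is routed through one small, auditable place instead of ad-hoc checks scattered across a large form.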
"Defense in depth" is usually pushed out of a feeling of paranoia. You implement layers upon layers of defense in response to panicky rants from upper management. "Low complexity" is usually promoted in order to reduce costs and delivery times. You reduce complexity so as to meet the deadlines imposed from upper management. Often, "upper management" will insist on both aspects at the same time. It is then your job to notify them of how irksome and pesky details like the Laws of Physics may prevent or slow down the simultaneous fulfillment of both goals. Ultimately, this is not a technical decision; this is about *economics*. Risk analysis ought to put a price on intrusions, and thus assess how much money defense in depth may save in the long run, by containing damage resulting from a successful attack. Similarly, increased development costs and delays will be given their own financial estimate. It is up to the decision-makers to balance these costs as it best suits their strategy; the important point being that this is a matter of *policy*, not of *technology*.
29,260
I've asked a few questions relating to schemes for various security-related functions, and posited schemes to accomplish those goals. In the responses, I see a conflict between two fundamental principles of IT security; "defense in depth" (make an attacker break not one, but many layers of information security) and "Complexity is the enemy of security" (the KISS principle). The question IMO is obvious; when does one of these two take priority over the other? You can't have both in the extreme; adding layers necessarily increases complexity of the system, while simplifying security typically "thins" it. There's a spectrum in between "ideal" complete simplicity and "ideal" infinite depth, and thus a balance to be struck. So, which of these, when they conflict, is generally to be preferred in the design of an information security scheme?
2013/01/17
[ "https://security.stackexchange.com/questions/29260", "https://security.stackexchange.com", "https://security.stackexchange.com/users/8281/" ]
"Defense in depth" is usually pushed out of a feeling of paranoia. You implement layers upon layers of defense in response to panicky rants from upper management. "Low complexity" is usually promoted in order to reduce costs and delivery times. You reduce complexity so as to meet the deadlines imposed from upper management. Often, "upper management" will insist on both aspects at the same time. It is then your job to notify them of how irksome and pesky details like the Laws of Physics may prevent or slow down the simultaneous fulfillment of both goals. Ultimately, this is not a technical decision; this is about *economics*. Risk analysis ought to put a price on intrusions, and thus assess how much money defense in depth may save in the long run, by containing damage resulting from a successful attack. Similarly, increased development costs and delays will be given their own financial estimate. It is up to the decision-makers to balance these costs as it best suits their strategy; the important point being that this is a matter of *policy*, not of *technology*.
Defense in depth and simplicity are not contradictory as long as each layer is simple and independent. Each layer should not be dependent on other layers or it really isn't good defense in depth. (Since it really is just one complex layer at that point.) If each layer can be worked with separately and is implemented in a simple, verifiable and maintainable way, then there is really nothing at odds.
29,260
I've asked a few questions relating to schemes for various security-related functions, and posited schemes to accomplish those goals. In the responses, I see a conflict between two fundamental principles of IT security; "defense in depth" (make an attacker break not one, but many layers of information security) and "Complexity is the enemy of security" (the KISS principle). The question IMO is obvious; when does one of these two take priority over the other? You can't have both in the extreme; adding layers necessarily increases complexity of the system, while simplifying security typically "thins" it. There's a spectrum in between "ideal" complete simplicity and "ideal" infinite depth, and thus a balance to be struck. So, which of these, when they conflict, is generally to be preferred in the design of an information security scheme?
2013/01/17
[ "https://security.stackexchange.com/questions/29260", "https://security.stackexchange.com", "https://security.stackexchange.com/users/8281/" ]
"Defense in depth" is usually pushed out of a feeling of paranoia. You implement layers upon layers of defense in response to panicky rants from upper management. "Low complexity" is usually promoted in order to reduce costs and delivery times. You reduce complexity so as to meet the deadlines imposed from upper management. Often, "upper management" will insist on both aspects at the same time. It is then your job to notify them of how irksome and pesky details like the Laws of Physics may prevent or slow down the simultaneous fulfillment of both goals. Ultimately, this is not a technical decision; this is about *economics*. Risk analysis ought to put a price on intrusions, and thus assess how much money defense in depth may save in the long run, by containing damage resulting from a successful attack. Similarly, increased development costs and delays will be given their own financial estimate. It is up to the decision-makers to balance these costs as it best suits their strategy; the important point being that this is a matter of *policy*, not of *technology*.
This is a good question. I think that you can have both defense in depth and simplicity without contradiction. Defense in depth is redundancy in security controls (defense mechanisms): one control can fail, but it is much less probable that two or more will fail at the same time. As for simplicity, it depends on what area you are targeting. If you're talking about the applications, systems, and processes you are trying to protect, there's no controversy: they should be as simple as possible while still accomplishing the task at hand (I know this is easy to say but difficult to do). If you mean simplicity in security controls, I agree with AJ Henderson that they should be as simple and as independent as possible (the same principle as in the paragraph above). The only problem is the number of these security controls; zero is definitely simpler than three. Here you should consider the probability and impact of the "problem" you're trying to protect against. The higher the risk (probability x impact), the more security controls you should deploy. But there is probably a break point in the number of controls (like three) where the ratio of added value to cost gets very low.
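The risk arithmetic in that last paragraph can be sketched directly; all the numbers below are assumed purely for illustration:

```python
# Assumed inputs
probability = 0.10            # chance of an incident per year
impact = 100_000              # cost of one incident
risk = probability * impact   # expected annual loss: about 10,000

control_cost = 2_000          # cost of deploying one extra control
halving = 0.5                 # assume each control halves residual risk

residual = risk
savings = []
for n in range(1, 6):
    saved = residual * halving   # value added by the n-th control
    residual -= saved
    savings.append(saved)
    print(f"control {n}: saves {saved:,.0f}, worthwhile={saved > control_cost}")
```

With these made-up numbers, only the first two controls save more than they cost; each additional one falls below the line, which is the diminishing-returns break point the answer describes.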
29,260
I've asked a few questions relating to schemes for various security-related functions, and posited schemes to accomplish those goals. In the responses, I see a conflict between two fundamental principles of IT security; "defense in depth" (make an attacker break not one, but many layers of information security) and "Complexity is the enemy of security" (the KISS principle). The question IMO is obvious; when does one of these two take priority over the other? You can't have both in the extreme; adding layers necessarily increases complexity of the system, while simplifying security typically "thins" it. There's a spectrum in between "ideal" complete simplicity and "ideal" infinite depth, and thus a balance to be struck. So, which of these, when they conflict, is generally to be preferred in the design of an information security scheme?
2013/01/17
[ "https://security.stackexchange.com/questions/29260", "https://security.stackexchange.com", "https://security.stackexchange.com/users/8281/" ]
This is written from the perspective of a software developer and project manager, who often needs to deal with sensitive data in apps that I am involved in creating. Defense In Depth is not *necessarily* at odds with the principle of simplicity. [Simplicity is difficult](http://timelessrepo.com/simplicity-is-difficult), and it's not what you'd think. Simplicity doesn't necessarily mean lack of effort. It can (and in this case, I think it usually means) not using complex, home-grown mechanisms when established practices/tools exist. It can also mean keeping your systems simple to reduce the attack surface area, or make them less attractive targets by not storing the type of data that people want to steal. There is a true art, as well as science in finding the right balance of simplicity vs. functionality, but in my experience, when it comes to simplicity, simplicity in some aspects ***is*** one of the keys to defense-in-depth. The essence of what I'm going to try to get at here is that the amount of time you spend defending your application should be proportional to the sensitivity of the data contained within, and the size of the attack surface area. Here are two ways that simplicity ***adds*** to defense in depth by subtracting something. 1. [Carefully deciding whether you should store sensitive data in the first place](https://donedesk.com/security) * One of the most blindingly obvious truths about protecting sensitive data is that if you don't have sensitive data, you don't need to protect it. * When developing systems/software, the most basic thing you can do to increase relative the security of your system is to do so by limiting the sensitive data in the first place. * By deciding not to store unnecessary sensitive data, you are making it simpler, but exercising defense-in-depth by considering defense as a part of your initial requirements gathering phase 2. 
[Reducing the attack surface area](https://www.owasp.org/index.php/Minimize_attack_surface_area) * Similar to the above, but this time looking at features. Every single user input control - text-box, drop-down list, etc, is a potential "window" that, if unprotected, can be an entry point for an attacker. If you're failing to validate input or failing to sanitize output, that control may be vulnerable to any number of well-known attacks. As a developer, I can tell you that when creating large, complex sites/forms, it's easy to miss a validation control here or validate something incorrectly there. * The business might want a fancy interface that has all sorts of bells and whistles, but the benefit of having that interface/feature should be weighed against the cost of securing it, and the downside of the increased attack surface area. Also, keeping security simple doesn't necessarily mean limiting your defenses to fewer defenses. Keeping security simple simply means don't make it harder than necessary. Ways you can keep security simple ***and*** still have defense-in-depth include: * Having secure defaults. Establish your normal defenses. You can have fifty layers of defenses, but if you know what your baseline is, you are still keeping it simple by just following the normal routine. * Using established best practices. Similar to the above, for almost every type of I.T. activity, there is already an established set of best practices. Simply following those instead of coming up with wild schemes of your own keeps things simple. * Using established, trusted tools. Of course, no tool is foolproof, but if you're adding layers of defense, using established tools instead of coming up with your own or using lesser-known, unsupported tools keeps things simpler in the long-run. You'll have better documentation, a wider user community for support, and greater likelihood that when there's a problem, it'll get patched quickly. Defense-in-depth is about layers of security. 
Keeping each layer as simple as possible is the key to applying the principle of keeing it simple to a defense-in-depth strategy.
Defense in depth and simplicity are not contradictory as long as each layer is simple and independent. Each layer should not be dependent on other layers or it really isn't good defense in depth. (Since it really is just one complex layer at that point.) If each layer can be worked with separately and is implemented in a simple, verifiable and maintainable way, then there is really nothing at odds.
29,260
I've asked a few questions relating to schemes for various security-related functions, and posited schemes to accomplish those goals. In the responses, I see a conflict between two fundamental principles of IT security; "defense in depth" (make an attacker break not one, but many layers of information security) and "Complexity is the enemy of security" (the KISS principle). The question IMO is obvious; when does one of these two take priority over the other? You can't have both in the extreme; adding layers necessarily increases complexity of the system, while simplifying security typically "thins" it. There's a spectrum in between "ideal" complete simplicity and "ideal" infinite depth, and thus a balance to be struck. So, which of these, when they conflict, is generally to be preferred in the design of an information security scheme?
2013/01/17
[ "https://security.stackexchange.com/questions/29260", "https://security.stackexchange.com", "https://security.stackexchange.com/users/8281/" ]
This is written from the perspective of a software developer and project manager, who often needs to deal with sensitive data in apps that I am involved in creating. Defense In Depth is not *necessarily* at odds with the principle of simplicity. [Simplicity is difficult](http://timelessrepo.com/simplicity-is-difficult), and it's not what you'd think. Simplicity doesn't necessarily mean lack of effort. It can (and in this case, I think it usually means) not using complex, home-grown mechanisms when established practices/tools exist. It can also mean keeping your systems simple to reduce the attack surface area, or make them less attractive targets by not storing the type of data that people want to steal. There is a true art, as well as science in finding the right balance of simplicity vs. functionality, but in my experience, when it comes to simplicity, simplicity in some aspects ***is*** one of the keys to defense-in-depth. The essence of what I'm going to try to get at here is that the amount of time you spend defending your application should be proportional to the sensitivity of the data contained within, and the size of the attack surface area. Here are two ways that simplicity ***adds*** to defense in depth by subtracting something. 1. [Carefully deciding whether you should store sensitive data in the first place](https://donedesk.com/security) * One of the most blindingly obvious truths about protecting sensitive data is that if you don't have sensitive data, you don't need to protect it. * When developing systems/software, the most basic thing you can do to increase relative the security of your system is to do so by limiting the sensitive data in the first place. * By deciding not to store unnecessary sensitive data, you are making it simpler, but exercising defense-in-depth by considering defense as a part of your initial requirements gathering phase 2. 
[Reducing the attack surface area](https://www.owasp.org/index.php/Minimize_attack_surface_area) * Similar to the above, but this time looking at features. Every single user input control - text-box, drop-down list, etc, is a potential "window" that, if unprotected, can be an entry point for an attacker. If you're failing to validate input or failing to sanitize output, that control may be vulnerable to any number of well-known attacks. As a developer, I can tell you that when creating large, complex sites/forms, it's easy to miss a validation control here or validate something incorrectly there. * The business might want a fancy interface that has all sorts of bells and whistles, but the benefit of having that interface/feature should be weighed against the cost of securing it, and the downside of the increased attack surface area. Also, keeping security simple doesn't necessarily mean limiting your defenses to fewer defenses. Keeping security simple simply means don't make it harder than necessary. Ways you can keep security simple ***and*** still have defense-in-depth include: * Having secure defaults. Establish your normal defenses. You can have fifty layers of defenses, but if you know what your baseline is, you are still keeping it simple by just following the normal routine. * Using established best practices. Similar to the above, for almost every type of I.T. activity, there is already an established set of best practices. Simply following those instead of coming up with wild schemes of your own keeps things simple. * Using established, trusted tools. Of course, no tool is foolproof, but if you're adding layers of defense, using established tools instead of coming up with your own or using lesser-known, unsupported tools keeps things simpler in the long-run. You'll have better documentation, a wider user community for support, and greater likelihood that when there's a problem, it'll get patched quickly. Defense-in-depth is about layers of security. 
Keeping each layer as simple as possible is the key to applying the principle of keeping it simple to a defense-in-depth strategy.
This is a good question. I think that you can have both defense in depth and simplicity without contradiction. Defense in depth is redundancy in security controls (defense mechanisms). One control can fail, but it is much less probable that two or more will fail at the same time. As for simplicity, it depends on what area you are targeting. If you're talking about the applications, systems, and processes you are trying to protect, there's no controversy. They should be as simple as possible and still be able to accomplish the task at hand (I know this is easy to say but difficult to do). If you mean simplicity in security controls, I agree with AJ Henderson that they should be as simple and as independent as possible (the same principle as in the paragraph above). The only problem is related to the number of these security controls; zero is definitely simpler than three. Here you should consider the probability and impact of the "problem" you're trying to protect against. The higher the risk (probability x impact), the more security controls you should deploy. But there is probably a break point in the number of controls (like three) where the ratio of added value vs. cost gets very low.
19,570
There is a bag of sugar that is full of **dead** weevils. These tiny insects are just the same size as sugar particles. If they were alive, I could expose them to the sun and they would fly away. If their size were different than the sugar particles size, I could filter them through a mesh. Any practical suggestions to get rid of these dead insects and save the sugar?
2018/11/21
[ "https://lifehacks.stackexchange.com/questions/19570", "https://lifehacks.stackexchange.com", "https://lifehacks.stackexchange.com/users/25782/" ]
You can dissolve the sugar in water and filter it, then evaporate the water.
If the dead weevils are a different weight than the sugar crystals, you can use moving air to separate the sugar from the weevils.
19,570
There is a bag of sugar that is full of **dead** weevils. These tiny insects are just the same size as sugar particles. If they were alive, I could expose them to the sun and they would fly away. If their size were different than the sugar particles size, I could filter them through a mesh. Any practical suggestions to get rid of these dead insects and save the sugar?
2018/11/21
[ "https://lifehacks.stackexchange.com/questions/19570", "https://lifehacks.stackexchange.com", "https://lifehacks.stackexchange.com/users/25782/" ]
You can dissolve the sugar in water and filter it, then evaporate the water.
How much money are you talking about? Dead weevils (yuk!) - I'd toss the whole bag and buy another rather than swallowing dead weevils. Not my kind of protein.
19,570
There is a bag of sugar that is full of **dead** weevils. These tiny insects are just the same size as sugar particles. If they were alive, I could expose them to the sun and they would fly away. If their size were different than the sugar particles size, I could filter them through a mesh. Any practical suggestions to get rid of these dead insects and save the sugar?
2018/11/21
[ "https://lifehacks.stackexchange.com/questions/19570", "https://lifehacks.stackexchange.com", "https://lifehacks.stackexchange.com/users/25782/" ]
You can dissolve the sugar in water and filter it, then evaporate the water.
You might be able to get rid of the weevils, but what about any waste they've excreted? They've been living, breeding, eating, defecating, and dying inside that bag of sugar. I'm not sure if it's wise to use that sugar, even if you could remove the weevil bodies. You've probably got some unwanted bacteria in there that will taint whatever product you want to use the sugar in. For example, if you use it to sweeten some homemade cider, you'll probably end up with cider vinegar after a few weeks. Sugar is cheap, and easy to replace. Unless you're talking about a very large quantity (on an industrial scale), it's probably wiser to write it off. How about using preventative measures, to stop this from happening again? A few years ago, I had a problem with weevil infestations in my bakery cupboard. They would get into my flour, sugar and cornflour. I would then throw out my supplies, clean my cupboard, and buy new supplies. A few months later, I would find that my new supplies were also infested! It was becoming an expensive problem. I decided to wrap all of my baking supply bags in plastic bags. The idea was to isolate any infested bags, and stop it from spreading to any other bags. I've never had any problems since.
19,570
There is a bag of sugar that is full of **dead** weevils. These tiny insects are just the same size as sugar particles. If they were alive, I could expose them to the sun and they would fly away. If their size were different than the sugar particles size, I could filter them through a mesh. Any practical suggestions to get rid of these dead insects and save the sugar?
2018/11/21
[ "https://lifehacks.stackexchange.com/questions/19570", "https://lifehacks.stackexchange.com", "https://lifehacks.stackexchange.com/users/25782/" ]
You might be able to get rid of the weevils, but what about any waste they've excreted? They've been living, breeding, eating, defecating, and dying inside that bag of sugar. I'm not sure if it's wise to use that sugar, even if you could remove the weevil bodies. You've probably got some unwanted bacteria in there that will taint whatever product you want to use the sugar in. For example, if you use it to sweeten some homemade cider, you'll probably end up with cider vinegar after a few weeks. Sugar is cheap, and easy to replace. Unless you're talking about a very large quantity (on an industrial scale), it's probably wiser to write it off. How about using preventative measures, to stop this from happening again? A few years ago, I had a problem with weevil infestations in my bakery cupboard. They would get into my flour, sugar and cornflour. I would then throw out my supplies, clean my cupboard, and buy new supplies. A few months later, I would find that my new supplies were also infested! It was becoming an expensive problem. I decided to wrap all of my baking supply bags in plastic bags. The idea was to isolate any infested bags, and stop it from spreading to any other bags. I've never had any problems since.
If the dead weevils are a different weight than the sugar crystals, you can use moving air to separate the sugar from the weevils.
19,570
There is a bag of sugar that is full of **dead** weevils. These tiny insects are just the same size as sugar particles. If they were alive, I could expose them to the sun and they would fly away. If their size were different than the sugar particles size, I could filter them through a mesh. Any practical suggestions to get rid of these dead insects and save the sugar?
2018/11/21
[ "https://lifehacks.stackexchange.com/questions/19570", "https://lifehacks.stackexchange.com", "https://lifehacks.stackexchange.com/users/25782/" ]
You might be able to get rid of the weevils, but what about any waste they've excreted? They've been living, breeding, eating, defecating, and dying inside that bag of sugar. I'm not sure if it's wise to use that sugar, even if you could remove the weevil bodies. You've probably got some unwanted bacteria in there that will taint whatever product you want to use the sugar in. For example, if you use it to sweeten some homemade cider, you'll probably end up with cider vinegar after a few weeks. Sugar is cheap, and easy to replace. Unless you're talking about a very large quantity (on an industrial scale), it's probably wiser to write it off. How about using preventative measures, to stop this from happening again? A few years ago, I had a problem with weevil infestations in my bakery cupboard. They would get into my flour, sugar and cornflour. I would then throw out my supplies, clean my cupboard, and buy new supplies. A few months later, I would find that my new supplies were also infested! It was becoming an expensive problem. I decided to wrap all of my baking supply bags in plastic bags. The idea was to isolate any infested bags, and stop it from spreading to any other bags. I've never had any problems since.
How much money are you talking about? Dead weevils (yuk!) - I'd toss the whole bag and buy another rather than swallowing dead weevils. Not my kind of protein.
502,225
For a two terminal current sensing shunt resistor with a layout that deviates from the symmetric Kelvin layout, what results should be expected? For example, with this layout, what result would be expected on the sense leads when the resistor is subject to a 3A current? [![enter image description here](https://i.stack.imgur.com/NLRhF.png)](https://i.stack.imgur.com/NLRhF.png)
2020/05/27
[ "https://electronics.stackexchange.com/questions/502225", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/221336/" ]
You should take the opportunity to route your sense traces so that they are as close to the same length and as symmetrical as possible. Here is a mockup of what I am suggesting. [![enter image description here](https://i.stack.imgur.com/6NDf5.png)](https://i.stack.imgur.com/6NDf5.png) You almost want to consider the routing of these sense lines as a controlled-impedance pair. The goal being that both lines share the same common-mode voltage coupling from other circuits and external influences.
You almost have perfect symmetry. I'd flatten that lower right region just to the right of "4", and achieve total balance. To simulate the effects, build a grid of 2-D resistors that includes that 45-degree piece; use 0.000498 ohms (that is, about 500 microohms) per square. Then you need to include the 3rd dimension of the resistor's solder terminal.
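The per-square figure in the answer above lends itself to a quick back-of-the-envelope check before building the full 2-D grid: a uniform rectangular copper region has resistance equal to its sheet resistance times its length-to-width ratio (its number of "squares"). A minimal sketch, where the trace dimensions are purely illustrative and not taken from the posted layout:

```python
# Sheet resistance quoted in the answer above (ohms per square).
SHEET_RES_OHMS_PER_SQ = 0.000498

def trace_resistance(length: float, width: float) -> float:
    """Resistance of a uniform rectangular trace: Rs * (L / W) squares.

    length and width must be in the same unit; only their ratio matters.
    """
    return SHEET_RES_OHMS_PER_SQ * (length / width)

# A hypothetical 10 mm long, 2 mm wide segment of the current path is
# 5 squares, so roughly 2.49 milliohms:
r = trace_resistance(10.0, 2.0)
```

This only covers straight sections; for the 45-degree region and the third dimension of the solder terminals, the resistor-grid model the answer describes is still the way to go.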
11,837
I am a noob in Java and automation so please bear with me if I get the terminology wrong. I have a class file that can open a browser and navigate to google.com and then closes the browser. There’s another class file that does the same but goes to yahoo.com. I can run this from eclipse and everything works fine. However,  I need to know if I can run both the test cases one after the other without opening Eclipse and only by using batch files (.bat files) on Windows. I would appreciate a clear step by step instruction. Thanks in advance.
2015/01/17
[ "https://sqa.stackexchange.com/questions/11837", "https://sqa.stackexchange.com", "https://sqa.stackexchange.com/users/10792/" ]
I would suggest just exporting a jar file for your project. All the libraries will be packaged together in the jar file (including TestNG) and you can simply double click on the jar file to start your tests. Make a runner file that calls all the tests you have to run, one by one. External resources (if any) will have to be available for the jar file though. The external resources might include your test data (if any) or portable browsers (for example, portable Firefox). **Steps**: * Right click on Project -> Export -> Runnable jar file * Give a name and file path for the jar file * Select option - *Extract required libraries into generated JAR* * And Finish **Troubleshooting**: Check the Java version for the machines that you will be running your jar file on. Programs compiled with Java 7 will mostly not run if the machine has Java 6. Either compile with Java 6 or update the JRE on the target machines. If the jar file does not launch, try using Jarfix.
I have also created a step-by-step guide video showing how to run Selenium WebDriver test cases from the command line using a batch file; please look at this link: <https://www.youtube.com/watch?v=jpzI_-z3eQM> Also, you can visit my blog for step-by-step documentation: <http://software-testing-easy.blogspot.in/>
6,218
* Holding A♣ 6♦ * Flop K♣ 9♣ 6♣ * BB goes all in on Turn. If I hit a club on the Turn or River, the pot is mine for sure. I call his all in, miss the flush, BB has two clubs and takes it. Should I have called his all in?
2015/08/30
[ "https://poker.stackexchange.com/questions/6218", "https://poker.stackexchange.com", "https://poker.stackexchange.com/users/3606/" ]
You can think about the range of cards the villain could have that would make sense for him to push with. I'm assuming his all in is around pot size or less; overbets probably narrow villain's range further: Big drawing hands: 78 off, T8 with a club, 57 with a club, JQ with a club Two pair hands: K9, 96s Top pair: AK, KQ, KJ, AA Sets: KK, 99, 66 Made flush hands: I've included all suited gap and double gap club connectors. Air hands: How often will villain try and represent a flush/big draw? Now you can take a lot of these out of his range if he's short stacked or a generally tight pre-flop player, and with information about how pre-flop went (was he given great odds to play preflop?) Even with all of this range and without any bluffs, you are a favourite according to pokerstove. I suggest you play around with your opponent's hand ranges in pokerstove when analysing hands, and see how your holding plays against this.
If your opponent has the flush then you only have 7 outs, so you cannot call. If your opponent is doing this with a set or 2 pair then you can call. You have 14 outs and are right at 50% to hit by the river. You are at best 50%, so I think you fold here.
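The outs figures in the answer above can be checked directly: the chance of hitting at least one out is one minus the chance that every remaining card misses. A quick sketch - the 14-out and 7-out counts are taken from the answer as given, not re-derived:

```python
from math import comb

def hit_chance(outs: int, unseen: int, cards_to_come: int) -> float:
    """P(at least one out arrives) = 1 - P(every card to come misses)."""
    miss = comb(unseen - outs, cards_to_come) / comb(unseen, cards_to_come)
    return 1 - miss

# On the flop: 14 outs among 47 unseen cards, turn and river to come.
# This is ~0.512, i.e. "right at 50% to hit by the river".
flop_equity = hit_chance(14, 47, 2)

# On the turn, drawing only to the flush (7 clean outs), one card to come:
# 7/46, about 15%.
turn_equity = hit_chance(7, 46, 1)
```

With roughly 15% to hit on one card, a call against a made flush needs better than about 5.5-to-1 pot odds, which an all-in rarely offers.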
1,104
Exceptions in C++ don't need to be caught (no compile time errors) by the calling function. So it's up to developer's judgment whether to catch them using try/catch (unlike in Java). Is there a way one can ensure that the exceptions thrown are always caught using try/catch by the calling function?
2008/08/04
[ "https://Stackoverflow.com/questions/1104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/236/" ]
[Chris](https://stackoverflow.com/questions/1104/#1109) probably has the best pure answer to the question. However, I'm curious about the root of the question. If the user should *always* wrap the call in a try/catch block, should the user-called function really be throwing exceptions in the first place? This is a difficult question to answer without more context regarding the code-base in question. Shooting from the hip, I think the best answer here is to wrap the function up such that the recommended (if not only, depending on the overall exception style of the code) public interface does the try/catch *for* the user. If you're just trying to ensure that there are no unhandled exceptions in your code, unit tests and code review are probably the best solution.
> > Is there a way one can ensure that the > exceptions thrown are always caught > using try/catch by the calling > function? > > > I find it rather funny, that the Java crowd - [including myself](http://dlinsin.blogspot.com/2008/01/wonderful-checked-exceptions.html) - is trying to avoid checked Exceptions. They are trying to work their way around being forced to catch Exceptions by using [RuntimeExceptions](http://java.sun.com/javase/6/docs/api/java/lang/RuntimeException.html).
1,104
Exceptions in C++ don't need to be caught (no compile time errors) by the calling function. So it's up to developer's judgment whether to catch them using try/catch (unlike in Java). Is there a way one can ensure that the exceptions thrown are always caught using try/catch by the calling function?
2008/08/04
[ "https://Stackoverflow.com/questions/1104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/236/" ]
There was once an attempt to add [dynamic exception specifications](http://en.cppreference.com/w/cpp/language/except_spec) to a function's signature, but since the language could not enforce their accuracy, they were later deprecated. In C++11 and forward, we now have the [noexcept specifier](http://en.cppreference.com/w/cpp/language/noexcept_spec). Again, if the signature is marked to throw, there is still no requirement that it be handled by the caller. --- Depending on the context, you can ensure that exceptional behaviour be handled by coding it into the type system. **See:** [std::optional](http://en.cppreference.com/w/cpp/utility/optional) as part of the library fundamentals.
Or you could start throwing critical exceptions. Surely, an access violation exception will *catch* your users' attention.
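One of the answers above suggests encoding exceptional behaviour into the type system (C++'s std::optional) so the caller must confront failure at the call site rather than remember to catch. The same idea can be sketched in any language with optional types - Python is used here purely as an illustration, and the function is made up for the example:

```python
from typing import Optional

def parse_port(text: str) -> Optional[int]:
    """Return a valid TCP port number, or None.

    Failure is part of the return type instead of an exception the
    caller might forget to catch.
    """
    try:
        port = int(text)
    except ValueError:
        return None
    return port if 0 < port < 65536 else None

# The caller is confronted with the failure case at the call site:
port = parse_port("8080")
if port is None:
    raise SystemExit("bad port")
```

A caller can still ignore the None case, but a type checker will flag arithmetic on an `Optional[int]`, which is the compile-time nudge C++'s exception model lacks.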
1,104
Exceptions in C++ don't need to be caught (no compile time errors) by the calling function. So it's up to developer's judgment whether to catch them using try/catch (unlike in Java). Is there a way one can ensure that the exceptions thrown are always caught using try/catch by the calling function?
2008/08/04
[ "https://Stackoverflow.com/questions/1104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/236/" ]
Outside the scope of your question, so I debated not posting this, but in Java there are actually 2 types of exceptions, checked and unchecked. The basic difference is that, much like in `c[++]`, you don't have to catch an unchecked exception. For a good reference, [try this](http://java.sun.com/docs/books/tutorial/essential/exceptions/runtime.html)
> > Is there a way one can ensure that the > exceptions thrown are always caught > using try/catch by the calling > function? > > > I find it rather funny, that the Java crowd - [including myself](http://dlinsin.blogspot.com/2008/01/wonderful-checked-exceptions.html) - is trying to avoid checked Exceptions. They are trying to work their way around being forced to catch Exceptions by using [RuntimeExceptions](http://java.sun.com/javase/6/docs/api/java/lang/RuntimeException.html).
1,104
Exceptions in C++ don't need to be caught (no compile time errors) by the calling function. So it's up to developer's judgment whether to catch them using try/catch (unlike in Java). Is there a way one can ensure that the exceptions thrown are always caught using try/catch by the calling function?
2008/08/04
[ "https://Stackoverflow.com/questions/1104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/236/" ]
There was once an attempt to add [dynamic exception specifications](http://en.cppreference.com/w/cpp/language/except_spec) to a function's signature, but since the language could not enforce their accuracy, they were later deprecated. In C++11 and forward, we now have the [noexcept specifier](http://en.cppreference.com/w/cpp/language/noexcept_spec). Again, if the signature is marked to throw, there is still no requirement that it be handled by the caller. --- Depending on the context, you can ensure that exceptional behaviour be handled by coding it into the type system. **See:** [std::optional](http://en.cppreference.com/w/cpp/utility/optional) as part of the library fundamentals.
> > Is there a way one can ensure that the > exceptions thrown are always caught > using try/catch by the calling > function? > > > I find it rather funny, that the Java crowd - [including myself](http://dlinsin.blogspot.com/2008/01/wonderful-checked-exceptions.html) - is trying to avoid checked Exceptions. They are trying to work their way around being forced to catch Exceptions by using [RuntimeExceptions](http://java.sun.com/javase/6/docs/api/java/lang/RuntimeException.html).
1,104
Exceptions in C++ don't need to be caught (no compile time errors) by the calling function. So it's up to developer's judgment whether to catch them using try/catch (unlike in Java). Is there a way one can ensure that the exceptions thrown are always caught using try/catch by the calling function?
2008/08/04
[ "https://Stackoverflow.com/questions/1104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/236/" ]
No. See [A Pragmatic Look at Exception Specifications](http://www.gotw.ca/publications/mill22.htm) for reasons why not. The only way you can "help" this is to document the exceptions your function can throw, say as a comment in the header file declaring it. This is not enforced by the compiler or anything. Use code reviews for that purpose.
> > Is there a way one can ensure that the > exceptions thrown are always caught > using try/catch by the calling > function? > > > I find it rather funny, that the Java crowd - [including myself](http://dlinsin.blogspot.com/2008/01/wonderful-checked-exceptions.html) - is trying to avoid checked Exceptions. They are trying to work their way around being forced to catch Exceptions by using [RuntimeExceptions](http://java.sun.com/javase/6/docs/api/java/lang/RuntimeException.html).
1,104
Exceptions in C++ don't need to be caught (no compile time errors) by the calling function. So it's up to developer's judgment whether to catch them using try/catch (unlike in Java). Is there a way one can ensure that the exceptions thrown are always caught using try/catch by the calling function?
2008/08/04
[ "https://Stackoverflow.com/questions/1104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/236/" ]
> > Is there a way one can ensure that the > exceptions thrown are always caught > using try/catch by the calling > function? > > > I find it rather funny, that the Java crowd - [including myself](http://dlinsin.blogspot.com/2008/01/wonderful-checked-exceptions.html) - is trying to avoid checked Exceptions. They are trying to work their way around being forced to catch Exceptions by using [RuntimeExceptions](http://java.sun.com/javase/6/docs/api/java/lang/RuntimeException.html).
Or you could start throwing critical exceptions. Surely, an access violation exception will *catch* your users' attention.
532,790
On my custom-built computer, there is a chip-reader that will accept about 8 kinds of chips - you know, the ones you use in cameras. These all count as "drives" on my computer, and in the Drive Manager of the MMC console, (or you can get to it by right-clicking on "Computer" and selecting "Manage") I can set the drive letters for these devices. However, when I plug in another flash drive, I get confused as to which drive it is - with all those other ones complicating the matter. How can I *rename* my empty chip-reader drives? I've tried opening their "properties" in Windows Explorer - which lets me specify a name, but it won't accept anything I tell it (never had this issue in XP). When I click "Apply" in the properties dialogue, it says, > > The volume label is not valid. Please enter a valid volume label. > > > But nothing I type there seems to be "valid". So, I want to call the drives by their chip names, such as "xd drive", "sd drive", "M2 drive", etc. That way I can tell which one is which.
2013/01/11
[ "https://superuser.com/questions/532790", "https://superuser.com", "https://superuser.com/users/121324/" ]
Ah, I figured it out. ***I opened computer in Windows Explorer and simply clicked, "rename".*** It worked like a charm. It works with OR without a space, however you like. Pushing good ole' `F2` works too. I feel kind of stupid for asking the question, now, but if it worked via properties like you'd think, there would have been no question. You can't rename it through properties, for some reason! ![enter image description here](https://i.stack.imgur.com/aofWh.png) You may have to [enable the setting in Windows Explorer](http://www.sevenforums.com/tutorials/6969-drives-hide-show-empty-drives-computer-folder.html).
You cannot rename an "empty" drive because the name is actually stored in the partition/filesystem. Windows is being confusing with its error messages here. If you want, you can change the Windows preference to not show the drives when they are empty, or rename the individual devices when they are inserted into the reader.
532,790
On my custom-built computer, there is a chip-reader that will accept about 8 kinds of chips - you know, the ones you use in cameras. These all count as "drives" on my computer, and in the Drive Manager of the MMC console, (or you can get to it by right-clicking on "Computer" and selecting "Manage") I can set the drive letters for these devices. However, when I plug in another flash drive, I get confused as to which drive it is - with all those other ones complicating the matter. How can I *rename* my empty chip-reader drives? I've tried opening their "properties" in Windows Explorer - which lets me specify a name, but it won't accept anything I tell it (never had this issue in XP). When I click "Apply" in the properties dialogue, it says, > > The volume label is not valid. Please enter a valid volume label. > > > But nothing I type there seems to be "valid". So, I want to call the drives by their chip names, such as "xd drive", "sd drive", "M2 drive", etc. That way I can tell which one is which.
2013/01/11
[ "https://superuser.com/questions/532790", "https://superuser.com", "https://superuser.com/users/121324/" ]
Ah, I figured it out. ***I opened computer in Windows Explorer and simply clicked, "rename".*** It worked like a charm. It works with OR without a space, however you like. Pushing good ole' `F2` works too. I feel kind of stupid for asking the question, now, but if it worked via properties like you'd think, there would have been no question. You can't rename it through properties, for some reason! ![enter image description here](https://i.stack.imgur.com/aofWh.png) You may have to [enable the setting in Windows Explorer](http://www.sevenforums.com/tutorials/6969-drives-hide-show-empty-drives-computer-folder.html).
Probably, the space character is not allowed, or something. Try something that will obviously be accepted, like "ABC". Or perhaps the error message is wrong and you don't have the right to change the name - in which case, try the administrator account, safe mode or even the built-in elevated administrator account. Perhaps you can't change the drive's name, only cards that you put in it. I think this is likely, because I think I remember something similar happening to me. Alternatively, if I were you, instead of doing this, I'd go to folder options in the control panel and check "hide empty drives", or however it's called. You can't get confused if there's only one drive displayed. You can also, through the popup that shows up when you put it in, ask Windows to always open an Explorer window for it when you put it in.
70,800
I don't know how to write this without it sounding like a plug (lol), but it's not. So please don't close-vote it, even if you really want your editor badge. Here goes: There's this site called av-comparatives that I use and trust to provide me independent antivirus reviews. You probably do, too, actually. While I keep myself somewhat knowledgeable about infosec, my specialties lay elsewhere and it's so useful for me to have something like that as a resource. I don't know of anything like that for firewalls, and when I go out to find one, I'm accosted with millions, all claiming that one product or another is the best. Cross-referencing them often leads to one or two that I've never heard of. The (I think) fairly obvious conclusion there is that these are all just fake third party review sites set up for marketing purposes. Unfortunately, I don't have the resilience to wade through such murky waters for so many more hours :P > > IMPORTANT: This is **NOT** an open invitation to fake firewall review sites to post > their crap. I will respond appropriately to spam. > > > I'm looking for legit review sites that you, as security professionals, trust (ie: on par with av-comparatives, which you probably trust too)
2014/10/15
[ "https://security.stackexchange.com/questions/70800", "https://security.stackexchange.com", "https://security.stackexchange.com/users/35718/" ]
Possibly. It's not immediately clear if you are relying on the email address as the sole identifier (i.e. you're using it as the username) or if a separate username is in play. In the latter case, the obvious flaw is that an attacker can supply the username of someone else, but provide their own email address as the destination for the recovery password. Assuming the email address is the only identifier for a user, obviously it will need to be supplied by them in order to reset their password. I would suggest that regardless of whether an account associated with that email address actually exists, the system should respond similarly. (E.g. saying "Your password recovery information has been sent.") However, you would obviously only want to send the recovery email if there is a corresponding account; if no such account exists there is nothing to recover. This prevents an attacker from trying multiple addresses to see which succeed and which fail, and thus building a list of valid accounts on the system. (If your application is faster to respond when it doesn't need to send the email, you may wish to intentionally add a random bit of time delay to hide that.)
I think it's important to check that entered email addresses actually belong to one of your users before you send the recovery email. Assuming that your sign-up form already collects email addresses, it would be silly not to make use of them to perform this simple check. Without verification, one major concern would be spammers/hackers entering tons of made-up, invalid email addresses. This would cause your server to send many messages that bounce, and draw attention from spam filters and spammer databases. Eventually, your server could get blacklisted, and no emails from your site would go through anymore. Plus, mistyped email addresses are quite common, so you could have instances where a user accidentally sends a recovery email to someone else's inbox. Without additional information about your implementation I can't really point out any other issues, but in phase 3, there are some additional security measures you can consider. For example, you need to make sure that the recovery URLs are long and random, so that they cannot easily be guessed or brute-forced. You also probably don't want the URLs to be active forever; it's a good idea to expire them after an hour or two if a user generates a recovery URL but never actually visits it, or if the user visits it but never provides a new password. To further reduce the chance of a hack, you can completely disable the password recovery feature if the account was used within the past few days, or if the IP-based location is atypical for that user (though these measures may inconvenience users).
70,800
I don't know how to write this without it sounding like a plug (lol), but it's not. So please don't close-vote it, even if you really want your editor badge. Here goes: There's this site called av-comparatives that I use and trust to provide me independent antivirus reviews. You probably do, too, actually. While I keep myself somewhat knowledgeable about infosec, my specialties lay elsewhere and it's so useful for me to have something like that as a resource. I don't know of anything like that for firewalls, and when I go out to find one, I'm accosted with millions, all claiming that one product or another is the best. Cross-referencing them often leads to one or two that I've never heard of. The (I think) fairly obvious conclusion there is that these are all just fake third party review sites set up for marketing purposes. Unfortunately, I don't have the resilience to wade through such murky waters for so many more hours :P > > IMPORTANT: This is **NOT** an open invitation to fake firewall review sites to post > their crap. I will respond appropriately to spam. > > > I'm looking for legit review sites that you, as security professionals, trust (ie: on par with av-comparatives, which you probably trust too)
2014/10/15
[ "https://security.stackexchange.com/questions/70800", "https://security.stackexchange.com", "https://security.stackexchange.com/users/35718/" ]
There are some potential significant issues, mostly related to aspects of your procedure that have not been specified (hence "potential"). These are additional considerations, rather than problems with your method: * **Automated bots might spam users by submitting lots of different email addresses.** Emailing (or at least giving the same form response), regardless of whether a user is valid or not, is a good way of preventing username enumeration, but it does open you up to this risk. Consider using a CAPTCHA when users are requesting the reset email, to prevent automatic submission. * **The secret value embedded in the link might be too easy to guess.** If the token included in the link is not randomly generated with a high level of entropy, an attacker might be able to guess it. * **The secret value could be re-used by an attacker if the email is compromised at a later date.** The token should be single-use (i.e. a nonce), and ideally expire after a set amount of time (appropriate for your user base and security requirement - [OWASP recommend as little as 20 minutes here](https://www.owasp.org/index.php/Forgot_Password_Cheat_Sheet#Step_3.29_Send_a_Token_Over_a_Side-Channel), but you may consider your application to not be that sensitive). * **The secret value could be intercepted.** Email lacks end-to-end encryption, and you do not know how secure a user's endpoint or mail servers are. Some applications therefore require additional verification steps before or after sending and following the link. [OWASP recommend secret questions](https://www.owasp.org/index.php/Forgot_Password_Cheat_Sheet#Step_3.29_Send_a_Token_Over_a_Side-Channel) in *addition* to the email, although you may feel that this is excessive for the sensitivity of your application. 
* **The new password is not submitted with transport security.** Naturally the URL in your email should point to an HTTPS page (partially so the user can verify that they are submitting their new password directly to the server), and then the form should submit the password over HTTPS as well. * **An attacker requests a reset email on a user's behalf, and the user is unable to prevent the token from remaining active.** Consider providing a way for users to cancel the password reset from the email, enabling them to deactivate the link if they did not request it. * **The user is not notified if an attacker succeeds in changing their password.** Consider at least notifying the user via email of every successful password reset, in case it was not initiated by them.
I think it's important to check that entered email addresses actually belong to one of your users before you send the recovery email. Assuming that your sign-up form already collects email addresses, it would be silly not to make use of them to perform this simple check. Without verification, one major concern would be spammers/hackers entering tons of made-up, invalid email addresses. This would cause your server to send many messages that bounce, and draw attention from spam filters and spammer databases. Eventually, your server could get blacklisted, and no emails from your site would go through anymore. Plus, mistyped email addresses are quite common, so you could have instances where a user accidentally sends a recovery email to someone else's inbox. Without additional information about your implementation I can't really point out any other issues, but in phase 3, there are some additional security measures you can consider. For example, you need to make sure that the recovery URLs are long and random, so that they cannot easily be guessed or brute-forced. You also probably don't want the URLs to be active forever; it's a good idea to expire them after an hour or two if a user generates a recovery URL but never actually visits it, or if the user visits it but never provides a new password. To further reduce the chance of a hack, you can completely disable the password recovery feature if the account was used within the past few days, or if the IP-based location is atypical for that user (though these measures may inconvenience users).
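The long, random, expiring, single-use recovery tokens both answers describe can be sketched with Python's `secrets` module. The helper names and the in-memory store below are illustrative only, not a production design:

```python
import secrets
import time

RESET_TTL = 2 * 60 * 60  # two hours, per the expiry suggestion above

_tokens = {}  # token -> (email, issued_at); use a real datastore in practice

def issue_token(email):
    """Create an unguessable recovery token for `email`."""
    token = secrets.token_urlsafe(32)  # ~256 bits of entropy
    _tokens[token] = (email, time.time())
    return token

def redeem_token(token, now=None):
    """Return the email for a valid token, consuming it so it can't be replayed."""
    now = time.time() if now is None else now
    entry = _tokens.pop(token, None)  # pop => single use
    if entry is None:
        return None          # unknown, already used, or forged token
    email, issued = entry
    if now - issued > RESET_TTL:
        return None          # expired
    return email
```

`secrets.token_urlsafe` is preferable to `random` here because it draws from the OS's cryptographic source, so the URLs cannot be predicted or brute-forced in any practical time.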
70,800
I don't know how to write this without it sounding like a plug (lol), but it's not. So please don't close-vote it, even if you really want your editor badge. Here goes: There's this site called av-comparatives that I use and trust to provide me independent antivirus reviews. You probably do, too, actually. While I keep myself somewhat knowledgeable about infosec, my specialties lie elsewhere and it's so useful for me to have something like that as a resource. I don't know of anything like that for firewalls, and when I go out to find one, I'm accosted with millions, all claiming that one product or another is the best. Cross-referencing them often leads to one or two that I've never heard of. The (I think) fairly obvious conclusion there is that these are all just fake third party review sites set up for marketing purposes. Unfortunately, I don't have the resilience to wade through such murky waters for so many more hours :P > > IMPORTANT: This is **NOT** an open invitation to fake firewall review sites to post > their crap. I will respond appropriately to spam. > > > I'm looking for legit review sites that you, as security professionals, trust (ie: on par with av-comparatives, which you probably trust too)
2014/10/15
[ "https://security.stackexchange.com/questions/70800", "https://security.stackexchange.com", "https://security.stackexchange.com/users/35718/" ]
Possibly. It's not immediately clear if you are relying on the email address as the sole identifier (i.e. you're using it as the username) or if a separate username is in play. In the latter case, the obvious flaw is that an attacker can supply the username of someone else, but provide their own email address as the destination for the recovery password. Assuming the email address is the only identifier for a user, obviously it will need to be supplied by them in order to reset their password. I would suggest that regardless of whether an account associated with that email address actually exists, the system should respond similarly. (E.g. saying "Your password recovery information has been sent.") However, you would obviously only want to send the recovery email if there is a corresponding account; if no such account exists there is nothing to recover. This prevents an attacker from trying multiple addresses to see which succeed and which fail, and thus building a list of valid accounts on the system. (If your application is faster to respond if it doesn't need to send the email, you may wish to intentionally add a random bit of time delay to hide that.)
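The uniform-response behaviour described above (same message whether or not the address exists, with some padding so the no-email path isn't measurably faster) might look like the sketch below; `users` and `send_email` are hypothetical stand-ins for your datastore and mailer:

```python
import random
import time

GENERIC_REPLY = "Your password recovery information has been sent."

def request_reset(email, users, send_email):
    """Respond identically whether or not the address has an account."""
    user = users.get(email)
    if user is not None:
        send_email(email)  # only send when there is something to recover
    else:
        # pad the fast path so response time doesn't reveal account existence
        time.sleep(random.uniform(0.05, 0.2))
    return GENERIC_REPLY
```

The caller sees the same reply either way; only the mail server knows whether anything was actually sent.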
There are some potential significant issues, mostly related to aspects of your procedure that have not been specified (hence "potential"). These are additional considerations, rather than problems with your method: * **Automated bots might spam users by submitting lots of different email addresses.** Emailing (or at least giving the same form response), regardless of whether a user is valid or not, is a good way of preventing username enumeration, but it does open you up to this risk. Consider using a CAPTCHA when users are requesting the reset email, to prevent automatic submission. * **The secret value embedded in the link might be too easy to guess.** If the token included in the link is not randomly generated with a high level of entropy, an attacker might be able to guess it. * **The secret value could be re-used by an attacker if the email is compromised at a later date.** The token should be single-use (i.e. a nonce), and ideally expire after a set amount of time (appropriate for your user base and security requirement - [OWASP recommend as little as 20 minutes here](https://www.owasp.org/index.php/Forgot_Password_Cheat_Sheet#Step_3.29_Send_a_Token_Over_a_Side-Channel), but you may consider your application to not be that sensitive). * **The secret value could be intercepted.** Email lacks end-to-end encryption, and you do not know how secure a user's endpoint or mail servers are. Some applications therefore require additional verification steps before or after sending and following the link. [OWASP recommend secret questions](https://www.owasp.org/index.php/Forgot_Password_Cheat_Sheet#Step_3.29_Send_a_Token_Over_a_Side-Channel) in *addition* to the email, although you may feel that this is excessive for the sensitivity of your application. 
* **The new password is not submitted with transport security.** Naturally the URL in your email should point to an HTTPS page (partially so the user can verify that they are submitting their new password directly to the server), and then the form should submit the password over HTTPS as well. * **An attacker requests a reset email on a user's behalf, and the user is unable to prevent the token from remaining active.** Consider providing a way for users to cancel the password reset from the email, enabling them to deactivate the link if they did not request it. * **The user is not notified if an attacker succeeds in changing their password.** Consider at least notifying the user via email of every successful password reset, in case it was not initiated by them.
70,800
I don't know how to write this without it sounding like a plug (lol), but it's not. So please don't close-vote it, even if you really want your editor badge. Here goes: There's this site called av-comparatives that I use and trust to provide me independent antivirus reviews. You probably do, too, actually. While I keep myself somewhat knowledgeable about infosec, my specialties lie elsewhere and it's so useful for me to have something like that as a resource. I don't know of anything like that for firewalls, and when I go out to find one, I'm accosted with millions, all claiming that one product or another is the best. Cross-referencing them often leads to one or two that I've never heard of. The (I think) fairly obvious conclusion there is that these are all just fake third party review sites set up for marketing purposes. Unfortunately, I don't have the resilience to wade through such murky waters for so many more hours :P > > IMPORTANT: This is **NOT** an open invitation to fake firewall review sites to post > their crap. I will respond appropriately to spam. > > > I'm looking for legit review sites that you, as security professionals, trust (ie: on par with av-comparatives, which you probably trust too)
2014/10/15
[ "https://security.stackexchange.com/questions/70800", "https://security.stackexchange.com", "https://security.stackexchange.com/users/35718/" ]
Possibly. It's not immediately clear if you are relying on the email address as the sole identifier (i.e. you're using it as the username) or if a separate username is in play. In the latter case, the obvious flaw is that an attacker can supply the username of someone else, but provide their own email address as the destination for the recovery password. Assuming the email address is the only identifier for a user, obviously it will need to be supplied by them in order to reset their password. I would suggest that regardless of whether an account associated with that email address actually exists, the system should respond similarly. (E.g. saying "Your password recovery information has been sent.") However, you would obviously only want to send the recovery email if there is a corresponding account; if no such account exists there is nothing to recover. This prevents an attacker from trying multiple addresses to see which succeed and which fail, and thus building a list of valid accounts on the system. (If your application is faster to respond if it doesn't need to send the email, you may wish to intentionally add a random bit of time delay to hide that.)
Phishing -------- Spammers could spoof your password reset email and trick users into clicking a link that phishes for their information. Attackers could DDoS your server -------------------------------- If attackers submit the form rapidly, your server might not be able to keep up with the processing power required to send out the emails fast enough.
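One common hedge against that kind of form abuse is per-client throttling. Here is a minimal sliding-window rate-limiter sketch (in-memory only; a real deployment would lean on the web framework's throttling or a shared store such as Redis):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per `window` seconds for each key."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        while q and now - q[0] > self.window:  # drop hits outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False                       # over budget: reject
        q.append(now)
        return True
```

Keying on client IP (or on the submitted email address) caps how much email-sending work any one source can trigger.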
70,800
I don't know how to write this without it sounding like a plug (lol), but it's not. So please don't close-vote it, even if you really want your editor badge. Here goes: There's this site called av-comparatives that I use and trust to provide me independent antivirus reviews. You probably do, too, actually. While I keep myself somewhat knowledgeable about infosec, my specialties lie elsewhere and it's so useful for me to have something like that as a resource. I don't know of anything like that for firewalls, and when I go out to find one, I'm accosted with millions, all claiming that one product or another is the best. Cross-referencing them often leads to one or two that I've never heard of. The (I think) fairly obvious conclusion there is that these are all just fake third party review sites set up for marketing purposes. Unfortunately, I don't have the resilience to wade through such murky waters for so many more hours :P > > IMPORTANT: This is **NOT** an open invitation to fake firewall review sites to post > their crap. I will respond appropriately to spam. > > > I'm looking for legit review sites that you, as security professionals, trust (ie: on par with av-comparatives, which you probably trust too)
2014/10/15
[ "https://security.stackexchange.com/questions/70800", "https://security.stackexchange.com", "https://security.stackexchange.com/users/35718/" ]
There are some potential significant issues, mostly related to aspects of your procedure that have not been specified (hence "potential"). These are additional considerations, rather than problems with your method: * **Automated bots might spam users by submitting lots of different email addresses.** Emailing (or at least giving the same form response), regardless of whether a user is valid or not, is a good way of preventing username enumeration, but it does open you up to this risk. Consider using a CAPTCHA when users are requesting the reset email, to prevent automatic submission. * **The secret value embedded in the link might be too easy to guess.** If the token included in the link is not randomly generated with a high level of entropy, an attacker might be able to guess it. * **The secret value could be re-used by an attacker if the email is compromised at a later date.** The token should be single-use (i.e. a nonce), and ideally expire after a set amount of time (appropriate for your user base and security requirement - [OWASP recommend as little as 20 minutes here](https://www.owasp.org/index.php/Forgot_Password_Cheat_Sheet#Step_3.29_Send_a_Token_Over_a_Side-Channel), but you may consider your application to not be that sensitive). * **The secret value could be intercepted.** Email lacks end-to-end encryption, and you do not know how secure a user's endpoint or mail servers are. Some applications therefore require additional verification steps before or after sending and following the link. [OWASP recommend secret questions](https://www.owasp.org/index.php/Forgot_Password_Cheat_Sheet#Step_3.29_Send_a_Token_Over_a_Side-Channel) in *addition* to the email, although you may feel that this is excessive for the sensitivity of your application. 
* **The new password is not submitted with transport security.** Naturally the URL in your email should point to an HTTPS page (partially so the user can verify that they are submitting their new password directly to the server), and then the form should submit the password over HTTPS as well. * **An attacker requests a reset email on a user's behalf, and the user is unable to prevent the token from remaining active.** Consider providing a way for users to cancel the password reset from the email, enabling them to deactivate the link if they did not request it. * **The user is not notified if an attacker succeeds in changing their password.** Consider at least notifying the user via email of every successful password reset, in case it was not initiated by them.
Phishing -------- Spammers could spoof your password reset email and trick users into clicking a link that phishes for their information. Attackers could DDoS your server -------------------------------- If attackers submit the form rapidly, your server might not be able to keep up with the processing power required to send out the emails fast enough.
33,807
Is there a good bodyweight antagonist exercise to the squat? In the same way the push up has the pull up or the inverted row?
2017/04/07
[ "https://fitness.stackexchange.com/questions/33807", "https://fitness.stackexchange.com", "https://fitness.stackexchange.com/users/25298/" ]
I'd argue the opposite exercise to a body squat is a [Hanging Knee Raise](https://barbend.com/benefits-hanging-knee-raises/). While the squat relies on eccentric contraction of the quadriceps going down into the squat, and then concentric contraction on the way up, the knee raise reverses this with concentric contraction of the quadriceps on the way up and eccentric on the way down. They're not exactly equivalent, of course. The hanging knee raise works shoulder mobility in a different way, and the bodyweight squat does different things for opening up your hips and ankles, but I think it's still a pretty good opposition.
What is the opposite of sit-to-stand? Break the motion down into concentric vs. eccentric contractions, joint by joint -- quite simply, there isn't one. A great exercise to add to your arsenal is the hip-dominant Split Stance Romanian Deadlift: <https://www.youtube.com/watch?v=QDEMmKocxbM> (Direct Example) <https://www.youtube.com/watch?v=XowKMitOVNc> (This Guy Knows His Stuff)
33,807
Is there a good bodyweight antagonist exercise to the squat? In the same way the push up has the pull up or the inverted row?
2017/04/07
[ "https://fitness.stackexchange.com/questions/33807", "https://fitness.stackexchange.com", "https://fitness.stackexchange.com/users/25298/" ]
Well, we have to divide the body up into chunks that make sense. **Opposites** For exercises where we push forward, e.g. the pushup, we train mainly chest, triceps, and front deltoid. For exercises where we pull backward, e.g. inverted rows, we train the upper back and biceps. These muscles are opposite the ones in the pushing exercises. For squats, I'm not sure we can follow the same pattern. During the squat you train a lot of quadricep. On the other side of that is the hamstring, but the squat trains that too. During the squat we also train the lower back. On the other side of that are the abdominal muscles, but again, the squat forces you to engage this muscle group too, in order to keep correct posture. This is actually why we love and preach the glory of the squat. It does so many things all at once. **Caveat** Now, while it does *train* all these muscle groups, it trains some more than others. For instance, the quadriceps are far more active than the hamstrings. Luckily we have other movements that train much of the same areas, but with different foci. For instance, the deadlift also trains quads, hamstring, lower back, abdominals etc, but it requires more hamstring work than the squat. **Bottom line** So as far as your question goes, I'm not sure the squat has an "opposite" exercise in that regard. Just *complementary* ones. Oh, and whenever you're in doubt as to whether you should be doing this move or that move, the answer is usually both. Variety is key.
I'd argue the opposite exercise to a body squat is a [Hanging Knee Raise](https://barbend.com/benefits-hanging-knee-raises/). While the squat relies on eccentric contraction of the quadriceps going down into the squat, and then concentric contraction on the way up, the knee raise reverses this with concentric contraction of the quadriceps on the way up and eccentric on the way down. They're not exactly equivalent, of course. The hanging knee raise works shoulder mobility in a different way, and the bodyweight squat does different things for opening up your hips and ankles, but I think it's still a pretty good opposition.
33,807
Is there a good bodyweight antagonist exercise to the squat? In the same way the push up has the pull up or the inverted row?
2017/04/07
[ "https://fitness.stackexchange.com/questions/33807", "https://fitness.stackexchange.com", "https://fitness.stackexchange.com/users/25298/" ]
Well, we have to divide the body up into chunks that make sense. **Opposites** For exercises where we push forward, e.g. the pushup, we train mainly chest, triceps, and front deltoid. For exercises where we pull backward, e.g. inverted rows, we train the upper back and biceps. These muscles are opposite the ones in the pushing exercises. For squats, I'm not sure we can follow the same pattern. During the squat you train a lot of quadricep. On the other side of that is the hamstring, but the squat trains that too. During the squat we also train the lower back. On the other side of that are the abdominal muscles, but again, the squat forces you to engage this muscle group too, in order to keep correct posture. This is actually why we love and preach the glory of the squat. It does so many things all at once. **Caveat** Now, while it does *train* all these muscle groups, it trains some more than others. For instance, the quadriceps are far more active than the hamstrings. Luckily we have other movements that train much of the same areas, but with different foci. For instance, the deadlift also trains quads, hamstring, lower back, abdominals etc, but it requires more hamstring work than the squat. **Bottom line** So as far as your question goes, I'm not sure the squat has an "opposite" exercise in that regard. Just *complementary* ones. Oh, and whenever you're in doubt as to whether you should be doing this move or that move, the answer is usually both. Variety is key.
If you have a hanging bar and some inversion boots, you can do the bodyweight squat upside down, i.e. hanging from your feet, "lifting" your glutes to your heels and back down.
33,807
Is there a good bodyweight antagonist exercise to the squat? In the same way the push up has the pull up or the inverted row?
2017/04/07
[ "https://fitness.stackexchange.com/questions/33807", "https://fitness.stackexchange.com", "https://fitness.stackexchange.com/users/25298/" ]
What is the opposite of sit-to-stand? Break the motion down into concentric vs. eccentric contractions, joint by joint -- quite simply, there isn't one. A great exercise to add to your arsenal is the hip-dominant Split Stance Romanian Deadlift: <https://www.youtube.com/watch?v=QDEMmKocxbM> (Direct Example) <https://www.youtube.com/watch?v=XowKMitOVNc> (This Guy Knows His Stuff)
You could hang from a lat pulldown machine and fix your feet into place. This is the opposite of the squat; the hip flexors are the main muscles working, I think. You could also do leg raises of some kind, though I am not sure why you would want to.
33,807
Is there a good bodyweight antagonist exercise to the squat? In the same way the push up has the pull up or the inverted row?
2017/04/07
[ "https://fitness.stackexchange.com/questions/33807", "https://fitness.stackexchange.com", "https://fitness.stackexchange.com/users/25298/" ]
Well, we have to divide the body up into chunks that make sense. **Opposites** For exercises where we push forward, e.g. the pushup, we train mainly chest, triceps, and front deltoid. For exercises where we pull backward, e.g. inverted rows, we train the upper back and biceps. These muscles are opposite the ones in the pushing exercises. For squats, I'm not sure we can follow the same pattern. During the squat you train a lot of quadricep. On the other side of that is the hamstring, but the squat trains that too. During the squat we also train the lower back. On the other side of that are the abdominal muscles, but again, the squat forces you to engage this muscle group too, in order to keep correct posture. This is actually why we love and preach the glory of the squat. It does so many things all at once. **Caveat** Now, while it does *train* all these muscle groups, it trains some more than others. For instance, the quadriceps are far more active than the hamstrings. Luckily we have other movements that train much of the same areas, but with different foci. For instance, the deadlift also trains quads, hamstring, lower back, abdominals etc, but it requires more hamstring work than the squat. **Bottom line** So as far as your question goes, I'm not sure the squat has an "opposite" exercise in that regard. Just *complementary* ones. Oh, and whenever you're in doubt as to whether you should be doing this move or that move, the answer is usually both. Variety is key.
You could hang from a lat pulldown machine and fix your feet into place. This is the opposite of the squat; the hip flexors are the main muscles working, I think. You could also do leg raises of some kind, though I am not sure why you would want to.
33,807
Is there a good bodyweight antagonist exercise to the squat? In the same way the push up has the pull up or the inverted row?
2017/04/07
[ "https://fitness.stackexchange.com/questions/33807", "https://fitness.stackexchange.com", "https://fitness.stackexchange.com/users/25298/" ]
If you have a hanging bar and some inversion boots, you can do the bodyweight squat upside down, i.e. hanging from your feet, "lifting" your glutes to your heels and back down.
You could hang from a lat pulldown machine and fix your feet into place. This is the opposite of the squat; the hip flexors are the main muscles working, I think. You could also do leg raises of some kind, though I am not sure why you would want to.
33,807
Is there a good bodyweight antagonist exercise to the squat? In the same way the push up has the pull up or the inverted row?
2017/04/07
[ "https://fitness.stackexchange.com/questions/33807", "https://fitness.stackexchange.com", "https://fitness.stackexchange.com/users/25298/" ]
For bodyweight exercises, perhaps you're looking for something along the lines of a **slick floor bridge curl**? [This video](https://youtu.be/ZA8GzhFh_CQ?t=2m26s) demonstrates the exercise. With a little more equipment you can do the **Nordic Curl** [as demonstrated here](https://youtu.be/DQQleh4xUjU).
What is the opposite of sit-to-stand? Break the motion down into concentric vs. eccentric contractions, joint by joint -- quite simply, there isn't one. A great exercise to add to your arsenal is the hip-dominant Split Stance Romanian Deadlift: <https://www.youtube.com/watch?v=QDEMmKocxbM> (Direct Example) <https://www.youtube.com/watch?v=XowKMitOVNc> (This Guy Knows His Stuff)
33,807
Is there a good bodyweight antagonist exercise to the squat? In the same way the push up has the pull up or the inverted row?
2017/04/07
[ "https://fitness.stackexchange.com/questions/33807", "https://fitness.stackexchange.com", "https://fitness.stackexchange.com/users/25298/" ]
For bodyweight exercises, perhaps you're looking for something along the lines of a **slick floor bridge curl**? [This video](https://youtu.be/ZA8GzhFh_CQ?t=2m26s) demonstrates the exercise. With a little more equipment you can do the **Nordic Curl** [as demonstrated here](https://youtu.be/DQQleh4xUjU).
You could hang from a lat pulldown machine and fix your feet into place. This is the opposite of the squat; the hip flexors are the main muscles working, I think. You could also do leg raises of some kind, though I am not sure why you would want to.
33,807
Is there a good bodyweight antagonist exercise to the squat? In the same way the push up has the pull up or the inverted row?
2017/04/07
[ "https://fitness.stackexchange.com/questions/33807", "https://fitness.stackexchange.com", "https://fitness.stackexchange.com/users/25298/" ]
If you have a hanging bar and some inversion boots, you can do the bodyweight squat upside down, i.e. hanging from your feet, "lifting" your glutes to your heels and back down.
What is the opposite of sit-to-stand? Break the motion down into concentric vs. eccentric contractions, joint by joint -- quite simply, there isn't one. A great exercise to add to your arsenal is the hip-dominant Split Stance Romanian Deadlift: <https://www.youtube.com/watch?v=QDEMmKocxbM> (Direct Example) <https://www.youtube.com/watch?v=XowKMitOVNc> (This Guy Knows His Stuff)
33,807
Is there a good bodyweight antagonist exercise to the squat? In the same way the push up has the pull up or the inverted row?
2017/04/07
[ "https://fitness.stackexchange.com/questions/33807", "https://fitness.stackexchange.com", "https://fitness.stackexchange.com/users/25298/" ]
Well, we have to divide the body up into chunks that make sense. **Opposites** For exercises where we push forward, e.g. the pushup, we train mainly chest, triceps, and front deltoid. For exercises where we pull backward, e.g. inverted rows, we train the upper back and biceps. These muscles are opposite the ones in the pushing exercises. For squats, I'm not sure we can follow the same pattern. During the squat you train a lot of quadricep. On the other side of that is the hamstring, but the squat trains that too. During the squat we also train the lower back. On the other side of that are the abdominal muscles, but again, the squat forces you to engage this muscle group too, in order to keep correct posture. This is actually why we love and preach the glory of the squat. It does so many things all at once. **Caveat** Now, while it does *train* all these muscle groups, it trains some more than others. For instance, the quadriceps are far more active than the hamstrings. Luckily we have other movements that train much of the same areas, but with different foci. For instance, the deadlift also trains quads, hamstring, lower back, abdominals etc, but it requires more hamstring work than the squat. **Bottom line** So as far as your question goes, I'm not sure the squat has an "opposite" exercise in that regard. Just *complementary* ones. Oh, and whenever you're in doubt as to whether you should be doing this move or that move, the answer is usually both. Variety is key.
What is the opposite of sit-to-stand? Break the motion down into concentric vs. eccentric contractions, joint by joint -- quite simply, there isn't one. A great exercise to add to your arsenal is the hip-dominant Split Stance Romanian Deadlift: <https://www.youtube.com/watch?v=QDEMmKocxbM> (Direct Example) <https://www.youtube.com/watch?v=XowKMitOVNc> (This Guy Knows His Stuff)
21,515,539
Django REST framework is a great tool for exposing data over a RESTful protocol, but does it have a built-in client that does the heavy lifting behind the scenes to enable easy implementation of an SOA architecture between different Django projects? So far I haven't found much in the Django REST framework [documentation](http://www.django-rest-framework.org/); hopefully someone can shed some light on this one.
2014/02/02
[ "https://Stackoverflow.com/questions/21515539", "https://Stackoverflow.com", "https://Stackoverflow.com/users/342553/" ]
There is no "official" client for DRF, since REST APIs mostly don't have much "heavy lifting" as you perhaps know it from SOAP or similar techniques. For most REST APIs, [slumber](http://slumber.readthedocs.org/en/v0.6.0/) is the easiest way to connect to them. It handles url-building, authentication and json-dump/load.
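The url-building slumber performs is essentially attribute and call chaining on a base URL. The toy class below illustrates the idea only; it is not slumber's real implementation and performs no HTTP:

```python
class Resource:
    """Toy illustration of attribute-based URL building, slumber-style."""

    def __init__(self, base):
        self.base = base.rstrip("/")

    def __getattr__(self, name):
        # api.users -> a child resource at .../users
        return Resource(f"{self.base}/{name}")

    def __call__(self, ident):
        # api.users(42) -> a child resource at .../users/42
        return Resource(f"{self.base}/{ident}")

    @property
    def url(self):
        return self.base + "/"

api = Resource("http://example.com/api/v1")
print(api.users(42).posts.url)  # http://example.com/api/v1/users/42/posts/
```

With slumber itself, the equivalent chain would end in `.get()`, `.post(...)`, etc., which issue the actual request and decode the JSON body for you.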
I recently created a package that mimics the Django queryset over DRF: [django-rest-framework-queryset](https://github.com/variable/django-rest-framework-queryset)
151,258
I have a new (2013) 15" Macbook Pro. The USB port provides 500mA which is not good enough for many devices I use (Hard Drive, 3G Dongle...) Reading this article: <http://support.apple.com/kb/HT4049> it seems that these ports are capable of delivering more power, but it's limited to Apple products. Is it possible to hack the mac, and change this default value (to something like 900mA?)
2014/10/18
[ "https://apple.stackexchange.com/questions/151258", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/58703/" ]
Apple USB 3.0 ports will output up to 1100 mA if requested; USB 2.0 is limited to 500 mA. You can check the current requirements for any attached device in Apple Menu > About This Mac > More Info (labelled 'System Report…' in later macOS versions) > USB. I only have USB 2.0 ports on this machine, but see pic... ![enter image description here](https://i.stack.imgur.com/cOovo.png)
Thunderbolt or USB hub ====================== You can fix this by using a powered USB hub; this way you do not have to modify your Mac. Take this [Belkin 4-Port USB hub](http://www.belkin.com/hk/IWCatProductPage.process?Product_Id=692786), for example. ![belkin](https://i.stack.imgur.com/utDw4.png) The only negative: you need a power socket. Another fix is to use a Thunderbolt hub, like the [Matrox DS1](http://www.matrox.com/docking_station/en/ds1/specs/). This is a hub with Thunderbolt input, needs no additional power, and outputs all sorts of IO, including USB 3. ![matrox ds1](https://i.stack.imgur.com/QdHMn.jpg)
151,258
I have a new (2013) 15" Macbook Pro. The USB port provides 500mA which is not good enough for many devices I use (Hard Drive, 3G Dongle...) Reading this article: <http://support.apple.com/kb/HT4049> it seems that these ports are capable of delivering more power, but it's limited to Apple products. Is it possible to hack the mac, and change this default value (to something like 900mA?)
2014/10/18
[ "https://apple.stackexchange.com/questions/151258", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/58703/" ]
Apple USB 3.0 ports will output up to 1100 mA if requested; USB 2.0 is limited to 500 mA. You can check the current requirements for any attached device in Apple Menu > About This Mac > More Info (labelled 'System Report…' in later macOS versions) > USB. I only have USB 2.0 ports on this machine, but see pic... ![enter image description here](https://i.stack.imgur.com/cOovo.png)
Another possible solution, one that avoids any sort of hacking, would be to use a USB Y-cable. These cables provide two USB connectors that plug into your laptop and merge into a single cable that's plugged into your external device, therefore pulling current from two USB ports on your laptop. Many external HDs come with these; they're inexpensive, and they do the job. See <http://www.toshiba.com/us/accessories/Cables-Adapters/Cables/USB/BA-82010> for an example.
151,258
I have a new (2013) 15" Macbook Pro. The USB port provides 500mA which is not good enough for many devices I use (Hard Drive, 3G Dongle...) Reading this article: <http://support.apple.com/kb/HT4049> it seems that these ports are capable of delivering more power, but it's limited to Apple products. Is it possible to hack the mac, and change this default value (to something like 900mA?)
2014/10/18
[ "https://apple.stackexchange.com/questions/151258", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/58703/" ]
Apple USB 3.0 ports will output up to 1100mA if requested; USB 2.0 is limited to 500mA. You can check the current requirements for any attached device in Apple Menu > About this Mac > More Info (in later macOS versions labelled 'System Report…') > USB. I only have USB 2.0 ports on this machine, but see pic... ![enter image description here](https://i.stack.imgur.com/cOovo.png)
> > The USB port provides 500mA which is not good enough for many devices I use (Hard Drive, 3G Dongle...) > > > The USB 3.x ports on Apple computers are able to supply more than 500 mA. This can be demonstrated by plugging in an iPhone and seeing the computer report in System Information that it is supplying 12 watts. The ability of the port to supply power doesn't change with what is plugged into it. > > Reading this article: <http://support.apple.com/kb/HT4049> it seems that these ports are capable of delivering more power, but it's limited to Apple products. > > > The power the port can supply is not limited by what is plugged in. This document is not intended for a highly technical audience, so it's in a way lying by omission. > > Is it possible to hack the mac, and change this default value (to something like 900mA?) > > > Much of this default behavior is written in the device, not the host. And Apple computers built after iPods started using USB for charging (2005 or thereabouts) will provide at least 1500 mA from their USB ports. You don't have to "hack" anything for it to provide 900 mA to a USB device. The USB 2.0 and USB 3.x specs allow for up to 1500 mA to devices. Apple computers since 2012 or so were built to provide 2400 mA from USB. Apple isn't doing anything "sneaky" or out of spec in providing this extra current from USB ports to Apple iDevices. They use the USB-PD and USB-BC protocols for this, and other USB devices can safely use this power too if they use the same protocol. Few USB devices will require more than 900 mA from a USB host because, for a number of reasons, few USB hosts provide more than 900 mA. Apple computers will happily provide this much power without any "hack". Because this budgeting of power relies as much on the device as on the host, there are ways to get more power by "hacks" to the device. That's assuming one desires well-behaved USB devices.
It's possible, and trivial, to create a device that will take 12 watts from a USB port like an iPhone does, but without first asking nicely the way an iPhone does.
151,258
I have a new (2013) 15" Macbook Pro. The USB port provides 500mA which is not good enough for many devices I use (Hard Drive, 3G Dongle...) Reading this article: <http://support.apple.com/kb/HT4049> it seems that these ports are capable of delivering more power, but it's limited to Apple products. Is it possible to hack the mac, and change this default value (to something like 900mA?)
2014/10/18
[ "https://apple.stackexchange.com/questions/151258", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/58703/" ]
Thunderbolt or USB hub ====================== You can fix this by using a powered USB hub. This way you do not have to modify your Mac. Take this [Belkin 4-Port USB hub](http://www.belkin.com/hk/IWCatProductPage.process?Product_Id=692786), for example. ![belkin](https://i.stack.imgur.com/utDw4.png) The only negative: you need a power socket. Another fix is to use a Thunderbolt hub, like the [Matrox DS1](http://www.matrox.com/docking_station/en/ds1/specs/). This is a hub with Thunderbolt input, needs no additional power, and outputs all sorts of IO, including USB 3. ![matrox ds1](https://i.stack.imgur.com/QdHMn.jpg)
> > The USB port provides 500mA which is not good enough for many devices I use (Hard Drive, 3G Dongle...) > > > The USB 3.x ports on Apple computers are able to supply more than 500 mA. This can be demonstrated by plugging in an iPhone and seeing the computer report in System Information that it is supplying 12 watts. The ability of the port to supply power doesn't change with what is plugged into it. > > Reading this article: <http://support.apple.com/kb/HT4049> it seems that these ports are capable of delivering more power, but it's limited to Apple products. > > > The power the port can supply is not limited by what is plugged in. This document is not intended for a highly technical audience, so it's in a way lying by omission. > > Is it possible to hack the mac, and change this default value (to something like 900mA?) > > > Much of this default behavior is written in the device, not the host. And Apple computers built after iPods started using USB for charging (2005 or thereabouts) will provide at least 1500 mA from their USB ports. You don't have to "hack" anything for it to provide 900 mA to a USB device. The USB 2.0 and USB 3.x specs allow for up to 1500 mA to devices. Apple computers since 2012 or so were built to provide 2400 mA from USB. Apple isn't doing anything "sneaky" or out of spec in providing this extra current from USB ports to Apple iDevices. They use the USB-PD and USB-BC protocols for this, and other USB devices can safely use this power too if they use the same protocol. Few USB devices will require more than 900 mA from a USB host because, for a number of reasons, few USB hosts provide more than 900 mA. Apple computers will happily provide this much power without any "hack". Because this budgeting of power relies as much on the device as on the host, there are ways to get more power by "hacks" to the device. That's assuming one desires well-behaved USB devices.
It's possible, and trivial, to create a device that will take 12 watts from a USB port like an iPhone does, but without first asking nicely the way an iPhone does.
151,258
I have a new (2013) 15" Macbook Pro. The USB port provides 500mA which is not good enough for many devices I use (Hard Drive, 3G Dongle...) Reading this article: <http://support.apple.com/kb/HT4049> it seems that these ports are capable of delivering more power, but it's limited to Apple products. Is it possible to hack the mac, and change this default value (to something like 900mA?)
2014/10/18
[ "https://apple.stackexchange.com/questions/151258", "https://apple.stackexchange.com", "https://apple.stackexchange.com/users/58703/" ]
Another possible solution, one that avoids any sort of hacking, would be to use a USB Y-cable. These cables provide two USB connectors that plug into your laptop and merge into a single cable that's plugged into your external device, therefore pulling current from two USB ports on your laptop. Many external HDs come with these; they're inexpensive, and they do the job. See <http://www.toshiba.com/us/accessories/Cables-Adapters/Cables/USB/BA-82010> for an example.
> > The USB port provides 500mA which is not good enough for many devices I use (Hard Drive, 3G Dongle...) > > > The USB 3.x ports on Apple computers are able to supply more than 500 mA. This can be demonstrated by plugging in an iPhone and seeing the computer report in System Information that it is supplying 12 watts. The ability of the port to supply power doesn't change with what is plugged into it. > > Reading this article: <http://support.apple.com/kb/HT4049> it seems that these ports are capable of delivering more power, but it's limited to Apple products. > > > The power the port can supply is not limited by what is plugged in. This document is not intended for a highly technical audience, so it's in a way lying by omission. > > Is it possible to hack the mac, and change this default value (to something like 900mA?) > > > Much of this default behavior is written in the device, not the host. And Apple computers built after iPods started using USB for charging (2005 or thereabouts) will provide at least 1500 mA from their USB ports. You don't have to "hack" anything for it to provide 900 mA to a USB device. The USB 2.0 and USB 3.x specs allow for up to 1500 mA to devices. Apple computers since 2012 or so were built to provide 2400 mA from USB. Apple isn't doing anything "sneaky" or out of spec in providing this extra current from USB ports to Apple iDevices. They use the USB-PD and USB-BC protocols for this, and other USB devices can safely use this power too if they use the same protocol. Few USB devices will require more than 900 mA from a USB host because, for a number of reasons, few USB hosts provide more than 900 mA. Apple computers will happily provide this much power without any "hack". Because this budgeting of power relies as much on the device as on the host, there are ways to get more power by "hacks" to the device. That's assuming one desires well-behaved USB devices.
It's possible, and trivial, to create a device that will take 12 watts from a USB port like an iPhone does, but without first asking nicely the way an iPhone does.
189,170
I am very much a newbie when it comes to this stuff. I have to replace a broken light switch. When I took it apart, this is what the wiring looked like: one wire backwired and one using the side screw. What does this mean? Can I rewire it the same way with a new switch? Thank you in advance. [![enter image description here](https://i.stack.imgur.com/xMppD.jpg)](https://i.stack.imgur.com/xMppD.jpg)
2020/04/05
[ "https://diy.stackexchange.com/questions/189170", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/115236/" ]
First, if it works *even at all*, check with your power company about any appliance upgrade assistance they may have. They will often subsidize replacement with efficient appliances, *especially air conditioners*, because every watt of *A/C* draw they eliminate is a *peaking unit* power plant they don't have to build. Peaking units are by far the most expensive because the bank wants the mortgage paid exactly the same as a baseload 24x7 nuke, but the peaking unit only makes hay a few hundred hours a year. There's window units, *and then, there's wall units*. ----------------------------------------------------- Wall units are much tougher and more industrial units than window air conditioners. They also have a much tougher and more industrial price :) As such, people like to substitute for window units, which is not a good idea. Window A/Cs assume a threshold only a few inches wide, especially on the top. The wall passage tends to be deep, and the window unit won't perform well there. One could install a window :)
Yes, you can replace the unit. Make sure the vents are in similar places; you don't want the sleeve blocking outside air flow, which is normally on the sides and back. Also make sure the unit tilts slightly to the outside so the water can drain outside.
27,667
What is the difference between *quite* and *pretty* in the following context: > > The differences between these concepts are quite complicated. > > > and > > The differences between these concepts are pretty complicated. > > >
2014/07/08
[ "https://ell.stackexchange.com/questions/27667", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/8234/" ]
In the first sentence, **Quite** refers to **completely**, while in the second sentence **Pretty** refers to **a certain extent**.
One would use *pretty* in this context to suggest that the speaker liked to talk about the mentioned concepts. I would use *quite* in this context, maybe, to hint that the speaker himself would rather not talk about the differences between the concepts in full detail -- which is absolutely understandable, as they apparently a r e "quite complicated".. :-) Hope you manage to get the message across. Good luck!
27,667
What is the difference between *quite* and *pretty* in the following context: > > The differences between these concepts are quite complicated. > > > and > > The differences between these concepts are pretty complicated. > > >
2014/07/08
[ "https://ell.stackexchange.com/questions/27667", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/8234/" ]
Firstly, when you search the word 'quite' on OALD, it says *pretty* is a synonym. > > [quite (synonyms - fairly, pretty)](http://www.oxfordlearnersdictionaries.com/definition/english/quite) - *to some degree* > > > But then, if you search for *pretty*, besides its general meaning, it also means *very* *[I actually thought it this way when I saw the sentence first!]* > > [pretty](http://www.oxfordlearnersdictionaries.com/definition/english/pretty_1) - *very* > > > So... > > The differences between these concepts are quite complicated = *to some degree complicated.* > > > The differences between these concepts are pretty complicated = *to some degree complicated* **OR** *very complicated* > > >
In the first sentence, **Quite** refers to **completely**, while in the second sentence **Pretty** refers to **a certain extent**.
27,667
What is the difference between *quite* and *pretty* in the following context: > > The differences between these concepts are quite complicated. > > > and > > The differences between these concepts are pretty complicated. > > >
2014/07/08
[ "https://ell.stackexchange.com/questions/27667", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/8234/" ]
In many cases, it would be equally or more correct to omit either word and still retain the meaning: "It is broken" instead of "It is pretty broken"; "It is sore" instead of "It is quite sore"; etc.
One would use *pretty* in this context to suggest that the speaker liked to talk about the mentioned concepts. I would use *quite* in this context, maybe, to hint that the speaker himself would rather not talk about the differences between the concepts in full detail -- which is absolutely understandable, as they apparently a r e "quite complicated".. :-) Hope you manage to get the message across. Good luck!
27,667
What is the difference between *quite* and *pretty* in the following context: > > The differences between these concepts are quite complicated. > > > and > > The differences between these concepts are pretty complicated. > > >
2014/07/08
[ "https://ell.stackexchange.com/questions/27667", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/8234/" ]
The usage of the word "quite" in modern English is a paradox. It can mean two opposite things: The original meaning is still used sometimes: "He is quite dead." (You cannot be slightly dead, so "quite" in this context means, "absolutely.") The modern meaning can mean anything from "moderately" through to "surprisingly." "They said I'd hate Javanese cooking but I found it quite tasty." In this second context, the word, "quite tasty" could be substituted by the word "pretty tasty" and it would mean the same thing.
In many cases, it would be equally or more correct to omit either word and still retain the meaning: "It is broken" instead of "It is pretty broken"; "It is sore" instead of "It is quite sore"; etc.
27,667
What is the difference between *quite* and *pretty* in the following context: > > The differences between these concepts are quite complicated. > > > and > > The differences between these concepts are pretty complicated. > > >
2014/07/08
[ "https://ell.stackexchange.com/questions/27667", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/8234/" ]
The usage of the word "quite" in modern English is a paradox. It can mean two opposite things: The original meaning is still used sometimes: "He is quite dead." (You cannot be slightly dead, so "quite" in this context means, "absolutely.") The modern meaning can mean anything from "moderately" through to "surprisingly." "They said I'd hate Javanese cooking but I found it quite tasty." In this second context, the word, "quite tasty" could be substituted by the word "pretty tasty" and it would mean the same thing.
In the first sentence, **Quite** refers to **completely**, while in the second sentence **Pretty** refers to **a certain extent**.
27,667
What is the difference between *quite* and *pretty* in the following context: > > The differences between these concepts are quite complicated. > > > and > > The differences between these concepts are pretty complicated. > > >
2014/07/08
[ "https://ell.stackexchange.com/questions/27667", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/8234/" ]
Firstly, when you search the word 'quite' on OALD, it says *pretty* is a synonym. > > [quite (synonyms - fairly, pretty)](http://www.oxfordlearnersdictionaries.com/definition/english/quite) - *to some degree* > > > But then, if you search for *pretty*, besides its general meaning, it also means *very* *[I actually thought it this way when I saw the sentence first!]* > > [pretty](http://www.oxfordlearnersdictionaries.com/definition/english/pretty_1) - *very* > > > So... > > The differences between these concepts are quite complicated = *to some degree complicated.* > > > The differences between these concepts are pretty complicated = *to some degree complicated* **OR** *very complicated* > > >
One would use *pretty* in this context to suggest that the speaker liked to talk about the mentioned concepts. I would use *quite* in this context, maybe, to hint that the speaker himself would rather not talk about the differences between the concepts in full detail -- which is absolutely understandable, as they apparently a r e "quite complicated".. :-) Hope you manage to get the message across. Good luck!
27,667
What is the difference between *quite* and *pretty* in the following context: > > The differences between these concepts are quite complicated. > > > and > > The differences between these concepts are pretty complicated. > > >
2014/07/08
[ "https://ell.stackexchange.com/questions/27667", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/8234/" ]
*Pretty complicated* is approximately the same as *fairly complicated*: there is a significant degree of complication. It is complicated enough that it will require much effort for an ordinary person to understand it. * *Pretty*, in this sense, is used mostly in conversation, very little in formal discourse. *Quite complicated* is more complicated than that: *very complicated*. It is so complicated that an ordinary person may not be able to understand it entirely. * *Quite* is used mostly in formal discourse, much less in ordinary conversation.
One would use *pretty* in this context to suggest that the speaker liked to talk about the mentioned concepts. I would use *quite* in this context, maybe, to hint that the speaker himself would rather not talk about the differences between the concepts in full detail -- which is absolutely understandable, as they apparently a r e "quite complicated".. :-) Hope you manage to get the message across. Good luck!
27,667
What is the difference between *quite* and *pretty* in the following context: > > The differences between these concepts are quite complicated. > > > and > > The differences between these concepts are pretty complicated. > > >
2014/07/08
[ "https://ell.stackexchange.com/questions/27667", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/8234/" ]
*Pretty complicated* is approximately the same as *fairly complicated*: there is a significant degree of complication. It is complicated enough that it will require much effort for an ordinary person to understand it. * *Pretty*, in this sense, is used mostly in conversation, very little in formal discourse. *Quite complicated* is more complicated than that: *very complicated*. It is so complicated that an ordinary person may not be able to understand it entirely. * *Quite* is used mostly in formal discourse, much less in ordinary conversation.
The usage of the word "quite" in modern English is a paradox. It can mean two opposite things: The original meaning is still used sometimes: "He is quite dead." (You cannot be slightly dead, so "quite" in this context means, "absolutely.") The modern meaning can mean anything from "moderately" through to "surprisingly." "They said I'd hate Javanese cooking but I found it quite tasty." In this second context, the word, "quite tasty" could be substituted by the word "pretty tasty" and it would mean the same thing.
27,667
What is the difference between *quite* and *pretty* in the following context: > > The differences between these concepts are quite complicated. > > > and > > The differences between these concepts are pretty complicated. > > >
2014/07/08
[ "https://ell.stackexchange.com/questions/27667", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/8234/" ]
Firstly, when you search the word 'quite' on OALD, it says *pretty* is a synonym. > > [quite (synonyms - fairly, pretty)](http://www.oxfordlearnersdictionaries.com/definition/english/quite) - *to some degree* > > > But then, if you search for *pretty*, besides its general meaning, it also means *very* *[I actually thought it this way when I saw the sentence first!]* > > [pretty](http://www.oxfordlearnersdictionaries.com/definition/english/pretty_1) - *very* > > > So... > > The differences between these concepts are quite complicated = *to some degree complicated.* > > > The differences between these concepts are pretty complicated = *to some degree complicated* **OR** *very complicated* > > >
*Quite* means, variably: * *Exactly* or *completely*, as in "quite so" (meaning "exactly right") * *Somewhat* or *fairly*, as in "quite big" (something cannot be "completely big", so this must mean "fairly large") *Pretty* means the same as that second sense of *quite* (In this context! Obviously it also means cute/beautiful etc.), so that one can say "pretty big" but not "pretty so". Unfortunately, there is a tendency to use BOTH of these words ironically or sarcastically, for example: * Describing something extremely large as "pretty big" or "quite big" (the second sense of *quite*) * I'm quite sure there must be a pretty good example of the first sense of *quite*, but I can't think of one :( To answer the question, there is no real difference between the two words in the context of the question - *quite* doesn't necessarily indicate the degree to which something is [adjective], any more than *pretty* does. It's also worth noting that the second sense of *quite* is more common as a colloquialism in British English than *pretty*, whereas the first sense you would be more likely to see in polite/formal written language.
27,667
What is the difference between *quite* and *pretty* in the following context: > > The differences between these concepts are quite complicated. > > > and > > The differences between these concepts are pretty complicated. > > >
2014/07/08
[ "https://ell.stackexchange.com/questions/27667", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/8234/" ]
The usage of the word "quite" in modern English is a paradox. It can mean two opposite things: The original meaning is still used sometimes: "He is quite dead." (You cannot be slightly dead, so "quite" in this context means, "absolutely.") The modern meaning can mean anything from "moderately" through to "surprisingly." "They said I'd hate Javanese cooking but I found it quite tasty." In this second context, the word, "quite tasty" could be substituted by the word "pretty tasty" and it would mean the same thing.
One would use *pretty* in this context to suggest that the speaker liked to talk about the mentioned concepts. I would use *quite* in this context, maybe, to hint that the speaker himself would rather not talk about the differences between the concepts in full detail -- which is absolutely understandable, as they apparently a r e "quite complicated".. :-) Hope you manage to get the message across. Good luck!
30,662
Is it possible to use regular trainers/shoes (i.e., without cleats) with clipless pedals? A friend of mine mentioned he does this all the time. However, I cannot imagine how this would work. Surely the area of grip would be far too small to get any kind of purchase on the pedals. We both have [Shimano PD-R540 SPD](https://bike.shimano.com/en-EU/product/component/tiagra-4700/PD-R540.html) pedals.
2015/05/19
[ "https://bicycles.stackexchange.com/questions/30662", "https://bicycles.stackexchange.com", "https://bicycles.stackexchange.com/users/18733/" ]
There is nothing to stop you from getting on your bike in regular trainers instead of shoes with cleats. You won't damage your pedals in any way as long as you don't have a rock lodged in the sole of your shoe. The problem, as far as I can see it, is that it is just not very comfortable due to the small surface area and flexible sole of the shoe. Your feet will feel the pressure localized into a very small area. The other issue is grip. There isn't a lot of traction since the body of the pedal wasn't designed with that in mind. I do my own mechanics out of my garage and will occasionally hop on the bike I'm working on to make sure my repair or adjustment is working properly. I have bikes with Ritchey Logic, Speedplay Frogs, Speedplay Zeros, Shimano SPD and non-clipless pedals. I don't go and put on the proper shoes just to ride down the street and back while I test an adjustment. So, yes, you can ride without shoes with cleats, but really only in a very limited way.
Don't wear sneakers over clipless pedals! It didn't feel bad at the time, but I woke up with the ball area of my foot swollen and in terrible pain. Icing it as I type. I knew better, too.
30,662
Is it possible to use regular trainers/shoes (i.e., without cleats) with clipless pedals? A friend of mine mentioned he does this all the time. However, I cannot imagine how this would work. Surely the area of grip would be far too small to get any kind of purchase on the pedals. We both have [Shimano PD-R540 SPD](https://bike.shimano.com/en-EU/product/component/tiagra-4700/PD-R540.html) pedals.
2015/05/19
[ "https://bicycles.stackexchange.com/questions/30662", "https://bicycles.stackexchange.com", "https://bicycles.stackexchange.com/users/18733/" ]
Yes, you *can* use them with normal shoes, but as you predict, it isn't very comfortable, especially if your shoes have thin, flexible soles. Also, there's a risk of your foot slipping off, particularly in the wet. There are various options to temporarily convert clipless pedals into ordinary flat ones. * [Fly pedals](http://flypedals.com/) * [BBB BPD FeetRest pedal adaptors (SPD only)](http://bbbcycling.com/bike-parts/pedals/BPD-90)
Yes you can. No, it's not going to be comfortable, and you're more likely to slip off the pedals. One alternative is to get double-sided pedals where one side of the pedal has an SPD mount and the other side is a flat pedal. I've been running Shimano M324 pedals on my commuter so I can hop on with casual shoes or use my cycling shoes for more power on longer rides. It's the worst of both worlds, so you'll find yourself trying to flip the pedal from time to time, but it's a doable option that gives you flexibility without having to swap pedals all the time. ![enter image description here](https://i.stack.imgur.com/y1DlK.jpg) Unfortunately, I don't know of any other type of shoe / cleat combination that works with dual-sided pedals. It seems that only Shimano mountain SPD offers this option. If you're already sold on road pedals, then it's probably not going to work like this. You might just want to invest in a set of flat pedals and swap your pedals when you want to ride without clipless shoes.
30,662
Is it possible to use regular trainers/shoes (i.e., without cleats) with clipless pedals? A friend of mine mentioned he does this all the time. However, I cannot imagine how this would work. Surely the area of grip would be far too small to get any kind of purchase on the pedals. We both have [Shimano PD-R540 SPD](https://bike.shimano.com/en-EU/product/component/tiagra-4700/PD-R540.html) pedals.
2015/05/19
[ "https://bicycles.stackexchange.com/questions/30662", "https://bicycles.stackexchange.com", "https://bicycles.stackexchange.com/users/18733/" ]
There is nothing to stop you from getting on your bike in regular trainers instead of shoes with cleats. You won't damage your pedals in any way as long as you don't have a rock lodged in the sole of your shoe. The problem, as far as I can see it, is that it is just not very comfortable due to the small surface area and flexible sole of the shoe. Your feet will feel the pressure localized into a very small area. The other issue is grip. There isn't a lot of traction since the body of the pedal wasn't designed with that in mind. I do my own mechanics out of my garage and will occasionally hop on the bike I'm working on to make sure my repair or adjustment is working properly. I have bikes with Ritchey Logic, Speedplay Frogs, Speedplay Zeros, Shimano SPD and non-clipless pedals. I don't go and put on the proper shoes just to ride down the street and back while I test an adjustment. So, yes, you can ride without shoes with cleats, but really only in a very limited way.
I have used normal office shoes on Look road pedals (albeit shoes with relatively thick soles). This works fine for to/from work or lunchtime errands. However, I found that pedalling on the "underside" of the pedal was more comfortable in some thin-soled shoes. Not ideal, but workable. Another option: try clipping a plastic cleat into the pedal; an older and worn-out one would be perfect. Plus you can remove it when you want to ride with proper shoes. I have Look clipless pedals on my road bike, but I put platforms back on my MTB for trips around town, mostly because our roads are still terrible, and the MTB has better brakes and more load-carrying capacity. Cleats may not be best for you.
30,662
Is it possible to use regular trainers/shoes (i.e., without cleats) with clipless pedals? A friend of mine mentioned he does this all the time. However, I cannot imagine how this would work. Surely the area of grip would be far too small to get any kind of purchase on the pedals. We both have [Shimano PD-R540 SPD](https://bike.shimano.com/en-EU/product/component/tiagra-4700/PD-R540.html) pedals.
2015/05/19
[ "https://bicycles.stackexchange.com/questions/30662", "https://bicycles.stackexchange.com", "https://bicycles.stackexchange.com/users/18733/" ]
Yes, you *can* use them with normal shoes, but as you predict, it isn't very comfortable, especially if your shoes have thin, flexible soles. Also, there's a risk of your foot slipping off, particularly in the wet. There are various options to temporarily convert clipless pedals into ordinary flat ones. * [Fly pedals](http://flypedals.com/) * [BBB BPD FeetRest pedal adaptors (SPD only)](http://bbbcycling.com/bike-parts/pedals/BPD-90)
Don't wear sneakers over clipless pedals! It didn't feel bad at the time, but I woke up with the ball area of my foot swollen and in terrible pain. Icing it as I type. I knew better, too.
30,662
Is it possible to use regular trainers/shoes (i.e., without cleats) with clipless pedals? A friend of mine mentioned he does this all the time. However, I cannot imagine how this would work. Surely the area of grip would be far too small to get any kind of purchase on the pedals. We both have [Shimano PD-R540 SPD](https://bike.shimano.com/en-EU/product/component/tiagra-4700/PD-R540.html) pedals.
2015/05/19
[ "https://bicycles.stackexchange.com/questions/30662", "https://bicycles.stackexchange.com", "https://bicycles.stackexchange.com/users/18733/" ]
Yes, you can. No, it's not going to be comfortable, and you're more likely to slip off the pedals. One alternative is to get double-sided pedals where one side of the pedal has an SPD mount and the other side is a flat pedal. I've been running Shimano M324 pedals on my commuter so I can hop on with casual shoes or use my cycling shoes for more power on longer rides. It's a bit of the worst of both worlds, so you'll find yourself trying to flip the pedal from time to time, but it's a doable option that gives you flexibility without having to swap pedals all the time. ![enter image description here](https://i.stack.imgur.com/y1DlK.jpg) Unfortunately, I don't know of any other shoe/cleat combination that works with dual-sided pedals; it seems that only Shimano's mountain SPD system offers this option. If you're already sold on road pedals, then it's probably not going to work like this. You might just want to invest in a set of flat pedals and swap your pedals when you want to ride without clipless shoes.
I have used normal office shoes on Look road pedals (albeit shoes with relatively thick soles). This works fine for to/from work or lunchtime errands. However, I found that pedalling on the "underside" of the pedal was more comfortable in some thin-soled shoes. Not ideal, but workable. Another option: try clipping a plastic cleat into the pedal; an old, worn-out one would be perfect. Plus you can remove it when you want to ride with proper shoes. I have Look clipless pedals on my road bike, but I put platforms back on my MTB for trips around town, mostly because our roads are still terrible, and the MTB has better brakes and more load-carrying capacity. Cleats may not be best for you.
30,662
Is it possible to use regular trainers/shoes (i.e., without cleats) with clipless pedals? A friend of mine mentioned he does this all the time. However, I cannot imagine how this would work. Surely the area of grip would be far too small to get any kind of purchase on the pedals. We both have [Shimano PD-R540 SPD](https://bike.shimano.com/en-EU/product/component/tiagra-4700/PD-R540.html) pedals.
2015/05/19
[ "https://bicycles.stackexchange.com/questions/30662", "https://bicycles.stackexchange.com", "https://bicycles.stackexchange.com/users/18733/" ]
Yes, you *can* use them with normal shoes, but as you predict, it isn't very comfortable, especially if your shoes have thin, flexible soles. Also, there's a risk of your foot slipping off, particularly in the wet. There are various options to temporarily convert clipless pedals into ordinary flat ones. * [Fly pedals](http://flypedals.com/) * [BBB BPD FeetRest pedal adaptors (SPD only)](http://bbbcycling.com/bike-parts/pedals/BPD-90)
There is nothing to stop you from getting on your bike in regular trainers instead of shoes with cleats. You won't damage your pedals in any way as long as you don't have a rock lodged in the sole of your shoe. The problem, as far as I can see it, is that it is just not very comfortable due to the small surface area and the flexible sole of the shoe. Your feet will feel the pressure localized in a very small area. The other issue is grip. There isn't a lot of traction, since the body of the pedal wasn't designed with that in mind. I do my own mechanics out of my garage and will occasionally hop on the bike I'm working on to make sure my repair or adjustment is working properly. I have bikes with Ritchey Logic, Speedplay Frogs, Speedplay Zeros, Shimano SPD and non-clipless pedals. I don't go and put on the proper shoes just to ride down the street and back while I test an adjustment. So, yes, you can ride without shoes with cleats, but really only in a very limited way.
10,029
I'm using the Adobe CS5 suite and trying to upload all of my pictures to Facebook... and .NEF files are too large to upload, any suggestions?
2011/03/22
[ "https://photo.stackexchange.com/questions/10029", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/3841/" ]
I use Picasa for that. I would simply import the NEFs from the camera or the camera's memory card. That puts the NEFs in a folder on your computer's disk. Picasa sees the NEFs. You can edit them just like any other photo. Adjust contrast, crop, color, whatever... At that point, you can click on your folder of photos to select the whole folder. The "Photo Tray" in the lower left should say "Folder Selected..." Then, click Picasa's "Export" button. The Export will create a new folder full of all-edits-applied JPEGs. Picasa can be downloaded [here](http://picasa.google.com/)
Create a [batch process](http://www.youtube.com/watch?v=PTtWEtN06EI) to convert from .NEF to .JPEG with Photoshop. Don't forget to include closing the picture in your recording, as Photoshop does have a finite limit on the number of open files. This solution is ideal if you took your photos in the same lighting conditions, so corrections, if any, are the same. For corrections: at the very least, you need to reduce your file size (i.e., change dpi or dimensions) to meet Facebook's restrictions.
10,029
I'm using the Adobe CS5 suite and trying to upload all of my pictures to Facebook... and .NEF files are too large to upload, any suggestions?
2011/03/22
[ "https://photo.stackexchange.com/questions/10029", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/3841/" ]
There is a very easy way to convert a group of photos to jpeg format within Photoshop. It is done within the Image Processor, which is located as follows: File > Scripts > Image Processor; a pop-up screen then appears. (Depending on your version of Adobe software, this can also be done in Bridge.) Within the Image Processor - Step 1. Locate the folder where the images are stored. Step 2. Select either 'save in same location' or you can select a different location. Step 3. Select the output format - Jpeg, PSD, TIFF. If converting the images for Facebook, select jpeg, quality under the number 5, check 'resize to fit' and enter 800 or a lower number in the W & H boxes. (Photos loaded to the internet don't need to have a high pixel resolution. The quality and resize would need to be different if the photos are going to be printed.) Step 4. Nothing needs to be checked for images that will be loaded to Facebook. Step 5. Click 'Run'. While Photoshop is processing the images, you will not be able to use Photoshop. The Image Processor is an automatic process that will create a new image for any image within the folder. (If sub-folders is checked within Step 1, it will also create new images for them.) Note: The Image Processor will create a jpeg file for any picture within the folder. Depending on the number of photos in the folder, the size of the images and the output size, it can take from only a few minutes to over 45 minutes. [I shoot in raw (nef) format and have an SD card with a lot of memory, so output (jpeg/psd) really affects the processing time. As one of the contributors indicated above, I also create jpegs to review/select photos because jpeg files are smaller and faster to load and thus review.]
Just use Adobe Elements Organizer. Select all the images you want to export and use the File menu's 'Export to new image' option. Free, easy and simple, without much loss of depth.
10,029
I'm using the Adobe CS5 suite and trying to upload all of my pictures to Facebook... and .NEF files are too large to upload, any suggestions?
2011/03/22
[ "https://photo.stackexchange.com/questions/10029", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/3841/" ]
I'm surprised no one has mentioned Nikon's own ViewNX, which will allow you to select all the images in a folder and batch-convert them from .NEF to .JPG. The program is free and came with the camera; if it didn't, it can also be [downloaded](http://www.nikonusa.com/en/Nikon-Products/Product/Imaging-Software/NVNX2/ViewNX-2.html) from the Nikon USA site. Facebook upload is already integrated in ViewNX 2. Here is a screenshot of a portion of the preferences screen for illustration purposes. ![enter image description here](https://i.stack.imgur.com/8ct1q.png)
Speaking of Scott Kelby (if you're a neophyte photographer and you've never read his books or visited his site, you're cheating yourself) the tool he recommends for the job is the JPEG extractor utility from [Michael Tapes](http://mtapesdesign.com/). It works with the embedded JPEG in the RAW (NEF) file, so it won't give you the quality you'd get with a proper "development" in Adobe Camera Raw -- but if you're uploading to Facebook, you don't get to keep your glorious high resolution anyway. "Instant JPEG From Raw" is a free download -- the email download code thing is just Tapes' way of keeping his server bandwidth reasonable (a minor inconvenience is an absolute brick wall to a lot of people).
10,029
I'm using the Adobe CS5 suite and trying to upload all of my pictures to Facebook... and .NEF files are too large to upload, any suggestions?
2011/03/22
[ "https://photo.stackexchange.com/questions/10029", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/3841/" ]
I use Picasa for that. I would simply import the NEFs from the camera or the camera's memory card. That puts the NEFs in a folder on your computer's disk. Picasa sees the NEFs. You can edit them just like any other photo. Adjust contrast, crop, color, whatever... At that point, you can click on your folder of photos to select the whole folder. The "Photo Tray" in the lower left should say "Folder Selected..." Then, click Picasa's "Export" button. The Export will create a new folder full of all-edits-applied JPEGs. Picasa can be downloaded [here](http://picasa.google.com/)
The easiest way to upload raws to Facebook is to use Lightroom and set up a publish channel to point at your Facebook account. Then you just drag'n'drop files from your library to the publish folder. You can set up default resizing, watermarks, screen sharpening, point to albums on your page, etc.
10,029
I'm using the Adobe CS5 suite and trying to upload all of my pictures to Facebook... and .NEF files are too large to upload, any suggestions?
2011/03/22
[ "https://photo.stackexchange.com/questions/10029", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/3841/" ]
Speaking of Scott Kelby (if you're a neophyte photographer and you've never read his books or visited his site, you're cheating yourself) the tool he recommends for the job is the JPEG extractor utility from [Michael Tapes](http://mtapesdesign.com/). It works with the embedded JPEG in the RAW (NEF) file, so it won't give you the quality you'd get with a proper "development" in Adobe Camera Raw -- but if you're uploading to Facebook, you don't get to keep your glorious high resolution anyway. "Instant JPEG From Raw" is a free download -- the email download code thing is just Tapes' way of keeping his server bandwidth reasonable (a minor inconvenience is an absolute brick wall to a lot of people).
The easiest way to upload raws to Facebook is to use Lightroom and set up a publish channel to point at your Facebook account. Then you just drag'n'drop files from your library to the publish folder. You can set up default resizing, watermarks, screen sharpening, point to albums on your page, etc.
10,029
I'm using the Adobe CS5 suite and trying to upload all of my pictures to Facebook... and .NEF files are too large to upload, any suggestions?
2011/03/22
[ "https://photo.stackexchange.com/questions/10029", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/3841/" ]
IrfanView is free, and does it all. * <https://www.irfanview.com/> IrfanView has been progressively developed over many years and is used by literally millions of people. It's free for private use - a donation is welcome but not essential. It does what you want and a vast amount more. If you use it, be sure to install both the program and the separately downloaded "plugins".
NEF is Nikon's raw image format, which tends to have a file size over 10 MB. To display a picture on the internet, mainly embedded into a website like Facebook or in email, you need to use a compatible image format that the browser (client) can display. The most compatible image formats are JPEG, GIF and PNG. Image sizes commonly used on the internet are below 300 KB, which is a fraction of the size of a NEF. Your NEFs will have a dpi higher than 72. For displaying images on a screen, 72 dpi is sufficient, so you might want to reduce the dpi as well, in addition to scaling the image (a length or height around 1000 px is a good starting point). You need to convert the NEF into a format and size as explained above. As already said by others, you can use any raw converter to do the job. I would recommend using Nikon's own software such as Capture NX, which offers batch processing, or ViewNX, which most probably came with your camera and offers batch processing as well, as far as I know. Just a note: be aware of what you upload to the internet, especially to Facebook ;-)
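The "around 1000 px on the longer side" rule of thumb from this answer is just a bit of aspect-ratio arithmetic. A minimal sketch, where the function name `fit_within` and the 1000 px default are illustrative, not taken from any particular tool:

```python
def fit_within(width, height, max_side=1000):
    """Scale (width, height) so the longer side is at most `max_side`,
    preserving the aspect ratio. Returns the new integer dimensions."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height  # already small enough, leave untouched
    scale = max_side / longest
    return round(width * scale), round(height * scale)
```

A 6000x4000 NEF frame thus becomes a 1000x667 web copy, while an image already under the limit is passed through unchanged.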
10,029
I'm using the Adobe CS5 suite and trying to upload all of my pictures to Facebook... and .NEF files are too large to upload, any suggestions?
2011/03/22
[ "https://photo.stackexchange.com/questions/10029", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/3841/" ]
I'm surprised no one has mentioned Nikon's own ViewNX, which will allow you to select all the images in a folder and batch convert them from .NEF to .JPG. The program is free, and came with the camera and if not, it can also be [downloaded](http://www.nikonusa.com/en/Nikon-Products/Product/Imaging-Software/NVNX2/ViewNX-2.html) from the Nikon USA site Facebook upload is already integrated in ViewNX2. Here is a screenshot of a portion of the preferences screen for illustration purposes. ![enter image description here](https://i.stack.imgur.com/8ct1q.png)
Everybody has focused on how to automatically convert NEF (Nikon's proprietary raw format) after the fact, where you lose all control over the development, so I am going to propose an in-camera method that will give you much better copies. If you have a Nikon, you also have Picture Control on your camera, which allows anywhere from some to a lot of development to be set via custom presets (depending on the camera model). In my opinion, to get the best of both worlds you should shoot NEF + Fine JPEG; the JPEG will be the developed raw, with the Picture Control and WB you chose applied to each picture. By the way, this is how every photojournalist I know works when they have to meet a tight deadline. The Picture Control settings are also saved within each of the NEF files, but in a proprietary/secret Nikon format (**I don't understand the logic behind this awful Nikon policy**) that only Nikon's ergonomically awful developing software can interpret to render the raw files. **They are good at making cameras, so they should stick with that, partner with a large photo-editing software company to provide the bundled developing software, and make the Picture Control settings in NEF open source.** On the other hand, knowing that you use the Adobe CS5 suite and the quality of its JPEG exports, in my personal opinion you should not be bulk-uploading to Facebook: first, for privacy reasons (you need a proper model and property release signed for each of your photos that explicitly allows you to upload to social networks; it would be better if it explicitly allowed uploading to Facebook - which is more than a social network), and secondly, because Facebook's compression algorithms are so aggressive that they will render an awful picture with a lot of compression artifacts.
10,029
I'm using the Adobe CS5 suite and trying to upload all of my pictures to Facebook... and .NEF files are too large to upload, any suggestions?
2011/03/22
[ "https://photo.stackexchange.com/questions/10029", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/3841/" ]
Well, Facebook isn't going to handle NEF anyways. However, if you have CS5, that means you have Adobe Bridge and the batch functionality to perform image conversion from there. The short example would be... 1. Open bridge and find an image directory to work on. 2. Select the images to modify. 3. Select on the menu: "Tools -> Photoshop -> Image Processor" This is going to run Photoshop. From there you will be presented with a dialog that provides a number of options for batch processing including using the first image as the basis for further changes, file type to save as, etc. You may want to experiment a little with a small set of images, but be aware that Raw conversion to JPEG is seldom, if ever really, a consistent change. Personally, I would never do this for final images. I've only ever done it for proof images where I've totally controlled the light used in the shoot, but for anything else, including images I intend for display on the web or in print, the editing is done image by image. This is generally because white balance changes, sharpening changes, and a host of other little tweaks that vary as a result of settings, light, and more. By the way, if you haven't a lot of Photoshop experience with photographs, I'd recommend Scott Kelby's "[The Adobe Photoshop CS5 Book for Digital Photographers](http://rads.stackoverflow.com/amzn/click/0321703561)" as a good place to start (Google if the link doesn't work). There are a lot of other resources, but he covers a lot of ground and does it with some style, so worth the rather small price of admission.
Create a [batch process](http://www.youtube.com/watch?v=PTtWEtN06EI) to convert from .NEF to .JPEG with Photoshop. Don't forget to include closing the picture in your recording, as Photoshop does have a finite limit on the number of open files. This solution is ideal if you took your photos in the same lighting conditions, so corrections, if any, are the same. For corrections: at the very least, you need to reduce your file size (i.e., change dpi or dimensions) to meet Facebook's restrictions.
10,029
I'm using the Adobe CS5 suite and trying to upload all of my pictures to Facebook... and .NEF files are too large to upload, any suggestions?
2011/03/22
[ "https://photo.stackexchange.com/questions/10029", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/3841/" ]
There is a very easy way to convert a group of photos to jpeg format within Photoshop. It is done within the Image Processor, which is located as follows: File > Scripts > Image Processor; a pop-up screen then appears. (Depending on your version of Adobe software, this can also be done in Bridge.) Within the Image Processor - Step 1. Locate the folder where the images are stored. Step 2. Select either 'save in same location' or you can select a different location. Step 3. Select the output format - Jpeg, PSD, TIFF. If converting the images for Facebook, select jpeg, quality under the number 5, check 'resize to fit' and enter 800 or a lower number in the W & H boxes. (Photos loaded to the internet don't need to have a high pixel resolution. The quality and resize would need to be different if the photos are going to be printed.) Step 4. Nothing needs to be checked for images that will be loaded to Facebook. Step 5. Click 'Run'. While Photoshop is processing the images, you will not be able to use Photoshop. The Image Processor is an automatic process that will create a new image for any image within the folder. (If sub-folders is checked within Step 1, it will also create new images for them.) Note: The Image Processor will create a jpeg file for any picture within the folder. Depending on the number of photos in the folder, the size of the images and the output size, it can take from only a few minutes to over 45 minutes. [I shoot in raw (nef) format and have an SD card with a lot of memory, so output (jpeg/psd) really affects the processing time. As one of the contributors indicated above, I also create jpegs to review/select photos because jpeg files are smaller and faster to load and thus review.]
The easiest way to upload raws to Facebook is to use Lightroom and set up a publish channel to point at your Facebook account. Then you just drag'n'drop files from your library to the publish folder. You can set up default resizing, watermarks, screen sharpening, point to albums on your page, etc.
10,029
I'm using the Adobe CS5 suite and trying to upload all of my pictures to Facebook... and .NEF files are too large to upload, any suggestions?
2011/03/22
[ "https://photo.stackexchange.com/questions/10029", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/3841/" ]
There is a very easy way to convert a group of photos to jpeg format within Photoshop. It is done within the Image Processor, which is located as follows: File > Scripts > Image Processor; a pop-up screen then appears. (Depending on your version of Adobe software, this can also be done in Bridge.) Within the Image Processor - Step 1. Locate the folder where the images are stored. Step 2. Select either 'save in same location' or you can select a different location. Step 3. Select the output format - Jpeg, PSD, TIFF. If converting the images for Facebook, select jpeg, quality under the number 5, check 'resize to fit' and enter 800 or a lower number in the W & H boxes. (Photos loaded to the internet don't need to have a high pixel resolution. The quality and resize would need to be different if the photos are going to be printed.) Step 4. Nothing needs to be checked for images that will be loaded to Facebook. Step 5. Click 'Run'. While Photoshop is processing the images, you will not be able to use Photoshop. The Image Processor is an automatic process that will create a new image for any image within the folder. (If sub-folders is checked within Step 1, it will also create new images for them.) Note: The Image Processor will create a jpeg file for any picture within the folder. Depending on the number of photos in the folder, the size of the images and the output size, it can take from only a few minutes to over 45 minutes. [I shoot in raw (nef) format and have an SD card with a lot of memory, so output (jpeg/psd) really affects the processing time. As one of the contributors indicated above, I also create jpegs to review/select photos because jpeg files are smaller and faster to load and thus review.]
Create a [batch process](http://www.youtube.com/watch?v=PTtWEtN06EI) to convert from .NEF to .JPEG with Photoshop. Don't forget to include closing the picture in your recording, as Photoshop does have a finite limit on the number of open files. This solution is ideal if you took your photos in the same lighting conditions, so corrections, if any, are the same. For corrections: at the very least, you need to reduce your file size (i.e., change dpi or dimensions) to meet Facebook's restrictions.
131,856
I made a disaster with the Sony A7 III's silent shooting mode; I have huge flicker in some photos. [![enter image description here](https://i.stack.imgur.com/pv461.jpg)](https://i.stack.imgur.com/pv461.jpg) Is there any way to fix it, at least a little? Thank you so much!!!
2019/12/09
[ "https://graphicdesign.stackexchange.com/questions/131856", "https://graphicdesign.stackexchange.com", "https://graphicdesign.stackexchange.com/users/146379/" ]
It's going to be difficult to remove these lines completely, however there is an approach which can reduce them to an extent. The result is not perfect though. I used GIMP and the G'MIC plugin's Fourier Transform filter to suppress the stripe pattern, but if you can find a Fourier Transform plugin\* for Photoshop, you could also do something similar. Here's the before and after [![enter image description here](https://i.stack.imgur.com/1W4Q5.gif)](https://i.stack.imgur.com/1W4Q5.gif) I also did a little extra retouching with the Clone tool to remove some of the artefacts around the subject which the process can cause. I'm sure with some extra time and care you can make a better job than me - I did this very quickly just as an example. Here's how I edited the Fourier Transform image, by painting over the brighter spots and lines (leaving the central cross and bright centre) before reversing the transform. The screencapture is speeded up to show you what I did. [![enter image description here](https://i.stack.imgur.com/NsOxZ.gif)](https://i.stack.imgur.com/NsOxZ.gif) **\*Edit:** I found a [plugin for Photoshop](http://ft.rognemedia.no/) which could be used similarly. Haven't tested it though.
Actually, this kind of problem is really hard to solve in Photoshop without affecting the quality, but I will try to do my best. If we are talking about only this picture: first, duplicate your original layer so you have a backup in case of any accident. 1. Change the mode to Lab color to get a different color space in Photoshop 2. Make sure that you have selected all channels in the Channels tab (Lab is the main channel, if I remember well) 3. Use Surface Blur from the Filters menu and find the optimum values for blur amount and threshold. After these steps the flicker will be better than before. At that point you need to separate parts of the image, like the display behind the man, the shirt, the wall behind the display and the desk. Then use the Healing tool and light blur filters to get almost-matching colors. I can't guarantee that it will help you 100%, but at least there will be less flicker than in this image.
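The Fourier-transform trick in the first answer (find the stripe's bright spots in the spectrum, paint them out, transform back) can be illustrated in one dimension with a naive DFT. This is only a sketch: real tools use a fast 2-D FFT on the whole image, and the "scanline" and its stripe here are synthetic.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform, O(N^2); fine for a demo."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse DFT; returns the real part of each sample."""
    N = len(X)
    return [(sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                 for k in range(N)) / N).real for n in range(N)]

N, stripe_freq = 32, 8
# A flat grey scanline (value 10) with a periodic stripe riding on it.
signal = [10.0 + 3.0 * math.cos(2 * math.pi * stripe_freq * n / N)
          for n in range(N)]

spectrum = dft(signal)
# The stripe shows up as a symmetric pair of bright bins; "painting
# them out" in the spectrum image corresponds to zeroing those bins.
spectrum[stripe_freq] = 0
spectrum[N - stripe_freq] = 0

cleaned = idft(spectrum)
```

After zeroing the two stripe bins, the reconstructed scanline is flat again, which is exactly why editing the transform image and reversing it suppresses the banding.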
65,646
I noticed when it snows and salt is melting the snow on the roads in the city that the air temperature feels a lot cooler than the temperature indicated on the thermometer. A 40-degree-Fahrenheit temperature feels like 30 degrees. Is it the moisture from the melting snow or rain that makes the air feel ten degrees cooler, or is it the salt brine (melted snow or ice and salt)? Is it similar to adding salt to the ice in a hand-powered ice cream churn to make the cream container colder to make ice cream?
2017/01/03
[ "https://chemistry.stackexchange.com/questions/65646", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/39482/" ]
It depends on the exactness of your thermometer and your definition of ‘nearby’. Since adding salt to ice causes the overall mixture to assume a liquid phase rather than the previous solid phase, melting enthalpy is required to liquefy the ice/salt mixture. This melting enthalpy is typically supplied by drawing heat from the surroundings, i.e. due to the melting everything gets colder. This, naturally, also draws heat from the surrounding (close) air. However, there are no *direct* effects on the surrounding air. You can read more about the effect of salt added to ice in [this question](https://chemistry.stackexchange.com/q/5748).
No. It merely lowers the melting temperature of the ice. If you've ever noticed, salt water doesn't freeze until a much lower temperature than regular water; this is for the same reason.
65,646
I noticed when it snows and salt is melting the snow on the roads in the city that the air temperature feels a lot cooler than the temperature indicated on the thermometer. A 40-degree-Fahrenheit temperature feels like 30 degrees. Is it the moisture from the melting snow or rain that makes the air feel ten degrees cooler, or is it the salt brine (melted snow or ice and salt)? Is it similar to adding salt to the ice in a hand-powered ice cream churn to make the cream container colder to make ice cream?
2017/01/03
[ "https://chemistry.stackexchange.com/questions/65646", "https://chemistry.stackexchange.com", "https://chemistry.stackexchange.com/users/39482/" ]
As water melts due to adding a salt (AKA melting point depression), it [absorbs energy from the environment](http://www.clemson.edu/ces/chemistry/organic/Labs/2270Docs/MeltingPoint.pdf) and the temperature of the salt/ice/water mixture does, indeed, drop. Whether that is sufficient to cool the air above the ice *perceptibly*, though, is a matter of opinion. Another effect that might make the air *feel* colder is increased humidity, particularly as the salt slush is splashed into the air by vehicles. You could perform an experiment yourself to determine the air temperature above two containers, one with fresh snow and the other with snow and salt. If you do, please post your results as an answer!
No. It merely lowers the melting temperature of the ice. If you've ever noticed, salt water doesn't freeze until a much lower temperature than regular water; this is for the same reason.
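The freezing-point depression these answers describe can be put into rough numbers with the ideal dilute-solution formula dT = i * Kf * m. This is a simplified sketch: it assumes complete dissociation of NaCl (van 't Hoff factor of about 2) and ideal-solution behaviour, which real road brine only approximates.

```python
KF_WATER = 1.86   # cryoscopic constant of water, K*kg/mol
M_NACL = 58.44    # molar mass of NaCl, g/mol

def freezing_point_c(grams_salt, kg_water, vant_hoff=2.0):
    """Ideal freezing point (in Celsius) of brine via dT = i * Kf * m.

    Assumes complete dissociation (van 't Hoff factor ~2 for NaCl)
    and ideal-solution behaviour; real brines deviate somewhat.
    """
    molality = (grams_salt / M_NACL) / kg_water  # mol solute / kg water
    return 0.0 - vant_hoff * KF_WATER * molality
```

So 100 g of salt per kilogram of meltwater keeps the brine liquid down to roughly -6.4 C, and the enthalpy needed to melt that ice is drawn from the road surface and the surroundings, which is why the salted slush (and the ice-cream churn) gets colder than 0 C.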
477,759
We currently have a quite complex business application that contains a huge amount of JavaScript code for making the user interface & interaction feel as close to working with a traditional desktop application as possible (since that's what our users want). Over the years, this JavaScript code has grown and grown, making it hard to manage & maintain and making it ever more likely that adding new functionality will break some existing one. Needless to say, lots of this code also isn't state of the art anymore. Thus, we have some ongoing discussion whether the client-side part of the application should be written anew in either Flex or Silverlight, or written anew with some state-of-the-art JavaScript framework like jQuery, or whether we should simply carry on with what we have and gradually try to replace the worst bits of the existing code. What makes this even harder to decide is that writing the UI anew will probably cost us 6-12 person-months. I'd like to hear your thoughts on that issue (maybe some of you have already had to make a similar decision). EDIT: To answer some of the questions that came up with the answers: The back-end code is written in C#, the target audience are (usually) non-technical users from the companies we sell the software to (not the general public, but not strictly internal users either), the software 'only' has to run in desktop browsers but not necessarily on mobile devices, and the client app is a full-blown UI.
2009/01/25
[ "https://Stackoverflow.com/questions/477759", "https://Stackoverflow.com", "https://Stackoverflow.com/users/56505/" ]
This decision is usually less about the technology, and more about your skill sets and comfort zones. If you have guys that eat and breathe JavaScript, but know nothing about .NET or Flash/Flex, then there's nothing wrong with sticking with JavaScript and leaning on a library like jQuery or Prototype. If you have skills in either of the others, then you might get a quicker result using Silverlight or Flex, as you get quite a lot of functionality "for free" with both of them.
Check this [comparison table](http://askmeflash.com/article_m.php?p=article&id=11) for Flex vs JavaScript.
477,759
We currently have a quite complex business application that contains a huge amount of JavaScript code for making the user interface & interaction feel as close to working with a traditional desktop application as possible (since that's what our users want). Over the years, this JavaScript code has grown and grown, making it hard to manage & maintain and making it ever more likely that adding new functionality will break some existing one. Needless to say, lots of this code also isn't state of the art anymore. Thus, we have some ongoing discussion whether the client-side part of the application should be written anew in either Flex or Silverlight, or written anew with some state-of-the-art JavaScript framework like jQuery, or whether we should simply carry on with what we have and gradually try to replace the worst bits of the existing code. What makes this even harder to decide is that writing the UI anew will probably cost us 6-12 person-months. I'd like to hear your thoughts on that issue (maybe some of you have already had to make a similar decision). EDIT: To answer some of the questions that came up with the answers: The back-end code is written in C#, the target audience are (usually) non-technical users from the companies we sell the software to (not the general public, but not strictly internal users either), the software 'only' has to run in desktop browsers but not necessarily on mobile devices, and the client app is a full-blown UI.
2009/01/25
[ "https://Stackoverflow.com/questions/477759", "https://Stackoverflow.com", "https://Stackoverflow.com/users/56505/" ]
In all honesty, I would refactor the old JavaScript code and not rewrite the application. Since you are asking about which platform to put it in, I would guess that your team isn't an expert in any of them (not slamming the team, it's just a simple fact that you have to consider when making a decision). This will work against you, as you'll have double duty: rewriting and learning how to do things on the new platform. By keeping it in JavaScript, you can slowly introduce a framework if you choose and do it iteratively (replace one section of code, test it, release it, and fix any bugs). This will allow you to do it at a slower pace and get feedback along the way. That way too, if the project is canceled partway through, you aren't out all the work, because the updated code is being used by the end users. Remember that the waterfall model, which is essentially what a full swap-out will be, almost never works. As much as I hate to admit this, as it is always the most fun for developers, shifting platforms and replacing an entire system at once rarely works. There are countless examples of this, Netscape for one. [Here is the post from Spolsky on it.](http://www.joelonsoftware.com/articles/fog0000000027.html) (I would also recommend the book [Dreaming in Code](https://rads.stackoverflow.com/amzn/click/com/1400082463). It is an excellent example of a software project that failed, and of how and why.) Remember: to rewrite a system from scratch you are essentially going to have to go through every line of code and figure out what it does and why. At first you think you can skip it, but eventually it comes down to this. Like you said, your code is old, and that means there are most likely hacks in it to get something done. Some of these you can ignore, and others will be, "I didn't know the system needed it to do that."
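One incremental step this answer describes can be sketched in plain JavaScript (all names here are hypothetical, not from the asker's codebase): as you touch each legacy section, move it behind a namespaced module with private state, and leave the old global as a thin shim so existing callers keep working.

```javascript
// Hypothetical legacy globals, typical of script that has grown over the years.
var globalTaxRate = 0.2;
function calcTotal(net) { return net + net * globalTaxRate; }

// One refactoring step: move that section behind a namespace with private
// state, so new code gets a clean seam to call into.
var App = App || {};
App.billing = (function () {
  var taxRate = 0.2; // no longer a mutable global

  return {
    total: function (net) { return net + net * taxRate; }
  };
})();

// Turn the old global into a shim so untouched callers keep working.
calcTotal = function (net) { return App.billing.total(net); };
```

Because each section is replaced, tested, and released independently, the project delivers value even if it is stopped partway through.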
These things spring to mind: * As you have a .Net backend and you have some ability to force your customers onto a specific platform, Silverlight is an option; * Since your client is a full-blown UI, you want widgets and possibly other features like drag and drop; * I haven't seen any requirements that to me would justify starting over (which often doesn't work out) in Flex/Silverlight (e.g. streaming video, SVG support). Added to your team's familiarity with Javascript, I think you can't make a compelling case for doing it in anything other than Javascript. But of course Javascript is lots of things and there are [lots of Javascript frameworks](http://ui.jquery.com/demos). The most important divider is whether your intent is to "decorate" a set of Web pages or you need a full set of widgets to create a desktop-like application on the Web. Your question indicates it is the latter. As such--and I may get downvoted for saying this--I don't think jQuery is the answer, and I say this as someone who loves jQuery. jQuery (imho) is great to enhance Web pages and abstract cross-browser low-level functionality, but the most important factor for complex UI development is this: **It's all about the widgets.** And yes, I'm aware of [jQuery UI](http://ui.jquery.com/demos), but it's a lot sparser than the others when it comes to widgets. I suggest you take a look at the samples and widget galleries of some frameworks: * [YUI Examples Gallery](http://developer.yahoo.com/yui/examples/); * [ExtJS demo](http://extjs.com/deploy/dev/examples/samples.html); and * [SmartClient feature explorer](http://www.smartclient.com/index.jsp#_Welcome). The others (jQuery, Dojo, Mootools, Prototype) are more "compact" frameworks arguably less suited to your purpose. Also consider the license of each framework when making your decision.
My thoughts on the above three are: * ExtJS has somewhat angered the community in that it started out as LGPL but had a [controversial license change](http://extjs.com/forum/showthread.php?t=33096) (that thread is at 76 pages!) to GPL/commercial at version 2.1. The problem with that is that the community no longer has active participation in the framework's development. Not the mainline version, anyway. This means it's being developed and supported by a small team (possibly one person) rather than the community. IMHO it's not worth paying a commercial license for that, and GPL is probably prohibitive in your situation; * YUI is supported by Yahoo and available under a far more permissive and far less invasive BSD license. It's mature, well-used and well worth serious consideration; and * SmartClient impresses me a lot. It has perhaps the most permissive license of all (LGPL), is roughly seven years old, and has an incredibly impressive array of widgets available. Check out their feature explorer. Your decision should be based on how you get as much of your application "for free" as possible. You don't want to spend valuable developer time doing things like: * Coding UI widgets like trees and accordions; * Testing and fixing cross-browser Javascript and CSS issues; * Creating homegrown frameworks that greatly duplicate what existing frameworks do and do well. I would seriously look at one of the above three as your path forward.
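To illustrate the kind of low-level plumbing these frameworks absorb for you, here is a rough sketch (not taken from any of the libraries above) of the classic cross-browser event-attach shim that each of them implements internally so you never have to:

```javascript
// Simplified cross-browser event helper: picks the W3C API when available,
// falls back to old IE's attachEvent, then to a plain on<type> property.
function addEvent(el, type, handler) {
  if (el.addEventListener) {            // W3C browsers
    el.addEventListener(type, handler, false);
  } else if (el.attachEvent) {          // old IE
    el.attachEvent("on" + type, handler);
  } else {                              // last-ditch fallback
    el["on" + type] = handler;
  }
}
```

Multiply this by every DOM quirk (event objects, box models, opacity, XHR) and the "for free" argument becomes concrete.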
My opinion on this one's pretty simple: unless the app needs to be accessible publicly, unless it needs to be search-engine optimized and findable, and/or there's an otherwise compelling case for its having to remain strictly text-based, then the chips are stacked in favor of rich-client runtimes like Flash or Silverlight right out of the gate. A big reason, if not the biggest, is that they eliminate the complexities of developing for multiple browsers and platforms. Again: they **eliminate the runtime-environment variable**. No more debugging old versions of Netscape and IE, no more object detection and consequent branching, no more wacky CSS hacks -- one codebase, and you're done. Offloading the runtime environment to Adobe or Microsoft will save you time, money and headaches, all else equal. (Sure, there's YUI, jQuery, etc., but they don't eliminate that variable -- they just abstract it. And they don't abstract *all* of it, either -- only some of it; ultimately, it's still up to you to test, debug, retest, debug, repeat.) Of course, your situation's a bit more complicated by the existing-codebase problem, and it's difficult to say definitively which way you should go, because only you've got the code, and we're just geeks with opinions. But assuming, just by your asking the question, that a refactoring of your existing codebase would involve a significant-enough undertaking as to warrant even considering an alternative (and probably comparatively foreign) technology in the first place, which it sounds like it would, then my response is that your curiosity is well-placed, and that you should give them both a serious look before making a decision. For my part, I'm a longtime server-side guy, ASP.NET/C# for the past several years, and I've written many a text-based line-of-business app in my time, the last few with strong emphasis on delivering rich sovereign UIs with JavaScript. I've also spent the last couple of years almost exclusively with Flex.
I've got experience in both worlds. And I can tell you without hesitation that right now, **it's everyone else's job to beat Flex**: it's just an amazingly versatile and productive product, and for line-of-business apps, it remains leaps and bounds ahead of Silverlight. I just can't recommend it highly enough; the data-binding and event-handling features alone are incredible time-savers, to say nothing of the complete freedom you'll have over layout, animation, etc. The list goes on. So, my advice: **take a long, careful look at Flex**. In the end, you might find a ground-up rewrite is just too massive an undertaking to justify, and that's fine -- only you can make that determination. (And to be fair, you've just as much ability to make a mess of a Flex project as you do with a JavaScript project -- I know. I've done it.) But all told, Flex is probably the least-limiting, most flexible, most feature-rich and productive option out there right now, so it's certainly worth considering. Good luck!
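For readers who haven't used Flex, its data binding amounts to properties that notify observers when they change (declared with `[Bindable]` metadata in MXML/ActionScript). A toy JavaScript equivalent, purely illustrative and not how Flex is implemented, shows the idea:

```javascript
// Toy observable property: setting the value re-runs every bound listener,
// which is the essence of what Flex's data binding does declaratively.
function bindable(initial) {
  var value = initial;
  var listeners = [];
  return {
    get: function () { return value; },
    set: function (next) {
      value = next;
      listeners.forEach(function (fn) { fn(next); });
    },
    bind: function (fn) { listeners.push(fn); fn(value); }
  };
}

// e.g. keeping a "view" string in sync with a model value:
var price = bindable(10);
var label = "";
price.bind(function (v) { label = "Price: " + v; });
price.set(42); // label is now "Price: 42"
```

Hand-wiring this (plus disposal, chained bindings, and change events) for every field is exactly the boilerplate that Flex removes in line-of-business apps.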
Any javascript you have that has been developed 'over the years' probably doesn't look anything like what's possible today. You undoubtedly have a lot of useful code there, nonetheless. So my recommendation would be to re-write in javascript using jQuery and make use of one of the available GUI add-ons; perhaps look at Yahoo's stuff. You will also be targeting the widest audience this way.
We have developed an extremely rich application using EXTJS with C# and a some C++ on the server. Not only do we have clients who are happy with the interface in their desktop browsers but with very little tweaking to the Javascript we were able to provide web browser support. Also, we have clients in third-world countries who cannot use Flash or Silverlight apps due to their field personnel using kiosks in internet cafes (many of which don't have Flash installed - forget Silverlight!). I think these issues and others make up for the difficulty of coding a complex app in javascript...
477,759
We currently have a quite complex business application that contains a huge lot of JavaScript code for making the user interface & interaction feel as close to working with a traditional desktop application as possible (since that's what our users want). Over the years, this Javascript code has grown and grown, making it hard to manage & maintain and making it ever more likely that adding new functionallity will break some existing one. Needless to say, lots of this code also isn't state of the art anymore. Thus, we have some ongoing discussion whether the client-side part of the application should be written anew in either Flex or Silverlight, or written anew with some state of the art JavaScript framework like jQuery, or whether we should simply carry on with what we have and gradually try to replace the worst bits of the existing code. What makes this even harder to decide is that writing the UI anew will probable cost us 6-12 person months. I'd like to hear your thoughts on that issue (maybe some of you have already had to make a similar decission). EDIT: To answer some of the questions that came up with the answers: The back-end code is written in C#, the target audience are (usually) non-technical users from the companies we sell the software to (not the general public, but not strictly internal users either), the software 'only' has to run in desktop browsers but not necessarily on mobile devices, and the client app is a full-blown UI.
2009/01/25
[ "https://Stackoverflow.com/questions/477759", "https://Stackoverflow.com", "https://Stackoverflow.com/users/56505/" ]
In all honesty, I would refactor the old JavaScript code and not rewrite the application. Since you are asking about which platform to put it in, I would guess that your team isn't an expert in any of them (not slamming the team, it's just a simple fact that you have to consider when making a decision). This will work against you as you'll have double duty rewriting and learning how to do things on the new platform. By keeping it in JavaScript, you can slowly introduce a framework if you choose and do it iteratively (Replace on section of code, test it, release it, and fix any bugs). This will allow you to do it at a slower pace and get feedback along the way. That way too, if the project is canceled part way through, you aren't out all the work, because the updated code is being used by the end users. Remember the waterfall model, which is essentially what a full swap out of will be almost never works. As much as I hate to admit this, as it is always the most fun for developers, shifting platforms, and replacing an entire system at once rarely works. There are countless examples of this, Netscape for one. [Here is the post from Spolsky on it.](http://www.joelonsoftware.com/articles/fog0000000027.html) (I would also recommend the book [Dreaming in Code](https://rads.stackoverflow.com/amzn/click/com/1400082463). It is an excellent example of a software project that failed and how and why). Remember to rewrite a system from scratch you are essentially going to have to go through every line of code and figure what it does and why. At first you think you can skip it, but eventually it comes down to this. Like you said, your code is old, and that means there are most likely hacks in it to get something done. Some of these you can ignore, and others will be, "I didn't know the system needed it to do that."
My opinion on this one's pretty simple: unless the app needs to be accessible publicly, unless it needs to be search-engine optimized and findable, and/or there's an otherwise compelling case for its having to remain strictly text-based, then the chips are stacked in favor of rich-client runtimes like Flash or Silverlight right out of the gate. A big reason, if not the biggest, is that they eliminate the complexities of developing for multiple browsers and platforms. Again: they **eliminate the runtime-environment variable**. No more debugging old versions of Netscape and IE, no more object detection and consequent branching, no more wacky CSS hacks -- one codebase, and you're done. Offloading the runtime environment to Adobe or Microsoft will save you time, money and headaches, all else equal. (Sure, there's YUI, jQuery, etc., but they don't eliminate that variable -- they just abstract it. And they don't abstract *all* of it, either -- only some of it; ultimately, it's still up to you to test, debug, retest, debug, repeat.) Of course, your situation's a bit more complicated by the existing-codebase problem, and it's difficult to say definitively which way you should go, because only you've got the code, and we're just geeks with opinions. But assuming, just by your asking the question, that a refactoring of your existing codebase would involve a significant-enough undertaking as to warrant even considering an alternative (and probably comparatively foreign) technology in the first place, which it sounds like it would, then my response is that your curiosity is well-placed, and that you should give them both a serious look before making a decision. For my part, I'm a longtime server-side guy, ASP.NET/C# for the past several years, and I've written many a text-based line-of-business app in my time, the last few with strong emphasis on delivering rich sovereign UIs with JavaScript. I've also spent the last couple of years almost exclusively with Flex.
I've got experience in both worlds. And I can tell you without hesitation that right now, **it's everyone else's job to beat Flex**: it's just an amazingly versatile and productive product, and for line-of-business apps, it remains leaps and bounds ahead of Silverlight. I just can't recommend it highly enough; the data-binding and event-handling features alone are incredible time-savers, to say nothing of the complete freedom you'll have over layout, animation, etc. The list goes on. So, my advice: **take a long, careful look at Flex**. In the end, you might find a ground-up rewrite is just too massive an undertaking to justify, and that's fine -- only you can make that determination. (And to be fair, you've just as much ability to make a mess of a Flex project as you do with a JavaScript project -- I know. I've done it.) But all told, Flex is probably the least-limiting, most flexible, most feature-rich and productive option out there right now, so it's certainly worth considering. Good luck!
477,759
We currently have a quite complex business application that contains a huge amount of JavaScript code for making the user interface & interaction feel as close to working with a traditional desktop application as possible (since that's what our users want). Over the years, this JavaScript code has grown and grown, making it hard to manage & maintain and making it ever more likely that adding new functionality will break some existing functionality. Needless to say, lots of this code also isn't state of the art anymore. Thus, we have some ongoing discussion whether the client-side part of the application should be written anew in either Flex or Silverlight, or written anew with some state-of-the-art JavaScript framework like jQuery, or whether we should simply carry on with what we have and gradually try to replace the worst bits of the existing code. What makes this even harder to decide is that writing the UI anew will probably cost us 6-12 person-months. I'd like to hear your thoughts on that issue (maybe some of you have already had to make a similar decision). EDIT: To answer some of the questions that came up with the answers: The back-end code is written in C#, the target audience are (usually) non-technical users from the companies we sell the software to (not the general public, but not strictly internal users either), the software 'only' has to run in desktop browsers but not necessarily on mobile devices, and the client app is a full-blown UI.
2009/01/25
[ "https://Stackoverflow.com/questions/477759", "https://Stackoverflow.com", "https://Stackoverflow.com/users/56505/" ]
In all honesty, I would refactor the old JavaScript code and not rewrite the application. Since you are asking about which platform to put it in, I would guess that your team isn't an expert in any of them (not slamming the team, it's just a simple fact that you have to consider when making a decision). This will work against you, as you'll have double duty rewriting and learning how to do things on the new platform. By keeping it in JavaScript, you can slowly introduce a framework if you choose and do it iteratively (replace one section of code, test it, release it, and fix any bugs). This will allow you to do it at a slower pace and get feedback along the way. That way too, if the project is canceled part way through, you aren't out all the work, because the updated code is being used by the end users. Remember that the waterfall model, which is essentially what a full swap-out will be, almost never works. As much as I hate to admit this, as it is always the most fun for developers, shifting platforms and replacing an entire system at once rarely works. There are countless examples of this, Netscape for one. [Here is the post from Spolsky on it.](http://www.joelonsoftware.com/articles/fog0000000027.html) (I would also recommend the book [Dreaming in Code](https://rads.stackoverflow.com/amzn/click/com/1400082463). It is an excellent example of a software project that failed, and how and why.) Remember, to rewrite a system from scratch you are essentially going to have to go through every line of code and figure out what it does and why. At first you think you can skip it, but eventually it comes down to this. Like you said, your code is old, and that means there are most likely hacks in it to get something done. Some of these you can ignore, and others will be, "I didn't know the system needed it to do that."
We have developed an extremely rich application using EXTJS with C# and some C++ on the server. Not only do we have clients who are happy with the interface in their desktop browsers, but with very little tweaking to the JavaScript we were able to provide web browser support. Also, we have clients in third-world countries who cannot use Flash or Silverlight apps due to their field personnel using kiosks in internet cafes (many of which don't have Flash installed - forget Silverlight!). I think these issues and others make up for the difficulty of coding a complex app in JavaScript...
103,625
I want to try [this recipe for Vegan Lox by Tasty](https://tasty.co/recipe/vegan-lox). Step 5 is > > Use a vegetable peeler to shave the carrots lengthwise into ribbons. > Massage with salt. > > > I don't understand what "Massage with salt" means and I don't see anything happening in the video. Do they just mean "rub in with salt"?
2019/11/21
[ "https://cooking.stackexchange.com/questions/103625", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/61364/" ]
Yes, this is mostly the same as rubbing in with salt, but with a bit more *intensity* and physical contact with the food. While rubbing can be taken as applying one *coating*, massaging warrants ensuring that the salt is mixed well with the food. It's done so that the sprinkled salt is spread uniformly.
Yes. In that case, the word massage (IMO) is used wrongly; rub would be more appropriate. It does not make sense in the case of vegetables, but it sure does with meat.
103,625
I want to try [this recipe for Vegan Lox by Tasty](https://tasty.co/recipe/vegan-lox). Step 5 is > > Use a vegetable peeler to shave the carrots lengthwise into ribbons. > Massage with salt. > > > I don't understand what "Massage with salt" means and I don't see anything happening in the video. Do they just mean "rub in with salt"?
2019/11/21
[ "https://cooking.stackexchange.com/questions/103625", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/61364/" ]
I'd argue that 'massage' is the right word in this case. I've seen this technique a lot in Japanese cooking -- you cut up the vegetables, sprinkling with salt as you go (so there are layers of salt in between layers of vegetables), then you really get in there and basically massage (knead?) the pile of vegetables with the salt, so that the salt not only is spread through the pile of vegetables, but there's some mechanical abrasion happening, too. You then typically let the vegetables sit for a while, and then you rinse them off. When people talk about 'rubs', it's often just a coating that's at most patted onto things (like for ribs), but there isn't the extended period of mechanical manipulation that you'd expect for 'massage' or 'knead'. If you ever make sushi, I highly recommend trying it with carrots. The carrots will lose some of their crispness, so that you can have large sticks of carrots without them being too crunchy compared to the rest of the fillings. It's also useful for other firm vegetables that you're going to use raw in a salad. This also works well to pre-wilt your cabbage before you make coleslaw -- the cabbage will give up much of its moisture that would otherwise end up in the final dish. (I think this was the first time I saw it -- on an episode of Good Eats)
Yes, this is mostly the same as rubbing in with salt, but with a bit more *intensity* and physical contact with the food. While rubbing can be taken as applying one *coating*, massaging warrants ensuring that the salt is mixed well with the food. It's done so that the sprinkled salt is spread uniformly.
103,625
I want to try [this recipe for Vegan Lox by Tasty](https://tasty.co/recipe/vegan-lox). Step 5 is > > Use a vegetable peeler to shave the carrots lengthwise into ribbons. > Massage with salt. > > > I don't understand what "Massage with salt" means and I don't see anything happening in the video. Do they just mean "rub in with salt"?
2019/11/21
[ "https://cooking.stackexchange.com/questions/103625", "https://cooking.stackexchange.com", "https://cooking.stackexchange.com/users/61364/" ]
I'd argue that 'massage' is the right word in this case. I've seen this technique a lot in Japanese cooking -- you cut up the vegetables, sprinkling with salt as you go (so there are layers of salt in between layers of vegetables), then you really get in there and basically massage (knead?) the pile of vegetables with the salt, so that the salt not only is spread through the pile of vegetables, but there's some mechanical abrasion happening, too. You then typically let the vegetables sit for a while, and then you rinse them off. When people talk about 'rubs', it's often just a coating that's at most patted onto things (like for ribs), but there isn't the extended period of mechanical manipulation that you'd expect for 'massage' or 'knead'. If you ever make sushi, I highly recommend trying it with carrots. The carrots will lose some of their crispness, so that you can have large sticks of carrots without them being too crunchy compared to the rest of the fillings. It's also useful for other firm vegetables that you're going to use raw in a salad. This also works well to pre-wilt your cabbage before you make coleslaw -- the cabbage will give up much of its moisture that would otherwise end up in the final dish. (I think this was the first time I saw it -- on an episode of Good Eats)
Yes. In that case, the word massage (IMO) is used wrongly; rub would be more appropriate. It does not make sense in the case of vegetables, but it sure does with meat.
556,625
We are experiencing an issue with our SonicWall NSA 2400 firewall. We have a secondary gateway over IPSec set up in the event of a failure of our main ISP (which is unfortunately common). The secondary gateway is a 4G connection through Verizon, and the cost grows as data usage increases. The firewall switches over to the secondary gateway properly, but then *sometimes* will not renegotiate and switch back to the primary when it comes back up. Hitting 'Renegotiate' on both sides seems to fix the issue, but I am wondering if there is something I am missing that may be causing it to stay on the secondary. I don't think this has anything to do with the settings, as it seems to work 50% of the time, but here they are anyway in case someone has some tips on how to ensure switching back to the primary when the connection is restored. Policy Type: Site to Site; Auth Method: IKE using Preshared Secret. IKE Phase 1 proposal: Exchange: Main Mode; DH Group: Group 1; Encrypt: AES-256; Auth: SHA1; Lifetime: 3600 seconds. Phase 2 proposal: Protocol: ESP; Encrypt: AES-256; Auth: SHA1; Lifetime: 900 seconds. Keep Alive is enabled, and Preempt Secondary Gateway is enabled at a 120-second interval.
2013/11/18
[ "https://serverfault.com/questions/556625", "https://serverfault.com", "https://serverfault.com/users/200261/" ]
[EDIT after understanding the question better] Not having any experience of the ELB, I still think this sounds suspiciously like the 503 error which may be thrown when Apache fronts a Tomcat and floods the connection. The effect is that if Apache delivers more connection requests than can be processed by the backend, the backend input queues fill up until no more connections can be accepted. When that happens, the corresponding output queues of Apache start filling up. When the queues are full, Apache throws a 503. It would follow that the same could happen when Apache is the backend, and the frontend delivers at such a rate as to make the queues fill up. The (hypothetical) solution is to size the input connectors of the backend and the output connectors of the frontend. This turns into a balancing act between the anticipated flooding level and the available RAM of the computers involved. So as this happens, check your MaxClients settings and monitor your busy workers in Apache (mod\_status). Do the same, if possible, with whatever ELB has that corresponds to Tomcat's connector backlog, maxThreads, etc. In short, look at everything concerning the input queues of Apache and the output queues of the ELB. Although I fully understand it is not directly applicable, this link contains a sizing guide for the Apache connector. You would need to research the corresponding ELB queue technicalities, then do the math: <http://www.cubrid.org/blog/dev-platform/maxclients-in-apache-and-its-effect-on-tomcat-during-full-gc/> As observed in the commentary below, a spike in traffic is not the only way to overwhelm the Apache connector. If some requests are served more slowly than others, a higher ratio of those can also lead to the connector queues filling up. This was true in my case. Also, when this happened to me I was baffled that I had to restart the Apache service in order to not get served 503s again. Simply waiting out the connector flooding was not enough.
I never got that figured out, but one can speculate that Apache was serving from its cache, perhaps? After increasing the number of workers and the corresponding pre-fork MaxClients settings (this was multithreaded Apache on Windows, which has a couple of other directives for the queues, if I remember correctly), the 503 problem disappeared. I actually didn't do the math, but just tweaked the values up until I could observe a wide margin to the peak consumption of the queue resources. I let it go at that. Hope this was of some help.
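The "do the math" sizing mentioned in the answer above amounts to a back-of-the-envelope division. The numbers below (total RAM, OS/backend reserve, per-worker footprint) are illustrative assumptions, not measured values from either answer:

```python
# Back-of-the-envelope MaxClients sizing for Apache prefork: reserve
# memory for the OS and backend processes, then divide the remainder
# by the estimated resident size of one Apache worker process.

def max_clients(total_ram_mb: int, reserved_mb: int, per_worker_mb: int) -> int:
    """Return a conservative MaxClients value, never below 1."""
    usable_mb = total_ram_mb - reserved_mb
    return max(1, usable_mb // per_worker_mb)

# e.g. a 4 GB box, 1 GB reserved, ~25 MB resident per worker
print(max_clients(4096, 1024, 25))  # -> 122
```

Observing the actual per-worker footprint (e.g. via mod\_status and your OS's process tools) and re-running the division is closer to what the linked sizing guide describes than guessing the inputs.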
You can up the values of the ELB health checker, so a single slow response won't pull a server from the ELB. Better to have a few users get "service unavailable" than the site being down for everyone. EDIT: We are able to get away without pre-warming the cache by upping the health check timeout to 25 seconds... after 1-2 minutes, the site is responsive as hell. EDIT: Just launch a bunch of on-demand instances, and when your monitoring tools show management just how fast you are, then just prepay Amazon RIs :P EDIT: It is possible a single backend ELB-registered instance is not enough. Just launch a few more and register them with the ELB, and that will help you narrow down your problem.
556,625
We are experiencing an issue with our SonicWall NSA 2400 firewall. We have a secondary gateway over IPSec set up in the event of a failure of our main ISP (which is unfortunately common). The secondary gateway is a 4G connection through Verizon, and the cost grows as data usage increases. The firewall switches over to the secondary gateway properly, but then *sometimes* will not renegotiate and switch back to the primary when it comes back up. Hitting 'Renegotiate' on both sides seems to fix the issue, but I am wondering if there is something I am missing that may be causing it to stay on the secondary. I don't think this has anything to do with the settings, as it seems to work 50% of the time, but here they are anyway in case someone has some tips on how to ensure switching back to the primary when the connection is restored. Policy Type: Site to Site; Auth Method: IKE using Preshared Secret. IKE Phase 1 proposal: Exchange: Main Mode; DH Group: Group 1; Encrypt: AES-256; Auth: SHA1; Lifetime: 3600 seconds. Phase 2 proposal: Protocol: ESP; Encrypt: AES-256; Auth: SHA1; Lifetime: 900 seconds. Keep Alive is enabled, and Preempt Secondary Gateway is enabled at a 120-second interval.
2013/11/18
[ "https://serverfault.com/questions/556625", "https://serverfault.com", "https://serverfault.com/users/200261/" ]
I just ran into this issue myself. The Amazon ELB will return this error if there are no healthy instances. Our sites were misconfigured, so the ELB healthcheck was failing, which caused the ELB to take the two servers out of rotation. With zero healthy sites, the ELB returned 503 Service Unavailable: Back-end server is at capacity.
[EDIT after understanding the question better] Not having any experience of the ELB, I still think this sounds suspiciously like the 503 error which may be thrown when Apache fronts a Tomcat and floods the connection. The effect is that if Apache delivers more connection requests than can be processed by the backend, the backend input queues fill up until no more connections can be accepted. When that happens, the corresponding output queues of Apache start filling up. When the queues are full, Apache throws a 503. It would follow that the same could happen when Apache is the backend, and the frontend delivers at such a rate as to make the queues fill up. The (hypothetical) solution is to size the input connectors of the backend and the output connectors of the frontend. This turns into a balancing act between the anticipated flooding level and the available RAM of the computers involved. So as this happens, check your MaxClients settings and monitor your busy workers in Apache (mod\_status). Do the same, if possible, with whatever ELB has that corresponds to Tomcat's connector backlog, maxThreads, etc. In short, look at everything concerning the input queues of Apache and the output queues of the ELB. Although I fully understand it is not directly applicable, this link contains a sizing guide for the Apache connector. You would need to research the corresponding ELB queue technicalities, then do the math: <http://www.cubrid.org/blog/dev-platform/maxclients-in-apache-and-its-effect-on-tomcat-during-full-gc/> As observed in the commentary below, a spike in traffic is not the only way to overwhelm the Apache connector. If some requests are served more slowly than others, a higher ratio of those can also lead to the connector queues filling up. This was true in my case. Also, when this happened to me I was baffled that I had to restart the Apache service in order to not get served 503s again. Simply waiting out the connector flooding was not enough.
I never got that figured out, but one can speculate that Apache was serving from its cache, perhaps? After increasing the number of workers and the corresponding pre-fork MaxClients settings (this was multithreaded Apache on Windows, which has a couple of other directives for the queues, if I remember correctly), the 503 problem disappeared. I actually didn't do the math, but just tweaked the values up until I could observe a wide margin to the peak consumption of the queue resources. I let it go at that. Hope this was of some help.
556,625
We are experiencing an issue with our SonicWall NSA 2400 firewall. We have a secondary gateway over IPSec set up in the event of a failure of our main ISP (which is unfortunately common). The secondary gateway is a 4G connection through Verizon, and the cost grows as data usage increases. The firewall switches over to the secondary gateway properly, but then *sometimes* will not renegotiate and switch back to the primary when it comes back up. Hitting 'Renegotiate' on both sides seems to fix the issue, but I am wondering if there is something I am missing that may be causing it to stay on the secondary. I don't think this has anything to do with the settings, as it seems to work 50% of the time, but here they are anyway in case someone has some tips on how to ensure switching back to the primary when the connection is restored. Policy Type: Site to Site; Auth Method: IKE using Preshared Secret. IKE Phase 1 proposal: Exchange: Main Mode; DH Group: Group 1; Encrypt: AES-256; Auth: SHA1; Lifetime: 3600 seconds. Phase 2 proposal: Protocol: ESP; Encrypt: AES-256; Auth: SHA1; Lifetime: 900 seconds. Keep Alive is enabled, and Preempt Secondary Gateway is enabled at a 120-second interval.
2013/11/18
[ "https://serverfault.com/questions/556625", "https://serverfault.com", "https://serverfault.com/users/200261/" ]
[EDIT after understanding the question better] Not having any experience of the ELB, I still think this sounds suspiciously like the 503 error which may be thrown when Apache fronts a Tomcat and floods the connection. The effect is that if Apache delivers more connection requests than can be processed by the backend, the backend input queues fill up until no more connections can be accepted. When that happens, the corresponding output queues of Apache start filling up. When the queues are full, Apache throws a 503. It would follow that the same could happen when Apache is the backend, and the frontend delivers at such a rate as to make the queues fill up. The (hypothetical) solution is to size the input connectors of the backend and the output connectors of the frontend. This turns into a balancing act between the anticipated flooding level and the available RAM of the computers involved. So as this happens, check your MaxClients settings and monitor your busy workers in Apache (mod\_status). Do the same, if possible, with whatever ELB has that corresponds to Tomcat's connector backlog, maxThreads, etc. In short, look at everything concerning the input queues of Apache and the output queues of the ELB. Although I fully understand it is not directly applicable, this link contains a sizing guide for the Apache connector. You would need to research the corresponding ELB queue technicalities, then do the math: <http://www.cubrid.org/blog/dev-platform/maxclients-in-apache-and-its-effect-on-tomcat-during-full-gc/> As observed in the commentary below, a spike in traffic is not the only way to overwhelm the Apache connector. If some requests are served more slowly than others, a higher ratio of those can also lead to the connector queues filling up. This was true in my case. Also, when this happened to me I was baffled that I had to restart the Apache service in order to not get served 503s again. Simply waiting out the connector flooding was not enough.
I never got that figured out, but one can speculate that Apache was serving from its cache, perhaps? After increasing the number of workers and the corresponding pre-fork MaxClients settings (this was multithreaded Apache on Windows, which has a couple of other directives for the queues, if I remember correctly), the 503 problem disappeared. I actually didn't do the math, but just tweaked the values up until I could observe a wide margin to the peak consumption of the queue resources. I let it go at that. Hope this was of some help.
It's a few years late, but hopefully this helps someone out. I was seeing this error when the instance behind the ELB did not have a proper public IP assigned. I needed to manually create an Elastic IP and associate it with the instance, at which point the ELB picked it up nearly instantly.
556,625
We are experiencing an issue with our SonicWall NSA 2400 firewall. We have a secondary gateway over IPSec set up in the event of a failure of our main ISP (which is unfortunately common). The secondary gateway is a 4G connection through Verizon, and the cost grows as data usage increases. The firewall switches over to the secondary gateway properly, but then *sometimes* will not renegotiate and switch back to the primary when it comes back up. Hitting 'Renegotiate' on both sides seems to fix the issue, but I am wondering if there is something I am missing that may be causing it to stay on the secondary. I don't think this has anything to do with the settings, as it seems to work 50% of the time, but here they are anyway in case someone has some tips on how to ensure switching back to the primary when the connection is restored. Policy Type: Site to Site; Auth Method: IKE using Preshared Secret. IKE Phase 1 proposal: Exchange: Main Mode; DH Group: Group 1; Encrypt: AES-256; Auth: SHA1; Lifetime: 3600 seconds. Phase 2 proposal: Protocol: ESP; Encrypt: AES-256; Auth: SHA1; Lifetime: 900 seconds. Keep Alive is enabled, and Preempt Secondary Gateway is enabled at a 120-second interval.
2013/11/18
[ "https://serverfault.com/questions/556625", "https://serverfault.com", "https://serverfault.com/users/200261/" ]
I just ran into this issue myself. The Amazon ELB will return this error if there are no healthy instances. Our sites were misconfigured, so the ELB healthcheck was failing, which caused the ELB to take the two servers out of rotation. With zero healthy sites, the ELB returned 503 Service Unavailable: Back-end server is at capacity.
you can up the values of the elb health checker, so as a single slow response wont pull a server from elb. better to have a few users get service unavailable, than the site being down for everyone. EDIT: We are able to get away without pre-warming cache by upping health check timeout to 25 seconds......after 1-2 minutes... site is responsive as hell EDIT:: just launch a bunch of on demand, and when your monitoring tools shows management just how fast your are, then just prepay RI amazon :P EDIT: it is possible, a single backend elb registered instance is not enough. just launch a few more, and register them with elb, and that will help you narrow down your problem
556,625
We are experiencing an issue with our SonicWall NSA 2400 firewall. We have a secondary gateway over IPSec set up in the event of a failure of our main ISP (which is unfortunately common). The secondary gateway is a 4G connection through Verizon, and the cost grows as data usage increases. The firewall switches over to the secondary gateway properly, but then *sometimes* will not renegotiate and switch back to the primary when it comes back up. Hitting 'Renegotiate' on both sides seems to fix the issue, but I am wondering if there is something I am missing that may be causing it to stay on the secondary. I don't think this has anything to do with the settings, as it seems to work 50% of the time, but here they are anyway in case someone has some tips on how to ensure switching back to the primary when the connection is restored. Policy Type: Site to Site; Auth Method: IKE using Preshared Secret. IKE Phase 1 proposal: Exchange: Main Mode; DH Group: Group 1; Encrypt: AES-256; Auth: SHA1; Lifetime: 3600 seconds. Phase 2 proposal: Protocol: ESP; Encrypt: AES-256; Auth: SHA1; Lifetime: 900 seconds. Keep Alive is enabled, and Preempt Secondary Gateway is enabled at a 120-second interval.
2013/11/18
[ "https://serverfault.com/questions/556625", "https://serverfault.com", "https://serverfault.com/users/200261/" ]
You can up the values of the ELB health checker, so a single slow response won't pull a server from the ELB. Better to have a few users get "service unavailable" than the site being down for everyone. EDIT: We are able to get away without pre-warming the cache by upping the health check timeout to 25 seconds... after 1-2 minutes, the site is responsive as hell. EDIT: Just launch a bunch of on-demand instances, and when your monitoring tools show management just how fast you are, then just prepay Amazon RIs :P EDIT: It is possible a single backend ELB-registered instance is not enough. Just launch a few more and register them with the ELB, and that will help you narrow down your problem.
It's a few years late, but hopefully this helps someone out. I was seeing this error when the instance behind the ELB did not have a proper public IP assigned. I needed to manually create an Elastic IP and associate it with the instance, at which point the ELB picked it up nearly instantly.
556,625
We are experiencing an issue with our SonicWall NSA 2400 firewall. We have a secondary gateway over IPSec set up in the event of a failure of our main ISP (which is unfortunately common). The secondary gateway is a 4G connection through Verizon, and the cost grows as data usage increases. The firewall switches over to the secondary gateway properly, but then *sometimes* will not renegotiate and switch back to the primary when it comes back up. Hitting 'Renegotiate' on both sides seems to fix the issue, but I am wondering if there is something I am missing that may be causing it to stay on the secondary. I don't think this has anything to do with the settings, as it seems to work 50% of the time, but here they are anyway in case someone has some tips on how to ensure switching back to the primary when the connection is restored. Policy Type: Site to Site; Auth Method: IKE using Preshared Secret. IKE Phase 1 proposal: Exchange: Main Mode; DH Group: Group 1; Encrypt: AES-256; Auth: SHA1; Lifetime: 3600 seconds. Phase 2 proposal: Protocol: ESP; Encrypt: AES-256; Auth: SHA1; Lifetime: 900 seconds. Keep Alive is enabled, and Preempt Secondary Gateway is enabled at a 120-second interval.
2013/11/18
[ "https://serverfault.com/questions/556625", "https://serverfault.com", "https://serverfault.com/users/200261/" ]
I just ran into this issue myself. The Amazon ELB will return this error if there are no healthy instances. Our sites were misconfigured, so the ELB healthcheck was failing, which caused the ELB to take the two servers out of rotation. With zero healthy sites, the ELB returned 503 Service Unavailable: Back-end server is at capacity.
It's a few years late, but hopefully this helps someone out. I was seeing this error when the instance behind the ELB did not have a proper public IP assigned. I needed to manually create an Elastic IP and associate it with the instance, at which point the ELB picked it up nearly instantly.
5,654
While preparing coffee in my 2-cup moka pot, I usually try to carefully sprinkle ground coffee from my grinder into the funnel of my pot. When I reach approximately the amount of coffee needed to fill it, I use a knife with a straight spine to level it off and tip away any excess coffee, which usually wastes some of the coffee, as it tends to fall everywhere around the funnel. Are there any tools that can make this process cleaner?
2021/05/28
[ "https://coffee.stackexchange.com/questions/5654", "https://coffee.stackexchange.com", "https://coffee.stackexchange.com/users/10153/" ]
Short answer is no, though you can use a common kitchen scale to minimize (but not eliminate) waste. Measure in grams if you're not already, assuming your scale supports it, for greater precision. The basket on your pot has a fixed volume. Coffee within a particular roast batch will also have fairly consistent density. This means that you should be able to use very nearly the same "weight" of coffee per brew. Measure your coffee with a kitchen scale to get an idea of how much it takes to fill the basket, then in the future grind or scoop that amount in weight, including the excess you will later level off. Leveling off is important. If you don't level off your basket with a moka pot, you'll significantly increase the volume of steam required to produce enough pressure to push the coffee up the tube. That's bad because it'll add some variables into your brew routine that are impossible to control. Use more heat or brew longer to get more steam and you could burn your coffee, but because you wouldn't get the same surface distribution with each brew, the amount of steam you need would change per brew. So look at it like this: dumping 1ish gram of coffee grounds per brew is not nearly as bad as dumping an entire dose because you ruined your brew. If you're really worried about dumping that bit, get yourself an airtight food canister and use it to store leftover grounds for your next brew. That coffee will go stale right quick, but since it's such a small amount, it won't dramatically affect the next brew regardless. Waste nothing if you save what you level off!
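The weigh-once, reuse-forever idea in the answer above amounts to a one-line calculation. The figures here (a level 2-cup basket at roughly 14 g, about 1 g leveled off per brew) are assumptions for illustration, not numbers from the answer:

```python
# Dose-by-weight sketch: weigh what fills the basket level once, then
# grind that amount plus the small excess you expect to level off.

def grind_target_g(level_basket_g: float, leveling_excess_g: float = 1.0) -> float:
    """Grams to grind per brew for a consistent, level basket."""
    return level_basket_g + leveling_excess_g

print(grind_target_g(14.0))  # -> 15.0
```

Weigh your own basket once to replace the assumed 14 g; the per-batch density point above means the number only needs re-checking when you change beans or roast.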
A convenient and cheap solution is to use a coffee dispenser like this one: [![enter image description here](https://i.stack.imgur.com/oh294.png)](https://i.stack.imgur.com/oh294.png) The bottom of the can has steps of various diameters to fit the most common moka sizes. You put the can onto the moka funnel, turn the knob once (this opens a small door on the bottom) and then turn it back. In two seconds you have filled the moka without wasting any of the coffee. Dispensers are sold empty and can be reused many times; they last virtually forever since they have few moving parts and no electronics ;). Zero waste.
76,228
As part of security awareness efforts for the company, I am looking for something I could use to spread awareness, maybe a web-based application/portal where I can easily create quiz forms or share out media resources on IT security. What kind of tools do you use to help spread awareness in your company? Thanks.
2014/12/17
[ "https://security.stackexchange.com/questions/76228", "https://security.stackexchange.com", "https://security.stackexchange.com/users/60935/" ]
Have a look at <http://www.securingthehuman.org/>. The newsletters are quite good. And <http://www.cpni.gov.uk/advice/Personnel-security1/>. They have reusable posters and other resources for awareness and training.
If you have software developers and testers, check out the courses at Pluralsight. Also, troyhunt.com is very useful.
516,973
Are the owner's details retained in a picture? I mean, when I take a pic with my web-cam and check the properties, I find that the owner's (my) name is present in it. When I upload the same to an image hosting site, or let's say to a forum, is any such information (apart from what is seen in the picture itself) retained in the picture?
2012/12/09
[ "https://superuser.com/questions/516973", "https://superuser.com", "https://superuser.com/users/135657/" ]
Not all metadata you see in the Windows "File properties" window is *inside* the file... In particular, such things as "Owner" and "Date modified" **are not** transferred over the web when uploading the file.
The metadata is part of the image and follows the image wherever it goes, unless the online service strips this information (unlikely). However, it is possible to strip the metadata off an image before uploading it using either general purpose image manipulation software or specialized image-metadata software.
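To make the stripping step concrete, here is a minimal Python sketch of what such metadata tools do for JPEGs: it walks the file's segment structure and drops the APP1 (EXIF) segment before the image data. This is an illustration only, not a replacement for a real tool like exiftool, which handles many more formats and edge cases.

```python
import struct

SOI = b"\xff\xd8"  # JPEG start-of-image marker
APP1 = 0xE1        # segment that usually carries EXIF metadata
SOS = 0xDA         # start-of-scan: compressed image data follows

def strip_exif(data: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with EXIF (APP1) segments removed."""
    if not data.startswith(SOI):
        raise ValueError("not a JPEG stream")
    out = bytearray(SOI)
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == SOS:
            # Entropy-coded image data follows; copy the rest verbatim.
            out += data[i:]
            return bytes(out)
        # Segment length is big-endian and includes its own two bytes.
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        segment = data[i:i + 2 + length]
        # Keep every segment except APP1 blocks whose payload is EXIF.
        if not (marker == APP1 and segment[4:10] == b"Exif\x00\x00"):
            out += segment
        i += 2 + length
    out += data[i:]
    return bytes(out)
```

The same idea generalizes to other formats (PNG text chunks, TIFF tags), which is why purpose-built metadata tools exist.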
2,523,570
I am having an issue with UAC and executing a non-interactive process as a different user (APIs such as CreateProcessAsUser or CreateProcessWithLogonW). My program is intended to do the following:

1) Create a new Windows user account (check, works correctly)
2) Create a non-interactive child process as the new user account (fails when UAC is enabled)

My application includes an administrator manifest and elevates correctly when UAC is enabled in order to complete step 1, but step 2 fails to execute correctly. I suspect this is because the child process, which executes as another user, does not inherit the elevated rights of my main process (which executes as the interactive user). I would like to know how to resolve this issue. When UAC is off, my program works correctly. How can I deal with UAC or the required elevated rights in this situation? If it helps any, the child process needs to run as the other user in order to set up file encryption for the new user account.
2010/03/26
[ "https://Stackoverflow.com/questions/2523570", "https://Stackoverflow.com", "https://Stackoverflow.com/users/269630/" ]
The reason why the spawned process has no admin rights when using CreateProcessWithLogon or CreateProcessAsUser is explained in this blog post: <http://blogs.msdn.com/cjacks/archive/2010/02/01/why-can-t-i-elevate-my-application-to-run-as-administrator-while-using-createprocesswithlogonw.aspx> Long story short: CreateProcess sits at such a low layer in Windows that it doesn't know about elevation. ShellExecute(Ex) does. So you have to create a bootstrapper application and start it with CreateProcessWithLogon/CreateProcessAsUser; the bootstrapper (now acting as the other user) in turn starts your final application with ShellExecute(Ex), which will ask for admin rights (if you specify "runas" as lpVerb or provide a manifest for your app). And because this is such an easy and fun task to do, there is no ShellExecuteWithLogon function provided by Windows. Hope this helps.
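The bootstrapper's elevation step can be sketched in Python via ctypes. This is a Windows-only illustration of the "runas" verb the answer mentions; the path and parameters are placeholders, and real code would usually be the bootstrapper executable itself rather than a script.

```python
import ctypes
import sys

SW_SHOWNORMAL = 1  # standard nShowCmd value for ShellExecuteW

def elevate(exe_path: str, params: str = "") -> int:
    """Launch a program elevated via the 'runas' verb (triggers the UAC prompt).

    This is the step the bootstrapper performs once it is already running as
    the target user. Windows-only sketch; raises OSError elsewhere.
    """
    if sys.platform != "win32":
        raise OSError("this sketch only runs on Windows")
    # ShellExecuteW returns a value > 32 on success.
    rc = ctypes.windll.shell32.ShellExecuteW(
        None, "runas", exe_path, params, None, SW_SHOWNORMAL)
    if rc <= 32:
        raise OSError(f"ShellExecuteW failed with code {rc}")
    return rc
```

Alternatively, the final application can carry a `requireAdministrator` manifest, in which case a plain ShellExecute with the default verb also triggers elevation.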
Just faced a similar issue on Windows 7 with UAC at its maximum setting. When UAC is turned on, CreateProcessWithLogon creates a restricted token, just as LogonUser with LOGON32\_LOGON\_INTERACTIVE would. This token prevents elevation. The solution is to first call LogonUser with LOGON32\_LOGON\_BATCH, which returns a full-access token. Once obtained, just call CreateProcessWithToken.
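For illustration, the LogonUser/CreateProcessWithToken sequence described above can be sketched in Python with ctypes. This is a Windows-only sketch under assumptions: the account must hold the "Log on as a batch job" right, credentials and command line are placeholders, and error handling is minimal.

```python
import ctypes
import sys
from ctypes import wintypes

# Documented winbase.h constants.
LOGON32_LOGON_INTERACTIVE = 2   # what CreateProcessWithLogon uses -> restricted token under UAC
LOGON32_LOGON_BATCH = 4         # yields a full-access token
LOGON32_PROVIDER_DEFAULT = 0
LOGON_WITH_PROFILE = 1
CREATE_UNICODE_ENVIRONMENT = 0x00000400

class STARTUPINFOW(ctypes.Structure):
    _fields_ = [
        ("cb", wintypes.DWORD), ("lpReserved", wintypes.LPWSTR),
        ("lpDesktop", wintypes.LPWSTR), ("lpTitle", wintypes.LPWSTR),
        ("dwX", wintypes.DWORD), ("dwY", wintypes.DWORD),
        ("dwXSize", wintypes.DWORD), ("dwYSize", wintypes.DWORD),
        ("dwXCountChars", wintypes.DWORD), ("dwYCountChars", wintypes.DWORD),
        ("dwFillAttribute", wintypes.DWORD), ("dwFlags", wintypes.DWORD),
        ("wShowWindow", wintypes.WORD), ("cbReserved2", wintypes.WORD),
        ("lpReserved2", ctypes.c_void_p), ("hStdInput", wintypes.HANDLE),
        ("hStdOutput", wintypes.HANDLE), ("hStdError", wintypes.HANDLE),
    ]

class PROCESS_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("hProcess", wintypes.HANDLE), ("hThread", wintypes.HANDLE),
        ("dwProcessId", wintypes.DWORD), ("dwThreadId", wintypes.DWORD),
    ]

def spawn_with_full_token(user, domain, password, command_line):
    """LogonUserW(LOGON32_LOGON_BATCH) for a full token, then CreateProcessWithTokenW.

    Windows-only; returns the new process id.
    """
    if sys.platform != "win32":
        raise OSError("this sketch only runs on Windows")
    advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)
    token = wintypes.HANDLE()
    if not advapi32.LogonUserW(user, domain, password,
                               LOGON32_LOGON_BATCH, LOGON32_PROVIDER_DEFAULT,
                               ctypes.byref(token)):
        raise ctypes.WinError(ctypes.get_last_error())
    si = STARTUPINFOW()
    si.cb = ctypes.sizeof(si)
    pi = PROCESS_INFORMATION()
    cmd = ctypes.create_unicode_buffer(command_line)  # lpCommandLine must be writable
    if not advapi32.CreateProcessWithTokenW(
            token, LOGON_WITH_PROFILE, None, cmd,
            CREATE_UNICODE_ENVIRONMENT, None, None,
            ctypes.byref(si), ctypes.byref(pi)):
        raise ctypes.WinError(ctypes.get_last_error())
    return pi.dwProcessId
```

Note that CreateProcessWithTokenW requires the caller to hold SE\_IMPERSONATE\_NAME, which an elevated process normally does.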
200,922
I plan to build a brick wall, so I need to compact the ground first. The hardware store sells a tool called a tamper that costs $60 and only weighs 15 lbs, which seems inefficient. How is that sufficient weight to pack down the ground? If a worker weighs 200+ lbs, is it more efficient to have them stand on a small square of wood and jump on that? If not, is there some alternative method of stamping the ground down?
2020/08/13
[ "https://diy.stackexchange.com/questions/200922", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/71197/" ]
You can rent a power compactor, which would do a much better job than a hand-operated tamper.
If you need to compact the ground before you dig a foundation, you definitely want to use motorised tools, probably a whacker/Kango type thing: [![enter image description here](https://i.stack.imgur.com/m3Atq.png)](https://i.stack.imgur.com/m3Atq.png)