qid (int64) | question (string) | date (string) | metadata (list) | response_j (string) | response_k (string) |
|---|---|---|---|---|---|
1,161,389 | There have been a decent number of questions about MySQL spatial datatypes; however, mine is more specifically about how **best** to deal with them within a Rails MVC architecture.
I have an input form where an admin user can create a new point of interest, let's say, a restaurant and input some information. They can also input a human-readable latitude and longitude in decimal format.
However, for distance calculations, etc... I am storing the location data as a spatial point in the database.
My question therefore, is how to best handle this in the MVC architecture in rails?
Here are some ideas I had, but nothing really seems clean:
* Call an :after\_filter method that takes the new instance of the object and does a raw SQL update that handles the "GeomFromText('POINT(lat long)')" goodness. The issue with this is that "lat/long" would be text fields in my create form, which disrupts the clean form\_for :object architecture that Rails provides, since lat/long aren't really attributes; they're just there to let a human input values that aren't MySQL spatials.
* Maybe creating a trigger in the db to run after a row insert that updates that row? I have no idea and it doesn't seem like these triggers would have access to the lat/long, unless I stored the lat/long as well as the spatial point, and then created the row in the db with the lat/long decimals, and then ran the trigger after creation to update the spatial. I guess I could also do that with an after\_filter if I added the lat/long columns to the model.
Any other ideas? I think storing the lat/long is redundant since I'll really be using the spatial point for distance calculations, etc... but it might be necessary if I'm allowing for human editing. | 2009/07/21 | [
"https://Stackoverflow.com/questions/1161389",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/139007/"
] | Check out [the geokit-rails plugin for Rails](https://github.com/geokit/geokit-rails) which does distance calculations using plain lat/lng columns as floats (and uses the [geokit](http://geokit.rubyforge.org/) gem). However, if you'd like to use your database's geo-spatial abilities, [GeoRuby](http://georuby.rubyforge.org/) supports the basic spatial features like Point or LineString as column types. I hope these help. | I agree with hopeless, geokit is nice, I use it too.
If you want to do it yourself, I would do an after\_filter but externalize the update method to a thread. That way you don't slow down saving, but you still get clean code and timely updated columns.
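A minimal sketch of that idea, written in Python rather than Rails purely to show the shape of the after-save hook; the connection argument, table name, and `location` column are placeholders following the question's own example, not part of the original answer:

```python
import threading

def update_spatial_point(conn, table, row_id, lat, lng):
    """Issue the raw GeomFromText UPDATE through any DB-API style connection."""
    with conn.cursor() as cur:
        # table/column names are illustrative; POINT(lat lng) ordering follows the question
        cur.execute(
            f"UPDATE {table} SET location = GeomFromText(%s) WHERE id = %s",
            (f"POINT({lat} {lng})", row_id),
        )
    conn.commit()

def after_save(conn, table, row_id, lat, lng):
    # fire-and-forget, mirroring the "externalize the update to a thread" advice,
    # so the request that saved the row is not slowed down; in practice the
    # thread should open its own DB connection rather than share one
    threading.Thread(
        target=update_spatial_point,
        args=(conn, table, row_id, lat, lng),
        daemon=True,
    ).start()
```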
Triggers are not nice; the database should deliver data, not implement logic. |
40,566 | I've seen sources where it's discussed that the body gets used to the amount of caffeine being absorbed, so it no longer creates the desired effect.
Is it necessary to have a week of no / very low caffeine intake in order to get the desired effect back? | 2019/07/02 | [
"https://fitness.stackexchange.com/questions/40566",
"https://fitness.stackexchange.com",
"https://fitness.stackexchange.com/users/31366/"
] | **1) Chronic caffeine intake can result in, at least partial, tolerance to caffeine effects.**
[Chronic ingestion of a low dose of caffeine induces tolerance to the performance benefits of caffeine (Journal of Sports Sciences, 2017)](https://www.ncbi.nlm.nih.gov/pubmed/27762662/):
>
> Chronic ingestion of a low dose of caffeine develops tolerance in
> low-caffeine consumers. Therefore, individuals with low-habitual
> intakes should refrain from chronic caffeine supplementation to
> maximise performance benefits from acute caffeine ingestion.
>
>
>
[Time course of tolerance to the performance benefits of caffeine (PlosOne, 2019)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6343867/):
>
> In summary, the daily intake of caffeine (3 mg/kg/day) significantly
> increased peak cycling power during a maximal incremental test for the
> first 15 days of ingestion and improved VO2max for the first 4 days,
> when compared to the same treatment with a placebo. Day-to-day
> pre-exercise caffeine intake also produced higher peak cycling power
> during the 15-s Wingate tests for ~18 days of intake, although Wingate
> mean power was only increased on the first day of ingestion with
> respect to the placebo.
>
>
>
**2) Caffeine abstention may not have significant effects on physical performance.**
[Caffeine withdrawal and high-intensity endurance cycling performance (Journal of Sport Sciences, 2011)](https://www.ncbi.nlm.nih.gov/pubmed/21279864/):
>
> A 3 mg/kg dose of caffeine significantly improves exercise performance
> irrespective of whether a 4-day withdrawal period is imposed on
> habitual caffeine users.
>
>
>
[Effect of caffeine on metabolism, exercise endurance, and catecholamine responses after withdrawal (Journal of Applied Physiology, 1985)](https://www.ncbi.nlm.nih.gov/pubmed/9760346/):
>
> Subjects responded to caffeine with increases in plasma epinephrine at
> exhaustion and prolonged exercise time in all caffeine trials compared
> with placebo, regardless of withdrawal from caffeine. It is concluded
> that increased endurance is unrelated to hormonal or metabolic changes
> and that it is not related to prior caffeine habituation in
> recreational athletes.
>
>
> | The time it takes to develop a tolerance, the degree to which you can tolerate caffeine, and the time it takes to reset your tolerance all vary greatly from person to person.
When you feel like the amount you take daily is no longer affecting you, or not affecting you as much as it once did, you can take some time off. Maybe start with a week, then start taking it again and test whether your tolerance has decreased or reset completely.
If you take a week off and you still don't feel much effect from the amount you normally take, take more time off, or ramp up your intake.
Essentially, just go by feel. |
108,816 | I use both [Divvy](http://mizage.com/divvy/) and [SizeUp](http://www.irradiatedsoftware.com/sizeup/) to manage my windows. They work fine with application windows with the exception of Adobe Photoshop. They just don't seem to work with Photoshop. Whenever I apply them to Photoshop, the window just acts bizarrely.
How may I address this? | 2013/11/08 | [
"https://apple.stackexchange.com/questions/108816",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/43968/"
] | I had a similar problem…
* I had no sound playing on the phone speaker from any application / music, but the phone rings and I can talk to other parties and raise and lower the volume with the buttons
* The volume + and - buttons had no effect and I had no control over raising or lowering volume in a phone idle situation, or within any application, except when in settings mode under Sounds while selecting a tone — the selected tone is played and the volume buttons can then operate normally for that particular purpose only. The same is applicable to the alarm clock settings when selecting a tone from within the clock application. Sound was also available in the voice recording application (built into the latest iOS 7 update).
* In all other cases there was no sound and no control over volume. When pressing the volume buttons I get a screen that says ringer on top but I cannot control it
If you have this problem, it has nothing to do with hardware or earphone sockets, charging socket, etc. The problem only occurs if you have updated your iPhone using OTA update. For some reason updating over the air causes the system to trigger the sound and volume problem.
1. First back up your phone on iTunes so you can restore data later.
2. Keep the phone connected to iTunes and put the phone into DFU mode as follows:
1. Press the power button and home button simultaneously for 15 seconds.
2. Release the power button while keeping the home button pressed for another 15 seconds.
3. Release the home button. Your screen will turn pitch black, as it is now in DFU mode. If the screen is anything but black, repeat the sequence above until you get the pitch-black screen.
3. At this point, when you are correctly in DFU mode, iTunes will show a message stating that your phone needs restoration. Press Restore.
4. iTunes will start downloading the latest version of iOS 7 — do not unplug the phone or power off computer.
5. Once downloaded, a message will appear asking you to restore. Hit Restore and wait for the process to continue.
Once the phone restarts, you can restore the data backup that you created on your computer or iCloud before you started this process.
The phone sound will not return immediately — give it a couple of hours to allow the system to reset, and there you go, your phone is working again. | I had the same problem till I discovered that iOS 7 had turned on DND (Do Not Disturb). Once I turned it off, it worked perfectly. |
108,816 | I use both [Divvy](http://mizage.com/divvy/) and [SizeUp](http://www.irradiatedsoftware.com/sizeup/) to manage my windows. They work fine with application windows with the exception of Adobe Photoshop. They just don't seem to work with Photoshop. Whenever I apply them to Photoshop, the window just acts bizarrely.
How may I address this? | 2013/11/08 | [
"https://apple.stackexchange.com/questions/108816",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/43968/"
] | Just clean the charger port using a clean toothbrush and blowing air into the port... it could be because of some dust particles. | I had the same problem till I discovered that iOS 7 had turned on DND (Do Not Disturb). Once I turned it off, it worked perfectly. |
108,816 | I use both [Divvy](http://mizage.com/divvy/) and [SizeUp](http://www.irradiatedsoftware.com/sizeup/) to manage my windows. They work fine with application windows with the exception of Adobe Photoshop. They just don't seem to work with Photoshop. Whenever I apply them to Photoshop, the window just acts bizarrely.
How may I address this? | 2013/11/08 | [
"https://apple.stackexchange.com/questions/108816",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/43968/"
] | I had exactly the same issue. Cleaned the charging socket + the earphone socket. Voila! It was working again. | I had the same problem till I discovered that iOS 7 had turned on DND (Do Not Disturb). Once I turned it off, it worked perfectly. |
108,816 | I use both [Divvy](http://mizage.com/divvy/) and [SizeUp](http://www.irradiatedsoftware.com/sizeup/) to manage my windows. They work fine with application windows with the exception of Adobe Photoshop. They just don't seem to work with Photoshop. Whenever I apply them to Photoshop, the window just acts bizarrely.
How may I address this? | 2013/11/08 | [
"https://apple.stackexchange.com/questions/108816",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/43968/"
] | Check the following things:
1. Is your ringer switched to off?
2. Is DND (Do Not Disturb Mode) on?
3. Has your iPhone / Apple Product had water damage recently or in the past?
4. If you plug in headphones does it work?
5. Is it glitched into headphone mode?
The answer to most of these is simple, and you can also try restarting. | I had the same problem till I discovered that iOS 7 had turned on DND (Do Not Disturb). Once I turned it off, it worked perfectly. |
108,816 | I use both [Divvy](http://mizage.com/divvy/) and [SizeUp](http://www.irradiatedsoftware.com/sizeup/) to manage my windows. They work fine with application windows with the exception of Adobe Photoshop. They just don't seem to work with Photoshop. Whenever I apply them to Photoshop, the window just acts bizarrely.
How may I address this? | 2013/11/08 | [
"https://apple.stackexchange.com/questions/108816",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/43968/"
] | I had a similar problem…
* I had no sound playing on the phone speaker from any application / music, but the phone rings and I can talk to other parties and raise and lower the volume with the buttons
* The volume + and - buttons had no effect and I had no control over raising or lowering volume in a phone idle situation, or within any application, except when in settings mode under Sounds while selecting a tone — the selected tone is played and the volume buttons can then operate normally for that particular purpose only. The same is applicable to the alarm clock settings when selecting a tone from within the clock application. Sound was also available in the voice recording application (built into the latest iOS 7 update).
* In all other cases there was no sound and no control over volume. When pressing the volume buttons I get a screen that says ringer on top but I cannot control it
If you have this problem, it has nothing to do with hardware or earphone sockets, charging socket, etc. The problem only occurs if you have updated your iPhone using OTA update. For some reason updating over the air causes the system to trigger the sound and volume problem.
1. First back up your phone on iTunes so you can restore data later.
2. Keep the phone connected to iTunes and put the phone into DFU mode as follows:
1. Press the power button and home button simultaneously for 15 seconds.
2. Release the power button while keeping the home button pressed for another 15 seconds.
3. Release the home button. Your screen will turn pitch black, as it is now in DFU mode. If the screen is anything but black, repeat the sequence above until you get the pitch-black screen.
3. At this point, when you are correctly in DFU mode, iTunes will show a message stating that your phone needs restoration. Press Restore.
4. iTunes will start downloading the latest version of iOS 7 — do not unplug the phone or power off computer.
5. Once downloaded, a message will appear asking you to restore. Hit Restore and wait for the process to continue.
Once the phone restarts, you can restore the data backup that you created on your computer or iCloud before you started this process.
The phone sound will not return immediately — give it a couple of hours to allow the system to reset, and there you go, your phone is working again. | Just clean the charger port using a clean toothbrush and blowing air into the port... it could be because of some dust particles. |
108,816 | I use both [Divvy](http://mizage.com/divvy/) and [SizeUp](http://www.irradiatedsoftware.com/sizeup/) to manage my windows. They work fine with application windows with the exception of Adobe Photoshop. They just don't seem to work with Photoshop. Whenever I apply them to Photoshop, the window just acts bizarrely.
How may I address this? | 2013/11/08 | [
"https://apple.stackexchange.com/questions/108816",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/43968/"
] | I had exactly the same issue. Cleaned the charging socket + the earphone socket. Voila! It was working again. | I had a similar problem…
* I had no sound playing on the phone speaker from any application / music, but the phone rings and I can talk to other parties and raise and lower the volume with the buttons
* The volume + and - buttons had no effect and I had no control over raising or lowering volume in a phone idle situation, or within any application, except when in settings mode under Sounds while selecting a tone — the selected tone is played and the volume buttons can then operate normally for that particular purpose only. The same is applicable to the alarm clock settings when selecting a tone from within the clock application. Sound was also available in the voice recording application (built into the latest iOS 7 update).
* In all other cases there was no sound and no control over volume. When pressing the volume buttons I get a screen that says ringer on top but I cannot control it
If you have this problem, it has nothing to do with hardware or earphone sockets, charging socket, etc. The problem only occurs if you have updated your iPhone using OTA update. For some reason updating over the air causes the system to trigger the sound and volume problem.
1. First back up your phone on iTunes so you can restore data later.
2. Keep the phone connected to iTunes and put the phone into DFU mode as follows:
1. Press the power button and home button simultaneously for 15 seconds.
2. Release the power button while keeping the home button pressed for another 15 seconds.
3. Release the home button. Your screen will turn pitch black, as it is now in DFU mode. If the screen is anything but black, repeat the sequence above until you get the pitch-black screen.
3. At this point, when you are correctly in DFU mode, iTunes will show a message stating that your phone needs restoration. Press Restore.
4. iTunes will start downloading the latest version of iOS 7 — do not unplug the phone or power off computer.
5. Once downloaded, a message will appear asking you to restore. Hit Restore and wait for the process to continue.
Once the phone restarts, you can restore the data backup that you created on your computer or iCloud before you started this process.
The phone sound will not return immediately — give it a couple of hours to allow the system to reset, and there you go, your phone is working again. |
108,816 | I use both [Divvy](http://mizage.com/divvy/) and [SizeUp](http://www.irradiatedsoftware.com/sizeup/) to manage my windows. They work fine with application windows with the exception of Adobe Photoshop. They just don't seem to work with Photoshop. Whenever I apply them to Photoshop, the window just acts bizarrely.
How may I address this? | 2013/11/08 | [
"https://apple.stackexchange.com/questions/108816",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/43968/"
] | I had a similar problem…
* I had no sound playing on the phone speaker from any application / music, but the phone rings and I can talk to other parties and raise and lower the volume with the buttons
* The volume + and - buttons had no effect and I had no control over raising or lowering volume in a phone idle situation, or within any application, except when in settings mode under Sounds while selecting a tone — the selected tone is played and the volume buttons can then operate normally for that particular purpose only. The same is applicable to the alarm clock settings when selecting a tone from within the clock application. Sound was also available in the voice recording application (built into the latest iOS 7 update).
* In all other cases there was no sound and no control over volume. When pressing the volume buttons I get a screen that says ringer on top but I cannot control it
If you have this problem, it has nothing to do with hardware or earphone sockets, charging socket, etc. The problem only occurs if you have updated your iPhone using OTA update. For some reason updating over the air causes the system to trigger the sound and volume problem.
1. First back up your phone on iTunes so you can restore data later.
2. Keep the phone connected to iTunes and put the phone into DFU mode as follows:
1. Press the power button and home button simultaneously for 15 seconds.
2. Release the power button while keeping the home button pressed for another 15 seconds.
3. Release the home button. Your screen will turn pitch black, as it is now in DFU mode. If the screen is anything but black, repeat the sequence above until you get the pitch-black screen.
3. At this point, when you are correctly in DFU mode, iTunes will show a message stating that your phone needs restoration. Press Restore.
4. iTunes will start downloading the latest version of iOS 7 — do not unplug the phone or power off computer.
5. Once downloaded, a message will appear asking you to restore. Hit Restore and wait for the process to continue.
Once the phone restarts, you can restore the data backup that you created on your computer or iCloud before you started this process.
The phone sound will not return immediately — give it a couple of hours to allow the system to reset, and there you go, your phone is working again. | Check the following things:
1. Is your ringer switched to off?
2. Is DND (Do Not Disturb Mode) on?
3. Has your iPhone / Apple Product had water damage recently or in the past?
4. If you plug in headphones does it work?
5. Is it glitched into headphone mode?
The answer to most of these is simple, and you can also try restarting. |
108,816 | I use both [Divvy](http://mizage.com/divvy/) and [SizeUp](http://www.irradiatedsoftware.com/sizeup/) to manage my windows. They work fine with application windows with the exception of Adobe Photoshop. They just don't seem to work with Photoshop. Whenever I apply them to Photoshop, the window just acts bizarrely.
How may I address this? | 2013/11/08 | [
"https://apple.stackexchange.com/questions/108816",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/43968/"
] | I had exactly the same issue. Cleaned the charging socket + the earphone socket. Voila! It was working again. | Just clean the charger port using a clean toothbrush and blowing air into the port... it could be because of some dust particles. |
108,816 | I use both [Divvy](http://mizage.com/divvy/) and [SizeUp](http://www.irradiatedsoftware.com/sizeup/) to manage my windows. They work fine with application windows with the exception of Adobe Photoshop. They just don't seem to work with Photoshop. Whenever I apply them to Photoshop, the window just acts bizarrely.
How may I address this? | 2013/11/08 | [
"https://apple.stackexchange.com/questions/108816",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/43968/"
] | I had exactly the same issue. Cleaned the charging socket + the earphone socket. Voila! It was working again. | Check the following things:
1. Is your ringer switched to off?
2. Is DND (Do Not Disturb Mode) on?
3. Has your iPhone / Apple Product had water damage recently or in the past?
4. If you plug in headphones does it work?
5. Is it glitched into headphone mode?
The answer to most of these is simple, and you can also try restarting. |
126,295 | I work in an office and I usually sit down doing paperwork. I work as a bookkeeper; the work is both standing and sitting, depending on the department. I am the only one here in bookkeeping; other locations of the same company have chairs for their bookkeepers. I had a chair for 5 years, but my manager took it away, saying it is not a sitting job. What can I do to get the chair back in the office? | 2019/01/11 | [
"https://workplace.stackexchange.com/questions/126295",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/97824/"
] | You should talk to your boss, explain your situation, ask for the reason the chair was taken, and politely request your chair back.
When you request the chair, be sure to explain that it helps with the pain, and stress that it is not a problem for carrying out your current duties. | Your boss is either a prankster, an idiot, or devious.
If you have a medical condition, it may make it easy to force the chair back.
If he is being difficult, you can explain that bookkeeping is nowadays actually done largely sitting down, in front of a computer.
The following link ranks bookkeeping at 124, with 83.7% of the work done sitting.
<https://qz.com/922650/if-you-literally-never-want-to-sit-down-on-the-job-here-are-the-careers-for-you/amp/>
Now, he could have ulterior motives; maybe he wants to drive you out of your job, or he is harassing you with this chair thing in some weird way.
Besides, I'm pretty sure unions and guilds have a thing or two to say about the idea of not providing chairs for employees.
So maybe find some literature on that in your locale to help you make the case next time you talk to him about getting the chair back.
Should he insist, or should you come to like the idea of standing (which seems to have advantages), here is some info on desks and postures:
<https://notsitting.com/proper-height/>
<https://www.posturite.co.uk/desks-furniture/height-adjustable-standing-desks.html> |
4,873,718 | I know about alpha-beta pruning and the minimax algorithm.
What other algorithms would you suggest?
Is it possible to use negascout? | 2011/02/02 | [
"https://Stackoverflow.com/questions/4873718",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/599881/"
] | Considering the simplicity of the game, the optimum moves can simply be stored.
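As a hedged Python sketch of what "generate the tree and back up the values" amounts to (illustrative names, not part of either answer), a memoised minimax over the full game tree touches only a few thousand distinct positions:

```python
from functools import lru_cache

# lines that win the game, as index triples into a 9-character board string
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)            # memoise: each distinct position is solved once
def minimax(board, player):
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1   # score from X's point of view
    if ' ' not in board:
        return 0                       # draw
    scores = []
    for i, cell in enumerate(board):
        if cell == ' ':
            child = board[:i] + player + board[i+1:]
            scores.append(minimax(child, 'O' if player == 'X' else 'X'))
    return max(scores) if player == 'X' else min(scores)

print(minimax(' ' * 9, 'X'))        # 0: perfect play from both sides is a draw
```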
Relevant XKCD-
[xkcd 832](http://xkcd.com/832/) | Tic-Tac-Toe's entire game tree can be represented in memory, so you can just generate that and backtrack winning moves. There are fewer than 363k legal configurations. |
289,408 | I have been watching my brother play Battlefield 1 lately, and I understand most of the mechanics. Apparently Conquest mode now gives tickets for holding a flag for a certain period of time, making for a more balanced game than previous Battlefield games.
But what we (I tried asking him) don't understand is how tickets work in Operations.
The attacking team starts with 150 tickets. They spend these ... by respawning? ... and when they get below 0 (I saw a game stuck at 0 tickets for about 20 seconds so I think it needs to go in the minus), the attacking team loses a try.
But the tickets also go up! During regular play, the ticket counter bounces up and down. 32, 31, 30, 31, 30... So apparently there is some way to get tickets back?
Additionally, when the sector is captured, there's a large and rapid gain of tickets... But this is not a linear growth, so I get the feeling these are awarded for certain things as well.
---
My questions:
* How are tickets spent in Operations?
* How are tickets gained during regular play in Operations?
* How are tickets gained during the clear the sector/retreat section in Operations?
Depending on the answers, I'm also interested in knowing whether you should spawn as an attacker if there are 0 tickets left, and whether you should kill yourself as a defender during retreat. | 2016/10/26 | [
"https://gaming.stackexchange.com/questions/289408",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/15659/"
] | There are a lot of similarities with Rush. Like in Rush, the attackers are given a limited number of reinforcement tickets. In Operations it's 250 by default for 64 players and 200 for 40 players *(150 for both prior to the fall update)*. When an attacker dies, a ticket is lost. When Medics revive a teammate, a ticket is refunded.
When tickets are at zero, the game continues as long as there isn't a flag fully in the defenders' possession (so, for example, either both are disputed, or one is taken by the attackers and the other is disputed).
Unlike Rush, however, in Operations advancing to the next set of objectives does not fully reset the ticket count; instead you get an additional 50 tickets (30 prior to the patch). There is, however, an opportunity to gain some additional tickets. Retreating defenders are marked, and killing each of them grants the attackers **3** tickets *(2 prior to the fall update)*. | **How are tickets spent in Operations?**
If you are the defending team, you have an unlimited amount of tickets. The ticket loss is only for the attacking team. It's the same as the game mode Rush.
**How are tickets gained during regular play in Operations?**
I'm not sure how to explain this, but I think if a player of the attacking team dies, they will lose a ticket, but if he is revived by a teammate, the team gets the ticket back. But when you respawn as the attacking team, you will lose a ticket. The ticket count is the same as how many times you can respawn.
**How are tickets gained during the clear the sector/retreat section in Operations?**
I don't know yet, because I have only played a few Operations matches.
I hope this answers your question. |
208,866 | Last night, I walked home from my bus stop (in Belgium). Since it was around 11 PM, it was quite cold, probably only about 4-5 °C. However, it didn't actually feel cold at all, and I didn't feel like I had to rush to get home for the cold. In fact, I didn't have headwear on, but I didn't get cold ears. It was about the same temperature as this morning, but the difference is that this morning felt a lot colder, probably due to the wind and drizzle.
I tried to describe it in English on Twitter, but I couldn't actually find a proper word to describe it. I considered "a warm cold" or "a cozy cold", but I thought these were too poetic, more like something you'd use in a fairy tale than in a tweet to a handful of followers.
How can you describe that temperature without it becoming confusing or poetic? | 2014/11/18 | [
"https://english.stackexchange.com/questions/208866",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/69529/"
] | I realize this doesn't serve the single-word tag, but perhaps you could separate the temperature from your perception:
'I was unaffected by the cold' or 'I didn't feel the cold'
With your description of the morning you could state: 'compared with the rainy wind in the morning, it didn't feel cold.' | I would say "chilly" or a "nip" in the air. |
208,866 | Last night, I walked home from my bus stop (in Belgium). Since it was around 11 PM, it was quite cold, probably only about 4-5 °C. However, it didn't actually feel cold at all, and I didn't feel like I had to rush to get home for the cold. In fact, I didn't have headwear on, but I didn't get cold ears. It was about the same temperature as this morning, but the difference is that this morning felt a lot colder, probably due to the wind and drizzle.
I tried to describe it in English on Twitter, but I couldn't actually find a proper word to describe it. I considered "a warm cold" or "a cozy cold", but I thought these were too poetic, more like something you'd use in a fairy tale than in a tweet to a handful of followers.
How can you describe that temperature without it becoming confusing or poetic? | 2014/11/18 | [
"https://english.stackexchange.com/questions/208866",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/69529/"
] | That sounds 'a bit brisk' to me.
[brisk](http://www.oxforddictionaries.com/definition/english/brisk)
>
> (Of wind or the weather) cold but pleasantly invigorating:
>
>
> * *A cold, brisk wind fills the square on a grey Saturday afternoon.*
> * *Though the wind was brisk and chilly, the sun was bright and warm.*
> * *The September night was chilly, with a brisk wind picking up, but neither seemed to notice.*
>
>
>
Here in the UK it's often (though not always) used as a form of humorous understatement, for example on the coldest day of the year you might say "*oooh, it's a bit brisk out, isn't it?*" c.f. "*Nice weather for ducks!*"
---
If it's less cold than you expected then it's **mild**. As in, "*we had a mild winter*".
[mild](http://www.oxforddictionaries.com/definition/english/mild)
>
> (Of weather) moderately warm, especially less cold than expected:
>
>
> *Tropical continental air is very dry and tends to bring very warm weather during the summer and unseasonably mild weather during the winter.*
>
>
> *Plants suffer most when warm / mild weather is suddenly replaced with cold.*
>
>
> *October has come round again and the weather is still mild, with the cold snap we had last weekend coming as a shock.*
>
>
>
---
It sounds [reet](http://www.bbc.co.uk/northyorkshire/voices2005/glossary/glossary.shtml) [parky](http://www.oxforddictionaries.com/definition/english/parky)!
Example usage: <https://www.flickr.com/photos/heandfi/4141559844/> | I also like
[bracing](http://www.oxforddictionaries.com/definition/english/bracing)
although it doesn't necessarily mean cold. |
208,866 | Last night, I walked home from my bus stop (in Belgium). Since it was around 11 PM, it was quite cold, probably only about 4-5 °C. However, it didn't actually feel cold at all, and I didn't feel like I had to rush to get home for the cold. In fact, I didn't have headwear on, but I didn't get cold ears. It was about the same temperature as this morning, but the difference is that this morning felt a lot colder, probably due to the wind and drizzle.
I tried to describe it in English on Twitter, but I couldn't actually find a proper word to describe it. I considered "a warm cold" or "a cozy cold", but I thought these were too poetic, more like something you'd use in a fairy tale than in a tweet to a handful of followers.
How can you describe that temperature without it becoming confusing or poetic? | 2014/11/18 | [
"https://english.stackexchange.com/questions/208866",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/69529/"
] | mplungjan's **Fresh** is a very good suggestion, but have you considered calling it **cool** rather than
"*{adjective}* cold"?
Describes the low temperature and implies no discomfort (or you would have used something more harsh than **cool**) | Nate, several factors influence the way cold temperatures are perceived by the body. It may have been around 5ºC but, with **no wind** and very **low humidity**, it may have felt relatively pleasant. The reason is that under such circumstances, it will take longer for the exposed skin to cool and for our body to perceive it is really cold. We could then say...
>
> **It was pleasantly cool.**
>
>
> It was around freezing but **it felt very mild.**
>
>
>
["mild"](http://www.thefreedictionary.com/mild) - not cold, severe, or extreme; temperate: a mild winter. |
208,866 | Last night, I walked home from my bus stop (in Belgium). Since it was around 11 PM, it was quite cold, probably only about 4-5 °C. However, it didn't actually feel cold at all, and I didn't feel like I had to rush to get home for the cold. In fact, I didn't have headwear on, but I didn't get cold ears. It was about the same temperature as this morning, but the difference is that this morning felt a lot colder, probably due to the wind and drizzle.
I tried to describe it in English on Twitter, but I couldn't actually find a proper word to describe it. I considered "a warm cold" or "a cozy cold", but I thought these were too poetic, more like something you'd use in a fairy tale than in a tweet to a handful of followers.
How can you describe that temperature without it becoming confusing or poetic? | 2014/11/18 | [
"https://english.stackexchange.com/questions/208866",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/69529/"
] | That sounds 'a bit brisk' to me.
[brisk](http://www.oxforddictionaries.com/definition/english/brisk)
>
> (Of wind or the weather) cold but pleasantly invigorating:
>
>
> * *A cold, brisk wind fills the square on a grey Saturday afternoon.*
> * *Though the wind was brisk and chilly, the sun was bright and warm.*
> * *The September night was chilly, with a brisk wind picking up, but neither seemed to notice.*
>
>
>
Here in the UK it's often (though not always) used as a form of humorous understatement, for example on the coldest day of the year you might say "*oooh, it's a bit brisk out, isn't it?*" c.f. "*Nice weather for ducks!*"
---
If it's less cold than you expected then it's **mild**. As in, "*we had a mild winter*".
[mild](http://www.oxforddictionaries.com/definition/english/mild)
>
> (Of weather) moderately warm, especially less cold than expected:
>
>
> *Tropical continental air is very dry and tends to bring very warm weather during the summer and unseasonably mild weather during the winter.*
>
>
> *Plants suffer most when warm / mild weather is suddenly replaced with cold.*
>
>
> *October has come round again and the weather is still mild, with the cold snap we had last weekend coming as a shock.*
>
>
>
---
It sounds [reet](http://www.bbc.co.uk/northyorkshire/voices2005/glossary/glossary.shtml) [parky](http://www.oxforddictionaries.com/definition/english/parky)!
Example usage: <https://www.flickr.com/photos/heandfi/4141559844/> | Nate, several factors influence the way cold temperatures are perceived by the body. It may have been around 5ºC but, with **no wind** and very **low humidity**, it may have felt relatively pleasant. The reason is that under such circumstances, it will take longer for the exposed skin to cool and for our body to perceive it is really cold. We could then say...
>
> **It was pleasantly cool.**
>
>
> It was around freezing but **it felt very mild.**
>
>
>
["mild"](http://www.thefreedictionary.com/mild) - not cold, severe, or extreme; temperate: a mild winter. |
208,866 | Last night, I walked home from my bus stop (in Belgium). Since it was around 11 PM, it was quite cold, probably only about 4-5 °C. However, it didn't actually feel cold at all, and I didn't feel like I had to rush to get home for the cold. In fact, I didn't have headwear on, but I didn't get cold ears. It was about the same temperature as this morning, but the difference is that this morning felt a lot colder, probably due to the wind and drizzle.
I tried to describe it in English on Twitter, but I couldn't actually find a proper word to describe it. I considered "a warm cold" or "a cozy cold", but I thought these were too poetic, more like something you'd use in a fairy tale than in a tweet to a handful of followers.
How can you describe that temperature without it becoming confusing or poetic? | 2014/11/18 | [
"https://english.stackexchange.com/questions/208866",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/69529/"
] | I also like
[bracing](http://www.oxforddictionaries.com/definition/english/bracing)
although it doesn't necessarily mean cold. | "Real Feel" is the term used in AccuWeather. Usually the morning feels colder than the night because in the morning we are coming out of a warm home, while by night we have become used to the cold. |
208,866 | Last night, I walked home from my bus stop (in Belgium). Since it was around 11 PM, it was quite cold, probably only about 4-5 °C. However, it didn't actually feel cold at all, and I didn't feel like I had to rush to get home for the cold. In fact, I didn't have headwear on, but I didn't get cold ears. It was about the same temperature as this morning, but the difference is that this morning felt a lot colder, probably due to the wind and drizzle.
I tried to describe it in English on Twitter, but I couldn't actually find a proper word to describe it. I considered "a warm cold" or "a cozy cold", but I thought these were too poetic, more like something you'd use in a fairy tale than in a tweet to a handful of followers.
How can you describe that temperature without it becoming confusing or poetic? | 2014/11/18 | [
"https://english.stackexchange.com/questions/208866",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/69529/"
] | mplungjan's **Fresh** is a very good suggestion, but have you considered calling it **cool** rather than
"*{adjective}* cold"?
Describes the low temperature and implies no discomfort (or you would have used something more harsh than **cool**) | "Real Feel" is the term used in AccuWeather. Usually the morning feels colder than the night because in the morning we are coming out of a warm home, while by night we have become used to the cold. |
208,866 | Last night, I walked home from my bus stop (in Belgium). Since it was around 11 PM, it was quite cold, probably only about 4-5 °C. However, it didn't actually feel cold at all, and I didn't feel like I had to rush to get home for the cold. In fact, I didn't have headwear on, but I didn't get cold ears. It was about the same temperature as this morning, but the difference is that this morning felt a lot colder, probably due to the wind and drizzle.
I tried to describe it in English on Twitter, but I couldn't actually find a proper word to describe it. I considered "a warm cold" or "a cozy cold", but I thought these were too poetic, more like something you'd use in a fairy tale than in a tweet to a handful of followers.
How can you describe that temperature without it becoming confusing or poetic? | 2014/11/18 | [
"https://english.stackexchange.com/questions/208866",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/69529/"
] | I realize this doesn't serve the single-word tag, but perhaps you could separate the temperature from your perception:
'I was unaffected by the cold' or 'I didn't feel the cold'
With your description of the morning you could state: 'compared with the rainy wind in the morning, it didn't feel cold.' | "Real Feel" is the term used in AccuWeather. Usually the morning feels colder than the night because in the morning we are coming out of a warm home, while by night we have become used to the cold. |
208,866 | Last night, I walked home from my bus stop (in Belgium). Since it was around 11 PM, it was quite cold, probably only about 4-5 °C. However, it didn't actually feel cold at all, and I didn't feel like I had to rush to get home for the cold. In fact, I didn't have headwear on, but I didn't get cold ears. It was about the same temperature as this morning, but the difference is that this morning felt a lot colder, probably due to the wind and drizzle.
I tried to describe it in English on Twitter, but I couldn't actually find a proper word to describe it. I considered "a warm cold" or "a cozy cold", but I thought these were too poetic, more like something you'd use in a fairy tale than in a tweet to a handful of followers.
How can you describe that temperature without it becoming confusing or poetic? | 2014/11/18 | [
"https://english.stackexchange.com/questions/208866",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/69529/"
] | "Real Feel" is the word which is used in accuweather. Usually morning is more colder than nights beacuse in morning we are coming out of "warm home" and till night we become used to. | Technically, the condition is produced by low humidity and lack of wind. I would suggest using something like "frosty stillness". Also, if you are under trees, you can pick up infrared radiation from the trees themselves that actually does warm you. In that case, you could mention the trees, such as "the warm embrace of trees in the frosty night". |
65,713 | I have a set of observations, independent of time. I am wondering whether I should run any autocorrelation tests? It seems to me that it makes no sense, since there's no time component in my data. However, I actually tried serial correlation LM test, and it indicates strong autocorrelation of residuals. Does it make any sense? What I'm thinking is that I can actually rearrange observations in my dataset in any possible order, and this would change the autocorrelation in residuals. So the question is - should I care at all about autocorrelation in this case? And should I use Newey-West to adjust SE for it in case test indicates so? Thanks! | 2013/07/27 | [
"https://stats.stackexchange.com/questions/65713",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28479/"
] | The true distinction between data sets is whether or not there exists a natural ordering of the observations that corresponds to real-world structures and is relevant to the issue at hand.
Of course, the clearest (and indisputable) "natural ordering" is that of time, and hence the usual dichotomy "cross-sectional / time series". But as pointed out in the comments, we may have non-time series data that nevertheless possess a natural *spatial* ordering. In such a case all the concepts and tools developed in the context of time-series analysis apply here equally well, since you are supposed to realize that a meaningful spatial ordering exists, and not only preserve it, but also examine what it may imply for the series of the error term, among other things related to the whole model (like the existence of a trend, that would make the data non-stationary for example).
For a (crude) example, assume that you collect data on the number of cars that have stopped in various stop-in establishments along a highway, on a particular day (that's the dependent variable). Your regressors measure the various facilities/services each stop-in offers, and perhaps other things like distance from highway exits/entrances. These establishments are naturally ordered along the highway...
But does this matter? Should we maintain the ordering, and even wonder whether the error term is auto-correlated? *Certainly*: assume that some facilities/services on establishment No 1 are in reality non-functional during this particular day (this event would be captured by the error term). Cars intending to use these particular facilities/services will nevertheless stop-in, because they do not know about the problem. But they will find out about the problem, and so, *because of the problem*, they will also stop in the *next* establishment, No 2, where, *if* what they want is on offer, they will receive the services and they won't stop in establishment No 3 - but there is a possibility that establishment No 2 will appear expensive, and so they will, after all, try also establishment No 3: This means that the dependent variables of the three establishments may not be independent, which is equivalent to say that there is the possibility of correlation of the three corresponding error terms, and *not* "equally", but depending on their respective positions.
So the spatial ordering is to be preserved, and tests for autocorrelation must be executed -and they will be meaningful.
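A hedged illustration of that point (synthetic data and illustrative names, not from the original answer): the familiar serial-correlation diagnostics are only meaningful once the rows are kept in their natural spatial order.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(0)
n = 200
services = rng.normal(size=n)                                        # facilities score per stop-in
spill = np.convolve(rng.normal(size=n), [1.0, 0.6], mode="same")     # shock spilling to the next stop
cars = 2.0 + 1.5 * services + spill                                  # cars stopping at each establishment

# rows must stay sorted by position along the highway for the test to mean anything
res = sm.OLS(cars, sm.add_constant(services)).fit()
lm_stat, lm_pval, _, _ = acorr_breusch_godfrey(res, nlags=1)
print(lm_pval)   # a small p-value: residuals of neighbouring stop-ins are correlated
```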
If on the other hand no such "natural" and meaningful ordering appears to be present for a specific data set, then the possible correlation between observations should not be designated as "autocorrelation" because it would be misleading, and the tools specifically developed for ordered data are inapplicable. But correlation may very well exist, although in such a case it is rather more difficult to detect and estimate. | Just adding another example (much more common) in which you will probably find autocorrelation in cross-sectional data, which is when you have groups of observations. For example, if you have the math scores from a standardized exam of a thousand kids, but these kids came from 100 different schools, it would be appropriate to think that the observations are not independent, since the school's overall math performance could be related to the students' individual performance.
In this case, if you omit the school ID term in your model you will be omitting a relevant variable, which could bias your estimates. Also, if a relevant difference in the distribution of math scores is observed beyond the mean (variance, skewness, and kurtosis), you should probably consider using robust errors in your models (or clustering the errors at the school level). This won't change your coefficients, but it could dramatically change your model's t-test and F-test statistics, since you are now accounting for possible violations of the 4th OLS assumption (constant variance).
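A hedged sketch of that suggestion (made-up data and names, using statsmodels' cluster-robust covariance option; nothing here is from the original answer):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_schools, kids_per_school = 100, 10
school = np.repeat(np.arange(n_schools), kids_per_school)            # group id per kid
school_effect = rng.normal(size=n_schools)[school]                   # shared within-school shock
study_hours = rng.normal(size=n_schools)[school] + rng.normal(size=school.size)
score = 50 + 2.0 * study_hours + 5.0 * school_effect + rng.normal(size=school.size)

X = sm.add_constant(study_hours)
naive = sm.OLS(score, X).fit()                                        # pretends errors are independent
clustered = sm.OLS(score, X).fit(cov_type="cluster", cov_kwds={"groups": school})

# same slope estimate, but the clustered standard error is noticeably larger here
# because both the regressor and the error share a school-level component
print(naive.params[1], naive.bse[1], clustered.bse[1])
```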
To sum up, if you have groups in your cross-sectional data, and it is plausible that these groups matter, then it is also plausible that the observations are not independent. Thus, you should control for the group (through a group fixed-effect model, for example) and use robust errors at the group level, to have much more confidence both in your coefficients and in their p-values. |
24,644 | What is the difference between best-effort traffic and real-time traffic? Does TCP mean best-effort traffic and UDP mean real-time traffic? Or is it something else? | 2015/11/22 | [
"https://networkengineering.stackexchange.com/questions/24644",
"https://networkengineering.stackexchange.com",
"https://networkengineering.stackexchange.com/users/20873/"
] | Typically, real-time traffic (voice, video, etc) does use UDP, but UDP is used for many other things, and most UDP traffic is not real-time traffic.
Classifications like "best effort" and "real time" (you can have additional classifications) are made by the network administrator to specify how the traffic is treated by the network devices--routers and switches. An administrator decides which traffic is given priority so that network resources (bandwidth, etc) can be matched to the requirements of the traffic.
In other words, as the network administrator, it's up to you to classify traffic in ways that suit your requirements. You decide that VoIP traffic gets priority in your network and is forwarded before other kinds of traffic. You can also decide that YouTube videos of dancing kittens get lower priority and limited access to your network resources. | Real time or best effort only makes sense when you have a congested network.
When your network is congested you can mark some types of traffic as real time and some as best effort.
When a real time packet hits a switch/router, it's processed first. After that the switch/router processes best effort packets.
The best analogy I can give you is an avenue. The avenue here is your network. All types of cars/trucks (packets) go through it.
But there are times when a firefighter truck needs to go first. And in your network you could mark firefighters as real time traffic and normal cars as best effort.
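A toy sketch of that analogy (illustrative names only, nothing like real router code): a strict-priority scheduler always empties the real-time queue before it touches the best-effort queue.

```python
from collections import deque

realtime, best_effort = deque(), deque()

def enqueue(packet, cls):
    # classify on arrival: "firefighter trucks" vs. ordinary cars
    (realtime if cls == "realtime" else best_effort).append(packet)

def transmit_next():
    # real-time traffic is always drained first
    if realtime:
        return realtime.popleft()
    if best_effort:
        return best_effort.popleft()
    return None

enqueue("voip-1", "realtime")
enqueue("web-1", "best_effort")
enqueue("voip-2", "realtime")
print([transmit_next() for _ in range(3)])   # ['voip-1', 'voip-2', 'web-1']
```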
TCP can be real time and UDP could be best effort and the other way around. The answer here is the good old "it depends".
The thing is that TCP is connection oriented where UDP is not. So it's a matter of taste I guess...
Hope that helps! |
4,521,901 | Assume I have a fingerprint DB of cell towers.
The data (including longitude & latitude, CellID, signal strength, etc.) is obtained by 'wardriving', similar to OpenCellID.org.
I would like to be able to get the location of the client mobile phone without GPS (similar to OpenCellID / Skyhook Wireless / Google's 'MyLocation'). The phone sends me info on the cell towers it "sees" at the moment: the cell tower it is connected to, and another 6 neighboring cell towers (assuming GSM).
I have read and Googled this for a long time and came across several effective theories, such as using SQL 2008 spatial capabilities, using a Euclidean algorithm, or using a Markov model.
However, I am lacking a practical solution, preferably in C# or using SQL 2008 :)
The location calculation will be done on the server and not on the client mobile phone. the phone's single job is to send via HTTP/GPRS, the tower it's connected to and other neighboring cell towers.
Any input is appreciated, I have read so much and so far haven't really advanced much.
Thanx | 2010/12/23 | [
"https://Stackoverflow.com/questions/4521901",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/292351/"
] | Why don't you do a plain triangulation calculation, based on cell position and signal strength? | You can't do triangulation as the phone scans just the signals from two base stations and not three. Furthermore I don't know whether it is possible to somehow access the data of both stations because you would have to deal with low-level GSM/3G protocols.
By using AT commands or functions of newer phones' SDKs (Java, Android, iPhone, Symbian...) you can retrieve information about the cell id, lac, mnc, mcc, signal strength and timing advance.
By examining the timing advance you can determine how far you are from the base station, as the signal travels at the speed of light, but you can't determine the exact position, just the "circle of possible positions" whose approximate radius you calculate (approximate because it is not in general true that the signal travels directly - the signal can reflect off nearby objects).
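A hedged back-of-the-envelope sketch of that idea (the GSM bit-period constant and the conversion are approximations, not part of the original answer):

```python
# One GSM timing-advance step corresponds to one bit period (~3.69 µs) of extra
# round-trip delay, i.e. roughly 550 m of one-way distance; treat it as approximate.
SPEED_OF_LIGHT = 299_792_458.0        # m/s
GSM_BIT_PERIOD = 48 / 13 * 1e-6       # seconds, ~3.69 µs

def ta_to_radius_m(timing_advance):
    # the round-trip delay grows by one bit period per TA step, so halve it for one way
    return timing_advance * SPEED_OF_LIGHT * GSM_BIT_PERIOD / 2

print(ta_to_radius_m(3))              # ~1660 m: somewhere on a ~1.7 km circle around the tower
```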
If you are working with .NET, [here](http://mypetprojects.blogspot.com/2011/05/retrieving-cell-informations-using.html) is an example of retrieving the base station location from the CellID and other data that are accessible on your phone. Hope it helps. |
82,654 | Does anybody have a Sketch to Illustrator conversion workflow? The reason I am asking is that at the place where I work, a lot of our documentation is done in Illustrator, and I have been asked to deliver spec material for the pattern library there in Illustrator format (AI and PDF). I don't want to do that.
I have a technique where I have key commands set up to convert single lines of text back to blocks and another key command for releasing clipping paths. It's a bit more cleanup than I would like to do. Anybody have something better?
This is the script I am using to merge lines of text: <http://ajarproductions.com/blog/2008/11/23/merge-text-extension-for-illustrator/>
What I have tried:
Copy from Illustrator to Sketch
Save as PDF and import to Sketch
The issue with this is that text blocks break apart. Objects like buttons come into Illustrator with many clipping paths, which makes cleaning up a mission. | 2017/01/05 | [
"https://graphicdesign.stackexchange.com/questions/82654",
"https://graphicdesign.stackexchange.com",
"https://graphicdesign.stackexchange.com/users/81687/"
] | Probably SVG is the best format to pass files between Illustrator and SketchApp.
* Sketch - Export ARTBOARD in SVG
* Illustrator - Open, select all, ungroup (couple of times)
* Illustrator - Clean a bit
* Illustrator - Export/Save as .AI
[screenshot](https://i.stack.imgur.com/g49Pn.png) | You can open .ai files with Adobe XD. It is free to use. My workflow:
1. Import .ai File in adobe xd
2. Select my needed layers and copy
3. Paste into Sketch
4. Be happy and work with sketch ;) |
242,887 | Is there a general 2-input/1-output circuit that multiplies the voltage of one wire by the voltage of the other wire?
For example, Voltage of A = 3, Voltage of B = 5, output voltage is 15. | 2016/06/25 | [
"https://electronics.stackexchange.com/questions/242887",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/113696/"
] | There are several possibilities, mainly depending on required accuracy and speed (signal frequency)
* dual gate MOSFETs (e.g. BF961)
* mixer ICs e.g. NE612
* precision analog multiplier ICs e.g. AD633, MPY534
* log-add-antilog OpAmp circuits (i.e. calculate the logarithms of both input signals, add them, and take the anti-log (exponential function) of the sum; see the worked relation below). The log and antilog functions can be accomplished by OpAmp circuits that exploit the exponential I-V characteristic of semiconductor diodes.
The first two solutions are good for RF and when scaling is not important (e.g. RF mixing).
The latter two are good when scale is important (e.g. in analog computer).
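A worked form of the log-add-antilog relation from the list above (a sketch with all scale constants folded into a single $K$; real circuits also have to compensate temperature-dependent terms):

$$ V_{\text{out}} = K\,\exp\!\left(\ln\frac{V_A}{K} + \ln\frac{V_B}{K}\right) = \frac{V_A\,V_B}{K} $$

This is also why dedicated multiplier ICs such as the AD633 mentioned above deliver the product divided by a fixed scale factor (10 V for that part).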
For special cases when one of the input signals takes only the values 0/1 or 1/-1, you can use analog switches. | For the simplest approach, you can use a **variable-gain** or **voltage-controlled amplifier**.
Further, as mentioned by Jack B in the comment above, you can select an [analog multiplier](https://en.wikipedia.org/wiki/Analog_multiplier) IC depending on whether the use is in power electronics or signal processing electronics.
1,425 | I'm always intrigued by the lack of numerical evidence from experimental mathematics for or against the P vs NP question. While the Riemann Hypothesis has some supporting evidence from numerical verification, I'm not aware of similar evidence for the P vs NP question.
Additionally, I'm not aware of any direct physical-world consequences of the existence of undecidable problems (or the existence of uncomputable functions). Protein folding is an NP-complete problem, but it appears to take place very efficiently in biological systems. Scott Aaronson proposed using the NP Hardness Assumption as a principle of physics. He states the assumption informally as "*NP-complete problems are intractable in the physical world*".
>
> Assuming NP Hardness Assumption, Why is it hard to design a scientific experiment that decides whether our universe respects the NP Hardness Assumption or not?
>
>
>
Also, is there any known numerical evidence from experimental mathematics for or against $P\ne NP$?
**EDIT:** Here is a nice presentation by Scott Aaronson titled [Computational Intractability As A Law of Physics](http://www.scottaaronson.com/talks/colloq2.ppt) | 2010/09/18 | [
"https://cstheory.stackexchange.com/questions/1425",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/495/"
] | The definitions of "polynomial time" and "exponential time" describe the limiting behavior of the running time as the input size grows to infinity. On the other hand, any physical experiment necessarily considers only inputs of bounded size. Thus, there is absolutely no way to determine experimentally whether a given algorithm runs in polynomial time, exponential time, or something else.
Or in other words: what Robin said. | The study of real-world situations from a computational perspective is quite hard due to the continuous-discrete "jump". While all events in the real world (supposedly) are run in continuous time, the models we usually use are implemented in discrete time. Therefore, it is very tricky to define how small or large a step should be, what should be the size of the problem, etc.
I have written a summary of Aaronson's paper on the subject; however, it is not in English. See [the original paper](http://www.scottaaronson.com/papers/npcomplete.pdf).
Personally, I have heard of another example of a real-world problem modeled as a computational one. The paper is about control-systems models based on bird flocking. It turns out that although it takes a short time in real life for birds, it is intractable ("a tower of 2s") when analyzed as a computational problem. See [the paper by Bernard Chazelle](http://arxiv.org/abs/0905.4241) for details.
[Edit: Clarified the part about the Chazelle paper. Thanks for providing precise information.] |
1,425 | I'm always intrigued by the lack of numerical evidence from experimental mathematics for or against the P vs NP question. While the Riemann Hypothesis has some supporting evidence from numerical verification, I'm not aware of similar evidence for the P vs NP question.
Additionally, I'm not aware of any direct physical world consequences of the existence of undecidable problems (or the existence of uncomputable functions). Protein folding is an NP-complete problem but it appears to be taking place very efficiently in biological systems. Scott Aaronson proposed using the NP Hardness Assumption as a principle of physics. He states the assumption informally as "*NP-complete problems are intractable in the physical world*".
>
> Assuming NP Hardness Assumption, Why is it hard to design a scientific experiment that decides whether our universe respects the NP Hardness Assumption or not?
>
>
>
Also, Is there any known numerical evidence from experimental mathematics for or against $P\ne NP$?
**EDIT:** Here is a nice presentation by Scott Aaronson titled [Computational Intractability As A Law of Physics](http://www.scottaaronson.com/talks/colloq2.ppt) | 2010/09/18 | [
"https://cstheory.stackexchange.com/questions/1425",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/495/"
] | The definitions of "polynomial time" and "exponential time" describe the limiting behavior of the running time as the input size grows to infinity. On the other hand, any physical experiment necessarily considers only inputs of bounded size. Thus, there is absolutely no way to determine experimentally whether a given algorithm runs in polynomial time, exponential time, or something else.
Or in other words: what Robin said. | I still vote for the n-body problem as an example of NP intractability. The gentlemen who refer to numeric solutions forget that the numeric solution is a recursive model, and not a solution in principle in the same way that an analytic solution is. Qiu-Dong Wang's analytic solution is intractable. Proteins which can fold, and planets which can orbit in systems of more than two bodies, are physical systems, not algorithmic solutions of the kind which the P-NP problem addresses.
I must also appreciate chazisop's difficulties with solutions in continuous time. If either time or space is continuous, potential state spaces become uncountable (aleph one). |
1,425 | I'm always intrigued by the lack of numerical evidence from experimental mathematics for or against the P vs NP question. While the Riemann Hypothesis has some supporting evidence from numerical verification, I'm not aware of similar evidence for the P vs NP question.
Additionally, I'm not aware of any direct physical world consequences of the existence of undecidable problems (or the existence of uncomputable functions). Protein folding is an NP-complete problem but it appears to be taking place very efficiently in biological systems. Scott Aaronson proposed using the NP Hardness Assumption as a principle of physics. He states the assumption informally as "*NP-complete problems are intractable in the physical world*".
>
> Assuming NP Hardness Assumption, Why is it hard to design a scientific experiment that decides whether our universe respects the NP Hardness Assumption or not?
>
>
>
Also, Is there any known numerical evidence from experimental mathematics for or against $P\ne NP$?
**EDIT:** Here is a nice presentation by Scott Aaronson titled [Computational Intractability As A Law of Physics](http://www.scottaaronson.com/talks/colloq2.ppt) | 2010/09/18 | [
"https://cstheory.stackexchange.com/questions/1425",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/495/"
] | The study of real-world situations from a computational perspective is quite hard due to the continuous-discrete "jump". While all events in the real world (supposedly) are run in continuous time, the models we usually use are implemented in discrete time. Therefore, it is very tricky to define how small or large a step should be, what should be the size of the problem, etc.
I have written a summary of Aaronson's paper on the subject; however, it is not in English. See [the original paper](http://www.scottaaronson.com/papers/npcomplete.pdf).
Personally, I have heard of another example of a real-world problem modeled as a computational one. The paper is about control-systems models based on bird flocking. It turns out that although it takes a short time in real life for birds, it is intractable ("a tower of 2s") when analyzed as a computational problem. See [the paper by Bernard Chazelle](http://arxiv.org/abs/0905.4241) for details.
[Edit: Clarified the part about the Chazelle paper. Thanks for providing precise information.] | We can't efficiently solve the $n$-body problem, but those rocks-for-brains planets seem to manage just fine. |
1,425 | I'm always intrigued by the lack of numerical evidence from experimental mathematics for or against the P vs NP question. While the Riemann Hypothesis has some supporting evidence from numerical verification, I'm not aware of similar evidence for the P vs NP question.
Additionally, I'm not aware of any direct physical world consequences of the existence of undecidable problems (or the existence of uncomputable functions). Protein folding is an NP-complete problem but it appears to be taking place very efficiently in biological systems. Scott Aaronson proposed using the NP Hardness Assumption as a principle of physics. He states the assumption informally as "*NP-complete problems are intractable in the physical world*".
>
> Assuming NP Hardness Assumption, Why is it hard to design a scientific experiment that decides whether our universe respects the NP Hardness Assumption or not?
>
>
>
Also, Is there any known numerical evidence from experimental mathematics for or against $P\ne NP$?
**EDIT:** Here is a nice presentation by Scott Aaronson titled [Computational Intractability As A Law of Physics](http://www.scottaaronson.com/talks/colloq2.ppt) | 2010/09/18 | [
"https://cstheory.stackexchange.com/questions/1425",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/495/"
] | If you'll allow me to generalize a tiny bit... Let's extend the question and ask for other complexity-theoretic hardness assumptions and their consequences for scientific experiments. (I'll focus on physics.) Recently there was a rather successful program to try to understand the set of allowable correlations between two measurement devices which, while spatially separated, perform a measurement on a (possibly non-locally correlated) physical system (1). Under this and similar setups, one can use the assumptions about the hardness of *communication complexity* to derive tight bounds which reproduce the allowable correlations for quantum mechanics.
To give you a flavor, let me describe an earlier result in this regard. A [Popescu-Rohrlich box](http://en.wikipedia.org/wiki/Nonlocality#Generalising_nonlocality) (or PR box) is an imaginary device which reproduces correlations between the measurement devices which are consistent with the principle that no information can travel faster than light (called the principle of *no signaling*).
>
> S. Popescu & D. Rohrlich, Quantum
> nonlocality as an axiom, Found. Phys.
> 24, 379–385 (1994).
>
>
>
We can see this as an instance of communication complexity having some influence. The idea that two observers *must* communicate implicitly assumes some constraint which a physicist would call no signaling. Turning this idea around, what types of correlations are possible between two measurement devices constrained by no signaling? This is what Popescu & Rohrlich study. They showed that this set of allowable correlations is strictly larger than those allowed by quantum mechanics, which are in turn strictly larger than those allowed by classical physics.
The question then presents itself, what makes the set of quantum correlations the "right" set of correlations, and not those allowed by no signaling?
To address this question, let's make the bare-bones assumption that there exist functions for which the communication complexity is non-trivial. Here non-trivial just means that to jointly compute a boolean function f(x,y), it takes more than just a *single* bit (2). Well surprisingly, even this very weak complexity-theoretic assumption is sufficient to restrict the space of allowable correlations.
>
> G. Brassard, H. Buhrman, N. Linden, A.
> A. Méthot, A. Tapp, and F. Unger,
> Limit on Nonlocality in Any World in
> Which Communication Complexity Is Not
> Trivial, Phys. Rev. Lett. 96, 250401
> (2006).
>
>
>
Note that a weaker result was already proven in the Ph.D. thesis of Wim van Dam. What Brassard et al. prove is that having access to PR boxes, even ones which are faulty and only produce the correct correlation some of the time, enables one to completely trivialize communication complexity. In this world, every two-variable Boolean function can be jointly computed by transmitting only a single bit. This seems pretty absurd, so let's look at it conversely. We can take the non-triviality of communication complexity as an axiom, and this allows us to *derive* the fact that we don't observe certain stronger-than-quantum correlations in our experiments.
This program using communication complexity has been surprisingly successful, perhaps much more so than the corresponding one for computational complexity. The papers above are really just the tip of the iceberg. A good place to begin further reading is this review:
>
> H. Buhrman, R. Cleve, S. Massar and R.
> de Wolf, Nonlocality and communication
> complexity, Rev. Mod. Phys. 82,
> 665–698 (2010).
>
>
>
or a forward literature search from the two other papers that I cited.
This also raises the interesting question about why the communication setting seems much more amenable to analysis than the computation setting. Perhaps that could be the subject of another posted question on cstheory.
---
(1) Take for example the experiments measuring something known as the CHSH inequality (a type of [Bell inequality](http://en.wikipedia.org/wiki/Bell%27s_theorem)), where the physical system consists of two entangled photons, and the measurements are polarization measurements on the individual photons at two spatially distant locations.
(2) This single bit is necessary whenever f(x,y) actually depends on both x and y, since sending *zero* bits would violate no signaling. | The definitions of "polynomial time" and "exponential time" describe the limiting behavior of the running time as the input size grows to infinity. On the other hand, any physical experiment necessarily considers only inputs of bounded size. Thus, there is absolutely no way to determine experimentally whether a given algorithm runs in polynomial time, exponential time, or something else.
Or in other words: what Robin said. |
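As a toy numerical illustration of this point (my own example, not part of the original answer): on any bounded range of input sizes, an "exponential" running time can beat a "polynomial" one with a larger constant, e.g.

$$2^n < 100\,n^2 \ \text{ for all integers } 1 \le n \le 14, \qquad 2^n > 100\,n^2 \ \text{ for all integers } n \ge 15,$$

so an experiment restricted to inputs of size at most 14 would see the exponential running time $2^n$ winning against the polynomial one $100\,n^2$ on every input it can test, and could say nothing about the asymptotic behaviour.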
1,425 | I'm always intrigued by the lack of numerical evidence from experimental mathematics for or against the P vs NP question. While the Riemann Hypothesis has some supporting evidence from numerical verification, I'm not aware of similar evidence for the P vs NP question.
Additionally, I'm not aware of any direct physical world consequences of the existence of undecidable problems (or the existence of uncomputable functions). Protein folding is an NP-complete problem but it appears to be taking place very efficiently in biological systems. Scott Aaronson proposed using the NP Hardness Assumption as a principle of physics. He states the assumption informally as "*NP-complete problems are intractable in the physical world*".
>
> Assuming NP Hardness Assumption, Why is it hard to design a scientific experiment that decides whether our universe respects the NP Hardness Assumption or not?
>
>
>
Also, Is there any known numerical evidence from experimental mathematics for or against $P\ne NP$?
**EDIT:** Here is a nice presentation by Scott Aaronson titled [Computational Intractability As A Law of Physics](http://www.scottaaronson.com/talks/colloq2.ppt) | 2010/09/18 | [
"https://cstheory.stackexchange.com/questions/1425",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/495/"
] | If you'll allow me to generalize a tiny bit... Let's extend the question and ask for other complexity-theoretic hardness assumptions and their consequences for scientific experiments. (I'll focus on physics.) Recently there was a rather successful program to try to understand the set of allowable correlations between two measurement devices which, while spatially separated, perform a measurement on a (possibly non-locally correlated) physical system (1). Under this and similar setups, one can use the assumptions about the hardness of *communication complexity* to derive tight bounds which reproduce the allowable correlations for quantum mechanics.
To give you a flavor, let me describe an earlier result in this regard. A [Popescu-Rohrlich box](http://en.wikipedia.org/wiki/Nonlocality#Generalising_nonlocality) (or PR box) is an imaginary device which reproduces correlations between the measurement devices which are consistent with the principle that no information can travel faster than light (called the principle of *no signaling*).
>
> S. Popescu & D. Rohrlich, Quantum
> nonlocality as an axiom, Found. Phys.
> 24, 379–385 (1994).
>
>
>
We can see this as an instance of communication complexity having some influence. The idea that two observers *must* communicate implicitly assumes some constraint which a physicist would call no signaling. Turning this idea around, what types of correlations are possible between two measurement devices constrained by no signaling? This is what Popescu & Rohrlich study. They showed that this set of allowable correlations is strictly larger than those allowed by quantum mechanics, which are in turn strictly larger than those allowed by classical physics.
The question then presents itself, what makes the set of quantum correlations the "right" set of correlations, and not those allowed by no signaling?
To address this question, let's make the bare-bones assumption that there exist functions for which the communication complexity is non-trivial. Here non-trivial just means that to jointly compute a boolean function f(x,y), it takes more than just a *single* bit (2). Well surprisingly, even this very weak complexity-theoretic assumption is sufficient to restrict the space of allowable correlations.
>
> G. Brassard, H. Buhrman, N. Linden, A.
> A. Méthot, A. Tapp, and F. Unger,
> Limit on Nonlocality in Any World in
> Which Communication Complexity Is Not
> Trivial, Phys. Rev. Lett. 96, 250401
> (2006).
>
>
>
Note that a weaker result was already proven in the Ph.D. thesis of Wim van Dam. What Brassard et al. prove is that having access to PR boxes, even ones which are faulty and only produce the correct correlation some of the time, enables one to completely trivialize communication complexity. In this world, every two-variable Boolean function can be jointly computed by transmitting only a single bit. This seems pretty absurd, so let's look at it conversely. We can take the non-triviality of communication complexity as an axiom, and this allows us to *derive* the fact that we don't observe certain stronger-than-quantum correlations in our experiments.
This program using communication complexity has been surprisingly successful, perhaps much more so than the corresponding one for computational complexity. The papers above are really just the tip of the iceberg. A good place to begin further reading is this review:
>
> H. Buhrman, R. Cleve, S. Massar and R.
> de Wolf, Nonlocality and communication
> complexity, Rev. Mod. Phys. 82,
> 665–698 (2010).
>
>
>
or a forward literature search from the two other papers that I cited.
This also raises the interesting question about why the communication setting seems much more amenable to analysis than the computation setting. Perhaps that could be the subject of another posted question on cstheory.
---
(1) Take for example the experiments measuring something known as the CHSH inequality (a type of [Bell inequality](http://en.wikipedia.org/wiki/Bell%27s_theorem)), where the physical system consists of two entangled photons, and the measurements are polarization measurements on the individual photons at two spatially distant locations.
(2) This single bit is necessary whenever f(x,y) actually depends on both x and y, since sending *zero* bits would violate no signaling. | Let me start out by saying that I agree completely with Robin. As regards the protein folding, there is a small issue. As with all such systems, protein folding can get stuck in local minima, which is something you seem to be neglecting. The more general problem is simply finding the ground state of some Hamiltonian. Actually, even if we consider only spins (i.e. qubits) this problem is complete for QMA.
Natural Hamiltonians are a little softer, however, than some of the artificial ones used to prove QMA completeness (which tend not to mirror natural interactions), but even when we restrict to natural two-body interactions on simple systems the result is still an NP-complete problem. Indeed, this forms the basis of an attempted approach to tackling NP problems using adiabatic quantum computing. Unfortunately it appears that this approach will not work for NP-complete problems, due to a rather technical issue to do with the energy-level structure. This does, however, lead to an interesting consequence: there exist problems within NP which are not efficiently solvable by nature (by which I mean physical processes). It means that there exist systems which cannot cool efficiently. That is to say, it seems you can construct a physical system which takes exponentially long in the size of the system to come into thermal equilibrium with the environment.
1,425 | I'm always intrigued by the lack of numerical evidence from experimental mathematics for or against the P vs NP question. While the Riemann Hypothesis has some supporting evidence from numerical verification, I'm not aware of similar evidence for the P vs NP question.
Additionally, I'm not aware of any direct physical world consequences of the existence of undecidable problems (or the existence of uncomputable functions). Protein folding is an NP-complete problem but it appears to be taking place very efficiently in biological systems. Scott Aaronson proposed using the NP Hardness Assumption as a principle of physics. He states the assumption informally as "*NP-complete problems are intractable in the physical world*".
>
> Assuming NP Hardness Assumption, Why is it hard to design a scientific experiment that decides whether our universe respects the NP Hardness Assumption or not?
>
>
>
Also, Is there any known numerical evidence from experimental mathematics for or against $P\ne NP$?
**EDIT:** Here is a nice presentation by Scott Aaronson titled [Computational Intractability As A Law of Physics](http://www.scottaaronson.com/talks/colloq2.ppt) | 2010/09/18 | [
"https://cstheory.stackexchange.com/questions/1425",
"https://cstheory.stackexchange.com",
"https://cstheory.stackexchange.com/users/495/"
] | Indeed the physical version of P not equal to NP, namely that no natural physical system can solve NP-complete problems, is very interesting. There are a few concerns.
1) The program seems practically "orthogonal" to both experimental and theoretical physics.
So it does not really provide (so far) useful insights into physics.
There are some nice arguments for how one can deduce from this physical version of the conjecture some insights into physics, but these arguments are fairly "soft" and have loopholes. (And such arguments are likely to be problematic, since they rely on very difficult mathematical conjectures, such as NP not equal to P and NP not being included in BQP, that we do not understand.)
(A similar comment applies to the "Church-Turing thesis".)
2) Although the physical NP not equal P is a wider conjecture than the mathematical NP not equal P, we can also regard it as more restricted, since the algorithms that occur in nature
(and even the man-made algorithms) seem to be a very restricted class of all theoretically possible algorithms. It will be very interesting to understand such restrictions formally, but in any case any experimental "proof" as suggested in the question will apply only to this restricted class.
3) In scientific modeling, computational complexity represents a sort of second-order matter: first we would like to model a natural phenomenon and see what can be predicted based on the model (putting computational complexity theory aside). Giving too much weight to computational complexity issues in the modeling stage does not seem to be fruitful. In many cases, the model is computationally intractable to start with, but it may still be feasible for naturally occurring problems or useful for understanding the phenomenon.
4) I agree with Boaz that the asymptotic issue is not necessarily a "deal breaker". Still, it is a rather serious matter when it comes to the relevance of computational complexity to real-life modeling. | The study of real-world situations from a computational perspective is quite hard due to the continuous-discrete "jump". While all events in the real world (supposedly) are run in continuous time, the models we usually use are implemented in discrete time. Therefore, it is very tricky to define how small or large a step should be, what should be the size of the problem, etc.
I have written a summary of Aaronson's paper on the subject; however, it is not in English. See [the original paper](http://www.scottaaronson.com/papers/npcomplete.pdf).
Personally, I have heard of another example of a real-world problem modeled as a computational one. The paper is about control-systems models based on bird flocking. It turns out that although it takes a short time in real life for birds, it is intractable ("a tower of 2s") when analyzed as a computational problem. See [the paper by Bernard Chazelle](http://arxiv.org/abs/0905.4241) for details.
[Edit: Clarified the part about the Chazelle paper. Thanks for providing precise information.] |
103,788 | Can you push [Tenser's Floating Disk](http://www.5esrd.com/spellcasting/all-spells/f/floating-disk/) so that it doesn't just hang out behind you? If so, how hard would it be to push? | 2017/07/18 | [
"https://rpg.stackexchange.com/questions/103788",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/30073/"
] | RAW, the [text of the spell](https://www.dndbeyond.com/spells/floating-disk) says "No":
>
> The disk is **immobile** while you are within 20 feet of it.
>
>
>
Immobile is pretty clear (emphasis mine).
That said... I don't see any reason why a DM should not allow the spell and its cargo to be moved around at the expense of an action by the caster. | No
==
It isn't described as having any weight, but its description does say it is immobile, at least while it is within 20 feet of you.
103,788 | Can you push [Tenser's Floating Disk](http://www.5esrd.com/spellcasting/all-spells/f/floating-disk/) so that it doesn't just hang out behind you? If so, how hard would it be to push? | 2017/07/18 | [
"https://rpg.stackexchange.com/questions/103788",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/30073/"
] | No
==
It isn't described as having any weight, but its description does say it is immobile, at least while it is within 20 feet of you. | Immobile could mean it stops moving on its own. It could mean nothing can ever move it.
This really is not something the RAW answers. No amount of interpretation will make this clear. There is no way of parsing this that will be immune to abuse.
If you take immobile to be an absolute you've just given the player an immovable object to counter the DM's irresistible force. They could stop a charging dragon with one of these and build a cage with three of them.
So don't think being conservative with how you read a spell makes it immune to abuse.
Tenser's floating disk is fun because you can invent uses for it. A DM that insists the only use is to escape the encumbrance rules is running a poor game.
The best way to answer this question is to go buy a scroll of Tenser's Floating Disk. Use it. Give the disk a push. See what happens. Now decide if you want to learn the spell.
103,788 | Can you push [Tenser's Floating Disk](http://www.5esrd.com/spellcasting/all-spells/f/floating-disk/) so that it doesn't just hang out behind you? If so, how hard would it be to push? | 2017/07/18 | [
"https://rpg.stackexchange.com/questions/103788",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/30073/"
] | RAW, the [text of the spell](https://www.dndbeyond.com/spells/floating-disk) says "No":
>
> The disk is **immobile** while you are within 20 feet of it.
>
>
>
Immobile is pretty clear (emphasis mine).
That said... I don't see any reason why a DM should not allow the spell and its cargo to be moved around at the expense of an action by the caster. | Immobile could mean it stops moving on its own. It could mean nothing can ever move it.
This really is not something the RAW answers. No amount of interpretation will make this clear. There is no way of parsing this that will be immune to abuse.
If you take immobile to be an absolute you've just given the player an immovable object to counter the DM's irresistible force. They could stop a charging dragon with one of these and build a cage with three of them.
So don't think being conservative with how you read a spell makes it immune to abuse.
Tenser's floating disk is fun because you can invent uses for it. A DM that insists the only use is to escape the encumbrance rules is running a poor game.
The best way to answer this question is to go buy a scroll of Tenser's Floating Disk. Use it. Give the disk a push. See what happens. Now decide if you want to learn the spell.
3,938,454 | I need something like the slider control from jQueryUI, but I don't want to use the whole framework for only one control.
I tried searching Google but I get results for image sliders rather than the type of control I'm after. Perhaps there is another name for this kind of control?
I found only [this](http://programming.arantius.com/lightweight+javascript+slider+control)
Which is exactly what I want, but it hasn't been updated in a long time, and I don't have the facilities to make sure it works in all browsers.
Thanks! | 2010/10/14 | [
"https://Stackoverflow.com/questions/3938454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/232906/"
] | There isn't much to a custom jQuery UI download.
Plus, the files are available on Google's CDN.
jQuery
------
<http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js>
jQuery UI
---------
<http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.5/jquery-ui.min.js>
jQuery UI Theme
---------------
<http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.5/themes/smoothness/jquery-ui.css>
Your end user may not even need to download anything new. | You can customize your jQuery UI download to only include the things required for the slider |
3,938,454 | I need something like the slider control from jQueryUI, but I don't want to use the whole framework for only one control.
I tried searching Google but I get results for image sliders rather than the type of control I'm after. Perhaps there is another name for this kind of control?
I found only [this](http://programming.arantius.com/lightweight+javascript+slider+control)
Which is exactly what I want, but it hasn't been updated in a long time, and I don't have the facilities to make sure it works in all browsers.
Thanks! | 2010/10/14 | [
"https://Stackoverflow.com/questions/3938454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/232906/"
] | There isn't much to a custom jQuery UI download.
Plus, the files are available on Google's CDN.
jQuery
------
<http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js>
jQuery UI
---------
<http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.5/jquery-ui.min.js>
jQuery UI Theme
---------------
<http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.5/themes/smoothness/jquery-ui.css>
Your end user may not even need to download anything new. | Like Petah says in this thread, you can customize your jQuery UI download at:
<http://jqueryui.com/download>
Just click the 'Deselect all' links first, and then check only the components you want.
144,993 | A lot of literature talks about using inline functions to "avoid the overhead of a function call". However I haven't seen quantifiable data. What is the actual overhead of a function call i.e. what sort of performance increase do we achieve by inlining functions? | 2008/09/28 | [
"https://Stackoverflow.com/questions/144993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23120/"
] | The amount of overhead will depend on the compiler, CPU, etc. The percentage overhead will depend on the code you're inlining. The only way to know is to take *your code* and profile it both ways - that's why there's no definitive answer. | Each function call requires a new stack frame to be set up. But the overhead of this would only be noticeable if you are calling the function on every iteration of a loop over a very large number of iterations.
144,993 | A lot of literature talks about using inline functions to "avoid the overhead of a function call". However I haven't seen quantifiable data. What is the actual overhead of a function call i.e. what sort of performance increase do we achieve by inlining functions? | 2008/09/28 | [
"https://Stackoverflow.com/questions/144993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23120/"
] | There's the technical and the practical answer. The practical answer is that it will never matter, and in the very rare case it does, the only way you'll know is through actual profiled tests.
The technical answer, which your literature refers to, is generally not relevant due to compiler optimizations. But if you're still interested, it is well described by [Josh](https://stackoverflow.com/questions/144993/how-much-overhead-is-there-in-calling-a-function-in-c#144997).
As far as a "percentage" goes, you'd have to know how expensive the function itself was. Outside of the cost of the called function there is no percentage, because you are comparing to a zero-cost operation. For inlined code there is no cost; the processor just moves to the next instruction. The downside to inlining is a larger code size, which manifests its costs in a different way than the stack construction/tear-down costs. | For most functions, there is no additional overhead for calling them in C++ vs C (unless you count the "this" pointer as an unnecessary argument to every function... You have to pass state to a function somehow, though)...
For virtual functions, there is an additional level of indirection (equivalent to calling a function through a pointer in C)... But really, on today's hardware this is trivial.
144,993 | A lot of literature talks about using inline functions to "avoid the overhead of a function call". However I haven't seen quantifiable data. What is the actual overhead of a function call i.e. what sort of performance increase do we achieve by inlining functions? | 2008/09/28 | [
"https://Stackoverflow.com/questions/144993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23120/"
] | There are a few issues here.
* If you have a smart enough compiler, it will do some automatic inlining for you even if you did not specify inline. On the other hand, there are many things that cannot be inlined.
* If the function is virtual, then of course you are going to pay the price that it cannot be inlined because the target is determined at runtime. Conversely, in Java, you might be paying this price unless you indicate that the method is final.
* Depending on how your code is organized in memory, you may be paying a cost in cache misses and even page misses as the code is located elsewhere. That can end up having a huge impact in some applications. | For most functions, there is no additional overhead for calling them in C++ vs C (unless you count the "this" pointer as an unnecessary argument to every function... You have to pass state to a function somehow, though)...
For virtual functions, there is an additional level of indirection (equivalent to calling a function through a pointer in C)... But really, on today's hardware this is trivial.
144,993 | A lot of literature talks about using inline functions to "avoid the overhead of a function call". However I haven't seen quantifiable data. What is the actual overhead of a function call i.e. what sort of performance increase do we achieve by inlining functions? | 2008/09/28 | [
"https://Stackoverflow.com/questions/144993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23120/"
] | Your question is one of those questions that has no answer one could call the "absolute truth". The overhead of a normal function call depends on three factors:
1. The CPU. The overhead of x86, PPC, and ARM CPUs varies a lot and even if you just stay with one architecture, the overhead also varies quite a bit between an Intel Pentium 4, Intel Core 2 Duo and an Intel Core i7. The overhead might even vary noticeably between an Intel and an AMD CPU, even if both run at the same clock speed, since factors like cache sizes, caching algorithms, memory access patterns and the actual hardware implementation of the call opcode itself can have a huge influence on the overhead.
2. The ABI (Application Binary Interface). Even with the same CPU, there often exist different ABIs that specify how function calls pass parameters (via registers, via stack, or via a combination of both) and where and how stack frame initialization and clean-up takes place. All this has an influence on the overhead. Different operating systems may use different ABIs for the same CPU; e.g. Linux, Windows and Solaris may all three use a different ABI for the same CPU.
3. The Compiler. Strictly following the ABI is only important if functions are called between independent code units, e.g. if an application calls a function of a system library or a user library calls a function of another user library. As long as functions are "private", not visible outside a certain library or binary, the compiler may "cheat". It may not strictly follow the ABI but instead use shortcuts that lead to faster function calls. E.g. it may pass parameters in register instead of using the stack or it may skip stack frame setup and clean-up completely if not really necessary.
If you want to know the overhead for a specific combination of the three factors above, e.g. for Intel Core i5 on Linux using GCC, your only way to get this information is benchmarking the difference between two implementations, one using function calls and one where you copy the code directly into the caller; this way you force inlining for sure, since the inline statement is only a hint and does not always lead to inlining.
However, the real question here is: Does the exact overhead really matter? One thing is for sure: A function call always has an overhead. It may be small, it may be big, but it is for sure existent. And no matter how small it is if a function is called often enough in a performance critical section, the overhead will matter to some degree. Inlining rarely makes your code slower, unless you terribly overdo it; it will make the code bigger though. Today's compilers are pretty good at deciding themselves when to inline and when not, so you hardly ever have to rack your brain about it.
Personally, I ignore inlining during development completely, until I have a more or less usable product that I can profile. Only if profiling tells me that a certain function is called really often, and within a performance-critical section of the application, will I consider "force-inlining" that function.
So far my answer is very generic: it applies to C as much as it applies to C++ and Objective-C. As a closing word let me say something about C++ in particular: methods that are virtual are double-indirect function calls, which means they have a higher function call overhead than normal function calls and also they cannot be inlined. Non-virtual methods might be inlined by the compiler or not, but even if they are not inlined, they are still significantly faster than virtual ones, so you should not make methods virtual unless you really plan to override them or have them overridden. | Depending on how you structure your code (its division into units such as modules and libraries), it might matter, in some cases profoundly.
1. Using a dynamic library function with external linkage will most of the time impose full stack-frame processing.
That is why using qsort from the standard C library is one order of magnitude (10 times) slower than using STL code when the comparison operation is as simple as an integer comparison (see the sketch after this list).
2. Passing function pointers between modules will also be affected.
3. The same penalty will most likely affect usage of C++'s virtual functions, as well as other functions whose code is defined in separate modules.
4. The good news is that whole-program optimization might resolve the issue for dependencies between static libraries and modules.
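To illustrate point 1, here is a minimal sketch (my own illustration; the function names are arbitrary) of the two calling styles being compared: `qsort` goes through a function pointer for every comparison, while `std::sort` can typically inline a trivial `operator<`:

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Comparator invoked through a function pointer for every comparison;
// qsort lives in the C library, so this call usually cannot be inlined.
static int cmp_int(const void* a, const void* b) {
    const int x = *static_cast<const int*>(a);
    const int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

void sort_both_ways(std::vector<int>& v, std::vector<int>& w) {
    std::qsort(v.data(), v.size(), sizeof(int), cmp_int);  // indirect call per comparison
    std::sort(w.begin(), w.end());  // comparison via operator<, typically inlined
}
```

The indirect call itself is cheap; the cost this point refers to comes from the comparator not being inlinable into `qsort`'s loop, which matters when the comparison is as trivial as comparing two ints.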
144,993 | A lot of literature talks about using inline functions to "avoid the overhead of a function call". However I haven't seen quantifiable data. What is the actual overhead of a function call i.e. what sort of performance increase do we achieve by inlining functions? | 2008/09/28 | [
"https://Stackoverflow.com/questions/144993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23120/"
] | Your question is one of those questions that has no answer one could call the "absolute truth". The overhead of a normal function call depends on three factors:
1. The CPU. The overhead of x86, PPC, and ARM CPUs varies a lot and even if you just stay with one architecture, the overhead also varies quite a bit between an Intel Pentium 4, Intel Core 2 Duo and an Intel Core i7. The overhead might even vary noticeably between an Intel and an AMD CPU, even if both run at the same clock speed, since factors like cache sizes, caching algorithms, memory access patterns and the actual hardware implementation of the call opcode itself can have a huge influence on the overhead.
2. The ABI (Application Binary Interface). Even with the same CPU, there often exist different ABIs that specify how function calls pass parameters (via registers, via stack, or via a combination of both) and where and how stack frame initialization and clean-up takes place. All this has an influence on the overhead. Different operating systems may use different ABIs for the same CPU; e.g. Linux, Windows and Solaris may all three use a different ABI for the same CPU.
3. The Compiler. Strictly following the ABI is only important if functions are called between independent code units, e.g. if an application calls a function of a system library or a user library calls a function of another user library. As long as functions are "private", not visible outside a certain library or binary, the compiler may "cheat". It may not strictly follow the ABI but instead use shortcuts that lead to faster function calls. E.g. it may pass parameters in register instead of using the stack or it may skip stack frame setup and clean-up completely if not really necessary.
If you want to know the overhead for a specific combination of the three factors above, e.g. for Intel Core i5 on Linux using GCC, your only way to get this information is benchmarking the difference between two implementations, one using function calls and one where you copy the code directly into the caller; this way you force inlining for sure, since the inline statement is only a hint and does not always lead to inlining.
However, the real question here is: Does the exact overhead really matter? One thing is for sure: A function call always has an overhead. It may be small, it may be big, but it is for sure existent. And no matter how small it is if a function is called often enough in a performance critical section, the overhead will matter to some degree. Inlining rarely makes your code slower, unless you terribly overdo it; it will make the code bigger though. Today's compilers are pretty good at deciding themselves when to inline and when not, so you hardly ever have to rack your brain about it.
Personally, I ignore inlining during development completely, until I have a more or less usable product that I can profile. Only if profiling tells me that a certain function is called really often, and within a performance-critical section of the application, will I consider "force-inlining" that function.
So far my answer is very generic: it applies to C as much as it applies to C++ and Objective-C. As a closing word let me say something about C++ in particular: methods that are virtual are double-indirect function calls, which means they have a higher function call overhead than normal function calls and also they cannot be inlined. Non-virtual methods might be inlined by the compiler or not, but even if they are not inlined, they are still significantly faster than virtual ones, so you should not make methods virtual unless you really plan to override them or have them overridden. | There are a few issues here.
* If you have a smart enough compiler, it will do some automatic inlining for you even if you did not specify inline. On the other hand, there are many things that cannot be inlined.
* If the function is virtual, then of course you are going to pay the price that it cannot be inlined, because the target is determined at runtime. Conversely, in Java, you might be paying this price unless you indicate that the method is final (see the sketch after this list).
* Depending on how your code is organized in memory, you may be paying a cost in cache misses and even page misses as the code is located elsewhere. That can end up having a huge impact in some applications. |
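A small sketch of the virtual-call point above (my own illustration, not part of the original answer):

```cpp
struct Shape {
    virtual double area() const = 0;   // dispatched at run time through the vtable
    virtual ~Shape() = default;
};

struct Square final : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

double total_area(const Shape& any, const Square& sq) {
    return any.area()   // virtual call: target unknown at compile time, rarely inlined
         + sq.area();   // static type is a 'final' class: the compiler may devirtualize and inline
}
```

Whether either call is actually inlined depends on the compiler and optimization level; the point is only that the target of the virtual call is not known statically.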
144,993 | A lot of literature talks about using inline functions to "avoid the overhead of a function call". However I haven't seen quantifiable data. What is the actual overhead of a function call i.e. what sort of performance increase do we achieve by inlining functions? | 2008/09/28 | [
"https://Stackoverflow.com/questions/144993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23120/"
] | Your question is one of those questions that has no answer one could call the "absolute truth". The overhead of a normal function call depends on three factors:
1. The CPU. The overhead of x86, PPC, and ARM CPUs varies a lot and even if you just stay with one architecture, the overhead also varies quite a bit between an Intel Pentium 4, Intel Core 2 Duo and an Intel Core i7. The overhead might even vary noticeably between an Intel and an AMD CPU, even if both run at the same clock speed, since factors like cache sizes, caching algorithms, memory access patterns and the actual hardware implementation of the call opcode itself can have a huge influence on the overhead.
2. The ABI (Application Binary Interface). Even with the same CPU, there often exist different ABIs that specify how function calls pass parameters (via registers, via stack, or via a combination of both) and where and how stack frame initialization and clean-up takes place. All this has an influence on the overhead. Different operating systems may use different ABIs for the same CPU; e.g. Linux, Windows and Solaris may all three use a different ABI for the same CPU.
3. The Compiler. Strictly following the ABI is only important if functions are called between independent code units, e.g. if an application calls a function of a system library or a user library calls a function of another user library. As long as functions are "private", not visible outside a certain library or binary, the compiler may "cheat". It may not strictly follow the ABI but instead use shortcuts that lead to faster function calls. E.g. it may pass parameters in register instead of using the stack or it may skip stack frame setup and clean-up completely if not really necessary.
If you want to know the overhead for a specific combination of the three factors above, e.g. for Intel Core i5 on Linux using GCC, your only way to get this information is benchmarking the difference between two implementations, one using function calls and one where you copy the code directly into the caller; this way you force inlining for sure, since the inline statement is only a hint and does not always lead to inlining.
However, the real question here is: Does the exact overhead really matter? One thing is for sure: A function call always has an overhead. It may be small, it may be big, but it is for sure existent. And no matter how small it is if a function is called often enough in a performance critical section, the overhead will matter to some degree. Inlining rarely makes your code slower, unless you terribly overdo it; it will make the code bigger though. Today's compilers are pretty good at deciding themselves when to inline and when not, so you hardly ever have to rack your brain about it.
Personally, I ignore inlining during development completely, until I have a more or less usable product that I can profile. Only if profiling tells me that a certain function is called really often, and within a performance-critical section of the application, will I consider "force-inlining" that function.
So far my answer is very generic: it applies to C as much as it applies to C++ and Objective-C. As a closing word let me say something about C++ in particular: methods that are virtual are double-indirect function calls, which means they have a higher function call overhead than normal function calls and also they cannot be inlined. Non-virtual methods might be inlined by the compiler or not, but even if they are not inlined, they are still significantly faster than virtual ones, so you should not make methods virtual unless you really plan to override them or have them overridden. | I don't have any numbers, either, but I'm glad you're asking. Too often I see people try to optimize their code starting with vague ideas of overhead, but not really knowing.
144,993 | A lot of literature talks about using inline functions to "avoid the overhead of a function call". However I haven't seen quantifiable data. What is the actual overhead of a function call i.e. what sort of performance increase do we achieve by inlining functions? | 2008/09/28 | [
"https://Stackoverflow.com/questions/144993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23120/"
] | There's the technical and the practical answer. The practical answer is that it will never matter, and in the very rare case it does, the only way you'll know is through actual profiled tests.
The technical answer, which your literature refers to, is generally not relevant due to compiler optimizations. But if you're still interested, it is well described by [Josh](https://stackoverflow.com/questions/144993/how-much-overhead-is-there-in-calling-a-function-in-c#144997).
As far as a "percentage" goes, you'd have to know how expensive the function itself was. Outside of the cost of the called function there is no percentage, because you are comparing to a zero-cost operation. For inlined code there is no cost; the processor just moves to the next instruction. The downside to inlining is a larger code size, which manifests its costs in a different way than the stack construction/tear-down costs. | There is a great concept called 'register shadowing', which allows passing (up to 6?) values through registers (on the CPU) instead of the stack (memory). Also, depending on the function and the variables used within it, the compiler may just decide that frame-management code is not required!
Also, even a C++ compiler may do a 'tail recursion optimization', i.e. if A() calls B(), and after calling B(), A just returns, the compiler will reuse the stack frame!
Of course, this all can be done only if the program sticks to the semantics of the standard (see pointer aliasing and its effect on optimizations).
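To make the tail-call remark concrete, here is a minimal sketch (my own example; whether the optimization actually happens depends on the compiler and optimization level):

```cpp
// Tail-recursive sum: the recursive call is the very last action, so an
// optimizing compiler may reuse the current stack frame (i.e. compile the
// recursion into a loop) instead of growing the stack by one frame per call.
long sum_to(long n, long acc = 0) {
    if (n <= 0) return acc;
    return sum_to(n - 1, acc + n);   // tail call: nothing left to do afterwards
}
```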
144,993 | A lot of literature talks about using inline functions to "avoid the overhead of a function call". However I haven't seen quantifiable data. What is the actual overhead of a function call i.e. what sort of performance increase do we achieve by inlining functions? | 2008/09/28 | [
"https://Stackoverflow.com/questions/144993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23120/"
] | On most architectures, the cost consists of saving all (or some, or none) of the registers to the stack, pushing the function arguments to the stack (or putting them in registers), incrementing the stack pointer and jumping to the beginning of the new code. Then when the function is done, you have to restore the registers from the stack. [This webpage](http://www.angelcode.com/dev/callconv/callconv.html) has a description of what's involved in the various calling conventions.
Most C++ compilers are smart enough now to inline functions for you. The inline keyword is just a hint to the compiler. Some will even do inlining across translation units where they decide it's helpful. | There are a few issues here.
* If you have a smart enough compiler, it will do some automatic inlining for you even if you did not specify inline. On the other hand, there are many things that cannot be inlined.
* If the function is virtual, then of course you are going to pay the price that it cannot be inlined because the target is determined at runtime. Conversely, in Java, you might be paying this price unless you indicate that the method is final.
* Depending on how your code is organized in memory, you may be paying a cost in cache misses and even page misses as the code is located elsewhere. That can end up having a huge impact in some applications. |
40,927 | silly one,
do you have any problems with rsync'ing large [ >4GB ] files under modern linux? [ 32bit, 64bit, large file support turned on ]? i've done some tests on my own between 2 64bit boxes and didn't have any problems transferring 6-10GB files. to make test thorough i altered files, run rsync again, checked md5... - all seems ok.
but after i saw [this](http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=532227) bug report i got a bit worried. i did some searching but have not found any confirmation of the problem.
thanks for your thoughts!
**edit:** file system: ext3, reiserfs | 2009/07/15 | [
"https://serverfault.com/questions/40927",
"https://serverfault.com",
"https://serverfault.com/users/2413/"
] | The error report that you've linked to does not seem to be a 4GB+ filesize-related error. 429796854 bytes is just shy of 410MB, and it seems to be a transport error rather than an rsync one.
I would suspect that the transport connection (presumably SSH) has dropped, perhaps due to an inactivity timeout because the CPU spent so long between sends while it had to do something like checksum a very large file, and this is the reason that rsync reports a broken pipe.
I'm sure I've used rsync successfully on files over 4GB in the past with 32-bit clients and servers, and at least once where more than 4GB was actually transferred rather than just considered for transfer. | Nope, I throw around 5-10GB VM images using rsync all the time, never seen a problem.
40,927 | silly one,
do you have any problems with rsync'ing large [ >4GB ] files under modern linux? [ 32bit, 64bit, large file support turned on ]? i've done some tests on my own between 2 64bit boxes and didn't have any problems transferring 6-10GB files. to make test thorough i altered files, run rsync again, checked md5... - all seems ok.
but after i saw [this](http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=532227) bug report i got a bit worried. i did some searching but have not found any confirmation of the problem.
thanks for your thoughts!
**edit:** file system: ext3, reiserfs | 2009/07/15 | [
"https://serverfault.com/questions/40927",
"https://serverfault.com",
"https://serverfault.com/users/2413/"
] | The error report that you've linked to does not seem to be a 4GB+ filesize-related error. 429796854 bytes is just shy of 410MB, and it seems to be a transport error rather than an rsync one.
I would suspect that the transport connection (presumably SSH) has dropped, perhaps due to an inactivity timeout because the CPU spent so long between sends while it had to do something like checksum a very large file, and this is the reason that rsync reports a broken pipe.
I'm sure I've used rsync successfully on files over 4GB in the past with 32-bit clients and servers, and at least once where more than 4GB was actually transferred rather than just considered for transfer. | The largest file I have recently rsynced was 180GB, which was in a set of directories containing 15TB (I have written a set of scripts which can do the sync in parallel; I can move the data at about 3TB an hour...).
40,927 | silly one,
do you have any problems with rsync'ing large [ >4GB ] files under modern linux? [ 32bit, 64bit, large file support turned on ]? i've done some tests on my own between 2 64bit boxes and didn't have any problems transferring 6-10GB files. to make test thorough i altered files, run rsync again, checked md5... - all seems ok.
but after i saw [this](http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=532227) bug report i got a bit worried. i did some searching but have not found any confirmation of the problem.
thanks for your thoughts!
**edit:** file system: ext3, reiserfs | 2009/07/15 | [
"https://serverfault.com/questions/40927",
"https://serverfault.com",
"https://serverfault.com/users/2413/"
] | The error report that you've linked to does not seem to be a 4GB+ filesize-related error. 429796854 bytes is just shy of 410MB, and it seems to be a transport error rather than an rsync one.
I would suspect that the transport connection (presumably SSH) has dropped, perhaps due to an inactivity timeout because the CPU spent so long between sends while it had to do something like checksum a very large file, and this is the reason that rsync reports a broken pipe.
I'm sure I've used rsync successfully on files over 4GB in the past with 32-bit clients and servers, and at least once where more than 4GB was actually transferred rather than just considered for transfer. | No, we synchronize two 30TB data sets (made of files ranging from 4 to 20GB) daily for months with rsync, no problem.
40,927 | silly one,
do you have any problems with rsync'ing large [ >4GB ] files under modern linux? [ 32bit, 64bit, large file support turned on ]? i've done some tests on my own between 2 64bit boxes and didn't have any problems transferring 6-10GB files. to make test thorough i altered files, run rsync again, checked md5... - all seems ok.
but after i saw [this](http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=532227) bug report i got a bit worried. i did some searching but have not found any confirmation of the problem.
thanks for your thoughts!
**edit:** file system: ext3, reiserfs | 2009/07/15 | [
"https://serverfault.com/questions/40927",
"https://serverfault.com",
"https://serverfault.com/users/2413/"
] | The error report that you've linked to does not seem to be a 4GB+ filesize-related error. 429796854 bytes is just shy of 410MB, and it seems to be a transport error rather than an rsync one.
I would suspect that the transport connection (presumably SSH) has dropped, perhaps due to an inactivity timeout because the CPU spent so long between sends while it had to do something like checksum a very large file, and this is the reason that rsync reports a broken pipe.
I'm sure I've used rsync successfully on files over 4GB in the past with 32-bit clients and servers, and at least once where more than 4GB was actually transferred rather than just considered for transfer. | Depends on the filesystem that you're using. I've run into trouble with FAT32 filesystems. I had a 200GB portable hard drive (formatted as FAT32) and was trying to copy a DVD .iso onto it. It didn't work because you can't have files greater than 4.something GB in FAT32.
112,078 | For a [20W amp](http://www.adafruit.com/datasheets/MAX9744.pdf), what are the determining characteristics, apart from speaker impedance and power rating, that affect the selection?
Example Specs -

(source - <http://www.madisoundspeakerstore.com/approx-2-fullrange/aurasound-nsw2-326-8a-120-2-full-range-with-solder-pads/>)
What are these
- Power capacity(RMS)
- Power capacity(peak)
- Maximum Excursion
- Resonant Frequency
- Total, Electrical, Mechanical Q
- Xmax
- Compliance
Any other notes.
Also, can I use this speaker?
Thanks. | 2014/05/27 | [
"https://electronics.stackexchange.com/questions/112078",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/40885/"
] | Short answer: yes you could use this speaker, but maybe you shouldn't.
**Power Capacity**
Simply put, RMS power capacity = safe power capacity.
Peak power ratings are the marketing departments' favorite, because peak power of even the smallest speakers can be up to 1000 Watts!!! (For one microsecond. Right before being engulfed in flames.) If you do not know how peak power rating is measured, it is a useless number.
*"But wait, this speaker is 15W RMS, and my amp is 20W..."*
Trust me, your amp will *never* output 20 Watts under usable circumstances. If you were to turn it all the way up, and somehow protection didn't kick in, your music would sound as if you're farting through a megaphone. It sounds horrible, even if you're into dubstep.
All in all, based purely on power rating, this speaker and amp will play nicely together. Don't turn your amp up all the way, and it will use those 'spare watts' to give you a nice, dynamic output.
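As a rough sanity check of those numbers (this assumes an 8 Ω nominal load, which this driver appears to be - treat that as an assumption rather than a datasheet fact), the clean voltage swing needed to actually deliver the full rated power follows from P = V²rms / Z:

$$ V_{rms} = \sqrt{P \cdot Z} \approx \sqrt{20\ \text{W} \times 8\ \Omega} \approx 12.6\ \text{V} \quad (\approx 18\ \text{V peak}) $$

With real music rather than test tones, the amp spends almost all of its time well below that swing, which is exactly why those 'spare watts' end up as headroom rather than heat.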
**Maximum Excursion**
This is how far the speaker cone (the moving part) can travel before irreversible damage occurs. It's what makes people upload movies of their subwoofer to Youtube. For your design, you should not worry about this.
**Resonant Frequency**
If you keep pushing a decent-sized tree at just the right rhythm, you will notice that your relatively low force is able to move the tree remarkably easily. This is because you keep adding force to the tree at its resonance frequency.
The same goes for speakers. Now, it's no problem at all if the resonant frequency appears in the output signal every once in a while, but it's not the best idea to constantly use a speaker near or at its resonant frequency. Simplified, the speaker could shake 'out of control', sounding bad and even damaging itself.
**Frequency Range**
As Naz already pointed out, the frequency range of this speaker is limited. Normally, it is described as the range between the two points where the sensitivity of the speaker (how well it can reproduce the frequencies) drops by a relative 3dB. Here - again, marketing - it is measured as the range between the 10dB drops in sensitivity.
So, *could* you use this speaker? Yes. *Should* you use this speaker? That is up to you. My advice is to look around for other (slightly bigger) loudspeakers, as this will probably improve the frequency range, especially towards the lower end. The magic words here are 'fullrange driver'. Do not expect earth-rattling bass from your amplifier; however, when paired with the right speakers you will be able to produce a very nice sound.
Good luck! | Wow! I've never seen such a detailed description of a speaker.
Power capacity: I would say that's the nominal/normal, long-term power of the speaker - that is, how much power you can continuously provide to the speaker.
Peak power capacity would be the maximum short-term power/spike that the speaker can handle before it burns.
Maximum excursion: maximum amplitude the speaker's diaphragm can travel (in/out).
Resonant frequency: whatever... check out this [link](http://www.eminence.com/support/understanding-loudspeaker-data/).
You probably can use this speaker. It depends on what you are going to connect it to: the output impedance and power of the amplifier - that's the minimum you need to consider. See the frequency range, 250 Hz-15 kHz: that's the mid range. It will miss deep low frequencies and high frequencies. So this would better go in combination with other speakers to cover a broader part of the hearing band. Good systems cover roughly 20 Hz-25 kHz, but you cannot get a single speaker that will cover the entire range. Therefore, you usually see two- or three-way speaker systems. |
6,340,859 | Is it possible to normally run Visual C++ Express edition on Windows 7 64-bit?
Because when I try to install it, the setup window says "visual c++ 2010 express includes the 32-bit visual c++ compiler toolset".
I am a student and intend to use the IDE for learning/practicing C language. I don't plan to create windows-ready applications anytime soon with the windows SDK.
So, will it allow me to write and compile normally without the 64-bit compiler toolset (on my 64-bit system)? I mean, will it make any difference if I don't plan on making applications using the SDK? If yes, please explain how.
And finally, should I go on and install it or opt for other C/C++ IDE? I previously used Dev C++ but it isn't as great on Windows 7.
Thanks. | 2011/06/14 | [
"https://Stackoverflow.com/questions/6340859",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/797220/"
] | Yes, you'll be able to run your programs like any other 32-bit application - via WoW64 (Windows-on-Windows 64) technology. | Yes, Windows 7 64-bit supports pretty much all 32-bit applications just fine (except if they depend on some 32-bit-only driver components, but most applications don't do that). |
86,279 | Looking for a book I read in the late 60s: humans were almost eradicated in the galaxy but discovered technology that made them almost godlike. They then lived in miles-long spaceships and watched over races in the galaxy as "guardians". | 2015/04/15 | [
"https://scifi.stackexchange.com/questions/86279",
"https://scifi.stackexchange.com",
"https://scifi.stackexchange.com/users/44344/"
] | I think this is [*Crown of Infinity*](https://www.isfdb.org/cgi-bin/title.cgi?22315) (1968) by John M. Faucette. It was apparently only published as an Ace Double with *The Prism*:
[](https://i.stack.imgur.com/mpupV.jpg)
Unfortunately I no longer have my copy, but I remember humanity almost being wiped out in sneak attacks by various alien species uniting, led by lizard-like creatures.
Then there was a generations long hunt and destroy programme but humans proved really good at sneaking around the galaxy and surviving, any time a ship was destroyed signals were sent out so other humans could avoid that particular fate.
Slowly the survivors increased their technology until they eventually reigned supreme and took their vengeance - after that they oversaw the galaxy and became known as the Star Kings.
Note: as I said, this is from my memory only - please, anyone, feel free to edit if you have a source.
There is a [review on Goodreads](https://www.goodreads.com/review/show/53040173), and [another on Amazon](https://www.amazon.com/gp/customer-reviews/R163I6F7EW6Y0T/ref=cm_cr_dp_d_rvw_ttl) that have broadly similar details. | Your description is very similar to the plot of [The Lensman](http://en.wikipedia.org/wiki/Lensman_series) series by E. E. "Doc" Smith. Intelligent species on many planets are part of a breeding program by the Arisians, one of two competing galactic races, which results in superior humans and other species. With the use of a "Lens" provided by the Arisians, which provides telepathic powers, the superhumans form the Galactic Patrol to enforce laws and keep the peace in the galaxy. |
22,999 | Last millennium, with the advent of common stereo equipment, recording engineers often used stereo effects to re-create a 'stage'-like presence in the recordings. I.e. the instruments would be panned to roughly their positions on the stage whilst still maintaining a rich listening experience.
These days, stereo effects are more used for enhancing the music rather than trying to "re-create" a live experience.
Are there any scenarios where an engineer/producer should use one method over the other - particularly from a producer's point of view? | 2010/12/08 | [
"https://sound.stackexchange.com/questions/22999",
"https://sound.stackexchange.com",
"https://sound.stackexchange.com/users/13188/"
] | Try the Effect->Compressor.
>
> The Compressor effect reduces the dynamic range of audio. One of the
> main purposes of reducing dynamic range is to permit the audio to be
> amplified further (without clipping) than would be otherwise possible,
>
>
>
as stated here: <http://manual.audacityteam.org/man/Compressor> | The other thing is to get a mic with a dB meter, so you can choose to record louder - for example the SE Electronics condenser mic |
253,179 | I've checked a few other threads around the topic and searched around; I am wondering if someone can give me a clear direction as to ***why*** I should consider NoSQL and ***which*** one (since there are quite a few of them, each with different purposes)
* [Why NoSQL over SQL?](https://softwareengineering.stackexchange.com/questions/109192/why-nosql-over-sql)
* [Is MongoDB the right choice in my case?](https://softwareengineering.stackexchange.com/questions/139108/is-mongodb-the-right-choice-in-my-case)
* <https://softwareengineering.stackexchange.com/questions/5354/are-nosql-databases-going-to-take-the-place-of-relational-databases-is-sql-goin>
Like many others, I started with relational databases and have been working on them ever since; thus when presented with a problem, my first instinct is always to think *"I can create these tables, with these columns, with these foreign keys"*, etc.
My overall goal is **how to get into the "NoSQL" mindset**, i.e. getting away from the inclination of always thinking about tables/columns/FKs (I understand that there are cases where an RDBMS is still the better way to go)
I am thinking of 2 scenarios for example just to get more concrete direction
**Scenario 1**
Imagine a database to model furniture-building instructions (think of IKEA instructions), where you would have the object "furniture", which would have a list of "materials" and a list of "instructions"
* Furniture - would simply have a name plus a list of Materials and Instructions
* Materials - would be a name + quantity; maybe we could even have a "Material Category" table as well
* Instructions - would simply be an ordered list of texts
My first instinct would go the RDBMS way:
* Create tables called "Furniture", "Material" and "Instruction" with the appropriate columns
* Create the appropriate JOIN tables and FKs as necessary
The use of this system can include *searching* based on materials, or maybe a combination of materials. And maybe think of extending the data stored to include information on how many people are required to build it? Difficulty level? How much time it would take?
Would something like this be a good candidate for a NoSQL database?
**Scenario 2**
Imagine a database to model users with basic information (e.g. name, email, phone number, etc.), but where you also want the flexibility of being able to add any custom fields as you wish.
Think of different systems consuming this user database; each system will want its own custom attributes attached to the user.
My inclination would go the RDBMS way:
* Create a table for "USER" with columns: ID, name, email, phone
* Create a table for "USER\_ATTRIBUTE" with columns: ID, USER\_ID, attr\_name, attr\_type, attr\_value
The USER\_ATTRIBUTE will allow that customization and flexibility without having to shut down the system, alter the database and restart it.
Would something like this be a good candidate for a NoSQL database? | 2014/08/13 | [
"https://softwareengineering.stackexchange.com/questions/253179",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/53849/"
] | NoSQL isn't a very well defined term and all the solutions that run under this name have very different features, so a lot may be possible or not depending on what exactly you are planning to do with it.
Basically you could use some of the more general solutions like MongoDB or Cassandra to simply replace your current relational database. In some cases this makes more sense, in others less, but it will work once your team gets used to it. Certain things will then be easier, others will be more difficult, and you must weigh those options against each other and decide (which often enough will mean that there are no advantages big enough, and the simple fact that everybody in the team feels most comfortable with relational databases and SQL will make the decision easy).
Other NoSQL solutions that are more specialised, like graph databases or simple key-value stores, are not really good candidates to replace your relational DB. So let's from here on talk mainly about those databases that are at least to some degree similar to relational databases.
**Scenario 1**
Where I work we have exactly this scenario, though rather more complex, with a lot of different attributes per article. Some of those attributes sit in hierarchies like Apple -> iPad -> Air.
The data is still stored in a relational database. But: searching this in real time became a pain. With SQL it was slow, and the code would have been terribly complex: selects over many tables, with the additional option to exclude certain attributes, like "not blue".
In this case Apache Solr or Elasticsearch are a solution, though of course the data is duplicated from the relational database.
But our experience with this kind of document store showed that it can handle certain problems very well, and we will consider replacing part of the existing relational structure with some other kind of storage. So not the whole database, where we also store all the transactional data like orders etc., but for example taking out all the attribute information, which can be handled much better in the aggregate-like data structures of NoSQL.
**Scenario 2**
Difficult to say, since what you describe is most likely only a very small part of your user handling. Having schemaless storage is an advantage with many NoSQL databases. But some relational databases allow you to store such data too (as long as, in most cases, you don't need to query it via SQL).
Cassandra for example would allow you to define column families in such a case, where your first set of attributes would be one such family and the variable attributes another one.
As somebody said: NoSQL is less about storage and more about querying. So the question is what will be the typical use case for those queries.
A typical problem would be the transactional data here. If you want to store orders, one way would be a schema where users and their orders form an aggregate (kind of user document that contains the orders as subdocuments). This would make getting a user together with his orders very simple and fast, but would make it very difficult to retrieve all orders from last month for sales statistics.
Also, a strength of NoSQL solutions is that it can be easier to run them on clusters of multiple machines if you have to work with very large datasets.
**Conclusion:** Both your scenarios could be modelled with certain NoSQL solutions, but I don't think that (assuming they have to run in a larger environment) they really justify the large extra effort in learning, training and implementation, plus maybe some other additional disadvantages, because both are not specific enough to really leverage the strengths of NoSQL - at least not in the simple form you describe. Things may become very different once some aspects you describe become very prominent in your usage scenario, like in scenario one if the attribute data becomes very complex, or in scenario two if the variable fields become the largest part of the data you store with every user. | I've been using document dbs (RavenDB to be specific) as my data store of choice for 3+ years now and I really don't want to look back.
At least for that sort of NoSQL database, the biggest question is "What goes in this document? What goes in another document? What goes in a related document?" Unfortunately there isn't a lot of good guidance on this. Then again, RDBs are a 30+ year old technology, so there is a pretty massive body of work there, but there still are not perfect answers to all problems -- for example, I would reject any entity-attribute-value solution like your scenario #2 without real, real good reasons to go EAV -- I would rather model data extensions as sub-type tables or using some sort of extensions field comprising serialized data.
Anyhow, there are no perfect principles but there are some good guiding principles one can follow. The two that have helped me the most are:
1. Model your documents around transaction boundaries. Joins are much more expensive to work out and use with objects, so being able to select a Foo by ID and get all of Foo makes a ton of sense and makes things easier to work with on many levels. Now, this is not to say everything need be some massive document -- transaction boundaries can be more confined than "everything to do with a piece of furniture". In the case of your scenario #1 I would probably look at the transaction boundaries as the Furniture including Materials, and then a separate Instructions document. The logic being you probably manage furniture and materials together but the instructions likely come from somewhere else (there's a small sketch of the resulting document shape after this list). Keep in mind that aggregation on the front end is pretty cheap. Categories are an interesting example which leads me to . . .
2. Data duplication is a-ok if you manage it right. A major underlying principle of RDBMS is "don't duplicate data", largely because it grew up in a world where disk storage was orders of magnitude more dear than it is in 2014. For document-style databases it can make sense to have copies of things within your transaction boundaries. For example, let's take the furniture categories from scenario #1 -- I would probably have a FurnitureCategoryDocument that would have all the information about the category. I would also have some key information -- ID and name at least -- embedded in the documents for ease of use. This is just fine so long as you can cascade updates, which requires more code in your app than ON UPDATE CASCADE.
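To make that concrete, here is roughly the document shape I'm describing for the furniture case. This is only a sketch: the field names are invented, and a real document database's own API would replace the plain maps used here.

```java
import java.util.List;
import java.util.Map;

// Hypothetical furniture aggregate: materials live inside the document (same
// transaction boundary), instructions are referenced as a separate document,
// and a small copy of the category (id + name) is duplicated for ease of use.
public class FurnitureDocumentSketch {
    public static void main(String[] args) {
        Map<String, Object> furniture = Map.of(
            "id", "furniture-42",
            "name", "Example bookcase",
            "category", Map.of("id", "cat-7", "name", "Shelving"),   // duplicated summary
            "materials", List.of(
                Map.of("name", "side panel", "quantity", 2),
                Map.of("name", "wooden dowel", "quantity", 24)
            ),
            "instructionsDocumentId", "instructions-42"               // separate document
        );
        System.out.println(furniture);
    }
}
```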
Hope this helps demystify things a bit. |
29,615 | While taking thermodynamics, our chemistry teacher told us that entropy is increasing day by day (as per the second law of thermodynamics), and that when it reaches its maximum the end of the world will occur. I haven't seen such an argument in any of the science books; is there any possibility of this being true? | 2012/06/06 | [
"https://physics.stackexchange.com/questions/29615",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7922/"
] | What you're describing is the theory of the [Heat death of the universe](http://en.wikipedia.org/wiki/Heat_death_of_the_universe), which has been speculated about since the 1850s. However, as explained [here](http://www.kk.org/thetechnium/archives/2011/03/there_aint_no_h_1.php), objects at astronomical scales are often self-gravitating, and that gives them unintuitive thermodynamic properties like a [negative heat capacity](http://en.wikipedia.org/wiki/Heat_capacity#Negative_heat_capacity_.28stars.29). This usually gives more structured systems as entropy increases, and negates the idea of heat death.
Furthermore, given the fact that the universe is currently thought to be forever expanding and that the majority of the entropy is/will be in black holes, the estimated time-scale for such thermal equilibrium to occur is huge (of the order of 10^100 years), which gives us vastly enough time to change our cosmological theories about the end of the universe... | Heat death (in which I don't believe) is the condition when all available energy ceases to exist due to an enormous increase in the universe's entropy. Entropy is a thermodynamic quantity which is the arrow of time (meaning it can tell us whether time is going forward or backward). No process can decrease the overall entropy of the universe. By the time the entropy reaches its maximum value, the universe will have failed to exist, as there is no available energy left to do work. |
658 | I'm trying to learn details on how LiveID works, when compared to other federation technologies.
To be honest, I'm a bit overwhelmed by all the options at <https://msm.live.com/> and want to understand what I'm doing before I federate my application to LiveID.
In addition I'd like to understand the differences between the free implementations and the RPS system. My understanding is that there are at least 4 different ways to hook up to LiveID. Considering that large a surface area of authentication, and how little public documentation there is, makes me uneasy. | 2010/11/21 | [
"https://security.stackexchange.com/questions/658",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
] | LiveID is an extension of WS-Trust/Federation/Security in a passive mode. Passive meaning that the client -- the browser -- needs to be told what to do by the servers, which is done through 302 redirects and POST-backs.
To rephrase this, LiveID follows a Claims-Based model, building on top of the protocols mentioned earlier. The documentation is very sparse because, well, Microsoft can be very lame at times.
The differences between the versions have to do with how the authentication happens, more specifically which layer makes the redirect to login.live.com - e.g. the server-side code forcing the redirect, or the client code (JavaScript) doing the redirect.
What exactly are you trying to accomplish? Are you just wanting a person to log into your site using their LiveID, or are you looking for a more in-depth federation model? | I'm not sure about this, but I believe that LiveID is not its own set of technologies, but simply Microsoft's offering in public federation.
They wrap it up nicely and provide easy interfaces, but underneath it's still the same technologies and protocols: ADFS, WS-Trust, SAML, etc. They might also give you some flexibility in mixing and matching that would be difficult to achieve on your own.
I'm also pretty sure that LiveID is OpenID-compliant, being both a provider and a consumer.
8,593 | Is a contract valid if one party doesn't know the other has signed? For example one party signs a contract, faxes or emails it to the other party, but the other party doesn't reply. Can the other party play it to their advantage saying "we had agreed to this!" when it works for them and "we never signed this!" when it doesn't? | 2016/04/16 | [
"https://law.stackexchange.com/questions/8593",
"https://law.stackexchange.com",
"https://law.stackexchange.com/users/3797/"
] | 1. Fax or email
At this time in common law, faxes and email are considered to have been accepted when actually communicated to the other party. This means that if I sign a contract and send it to you, my acceptance of the offer is not actually effected until you read it.
2. Post
However, the postal acceptance rule can play havoc with this. Under this rule, and specifically for post, as long as there is some indication that we contemplated acceptance by post, my acceptance of the offer is effected *the moment I put it in the mailbox*, regardless of when or whether it actually reaches you.
Your scenario
-------------
* A has signed a contract and faxes it to B
* B doesn't reply
* A or B tries to claim that they never agreed to this or signed this
### B has read the contract and was the offeree
If B claims not to have read it, A must prove that they have, or that acceptance was otherwise communicated to them. This is unless the postal acceptance rule applies, in which case it does not matter whether or not it was read. Proving that it was posted is a different matter.
### B hasn't read the contract or was not the offeree
* A was the offeree (A sent B a signed contract)
In this case, A must prove that they actually communicated a revocation of the offer. If B has not accepted the offer, then A can communicate this in any reasonable way.
If B has accepted the offer, then A must prove that the revocation was effected prior to their acceptance. Otherwise, A is bound by the contract.
* A and B drafted this contract together (offer and acceptance is unclear but there is clearly agreement at some point)
In this case, it's a bit murkier, but it is likely that A would not be bound by the contract. | The signed contract is simply good evidence of the agreement you made, nothing more (that may not be true in all jurisdictions). If, as in your scenario, John sends Jane a contract, Jane signs and returns it to John, and John doesn't respond:
* Jane should continue to communicate with John through all available channels that she has accepted and she considers the agreement in effect
* In the event that things go badly and John starts playing games and it's worth it to Jane to fight it rather than simply walk away and learn not to deal with John in the future - Jane sues, Jane shows the communications leading up to the contract, the copy with her signature, the communication in which she sent the contract to John, and the follow-up communications to the judge. Civil cases are won by a preponderance of the evidence. What evidence does John have? Shrugs his shoulders and lies to the judge? Judges are good at recognizing that. Keep records and you'll be alright. |
8,593 | Is a contract valid if one party doesn't know the other has signed? For example one party signs a contract, faxes or emails it to the other party, but the other party doesn't reply. Can the other party play it to their advantage saying "we had agreed to this!" when it works for them and "we never signed this!" when it doesn't? | 2016/04/16 | [
"https://law.stackexchange.com/questions/8593",
"https://law.stackexchange.com",
"https://law.stackexchange.com/users/3797/"
] | 1. Fax or email
At this time in common law, faxes and email are considered to have been accepted when actually communicated to the other party. This means that if I sign a contract and send it to you, my acceptance of the offer is not actually effected until you read it.
2. Post
However, the postal acceptance rule can play havoc with this. Under this rule, and specifically for post, as long as there is some indication that we contemplated acceptance by post, my acceptance of the offer is effected *the moment I put it in the mailbox*, regardless of when or whether it actually reaches you.
Your scenario
-------------
* A has signed a contract and faxes it to B
* B doesn't reply
* A or B tries to claim that they never agreed to this or signed this
### B has read the contract and was the offeree
If B claims not to have read it, A must prove that they have, or that acceptance was otherwise communicated to them. This is unless the postal acceptance rule applies, in which case it does not matter whether or not it was read. Proving that it was posted is a different matter.
### B hasn't read the contract or was not the offeree
* A was the offeree (A sent B a signed contract)
In this case, A must prove that they actually communicated a revocation of the offer. If B has not accepted the offer, then A can communicate this in any reasonable way.
If B has accepted the offer, then A must prove that the revocation was effected prior to their acceptance. Otherwise, A is bound by the contract.
* A and B drafted this contract together (offer and acceptance is unclear but there is clearly agreement at some point)
In this case, it's a bit murkier but it is likely that A would not be bound by the contract. | Let's keep things simple: assume that Alice has made an offer to Bob that, when Bob accepts it, will become a binding contract.
The contract is binding when Bob communicates his acceptance to Alice; there is no need for him to sign anything (see below).
He can communicate his acceptance verbally by talking to her in person or by telephone or Skype or any other means of making a verbal communication: the contract becomes binding immediately Alice hears the acceptance.
He can also communicate his acceptance in writing and a number of court precedents and statutory rules exist to determine when the communication has been made:
* in person, when Bob hands Alice the communication
* physical delivery, when delivery to Alice's address has been made (there's a whole subcategory of cases that deal with whether "delivery" actually happened, but we won't go there)
* the postal rule, when Bob drops the communication with adequate pre-paid postage in a legal mail deposit box (there is no requirement for certified mail but this helps provide evidence of exactly when Bob did this). There is a valid contract from the time of posting, not when (or if) the communication is delivered. The postal rule is old common law and holds in most common law jurisdictions.
* electronic communications, because these are newer than the post, are usually governed by specific statutes. For example, in NSW, Australia a faxed acceptance is valid when the sending machine receives confirmation from the receiving machine that the transmission was successful. However, for email or Facebook messages etc. the communication is effective from when the message is *read* - something the sender has no control over; therefore fax is the preferred way of doing this.
As you can see, in some circumstances Alice can be bound without knowing she is. In addition, there are circumstances where the acceptance can be validly served on an agent of Alice, say an employee or her solicitor: Alice is bound from the time her agent receives the acceptance.
Now, Alice is free to put conditions on the acceptance, like: it is only accepted when Alice receives the signed contract from Bob, who must pass it to her with his left hand while standing on one foot and whistling Dixie. If such are the conditions, then they must be fulfilled to create a binding contract; sending it by fax or post, or passing it while whistling Yankee Doodle, won't cut it.
27,808 | In Internet Explorer 11 the file download dialog appears at the bottom of the window. I know many people who don't know where to find the "save" and "open" buttons after having clicked on a download link on a website.
It seems to me that the small yellow stripe does not attract enough attention.

Is there any reason to put the dialog at the bottom? To me it looks more like a notification bar than a "proper" dialog box. | 2012/10/15 | [
"https://ux.stackexchange.com/questions/27808",
"https://ux.stackexchange.com",
"https://ux.stackexchange.com/users/11800/"
] | Downloading a file really doesn't *need* to be an obstructive action, so they've ditched the old dialog box. The old version was pretty obtrusive even though immediate action isn't strictly required:

Really you only need to take action *after* the file is downloaded; there's no pressure. Particularly when downloading multiple files, the old IE's dialog got very annoying very quickly. In fact it's still much more annoying than Firefox and Chrome's download system, which implicitly assumes you want to save files you download.
Chrome's download bar follows a similar path of least obstruction:

Here they've made a dropdown to use secondary actions and the file is "saved" by default (Firefox does this as well). I'm surprised IE didn't go this way, but they seem to have wanted to keep "cancel" an easily accessible option (it used to be the default action in fact), almost certainly for security reasons due to the notice IE8 gives you.
Windows' use of modal dialog boxes has drastically decreased since their "peak" which was probably around windows 95/98. [Modal dialogs are quite disruptive](https://ux.stackexchange.com/questions/12637/what-research-is-there-suggesting-modal-dialogs-are-disruptive) so eliminating them when unnecessary is certainly a good design goal. The notification bar might be less noticeable the first time, but this is a browser; you quickly learn how common features act, and it's likely to be one of your most used applications, so ease of use long-term is preferable over help so in-your-face it gets annoying.
It looks like you're downloading a file. Would you like help? | Despite what most people think, IE has not "ditched" or "gotten rid of" the dialog box, or at least not for file download anyway. What they've done is use the notification bar as default, but there is a way to change that and use the dialog box again in the newer IE versions. I know because I have some users that see the dialog box and some that see the notification bar when downloading the same file, and everyone has IE11.
Now, if I could just find that setting... Does anybody know what setting it would be? |
6,261 | I'm trying to replace the ball joints on my Escort but I don't have a pickle fork. Most of the methods I've seen so far without the fork require removing the calipers and rotor. Is there another method? | 2013/06/06 | [
"https://mechanics.stackexchange.com/questions/6261",
"https://mechanics.stackexchange.com",
"https://mechanics.stackexchange.com/users/3248/"
] | I always prefer to use a scissor-type ball joint separator, like [this one](http://www.machinemart.co.uk/shop/product/details/cht222-ball-joint-remover?da=1&TC=SRC-ball%20joint), rather than a fork-type one, if it'll fit. They're less likely to cause damage to other parts (and to the joints themselves, though that doesn't matter if you're replacing them).
What's stopping you buying a fork-type one? They're incredibly cheap (the above site has them for £5, so less than US$10), certainly much less than the value of the time it'll take to remove and replace the brakes! | [This](http://www.youtube.com/watch?v=yjM6rTVre-0&t=4m51s) is how you can remove the ball joint from the LCA.
[This](http://www.youtube.com/watch?v=zd5IcN3yjsg&t=3m28s) is how you remove the ball joint from the knuckle. He took the entire assembly off the vehicle. However you may be able to finagle it out while it is still on the car. |
6,261 | I'm trying to replace the ball joints on my Escort but I don't have a pickle fork. Most of the methods I've seen so far without the fork require removing the calipers and rotor. Is there another method? | 2013/06/06 | [
"https://mechanics.stackexchange.com/questions/6261",
"https://mechanics.stackexchange.com",
"https://mechanics.stackexchange.com/users/3248/"
] | This year of Escort uses a ball joint that attaches to the knuckle via a pinch bolt. It is not the taper-fit shaft that is typically used. The ball joint shaft has a radius notch that the pinch bolt slides into. In theory, once the bolt is removed the ball joint should slip out. The "in theory" part means that a "pickle fork" may be needed to get the shaft to slip out. The ball joint is attached to the control arm by two bolts, not pressed in as shown in the generic video. Typically the brakes must be removed to get the ball joint to clear the dust shield behind the rotor. You may be lucky and slip the ball joint past the dust shield by loosening the ball-joint-to-control-arm mounting bolts. You may also have to remove the sway bar mounts from the control arm to allow it to pivot low enough to get the ball joint to clear. | I always prefer to use a scissor-type ball joint separator, like [this one](http://www.machinemart.co.uk/shop/product/details/cht222-ball-joint-remover?da=1&TC=SRC-ball%20joint), rather than a fork-type one, if it'll fit. They're less likely to cause damage to other parts (and to the joints themselves, though that doesn't matter if you're replacing them).
What's stopping you buying a fork-type one? They're incredibly cheap (the above site has them for £5, so less than US$10), certainly much less than the value of the time it'll take to remove and replace the brakes! |
6,261 | I'm trying to replace the ball joints on my Escort but I don't have a pickle fork. Most of the methods I've seen so far without the fork require removing the calipers and rotor. Is there another method? | 2013/06/06 | [
"https://mechanics.stackexchange.com/questions/6261",
"https://mechanics.stackexchange.com",
"https://mechanics.stackexchange.com/users/3248/"
] | I always prefer to use a scissor-type ball joint separator, like [this one](http://www.machinemart.co.uk/shop/product/details/cht222-ball-joint-remover?da=1&TC=SRC-ball%20joint), rather than a fork-type one, if it'll fit. They're less likely to cause damage to other parts (and to the joints themselves, though that doesn't matter if you're replacing them).
What's stopping you buying a fork-type one? They're incredibly cheap (the above site has them for £5, so less than US$10), certainly much less than the value of the time it'll take to remove and replace the brakes! | Just use a hammer on the backside, without hitting the axle. At the same time, put a (long) iron bar on it to hold it down, and knock with medium force.
If it doesn't come off, don't hurt yourself or the car; disassemble the whole front half-axle. This is quick, and it will be easy to disassemble into separate parts once it is not under pressure.
That is usually not necessary but sometimes better, and not really longer with good tools. It is preferable never to force things if you work alone, so the second solution could be better, depending on the context. |
6,261 | I'm trying to replace the ball joints on my Escort but I don't have a pickle fork. Most of the methods I've seen so far without the fork require removing the calipers and rotor. Is there another method? | 2013/06/06 | [
"https://mechanics.stackexchange.com/questions/6261",
"https://mechanics.stackexchange.com",
"https://mechanics.stackexchange.com/users/3248/"
] | This year of Escort uses a ball joint that attaches to the knuckle via a pinch bolt. It is not the taper-fit shaft that is typically used. The ball joint shaft has a radius notch that the pinch bolt slides into. In theory, once the bolt is removed the ball joint should slip out. The "in theory" part means that a "pickle fork" may be needed to get the shaft to slip out. The ball joint is attached to the control arm by two bolts, not pressed in as shown in the generic video. Typically the brakes must be removed to get the ball joint to clear the dust shield behind the rotor. You may be lucky and slip the ball joint past the dust shield by loosening the ball-joint-to-control-arm mounting bolts. You may also have to remove the sway bar mounts from the control arm to allow it to pivot low enough to get the ball joint to clear. | [This](http://www.youtube.com/watch?v=yjM6rTVre-0&t=4m51s) is how you can remove the ball joint from the LCA.
[This](http://www.youtube.com/watch?v=zd5IcN3yjsg&t=3m28s) is how you remove the ball joint from the knuckle. He took the entire assembly off the vehicle. However you may be able to finagle it out while it is still on the car. |
6,261 | I'm trying to replace the ball joints on my Escort but I don't have a pickle fork. Most of the methods I've seen so far without the fork require removing the calipers and rotor. Is there another method? | 2013/06/06 | [
"https://mechanics.stackexchange.com/questions/6261",
"https://mechanics.stackexchange.com",
"https://mechanics.stackexchange.com/users/3248/"
] | This year of Escort uses a ball joint that attaches to the knuckle via a pinch bolt. It is not the taper-fit shaft that is typically used. The ball joint shaft has a radius notch that the pinch bolt slides into. In theory, once the bolt is removed the ball joint should slip out. The "in theory" part means that a "pickle fork" may be needed to get the shaft to slip out. The ball joint is attached to the control arm by two bolts, not pressed in as shown in the generic video. Typically the brakes must be removed to get the ball joint to clear the dust shield behind the rotor. You may be lucky and slip the ball joint past the dust shield by loosening the ball-joint-to-control-arm mounting bolts. You may also have to remove the sway bar mounts from the control arm to allow it to pivot low enough to get the ball joint to clear. | Just use a hammer on the backside, without hitting the axle. At the same time, put a (long) iron bar on it to hold it down, and knock with medium force.
If it doesn't come off, don't hurt yourself or the car; disassemble the whole front half-axle. This is quick, and it will be easy to disassemble into separate parts once it is not under pressure.
That is usually not necessary but sometimes better, and not really longer with good tools. It is preferable never to force things if you work alone, so the second solution could be better, depending on the context. |
96,523 | I have a pair of Qi-compatible wireless charging pads, and I've noticed that when a device is charging, the pad will emit a quiet but noticeable squeal in a relatively high frequency range. Perhaps it's because the pads sit on my desk near my bed that I notice it, but I'm curious what might be causing the squeal and if there's anything I can try to mitigate it. | 2014/01/15 | [
"https://electronics.stackexchange.com/questions/96523",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/5638/"
] | Wireless charging pads work by inductive coupling. In the pad is a coil, and in the device being charged is another coil. When these coils are close together, they have a high mutual inductance, and we can use this mutual inductance to transfer energy between them, as in a transformer.

[simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fvc3Rw.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/)
The trouble is this: an inductor is also an electromagnet. As the current oscillates in the coil, the magnetic forces also oscillate. These magnetic forces act on the individual turns of the coil itself and ferromagnetic materials nearby, causing audible vibration. [Magnetostriction](http://en.wikipedia.org/wiki/Magnetostriction) can also play a role.
Besides the charging coils themselves, there are probably more inductors in the circuitry that converts the 60 or 50 Hz mains AC to the higher frequency shown here.
I don't know that there's much you can do to mitigate this noise, other than remanufacturing the device. You might try setting the charger on a neoprene or rubber pad, which might at least prevent some of the vibration from coupling into your table. The better solution is usually to move the oscillation frequency above 20 kHz where it can't be heard by humans, or to more solidly support the coils so they can move less. | The following is just a guess because there is not enough information, but I post it as an answer because I had the same effect in a wireless charging system that I built.
Some capacitors, ceramic and film in particular, can emit audible noise when operated at (or near) audible frequencies.
While not a big issue, this noise originates in vibrations, which can lead to mechanical failure on a PCB and have minor electrical effects which can be significant in high-precision electronics. These effects are not desirable and are not restricted to audible frequencies.
In your case, the only issue is the annoying sound. You can try to take the charger apart and find the "loudest" capacitors - these can be replaced with better (and usually more expensive) capacitors which emit less noise. See [this article](http://www.edn.com/design/components-and-packaging/4364020/Reduce-acoustic-noise-from-capacitors) for a related discussion.
The other source of audible noise in electronics is transformers; however, since the sound is in the high frequency range, it is not probable that any transformer has something to do with it. |
39,030,573 | When compiling a project in the IDEA IDE, an error occurs:
Error:osgi: [Test] The default package '.' is not permitted by the Import-Package syntax.
This can be caused by compile errors in Eclipse because Eclipse creates
valid class files regardless of compile errors.
The following package(s) import from the default package null
But when using the Eclipse IDE, it works.
I've googled several times and only found [this](https://stackoverflow.com/questions/33395100/the-default-package-is-not-permitted-by-the-import-package-syntax) post, but it's not my case.
I decompiled the classes produced by OSGi; no class has syntax like `import .`
Any ideas about this problem?
[](https://i.stack.imgur.com/wZuQx.png) | 2016/08/19 | [
"https://Stackoverflow.com/questions/39030573",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6554954/"
] | Can you change the package of your classes? That way Eclipse will create new class files after compilation. Having the '.' (default) package is not good practice anyway.
Also, prefer different workspaces for Eclipse and IntelliJ. Having 3 folders is good practice: one for source, one for the Eclipse workspace, one for the IntelliJ workspace. Each IDE creates its own files. | I got this error due to a Groovy script file that had import statements, but no package name at the start of the file. I added a package name to my Groovy script, **ran a clean to erase my target directory**, and then the problem went away.
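For illustration, the fix simply amounts to making sure the file begins with a package declaration, so its classes no longer land in the default package (which the Import-Package check rejects). The package and class names below are invented; a Groovy file takes the same declaration at the top.

```java
// Hypothetical example: give the file a named package instead of the default package.
package com.example.bundle.scripts;

public class StartupHook {
    public static void main(String[] args) {
        // With the package declaration present, the compiled class is no longer in
        // the default package '.', so the OSGi import check no longer trips over it.
        System.out.println("compiled into com.example.bundle.scripts");
    }
}
```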
See answer by Hemant [in this similar issue](https://stackoverflow.com/questions/33395100/the-default-package-is-not-permitted-by-the-import-package-syntax) |
7,935,890 | I'm currently working on a protocol which uses Diffie-Hellman for a key exchange.
I receive a packet which consists of an AES-128 encrypted part and a 128-bit DH public key.
In the very last step of the protocol, the AES key is sent to another peer in the network.
This AES key should be encrypted with a cipher using a 128-bit secret key.
I plan to use Blowfish (it could also be another cipher; it doesn't really matter for the problem).
Now, to encrypt the AES key with, let's say, Blowfish, I have to build a secret key for the encryption with a class called SecretKeySpec (I'm using the javax.crypto stuff), which takes a byte array in the constructor to build the SecretKey.
The shared key from DH is a 128-bit BigInteger.
Well, now I can interpret my shared key as a byte array (which still gives me 128 bits in 16 bytes [where the numbers are interpreted as frames of 8-bit data]).
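Roughly, the conversion I have in mind looks like the sketch below (not my actual code; the class and variable names are just placeholders). One thing I already noticed is that BigInteger.toByteArray() is not guaranteed to return exactly 16 bytes, so the array has to be normalised first:

```java
import javax.crypto.spec.SecretKeySpec;
import java.math.BigInteger;
import java.security.SecureRandom;

// Sketch only: turn the DH shared secret (a 128-bit BigInteger) into a 16-byte
// array and wrap it in a SecretKeySpec for Blowfish.
public class SharedKeyToSecretKey {

    // BigInteger.toByteArray() may return 15 bytes (leading zeros dropped) or
    // 17 bytes (extra sign byte), so pad or trim to exactly 16 bytes.
    static byte[] to16Bytes(BigInteger sharedKey) {
        byte[] raw = sharedKey.toByteArray();
        byte[] out = new byte[16];
        if (raw.length >= 16) {
            System.arraycopy(raw, raw.length - 16, out, 0, 16);
        } else {
            System.arraycopy(raw, 0, out, 16 - raw.length, raw.length);
        }
        return out;
    }

    public static void main(String[] args) {
        BigInteger sharedKey = new BigInteger(128, new SecureRandom()); // placeholder value
        SecretKeySpec blowfishKey = new SecretKeySpec(to16Bytes(sharedKey), "Blowfish");
        System.out.println(blowfishKey.getAlgorithm() + ", " + blowfishKey.getEncoded().length + " bytes");
    }
}
```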
So my question is, how strong is my key really?
Is there any impact because I only use numbers as input for the byte array (does this limit the keyspace in any way)?
I think this is not the case, but I'm not 100% sure.
Maybe someone can do the math and prove me right or wrong.
If I'm wrong, what key size for the shared key gives me peace of mind in finally getting to the 128-bit SecretKey for the encryption? | 2011/10/28 | [
"https://Stackoverflow.com/questions/7935890",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/43679/"
] | The [Crypto++](http://www.cryptopp.com/wiki/Diffie-Hellman) website suggests using a minimum `p` of 3072 bits (or 256 bits for an [ECC](http://en.wikipedia.org/wiki/Elliptic_curve_cryptography) implementation) to transport a 128 bit AES key.
You might wish to study the references provided at <http://www.keylength.com/en/compare/> for further information about comparing key lengths among different algorithms. | Not an expert in DH here, but to me it seems that DH's keyspace for the shared key represented in *n* bits is somewhat smaller than 2^*n*. |
7,887 | Is there a limit to the number of tiles a city can work? I've noticed that I can purchase and use tiles several spaces away from my city. How far can I go? | 2010/09/23 | [
"https://gaming.stackexchange.com/questions/7887",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/3/"
] | Up to 3 hexes away, 36 hexes total
----------------------------------
Plus one more for the hex the city is on.
***Civilization 5***

All of the previous games in the Civilization Series used squares for the grid, and the maximum city radius was *2ish* squares away. Take a look at the image below for what I mean by *2ish*. (In words, the exact city radius is a 3x3 grid centered on the city, with one square from each corner removed.) That was a total of 20 squares, plus one more for the city square.
***Civilization 4, and earlier***
 | Your cities can get to hexes that are up to 3 away from them, although they all have to be contiguous, so you can't get to one that is 3 away until you have one that is two away connected to it. There is no limit to the number of tiles you can work (other than the limits on your population). |
7,887 | Is there a limit to the number of tiles a city can work? I've noticed that I can purchase and use tiles several spaces away from my city. How far can I go? | 2010/09/23 | [
"https://gaming.stackexchange.com/questions/7887",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/3/"
] | Your cities can get to hexes that are up to 3 away from them, although they all have to be contiguous, so you can't get to one that is 3 away until you have one that is two away connected to it. There is no limit to the number of tiles you can work (other than the limits on your population). | The maximum limit is five tiles, but that's for very large, isolated (more resources) cities, and usually your capital. |
7,887 | Is there a limit to the number of tiles a city can work? I've noticed that I can purchase and use tiles several spaces away from my city. How far can I go? | 2010/09/23 | [
"https://gaming.stackexchange.com/questions/7887",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/3/"
] | Your cities can get to hexes that are up to 3 away from them, although they all have to be contiguous, so you can't get to one that is 3 away until you have one that is two away connected to it. There is no limit to the number of tiles you can work (other than the limits on your population). | In square or isometric (1-4) Civ games, you can work 2 tiles away except the 4 corners.
In hexagon (5-6) Civ games, you can work 3 tiles away.
You can access strategic or luxury goods within your border, but cannot work tiles outside of the work limit. You can share tiles between cities, and transfer them by clicking on a tile in the city screen. You can also use the cultural borders of another city to work a tile further away. With Civ 5-6, you could work a tile 3 away without having any tiles 2 away, because it is within the cultural border of another city.
Culture Levels (numbers are for Normal Speed): Quick is 1/2x, Epic is 3/2x, Marathon is 3x.
0. None (0)
1. Poor (0)
2. Fledgling (10)
3. Developed (100)
4. Refined (500), Civ4 Tutorial (1,000)
5. Influential (5,000), Civ4 Tutorial (10,000)
6. Legendary (50,000), Civ4 Tutorial (100,000)
Cultural Levels of the Civ4 Tutorial are different from the actual game in Normal speed. You could mod the XML so that Poor requires cultural points, so that new cities start without any workable tiles; this is not recommended, as without Cultural Borders anyone could capture a new city just by walking into it if there are no military units to defend it.
7,887 | Is there a limit to the number of tiles a city can work? I've noticed that I can purchase and use tiles several spaces away from my city. How far can I go? | 2010/09/23 | [
"https://gaming.stackexchange.com/questions/7887",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/3/"
] | Up to 3 hexes away, 36 hexes total
----------------------------------
Plus one more for the hex the city is on.
***Civilization 5***

All of the previous games in the Civilization series used squares for the grid, and the maximum city radius was *2ish* squares away. Take a look at the image below for what I mean by *2ish*. (In words, the exact city radius is a 5x5 grid centered on the city, with one square from each corner removed.) That was a total of 20 squares, plus one more for the city square.
***Civilization 4, and earlier***
| The maximum limit is five tiles, but that's for very large, isolated (more resources) cities, and usually your capital. |
7,887 | Is there a limit to the number of tiles a city can work? I've noticed that I can purchase and use tiles several spaces away from my city. How far can I go? | 2010/09/23 | [
"https://gaming.stackexchange.com/questions/7887",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/3/"
] | Up to 3 hexes away, 36 hexes total
----------------------------------
Plus one more for the hex the city is on.
***Civilization 5***

All of the previous games in the Civilization series used squares for the grid, and the maximum city radius was *2ish* squares away. Take a look at the image below for what I mean by *2ish*. (In words, the exact city radius is a 5x5 grid centered on the city, with one square from each corner removed.) That was a total of 20 squares, plus one more for the city square.
***Civilization 4, and earlier***
 | In square or isometric (1-4) Civ games, you can work 2 tiles away except the 4 corners.
In hexagon (5-6) Civ games, you can work 3 tiles away.
You can access strategic or luxury goods within your border, but cannot work tiles outside of the work limit. You can share tiles between cities, and transfer them by clicking on a tile in the city screen. You can also use the cultural borders of another city to work a tile further away. With Civ 5-6, you could work a tile 3 away without having any tiles 2 away, because it is within the cultural border of another city.
Culture Levels (numbers are for Normal Speed): Quick is 1/2x, Epic is 3/2x, Marathon is 3x.
0. None (0)
1. Poor (0)
2. Fledgling (10)
3. Developed (100)
4. Refined (500), Civ4 Tutorial (1,000)
5. Influential (5,000), Civ4 Tutorial (10,000)
6. Legendary (50,000), Civ4 Tutorial (100,000)
Cultural Levels of the Civ4 Tutorial are different from the actual game in Normal speed. You could mod the XML so that Poor requires cultural points, so that new cities start without any workable tiles; this is not recommended, as without Cultural Borders anyone could capture a new city just by walking into it if there are no military units to defend it.
7,887 | Is there a limit to the number of tiles a city can work? I've noticed that I can purchase and use tiles several spaces away from my city. How far can I go? | 2010/09/23 | [
"https://gaming.stackexchange.com/questions/7887",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/3/"
] | The maximum limit is five tiles, but that's for very large, isolated (more resources) cities, and usually your capital. | In square or isometric (1-4) Civ games, you can work 2 tiles away except the 4 corners.
In hexagon (5-6) Civ games, you can work 3 tiles away.
You can access strategic or luxury goods within your border, but cannot work tiles outside of the work limit. You can share tiles between cities, and transfer them by clicking on a tile in the city screen. You can also use the cultural borders of another city to work a tile further away. With Civ 5-6, you could work a tile 3 away without having any tiles 2 away, because it is within the cultural border of another city.
Culture Levels (numbers are for Normal Speed): Quick is 1/2x, Epic is 3/2x, Marathon is 3x.
0. None (0)
1. Poor (0)
2. Fledgling (10)
3. Developed (100)
4. Refined (500), Civ4 Tutorial (1,000)
5. Influential (5,000), Civ4 Tutorial (10,000)
6. Legendary (50,000), Civ4 Tutorial (100,000)
Cultural Levels of the Civ4 Tutorial are different from the actual game in Normal speed. You could mod the XML so that Poor requires cultural points, so that new cities start without any workable tiles; this is not recommended, as without Cultural Borders anyone could capture a new city just by walking into it if there are no military units to defend it.
2,965 | 100k, 200k and now 300k. Such an amazing feat. Heartily congrats [Brian](https://salesforce.stackexchange.com/users/2984/sfdcfox) and thanks for your amazing help in making SFSE a great success. I can't count how many times you have helped me in reaching a solution or understanding a difficult concept. You rock. :)
[](https://i.stack.imgur.com/ELUlH.png) | 2019/12/30 | [
"https://salesforce.meta.stackexchange.com/questions/2965",
"https://salesforce.meta.stackexchange.com",
"https://salesforce.meta.stackexchange.com/users/19118/"
] | What an achievement! Salesforce and programming as a second career for me after 20 years in the Army. I would not be where I am now without people like sfdcfox. | You are an amazing and generous man. I know I have asked many questions and you have always answered. I remember telling my wife about your tenacity with the SFSE community years ago. Thank you for all you do, and I'm hoping things get smoother personally for you. |
2,965 | 100k, 200k and now 300k. Such an amazing feat. Heartily congrats [Brian](https://salesforce.stackexchange.com/users/2984/sfdcfox) and thanks for your amazing help in making SFSE a great success. I can't count how many times you have helped me in reaching a solution or understanding a difficult concept. You rock. :)
[](https://i.stack.imgur.com/ELUlH.png) | 2019/12/30 | [
"https://salesforce.meta.stackexchange.com/questions/2965",
"https://salesforce.meta.stackexchange.com",
"https://salesforce.meta.stackexchange.com/users/19118/"
] | Welcome to 2020!
It's been an incredible journey so far. 300k is indeed impressive, even more so when you look at how small our network is, relatively speaking (compared to Stack Overflow, for example). I've personally answered about 8 of 100 questions we have on the network, and I've probably read close to 80% of the questions that have been posted since I've joined.
I've learned so much, and I've had so much support. I cannot possibly thank everyone enough for their continued support. My personal life has been rocky since joining, and I've kept most of it from everyone but my closest friends.
I feel like it's not appropriate to share my personal problems, but even if you didn't know my struggles, everyone's positive support, comments, answers, edits, votes, etc have all been an inspiration and kept me afloat in the darkest of times.
I look forward to contributing even more in the future, continuing my personal growth, and helping those that need it. I'm so glad that I found salesforce.com, and the community that they foster. Everyone has been amazing. I simply can't stress that enough.
So, here's to the next 100k! | I remember when I introduced a colleague to this stack exchange.
I told them;
>
> You can just ask any Salesforce question and sfdcfox will usually answer you pretty quickly.
>
>
> |
2,965 | 100k, 200k and now 300k. Such an amazing feat. Heartily congrats [Brian](https://salesforce.stackexchange.com/users/2984/sfdcfox) and thanks for your amazing help in making SFSE a great success. I can't count how many times you have helped me in reaching a solution or understanding a difficult concept. You rock. :)
[](https://i.stack.imgur.com/ELUlH.png) | 2019/12/30 | [
"https://salesforce.meta.stackexchange.com/questions/2965",
"https://salesforce.meta.stackexchange.com",
"https://salesforce.meta.stackexchange.com/users/19118/"
] | What an achievement! Salesforce and programing as a second career for me after 20 years in the Army. I would not be where I am now without people like sfdxfox. | Congratulations! I've only ever posted one question (today in fact, lol) but I've been lurking for a few years and it seems that every question I've looked up has had a great answer from you, or at least a comment with additional useful insight. Definitely the most consistently helpful poster on the site from what I've seen. |
2,965 | 100k, 200k and now 300k. Such an amazing feat. Heartily congrats [Brian](https://salesforce.stackexchange.com/users/2984/sfdcfox) and thanks for your amazing help in making SFSE a great success. I can't count how many times you have helped me in reaching a solution or understanding a difficult concept. You rock. :)
[](https://i.stack.imgur.com/ELUlH.png) | 2019/12/30 | [
"https://salesforce.meta.stackexchange.com/questions/2965",
"https://salesforce.meta.stackexchange.com",
"https://salesforce.meta.stackexchange.com/users/19118/"
] | Congrats on Reaching 300K. It's really a great achievement. Well Done @sfdcfox | I remember when I introduced a colleague to this stack exchange.
I told them;
>
> You can just ask any Salesforce question and sfdcfox will usually answer you pretty quickly.
>
>
> |
2,965 | 100k, 200k and now 300k. Such an amazing feat. Heartily congrats [Brian](https://salesforce.stackexchange.com/users/2984/sfdcfox) and thanks for your amazing help in making SFSE a great success. I can't count how many times you have helped me in reaching a solution or understanding a difficult concept. You rock. :)
[](https://i.stack.imgur.com/ELUlH.png) | 2019/12/30 | [
"https://salesforce.meta.stackexchange.com/questions/2965",
"https://salesforce.meta.stackexchange.com",
"https://salesforce.meta.stackexchange.com/users/19118/"
] | I remember when you got 100K and everyone ooh'ed and aw'ed. Now, it seems as though 1 million is readily achievable. A truly remarkable achievement and one wonders how you do any other work at all.
I and the community are extraordinarily grateful for your contributions | Congratulations! I've only ever posted one question (today in fact, lol) but I've been lurking for a few years and it seems that every question I've looked up has had a great answer from you, or at least a comment with additional useful insight. Definitely the most consistently helpful poster on the site from what I've seen. |
2,965 | 100k, 200k and now 300k. Such an amazing feat. Heartily congrats [Brian](https://salesforce.stackexchange.com/users/2984/sfdcfox) and thanks for your amazing help in making SFSE a great success. I can't count how many times you have helped me in reaching a solution or understanding a difficult concept. You rock. :)
[](https://i.stack.imgur.com/ELUlH.png) | 2019/12/30 | [
"https://salesforce.meta.stackexchange.com/questions/2965",
"https://salesforce.meta.stackexchange.com",
"https://salesforce.meta.stackexchange.com/users/19118/"
] | Thanks to people like Brian, the lives of others become easier and more interesting. Personally, I was often helped by the answers of Brian. It is almost impossible to overestimate the contribution he has made to our community.
Keep up the good work and thank you @sfdcfox | I remember when I introduced a colleague to this stack exchange.
I told them;
>
> You can just ask any Salesforce question and sfdcfox will usually answer you pretty quickly.
>
>
> |
2,965 | 100k, 200k and now 300k. Such an amazing feat. Heartily congrats [Brian](https://salesforce.stackexchange.com/users/2984/sfdcfox) and thanks for your amazing help in making SFSE a great success. I can't count how many times you have helped me in reaching a solution or understanding a difficult concept. You rock. :)
[](https://i.stack.imgur.com/ELUlH.png) | 2019/12/30 | [
"https://salesforce.meta.stackexchange.com/questions/2965",
"https://salesforce.meta.stackexchange.com",
"https://salesforce.meta.stackexchange.com/users/19118/"
] | Congrats on Reaching 300K. It's really a great achievement. Well Done @sfdcfox | Congratulations @sfdcfox. You are amazing, you are generous, you are helpful. But most importantly you are humble and kind. When my questions get a response from you, i know i will learn something that will stay in my mind. Just as you strengthen up our knowledge each day, I pray that you get the strength and wisdom to smoothly surmount all the challenges life bring you. |
2,965 | 100k, 200k and now 300k. Such an amazing feat. Heartily congrats [Brian](https://salesforce.stackexchange.com/users/2984/sfdcfox) and thanks for your amazing help in making SFSE a great success. I can't count how many times you have helped me in reaching a solution or understanding a difficult concept. You rock. :)
[](https://i.stack.imgur.com/ELUlH.png) | 2019/12/30 | [
"https://salesforce.meta.stackexchange.com/questions/2965",
"https://salesforce.meta.stackexchange.com",
"https://salesforce.meta.stackexchange.com/users/19118/"
] | Welcome to 2020!
It's been an incredible journey so far. 300k is indeed impressive, even more so when you look at how small our network is, relatively speaking (compared to Stack Overflow, for example). I've personally answered about 8 of 100 questions we have on the network, and I've probably read close to 80% of the questions that have been posted since I've joined.
I've learned so much, and I've had so much support. I cannot possibly thank everyone enough for their continued support. My personal life has been rocky since joining, and I've kept most of it from everyone but my closest friends.
I feel like it's not appropriate to share my personal problems, but even if you didn't know my struggles, everyone's positive support, comments, answers, edits, votes, etc have all been an inspiration and kept me afloat in the darkest of times.
I look forward to contributing even more in the future, continuing my personal growth, and helping those that need it. I'm so glad that I found salesforce.com, and the community that they foster. Everyone has been amazing. I simply can't stress that enough.
So, here's to the next 100k! | You are an amazing and generous man. I know I have asked many questions and you have always answered. I remember telling my wife about your tenacity with the SFSE community years ago. Thank you for all you do and hoping things get smoother personally for you. |
2,965 | 100k, 200k and now 300k. Such an amazing feat. Heartily congrats [Brian](https://salesforce.stackexchange.com/users/2984/sfdcfox) and thanks for your amazing help in making SFSE a great success. I can't count how many times you have helped me in reaching a solution or understanding a difficult concept. You rock. :)
[](https://i.stack.imgur.com/ELUlH.png) | 2019/12/30 | [
"https://salesforce.meta.stackexchange.com/questions/2965",
"https://salesforce.meta.stackexchange.com",
"https://salesforce.meta.stackexchange.com/users/19118/"
] | I remember when you got 100K and everyone ooh'ed and aw'ed. Now, it seems as though 1 million is readily achievable. A truly remarkable achievement and one wonders how you do any other work at all.
I and the community are extraordinarily grateful for your contributions | I remember when I introduced a colleague to this stack exchange.
I told them;
>
> You can just ask any Salesforce question and sfdcfox will usually answer you pretty quickly.
>
>
> |
2,965 | 100k, 200k and now 300k. Such an amazing feat. Heartily congrats [Brian](https://salesforce.stackexchange.com/users/2984/sfdcfox) and thanks for your amazing help in making SFSE a great success. I can't count how many times you have helped me in reaching a solution or understanding a difficult concept. You rock. :)
[](https://i.stack.imgur.com/ELUlH.png) | 2019/12/30 | [
"https://salesforce.meta.stackexchange.com/questions/2965",
"https://salesforce.meta.stackexchange.com",
"https://salesforce.meta.stackexchange.com/users/19118/"
] | Welcome to 2020!
It's been an incredible journey so far. 300k is indeed impressive, even more so when you look at how small our network is, relatively speaking (compared to Stack Overflow, for example). I've personally answered about 8 of 100 questions we have on the network, and I've probably read close to 80% of the questions that have been posted since I've joined.
I've learned so much, and I've had so much support. I cannot possibly thank everyone enough for their continued support. My personal life has been rocky since joining, and I've kept most of it from everyone but my closest friends.
I feel like it's not appropriate to share my personal problems, but even if you didn't know my struggles, everyone's positive support, comments, answers, edits, votes, etc have all been an inspiration and kept me afloat in the darkest of times.
I look forward to contributing even more in the future, continuing my personal growth, and helping those that need it. I'm so glad that I found salesforce.com, and the community that they foster. Everyone has been amazing. I simply can't stress that enough.
So, here's to the next 100k! | Another truly astounding stat: ~7.8 *million* people reached by sfdcfox's contributions through the years.
We can't show enough gratitude for everything that you do for this community! |
4,920,039 | Can anyone please send the code for updating the data using SQLite for a to-do list app? I used the 4 parts of "Creating a ToDo List using Sqlite" found via Google. In that code, when I enter the data it appears to be accepted and changed, but when I build again it shows the previous data that I entered with the insert command. It isn't saved permanently. Please refer to the code available via Google.
And please help me to update the data.
Thanks... | 2011/02/07 | [
"https://Stackoverflow.com/questions/4920039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1315167/"
] | Apple will not allow this. If your app is used by people with disabilities severe enough that they can't unlock an iPhone then you're developing on the wrong platform. If someone can't unlock an iPhone how will they use all its other features including your app? It's possible to set the iPhone to never autolock but this is something the user has to do in settings and your app can not do it. You could pop up a dialog though the first time your app is launched explaining how to do this. | No, no matter what your reason might be, Apple won't allow such an App because of various reasons (it could break any time due to API changes etc)
4,920,039 | Can anyone please send the code for updating the data using SQLite for a to-do list app? I used the 4 parts of "Creating a ToDo List using Sqlite" found via Google. In that code, when I enter the data it appears to be accepted and changed, but when I build again it shows the previous data that I entered with the insert command. It isn't saved permanently. Please refer to the code available via Google.
And please help me to update the data.
Thanks... | 2011/02/07 | [
"https://Stackoverflow.com/questions/4920039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1315167/"
] | It's not possible to work on a function like this, because the Unlock-Screen is a part of the *Springboard.app*. You can't change something in another app, **this is not possible.**
You would need to jailbreak the phones for changing something in the Springboard.app. | No, no matter what your reason might be, Apple won't allow such an App because of various reasons (it could break any time due to API changes etc) |
243,230 | Cambridge Dictionary gives this definition about "complain"
>
> to say that something is wrong or not satisfactory
>
>
>
and this definition about "complaint"
>
> a statement that something is wrong or not satisfactory
>
>
>
I guess both of them means the same thing. However, [Google Ngram](https://books.google.com/ngrams/graph?content=to%20complain%20about%2C%20make%20a%20complaint%20about%20&case_insensitive=on&year_start=1800&year_end=2008&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0#t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0) shows that the former is much more commonly used that the latter
[](https://i.stack.imgur.com/IyIrh.png)
Are those two expressions interchangeable? | 2020/04/01 | [
"https://ell.stackexchange.com/questions/243230",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/109190/"
] | When in doubt, I find it helpful to simplify the sentence. Consider these:
>
> The conclusion is uncertain.
>
> The conclusion is final.
>
> The conclusion is X.
>
>
>
Clearly, whatever the conclusion is, it's singular and needs a singular verb.
Now let's look at what the actual conclusion is:
>
> Both are harmful.
>
>
>
Again, clearly, "both" refers to two things and thus requires a plural verb, "are". You could check that by replacing both:
>
> Cigarettes and gun battles are harmful.
>
> Angry dragons and mean dogs are harmful.
>
>
>
Now, let's take that last *conclusion* example -- *The conclusion is X.* -- and replace X with what it actually is:
>
> The conclusion is [both are harmful].
>
>
>
When you look at it that way, it becomes clear that the sentence as originally written is indeed correct.
My conclusion **is** that your friends **are** wrong. | You have two ideas together. One is that the conclusion *is* right. The other is that both A and B *are* harmful. You are putting these into one sentence and finding it troublesome.
You are also rushing by with spoken English and leaving out an implied word that explains things.
The conclusion is *that* both are purple. |
243,230 | Cambridge Dictionary gives this definition about "complain"
>
> to say that something is wrong or not satisfactory
>
>
>
and this definition about "complaint"
>
> a statement that something is wrong or not satisfactory
>
>
>
I guess both of them means the same thing. However, [Google Ngram](https://books.google.com/ngrams/graph?content=to%20complain%20about%2C%20make%20a%20complaint%20about%20&case_insensitive=on&year_start=1800&year_end=2008&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0#t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0) shows that the former is much more commonly used that the latter
[](https://i.stack.imgur.com/IyIrh.png)
Are those two expressions interchangeable? | 2020/04/01 | [
"https://ell.stackexchange.com/questions/243230",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/109190/"
] | No. No, no, no. Send your friends back a grade.
*Conclusion* is singular. Use *is*.
*Both* is plural. Use *are*. | No reason they can't be used in the same sentence, but for the sentence you provided it's incorrect.
The context of the question is "can you use both or only one", so "both" is *a solution* and therefore:
"The conclusion is [the use of] both is harmful."
In the original sentence, "The conclusion is both are harmful.", this is implying that *both* words "is" and "are" are independently harmful, and they are not both harmful. |
243,230 | Cambridge Dictionary gives this definition about "complain"
>
> to say that something is wrong or not satisfactory
>
>
>
and this definition about "complaint"
>
> a statement that something is wrong or not satisfactory
>
>
>
I guess both of them means the same thing. However, [Google Ngram](https://books.google.com/ngrams/graph?content=to%20complain%20about%2C%20make%20a%20complaint%20about%20&case_insensitive=on&year_start=1800&year_end=2008&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0#t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0) shows that the former is much more commonly used that the latter
[](https://i.stack.imgur.com/IyIrh.png)
Are those two expressions interchangeable? | 2020/04/01 | [
"https://ell.stackexchange.com/questions/243230",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/109190/"
] | You have two ideas together. One is that the conclusion *is* right. The other is that both A and B *are* harmful. You are putting these into one sentence and finding it troublesome.
You are also rushing by with spoken English and leaving out an implied word that explains things.
The conclusion is *that* both are purple. | No reason they can't be used in the same sentence, but for the sentence you provided it's incorrect.
The context of the question is "can you use both or only one", so "both" is *a solution* and therefore:
"The conclusion is [the use of] both is harmful."
In the original sentence, "The conclusion is both are harmful.", this is implying that *both* words "is" and "are" are independently harmful, and they are not both harmful. |
243,230 | Cambridge Dictionary gives this definition about "complain"
>
> to say that something is wrong or not satisfactory
>
>
>
and this definition about "complaint"
>
> a statement that something is wrong or not satisfactory
>
>
>
I guess both of them means the same thing. However, [Google Ngram](https://books.google.com/ngrams/graph?content=to%20complain%20about%2C%20make%20a%20complaint%20about%20&case_insensitive=on&year_start=1800&year_end=2008&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0#t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0) shows that the former is much more commonly used that the latter
[](https://i.stack.imgur.com/IyIrh.png)
Are those two expressions interchangeable? | 2020/04/01 | [
"https://ell.stackexchange.com/questions/243230",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/109190/"
] | It all depends on the way you define what *sentence* means in your context.
If you mean the string between two periods, then yes, a sentence can have an arbitrary number of conjugated verbs, and hence *is* and *are* in one sentence are allowed. For example:
>
> The weather is fine and all people are happy.
>
>
>
If you consider a sentence to be the part containing subject and conjugated verb (or an infinitive sentence), then it is not possible. Then, the example above must be considered as two sentences that are concatenated.
The example in your question is really a sentence and its subclause - so it is one sentence in the first sense but two sentences in the latter sense. For better readability, there should be a *that* in between:
>
> The conclusion is that both are harmful.
>
>
> | No reason they can't be used in the same sentence, but for the sentence you provided it's incorrect.
The context of the question is "can you use both or only one", so "both" is *a solution* and therefore:
"The conclusion is [the use of] both is harmful."
In the original sentence, "The conclusion is both are harmful.", this is implying that *both* words "is" and "are" are independently harmful, and they are not both harmful. |
243,230 | Cambridge Dictionary gives this definition about "complain"
>
> to say that something is wrong or not satisfactory
>
>
>
and this definition about "complaint"
>
> a statement that something is wrong or not satisfactory
>
>
>
I guess both of them means the same thing. However, [Google Ngram](https://books.google.com/ngrams/graph?content=to%20complain%20about%2C%20make%20a%20complaint%20about%20&case_insensitive=on&year_start=1800&year_end=2008&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0#t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0) shows that the former is much more commonly used that the latter
[](https://i.stack.imgur.com/IyIrh.png)
Are those two expressions interchangeable? | 2020/04/01 | [
"https://ell.stackexchange.com/questions/243230",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/109190/"
] | When in doubt, I find it helpful to simplify the sentence. Consider these:
>
> The conclusion is uncertain.
>
> The conclusion is final.
>
> The conclusion is X.
>
>
>
Clearly, whatever the conclusion is, it's singular and needs a singular verb.
Now let's look at what the actual conclusion is:
>
> Both are harmful.
>
>
>
Again, clearly, "both" refers to two things and thus requires a plural verb, "are". You could check that by replacing both:
>
> Cigarettes and gun battles are harmful.
>
> Angry dragons and mean dogs are harmful.
>
>
>
Now, let's take that last *conclusion* example -- *The conclusion is X.* -- and replace X with what it actually is:
>
> The conclusion is [both are harmful].
>
>
>
When you look at it that way, it becomes clear that the sentence as originally written is indeed correct.
My conclusion **is** that your friends **are** wrong. | In relative, noun, or adverbial clauses we often face these kinds of sentences. E.g. “Who I am is not important.” |
243,230 | Cambridge Dictionary gives this definition about "complain"
>
> to say that something is wrong or not satisfactory
>
>
>
and this definition about "complaint"
>
> a statement that something is wrong or not satisfactory
>
>
>
I guess both of them means the same thing. However, [Google Ngram](https://books.google.com/ngrams/graph?content=to%20complain%20about%2C%20make%20a%20complaint%20about%20&case_insensitive=on&year_start=1800&year_end=2008&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0#t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0) shows that the former is much more commonly used that the latter
[](https://i.stack.imgur.com/IyIrh.png)
Are those two expressions interchangeable? | 2020/04/01 | [
"https://ell.stackexchange.com/questions/243230",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/109190/"
] | When in doubt, I find it helpful to simplify the sentence. Consider these:
>
> The conclusion is uncertain.
>
> The conclusion is final.
>
> The conclusion is X.
>
>
>
Clearly, whatever the conclusion is, it's singular and needs a singular verb.
Now let's look at what the actual conclusion is:
>
> Both are harmful.
>
>
>
Again, clearly, "both" refers to two things and thus requires a plural verb, "are". You could check that by replacing both:
>
> Cigarettes and gun battles are harmful.
>
> Angry dragons and mean dogs are harmful.
>
>
>
Now, let's take that last *conclusion* example -- *The conclusion is X.* -- and replace X with what it actually is:
>
> The conclusion is [both are harmful].
>
>
>
When you look at it that way, it becomes clear that the sentence as originally written is indeed correct.
My conclusion **is** that your friends **are** wrong. | It all depends on the way you define what *sentence* means in your context.
If you mean the string between two periods, then yes, a sentence can have an arbitrary number of conjugated verbs, and hence *is* and *are* in one sentence are allowed. For example:
>
> The weather is fine and all people are happy.
>
>
>
If you consider a sentence to be the part containing subject and conjugated verb (or an infinitive sentence), then it is not possible. Then, the example above must be considered as two sentences that are concatenated.
The example in your question is really a sentence and its subclause - so it is one sentence in the first sense but two sentences in the latter sense. For better readability, there should be a *that* in between:
>
> The conclusion is that both are harmful.
>
>
> |
243,230 | Cambridge Dictionary gives this definition about "complain"
>
> to say that something is wrong or not satisfactory
>
>
>
and this definition about "complaint"
>
> a statement that something is wrong or not satisfactory
>
>
>
I guess both of them means the same thing. However, [Google Ngram](https://books.google.com/ngrams/graph?content=to%20complain%20about%2C%20make%20a%20complaint%20about%20&case_insensitive=on&year_start=1800&year_end=2008&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0#t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0) shows that the former is much more commonly used that the latter
[](https://i.stack.imgur.com/IyIrh.png)
Are those two expressions interchangeable? | 2020/04/01 | [
"https://ell.stackexchange.com/questions/243230",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/109190/"
] | No. No, no, no. Send your friends back a grade.
*Conclusion* is singular. Use *is*.
*Both* is plural. Use *are*. | You have two ideas together. One is that the conclusion *is* right. The other is that both A and B *are* harmful. You are putting these into one sentence and finding it troublesome.
You are also rushing by with spoken English and leaving out an implied word that explains things.
The conclusion is *that* both are purple. |
243,230 | Cambridge Dictionary gives this definition about "complain"
>
> to say that something is wrong or not satisfactory
>
>
>
and this definition about "complaint"
>
> a statement that something is wrong or not satisfactory
>
>
>
I guess both of them means the same thing. However, [Google Ngram](https://books.google.com/ngrams/graph?content=to%20complain%20about%2C%20make%20a%20complaint%20about%20&case_insensitive=on&year_start=1800&year_end=2008&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0#t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0) shows that the former is much more commonly used that the latter
[](https://i.stack.imgur.com/IyIrh.png)
Are those two expressions interchangeable? | 2020/04/01 | [
"https://ell.stackexchange.com/questions/243230",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/109190/"
] | In relative, noun, or adverbial clauses we often face these kinds of sentences. E.g. “Who I am is not important.” | You have two ideas together. One is that the conclusion *is* right. The other is that both A and B *are* harmful. You are putting these into one sentence and finding it troublesome.
You are also rushing by with spoken English and leaving out an implied word that explains things.
The conclusion is *that* both are purple. |
243,230 | Cambridge Dictionary gives this definition about "complain"
>
> to say that something is wrong or not satisfactory
>
>
>
and this definition about "complaint"
>
> a statement that something is wrong or not satisfactory
>
>
>
I guess both of them means the same thing. However, [Google Ngram](https://books.google.com/ngrams/graph?content=to%20complain%20about%2C%20make%20a%20complaint%20about%20&case_insensitive=on&year_start=1800&year_end=2008&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0#t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0) shows that the former is much more commonly used that the latter
[](https://i.stack.imgur.com/IyIrh.png)
Are those two expressions interchangeable? | 2020/04/01 | [
"https://ell.stackexchange.com/questions/243230",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/109190/"
] | No. No, no, no. Send your friends back a grade.
*Conclusion* is singular. Use *is*.
*Both* is plural. Use *are*. | It all depends on the way you define what *sentence* means in your context.
If you mean the string between two periods, then yes, a sentence can have an arbitrary number of conjugated verbs, and hence *is* and *are* in one sentence are allowed. For example:
>
> The weather is fine and all people are happy.
>
>
>
If you consider a sentence to be the part containing subject and conjugated verb (or an infinitive sentence), then it is not possible. Then, the example above must be considered as two sentences that are concatenated.
The example in your question is really a sentence and its subclause - so it is one sentence in the first sense but two sentences in the latter sense. For better readability, there should be a *that* in between:
>
> The conclusion is that both are harmful.
>
>
> |
243,230 | Cambridge Dictionary gives this definition about "complain"
>
> to say that something is wrong or not satisfactory
>
>
>
and this definition about "complaint"
>
> a statement that something is wrong or not satisfactory
>
>
>
I guess both of them means the same thing. However, [Google Ngram](https://books.google.com/ngrams/graph?content=to%20complain%20about%2C%20make%20a%20complaint%20about%20&case_insensitive=on&year_start=1800&year_end=2008&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0#t1%3B%2Cto%20complain%20about%3B%2Cc0%3B.t1%3B%2Cmake%20a%20complaint%20about%3B%2Cc0) shows that the former is much more commonly used that the latter
[](https://i.stack.imgur.com/IyIrh.png)
Are those two expressions interchangeable? | 2020/04/01 | [
"https://ell.stackexchange.com/questions/243230",
"https://ell.stackexchange.com",
"https://ell.stackexchange.com/users/109190/"
] | When in doubt, I find it helpful to simplify the sentence. Consider these:
>
> The conclusion is uncertain.
>
> The conclusion is final.
>
> The conclusion is X.
>
>
>
Clearly, whatever the conclusion is, it's singular and needs a singular verb.
Now let's look at what the actual conclusion is:
>
> Both are harmful.
>
>
>
Again, clearly, "both" refers to two things and thus requires a plural verb, "are". You could check that by replacing both:
>
> Cigarettes and gun battles are harmful.
>
> Angry dragons and mean dogs are harmful.
>
>
>
Now, let's take that last *conclusion* example -- *The conclusion is X.* -- and replace X with what it actually is:
>
> The conclusion is [both are harmful].
>
>
>
When you look at it that way, it becomes clear that the sentence as originally written is indeed correct.
My conclusion **is** that your friends **are** wrong. | No. No, no, no. Send your friends back a grade.
*Conclusion* is singular. Use *is*.
*Both* is plural. Use *are*. |
916 | What is your experience with applying IT policy to the Board of Directors?
Please mention the country and industry you have experience in, since the advice you're sharing may or may not be the same across all industries.
**[Edit]**
It isn't uncommon for a single Board Member to be involved in more than one board/company. If this is the case, it's entirely possible that that individual may have conflicting IT policies in place if they were both applied to the same machine. How does this ultimately impact the way they do business? | 2010/12/01 | [
"https://security.stackexchange.com/questions/916",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
] | Every single person in the organization must abide by the policies. With that being said, since they are in charge they are within their right to make a change to the policies.
They should be sold on the policies so they don't change them. IT is doing it to protect the Board's investment.
EDIT: with regard to conflicting policies across orgs, I say the same policies apply. How would you handle an outside contractors laptop? | A board member should never deviate from the companies policy where the policy applies to them. The policy should be clear on what applies to whom and what the consequences for violation are. Policies can have contradictory elements where "allowed vs not-allowed" depends on ones position and responsibilities. The board should be prepared to justify the policy to whomever they answer to. Shareholders, Regulatory Agencies, etc.
Where policy is concerned the one position you don't want to find yourself in is where you knowingly allow/aid users in circumventing the policy. If you have to have exceptions document them. Better yet make them part of the policy. If the board isn't comfortable documenting an exception and still insists on being the exception go polish your resume.
A board member shouldn't be any different from any other employee with regards to following the rules. But it is up to the board, barring any legal restriction, to decide if a different set of rules should apply to them.
It is the board member's responsibility to understand what is required of them. If they are placed in a position of conflict they should reach out to both entities and try to reach a compromise. From a technical point of view the easiest solution would be complete segregation of resources, ie two separate machines. Obviously this isn't the most usable solution. I would try to shoot for giving them remote access to an internal machine that has limited access to just what they need to do their job. I am not a big fan of giving them email access where they can work with documents from their personal machine. No matter what you do be sure it is documented and you are doing what your policy states you are doing. |
916 | What is your experience with applying IT policy to the Board of Directors?
Please mention the country and industry you have experience in, since the advice you're sharing may or may not be the same across all industries.
**[Edit]**
It isn't uncommon for a single Board Member to be involved in more than one board/company. If this is the case, it's entirely possible that that individual may have conflicting IT policies in place if they were both applied to the same machine. How does this ultimately impact the way they do business? | 2010/12/01 | [
"https://security.stackexchange.com/questions/916",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
] | I work in healthcare and write the IT policies (among other things). All of my policies are reviewed by the corporate compliance team and others before being finalized. Once a policy is approved the first people I hold to the policy are the ones who asked for it or helped write it.
My thinking is that the people who helped create the policy should be the ones who have to deal with it first so that they know whether it works. When I changed the password complexity requirements our CEO needed to get information off of his PC for a meeting. Rather than accept a temporary exemption from the password policy change he insisted on being held to the same standard as everyone else.
If someone on our board is using a system connected to our network they can either use the guest wireless or they can abide by the policies. All of the policies I write have an exception clause, but the exception must be in writing and reviewed. All exceptions are sent to the compliance auditor within 30 days of the exception and annually the full list is sent. Any exception expires no more than a year from the exception to force a review.
I'm okay with there being reasonable exceptions, as long as it is documented and necessary. When that happens I just require a compensating control to be demonstrated. | Every single person in the organization must abide by the policies. With that being said, since they are in charge they are within their right to make a change to the policies.
They should be sold on the policies so they don't change them. IT is doing it to protect the Board's investment.
EDIT: with regard to conflicting policies across orgs, I say the same policies apply. How would you handle an outside contractors laptop? |
916 | What is your experience with applying IT policy to the Board of Directors?
Please mention the country and industry you have experience in, since the advice you're sharing may or may not be the same across all industries.
**[Edit]**
It isn't uncommon for a single Board Member to be involved in more than one board/company. If this is the case, it's entirely possible that that individual may have conflicting IT policies in place if they were both applied to the same machine. How does this ultimately impact the way they do business? | 2010/12/01 | [
"https://security.stackexchange.com/questions/916",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
] | Agree with Steve - however a common source of non-compliance is director or board level. These individuals often want the latest technology, or want more freedom or flexibility than their staff, and are in a position of power so can demand it, so sometimes the Information Security team need to proactively identify solutions to upcoming technology issues in order to provide a secure solution by exception in these cases.
Where senior/executive management are utterly bought in to security policies, an organisation is typically more robust and governance and compliance are more easily demonstrated, but in the more usual business organisation the aim is to make compromises which allow us to enable business while not impacting security too much.
In my experience this doesn't vary that much between countries in Europe, America or the Middle East, or across industries. The point being that individuals in senior positions want to do business their way, and usually their way is considered right for the business if they make revenues and that is where we as Information Security professionals come in.
The circumstance where an individual sits on more than one board is a major problem. The security ideal is obviously to completely segregate each role, however getting a director to carry round multiple laptops is unlikely. What typically happens is they use one account and manage all emails and accesses from one machine - and you end up relying on them not making a mistake.
Dangerous!
Segregation by virtual machine would seem to be a logical next step, but I have only ever seen this once. This can be secured to a high level, but requires a certain amount of communication between organisations to agree the configs etc. | A board member should never deviate from the companies policy where the policy applies to them. The policy should be clear on what applies to whom and what the consequences for violation are. Policies can have contradictory elements where "allowed vs not-allowed" depends on ones position and responsibilities. The board should be prepared to justify the policy to whomever they answer to. Shareholders, Regulatory Agencies, etc.
Where policy is concerned the one position you don't want to find yourself in is where you knowingly allow/aid users in circumventing the policy. If you have to have exceptions document them. Better yet make them part of the policy. If the board isn't comfortable documenting an exception and still insists on being the exception go polish your resume.
A board member shouldn't be any different from any other employee with regards to following the rules. But it is up to the board, barring any legal restriction, to decide if a different set of rules should apply to them.
It is the board member's responsibility to understand what is required of them. If they are placed in a position of conflict they should reach out to both entities and try to reach a compromise. From a technical point of view the easiest solution would be complete segregation of resources, ie two separate machines. Obviously this isn't the most usable solution. I would try to shoot for giving them remote access to an internal machine that has limited access to just what they need to do their job. I am not a big fan of giving them email access where they can work with documents from their personal machine. No matter what you do be sure it is documented and you are doing what your policy states you are doing. |
916 | What is your experience with applying IT policy to the Board of Directors?
Please mention the country and industry you have experience in, since the advice you're sharing may or may not be the same across all industries.
**[Edit]**
It isn't uncommon for a single Board Member to be involved in more than one board/company. If this is the case, it's entirely possible that that individual may have conflicting IT policies in place if they were both applied to the same machine. How does this ultimately impact the way they do business? | 2010/12/01 | [
"https://security.stackexchange.com/questions/916",
"https://security.stackexchange.com",
"https://security.stackexchange.com/users/396/"
] | I work in healthcare and write the IT policies (among other things). All of my policies are reviewed by the corporate compliance team and others before being finalized. Once a policy is approved the first people I hold to the policy are the ones who asked for it or helped write it.
My thinking is that the people who helped create the policy should be the ones who have to deal with it first so that they know whether it works. When I changed the password complexity requirements our CEO needed to get information off of his PC for a meeting. Rather than accept a temporary exemption from the password policy change he insisted on being held to the same standard as everyone else.
If someone on our board is using a system connected to our network they can either use the guest wireless or they can abide by the policies. All of the policies I write have an exception clause, but the exception must be in writing and reviewed. All exceptions are sent to the compliance auditor within 30 days of the exception and annually the full list is sent. Any exception expires no more than a year from the exception to force a review.
I'm okay with there being reasonable exceptions, as long as it is documented and necessary. When that happens I just require a compensating control to be demonstrated. | Agree with Steve - however a common source of non-compliance is director or board level. These individuals often want the latest technology, or want more freedom or flexibility than their staff, and are in a position of power so can demand it, so sometimes the Information Security team need to proactively identify solutions to upcoming technology issues in order to provide a secure solution by exception in these cases.
Where senior/executive management are utterly bought in to security policies, an organisation is typically more robust and governance and compliance are more easily demonstrated, but in the more usual business organisation the aim is to make compromises which allow us to enable business while not impacting security too much.
In my experience this doesn't vary that much between countries in Europe, America or the Middle East, or across industries. The point being that individuals in senior positions want to do business their way, and usually their way is considered right for the business if they make revenues and that is where we as Information Security professionals come in.
The circumstance where an individual sits on more than one board is a major problem. The security ideal is obviously to completely segregate each role, however getting a director to carry round multiple laptops is unlikely. What typically happens is they use one account and manage all emails and accesses from one machine - and you end up relying on them not making a mistake.
Dangerous!
Segregation by virtual machine would seem to be a logical next step, but I have only ever seen this once. This can be secured to a high level, but requires a certain amount of communication between organisations to agree the configs etc. |