Dataset schema:
- qid: int64 (1 to 74.7M)
- question: string (12 to 33.8k characters)
- date: string (10 characters)
- metadata: list
- response_j: string (0 to 115k characters)
- response_k: string (2 to 98.3k characters)
36,408
I have an aluminium bearing holder attached to one end of an aluminium plate as follows: [![enter image description here](https://i.stack.imgur.com/026xMl.png)](https://i.stack.imgur.com/026xMl.png) The bearing holder is attached to the plate by two bolts on the bottom. The intended setup is to attach a relatively heavy load to the right hand side of the shaft. To reduce stress on the two bolt holes in the plate, is it a good idea to have a lip on the plate as shown? Will it actually have any effect? Ideally I'd like to fasten the bearing holder through the lip, but I'm trying to avoid machining from the side and stay away from 4-axis machining. Are there better ways to reduce the stress on the holes? Any advice appreciated.
2020/06/25
[ "https://engineering.stackexchange.com/questions/36408", "https://engineering.stackexchange.com", "https://engineering.stackexchange.com/users/4301/" ]
Contact an accredited laboratory and ask what they would need as a sample. You may need to cut a short piece off the end of the pipes or, by using a grinder or similar, obtain separate ground samples of the pipes and send them to the laboratory. Assuming you live in the US, the [California Department of Public Health has a website](https://www.cdph.ca.gov/Programs/CCDPHP/DEODC/CLPPB/Pages/home_test.aspx) with information. Use this as a guide, or contact the health department where you live for more details specific to your region.
Your tests suggest that the pipe is just iron or steel and safe to hold. Certain special steel alloys called *free-machining steel* or *Ledloy* have lead in them, but they are not used to make pipe.
36,408
I have an aluminium bearing holder attached to one end of an aluminium plate as follows: [![enter image description here](https://i.stack.imgur.com/026xMl.png)](https://i.stack.imgur.com/026xMl.png) The bearing holder is attached to the plate by two bolts on the bottom. The intended setup is to attach a relatively heavy load to the right hand side of the shaft. To reduce stress on the two bolt holes in the plate, is it a good idea to have a lip on the plate as shown? Will it actually have any effect? Ideally I'd like to fasten the bearing holder through the lip, but I'm trying to avoid machining from the side and stay away from 4-axis machining. Are there better ways to reduce the stress on the holes? Any advice appreciated.
2020/06/25
[ "https://engineering.stackexchange.com/questions/36408", "https://engineering.stackexchange.com", "https://engineering.stackexchange.com/users/4301/" ]
Lead can be held safely... just don't eat it, and wash your hands. If it is old, it most likely contains some trace amount of lead, as lead was a common additive to make machining easier.
Steels are ferromagnetic, except for annealed austenitic stainless steels; those are the only pipe material that is not (aside from the occasional heat of Monel). Pipe would never be made of free-machining leaded (0.1% Pb) steels. What is your obsession with lead? It was the choice for better-grade water pipes from Roman times through most of the nineteenth century.
36,408
I have an aluminium bearing holder attached to one end of an aluminium plate as follows: [![enter image description here](https://i.stack.imgur.com/026xMl.png)](https://i.stack.imgur.com/026xMl.png) The bearing holder is attached to the plate by two bolts on the bottom. The intended setup is to attach a relatively heavy load to the right hand side of the shaft. To reduce stress on the two bolt holes in the plate, is it a good idea to have a lip on the plate as shown? Will it actually have any effect? Ideally I'd like to fasten the bearing holder through the lip, but I'm trying to avoid machining from the side and stay away from 4-axis machining. Are there better ways to reduce the stress on the holes? Any advice appreciated.
2020/06/25
[ "https://engineering.stackexchange.com/questions/36408", "https://engineering.stackexchange.com", "https://engineering.stackexchange.com/users/4301/" ]
Lead can be held safely... just don't eat it, and wash your hands. If it is old, it most likely contains some trace amount of lead, as lead was a common additive to make machining easier.
Contact an accredited laboratory and ask what they would need as a sample. You may need to cut a short piece off the end of the pipes or, by using a grinder or similar, obtain separate ground samples of the pipes and send them to the laboratory. Assuming you live in the US, the [California Department of Public Health has a website](https://www.cdph.ca.gov/Programs/CCDPHP/DEODC/CLPPB/Pages/home_test.aspx) with information. Use this as a guide, or contact the health department where you live for more details specific to your region.
36,408
I have an aluminium bearing holder attached to one end of an aluminium plate as follows: [![enter image description here](https://i.stack.imgur.com/026xMl.png)](https://i.stack.imgur.com/026xMl.png) The bearing holder is attached to the plate by two bolts on the bottom. The intended setup is to attach a relatively heavy load to the right hand side of the shaft. To reduce stress on the two bolt holes in the plate, is it a good idea to have a lip on the plate as shown? Will it actually have any effect? Ideally I'd like to fasten the bearing holder through the lip, but I'm trying to avoid machining from the side and stay away from 4-axis machining. Are there better ways to reduce the stress on the holes? Any advice appreciated.
2020/06/25
[ "https://engineering.stackexchange.com/questions/36408", "https://engineering.stackexchange.com", "https://engineering.stackexchange.com/users/4301/" ]
Contact an accredited laboratory and ask what they would need as a sample. You may need to cut a short piece off the end of the pipes or, by using a grinder or similar, obtain separate ground samples of the pipes and send them to the laboratory. Assuming you live in the US, the [California Department of Public Health has a website](https://www.cdph.ca.gov/Programs/CCDPHP/DEODC/CLPPB/Pages/home_test.aspx) with information. Use this as a guide, or contact the health department where you live for more details specific to your region.
Steels are ferromagnetic, except for annealed austenitic stainless steels; those are the only pipe material that is not (aside from the occasional heat of Monel). Pipe would never be made of free-machining leaded (0.1% Pb) steels. What is your obsession with lead? It was the choice for better-grade water pipes from Roman times through most of the nineteenth century.
35,142,571
I'm building a page that allows user generation of multiple d3 charts based on the user pushing buttons to select a dataset. The first chart generates fine. The second chart generates, but the line starts off the chart to the left-hand side. Every additional chart has this same problem. Has anyone had a similar issue? I'm not posting a specific line of code because I'm not sure where the problem is; I'm hoping others have run into a similar issue. This is an example of the code running. Click on the department buttons to start bringing up additional charts to see the problem. <http://www.justingosses.com/cookCounty/Index.html> The code itself can be found on GitHub. <https://github.com/JustinGOSSES/JustinGOSSES.github.io> Any help would be appreciated. I haven't been able to find previous similar problems.
2016/02/01
[ "https://Stackoverflow.com/questions/35142571", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5835715/" ]
Mobile Services is now folded in as Mobile Apps in App Service. You should start using Mobile Apps instead of Mobile Services.
Mobile Apps is the new version of Mobile Services. But beware: most of the documentation you will find at the present date is for the older version. Some of the features, like the Node.js backend, are very poorly documented for Mobile Apps.
186,322
I have 4 degrees: 1. [B.S. Computer Engineering](https://ece.umd.edu/undergraduate/degrees/bs-computer-engineering) 2. [B.S. Mathematics](https://www-math.umd.edu/) 3. [M.S. Software Engineering](https://www.umgc.edu/online-degrees/masters/it-software-engineering) 4. [M.S. Electrical Engineering](https://www.csee.umbc.edu/graduate/electrical-engineering-m-s-ph-d/) If you click on the links above you can find out more about the schools I went to and their programs. My question: What is the best way to find open positions/jobs at companies that use all of my degrees? If you go to a job search website (like indeed.com or google.com/jobs) you can type in search terms like "software engineer" or "Computer Engineer" or "Information Technology", and you'll see a bunch of search results for those specific fields. But what if I'm looking for a position that requires knowledge from multiple fields? Is there a website that keeps track of open positions that require knowledge from several fields ?
2022/07/19
[ "https://workplace.stackexchange.com/questions/186322", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/80736/" ]
> > I have 4 degrees, what is the best way to find a job > > > I would start with this > > that takes advantage of all 4? > > > and make this a "nice to have". A lot of this depends on how you actually acquired these degrees and how long it took you to do so. If you did all of these in parallel, you are an academic rock star and should have no problem getting into almost any company of your choice. If you did these all sequentially, then it will be very difficult to find a job at all, since you are competing with people that have at least one degree and perhaps 10+ years more of actual job experience. There are two basic routes: industry and academia. For many industry jobs degrees are not super relevant other than a check mark. In most places you need a degree to get some good understanding of the underlying theoretical foundations, but you'll learn how to do the actual work on the job. In my field (engineering, acoustics, physics): once you are 5 years into the workplace, no one cares what the subject of your degree was, and they care even less whether it is a Ph.D., M.Sc., or B.Sc. Academia is typically looking for depth, not breadth. Some open-minded faculties might actually embrace a broader perspective and additional skill set, but I have also seen faculties where this was deeply frowned upon because it's not "pure enough" and not a full commitment to the specific cause. I don't know where exactly on the spectrum between rock star and long-term student you are, but I would recommend trying to find a job that you like and that gives you some hands-on experience, even if you only use one of your degrees. Once you have a foot in the door, you can quickly see how the other skills may come in useful (or not).
> > What is the best way to find open positions/jobs at companies that use > all of my degrees? > > > If you go to a job search website (like indeed.com or google.com/jobs) > you can type in search terms like "software engineer" or "Computer > Engineer" or "Information Technology", and you'll see a bunch of > search results for those specific fields. But what if I'm looking for > a position that requires knowledge from multiple fields? > > > * Go to the job site of your choice. * Search for the **roles** that you want. * Read the job requirements for that role at each open job that is returned. * Choose one that requires knowledge from multiple fields. A multi-keyword search like this may work on some job sites: <https://www.indeed.com/jobs?q=%22computer%20science%22%20and%20%22mathematics%22%20and%20%22software%20engineering%22%20and%20%22electrical%20engineering%22> In either case, you'll have to weed through the results and likely add more filters like locale and experience level.
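A multi-keyword query like the example link above can also be built programmatically. A small sketch in Python, assuming the job site (Indeed here) accepts quoted phrases joined with "and" in its `q` parameter, as the hand-written URL in the answer suggests:

```python
from urllib.parse import urlencode

# Quoted phrases force exact-phrase matching on most job-search sites.
phrases = ['"computer science"', '"mathematics"',
           '"software engineering"', '"electrical engineering"']
query = " and ".join(phrases)

# urlencode percent-escapes the quotes and spaces for us.
url = "https://www.indeed.com/jobs?" + urlencode({"q": query})
print(url)
```

You would still weed through the results and add filters (locale, experience level) on the site itself, as the answer notes.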
44,737,956
I am looking for different methods of payment using the Authorize.net gateway. I am wondering if Authorize.net provides methods similar to Express Checkout and direct payment in PayPal. I see there is a card method available, but I am not sure whether Authorize.net provides an account-to-account payment option.
2017/06/24
[ "https://Stackoverflow.com/questions/44737956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5040460/" ]
Yes they do. <https://developer.authorize.net/api/reference/features/paypal.html> > > **PayPal Express Checkout** > > PayPal Express Checkout for Authorize.Net enables you to offer PayPal as a payment option to your customers by incorporating it within your existing API implementation. > > >
They do not. They are merely a payment gateway which connects a website to a merchant account. Paypal is a payment services provider and offers payment services that traditional payment gateways do not.
945,491
I'm looking for a way to bulk edit .jpeg files and retain their original names. Before you say, "Duh, they already have their original names.", I also want to be able to place that name on the .jpeg as a caption in a location of my choosing. The operative word is bulk. Doing it one file at a time is prohibitive unless there is no other way.
2015/07/26
[ "https://superuser.com/questions/945491", "https://superuser.com", "https://superuser.com/users/473554/" ]
After reserving a Windows 10 upgrade, when you click the same Windows-like icon in the taskbar a few hours later, you will see a window pop up with the message "Download - In Progress", and you can see the download progress by clicking on the button "View Download Progress". [![](https://i.stack.imgur.com/ScwK3.jpg)](https://i.stack.imgur.com/ScwK3.jpg) ([Image source](http://pocketnow.com/2015/07/29/force-windows-10-update#comment-2162803169)) My background download is in progress with 20% done. That's all, folks!
No, I won't be notified regarding the download. I learned this in the thread that I started: [Will I know when the Windows 10 files are being downloaded in the background or not?](http://answers.microsoft.com/en-us/windows/forum/windows_10-win_upgrade/will-i-know-when-the-windows-10-files-are-being/e93b2188-d2f4-4f02-8bb8-66f301c584f6?tm=1437913781590). The correct answer was posted by an MVP, [Andre Da Costa](http://answers.microsoft.com/en-us/profile/e5564792-9930-4912-828b-e206924b132a), and it said: > > No, it won't provide any progress when it's actually downloading. > > >
945,491
I'm looking for a way to bulk edit .jpeg files and retain their original names. Before you say, "Duh, they already have their original names.", I also want to be able to place that name on the .jpeg as a caption in a location of my choosing. The operative word is bulk. Doing it one file at a time is prohibitive unless there is no other way.
2015/07/26
[ "https://superuser.com/questions/945491", "https://superuser.com", "https://superuser.com/users/473554/" ]
After reserving a Windows 10 upgrade, when you click the same Windows-like icon in the taskbar a few hours later, you will see a window pop up with the message "Download - In Progress", and you can see the download progress by clicking on the button "View Download Progress". [![](https://i.stack.imgur.com/ScwK3.jpg)](https://i.stack.imgur.com/ScwK3.jpg) ([Image source](http://pocketnow.com/2015/07/29/force-windows-10-update#comment-2162803169)) My background download is in progress with 20% done. That's all, folks!
No, you won't get a notification, but if you go to the root of C: you should see (if hidden items are shown) a hidden folder called "$Windows.~BT", which shows that Windows 10 is being downloaded to your computer. [![Hidden folder](https://i.stack.imgur.com/ywsdz.png)](https://i.stack.imgur.com/ywsdz.png) You can also download it with this: <https://www.microsoft.com/en-us/software-download/windows10>
42,766
His grandparents think he shouldn't read a book suitable for a three year old or play with cuddly soft toys (even though those are only associated with his love of Pokémon). His grandparent also don't like anything even slightly related to Pokémon. My son doesn't play enough as it is—too obsessed with computer games, videos, making stuff or writing (not that he writes sentences). For me I am happy if he reads anything or plays with any toys, especially if he has fun. Is there any research to say you should get rid of younger child toys?
2022/07/08
[ "https://parenting.stackexchange.com/questions/42766", "https://parenting.stackexchange.com", "https://parenting.stackexchange.com/users/11975/" ]
Cuddly soft toys are most certainly age appropriate, and in fact most of the Pokémon plush toys are at least age 4+ if not higher. They're great for creative play, and are especially useful if there's nobody else to play with (as the plush can take the role of the other person - think Calvin and Hobbes). This kind of play is extremely valuable at this age (and continues to be valuable as they get older) as it lets them explore different things from their life and grow emotionally in a safe setting. It also helps develop [empathy and social skills](https://blog.frontiersin.org/2020/10/01/human-neuroscience-child-play-dolls-cognitive-social-benefits-children/). Books that are aimed at a younger child will not challenge him, but it's reasonable to separate "reading for fun" from "reading challenging books". My children, who are now pre-teens, still read picture books from time to time; they're both extremely high achieving readers (3+ grade levels ahead), so it hasn't stunted them meaningfully. Instead, it's a fun thing for them to do. They *also* read more challenging books, but that's separate from reading fun things that are nostalgic or just fun. They also may [pick up more complicated concepts](https://childrenslibrarylady.com/using-picture-books-with-older-children/) than they did as younger children - think about how rereading a book as an adult works; this won't apply to every picture book, but some are quite complex and have value for both younger and older children.
I mean I'm 24 and I still play with LEGOs, model trains, action figures, basketballs, water guns, water balloons and such, I could go on. Ridiculous? Maybe, but it's also what helps me kick back after long hours of working, studying what have you. Oh and I've still held on to a number of my old plushes as well (ok granted they're more room decor now) because they still have deep sentimental meanings. Your son is 7, not 37, he should absolutely partake in anything that's constructive for him and tell his grandparents to take a hike if they have such a problem with it. At least he isn't out skipping school or bullying other kids.
72,494
I cannot get the shutter of the Schneider Kreuznach 135/3.5 Xenotar to open. I tried to explain what I know in this picture: ![image](https://i.imgur.com/6hnafsOh.jpg) Does anyone have any idea? I'm afraid I don't have a manual for the lens and haven't found one online.
2016/01/03
[ "https://photo.stackexchange.com/questions/72494", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/47612/" ]
With large format, unless you have an oddball focal plane shutter (I've seen some - more often in a press camera, which you *might* have here), the camera that you have has a *leaf* shutter - the aperture and the shutter are between the front set and rear set of the elements. I'm personally most familiar with the Copal brand shutters: [![enter image description here](https://i.stack.imgur.com/retL0.jpg)](https://i.stack.imgur.com/retL0.jpg) In this image you can see the f/stop select (silver triangle), which is part of the lens, the shutter select (red line on ring), the shutter release (top of the image), and the lock open (bottom black triangle). Another view of a Copal shutter (that first one was a #0, this is a #1): [![enter image description here](https://i.stack.imgur.com/3yO47.jpg)](https://i.stack.imgur.com/3yO47.jpg) To shoot a photo, one would typically: 1. open up wide (f/5.6 for this shutter) 2. then push that black triangle to the lock open position 3. do your focusing and standard adjustments 4. then stop down to the f/stop you want, to check the depth of field 5. release the lock open position 6. insert the film holder 7. select the shutter speed 8. cock the shutter 9. release the shutter (cable release) Now, let's look at another view of a variation on this lens: [![enter image description here](https://i.stack.imgur.com/PDyYY.jpg)](https://i.stack.imgur.com/PDyYY.jpg) You can see the similar parts - the shutter speed select, the aperture select at the bottom, the shutter cocking. However, this shutter doesn't have the lock open that the Copal shutters do. What you will need to do there is set the shutter to either 'M', 'B', or 'T', depending on the labeling of the shutter, so that it is open for a long period of time.
It is possible that these are different things - you will note that 'T' and 'B' are both on the Copal shutter. One is a bulb (as long as you hold the [bulb](https://i.stack.imgur.com/Ea2RR.jpg) tight it will stay open); the 'T' is for a timed release, where you press it once to open and press it again to close. You *will* want a cable release. You may find it helpful to watch [this video](https://www.youtube.com/watch?v=eG0ODIugSFI) of a Copal No. 1 shutter being twisted about and worked with, or a 6000 FPS [video](https://www.youtube.com/watch?v=co7J2_DkdL4) of the shutter opening and closing - note that the shutter and the aperture are different things, though *really* close together. That green knob that you didn't mark... it *may* be a lock open. Though I'm not familiar with that shutter.
Finally. "user47638" was basically right: ![image](https://i.imgur.com/UyBfVhH.jpg)
72,494
I cannot get the shutter of the Schneider Kreuznach 135/3.5 Xenotar to open. I tried to explain what I know in this picture: ![image](https://i.imgur.com/6hnafsOh.jpg) Does anyone have any idea? I'm afraid I don't have a manual for the lens and haven't found one online.
2016/01/03
[ "https://photo.stackexchange.com/questions/72494", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/47612/" ]
Your shutter is a Linhof-branded Synchro Compur. (Only the bezel part is Linhof.) It appears from the picture you posted to have a press focus (the little square tab/button thing near the shutter speed indicator); that button will open the shutter for focus after the shutter has been cocked, and will close if you re-cock the shutter. (Note: not all Compurs have the press open function.)
Finally. "user47638" was basically right: ![image](https://i.imgur.com/UyBfVhH.jpg)
94,729
Sorry, I'm kinda clueless with this stuff... I heard that if you set your Wi-Fi password to more than 10 or so characters (and make it complicated), then your WPA/WPA2-PSK would be safe. Is that true? If not, how do I make it secure so that I don't have to revert to wired (= safe?)? I just need it to be secure for one day at a time. (I mean, I can keep resetting the password if that helps.)
2015/07/23
[ "https://security.stackexchange.com/questions/94729", "https://security.stackexchange.com", "https://security.stackexchange.com/users/81652/" ]
If a password is strong, then it is strong. WPA2 uses [PBKDF2](https://en.wikipedia.org/wiki/PBKDF2) with 4096 iterations and the network SSID as salt to turn the password into the shared secret. This is not bad. This means that a "strong" password will be, in this case, a password with 68 bits of entropy or more: the 4096 iterations add 12 bits to the brute force effort, and I use the traditional "80 bits" as the threshold of attacker's power. Note that 2^80 is a lot, and an attacker who is interested in obtaining free Internet access through your WiFi is unlikely to devote that much computing power to the effort. A weaker password in the 40-bit entropy range will already be strong enough to deter attackers. This [famous question](https://security.stackexchange.com/questions/6095/xkcd-936-short-complex-password-or-long-dictionary-passphrase) discusses password generation and entropy. A password with 40 bits of entropy can still be compatible with a human brain. Password derivation aside, there is no known weakness in WPA2, so you can say that with a strong password, WPA2/PSK is safe... *within its functionality*. Remember the following points: * WPA2 is about keeping outsiders outside. It prevents external attackers from joining the network, i.e. looking at existing traffic and inserting their own. However, WPA2 does not protect connected clients from each other. * Even without piercing the crypto layer, attackers will still be able to do some [traffic analysis](https://en.wikipedia.org/wiki/Traffic_analysis): they can track the client machines (their MAC addresses are visible) and get some idea of what they say based on the timing and size of each frame. * Attackers who just want to achieve wanton disruption can jam the traffic by simply emitting stronger radio waves at the same frequency; and they can do that *remotely*. To do the same with a wired network requires physical access to the wires.
For instance, your neighbour can jam your WiFi without leaving his house, while breaking your wire network would require entering your home.
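The PBKDF2 derivation described above can be checked directly with Python's standard library; a minimal sketch (the passphrase and SSID below are made-up examples). WPA2-PSK derives the 256-bit pre-shared key with PBKDF2-HMAC-SHA1, using the passphrase as input, the SSID as salt, and 4096 iterations:

```python
import hashlib

# WPA2-PSK: PMK = PBKDF2-HMAC-SHA1(passphrase, salt=SSID, 4096 iterations, 32 bytes)
passphrase = b"correct horse battery staple"  # example passphrase
ssid = b"ExampleNetwork"                      # example SSID; acts as the salt

pmk = hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, dklen=32)
print(pmk.hex())  # 64 hex digits = 256-bit pre-shared key
```

Because the SSID is the salt, the same passphrase yields a different key on a network with a different name, which blunts precomputed-table attacks against common passphrases.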
Choose a cryptographically strong PRNG to generate a 12-character password. That will be sufficient, because it gives you about 71 bits of entropy, which is secure against any attack someone might realistically mount against your password. This way you do not need to change your password every day, and the password is not only secure but also practical in case you type it on a smartphone.
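A sketch of what such a generator could look like in Python, using the standard library's `secrets` module as the cryptographically strong source; 12 characters drawn uniformly from 62 letters and digits give 12 × log2(62), about 71 bits of entropy:

```python
import math
import secrets
import string

# 62-symbol alphabet: a-z, A-Z, 0-9
alphabet = string.ascii_letters + string.digits

# secrets.choice draws from the OS CSPRNG, unlike random.choice
password = "".join(secrets.choice(alphabet) for _ in range(12))

entropy_bits = 12 * math.log2(len(alphabet))
print(password)              # e.g. a string like 'aZ3kQ9rTm2Lx'
print(entropy_bits)          # ~71 bits
```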
40,404
I dropped my Nikon D3200. Camera still turns on, but the lens wont attach to the body. It looks like something at the bottom of the lens might be broken; not the actual lens, but a small black circle type of thing. Anybody has any idea how much something like that will cost to repair?
2013/06/25
[ "https://photo.stackexchange.com/questions/40404", "https://photo.stackexchange.com", "https://photo.stackexchange.com/users/20687/" ]
Your best bet is to contact a Nikon service center about it and ask them for an estimate. It's hard to tell what might actually be wrong without a more detailed explanation and possibly photos. Even the Nikon service center might not be able to estimate it without actually having the camera in hand. There is normally a ring that doesn't go all the way around the lens on Nikon lenses, so what you think is the problem might not even be the problem.
First, ensure that your camera works. Shoot a few pictures without the lens and confirm that you get pictures. A better idea would be to borrow a lens from a friend or go to a local retailer. If you figure out that your lens is broken but your camera is okay, you can [buy a new lens for 200 or less](http://www.bhphotovideo.com/c/search?atclk=Lens%20Mount_Nikon&ci=274&N=4288584247%204261208195%204108103566). This means your max budget for repairs should not exceed 200. Then follow @AJ Henderson's answer and decide which way you want to go based on the cost.
184,511
I wouldn't call this scientifically accurate, but would like some opinion on sensibility of the idea. As part of a story about two time travelling engineers, they discover that their own universe actually updates itself whenever a frontier in technology is crossed. So basically, the invention of time travel itself caused certain updates to support this new paradigm by offering system-wide resolutions to various time-travel induced paradoxes. Now, I was wondering which direction to take when trying to offer an explanation - since this one does drive curiosities. I've got two ideas to branch out: 1. The universe is a living organism with a nucleus, mitochondria and RNA, and it is actually responding in a way that gene mutations occur. 2. The universe is an AI driven program, and is continuously updating a logical universe. Which one of these would be more likely in case of a reactive universe? I'm trying to figure out a way to *know* this to be true from the character's perspective.
2020/08/28
[ "https://worldbuilding.stackexchange.com/questions/184511", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/79073/" ]
**AI and simulations** If you're set on your 2 branches, the AI one makes the most sense. The Singularity is the idea of people uploading their consciousness into a computer. In some versions they retain their individuality. Alternatively, it can be just like the Matrix: a fully simulated reality created to live out your lives. Lastly, there are ideas that the universe is a simulation on the edge of a supermassive black hole. All of these have one thing in common: they are existing theories of how all of this could be a simulation. Simulations can be controlled and changed by AI. A living organism, however, is very difficult to explain. It can't directly be observed in our universe, so it could be "living" throughout a multitude of dimensions. When time travelling, it would change the universe. However, this is much more hand-wavy than the actual existing theories of an AI simulating every experience.
**Observing changes things** I'm proposing a 3rd option. The [double slit](https://en.wikipedia.org/wiki/Double-slit_experiment) experiment shows us that observing things can change their properties. How and why lies in the quantum realm, on which I've read a lot but can't pretend to truly understand. As far as I read it, there is a form of uncertainty everywhere. When observing, this uncertainty collapses and resolves into a certainty. This whole experiment shows that the small difference of just observing has 2 wildly different outcomes. In one outcome it's like an energy wave, in a sound wave kind of way. In the other, a particle, like a ball thrown at something. Possibly observing can go much further. The moment you can travel through time, different ways of observing are available. The universe doesn't change, but the act of time travel does change the outcome, essentially "updating" the universe in ways that happen to support your version of time travel. As an aside, I once heard a theory that the moment we research and discover more about the universe, it gets more complex. Another way to perceive observing changing the world around you.
184,511
I wouldn't call this scientifically accurate, but would like some opinion on sensibility of the idea. As part of a story about two time travelling engineers, they discover that their own universe actually updates itself whenever a frontier in technology is crossed. So basically, the invention of time travel itself caused certain updates to support this new paradigm by offering system-wide resolutions to various time-travel induced paradoxes. Now, I was wondering which direction to take when trying to offer an explanation - since this one does drive curiosities. I've got two ideas to branch out: 1. The universe is a living organism with a nucleus, mitochondria and RNA, and it is actually responding in a way that gene mutations occur. 2. The universe is an AI driven program, and is continuously updating a logical universe. Which one of these would be more likely in case of a reactive universe? I'm trying to figure out a way to *know* this to be true from the character's perspective.
2020/08/28
[ "https://worldbuilding.stackexchange.com/questions/184511", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/79073/" ]
A living vessel re-configuring itself based on the behaviour of its occupants is something that occurs in fiction all the time. Some examples include Moya from Farscape (the ship), the Wraith Hive ships from Stargate Atlantis, and "The Cloud" from Star Trek Voyager. Expanding these ship- or nebula-sized organisms up to universe size is not beyond my ability to suspend my disbelief, but I wouldn't suggest using RNA / mitochondria / human definitions of life; a living being the size of a universe has issues with speed-of-light propagation of information, and by handwaving away the internals of how that life works you'll be able to suspend disbelief better. My biggest issue with a living universe updating in response to what I've done is "Why do you care about little old me?" People have similar questions with religious topics (why does God care about my actions, I'm insignificant), and perhaps some "The universe loves you and wants the best for you" might help sell why the universe cares enough to update in response to someone building a new thing in their garage. It could also be a population of very large beings, rather than one singular being. If the universe is a simulation with AI running the show, then this would explain the updates. You'd need to sell that we're in a simulation - perhaps temporary observed changes in the Planck length as the simulation struggles with complex behaviour nearby, lowering the resolution of the simulation? There are two other approaches I'd suggest: * Multiverse theory: by the act of travelling back in time they move to a new universe with different properties, so it appears that the universe is reconfiguring. * Quantum trickery: I'd summarise quantum mechanics as "the more you examine what's going on, the more bizarre the universe gets."
**Observing changes things** I'm proposing a 3rd option. The [double slit](https://en.wikipedia.org/wiki/Double-slit_experiment) experiment shows us that observing things can change their properties. How and why lies in the quantum realm, on which I've read a lot but can't pretend to truly understand. As far as I read it, there is a form of uncertainty everywhere. When observing, this uncertainty collapses and resolves into a certainty. This whole experiment shows that the small difference of just observing has 2 wildly different outcomes. In one outcome it's like an energy wave, in a sound-wave kind of way. In the other, a particle, like a ball thrown at something. Possibly observing can go much further. The moment you can travel through time, different ways of observing are available. The universe doesn't change, but the act of time travel does change the outcome. Essentially "updating" the universe in a way that happens to support your version of time travel. As an aside, I once heard a theory that the moment we research and discover more about the universe, it gets more complex. Another way to perceive observing changing the world around you.
184,511
I wouldn't call this scientifically accurate, but would like some opinions on the sensibility of the idea. As part of a story about two time travelling engineers, they discover that their own universe actually updates itself whenever a frontier in technology is crossed. So basically, the invention of time travel itself caused certain updates to support this new paradigm by offering system-wide resolutions to various time-travel induced paradoxes. Now, I was wondering which direction to take when trying to offer an explanation - since this one does invite curiosity. I've got two ideas to branch out: 1. The universe is a living organism with a nucleus, mitochondria and RNA, and it is actually responding in a way that gene mutations occur. 2. The universe is an AI driven program, and is continuously updating a logical universe. Which one of these would be more likely in case of a reactive universe? I'm trying to figure out a way to *know* this to be true from the character's perspective.
2020/08/28
[ "https://worldbuilding.stackexchange.com/questions/184511", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/79073/" ]
A living vessel re-configuring itself based on the behaviour of its occupants is something that occurs in fiction all the time. Some examples include Moya from Farscape (the ship), the Wraith Hive ships from Stargate Atlantis, and "The Cloud" from Star Trek Voyager. Expanding these "Ship or Nebula" sized organisms up to "Universe" size is not beyond my ability to suspend my disbelief, but I wouldn't suggest using RNA / mitochondria / human definitions of life: a living being the size of a universe has issues with speed-of-light propagation of information, and by handwaving away the internals of how that life works you'll be able to suspend disbelief better. My biggest issue with a living universe updating in response to what I've done is "Why do you care about little old me?" People have similar questions with religion topics (why does God care about my actions, I'm insignificant), and perhaps some "The universe loves you and wants the best for you" might help sell why the universe cares about what happens enough to update in response to them building a new thing in their garage. It could also be a population of very large beings, rather than one singular being. If the universe is a simulation with AI running the show, then this would explain the updates. You'd need to sell that we're in a simulation - perhaps temporary observed changes in the Planck distance as the simulation struggles with complex behaviour nearby and lowers the resolution of the simulation? There are two other approaches I'd suggest: * Multi-verse theory: by the act of travelling back in time they move to a new universe, with different properties, so it appears that the universe is reconfiguring. * Quantum trickery: I'd summarise quantum mechanics as "the more you examine what's going on, the more bizarre the universe gets."
**AI and simulations** If you're set on your 2 branches, the AI one makes most sense. The Singularity is an idea that people are uploading their consciousness into a computer. In some ideas they retain their individuality. Alternatively it can be just like the Matrix: a fully simulated reality created to live out your lives. Lastly there are ideas that the universe is a simulation on the edge of a supermassive black hole. All of these have one thing in common: they are existing theories of how all of this could be a simulation. Simulations can be controlled and changed by AI. A living organism, however, is very difficult to explain. It can't directly be observed in our universe, so it could be "living" throughout a multitude of dimensions. When time travelling, it would change the universe. However, this is much more hand-wavy than the actual existing theories of an AI simulating every experience.
156,604
Assuming space is expanding (and that seems plausible), photons from a galaxy that have been traveling for 12 billion years must have been traveling on a curve to get to us due to the fact that where we see the light today is not where the object is now. And yet there is no distortion in the light from that galaxy. Why?
2015/01/04
[ "https://physics.stackexchange.com/questions/156604", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/69032/" ]
You and your buddy are standing on some kind of sheet of stretchy material. Someone is slowly stretching the material in all directions at once, which causes you and your buddy to slowly get further and further apart. Your buddy rolls a ball towards you. When you rolled the ball, you and your buddy were 20 feet apart. As the ball rolls, the sheet is, as described, expanding. By the time the ball reaches you, you and your buddy are 25 feet apart. Analogy key: * you = telescopes observing distant stars * buddy = distant star * ball = photon * stretchy sheet = space * person stretching the sheet = dark energy?? TBD. As you can see in this analogy, no curved paths or distortion are necessary.
Generally it does not travel on a curve. Just the space through which it travels has expanded. There is no need for curved trajectory to explain the expansion of space.
86,692
I'd like to enable folder redirection on my stand-alone Windows 7 Pro box. Is that possible? The Group Policy Editor shown in the [Managing Roaming User Data Deployment Guide](http://technet.microsoft.com/en-us/library/cc766489%28WS.10%29.aspx) has a *Folder Redirection* node that I don't see: ![alt text](https://i.stack.imgur.com/X7vXF.png)
2009/12/22
[ "https://superuser.com/questions/86692", "https://superuser.com", "https://superuser.com/users/7226/" ]
> > Is that possible? > > > Short answer, No. You can't "Roam" on a single box.. you need a domain, dude. The "Redirection" is to another server, so your profile gets copied down to your box on login, really damn handy if your machine craps out, 'cause then you just log into another box, download your profile, and you're working again! [Related](https://superuser.com/questions/80622/folder-redirection-doesnt-work-in-windows-7-or-vista)
While you can't set Folder Redirection for a single box via the Group Policy editor (hence "Group"), you can set the folder locations from the registry, in the [HKEY\_CURRENT\_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders] key. For a single box, I imagine you just want Folder Redirection for its backup or extensibility purposes.
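As a hedged illustration of the registry approach above, a .reg file redirecting the Documents folder might look like the following. The target path is purely an illustrative assumption; on typical installs the value name for Documents under this key is `Personal`, but check your own registry before editing:

```reg
Windows Registry Editor Version 5.00

; "Personal" is the value name Windows commonly uses for the Documents folder;
; the destination path below is just an example.
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders]
"Personal"="D:\\Redirected\\Documents"
```

Note that some sources suggest the parallel `User Shell Folders` key is the one Windows actually reads at logon, with `Shell Folders` acting as a cache, so you may need to change both.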
469,314
*Plesionyms* are synonymous words which have slight differences in meaning. What are the examples of it? I found: * Fog v Mist * Fearless v Brave When and why are they are used? What are the aspects which differentiate plesionymic synonyms from cognitive synonyms?
2018/10/20
[ "https://english.stackexchange.com/questions/469314", "https://english.stackexchange.com", "https://english.stackexchange.com/users/320986/" ]
When words have a similar but slightly different meaning, there will be contexts where one is more appropriate than another. In other contexts, the difference may not be relevant so either is acceptable. In your examples: A brave person and a fearless person may both perform the same feats that others may be too fearful to perform. However, a fearless person would not be afraid to perform the feat whereas a brave person may perform the feat despite being afraid. From an outside observer's perspective, if the person did not appear to be afraid, then the observer might use brave or fearless to describe the person. If, however, it was clear that the person was afraid but did it anyway, then only brave would make sense.
It depends on the context. In a nearly polar land with frequent precipitation there will be many words used for **snow** and they aren't all synonyms: they describe different types of snow which are relevant to the environment. But in an equatorial land where it never snows, the words **snow** and **sleet** and **hail** might be considered to be synonyms.
469,314
*Plesionyms* are synonymous words which have slight differences in meaning. What are the examples of it? I found: * Fog v Mist * Fearless v Brave When and why are they are used? What are the aspects which differentiate plesionymic synonyms from cognitive synonyms?
2018/10/20
[ "https://english.stackexchange.com/questions/469314", "https://english.stackexchange.com", "https://english.stackexchange.com/users/320986/" ]
Why would you choose one word over another, when the two might be synonyms? The best explanation was provided by [Isaac Asimov](https://en.wikipedia.org/wiki/Isaac_Asimov): > > R. Daneel said, "I do not understand the distinction you are making, > Partner Elijah. Since 'murder' and 'homicide' are both used to > represent the violent ending of the life of a human being, the two > words must be interchangeable. Where, then, is the distinction?" > > > "Of the two words, one screamed out will more effectively chill the > blood of a human being than the other will, Daneel." > > > "Why is that?" > > > "Connotations and associations; the subtle effect, not of dictionary > meaning, but of years of usage; the nature of the sentences and > conditions and events in which one has experienced the use of one word > as compared with that of the other." > > > "There is nothing of this in my programming," said Daneel [...]. > > > *(from ["The Robots of Dawn"](https://en.wikipedia.org/wiki/The_Robots_of_Dawn))* > > >
It depends on the context. In a nearly polar land with frequent precipitation there will be many words used for **snow** and they aren't all synonyms: they describe different types of snow which are relevant to the environment. But in an equatorial land where it never snows, the words **snow** and **sleet** and **hail** might be considered to be synonyms.
469,314
*Plesionyms* are synonymous words which have slight differences in meaning. What are the examples of it? I found: * Fog v Mist * Fearless v Brave When and why are they are used? What are the aspects which differentiate plesionymic synonyms from cognitive synonyms?
2018/10/20
[ "https://english.stackexchange.com/questions/469314", "https://english.stackexchange.com", "https://english.stackexchange.com/users/320986/" ]
Why would you choose one word over another, when the two might be synonyms? The best explanation was provided by [Isaac Asimov](https://en.wikipedia.org/wiki/Isaac_Asimov): > > R. Daneel said, "I do not understand the distinction you are making, > Partner Elijah. Since 'murder' and 'homicide' are both used to > represent the violent ending of the life of a human being, the two > words must be interchangeable. Where, then, is the distinction?" > > > "Of the two words, one screamed out will more effectively chill the > blood of a human being than the other will, Daneel." > > > "Why is that?" > > > "Connotations and associations; the subtle effect, not of dictionary > meaning, but of years of usage; the nature of the sentences and > conditions and events in which one has experienced the use of one word > as compared with that of the other." > > > "There is nothing of this in my programming," said Daneel [...]. > > > *(from ["The Robots of Dawn"](https://en.wikipedia.org/wiki/The_Robots_of_Dawn))* > > >
When words have a similar but slightly different meaning, there will be contexts where one is more appropriate than another. In other contexts, the difference may not be relevant so either is acceptable. In your examples: A brave person and a fearless person may both perform the same feats that others may be too fearful to perform. However, a fearless person would not be afraid to perform the feat whereas a brave person may perform the feat despite being afraid. From an outside observer's perspective, if the person did not appear to be afraid, then the observer might use brave or fearless to describe the person. If, however, it was clear that the person was afraid but did it anyway, then only brave would make sense.
783,711
I wanted to know if there's an equation that Windows uses to determine how long it takes to perform an action on a file, say delete, to copy, to erase, or to install. ![enter image description here](https://i.stack.imgur.com/bOsth.png) For example, when I'm deleting a file, and Windows says "Time remaining: 18 seconds" how is it calculating this number, and using what?
2014/07/16
[ "https://superuser.com/questions/783711", "https://superuser.com", "https://superuser.com/users/168853/" ]
Have you noticed that usually it doesn't give you any estimates in the first seconds? That's because in the first seconds, it just does the operation it has to do. Then, after a (small) while, it knows *how much it already copied/deleted/etc*, and *how long it took*. That gives you the **average speed** of the operation. Then, divide the remaining bytes by the speed, and you have the time it will take to complete the operation. This is elementary school maths. If you want to travel 360 km, and at the end of the first minute you travelled 1 km, how long will it take you to get to your destination? Well, the speed is 1 km/minute. That's 60 km/h. 360 km divided by 60 km/h gives you 6 hours (or 360 km / 1 km/minute = 360 minutes = 6 hours). Since you've already travelled for one minute, the estimated time left is 5 hours and 59 minutes. Substitute "travel" with "copy" and "km" with "bytes" and that's your question. Different systems have different ways of estimating time. You can take the last minute, and the estimates may vary wildly, or you can take the full time, and if the speed actually changes permanently, your estimates may be far off reality. What I described is the simplest method.
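The averaging method described above can be sketched in a few lines of Python (the function name and the example numbers are just illustrative):

```python
def estimate_remaining(done, total, elapsed):
    """Naive remaining-time estimate: compute the average speed so far,
    then divide the remaining work by that speed."""
    if done <= 0 or elapsed <= 0:
        return None  # too early to estimate, as in the first seconds of a copy
    speed = done / elapsed            # units of work per second
    return (total - done) / speed     # seconds left

# The travel example from the answer: 1 km covered in the first minute of 360 km.
# 359 km remain at 1 km/minute, i.e. 359 minutes = 5 hours 59 minutes.
print(estimate_remaining(1, 360, 60))  # ~21540 seconds = 359 minutes
```

Swap kilometres for bytes and this is the simplest ETA a file manager could show; smarter variants just change which window of time feeds the `speed` term.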
Answering with a simple cross-multiplication is awfully condescending, I think; I'm sure he already knew that - it's how we constantly guesstimate things in our heads too. The problem with file-operation progress bars is that they're only correct for uniform data. If you copy 100 files that all have the same size and your drive is doing nothing else, the estimated progress will be spot on. But what if the first 99 files were small txt files and the last one is a large video file? The progress will be WAY off. This problem gets worse when you're not handling files in one folder, but in multiple subfolders. Say you have 5 subfolders and you want to delete them (size doesn't matter much in this case): the first 4 folders contain fewer than 10 files each, so by the time the operation comes to the 5th folder it thinks it's about 80% done, and *boom* - the 5th folder contains 5000 files and your progress jumps back to 1%. WinXP tried to get around this by counting the number of files beforehand, which meant that when the folder wasn't indexed in Windows, depending on the number of files, XP didn't really start the operation for the first 20 seconds (the time it took to count), which made everybody furious. So while I also don't have special knowledge of how Windows does it (but what else is there apart from counting files and bytes), I hope I could illustrate why it's flawed and why it never will be perfect. The best you could do would be to not rely solely on file count OR byte count, but build an average of the two. Or if you wanted to go extra crazy, the OS could keep a database of how long these operations took in the past on your machine and factor that into the equation. Final thought: if someone thought of a filesystem that let the OS know what size every folder has without calculating it first, you would at least get a correct progress estimation when deleting whole folders and not just parts of them.
201,426
Digitalis is a gray mesh that carries Creeper really fast. You can't destroy it, and in some maps, you can't avoid building on it. Whatever you build on it is prone to getting destroyed quickly whenever your defenses fall a bit behind, because then the Creeper rushes across the Digitalis quickly and destroys your Relays, and it's all downhill after that. I've noticed that, after I destroy an Emitter, sometimes *all of* the Creeper that's in the Digitalis will "fall out" onto the ground and spread around like the viscous liquid it is. (Sometimes it's good to not have Creeper in the Digitalis, and sometimes it's a right pain to have it splashing around.) But I've also seen this happen a bit randomly, when I'm shooting Digitalis. So exactly what circumstances does it take to make the Creeper fall out of the Digitalis?
2015/01/11
[ "https://gaming.stackexchange.com/questions/201426", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/9522/" ]
The scenario you are describing occurs when the charged Digitalis is isolated from its source. The source of each group of Digitalis must be an Emitter that is located ON the Digitalis itself - an Emitter near a section of Digitalis does not charge it. Isolating Digitalis means one of two things, either destroying its source, or cutting off a group of charged Digitalis from the source - by destroying some Digitalis in the middle of a charged Digitalis pathway. It is useful to keep this second method in mind; I have often accidentally isolated a group of charged Digitalis within my base (because I thought it was of no further threat) to my impending doom and/or destruction. I have just double-checked all of this on a map I created for the purpose.
There are two kinds of Digitalis. I can't exactly recall the terms, so I will refer to them as "destroyed" and "constructed" Digitalis. Most/all of the Digitalis on the map starts in the destroyed state when a new map is started, although a few maps don't follow that. Destroyed Digitalis will absorb a little Creeper when they come in contact, destroying the Creeper and "constructing" the Digitalis. Constructed Digitalis works like you say - it picks up and transports Creeper, making it a great way for Creeper to attack you up a hill or across an empty space. However, attacks by your weapons can destroy the Digitalis again, reverting it to its inactive state. This will stop it from carrying Creeper, so damaging the Digitalis while there's a lot of Creeper being carried by it will make it drop and spread normally. Given a little time with the Creeper, it'll absorb a bit of Creeper again and repair itself once more.
132,372
In Season 6 episode 9 > > Jon Snow and his army is saved at the last minute by an army recruited by Sansa. > > > What house was this? Were they previously asked to help?
2016/06/20
[ "https://scifi.stackexchange.com/questions/132372", "https://scifi.stackexchange.com", "https://scifi.stackexchange.com/users/44635/" ]
[House Arryn](http://gameofthrones.wikia.com/wiki/House_Arryn), led by Littlefinger as shown in the below screen shot. [![](https://i.stack.imgur.com/pZ35J.jpg)](https://i.stack.imgur.com/pZ35J.jpg) Sansa requested their help in Episode Eight of Season Six. [Here's what the letter written by Sansa said](http://nerdist.com/reddit-sleuths-confirm-sansas-letter-recipient-on-game-of-thrones/): > > “[…] to protect me. Now you have a chance to fulfill your promise. […] **Knights of the Vale** are under your command. Ride north for Winterfell. Lend us your aid and I shall see to it that you are well rewarded.” > > > Here's House Arryn's sigil, > > A white falcon volant and crescent moon on a blue field. > > > which can be seen when the third army rides in: [![](https://i.stack.imgur.com/8Tqik.jpg)](https://i.stack.imgur.com/8Tqik.jpg)
The banners (which depict a white falcon on a blue field) are those of [House Arryn](http://gameofthrones.wikia.com/wiki/House_Arryn). Robin Arryn is the head of this house but he's a child, so Petyr Baelish acts as the Lord Protector. This makes sense as Sansa is seen next to Littlefinger as the army arrives. Also worth noting that Robin and Sansa are cousins, through their mothers Lyssa and Catelyn, respectively.
8,234,837
I have been doing a lot of reading on this topic and most folks seem to agree that having Excel is required to use the COM Interop libraries. However they are never specific as to where that should be installed. Does it need to be installed on the machine I am developing on or does it need to be on every machine that I deploy to? Thanks in advance, David Edit: I should mention that this is desktop development/deployment targeted for all Windows machines
2011/11/22
[ "https://Stackoverflow.com/questions/8234837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1060772/" ]
Depends. If you go client/server and people access the app through their browser, you can get away with having it installed only on the server. If you go stand-alone, each computer that runs the program needs it. You'll definitely want it on the development computer as well.
You can also look into using Open XML (http://openxmldeveloper.org/) and build Office documents without Office applications. I do believe you can only build Office 2007 or 2010 document formats (e.g. .xlsx, etc.).
8,234,837
I have been doing a lot of reading on this topic and most folks seem to agree that having Excel is required to use the COM Interop libraries. However they are never specific as to where that should be installed. Does it need to be installed on the machine I am developing on or does it need to be on every machine that I deploy to? Thanks in advance, David Edit: I should mention that this is desktop development/deployment targeted for all Windows machines
2011/11/22
[ "https://Stackoverflow.com/questions/8234837", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1060772/" ]
When you use Excel Interop, it actually opens Excel in the background (you will see Excel in the Task Manager) and makes changes very similar to a user making them directly in Excel. So it needs to be installed on the computer that the application runs on, and set up (if needed). Make sure you clean up all the COM references to Excel, otherwise the references won't be released, and Excel will still be open in the background, even after you close it!
You can also look into using Open XML (http://openxmldeveloper.org/) and build Office documents without Office applications. I do believe you can only build Office 2007 or 2010 document formats (e.g. .xlsx, etc.).
12,819
All of the ones I can think of are specific products that have come to represent their kind. This is usually either because it is the first of its kind, as in a Xerox machine (the first office photocopier), or it arises from popularity, as in Sharpie or something like "Google that" (though I'd say that's a bit informal/debatable). Other examples I can think of off the top of my head are: * Kleenex * Post-it
2011/02/16
[ "https://english.stackexchange.com/questions/12819", "https://english.stackexchange.com", "https://english.stackexchange.com/users/5044/" ]
What you are looking for is called a [*genericized trademark*, *generic trademark*, or *proprietary eponym*](http://en.wikipedia.org/wiki/Genericized_trademark), and Wikipedia has a huge list: * [List of generic and genericized trademarks](http://en.wikipedia.org/wiki/List_of_generic_and_genericized_trademarks) It includes all the examples mentioned by chaos and yourself, and many more. See also this related question: * [What is a word/phrase for using a term for a popular special case instead of a generic term?](https://english.stackexchange.com/questions/7235/what-is-a-word-phrase-for-using-a-term-for-a-popular-special-case-instead-of-a-ge)
* iPod (I've seen many people use iPod to refer to any MP3 player) * Xerox * Zip-lock
12,819
All of the ones I can think of are specific products that have come to represent their kind. This is usually either because it is the first of its kind, as in a Xerox machine (the first office photocopier), or it arises from popularity, as in Sharpie or something like "Google that" (though I'd say that's a bit informal/debatable). Other examples I can think of off the top of my head are: * Kleenex * Post-it
2011/02/16
[ "https://english.stackexchange.com/questions/12819", "https://english.stackexchange.com", "https://english.stackexchange.com/users/5044/" ]
* iPod (I've seen many people use iPod to refer to any MP3 player) * Xerox * Zip-lock
Some of the words that I find in common use nowadays in their domain are as follows: > > 1. Blogging (Posting articles on web-logs) > 2. Magging (Shooting) > 3. Facebooking (Online on facebook) > > >
12,819
All of the ones I can think of are specific products that have come to represent their kind. This is usually either because it is the first of its kind, as in a Xerox machine (the first office photocopier), or it arises from popularity, as in Sharpie or something like "Google that" (though I'd say that's a bit informal/debatable). Other examples I can think of off the top of my head are: * Kleenex * Post-it
2011/02/16
[ "https://english.stackexchange.com/questions/12819", "https://english.stackexchange.com", "https://english.stackexchange.com/users/5044/" ]
* Aqualung * Aspirin * Astroturf * Band-aid * Bubble wrap * Butterscotch * Cellophane * Chapstick * Coke (only in some regions) * Crock pot * Cuisinart * Dumpster * Dry ice * Escalator * Frisbee * Jeep * Jello * Jetski * Hacky sack * Heroin * Hoover (mainly in the UK) * Kerosene * Laundromat * Linoleum * Muzak * Q-tip * Tarmac * Taser * Thermos * Trampoline * Velcro * Walkman * Yo-yo * Zipper
* iPod (I've seen many people use iPod to refer to any MP3 player) * Xerox * Zip-lock
12,819
All of the ones I can think of are specific products that have come to represent their kind. This is usually either because it is the first of its kind, as in a Xerox machine (the first office photocopier), or it arises from popularity, as in Sharpie or something like "Google that" (though I'd say that's a bit informal/debatable). Other examples I can think of off the top of my head are: * Kleenex * Post-it
2011/02/16
[ "https://english.stackexchange.com/questions/12819", "https://english.stackexchange.com", "https://english.stackexchange.com/users/5044/" ]
* Duck tape * George Foreman grill * Palm Pilot * Scotch tape
Some of the words that I find in common use nowadays in their domain are as follows: > > 1. Blogging (Posting articles on web-logs) > 2. Magging (Shooting) > 3. Facebooking (Online on facebook) > > >
12,819
All of the ones I can think of are specific products that have come to represent their kind. This is usually either because it is the first of its kind, as in a Xerox machine (the first office photocopier), or it arises from popularity, as in Sharpie or something like "Google that" (though I'd say that's a bit informal/debatable). Other examples I can think of off the top of my head are: * Kleenex * Post-it
2011/02/16
[ "https://english.stackexchange.com/questions/12819", "https://english.stackexchange.com", "https://english.stackexchange.com/users/5044/" ]
* iPod (I've seen many people use iPod to refer to any MP3 player) * Xerox * Zip-lock
Left out *Jacuzzi* - the generic term is *hot tub*; and perhaps *fridge*, which according to [the Online Etymological Dictionary](http://www.etymonline.com/index.php?allowed_in_frame=0&search=fridge&searchmode=none) is: > > shortened and altered form of refrigerator, 1926, perhaps influenced > by Frigidaire (1919), a popular early brand name of the appliances. > > >
12,819
All of the ones I can think of are specific products that have come to represent their kind. This is usually either because it is the first of its kind, as in a Xerox machine (the first office photocopier), or it arises from popularity, as in Sharpie or something like "Google that" (though I'd say that's a bit informal/debatable). Other examples I can think of off the top of my head are: * Kleenex * Post-it
2011/02/16
[ "https://english.stackexchange.com/questions/12819", "https://english.stackexchange.com", "https://english.stackexchange.com/users/5044/" ]
* Duck tape * George Foreman grill * Palm Pilot * Scotch tape
Left out *Jacuzzi* - the generic term is *hot tub*; and perhaps *fridge*, which according to [the Online Etymological Dictionary](http://www.etymonline.com/index.php?allowed_in_frame=0&search=fridge&searchmode=none) is: > > shortened and altered form of refrigerator, 1926, perhaps influenced > by Frigidaire (1919), a popular early brand name of the appliances. > > >
12,819
All of the ones I can think of are specific products that have come to represent their kind. This is usually either because it is the first of its kind, as in a Xerox machine (the first office photocopier), or it arises from popularity, as in Sharpie or something like "Google that" (though I'd say that's a bit informal/debatable). Other examples I can think of off the top of my head are: * Kleenex * Post-it
2011/02/16
[ "https://english.stackexchange.com/questions/12819", "https://english.stackexchange.com", "https://english.stackexchange.com/users/5044/" ]
What you are looking for is called a [*genericized trademark*, *generic trademark*, or *proprietary eponym*](http://en.wikipedia.org/wiki/Genericized_trademark), and Wikipedia has a huge list: * [List of generic and genericized trademarks](http://en.wikipedia.org/wiki/List_of_generic_and_genericized_trademarks) It includes all the examples mentioned by chaos and yourself, and many more. See also this related question: * [What is a word/phrase for using a term for a popular special case instead of a generic term?](https://english.stackexchange.com/questions/7235/what-is-a-word-phrase-for-using-a-term-for-a-popular-special-case-instead-of-a-ge)
* Duck tape * George Foreman grill * Palm Pilot * Scotch tape
12,819
All of the ones I can think of are specific products that have come to represent their kind. This is usually either because it is the first of its kind, as in a Xerox machine (the first office photocopier), or it arises from popularity, as in Sharpie or something like "Google that" (though I'd say that's a bit informal/debatable). Other examples I can think of off the top of my head are: * Kleenex * Post-it
2011/02/16
[ "https://english.stackexchange.com/questions/12819", "https://english.stackexchange.com", "https://english.stackexchange.com/users/5044/" ]
* Aqualung * Aspirin * Astroturf * Band-aid * Bubble wrap * Butterscotch * Cellophane * Chapstick * Coke (only in some regions) * Crock pot * Cuisinart * Dumpster * Dry ice * Escalator * Frisbee * Jeep * Jello * Jetski * Hacky sack * Heroin * Hoover (mainly in the UK) * Kerosene * Laundromat * Linoleum * Muzak * Q-tip * Tarmac * Taser * Thermos * Trampoline * Velcro * Walkman * Yo-yo * Zipper
Left out *Jacuzzi* - the generic term is *hot tub*; and perhaps *fridge*, which according to [the Online Etymological Dictionary](http://www.etymonline.com/index.php?allowed_in_frame=0&search=fridge&searchmode=none) is: > > shortened and altered form of refrigerator, 1926, perhaps influenced > by Frigidaire (1919), a popular early brand name of the appliances. > > >
Some of the words that I find in common use nowadays in their domain are as follows: > > 1. Blogging (Posting articles on web-logs) > 2. Magging (Shooting) > 3. Facebooking (Online on facebook) > > >
2,581
I am looking for a qwerty, flat keyboard with the shortest [travel distance](https://mechanicalkeyboards.com/terms.php?t=Travel%20Distance) and lowest [actuation force](https://mechanicalkeyboards.com/terms.php?t=Actuation%20Force) possible. In other words, a keyboard that requires as little force as possible to type. The keyboard I'm looking for does not need to be mechanical, though. Price and shipping locations are not an issue. (and it should be a keyboard, not a speech recognition software or some BCI device) --- I know that the [Cherry MX Red has 45 centinewton (cN) of actuation force](https://deskthority.net/wiki/Cherry_MX) but I find the travel distance to be too long (2mm to actuation, 4mm to bottom). I could customize the keyboard to be 2mm to bottom but that would be quite tedious, all the more so as I need more than one. [![enter image description here](https://i.stack.imgur.com/4spjm.png)](https://i.stack.imgur.com/4spjm.png) All other Cherry MX have pretty much the same travel distance: [![enter image description here](https://i.stack.imgur.com/QkAhhm.png)](https://i.stack.imgur.com/QkAhhm.png) [![enter image description here](https://i.stack.imgur.com/s0UUN.gif)](https://i.stack.imgur.com/s0UUN.gif)[![enter image description here](https://i.stack.imgur.com/64yA3.gif)](https://i.stack.imgur.com/64yA3.gif)[![enter image description here](https://i.stack.imgur.com/tZWNe.gif)](https://i.stack.imgur.com/tZWNe.gif)[![enter image description here](https://i.stack.imgur.com/LzbnZ.gif)](https://i.stack.imgur.com/LzbnZ.gif)[![enter image description here](https://i.stack.imgur.com/aQSMz.gif)](https://i.stack.imgur.com/aQSMz.gif) (animated gifs from [keyboardco](http://www.keyboardco.com/blog/index.php/2012/12/an-introduction-to-cherry-mx-mechanical-switches/), can be easily tried with a [Cherry MX Switch Tester](http://rads.stackoverflow.com/amzn/click/B00AZQKCD4): [![enter image description 
here](https://i.stack.imgur.com/DhRfsm.jpg)](https://i.stack.imgur.com/DhRfsm.jpg) ) A few more comparisons from <http://www.pcper.com/reviews/General-Tech/Mechanical-Keyboard-Switches-Explained-and-Compared>: [![enter image description here](https://i.stack.imgur.com/jYKH8.png)](https://i.stack.imgur.com/jYKH8.png) --- I am also aware of the Topre 30g switch (e.g. present in the [Ducky 104UB Realforce Mechanical Keyboard](https://mechanicalkeyboards.com/shop/index.php?l=product_detail&p=1508) and [a few others](https://deskthority.net/wiki/Topre_Realforce#Realforce_Model_Reference)), but it has a 4 mm travel distance. [![enter image description here](https://i.stack.imgur.com/yXSY5.png)](https://i.stack.imgur.com/yXSY5.png) (they seem to mostly sell in Japan) Some specifications regarding the [Topre Realforce 104UB Silent](https://elitekeyboards.com/products.php?sub=topre_keyboards,rf104&pid=xf11ts): [![enter image description here](https://i.stack.imgur.com/s8IYI.png)](https://i.stack.imgur.com/s8IYI.png)[![enter image description here](https://i.stack.imgur.com/kN8uG.png)](https://i.stack.imgur.com/kN8uG.png) --- The Romer-G Mechanical Switches have a lower operating point (1.5 mm) and key travel (3 mm). They are present in the [Logitech G910 Orion Spark RGB Mechanical Gaming Keyboard (920-006385)](http://rads.stackoverflow.com/amzn/click/B00N3OELPU) In the following graph, the blue curve is a Romer-G Mechanical Switch, the white curve of is a typical mechanical switch: [![enter image description here](https://i.stack.imgur.com/hXI5P.jpg)](https://i.stack.imgur.com/hXI5P.jpg) --- The newly released (June 2016) Tesoro’s Low-Profile 'Agile' Switches are quite interesting: the full travel of the switches is 3.5 mm. Unfortunately, they haven't announced the actuation force nor the operation point. 
[video](https://www.youtube.com/watch?v=aIA8Yf9DE1w) [![enter image description here](https://i.stack.imgur.com/WXCJwm.png)](https://i.stack.imgur.com/WXCJwm.png) --- The [ErgoDox EZ](https://www.indiegogo.com/projects/ergodox-ez-an-incredible-mechanical-keyboard#/) (open source, released in 2015) has a 35 cN key switch but the travel distance is still 2 mm: [![enter image description here](https://i.stack.imgur.com/x5yYk.png)](https://i.stack.imgur.com/x5yYk.png) <https://ergodox-ez.com/> [![enter image description here](https://i.stack.imgur.com/SnK63.png)](https://i.stack.imgur.com/SnK63.png) --- Lastly, I have run across the [Comfort Keyboard Split Magic Keyboard USB1-2BLK](http://rads.stackoverflow.com/amzn/click/B005CRGMYO), as well as its non-compact version the [Soft Touch ErgoMagic Keyboard](http://www.fentek-ind.com/split_magic.htm#.Vx_8V_krK3c). It looks quite thin, but according to one [seller](http://www.fentek-ind.com/split_magic.htm) (Sam@fentek-ind.com, Monday-Friday 8a-4p Arizona MST, P: (928)639-0161 F: (928)639-0551) the key travel is 3.5 mm, and I don't want a split keyboard. (The seller also told me "we do not have a no split version of that keyboard and I do not have a plot of the key force and travel.") There is a compact version: [![enter image description here](https://i.stack.imgur.com/6TIQNm.png)](https://i.stack.imgur.com/6TIQNm.png) As well as a non-compact version: [![enter image description here](https://i.stack.imgur.com/inbt4m.png)](https://i.stack.imgur.com/inbt4m.png) [![enter image description here](https://i.stack.imgur.com/NvO9zm.png)](https://i.stack.imgur.com/NvO9zm.png)
2016/04/25
[ "https://hardwarerecs.stackexchange.com/questions/2581", "https://hardwarerecs.stackexchange.com", "https://hardwarerecs.stackexchange.com/users/40/" ]
<http://www.keyboardco.com/keyboard/cleankeys-glass-easy-clean-medical-wireless-keyboard.asp> Expensive, but: * Zero travel * Almost zero force * As a bonus, easy to clean (which is what it's marketed for) * Last but not least, it has the very important feature of actually being available for sale
The [SteelSeries Apex M800 Customizable Mechanical Gaming Keyboard](http://rads.stackoverflow.com/amzn/click/B00SB6DO6M) uses a [QS1 Switch](https://deskthority.net/wiki/SteelSeries_QS1), and a low-profile layout. Around 180 USD. * Switch Type: Mechanical * Switch Name: SteelSeries QS1 * Throw Depth: 3 mm * Actuation and Reset Depth: 1.5 mm * Actuation Force Needed: 45cN * 60 Million Click Lifetime [![enter image description here](https://i.stack.imgur.com/oayhC.png)](https://i.stack.imgur.com/oayhC.png) [![enter image description here](https://i.stack.imgur.com/UiKNA.png)](https://i.stack.imgur.com/UiKNA.png)
The [Arion Rapoo Black KX 5.8GHz Wireless Smart Backlight LED Built-in Lithium Battery Mechanical MX Keyboard - Black](http://rads.stackoverflow.com/amzn/click/B00UZZUFOQ) (85 USD) is mechanical, in addition to being flat (2mm travel distance, 50g actuation force). And it has no inclination, unlike too many other mechanical keyboards. Full specifications: * Key Structure: Yellow mechanical key switches * Lifecycle: 60 Million Key-presses * Key press force: 50 ± 10g * Key press whole travel: 2.0mm ± 0.6mm Unfortunately, they use their own switches, which have a higher actuation force than the Cherry MX Reds. (I have [read](http://techgage.com/news/breaking-the-wire-kx-wireless-mechanical-keyboard-now-available/) they feel similar to Cherry MX Blacks) [![enter image description here](https://i.stack.imgur.com/7ASogm.png)](https://i.stack.imgur.com/7ASogm.png) [![enter image description here](https://i.stack.imgur.com/UaC3Hm.jpg)](https://i.stack.imgur.com/UaC3Hm.jpg) [![enter image description here](https://i.stack.imgur.com/QAimKm.png)](https://i.stack.imgur.com/QAimKm.png) [![enter image description here](https://i.stack.imgur.com/7WlAUm.png)](https://i.stack.imgur.com/7WlAUm.png) [![enter image description here](https://i.stack.imgur.com/sDJOym.png)](https://i.stack.imgur.com/sDJOym.png) I couldn't find any force vs. travel plot, nor a non-compact version of it. According to this [message from the Amazon seller](https://www.amazon.com/there-non-compact-version-this-keyboard/forum/Fx1Z9A9AGE3591Y/TxCG8PN08XYYCE), they don't have any non-compact version of this keyboard. The Amazon seller also sent me this table summarizing the properties of the Rapoo key switches: [![enter image description here](https://i.stack.imgur.com/VHYEq.png)](https://i.stack.imgur.com/VHYEq.png)
Another good option: Keychron K1 Wireless Mechanical Keyboard (Version 4) - 74 USD. <https://www.keychron.com/products/keychron-k1-wireless-mechanical-keyboard> gives the specs: [![enter image description here](https://i.stack.imgur.com/1Aa1Y.png)](https://i.stack.imgur.com/1Aa1Y.png) More specs: Compact version (without numpad): * Dimension (87-Key): 355 x 120mm * Weight: About 650g / 1.43 lbs Regular version (with numpad): * Dimension (104-Key) : 435 x 120mm * Weight: About 805g / 1.77 lbs Any version: * Height incl. keycap (front): 22mm * Height incl. keycap (rear): 26mm * Operating Environment: -10 to 50℃ * The K1 has included keycaps for both Windows and Mac operating systems. * Has wireless option (Bluetooth 5.1) --- A picture from <https://workingsetup.com/review-so-sanh-keychron-k3v2-va-k1v4-phim-co-low-profile-den-tu-nha-keychron/> comparing the regular version (with numpad) with the compact version (without numpad): [![enter image description here](https://i.stack.imgur.com/IfuBC.jpg)](https://i.stack.imgur.com/IfuBC.jpg) A picture from <https://www.imore.com/keychron-k1-v4-review> showing the thinness: [![enter image description here](https://i.stack.imgur.com/RUm8a.jpg)](https://i.stack.imgur.com/RUm8a.jpg)
Razer's ["ultra-low-profile mechanical keyboard"](http://www.razerzone.com/gaming-keyboards-keypads/razer-mechanical-keyboard-case-ipad-pro) (I don't see any model name) is the mechanical keyboard with the shortest travel distance I am aware of (without having to add o-rings). Unfortunately, the actuation force is high: 65g to the actuation point, 70g to bottom. It was announced on [Thursday, 14 July 2016](http://www.razerzone.com/press/detail/press-releases/razer-releases-worlds-first-ultra-low-profile-mechanical-keyboard-switch). The announced price is U.S. $169.99 / EU €189.99. [![enter image description here](https://i.stack.imgur.com/aRTjv.png)](https://i.stack.imgur.com/aRTjv.png) [![enter image description here](https://i.stack.imgur.com/UdhCQ.png)](https://i.stack.imgur.com/UdhCQ.png) [![enter image description here](https://i.stack.imgur.com/E9cEu.png)](https://i.stack.imgur.com/E9cEu.png) [![enter image description here](https://i.stack.imgur.com/SYOm1.jpg)](https://i.stack.imgur.com/SYOm1.jpg)
3,906,359
I can't seem to find where to configure this option. Backspace unindent only works when using hard tabs, but shouldn't this work, since it works in other Scintilla-based editors (e.g. SciTE)?
2010/10/11
[ "https://Stackoverflow.com/questions/3906359", "https://Stackoverflow.com", "https://Stackoverflow.com/users/472296/" ]
The solution I've found, which works exactly like Scite's solution does, is a little plugin from Virtual Roadside named [npp\_tabs](http://www.virtualroadside.com/blog/index.php/2012/12/03/notepad-plugin-to-enable-unindent-on-backspace-key/). Highly recommended - after three separate attempts, it's the only thing I found that worked how I wanted.
Go to Settings > Shortcut Mapper, then click the "Scintilla Commands" tab. Line 11 (on my version) is where you set the shortcut key for "SCI\_BACKTAB". Highlight it, press "Modify", and you can set it to whatever you would like. Hope this helps.
npp\_tabs no longer seems to work. Use ExtSettings instead, which is available through PluginsAdmin. Website: <https://sourceforge.net/projects/extsettings/>
99,487
As an associate editor, I am handling a paper that proposes an algorithm I find to be weak. In fact, I was able to show that a very simple, brute-force approach actually has a better running time than their algorithm. Therefore, I will recommend rejecting this paper. Do I have an obligation to share my proof that the brute-force running time is better? I want the higher-level editors to have confidence in the rejection, but it also occurs to me that I might be able to improve my own result and publish it independently. Is this a violation of ethics? UPDATE: Well this certainly took off! I would like to add the following: * The overwhelming consensus is that it would be unethical for me to "scoop" the other authors, so I will not do that. The advice from all is greatly appreciated. * The journal is the top in its field, so we have to be extremely selective. The problem proposed is fairly interesting, but overall the paper does not meet our threshold. * When I say that brute force is better than their method, I mean it in a provable, big O sense. * The authors' proposed scheme is not only inefficient, it is written in a very confusing way. In fact, I asked them to compare their approach to brute-force as a way of helping them clarify their argument, and they did a bad job of it, which is what led me to look into it in the first place. * The fact that brute force performs better than their scheme is not totally trivial because it relies on a combinatorial argument that is not amazing, but not completely obvious either. * I will share my proof with the Editor in chief, but I have decided not to give it to the authors; I will consider publishing independently in the future if their work ever appears elsewhere. * Their paper is not on arXiv or any other website.
2017/11/27
[ "https://academia.stackexchange.com/questions/99487", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/83440/" ]
The first thing you should do is to "recuse" yourself from the editorial process. That is, have someone else in the organization make the accept/reject decision, since your own work now creates a conflict of interest. Put another way, you have an "axe to grind." After the decision is made, then you can make your move. Let's say the journal's decision is to accept/publish. Then you can publish your improvement later, citing the original paper. If the journal's decision is to reject, then you can approach the authors with your improvement, and honestly tell them that you had nothing to do with the rejection. (That's why you need to "recuse" yourself.) If they agree, you might offer to "run interference" for a second try at your shop; otherwise, you and they might publish elsewhere.
You have a valid reason to reject their paper, but no obligation to share your information. If you wanted to pursue it as research, wouldn't the best option be to privately contact them outside of your official duties, and ask to work with them?
I think you are treading on thin ice, ethically speaking. Obviously you, as an editor, have no obligation to help the authors in any specific way, and you are free to tell them about your improvement or not, but rejecting their paper, taking the idea/problem, applying a different method to its resolution, and then publishing this under your own name seems problematic. As an editor, you are usually expected to treat the papers you are handling as privileged information, and you are specifically expected not to use your knowledge of rejected papers to scoop the authors - which is exactly what you plan to do. Say you reject the paper with comments along the lines of "important problem, solution is too simplistic". The authors now go back to the drawing board, come up with a solution similar to what you had in mind, and get their paper accepted. If you publish your idea first they obviously can't do this anymore - you have effectively made use of knowledge you learned as an editor to pull the rug from under the people who initially thought of the research project, even if they did not do a great job with the first submission. I understand that it sucks that if you told them about your idea you would be giving away information that, in different circumstances, may be sufficient for co-authorship. However, I would argue that as a reviewer / editor we *sign up for* helping the authors "for free" to some extent (that is, without expecting recognition). If you really don't want to tell the authors about your idea, your best hope is that the authors get their work accepted somewhere else. In that case you are free to write your follow-up paper and cite the original paper.
I think you have an obligation to academia to disprove his algorithm with a brute-force algorithm. Give your brute-force proof to everybody and put it in the public domain. After you disprove it, if you have an algorithm that's better than his and better than brute force, you could offer to help him write a better algorithm. You and he can both be coauthors of the new paper with your new algorithm. Wait until everybody accepts that his original algorithm has been rejected before offering to coauthor the new algorithm with him. Tell him that you have a new, better algorithm, but don't tell him what it is until he agrees to coauthor with you in a written, signed document. That way, you get credit for helping, but are not taking away his idea. However, remember that some ideas are proposed in odd ways because of a need to avoid similar patents. An inefficient algorithm that is patentable is better than an efficient algorithm that looks like someone else's invention.
99,487
I am handling a paper as an associate editor that proposed an algorithm that I find to be weak. In fact, I was able to show that a very simple, brute-force approach actually has a better running time than their algorithm. Therefore, I will recommend rejecting this paper. Do I have an obligation to share my proof that the brute-force running time is better? I want the higher-level editors to have confidence in the rejection, but it also occurs to me that I might be able to improve my own result and publish it independently. Is this a violation of ethics? UPDATE: Well this certainly took off! I would like to add the following: * The overwhelming consensus is that it would be unethical for me to "scoop" the other authors, so I will not do that. The advice from all is greatly appreciated. * The journal is the top in its field, so we have to be extremely selective. The problem proposed is fairly interesting, but overall the paper does not meet our threshold. * When I say that brute force is better than their method, I mean it in a provable, big O sense. * The authors' proposed scheme is not only inefficient, it is written in a very confusing way. In fact, I asked them to compare their approach to brute-force as a way of helping them clarify their argument, and they did a bad job of it, which is what led me to look into it in the first place. * The fact that brute force performs better than their scheme is not totally trivial because it relies on a combinatorial argument that is not amazing, but not completely obvious either. * I will share my proof with the Editor in chief, but I have decided not to give it to the authors; I will consider publishing independently in the future if their work ever appears elsewhere. * Their paper is not on arXiv or any other website.
2017/11/27
[ "https://academia.stackexchange.com/questions/99487", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/83440/" ]
Does running time actually matter? It's a program for a paper, not something to be distributed and run commercially on many machines. Perhaps they coded it the way they did because that algorithm is more readable and understandable than a brute force approach. For example, if something in my paper required me to do something simple like add up the numbers from 1 to n, we all know the sum is equal to n(n+1)/2, but having a loop that goes from 1 to n and adds them together is simpler to read. Assuming this is the only 'lousy' thing in the paper that you take issue with, I don't see it as a reason to reject. If you could come up with a brute force approach that quickly, surely they could as well. Perhaps ask them why they chose that algorithm, or ask a senior editor at your paper for a second opinion, not random strangers on the internet. As far as using the algorithm you wrote yourself goes, the other answers have adequately stated that that would be incredibly unethical.
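To make the sum-of-1-to-n aside concrete, here is a minimal Python sketch of the two equivalent implementations being contrasted (the function names are illustrative, not taken from any paper under discussion):

```python
def sum_closed_form(n):
    # Closed-form identity: 1 + 2 + ... + n == n * (n + 1) / 2
    return n * (n + 1) // 2

def sum_loop(n):
    # "Brute-force" loop: same result, arguably easier to read and verify
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(sum_closed_form(100))  # 5050
print(sum_loop(100))         # 5050
```

Both return the same value for every n; the loop is O(n) while the closed form is O(1), yet a reader can check the loop's correctness at a glance.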
Assuming that you do not especially care about the topic of the paper that you reviewed, and that the idea that you had while reviewing is not exceptional and did not require huge amounts of work, I think the best course of action is simply to write up that idea concisely as part of your review and give it away to the authors of the original submission. If they use it, I agree it's too bad that you won't get credit for it, but you will have gotten other people to polish and write up your idea -- which is probably significantly more effort than what it will take you to concisely describe the idea in the review. I'd say the main problem is if they don't use it and you'd like the idea not to die off. I don't know what a good solution is in this case. Wait for their paper to be accepted somewhere, or wait for several years to be sure that they have given up?
99,487
I am handling a paper as an associate editor that proposed an algorithm that I find to be weak. In fact, I was able to show that a very simple, brute-force approach actually has a better running time than their algorithm. Therefore, I will recommend rejecting this paper. Do I have an obligation to share my proof that the brute-force running time is better? I want the higher-level editors to have confidence in the rejection, but it also occurs to me that I might be able to improve my own result and publish it independently. Is this a violation of ethics? UPDATE: Well this certainly took off! I would like to add the following: * The overwhelming consensus is that it would be unethical for me to "scoop" the other authors, so I will not do that. The advice from all is greatly appreciated. * The journal is the top in its field, so we have to be extremely selective. The problem proposed is fairly interesting, but overall the paper does not meet our threshold. * When I say that brute force is better than their method, I mean it in a provable, big O sense. * The authors' proposed scheme is not only inefficient, it is written in a very confusing way. In fact, I asked them to compare their approach to brute-force as a way of helping them clarify their argument, and they did a bad job of it, which is what led me to look into it in the first place. * The fact that brute force performs better than their scheme is not totally trivial because it relies on a combinatorial argument that is not amazing, but not completely obvious either. * I will share my proof with the Editor in chief, but I have decided not to give it to the authors; I will consider publishing independently in the future if their work ever appears elsewhere. * Their paper is not on arXiv or any other website.
2017/11/27
[ "https://academia.stackexchange.com/questions/99487", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/83440/" ]
Does running time actually matter? It's a program for a paper, not something to be distributed and run commercially on many machines. Perhaps they coded it the way they did because that algorithm is more readable and understandable than a brute force approach. For example, if something in my paper required me to do something simple like add up the numbers from 1 to n, we all know the sum is equal to n(n+1)/2, but having a loop that goes from 1 to n and adds them together is simpler to read. Assuming this is the only 'lousy' thing in the paper that you take issue with, I don't see it as a reason to reject. If you could come up with a brute force approach that quickly, surely they could as well. Perhaps ask them why they chose that algorithm, or ask a senior editor at your paper for a second opinion, not random strangers on the internet. As far as using the algorithm you wrote yourself goes, the other answers have adequately stated that that would be incredibly unethical.
The first thing you should do is to "recuse" yourself from the editorial process. That is, have someone else in the organization make the accept/reject decision, since your own work now creates a conflict of interest. Put another way, you have an "axe to grind." After the decision is made, then you can make your move. Let's say your journal decides to accept/publish. Then you can publish your improvement later, citing the original paper. If your journal's decision is to reject, then you can approach the authors with your improvement, and honestly tell them that you had nothing to do with the rejection. (That's why you need to "recuse" yourself.) If they agree, you might offer to "run interference" for a second try at your shop; otherwise, you and they might publish elsewhere.
99,487
I am handling a paper as an associate editor that proposed an algorithm that I find to be weak. In fact, I was able to show that a very simple, brute-force approach actually has a better running time than their algorithm. Therefore, I will recommend rejecting this paper. Do I have an obligation to share my proof that the brute-force running time is better? I want the higher-level editors to have confidence in the rejection, but it also occurs to me that I might be able to improve my own result and publish it independently. Is this a violation of ethics? UPDATE: Well this certainly took off! I would like to add the following: * The overwhelming consensus is that it would be unethical for me to "scoop" the other authors, so I will not do that. The advice from all is greatly appreciated. * The journal is the top in its field, so we have to be extremely selective. The problem proposed is fairly interesting, but overall the paper does not meet our threshold. * When I say that brute force is better than their method, I mean it in a provable, big O sense. * The authors' proposed scheme is not only inefficient, it is written in a very confusing way. In fact, I asked them to compare their approach to brute-force as a way of helping them clarify their argument, and they did a bad job of it, which is what led me to look into it in the first place. * The fact that brute force performs better than their scheme is not totally trivial because it relies on a combinatorial argument that is not amazing, but not completely obvious either. * I will share my proof with the Editor in chief, but I have decided not to give it to the authors; I will consider publishing independently in the future if their work ever appears elsewhere. * Their paper is not on arXiv or any other website.
2017/11/27
[ "https://academia.stackexchange.com/questions/99487", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/83440/" ]
You have a valid reason to reject their paper, but no obligation to share your information. If you wanted to pursue it as research, wouldn't the best option be to privately contact them outside of your official duties, and ask to work with them?
As an addition to [xLeitix's very thoughtful answer](https://academia.stackexchange.com/a/99491/54543), I would give the authors a period of grace (say half a year or so, depending on the typical time scales of your discipline). If by then they have not published an improved version matching the performance of your ideas from now, I would regard the confidentiality period to be over and the subject to be fair game for everyone, including you. If they published the poor version somewhere else, you should cite them. Do not copy from their previously submitted manuscript. Conduct your own research and present it as such. If you tell them now about a possible way to improve, it's because you want them to improve and publish a better version. If they don't, you could still do it on your own then. But you would have to wait to find out. The waiting period is the key here, and without it you are indeed acting unethically. The waiting period must be really quite generous. However, it should not be infinitely long, because otherwise as an editor (or reviewer) you would unfairly restrict yourself in possible research if all topics you ever got to see were inaccessible to you in the future.
99,487
I am handling a paper as an associate editor that proposed an algorithm that I find to be weak. In fact, I was able to show that a very simple, brute-force approach actually has a better running time than their algorithm. Therefore, I will recommend rejecting this paper. Do I have an obligation to share my proof that the brute-force running time is better? I want the higher-level editors to have confidence in the rejection, but it also occurs to me that I might be able to improve my own result and publish it independently. Is this a violation of ethics? UPDATE: Well this certainly took off! I would like to add the following: * The overwhelming consensus is that it would be unethical for me to "scoop" the other authors, so I will not do that. The advice from all is greatly appreciated. * The journal is the top in its field, so we have to be extremely selective. The problem proposed is fairly interesting, but overall the paper does not meet our threshold. * When I say that brute force is better than their method, I mean it in a provable, big O sense. * The authors' proposed scheme is not only inefficient, it is written in a very confusing way. In fact, I asked them to compare their approach to brute-force as a way of helping them clarify their argument, and they did a bad job of it, which is what led me to look into it in the first place. * The fact that brute force performs better than their scheme is not totally trivial because it relies on a combinatorial argument that is not amazing, but not completely obvious either. * I will share my proof with the Editor in chief, but I have decided not to give it to the authors; I will consider publishing independently in the future if their work ever appears elsewhere. * Their paper is not on arXiv or any other website.
2017/11/27
[ "https://academia.stackexchange.com/questions/99487", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/83440/" ]
This is one of the most feared and contentious acts in the publication of science. We all have heard stories of the paper that was rejected, only to serve as impetus for a subsequent publication by one of the rejecting reviewers. The worst case, which you would not be guilty of, is rejecting the paper for the sole purpose of benefiting from the idea. That's not the case here. If the optimality of the algorithm is the *point* of the paper, and you can clearly demonstrate their logic is flawed, you have every right to reject the paper on those grounds. But you *should* lay this out for them in the review, and give them a chance to rebut. If your logic is iron-clad, you'll win the argument. Whether it is ethically sound to then move forward with your own publication on the topic is unclear. Did the authors present this work at a conference? If so, and had you seen that talk, you would be justified in moving forward. If not, the manuscript is considered confidential, and your idea was spurred only through a confidential review process. It would be wise to discuss this with the general editor of the publication for guidance. Some day, they may review your own paper.
I think you have an obligation to academia to disprove his algorithm with a brute-force algorithm. Give your brute-force proof to everybody and put it in the public domain. After you disprove it, if you have an algorithm that's better than his and better than brute force, you could offer to help him write a better algorithm. You and he can both be coauthors of the new paper with your new algorithm. Wait until everybody accepts that his original algorithm has been rejected before offering to coauthor the new algorithm with him. Tell him that you have a new, better algorithm, but don't tell him what it is until he agrees to coauthor with you in a written, signed document. That way, you get credit for helping, but are not taking away his idea. However, remember that some ideas are proposed in odd ways because of a need to avoid similar patents. An inefficient algorithm that is patentable is better than an efficient algorithm that looks like someone else's invention.
99,487
I am handling a paper as an associate editor that proposed an algorithm that I find to be weak. In fact, I was able to show that a very simple, brute-force approach actually has a better running time than their algorithm. Therefore, I will recommend rejecting this paper. Do I have an obligation to share my proof that the brute-force running time is better? I want the higher-level editors to have confidence in the rejection, but it also occurs to me that I might be able to improve my own result and publish it independently. Is this a violation of ethics? UPDATE: Well this certainly took off! I would like to add the following: * The overwhelming consensus is that it would be unethical for me to "scoop" the other authors, so I will not do that. The advice from all is greatly appreciated. * The journal is the top in its field, so we have to be extremely selective. The problem proposed is fairly interesting, but overall the paper does not meet our threshold. * When I say that brute force is better than their method, I mean it in a provable, big O sense. * The authors' proposed scheme is not only inefficient, it is written in a very confusing way. In fact, I asked them to compare their approach to brute-force as a way of helping them clarify their argument, and they did a bad job of it, which is what led me to look into it in the first place. * The fact that brute force performs better than their scheme is not totally trivial because it relies on a combinatorial argument that is not amazing, but not completely obvious either. * I will share my proof with the Editor in chief, but I have decided not to give it to the authors; I will consider publishing independently in the future if their work ever appears elsewhere. * Their paper is not on arXiv or any other website.
2017/11/27
[ "https://academia.stackexchange.com/questions/99487", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/83440/" ]
I think you are treading on thin ice, ethically speaking. Obviously you, as an editor, have no obligation to help the authors in any specific way, and you are free to tell them about your improvement or not, but rejecting their paper, taking the idea/problem, applying a different method to its resolution, and then publishing this under your own name seems problematic. As an editor, you are usually expected to treat the papers you are handling as privileged information, and you are specifically expected not to use your knowledge of rejected papers to scoop the authors - which is exactly what you plan to do. Say you reject the paper with comments along the lines of "important problem, solution is too simplistic". The authors now go back to the drawing board, come up with a solution similar to what you had in mind, and get their paper accepted. If you publish your idea first they obviously can't do this anymore - you have effectively made use of knowledge you learned as an editor to pull the rug from under the people who initially thought of the research project, even if they did not do a great job with the first submission. I understand that it sucks that if you told them about your idea you would be giving away information that, in different circumstances, may be sufficient for co-authorship. However, I would argue that as a reviewer / editor we *sign up for* helping the authors "for free" to some extent (that is, without expecting recognition). If you really don't want to tell the authors about your idea, your best hope is that the authors get their work accepted somewhere else. In that case you are free to write your follow-up paper and cite the original paper.
As an addition to [xLeitix's very thoughtful answer](https://academia.stackexchange.com/a/99491/54543), I would give the authors a period of grace (say half a year or so, depending on the typical time scales of your discipline). If by then they have not published an improved version matching the performance of your ideas from now, I would regard the confidentiality period to be over and the subject to be fair game for everyone, including you. If they published the poor version somewhere else, you should cite them. Do not copy from their previously submitted manuscript. Conduct your own research and present it as such. If you tell them now about a possible way to improve, it's because you want them to improve and publish a better version. If they don't, you could still do it on your own then. But you would have to wait to find out. The waiting period is the key here, and without it you are indeed acting unethically. The waiting period must be really quite generous. However, it should not be infinitely long, because otherwise as an editor (or reviewer) you would unfairly restrict yourself in possible research if all topics you ever got to see were inaccessible to you in the future.
99,487
I am handling a paper as an associate editor that proposed an algorithm that I find to be weak. In fact, I was able to show that a very simple, brute-force approach actually has a better running time than their algorithm. Therefore, I will recommend rejecting this paper. Do I have an obligation to share my proof that the brute-force running time is better? I want the higher-level editors to have confidence in the rejection, but it also occurs to me that I might be able to improve my own result and publish it independently. Is this a violation of ethics? UPDATE: Well this certainly took off! I would like to add the following: * The overwhelming consensus is that it would be unethical for me to "scoop" the other authors, so I will not do that. The advice from all is greatly appreciated. * The journal is the top in its field, so we have to be extremely selective. The problem proposed is fairly interesting, but overall the paper does not meet our threshold. * When I say that brute force is better than their method, I mean it in a provable, big O sense. * The authors' proposed scheme is not only inefficient, it is written in a very confusing way. In fact, I asked them to compare their approach to brute-force as a way of helping them clarify their argument, and they did a bad job of it, which is what led me to look into it in the first place. * The fact that brute force performs better than their scheme is not totally trivial because it relies on a combinatorial argument that is not amazing, but not completely obvious either. * I will share my proof with the Editor in chief, but I have decided not to give it to the authors; I will consider publishing independently in the future if their work ever appears elsewhere. * Their paper is not on arXiv or any other website.
2017/11/27
[ "https://academia.stackexchange.com/questions/99487", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/83440/" ]
I believe rejecting a paper just on the basis of the poor performance of their algorithm, and then publishing your own brute-force algorithm, is highly unethical: to me it sounds like you only tried making an algorithm after getting the idea from their work. Secondly, brute-force algorithms are not science and can never win over a smart procedure; it is not ethical to reject a research work just to get yourself a publication. Then again, you can very well communicate to the authors that you have something better. Collaborate with them. Tell the higher-level editors you can't handle this paper because you are now working together with the authors. That's the ethical way to do it.
I think you have an obligation to academia to disprove his algorithm with a brute-force algorithm. Give your brute-force proof to everybody and put it in the public domain. After you disprove it, if you have an algorithm that's better than his and better than brute force, you could offer to help him write a better algorithm. You and he can both be coauthors of the new paper with your new algorithm. Wait until everybody accepts that his original algorithm has been rejected before offering to coauthor the new algorithm with him. Tell him that you have a new, better algorithm, but don't tell him what it is until he agrees to coauthor with you in a written, signed document. That way, you get credit for helping, but are not taking away his idea. However, remember that some ideas are proposed in odd ways because of a need to avoid similar patents. An inefficient algorithm that is patentable is better than an efficient algorithm that looks like someone else's invention.
99,487
I am handling a paper as an associate editor that proposed an algorithm that I find to be weak. In fact, I was able to show that a very simple, brute-force approach actually has a better running time than their algorithm. Therefore, I will recommend rejecting this paper. Do I have an obligation to share my proof that the brute-force running time is better? I want the higher-level editors to have confidence in the rejection, but it also occurs to me that I might be able to improve my own result and publish it independently. Is this a violation of ethics? UPDATE: Well this certainly took off! I would like to add the following: * The overwhelming consensus is that it would be unethical for me to "scoop" the other authors, so I will not do that. The advice from all is greatly appreciated. * The journal is the top in its field, so we have to be extremely selective. The problem proposed is fairly interesting, but overall the paper does not meet our threshold. * When I say that brute force is better than their method, I mean it in a provable, big O sense. * The authors' proposed scheme is not only inefficient, it is written in a very confusing way. In fact, I asked them to compare their approach to brute-force as a way of helping them clarify their argument, and they did a bad job of it, which is what led me to look into it in the first place. * The fact that brute force performs better than their scheme is not totally trivial because it relies on a combinatorial argument that is not amazing, but not completely obvious either. * I will share my proof with the Editor in chief, but I have decided not to give it to the authors; I will consider publishing independently in the future if their work ever appears elsewhere. * Their paper is not on arXiv or any other website.
2017/11/27
[ "https://academia.stackexchange.com/questions/99487", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/83440/" ]
Does running time actually matter? It's a program for a paper, not something to be distributed and run commercially on many machines. Perhaps they coded it the way they did because that algorithm is more readable and understandable than a brute force approach. For example, if something in my paper required me to do something simple like add up the numbers from 1 to n, we all know the sum is equal to n(n+1)/2, but having a loop that goes from 1 to n and adds them together is simpler to read. Assuming this is the only 'lousy' thing in the paper that you take issue with, I don't see it as a reason to reject. If you could come up with a brute force approach that quickly, surely they could as well. Perhaps ask them why they chose that algorithm, or ask a senior editor at your paper for a second opinion, not random strangers on the internet. As far as using the algorithm you wrote yourself goes, the other answers have adequately stated that that would be incredibly unethical.
As an addition to [xLeitix's very thoughtful answer](https://academia.stackexchange.com/a/99491/54543), I would give the authors a period of grace (say half a year or so, depending on the typical time scales of your discipline). If by then they have not published an improved version matching the performance of your ideas from now, I would regard the confidentiality period to be over and the subject to be fair game for everyone, including you. If they published the poor version somewhere else, you should cite them. Do not copy from their previously submitted manuscript. Conduct your own research and present it as such. If you tell them now about a possible way to improve, it's because you want them to improve and publish a better version. If they don't, you could still do it on your own then. But you would have to wait to find out. The waiting period is the key here, and without it you are indeed acting unethically. The waiting period must be really quite generous. However, it should not be infinitely long, because otherwise as an editor (or reviewer) you would unfairly restrict yourself in possible research if all topics you ever got to see were inaccessible to you in the future.
99,487
I am handling a paper as an associate editor that proposed an algorithm that I find to be weak. In fact, I was able to show that a very simple, brute-force approach actually has a better running time than their algorithm. Therefore, I will recommend rejecting this paper. Do I have an obligation to share my proof that the brute-force running time is better? I want the higher-level editors to have confidence in the rejection, but it also occurs to me that I might be able to improve my own result and publish it independently. Is this a violation of ethics? UPDATE: Well this certainly took off! I would like to add the following: * The overwhelming consensus is that it would be unethical for me to "scoop" the other authors, so I will not do that. The advice from all is greatly appreciated. * The journal is the top in its field, so we have to be extremely selective. The problem proposed is fairly interesting, but overall the paper does not meet our threshold. * When I say that brute force is better than their method, I mean it in a provable, big O sense. * The authors' proposed scheme is not only inefficient, it is written in a very confusing way. In fact, I asked them to compare their approach to brute-force as a way of helping them clarify their argument, and they did a bad job of it, which is what led me to look into it in the first place. * The fact that brute force performs better than their scheme is not totally trivial because it relies on a combinatorial argument that is not amazing, but not completely obvious either. * I will share my proof with the Editor in chief, but I have decided not to give it to the authors; I will consider publishing independently in the future if their work ever appears elsewhere. * Their paper is not on arXiv or any other website.
2017/11/27
[ "https://academia.stackexchange.com/questions/99487", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/83440/" ]
If the paper is overall lousy, then simply reject it; I'm sure the reviewers would give you plenty of reasons for this. However, if all it has is a weak algorithm but it is otherwise well written, then it might still be worthy of publication (that depends on the journal). Once published, you can then publish your own work and cite the paper. After all, your algorithm is indeed inspired by theirs. Also, how sure are you that your algorithm is "better"? Always better? 90% of the time better? On CPUs? What about GPUs? How does it scale? As an example, think about all the sorting algorithms out there. There is no single best algorithm: depending on the dataset being sorted, any number of algorithms could give the best result.
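The sorting analogy above can be made concrete with a few lines of code. The sketch below (the helper name `insertion_sort_comparisons` is my own, not from any cited work) counts element comparisons to show how one and the same algorithm's cost swings between roughly n and roughly n²/2 depending on the input it receives:

```python
def insertion_sort_comparisons(data):
    """Insertion-sort a copy of `data`; return the number of element comparisons."""
    a, comps = list(data), 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comps += 1                      # one comparison per loop iteration
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break                       # element already in place
    return comps

# Already-sorted input: n - 1 comparisons. Reversed input: n(n-1)/2.
assert insertion_sort_comparisons(list(range(1000))) == 999
assert insertion_sort_comparisons(list(range(1000, 0, -1))) == 499500
```

The same measurement on a different algorithm (say, mergesort) would give nearly identical counts for both inputs, which is exactly the "no single winner" point: which algorithm is "best" depends on the data.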
I believe rejecting the paper just on the basis of the poor performance of their algorithm, and then publishing your own brute-force algorithm, would be highly unethical, because it sounds like you only tried the idea after getting it from that work. Secondly, brute-force algorithms are not science, and they can never win over a smart procedure; it is not ethical to reject a research work just to get yourself a publication. Then again, you could very well communicate to the authors that you have something better. Collaborate with them, and tell the higher-level editors you can't handle this paper as you are working together with the authors. That's the ethical way to do it.
143,554
While watching the excellent 1972 picture "Cabaret", I came across an interesting quote using the expression "to make a pounce". The context it is used in can be found on [subzin.com](http://www.subzin.com/quotes/Cabaret/The+only+thing+to+do+with+virgins+is+to+make+a+ferocious+pounce "here"). While I have never heard the expression, its meaning becomes somewhat clear from the context: "to make advances to/at someone". When googling it, however, it seems the expression is used very rarely, and there is hardly any mention of it other than the quote from the movie itself. "Cabaret" is set in 1931, and I was therefore wondering whether the expression is just outdated or whether it really is not used at all. How does it sound to a native speaker? Would you use it in an everyday conversation? Thanks!
2013/12/28
[ "https://english.stackexchange.com/questions/143554", "https://english.stackexchange.com", "https://english.stackexchange.com/users/60778/" ]
[Here's the evidence](https://books.google.com/ngrams/graph?content=make%20a%20pounce&year_start=1840&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1;,make%20a%20pounce;,c0) to support OP's suggestion that *make a pounce* was more common in the past... ![enter image description here](https://i.stack.imgur.com/lBAHb.png) But [here's the evidence](https://books.google.com/ngrams/graph?content=he%20pounced,he%20made%20a%20pounce&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1;,he%20pounced;,c0;.t1;,he%20made%20a%20pounce;,c0) to show that relatively speaking it was *never* actually "common"... ![enter image description here](https://i.stack.imgur.com/HdLXE.png) It's also worth noting this from [etymonline.com...](http://www.etymonline.com/index.php?term=pounce) > > **pounce** (v.) ***1680s***, originally "to seize with the pounces," from Middle English pownse (n.) "hawk's claw" (see pounce (n.)). > > **pounce** (n.) "claw of a bird of prey," late 15c., pownse, probably from Old French ponchon "lance, javelin; spine, quill". Meaning "an act of jumping or falling upon" is from ***1825.*** > > > That's to say the relevant *verb* usage predates the relevant *noun* usage by well over a century. In short, there never was a time when any significant proportion of Anglophones would have used the noun-based form *to make a pounce* rather than the simple verb *to pounce*. But the *Cabaret* scriptwriters didn't necessarily know or care about that level of detail. They just wanted an easily-understood version that would sound strange/exotic/"other" to the modern ear.
The Oxford English Dictionary has a citation for *make a pounce* from 1806, and the most recent is from 1995. This suggests that it is both well established and fairly current. As a native speaker, I find nothing unusual about it, and its use is not limited to the making of sexual advances.
17,292
Did they like Oliver Cromwell, who founded a republic after a dispute about taxes led to the overthrow of a king, or did they see him as a usurping Caesar who destroyed a republic and became a tyrant?
2014/11/26
[ "https://history.stackexchange.com/questions/17292", "https://history.stackexchange.com", "https://history.stackexchange.com/users/8451/" ]
No. On the one side, we have Hamilton denouncing Cromwell in the [Federalist Papers No. 21](http://avalon.law.yale.edu/18th_century/fed21.asp): > > Without a guaranty the assistance to be derived from the Union in > repelling those domestic dangers which may sometimes threaten the > existence of the State constitutions, must be renounced. Usurpation > may rear its crest in each State, and trample upon the liberties of > the people, while the national government could legally do nothing > more than behold its encroachments with indignation and regret. A > successful faction may erect a tyranny on the ruins of order and law, > while no succor could constitutionally be afforded by the Union to the > friends and supporters of the government. The tempestuous situation > from which Massachusetts has scarcely emerged, evinces that dangers of > this kind are not merely speculative. Who can determine what might > have been the issue of her late convulsions, if the malcontents had > been headed by a Caesar or by a Cromwell? Who can predict what effect > a despotism, established in Massachusetts, would have upon the > liberties of New Hampshire or Rhode Island, of Connecticut or New > York? > > > And from the other side, the [anti-federalist Brutus papers](http://www.constitution.org/afp/brutus10.htm): > > I firmly believe, no country in the world had ever a more patriotic > army, than the one which so ably served this country, in the late war. > > > But had the General who commanded them, been possessed of the spirit > of a Julius Cesar or a Cromwell, the liberties of this country, had in > all probability, terminated with the war; or had they been maintained, > might have cost more blood and treasure, than was expended in the > conflict with Great-Britain. > > > So "Cromwell" is a term of abuse equivalent to "Caesar" for both sides. The answer to your question may be different if you asked if early New England colonists saw Cromwell as a role model. 
But keep in mind that the [southern gentry saw themselves as cavaliers](http://en.wikipedia.org/wiki/American_gentry)--they would have detested Cromwell. And Americans in this period venerated [Cincinnatus](http://en.wikipedia.org/wiki/Lucius_Quinctius_Cincinnatus#Legacy), who is effectively the anti-Cromwell.
Extremely short and simple answer: No, because for one thing, Cromwell eventually set himself up as dictator, the "Lord Protector", which was simply a title for the person in charge. Before that, he created a non-elected "representative" system in which the members were simply nominated, not elected. In other words, he was very much like a Caesar, starting out with a "Parliament" that he effectively controlled. The other answers go into more detail, but that pretty much sums up why the founding fathers wouldn't have had a positive view of him. That said, ironically, one person who did have a positive view of him was Karl Marx, who viewed him as an early revolutionary.
17,292
Did they like Oliver Cromwell, who founded a republic after a dispute about taxes led to the overthrow of a king, or did they see him as a usurping Caesar who destroyed a republic and became a tyrant?
2014/11/26
[ "https://history.stackexchange.com/questions/17292", "https://history.stackexchange.com", "https://history.stackexchange.com/users/8451/" ]
No. On the one side, we have Hamilton denouncing Cromwell in the [Federalist Papers No. 21](http://avalon.law.yale.edu/18th_century/fed21.asp): > > Without a guaranty the assistance to be derived from the Union in > repelling those domestic dangers which may sometimes threaten the > existence of the State constitutions, must be renounced. Usurpation > may rear its crest in each State, and trample upon the liberties of > the people, while the national government could legally do nothing > more than behold its encroachments with indignation and regret. A > successful faction may erect a tyranny on the ruins of order and law, > while no succor could constitutionally be afforded by the Union to the > friends and supporters of the government. The tempestuous situation > from which Massachusetts has scarcely emerged, evinces that dangers of > this kind are not merely speculative. Who can determine what might > have been the issue of her late convulsions, if the malcontents had > been headed by a Caesar or by a Cromwell? Who can predict what effect > a despotism, established in Massachusetts, would have upon the > liberties of New Hampshire or Rhode Island, of Connecticut or New > York? > > > And from the other side, the [anti-federalist Brutus papers](http://www.constitution.org/afp/brutus10.htm): > > I firmly believe, no country in the world had ever a more patriotic > army, than the one which so ably served this country, in the late war. > > > But had the General who commanded them, been possessed of the spirit > of a Julius Cesar or a Cromwell, the liberties of this country, had in > all probability, terminated with the war; or had they been maintained, > might have cost more blood and treasure, than was expended in the > conflict with Great-Britain. > > > So "Cromwell" is a term of abuse equivalent to "Caesar" for both sides. The answer to your question may be different if you asked if early New England colonists saw Cromwell as a role model. 
But keep in mind that the [southern gentry saw themselves as cavaliers](http://en.wikipedia.org/wiki/American_gentry)--they would have detested Cromwell. And Americans in this period venerated [Cincinnatus](http://en.wikipedia.org/wiki/Lucius_Quinctius_Cincinnatus#Legacy), who is effectively the anti-Cromwell.
I do think that the fact that Cromwell's sect of Puritans mostly emigrated to Massachusetts is part of the reason that the first shots of the revolution were fired there. Only a few generations later, there must have been a seething dislike of the crown. Think about supper-table talk: no doubt many were the grandchildren of people who fought in Cromwell's campaigns. Sympathy for Cromwell and his attitudes had to be present.
17,292
Did they like Oliver Cromwell, who founded a republic after a dispute about taxes led to the overthrow of a king, or did they see him as a usurping Caesar who destroyed a republic and became a tyrant?
2014/11/26
[ "https://history.stackexchange.com/questions/17292", "https://history.stackexchange.com", "https://history.stackexchange.com/users/8451/" ]
No. On the one side, we have Hamilton denouncing Cromwell in the [Federalist Papers No. 21](http://avalon.law.yale.edu/18th_century/fed21.asp): > > Without a guaranty the assistance to be derived from the Union in > repelling those domestic dangers which may sometimes threaten the > existence of the State constitutions, must be renounced. Usurpation > may rear its crest in each State, and trample upon the liberties of > the people, while the national government could legally do nothing > more than behold its encroachments with indignation and regret. A > successful faction may erect a tyranny on the ruins of order and law, > while no succor could constitutionally be afforded by the Union to the > friends and supporters of the government. The tempestuous situation > from which Massachusetts has scarcely emerged, evinces that dangers of > this kind are not merely speculative. Who can determine what might > have been the issue of her late convulsions, if the malcontents had > been headed by a Caesar or by a Cromwell? Who can predict what effect > a despotism, established in Massachusetts, would have upon the > liberties of New Hampshire or Rhode Island, of Connecticut or New > York? > > > And from the other side, the [anti-federalist Brutus papers](http://www.constitution.org/afp/brutus10.htm): > > I firmly believe, no country in the world had ever a more patriotic > army, than the one which so ably served this country, in the late war. > > > But had the General who commanded them, been possessed of the spirit > of a Julius Cesar or a Cromwell, the liberties of this country, had in > all probability, terminated with the war; or had they been maintained, > might have cost more blood and treasure, than was expended in the > conflict with Great-Britain. > > > So "Cromwell" is a term of abuse equivalent to "Caesar" for both sides. The answer to your question may be different if you asked if early New England colonists saw Cromwell as a role model. 
But keep in mind that the [southern gentry saw themselves as cavaliers](http://en.wikipedia.org/wiki/American_gentry)--they would have detested Cromwell. And Americans in this period venerated [Cincinnatus](http://en.wikipedia.org/wiki/Lucius_Quinctius_Cincinnatus#Legacy), who is effectively the anti-Cromwell.
> > **Question:** Did the 'founding fathers' of the United States see Oliver Cromwell as a role model? > > > Oliver Cromwell died in 1658, more than a century before the Declaration of Independence. Cromwell was a religious fanatic, a regicidal dictator who waged religious genocide. He was the founding fathers' worst nightmare, their model of what not to do. Can you imagine if George Washington, after winning the Revolutionary War, had marched on Maryland (a colony settled by Catholics) and committed genocide against the colony's Catholics before establishing himself as dictator? No, the founding fathers were made up of many different religions: Protestant, Catholic, Unitarian and deist; religious fanaticism and religious persecution were things which concerned the founding fathers, who sidestepped that danger by forbidding the government from passing any laws "respecting the establishment of religion". The founding fathers intentionally created a form of government devoid of religious persecution. As for dictatorships, again the founding fathers rejected dictatorship. > > **@JamesSqf** > On "Religious genocide is a misuse of the term." > > > You cite a dictionary definition of the term. The term is legally defined in [Article 2 of the United Nations Convention on the Prevention and Punishment of the Crime of Genocide](https://en.m.wikipedia.org/wiki/Genocide_Convention) of 1948 as "any of the following acts committed with intent to destroy, in whole or in part, a national, ethnic, racial or **religious group.**" > > **[Genocide](https://en.m.wikipedia.org/wiki/Genocide)** is intentional action to destroy a people (usually defined as an ethnic, national, racial, or religious group) in whole or in part.
> > > Just to confirm here is a paper from Case Western Journal of International Law: [Exploring Critical Issues in Religious Genocide: Case Studies of Violence in Tibet, Iraq and Gujarat](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2380308) And Oliver Cromwell’s systemic attack on the Irish food supply which resulted in mass starvation of Ireland’s population and 200,000 to 600,000 civilian casualties and a further 50,000 deported into indentured slavery is referred to as an example of religious genocide. Parliamentary soldiers were not paid in coin but in confiscated Irish lands. Estimates of Ireland’s population drop as a result of Cromwell’s invasion run as high as 85%. **[Cromwellian conquest of Ireland](https://en.m.wikipedia.org/wiki/Cromwellian_conquest_of_Ireland)** > > John Hewson systematically destroyed food stocks in counties Wicklow and Kildare, Hardress Waller did likewise in the Burren in County Clare, as did Colonel Cook in County Wexford. The result was famine throughout much of Ireland, aggravated by an outbreak of bubonic plague.[26] As the guerrilla war ground on, the Parliamentarians, as of April 1651, designated areas such as County Wicklow and much of the south of the country as what would now be called free-fire zones, where anyone found would be, "taken slain and destroyed as enemies and their cattle and good shall be taken or spoiled as the goods of enemies".[27] This tactic had succeeded in the Nine Years' War. > This phase of the war was by far the most costly in terms of civilian loss of life. The combination of warfare, famine and plague caused a huge mortality among the Irish population. > > > . > > @JamesSqf. Same point, Cromwell’s invasion of Ireland was a land conquest thus could not be genocide and besides Protestants can’t commit genocide against Catholics because Protestants and Catholics are sects of the same religion. > > > These are two different points aren’t they. A. 
Saying that Cromwell didn't commit genocide in Ireland because he confiscated Irish land is absurd. It's like claiming Hitler didn't commit genocide because he confiscated Jewish land. Absurd. B. As for the claim that it couldn't be genocide because Protestants and Catholics are sects of the same religion: again, your criteria and logic are not supported by the legal definition of genocide.
17,292
Did they like Oliver Cromwell, who founded a republic after a dispute about taxes led to the overthrow of a king, or did they see him as a usurping Caesar who destroyed a republic and became a tyrant?
2014/11/26
[ "https://history.stackexchange.com/questions/17292", "https://history.stackexchange.com", "https://history.stackexchange.com/users/8451/" ]
Extremely short and simple answer: No, because for one thing, Cromwell eventually set himself up as dictator, the "Lord Protector", which was simply a title for the person in charge. Before that, he created a non-elected "representative" system in which the members were simply nominated, not elected. In other words, he was very much like a Caesar, starting out with a "Parliament" that he effectively controlled. The other answers go into more detail, but that pretty much sums up why the founding fathers wouldn't have had a positive view of him. That said, ironically, one person who did have a positive view of him was Karl Marx, who viewed him as an early revolutionary.
I do think that the fact that Cromwell's sect of Puritans mostly emigrated to Massachusetts is part of the reason that the first shots of the revolution were fired there. Only a few generations later, there must have been a seething dislike of the crown. Think about supper-table talk: no doubt many were the grandchildren of people who fought in Cromwell's campaigns. Sympathy for Cromwell and his attitudes had to be present.
17,292
Did they like Oliver Cromwell, who founded a republic after a dispute about taxes led to the overthrow of a king, or did they see him as a usurping Caesar who destroyed a republic and became a tyrant?
2014/11/26
[ "https://history.stackexchange.com/questions/17292", "https://history.stackexchange.com", "https://history.stackexchange.com/users/8451/" ]
Extremely short and simple answer: No, because for one thing, Cromwell eventually set himself up as dictator, the "Lord Protector", which was simply a title for the person in charge. Before that, he created a non-elected "representative" system in which the members were simply nominated, not elected. In other words, he was very much like a Caesar, starting out with a "Parliament" that he effectively controlled. The other answers go into more detail, but that pretty much sums up why the founding fathers wouldn't have had a positive view of him. That said, ironically, one person who did have a positive view of him was Karl Marx, who viewed him as an early revolutionary.
> > **Question:** Did the 'founding fathers' of the United States see Oliver Cromwell as a role model? > > > Oliver Cromwell died in 1658, more than a century before the Declaration of Independence. Cromwell was a religious fanatic, a regicidal dictator who waged religious genocide. He was the founding fathers' worst nightmare, their model of what not to do. Can you imagine if George Washington, after winning the Revolutionary War, had marched on Maryland (a colony settled by Catholics) and committed genocide against the colony's Catholics before establishing himself as dictator? No, the founding fathers were made up of many different religions: Protestant, Catholic, Unitarian and deist; religious fanaticism and religious persecution were things which concerned the founding fathers, who sidestepped that danger by forbidding the government from passing any laws "respecting the establishment of religion". The founding fathers intentionally created a form of government devoid of religious persecution. As for dictatorships, again the founding fathers rejected dictatorship. > > **@JamesSqf** > On "Religious genocide is a misuse of the term." > > > You cite a dictionary definition of the term. The term is legally defined in [Article 2 of the United Nations Convention on the Prevention and Punishment of the Crime of Genocide](https://en.m.wikipedia.org/wiki/Genocide_Convention) of 1948 as "any of the following acts committed with intent to destroy, in whole or in part, a national, ethnic, racial or **religious group.**" > > **[Genocide](https://en.m.wikipedia.org/wiki/Genocide)** is intentional action to destroy a people (usually defined as an ethnic, national, racial, or religious group) in whole or in part.
> > > Just to confirm here is a paper from Case Western Journal of International Law: [Exploring Critical Issues in Religious Genocide: Case Studies of Violence in Tibet, Iraq and Gujarat](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2380308) And Oliver Cromwell’s systemic attack on the Irish food supply which resulted in mass starvation of Ireland’s population and 200,000 to 600,000 civilian casualties and a further 50,000 deported into indentured slavery is referred to as an example of religious genocide. Parliamentary soldiers were not paid in coin but in confiscated Irish lands. Estimates of Ireland’s population drop as a result of Cromwell’s invasion run as high as 85%. **[Cromwellian conquest of Ireland](https://en.m.wikipedia.org/wiki/Cromwellian_conquest_of_Ireland)** > > John Hewson systematically destroyed food stocks in counties Wicklow and Kildare, Hardress Waller did likewise in the Burren in County Clare, as did Colonel Cook in County Wexford. The result was famine throughout much of Ireland, aggravated by an outbreak of bubonic plague.[26] As the guerrilla war ground on, the Parliamentarians, as of April 1651, designated areas such as County Wicklow and much of the south of the country as what would now be called free-fire zones, where anyone found would be, "taken slain and destroyed as enemies and their cattle and good shall be taken or spoiled as the goods of enemies".[27] This tactic had succeeded in the Nine Years' War. > This phase of the war was by far the most costly in terms of civilian loss of life. The combination of warfare, famine and plague caused a huge mortality among the Irish population. > > > . > > @JamesSqf. Same point, Cromwell’s invasion of Ireland was a land conquest thus could not be genocide and besides Protestants can’t commit genocide against Catholics because Protestants and Catholics are sects of the same religion. > > > These are two different points aren’t they. A. 
Saying that Cromwell didn't commit genocide in Ireland because he confiscated Irish land is absurd. It's like claiming Hitler didn't commit genocide because he confiscated Jewish land. Absurd. B. As for the claim that it couldn't be genocide because Protestants and Catholics are sects of the same religion: again, your criteria and logic are not supported by the legal definition of genocide.
17,292
Did they like Oliver Cromwell, who founded a republic after a dispute about taxes led to the overthrow of a king, or did they see him as a usurping Caesar who destroyed a republic and became a tyrant?
2014/11/26
[ "https://history.stackexchange.com/questions/17292", "https://history.stackexchange.com", "https://history.stackexchange.com/users/8451/" ]
> > **Question:** Did the 'founding fathers' of the United States see Oliver Cromwell as a role model? > > > Oliver Cromwell died in 1658, more than a century before the Declaration of Independence. Cromwell was a religious fanatic, a regicidal dictator who waged religious genocide. He was the founding fathers' worst nightmare, their model of what not to do. Can you imagine if George Washington, after winning the Revolutionary War, had marched on Maryland (a colony settled by Catholics) and committed genocide against the colony's Catholics before establishing himself as dictator? No, the founding fathers were made up of many different religions: Protestant, Catholic, Unitarian and deist; religious fanaticism and religious persecution were things which concerned the founding fathers, who sidestepped that danger by forbidding the government from passing any laws "respecting the establishment of religion". The founding fathers intentionally created a form of government devoid of religious persecution. As for dictatorships, again the founding fathers rejected dictatorship. > > **@JamesSqf** > On "Religious genocide is a misuse of the term." > > > You cite a dictionary definition of the term. The term is legally defined in [Article 2 of the United Nations Convention on the Prevention and Punishment of the Crime of Genocide](https://en.m.wikipedia.org/wiki/Genocide_Convention) of 1948 as "any of the following acts committed with intent to destroy, in whole or in part, a national, ethnic, racial or **religious group.**" > > **[Genocide](https://en.m.wikipedia.org/wiki/Genocide)** is intentional action to destroy a people (usually defined as an ethnic, national, racial, or religious group) in whole or in part.
> > > Just to confirm here is a paper from Case Western Journal of International Law: [Exploring Critical Issues in Religious Genocide: Case Studies of Violence in Tibet, Iraq and Gujarat](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2380308) And Oliver Cromwell’s systemic attack on the Irish food supply which resulted in mass starvation of Ireland’s population and 200,000 to 600,000 civilian casualties and a further 50,000 deported into indentured slavery is referred to as an example of religious genocide. Parliamentary soldiers were not paid in coin but in confiscated Irish lands. Estimates of Ireland’s population drop as a result of Cromwell’s invasion run as high as 85%. **[Cromwellian conquest of Ireland](https://en.m.wikipedia.org/wiki/Cromwellian_conquest_of_Ireland)** > > John Hewson systematically destroyed food stocks in counties Wicklow and Kildare, Hardress Waller did likewise in the Burren in County Clare, as did Colonel Cook in County Wexford. The result was famine throughout much of Ireland, aggravated by an outbreak of bubonic plague.[26] As the guerrilla war ground on, the Parliamentarians, as of April 1651, designated areas such as County Wicklow and much of the south of the country as what would now be called free-fire zones, where anyone found would be, "taken slain and destroyed as enemies and their cattle and good shall be taken or spoiled as the goods of enemies".[27] This tactic had succeeded in the Nine Years' War. > This phase of the war was by far the most costly in terms of civilian loss of life. The combination of warfare, famine and plague caused a huge mortality among the Irish population. > > > . > > @JamesSqf. Same point, Cromwell’s invasion of Ireland was a land conquest thus could not be genocide and besides Protestants can’t commit genocide against Catholics because Protestants and Catholics are sects of the same religion. > > > These are two different points aren’t they. A. 
Saying that Cromwell didn't commit genocide in Ireland because he confiscated Irish land is absurd. It's like claiming Hitler didn't commit genocide because he confiscated Jewish land. Absurd. B. As for the claim that it couldn't be genocide because Protestants and Catholics are sects of the same religion: again, your criteria and logic are not supported by the legal definition of genocide.
I do think that the fact that Cromwell's sect of Puritans mostly emigrated to Massachusetts is part of the reason that the first shots of the revolution were fired there. Only a few generations later, there must have been a seething dislike of the crown. Think about supper-table talk: no doubt many were the grandchildren of people who fought in Cromwell's campaigns. Sympathy for Cromwell and his attitudes had to be present.
6,279
I just received an old 3D printer from one of my school teachers. I have no idea whatsoever as to which brand it is, no instruction manual attached to it, or any other info about it. How can I find some information about it? Some links would be very useful. Remember when giving advice that I know nothing about 3D printers. This is the printer: *Backside* [![Printer as seen from the back](https://i.stack.imgur.com/GC3mb.jpg "Printer as seen from the back")](https://i.stack.imgur.com/GC3mb.jpg "Printer as seen from the back") *Front* [![Printer as seen from the front](https://i.stack.imgur.com/FmDtw.jpg "Printer as seen from the front")](https://i.stack.imgur.com/FmDtw.jpg "Printer as seen from the front") *The X-axis stepper* [![The X-axis stepper](https://i.stack.imgur.com/OXuiX.jpg "The X-axis stepper")](https://i.stack.imgur.com/OXuiX.jpg "The X-axis stepper") *The electronics board* [![The electronics board](https://i.stack.imgur.com/YTJwl.jpg "The electronics board")](https://i.stack.imgur.com/YTJwl.jpg "The electronics board")
2018/07/04
[ "https://3dprinting.stackexchange.com/questions/6279", "https://3dprinting.stackexchange.com", "https://3dprinting.stackexchange.com/users/11245/" ]
This is an old 3D printer that looks a lot like the [Mendel](https://reprap.org/wiki/Mendel) or a simpler remix of the Mendel (the [Prusa Mendel](https://reprap.org/wiki/Prusa_Mendel)). I think this is a Mendel you have obtained, it was released in October 2009. This is a printer type from the early days, a lot about these printers can be found now that you know the type. These old types can be constructed from printed parts and hardware store materials. Nowadays, metals like steel and aluminium sheets or aluminium profiles are more commonly used.
As far as I can see in the pictures, the main board should be capable of running the Marlin firmware smoothly. If you connect power and a PC/Mac over the USB connection, then using [Pronterface](http://www.pronterface.com) you can validate the mechanical movements of the printer. As the rods look a bit dusty, please clean them with a soft cloth and degreaser to avoid jamming. If you have any issues, you could flash a new version of the firmware; please use [this answer](https://3dprinting.stackexchange.com/a/5849/9730) to the question [How to upload firmware to reprap printer?](https://3dprinting.stackexchange.com/questions/5848/how-to-upload-firmware-to-reprap-printer) as a guide on how to upload firmware to the printer.
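To make the "validate mechanical movements" step concrete: hosts like Pronterface simply stream plain-text G-code lines to the board over the serial port. The sketch below builds a minimal homing-plus-travel sequence you could paste into the host's console (the helper name `gcode_moves` is my own; real hosts may additionally prepend line numbers and checksums):

```python
def gcode_moves(points, feedrate=1500):
    """Build a minimal G-code sequence: home, switch to absolute
    positioning, then travel through `points` at `feedrate` mm/min."""
    lines = ["G28", "G90"]  # G28 = home all axes, G90 = absolute positioning
    for x, y in points:
        lines.append(f"G1 X{x:g} Y{y:g} F{feedrate:g}")
    return lines

# A small test path on the bed, well away from the edges:
for cmd in gcode_moves([(10, 10), (50, 10), (50, 50), (10, 10)]):
    print(cmd)
```

Sending a slow sequence like this first, and watching each axis, is a safe way to confirm the steppers, endstops and wiring before attempting a real print.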
20,681
I am creating an application where the user picks a value from a combo box; the application then selects that feature and the corresponding features from a second layer (both selections are in a different colour). What I want to do is, as soon as the selection is made, have the map zoom to the envelope of the selected features. I know how to zoom to two individual layers (using pEnvelope.Union...), but I need to do it on features. Has anyone got any idea of the best way to do this? (I'm using ArcGIS 9.3.1 and VBA.) Thanks
2012/02/22
[ "https://gis.stackexchange.com/questions/20681", "https://gis.stackexchange.com", "https://gis.stackexchange.com/users/4750/" ]
If you are outside of ArcMap, you would get the features' geometries, use ITopologicalOperator's union, and zoom to the extent of the unioned geometry.
If you have info about the object, you know its min/max X and Y values, so it's a matter of building some logic in the code to zoom to that envelope, taking scaling etc. into account.
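The envelope logic both answers describe - union the extents of the selected features, then zoom to the result - can be sketched independently of ArcObjects. This is a minimal illustration in plain Python; the function names and `(xmin, ymin, xmax, ymax)` tuples are illustrative, not part of the ArcObjects API (in VBA you would loop over the selected features' extents, call `IEnvelope.Union` on a running envelope, and assign the result to the active view's extent):

```python
def union_envelope(envelopes):
    """Combine (xmin, ymin, xmax, ymax) envelopes into one bounding envelope.

    Mirrors what repeatedly calling IEnvelope.Union does: the result is the
    smallest rectangle that contains every input rectangle.
    """
    if not envelopes:
        raise ValueError("need at least one envelope")
    xmins, ymins, xmaxs, ymaxs = zip(*envelopes)
    return (min(xmins), min(ymins), max(xmaxs), max(ymaxs))


def expand(envelope, factor=1.1):
    """Grow an envelope around its centre so features don't sit on the map edge."""
    xmin, ymin, xmax, ymax = envelope
    cx, cy = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
    hw = (xmax - xmin) / 2.0 * factor
    hh = (ymax - ymin) / 2.0 * factor
    return (cx - hw, cy - hh, cx + hw, cy + hh)
```

For example, `union_envelope([(0, 0, 2, 2), (1, 1, 5, 3)])` gives the combined envelope `(0, 0, 5, 3)`, which you would then (slightly expanded) set as the map's visible extent and refresh.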
3,791
I put that painters tape on the edges by the ceiling and cut in next to it (overlapping the tape with the paint somewhat). A couple of hours later, I pull the tape, and small sections of paint come off with it. How do I avoid this? Is it sufficient to just take a small brush afterward and touch up the area? It always seems to look crappier.
2011/01/04
[ "https://diy.stackexchange.com/questions/3791", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/1399/" ]
Remove the masking tape immediately after painting so that there's no time for the skin to form over the join between the tape and the painted surface. If the paint has already dried, use a craft knife and a straight edge or ruler to cut it along the edge of the tape.
For removing tape without damaging the finish it is stuck to, I've found that if I pull the tape back over itself and run my hand parallel to the surface I get the best results. When removing tape, the natural tendency is to pull it perpendicular to the surface which puts the most stress on the underlying paint. As already mentioned, most pros don't use tape. This may sound more difficult, but with a little practice it's not hard to master. It's also going to save you time. The Purdy paintbrush company has an excellent set of instructional videos with tips on how to cut in by hand. If you search for ["purdy cutting in"](http://www.google.com/#&q=purdy%20cutting%20in) you should find links to short little videos on the Purdy site as well as on YouTube. One other tip that I learned from my painter is to caulk the corner between the ceiling and the walls to help get a nice crisp line. Depending on what type of texture is on the wall this can really help.
3,791
I put that painters tape on the edges by the ceiling and cut in next to it (overlapping the tape with the paint somewhat). A couple of hours later, I pull the tape, and small sections of paint come off with it. How do I avoid this? Is it sufficient to just take a small brush afterward and touch up the area? It always seems to look crappier.
2011/01/04
[ "https://diy.stackexchange.com/questions/3791", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/1399/" ]
Remove the masking tape immediately after painting so that there's no time for the skin to form over the join between the tape and the painted surface. If the paint has already dried, use a craft knife and a straight edge or ruler to cut it along the edge of the tape.
I find that using a hairdryer to warm the tape helps it to peel away with very little force and no damage.
3,791
I put that painters tape on the edges by the ceiling and cut in next to it (overlapping the tape with the paint somewhat). A couple of hours later, I pull the tape, and small sections of paint come off with it. How do I avoid this? Is it sufficient to just take a small brush afterward and touch up the area? It always seems to look crappier.
2011/01/04
[ "https://diy.stackexchange.com/questions/3791", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/1399/" ]
Mistakes are bound to happen, and yes, you can touch it up with a very small crafting paintbrush (like the ones kids use for watercolour painting). You will probably never notice. The best solution, however, is to not use painter's tape on corners and ceilings. Typically in professional painting, tape is not used. If you use a small 2" cutting paintbrush, and apply by pulling the paintbrush at an angle toward you, keeping the smallest amount of bristles near the corner/edge, you will be fine. The only time I apply painter's tape is around trim - and this is more to prevent drippings and splatter than to protect a clean edge. If you get paint on the opposite wall using this tapeless method, it is most easily corrected immediately, while the paint is still wet. Simply use an all-in-one paint tool (or a putty knife) with a damp cloth (I use old t-shirts) pulled tight over the edge, then scrape away the paint that got on the opposing wall.
For removing tape without damaging the finish it is stuck to, I've found that if I pull the tape back over itself and run my hand parallel to the surface I get the best results. When removing tape, the natural tendency is to pull it perpendicular to the surface which puts the most stress on the underlying paint. As already mentioned, most pros don't use tape. This may sound more difficult, but with a little practice it's not hard to master. It's also going to save you time. The Purdy paintbrush company has an excellent set of instructional videos with tips on how to cut in by hand. If you search for ["purdy cutting in"](http://www.google.com/#&q=purdy%20cutting%20in) you should find links to short little videos on the Purdy site as well as on YouTube. One other tip that I learned from my painter is to caulk the corner between the ceiling and the walls to help get a nice crisp line. Depending on what type of texture is on the wall this can really help.
3,791
I put that painters tape on the edges by the ceiling and cut in next to it (overlapping the tape with the paint somewhat). A couple of hours later, I pull the tape, and small sections of paint come off with it. How do I avoid this? Is it sufficient to just take a small brush afterward and touch up the area? It always seems to look crappier.
2011/01/04
[ "https://diy.stackexchange.com/questions/3791", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/1399/" ]
Mistakes are bound to happen, and yes, you can touch it up with a very small crafting paintbrush (like the ones kids use for watercolour painting). You will probably never notice. The best solution, however, is to not use painter's tape on corners and ceilings. Typically in professional painting, tape is not used. If you use a small 2" cutting paintbrush, and apply by pulling the paintbrush at an angle toward you, keeping the smallest amount of bristles near the corner/edge, you will be fine. The only time I apply painter's tape is around trim - and this is more to prevent drippings and splatter than to protect a clean edge. If you get paint on the opposite wall using this tapeless method, it is most easily corrected immediately, while the paint is still wet. Simply use an all-in-one paint tool (or a putty knife) with a damp cloth (I use old t-shirts) pulled tight over the edge, then scrape away the paint that got on the opposing wall.
I find that using a hairdryer to warm the tape helps it to peel away with very little force and no damage.
3,791
I put that painters tape on the edges by the ceiling and cut in next to it (overlapping the tape with the paint somewhat). A couple of hours later, I pull the tape, and small sections of paint come off with it. How do I avoid this? Is it sufficient to just take a small brush afterward and touch up the area? It always seems to look crappier.
2011/01/04
[ "https://diy.stackexchange.com/questions/3791", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/1399/" ]
I find that using a hairdryer to warm the tape helps it to peel away with very little force and no damage.
For removing tape without damaging the finish it is stuck to, I've found that if I pull the tape back over itself and run my hand parallel to the surface I get the best results. When removing tape, the natural tendency is to pull it perpendicular to the surface which puts the most stress on the underlying paint. As already mentioned, most pros don't use tape. This may sound more difficult, but with a little practice it's not hard to master. It's also going to save you time. The Purdy paintbrush company has an excellent set of instructional videos with tips on how to cut in by hand. If you search for ["purdy cutting in"](http://www.google.com/#&q=purdy%20cutting%20in) you should find links to short little videos on the Purdy site as well as on YouTube. One other tip that I learned from my painter is to caulk the corner between the ceiling and the walls to help get a nice crisp line. Depending on what type of texture is on the wall this can really help.
72,387,399
It seems that Visual Studio 2022 has a new feature that resembles GitHub Autopilot. This is an image related to this feature: [![enter image description here](https://i.stack.imgur.com/RFvsD.png)](https://i.stack.imgur.com/RFvsD.png) This feature is very very annoying (slow and unpredictable and interfering with your typing speed). Thus I searched to see how can I disable it. But I can't find anything. So, how can I disable this?
2022/05/26
[ "https://Stackoverflow.com/questions/72387399", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16473956/" ]
The new Intellicode in Visual Studio was pretty annoying for me as well, and I was able to follow the steps in [this Stack Overflow post](https://stackoverflow.com/questions/70007337/how-to-disable-new-ai-based-intellicode-in-vs-2022) to disable it completely. However, if you just want to disable the specific prompt, you can press `esc` and dismiss the suggestion.
Some extensions can cause a typing delay, for example Rainbow Braces. It can cause a lag of a few seconds when typing, usually in large files (> 5000 lines). After disabling it, typing returns to normal.
271,150
On Microsoft Windows, I right-click on the file and choose "Copy". I place my cursor on the target directory, right-click on it and choose "Paste". The file will be copied to the target folder. It seems that Ubuntu 12.10 does not have this function. Is there a workaround?
2013/03/22
[ "https://askubuntu.com/questions/271150", "https://askubuntu.com", "https://askubuntu.com/users/116242/" ]
> > It seems that Ubuntu 12.10 does not have this function. > > > Yes, it does. In exactly the same way. Example image: ![enter image description here](https://i.stack.imgur.com/miQQb.png) It shows both cut (move) and copy as options. Paste is added/highlighted when you have something to paste: ![enter image description here](https://i.stack.imgur.com/wTnyO.png) The only reservation I have with this: * You need to have permission to use the location you want to move/copy files to. If you do not, you need to have the admin of that system open that location for that user. > > Is there a workaround? > > > No, and not needed either.
My 12.10 installation using the session fallback desktop environment provides these options once a file is selected and right-clicked: ![Nautilus](https://i.stack.imgur.com/QVL6P.png) Applications | Accessories | Screenshot is available to aid in taking screenshots.
45,632,167
I'm currently implementing a basic deferred renderer with multithreading in Vulkan. Since my G-Buffer should have the same resolution as the final image I want to do it in a single render-pass with multiple sub-passes, according to [this](https://www.khronos.org/assets/uploads/developers/library/2016-vulkan-devday-uk/6-Vulkan-subpasses.pdf) presentation, on slide 44 (page 138). It says: * vkCmdBeginCommandBuffer * vkCmdBeginRenderPass * vkCmdExecuteCommands * vkCmdNextSubpass * vkCmdExecuteCommands * vkCmdEndRenderPass * vkCmdEndCommandBuffer I get that in the first sub-pass, you iterate the scene graph and record one secondary commandbuffer for each entity/mesh. What I don't get is how you are supposed to do the shading pass with secondary command buffers. Do you somehow spit the screen into parts and render each part in a separate thread or just record one secondary commandbuffer for the entire second sub-pass?
2017/08/11
[ "https://Stackoverflow.com/questions/45632167", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4801255/" ]
> > What I don't get is how you are supposed to do the shading pass with secondary command buffers. > > > The shading pass (presumably the second subpass) would possibly take the G-buffers created by the first subpass as an *Input Attachment*. Then it would draw to an equally sized screen-size quad using data from the G-buffers + from a set of lights (or whatever your deferred shader tries to defer). The presentation you link tries to hint at this structure style starting at page 13 (marked "Page 107"). The first step would be to get it working. Use e.g. this [SW example](https://github.com/SaschaWillems/Vulkan/tree/master/deferred). Then the next step of optimizing it into a single render pass should be easier.
To me, like you said, you may need to multithread your command buffer recording for the "building the G-buffer" subpass. For the shading pass, however, it depends on how you are doing things. To me (again), you do not need to multithread your shading subpasses, but you must take into consideration that you can have a "by region" dependency. So, I encourage you to proceed this way. Before you begin your render pass, use a compute shader to splat all your lights on the screen (here you get a kind of array of "quads"). By splatting I mean this kind of thing: you have a point light (for example), and the idea is to compute the quad in screen space affected by the light. With that you have 4 vertices (representing a quad) that you put into an SSBO, and you can use it as a vertex buffer in the shading subpass. Now you begin the render pass. 1. Multithread the scene graph rendering if needed, and do your vkCmdExecuteCommands(). 2. vkCmdNextSubpass. 3. Use the "array of quads" you created in the earlier compute shader (do not forget a VK\_SUBPASS\_EXTERNAL dependency). 4. vkCmdNextSubpass, and so on. However, you said > > you iterate the scene graph and record one secondary commandbuffer for each entity/mesh. > > > I am not sure I really understand what you meant, but if you intend to have one secondary command buffer per mesh, I really advise you to change that approach. You must use batching. Let's say you have 64,000 different meshes to draw. You could, for example, create 64 command buffers (dispatched across 4 threads), with each command buffer drawing 1,000 meshes. (The numbers are taken at random, so profile your application.) So to answer your question for the shading subpass, I would not use secondary command buffers, or only very few (by kind of light: punctual, directional).
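The batching arithmetic suggested above - split N meshes into a fixed number of secondary command buffers and spread their recording over a few worker threads - can be sketched independently of Vulkan. This is an illustrative helper, not Vulkan API code; the function and parameter names are made up for the example:

```python
def plan_batches(mesh_count, buffer_count, thread_count):
    """Split mesh_count draws into buffer_count secondary command buffers,
    then assign those buffers round-robin to thread_count recording threads.

    Returns (ranges, assignment): ranges[i] is the (start, end) mesh slice
    recorded into buffer i; assignment[t] lists the buffer indices that
    thread t records.
    """
    base, extra = divmod(mesh_count, buffer_count)
    ranges, start = [], 0
    for i in range(buffer_count):
        size = base + (1 if i < extra else 0)  # spread any remainder evenly
        ranges.append((start, start + size))
        start += size
    assignment = [list(range(t, buffer_count, thread_count))
                  for t in range(thread_count)]
    return ranges, assignment
```

With the numbers from the answer, `plan_batches(64000, 64, 4)` yields 64 slices of 1,000 meshes each and 16 buffers per thread; as the answer says, the right split is something to profile rather than hard-code.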
204,757
Our finished basement has electric baseboard heat. The baseboard is powered by a light switch, so I either have the heat on or off. Can I take out the switch and use the wiring to connect a digital thermostat instead?
2020/10/03
[ "https://diy.stackexchange.com/questions/204757", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/124022/" ]
What you need is a line voltage thermostat. Non-programmable ones are very common. Programmable (digital, as you call it) line voltage stats are available but rare. I attached a pic of one that I found. Search for "line voltage thermostat" and you'll find a few. If you don't care about programmability, simple line voltage stats are as common as dirt. Just be sure it's two pole. I don't know if single pole stats are still available, but for 240, I don't consider them safe. Not sure what NoSparksPlease was getting at regarding the disconnecting switch (???); I would assume the existing circuit is on a breaker and that would satisfy the code. [![Line voltage programmable thermostat](https://i.stack.imgur.com/S9mu0.png)](https://i.stack.imgur.com/S9mu0.png)
You normally can, but not every digital thermostat will work. You will need to select one that specifies the proper voltage (120 or 240 V); most digital thermostats are only rated for 24 V or less, and some digital stats require a neutral, which may not be present if two-wire cable was used on a 240 V circuit. Most line voltage digital thermostats do not satisfy electrical code requirements that may require an electrical disconnecting switch, so you may have to install a bracket in your electrical panel so that you can lock the breaker in the off position. Also, my recollection, checked against a couple of online manufacturers' webpages, is that the installation instructions that come with baseboards specify either a built-in or a wall thermostat. The installation instructions are part of the NRTL listing (UL, ETL, CSA), and the electrical codes require following those instructions. So it is likely that your existing installation is not legal.
28,092
I want to give our sales team the ability to create custom slidedecks and then show them while on sales visits. Obviously, they're sales people, so they're not technically savvy. Just looking for ideas how I might be able to do this or at least something to start with. I was thinking that the slides could even be created aleady, they would just need to pick which slides would be shown and then a URL created dynamically. Perhaps using query string..... hmm. Ideas? Wouldn't need to be power point either.
2012/01/31
[ "https://sharepoint.stackexchange.com/questions/28092", "https://sharepoint.stackexchange.com", "https://sharepoint.stackexchange.com/users/4808/" ]
SharePoint's answer to this would be to create a Slide Library. Then sales people can connect to the slide library using PowerPoint and select the desired slides within PowerPoint. If the slides in the Slide Library are updated, then when people open PowerPoint presentations containing such slides, they will get a notification and can automatically update the slides that have been updated. [More about slide libraries.](http://office.microsoft.com/serverhelp/helppreview14.aspx?AssetId=HA010338394&lcid=1033&products=WSSENDUSER&CTT=3) Other than that, you would need to implement a custom solution for selecting the slides and generating a presentation from the selected slides. You can then display them just with PowerPoint, or perhaps using Office Web Apps so that sales people could use a browser to show the slides. They would need a network connection to your server in that case, so it might not be convenient in the end.
Microsoft pre-sales people have a demo on this using the FAST search engine to find PowerPoint files and drag and drop slides to dynamically create a new one. It looks really good, but you'll need to do some development to build it yourself.
57,350
I was recently party to a preliminary hearing in a criminal case in which a 911 call was played. The content of the 911 call was very beneficial to the defendant. During the hearing, the judge said something to the effect of, "It would be a question for the jury whether the caller was 'being honest.'" In other words, the judge was suggesting that the caller to 911 was being disingenuous and deliberately conveying a false impression -- something that I think would never occur to most people, because the call seemed completely genuine. Also, the caller would have had no motive for faking their mental state. I got the idea that the judge was looking for some angle to discredit the 911 call out of an interest in seeing the defendant convicted. In any case, from the judge's remark I got the idea that he might say something like that during the trial, or give the jury an instruction designed to make them think the 911 caller was putting on an elaborate act. Is a judge allowed to do this?
2020/10/21
[ "https://law.stackexchange.com/questions/57350", "https://law.stackexchange.com", "https://law.stackexchange.com/users/2609/" ]
It is the jury's job to evaluate the credibility of the witnesses, and it is the judge's job to inform them of that responsibility. It is not appropriate, however, for the judge to indicate to the jury what answer they should come to on those questions. In [*Quercia v. United States*, 289 U.S. 466 (1933)](https://supreme.justia.com/cases/federal/us/289/466/), the defendant in a drug case took the stand to deny the charges. Before the jury went to deliberate, the judge made the following observation: > > I am going to tell you what I think of the defendant's testimony. You may have noticed, Mr. Foreman and gentlemen, that he wiped his hands during his testimony. It is rather a curious thing, but that is almost always an indication of lying. Why it should be so we don't know, but that is the fact. I think that every single word that man said, except when he agreed with the Government's testimony, was a lie. > > > The jury convicted, but the U.S. Supreme Court reversed, holding that the instruction was an error. It said that the judge has the right, generally speaking, to comment on the evidence, but that right is not unlimited, because juries are likely to be swayed by the judge's assessments, even if he instructs them to make their own decisions: > > The influence of the trial judge on the jury is necessarily and properly of great weight and his lightest word or intimation is received with deference, and may prove controlling. This court has accordingly emphasized the duty of the trial judge to use great care that an expression of opinion upon the evidence should be so given as not to mislead, and especially that it should not be one-sided; that deductions and theories not warranted by the evidence should be studiously avoided. > > > The comment you seem to be imagining is a closer call than this, but I think most judges would agree it would be inappropriate.
At a preliminary hearing, though, where there is no jury, there is no real problem with the judge making that comment. If I were the defense attorney, I'd be glad he did, as it would help inform my decision about whether to pursue a jury trial or a bench trial.
In the US, in most if not all states, the judge at a jury trial may not comment in such a way as to indicate a belief in the truth or falsity of testimony or the guilt or innocence of the accused. I believe the rule is different in the UK and perhaps elsewhere. However, this was not in the presence of a jury. **Addition**: This is because in a jury trial the jury, not the judge, is supposed to determine the facts. The drafters of US procedure apparently thought that any comment by the judge would be highly influential with the jury. I think that this was a reaction against 17th- and 18th-century British practice. In England and Wales, at least, it used to be the case that the judge would routinely comment on the evidence, both during the course of the trial and during the "summing up", which followed the evidence but, I think, came before the arguments of the lawyers. I am not sure if this is still the procedure.
57,350
I was recently party to a preliminary hearing in a criminal case in which a 911 call was played. The content of the 911 call was very beneficial to the defendant. During the hearing, the judge said something to the effect of, "It would be a question for the jury whether the caller was 'being honest.'" In other words, the judge was suggesting that the caller to 911 was being disingenuous and deliberately conveying a false impression -- something that I think would never occur to most people, because the call seemed completely genuine. Also, the caller would have had no motive for faking their mental state. I got the idea that the judge was looking for some angle to discredit the 911 call out of an interest in seeing the defendant convicted. In any case, from the judge's remark I got the idea that he might say something like that during the trial, or give the jury an instruction designed to make them think the 911 caller was putting on an elaborate act. Is a judge allowed to do this?
2020/10/21
[ "https://law.stackexchange.com/questions/57350", "https://law.stackexchange.com", "https://law.stackexchange.com/users/2609/" ]
It is the jury's job to evaluate the credibility of the witnesses, and it is the judge's job to inform them of that responsibility. It is not appropriate, however, for the judge to indicate to the jury what answer they should come to on those questions. In [*Quercia v. United States*, 289 U.S. 466 (1933)](https://supreme.justia.com/cases/federal/us/289/466/), the defendant in a drug case took the stand to deny the charges. Before the jury went to deliberate, the judge made the following observation: > > I am going to tell you what I think of the defendant's testimony. You may have noticed, Mr. Foreman and gentlemen, that he wiped his hands during his testimony. It is rather a curious thing, but that is almost always an indication of lying. Why it should be so we don't know, but that is the fact. I think that every single word that man said, except when he agreed with the Government's testimony, was a lie. > > > The jury convicted, but the U.S. Supreme Court reversed, holding that the instruction was an error. It said that the judge has the right, generally speaking, to comment on the evidence, but that right is not unlimited, because juries are likely to be swayed by the judge's assessments, even if he instructs them to make their own decisions: > > The influence of the trial judge on the jury is necessarily and properly of great weight and his lightest word or intimation is received with deference, and may prove controlling. This court has accordingly emphasized the duty of the trial judge to use great care that an expression of opinion upon the evidence should be so given as not to mislead, and especially that it should not be one-sided; that deductions and theories not warranted by the evidence should be studiously avoided. > > > The comment you seem to be imagining is a closer call than this, but I think most judges would agree it would be inappropriate.
At a preliminary hearing, though, where there is no jury, there is no real problem with the judge making that comment. If I were the defense attorney, I'd be glad he did, as it would help inform my decision about whether to pursue a jury trial or a bench trial.
It is the judge's obligation to instruct the jury w.r.t. believing witnesses. [This](https://govt.westlaw.com/wcrji/Document/Ief97fa32e10d11daade1ae871d9b2cbe?viewType=FullText&originationContext=documenttoc&transitionType=DocumentItem&contextData=(sc.Default)&bhcp=1) is the introductory instruction for criminal trials in Washington, which on that topic says > > You are the sole judges of the credibility of each witness. You are > also the sole judges of the value or weight to be given to the > testimony of each witness. In assessing credibility, you must avoid > bias, conscious or unconscious, including bias based on religion, > ethnicity, race, sexual orientation, gender or disability. In > considering a witness's testimony, you may consider these things: the > opportunity of the witness to observe or know the things he or she > testifies about; the ability of the witness to observe accurately; the > quality of a witness's memory while testifying; the manner of the > witness while testifying; any personal interest that the witness might > have in the outcome or the issues; any bias or prejudice that the > witness may have shown; the reasonableness of the witness's statements > in the context of all of the other evidence; and any other factors > that affect your evaluation or belief of a witness or your evaluation > of his or her testimony. > > > Some such statement will be made in any trial. There used to be a more specific instruction in witness credibility, but it was withdrawn. The general instruction also says > > Our state constitution prohibits a trial judge from making a comment > on the evidence. It would be improper for me to express, by words or > conduct, my personal opinion about the value of testimony or other > evidence. I have not intentionally done this. If it appeared to you > that I have indicated my personal opinion in any way, either during > trial or in giving these instructions, you must disregard this > entirely. 
> > > Except for Texas and West Virginia, all states have such instructions. If a judge went off the rails and said "You are going to have to decide if you believe all that stuff that Smith said", that would be reversible error. The judge may not imply belief or disbelief, and may not make comments that tend to favor the defense vs. the prosecution.
57,350
I was recently party to a preliminary hearing in a criminal case in which a 911 call was played. The content of the 911 call was very beneficial to the defendant. During the hearing, the judge said something to the effect of, "It would be a question for the jury whether the caller was 'being honest.'" In other words, the judge was suggesting that the caller to 911 was being disingenuous and deliberately conveying a false impression -- something that I think would never occur to most people, because the call seemed completely genuine. Also, the caller would have had no motive for faking their mental state. I got the idea that the judge was looking for some angle to discredit the 911 call out of an interest in seeing the defendant convicted. In any case, from the judge's remark I got the idea that he might say something like that during the trial, or give the jury an instruction designed to make them think the 911 caller was putting on an elaborate act. Is a judge allowed to do this?
2020/10/21
[ "https://law.stackexchange.com/questions/57350", "https://law.stackexchange.com", "https://law.stackexchange.com/users/2609/" ]
It is the jury's job to evaluate the credibility of the witnesses, and it is the judge's job to inform them of that responsibility. It is not appropriate, however, for the judge to indicate to the jury what answer they should come to on those questions. In [*Quercia v. United States*, 289 U.S. 466 (1933)](https://supreme.justia.com/cases/federal/us/289/466/), the defendant in a drug case took the stand to deny the charges. Before the jury went to deliberate, the judge made the following observation: > > I am going to tell you what I think of the defendant's testimony. You may have noticed, Mr. Foreman and gentlemen, that he wiped his hands during his testimony. It is rather a curious thing, but that is almost always an indication of lying. Why it should be so we don't know, but that is the fact. I think that every single word that man said, except when he agreed with the Government's testimony, was a lie. > > > The jury convicted, but the U.S. Supreme Court reversed, holding that the instruction was an error. It said that the judge has the right, generally speaking, to comment on the evidence, but that right is not unlimited, because juries are likely to be swayed by the judge's assessments, even if he instructs them to make their own decisions: > > The influence of the trial judge on the jury is necessarily and properly of great weight and his lightest word or intimation is received with deference, and may prove controlling. This court has accordingly emphasized the duty of the trial judge to use great care that an expression of opinion upon the evidence should be so given as not to mislead, and especially that it should not be one-sided; that deductions and theories not warranted by the evidence should be studiously avoided. > > > The comment you seem to be imagining is a closer call than this, but I think most judges would agree it would be inappropriate.
At a preliminary hearing, though, where there is no jury, there is no real problem with the judge making that comment. If I were the defense attorney, I'd be glad he did, as it would help inform my decision about whether to pursue a jury trial or a bench trial.
There is a concept of [implicature](https://en.wikipedia.org/wiki/Implicature) that says that meaning is conveyed not only by the meanings of the words, but by the circumstances that are likely to cause someone to utter those words. There is nothing in the literal meanings of the words that says that the witness is lying. Your belief that it conveys that seems to be based on implicature: the judge would not feel compelled to mention it unless they thought the witness is lying. However, that is not necessarily the case. Certainly if for one witness and one witness only, the judge were to say this without anything else giving them reason to, it could come across as implying that the witness is untrustworthy. On the other hand, if the judge were to say this each time a witness takes the stand, or say it at the very beginning of the trial and indicate that it applies to all witnesses, or say it when prompted by something more than just the witness testifying, such as a party asking the judge to make a decision that they believe relies on an assertion of fact, then this inference is less valid.
57,350
I was recently party to a preliminary hearing in a criminal case in which a 911 call was played. The content of the 911 call was very beneficial to the defendant. During the hearing, the judge said something to the effect of, "It would be a question for the jury whether the caller was 'being honest.'" In other words, the judge was suggesting that the 911 caller was being disingenuous and deliberately conveying a false impression -- something that I think would never occur to most people because the call seemed completely genuine. Also, the caller would have had no motive to fake their mental state. I got the idea that the judge was looking for some angle to discredit the 911 call out of interest to see the defendant convicted. In any case, from the judge's remark I got the idea that he might say something like that during the trial, or give the jury an instruction designed to make them think the 911 caller was putting on an elaborate act. Is a judge allowed to do this?
2020/10/21
[ "https://law.stackexchange.com/questions/57350", "https://law.stackexchange.com", "https://law.stackexchange.com/users/2609/" ]
It is the jury's job to evaluate the credibility of the witnesses, and it is the judge's job to inform them of that responsibility. It is not appropriate, however, for the judge to indicate to the jury what answer they should come to on those questions. In [*Quercia v. United States*, 289 U.S. 466 (1933)](https://supreme.justia.com/cases/federal/us/289/466/), the defendant in a drug case took the stand to deny the charges. Before the jury went to deliberate, the judge made the following observation: > > I am going to tell you what I think of the defendant's testimony. You may have noticed, Mr. Foreman and gentlemen, that he wiped his hands during his testimony. It is rather a curious thing, but that is almost always an indication of lying. Why it should be so we don't know, but that is the fact. I think that every single word that man said, except when he agreed with the Government's testimony, was a lie. > > > The jury convicted, but the U.S. Supreme Court reversed, holding that the instruction was an error. It said that the judge has the right, generally speaking, to comment on the evidence, but that right is not unlimited, because juries are likely to be swayed by the judge's assessments, even if he instructs them to make their own decisions: > > The influence of the trial judge on the jury is necessarily and properly of great weight and his lightest word or intimation is received with deference, and may prove controlling. This court has accordingly emphasized the duty of the trial judge to use great care that an expression of opinion upon the evidence should be so given as not to mislead, and especially that it should not be one-sided; that deductions and theories not warranted by the evidence should be studiously avoided. > > > The comment you seem to be imagining is a closer call than this, but I think most judges would agree it would be inappropriate.
At a preliminary hearing, though, where there is no jury, there is no real problem with the judge making that comment. If I were the defense attorney, I'd be glad he did, as it would help inform my decision about whether to pursue a jury trial or a bench trial.
Of course he can, but in some jurisdictions he will have to choose his words carefully. Depending on tone or pace of voice, "it's for the jury to decide (this or that)" will often be heard by the jury to mean "but I don't believe it". Since the meaning depends on tone or pace of voice, who could ever prove anything against the judge?
57,350
I was recently party to a preliminary hearing in a criminal case in which a 911 call was played. The content of the 911 call was very beneficial to the defendant. During the hearing, the judge said something to the effect of, "It would be a question for the jury whether the caller was 'being honest.'" In other words, the judge was suggesting that the 911 caller was being disingenuous and deliberately conveying a false impression -- something that I think would never occur to most people because the call seemed completely genuine. Also, the caller would have had no motive to fake their mental state. I got the idea that the judge was looking for some angle to discredit the 911 call out of interest to see the defendant convicted. In any case, from the judge's remark I got the idea that he might say something like that during the trial, or give the jury an instruction designed to make them think the 911 caller was putting on an elaborate act. Is a judge allowed to do this?
2020/10/21
[ "https://law.stackexchange.com/questions/57350", "https://law.stackexchange.com", "https://law.stackexchange.com/users/2609/" ]
It is the judge's obligation to instruct the jury w.r.t. believing witnesses. [This](https://govt.westlaw.com/wcrji/Document/Ief97fa32e10d11daade1ae871d9b2cbe?viewType=FullText&originationContext=documenttoc&transitionType=DocumentItem&contextData=(sc.Default)&bhcp=1) is the introductory instruction for criminal trials in Washington, which on that topic says > > You are the sole judges of the credibility of each witness. You are > also the sole judges of the value or weight to be given to the > testimony of each witness. In assessing credibility, you must avoid > bias, conscious or unconscious, including bias based on religion, > ethnicity, race, sexual orientation, gender or disability. In > considering a witness's testimony, you may consider these things: the > opportunity of the witness to observe or know the things he or she > testifies about; the ability of the witness to observe accurately; the > quality of a witness's memory while testifying; the manner of the > witness while testifying; any personal interest that the witness might > have in the outcome or the issues; any bias or prejudice that the > witness may have shown; the reasonableness of the witness's statements > in the context of all of the other evidence; and any other factors > that affect your evaluation or belief of a witness or your evaluation > of his or her testimony. > > > Some such statement will be made in any trial. There used to be a more specific instruction in witness credibility, but it was withdrawn. The general instruction also says > > Our state constitution prohibits a trial judge from making a comment > on the evidence. It would be improper for me to express, by words or > conduct, my personal opinion about the value of testimony or other > evidence. I have not intentionally done this. If it appeared to you > that I have indicated my personal opinion in any way, either during > trial or in giving these instructions, you must disregard this > entirely. 
> > > Except for Texas and West Virginia, all states have such instructions. If a judge went off the rails and said "You are going to have to decide if you believe all that stuff that Smith said", that would be reversible error. The judge may not imply belief or disbelief, and may not make comments that tend to favor the defense vs. the prosecution.
In the US, in most if not all states, the Judge at a jury trial may not comment in such a way as to indicate a belief in the truth or falsity of testimony or the guilt or innocence of the accused. I believe the rule is different in the UK and perhaps elsewhere. However this was not in the presence of a jury. **Addition**: This is because in a jury trial, the jury, not the Judge, is supposed to determine the facts. The drafters of US Procedure apparently thought that any comment by the judge would be highly influential with the jury. I think that this was a reaction against 17th C and 18th C British practice. In England and Wales at least it used to be the case that the Judge would routinely comment on the evidence, both during the course of the trial, and during the "Summing Up" which followed the evidence but I think came before the arguments of the lawyers. I am not sure if this is still the procedure.
57,350
I was recently party to a preliminary hearing in a criminal case in which a 911 call was played. The content of the 911 call was very beneficial to the defendant. During the hearing, the judge said something to the effect of, "It would be a question for the jury whether the caller was 'being honest.'" In other words, the judge was suggesting that the 911 caller was being disingenuous and deliberately conveying a false impression -- something that I think would never occur to most people because the call seemed completely genuine. Also, the caller would have had no motive to fake their mental state. I got the idea that the judge was looking for some angle to discredit the 911 call out of interest to see the defendant convicted. In any case, from the judge's remark I got the idea that he might say something like that during the trial, or give the jury an instruction designed to make them think the 911 caller was putting on an elaborate act. Is a judge allowed to do this?
2020/10/21
[ "https://law.stackexchange.com/questions/57350", "https://law.stackexchange.com", "https://law.stackexchange.com/users/2609/" ]
There is a concept of [implicature](https://en.wikipedia.org/wiki/Implicature) that says that meaning is conveyed not only by the meanings of the words, but by the circumstances that are likely to cause someone to utter those words. There is nothing in the literal meanings of the words that says that the witness is lying. Your belief that it conveys that seems to be based on implicature: the judge would not feel compelled to mention it unless they thought the witness was lying. However, that is not necessarily the case. Certainly if, for one witness and one witness only, the judge were to say this without anything else giving them reason to, it could come across as implying that the witness is untrustworthy. On the other hand, if the judge were to say this each time a witness takes the stand, or say it at the very beginning of the trial and indicate that it applies to all witnesses, or say it when prompted by something more than just the witness testifying, such as a party asking the judge to make a decision that they believe relies on an assertion of fact, then this inference is less valid.
In the US, in most if not all states, the Judge at a jury trial may not comment in such a way as to indicate a belief in the truth or falsity of testimony or the guilt or innocence of the accused. I believe the rule is different in the UK and perhaps elsewhere. However this was not in the presence of a jury. **Addition**: This is because in a jury trial, the jury, not the Judge, is supposed to determine the facts. The drafters of US Procedure apparently thought that any comment by the judge would be highly influential with the jury. I think that this was a reaction against 17th C and 18th C British practice. In England and Wales at least it used to be the case that the Judge would routinely comment on the evidence, both during the course of the trial, and during the "Summing Up" which followed the evidence but I think came before the arguments of the lawyers. I am not sure if this is still the procedure.
57,350
I was recently party to a preliminary hearing in a criminal case in which a 911 call was played. The content of the 911 call was very beneficial to the defendant. During the hearing, the judge said something to the effect of, "It would be a question for the jury whether the caller was 'being honest.'" In other words, the judge was suggesting that the 911 caller was being disingenuous and deliberately conveying a false impression -- something that I think would never occur to most people because the call seemed completely genuine. Also, the caller would have had no motive to fake their mental state. I got the idea that the judge was looking for some angle to discredit the 911 call out of interest to see the defendant convicted. In any case, from the judge's remark I got the idea that he might say something like that during the trial, or give the jury an instruction designed to make them think the 911 caller was putting on an elaborate act. Is a judge allowed to do this?
2020/10/21
[ "https://law.stackexchange.com/questions/57350", "https://law.stackexchange.com", "https://law.stackexchange.com/users/2609/" ]
In the US, in most if not all states, the Judge at a jury trial may not comment in such a way as to indicate a belief in the truth or falsity of testimony or the guilt or innocence of the accused. I believe the rule is different in the UK and perhaps elsewhere. However this was not in the presence of a jury. **Addition**: This is because in a jury trial, the jury, not the Judge, is supposed to determine the facts. The drafters of US Procedure apparently thought that any comment by the judge would be highly influential with the jury. I think that this was a reaction against 17th C and 18th C British practice. In England and Wales at least it used to be the case that the Judge would routinely comment on the evidence, both during the course of the trial, and during the "Summing Up" which followed the evidence but I think came before the arguments of the lawyers. I am not sure if this is still the procedure.
Of course he can, but in some jurisdictions he will have to choose his words carefully. Depending on tone or pace of voice, "it's for the jury to decide (this or that)" will often be heard by the jury to mean "but I don't believe it". Since the meaning depends on tone or pace of voice, who could ever prove anything against the judge?
57,350
I was recently party to a preliminary hearing in a criminal case in which a 911 call was played. The content of the 911 call was very beneficial to the defendant. During the hearing, the judge said something to the effect of, "It would be a question for the jury whether the caller was 'being honest.'" In other words, the judge was suggesting that the 911 caller was being disingenuous and deliberately conveying a false impression -- something that I think would never occur to most people because the call seemed completely genuine. Also, the caller would have had no motive to fake their mental state. I got the idea that the judge was looking for some angle to discredit the 911 call out of interest to see the defendant convicted. In any case, from the judge's remark I got the idea that he might say something like that during the trial, or give the jury an instruction designed to make them think the 911 caller was putting on an elaborate act. Is a judge allowed to do this?
2020/10/21
[ "https://law.stackexchange.com/questions/57350", "https://law.stackexchange.com", "https://law.stackexchange.com/users/2609/" ]
It is the judge's obligation to instruct the jury w.r.t. believing witnesses. [This](https://govt.westlaw.com/wcrji/Document/Ief97fa32e10d11daade1ae871d9b2cbe?viewType=FullText&originationContext=documenttoc&transitionType=DocumentItem&contextData=(sc.Default)&bhcp=1) is the introductory instruction for criminal trials in Washington, which on that topic says > > You are the sole judges of the credibility of each witness. You are > also the sole judges of the value or weight to be given to the > testimony of each witness. In assessing credibility, you must avoid > bias, conscious or unconscious, including bias based on religion, > ethnicity, race, sexual orientation, gender or disability. In > considering a witness's testimony, you may consider these things: the > opportunity of the witness to observe or know the things he or she > testifies about; the ability of the witness to observe accurately; the > quality of a witness's memory while testifying; the manner of the > witness while testifying; any personal interest that the witness might > have in the outcome or the issues; any bias or prejudice that the > witness may have shown; the reasonableness of the witness's statements > in the context of all of the other evidence; and any other factors > that affect your evaluation or belief of a witness or your evaluation > of his or her testimony. > > > Some such statement will be made in any trial. There used to be a more specific instruction in witness credibility, but it was withdrawn. The general instruction also says > > Our state constitution prohibits a trial judge from making a comment > on the evidence. It would be improper for me to express, by words or > conduct, my personal opinion about the value of testimony or other > evidence. I have not intentionally done this. If it appeared to you > that I have indicated my personal opinion in any way, either during > trial or in giving these instructions, you must disregard this > entirely. 
> > > Except for Texas and West Virginia, all states have such instructions. If a judge went off the rails and said "You are going to have to decide if you believe all that stuff that Smith said", that would be reversible error. The judge may not imply belief or disbelief, and may not make comments that tend to favor the defense vs. the prosecution.
There is a concept of [implicature](https://en.wikipedia.org/wiki/Implicature) that says that meaning is conveyed not only by the meanings of the words, but by the circumstances that are likely to cause someone to utter those words. There is nothing in the literal meanings of the words that says that the witness is lying. Your belief that it conveys that seems to be based on implicature: the judge would not feel compelled to mention it unless they thought the witness was lying. However, that is not necessarily the case. Certainly if, for one witness and one witness only, the judge were to say this without anything else giving them reason to, it could come across as implying that the witness is untrustworthy. On the other hand, if the judge were to say this each time a witness takes the stand, or say it at the very beginning of the trial and indicate that it applies to all witnesses, or say it when prompted by something more than just the witness testifying, such as a party asking the judge to make a decision that they believe relies on an assertion of fact, then this inference is less valid.
57,350
I was recently party to a preliminary hearing in a criminal case in which a 911 call was played. The content of the 911 call was very beneficial to the defendant. During the hearing, the judge said something to the effect of, "It would be a question for the jury whether the caller was 'being honest.'" In other words, the judge was suggesting that the 911 caller was being disingenuous and deliberately conveying a false impression -- something that I think would never occur to most people because the call seemed completely genuine. Also, the caller would have had no motive to fake their mental state. I got the idea that the judge was looking for some angle to discredit the 911 call out of interest to see the defendant convicted. In any case, from the judge's remark I got the idea that he might say something like that during the trial, or give the jury an instruction designed to make them think the 911 caller was putting on an elaborate act. Is a judge allowed to do this?
2020/10/21
[ "https://law.stackexchange.com/questions/57350", "https://law.stackexchange.com", "https://law.stackexchange.com/users/2609/" ]
It is the judge's obligation to instruct the jury w.r.t. believing witnesses. [This](https://govt.westlaw.com/wcrji/Document/Ief97fa32e10d11daade1ae871d9b2cbe?viewType=FullText&originationContext=documenttoc&transitionType=DocumentItem&contextData=(sc.Default)&bhcp=1) is the introductory instruction for criminal trials in Washington, which on that topic says > > You are the sole judges of the credibility of each witness. You are > also the sole judges of the value or weight to be given to the > testimony of each witness. In assessing credibility, you must avoid > bias, conscious or unconscious, including bias based on religion, > ethnicity, race, sexual orientation, gender or disability. In > considering a witness's testimony, you may consider these things: the > opportunity of the witness to observe or know the things he or she > testifies about; the ability of the witness to observe accurately; the > quality of a witness's memory while testifying; the manner of the > witness while testifying; any personal interest that the witness might > have in the outcome or the issues; any bias or prejudice that the > witness may have shown; the reasonableness of the witness's statements > in the context of all of the other evidence; and any other factors > that affect your evaluation or belief of a witness or your evaluation > of his or her testimony. > > > Some such statement will be made in any trial. There used to be a more specific instruction in witness credibility, but it was withdrawn. The general instruction also says > > Our state constitution prohibits a trial judge from making a comment > on the evidence. It would be improper for me to express, by words or > conduct, my personal opinion about the value of testimony or other > evidence. I have not intentionally done this. If it appeared to you > that I have indicated my personal opinion in any way, either during > trial or in giving these instructions, you must disregard this > entirely. 
> > > Except for Texas and West Virginia, all states have such instructions. If a judge went off the rails and said "You are going to have to decide if you believe all that stuff that Smith said", that would be reversible error. The judge may not imply belief or disbelief, and may not make comments that tend to favor the defense vs. the prosecution.
Of course he can, but in some jurisdictions he will have to choose his words carefully. Depending on tone or pace of voice, "it's for the jury to decide (this or that)" will often be heard by the jury to mean "but I don't believe it". Since the meaning depends on tone or pace of voice, who could ever prove anything against the judge?
57,350
I was recently party to a preliminary hearing in a criminal case in which a 911 call was played. The content of the 911 call was very beneficial to the defendant. During the hearing, the judge said something to the effect of, "It would be a question for the jury whether the caller was 'being honest.'" In other words, the judge was suggesting that the 911 caller was being disingenuous and deliberately conveying a false impression -- something that I think would never occur to most people because the call seemed completely genuine. Also, the caller would have had no motive to fake their mental state. I got the idea that the judge was looking for some angle to discredit the 911 call out of interest to see the defendant convicted. In any case, from the judge's remark I got the idea that he might say something like that during the trial, or give the jury an instruction designed to make them think the 911 caller was putting on an elaborate act. Is a judge allowed to do this?
2020/10/21
[ "https://law.stackexchange.com/questions/57350", "https://law.stackexchange.com", "https://law.stackexchange.com/users/2609/" ]
There is a concept of [implicature](https://en.wikipedia.org/wiki/Implicature) that says that meaning is conveyed not only by the meanings of the words, but by the circumstances that are likely to cause someone to utter those words. There is nothing in the literal meanings of the words that says that the witness is lying. Your belief that it conveys that seems to be based on implicature: the judge would not feel compelled to mention it unless they thought the witness was lying. However, that is not necessarily the case. Certainly if, for one witness and one witness only, the judge were to say this without anything else giving them reason to, it could come across as implying that the witness is untrustworthy. On the other hand, if the judge were to say this each time a witness takes the stand, or say it at the very beginning of the trial and indicate that it applies to all witnesses, or say it when prompted by something more than just the witness testifying, such as a party asking the judge to make a decision that they believe relies on an assertion of fact, then this inference is less valid.
Of course he can, but in some jurisdictions he will have to choose his words carefully. Depending on tone or pace of voice, "it's for the jury to decide (this or that)" will often be heard by the jury to mean "but I don't believe it". Since the meaning depends on tone or pace of voice, who could ever prove anything against the judge?
145,468
I am a naturalized American citizen of African descent. I speak and write French fluently though I am an anglophone. Would I be allowed on an Air France flight departing from New York to Cameroon, even though I don't have a visa for Cameroon? I have made arrangements to obtain a visa once I arrive at the airport in Douala.
2019/09/09
[ "https://travel.stackexchange.com/questions/145468", "https://travel.stackexchange.com", "https://travel.stackexchange.com/users/102457/" ]
According to Timatic, the system used by most airlines in determining if you meet the requirements for travel to a specific country, a US citizen CAN travel to Cameroon without a visa in their passport under the following conditions : > > Passengers with a confirmation letter that a visa has been approved > can obtain a visa on arrival. The letter has to be issued by the > competent authorities of Cameroon. > > > Presuming the "negotiations" you refer to include a letter that meets these requirements, then yes, you will be allowed to travel.
The actual visa policy of a country is not always exactly the same as what is stated in the relevant airline manuals or on Wikipedia. E.g. even though the manual says that you cannot get a visa on arrival, in reality this may be possible. HOWEVER: people at the check-in counter will only trust their manual. The only exception would be that an airline representative tells them that a visa on arrival is possible. This sometimes happens when flights are flown by national carriers (e.g. Air Cameroon). Such a scenario is rather unlikely if you fly Air France and have connecting flights. It is not a risk I would be willing to take unless I were very certain that this works for my particular connections and my particular airlines.
56,287,870
We use ReLU instead of the sigmoid activation function since it is devoid of the vanishing and exploding gradient problems that affect sigmoid-like activation functions; Leaky-ReLU is one of ReLU's improvements. Everyone is talking about the advantages of Leaky-ReLU. But what are the disadvantages of Leaky-ReLU?
2019/05/24
[ "https://Stackoverflow.com/questions/56287870", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10882863/" ]
ReLU replaced sigmoid in the hidden layers since it yields better results for general-purpose applications, but it really depends on your case and another activation function might work better. Leaky ReLU helps with the vanishing gradient problem. I think the main disadvantage of Leaky ReLU is that you have another parameter to tune, the slope. But I remark that it really depends on your problem which function works better.
The advantage: LeakyReLU is "immortal". If you play enough with your ReLU neural network, some neurons are going to die (especially with L1, L2 regularization). Detecting dead neurons is hard; correcting them is even harder. The disadvantage: you will add computational work on every epoch (it's harder to multiply than to assign a zero). Depending on the job, you may need a few more epochs to converge. The slope at negative z is another parameter, but not a very critical one. When you reach small learning rates, a dead neuron tends to remain dead.
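The trade-off these answers describe can be sketched in plain Python. The slope value `alpha = 0.01` below is just a common illustrative default, not something fixed by Leaky-ReLU itself:

```python
# Sketch of the dying-ReLU problem and the Leaky-ReLU fix discussed above.

def relu(z):
    return max(0.0, z)

def relu_grad(z):
    # Gradient is exactly 0 for z < 0: a neuron stuck in that region
    # receives no updates and is effectively "dead".
    return 1.0 if z > 0 else 0.0

def leaky_relu(z, alpha=0.01):
    # alpha is the extra slope hyperparameter the answers mention.
    return z if z > 0 else alpha * z

def leaky_relu_grad(z, alpha=0.01):
    # A small nonzero gradient for z < 0 keeps the neuron trainable.
    return 1.0 if z > 0 else alpha

for z in (-2.0, 1.5):
    print(z, relu_grad(z), leaky_relu_grad(z))
# At z = -2.0 the ReLU gradient is 0.0 while the Leaky-ReLU gradient is 0.01:
# Leaky-ReLU neurons cannot die, at the cost of one extra hyperparameter
# and slightly more arithmetic per pass.
```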
558,927
[How does conservation of momentum change in moving frames ( constant velocity ) and non-inertial frames?](https://physics.stackexchange.com/questions/545950/how-does-conservation-of-momentum-change-in-moving-frames-constant-velocity) In this question's accepted answer, it says that if the time period of application of pseudo force is negligible, then the conservation of momentum holds. But I have learnt that momentum is always conserved? Where does this aspect of time period come into the picture?
2020/06/12
[ "https://physics.stackexchange.com/questions/558927", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/236734/" ]
The 'solution' makes essentially zero sense. Indeed, it's of the *not even wrong* variety. (0) A resistor is only a 'fuse' in the sense that it will overheat and perhaps smoke before opening up if the power dissipated is well above the power rating for some time. But there's no mention of the power rating of this resistor 'fuse' so there's not enough information given to even make a guess at what current would cause it to open up. (1) The calculation for the 5A maximum current calculation assumes the full 240V is *dropped* across the 48 ohm resistor 'fuse'. Why? Consider that it could only be a 'fuse' if it's in series with the load (a fuse protects a circuit by becoming an open circuit). But if it's in series with the load, it would have the full 240V across only if the load has *zero* volts across. But globes with zero volts across have *zero* current through! The 5A current calculation makes zero sense in this context. (2) The calculation for the current through a single globe assumes the entire 240V is across the globe. But, if there's a 48 ohm resistor 'fuse' in series, the globe current is through that resistor which means that the voltage across the globe is *less* than 240V. Which means the calculated current isn't relevant to this problem. (3) The problem statement mentions that the globes are in series but clearly, only when the globes are parallel connected do we add their currents.
The teacher is wrong. Not even wrong. In a very embarrassing way. You should complain to his supervisor if he is not willing to accept his mistake.
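Point (2) of the first answer can be checked with a quick calculation. The globe resistance below is a hypothetical value (the hot resistance of a 100 W / 240 V globe), since the problem gives none; it only illustrates that a series resistor makes the "240 V across the globe" assumption inconsistent with itself:

```python
# Voltage divider: series "fuse" resistor plus one hypothetical globe.
V_supply = 240.0                # volts
R_series = 48.0                 # ohms, the resistor called a "fuse"
R_globe = 240.0 ** 2 / 100.0    # 576 ohms: hypothetical 100 W / 240 V globe

I_actual = V_supply / (R_series + R_globe)  # real series current (Ohm's law)
V_globe = I_actual * R_globe                # voltage actually across the globe
I_naive = V_supply / R_globe                # flawed "240 V across the globe" current

print(round(V_globe, 1), round(I_actual, 4), round(I_naive, 4))
# The globe sees less than 240 V, so the naive current overstates the real one.
```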
69,664
[This is the second version of another question: [How does consciousness depend on spatiality?]](https://philosophy.stackexchange.com/questions/69630/thought-experiment-how-does-consciousness-depend-on-spatiality) --- I admit that the technical details of the following thought experiment sound completely weird, but that's what it has in common with many other brain-related thought experiments: * [the brain in a vat](https://en.wikipedia.org/wiki/Brain_in_a_vat) [![enter image description here](https://i.stack.imgur.com/qSX12.png)](https://i.stack.imgur.com/qSX12.png)[source](https://existentialcomics.com/comic/265) * [teleportation](https://existentialcomics.com/comic/1) [![enter image description here](https://i.stack.imgur.com/hQ6vo.png)](https://i.stack.imgur.com/hQ6vo.png) * [the split-brain thought experiment](https://www.academia.edu/12717457/What_does_the_split-brain_thought_experiment_reveal_about_personal_identity) Consider a brain in a vat and that its conscious experiences are determined by the firing patterns of its neurons. Consider a super-tiny nano-device placed into the gap of a synapse: it absorbs all neurotransmitters emitted from the pre-synaptic neuron and re-emits them after a given period of time, such that the post-synaptic neuron just receives and reacts to them with some delay. Now consider these devices placed into all synapses of the brain in the vat. If one assumes that the conscious experience of the brain is essentially the same as before, just (undetectably) slowed down, then it makes sense to go one step further. (If not so, one can stop here.) Replace each nano-device by two devices (an input and an output device) which communicate by a super-thin optical fiber. Now move all neurons to arbitrary positions such that the lengths of all optical fibers are the same, so the constant speed of light guarantees that the delay at all synapses is the same.
> > **What would the conscious experience of the distorted brain in the vat be, assuming that the neurons exhibit exactly the same firing > patterns as before, just (undetectably) slowed down and at different places?** > > > Note that an important assumption has been made: that the communication between neurons is completely governed by neurotransmitters at synapses. Electrical synapses and [neuromodulators](https://biology.stackexchange.com/questions/89777/differences-between-neurotransmitters-and-neuromodulators) (which are transmitted differently) are ignored. **Edit**: Another caveat is the fact that changes of synaptic strengths (i.e. learning) that rely on the *simultaneous firing of spatially nearby neurons* could not take place. So the isomorphism of neural activities (and thus conscious experiences) would break down rather fast, and thus the thought experiment may make sense only for some seconds or minutes (as long as synaptic activities don't deviate too much).
2020/01/16
[ "https://philosophy.stackexchange.com/questions/69664", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/1000/" ]
> > While we can doubt any particular perception, illusions can appear only against the background of the world and our primordial faith in it. While we never coincide with the world or grasp it with absolute certainty, we are also never entirely cut off from it; > > > The background of the world is the inescapable, shared external reality that all of us who are not solipsists participate in. It is the fundamental construction of experience which is nearly universal among thinkers. It is the undeniable [naive realism](https://en.wikipedia.org/wiki/Na%C3%AFve_realism) that arises from the universal nature of the human mind, such as qualia and language, which is inevitably subject to philosophical discourse. One may choose to enter into the process of [phenomenological reduction](https://www.iep.utm.edu/phen-red/), but one must have something to reduce. This constant assertion of the appearance of reality is this background. The nature of the relationship between the Continental and Anglo-American traditions hinges upon how one copes with this background. If one accepts that introspection is the best way to understand experience, one is apt to accept rational and introspective epistemic methods as privileged; if one broadens one's perspective to include skepticism of that privilege and rejects the view that rationalism and introspection are indeed privileged, one must break down those introspections not by thought, but by experiment. **Times, Dichotomization, and Realities** If you want to understand the background of the world, escape the limits of rejecting testimony, measurement, and experimentation.
Start with understanding the relationship between the subjective experience of time versus the objectivity of space-time (even under the aegis of the temporal relativity of relativistic physics); Daniel Dennett has a chapter (6, Time and Experience) in his 2017 edition of [*Consciousness Explained*](https://www.google.com/search?tbm=bks&q=consciousness%20explained&oq=consciousness%20exp). It is important to both draw an ontological distinction between the two types of time, and to note the fact that advocates of the view that "time is but an illusion" usually post their thoughts using a CPU with a clock while glancing at their wrist watches that they wear to save themselves time by avoiding glancing up and searching for the clock on the wall before collecting their biweekly paychecks. While it is true that time as experienced is subject to the effects of being constructed by neurons, that hardly makes it random and arbitrary. In fact, [time perception](https://en.wikipedia.org/wiki/Time_perception) and psychophysics are quite robust in their empirical content, and help cut through the hoodoo and voodoo of the relativity of time. There is much to be learned by introspection, but contrary to Descartes' view that introspection is a very reliable and accurate source of the workings of the mind (it is not), the tradition should be respected accordingly. Unfortunately, some thinkers forget that their thoughts are mere representations of external reality, and not external reality itself. This is best remembered by the phrase ["The map is not the territory"](https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation). 
Read enough metaphysical speculation and you'll notice that self-skepticism is only sparingly applied by the peculiar brand of generous skepticism which doubts external reality. For a quick sketch of the defense of realism, see chapters 7 and 8 in Searle's [*The Construction of Social Reality*](https://books.google.com/books?id=MoDhXBxad_oC&printsec=frontcover&dq=construction%20of%20social%20reality&hl=en&newbks=1&newbks_redir=0&sa=X&ved=2ahUKEwiw5bu3vIjnAhXVUs0KHSu4D-0Q6AEwAHoECAIQAg#v=onepage&q=construction%20of%20social%20reality&f=false). **Time and Experience** The most reasonable position to take on time is one which is ontologically pluralistic, that is to say, that time is both external to the mind (as in clocks and sunrises), and internal to the mind (as in stream of consciousness and [flow](https://en.wikipedia.org/wiki/Flow_(psychology))). If you want to experience life with an awareness of time, wait for water to boil. There's a reason that the saw ["a watched pot doesn't boil"](https://idioms.thefreedictionary.com/watched+pot+never+boils) has become an idiom. If you want to experience life without any awareness of time, I suggest Tibetan bowl meditation. It sounds silly (and sounds lovely literally), and you can lose a few hours of time. Every night, we dream and usually are quite unaware of the passage of time. **The Construction of Time and Time Dilation** It is true that the Newtonian universe has been cast aside for a more accurate model where [time dilates](https://en.wikipedia.org/wiki/Time_dilation). Time dilation can actually be proven mathematically with trigonometry and vectors at the secondary level of math. Paul Hewitt has a lovely appendix on it in the 4th edition of his [*Conceptual Physics*](https://books.google.com/books?id=VwNBPQAACAAJ&dq=conceptual%20physics&hl=en&newbks=1&newbks_redir=0&sa=X&ved=2ahUKEwiosPOzv4jnAhWQZM0KHUMuCGQQ6AEwCXoECAQQAg), much like the WP article listed above.
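The "secondary level" proof alluded to here is the standard light-clock argument; a compressed version (ordinary textbook material, not specific to Hewitt's appendix) goes as follows.

```latex
% A light clock: a pulse bounces between two mirrors a distance L apart.
% In the clock's rest frame, one tick takes
\Delta t_0 = \frac{2L}{c}.
% Viewed from a frame in which the clock moves at speed v, the pulse
% traverses the hypotenuse of a right triangle each half-tick (Pythagoras):
\left(\frac{c\,\Delta t}{2}\right)^{2} = L^{2} + \left(\frac{v\,\Delta t}{2}\right)^{2}
% Solving for \Delta t gives the time-dilation factor:
\Delta t = \frac{2L/c}{\sqrt{1 - v^{2}/c^{2}}} = \frac{\Delta t_0}{\sqrt{1 - v^{2}/c^{2}}}
```

Nothing beyond the Pythagorean theorem and algebra is needed, which is why the result is accessible at the secondary level.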
GPS wouldn't function without technological accommodations to the relativistic nature of time. But as experienced locally here on earth by people, time is relatively unmerciful and unyielding. It is so real and important that the progress of society and the fate of the British Empire were strongly tied to the development of the [marine chronometer](https://en.wikipedia.org/wiki/Marine_chronometer). Tempo and synchronicity in battle are a matter of life and death, as explored in [*Warfighting*](https://books.google.com/books?id=xBeqI--4Gt4C&printsec=frontcover&dq=warfighting&hl=en&newbks=1&newbks_redir=0&sa=X&ved=2ahUKEwic8sSDwojnAhUFCc0KHYmeCZEQ6AEwAHoECAMQAg#v=onepage&q=warfighting&f=false) by Commandant Krulak. **Knowing Time** So, keep in mind Ryle's distinction between [knowing-how and knowing-that](https://plato.stanford.edu/entries/knowledge-how/). On the one hand, if you want to understand time better, learn the mathematical and scientific theories of space-time (start with learning the proof for time dilation), and the psychophysics of the mind. Once you understand the science of perception, you'll be able to pair up theory with praxis. Praxis, in this case, means experiencing the variability of time awareness through different activities such as sports, meditation, and the like. But whatever you do, beware of the philosophical mumbo-jumbo that "time is an illusion". That's just a [deepity](https://www.youtube.com/watch?v=1EuhcxZs1qg).
If time does not pass, there is no future or past and the present is not a duration. This means that the phenomena you call the 'background of the world' cannot be any more real than time. Time is required for the phenomena of the world, so it must be prior. This is a difficult idea of time, yet it is unfalsifiable. It is best explained in the metaphysics of Middle Way Buddhism, for which nothing really exists and nothing really happens. It would not quite be right to call time and phenomena illusions. They would be as real as they seem, but not real in the way we usually believe. They would be unreal in the sense that they are not fundamental, not basic to consciousness. The Ultimate would be Eckhart's 'Divine Instant' or Plotinus and Peirce's 'First'. Experience itself would be unreal, so it would require no phenomenal background. The background (as Schrodinger notes) would be the 'canvas on which they are painted'. This background would be undifferentiated consciousness. All this may be verified, but for most people it will take a long time to do so. You might like a poem by Bernardo Kastrup called 'Legacy of a Truth-Seeker'. <https://www.youtube.com/watch?v=ZgiwVYZM5A8> You do not experience time-not-passing, since experience requires time to pass. When you become what lies beyond experience, then time will have ceased to be experienced and you are beyond time and experience. This is where only the Grail-seeker ventures. This transcendence is represented by Spielberg as a vast empty chasm into which Indy must step with only faith to rely on.
69,664
[This is the second version of another question: [How does consciousness depend on spatiality?]](https://philosophy.stackexchange.com/questions/69630/thought-experiment-how-does-consciousness-depend-on-spatiality) --- I admit that the technical details of the following thought experiment sound completely weird, but that's what it has in common with many other brain-related thought experiments: * [the brain in a vat](https://en.wikipedia.org/wiki/Brain_in_a_vat) [![enter image description here](https://i.stack.imgur.com/qSX12.png)](https://i.stack.imgur.com/qSX12.png)[source](https://existentialcomics.com/comic/265) * [teleportation](https://existentialcomics.com/comic/1) [![enter image description here](https://i.stack.imgur.com/hQ6vo.png)](https://i.stack.imgur.com/hQ6vo.png) * [the split-brain thought experiment](https://www.academia.edu/12717457/What_does_the_split-brain_thought_experiment_reveal_about_personal_identity) Consider a brain in a vat, and assume that its conscious experiences are determined by the firing patterns of its neurons. Consider a super-tiny nano-device placed into the gap of a synapse: it absorbs all neurotransmitters emitted from the pre-synaptic neuron and re-emits them after a given period of time, such that the post-synaptic neuron just receives and reacts to them with some delay. Now consider these devices placed into all synapses of the brain in the vat. If one assumes that the conscious experience of the brain is essentially the same as before, just (undetectably) slowed down, then it makes sense to go one step further. (If not, one can stop here.) Replace each nano-device by two devices (an input and an output device) which communicate by a super-thin optical fiber. Now move all neurons to arbitrary positions such that the lengths of all optical fibers are the same, so the constant speed of light guarantees that the delay at all synapses is the same.
> > **What would the conscious experience of the distorted brain in the vat be, assuming that the neurons exhibit exactly the same firing patterns as before, just (undetectably) slowed down and at different places?** > > > Note that an important assumption has been made: that the communication between neurons is completely governed by neurotransmitters at synapses. Electrical synapses and [neuromodulators](https://biology.stackexchange.com/questions/89777/differences-between-neurotransmitters-and-neuromodulators) (which are transmitted differently) are ignored. **Edit**: Another caveat is the fact that changes of synaptic strengths (i.e. learning) that rely on the *simultaneous firing of spatially nearby neurons* could not take place. So the isomorphism of neural activities (and thus conscious experiences) would break down rather fast, and thus the thought experiment may make sense only for some seconds or minutes (as long as synaptic activities don't deviate too much).
2020/01/16
[ "https://philosophy.stackexchange.com/questions/69664", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/1000/" ]
> > While we can doubt any particular perception, illusions can appear only against the background of the world and our primordial faith in it. While we never coincide with the world or grasp it with absolute certainty, we are also never entirely cut off from it; > > > The background of the world is the inescapable, shared external reality that all of us who are not solipsists participate in. It is the fundamental construction of experience which is nearly universal among thinkers. It is the undeniable [naive realism](https://en.wikipedia.org/wiki/Na%C3%AFve_realism) that arises from the universal nature of the human mind, such as qualia and language, which is inevitably subject to philosophical discourse. One may choose to enter into the process of [phenomenological reduction](https://www.iep.utm.edu/phen-red/), but one must have something to reduce. This constant assertion of the appearance of reality is this background. The nature of the relationship between the Continental and Anglo-American traditions hinges upon how one copes with this background. If one accepts that introspection is the best way to understand experience, one is apt to accept rational and introspective epistemic methods as privileged; if one broadens one's perspective to include skepticism of that privilege and rejects the view that rationalism and introspection are indeed privileged, one must break down those introspections not by thought, but by experiment. **Times, Dichotomization, and Realities** If you want to understand the background of the world, escape the limits of rejecting testimony, measurement, and experimentation.
Start with understanding the relationship between the subjective experience of time versus the objectivity of space-time (even under the aegis of the temporal relativity of relativistic physics); Daniel Dennett has a chapter (6, Time and Experience) in his 2017 edition of [*Consciousness Explained*](https://www.google.com/search?tbm=bks&q=consciousness%20explained&oq=consciousness%20exp). It is important to both draw an ontological distinction between the two types of time, and to note the fact that advocates of the view that "time is but an illusion" usually post their thoughts using a CPU with a clock while glancing at their wrist watches that they wear to save themselves time by avoiding glancing up and searching for the clock on the wall before collecting their biweekly paychecks. While it is true that time as experienced is subject to the effects of being constructed by neurons, that hardly makes it random and arbitrary. In fact, [time perception](https://en.wikipedia.org/wiki/Time_perception) and psychophysics are quite robust in their empirical content, and help cut through the hoodoo and voodoo of the relativity of time. There is much to be learned by introspection, but contrary to Descartes' view that introspection is a very reliable and accurate source of the workings of the mind (it is not), the tradition should be respected accordingly. Unfortunately, some thinkers forget that their thoughts are mere representations of external reality, and not external reality itself. This is best remembered by the phrase ["The map is not the territory"](https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation). 
Read enough metaphysical speculation and you'll notice that self-skepticism is only sparingly applied by the peculiar brand of generous skepticism which doubts external reality. For a quick sketch of the defense of realism, see chapters 7 and 8 in Searle's [*The Construction of Social Reality*](https://books.google.com/books?id=MoDhXBxad_oC&printsec=frontcover&dq=construction%20of%20social%20reality&hl=en&newbks=1&newbks_redir=0&sa=X&ved=2ahUKEwiw5bu3vIjnAhXVUs0KHSu4D-0Q6AEwAHoECAIQAg#v=onepage&q=construction%20of%20social%20reality&f=false). **Time and Experience** The most reasonable position to take on time is one which is ontologically pluralistic, that is to say, that time is both external to the mind (as in clocks and sunrises), and internal to the mind (as in stream of consciousness and [flow](https://en.wikipedia.org/wiki/Flow_(psychology))). If you want to experience life with an awareness of time, wait for water to boil. There's a reason that the saw ["a watched pot doesn't boil"](https://idioms.thefreedictionary.com/watched+pot+never+boils) has become an idiom. If you want to experience life without any awareness of time, I suggest Tibetan bowl meditation. It sounds silly (and sounds lovely literally), and you can lose a few hours of time. Every night, we dream and usually are quite unaware of the passage of time. **The Construction of Time and Time Dilation** It is true that the Newtonian universe has been cast aside for a more accurate model where [time dilates](https://en.wikipedia.org/wiki/Time_dilation). Time dilation can actually be proven mathematically with trigonometry and vectors at the secondary level of math. Paul Hewitt has a lovely appendix on it in the 4th edition of his [*Conceptual Physics*](https://books.google.com/books?id=VwNBPQAACAAJ&dq=conceptual%20physics&hl=en&newbks=1&newbks_redir=0&sa=X&ved=2ahUKEwiosPOzv4jnAhWQZM0KHUMuCGQQ6AEwCXoECAQQAg), much like the WP article listed above.
GPS wouldn't function without technological accommodations to the relativistic nature of time. But as experienced locally here on earth by people, time is relatively unmerciful and unyielding. It is so real and important that the progress of society and the fate of the British Empire were strongly tied to the development of the [marine chronometer](https://en.wikipedia.org/wiki/Marine_chronometer). Tempo and synchronicity in battle are a matter of life and death, as explored in [*Warfighting*](https://books.google.com/books?id=xBeqI--4Gt4C&printsec=frontcover&dq=warfighting&hl=en&newbks=1&newbks_redir=0&sa=X&ved=2ahUKEwic8sSDwojnAhUFCc0KHYmeCZEQ6AEwAHoECAMQAg#v=onepage&q=warfighting&f=false) by Commandant Krulak. **Knowing Time** So, keep in mind Ryle's distinction between [knowing-how and knowing-that](https://plato.stanford.edu/entries/knowledge-how/). On the one hand, if you want to understand time better, learn the mathematical and scientific theories of space-time (start with learning the proof for time dilation), and the psychophysics of the mind. Once you understand the science of perception, you'll be able to pair up theory with praxis. Praxis, in this case, means experiencing the variability of time awareness through different activities such as sports, meditation, and the like. But whatever you do, beware of the philosophical mumbo-jumbo that "time is an illusion". That's just a [deepity](https://www.youtube.com/watch?v=1EuhcxZs1qg).
Do you experience the weight of the air? Or, to be trite, does the fish experience being surrounded by water? To adopt a definition of 'experience' that simply means 'undergo' is not useful. We already have words like 'suffer', 'undergo', etc., and it creates pointless confusion. So to claim you 'experience' these things that are simply present, without having any changing effect upon you, is not a productive use of language. But that is the only sense in which one could 'experience' time not passing. Any other sense of 'experience' requires that the event be at least temporarily available to be recorded mentally. And that would express the passage of time. Memory is an exothermic process. To take Dennett's contention that qualia are simply labeled perceptions, and therefore consciousness is simply a very temporary layer of memory, and bend it in a Kantian direction: then, in a very real sense, it is the fact that our memory is an exothermic process that causes the second law of thermodynamics. Time does not 'pass' or 'flow', there is no 'arrow of time', as a part of essential reality. But our experience takes a given direction through time because of the way we record information. Then we interpret the effect of our nature as a necessary aspect of all physical processes. But that is mere projection. The stream of memory is real, but it is not caused by the passage of time; it is simply correlated with a given direction through four-dimensional space which always accompanies increasing entropy, because it is itself a process that depends upon increasing entropy. So there is a *real lack* of the passage of time, for any 'being' who might assemble information in some other way than by *being*. But you cannot experience it, because you are not such a thing. That does not make the passage of time an *illusion*, in many senses of the word, just a subjective shared human trait, like color.
We can see it as very much the exact thing Kant said it was -- an underlying form for our intuitions.
68,352
I have a USDINR currency-pair put option with a strike price of 73.5 INR, a risk-free rate of 0, an underlying price of 75.4025, 15 days to expiry, and an implied volatility of 5.9%. The delta of this option is -0.019 and the gamma is 0.051. I am interpreting gamma as the change in delta for a 1% change in the underlying. By this logic, if I increase the underlying by 1%, then, the option being a put, the delta should decrease in magnitude by 0.051. That would make the new delta 0.031, which must be wrong, because the delta of a put option can't be positive. Now I have actually increased the underlying by 1% in my Excel workings, and the delta decreased by only 0.017, whereas my gamma was 0.051. Is this because of the low volatility, or is there a mistake in my interpretation of gamma?
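One way to see what is going on is to recompute the delta before and after the 1% spot move and compare the actual change with the gamma extrapolation. The sketch below uses a plain Black-Scholes put with the rate at zero (an FX option would properly use Garman-Kohlhagen with domestic and foreign rates, which reduces to this form when both are zero) and quotes gamma per unit of underlying; the 0.051 in the question may come from a different quoting convention, so treat the exact figures as illustrative.

```python
from math import log, sqrt, exp, erf, pi

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def put_delta_gamma(S, K, sigma, T, r=0.0):
    """Black-Scholes put delta and gamma (gamma per unit of underlying)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    pdf_d1 = exp(-0.5 * d1 * d1) / sqrt(2.0 * pi)
    delta = norm_cdf(d1) - 1.0          # put delta = N(d1) - 1, always <= 0
    gamma = pdf_d1 / (S * sigma * sqrt(T))
    return delta, gamma

S0, K, sigma, T = 75.4025, 73.5, 0.059, 15.0 / 365.0
d0, g0 = put_delta_gamma(S0, K, sigma, T)

dS = 0.01 * S0                          # a 1% spot move, in units of the underlying
d1_, _ = put_delta_gamma(S0 + dS, K, sigma, T)

actual_change = d1_ - d0                # exact change in delta
taylor_change = g0 * dS                 # first-order (gamma) prediction

print(f"delta before: {d0:+.4f}, after: {d1_:+.4f}")
print(f"actual change: {actual_change:+.4f}, gamma * dS: {taylor_change:+.4f}")
```

For a deep out-of-the-money, short-dated option like this one, gamma itself falls quickly as spot rises (the third-order Greek, speed, is large), so the linear gamma extrapolation badly overshoots the true change; extrapolating a put delta past zero to a positive value is exactly the symptom of applying a local derivative to a large move, not a sign of a wrong delta.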
2021/10/13
[ "https://quant.stackexchange.com/questions/68352", "https://quant.stackexchange.com", "https://quant.stackexchange.com/users/47368/" ]
This recent paper <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3891120> (highly recommended in its entirety) has its entire section 5 devoted to XVA model validation. You may also find these ORE slides insightful <https://www.opensourcerisk.org/wp-content/uploads/2018/12/ore_user_meeting_2018_patrick_buechel.pdf>
In general, the model validation consists of several steps: * Checking the model design, i.e. model theory, model assumptions, model limitations, etc.; * Checking the model inputs, i.e. market data sources, market data quality, model parameters quality, calibration process, etc.; * Checking the model implementation, i.e. checking if the model is implemented in the system in line with the theory; * Checking the model outputs, i.e. if the model produces correct numbers under current market conditions; * Checking the model performance under stress scenarios, i.e. checking if the model produces reliable outcome under extreme market conditions; * Checking all other related stuff, like model governance, model documentation, input controls, output controls, etc.
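As a concrete instance of the "checking the model outputs" step, a validation suite typically benchmarks the implementation against independent, model-free identities. The sketch below is an illustrative example (not part of any specific XVA framework): it checks a toy Black-Scholes pricer against put-call parity, which must hold for any volatility input.

```python
from math import log, sqrt, exp, erf

def N(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_price(S, K, r, sigma, T, kind="call"):
    """Black-Scholes European option price; put priced directly, not via parity."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if kind == "call":
        return S * N(d1) - K * exp(-r * T) * N(d2)
    return K * exp(-r * T) * N(-d2) - S * N(-d1)

# Model-free identity: C - P = S - K * exp(-rT), regardless of sigma.
S, K, r, T = 100.0, 95.0, 0.02, 0.5
for sigma in (0.05, 0.2, 0.8):
    c = bs_price(S, K, r, sigma, T, "call")
    p = bs_price(S, K, r, sigma, T, "put")
    gap = (c - p) - (S - K * exp(-r * T))
    assert abs(gap) < 1e-10, f"parity violated for sigma={sigma}"
print("put-call parity holds for all tested vols")
```

The same pattern (recompute a quantity the model must satisfy by construction, independently of its calibrated inputs) extends to checks like martingale tests on simulated paths or repricing calibration instruments.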
6,184,658
Elongated question: **When having more blocking threads than CPU cores, where's the balance between thread count and thread block times to maximize CPU efficiency by reducing context-switch overhead?** I have a wide variety of IO devices that I need to control on Windows 7, with an x64 multi-core processor: PCI devices, network devices, stuff being saved to hard drives, big chunks of data being copied,... The most common policy is: "Put a thread on it!". Several dozen threads later, this is starting to feel like a bad idea. None of my cores are being used 100%, and several cores are still idling, but there are delays showing up in the range of 10 to 100ms that cannot be explained by IO blockage or CPU-intensive usage. Other processes don't seem to require resources either. I'm suspecting context-switch overhead. Here are a couple of possible solutions: * Reduce threads by bundling the same IO devices: This mainly goes for the hard drive, but maybe for the network as well. If I'm saving 20MB to the hard drive in one thread, and 10MB in another, wouldn't it be better to post it all to the same one? How would this work in the case of multiple hard drives? * Reduce threads by bundling similar IO devices, and increase their priority: Dozens of threads with increased priority are probably going to make my user interface thread stutter. But I can bundle all that functionality together in one or a couple of threads and increase their priority. Any case studies tackling similar problems are much appreciated.
2011/05/31
[ "https://Stackoverflow.com/questions/6184658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264359/" ]
First, it sounds like these tasks should be performed using asynchronous I/O (IO Completion Ports, preferably), rather than with separate threads. Blocking threads are generally the wrong way to do I/O. Second, blocked threads shouldn't affect context switching. The scheduler has to juggle all the *active* threads, and so, having a lot of threads running (not blocked) might slow down context switching a bit. But as long as most of your threads are blocked, they shouldn't affect the ones that aren't.
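The "bundle one device into one thread" idea from the question can be sketched as a single consumer thread draining a queue of write requests, so any number of producers share one disk writer instead of each spawning its own blocking thread. This is a generic producer-consumer sketch in Python (the question concerns native Windows code, where the analogous tool would be an IOCP-backed thread pool); names like `disk_writer` are illustrative.

```python
import queue
import threading

write_q: "queue.Queue[tuple[str, bytes] | None]" = queue.Queue()

def disk_writer():
    # One thread owns the device: requests are serialized here,
    # so producers never block on the disk themselves.
    while True:
        item = write_q.get()
        if item is None:          # sentinel: shut the writer down
            write_q.task_done()
            break
        path, data = item
        with open(path, "ab") as f:
            f.write(data)
        write_q.task_done()

writer = threading.Thread(target=disk_writer, daemon=True)
writer.start()

# Any thread can now "save to disk" without blocking on I/O itself:
write_q.put(("out.bin", b"20MB-chunk..."))
write_q.put(("out.bin", b"10MB-chunk..."))

write_q.join()                    # wait until all queued writes are flushed
write_q.put(None)                 # stop the writer
writer.join()
```

The design choice mirrors the answer's point: with the requests funneled through one queue, only one thread is ever blocked on the disk, and the scheduler has one fewer runnable thread per pending request.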
I don't think that there is a conclusive answer, and it probably depends on your OS as well; some handle threads better than others. Still, delays in the 10 to 100 ms range are not due to context switching itself (although they could be due to characteristics of the scheduling algorithm). My experience under Windows is that I/O is very inefficient, and if you're doing I/O, of any type, you will block. And I/O by one process or thread will end up blocking other processes or threads. (Under Windows, for example, there's probably no point in having more than one thread handle the hard drive. You can't read or write several sectors at the same time, and my impression is that Windows doesn't optimize accesses like some other systems do.) With regards to your exact questions: "If I'm saving 20MB to the hard drive in one thread, and 10MB in the other, wouldn't it be better to post it all to the same?": It depends on the OS. Normally, there should be no reduction in time or latency using separate threads, and depending on other activity and the OS, there could be an improvement. (If there are several disk requests pending, most OS's will optimize the accesses, reordering the requests to reduce head movement.) The simplest solution would be to try both, and see which works better on your system. "How would this work in case of multiple hard drives?": The OS should be able to do the I/O in parallel, if the requests are to different drives. With regards to increasing the priority of one or more threads, it's very OS dependent, but probably worth trying. Unless there's significant CPU time used in the threads with the higher priority, it shouldn't impact the user interface; these threads are mostly blocked for I/O, remember.
6,184,658
Elongated question: **When having more blocking threads than CPU cores, where's the balance between thread count and thread block times to maximize CPU efficiency by reducing context-switch overhead?** I have a wide variety of IO devices that I need to control on Windows 7, with an x64 multi-core processor: PCI devices, network devices, stuff being saved to hard drives, big chunks of data being copied,... The most common policy is: "Put a thread on it!". Several dozen threads later, this is starting to feel like a bad idea. None of my cores are being used 100%, and several cores are still idling, but there are delays showing up in the range of 10 to 100ms that cannot be explained by IO blockage or CPU-intensive usage. Other processes don't seem to require resources either. I'm suspecting context-switch overhead. Here are a couple of possible solutions: * Reduce threads by bundling the same IO devices: This mainly goes for the hard drive, but maybe for the network as well. If I'm saving 20MB to the hard drive in one thread, and 10MB in another, wouldn't it be better to post it all to the same one? How would this work in the case of multiple hard drives? * Reduce threads by bundling similar IO devices, and increase their priority: Dozens of threads with increased priority are probably going to make my user interface thread stutter. But I can bundle all that functionality together in one or a couple of threads and increase their priority. Any case studies tackling similar problems are much appreciated.
2011/05/31
[ "https://Stackoverflow.com/questions/6184658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264359/" ]
First, it sounds like these tasks should be performed using asynchronous I/O (IO Completion Ports, preferably), rather than with separate threads. Blocking threads are generally the wrong way to do I/O. Second, blocked threads shouldn't affect context switching. The scheduler has to juggle all the *active* threads, and so, having a lot of threads running (not blocked) might slow down context switching a bit. But as long as most of your threads are blocked, they shouldn't affect the ones that aren't.
10-100ms with some cores idle: it's not context-switching overhead in itself, since a switch is orders of magnitude faster than these delays, even with a core swap and cache flush. Async I/O would not help much here. The kernel thread pools that implement ASIO also have to be scheduled/swapped, albeit this is faster than user-space threads since there are fewer Wagnerian ring-cycles. I would certainly head for ASIO if the CPU loading was becoming an issue, but it's not. You are not short of CPU, so what is it? Is there much thrashing - RAM shortage? Excessive paging can surely result in large delays. Where is your page file? I've shoved mine off Drive C onto another fast SATA drive. PCI bandwidth? You got a couple of TV cards in there? Disk controller flushing activity - have you got an SSD that's approaching capacity? That's always a good one for unexplained pauses. I get the odd pause even though my 128G SSD is only 2/3 full. I've never had a problem specifically related to context-swap time, and I've been writing multithreaded apps for decades. The Windows OS schedules & despatches the ready threads onto cores reasonably quickly. 'Several dozen threads' in itself, (ie. not all running!), is not remotely a problem - looking now at my Task Manager / Performance tab, I have 1213 threads loaded and no performance issues at all with ~6% CPU usage, (app on test running in background, bitTorrent etc). Firefox has 30 threads, VLC media player 27, my test app 23. No problem at all writing this post. Given your issue of 10-100ms delays, I would be amazed if fiddling with thread priorities and/or changing the way your work is loaded onto threads provides any improvement - something else is stuffing your system, (you haven't got any drivers that I coded, have you? :). Does perfmon give any clues? Rgds, Martin
6,184,658
Elongated question: **When having more blocking threads than CPU cores, where's the balance between thread count and thread block times to maximize CPU efficiency by reducing context-switch overhead?** I have a wide variety of IO devices that I need to control on Windows 7, with an x64 multi-core processor: PCI devices, network devices, stuff being saved to hard drives, big chunks of data being copied,... The most common policy is: "Put a thread on it!". Several dozen threads later, this is starting to feel like a bad idea. None of my cores are being used 100%, and several cores are still idling, but there are delays showing up in the range of 10 to 100ms that cannot be explained by IO blockage or CPU-intensive usage. Other processes don't seem to require resources either. I'm suspecting context-switch overhead. Here are a couple of possible solutions: * Reduce threads by bundling the same IO devices: This mainly goes for the hard drive, but maybe for the network as well. If I'm saving 20MB to the hard drive in one thread, and 10MB in another, wouldn't it be better to post it all to the same one? How would this work in the case of multiple hard drives? * Reduce threads by bundling similar IO devices, and increase their priority: Dozens of threads with increased priority are probably going to make my user interface thread stutter. But I can bundle all that functionality together in one or a couple of threads and increase their priority. Any case studies tackling similar problems are much appreciated.
2011/05/31
[ "https://Stackoverflow.com/questions/6184658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264359/" ]
First, it sounds like these tasks should be performed using asynchronous I/O (IO Completion Ports, preferably), rather than with separate threads. Blocking threads are generally the wrong way to do I/O. Second, blocked threads shouldn't affect context switching. The scheduler has to juggle all the *active* threads, and so, having a lot of threads running (not blocked) might slow down context switching a bit. But as long as most of your threads are blocked, they shouldn't affect the ones that aren't.
Well, my Windows 7 is currently running 950 threads. I don't think that adding another few dozen would make a significant difference. However, you should definitely be looking at a thread pool or other work-stealing device for this - you shouldn't create new threads just to let them block. If Windows provides asynchronous I/O by default, then use it.
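A minimal sketch of the pooled alternative (Python's `ThreadPoolExecutor` used only to show the shape; `fetch` is a made-up stand-in for a blocking device read):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(device_id):
    # Stand-in for a blocking read from one device.
    return device_id * 2

# A small fixed pool services all twelve "devices" instead of
# dedicating one permanently blocked thread to each.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, range(12)))
```

The pool size is tuned to how many operations can usefully be in flight at once, not to the number of devices.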
6,184,658
Elongated question: **When having more blocking threads than CPU cores, where's the balance between thread count and thread block times to maximize CPU efficiency by reducing context-switch overhead?** I have a wide variety of IO devices that I need to control on Windows 7, with an x64 multi-core processor: PCI devices, network devices, stuff being saved to hard drives, big chunks of data being copied, ... The most common policy is: "Put a thread on it!". Several dozen threads later, this is starting to feel like a bad idea. None of my cores is being used 100%, and there are several cores that are still idling, but there are delays showing up in the range of 10 to 100 ms that cannot be explained by IO blockage or CPU-intensive usage. Other processes don't seem to require resources either. I suspect context-switch overhead. There's a bunch of possible solutions I have: * Reduce threads by bundling the same IO devices: This mainly goes for the hard drive, but maybe for the network as well. If I'm saving 20 MB to the hard drive in one thread, and 10 MB in another, wouldn't it be better to post it all to the same thread? How would this work in the case of multiple hard drives? * Reduce threads by bundling similar IO devices, and increase their priority: Dozens of threads with increased priority are probably going to make my user-interface thread stutter. But I can bundle all that functionality together in one or a couple of threads and increase their priority. Any case studies tackling similar problems are much appreciated.
2011/05/31
[ "https://Stackoverflow.com/questions/6184658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264359/" ]
10-100ms with some cores idle: it's not context-switching overhead in itself, since a switch is orders of magnitude faster than these delays, even with a core swap and cache flush. Async I/O would not help much here. The kernel thread pools that implement ASIO also have to be scheduled/swapped, albeit this is faster than user-space threads since there are fewer Wagnerian ring-cycles. I would certainly head for ASIO if the CPU loading were becoming an issue, but it's not. You are not short of CPU, so what is it? Is there much thrashing - a RAM shortage? Excessive paging can surely result in large delays. Where is your page file? I've shoved mine off Drive C onto another fast SATA drive. PCI bandwidth? Have you got a couple of TV cards in there? Disk controller flushing activity - have you got an SSD that's approaching capacity? That's always a good one for unexplained pauses. I get the odd pause even though my 128G SSD is only 2/3 full. I've never had a problem specifically related to context-swap time, and I've been writing multithreaded apps for decades. The Windows OS schedules & dispatches the ready threads onto cores reasonably quickly. 'Several dozen threads' in itself (i.e. not all running!) is not remotely a problem - looking now at my Task Manager/Performance, I have 1213 threads loaded and no performance issues at all with ~6% CPU usage (app on test running in background, BitTorrent etc). Firefox has 30 threads, VLC media player 27, my test app 23. No problem at all writing this post. Given your issue of 10-100ms delays, I would be amazed if fiddling with thread priorities and/or changing the way your work is loaded onto threads provided any improvement - something else is stuffing your system (you haven't got any drivers that I coded, have you? :). Does perfmon give any clues? Rgds, Martin
I don't think that there is a conclusive answer, and it probably depends on your OS as well; some handle threads better than others. Still, delays in the 10 to 100 ms range are not due to context switching itself (although they could be due to characteristics of the scheduling algorithm). My experience under Windows is that I/O is very inefficient, and if you're doing I/O of any type, you will block. And that I/O by one process or thread will end up blocking other processes or threads. (Under Windows, for example, there's probably no point in having more than one thread handle the hard drive. You can't read or write several sectors at the same time, and my impression is that Windows doesn't optimize accesses like some other systems do.) With regards to your exact questions: "If I'm saving 20MB to the hard drive in one thread, and 10MB in the other, wouldn't it be better to post it all to the same?": It depends on the OS. Normally, there should be no reduction in time or latency using separate threads, and depending on other activity and the OS, there could be an improvement. (If there are several disk requests outstanding, most OSes will optimize the accesses, reordering the requests to reduce head movement.) The simplest solution would be to try both, and see which works better on your system. "How would this work in case of multiple hard drives?": The OS should be able to do the I/O in parallel, if the requests are to different drives. With regards to increasing the priority of one or more threads, it's very OS dependent, but probably worth trying. Unless there's significant CPU time used in the threads with the higher priority, it shouldn't impact the user interface: these threads are mostly blocked on I/O, remember.
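The "post it all to the same thread" idea can be sketched as a single writer thread fed by a queue (the function name and sentinel convention here are illustrative, not any particular API):

```python
import queue
import threading

def start_writer(path):
    """Funnel all writes for one drive through one queue and one thread."""
    q = queue.Queue()

    def worker():
        with open(path, "ab") as f:
            while True:
                chunk = q.get()
                if chunk is None:   # sentinel: flush and shut down
                    break
                f.write(chunk)

    t = threading.Thread(target=worker)
    t.start()
    return q, t

# Any number of producer threads can now q.put(b"...") without ever
# contending on the file itself; q.put(None) then t.join() to finish.
```

One queue per physical drive keeps the per-drive ordering benefit while still letting separate drives proceed in parallel.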
6,184,658
Elongated question: **When having more blocking threads than CPU cores, where's the balance between thread count and thread block times to maximize CPU efficiency by reducing context-switch overhead?** I have a wide variety of IO devices that I need to control on Windows 7, with an x64 multi-core processor: PCI devices, network devices, stuff being saved to hard drives, big chunks of data being copied, ... The most common policy is: "Put a thread on it!". Several dozen threads later, this is starting to feel like a bad idea. None of my cores is being used 100%, and there are several cores that are still idling, but there are delays showing up in the range of 10 to 100 ms that cannot be explained by IO blockage or CPU-intensive usage. Other processes don't seem to require resources either. I suspect context-switch overhead. There's a bunch of possible solutions I have: * Reduce threads by bundling the same IO devices: This mainly goes for the hard drive, but maybe for the network as well. If I'm saving 20 MB to the hard drive in one thread, and 10 MB in another, wouldn't it be better to post it all to the same thread? How would this work in the case of multiple hard drives? * Reduce threads by bundling similar IO devices, and increase their priority: Dozens of threads with increased priority are probably going to make my user-interface thread stutter. But I can bundle all that functionality together in one or a couple of threads and increase their priority. Any case studies tackling similar problems are much appreciated.
2011/05/31
[ "https://Stackoverflow.com/questions/6184658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/264359/" ]
10-100ms with some cores idle: it's not context-switching overhead in itself, since a switch is orders of magnitude faster than these delays, even with a core swap and cache flush. Async I/O would not help much here. The kernel thread pools that implement ASIO also have to be scheduled/swapped, albeit this is faster than user-space threads since there are fewer Wagnerian ring-cycles. I would certainly head for ASIO if the CPU loading were becoming an issue, but it's not. You are not short of CPU, so what is it? Is there much thrashing - a RAM shortage? Excessive paging can surely result in large delays. Where is your page file? I've shoved mine off Drive C onto another fast SATA drive. PCI bandwidth? Have you got a couple of TV cards in there? Disk controller flushing activity - have you got an SSD that's approaching capacity? That's always a good one for unexplained pauses. I get the odd pause even though my 128G SSD is only 2/3 full. I've never had a problem specifically related to context-swap time, and I've been writing multithreaded apps for decades. The Windows OS schedules & dispatches the ready threads onto cores reasonably quickly. 'Several dozen threads' in itself (i.e. not all running!) is not remotely a problem - looking now at my Task Manager/Performance, I have 1213 threads loaded and no performance issues at all with ~6% CPU usage (app on test running in background, BitTorrent etc). Firefox has 30 threads, VLC media player 27, my test app 23. No problem at all writing this post. Given your issue of 10-100ms delays, I would be amazed if fiddling with thread priorities and/or changing the way your work is loaded onto threads provided any improvement - something else is stuffing your system (you haven't got any drivers that I coded, have you? :). Does perfmon give any clues? Rgds, Martin
Well, my Windows 7 is currently running 950 threads. I don't think that adding another few dozen would make a significant difference. However, you should definitely be looking at a thread pool or other work-stealing device for this - you shouldn't create new threads just to let them block. If Windows provides asynchronous I/O by default, then use it.
20,297
I have Ubuntu 10.10 installed on a machine for my parents. The thing is, they never request updates from Update Manager, even when the manager itself prompts them to. Moreover, when they are done with whatever they are doing on Ubuntu, they always leave the computer on, and I always have to come back and shut the machine down. Sometimes the computer even sits idle for hours. So I want to know whether this is possible in Ubuntu: I am thinking of a script that will be activated after the machine has been idle for x minutes. When x minutes have elapsed, Update Manager will automatically install everything listed. (I recall that you need the admin password for this, so is there a workaround?) After all the updates are done, the machine will automatically shut down. Is this possible?
2011/01/06
[ "https://askubuntu.com/questions/20297", "https://askubuntu.com", "https://askubuntu.com/users/8252/" ]
There are a few packages that may help with this: * [unattended-upgrades](http://packages.ubuntu.com/unattended-upgrades) installs security updates automatically * [powernap](http://packages.ubuntu.com/powernap) suspends or shuts down the computer when there are no processes running from a given list Although finding the right list of important processes may be difficult... powernap is more targeted at servers.
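A minimal sketch of the relevant APT configuration for the first package (file path and keys as shipped by `unattended-upgrades` on Ubuntu; `"1"` enables a daily run):

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Because this runs as root from APT's periodic job, it sidesteps the admin-password problem for the security updates; powernap would then handle the shutdown side separately.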
I am thinking that you could start with a screensaver, so you get the hook for idleness done for you, which then kicks off the update in the background followed by the shutdown. I say in the background, with nohup, so even if the screensaver is dismissed the update isn't killed stone dead (log this, and its stderr, somewhere you can get to, though!). The shutdown, however, should be based on the update having finished AND the machine still being idle, so nobody gets a surprise shutdown. Bear in mind that not all Ubuntu updates will run unattended - which may put a spanner in your works here.
404,115
On Ubuntu, using Firefox: I set it in Firefox's network settings (SOCKS5). When I check my IP (e.g. on whatsmyip.com) it is the SOCKS server's IP, but when trying to access youtube.com it's blocked. Why?! In about:config I have set use\_remote\_socks\_dns to true - no change. I set the DNS server of my router to 8.8.8.8 and 8.8.4.4 - no change again. I used Wireshark, and just got more confused... I know this has been asked before, and it seems everyone has got their problem fixed except for me. Can you please tell me what I am doing wrong?
2012/03/23
[ "https://superuser.com/questions/404115", "https://superuser.com", "https://superuser.com/users/114913/" ]
I had the same issue. To fix it I changed a setting: (press 'Alt' to view the top menu, then) Tools > Options > Advanced > General (tab). Turn off "Use hardware acceleration when available".
I ended up reinstalling Firefox after an uninstall. lorengphd's solution did not work.
5,418,151
For a ComboBox that has a DrawMode of OwnerDrawVariable, in which event is it most appropriate to set the ComboBox's DropDownHeight? I am currently setting the DropDownHeight value within the DrawItem event, but this seems inefficient. Edit: The reason I ask is that I cannot get the drop-down window height to be "just right". It is always a little bit too tall or too short.
2011/03/24
[ "https://Stackoverflow.com/questions/5418151", "https://Stackoverflow.com", "https://Stackoverflow.com/users/651104/" ]
An incorrect DropDownHeight for ComboBoxes that have a DrawMode of OwnerDrawVariable is a known issue that has been answered at [Unable to set the DropDownHeight of ComboBox](https://stackoverflow.com/questions/1245530), and I have added the C# code that I use to solve my problem as part of that answer.
Put it in the DropDown event - or override ComboBox.OnDropDown.
54,391,261
Working on a recent project, I wondered how to find a good/perfect path to a target that is moving at a steady speed. I tried standard A\* pathfinding but it failed, since the heuristic will be increasingly wrong as the object moves, and I just can't find a way to make that work for me. Maybe you guys have another algorithm that should work just fine, or some calculation tuning that would make A\* work... Thanks for your effort :)
2019/01/27
[ "https://Stackoverflow.com/questions/54391261", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10632873/" ]
A\* should in general work, but then of course you need to recalculate every time the target moves. For 99% of cases, this is actually ok. For example, in video games you can get away with only recalculating the best path once every second or so, so it's generally not a huge performance hit. However, if you really need something more powerful, check out [Generalized Adaptive A\*](http://www.aamas-conference.org/Proceedings/aamas08/proceedings/pdf/paper/AAMAS08_0026.pdf), an algorithm specifically designed to handle moving targets. And if you really want to be on the bleeding-edge, there are multiple adaptations of GAA\* that are faster in certain cases - see [this post](https://cstheory.stackexchange.com/a/11866/8532) *(under "moving target points")* for more details.
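A sketch of the "recalculate every time the target moves" approach on a 4-connected grid (plain A\* with a Manhattan heuristic; the grid encoding and chase loop are illustrative):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 means blocked."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible for 4-connected moves
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]
    g = {start: 0}
    came = {}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        if cost > g.get(cur, float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not grid[nxt[0]][nxt[1]]:
                ng = cost + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    came[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None

def chase(grid, hunter, target_track):
    """Replan from scratch each time the target moves one step."""
    for target in target_track:
        if hunter == target:
            break
        path = astar(grid, hunter, target)
        if path and len(path) > 1:
            hunter = path[1]   # take one step along the fresh path
    return hunter
```

GAA\* improves on exactly this loop by reusing and adapting the previous search instead of throwing it away.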
Using A\* with a moving target is OK, but you must recalculate the whole path again. I don't think A\* likes just having its destination/goal changed. Your A\* needs to be very well optimised to run in real time and recalculate the new path every time the target moves. Remember to play with your H to get a balance between finding the shortest path and calculating it quickly. It all depends on your map and obstructions, really. However, A\* may not be the best pathfinder for your application - I'd need to see your map and more info..
157,765
For a fantasy, I need to know how a world similar to Earth would exist in a geocentric model. 1) I would assume the sun would have to be a lot smaller. I'm okay with artificial stars (hand-wave that stuff away with magic). But I would like to know if that makes an Earth-similar planet impossible. (I don't need other planets in this model.) For example, how would it affect... * seasons and climate * length of day, month, or year * sunrises and sunsets * gravity * constellations and/or navigation * any huge effect I don't have enough science to anticipate 2) How would I manipulate my universe's model to make it more Earth-similar if those things are completely off? It doesn't have to be exact, but I need a temperate climate with pretty normal seasons and climate zones.
2019/10/06
[ "https://worldbuilding.stackexchange.com/questions/157765", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/69409/" ]
The first thing we must understand is that from a purely *kinematic* point of view, the heliocentric and geocentric models are both equally correct within the accuracy limits of astronomical instruments available before the, say, 16th century. There was no way for an astronomer who lived before [Tycho Brahe](https://en.wikipedia.org/wiki/Tycho_Brahe) to bring serious arguments in favor of one or the other. (That's why Galileo Galilei had such trouble with the astronomical establishment of the day -- he simply did not have any good arguments to bring in favor of his pet theory.) The problem is that from a *dynamic* point of view, for a geocentric model to be correct it is necessary to abolish the law of universal gravitation: and this means, of course, that the universe in which a geocentric model is correct from a physical point of view has vastly different physics from ours. Whether *"a world similar to Earth would exist"* in a universe with vastly different physics than ours is not something that anybody but you can answer. How would it affect... * Seasons and climate: We don't know. The law of universal gravity doesn't work in your world, so we have no idea how wind works, how the water cycle works, the lot. By the way, how does *fire* work in a world where the law of universal gravity does not operate? * Length of day, month, or year: Those are purely kinematic phenomena, and from a purely kinematic point of view the heliocentric and geocentric models are both equally correct within the accuracy limits of astronomical instruments available before the Renaissance. * Sunrises and sunsets: The sun will rise and the sun will set. We have no idea how the atmosphere works, or how thick it is, because the law of universal gravitation doesn't work in that world. So we don't know if, for example, the sun will appear red at sunset. * Gravity: *Our* kind of gravity doesn't work in a universe where the geocentric model is correct.
It must be some different force which is called gravity. How it works, nobody but you, the author, can say. * Constellations and/or navigation: No effect whatsoever. The funny thing is that *to this day* celestial navigation, as an application of practical astronomy, is done assuming a geocentric model. See [celestial sphere](https://en.wikipedia.org/wiki/Celestial_sphere) for how this works. Of course, satellite-based navigation systems won't work, because the law of universal gravitation doesn't work. * Any huge effect I don't have enough science to anticipate: The main huge effect is that only the author can say how that world works, because it most definitely doesn't work like ours. What keeps water in the ocean, what keeps people on the ground? Does hot air rise? Why? Are there tides? Why? Note that you *do not* have to make the Sun any smaller or bigger -- whether we adopt a heliocentric or geocentric system has no impact on the distance between the Earth and the Sun. Everything also applies to the Moon. A Moon may or may not exist; if it exists, it is not universal gravitation which makes it orbit. What it is that keeps the Moon in orbit, only the author can decide.
Not much ... ============ The three key bodies for life on Earth are the Earth, Sun, and Moon. The Moon orbits the Earth. The difference between the Earth orbiting the Sun and the Sun orbiting the Earth is one of reference frames, which are somewhat arbitrary. *The only real difference is explaining the other planets and their moons.* * In a heliocentric view, Earth is a planet like the others, orbiting the Sun. No special cases are necessary. * In a pure geocentric view, Earth is unlike the other planets, and you need *complicated* explanations for their paths. Seasons, days, months, etc. are unchanged. Celestial navigation is greatly complicated on a global scale by the *weird* apparent paths of the stars.
157,765
For a fantasy, I need to know how a world similar to Earth would exist in a geocentric model. 1) I would assume the sun would have to be a lot smaller. I'm okay with artificial stars (hand-wave that stuff away with magic). But I would like to know if that makes an Earth-similar planet impossible. (I don't need other planets in this model.) For example, how would it affect... * seasons and climate * length of day, month, or year * sunrises and sunsets * gravity * constellations and/or navigation * any huge effect I don't have enough science to anticipate 2) How would I manipulate my universe's model to make it more Earth-similar if those things are completely off? It doesn't have to be exact, but I need a temperate climate with pretty normal seasons and climate zones.
2019/10/06
[ "https://worldbuilding.stackexchange.com/questions/157765", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/69409/" ]
You don't have to change much at all, if you don't force yourself to follow Newtonian mechanics to explain planetary motions. For one, universal gravitation was not mainstream until Newton built his system around it. Until then, it seemed possible that other heavenly bodies might not exert any kind of gravity at all. If you use this approach, then you don't have to worry about the size of these other bodies, or what they might do to your Earth (Although you might want to give the moon gravitational pull if you want tides). People also once theorized that the other heavenly bodies moved in fixed tracks around the Earth; if you are willing to assert that some divine being fixed a track (perfect circles, for instance), then you can make them go wherever you want, as fast as you want, and you don't have to give a scientific justification for it. If you're doing fantasy, there's no reason you should feel committed to being Newtonian about everything. Heavenly bodies were once thought to follow different laws than earthly ones; you could make that a reality in your world. This would also give you a lot of freedom to shape them as you like.
Assuming that Newtonian mechanics still apply, the factor that determines whether the Earth goes around the Sun or the Sun around the Earth is the mass of the Sun. If the sun radiates equally in all directions then its total power output must be proportional to the square of its distance from the Earth. Assuming that the mass of the Sun is small compared to that of the Earth, an orbital period of 30Ms (about 1 year) gives a radius of about 2Gm, corresponding to a sphere with a surface area of about 5.5E19 m^2. For a power density of 1361 W/m^2 the Sun must produce 7.4E22 W. If it burns at this rate for about one billion years then it will consume about 2.5E22 kg of matter. For comparison, the mass of the Earth is about 6E24 kg, so if you postulate a light-weight machine with a store of fuel which it converts to energy and radiates in all directions then this arrangement could be plausible. However, if you have a machine converting fuel to sunlight then why assume that it will waste most of it? If it could focus its entire output on the Earth then it could reduce its power consumption by a factor of about 430,000 and at the same time make the mechanism more easily accessible for the maintenance crew. Unfortunately the radiation pressure would push the Sun away from the Earth, so perhaps it should radiate an equal amount in the opposite direction. This would produce as a byproduct an interesting galactic lighthouse. From the perspective of a fairly primitive civilisation this should be almost indistinguishable from a heliocentric system. Weather would be similar, although I am not sure what effect the absence of the Sun's magnetic field would have. If you want a moon like Earth's then you must make arrangements to illuminate it. Maybe spread the beam in the plane of the Moon's orbit, or even provide a separately focused beam. Comets would be interesting, as they would now be orbiting the Earth rather than the Sun, and they would abruptly disappear as they passed out of the beam.
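A quick numeric check of the estimate above (the only inputs are the gravitational parameter of an Earth-mass body, the solar constant, and the speed of light; variable names are illustrative):

```python
import math

G_M_EARTH = 3.986e14   # gravitational parameter of an Earth-mass body, m^3/s^2
FLUX = 1361.0          # "solar constant" at the planet, W/m^2
C = 3.0e8              # speed of light, m/s

T = 30e6                                              # orbital period, s (~1 year)
a = (G_M_EARTH * T**2 / (4 * math.pi**2)) ** (1 / 3)  # Kepler: ~2.1e9 m
sphere = 4 * math.pi * a**2                           # ~5.5e19 m^2
power = FLUX * sphere                                 # ~7.4e22 W
fuel = power * 1e9 * 3.156e7 / C**2                   # 1 Gyr of burn, ~2.6e22 kg
saving = 4 * a**2 / 6.371e6**2                        # sphere vs Earth's disc, ~430,000
```

The numbers land within rounding of the answer's figures, including the ~430,000 focusing factor.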
157,765
For a fantasy, I need to know how a world similar to Earth would exist in a geocentric model. 1) I would assume the sun would have to be a lot smaller. I'm okay with artificial stars (hand-wave that stuff away with magic). But I would like to know if that makes an Earth-similar planet impossible. (I don't need other planets in this model.) For example, how would it affect... * seasons and climate * length of day, month, or year * sunrises and sunsets * gravity * constellations and/or navigation * any huge effect I don't have enough science to anticipate 2) How would I manipulate my universe's model to make it more Earth-similar if those things are completely off? It doesn't have to be exact, but I need a temperate climate with pretty normal seasons and climate zones.
2019/10/06
[ "https://worldbuilding.stackexchange.com/questions/157765", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/69409/" ]
It would be perfectly possible for an extremely advanced civilization, perhaps humans of the future, to create a geocentric solar system. They could take a rogue Earth-sized planet in interstellar space and create a giant sun satellite orbiting the planet, with gigantic fusion power generators generating power for thousands of giant lamps aimed at the planet to heat and light it. If they want the sidereal day of the Earth-sized planet to be similar to that of Earth (23 hours, 56 minutes, 4.0905 seconds) they will have to select an Earth-sized planet in interstellar space that rotates with a similar period and/or slow down or speed up the rotation of the planet. If they do that the stars at night will seem to circle with the same speed as on Earth. The giant artificial sun satellite will have to orbit at such a distance that the solar day (the time between two successive noons or midnights at the same location) will equal 24 hours. So that means the orbital period of the giant artificial sun satellite, combined with the planet's rotation period (the sidereal day), must work out to a 24-hour solar day, as on Earth. I'm certain there are some users at this site who can easily calculate the distance for you. Of course there is the problem that the "moon" should orbit the Earth-sized planet at the same distance that the Moon orbits the Earth in order to have a month of the same length and similar tides.
> > In Aristotle's (384–322 BC) description of the universe, the Moon marked the boundary between the spheres of the mutable elements (earth, water, air and fire), and the imperishable stars of aether, an influential philosophy that would dominate for centuries.[183] However, in the 2nd century BC, Seleucus of Seleucia correctly theorized that tides were due to the attraction of the Moon, and that their height depends on the Moon's position relative to the Sun.[184] In the same century, Aristarchus computed the size and distance of the Moon from Earth, obtaining a value of about twenty times the radius of Earth for the distance. These figures were greatly improved by Ptolemy (90–168 AD): his values of a mean distance of 59 times Earth's radius and a diameter of 0.292 Earth diameters were close to the correct values of about 60 and 0.273 respectively.[185] Archimedes (287–212 BC) designed a planetarium that could calculate the motions of the Moon and other objects in the Solar System.[186] > > > <https://en.wikipedia.org/wiki/Moon#Before_spaceflight> So the size and distance of the Moon were measured reasonably accurately about 2,000 years ago. And a fake moon orbiting a fake earth in an artificial geocentric solar system would have to orbit the fake earth at a similar distance to that of the real Moon. That could be farther out than the proper distance for the giant artificial sun satellite's orbit, which would be bad because on Earth eclipses are caused by the nearer Moon passing in front of the farther Sun. There are many other things to consider when designing a possible artificial geocentric solar system. But presumably some users on this board can do it for you.
A possibly simpler way to create an artificial geocentric solar system would be to find an Earth-sized rogue planet in interstellar space and build a gigantic artificial geodesic spherical structure around it and fit the inner surface of that spherical structure with countless gazillions of lamps. The lamps would be programmed to turn on and off in patterns to simulate the movements of the Sun, the Moon, the visible planets in the Solar System, and the stars. So if it is scientifically possible for an advanced civilization to create an artificial geocentric solar system, a possibly artificial or natural geocentric solar system might exist in a science fiction story set in some parallel universe where the laws of science are different. And of course a natural geocentric solar system might exist in a fantasy story filled with magic. As I remember, in J.R.R. Tolkien's legendarium the world was originally not only geocentric but flat, until a great cataclysm where the God of the story changed the Earth into a sphere and made the solar system heliocentric.
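The orbital distance the answer leaves to the reader can be pinned down: for a prograde sun satellite above a planet with Earth's spin, a 24-hour solar day forces the orbital period out to roughly one year, and Kepler's third law then gives the radius. A sketch, assuming an Earth-mass planet:

```python
import math

G_M = 3.986e14            # gravitational parameter of an Earth-mass planet, m^3/s^2
T_SIDEREAL = 86164.0905   # planet's rotation period, s (23 h 56 m 4.09 s)
T_SOLAR = 86400.0         # desired solar day, s

# Prograde orbit: apparent solar rate = spin rate - orbital rate.
w_orb = 2 * math.pi * (1 / T_SIDEREAL - 1 / T_SOLAR)
T_orb = 2 * math.pi / w_orb                            # ~3.16e7 s, about one year
a = (G_M * T_orb**2 / (4 * math.pi**2)) ** (1 / 3)     # ~2.16e9 m, ~2.2 million km
```

So the artificial sun would sit a couple of million kilometres out, several times farther than the real Moon's ~3.84e8 m orbit, which leaves room for a fake moon on a Moon-like orbit inside it.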
You have to limit the solar system to the planets visible to the naked eye - or else, starting from Neptune, they would need to rotate around your fantasy Earth at more than the speed of light, which is impossible. Stars make this dilemma even worse. Your true problem lies in "epicycles" ... the visible paths that planets take across our night sky. Even Ptolemy wrote 14 books to show and explain those epicycles: every planet has its own set, except for the inner planets Venus and Mercury. Seasons are a problem because the geocentric model needs the sun to actually move "up and down" between the tropics of Cancer and Capricorn to give the same path over the sky as can be observed. And finally, what forces would govern the movement of those bodies?
157,765
For a fantasy, I need to know how a world similar to Earth would exist in a geocentric model. 1) I would assume the sun would have to be a lot smaller. I'm okay with artificial stars (hand-wave that stuff away with magic). But I would like to know if that makes an Earth-similar planet impossible. (I don't need other planets in this model.) For example, how would it affect... * seasons and climate * length of day, month, or year * sunrises and sunsets * gravity * constellations and/or navigation * any huge effect I don't have enough science to anticipate 2) How would I manipulate my universe's model to make it more Earth-similar if those things are completely off? It doesn't have to be exact, but I need a temperate climate with pretty normal seasons and climate zones.
2019/10/06
[ "https://worldbuilding.stackexchange.com/questions/157765", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/69409/" ]
It would be perfectly possible for an extremely advanced civilization, perhaps humans of the future, to create a geocentric solar system. They could take a rogue Earth-sized planet in interstellar space and create a giant sun satellite orbiting the planet, with gigantic fusion power generators generating power for thousands of giant lamps aimed at the planet to heat and light it. If they want the sidereal day of the Earth-sized planet to be similar to that of Earth (23 hours, 56 minutes, 4.0905 seconds) they will have to select an Earth-sized planet in interstellar space that rotates with a similar period and/or slow down or speed up the rotation of the planet. If they do that the stars at night will seem to circle with the same speed as on Earth. The giant artificial sun satellite will have to orbit at such a distance that the solar day (the time between two successive noons or midnights at the same location) will equal 24 hours. So that means the orbital period of the giant artificial sun satellite, combined with the planet's rotation period (the sidereal day), must work out to a 24-hour solar day, as on Earth. I'm certain there are some users at this site who can easily calculate the distance for you. Of course there is the problem that the "moon" should orbit the Earth-sized planet at the same distance that the Moon orbits the Earth in order to have a month of the same length and similar tides.
> In Aristotle's (384–322 BC) description of the universe, the Moon marked the boundary between the spheres of the mutable elements (earth, water, air and fire), and the imperishable stars of aether, an influential philosophy that would dominate for centuries.[183] However, in the 2nd century BC, Seleucus of Seleucia correctly theorized that tides were due to the attraction of the Moon, and that their height depends on the Moon's position relative to the Sun.[184] In the same century, Aristarchus computed the size and distance of the Moon from Earth, obtaining a value of about twenty times the radius of Earth for the distance. These figures were greatly improved by Ptolemy (90–168 AD): his values of a mean distance of 59 times Earth's radius and a diameter of 0.292 Earth diameters were close to the correct values of about 60 and 0.273 respectively.[185] Archimedes (287–212 BC) designed a planetarium that could calculate the motions of the Moon and other objects in the Solar System.[186]

<https://en.wikipedia.org/wiki/Moon#Before_spaceflight>

So the size and distance of the Moon were measured reasonably accurately about 2,000 years ago. And a fake moon orbiting a fake earth in an artificial geocentric solar system would have to orbit the fake earth at a similar distance to that of the real Moon. That could be a greater distance than the proper orbital distance for the giant artificial sun satellite. Which would be bad, because on Earth eclipses are caused by the nearer Moon passing in front of the farther Sun.

There are many other things to consider when designing a possible artificial geocentric solar system. But presumably some users on this board can do it for you.
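The quoted Ptolemaic figures are easy to check against modern reference values. A quick sketch (the constants below are modern measurements I'm supplying, not figures from the answer):

```python
# Check Ptolemy's lunar figures (59 Earth radii distance, 0.292 Earth
# diameters across) against modern reference values.

R_EARTH_KM = 6371.0          # mean Earth radius
D_MOON_KM = 3474.8           # Moon diameter
MOON_DIST_KM = 384_400.0     # mean Earth-Moon distance

dist_in_earth_radii = MOON_DIST_KM / R_EARTH_KM
diam_in_earth_diams = D_MOON_KM / (2 * R_EARTH_KM)

print(f"Moon distance: {dist_in_earth_radii:.1f} Earth radii (Ptolemy: 59)")
print(f"Moon diameter: {diam_in_earth_diams:.3f} Earth diameters (Ptolemy: 0.292)")
# ~60.3 Earth radii and ~0.273 Earth diameters, as the quote says
```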
A possibly simpler way to create an artificial geocentric solar system would be to find an Earth-sized rogue planet in interstellar space and build a gigantic artificial geodesic spherical structure around it, fitting the inner surface of that spherical structure with countless gazillions of lamps. The lamps would be programmed to turn on and off in patterns to simulate the movements of the Sun, the Moon, the visible planets in the Solar System, and the stars.

So if it is scientifically possible for an advanced civilization to create an artificial geocentric solar system, a possibly artificial or natural geocentric solar system might exist in a science fiction story set in some parallel universe where the laws of science are different. And of course a natural geocentric solar system might exist in a fantasy story filled with magic. As I remember, in J.R.R. Tolkien's legendarium the world was originally not only geocentric but flat, until a great cataclysm where the God of the story changed the Earth into a sphere and made the solar system heliocentric.
Assuming that Newtonian mechanics still apply, the factor that determines whether the Earth goes around the Sun or the Sun around the Earth is the mass of the Sun. If the sun radiates equally in all directions then its total power output must be proportional to the square of its distance from the Earth.

Assuming that the mass of the Sun is small compared to that of the Earth, an orbital period of 30 Ms (about 1 year) gives a radius of about 2 Gm, corresponding to a sphere with a surface area of about 5.5E19 m^2. For a power density of 1.361 kW/m^2 the Sun must produce 7.4E22 W. If it burns at this rate for about one billion years then it will consume about 2.5E22 kg of matter. For comparison, the mass of the Earth is about 6E24 kg, so if you postulate a light-weight machine with a store of fuel which it converts to energy and radiates in all directions then this arrangement could be plausible.

However, if you have a machine converting fuel to sunlight then why assume that it will waste most of it? If it could focus its entire output on the Earth then it could reduce its power consumption by a factor of about 430,000 and at the same time make the mechanism more easily accessible for the maintenance crew. Unfortunately the radiation pressure would push the Sun away from the Earth, so perhaps it should radiate an equal amount in the opposite direction. This would produce as a byproduct an interesting galactic lighthouse.

From the perspective of a fairly primitive civilisation this should be almost indistinguishable from a heliocentric system. Weather would be similar, although I am not sure what effect the absence of the Sun's magnetic field would have. If you want a moon like Earth's then you must make arrangements to illuminate it. Maybe spread the beam in the plane of the Moon's orbit, or even provide a separately focused beam.

Comets would be interesting, as they would now be orbiting the Earth rather than the Sun, and they would abruptly disappear as they passed out of the beam.
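These estimates can be reproduced in a few lines. A sketch (my own recalculation, using standard constants and Kepler's third law, with the answer's ~1-year period as input):

```python
import math

# Reproduce the answer's estimates: orbital radius of a sun orbiting an
# Earth-mass body once per ~year (Kepler's third law), the power needed to
# deliver the solar constant at that distance, the fuel mass burned over a
# billion years (E = m c^2), and the savings from focusing on Earth alone.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.97e24    # kg
R_EARTH = 6.371e6    # m
C = 2.998e8          # speed of light, m/s
SOLAR_CONST = 1361.0 # W/m^2
T = 3.0e7            # orbital period, s (~1 year, as in the answer)

# Kepler's third law: a^3 = G M T^2 / (4 pi^2)
a = (G * M_EARTH * T**2 / (4 * math.pi**2)) ** (1 / 3)
area = 4 * math.pi * a**2                # sphere at the orbital radius
power = SOLAR_CONST * area               # isotropic output required
fuel = power * 1e9 * 3.15e7 / C**2       # mass-energy burned in 1 Gyr
focus_factor = 4 * a**2 / R_EARTH**2     # sphere area / Earth's disc

print(f"orbital radius: {a:.3g} m")           # ~2.1e9 m (about 2 Gm)
print(f"sphere area:    {area:.3g} m^2")      # ~5.5e19 m^2
print(f"power:          {power:.3g} W")       # ~7.4e22 W
print(f"fuel for 1 Gyr: {fuel:.3g} kg")       # ~2.6e22 kg
print(f"focus factor:   {focus_factor:.3g}")  # ~4.3e5
```

The numbers land where the answer says: a ~2 Gm orbit, ~7.4E22 W of output, ~2.5E22 kg of fuel over a billion years, and a focusing gain of roughly 430,000.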