qid int64 (1–74.7M) | question string (lengths 12–33.8k) | date string (length 10) | metadata list | response_j string (lengths 0–115k) | response_k string (lengths 2–98.3k) |
|---|---|---|---|---|---|
1,310 | When are population dynamics models useful? There seems to have been a lot of research about it, but how does it help? If I need data about how a population will evolve under what conditions, I need it because I need data for a decision (such as "can we kill 50% of population X without doing too much damage?"), right? But for that, the model needs to be aware of what causes what. And for that, I have to do experiments, right? Like "let's kill a significant amount of population X and see what happens in the next ten years". I really don't get it. | 2012/03/04 | [
"https://biology.stackexchange.com/questions/1310",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/585/"
] | Leonardo's already given you an excellent answer, but I thought I'd add my perspective. I'm a mathematical epidemiologist, so I'd at least like to believe these types of models are useful.
For me, there are a number of things population dynamics models are especially useful for:
* Highlighting data requirements. Yes, models need data, as you've mentioned. But they don't need all their data to come from one source, one study, or even one *field*. Models are also profoundly useful for showing where we *don't* have the data we need to fully understand a system. "To make a model where we understand A, we need the values for X, Y, and Z. X is well studied, but Y and Z aren't - though it turns out when we look over the entire parameter space for Z, nothing really changes in our answer. But guys? We could really use a study on Y."
* Eliminating guesswork. Models aren't perfect encapsulations of reality - there will always be some simplifying assumptions, etc. But it's better than "going with your gut" - especially for complex problems.
* Impossible studies. A ton of what mathematical epidemiology looks at is areas where studies are either impossible, logistically difficult, or unethical. It would be very hard indeed if we could only study pandemic response plans when we had an actual outbreak, or vaccination strategy only while a new vaccine is being rolled out.
* Highlighting potentially new directions. If you're considering an intervention, but no matter how effective you make it in your model it doesn't move the system much, it might not be worthwhile. Models can also highlight threshold effects - like the critical % of the population you'd need to vaccinate to achieve herd immunity. | I'll throw one more application into the pot. Population dynamics also forms the foundations of population genetics, population ecology, and more recently plays an important role in frameworks such as evolutionary game theory and eco-evolutionary dynamics.
Here the models are also used as a type of theoretical exercise or thought experiment (as a previous answer suggests). In the development of evolutionary theory we simply cannot observe the process over the timescales we require to test the hypotheses we make. Thus, the development of population models allows us to explore 'possible worlds', as Robert May once put it, to see what kinds of adaptations or population structures we would expect to see, *given* the assumptions we put in.
We are also seeing an increasing number of population models and dynamical models used in conjunction with experiments on microbes in the field of experimental evolution. Here we *can* observe evolution in real time, and many of the assumptions about well-mixedness and large population sizes that are often made in modelling populations are actually fairly accurate. |
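The threshold effect mentioned in the first answer above — the critical fraction of the population to vaccinate for herd immunity — follows from a standard result in the SIR family of models: transmission dies out once the susceptible fraction drops below 1/R0, so the critical coverage is 1 − 1/R0. A minimal sketch, assuming a perfectly effective vaccine and homogeneous mixing (the R0 values below are illustrative assumptions, not taken from the answer):

```python
def herd_immunity_threshold(r0: float) -> float:
    """Critical fraction of the population to vaccinate, assuming a
    perfectly effective vaccine and homogeneous mixing: p_c = 1 - 1/R0."""
    if r0 <= 1:
        return 0.0  # an epidemic cannot take off, so no threshold is needed
    return 1.0 - 1.0 / r0

# Illustrative reproduction numbers (assumed values, for demonstration only)
for disease, r0 in [("seasonal flu", 1.3), ("smallpox", 5.0), ("measles", 15.0)]:
    print(f"{disease}: R0 = {r0}, vaccinate at least {herd_immunity_threshold(r0):.0%}")
```

This is exactly the kind of model output the answer describes: a decision-relevant number (vaccination coverage) derived from a parameter (R0) that can be estimated from data.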
1,310 | When are population dynamics models useful? There seems to have been a lot of research about it, but how does it help? If I need data about how a population will evolve under what conditions, I need it because I need data for a decision (such as "can we kill 50% of population X without doing too much damage?"), right? But for that, the model needs to be aware of what causes what. And for that, I have to do experiments, right? Like "let's kill a significant amount of population X and see what happens in the next ten years". I really don't get it. | 2012/03/04 | [
"https://biology.stackexchange.com/questions/1310",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/585/"
] | Two previous answers listed many applications of population dynamics models. I want to add that they are also important for the conservation of endangered species. For example, the classical stage-class model ([Crouse et al 1987](http://www.esajournals.org/doi/abs/10.2307/1939225), [free copy](http://www.google.com/url?q=http://depts.washington.edu/amath/courses/422-winter-2011/materials/crouse_et_al_sea_turtles.pdf&sa=U&ei=NMFXT9GpIYOXOt-SnJ0N&ved=0CAQQFjAA&client=internal-uds-cse&usg=AFQjCNHMllJA87MvU5XqxcxgjMQm3TNSAQ)) indicates that the most effective way to protect sea turtles is reducing the mortality of large juveniles.
Moreover, you don't have to perform such drastic experiments as killing 50% of a population to estimate your model parameters. Information about the number of offspring, breeding success, natural mortality, etc. can usually be gained without seriously perturbing wild populations. The number of individuals in every natural population fluctuates for random reasons, so it is possible (though it sometimes needs more field work) to calculate a (possibly nonlinear) regression between population density and some demographic indicators and then extrapolate it to unexamined densities. For some small, short-lived species it is sometimes possible to measure those correlations in laboratory or semi-wild conditions. For some long-lived species, especially sedentary ones like trees, it may be better to compare specimens living at different distances from their neighbors. For poorly known species it is possible to take missing information from related or similar species. | I'll throw one more application into the pot. Population dynamics also forms the foundations of population genetics, population ecology, and more recently plays an important role in frameworks such as evolutionary game theory and eco-evolutionary dynamics.
Here the models are also used as a type of theoretical exercise or thought experiment (as a previous answer suggests). In the development of evolutionary theory we simply cannot observe the process over the timescales we require to test the hypotheses we make. Thus, the development of population models allows us to explore 'possible worlds', as Robert May once put it, to see what kinds of adaptations or population structures we would expect to see, *given* the assumptions we put in.
We are also seeing an increasing number of population models and dynamical models used in conjunction with experiments on microbes in the field of experimental evolution. Here we *can* observe evolution in real time, and many of the assumptions about well-mixedness and large population sizes that are often made in modelling populations are actually fairly accurate. |
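The regress-and-extrapolate idea from the conservation answer above can be sketched in a few lines. Below is a minimal ordinary least-squares fit of per-capita growth rate against population density, then an extrapolation to a density never observed in the field; the numbers are invented purely to illustrate the method, and a real analysis might well need a nonlinear model as the answer notes:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical field observations: density vs. per-capita growth rate
# (made-up numbers, for demonstration only).
density = [10, 20, 40, 60, 80]
growth = [0.48, 0.45, 0.38, 0.31, 0.24]

a, b = linear_fit(density, growth)

# Extrapolate to a density never observed directly, e.g. density 120:
predicted = a + b * 120
print(f"growth ~ {a:.3f} + ({b:.5f}) * density; predicted at density 120: {predicted:.3f}")
```

The fitted relationship stands in for the experiment the question worried about: instead of culling the population, one exploits natural fluctuations in density to learn the dependence.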
14,734,479 | I realize there are a few posts on this already but most of the libraries that are mentioned are at least a year old (and don't have a lot of features such as attachment support, checking bouncebacks, etc).
Does anybody know the best library for Node.js to use Amazon SES that is maintained?
Ex:
* [node-amazon-ses](https://github.com/jjenkins/node-amazon-ses) No attachment support; no callback to indicate whether the send was successful. But it does have:
+ DeleteVerifiedEmailAddress
+ GetSendQuota
+ GetSendStatistics
+ ListVerifiedEmailAddresses
+ SendEmail
+ VerifyEmailAddress | 2013/02/06 | [
"https://Stackoverflow.com/questions/14734479",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1769217/"
] | The [AWS SDK for Node.js](http://aws.amazon.com/sdkfornodejs) supports SES. | Another option is the Nodemailer module, which can be used with SES:
<https://github.com/andris9/Nodemailer>
Here is an example with SES and attachments:
<https://github.com/andris9/Nodemailer/blob/master/examples/example_ses.js>
Not really a full-featured SES library, but it does make sending to SES with attachments simpler. |
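Whatever client library you end up with, the "attachments" part boils down to building a standard MIME message, since SES's raw-send path accepts any RFC 822 payload. As an illustration of that step only (addresses and filename are placeholders, and this sketch uses Python's standard library rather than a Node module), constructing such a message looks like:

```python
from email.message import EmailMessage

def build_raw_email(sender: str, recipient: str, subject: str,
                    body: str, attachment_name: str, attachment_bytes: bytes) -> bytes:
    """Build an RFC 822 message with one attachment, suitable for an API
    that accepts a raw MIME payload (e.g. a raw-email send endpoint)."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    # Attach arbitrary bytes; the library handles encoding and boundaries.
    msg.add_attachment(attachment_bytes,
                       maintype="application", subtype="octet-stream",
                       filename=attachment_name)
    return msg.as_bytes()

raw = build_raw_email("me@example.com", "you@example.com",
                      "Report", "See attached.", "report.csv", b"a,b\n1,2\n")
```

The resulting bytes would then be handed to whichever SES client you choose; the point is that the MIME assembly is the fiddly part the libraries above are wrapping.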
5,001,417 | I have 2 kinds of business units; division and department. A business units has to be one or the other, but cannot be both.
So this is easy enough. Have a BusinessUnit table and a BusinessUnitType lookup table containing division and department.
However only divisions can contain teams. For each division there are one to many teams. Departments do not have teams.
So what should I be doing here? Maybe I should have a flag on the BusinessUnitType table called hasTeam?
Is that the best way to organise this data?
I am not sure if this particular design has a name. | 2011/02/15 | [
"https://Stackoverflow.com/questions/5001417",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/125673/"
] | Your case looks like an instance of the gen-spec design pattern. Gen-spec is short for “generalization specialization” ([see definition](http://en.wikipedia.org/wiki/Class_diagram#Generalization) ). The gen-spec pattern is familiar to programmers who understand inheritance. But implementing the gen-spec pattern in a relational schema can be a little tricky, and database design tutorials often skip over this topic.
This topic has come up before. ([See sample discussion](https://stackoverflow.com/questions/3879806/data-modeling-question/3880673#3880673)).
Fortunately, there are some good articles on the web that explain just this subject ([see sample article](http://www.javaguicodexample.com/erdrelationalmodelnotes1.html)). And a Google search ([see sample search](http://www.google.com/search?q=http%3A%2F%2Fstackoverflow.com%2Fquestions%2F3879806%2Fdata-modeling-question%2F3880673%233880673&rls=com.microsoft:en-us:IE-ContextMenu&ie=UTF-8&oe=UTF-8&sourceid=ie7&rlz=1I7GWYE#hl=en&sugexp=gsisc&xhr=t&q=generalization+specialization+relational+modeling&cp=31&pf=p&sclient=psy&rls=com.microsoft:en-us%3AIE-ContextMenu&rlz=1I7GWYE&aq=0l&aqi=&aql=f&oq=Generalization+Specialiazation+&pbx=1&fp=e637dc619dfb9af3)) will yield lots more articles. | It's a bit difficult to answer without a broader context (for example- what technologies you're using, whether this is a new project, whether you have any other constraints on implementation etc.).
but generally speaking I would say:
1. If this is a new project, and you are not technology-constrained, I'd recommend using an O/RM (I personally use NHibernate). You can easily configure it to fit your needs without worrying about the underlying DB.
2. Otherwise, it seems that your original thought is a good idea. Depending on the DB you're using, you can probably create a constraint to enforce that logic if you like (I personally would not recommend that, since it brings your business logic into your DB, where it doesn't belong).
Hope this helps. |
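The gen-spec pattern from the first answer is usually implemented as a supertype table plus one table per subtype, sharing the same primary key. A minimal sketch (table and column names are invented for illustration, shown here with SQLite) that structurally enforces "only divisions have teams" by pointing Team at the Division subtype table rather than at BusinessUnit:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this to enforce FKs
conn.executescript("""
    CREATE TABLE BusinessUnit (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    -- Subtype tables share the supertype's primary key (gen-spec).
    CREATE TABLE Division (
        id INTEGER PRIMARY KEY REFERENCES BusinessUnit(id)
    );
    CREATE TABLE Department (
        id INTEGER PRIMARY KEY REFERENCES BusinessUnit(id)
    );
    -- Teams can only reference divisions, never departments.
    CREATE TABLE Team (
        id          INTEGER PRIMARY KEY,
        division_id INTEGER NOT NULL REFERENCES Division(id),
        name        TEXT NOT NULL
    );
""")
conn.execute("INSERT INTO BusinessUnit VALUES (1, 'Sales')")
conn.execute("INSERT INTO Division VALUES (1)")
conn.execute("INSERT INTO Team VALUES (1, 1, 'EMEA Sales Team')")
```

Note this sketch only covers the "teams belong to divisions" rule; guaranteeing that a unit is a division *or* a department but never both takes a further measure, such as a type column on BusinessUnit checked by each subtype table.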
28,481 | I'm going to move my blog from <http://blog.wordfruit.com> to <http://wordfruit.com/blog>
the main Wordfruit site is in PHP and the blog is WordPress.
I know I can make the change at wp-admin/options-general.php
...I want to make sure I don't create problems when I make that change...
1. Do I not need to create any folders etc on the domain? Will they just create themselves when I make the change?
2. Can I redirect the old urls to the new urls from within the WordPress account?
3. Are there any other things I need to account for in making this change?
Cheers, Richard | 2011/09/14 | [
"https://wordpress.stackexchange.com/questions/28481",
"https://wordpress.stackexchange.com",
"https://wordpress.stackexchange.com/users/7692/"
] | First you should read the Codex entry on domain moving
<http://codex.wordpress.org/Moving_WordPress>
But in a nutshell: aside from moving your contents physically to the /blog location, all you have to do is search and replace every SQL entry for the previous domain (instead of just changing the domain in WordPress options). Doing this by hand is pretty dangerous; I found this script to be very helpful:
<https://interconnectit.com/products/search-and-replace-for-wordpress-databases/>
Back up your database (e.g. with phpMyAdmin) in SQL form, then upload the above PHP script to your host folder and point your browser to it. Using it you can safely search & replace <http://blog.wordfruit.com> with <http://wordfruit.com/blog>.
All your post contents and options will then be rewritten to the new domain. What could possibly remain is your page template - if you made it yourself, I'd also download and search the contents of the theme folder for possible hard-coded links to the old domain. | I have played with those options myself and would recommend doing this instead.
1. Copy all physical files to new location and delete wp-config.php
2. Use a back up plugin and export all content and options
3. Install the new blog on a new database or with a new prefix
4. Import all of the content and settings back into WordPress
5. Check that everything is working on the new site
6. If everything is working delete the old sites database and files
7. Redirect the old sites domain to the new location
I know this seems strange given the settings, but whenever I used the change-domain settings it didn't work and broke the site, and I then had to manually change it back through wp-config.php. |
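The reason a plain text search-and-replace over a WordPress SQL dump is "pretty dangerous", as the first answer notes, is that WordPress stores some options as PHP-serialized values, which embed a byte length (`s:25:"...";`); replacing the domain without fixing those lengths corrupts the data. A simplified sketch of the idea behind tools like the interconnectit script (a real tool also handles escaped quotes, nested serialization, and non-string types):

```python
import re

def safe_replace(dump: str, old: str, new: str) -> str:
    """Replace old with new, then recompute the length prefix of any
    PHP-serialized string values (s:<len>:"...";) in the dump."""
    dump = dump.replace(old, new)
    def fix_length(m):
        value = m.group(2)
        return 's:%d:"%s";' % (len(value.encode("utf-8")), value)
    return re.sub(r's:(\d+):"(.*?)";', fix_length, dump)

before = 'a:1:{s:25:"http://blog.wordfruit.com";}'
after = safe_replace(before, "http://blog.wordfruit.com", "http://wordfruit.com/blog")
```

In this particular move the old and new URLs happen to be the same length, but the lengths are recomputed regardless, which is what a naive editor-based replace would miss.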
Ubuntu 12.10 is very slow, with a lot of unresponsive applications. When I open Skype it goes into a not-responding state, then back to normal after a while - even the Software Centre does this - and a system process is eating the CPU. I don't know if Compiz is the problem, but issuing the command `compiz --replace` restores the applications from the not-responding state. CPU: Intel Celeron D 3.4, RAM: 1 GB, VGA: Intel G45
Please help | 2012/10/21 | [
"https://askubuntu.com/questions/204348",
"https://askubuntu.com",
"https://askubuntu.com/users/99631/"
] | Have you tried installing Gnome Shell or KDE Plasma? Unity runs like a dog for me on 12.10 (ran fine on 12.04) but the other desktops ran fine. I'm leaning towards Gnome Shell at the moment. | It's because your video card doesn't support Ubuntu 12.10. I had the same problem with my Asus K50C (1,500 MHz, 2 GB RAM, SiS VGA 771/671) when I upgraded to 12.10: the upgrade itself worked fine, but my applications all slowed down, so I returned to 12.04. I advise you to do the same, because unfortunately these are old video cards and you cannot expect too much from them. |
5,299,788 | I have a 1:1 relationship between table 'A' and 'B' in my .DBML. The FK in the database is in place and the .DBML diagram shows an association line between 'A' and 'B'. However, I cannot get the code generator to create a child property in the 'A' entity. All I have is the FK column. In the Association properties, I have ChildProperty set to true. However, the code generator will not create the child property. I have dropped and added the two tables several times.
Anyone have any ideas? | 2011/03/14 | [
"https://Stackoverflow.com/questions/5299788",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/163534/"
] | The O/R designer will refuse to create an association property if a **primary key is missing** on one of the associated tables. Make sure all of your associated tables have a primary key. | Not sure, but I think what you call 1:1 is actually seen by the DBML as 1:\* because the list can "have" many of your fk-table, e.g. one employee can have one city, but each city can "have" many employees.
AFAIK a primary key in each table is a prerequisite without which the DBML will not "work". An error is issued when saving it. Your project will compile, but you'll see the errors later. HTH |
111,875 | I'm trying to setup a windows server, but I can't seem to get it to install ethernet drivers :(
motherboard is ms-6743 chipset 82865g/pe/p(intel)
The MSI drivers specifically for this motherboard are bad links. Not surprising... MSI is rarely helpful.
Sisoft Sandra doesn't see any network devices, and all of my leads to drivers have reported that there is no network adapter to install a driver for.
The light is ON on the mobo, the onboard setting in bios is ON and the computer worked just fine with a standard install of windows 7 about a month ago.
I don't know what to do :( | 2010/02/11 | [
"https://serverfault.com/questions/111875",
"https://serverfault.com",
"https://serverfault.com/users/34145/"
] | That chipset uses an Intel 82562EZ on-board ethernet controller if I'm not mistaken.
[The drivers you need](http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&ProdId=1702&DwnldID=18720&lang=eng) are available from [Intel](http://downloadcenter.intel.com/SearchResult.aspx?lang=eng&ProductFamily=Ethernet+Components&ProductLine=Ethernet+Controllers&ProductProduct=Intel%C2%AE+82562EZ+Fast+Ethernet+Controller). Download them using another computer, transfer the installer and run it. | I would try googling for the proper driver. See the detailed specs on where the LAN is coming from, if its from an intel chipset, then intel probably has drivers. Make sure you are using the proper drivers (32 or 64 bit). |
111,875 | I'm trying to setup a windows server, but I can't seem to get it to install ethernet drivers :(
motherboard is ms-6743 chipset 82865g/pe/p(intel)
The MSI drivers specifically for this motherboard are bad links. Not surprising... MSI is rarely helpful.
Sisoft Sandra doesn't see any network devices, and all of my leads to drivers have reported that there is no network adapter to install a driver for.
The light is ON on the mobo, the onboard setting in bios is ON and the computer worked just fine with a standard install of windows 7 about a month ago.
I don't know what to do :( | 2010/02/11 | [
"https://serverfault.com/questions/111875",
"https://serverfault.com",
"https://serverfault.com/users/34145/"
] | That chipset uses an Intel 82562EZ on-board ethernet controller if I'm not mistaken.
[The drivers you need](http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&ProdId=1702&DwnldID=18720&lang=eng) are available from [Intel](http://downloadcenter.intel.com/SearchResult.aspx?lang=eng&ProductFamily=Ethernet+Components&ProductLine=Ethernet+Controllers&ProductProduct=Intel%C2%AE+82562EZ+Fast+Ethernet+Controller). Download them using another computer, transfer the installer and run it. | Consider finding another temporary NIC. USB-based or whathaveyou. Then fire up Windows Update.
Does Windows Update then find the driver for your onboard NIC? |
345,761 | I created an iCloud account for my daughter, and her profile has an avatar assigned to it that I didn’t choose:
[](https://i.stack.imgur.com/0S3DJ.jpg)
I think it’s Stitch from the Disney film *Lilo and Stitch* and appears in my daughter’s contact record and in the family iCloud settings. The picture has been on the account since the moment I created it, and my daughter does not currently know of the existence of this account (or currently have any iOS devices), so can not have uploaded it herself.
Was this automatically selected by Apple? I created an account for my son last year and it didn’t have an avatar, and neither does my own, so I was wondering if this could have been picked up from another service she has a profile on (although I checked Gravatar and she doesn’t have one on there). | 2018/12/15 | [
"https://apple.stackexchange.com/questions/345761",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/185714/"
] | I asked her (without giving away why) if she’s used this as a profile picture elsewhere and she told me it’s the pic on her YouTube profile. I added her Google email address as a secondary email (for account recovery) when I created her iCloud account, so Apple must have scraped her Google account for a profile picture. | The person signed in to iCloud can upload any image they choose and no avatar is chosen by Apple. On macOS there are some generic account photos so your question makes great sense, but iCloud has no such stock content supplied.
To date, everything is a digital file that someone uploads from the Photos app or loads intentionally into the iCloud control apps (Settings / System Preferences) or the web site.
Tell your daughter she has good taste and I’ll encourage you and your son to start having fun with art or photos that reduce to a set of pixels. |
17,048,238 | **aeson-schema** is a package for validating JSON-data against a JSON-schema.
Has anyone an example of how to use it ? | 2013/06/11 | [
"https://Stackoverflow.com/questions/17048238",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | Persistent data is information that can outlive the program that creates it. The majority of complex programs use persistent data: GUI applications need to store user preferences across program invocations, web applications track user movements and orders over long periods of time, etc. (source provided below)
Here is the answer to your question:
Lightweight persistence is a form of storage that requires little or no work from the developer's side. Example: Java serialization is a form of lightweight persistence because it can be used to persist Java objects directly to a file with very little effort.
I am very happy that you are not just reading the book... rather, you are asking questions about anything you come across in the book. Good luck!
[source](http://openjpa.apache.org/builds/1.2.3/apache-openjpa/docs/jpa_overview_intro_transpers.html) | There is a process in Java (and other languages) called serialization. Basically it lets you turn an object into a byte stream, so it can be written to a file, stored in a database, sent to a cloud service, etc. The idea is that there is an easy and automatic translation between the stored object and the in-memory object. If you do it yourself, such as writing individual fields to a file or database, you need to come up with a file format or database schema. That is heavyweight storage.
Here is a tutorial on java serialization: <http://www.tutorialspoint.com/java/java_serialization.htm> |
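The object-to-bytes-and-back round trip described above exists in most languages; as an analogous illustration of the same "lightweight persistence" idea (in Python rather than Java, using the built-in pickle module), note that no hand-designed file format or schema is involved:

```python
import pickle

# An ordinary in-memory object we want to outlive the program.
prefs = {"theme": "dark", "font_size": 14}

# Lightweight persistence: the runtime handles the byte layout for us.
blob = pickle.dumps(prefs)       # object -> bytes (write to a file, DB, cloud...)
restored = pickle.loads(blob)    # bytes -> object, no schema needed

print(restored["theme"], restored["font_size"])
```

The heavyweight alternative the answer contrasts this with would be writing each field out yourself in a format you invented, and parsing it back by hand.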
200,951 | I'm looking to get Far Cry 4 for my computer. I do believe the game is thirty gigabytes, and my internet is capped. If I were to get the game on a disk, would it still use bandwidth, or would it install directly to my hard drive? | 2015/01/08 | [
"https://gaming.stackexchange.com/questions/200951",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/98992/"
] | Though the majority of the game will be installed, forum posts such as [this](https://forums.ubi.com/showthread.php/1471486-I-can-not-install-Far-Cry-4-from-DVD?p=11841388&viewfull=1#post11841388) (from 2016) suggest that:

> the version that comes on the DVD's is outdated by almost 2 years

and that

> and many of those 8 patches were several gig in size anyway

This post is, in itself, probably outdated by now. | <http://forums.nexusmods.com/index.php?/topic/452910-installing-skyrim-from-the-dvd/>
This post details how to force Steam to install a game from the disc instead of downloading it. There will most likely be an update or two after install, but it probably won't even reach half a gig. |
200,951 | I'm looking to get Far Cry 4 for my computer. I do believe the game is thirty gigabytes, and my internet is capped. If I were to get the game on a disk, would it still use bandwidth, or would it install directly to my hard drive? | 2015/01/08 | [
"https://gaming.stackexchange.com/questions/200951",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/98992/"
] | Once installed, Far Cry 4 can be played in offline mode, which uses very little, if any, bandwidth. | <http://forums.nexusmods.com/index.php?/topic/452910-installing-skyrim-from-the-dvd/>
This post details how to force Steam to install a game from the disc instead of downloading it. There will most likely be an update or two after install, but it probably won't even reach half a gig. |
200,951 | I'm looking to get Far Cry 4 for my computer. I do believe the game is thirty gigabytes, and my internet is capped. If I were to get the game on a disk, would it still use bandwidth, or would it install directly to my hard drive? | 2015/01/08 | [
"https://gaming.stackexchange.com/questions/200951",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/98992/"
] | Though the majority of the game will be installed, forum posts such as [this](https://forums.ubi.com/showthread.php/1471486-I-can-not-install-Far-Cry-4-from-DVD?p=11841388&viewfull=1#post11841388) (from 2016) suggest that:

> the version that comes on the DVD's is outdated by almost 2 years

and that

> and many of those 8 patches were several gig in size anyway

This post is, in itself, probably outdated by now. | Once installed, Far Cry 4 can be played in offline mode, which uses very little, if any, bandwidth. |
367,261 | I'm replacing WinXP with 13.04 on an older PC using wubi.exe on a USB stick. I had no problem changing the BIOS on another system that was a bit newer but when I change the settings on the older PC to boot from USB, I get a DOS message saying "Searching for boot record...Not found" & asked to try again. I don't have the ability to boot from a live CD so is there a reason why I can boot from a USB on a newer computer but can't from an older one? Both have options to choose to boot from USB, but the older one can find no boot record. The system was built on 12/13/01 by American Megatrends.
Since I don't have enough "reputation points" to post the screen shot image, you can see it at <http://img.photobucket.com/albums/v633/boonevillephil/1029131748-00.jpg>. | 2013/10/28 | [
"https://askubuntu.com/questions/367261",
"https://askubuntu.com",
"https://askubuntu.com/users/191722/"
] | I had the exact same issue and struggled for hours, until I booted from a USB Pendrivelinux to set up the USB stick with 13.10
<http://www.pendrivelinux.com/>
Tor-André | You should run Boot Repair from the USB or live CD:
<https://help.ubuntu.com/community/Boot-Repair> |
182,629 | I graduated last year from a prestigious university where I was in general a very good student. I am now applying to graduate programs.
The class (upper-level undergraduate mathematics) had moved to an online format due to the pandemic; the first part of the class was in person. (This was 2 years ago.) The exam in question was closed to all aids, such as peer collaboration, the textbook, notes, and the Internet. Moreover, although the exam was available within a 36-hour window as a PDF file, it was stipulated that it be taken within 2 hours. There was no monitoring for compliance with the self-timing or closed-book requirements. These rules were laid out unambiguously, and I signed a declaration of academic honesty, in which I affirmed, falsely, that I had complied with the rules. I consulted the textbook extensively during the exam and took 6 hours to complete the exam (because I was quite literally studying the material during the exam period). I got away with it.
To be honest, at the time, it did not even occur to me that it was morally wrong to do such a thing. I was under pressure from my other classes and I felt that I did not have time to study beforehand. I made the following rationalizations, mostly subconsciously: (1) it was not particularly wrong since I was not copying answers or looking up solutions, but merely "refreshing my memory" with key theorems; (2) I had performed extremely well before, and I could easily have gotten the same grade if I had studied, so I was not obtaining anything that lay beyond my potential (I got a perfect score on this virtual exam just as I had on an earlier in-person exam of similar difficulty); (3) other students would inevitably break the rules; (4) the lack of enforcement was an implicit signal that they were more idealistic guidelines than rules; (5) my other classes had relaxed the closed-book requirement in light of the virtual format.
The incident has begun to weigh heavily on my conscience out of the blue; I had nearly forgotten about it between now and then. In hindsight, it was profoundly wrong for me to have done it. I feel enormously guilty about this incident and can only think of how foolish it was to have minimized it with those self-deceptions. Needless to say, I have no desire to ever again violate the norms of academic honesty. It may sound implausible, but I don't think I realized that what I had done was cheating, and how big a deal it is, until recently.
What should I do? In light of the severity of the infraction and my prospective plans in academia, is it incorrect to remain without raising the issue publicly, as I have until now? | 2022/02/22 | [
"https://academia.stackexchange.com/questions/182629",
"https://academia.stackexchange.com",
"https://academia.stackexchange.com/users/-1/"
] | Congratulations on holding such a moral code. It will pay dividends in your life if you surround yourself with honest people. What you did was wrong. And the fact that you regret it and worry about it shows that you understand it.
Now do not complicate your life because of the mistake you made. Firstly, you got a higher grade than you deserved, so go back and revisit the course material so that your real knowledge matches what you have in your CV; otherwise this could be bad for your future in multiple ways. This by itself is a penance.
Secondly, there are ways in which you could fix it. You could give free tuition to deserving students. Help them so that they need not teach. This is especially useful as the world is standing back after the pandemic. Or dedicate a part of your savings to an African school. You can come up with many such ideas.
Thirdly, you are too young and you would see many exams in your life, university or otherwise. Much more crucial than this one. So forgive yourself and continue to work on maintaining this moral code. Life is long and mistakes will happen.
Finally, if you are someone who is so obsessed with this that it is mentally having a bad effect on you; then go and confess to your course coordinator. Most probably, he is going to have a laugh and will soothe your feelings and send you off. However, be prepared for any outcome.
Pandemic has shown the world a lot of ugly things. I am sure you can forgive yourself considering these circumstances and the weird openings they provided for such temptations. | Reflect on it and, more particularly, on your justification at the time.
"Everyone else does that so this is the norm" leaves one with either challenging the norm, their own moral code or living on with the guilt. You might not always come up with the most idealistic answer possible to various questions the life poses - unless you are bent on dying a martyr. But do give it due consideration and figure out what is it that really matters to you. Find your core values and shape yourself into the image of what could exist in a world you would actually love to live in. Be that change you want to see in the world around you. |
182,629 | I graduated last year from a prestigious university where I was in general a very good student. I am now applying to graduate programs.
The class (upper-level undergraduate mathematics) had moved to an online format due to the pandemic; the first part of the class was in person. (This was 2 years ago.) The exam in question was closed to all aids, such as peer collaboration, the textbook, notes, and the Internet. Moreover, although the exam was available within a 36-hour window as a PDF file, it was stipulated that it be taken within 2 hours. There was no monitoring for compliance with the self-timing or closed-book requirements. These rules were laid out unambiguously, and I signed a declaration of academic honesty, in which I affirmed, falsely, that I had complied with the rules. I consulted the textbook extensively during the exam and took 6 hours to complete the exam (because I was quite literally studying the material during the exam period). I got away with it.
To be honest, at the time, it did not even occur to me that it was morally wrong to do such a thing. I was under pressure from my other classes and I felt that I did not have time to study beforehand. I made the following rationalizations, mostly subconsciously: (1) it was not particularly wrong since I was not copying answers or looking up solutions, but merely "refreshing my memory" with key theorems; (2) I had performed extremely well before, and I could easily have gotten the same grade if I had studied, so I was not obtaining anything that lay beyond my potential (I got a perfect score on this virtual exam just as I had on an earlier in-person exam of similar difficulty); (3) other students would inevitably break the rules; (4) the lack of enforcement was an implicit signal that they were more idealistic guidelines than rules; (5) my other classes had relaxed the closed-book requirement in light of the virtual format.
The incident has begun to weigh heavily on my conscience out of the blue; I had nearly forgotten about it between now and then. In hindsight, it was profoundly wrong for me to have done it. I feel enormously guilty about this incident and can only think of how foolish it was to have minimized it with those self-deceptions. Needless to say, I have no desire to ever again violate the norms of academic honesty. It may sound implausible, but I don't think I realized that what I had done was cheating, and how big a deal it is, until recently.
What should I do? In light of the severity of the infraction and my prospective plans in academia, is it wrong for me to continue without raising the issue, as I have until now? | 2022/02/22 | [
"https://academia.stackexchange.com/questions/182629",
"https://academia.stackexchange.com",
"https://academia.stackexchange.com/users/-1/"
] | Congratulations on holding such a moral code. It will pay dividends in your life if you surround yourself with honest people. What you did was wrong. And the fact that you regret it and worry about it shows that you understand that.
Now do not complicate your life because of the mistake you made. Firstly, you got a higher grade than you deserved. So go back and revisit the course material so that your real abilities match what you have in your CV; otherwise this could be bad for your future in multiple ways. This by itself is a penance.
Secondly, there are ways in which you could fix it. You could give free tuition to deserving students. Help them so that they need not teach. This is especially useful as the world is standing back after the pandemic. Or dedicate a part of your savings to an African school. You can come up with many such ideas.
Thirdly, you are young, and you will see many exams in your life, university or otherwise, many far more crucial than this one. So forgive yourself and continue to work on maintaining this moral code. Life is long and mistakes will happen.
Finally, if you are someone who is so obsessed with this that it is mentally having a bad effect on you; then go and confess to your course coordinator. Most probably, he is going to have a laugh and will soothe your feelings and send you off. However, be prepared for any outcome.
The pandemic has shown the world a lot of ugly things. I am sure you can forgive yourself considering these circumstances and the weird openings they provided for such temptations. | >
> (1) it was not particularly wrong since I was [...] merely "refreshing my memory";
> (3) other students would inevitably break the rules; (4) the lack of
> enforcement was an implicit signal that they were more idealistic
> guidelines than rules;
>
>
>
For all purposes, you could have been the only student taking the exams, so point (3) does not matter at all. Regarding the other points, they are rational externalization of your guilty feelings trying to look for an "easy" way out, since you claimed in the paragraph just above
>
> (because I was quite literally studying the material during the exam period). I got away with it.
> These rules were laid out unambiguously, and I signed a declaration of
> academic honesty, in which I affirmed, falsely, that I had complied
> with the rules.
>
>
>
So you cheated and you are already facing the personal consequences (most of the time we set rules to protect one from oneself, not from the others).
You have three choices:
* you go full honest, and you contact a lawyer (to protect yourself) before writing to the university that you cheated in an exam, leaving them to set the bar about external, independently evaluated consequences;
* you enroll in a similar graduate program at another institution (you were a good student, so you can expect to complete your degree in a much shorter time), removing your "cheated" degree from your CV;
* ignore your guilty feelings. |
182,629 | I graduated last year from a prestigious university where I was in general a very good student. I am now applying to graduate programs.
The class (upper-level undergraduate mathematics) had moved to an online format due to the pandemic; the first part of the class was in person. (This was 2 years ago.) The exam in question was closed to all aids, such as peer collaboration, the textbook, notes, and the Internet. Moreover, although the exam was available within a 36-hour window as a PDF file, it was stipulated that it be taken within 2 hours. There was no monitoring for compliance with the self-timing or closed-book requirements. These rules were laid out unambiguously, and I signed a declaration of academic honesty, in which I affirmed, falsely, that I had complied with the rules. I consulted the textbook extensively during the exam and took 6 hours to complete the exam (because I was quite literally studying the material during the exam period). I got away with it.
To be honest, at the time, it did not even occur to me that it was morally wrong to do such a thing. I was under pressure from my other classes and I felt that I did not have time to study beforehand. I made the following rationalizations, mostly subconsciously: (1) it was not particularly wrong since I was not copying answers or looking up solutions, but merely "refreshing my memory" with key theorems; (2) I had performed extremely well before, and I could easily have gotten the same grade if I had studied, so I was not obtaining anything that lay beyond my potential (I got a perfect score on this virtual exam just as I had on an earlier in-person exam of similar difficulty); (3) other students would inevitably break the rules; (4) the lack of enforcement was an implicit signal that they were more idealistic guidelines than rules; (5) my other classes had relaxed the closed-book requirement in light of the virtual format.
The incident has begun to weigh heavily on my conscience out of the blue; I had nearly forgotten about it between now and then. In hindsight, it was profoundly wrong for me to have done it. I feel enormously guilty about this incident and can only think of how foolish it was to have minimized it with those self-deceptions. Needless to say, I have no desire to ever again violate the norms of academic honesty. It may sound implausible, but I don't think I realized that what I had done was cheating, and how big a deal it is, until recently.
What should I do? In light of the severity of the infraction and my prospective plans in academia, is it wrong for me to continue without raising the issue, as I have until now? | 2022/02/22 | [
"https://academia.stackexchange.com/questions/182629",
"https://academia.stackexchange.com",
"https://academia.stackexchange.com/users/-1/"
] | Reflect on it and, more particularly, on your justification at the time.
"Everyone else does that so this is the norm" leaves one with either challenging the norm, challenging one's own moral code, or living on with the guilt. You might not always come up with the most idealistic answer possible to the various questions life poses, unless you are bent on dying a martyr. But do give it due consideration and figure out what it is that really matters to you. Find your core values and shape yourself into the image of what could exist in a world you would actually love to live in. Be that change you want to see in the world around you. | >
> (1) it was not particularly wrong since I was [...] merely "refreshing my memory";
> (3) other students would inevitably break the rules; (4) the lack of
> enforcement was an implicit signal that they were more idealistic
> guidelines than rules;
>
>
>
For all purposes, you could have been the only student taking the exams, so point (3) does not matter at all. Regarding the other points, they are rational externalization of your guilty feelings trying to look for an "easy" way out, since you claimed in the paragraph just above
>
> (because I was quite literally studying the material during the exam period). I got away with it.
> These rules were laid out unambiguously, and I signed a declaration of
> academic honesty, in which I affirmed, falsely, that I had complied
> with the rules.
>
>
>
So you cheated and you are already facing the personal consequences (most of the time we set rules to protect one from oneself, not from the others).
You have three choices:
* you go full honest, and you contact a lawyer (to protect yourself) before writing to the university that you cheated in an exam, leaving them to set the bar about external, independently evaluated consequences;
* you enroll in a similar graduate program at another institution (you were a good student, so you can expect to complete your degree in a much shorter time), removing your "cheated" degree from your CV;
* ignore your guilty feelings. |
392,024 | (I've tried this in MySql)
I believe they're semantically equivalent. Why not identify this trivial case and speed it up? | 2008/12/24 | [
"https://Stackoverflow.com/questions/392024",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11236/"
] | ...just to add some detail.
Calling the DELETE statement tells the database engine to generate a transaction log of all the records deleted. In the event the delete was done in error, you can restore your records.
Calling the TRUNCATE statement is a blanket "all or nothing" that removes all the records with no transaction log to restore from. It is definitely faster, but should only be done when you're sure you don't need any of the records you're going to remove. | For MySql 5 using InnoDb as the storage engine, TRUNCATE acts just like DELETE without a WHERE clause: i.e. for large tables it takes ages because it deletes rows one-by-one. This is changing in version 6.x.
see
<http://dev.mysql.com/doc/refman/5.1/en/truncate-table.html>
for 5.1 info (row-by-row with InnoDB) and
<http://blogs.mysql.com/peterg/category/personal-opinion/>
for changes in 6.x
---
Editor's note
-------------
This answer is [clearly contradicted by the MySQL documentation](http://dev.mysql.com/doc/refman/5.0/en/truncate-table.html):
"For an InnoDB table before version 5.0.3, InnoDB processes TRUNCATE TABLE by deleting rows one by one. As of MySQL 5.0.3, row by row deletion is used only if there are any FOREIGN KEY constraints that reference the table. If there are no FOREIGN KEY constraints, InnoDB performs fast truncation by dropping the original table and creating an empty one with the same definition, which is much faster than deleting rows one by one." |
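The rollback distinction these answers draw can be sketched with a short, self-contained example. As an illustrative assumption, Python's built-in sqlite3 module stands in for a real MySQL connection (SQLite has no TRUNCATE statement, so only the DELETE side is shown; the MySQL-specific behavior is as described in the answers and the editor's note above):

```python
# Sketch: DELETE runs inside a transaction and can be rolled back,
# because each removed row is recorded in the transaction log.
# sqlite3 is used here purely for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t (id) VALUES (?)", [(i,) for i in range(5)])
conn.commit()  # the five rows are now durable

conn.execute("DELETE FROM t")  # row-by-row, logged, still uncommitted
assert conn.execute("SELECT COUNT(*) FROM t").fetchone()[0] == 0

conn.rollback()  # undo the delete from the transaction log
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 5
```

A TRUNCATE, by contrast, performs an implicit commit in MySQL and cannot be undone this way.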
392,024 | (I've tried this in MySql)
I believe they're semantically equivalent. Why not identify this trivial case and speed it up? | 2008/12/24 | [
"https://Stackoverflow.com/questions/392024",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11236/"
] | truncate table cannot be rolled back, it is like dropping and recreating the table. | Delete from table deletes each row from the table one at a time and adds a record into the transaction log so that the operation can be rolled back. The time taken to delete is also proportional to the number of indexes on the table, and to whether there are any foreign key constraints (for InnoDB).
Truncate effectively drops the table and recreates it, and cannot be performed within a transaction. It therefore requires fewer operations and executes quickly. Truncate also does not fire any on-delete triggers.
Exact details about why this is quicker in MySql can be found in the MySql documentation:
<http://dev.mysql.com/doc/refman/5.0/en/truncate-table.html> |
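The trigger point made in this answer can also be illustrated. The sketch below uses Python's built-in sqlite3 (which, again, has no TRUNCATE; this is an assumption for portability) to show that a row-by-row DELETE fires per-row delete triggers, part of the overhead that a drop-and-recreate TRUNCATE avoids:

```python
# Sketch: a DELETE fires the per-row trigger once for every row removed;
# a TRUNCATE that drops and recreates the table would skip these triggers.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER PRIMARY KEY);
    CREATE TABLE audit (deleted_id INTEGER);
    CREATE TRIGGER t_del AFTER DELETE ON t
        BEGIN INSERT INTO audit VALUES (OLD.id); END;
""")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(3)])

conn.execute("DELETE FROM t")  # trigger fires once per deleted row
print(conn.execute("SELECT COUNT(*) FROM audit").fetchone()[0])  # 3
```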
392,024 | (I've tried this in MySql)
I believe they're semantically equivalent. Why not identify this trivial case and speed it up? | 2008/12/24 | [
"https://Stackoverflow.com/questions/392024",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11236/"
] | truncate table cannot be rolled back, it is like dropping and recreating the table. | For MySql 5 using InnoDb as the storage engine, TRUNCATE acts just like DELETE without a WHERE clause: i.e. for large tables it takes ages because it deletes rows one-by-one. This is changing in version 6.x.
see
<http://dev.mysql.com/doc/refman/5.1/en/truncate-table.html>
for 5.1 info (row-by-row with InnoDB) and
<http://blogs.mysql.com/peterg/category/personal-opinion/>
for changes in 6.x
---
Editor's note
-------------
This answer is [clearly contradicted by the MySQL documentation](http://dev.mysql.com/doc/refman/5.0/en/truncate-table.html):
"For an InnoDB table before version 5.0.3, InnoDB processes TRUNCATE TABLE by deleting rows one by one. As of MySQL 5.0.3, row by row deletion is used only if there are any FOREIGN KEY constraints that reference the table. If there are no FOREIGN KEY constraints, InnoDB performs fast truncation by dropping the original table and creating an empty one with the same definition, which is much faster than deleting rows one by one." |
392,024 | (I've tried this in MySql)
I believe they're semantically equivalent. Why not identify this trivial case and speed it up? | 2008/12/24 | [
"https://Stackoverflow.com/questions/392024",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11236/"
] | ...just to add some detail.
Calling the DELETE statement tells the database engine to generate a transaction log of all the records deleted. In the event the delete was done in error, you can restore your records.
Calling the TRUNCATE statement is a blanket "all or nothing" that removes all the records with no transaction log to restore from. It is definitely faster, but should only be done when you're sure you don't need any of the records you're going to remove. | Delete from table deletes each row from the table one at a time and adds a record into the transaction log so that the operation can be rolled back. The time taken to delete is also proportional to the number of indexes on the table, and to whether there are any foreign key constraints (for InnoDB).
Truncate effectively drops the table and recreates it, and cannot be performed within a transaction. It therefore requires fewer operations and executes quickly. Truncate also does not fire any on-delete triggers.
Exact details about why this is quicker in MySql can be found in the MySql documentation:
<http://dev.mysql.com/doc/refman/5.0/en/truncate-table.html> |
392,024 | (I've tried this in MySql)
I believe they're semantically equivalent. Why not identify this trivial case and speed it up? | 2008/12/24 | [
"https://Stackoverflow.com/questions/392024",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11236/"
] | truncate table cannot be rolled back, it is like dropping and recreating the table. | ...just to add some detail.
Calling the DELETE statement tells the database engine to generate a transaction log of all the records deleted. In the event the delete was done in error, you can restore your records.
Calling the TRUNCATE statement is a blanket "all or nothing" that removes all the records with no transaction log to restore from. It is definitely faster, but should only be done when you're sure you don't need any of the records you're going to remove. |
79,476 | If I fixed a portable hole on the front of a large flat shield,
looking at the front of the shield, I would see a 5 foot deep hole.
BUT, looking at the shield from behind, would there also be a hole?
Would the back be flat and untouched?
Would arrows shot into the hole re-appear 5 feet behind me?
Would they just disappear?
I'm sure standing behind the shield, the arrows would not hurt me, as the ground or walls a hole is affixed to are unharmed after the hole is removed, so no permanent damage is done.
From the front, would the hole be limited by the thickness of the shield?
Would the hole extend 5 feet back through empty space leaving a hole in my torso?
Would the hole GLUE things into a fixed position behind it? Like an invisible thumbtack made of negative space?
Because a hole in a wall opens a space to hide in or crawl through, but how solid are the walls? You can't go sideways inside a hole, but is this because you are in a rock wall and the unaffected rock is blocking your path, or is the hole itself the immovable wall? Would there be no resistance if there were no physical material blocking your path? I.e., COULD an outside object enter hole-space from the side, if there was NOTHING there to block entry?
To make positional questions easier,
z-dimension is the depth of the upright hole, front to back,
and x is shoulder to shoulder, Y-dimension up and down.
If I stood behind the shield (say, 5mm thick?), would the dimensional hole cause me to be separated from my lungs? If so, would they still function on the other plane? Would blood flow cross the threshold in x/y directions unhindered? Would a solid force stop my heart/lungs from oxygenating the rest of my body? Would it continue to work as expected, but just do so invisibly?
Would I be "thumbtacked" into position and held until someone in the front peeled the hole away? Could I come and go from behind the shield at will, because the dimensional adjustment only applies to the front surface of the shield along the x/y axes, leaving the back unaffected? | 2016/04/30 | [
"https://rpg.stackexchange.com/questions/79476",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/28771/"
] | The description of [portable hole](https://www.dndbeyond.com/magic-items/portable-hole) says a portable hole
>
> ...unfolds into a circular sheet 6 feet in diameter. You can use an action to unfold a portable hole and place it on or against a solid surface, whereupon the portable hole creates an extradimensional hole 10 feet deep. The cylindrical space within the hole exists on a different plane, so it can’t be used to create open passages.
>
>
>
A narrow reading of this text would suggest that you can't put a portable hole on a shield unless your shield is 6 feet in diameter. (Otherwise you haven't placed all of the portable hole "on a solid surface", so it doesn't activate its magical powers.)
Let's assume that you could get around this — for example because you had a very large shield, or a very small portable hole, or you managed to get the portable hole to activate while partially folded up.
A portable hole is very much like a bag of holding, so a good way to think about this is if you had opened up a bag of holding and you were waving the open mouth around.
The space within the hole exists on a different plane — essentially there's now a teleportation gate on the surface of your shield, and anything that touches the teleportation gate gets partially teleported to a 10-foot cylindrical space on another plane of existence. It is not possible to enter or interact with the cylindrical space, except by going through the teleportation gate on the front of your shield. So the cylindrical space does not harm you, affect you, or interact with you, unless you reach around and stick your arm through the *front* of your shield.
Striking the back wall of the portable hole will have no effect on the wearer. Striking the back wall also won't damage the hole. (If you were using a [bag of holding](https://www.dndbeyond.com/magic-items/bag-of-holding), it could get "overloaded, pierced, or torn", destroying the bag. Portable holes do not have this weakness.)
I'm guessing this is pretty convenient in combat, because any blows that strike the shield no longer impart their force against your shield arm — instead the attacker's sword goes slightly into the extradimensional space. On the other hand, it might be possible to damage the portable hole if a sword blow strikes against the edge of the effect. It's certainly possible for a creature to peel the portable hole off your shield and run off with it. Lastly, there's a small chance that someone could throw a bag of holding into your portable hole, which would destroy both items and teleport you into the astral plane. | I want to offer a slightly different reading of the text. Here is the description of a portable hole, and some emphasis (mine).
>
> **Portable Hole:** A portable hole is **a circle of cloth** spun from the webs of a phase spider interwoven with strands of ether and beams of starlight. When opened fully, a portable hole is 6 feet in diameter, but it can be folded up to be as small as a pocket handkerchief. When spread upon any surface, it causes an extradimensional space 10 feet deep to come into being. **This hole can be picked up** from inside or out by simply **taking hold of the edges of the cloth and folding it up.** Either way, the entrance disappears, but anything inside the hole remains.
>
>
> The only air in the hole is that which enters when the hole is opened. It contains enough air to supply one Medium creature or two Small creatures for 10 minutes. The cloth does not accumulate weight even if its hole is filled. Each portable hole opens on its own particular nondimensional space. If a bag of holding is placed within a portable hole, a rift to the Astral Plane is torn in that place. Both the bag and the cloth are sucked into the void and forever lost. If a portable hole is placed within a bag of holding, it opens a gate to the Astral Plane. The hole, the bag, and any creatures within a 10-foot radius are drawn there, the portable hole and bag of holding being destroyed in the process.
>
>
>
There are some issues you should take note of...
1. To answer the majority of your questions, it creates an extra-dimensional space. Nothing appears behind you or anything of that nature. I'd almost think of it like holding a big box in front of your character - what goes into the box, is simply in the box.
2. The inside of the portable hole is made of cloth (*this is an assumption, based on the circle of cloth used to open it and the comments in the description*). I think it is assumed that shooting something sharp into it will cause it to rip (*effects, unknown*).
3. Although not specified, it does not state that the hole needs to be entirely unfolded. Just simply that it needs to be placed on a solid surface. I assume a shield is a solid surface, and you can open the cloth as required.
4. As far as RAW goes, it does not state that it is fixed - simply that it needs to be fixed to a solid object.
5. RAW leads me to believe that the thickness of the shield has no bearing on the depth of the hole itself. |
79,476 | If I fixed a portable hole on the front of a large flat shield,
looking at the front of the shield, I would see a 5 foot deep hole.
BUT, looking at the shield from behind, would there also be a hole?
Would the back be flat and untouched?
Would arrows shot into the hole re-appear 5 feet behind me?
Would they just disappear?
I'm sure standing behind the shield, the arrows would not hurt me, as the ground or walls a hole is affixed to are unharmed after the hole is removed, so no permanent damage is done.
From the front, would the hole be limited by the thickness of the shield?
Would the hole extend 5 feet back through empty space leaving a hole in my torso?
Would the hole GLUE things into a fixed position behind it? Like an invisible thumbtack made of negative space?
Because a hole in a wall opens a space to hide in or crawl through, but how solid are the walls? You can't go sideways inside a hole, but is this because you are in a rock wall and the unaffected rock is blocking your path, or is the hole itself the immovable wall? Would there be no resistance if there were no physical material blocking your path? I.e., COULD an outside object enter hole-space from the side, if there was NOTHING there to block entry?
To make positional questions easier,
z-dimension is the depth of the upright hole, front to back,
and x is shoulder to shoulder, Y-dimension up and down.
If I stood behind the shield (say, 5mm thick?), would the dimensional hole cause me to be separated from my lungs? If so, would they still function on the other plane? Would blood flow cross the threshold in x/y directions unhindered? Would a solid force stop my heart/lungs from oxygenating the rest of my body? Would it continue to work as expected, but just do so invisibly?
Would I be "thumbtacked" into position and held until someone in the front peeled the hole away? Could I come and go from behind the shield at will, because the dimensional adjustment only applies to the front surface of the shield along the x/y axes, leaving the back unaffected? | 2016/04/30 | [
"https://rpg.stackexchange.com/questions/79476",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/28771/"
] | The description of [portable hole](https://www.dndbeyond.com/magic-items/portable-hole) says a portable hole
>
> ...unfolds into a circular sheet 6 feet in diameter. You can use an action to unfold a portable hole and place it on or against a solid surface, whereupon the portable hole creates an extradimensional hole 10 feet deep. The cylindrical space within the hole exists on a different plane, so it can’t be used to create open passages.
>
>
>
A narrow reading of this text would suggest that you can't put a portable hole on a shield unless your shield is 6 feet in diameter. (Otherwise you haven't placed all of the portable hole "on a solid surface", so it doesn't activate its magical powers.)
Let's assume that you could get around this — for example because you had a very large shield, or a very small portable hole, or you managed to get the portable hole to activate while partially folded up.
A portable hole is very much like a bag of holding, so a good way to think about this is if you had opened up a bag of holding and you were waving the open mouth around.
The space within the hole exists on a different plane — essentially there's now a teleportation gate on the surface of your shield, and anything that touches the teleportation gate gets partially teleported to a 10-foot cylindrical space on another plane of existence. It is not possible to enter or interact with the cylindrical space, except by going through the teleportation gate on the front of your shield. So the cylindrical space does not harm you, affect you, or interact with you, unless you reach around and stick your arm through the *front* of your shield.
Striking the back wall of the portable hole will have no effect on the wearer. Striking the back wall also won't damage the hole. (If you were using a [bag of holding](https://www.dndbeyond.com/magic-items/bag-of-holding), it could get "overloaded, pierced, or torn", destroying the bag. Portable holes do not have this weakness.)
I'm guessing this is pretty convenient in combat, because any blows that strike the shield no longer impart their force against your shield arm — instead the attacker's sword goes slightly into the extradimensional space. On the other hand, it might be possible to damage the portable hole if a sword blow strikes against the edge of the effect. It's certainly possible for a creature to peel the portable hole off your shield and run off with it. Lastly, there's a small chance that someone could throw a bag of holding into your portable hole, which would destroy both items and teleport you into the astral plane. | Well, of course it is going to be up to the GM.
If it were me, you would need to spread out the hole to its full 6 feet in order to access the extra-dimensional space.
So you would need a six-foot shield. Maybe a giant's or something.
On the other hand, if you had a portable hole differing from RAW, say 2 feet, yes, it would work. It would be weird, but it would work.
Things thrown into the shield would not be felt by the wielder. The interior "surface" is extra-dimensional in nature and is not particularly hard or soft or cloth or anything else. Something thrown in wouldn't automatically fall out, but if the shield (and hole) were held facing down, whether accidentally or on purpose, stuff might fall out.
From the back it would appear to be a normal shield. The hole wouldn't affect anything behind the shield in any way.
Attaching the hole might be problematic. If I were the GM, it would be a case of the player tries something, the GM narrates the results. Tacks or staples might be difficult to put through it, and if you could, it might not be good for the hole. Although since the backside is cloth, I suppose glue might work.
It's a pretty clever idea. |
79,476 | If I fixed a portable hole on the front of a large flat shield,
looking at the front of the shield, I would see a 5 foot deep hole.
BUT, looking at the shield from behind, would there also be a hole?
Would the back be flat and untouched?
Would arrows shot into the hole re-appear 5 feet behind me?
Would they just disappear?
I'm sure standing behind the shield, the arrows would not hurt me, as the ground or walls a hole is affixed to are unharmed after the hole is removed, so no permanent damage is done.
From the front, would the hole be limited by the thickness of the shield?
Would the hole extend 5 feet back through empty space leaving a hole in my torso?
Would the hole GLUE things into a fixed position behind it? Like an invisible thumbtack made of negative space?
Because a hole in a wall opens a space to hide in or crawl through, but how solid are the walls? You can't go sideways inside a hole, but is this because you are in a rock wall and the unaffected rock is blocking your path, or is the hole itself the immovable wall? Would there be no resistance if there were no physical material blocking your path? I.e., COULD an outside object enter hole-space from the side, if there was NOTHING there to block entry?
To make positional questions easier,
z-dimension is the depth of the upright hole, front to back,
and x is shoulder to shoulder, Y-dimension up and down.
If I stood behind the shield (say, 5mm thick?), would the dimensional hole cause me to be separated from my lungs? If so, would they still function on the other plane? Would blood flow cross the threshold in x/y directions unhindered? Would a solid force stop my heart/lungs from oxygenating the rest of my body? Would it continue to work as expected, but just do so invisibly?
Would I be "thumbtacked" into position and held until someone in the front peeled the hole away? Could I come and go from behind the shield at will, because the dimensional adjustment only applies to the front surface of the shield along the x/y axes, leaving the back unaffected? | 2016/04/30 | [
"https://rpg.stackexchange.com/questions/79476",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/28771/"
] | The description of [portable hole](https://www.dndbeyond.com/magic-items/portable-hole) says a portable hole
>
> ...unfolds into a circular sheet 6 feet in diameter. You can use an action to unfold a portable hole and place it on or against a solid surface, whereupon the portable hole creates an extradimensional hole 10 feet deep. The cylindrical space within the hole exists on a different plane, so it can’t be used to create open passages.
>
>
>
A narrow reading of this text would suggest that you can't put a portable hole on a shield unless your shield is 6 feet in diameter. (Otherwise you haven't placed all of the portable hole "on a solid surface", so it doesn't activate its magical powers.)
Let's assume that you could get around this — for example because you had a very large shield, or a very small portable hole, or you managed to get the portable hole to activate while partially folded up.
A portable hole is very much like a bag of holding, so a good way to think about this is if you had opened up a bag of holding and you were waving the open mouth around.
The space within the hole exists on a different plane — essentially there's now a teleportation gate on the surface of your shield, and anything that touches the teleportation gate gets partially teleported to a 10-foot cylindrical space on another plane of existence. It is not possible to enter or interact with the cylindrical space, except by going through the teleportation gate on the front of your shield. So the cylindrical space does not harm you, affect you, or interact with you, unless you reach around and stick your arm through the *front* of your shield.
Striking the back wall of the portable hole will have no effect on the wearer. Striking the back wall also won't damage the hole. (If you were using a [bag of holding](https://www.dndbeyond.com/magic-items/bag-of-holding), it could get "overloaded, pierced, or torn", destroying the bag. Portable holes do not have this weakness.)
I'm guessing this is pretty convenient in combat, because any blows that strike the shield no longer impart their force against your shield arm — instead the attacker's sword goes slightly into the extradimensional space. On the other hand, it might be possible to damage the portable hole if a sword blow strikes against the edge of the effect. It's certainly possible for a creature to peel the portable hole off your shield and run off with it. Lastly, there's a small chance that someone could throw a bag of holding into your portable hole, which would destroy both items and teleport you into the astral plane. | IF this were in my game, I would ask that the player hire someone that knows this kind of magic well to make a hole of a particular shape and size as a custom item, and make them earn the result. Also, a hole of smaller size would be equally smaller internally. For example, the RAW hole is 6 ft in diameter and 10 ft deep; if they wanted a 2 ft hole, it would be (2/6) x 10 ≈ 3.3 feet deep. If they had a kite shield, it would be kite-shaped, with an approximately scaled hole depth.
As far as the edge is concerned, I agree that if damaged the hole would be considered broken and the contents would be trapped inside the pocket dimension unless they can break free, or the item is repaired by someone capable enough to make the item in the first place. I would also let my players find out the hard way that an attack could cut the edge and ruin their new toy if they don't specify a way of protecting it from being struck because I'm an evil bastard and I love making my players cringe... muahahahahaaaaa!
This would make for some SUPER interesting combat! The only reason I would allow this mechanic is because it can be broken free of with a DC 10 STR check as an action. The interior would certainly be an indestructible impenetrable extra-dimensional surface devoid of all properties acting only as a barrier. It would be clever to make it removable as well, maybe one portion of the border that would allow you to access the edge of the cloth so you can fold it. Otherwise, it would remain open all of the time and lose some of its functionality as a storage device.
<https://roll20.net/compendium/dnd5e/Portable%20Hole#content> |
79,476 | If I fixed a portable hole on the front of a large flat shield,
looking at the front of the shield, I would see a 5-foot-deep hole.
BUT, looking at the shield from behind, would there also be a hole?
Would the back be flat and untouched?
Would arrows shot into the hole re-appear 5 feet behind me?
Would they just disappear?
I'm sure standing behind the shield, the arrows would not hurt me, as the ground or walls a hole is affixed to are unharmed after the hole is removed, so no permanent damage is done.
From the front, would the hole be limited by the thickness of the shield?
Would the hole extend 5 feet back through empty space leaving a hole in my torso?
Would the hole GLUE things into a fixed position behind it? Like an invisible thumbtack made of negative space?
Because a hole in a wall opens a space to hide in or crawl through, how solid are the walls? You can't go sideways inside a hole, but is this because you are in a rock wall and the unaffected rock is blocking your path, or is the hole itself the unmovable wall? Would there be no resistance if there was no physical material blocking your path? i.e., COULD an outside object enter hole-space from the side, if there was NOTHING there to block entry?
To make positional questions easier:
the z-dimension is the depth of the upright hole, front to back,
the x-dimension is shoulder to shoulder, and the y-dimension is up and down.
If I stood behind the shield (say, 5mm thick?), would the dimensional hole cause me to be separated from my lungs? If so, would they still function on the other plane? Would blood flow cross the threshold in x/y directions unhindered? Would a solid force stop my heart/lungs from oxygenating the rest of my body? Would it continue to work as expected, but just do so invisibly?
Would I be "thumbtacked" into position and held until someone in the front peeled the hole away? Could I come and go from behind the shield at will, because the dimensional adjustment only applies to the front surface of the shield along the x/y axes, leaving the back unaffected? | 2016/04/30 | [
"https://rpg.stackexchange.com/questions/79476",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/28771/"
] | I want to offer a slightly different reading of the text. Here is the description of a portable hole, and some emphasis (mine).
>
> **Portable Hole:** A portable hole is **a circle of cloth** spun from the webs of a phase spider interwoven with strands of ether and beams of starlight. When opened fully, a portable hole is 6 feet in diameter, but it can be folded up to be as small as a pocket handkerchief. When spread upon any surface, it causes an extradimensional space 10 feet deep to come into being. **This hole can be picked up** from inside or out by simply **taking hold of the edges of the cloth and folding it up.** Either way, the entrance disappears, but anything inside the hole remains.
>
>
> The only air in the hole is that which enters when the hole is opened. It contains enough air to supply one Medium creature or two Small creatures for 10 minutes. The cloth does not accumulate weight even if its hole is filled. Each portable hole opens on its own particular nondimensional space. If a bag of holding is placed within a portable hole, a rift to the Astral Plane is torn in that place. Both the bag and the cloth are sucked into the void and forever lost. If a portable hole is placed within a bag of holding, it opens a gate to the Astral Plane. The hole, the bag, and any creatures within a 10-foot radius are drawn there, the portable hole and bag of holding being destroyed in the process.
>
>
>
There are some issues you should take note of:
1. To answer the majority of your questions, it creates an extra-dimensional space. Nothing appears behind you or anything of that nature. I'd almost think of it like holding a big box in front of your character - what goes into the box is simply in the box.
2. The inside of the portable hole is made of cloth (*this is an assumption, based on the circle of cloth used to open it and the comments in the description*). I think it is assumed that shooting something sharp into it will cause it to rip (*effects, unknown*).
3. Although not specified, it does not state that the hole needs to be entirely unfolded. Just simply that it needs to be placed on a solid surface. I assume a shield is a solid surface, and you can open the cloth as required.
4. As far as RAW goes, it does not state that it is fixed - simply that it needs to be fixed to a solid object.
5. RAW leads me to believe that the thickness of the shield has no bearing on the depth of the hole itself. | IF this were in my game, I would ask that the player hire someone that knows this kind of magic well to make a hole of a particular shape and size as a custom item, and make them earn the result. Also, a hole of smaller size would be equally smaller internally. For example, the RAW hole is 6 ft in diameter and 10 ft deep; if they wanted a 2 ft hole, it would be (2/6) x 10 ≈ 3.3 feet deep. If they had a kite shield, it would be kite-shaped, with an approximately scaled hole depth.
As far as the edge is concerned, I agree that if damaged the hole would be considered broken and the contents would be trapped inside the pocket dimension unless they can break free, or the item is repaired by someone capable enough to make the item in the first place. I would also let my players find out the hard way that an attack could cut the edge and ruin their new toy if they don't specify a way of protecting it from being struck because I'm an evil bastard and I love making my players cringe... muahahahahaaaaa!
This would make for some SUPER interesting combat! The only reason I would allow this mechanic is because it can be broken free of with a DC 10 STR check as an action. The interior would certainly be an indestructible impenetrable extra-dimensional surface devoid of all properties acting only as a barrier. It would be clever to make it removable as well, maybe one portion of the border that would allow you to access the edge of the cloth so you can fold it. Otherwise, it would remain open all of the time and lose some of its functionality as a storage device.
<https://roll20.net/compendium/dnd5e/Portable%20Hole#content> |
79,476 | If I fixed a portable hole on the front of a large flat shield,
looking at the front of the shield, I would see a 5-foot-deep hole.
BUT, looking at the shield from behind, would there also be a hole?
Would the back be flat and untouched?
Would arrows shot into the hole re-appear 5 feet behind me?
Would they just disappear?
I'm sure standing behind the shield, the arrows would not hurt me, as the ground or walls a hole is affixed to are unharmed after the hole is removed, so no permanent damage is done.
From the front, would the hole be limited by the thickness of the shield?
Would the hole extend 5 feet back through empty space leaving a hole in my torso?
Would the hole GLUE things into a fixed position behind it? Like an invisible thumbtack made of negative space?
Because a hole in a wall opens a space to hide in or crawl through, how solid are the walls? You can't go sideways inside a hole, but is this because you are in a rock wall and the unaffected rock is blocking your path, or is the hole itself the unmovable wall? Would there be no resistance if there was no physical material blocking your path? i.e., COULD an outside object enter hole-space from the side, if there was NOTHING there to block entry?
To make positional questions easier:
the z-dimension is the depth of the upright hole, front to back,
the x-dimension is shoulder to shoulder, and the y-dimension is up and down.
If I stood behind the shield (say, 5mm thick?), would the dimensional hole cause me to be separated from my lungs? If so, would they still function on the other plane? Would blood flow cross the threshold in x/y directions unhindered? Would a solid force stop my heart/lungs from oxygenating the rest of my body? Would it continue to work as expected, but just do so invisibly?
Would I be "thumbtacked" into position and held until someone in the front peeled the hole away? Could I come and go from behind the shield at will, because the dimensional adjustment only applies to the front surface of the shield along the x/y axes, leaving the back unaffected? | 2016/04/30 | [
"https://rpg.stackexchange.com/questions/79476",
"https://rpg.stackexchange.com",
"https://rpg.stackexchange.com/users/28771/"
] | Well, of course it is going to be up to the GM.
If it were me, you would need to spread out the hole to its full 6 feet in order to access the extra-dimensional space.
So you would need a six-foot shield. Maybe a giant's or something.
On the other hand, if you had a portable hole differing from RAW, say 2 feet, yes, it would work. It would be weird, but it would work.
Things thrown into the shield would not be felt by the wielder. The interior "surface" is extra-dimensional in nature and is not particularly hard or soft or cloth or anything else. Something thrown in wouldn't automatically fall out, but if the shield (and hole) were held facing down, whether accidentally or on purpose, stuff might fall out.
From the back it would appear to be a normal shield. The hole wouldn't affect anything behind the shield in any way.
Attaching the hole might be problematic. If I were the GM, it would be a case of the player tries something, the GM narrates the results. Tacks or staples might be difficult to put through it, and if you could, it might not be good for the hole. Although since the backside is cloth, I suppose glue might work.
It's a pretty clever idea. | IF this were in my game, I would ask that the player hire someone that knows this kind of magic well to make a hole of a particular shape and size as a custom item, and make them earn the result. Also, a hole of smaller size would be equally smaller internally. For example, the RAW hole is 6 ft in diameter and 10 ft deep; if they wanted a 2 ft hole, it would be (2/6) x 10 ≈ 3.3 feet deep. If they had a kite shield, it would be kite-shaped, with an approximately scaled hole depth.
As far as the edge is concerned, I agree that if damaged the hole would be considered broken and the contents would be trapped inside the pocket dimension unless they can break free, or the item is repaired by someone capable enough to make the item in the first place. I would also let my players find out the hard way that an attack could cut the edge and ruin their new toy if they don't specify a way of protecting it from being struck because I'm an evil bastard and I love making my players cringe... muahahahahaaaaa!
This would make for some SUPER interesting combat! The only reason I would allow this mechanic is because it can be broken free of with a DC 10 STR check as an action. The interior would certainly be an indestructible impenetrable extra-dimensional surface devoid of all properties acting only as a barrier. It would be clever to make it removable as well, maybe one portion of the border that would allow you to access the edge of the cloth so you can fold it. Otherwise, it would remain open all of the time and lose some of its functionality as a storage device.
<https://roll20.net/compendium/dnd5e/Portable%20Hole#content> |
44,609 | I am doing handstand pushups. What is a good way to increase progression of handstand pushups without using a weight vest? I am open to suggestions using a wall, or not against a wall. Thanks,
[](https://i.stack.imgur.com/XvH9f.png) | 2021/12/05 | [
"https://fitness.stackexchange.com/questions/44609",
"https://fitness.stackexchange.com",
"https://fitness.stackexchange.com/users/36768/"
] | The amount of info is very minimal so this is all I can give you: the only conclusion I make based on this picture is that you mainly need to work on mobility and form. It seems like you already have the strength. I think working on form will help you get more reps or more added weight.
The things that I would work on are:
* **Shoulder mobility**, you want to be 1 straight line when you stand on your hands.
* **Core and glute engagement**, again to do with being straight and having full control over your handstand. It seems like your core and glutes are not engaged fully, which makes your handstand slightly sloppy. This also has to do with the previous point.
You can see an example in the picture below. You want your hands, shoulders, hips, knees and ankles to be in a straight line to have full control over your handstand. If for example your hips aren't in line with your hands, your knees and feet also won't be.
[](https://i.stack.imgur.com/MpfLZ.jpg)
Having full control over your handstand will help tremendously towards doing more HPU reps.
If for whatever reason you really just want to increase power, simply doing HPUs against a wall will help. This takes the balance part out of the movement, so you can fully focus on getting the reps in. After you have comfortably increased reps against a wall, you'll notice that you can also do more reps without a wall.
I hope this helps! | For handstand exercises, the best thing is first to strengthen the abdominal muscles, and then to focus on contracting those muscles while doing the handstand. Inverted movements are a different world, and having mental focus is the most important thing
The photo you took shows that you have let go of the pelvis and the base of the spine. Concentrate on standing poses and holding them (like Dandasana or Virabadrasana), and after learning this, use it in inverted movements. Namaste |
14,099 | Is the invention of a faster computer (let's say a quantum computer) or the lack of advancements in computing power (let's say it takes a decade to make any progress greater than incremental increases on current technology) capable of affecting the amount of time it takes to halve the block reward?
In other words, **does the block reward get halved every 4 years, regardless of how much the available hardware is or isn't progressing?** | 2013/10/27 | [
"https://bitcoin.stackexchange.com/questions/14099",
"https://bitcoin.stackexchange.com",
"https://bitcoin.stackexchange.com/users/3381/"
] | The block halving takes place every 210,000 blocks. The difficulty retargeting mechanism makes it so that 210,000 blocks take approximately 4 years, but this is not exact. If the hashrate is rapidly growing, it will take a little less than that.
For example, if the total network hashrate doubles every year, it will take about 1 month less. If it doubles every week (increases by a factor of 4.5 quadrillion every year), it will take only 2 years. | The short answer is yes, provided the difficulty-factor feedback loop (the difficulty is adjusted every 2016 blocks, i.e. 14\*24\*6, which nominally maps to every two weeks regardless of what next-generation hardware gets used) remains stable, which it should, from basic control theory 101.
The reason has everything to do with mathematics and the convergence of the geometric series to limit the number of Bitcoins to 21M.
Go to <http://www.basic-mathematics.com/geometric-sequence-calculator.html>. Enter 10.5M (number of Bitcoins mined during the first four years 50\*4\*365\*24\*6) into the first field, 0.5 into the second field (number of Bitcoins awarded gets scaled back every 4 years), 8 into the third field for n (roughly the number of 4 year time constants between 2009 & 2040), and select Sn and hit calculate. As n approaches infinity, the geometric series converges to 21M. |
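The geometric-series convergence described above can also be checked with a few lines of code. This is an illustrative sketch (my own, not from either answer), using the nominal 210,000-block halving interval and 50 BTC initial reward; the real protocol works in integer satoshis with floor division, so the true cap is slightly below 21M.

```python
# Sum the block subsidy era by era: 210,000 blocks per era, starting
# at 50 BTC and halving each time. The series 10.5M * (1 + 1/2 + 1/4 + ...)
# converges to 21M coins.
def total_supply(eras: int = 33) -> float:
    reward = 50.0
    total = 0.0
    for _ in range(eras):
        total += reward * 210_000
        reward /= 2
    return total

print(round(total_supply()))  # approaches 21,000,000
```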
22,234 | I'm attempting to model operating margins and a time plot indicated that the series may follow an autoregressive process. I initially fitted data to an AR(1) model and it appeared that residual correlation was present in the 4th lag term. I added an additional 4th lag and while the AC in the fourth residual did decrease, the t-stat is still slightly greater than 2. Additionally, the second (4th lagged regressor) appears to be highly insignificant. I'm looking for suggestions as to how to improve on the model specification. | 2015/12/09 | [
"https://quant.stackexchange.com/questions/22234",
"https://quant.stackexchange.com",
"https://quant.stackexchange.com/users/18614/"
] | What you should analyze:
* Look at seasonalities, as user Horseless points out.
* Look at the ACF: if it cuts off suddenly then there is something of an MA nature; if it decays slowly then it is rather AR.
* Look at the partial ACF to see which lags are relevant.
You find theory and code [here](https://www.otexts.org/fpp/8/9). | You should look at ARCH/GARCH models. |
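As an illustration of the ACF heuristic in the answer above, here is a small NumPy sketch (my own construction, not from the thread): it simulates an AR(1) series and computes sample autocorrelations, which for an AR process decay slowly and geometrically rather than cutting off.

```python
import numpy as np

def sample_acf(series: np.ndarray, nlags: int) -> list[float]:
    """Sample autocorrelation at lags 1..nlags."""
    x = series - series.mean()
    denom = float(np.dot(x, x))
    return [float(np.dot(x[:-k], x[k:]) / denom) for k in range(1, nlags + 1)]

# Simulate an AR(1) process x_t = 0.7 * x_{t-1} + noise.
rng = np.random.default_rng(0)
n, phi = 2000, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

acf = sample_acf(x, 5)
# For AR(1), acf[k-1] is roughly phi**k: slow geometric decay, no sharp cutoff.
```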
22,234 | I'm attempting to model operating margins and a time plot indicated that the series may follow an autoregressive process. I initially fitted data to an AR(1) model and it appeared that residual correlation was present in the 4th lag term. I added an additional 4th lag and while the AC in the fourth residual did decrease, the t-stat is still slightly greater than 2. Additionally, the second (4th lagged regressor) appears to be highly insignificant. I'm looking for suggestions as to how to improve on the model specification. | 2015/12/09 | [
"https://quant.stackexchange.com/questions/22234",
"https://quant.stackexchange.com",
"https://quant.stackexchange.com/users/18614/"
] | Before fitting any ARIMA model, make sure that:
1. There is no seasonal trend in your data. If one is present, deseasonalize it by taking an appropriate lag difference, as pointed out by @Horseless
2. Before fitting any model, make sure data is stationary.
3. Once stationarity is achieved, plot the ACF and PACF and find the appropriate lag. For lag selection you can use various information criteria, like AIC, BIC, SIC, HQIC etc. (Lots of R packages are available that allow automatic selection of the appropriate lag order)
After fitting the appropriate model, make sure that the error terms (residuals) are white noise. | You should look at ARCH/GARCH models. |
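To make the information-criterion step concrete, here is a hedged sketch (my own, not from the answer) that fits AR(p) models by ordinary least squares and compares AIC across lag orders; in practice you would use an R or statsmodels helper, but the mechanics look like this.

```python
import numpy as np

def ar_aic(series: np.ndarray, p: int) -> float:
    """AIC of an OLS-fitted AR(p) model with intercept."""
    y = series[p:]
    lags = np.column_stack([series[p - k:-k] for k in range(1, p + 1)])
    X = np.column_stack([np.ones(len(y)), lags])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ beta) ** 2)
    return len(y) * np.log(sigma2) + 2 * (p + 1)

# Simulate an AR(2) process and let AIC pick the lag order.
rng = np.random.default_rng(1)
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.5 * x[t - 1] + 0.3 * x[t - 2] + rng.standard_normal()

best_p = min(range(1, 7), key=lambda p: ar_aic(x, p))
# best_p should land at (or near) the true order of 2.
```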
22,234 | I'm attempting to model operating margins and a time plot indicated that the series may follow an autoregressive process. I initially fitted data to an AR(1) model and it appeared that residual correlation was present in the 4th lag term. I added an additional 4th lag and while the AC in the fourth residual did decrease, the t-stat is still slightly greater than 2. Additionally, the second (4th lagged regressor) appears to be highly insignificant. I'm looking for suggestions as to how to improve on the model specification. | 2015/12/09 | [
"https://quant.stackexchange.com/questions/22234",
"https://quant.stackexchange.com",
"https://quant.stackexchange.com/users/18614/"
] | What you should analyze:
* Look at seasonalities, as user Horseless points out.
* Look at the ACF: if it cuts off suddenly then there is something of an MA nature; if it decays slowly then it is rather AR.
* Look at the partial ACF to see which lags are relevant.
You find theory and code [here](https://www.otexts.org/fpp/8/9). | Before fitting any ARIMA model, make sure that:
1. There is no seasonal trend in your data. If one is present, deseasonalize it by taking an appropriate lag difference, as pointed out by @Horseless
2. Before fitting any model, make sure data is stationary.
3. Once stationarity is achieved, plot the ACF and PACF and find the appropriate lag. For lag selection you can use various information criteria, like AIC, BIC, SIC, HQIC etc. (Lots of R packages are available that allow automatic selection of the appropriate lag order)
After fitting the appropriate model, make sure that the error terms (residuals) are white noise. |
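The residual whiteness check can be sketched by hand. This hypothetical example computes the Ljung-Box Q statistic with NumPy; real work would use a packaged test (e.g. `Box.test` in R), but the formula itself is short.

```python
import numpy as np

def ljung_box_q(resid: np.ndarray, h: int = 10) -> float:
    """Ljung-Box portmanteau statistic over lags 1..h."""
    n = len(resid)
    x = resid - resid.mean()
    denom = float(np.dot(x, x))
    rho = np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, h + 1)])
    return float(n * (n + 2) * np.sum(rho ** 2 / (n - np.arange(1, h + 1))))

rng = np.random.default_rng(7)
noise = rng.standard_normal(500)   # white noise: Q should be small
ar1 = np.zeros(500)
for t in range(1, 500):            # autocorrelated residuals: Q should be large
    ar1[t] = 0.8 * ar1[t - 1] + rng.standard_normal()

# Compare Q against the chi-square(10) 95% critical value, about 18.31.
q_noise, q_ar1 = ljung_box_q(noise), ljung_box_q(ar1)
```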
128,305 | In my environment here we have started using TrueCrypt to encrypt and protect our laptops that are being brought out of the office.
The issue comes with the password: we can document the passwords and assign them to users, but if they simply use the program to change the password and then forget it, we are in trouble.
We back up our data to external locations so it should be fine, but is there any way to install a bypass to be able to boot the laptop, or to stop users changing their password (while they have local admin access)?
Or should we try another solution?
thanks. | 2010/04/01 | [
"https://serverfault.com/questions/128305",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
] | TrueCrypt has a recovery disk option, which it all but forces you to complete before encrypting the disk. That CD can be used to recover the partition even if the password has subsequently been changed.
Outside of this, if you're after a more robust and enterprise-ready solution, PointSec offers full-disk encryption with administrative recovery abilities. | TrueCrypt has no backdoor or master key, so if you lose the password to an encrypted volume, you will have a problem. If this is an important situation for you, the recovery CD will be an important step to take. |
128,305 | In my environment here we have started using TrueCrypt to encrypt and protect our laptops that are being brought out of the office.
The issue comes with the password: we can document the passwords and assign them to users, but if they simply use the program to change the password and then forget it, we are in trouble.
We back up our data to external locations so it should be fine, but is there any way to install a bypass to be able to boot the laptop, or to stop users changing their password (while they have local admin access)?
Or should we try another solution?
thanks. | 2010/04/01 | [
"https://serverfault.com/questions/128305",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
] | TrueCrypt has a recovery disk option, which it all but forces you to complete before encrypting the disk. That CD can be used to recover the partition even if the password has subsequently been changed.
Outside of this, if you're after a more robust and enterprise-ready solution, PointSec offers full-disk encryption with administrative recovery abilities. | Full-disk or volume encryption products targeted at medium to large businesses have built-in recovery options. At a basic level, the encryption key is stored in a central directory and updated when changed (when the machine comes back on the network, anyway). BitLocker can do this with Active Directory. More advanced solutions such as Pointsec and Sophos use several layers of encryption keys, and typically:
* the disk/volume is encrypted with a machine key
* the machine key is encrypted with a user key
* both keys are managed in a central database
This provides a lot of advantages; for example, multiple users can access an encrypted disk with their own key (password, passphrase, token, etc).
You typically pay a lot for these, compared to TrueCrypt, so you would need to understand if all the benefits outweigh the cost.
So in your case, regular reliable (and tested) backups of the laptop data are a good step to protect against someone suffering a memory fault and being denied access to their encrypted data. |
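The key layering described above can be illustrated structurally. This is a toy sketch of key wrapping, using XOR as a stand-in cipher (deliberately NOT real cryptography), just to show how one machine key can be unlocked by several independently held keys, so a forgotten user password doesn't strand the data:

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy 'cipher' for structure only; never use repeating-key XOR for real."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

machine_key = os.urandom(16)   # encrypts the disk/volume
user_key = os.urandom(16)      # derived from the user's password
escrow_key = os.urandom(16)    # held in the central directory

ciphertext = xor_bytes(b"laptop contents", machine_key)
wrapped_for_user = xor_bytes(machine_key, user_key)
wrapped_for_escrow = xor_bytes(machine_key, escrow_key)

# If the user forgets their password, IT unwraps the machine key
# with the escrow key and the data is still recoverable:
recovered_key = xor_bytes(wrapped_for_escrow, escrow_key)
plaintext = xor_bytes(ciphertext, recovered_key)
```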
128,305 | In my environment here we have started using TrueCrypt to encrypt and protect our laptops that are being brought out of the office.
The issue comes with the password: we can document the passwords and assign them to users, but if they simply use the program to change the password and then forget it, we are in trouble.
We back up our data to external locations so it should be fine, but is there any way to install a bypass to be able to boot the laptop, or to stop users changing their password (while they have local admin access)?
Or should we try another solution?
thanks. | 2010/04/01 | [
"https://serverfault.com/questions/128305",
"https://serverfault.com",
"https://serverfault.com/users/-1/"
] | TrueCrypt has a recovery disk option, which it all but forces you to complete before encrypting the disk. That CD can be used to recover the partition even if the password has subsequently been changed.
Outside of this, if you're after a more robust and enterprise-ready solution, PointSec offers full-disk encryption with administrative recovery abilities. | It looks like both BitLocker and TrueCrypt can be bypassed if they're mounted on a computer: <http://www.net-security.org/secworld.php?id=9077> |
1,473 | In the hope of attracting more entries into the competitions, I'd like to express my excitement over receiving three fantastic books for winning one of the weeks.



I've started reading the first one and it's great so far; the last two are beautifully diagrammed and bound in lovely hardcovers. Thanks Laura and SE.
Maybe others might like to post their choices too? | 2012/04/18 | [
"https://cooking.meta.stackexchange.com/questions/1473",
"https://cooking.meta.stackexchange.com",
"https://cooking.meta.stackexchange.com/users/8315/"
] | I got:

Which has a fantastic carbonara recipe, and a great technique for cooking duck breasts, amongst many other things.
and

Which has completely changed the way I bake bread.
Cheers, Stack Exchange! | For "cocktails", I selected the following books:
[The Bartender's Black Book 10th Edition](http://rads.stackoverflow.com/amzn/click/1935879995)

[The Boozy Baker: 75 Recipes for Spirited Sweets](http://rads.stackoverflow.com/amzn/click/0762438029)

[Baking Illustrated](http://rads.stackoverflow.com/amzn/click/0936184752)

Thank you, Stack Exchange! |
1,473 | In the hope of attracting more entries into the competitions, I'd like to express my excitement over receiving three fantastic books for winning one of the weeks.



I've started reading the first one and it's great so far; the last two are beautifully diagrammed and bound in lovely hardcovers. Thanks Laura and SE.
Maybe others might like to post their choices too? | 2012/04/18 | [
"https://cooking.meta.stackexchange.com/questions/1473",
"https://cooking.meta.stackexchange.com",
"https://cooking.meta.stackexchange.com/users/8315/"
] | I got the *Professional Chef*,
*A passion for cheese* on Rumyancheto's recommendation,

And a random book on Indian cuisine that I haven't received yet and so can't review:
 |  -  - 
Vegan Holiday Kitchen really delivered on a few of its dishes and I look forward to just plopping it down once Thanksgiving comes around and going through that section with the family to figure out who's making what.
Veganomicon is a sturdy bible of recipes and information on the topic of Vegan cooking. It is my personal resource for finding the common denominator among any vegan dish. I have turned to it over and over not only for a few that are the baseline, *nailed-it* formulations, as well as the ones that just hammer it home with a perfect rendition. The skillet corn bread is as quick, simple, and elegant as it is nommalicious and flexible.
Vegan Diner is definitely a cookbook geared to the vegans who don't want to give up their favorite dishes. It lets them know how to not do that. Well-done, but the recipes themselves have a certain same-y quality and turn to the same tricks. But what the recipes lack individually in flair, the book itself compensates for with scope; the collection itself is a good resource for browsing and idea forming. The recipes are easy and the writing undaunting. |
1,473 | In the hope of attracting more entries into the competitions, I'd like to express my excitement over receiving three fantastic books for winning one of the weeks.



I've started reading the first one and it's great so far; the last two are beautifully diagrammed and bound in lovely hardcovers. Thanks Laura and SE.
Maybe others might like to post their choices too? | 2012/04/18 | [
"https://cooking.meta.stackexchange.com/questions/1473",
"https://cooking.meta.stackexchange.com",
"https://cooking.meta.stackexchange.com/users/8315/"
] | I have won twice so far, once at the beginning and once after the proper waiting period.
My first prize was The Professional Chef. A great reference work with the proper technique for all important Western recipes. Gorgeous design.

The second time, I got both Shirley Corriher books. These are the best applied food science books I have seen. Not as all-encompassing as McGee On Food And Cooking, but they cover many common types of dishes, and go into great depth for each one.
 
Thank you StackExchange and Laura for the nice contest! |  -  - 
Vegan Holiday Kitchen really delivered on a few of its dishes and I look forward to just plopping it down once Thanksgiving comes around and going through that section with the family to figure out who's making what.
Veganomicon is a sturdy bible of recipes and information on the topic of Vegan cooking. It is my personal resource for finding the common denominator among any vegan dish. I have turned to it over and over not only for a few that are the baseline, *nailed-it* formulations, as well as the ones that just hammer it home with a perfect rendition. The skillet corn bread is as quick, simple, and elegant as it is nommalicious and flexible.
Vegan Diner is definitely a cookbook geared to the vegans who don't want to give up their favorite dishes. It lets them know how to not do that. Well-done, but the recipes themselves have a certain same-y quality and turn to the same tricks. But what the recipes lack individually in flair, the book itself compensates for with scope; the collection itself is a good resource for browsing and idea forming. The recipes are easy and the writing undaunting. |
1,473 | In the hope of attracting more entries into the competitions, I'd like to express my excitement over receiving three fantastic books for winning one of the weeks.



The first one I've started reading and is great so far, the last two are beautifully diagrammed and bound in lovely hard-covers. Thanks Laura and SE.
Maybe others might like to post their choices too? | 2012/04/18 | [
"https://cooking.meta.stackexchange.com/questions/1473",
"https://cooking.meta.stackexchange.com",
"https://cooking.meta.stackexchange.com/users/8315/"
] | I picked up [On Food and Cooking](http://rads.stackoverflow.com/amzn/click/0684800012) and [Cookwise](http://rads.stackoverflow.com/amzn/click/0688102298).


Thank you, Stack Exchange! |  -  - 
Vegan Holiday Kitchen really delivered on a few of its dishes and I look forward to just plopping it down once Thanksgiving comes around and going through that section with the family to figure out who's making what.
Veganomicon is a sturdy bible of recipes and information on the topic of vegan cooking. It is my personal resource for finding the common denominator among any vegan dish. I have turned to it over and over, not only for the baseline, *nailed-it* formulations, but also for the ones that just hammer it home with a perfect rendition. The skillet corn bread is as quick, simple, and elegant as it is nommalicious and flexible.
Vegan Diner is definitely a cookbook geared to the vegans who don't want to give up their favorite dishes. It lets them know how to not do that. Well-done, but the recipes themselves have a certain same-y quality and turn to the same tricks. But what the recipes lack individually in flair, the book itself compensates for with scope; the collection itself is a good resource for browsing and idea forming. The recipes are easy and the writing undaunting. |
1,473 | In the hope of attracting more entries into the competitions, I'd like to express my excitement over receiving three fantastic books for winning one of the weeks.



The first one I've started reading and is great so far, the last two are beautifully diagrammed and bound in lovely hard-covers. Thanks Laura and SE.
Maybe others might like to post their choices too? | 2012/04/18 | [
"https://cooking.meta.stackexchange.com/questions/1473",
"https://cooking.meta.stackexchange.com",
"https://cooking.meta.stackexchange.com/users/8315/"
] | I picked up [On Food and Cooking](http://rads.stackoverflow.com/amzn/click/0684800012) and [Cookwise](http://rads.stackoverflow.com/amzn/click/0688102298).


Thank you, Stack Exchange! | For "cocktails", I selected the following books:
[The Bartender's Black Book 10th Edition](http://rads.stackoverflow.com/amzn/click/1935879995)

[The Boozy Baker: 75 Recipes for Spirited Sweets](http://rads.stackoverflow.com/amzn/click/0762438029)

[Baking Illustrated](http://rads.stackoverflow.com/amzn/click/0936184752)

Thank you, Stack Exchange! |
1,473 | In the hope of attracting more entries into the competitions, I'd like to express my excitement over receiving three fantastic books for winning one of the weeks.



The first one I've started reading and is great so far, the last two are beautifully diagrammed and bound in lovely hard-covers. Thanks Laura and SE.
Maybe others might like to post their choices too? | 2012/04/18 | [
"https://cooking.meta.stackexchange.com/questions/1473",
"https://cooking.meta.stackexchange.com",
"https://cooking.meta.stackexchange.com/users/8315/"
] | I was the first winner, so I got these two cookbooks, both of which I'd checked out from the library and tested first:


I just made a Briyani from Fish Indian Style last night, so both books have been very worthwhile. Thanks, SE!
[Capsule review of Fitzmorris here.](http://fuzzychef.org/archives/Ten-New-Orleans-Cookbooks-12-2011.html) | For "cocktails", I selected the following books:
[The Bartender's Black Book 10th Edition](http://rads.stackoverflow.com/amzn/click/1935879995)

[The Boozy Baker: 75 Recipes for Spirited Sweets](http://rads.stackoverflow.com/amzn/click/0762438029)

[Baking Illustrated](http://rads.stackoverflow.com/amzn/click/0936184752)

Thank you, Stack Exchange! |
1,473 | In the hope of attracting more entries into the competitions, I'd like to express my excitement over receiving three fantastic books for winning one of the weeks.



The first one I've started reading and is great so far, the last two are beautifully diagrammed and bound in lovely hard-covers. Thanks Laura and SE.
Maybe others might like to post their choices too? | 2012/04/18 | [
"https://cooking.meta.stackexchange.com/questions/1473",
"https://cooking.meta.stackexchange.com",
"https://cooking.meta.stackexchange.com/users/8315/"
] | I got the *Professional Chef*,
*A passion for cheese* on Rumyancheto's recommendation,

And a random book on Indian cuisine that I haven't received yet and so can't review:
 | For "cocktails", I selected the following books:
[The Bartender's Black Book 10th Edition](http://rads.stackoverflow.com/amzn/click/1935879995)

[The Boozy Baker: 75 Recipes for Spirited Sweets](http://rads.stackoverflow.com/amzn/click/0762438029)

[Baking Illustrated](http://rads.stackoverflow.com/amzn/click/0936184752)

Thank you, Stack Exchange! |
1,473 | In the hope of attracting more entries into the competitions, I'd like to express my excitement over receiving three fantastic books for winning one of the weeks.



The first one I've started reading and is great so far, the last two are beautifully diagrammed and bound in lovely hard-covers. Thanks Laura and SE.
Maybe others might like to post their choices too? | 2012/04/18 | [
"https://cooking.meta.stackexchange.com/questions/1473",
"https://cooking.meta.stackexchange.com",
"https://cooking.meta.stackexchange.com/users/8315/"
] | I got:

Which has a fantastic carbonara recipe, and a great technique for cooking duck breasts, amongst many other things.
and

Which has completely changed the way I bake bread.
Cheers, Stack Exchange! |  -  - 
Vegan Holiday Kitchen really delivered on a few of its dishes and I look forward to just plopping it down once Thanksgiving comes around and going through that section with the family to figure out who's making what.
Veganomicon is a sturdy bible of recipes and information on the topic of vegan cooking. It is my personal resource for finding the common denominator among any vegan dish. I have turned to it over and over, not only for the baseline, *nailed-it* formulations, but also for the ones that just hammer it home with a perfect rendition. The skillet corn bread is as quick, simple, and elegant as it is nommalicious and flexible.
Vegan Diner is definitely a cookbook geared to the vegans who don't want to give up their favorite dishes. It lets them know how to not do that. Well-done, but the recipes themselves have a certain same-y quality and turn to the same tricks. But what the recipes lack individually in flair, the book itself compensates for with scope; the collection itself is a good resource for browsing and idea forming. The recipes are easy and the writing undaunting. |
1,473 | In the hope of attracting more entries into the competitions, I'd like to express my excitement over receiving three fantastic books for winning one of the weeks.



The first one I've started reading and is great so far, the last two are beautifully diagrammed and bound in lovely hard-covers. Thanks Laura and SE.
Maybe others might like to post their choices too? | 2012/04/18 | [
"https://cooking.meta.stackexchange.com/questions/1473",
"https://cooking.meta.stackexchange.com",
"https://cooking.meta.stackexchange.com/users/8315/"
] | I just won coffee week and am going to get these two books:
The first book for my inner fatty :D

And the second book to supplement Reinhart's Bread Baker's Apprentice that I already have.

Thanks Lauren! :) | For "cocktails", I selected the following books:
[The Bartender's Black Book 10th Edition](http://rads.stackoverflow.com/amzn/click/1935879995)

[The Boozy Baker: 75 Recipes for Spirited Sweets](http://rads.stackoverflow.com/amzn/click/0762438029)

[Baking Illustrated](http://rads.stackoverflow.com/amzn/click/0936184752)

Thank you, Stack Exchange! |
1,473 | In the hope of attracting more entries into the competitions, I'd like to express my excitement over receiving three fantastic books for winning one of the weeks.



The first one I've started reading and is great so far, the last two are beautifully diagrammed and bound in lovely hard-covers. Thanks Laura and SE.
Maybe others might like to post their choices too? | 2012/04/18 | [
"https://cooking.meta.stackexchange.com/questions/1473",
"https://cooking.meta.stackexchange.com",
"https://cooking.meta.stackexchange.com/users/8315/"
] | I've got a well-worn copy of Baking Illustrated that I adore, so I went ahead and got [The New Best Recipe](http://rads.stackoverflow.com/amzn/click/0936184744), since it seems to be the same thing but for general-purpose cooking
Since I live with two guys who like to experiment with seasonings, I also got [The Flavor Bible](http://rads.stackoverflow.com/amzn/click/0316118400). | For "cocktails", I selected the following books:
[The Bartender's Black Book 10th Edition](http://rads.stackoverflow.com/amzn/click/1935879995)

[The Boozy Baker: 75 Recipes for Spirited Sweets](http://rads.stackoverflow.com/amzn/click/0762438029)

[Baking Illustrated](http://rads.stackoverflow.com/amzn/click/0936184752)

Thank you, Stack Exchange! |
26,445,483 | I am building a web service using the Dropwizard framework (version 0.7.0). It involves executing some read-only queries to the database, manipulating the result set and then returning that data set. I am using MySQL as a database engine. Since I am new to this framework, I want to know which option I should choose: Hibernate or JDBI. | 2014/10/18 | [
"https://Stackoverflow.com/questions/26445483",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3004293/"
] | I've used both of these. I've used Hibernate with GORM in Grails as well as in a traditional Spring app and I've used JDBI in Dropwizard.
I have really enjoyed the simplicity of JDBI and here are a couple of reasons why I prefer it over Hibernate.
1. I know exactly what SQL is going to be executed to acquire the data I'm requesting. With Hibernate, you sometimes have to do a lot of messing around with HQL and with configuring your objects to get what you intended returned. You ultimately resort to SQL, but then have the difficulty of properly mapping your results back to your domain objects, or you give up and allow Hibernate to fetch them one by one.
2. I don't need to worry about lazy/eager fetching and how that is going to affect my query time on large data sets.
3. Mappings aren't complicated because you manage them on your own and you don't have to rely on getting the right combinations of annotations and optimizations.
For your case in particular, it sounds like you'd want something lightweight because you don't have a lot of use cases, and that would definitely be JDBI over Hibernate in my opinion. | If you have very little database work to do, use JDBI; otherwise go for Hibernate, as it is very powerful and provides many additional features for your persistence logic. |
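The core point of the first answer, that with a thin SQL mapper you see exactly the query that runs and control the row mapping yourself, can be sketched in a runnable way. The answers are about Java's JDBI, but the same shape is easy to show with Python's stdlib `sqlite3` standing in for the MySQL database from the question; the `users` table and `fetch_active_users` helper below are hypothetical illustrations, not part of any library:

```python
import sqlite3

def fetch_active_users(conn):
    """Run one explicit, visible SQL statement and map rows by hand.

    With a thin wrapper like JDBI (or raw JDBC / DB-API), this query is
    exactly what hits the database: no ORM-generated SQL to decode, and
    no lazy/eager fetching surprises, because the mapping is yours.
    """
    rows = conn.execute(
        "SELECT id, name FROM users WHERE active = 1 ORDER BY name"
    ).fetchall()
    return [{"id": r[0], "name": r[1]} for r in rows]

# In-memory database standing in for the MySQL instance in the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, active INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "ann", 1), (2, "bob", 0), (3, "cid", 1)])
print(fetch_active_users(conn))
```

A JDBI fluent-API query follows the same pattern: one visible SQL string, one explicit row mapper.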
26,445,483 | I am building a web service using the Dropwizard framework (version 0.7.0). It involves executing some read-only queries to the database, manipulating the result set and then returning that data set. I am using MySQL as a database engine. Since I am new to this framework, I want to know which option I should choose: Hibernate or JDBI. | 2014/10/18 | [
"https://Stackoverflow.com/questions/26445483",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3004293/"
] | I've used both of these. I've used Hibernate with GORM in Grails as well as in a traditional Spring app and I've used JDBI in Dropwizard.
I have really enjoyed the simplicity of JDBI and here are a couple of reasons why I prefer it over Hibernate.
1. I know exactly what SQL is going to be executed to acquire the data I'm requesting. With Hibernate, you sometimes have to do a lot of messing around with HQL and with configuring your objects to get what you intended returned. You ultimately resort to SQL, but then have the difficulty of properly mapping your results back to your domain objects, or you give up and allow Hibernate to fetch them one by one.
2. I don't need to worry about lazy/eager fetching and how that is going to affect my query time on large data sets.
3. Mappings aren't complicated because you manage them on your own and you don't have to rely on getting the right combinations of annotations and optimizations.
For your case in particular, it sounds like you'd want something lightweight because you don't have a lot of use cases, and that would definitely be JDBI over Hibernate in my opinion. | Really, both of these solutions are just "lock-in".
If you want to go with a persisted-model type of interface, write your code against JPA (if you are sure it will only ever be backed by a relational database) or JDO (if you might want to back it with relational and other types of databases, like those of the NoSQL movement). This is because with either of these solutions, when problems occur you can switch persistence providers without rewriting the bulk of your code.
If you want to go with a procedural persistence model (dealing with SQL queries directly and such), then go with JDBi or perhaps even JDBC. JDBi provides a very nice abstraction over JDBC; however, there are cases where you want lower-level access (for performance reasons, of the kind where you are tuning the queries and database in concert). Again, JDBC is a standard, so you can swap out one database for another with some ease; however, the SQL itself won't be as easy to swap out.
To mitigate the SQL swap-out problem, I recommend using sets of property files to hold the queries, and then a resource-loader type mechanism to bind the SQL for the right database to the code. It isn't 100% foolproof, but it does get you a bit further.
Now, if you ask me what I'd use, I highly recommend JDO. |
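The per-database property-file suggestion in the last answer can be made concrete. This is a hypothetical illustration in Python (`configparser` plus stdlib `sqlite3` standing in for a real database and resource loader, with invented section and query names), not JDBI/JDBC code: each dialect gets its own section of queries, and the application looks up SQL by key for whichever dialect it is configured against.

```python
import configparser
import sqlite3

# One section per database dialect. In the answer's scheme, each section
# would live in its own property file shipped alongside the application.
QUERY_SOURCE = """
[sqlite]
count_users = SELECT COUNT(*) FROM users

[mysql]
count_users = SELECT COUNT(*) FROM users USE INDEX (PRIMARY)
"""

def load_queries(dialect):
    """Bind the SQL for the configured database dialect to the code."""
    cfg = configparser.ConfigParser()
    cfg.read_string(QUERY_SOURCE)
    return dict(cfg[dialect])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER)")
conn.execute("INSERT INTO users VALUES (1)")
sql = load_queries("sqlite")["count_users"]
print(conn.execute(sql).fetchone()[0])
```

Swapping databases then means swapping which section (or file) is loaded, with the application code unchanged.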
151,778 | I work in a PHP development shop and several new developers have joined the team. One of the new members insists on declaring class properties at the bottom of the class declaration rather than at the top, as one would normally expect.
In all my 5 years of working in web development, I have never seen this done. Is it a common coding style? | 2012/06/06 | [
"https://softwareengineering.stackexchange.com/questions/151778",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/55981/"
] | Some dev environments get around these kinds of issues by using static analysis tools - StyleCop for .NET and Checkstyle for Java. If there are any for php, I'd suggest getting one.
Likewise, it doesn't so much matter what style you, or your company, uses - just that it stays consistent. If you can't find a static analysis tool for php, then just document known style issues (don't discuss it, just document what exists in current, stable source files that people haven't had style problems with) and call that the style guide.
This is an issue that can explode into all kinds of bikeshedding. It's important to not let it become an issue where people feel like they can provide input.
Toss that up onto your project wiki/shared folder/whatever and send out an e-mail to the team.
If you're not a lead/senior/someone with the authority to do this then give the doc to your lead and ask that they do it.
Enforce it, and change it as appropriate, in code reviews. | **Overview**
Weird. I just discovered I worked for several years putting properties at the bottom of the class declaration, and later, for several years, putting properties at the top of the class declaration, and I almost didn't notice the change ...
**Answer**
Both cases are fine, but in this specific case, your developer should stick to the company standard. |
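The static-analysis suggestion in the first answer (for PHP, a tool in this vein is PHP_CodeSniffer) comes down to mechanically checking rules like "properties before methods". Below is a toy, regex-based sketch of such a rule in Python; real tools work on parsed tokens rather than regular expressions, and the sample class sources are invented:

```python
import re

def properties_declared_first(php_source):
    """Return True if every property declaration precedes the first method.

    A toy version of the kind of style rule a linter enforces; it only
    looks at one class in one string and ignores comments and strings.
    """
    first_method = php_source.find("function ")
    if first_method == -1:
        return True  # no methods, so nothing can come "after" them
    # Property declarations look like "public $name;" (or private/protected).
    props = [m.start() for m in
             re.finditer(r"(public|protected|private)\s+\$\w+", php_source)]
    return all(p < first_method for p in props)

good = "class A { private $x; public function f() {} }"
bad  = "class B { public function f() {} private $x; }"
print(properties_declared_first(good), properties_declared_first(bad))
```

Once a rule like this is automated, the "where do properties go" debate moves out of code review and into a one-time tool configuration.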
151,778 | I work in a PHP development shop and several new developers have joined the team. One of the new members insists on declaring class properties at the bottom of the class declaration rather than at the top, as one would normally expect.
In all my 5 years of working in web development, I have never seen this done. Is it a common coding style? | 2012/06/06 | [
"https://softwareengineering.stackexchange.com/questions/151778",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/55981/"
] | Some dev environments get around these kinds of issues by using static analysis tools - StyleCop for .NET and Checkstyle for Java. If there are any for php, I'd suggest getting one.
Likewise, it doesn't so much matter what style you, or your company, uses - just that it stays consistent. If you can't find a static analysis tool for php, then just document known style issues (don't discuss it, just document what exists in current, stable source files that people haven't had style problems with) and call that the style guide.
This is an issue that can explode into all kinds of bikeshedding. It's important to not let it become an issue where people feel like they can provide input.
Toss that up onto your project wiki/shared folder/whatever and send out an e-mail to the team.
If you're not a lead/senior/someone with the authority to do this then give the doc to your lead and ask that they do it.
Enforce it, and change it as appropriate, in code reviews. | I think the simplest way is to follow a coding standard like PEAR's or Zend Framework's, among many others. So if the company has its own standards, then the user must apply them. |
151,778 | I work in a PHP development shop and several new developers have joined the team. One of the new members insists on declaring class properties at the bottom of the class declaration rather than at the top, as one would normally expect.
In all my 5 years of working in web development, I have never seen this done. Is it a common coding style? | 2012/06/06 | [
"https://softwareengineering.stackexchange.com/questions/151778",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/55981/"
] | **Overview**
Weird. I just discovered I worked for several years putting properties at the bottom of the class declaration, and later, for several years, putting properties at the top of the class declaration, and I almost didn't notice the change ...
**Answer**
Both cases are fine, but in this specific case, your developer should stick to the company standard. | I think the simplest way is to follow a coding standard like PEAR's or Zend Framework's, among many others. So if the company has its own standards, then the user must apply them. |
56,409 | English Standard Version Matthew 5:18
>
> For truly, I say to you, until heaven and earth pass away, not an **iota**, not a **dot**, will pass from the Law until all is accomplished.
>
>
>
American Standard Version
>
> For verily I say unto you, Till heaven and earth pass away, one **jot** or one **tittle** shall in no wise pass away from the law, till all things be accomplished.
>
>
>
Does this verse give any hint that Jesus spoke Aramaic or Greek? | 2021/03/08 | [
"https://hermeneutics.stackexchange.com/questions/56409",
"https://hermeneutics.stackexchange.com",
"https://hermeneutics.stackexchange.com/users/35953/"
] | Not much can be deduced from this verse about the original spoken language of the NT. It says much more about the OT Hebrew (and possibly Aramaic) idiom of the first-century Jewish culture. The two words involved are simply:
* ἰῶτα - iota (A. V. jot), the Hebrew letter, yodh י, the smallest of them all; hence equivalent to the minutest part: Matthew 5:18. (Cf. Iota.) THAYER. [This is the only instance in the NT.]
* κεραία - (WH κέρεα (see their Appendix, p. 151)), κεραιας, ἡ (κέρας), a little horn; extremity, apex, point; used by the Greek grammarians of the accents and diacritical points. In Matthew 5:18 ((where see Wetstein; cf. also Edersheim, Jesus the Messiah, 1:537f)); Luke 16:17 of the little lines, or projections, by which the Hebrew letters in other respects similar differ from each other, as cheth ח and he ה, daleth ד and resh ר, beth ב and kaph כ (A. V. tittle); the meaning is, 'not even the minutest part of the law shall perish.' ((Aeschylus, Thucydides, others.)) THAYER
Both words are distinctly Greek in origin but as used here carry very strong Hebrew overtones. Thus, the comment of Jesus was equally understood by Greek, Hebrew, and Aramaic speakers. [This is part of the genius of Jesus' preaching.] | **Jot | Iota (ἰῶτα) | Yod (י)** is the smallest letter of the Alef-Beyt. In the Tanakh, the word 'Yod' (יד) is literally a **Hand** of YHVH in [Ezekiel 37:1] "The Hand of YHVH" ( יַד יְהֹוָֽה ) came upon him. **In the context of Matthew 5:18, Yeshua (Jesus) of Nazareth is making a deeper metaphor in stating that God's Hand (Yod) will not pass away from the law. The Greek mistransliteration ' ἰῶτα ' (iota) loses the meaning.**
**'Tittle' (Stroke)** refers to Latin (titulus) mistranslated from the Greek ' κεραία ' (keraia) from the Hebrew term 'Kera' ( כְרָעַ֨ ) - meaning the 'leg' a letter stands on. To Greeks, the 'Kera' meant a horn instead of leg, so scholars assume Yeshua meant the 'heel' stroke of a Dalet (ד). - **Tittle is not referring to a 'dot' or 'dagesh', because the niqqud used by Masoretes was not applied to Hebrew manuscripts before 70AD. Which means Yeshua (Jesus) did not refer to niqqud.** |
56,409 | English Standard Version Matthew 5:18
>
> For truly, I say to you, until heaven and earth pass away, not an **iota**, not a **dot**, will pass from the Law until all is accomplished.
>
>
>
American Standard Version
>
> For verily I say unto you, Till heaven and earth pass away, one **jot** or one **tittle** shall in no wise pass away from the law, till all things be accomplished.
>
>
>
Does this verse give any hint that Jesus spoke Aramaic or Greek? | 2021/03/08 | [
"https://hermeneutics.stackexchange.com/questions/56409",
"https://hermeneutics.stackexchange.com",
"https://hermeneutics.stackexchange.com/users/35953/"
] | Not much can be deduced from this verse about the original spoken language of the NT. It says much more about the OT Hebrew (and possibly Aramaic) idiom of the first-century Jewish culture. The two words involved are simply:
* ἰῶτα - iota (A. V. jot), the Hebrew letter, yodh י, the smallest of them all; hence equivalent to the minutest part: Matthew 5:18. (Cf. Iota.) THAYER. [This is the only instance in the NT.]
* κεραία - (WH κέρεα (see their Appendix, p. 151)), κεραιας, ἡ (κέρας), a little horn; extremity, apex, point; used by the Greek grammarians of the accents and diacritical points. In Matthew 5:18 ((where see Wetstein; cf. also Edersheim, Jesus the Messiah, 1:537f)); Luke 16:17 of the little lines, or projections, by which the Hebrew letters in other respects similar differ from each other, as cheth ח and he ה, daleth ד and resh ר, beth ב and kaph כ (A. V. tittle); the meaning is, 'not even the minutest part of the law shall perish.' ((Aeschylus, Thucydides, others.)) THAYER
Both words are distinctly Greek in origin but as used here carry very strong Hebrew overtones. Thus, the comment of Jesus was equally understood by Greek, Hebrew, and Aramaic speakers. [This is part of the genius of Jesus' preaching.] | I doubt it can tell us what language was spoken on the occasion.
The concept works as written in Greek with reference to "iota" and "keraia". It also works in Hebrew with reference to "yod" and "kera", and it works in Aramaic provided the audience is familiar with the Hebrew in their scriptures. (Since He's referring to the Torah I think it's safe to say they are)
But it is a nice argument that Jesus was sufficiently literate to be familiar with the Hebrew scriptures (or the Septuagint if you like), and that He expected a degree of literacy from his audience as well. If their only familiarity with the Tanakh was oral targums, they would not have known what a jot and a tittle were. |
7,076,388 | We were planning to develop a desktop application with MS Access as the DB, but we have certain doubts:
1. When we install this application on a client machine after development, does it require MS Access?
2. If yes, do they need to buy a licence for MS Access from Microsoft, or is it free?
3. How can we check while installing the software whether MS Access is on the system? If it is not there, how can we install MS Access along with our application? | 2011/08/16 | [
"https://Stackoverflow.com/questions/7076388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/896407/"
] | In your comments to Haz and duffymo, you indicated your intention is to store "huge size images" in the database. That is not a best practice with MS Access. Due to the way images are stored, the db file size will increase by more than the size of the image files. Starting with Access 2007, there is an improved storage method which reduces that bloat, but it is still an issue. Furthermore this could be a deal-breaker, because the absolute hard-wired file size limit for an Access db file is 2 GB ... your database might not be able to accommodate enough huge images to meet your needs.
I'm unclear about your concern over the need to install Access itself. With recent (since Win 2000) 32-bit Windows versions, the components required to use an Access db file are included as part of the operating system. If you're dealing with 64 bit Windows, you may need to get the [2007 Office System Driver: Data Connectivity Components](http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23734)
Installing Access should only be required if your application uses Access for more than just data storage. An application which uses Access as the front-end client (with Access forms, reports, etc.) would require some form of Access to be installed, but it needn't be the full-blown version. You could design your application for the Access runtime version, which is free of cost starting with Access 2007:
1. [Access 2007 Download: Access Runtime](http://www.microsoft.com/download/en/details.aspx?id=4438)
2. [Microsoft Access 2010 Runtime](http://www.microsoft.com/download/en/details.aspx?id=10910)
However, if you're using something else (e.g. .NET) for your application front end, you wouldn't need any form of Access installed. | It depends on how your software works. Do you have a program that calls the Access DB, or is your program implemented using Access forms?
If you're just using Access as a DB:
1. No, you just require the JET runtime
2. You can find the JET runtime on the Microsoft site
3. It depends on what installer package you are using. You can include the MDAC MSI as a dependency if you're using a typical .NET installer.
If you're using Access for both the DB and the program:
1. Yes
2. Yes, they need to buy it, just like you needed to buy it to develop your database. No, it's not free.
3. If Access is not on their computer, they will need to purchase it, then insert the CD and complete the installation. |
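The first answer in this pair warns against storing huge images inside the .mdb/.accdb file itself (a hard 2 GB cap applies), and the usual workaround is to keep the binaries on disk and store only file paths in the table. Here is a minimal runnable sketch of that pattern, using Python's stdlib `sqlite3` and `tempfile` as stand-ins for the Access table and the file-handling code (which the answers would write in VBA); the `photos` table and function names are hypothetical:

```python
import os
import sqlite3
import tempfile

def store_image(conn, folder, name, data):
    """Write the image bytes to disk and record only its path in the table."""
    path = os.path.join(folder, name)
    with open(path, "wb") as f:
        f.write(data)
    conn.execute("INSERT INTO photos (name, path) VALUES (?, ?)", (name, path))
    return path

def load_image(conn, name):
    """Look up the stored path, then read the bytes back from disk."""
    (path,) = conn.execute(
        "SELECT path FROM photos WHERE name = ?", (name,)).fetchone()
    with open(path, "rb") as f:
        return f.read()

conn = sqlite3.connect(":memory:")  # stand-in for the Access database
conn.execute("CREATE TABLE photos (name TEXT, path TEXT)")
folder = tempfile.mkdtemp()
store_image(conn, folder, "cat.jpg", b"\xff\xd8fake-jpeg-bytes")
print(len(load_image(conn, "cat.jpg")))
```

The database file then grows only by a few bytes per row, regardless of how large the images are.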
7,076,388 | We were planning to develop a desktop application with MS Access as the DB, but we have certain doubts:
1. When we install this application on a client machine after development, does it require MS Access?
2. If yes, do they need to buy a licence for MS Access from Microsoft, or is it free?
3. How can we check while installing the software whether MS Access is on the system? If it is not there, how can we install MS Access along with our application? | 2011/08/16 | [
"https://Stackoverflow.com/questions/7076388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/896407/"
] | Never ever store large images in an Access database. A big database is a slow database. Use VBA to check and create file paths, and store the images outside of the database. Hit me up for the code to do that. | It depends on how your software works. Do you have a program that calls the Access DB, or is your program implemented using Access forms?
If you're just using Access as a DB:
1. No, you just require the JET runtime
2. You can find the JET runtime on the Microsoft site
3. It depends on what installer package you are using. You can include the MDAC MSI as a dependency if you're using a typical .NET installer.
If you're using Access for both the DB and the program:
1. Yes
2. Yes, they need to buy it, just like you needed to buy it to develop your database. No, it's not free.
3. If Access is not on their computer, they will need to purchase it, then insert the CD and complete the installation. |
7,076,388 | We were planning to develop a desktop application with MS Access as the DB, but we have certain doubts:
1. When we install this application on a client machine after development, does it require MS Access?
2. If yes, do they need to buy a licence for MS Access from Microsoft, or is it free?
3. How can we check while installing the software whether MS Access is on the system? If it is not there, how can we install MS Access along with our application? | 2011/08/16 | [
"https://Stackoverflow.com/questions/7076388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/896407/"
] | MS Access is often the cheapest way to develop a distributed database application, and frankly you should have your client buy a license if the tool meets your requirements. I think building a desktop application from scratch is often ridiculous when you'll probably spend well over $100 of your time building features that come out of the box with MS Access. Unless you're doing this work for free, buying software or software components to help speed up delivery is always a good idea.
One caveat: Access DB files are limited to 2 GB. If you have to store data that exceeds this, you'll either need to connect the Access database to a larger database or create a complex partitioning strategy. | It depends on how your software works. Do you have a program that calls the Access DB, or is your program implemented using Access forms?
If you're just using Access as a DB:
1. No, you just require the JET runtime
2. You can find the JET runtime on the Microsoft site
3. It depends on what installer package you are using. You can include the MDAC MSI as a dependency if you're using a typical .NET installer.
If you're using Access for both the DB and the program:
1. Yes
2. Yes, they need to buy it, just like you needed to buy it to develop your database. No, it's not free.
3. If Access is not on their computer, they will need to purchase it, then insert the CD and complete the installation. |
7,076,388 | We were planning to develop a desktop application with MS Access as the DB, but we have certain doubts:
1. When we install this application on a client machine after development, does it require MS Access?
2. If yes, do they need to buy a licence for MS Access from Microsoft, or is it free?
3. How can we check while installing the software whether MS Access is on the system? If it is not there, how can we install MS Access along with our application? | 2011/08/16 | [
"https://Stackoverflow.com/questions/7076388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/896407/"
] | In your comments to Haz and duffymo, you indicated your intention is to store "huge size images" in the database. That is not a best practice with MS Access. Due to the way images are stored, the db file size will increase by more than the size of the image files. Starting with Access 2007, there is an improved storage method which reduces that bloat, but it is still an issue. Furthermore this could be a deal-breaker, because the absolute hard-wired file size limit for an Access db file is 2 GB ... your database might not be able to accommodate enough huge images to meet your needs.
I'm unclear about your concern over the need to install Access itself. With recent (since Win 2000) 32-bit Windows versions, the components required to use an Access db file are included as part of the operating system. If you're dealing with 64 bit Windows, you may need to get the [2007 Office System Driver: Data Connectivity Components](http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23734)
Installing Access should only be required if your application uses Access for more than just data storage. An application which uses Access as the front-end client (with Access forms, reports, etc.) would require some form of Access to be installed, but it needn't be the full-blown version. You could design your application for the Access runtime version, which is free of cost starting with Access 2007:
1. [Access 2007 Download: Access Runtime](http://www.microsoft.com/download/en/details.aspx?id=4438)
2. [Microsoft Access 2010 Runtime](http://www.microsoft.com/download/en/details.aspx?id=10910)
However, if you're using something else (e.g. Dot.Net) for your application front-end, you wouldn't need any form of Access installed. | Never ever store large images in an Access database. A big database is a slow database. Use VBA to check and create file paths and store the images outside of the database. Hit me up for the code to do that
7,076,388 | We were planning to develop a desktop application with MS Access as the DB, but we have certain doubts:
1. When we install this application after development on the client machine, does it require MS Access?
2. If yes, do they need to buy a licence for MS Access from Microsoft, or is it free?
3. How can we check while installing the software whether MS Access is on the system? If not, how can we install MS Access along with our application? | 2011/08/16 | [
"https://Stackoverflow.com/questions/7076388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/896407/"
] | In your comments to Haz and duffymo, you indicated your intention is to store "huge size images" in the database. That is not a best practice with MS Access. Due to the way images are stored, the db file size will increase by more than the size of the image files. Starting with Access 2007, there is an improved storage method which reduces that bloat, but it is still an issue. Furthermore this could be a deal-breaker, because the absolute hard-wired file size limit for an Access db file is 2 GB ... your database might not be able to accommodate enough huge images to meet your needs.
I'm unclear about your concern over the need to install Access itself. With recent (since Win 2000) 32-bit Windows versions, the components required to use an Access db file are included as part of the operating system. If you're dealing with 64 bit Windows, you may need to get the [2007 Office System Driver: Data Connectivity Components](http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23734)
Installing Access should only be required if your application uses Access for more than just data storage. An application which uses Access as the front-end client (with Access forms, reports, etc.) would require some form of Access to be installed, but it needn't be the full-blown version. You could design your application for the Access runtime version, which is free of cost starting with Access 2007:
1. [Access 2007 Download: Access Runtime](http://www.microsoft.com/download/en/details.aspx?id=4438)
2. [Microsoft Access 2010 Runtime](http://www.microsoft.com/download/en/details.aspx?id=10910)
However, if you're using something else (e.g. Dot.Net) for your application front-end, you wouldn't need any form of Access installed. | MS Access is often the cheapest way to develop a distributed database application, and frankly you should have your client buy a license if the tool meets your requirements. I think building a desktop application from scratch is often ridiculous when you'll probably spend well over $100 of your time building features that come out of the box with MS Access. Unless you're doing this work for free, buying software or software components to help speed up delivery is always a good idea.
One caveat. Access DB files are limited to 2 GB. If you have to store data that exceeds this, you'll either need to connect the Access database to a larger database or create a complex partitioning strategy.
184,286 | In 2020, I started to work on a grant proposal for the Austrian Academy Science Fund to fund a two-year postdoc. After two months, I was told by my prospective supervisor that the proposal was so good that it could become a three-year project and give a grant to a PhD student.
I accepted, allowing my prospective supervisor to act as the principal investigator of the project. Three days before submitting, without my consent, my prospective supervisor put his name as first author of the proposal, in the version "Final 3," but I wrote at least 75% of it.
And there are four letters attached to the proposal by external advisors that report my name first.
**Is this a case of plagiarism or other misconduct?**
I already quit this job, because my supervisor/boss ridiculed a specific study of mine connected to the proposal, saying it was nonsense. Honestly, in my next job I want to leave academia anyway; I just wanted to understand the episode better. I mean, he may have done it by mistake, but given the range of his acts during the job, I am starting to think that he actually has a tendency towards appropriation. | 2022/04/15 | [
"https://academia.stackexchange.com/questions/184286",
"https://academia.stackexchange.com",
"https://academia.stackexchange.com/users/92960/"
] | *As the title has changed to contain a specific question since I originally wrote this answer, I want to address the title question explicitly as well: no, I do not think it is appropriate to ever change authorship order without agreement among the authors. However, I think grant proposals are not the same as published work, and sometimes grants are governed by rules about who is allowed to be responsible. It's possible this was done as an administrative change, though the correct behavior would have been to discuss this change with the other authors before making it.*
Is there any *meaning at all* to the order of authors in a *grant proposal*? I really doubt it. Therefore, I can't see any value/benefit your supervisor would get from making this change, and therefore no malice.
I'm of course familiar with designating some individual as a "PI", which often requires them to be a professor or otherwise 'permanent' employee of an institution (or, alternatively, requires them to be a degree-seeking student or postdoctoral trainee), of course. The grants I contribute to have only the PI listed as though they are an "author"; everyone else is listed as some other type of contributor. The PI is responsible for administration of the project (boring admin stuff: budgets and assurances and regulatory compliance); they may not do all or most of the actual work, and very often have a smaller percentage of their salary covered for a project than students and post docs do who are working directly on a project.
My best guess is that, even though no instructions were given about author order, your supervisor assumed that the PI needed to be first, and made that change. I don't see, from the information I have here, a reason to consider *this aspect* to be any sort of willful violation.
If you were on better terms, I'd recommend simply asking for an explanation (in a non-accusatory way), e.g., "I noticed you've changed the order of authors; is it necessary for the PI to be first author, or was there some other reason for the change?" That conversation might also be a good time to raise issues of authorship order for papers that come out of the project, where (field-dependent) that order *does* matter.
Since it seems like you have other conflicts with this person, well, I can't tell you how to judge them overall, but I would recommend basing that evaluation on everything else you know instead. | I am sorry it happened to you. Unfortunately, I am afraid your experience is shared by many PhD students at some point in their career. It is not uncommon in academia for established researchers to diminish the contributions of their collaborators and exaggerate their own contribution. I called some professors out on this, and the "justification" they gave me was that even though early-career researchers do more work, the smaller part of work done by established professors is still more important, because their expertise and recognition is much higher. An extreme but real example everyone mentioned is "engineering labs", where PIs are always included in all publications because they "contributed the lab" for others to work at.
So basically, their argument is: "yes, I contributed only 25% of the effort for this proposal, but because I am 10 times more famous than you, my name should go first". However, it does not feel like a satisfactory argument, nor like fair practice. This is one of many dark pages of academic culture, which I believe should be turned and changed. To try to answer your specific question, many academics would probably agree that changing the order of authors before submission is not completely the "right thing to do". However, depending on the rules and regulations in your particular location, this probably counts neither as plagiarism nor as academic misconduct.
184,286 | In 2020, I started to work on a grant proposal for the Austrian Academy Science Fund to fund a two-year postdoc. After two months, I was told by my prospective supervisor that the proposal was so good that it could become a three-year project and give a grant to a PhD student.
I accepted, allowing my prospective supervisor to act as the principal investigator of the project. Three days before submitting, without my consent, my prospective supervisor put his name as first author of the proposal, in the version "Final 3," but I wrote at least 75% of it.
And there are four letters attached to the proposal by external advisors that report my name first.
**Is this a case of plagiarism or other misconduct?**
I already quit this job, because my supervisor/boss ridiculed a specific study of mine connected to the proposal, saying it was nonsense. Honestly, in my next job I want to leave academia anyway; I just wanted to understand the episode better. I mean, he may have done it by mistake, but given the range of his acts during the job, I am starting to think that he actually has a tendency towards appropriation. | 2022/04/15 | [
"https://academia.stackexchange.com/questions/184286",
"https://academia.stackexchange.com",
"https://academia.stackexchange.com/users/92960/"
] | This depends on the grant guidelines, and your question doesn't mention the type of grant you are targeting now, but for some grants, authors other than the PI just *have to* be listed as co-authors.
The distinction between PI and co-author sounds a lot like this is an FWF (Austrian Science Fund) standalone grant, in which case your PI has to be the "author". To be more precise, the FWF guidelines for standalone projects don't even distinguish between first and subsequent authors (or "authors" and "co-authors"). They only distinguish between the applicant and the co-author(s), if any. So unless you are the applicant (which you and your PI seem to have ruled out), you have to be listed as co-author. This is a necessary formality.
This also means that your PI is not using the correct terminology in the project description, when using the term "author". The FWF only knows "applicants" and "co-authors". However, this is most likely of no practical concern. The application form (unlike the project description where you can write whatever you like) does not even contain a field for the "author", only for "applicants", and "co-authors", as mentioned above.
Quoting the [guidelines](https://www.fwf.ac.at/fileadmin/files/Dokumente/Antragstellung/Einzelprojekte/p_application-guidelines.pdf):
>
> Co-authors form: All persons who have made substantial research-related contributions to the conception and writing of the application should be named as co-authors. A brief description of the nature of each contribution should be included; where there are no co-authors, applicants should state this explicitly on the form.
>
>
>
In case you want to ensure that your contribution is recognizable on your CV, you could add a short explanation of your and your PI's respective roles. | I am sorry it happened to you. Unfortunately, I am afraid your experience is shared by many PhD students at some point in their career. It is not uncommon in academia for established researchers to diminish the contributions of their collaborators and exaggerate their own contribution. I called some professors out on this, and the "justification" they gave me was that even though early-career researchers do more work, the smaller part of work done by established professors is still more important, because their expertise and recognition is much higher. An extreme but real example everyone mentioned is "engineering labs", where PIs are always included in all publications because they "contributed the lab" for others to work at.
So basically, their argument is: "yes, I contributed only 25% of the effort for this proposal, but because I am 10 times more famous than you, my name should go first". However, it does not feel like a satisfactory argument, nor like fair practice. This is one of many dark pages of academic culture, which I believe should be turned and changed. To try to answer your specific question, many academics would probably agree that changing the order of authors before submission is not completely the "right thing to do". However, depending on the rules and regulations in your particular location, this probably counts neither as plagiarism nor as academic misconduct.
33,099,818 | I'm unable to launch an emulator in Android Studio due to the following error:
emulator: WARNING: Increasing RAM size to 1GB
emulator: device fd:620
HAXM is working and emulator runs in fast virt mode
Cannot set up guest memory 'pc.ram': Invalid argument
When installing HAXM I set the memory to 1024MB, and the emulator (using Nexus 5) has been configured to only use 256MB of RAM:
[](https://i.stack.imgur.com/O3msz.png)
I've tried increasing/decreasing the RAM setting but it appears to have no effect, with the same error message appearing each time.
I don't understand why this is happening, and I have tried installing different versions of Android Studio. I'm currently running 1.3.2, which is the version my colleague is successfully using on the same-spec PC (4GB RAM, Windows 7 32-bit).
The error indicates that the emulator is trying to increase its RAM size to 1GB despite it being set to 256MB. | 2015/10/13 | [
"https://Stackoverflow.com/questions/33099818",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5393086/"
] | It appears that downgrading the system image from Marshmallow (API 23) to KitKat (API 19) has resolved this issue. It's been stable for a week and the emulator isn't trying to increase the RAM size. | Have you ever tried to change the Emulator's OS (system image)?
Try to change to Lollipop with API -> Level 22, ABI -> x86, Target -> Android 5.1 (with Google API)
In this window, you can see and download this specific version - among many others - by checking 'Show downloadable system images'
Hope this helps
Chris Martin |
290,662 | Where can I set up custom errors for directories in my application such as App\_Code, App\_Browsers, etc.? I already have customErrors configured in the web.config and that works as expected. For example,
<http://www.mysite.com/bla.aspx> > redirects to 404 page
but
<http://www.mysite.com/App_Code/> > displays "The system cannot find the file specified."
There's no physical App\_Code directory for my site. Is this something that I can change in IIS? | 2008/11/14 | [
"https://Stackoverflow.com/questions/290662",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2034/"
] | I believe you will need to set the error pages in IIS itself, as the requests you talk about never reach the ASP.NET application. The reason your first example works is because IIS recognises the .ASPX extension and forwards it to ASP.NET. | Add a Wildcard Mapping to IIS to run ALL Requests through ASP.net, then you can use Global.asax to handle the error.
Taken [from here](http://www.asp.net/learn/mvc/tutorial-08-cs.aspx):
>
> Follow these steps to create a wildcard script map with IIS 6.0:
>
>
> * Right-click a website and select Properties
> * Select the Home Directory tab
> * Click the Configuration button
> * Select the Mappings tab
> * Click the Insert button (see Figure 4)
> * Paste the path to the aspnet\_isapi.dll into the Executable field (you can copy this path from the script map for .aspx files)
> * Uncheck the checkbox labeled Verify that file exists
> * Click the OK button
>
>
> |
290,662 | Where can I set up custom errors for directories in my application such as App\_Code, App\_Browsers, etc.? I already have customErrors configured in the web.config and that works as expected. For example,
<http://www.mysite.com/bla.aspx> > redirects to 404 page
but
<http://www.mysite.com/App_Code/> > displays "The system cannot find the file specified."
There's no physical App\_Code directory for my site. Is this something that I can change in IIS? | 2008/11/14 | [
"https://Stackoverflow.com/questions/290662",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2034/"
] | You are trying to serve content from a protected folder?
I think you might need to allow access to these folders to get the nice errors you are looking for:
<http://www.webdavsystem.com/server/documentation/hosting_iis_asp_net/protected_folders>
That being said, there is a reason these folders are protected.
I would never put anything I needed IIS to serve in protected folders.
But there are always reasons to do something; I have broken a few rules in my short lifespan :)
**UPDATE:**
Found this when I tried this locally: <http://support.microsoft.com/kb/942047/>
Looks like those reserved directories throw special 404's; you might be able to get IIS to target the 404.8 type without opening up serving to those directories. | I believe you will need to set the error pages in IIS itself, as the requests you talk about never reach the ASP.NET application. The reason your first example works is because IIS recognises the .ASPX extension and forwards it to ASP.NET.
290,662 | Where can I set up custom errors for directories in my application such as App\_Code, App\_Browsers, etc.? I already have customErrors configured in the web.config and that works as expected. For example,
<http://www.mysite.com/bla.aspx> > redirects to 404 page
but
<http://www.mysite.com/App_Code/> > displays "The system cannot find the file specified."
There's no physical App\_Code directory for my site. Is this something that I can change in IIS? | 2008/11/14 | [
"https://Stackoverflow.com/questions/290662",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2034/"
] | You are trying to serve content from a protected folder?
I think you might need to allow access to these folders to get the nice errors you are looking for:
<http://www.webdavsystem.com/server/documentation/hosting_iis_asp_net/protected_folders>
That being said, there is a reason these folders are protected.
I would never put anything I needed IIS to serve in protected folders.
But there are always reasons to do something; I have broken a few rules in my short lifespan :)
**UPDATE:**
Found this when I tried this locally: <http://support.microsoft.com/kb/942047/>
Looks like those reserved directories throw special 404's; you might be able to get IIS to target the 404.8 type without opening up serving to those directories. | Add a Wildcard Mapping to IIS to run ALL Requests through ASP.net, then you can use Global.asax to handle the error.
Taken [from here](http://www.asp.net/learn/mvc/tutorial-08-cs.aspx):
>
> Follow these steps to create a wildcard script map with IIS 6.0:
>
>
> * Right-click a website and select Properties
> * Select the Home Directory tab
> * Click the Configuration button
> * Select the Mappings tab
> * Click the Insert button (see Figure 4)
> * Paste the path to the aspnet\_isapi.dll into the Executable field (you can copy this path from the script map for .aspx files)
> * Uncheck the checkbox labeled Verify that file exists
> * Click the OK button
>
>
> |
11,077 | Is the 1918 flu pandemic responsible for the majority of Iranian casualties during the First World War (1917-1919)?
Are the British (who were the occupying power) responsible for it?
(Because the people of Iran in those days did not have sufficient knowledge of infectious diseases and had probably never seen a microscope, this must be one of the reasons that they did not know what was responsible for the majority of Iranian casualties during the First World War.) | 2013/12/16 | [
"https://history.stackexchange.com/questions/11077",
"https://history.stackexchange.com",
"https://history.stackexchange.com/users/3345/"
] | 1. Flu is caused by a virus. A virus is too small for an optical microscope.
2. The 1918 flu pandemic was neither caused nor spread by humans intentionally (although some nations used quarantine to good effect).
3. Humans still have no effective flu treatment.
4. Blaming the British for Iranian deaths from the pandemic is preposterous. The British did not quarantine any part of the Empire from the rest, so why would they be expected to do that in Persia? | I haven't found a lot of numbers specifically for British Persia, but it is almost certainly the case that far more subjects of that area died from the Spanish flu ([50-100 million killed worldwide](http://en.wikipedia.org/wiki/Spanish_flu)) than from WWI ([about 16 million killed](http://en.wikipedia.org/wiki/World_War_I_casualties), mostly in Europe and Africa). Even among the heaviest combatants, the numbers were close (e.g. UK: 1 million for the war, 250K for the flu; France: 1.7 million for the war, 400K for the flu).
The flu was a pandemic that hit every corner of the globe hard. People like to try to blame these things on someone, but that's really unrealistic. Certainly the British were no more at fault than the Chinese (an early theory had it originating there), or Kansans. Certainly there was nothing they could have done to stop it even if they tried. People on isolated Pacific islands died off ([8% of the population of Tonga](http://en.wikipedia.org/wiki/Spanish_flu#Devastated_communities)) every bit as much as folks in well-traveled crossroads.
29,785 | Is there a plant (not a microscopic type but one that is visible to the naked eye) that has so much iron (or magnetite), cobalt, or nickel in its body that it can attract a magnet?
In this case "attract" would mean that a person holding a small magnet next to the plant can feel a small attraction between the magnet and the plant. | 2015/02/21 | [
"https://biology.stackexchange.com/questions/29785",
"https://biology.stackexchange.com",
"https://biology.stackexchange.com/users/-1/"
] | There was a 2011 [study](http://authors.library.caltech.edu/23694/1/Corsini2011p13835J_Appl_Phys.pdf) where they used a sensitive atomic magnetometer to try to detect a plant's magnetic field. They stated that:
>
> To our knowledge, no one has yet detected the magnetic field from a plant. Biochemical processes, in the form of ionic flows and time-varying ionic distributions, generate electrical currents and time-varying electric fields, both of which produce a magnetic field. However, contrasted to muscle contraction and brain processes, which have a characteristic time scale shorter than one second, plant bioprocesses span several minutes to several days and the expected magnetic field from such processes is correspondingly smaller.
>
> Measurements with a sensitive atomic magnetometer were performed on the Titan arum (Amorphophallus titanum) inflorescence, known for its fast biochemical processes while blooming. We find that the magnetic field from these processes, projected along the Earth's magnetic field, and measured at the surface of the plant, is < 0.6 μG.
So according to this, no, you wouldn't be able to sense the magnetic field from a plant with a magnet. | I would like to add, from my random-seminar experience, that there are magnetic bacteria that orient along magnetic fields due to magnetosomes, e.g. *Magnetospirillum magnetotacticum*. Please look into the [Wiki article](http://en.wikipedia.org/wiki/Magnetotactic_bacteria). So one can imagine a symbiotic association between a plant and such bacteria.
More to the OP's question: remember that any electric current (i.e. moving charges) creates a magnetic field around it. If the net charge of the plant's xylem sap is non-zero, then there will be a magnetic field created when current flows. But it will be minuscule. |
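To put a number on how minuscule (this back-of-the-envelope sketch is mine, not from the answer above, and the microampere current is an assumed, purely illustrative value), the field around a long straight current is B = μ0·I/(2πr):

```python
import math

# Field at distance r from a long straight current I: B = mu0 * I / (2 * pi * r)
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

I = 1e-6    # assumed ionic current in the sap: 1 microampere (illustrative guess)
r = 0.01    # distance from the stem: 1 cm

B = mu0 * I / (2 * math.pi * r)   # tesla
print(B)            # about 2e-11 T, i.e. ~20 picotesla

# Earth's field is roughly 5e-5 T, so this is millions of times weaker --
# hopelessly far below anything a hand-held magnet could respond to.
print(5e-5 / B)
```

Even with this generous current, the result sits below the paper's 0.6 μG (6e-11 T) bound, which is itself far too weak for a magnet to feel.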
195,661 | Not so important, but an answer I gave was accepted today on StackOverflow.com without giving reputation points. I got a badge, but that's all I saw. Is it something I am missing, or is it a bug? | 2013/09/04 | [
"https://meta.stackexchange.com/questions/195661",
"https://meta.stackexchange.com",
"https://meta.stackexchange.com/users/233172/"
] | I'm assuming you're talking about [this answer](https://stackoverflow.com/questions/18559600/facebook-app-check-user-permissions/18559712#18559712). You didn't receive reputation for the acceptance because you have it marked as a Community Wiki answer, and you don't get reputation from those. | Was it [this answer](https://stackoverflow.com/a/18562672/254830)? You appear to have correctly received reputation for it on [your SO account](https://stackoverflow.com/users/2059741/tattvamasi):
 |
33,203 | I have a Mac mini 2011 with OS X Lion, and I have connected my monitor (with integrated speakers) through the HDMI port. The sound is working fine, but the volume keys on the keyboard don't work.
When I press one of them the volume image appears on the screen but with a prohibited signal.
Is there a way to fix this and make my keyboard keys change the system volume?
By the way, they work very well when I use another audio output device | 2011/12/08 | [
"https://apple.stackexchange.com/questions/33203",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/13241/"
] | The audio signal over HDMI is encoded. Encoded audio streams should be normalized to 0 dB. You cannot change this behavior as the audio signal would not be normalized anymore. You can only use the volume controls of your TV set.
Some programs (like iTunes) have volume control themselves; those can be used to change the volume of that specific program. (Although this goes somewhat against the principle that HDMI audio should be normalized.)
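For what it's worth, per-program volume control of this kind is just a linear gain applied to the decoded samples before output; a toy sketch of the dB arithmetic (the function names and sample values are mine, not any Apple API):

```python
def db_to_gain(db: float) -> float:
    """Convert a decibel change to a linear amplitude factor."""
    return 10 ** (db / 20)

def apply_volume(samples, db):
    """Scale PCM float samples by `db` decibels (negative = quieter)."""
    g = db_to_gain(db)
    return [s * g for s in samples]

# 0 dB leaves the signal untouched -- the "normalized" level discussed above.
print(db_to_gain(0))              # 1.0
# -6 dB is roughly half the amplitude.
print(round(db_to_gain(-6), 3))   # 0.501
quieter = apply_volume([0.5, -0.25, 1.0], -6)
```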
See also [this discussion](https://discussions.apple.com/thread/2529122?start=0&tstart=0) on Apple Support Communities (same answer). | As SoundFlower is extremely outdated and not actively maintained, I've started looking for a better solution.
I highly recommend this, works pretty great on my LG monitor:
<https://github.com/MonitorControl/MonitorControl>
This project also supports the default volume/brightness keyboard controls. Instead of using a digital line like SoundFlower, it tries to change the volume of your display itself.
**Edit**:
Since posting this answer, I've become a maintainer of this project, and it's grown quite a bit.
I would still recommend it to everyone wanting to change volume/brightness on external monitors. Works on both Intel and ARM |
33,203 | I have a Mac mini 2011 with OS X Lion, and I have connected my monitor (with integrated speakers) through the HDMI port. The sound is working fine, but the volume keys on the keyboard don't work.
When I press one of them the volume image appears on the screen but with a prohibited signal.
Is there a way to fix this and make my keyboard keys change the system volume?
By the way, they work very well when I use another audio output device | 2011/12/08 | [
"https://apple.stackexchange.com/questions/33203",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/13241/"
] | See a solution here for the problem - <http://www.vanetta.net/2012/07/enabling-hdmi-audio-controls-on-2011.html>
Requires the free third-party app - <https://code.google.com/p/soundflower/> - but it works perfectly. | As SoundFlower is extremely outdated and not actively maintained, I've started looking for a better solution.
I highly recommend this, works pretty great on my LG monitor:
<https://github.com/MonitorControl/MonitorControl>
This project also supports the default volume/brightness keyboard controls. Instead of using a digital line like SoundFlower, it tries to change the volume of your display itself.
**Edit**:
Since posting this answer, I've become a maintainer of this project, and it's grown quite a bit.
I would still recommend it to everyone wanting to change volume/brightness on external monitors. Works on both Intel and ARM |
33,203 | I have a Mac mini 2011 with OS X Lion, and I have connected my monitor (with integrated speakers) through the HDMI port. The sound is working fine, but the volume keys on the keyboard don't work.
When I press one of them the volume image appears on the screen but with a prohibited signal.
Is there a way to fix this and make my keyboard keys change the system volume?
By the way, they work very well when I use another audio output device | 2011/12/08 | [
"https://apple.stackexchange.com/questions/33203",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/13241/"
] | As SoundFlower is extremely outdated and not actively maintained, I've started looking for a better solution.
I highly recommend this, works pretty great on my LG monitor:
<https://github.com/MonitorControl/MonitorControl>
This project also supports the default volume/brightness keyboard controls. Instead of using a digital line like SoundFlower, it tries to change the volume of your display itself.
**Edit**:
Since posting this answer, I've become a maintainer of this project, and it's grown quite a bit.
I would still recommend it to everyone wanting to change volume/brightness on external monitors. Works on both Intel and ARM | A solution using the free version of [Sound Siphon](http://staticz.com/soundsiphon/)
>
> To solve both of these issues: Launch the Sound Siphon app. In Sound Siphon's preferences select your output device as the pass-through device. Turn Sound Siphon on.
>
> Sound Siphon processes the audio before it goes to your output device. Now you can use your keyboard to control the volume.
>
>
>
<http://staticz.com/add-keyboard-volume-controls/> |
2,146 | I want to grow some bulbs in my office. I have windows facing east, west, and north-east. The office is air conditioned, but not humid. What flowering bulbs can I plant, that will grow well under these conditions?
I'm specifically interested in [rain lilies](http://en.wikipedia.org/wiki/Zephyranthes) and [hyacinths](http://en.wikipedia.org/wiki/Hyacinth_%28plant%29). Will these grow well indoors? | 2011/09/23 | [
"https://gardening.stackexchange.com/questions/2146",
"https://gardening.stackexchange.com",
"https://gardening.stackexchange.com/users/140/"
] | I believe you will find "tender bulbs" are easier (less work) to grow indoors than "hardy bulbs", though both types can be grown indoors.
Therefore given the choice between [Rain Lily](http://en.wikipedia.org/wiki/Zephyranthes) (tender bulb) and [Hyacinth](http://en.wikipedia.org/wiki/Hyacinth_%28plant%29) (hardy bulb) I would go with the Rain Lily.
[Hippeastrum](http://en.wikipedia.org/wiki/Hippeastrum) is another tender bulb you may want to look at, it's a "popular" indoor flowering (bulb) plant.
Once you've selected the exact flowering (bulb) plant you want to grow indoors, read up on the plant's specific requirements, e.g.
* Pot size and preferred growing medium.
* Amount of light.
* Watering.
* Does it prefer being kept away from drafts (indoor ventilation systems).
* After-flowering care needs.
---
*Good luck! and please report back here, letting us know what you choose and how you get on with growing it indoors.* | You will not be able to grow a long term plant that way, but you could probably force something. Trying different things would be a good way to to find what does best in your conditions. Hyacinths or narcissus seem like good choices to try out. |
2,146 | I want to grow some bulbs in my office. I have windows facing east, west, and north-east. The office is air conditioned, but not humid. What flowering bulbs can I plant, that will grow well under these conditions?
I'm specifically interested in [rain lilies](http://en.wikipedia.org/wiki/Zephyranthes) and [hyacinths](http://en.wikipedia.org/wiki/Hyacinth_%28plant%29). Will these grow well indoors? | 2011/09/23 | [
"https://gardening.stackexchange.com/questions/2146",
"https://gardening.stackexchange.com",
"https://gardening.stackexchange.com/users/140/"
] | I believe you will find "tender bulbs" are easier (less work) to grow indoors than "hardy bulbs", though both types can be grown indoors.
Therefore given the choice between [Rain Lily](http://en.wikipedia.org/wiki/Zephyranthes) (tender bulb) and [Hyacinth](http://en.wikipedia.org/wiki/Hyacinth_%28plant%29) (hardy bulb) I would go with the Rain Lily.
[Hippeastrum](http://en.wikipedia.org/wiki/Hippeastrum) is another tender bulb you may want to look at, it's a "popular" indoor flowering (bulb) plant.
Once you've selected the exact flowering (bulb) plant you want to grow indoors, read up on the plant's specific requirements, e.g.
* Pot size and preferred growing medium.
* Amount of light.
* Watering.
* Does it prefer being kept away from drafts (indoor ventilation systems).
* After-flowering care needs.
---
*Good luck! and please report back here, letting us know what you choose and how you get on with growing it indoors.* | You should not have any difficulty growing hyacinths under the conditions you describe, although, as J. Musser points out, you will not be able to make permanent residents of them. When I was a child, my mother successfully "forced" bowls of hyacinths (fooled them into thinking that winter was over and it was Spring and time to bloom) every year, so that they always flowered over Christmas. After planting the bulbs in bowls of damp potting compost, she placed them in a cool, dark cupboard for about three months and, as soon as green shoots appeared, she gradually reintroduced them to the light:
>
> Although our winters are considered mild by northern standards, it does get dreary and wet in January and February. Forcing spring bulbs indoors will provide you a cheery pot of flowers when everything else outside is bare and brown. Now is the time to begin preparation.
>
>
> *The easiest bulbs to force are crocuses, hyacinths, and daffodils*. Tulips can be forced, but they are more finicky than the others and require a much longer chilling period.
>
>
> Buy large firm bulbs at a nursery and use 6" pots. A 6" pot will hold 3 hyacinths, 6 daffodils, 6 tulips, or 12 crocuses. Put 3" of a peat-based compost at the bottom of the pot and set the bulbs on it. The bulbs should be placed close together, but they should not touch each other, nor should they touch the sides of the container. Do not force the bulbs down into the compost.
>
>
> Fill the pot with more compost, pressing it firmly but not too tightly around the bulbs. When you have finished, the tips of the bulbs should be just above the surface, and, there should be about ¼" between the top of the compost and the top of the container. Water so that the growing medium is damp, but not soggy.
>
>
> The bulbs need a cold, frost-free period in the dark. A temperature of 40F is ideal. The refrigerator is an ideal place to chill the bulbs. The proper chill time varies with the type of bulb that is being forced. Crocuses are chilled for six weeks, daffodils and hyacinths need 12-14 weeks, and tulips require 16 weeks. Check occasionally to make sure that the growing medium is still moist and that growth has not started.
>
>
> When the shoots are one to two inches tall, it is time to move the container into a cool room indoors. 50F is ideal, but temperatures between 60 and 65 F will be sufficiently cool. Place in a shady spot for a couple of days and then move near a sunny window for a few days. The leaves will begin to develop, and in seven to ten days, flower buds will begin to form. When the buds begin to color, move the container to the chosen site for flowering. This should be a bright, sunny place that is free from drafts and away from radiators or heating ducts. Keep the growing medium moist at all times, turn the container occasionally to promote even growth, and enjoy your spring flowers in the midst of winter's gloom.
>
>
>
[Andie Rathbone, Smith County Master Gardener
Texas AgriLife Extension Service](http://easttexasgardening.tamu.edu/tips/house/indoorbulbs.html)
[Paperwhite Narcissus](https://en.wikipedia.org/wiki/Narcissus_papyraceus) is another plant that is fairly easy to grow indoors.
There is further information on forcing hyacinths, with detailed illustrations [here](http://www.gardenhive.com/flowers/hyacinths/grow/bowls/).
As far as Rain Lilies are concerned, I can't speak from experience, but a quick online search [here](http://magazine.angieslist.com/landscaping/articles/tips-for-planting-the-hardy-rain-lily-in-southwest.aspx) suggests that they can be grown indoors, but will not flower as well as they would outdoors, due to lower light levels and lack of rain; however, if the light from your office windows is good, it is worth giving them a try.
**Update**
Further to your comment below: After the blooms have died down, the usual practice, in the case of hyacinths (and also daffodils, crocus and tulips), is to lift the bulbs and plant them in the garden, in the hope that they will bloom outdoors next Spring; however, some plants such as *Amaryllis*, if given the right care, will go on flowering every year indoors.
If you can't plant your bulbs outdoors after flowering, and don't want to discard them, you could leave them to dry out, then store them in a cool, dry place and, in the Autumn, start the process again, although there is a risk they may not bloom the second time round. |
25,034,262 | I converted my console application to windows forms application. Now if I need to run this program in both forms and console what do i do? I tried running it as WinForms and console... in both the cases only one of them are opened. Any advice? | 2014/07/30 | [
"https://Stackoverflow.com/questions/25034262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3884808/"
] | You have got two very different outputs which want to share the same code base for logic.
You will need to separate your application logic into a code library, then reference it from a Windows App and a Console App. In each app call the appropriate methods to perform whatever functionality you want. | You can start project in Console application and add windows forms in it. I tried in my projects. :) |
468,201 | Why is Thunderbolt considered faster than others?
Apart from the inherent channel properties which affect the speed of data propagation, what else determines the speed of data transfer?
Of course more channels => parallel transfer => more speed. Also, differential signals would be much more reliable for high-speed data transfer.
**But how can a protocol/architecture enable faster data transfer?**
I am pretty sure I am missing something fundamental here.
This is a very basic question to understand why we have so many serial communication protocols. | 2019/11/20 | [
"https://electronics.stackexchange.com/questions/468201",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/38769/"
] | >
> But how can a protocol/architecture enable faster data transfer?
>
>
>
I am unsure what you mean. A protocol change should not make much of a difference in data rate unless you're switching from a very inefficient protocol (for example, repeating every bit 4 times) to a more efficient one.
You forgot to mention **Bandwidth**. An old-fashioned serial connection is quite slow (up to 115200 bits/second) by today's standards, as it has a very low bandwidth due to the electronics, wires and connectors that are used.
Thunderbolt is much faster not only because it uses more connections in parallel, but also because those connections have a **high bandwidth**. You cannot use the same type of wire that would suffice for the 115200 bits/second serial connection. For Thunderbolt you need high-bandwidth (a couple of GHz) capable wire that has shielding, even though differential signalling is used. Obviously high-speed electronics are needed as well. Also, the signal lines need to be properly **terminated** with the correct impedance at each end.
All that isn't needed for a slow serial connection.
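Why termination matters can be seen from the standard transmission-line reflection coefficient, Γ = (Z_load − Z₀)/(Z_load + Z₀): any impedance mismatch reflects part of each signal edge back down the line. The sketch below is my own illustration with made-up impedance values, not part of the original answer:

```python
def reflection_coefficient(z_load: float, z0: float) -> float:
    """Fraction of an incident wave reflected at a termination
    with impedance z_load on a line of characteristic impedance z0."""
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(100.0, 100.0))  # 0.0 -- matched termination, no reflection
print(reflection_coefficient(1e9, 100.0))    # ~1.0 -- an open end reflects (almost) everything
print(reflection_coefficient(0.0, 100.0))    # -1.0 -- a short reflects everything, inverted
```

At gigahertz rates those reflections arrive on top of later bits, which is why a slow serial link tolerates sloppy wiring but Thunderbolt does not.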
I am sure I can make a slow serial connection work over a couple of meters distance using almost any piece of wire that you give me. To make a working Thunderbolt connection over a couple of meters distance you need a suitable Thunderbolt compatible cable, little else will work. | This question is as wide as the entire branch of electrical engineering - communication. In short, a communication link is faster if it uses faster bit switching, and if it is wider. No tricks here. The trick is how a communication protocol achieves faster data rates over individual lines, and how it combines the individual lines into parallel words.
Faster data rates over copper interconnects are achieved by using differential signaling, smaller signal swing amplitudes, and redundant data and clock encoding.
Differential signaling is a no-brainer, but reducing amplitudes down to 50-100mV requires additional effort to make the signal decodable on the receiver end. To make the signal decodable with a simple single-threshold receiver, people drive signal edges at higher amplitude and "de-emphasize" signal plateaus, or use more sophisticated multi-level data encoding. Data are encoded in such a way that transmission of a long 0000000... or 1111111... does not keep the line in one physical state for a long time; otherwise the transmission line will be charged in one direction, and it will be difficult to switch it back to the background level. So the data are encoded so that the physical levels keep switching even if the data pattern doesn't change. Whichever protocol does this equalization better will achieve higher data rates.
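One simple run-length-limiting scheme is bit stuffing; USB uses it, while Thunderbolt-class links use more elaborate block codes, so the toy Python sketch below is only an illustration of the idea, not any real protocol's encoder:

```python
def bit_stuff(bits, max_run=6):
    """Insert a 0 after every run of max_run consecutive 1s (USB-style),
    so the receiver's clock recovery always sees a transition."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == max_run:
            out.append(0)  # stuffed bit; the receiver strips it back out
            run = 0
    return out

print(bit_stuff([1] * 8))  # [1, 1, 1, 1, 1, 1, 0, 1, 1]
```

The cost is a small, data-dependent overhead; block codes like 8b/10b pay a fixed overhead instead and also keep the line DC-balanced.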
Another feature of modern protocols is that they "embed" the clock within the data, and the receiver end uses so-called CDR - clock-data recovery - to extract the right data from the signal. Signals in modern protocols look like white noise, and special circuitry and algorithms are used to get correct data out of all this mess.
Yet another feature of a good fast communication protocol is its ability to tune its receivers and transmitters to accommodate electrical channel properties. At 20 GHz+ signal rates, all copper wires surrounded by dielectric insulation attenuate the signal, to a different degree depending on the frequency component. Modern serial protocols employ "linear equalization" of the channels they are connected to. They use a programmable filter that amplifies higher frequencies relative to low ones. Since in many cases the channel can be anything (like a USB cable, or a different memory module, or PCB trace properties that change with board temperature), modern protocols adjust the shape of their input filter dynamically. The process of adjustment is called "link training". Modern protocols perform link training on every exit from a low-power state (which is another important chapter in a communication protocol). Links can do power transitions on a millisecond basis and faster. Every time the link comes out of an idle state, the transmitters on both ends of the lines send special synchronization and training sequences before the receiver CDR locks onto stable and decodable patterns. Whichever protocol architecture does this process faster will have less overhead and be faster overall.
Yet another area is how a protocol merges individual (and nearly asynchronous) lanes into a wider bus, how it auto-corrects individual accidental errors, and how it recovers from bigger errors. Whoever can do this faster and smoother has a "faster bus". Yet another modern feature is to automatically/dynamically switch the number of used lanes and their base rate depending on actual data transfer demand, as PCIe 4+ does.
So, as one can see, there is quite a bit more to bus architectures than just being wider and using differential lines.
468,201 | Why is Thunderbolt considered faster than others?
Apart from the inherent channel properties which affect the speed of data propagation, what else determines the speed of data transfer?
Of course more channels => parallel transfer => more speed. Also, differential signals would be much more reliable for high-speed data transfer.
**But how can a protocol/architecture enable faster data transfer?**
I am pretty sure I am missing something fundamental here.
This is a very basic question to understand why we have so many serial communication protocols. | 2019/11/20 | [
"https://electronics.stackexchange.com/questions/468201",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/38769/"
] | That depends on your definition of "faster" which in turn depends on what you're doing with the communication channel.
**Simple case: unidirectional**
The "display" part of HDMI (let's not mention the embedded extras like I2C etc) is a unidirectional source synchronous serial link using multiple differential channels. This is the simplest as it is unidirectional. The sender uses specified protocol to pack data into frames and transmits them, the receiver processes it, but does not reply. There are no ACKnowledgements, no retransmission in case of error, etc. It is purely a stream.
This is similar to say, RS-232 Serial, SPDIF, UDP over Ethernet...
In this case, "speed" is purely throughput in bits per second. That's determined by your physical channel properties (bandwidth, noise, etc) as per [Shannon's theorem](https://en.wikipedia.org/wiki/Shannon%E2%80%93Hartley_theorem) which gives an upper bound for the capacity of a channel in bits/second. This is easy to grasp intuitively: more bandwidth means more capacity, and more noise means less capacity. In a real design, bit error rate is also a design parameter. Shannon's capacity is an upper bound, assuming a perfect error correction code is available. In practice, actual capacity will be lower, and the less errors you want, the more redundancy and "safety margins" you will need, which also reduces throughput.
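The Shannon-Hartley bound mentioned above is C = B·log₂(1 + SNR). A small illustration of how bandwidth and noise trade off — the figures below are invented for the example, not taken from any real link:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley upper bound on channel capacity, in bits/second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A hypothetical 1 MHz channel at 30 dB SNR (1000 in linear terms)
print(shannon_capacity(1e6, 1000))  # ~9.97e6 bits/s
```

Note that capacity scales linearly with bandwidth but only logarithmically with SNR, which is why fast links chase wider channels rather than cleaner ones.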
How much of the available capacity is actually utilized depends a lot on the channel coding and protocol used. For example, using an error-correction code allows to increase throughput while keeping the bit error rate under control, up to a point. In some cases, like SPDIF an error-detection code is enough, and the receiver "hides" errors by interpolating over the corrupted sample. In other cases, like RS-232 serial, the bit error rate is assumed to be "low enough" and error handling is not implemented.
The protocol itself will also influence throughput, via packet headers which are overhead and consume bandwidth for example.
**Harder case: Bidirectional**
USB, Thunderbolt, PCI Express and TCP/IP aren't simple streams; they are bidirectional, and both sender and receiver talk to each other. They may acknowledge that packets are properly received, request retransmission in case of error, etc.
This makes latency quite important. If packets must be re-transmitted in case of error, then the sender must keep in its own RAM all the data that has been transmitted but not acknowledged yet by the receiver, in case the receiver requests a re-transmission. Thus we have a design compromise between RAM size (expensive), latency (imposed by transmission distance, number of hops/hubs, packet size, etc) and throughput. Since a packet can only be ACK'ed once it is completely received and error-checked, smaller packets may be an advantage and offer lower latency, but there is more overhead for headers, etc.
For example, an LPC4330 microcontroller with 100BaseT ethernet and 64kB dedicated to packet buffer will happily saturate an ethernet connection with UDP packets. But 64kB is only 6.5 milliseconds worth of buffering at full throughput, so if you want to use TCP to a destination with a 30ms ping, it won't work. You'll have to lower throughput until you have enough buffers to keep all non-ACKed packets in case they need retransmission.
So there are lots of compromises at the protocol level to optimize performance for a particular use case, which is why there is no one-size-fits-all protocol.
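That buffering limit is just the window size divided by the round-trip time; a back-of-the-envelope sketch using the 64 kB / 30 ms figures from the paragraph above (helper name is mine):

```python
def max_throughput_Bps(window_bytes: int, rtt_s: float) -> float:
    # At most one window of un-ACKed data can be in flight per round trip
    return window_bytes / rtt_s

wire_rate = 100e6 / 8                              # 100BaseT ceiling: 12.5 MB/s
buffered = max_throughput_Bps(64 * 1024, 0.030)    # 64 kB window, 30 ms ping
print(buffered)  # ~2.2 MB/s, far below the 12.5 MB/s wire rate
```

So the same hardware that saturates the wire with UDP is throttled by its own buffer once acknowledgements and retransmission enter the picture.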
**Real Time**
Sometimes "faster" means "lowest latency" and throughput is only important as it reduces the time required to transmit N bytes of data, but the link won't be used at full capacity. As an example, SPI has very low latency (just the time to transmit a few bytes) but USB has quite high latency because "real-time" isochronous or interrupt transfers only occur on each µframe. Also USB has a lot more software and protocol overhead. So if you want to control something in real-time and you don't want extra phase lag in your control loop, SPI would be a much better choice.
**Final boss: USB mass storage**
Most of the time you're not just transmitting data for the sake of it, but in order to actually do something, for example read a file from a USB stick.
In this case protocol is extremely important for performance. Consider a transaction between host and device like:
Host - "Device, send sector 123"
Device - ACK
...device fetches data...
Device - sends data
Host - ACK
Host - "Device, send sector 124"
Each exchange takes time (latency) so a protocol that can do more things in less exchanges will be "faster" although it transmits data at the same speed, because it will waste less time waiting, and more time transmitting. Let's upgrade this protocol:
Host - "Device, send sectors 1 to 100000"
In this case, the device will try to push data through the channel for the entire read range at maximum throughput, without having to wait for a new command after each sector.
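The benefit of batching can be put into rough numbers. Everything below — link throughput, per-command turnaround time, sector count — is invented for illustration, not measured from any real device:

```python
def transfer_time_s(n_sectors, sector_bytes, throughput_Bps,
                    turnaround_s, sectors_per_cmd):
    """Total time = raw data time + one command turnaround per command issued."""
    commands = -(-n_sectors // sectors_per_cmd)  # ceiling division
    return n_sectors * sector_bytes / throughput_Bps + commands * turnaround_s

# 100,000 sectors of 512 B over a 30 MB/s link, 0.5 ms turnaround per command
print(transfer_time_s(100_000, 512, 30e6, 0.0005, 1))        # ~51.7 s, one sector per command
print(transfer_time_s(100_000, 512, 30e6, 0.0005, 100_000))  # ~1.71 s, one big batched read
```

Same link, same data, a ~30x difference — which is exactly why the protocol's command structure matters as much as raw bit rate.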
An even more efficient protocol would use Command Queuing (like SATA NCQ) to reduce latency even more.
This explains the difference in benchmarks between [random reads and sequential reads](https://www.tomshardware.com/reviews/usb-3.0-thumb-drive-review,3477-3.html) for example. | This question is as wide as the entire branch of electrical engineering - communication. In short, a communication link is faster if it uses faster bit switching, and if it is wider. No tricks here. The trick is how a communication protocol achieves faster data rates over individual lines, and how it combines the individual lines into parallel words.
Faster data rates over copper interconnects are achieved by using differential signaling, smaller signal swing amplitudes, and redundant data and clock encoding.
Differential signaling is a no-brainer, but reducing amplitudes down to 50-100mV requires additional effort to make the signal decodable on the receiver end. To make the signal decodable with a simple single-threshold receiver, people drive signal edges at higher amplitude and "de-emphasize" signal plateaus, or use more sophisticated multi-level data encoding. Data are encoded in such a way that transmission of a long 0000000... or 1111111... does not keep the line in one physical state for a long time; otherwise the transmission line will be charged in one direction, and it will be difficult to switch it back to the background level. So the data are encoded so that the physical levels keep switching even if the data pattern doesn't change. Whichever protocol does this equalization better will achieve higher data rates.
Another feature of modern protocols is that they "embed" the clock within the data, and the receiver end uses so-called CDR - clock-data recovery - to extract the right data from the signal. Signals in modern protocols look like white noise, and special circuitry and algorithms are used to get correct data out of all this mess.
Yet another feature of a good fast communication protocol is its ability to tune its receivers and transmitters to accommodate electrical channel properties. At 20 GHz+ signal rates, all copper wires surrounded by dielectric insulation attenuate the signal, to a different degree depending on the frequency component. Modern serial protocols employ "linear equalization" of the channels they are connected to. They use a programmable filter that amplifies higher frequencies relative to low ones. Since in many cases the channel can be anything (like a USB cable, or a different memory module, or PCB trace properties that change with board temperature), modern protocols adjust the shape of their input filter dynamically. The process of adjustment is called "link training". Modern protocols perform link training on every exit from a low-power state (which is another important chapter in a communication protocol). Links can do power transitions on a millisecond basis and faster. Every time the link comes out of an idle state, the transmitters on both ends of the lines send special synchronization and training sequences before the receiver CDR locks onto stable and decodable patterns. Whichever protocol architecture does this process faster will have less overhead and be faster overall.
Yet another area is how a protocol merges individual (and nearly asynchronous) lanes into a wider bus, how it auto-corrects individual accidental errors, and how it recovers from bigger errors. Whoever can do this faster and smoother has a "faster bus". Yet another modern feature is to automatically/dynamically switch the number of used lanes and their base rate depending on actual data transfer demand, as PCIe 4+ does.
So, as one can see, there is quite a bit more to bus architectures than just being wider and using differential lines.
38,141,385 | I would like to know if there is a way to keep IIS express and my web page running after closing Visual Studio. I am not just closing debug session but I want to close Visual studio itself.
Is it possible?
If not, can I achieve that using the command prompt? | 2016/07/01 | [
"https://Stackoverflow.com/questions/38141385",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4380155/"
] | It could be your project settings. Please check if you have enabled edit and continue in debug mode:
In Visual Studio, right-click your web project in the Solution Explorer > click Properties > select the 'Web' tab on the left pane > uncheck the 'Enable Edit and Continue' checkbox on the right pane.
Then run your web project, and IIS Express should keep serving your web site even after you stop debugging.
I must say it wouldn't retain the web site after closing Visual Studio. | I am not sure there is any way to keep IIS Express open, but
you can host the website on local IIS to access your website or webpage
190,840 | Is it possible to mount a Linux ext4 partition on Mac OS X?
Please describe the procedure - risk free - thanks.
**Edit Aug 2012**
The best solution I found was to
* use a 2nd machine with Linux,
* mount on Linux the ext4 FS,
* [install NFS on Linux](https://help.ubuntu.com/community/SettingUpNFSHowTo) and export the folder,
* then [mount the NFS partition on Mac](http://ivanvillareal.com/osx/nfs-mac-osx-lion/). | 2010/10/14 | [
"https://serverfault.com/questions/190840",
"https://serverfault.com",
"https://serverfault.com/users/51913/"
] | It seems it isn't currently possible.
A possible workaround for some scenarios is explained at
<http://www.cyberciti.biz/faq/mac-os-x-read-ext3-ext4-external-usb-hard-disk-partition/> | You might be able to do it by running Linux in a VirtualBox VM. The only thing you'd stand to lose is time configuring and installing (and space on the drive). Certainly won't hurt the Mac to try it.
190,840 | Is it possible to mount a Linux ext4 partition on Mac OS X?
Please describe the procedure - risk free - thanks.
**Edit Aug 2012**
The best solution I found was to
* use a 2nd machine with Linux,
* mount on Linux the ext4 FS,
* [install NFS on Linux](https://help.ubuntu.com/community/SettingUpNFSHowTo) and export the folder,
* then [mount the NFS partition on Mac](http://ivanvillareal.com/osx/nfs-mac-osx-lion/). | 2010/10/14 | [
"https://serverfault.com/questions/190840",
"https://serverfault.com",
"https://serverfault.com/users/51913/"
] | I found a solution a couple of months ago. I bought this one <https://www.paragon-software.com/home/extfs-mac/> and it works like a charm. I didn't manage to find any free solution; ext4fuse is fine, but only for read-only mode. | It seems it isn't currently possible.
A possible workaround for some scenarios is explained at
<http://www.cyberciti.biz/faq/mac-os-x-read-ext3-ext4-external-usb-hard-disk-partition/> |
190,840 | Is it possible to mount a Linux ext4 partition on Mac OS X?
Please describe the procedure - risk free - thanks.
**Edit Aug 2012**
The best solution I found was to
* use a 2nd machine with Linux,
* mount on Linux the ext4 FS,
* [install NFS on Linux](https://help.ubuntu.com/community/SettingUpNFSHowTo) and export the folder,
* then [mount the NFS partition on Mac](http://ivanvillareal.com/osx/nfs-mac-osx-lion/). | 2010/10/14 | [
"https://serverfault.com/questions/190840",
"https://serverfault.com",
"https://serverfault.com/users/51913/"
] | Try [MacFuse](http://code.google.com/p/macfuse/) with [ext4fuse](http://github.com/gerard/ext4fuse). If you want it to be risk-free, mount it read-only or duplicate the partition first. | You might be able to do it by running Linux in a VirtualBox VM. The only thing you'd stand to lose is time configuring and installing (and space on the drive). Certainly won't hurt the Mac to try it.
190,840 | Is it possible to mount a Linux ext4 partition on Mac OS X?
Please describe the procedure - risk free - thanks.
**Edit Aug 2012**
The best solution I found was to
* use a 2nd machine with Linux,
* mount on Linux the ext4 FS,
* [install NFS on Linux](https://help.ubuntu.com/community/SettingUpNFSHowTo) and export the folder,
* then [mount the NFS partition on Mac](http://ivanvillareal.com/osx/nfs-mac-osx-lion/). | 2010/10/14 | [
"https://serverfault.com/questions/190840",
"https://serverfault.com",
"https://serverfault.com/users/51913/"
] | I found a solution a couple of months ago. I bought this one <https://www.paragon-software.com/home/extfs-mac/> and it works like a charm. I didn't manage to find any free solution; ext4fuse is fine, but only for read-only mode. | You might be able to do it by running Linux in a VirtualBox VM. The only thing you'd stand to lose is time configuring and installing (and space on the drive). Certainly won't hurt the Mac to try it.
190,840 | Is it possible to mount a Linux ext4 partition on Mac OS X?
Please describe the procedure - risk free - thanks.
**Edit Aug 2012**
The best solution I found was to
* use a 2nd machine with Linux,
* mount on Linux the ext4 FS,
* [install NFS on Linux](https://help.ubuntu.com/community/SettingUpNFSHowTo) and export the folder,
* then [mount the NFS partition on Mac](http://ivanvillareal.com/osx/nfs-mac-osx-lion/). | 2010/10/14 | [
"https://serverfault.com/questions/190840",
"https://serverfault.com",
"https://serverfault.com/users/51913/"
] | I found a solution a couple of months ago. I bought this one <https://www.paragon-software.com/home/extfs-mac/> and it works like a charm. I didn't manage to find any free solution; ext4fuse is fine, but only for read-only mode. | Try [MacFuse](http://code.google.com/p/macfuse/) with [ext4fuse](http://github.com/gerard/ext4fuse). If you want it to be risk-free, mount it read-only or duplicate the partition first.
248,469 | When viewing a question, there is a list on the right that displays related questions. However, I suspect they do not display the full list of related questions since the size of the list seems to be fixed at 10.
Sometimes, I would like to have a more exhaustive list of related questions.
Could that feature be added? For example, a whole new page with a list of questions related to a particular question. | 2015/02/12 | [
"https://meta.stackexchange.com/questions/248469",
"https://meta.stackexchange.com",
"https://meta.stackexchange.com/users/282583/"
] | I like this request. I think it would make the most sense to have a link underneath the related questions that says "more related questions", in the same way that Hot Network Questions has a "more hot questions" link. This would allow the people who want to see them could have the option, but it wouldn't add any additional overhead for people who don't want to see them.
Example:
[](https://i.stack.imgur.com/98iHZ.png) | I believe Tags serve that purpose sufficiently. Sometimes one question can be relevant to many topics.
You are right that the related questions list shows around 10 questions matching words entered in the title; I think that's sufficient for a brief idea. To see all the related questions, users can visit the Tag section.
46,080 | I'm using TortoiseSVN against the SourceForge SVN repository. I'd like to move a file from one folder to another in order to maintain its revision history. Is this possible? If so, how do you do it?
(My current strategy has been to copy the file into the new folder and check it in and then delete the file from the current folder.) | 2008/09/05 | [
"https://Stackoverflow.com/questions/46080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1463/"
] | Check out section **[5.14.2. Moving files and folders](http://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-dug-rename.html#tsvn-dug-rename-move)** (or check out "move" in the Index of the help) of the TortoiseSVN help. You do a move via right-dragging. It also mentions that you need to commit from the parent folder to make it "one" revision. This works for doing the change in a working copy.
(Note that the SVN items in the following image will only show up if the destination folder has already been added to the repository.)

You can also do the move via the Repo Browser (section **[5.23. The Repository Browser](http://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-dug-repobrowser.html)** of the help). | If you are moving folders via Repository Browser, then there is no Move option on right-click; the only way is to drag and drop. |
46,080 | I'm using TortoiseSVN against the SourceForge SVN repository. I'd like to move a file from one folder to another in order to maintain its revision history. Is this possible? If so, how do you do it?
(My current strategy has been to copy the file into the new folder and check it in and then delete the file from the current folder.) | 2008/09/05 | [
"https://Stackoverflow.com/questions/46080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1463/"
] | If I'm not wrong, starting from version 1.5 SVN can track moved files\folders. In TortoiseSVN you can move files via drag & drop. | If you are moving folders via Repository Browser, then there is no Move option on right-click; the only way is to drag and drop.
46,080 | I'm using TortoiseSVN against the SourceForge SVN repository. I'd like to move a file from one folder to another in order to maintain its revision history. Is this possible? If so, how do you do it?
(My current strategy has been to copy the file into the new folder and check it in and then delete the file from the current folder.) | 2008/09/05 | [
"https://Stackoverflow.com/questions/46080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1463/"
] | Since you're using Tortoise you may want to check out this link on LosTechies. It should be almost exactly what you are looking for.
<http://www.lostechies.com/blogs/joshua_lockwood/archive/2007/09/12/subversion-tip-of-the-day-moving-files.aspx> | It may also be called "rename" by Tortoise, but `svn move` is the command in the barebones svn client. |
46,080 | I'm using TortoiseSVN against the SourceForge SVN repository. I'd like to move a file from one folder to another in order to maintain its revision history. Is this possible? If so, how do you do it?
(My current strategy has been to copy the file into the new folder and check it in and then delete the file from the current folder.) | 2008/09/05 | [
"https://Stackoverflow.com/questions/46080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1463/"
] | Since you're using Tortoise you may want to check out this link on LosTechies. It should be almost exactly what you are looking for.
<http://www.lostechies.com/blogs/joshua_lockwood/archive/2007/09/12/subversion-tip-of-the-day-moving-files.aspx> | Moving a file using TortoiseSVN:
**Step 1:**
Select the files you want to move, then right-click and drag them to the folder you want to move them to.
A window will pop up; follow the instruction shown below.
[](https://i.stack.imgur.com/tf2tB.png)
**Step 2:**
After clicking the above, commit the files as shown below.
[](https://i.stack.imgur.com/VIMUC.png) |
46,080 | I'm using TortoiseSVN against the SourceForge SVN repository. I'd like to move a file from one folder to another in order to maintain its revision history. Is this possible? If so, how do you do it?
(My current strategy has been to copy the file into the new folder and check it in and then delete the file from the current folder.) | 2008/09/05 | [
"https://Stackoverflow.com/questions/46080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1463/"
] | In TortoiseSVN, right-click somewhere and go to TortoiseSVN > Repo Browser to open the repository.
All you then have to do is drag and drop the file from one folder to where you want it. It'll ask you to add a commit message, which defaults to "Moved file/folder remotely". | If you are moving folders via Repository Browser, then there is no Move option on right-click; the only way is to drag and drop. |
2,702 | What is a **Flight Director**, and how does it differ from **Autopilot**? | 2014/03/25 | [
"https://aviation.stackexchange.com/questions/2702",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/1700/"
] | The flight director is related to the autopilot system. It displays a guide on the artificial horizon, which shows the attitude of the airplane, but does nothing to control the plane. The guide represents a reference of an airplane attitude that will follow the parameters set for the autopilot. The pilot can manually fly the plane directly where the flight director indicates, and by doing so the plane will follow the parameters set for the autopilot.
If the autopilot is engaged, the autopilot flies the plane to follow the flight director. The flight director serves as a visual indication of where the autopilot wants the plane to go. Although a flight director typically accompanies an autopilot system, some aircraft have a flight director without an autopilot.
The procedure to engage them is to first turn on the flight director, which will show where the autopilot wants the plane to be, and then to engage the autopilot, which will then automatically fly the plane.
This is what the autopilot controls can look like.

Here is the Boeing flight director, visible as the crossed magenta lines in the center of the screen.
 | It lets the pilot know what the autopilot would do if it were flying instead, by displaying an indicator, usually a miniature pink plane or line, on the artificial horizon.
The difference between the autopilot and flight director is that the autopilot flies the plane, while the flight director gives the pilot an idea of what the autopilot would like to do if it were in charge. |
2,702 | What is a **Flight Director**, and how does it differ from **Autopilot**? | 2014/03/25 | [
"https://aviation.stackexchange.com/questions/2702",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/1700/"
] | The flight director is related to the autopilot system. It displays a guide on the artificial horizon, which shows the attitude of the airplane, but does nothing to control the plane. The guide represents a reference of an airplane attitude that will follow the parameters set for the autopilot. The pilot can manually fly the plane directly where the flight director indicates, and by doing so the plane will follow the parameters set for the autopilot.
If the autopilot is engaged, the autopilot flies the plane to follow the flight director. The flight director serves as a visual indication of where the autopilot wants the plane to go. Although a flight director typically accompanies an autopilot system, some aircraft have a flight director without an autopilot.
The procedure to engage them is to first turn on the flight director, which will show where the autopilot wants the plane to be, and then to engage the autopilot, which will then automatically fly the plane.
This is what the autopilot controls can look like.

Here is the Boeing flight director, visible as the crossed magenta lines in the center of the screen.
 | It's the traditional flight instruments, integrated to work in a complementary way. For example the steering bars moving across the face of the compass so as to give the pilot visual ques for turning onto a desired heading and/or altitude.
The flight director gives a visual display of the overall aircraft attitude in space. The flight director display reacts to inputs coming from, for example, heading dials, electronic flight plans, and actual aircraft movement. The flight director does not physically control the aircraft.
The autopilot physically controls the aircraft in response to essentially the same inputs. The flight director essentially does not care if a human or the autopilot is actually in control. The autopilot adds intelligence which allows for various levels of pilot non-intervention up to the point of virtually hands-off control from takeoff to landing.
Together, the flight director and autopilot permit the pilot to let his skills atrophy while inducing a sense of inferiority on the part of the pilot in command; thus permitting well controlled crashes. |
2,702 | What is a **Flight Director**, and how does it differ from **Autopilot**? | 2014/03/25 | [
"https://aviation.stackexchange.com/questions/2702",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/1700/"
] | The flight director is related to the autopilot system. It displays a guide on the artificial horizon, which shows the attitude of the airplane, but does nothing to control the plane. The guide represents a reference of an airplane attitude that will follow the parameters set for the autopilot. The pilot can manually fly the plane directly where the flight director indicates, and by doing so the plane will follow the parameters set for the autopilot.
If the autopilot is engaged, the autopilot flies the plane to follow the flight director. The flight director serves as a visual indication of where the autopilot wants the plane to go. Although a flight director typically accompanies an autopilot system, some aircraft have a flight director without an autopilot.
The procedure to engage them is to first turn on the flight director, which will show where the autopilot wants the plane to be, and then to engage the autopilot, which will then automatically fly the plane.
This is what the autopilot controls can look like.

Here is the Boeing flight director, visible as the crossed magenta lines in the center of the screen.
] | Simply: the Flight Director is a system that computes desired pitch and roll from parameters like heading, altitude, and vertical speed. An autopilot then computes control surface deflections from the given parameters: pitch and roll.
So: Pilot sets altitude, heading for Flight Director.
Flight director then computes pitch and roll for Autopilot (and human pilot, too).
If the Autopilot is engaged, it then computes the control surface deflections and moves the surfaces.
Flight Director is also displayed on PFD in some way, so if Autopilot is disengaged, the human pilot can still fly to follow FD's directions. |
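The two-stage split described in the answer above (the Flight Director turns altitude/heading targets into desired pitch and roll; the autopilot turns pitch and roll into surface deflections) can be sketched as two tiny proportional controllers. This is an illustrative toy only, not real avionics logic; every gain, limit, and function name here is invented for the example:

```python
# Toy sketch of the FD -> AP split described above. Illustrative only:
# all gains, limits, and names are invented, not real avionics logic.

def flight_director(target_alt, alt, target_hdg, hdg):
    """Stage 1: turn altitude/heading targets into desired pitch and roll (deg)."""
    pitch_cmd = max(-15.0, min(15.0, 0.01 * (target_alt - alt)))
    hdg_err = (target_hdg - hdg + 180) % 360 - 180   # shortest-way turn
    roll_cmd = max(-25.0, min(25.0, 0.5 * hdg_err))
    return pitch_cmd, roll_cmd

def autopilot(pitch_cmd, pitch, roll_cmd, roll):
    """Stage 2: turn pitch/roll error into elevator/aileron deflection (deg)."""
    elevator = 0.8 * (pitch_cmd - pitch)
    aileron = 0.6 * (roll_cmd - roll)
    return elevator, aileron

# Climb 500 ft and turn right 10 degrees:
pitch_cmd, roll_cmd = flight_director(10000, 9500, 270, 260)   # the FD "bars"
elevator, aileron = autopilot(pitch_cmd, 0.0, roll_cmd, 0.0)   # the AP's response
```

With the autopilot disengaged, a human pilot simply flies toward the same `pitch_cmd`/`roll_cmd` bars, which is exactly the division of labour the answer describes.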
2,702 | What is a **Flight Director**, and how does it differ from **Autopilot**? | 2014/03/25 | [
"https://aviation.stackexchange.com/questions/2702",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/1700/"
] | It lets the pilot know what the autopilot would do if it were flying instead, by displaying an indicator, usually a miniature pink plane or line, on the artificial horizon.
The difference between the autopilot and flight director is that the autopilot flies the plane, while the flight director gives the pilot an idea of what the autopilot would like to do if it were in charge. | It's the traditional flight instruments, integrated to work in a complementary way. For example, the steering bars move across the face of the compass to give the pilot visual cues for turning onto a desired heading and/or altitude.
The flight director gives a visual display of the overall aircraft attitude in space. The flight director display reacts to inputs coming from, for example, heading dials, electronic flight plans, and actual aircraft movement. The flight director does not physically control the aircraft.
The autopilot physically controls the aircraft in response to essentially the same inputs. The flight director essentially does not care if a human or the autopilot is actually in control. The autopilot adds intelligence which allows for various levels of pilot non-intervention up to the point of virtually hands-off control from takeoff to landing.
Together, the flight director and autopilot permit the pilot to let his skills atrophy while inducing a sense of inferiority on the part of the pilot in command; thus permitting well controlled crashes. |
2,702 | What is a **Flight Director**, and how does it differ from **Autopilot**? | 2014/03/25 | [
"https://aviation.stackexchange.com/questions/2702",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/1700/"
] | It lets the pilot know what the autopilot would do if it were flying instead, by displaying an indicator, usually a miniature pink plane or line, on the artificial horizon.
The difference between the autopilot and flight director is that the autopilot flies the plane, while the flight director gives the pilot an idea of what the autopilot would like to do if it were in charge. | Simply: the Flight Director is a system that computes desired pitch and roll from parameters like heading, altitude, and vertical speed. An autopilot then computes control surface deflections from the given parameters: pitch and roll.
So: Pilot sets altitude, heading for Flight Director.
Flight director then computes pitch and roll for Autopilot (and human pilot, too).
If the Autopilot is engaged, it then computes the control surface deflections and moves the surfaces.
Flight Director is also displayed on PFD in some way, so if Autopilot is disengaged, the human pilot can still fly to follow FD's directions. |
2,702 | What is a **Flight Director**, and how does it differ from **Autopilot**? | 2014/03/25 | [
"https://aviation.stackexchange.com/questions/2702",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/1700/"
] | It's the traditional flight instruments, integrated to work in a complementary way. For example, the steering bars move across the face of the compass to give the pilot visual cues for turning onto a desired heading and/or altitude.
The flight director gives a visual display of the overall aircraft attitude in space. The flight director display reacts to inputs coming from, for example, heading dials, electronic flight plans, and actual aircraft movement. The flight director does not physically control the aircraft.
The autopilot physically controls the aircraft in response to essentially the same inputs. The flight director essentially does not care if a human or the autopilot is actually in control. The autopilot adds intelligence which allows for various levels of pilot non-intervention up to the point of virtually hands-off control from takeoff to landing.
Together, the flight director and autopilot permit the pilot to let his skills atrophy while inducing a sense of inferiority on the part of the pilot in command; thus permitting well controlled crashes. | Simply: the Flight Director is a system that computes desired pitch and roll from parameters like heading, altitude, and vertical speed. An autopilot then computes control surface deflections from the given parameters: pitch and roll.
So: Pilot sets altitude, heading for Flight Director.
Flight director then computes pitch and roll for Autopilot (and human pilot, too).
If the Autopilot is engaged, it then computes the control surface deflections and moves the surfaces.
Flight Director is also displayed on PFD in some way, so if Autopilot is disengaged, the human pilot can still fly to follow FD's directions. |
70,018 | I came across this sentence in the American Heritage Dictionary, but still do not understand it.
>
> This proposal is the best so far, **modulo the fact** that parts of it
> need modification.
>
>
>
The definition of *[modulo](http://www.thefreedictionary.com/modulo)* provided is *correcting or adjusting for something, as by leaving something out of account*.
Please elaborate on the meaning of *modulo the fact*. Does it mean the same as the following?
>
> This proposal is the best so far, but note what parts of it still
> need to be modified.
>
>
> | 2012/06/05 | [
"https://english.stackexchange.com/questions/70018",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/21952/"
] | I only see this usage from academics with a background that includes England. It is used to mean the opposite of *except* - some part of the set is included, not excluded, and you're saying that you're including it even though some people might not. It isn't a substitute for *but* or *except*, because those would be about excluding something from the set. It might be closer to *even though*.
In math, *modulo* is the remainder after dividing, so `5 mod 2` is 1. In words, it's something like *even after accounting for*. However, it has been heard by generations of people who aren't sure what it means, don't want to ask, and feel that smart people use it. Those people tend to use it as *except* or *but*, meaning that you probably can't be entirely sure any more what someone means when they use it. | The OED3 has *mod* as a preposition dating from 1854, and *modulo* as a preposition dating from 1887 — but those are the more purely mathematical senses, not the extended senses that seem to crop up in the 1950s.
However, programmers and perhaps others regularly use *mod* or *modulo* in its extended sense to mean “save/except for”, or “without”, or “minus”. It’s to exclude something. This isn’t a mathematical use, although it may be a form of shop jargon.
Again, it is by no means uncommon in programmer circles, although I don’t know that I’ve myself used it in formal writing. |
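The arithmetic behind the word, as the answers above note, is just the remainder after division. A quick sketch in Python (the `congruent` helper is ours, purely for illustration):

```python
# "modulo" in arithmetic: the remainder after division.
assert 5 % 2 == 1   # "5 mod 2 is 1", as in the answer above

# Two numbers are "equal modulo c" when they differ by a multiple of c --
# i.e. they are the same once differences of size c are accounted for.
def congruent(a, b, c):
    """Illustrative helper: True if a and b are equal modulo c."""
    return (a - b) % c == 0

assert congruent(17, 5, 12)       # 17:00 and 5:00 look alike on a 12-hour clock
assert not congruent(17, 6, 12)
```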
70,018 | I came across this sentence in the American Heritage Dictionary, but still do not understand it.
>
> This proposal is the best so far, **modulo the fact** that parts of it
> need modification.
>
>
>
The definition of *[modulo](http://www.thefreedictionary.com/modulo)* provided is *correcting or adjusting for something, as by leaving something out of account*.
Please elaborate on the meaning of *modulo the fact*. Does it mean the same as the following?
>
> This proposal is the best so far, but note what parts of it still
> need to be modified.
>
>
> | 2012/06/05 | [
"https://english.stackexchange.com/questions/70018",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/21952/"
] | The pattern
>
> X modulo Y
>
>
>
is an informal but common parlance in technical, especially mathematically, oriented talk. It is used to mean informally 'X, ignoring Y'. For example,
>
> "The rocket design was flawless, modulo the toxic waste produced by its fuel."
>
>
>
The meaning is inspired by, but not perfectly corresponding to, the arithmetic modulo function (for example, clock-time addition) which when suitably abstracted involves 'collapsing' all items of a set into the special items of the set, so that the full set does not need to be dealt with (this is where the associated meaning of 'ignoring' comes from).
In your interpretation "note what parts of it still need to be modified", the 'modified' part is irrelevant. 'Modulo' is pragmatically "I'm telling you about the most important part (the X), but remarking on the existence of some part that might be important for other reasons but under the current context we want to ignore (the Y)". | Here's a use of 'modulo' by a mathematician working on the four-colour theorem, quoted in Mark Walters, "[It Appears That Four Colors Suffice: A Historical Overview of the Four-Color Theorem](http://historyofmathematics.org/wp-content/uploads/2013/09/2004-Walters.pdf)" (2004):
>
> Shortly after testing the final configuration for reducibility,
> Appel celebrated the success by etching the statement ‘Modulo careful checking, it appears that four colors suffice’ onto the department’s blackboard.
>
>
>
This follows the general sense of 'A mod B' being 'A seems generally true but for B', where B is not necessarily an exception but something to bear in mind. |
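The "clock-time addition" that the answer above cites as the inspiration is easy to make concrete; a minimal sketch (the function name is ours):

```python
# Clock-time addition: all hours "collapse" onto 0..11, so only twelve
# representatives ever matter -- the source of the "ignoring" sense.
def clock_add(hour, hours_later, cycle=12):
    return (hour + hours_later) % cycle

assert clock_add(9, 5) == 2              # five hours after 9 o'clock is 2 o'clock
assert clock_add(23, 3, cycle=24) == 2   # works for a 24-hour clock too
```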
70,018 | I came across this sentence in the American Heritage Dictionary, but still do not understand it.
>
> This proposal is the best so far, **modulo the fact** that parts of it
> need modification.
>
>
>
The definition of *[modulo](http://www.thefreedictionary.com/modulo)* provided is *correcting or adjusting for something, as by leaving something out of account*.
Please elaborate on the meaning of *modulo the fact*. Does it mean the same as the following?
>
> This proposal is the best so far, but note what parts of it still
> need to be modified.
>
>
> | 2012/06/05 | [
"https://english.stackexchange.com/questions/70018",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/21952/"
] | Like *zero* or [*orthogonal*](http://volokh.com/2010/01/11/orthogonal-ooh/), the word *modulo* has a precise technical meaning in mathematics. This makes it useful in some limited non-mathematical situations. But most people just say it to sound smart, and miss the point.
### Good usage
In mathematics, "A = B modulo C" means roughly that A and B are the same thing, except for differences of type C.
* Butane and isobutane are the same, *modulo* their shape.
* If you remember your dad's birthday but don't remember which year, then you know the day of his birth *modulo* one year.
### Bad usage
Sometimes people use "modulo" as a synonym for "except for." Example: ["All mammals, modulo the monotremes, give birth to live young."](https://en.wiktionary.org/wiki/modulo#Preposition) This example is arguably correct, but we have far better words for this (like except).
Sometimes people use the word modulo to mean "almost." This is almost as bad as saying "literally" when you mean "figuratively." The word "modulo" has nothing to do with closeness. In math, -1 and 999999999999 are equal modulo 1000000000000.
### Your example
>
> This proposal is the best so far, modulo the fact that parts of it need modification.
>
>
>
If the author knows what "modulo" really means, this should mean
>
> This proposal would be the best so far if parts of it were modified.
>
>
>
The dictionary author could have found a better example. | If anyone is offering a definition that is a synonym for “notwithstanding”, then just use “notwithstanding”.
“Modulo” has a distinct usefulness that is to do with remainders (i.e. what is left over after you put items into groups), and contrary to many other opinions here, it is useful beyond mathematics.
>
> Politicians vote along party lines, modulo party membership
>
>
>
means
>
> Politicians vote along party lines, split into parties and as groups, with a few leftover (sometimes) because they don’t fit into parties
>
>
>
and not simply
>
> Politicians vote along party lines, unless they don’t have a party
>
>
>
along with the missing implication that most have them always, and sometimes all have them.
This is the correct usage taken beyond mathematics without the grandstanding. If the grandstanding form is synonymous with “notwithstanding”, maybe the correct form is synonymous with “after sorting into every”; but the next phrase must be a noun, not a circumstance or condition.
It is at this point we have to face up to the unsatisfactory use of the word “notwithstanding”, where some people follow that word with a precondition and others do the opposite, and give an exclusion. Others still just give the noun without stating inclusion or exclusion, but if you follow a literal logic, it should be an exclusion to the preceding statement. |
70,018 | I came across this sentence in the American Heritage Dictionary, but still do not understand it.
>
> This proposal is the best so far, **modulo the fact** that parts of it
> need modification.
>
>
>
The definition of *[modulo](http://www.thefreedictionary.com/modulo)* provided is *correcting or adjusting for something, as by leaving something out of account*.
Please elaborate on the meaning of *modulo the fact*. Does it mean the same as the following?
>
> This proposal is the best so far, but note what parts of it still
> need to be modified.
>
>
> | 2012/06/05 | [
"https://english.stackexchange.com/questions/70018",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/21952/"
] | The OED3 has *mod* as a preposition dating from 1854, and *modulo* as a preposition dating from 1887 — but those are the more purely mathematical senses, not the extended senses that seem to crop up in the 1950s.
However, programmers and perhaps others regularly use *mod* or *modulo* in its extended sense to mean “save/except for”, or “without”, or “minus”. It’s to exclude something. This isn’t a mathematical use, although it may be a form of shop jargon.
Again, it is by no means uncommon in programmer circles, although I don’t know that I’ve myself used it in formal writing. | If anyone is offering a definition that is a synonym for “notwithstanding”, then just use “notwithstanding”.
“Modulo” has a distinct usefulness that is to do with remainders (i.e. what is left over after you put items into groups), and contrary to many other opinions here, it is useful beyond mathematics.
>
> Politicians vote along party lines, modulo party membership
>
>
>
means
>
> Politicians vote along party lines, split into parties and as groups, with a few leftover (sometimes) because they don’t fit into parties
>
>
>
and not simply
>
> Politicians vote along party lines, unless they don’t have a party
>
>
>
along with the missing implication that most have them always, and sometimes all have them.
This is the correct usage taken beyond mathematics without the grandstanding. If the grandstanding form is synonymous with “notwithstanding”, maybe the correct form is synonymous with “after sorting into every”; but the next phrase must be a noun, not a circumstance or condition.
It is at this point we have to face up to the unsatisfactory use of the word “notwithstanding”, where some people follow that word with a precondition and others do the opposite, and give an exclusion. Others still just give the noun without stating inclusion or exclusion, but if you follow a literal logic, it should be an exclusion to the preceding statement. |
70,018 | I came across this sentence in the American Heritage Dictionary, but still do not understand it.
>
> This proposal is the best so far, **modulo the fact** that parts of it
> need modification.
>
>
>
The definition of *[modulo](http://www.thefreedictionary.com/modulo)* provided is *correcting or adjusting for something, as by leaving something out of account*.
Please elaborate on the meaning of *modulo the fact*. Does it mean the same as the following?
>
> This proposal is the best so far, but note what parts of it still
> need to be modified.
>
>
> | 2012/06/05 | [
"https://english.stackexchange.com/questions/70018",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/21952/"
] | I only see this usage from academics with a background that includes England. It is used to mean the opposite of *except* - some part of the set is included, not excluded, and you're saying that you're including it even though some people might not. It isn't a substitute for *but* or *except*, because those would be about excluding something from the set. It might be closer to *even though*.
In math, *modulo* is the remainder after dividing, so `5 mod 2` is 1. In words, it's something like *even after accounting for*. However, it has been heard by generations of people who aren't sure what it means, don't want to ask, and feel that smart people use it. Those people tend to use it as *except* or *but*, meaning that you probably can't be entirely sure any more what someone means when they use it. | If anyone is offering a definition that is a synonym for “notwithstanding”, then just use “notwithstanding”.
“Modulo” has a distinct usefulness that is to do with remainders (i.e. what is left over after you put items into groups), and contrary to many other opinions here, it is useful beyond mathematics.
>
> Politicians vote along party lines, modulo party membership
>
>
>
means
>
> Politicians vote along party lines, split into parties and as groups, with a few leftover (sometimes) because they don’t fit into parties
>
>
>
and not simply
>
> Politicians vote along party lines, unless they don’t have a party
>
>
>
along with the missing implication that most have them always, and sometimes all have them.
This is the correct usage taken beyond mathematics without the grandstanding. If the grandstanding form is synonymous with “notwithstanding”, maybe the correct form is synonymous with “after sorting into every”; but the next phrase must be a noun, not a circumstance or condition.
It is at this point we have to face up to the unsatisfactory use of the word “notwithstanding”, where some people follow that word with a precondition and others do the opposite, and give an exclusion. Others still just give the noun without stating inclusion or exclusion, but if you follow a literal logic, it should be an exclusion to the preceding statement. |
70,018 | I came across this sentence in the American Heritage Dictionary, but still do not understand it.
>
> This proposal is the best so far, **modulo the fact** that parts of it
> need modification.
>
>
>
The definition of *[modulo](http://www.thefreedictionary.com/modulo)* provided is *correcting or adjusting for something, as by leaving something out of account*.
Please elaborate on the meaning of *modulo the fact*. Does it mean the same as the following?
>
> This proposal is the best so far, but note what parts of it still
> need to be modified.
>
>
> | 2012/06/05 | [
"https://english.stackexchange.com/questions/70018",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/21952/"
] | The example that The American Heritage Dictionary uses for modulo, "This proposal is the best so far, modulo the fact that parts of it need modification," is a confusing and poor usage of the phrase. Unless a reader has a strong analytical mind and understanding of the phrase, they would likely interpret the sentence to mean that the proposal is the best *as-is*, and it would be even better if it was modified.
On the other hand if the sentence is correctly interpreted, **the sentence is meaningless**, for a couple reasons. First, a “fact” does not change the quality of the proposal, and so has no impact upon its ranking (i.e. whether it is “best” or not). **In general the phrase “modulo the fact” is a poor usage of the word modulo.** Second, even if the reader imagines modifying the proposal to be the best, the sentence is still meaningless, because every proposal could be modified to be the best.
Mitch’s example is far better:
>
> "The rocket design was flawless, modulo the toxic waste produced by its fuel."
>
>
>
The best replacement word that I can come up with is “**discounting**” or "**not accounting**", so
>
> "This proposal is the best so far, discounting that parts of it need modification."
>
>
>
(Still meaningless for the second reason stated above.)
Or, in Mitch’s example,
>
> "The rocket design was flawless, discounting the toxic waste produced by its fuel."
>
>
> | If anyone is offering a definition that is a synonym for “notwithstanding”, then just use “notwithstanding”.
“Modulo” has a distinct usefulness that is to do with remainders (i.e. what is left over after you put items into groups), and contrary to many other opinions here, it is useful beyond mathematics.
>
> Politicians vote along party lines, modulo party membership
>
>
>
means
>
> Politicians vote along party lines, split into parties and as groups, with a few leftover (sometimes) because they don’t fit into parties
>
>
>
and not simply
>
> Politicians vote along party lines, unless they don’t have a party
>
>
>
along with the missing implication that most have them always, and sometimes all have them.
This is the correct usage taken beyond mathematics without the grandstanding. If the grandstanding form is synonymous with “notwithstanding”, maybe the correct form is synonymous with “after sorting into every”; but the next phrase must be a noun, not a circumstance or condition.
It is at this point we have to face up to the unsatisfactory use of the word “notwithstanding”, where some people follow that word with a precondition and others do the opposite, and give an exclusion. Others still just give the noun without stating inclusion or exclusion, but if you follow a literal logic, it should be an exclusion to the preceding statement. |
70,018 | I came across this sentence in the American Heritage Dictionary, but still do not understand it.
>
> This proposal is the best so far, **modulo the fact** that parts of it
> need modification.
>
>
>
The definition of *[modulo](http://www.thefreedictionary.com/modulo)* provided is *correcting or adjusting for something, as by leaving something out of account*.
Please elaborate on the meaning of *modulo the fact*. Does it mean the same as the following?
>
> This proposal is the best so far, but note what parts of it still
> need to be modified.
>
>
> | 2012/06/05 | [
"https://english.stackexchange.com/questions/70018",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/21952/"
] | Here's a use of 'modulo' by a mathematician working on the four-colour theorem, quoted in Mark Walters, "[It Appears That Four Colors Suffice: A Historical Overview of the Four-Color Theorem](http://historyofmathematics.org/wp-content/uploads/2013/09/2004-Walters.pdf)" (2004):
>
> Shortly after testing the final configuration for reducibility,
> Appel celebrated the success by etching the statement ‘Modulo careful checking, it appears that four colors suffice’ onto the department’s blackboard.
>
>
>
This follows the general sense of 'A mod B' being 'A seems generally true but for B', where B is not necessarily an exception but something to bear in mind. | Like *zero* or [*orthogonal*](http://volokh.com/2010/01/11/orthogonal-ooh/), the word *modulo* has a precise technical meaning in mathematics. This makes it useful in some limited non-mathematical situations. But most people just say it to sound smart, and miss the point.
### Good usage
In mathematics, "A = B modulo C" means roughly that A and B are the same thing, except for differences of type C.
* Butane and isobutane are the same, *modulo* their shape.
* If you remember your dad's birthday but don't remember which year, then you know the day of his birth *modulo* one year.
### Bad usage
Sometimes people use "modulo" as a synonym for "except for." Example: ["All mammals, modulo the monotremes, give birth to live young."](https://en.wiktionary.org/wiki/modulo#Preposition) This example is arguably correct, but we have far better words for this (like except).
Sometimes people use the word modulo to mean "almost." This is almost as bad as saying "literally" when you mean "figuratively." The word "modulo" has nothing to do with closeness. In math, -1 and 999999999999 are equal modulo 1000000000000.
### Your example
>
> This proposal is the best so far, modulo the fact that parts of it need modification.
>
>
>
If the author knows what "modulo" really means, this should mean
>
> This proposal would be the best so far if parts of it were modified.
>
>
>
The dictionary author could have found a better example. |
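The equality claimed in the "Bad usage" section above (-1 and 999999999999 modulo 1000000000000) can be checked directly; Python's `%` returns a non-negative remainder for a positive modulus, which makes the check one line:

```python
M = 1_000_000_000_000

# Equal modulo M means "same remainder on division by M" -- closeness is
# irrelevant, exactly as the answer argues.
assert (-1) % M == 999_999_999_999 % M   # equal mod M despite being far apart
assert 1 % M != 2 % M                    # nearby numbers need not be equal mod M
```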