Columns: qid (int64, 1 to 74.7M) · question (string, 12 to 33.8k chars) · date (string, 10 chars) · metadata (list) · response_j (string, 0 to 115k chars) · response_k (string, 2 to 98.3k chars)
10,938
Can anyone suggest some good books on cryptography? I have just started studying cryptography, but I know elementary number theory, abstract algebra, and algorithms. Please also mention the difficulty level of the book.
2013/03/31
[ "https://cs.stackexchange.com/questions/10938", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/778/" ]
"Introduction to Modern Cryptography", Jonathan Katz and Yehuda Lindell. This is a great book for learning about provable security. And for actual crypto protocols and algorithms, there's always the classic: "Handbook of Applied Crypto" by Paul van Oorschot, A. J. Menezes, and Scott Vanstone. This is more a reference book than a textbook. And its available free of cost on one of the authors web-pages.
I would have to say that **Applied Cryptography** by Bruce Schneier is the best I have come across. It's a good introduction, but at the same time has a good level of detail.
10,938
Can anyone suggest some good books on cryptography? I have just started studying cryptography, but I know elementary number theory, abstract algebra, and algorithms. Please also mention the difficulty level of the book.
2013/03/31
[ "https://cs.stackexchange.com/questions/10938", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/778/" ]
I would have to say that **Applied Cryptography** by Bruce Schneier is the best I have come across. It's a good introduction, but at the same time has a good level of detail.
IMHO a very good book for you may be: J. Hoffstein, J. Pipher, J. H. Silverman, An Introduction to Mathematical Cryptography, ISBN 978-1-4419-2674-6. It was published in 2010 in the Springer series Undergraduate Texts in Mathematics.
10,938
Can anyone suggest some good books on cryptography? I have just started studying cryptography, but I know elementary number theory, abstract algebra, and algorithms. Please also mention the difficulty level of the book.
2013/03/31
[ "https://cs.stackexchange.com/questions/10938", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/778/" ]
It depends on what your purpose is. I personally read [A Very Short Introduction to Cryptography](http://rads.stackoverflow.com/amzn/click/0192803158), which was a perfect guide for someone knowing nothing about the subject. If your purpose is to begin applying readily available algorithms, this is a good book. If you want to dig in deep and look at the inner workings of algorithms, you will need something more thorough.
I would have to say that **Applied Cryptography** by Bruce Schneier is the best I have come across. It's a good introduction, but at the same time has a good level of detail.
10,938
Can anyone suggest some good books on cryptography? I have just started studying cryptography, but I know elementary number theory, abstract algebra, and algorithms. Please also mention the difficulty level of the book.
2013/03/31
[ "https://cs.stackexchange.com/questions/10938", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/778/" ]
I'd recommend "[Understanding Cryptography](http://www.crypto-textbook.com/)", Christof Paar & Jan Pelzl, if you are self teaching some cryptography. Why? Since it can be really hard to just follow a textbook by yourself, professor Christof Paar uploaded his lectures on youtube ([Introduction to Cryptography by Christof Paar](https://www.youtube.com/channel/UC1usFRN4LCMcfIV7UjHNuQg/videos)) so you can have a more complete experience.
I would have to say that **Applied Cryptography** by Bruce Schneier is the best I have come across. It's a good introduction, but at the same time has a good level of detail.
10,938
Can anyone suggest some good books on cryptography? I have just started studying cryptography, but I know elementary number theory, abstract algebra, and algorithms. Please also mention the difficulty level of the book.
2013/03/31
[ "https://cs.stackexchange.com/questions/10938", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/778/" ]
"Introduction to Modern Cryptography", Jonathan Katz and Yehuda Lindell. This is a great book for learning about provable security. And for actual crypto protocols and algorithms, there's always the classic: "Handbook of Applied Crypto" by Paul van Oorschot, A. J. Menezes, and Scott Vanstone. This is more a reference book than a textbook. And its available free of cost on one of the authors web-pages.
IMHO a very good book for you may be: J. Hoffstein, J. Pipher, J. H. Silverman, An Introduction to Mathematical Cryptography, ISBN 978-1-4419-2674-6. It was published in 2010 in the Springer series Undergraduate Texts in Mathematics.
10,938
Can anyone suggest some good books on cryptography? I have just started studying cryptography, but I know elementary number theory, abstract algebra, and algorithms. Please also mention the difficulty level of the book.
2013/03/31
[ "https://cs.stackexchange.com/questions/10938", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/778/" ]
"Introduction to Modern Cryptography", Jonathan Katz and Yehuda Lindell. This is a great book for learning about provable security. And for actual crypto protocols and algorithms, there's always the classic: "Handbook of Applied Crypto" by Paul van Oorschot, A. J. Menezes, and Scott Vanstone. This is more a reference book than a textbook. And its available free of cost on one of the authors web-pages.
It depends on what your purpose is. I personally read [A Very Short Introduction to Cryptography](http://rads.stackoverflow.com/amzn/click/0192803158), which was a perfect guide for someone knowing nothing about the subject. If your purpose is to begin applying readily available algorithms, this is a good book. If you want to dig in deep and look at the inner workings of algorithms, you will need something more thorough.
10,938
Can anyone suggest some good books on cryptography? I have just started studying cryptography, but I know elementary number theory, abstract algebra, and algorithms. Please also mention the difficulty level of the book.
2013/03/31
[ "https://cs.stackexchange.com/questions/10938", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/778/" ]
"Introduction to Modern Cryptography", Jonathan Katz and Yehuda Lindell. This is a great book for learning about provable security. And for actual crypto protocols and algorithms, there's always the classic: "Handbook of Applied Crypto" by Paul van Oorschot, A. J. Menezes, and Scott Vanstone. This is more a reference book than a textbook. And its available free of cost on one of the authors web-pages.
I'd recommend "[Understanding Cryptography](http://www.crypto-textbook.com/)", Christof Paar & Jan Pelzl, if you are self teaching some cryptography. Why? Since it can be really hard to just follow a textbook by yourself, professor Christof Paar uploaded his lectures on youtube ([Introduction to Cryptography by Christof Paar](https://www.youtube.com/channel/UC1usFRN4LCMcfIV7UjHNuQg/videos)) so you can have a more complete experience.
10,938
Can anyone suggest some good books on cryptography? I have just started studying cryptography, but I know elementary number theory, abstract algebra, and algorithms. Please also mention the difficulty level of the book.
2013/03/31
[ "https://cs.stackexchange.com/questions/10938", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/778/" ]
It depends on what your purpose is. I personally read [A Very Short Introduction to Cryptography](http://rads.stackoverflow.com/amzn/click/0192803158), which was a perfect guide for someone knowing nothing about the subject. If your purpose is to begin applying readily available algorithms, this is a good book. If you want to dig in deep and look at the inner workings of algorithms, you will need something more thorough.
IMHO a very good book for you may be: J. Hoffstein, J. Pipher, J. H. Silverman, An Introduction to Mathematical Cryptography, ISBN 978-1-4419-2674-6. It was published in 2010 in the Springer series Undergraduate Texts in Mathematics.
10,938
Can anyone suggest some good books on cryptography? I have just started studying cryptography, but I know elementary number theory, abstract algebra, and algorithms. Please also mention the difficulty level of the book.
2013/03/31
[ "https://cs.stackexchange.com/questions/10938", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/778/" ]
I'd recommend "[Understanding Cryptography](http://www.crypto-textbook.com/)", Christof Paar & Jan Pelzl, if you are self teaching some cryptography. Why? Since it can be really hard to just follow a textbook by yourself, professor Christof Paar uploaded his lectures on youtube ([Introduction to Cryptography by Christof Paar](https://www.youtube.com/channel/UC1usFRN4LCMcfIV7UjHNuQg/videos)) so you can have a more complete experience.
IMHO a very good book for you may be: J. Hoffstein, J. Pipher, J. H. Silverman, An Introduction to Mathematical Cryptography, ISBN 978-1-4419-2674-6. It was published in 2010 in the Springer series Undergraduate Texts in Mathematics.
1,452,291
When I open the Terminator console in Ubuntu 22.04 LTS, the following shows: [![A gear instead of the actual terminator icon](https://i.stack.imgur.com/6v7Xn.png)](https://i.stack.imgur.com/6v7Xn.png) This wasn't an issue in Ubuntu 20.04, and I've only seen it happen with the Terminator console. I've purged it and reinstalled, but the issue remains. What else could I try? Thanks in advance.
2023/01/01
[ "https://askubuntu.com/questions/1452291", "https://askubuntu.com", "https://askubuntu.com/users/-1/" ]
*"I was working in Ubuntu 16.04 Xenial."* Ubuntu 16.04 has been out of support since April 2021; consider installing a later LTS version (e.g. 22.04, which will be supported until 2027). *"Now, I see a blank purple screen. I can neither log in to the GUI nor the terminal. How can I restore the lost system files without damaging working data?"* Flash Ubuntu 22.04 onto a USB stick and boot from it. You can use the live session to copy your data to an external hard drive (or to a partition where Ubuntu is not installed). *"Note: The machine is shared. So, there are other user accounts on the device. Therefore, a fresh install isn't an option."* After removing the default Python version, none of the users will be able to access the GUI. Reinstalling is the least painful option.
Launch Ubuntu as usual and let it reach the login screen completely. When you arrive, do not sign in. Instead, press Ctrl + Alt + F3. Ubuntu will switch from the graphical login screen to a plain black-and-white terminal. Type your username when prompted, then your password. You'll land on a familiar-looking terminal screen where you can navigate just as you would in a graphical terminal window. Now check the file in your home folder; you are placed there right after logging in. Because it is a hidden "dot file", you have to list it with the proper flags, searching with ls and grep. The listing shows the file's permissions first, followed by its owner's username and group. If "root" is listed as the owner, you've identified the root of the issue.
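If you prefer to check programmatically rather than with ls and grep, here is a minimal sketch; the file name ".Xauthority" is a hypothetical stand-in for whichever hidden dot file the walkthrough above means:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DotFileOwner {
    public static void main(String[] args) throws IOException {
        // ".Xauthority" is a hypothetical example; substitute the hidden
        // dot file referred to in the answer above.
        Path dotFile = Paths.get(System.getProperty("user.home"), ".Xauthority");
        // Prints the owning user; "root" here would confirm the problem.
        System.out.println(Files.getOwner(dotFile));
    }
}
```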
144,299
It has come to my attention that, at least in 5e, [nagas are effectively immortal](https://rpg.stackexchange.com/questions/58848/how-does-naga-rejuvenation-work). When slain, they simply return to full HP in a matter of days. However, I have not managed to find any confirmation that the same is true in 3.5. Hence my question: are nagas as immortal in 3.5 as they are in 5e? And if so, how could they be permanently put down?
2019/04/01
[ "https://rpg.stackexchange.com/questions/144299", "https://rpg.stackexchange.com", "https://rpg.stackexchange.com/users/24453/" ]
### Nagas are not immortal in D&D 3.5. The official D&D 3.5 [Monster Index](http://archive.wizards.com/default.asp?x=dnd/lists/monsters) lists a great number of different nagas, but none of them has the ability to return to life when slain. At best, they are exceptionally long-lived or immortal until slain, and some are difficult to kill; but once slain, none has the ability to return to life that their 5th edition counterparts possess. You can permanently kill a D&D 3.5 naga the same way you kill any creature.
As you can see [in the 3.5 SRD](http://www.d20srd.org/srd/monsters/naga.htm), none of the nagas have any kind of ability to *come back from the dead*; that mechanic is exclusive to 5th edition. As for how to kill them in 5e, that is already spelled out in the ability itself: *"**Rejuvenation**: If it dies, the naga returns to life in 1d6 days and regains all its Hit Points. Only a **wish spell** can prevent this trait from functioning."*
222,343
*"They came to his help."* I found this sentence while I was studying English in my grammar book, and the book said it meant: *"They came to help him."* But I don't understand what it means when I read this sentence without an explanation. When I read the sentence **I wanted his help**, I can understand it. I want to know why it means **They came to help him**!
2019/08/27
[ "https://ell.stackexchange.com/questions/222343", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/99625/" ]
"They came to his help" does not sound idiomatic to me at all. If it ever has been used, it is not used in modern speech. I have never personally heard it used, nor would I use myself. We would more likely say: * "They came to help him" * "They came to his aid" "They came to his *aid*" *is* idiomatic if perhaps a little formal. [This ngram](https://books.google.com/ngrams/graph?content=came%20to%20his%20aid%2C%20came%20to%20his%20help%2C%20came%20to%20my%20aid%2C%20came%20to%20my%20help%2Ccame%20to%20help%20me%2Ccame%20to%20help%20him&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Ccame%20to%20his%20aid%3B%2Cc0%3B.t1%3B%2Ccame%20to%20his%20help%3B%2Cc0%3B.t1%3B%2Ccame%20to%20my%20aid%3B%2Cc0%3B.t1%3B%2Ccame%20to%20my%20help%3B%2Cc0%3B.t1%3B%2Ccame%20to%20help%20me%3B%2Cc0%3B.t1%3B%2Ccame%20to%20help%20him%3B%2Cc0#t1%3B%2Ccame%20to%20his%20aid%3B%2Cc0%3B.t1%3B%2Ccame%20to%20his%20help%3B%2Cc0%3B.t1%3B%2Ccame%20to%20my%20aid%3B%2Cc0%3B.t1%3B%2Ccame%20to%20my%20help%3B%2Cc1%3B.t1%3B%2Ccame%20to%20help%20me%3B%2Cc0%3B.t1%3B%2Ccame%20to%20help%20him%3B%2Cc0) is interesting - it compares: * came to his aid * came to his help * came to my aid * came to my help * came to help me * came to help him The results seem to indicate that "to my/his aid" has always been the more popular phrase, and even though it has declined in usage, is still more widely used than any of the others.
"Come to help", "come to one's aid" and "come to one's rescue" mean more or less the same thing, and they are idioms. For example: 1. *If the Government had not come to my help, I would have died.* 2. *If he had not come to my rescue, I would have been in deep trouble.* 3. *If my friend had not come to my aid, I would have been in a financial crisis.* "Come to one's help" means to help somebody when they are in deep trouble. <https://ludwig.guru/s/come+to+my+help>
56,697,475
I'm new to MongoDB. I'm trying to create a replica set, but when I change the configuration in mongod.conf, my MongoDB service is no longer working or running. Any idea how to solve this? Thank you.
2019/06/21
[ "https://Stackoverflow.com/questions/56697475", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8051410/" ]
Could you please share the error you are facing, along with the MongoDB configuration file? Only after seeing that can I help you with your issue. You can also view the configuration file options here: <https://docs.mongodb.com/manual/reference/configuration-options/> Thanks
Before making any changes to the mongod.conf file, stop the service; after making the changes, start it again.
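For reference, the replica-set change itself only needs a small stanza in mongod.conf. The file is YAML, so a stray tab or wrong indentation will stop mongod from starting, which matches the symptom described. A minimal sketch (the set name "rs0" is just an example):

```yaml
# /etc/mongod.conf (excerpt) -- YAML, indent with spaces, never tabs
replication:
  replSetName: "rs0"   # example name; afterwards run rs.initiate() in the mongo shell
```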
3,107,872
I have a JTable that displays events. I want a small box showing the event details to pop up when the mouse is over a table cell, something like a tooltip. How can I do that? Is there any component in Swing that does this?
2010/06/24
[ "https://Stackoverflow.com/questions/3107872", "https://Stackoverflow.com", "https://Stackoverflow.com/users/236501/" ]
Have a read about [How to Use Tables: Specifying Tool Tips for Cells](http://java.sun.com/docs/books/tutorial/uiswing/components/table.html#celltooltip).
Use JToolTip and HTML. More info here: <http://java.sun.com/docs/books/tutorial/uiswing/components/html.html>
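To make the tutorial advice above concrete: the standard way to get a per-cell "small box" is to override `getToolTipText(MouseEvent)` on the `JTable` and build the tooltip (HTML is allowed) from the hovered cell. A minimal sketch; the table data is made up for illustration:

```java
import java.awt.event.MouseEvent;
import javax.swing.JFrame;
import javax.swing.JScrollPane;
import javax.swing.JTable;
import javax.swing.SwingUtilities;

public class CellTooltipDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            Object[][] rows = { { "Meeting", "10:00" }, { "Lunch", "12:30" } };
            Object[] cols = { "Event", "Time" };
            // JTable registers itself with the ToolTipManager, so overriding
            // getToolTipText(MouseEvent) is enough for per-cell tooltips.
            JTable table = new JTable(rows, cols) {
                @Override
                public String getToolTipText(MouseEvent e) {
                    int row = rowAtPoint(e.getPoint());
                    int col = columnAtPoint(e.getPoint());
                    if (row < 0 || col < 0) return null;
                    // HTML allows a multi-line "details box" in the tooltip.
                    return "<html><b>Details</b><br>" + getValueAt(row, col) + "</html>";
                }
            };
            JFrame f = new JFrame("Cell tooltips");
            f.add(new JScrollPane(table));
            f.pack();
            f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            f.setVisible(true);
        });
    }
}
```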
16,470
Has FIDE ever given reasons why it rates women in a different category and organises their tournaments separately?
2017/01/26
[ "https://chess.stackexchange.com/questions/16470", "https://chess.stackexchange.com", "https://chess.stackexchange.com/users/12108/" ]
First, let's clear up a misunderstanding. Women and men are *rated* on the same scale, with the same formula. A woman with a FIDE rating of 2400 can be assumed to be equally matched with a man with the same 2400 rating. FIDE merely defines four titles that are only available to women. In general, women can and do participate in tournaments against men. However, there are a handful of tournaments open only to women, including a Women's World Championship. The reason for the tournaments and titles is obvious: some women want to participate in such tournaments, and some women appreciate the titles. Chess is filled with reserved tournaments and not-too-prestigious titles of other kinds; this isn't a "woman thing". Every club, state, or national championship is a tournament which sacrifices strength and prestige to encourage the participation of a specific group. The USCF has titles down to "4th Category", a title which requires hardly more than a heartbeat and a willingness to play in at least five tournaments, obviously to encourage tournament play among those still learning the game. FIDE correctly calculates that having the WGM, WIM, WFM, and WCM titles provides motivation for *some* women and encourages them to improve their game and participate in tournaments. Given that FIDE exists to promote chess among all populations, including women, providing these titles is a no-brainer. Damning women who participate in such tournaments or accept such titles is as ridiculous as complaining that the Canadian National Championship is tainted by the lack of Americans in the field, or considering the USCF Candidate Master title a farce because it is easier to obtain than the FIDE Candidate Master. There is no shame or fraud in competing for these titles, as the requirements are well known and observers can easily understand exactly how much prestige should be attached to them. Some of the strongest women players may disdain the women-only titles, but there are many male GMs who disdained their IM or FM norms. You may choose to learn what you want from their examples.
Chess is not a sport that relies on physical skill, notwithstanding the rigors of tournament and match play. Men and women should not be separated on the FIDE title scales. Separate tournaments? Maybe; that is the organizers' prerogative. A World Championship? Yes. Titles? No. This is false prestige. After watching the Women's Championship from (Spain?) last month and the horrid play by a supposed WIM making horrendous errors, I have no faith in any of them. You are either an IM or you are not. The Polgars got it right when they refused to accept watered-down female FIDE titles. There should be no difference, especially in this day and age where the average woman will hardly blink at telling you how smart she is and wants equal pay for equal play. I say, prove it. Chess should award titles equally across both genders on the same scale and not discriminate any longer.
10,862
**Possible Duplicate:** [How do I update the OS in my device?](https://android.stackexchange.com/questions/13510/how-do-i-update-the-os-in-my-device) I'm not an expert in this and need help. I have serious issues with the Froyo build and want to upgrade my brand-new phone to Gingerbread. How can I do this? It is not available as a standard upgrade through the manufacturer.
2011/06/24
[ "https://android.stackexchange.com/questions/10862", "https://android.stackexchange.com", "https://android.stackexchange.com/users/6143/" ]
1. [Root it](https://android.stackexchange.com/questions/1184/how-do-i-root-my-phone). 2. Install Clockworkmod Recovery. 1. Install [ROM Manager](https://market.android.com/details?id=com.koushikdutta.rommanager). 2. Run it and select your device. 3. Choose the "Flash Clockwork recovery" (or similar) option. 3. Install [a custom ROM](http://forum.xda-developers.com/forumdisplay.php?f=909). Note that doing this voids your warranty and may brick your device. I highly recommend reading through the XDA forum (the last link above) so you get a good grasp of everything involved here. In particular, a lot of the ROMs are for "GEN2 devices". I don't know if that's a hardware revision or a firmware version, or if there are any relevant differences between the Dell XCD35 and the ZTE Blade that you would need to account for before installing a Blade ROM. **Edit:** See <http://wiki.modaco.com/index.php/ZTE_Blade#What_different_Blade_versions_are_out_there.3F> for some useful info; the rest of that wiki is useful too.
A full guide to rooting & upgrading to a 2.3 ROM can be found on [android.modaco.com](http://android.modaco.com/topic/335781-22apr-guide-how-to-install-a-custom-rom-on-the-zte-u-v880-or-gen2-device/).
11,851
I am an EU national and have held indefinite leave to remain (permanent resident status, "PR") since October 2016. I would like to apply for UK citizenship. The AN citizenship form asks for job/employer. My two questions are the following: 1. Can a PhD student in the UK apply (i.e. the person is not working)? 2. Can a person who is unemployed (and not studying) at the moment of application apply?
2017/08/17
[ "https://expatriates.stackexchange.com/questions/11851", "https://expatriates.stackexchange.com", "https://expatriates.stackexchange.com/users/13356/" ]
According to the UK nationality guidance on naturalisation, you would meet the basic eligibility through residency at 5 years. At that point, employment is not a factor in whether you qualify; financial soundness is, per the [good character requirement](https://www.gov.uk/government/publications/good-character-nationality-policy-guidance). From the [**Residence requirements** guidance](https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/632935/naturalisation-as-a-British-citizen-by-discretion-v1.0.pdf): "This section tells you how to consider if an applicant meets the residence requirement for naturalisation. In order to qualify for naturalisation as a British citizen, an individual is required to demonstrate close links with, and a commitment to, the UK. As part of this, the expectation is that applicants should meet the residence requirements. Whilst there is some discretion to waive some of these requirements, this cannot be done to the extent that the requirements are ignored. **Residence requirements: section 6(1)**: The residence requirements for someone applying under [section 6(1) of the British Nationality Act 1981](http://www.legislation.gov.uk/ukpga/1981/61/section/6) are that the applicant was: * in the UK at the beginning of the period of 5 years ending with the date of the application * not absent from the UK for more than either 450 days in that 5-year period, or 90 days in the period of 12 months ending with the date of application * not, on the date of application, subject under the immigration laws to any restriction on the period of stay in the UK * not, at any other time in the 12-month period ending with the date of application, subject under the immigration laws to any restriction on the period of stay in the UK * not, at any time in the period of 5 years ending with the date of application, in the UK in breach of the immigration laws"
Once you have indefinite leave to remain, you keep this status until you either commit some serious crime or leave the UK for two years. There is no requirement to be employed, etc. I haven't seen any requirement to be employed to gain citizenship either. Make sure you check what the status of your original citizenship will be. For example, Dutch nationals will lose their Dutch nationality. The German embassy says "we don't know; we'll tell you when we know".
384,589
I own Rocket League in both disc and download formats and I want to sell the disc, but I want to make sure it works before I sell it. When I put the disc in, it says "Do you want to switch to the disc version of this game? The downloaded version will be deleted." I just want to know: would that completely get rid of my downloaded version, or can I get it back?
2021/04/12
[ "https://gaming.stackexchange.com/questions/384589", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/270305/" ]
Yes, if you choose to replace the digital version with the disc version, **it will be deleted from the console**. However, you will still be able to re-download the digital version (although you will need to delete the disc version).
If you put the disc in, it screws up the download version. I did that with Terraria; I had to reinstall the download since the disc overwrote it.
3,729
I am curious why Hegel became more important than Schelling. First of all, how do Schelling's ideas differ from Hegel's? I read that there are some supernatural elements in Schelling's, but I do not know the specifics. Next, are Schelling's later ideas basically the reason for his under-appreciated status?
2012/09/22
[ "https://philosophy.stackexchange.com/questions/3729", "https://philosophy.stackexchange.com", "https://philosophy.stackexchange.com/users/2418/" ]
In short, Hegel benefited from having his ideas more clearly worked out and regularly published, which led to followings that became influential within continental philosophy. Hegel's authority has been a hallmark even for the enemies of his thought, and so it has become acceptable to ignore Schelling, but not Hegel. But as our historical reflective distance expands, it is more evident that we cannot ignore either, especially the former. I take you to be inquiring into the dominant grand narratives of Western philosophy, which take Hegel as a leading figure while Schelling is just lumped in with Fichte in the triad of German Idealism. Professional and popular-culture philosophers alike have perpetuated this account. The beef between Schelling and Hegel is well documented, and given Hegel's rising stature in taking over Berlin, many read Schelling's later works as personal attacks born of jealousy. Schelling's last public lectures on mythology and revelation are deemed largely theosophy or "spiritual channeling", offering only religious or theological value. The conventional wisdom is impoverished and ignores Schelling's own contributions in attempting to break out of Hegel's spell of logocentrism. There are several important differences between Schelling and Hegel that have been underappreciated. Given their connections (being roommates in 1795, editing a journal together, etc.), a more serious problem persists involving the conflation of Schelling and Hegel. Zizek's reliance on Hegel's logic as the basis for his parallax ontology rests on a smuggled-in reading of Schelling's later philosophy and is a typical example of such misconstruing. Presupposed in his interpretation of Schelling's second draft of the Weltalter is the consistency of Schelling's attempts to overcome German Idealism with the mature Hegel's logic as the completion of the Idea for the transformation of positing substance as subject. You will not find this philosophical approach fruitful regardless of Zizek's aims at perversity and sleight of hand. Reading Schelling's philosophical journey as providing the ground for a materialist ontology or Marxist-oriented social critique is irresponsible rendering, in my estimation. This also involves the trouble with associating Schelling's thought with the psychoanalysis of Freud or Lacan. Clearly, Schelling is credited as the first to develop a notion of unconsciousness in his philosophy, but he dealt with it on a cosmically experiential level and not as some faculty human beings carry around in their heads. I applaud efforts at showing Schelling's relevance to our world despite taking some of these approaches to be misleading. A Schellingian renaissance of sorts is taking place while much of Hegel's authority has been challenged or eclipsed. Schelling moves us in the direction of contemporary philosophical sentiments, which are pluralistic and, unlike Hegel, opposed to rooting history in Absolute Spirit with its apex in modern Christian Europe, favoring instead a meta-drama of the fusion of "world-consciousness." I see his thought as a springboard for American pragmatism and process philosophy, and this is evident in the volume edited by Catherine Keller and Anne Daniell, entitled Process and Difference: Between Cosmological and Poststructuralist Postmodernisms (2002). As Jerry Day notes in his book Voegelin, Schelling, and the Philosophy of Historical Existence (Columbia: University of Missouri Press, 2003, p. 143), "Schelling does not take the self-consciousness attained at the end of his reflections and project it back into the beginning of his discussion in order to found a system with self-consciousness as its ground. This step was, in Schelling's estimation, the essential mistake of Fichte and Hegel." Schelling's superiority to Hegel comes out in several works, but probably most decisively in his philosophy of art, completed well before Hegel's (which is undeniably a masterpiece, and superior in style and historical surveying to Schelling's; see Hegel's Lectures on Aesthetics, 2 vols.). But that does not mean we can overlook the fact that Schelling does not provide some absolute perspective by which we can rank the philosopher against the artist, or the prophet contra the genius, for all time. Rather, as David Simpson notes in the foreword to Scott's translation of Schelling's philosophy of art lectures, "Schelling's idea of the history of art is not founded upon a naïve progressivism of the sort that underlies Hegel's alternative model (sophisticated as it is on its surface). It is in this sense closer to a secular, twentieth-century notion of history, according to which things are simply different at different times, without offering evidence of some totalizing pattern evolving with the passing of time" (Schelling, Philosophy of Art, trans. Douglas W. Scott, Minneapolis: University of Minnesota, 1989, p. xviii). Naturally, the concern arises why it has taken so long for postmodernists to recognize the fruits of Schelling's projects, or to question the superficial appropriations of such a paramount philosopher. You should be commended for asking such an important yet delicate question, and I have a lot more to add on the subject in a forthcoming article on the power of negation.
It is very unfortunate that Hegel achieved superstardom and Schelling ended up half forgotten. If you compare them on purely cognitive capacities, Schelling was a much greater genius; he was actually one of the smartest men that ever lived, on the same intellectual level as Plato, Goethe or Wittgenstein. Kant started a new revolutionary way of doing philosophy that we now call German Idealism. Fichte got rid of the thing-in-itself, but for him nature became a dead byproduct of the ego: nature is just the not-ego that limits the ego. Schelling rightly disagreed with Fichte's wrong approach regarding objective reality and brought nature back to the central stage, but he appealed to the notion of intuition epistemologically. In other words, we can understand the deepest truths about nature, but not rationally; they can be mystically revealed to us in a religious-like state, but not scientifically or systematically. Schopenhauer agreed with this, and he also believed that the will (the Kantian thing-in-itself) cannot be known rationally. Hegel thought that Schelling was on the right track, but that the project was far from done. He thought Fichte's criticism of Kant and dialectical approach to philosophy were correct, and Schelling's dynamic unfolding of nature was also correct, but that Schelling's thesis that nature is fundamentally irrational (and thus not logically systematizable) was wrong. Hegel completed Schelling's project by creating a philosophical system. So to answer your question: Hegel is more important than Schelling because of where he stands in the history of German Idealism. Kant started it, Fichte and Schelling advanced it, and Hegel completed it. The guy who brings everything together always gets the most credit. Interestingly, if it weren't for Heidegger's interest in Schelling, the latter could have been totally forgotten by now.
95,362
Private key x = pubkey B; (x (+ or -) 1) = pubkeys A, C. I only know the public keys A and C. I am looking for a way to know whether the public key sequence is A B C or C B A. B = 0xa5e42a634fa42f4f22c756429a06fd104a12a0c3a61ae4b738b1716913c82732, A = 0xb616c736dd3d768e2e7b30b6e71caa3cd58359127af62bc633716eb2e782cca4, C = 0xc357ffa5f463c7a2b0ba760ceef0c7a7ae77042cae93788ce39a797693e0cc64. I don't know the value of the private key x; the public key corresponding to x is B. Input: B. Output: x (+1) = A or C?
2020/04/20
[ "https://bitcoin.stackexchange.com/questions/95362", "https://bitcoin.stackexchange.com", "https://bitcoin.stackexchange.com/users/105236/" ]
I think there are some very convincing theoretical arguments to be made, but there is also a very practical consideration: right now, a very large portion of BTC is held in the cold wallets of popular exchange platforms. Hardcore bitcoiners will shake their heads and declare *"Not your keys, not your coins!"*, but this apparently has not stopped traders and normal users alike from keeping their coins with a custodial third party. Looking at the ['bitcoin rich list'](https://bitcoin.stackexchange.com/questions/91242/bitcoin-rich-list-reliable-or-not/91243#91243), we can confirm the huge number of coins held by exchanges. This fact would put exchange operators in an undue position of power over the network: **by staking coins owned by their users, the exchange operators can obtain a large, centralized point of control over the network's consensus operations.** There are already risks present when allowing a third-party custodian to manage your coins/private keys, **but switching to a PoS system adds an entirely new and very serious type of risk!** This risk is existential in nature: if an exchange operator were to abuse their control of this huge number of coins somehow (by staking maliciously in some way or another), it would affect every user of the system, not just the users of that exchange. This problem is only amplified by the fact that you not only have to trust the exchange operators not to act maliciously, you also have to trust them to secure their system against theft and intrusion. It is bad enough when hackers steal funds; giving hackers the ability to attack the network consensus as well is, in my opinion, an untenable addition of risk. In case you aren't convinced: this risk is not theoretical. An attack like this recently happened on an altcoin network (Steem, mid-late Feb 2020). It appears that exchanges colluded to stake the coins they held custody of, in an effort to disrupt network consensus. A quick web search [brings up this article about it](https://cryptocoingrowth.com/2020/03/02/steem-goes-down-after-major-exchanges-hijack-consensus-mechanism/).
In addition to the other answers: Bitcoin investors also want a very conservative approach to updating Bitcoin. Messing with the core idea would increase the perceived risk for something that wants to be a store of value.
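A note on the key arithmetic in the question itself: adding 1 to a private key adds the generator point G to the public key, so B's neighbours are B+G and B-G. Below is a minimal sketch in Java, assuming the 32-byte values above are x-coordinates of secp256k1 points. With x-coordinates alone, the y-parity of B is unknown, so the sketch recovers only the unordered pair {x(B+G), x(B-G)} to compare against A and C; deciding which neighbour is x+1 would additionally need the 02/03 prefix of a compressed key.

```java
import java.math.BigInteger;

// Minimal sketch: computes x(B+G) and x(B-G) on secp256k1 so they can be
// compared against the A and C values from the question. Assumes the given
// values are x-coordinates; lift() throws if a value is not on the curve.
public class NeighbourCheck {
    static final BigInteger P = new BigInteger(
        "fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f", 16);
    static final BigInteger GX = new BigInteger(
        "79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798", 16);
    static final BigInteger GY = new BigInteger(
        "483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8", 16);

    // Affine point addition on y^2 = x^3 + 7 (mod P); assumes p1 != +/-p2.
    static BigInteger[] add(BigInteger[] p1, BigInteger[] p2) {
        BigInteger lam = p2[1].subtract(p1[1])
            .multiply(p2[0].subtract(p1[0]).modInverse(P)).mod(P);
        BigInteger x3 = lam.multiply(lam).subtract(p1[0]).subtract(p2[0]).mod(P);
        BigInteger y3 = lam.multiply(p1[0].subtract(x3)).subtract(p1[1]).mod(P);
        return new BigInteger[] { x3, y3 };
    }

    // Lift an x-coordinate to a curve point; since P = 3 (mod 4), the square
    // root is a single modPow. Throws if x^3 + 7 is not a quadratic residue.
    static BigInteger[] lift(BigInteger x) {
        BigInteger y2 = x.modPow(BigInteger.valueOf(3), P)
            .add(BigInteger.valueOf(7)).mod(P);
        BigInteger y = y2.modPow(P.add(BigInteger.ONE).shiftRight(2), P);
        if (!y.multiply(y).mod(P).equals(y2))
            throw new IllegalArgumentException("x is not on the curve");
        return new BigInteger[] { x, y };
    }

    public static void main(String[] args) {
        BigInteger bx = new BigInteger(
            "a5e42a634fa42f4f22c756429a06fd104a12a0c3a61ae4b738b1716913c82732", 16);
        BigInteger[] b = lift(bx);                  // one of the two lifts of B
        BigInteger[] g = { GX, GY };
        BigInteger[] negG = { GX, P.subtract(GY) }; // -G: same x, negated y
        System.out.println("x(B+G) = " + add(b, g)[0].toString(16));
        System.out.println("x(B-G) = " + add(b, negG)[0].toString(16));
        // Compare the two printed values against the A and C x-coordinates;
        // choosing the other lift of B simply swaps the two results.
    }
}
```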
95,362
Private key x = pubkey B; (x (+ or -) 1) = pubkeys A, C. I only know the public keys A and C. I am looking for a way to know whether the public key sequence is A B C or C B A. B = 0xa5e42a634fa42f4f22c756429a06fd104a12a0c3a61ae4b738b1716913c82732, A = 0xb616c736dd3d768e2e7b30b6e71caa3cd58359127af62bc633716eb2e782cca4, C = 0xc357ffa5f463c7a2b0ba760ceef0c7a7ae77042cae93788ce39a797693e0cc64. I don't know the value of the private key x; the public key corresponding to x is B. Input: B. Output: x (+1) = A or C?
2020/04/20
[ "https://bitcoin.stackexchange.com/questions/95362", "https://bitcoin.stackexchange.com", "https://bitcoin.stackexchange.com/users/105236/" ]
In addition to the other answers: Bitcoin investors also want a very conservative approach to updating Bitcoin. Messing with the core idea would increase the perceived risk for something that wants to be a store of value.
Proof of stake just doesn't work the same as mining from an economic-incentive standpoint. Miners make real-world investments, in advance, in equipment that becomes less valuable as difficulty increases. Miners have no guarantee that their investment will pay off; they merely have a probability of finding a good proof of work. Staking chains are vulnerable to new attacks, like "long range" attacks, "fake stake" attacks, etc. Staking is just as easy to pool and manipulate as mining. Proof-of-stake systems have some good solutions, but not all the problems are solved. Until they are, Bitcoin definitely won't transition. A more realistic transition would be to proof-of-burn, where a P2SH burn is locked to a height and gets you a decaying probability of being able to mine some future block.
95,362
Private key x = pubkey B; (x (+ or -) 1) = pubkeys A, C. I only know the public keys A and C. I am looking for a way to know whether the public key sequence is A B C or C B A. B = 0xa5e42a634fa42f4f22c756429a06fd104a12a0c3a61ae4b738b1716913c82732, A = 0xb616c736dd3d768e2e7b30b6e71caa3cd58359127af62bc633716eb2e782cca4, C = 0xc357ffa5f463c7a2b0ba760ceef0c7a7ae77042cae93788ce39a797693e0cc64. I don't know the value of the private key x; the public key corresponding to x is B. Input: B. Output: x (+1) = A or C?
2020/04/20
[ "https://bitcoin.stackexchange.com/questions/95362", "https://bitcoin.stackexchange.com", "https://bitcoin.stackexchange.com/users/105236/" ]
Bitcoin should switch to BFT (Byzantine Fault Tolerant) PoS, which is secure by definition. Most people who don't like PoS are thinking of "vanilla" or "chain-based" PoS protocols, which are certainly less secure than PoW. Ethereum 2.0 currently uses Casper in the Beacon Chain, and other coins also use BFT PoS protocols. I will try to summarize BFT PoS in a few lines: * Each block must be signed by 2/3 of validators, weighted by staking power. That provides absolute finality in just one block. * There is a penalty system that punishes evil behavior or validator inactivity, which makes it MUCH MORE secure than PoW. If you sign two blocks at the same height, that's evil behavior and your staking deposit is burned automatically. If 1/3 of validators are inactive and the network halts, the penalty system starts to slowly burn their deposits until the network can restart. * Even if you have 99.9% of staking power, you can't censor transactions for free, because as long as there is a single honest validator in the network, he will include the transaction you want to censor in his block. If the attacker doesn't want to sign the honest validator's block, his staking deposit receives a small percentage penalty, but a small percentage of millions of dollars is too much money. The honest validator receives a penalty as well, but his deposit is small, so it's not a problem. * If somehow the attacker has ALL staking power, starts to censor for free, and doesn't allow other validators to register their staking deposits to create blocks, the community as a whole could agree to burn all staking deposits and restart the network, which would make every other coin holder much richer than before. Regards
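The 2/3 finality rule in the answer above reduces to a stake-weighted quorum check. A minimal sketch (validator names and stake amounts are made up for illustration):

```java
import java.util.Map;
import java.util.Set;

public class QuorumCheck {
    // True when the signers control at least 2/3 of the total stake.
    static boolean hasQuorum(Map<String, Long> stakes, Set<String> signers) {
        long total = stakes.values().stream().mapToLong(v -> v).sum();
        long signed = stakes.entrySet().stream()
            .filter(e -> signers.contains(e.getKey()))
            .mapToLong(e -> e.getValue()).sum();
        return 3 * signed >= 2 * total;  // signed/total >= 2/3, kept in integers
    }

    public static void main(String[] args) {
        Map<String, Long> stakes = Map.of("v1", 50L, "v2", 30L, "v3", 20L);
        System.out.println(hasQuorum(stakes, Set.of("v1", "v2"))); // 80% -> true
        System.out.println(hasQuorum(stakes, Set.of("v3")));       // 20% -> false
    }
}
```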
Proof of stake just doesn't work the same as mining from an economic-incentive standpoint. Miners make real-world investments, in advance, in equipment that becomes less valuable as difficulty increases. Miners have no guarantee that their investment will pay off; they merely have a probability of finding a good proof of work. Staking chains are vulnerable to new attacks, like "long range" attacks, "fake stake" attacks, etc. Staking is just as easy to pool and manipulate as mining. Proof-of-stake systems have some good solutions, but not all the problems are solved. Until they are, Bitcoin definitely won't transition. A more realistic transition would be to proof-of-burn, where a P2SH burn is locked to a height and gets you a decaying probability of being able to mine some future block.
95,362
Private key x = pubkey B; (x (+ or -) 1) = pubkeys A, C. I only know the public keys A and C. I am looking for a way to know whether the public key sequence is A B C or C B A. B = 0xa5e42a634fa42f4f22c756429a06fd104a12a0c3a61ae4b738b1716913c82732, A = 0xb616c736dd3d768e2e7b30b6e71caa3cd58359127af62bc633716eb2e782cca4, C = 0xc357ffa5f463c7a2b0ba760ceef0c7a7ae77042cae93788ce39a797693e0cc64. I don't know the value of the private key x; the public key corresponding to x is B. Input: B. Output: x (+1) = A or C?
2020/04/20
[ "https://bitcoin.stackexchange.com/questions/95362", "https://bitcoin.stackexchange.com", "https://bitcoin.stackexchange.com/users/105236/" ]
Proof of Stake is basically a case of having your cake and eating it, too. PoW is a simple workaround to a coordination problem that was previously thought to be unsolvable. It sort of "cheats" by providing an economic solution to a distributed-systems challenge, introducing a real cost as a disincentive to unwanted behavior as well as using a reward system both to bootstrap itself and to incentivize security. The advantages of Bitcoin's PoW system include that the group of block authors is truly open to anyone with computational resources, that the system converges on one ground truth because there is a real cost to producing a competing chaintip, and that it is simple enough for its security model to be well understood. PoS is more similar to the approaches that were pursued before the publication of Bitcoin. PoS is naturally divergent, as there is no real cost to staking. The "nothing at stake" problem allows stakers to work on multiple chaintips and only publish the next block from the chain most favorable to them. There are different ways of approaching the vastly different security model of PoS. [!["Casper summany: FIx stuff with Staking. FIx the problems in that with bonding and checkpoints. Fix problems with that with being fuzzily forgiving about slashing. Make it all 'rigorous' by doing real proofs of something somewhere" – Bram Cohen](https://i.stack.imgur.com/bWZYl.png)](https://i.stack.imgur.com/bWZYl.png) Source: [Bram Cohen describing ETH's PoS research](https://twitter.com/bramcohen/status/956263457392795648) For example, ETH's effort to switch to PoS has been in research since at least 2015. As of the latest I've read, ETH's current PoS proposal piles multiple layers of complexity on top of the staking to achieve convergence. Stakers have to register as "Validators", of which there are a limited number, put up a sizeable collateral that can be slashed in retaliation for misbehavior, and additionally maintain frozen capital to stake in the first place. More mitigations are in place to punish validator malfunction and to recover the system from such breakdowns. Other approaches to and issues with PoS include: * Some systems introduce a central party that rubber-stamps the latest block (e.g. Peercoin). A central coordinating party costs the system its censorship resistance. * It's difficult to fairly launch a PoS system, since stakers have to hold funds in the system to author blocks. Many PoS systems are started as airdrops, Initial Coin Offerings (ICOs), or a proof-of-burn auction. * Staking requires some representation of the private key to be online at all times, which may make it easier to redirect some of the staking power (in early PoS systems it had to be the actual private key, so not only staking power but actual funds could get stolen). Not participating in staking means that your share of the monetary supply is being inflated away. * Some systems require coins to have a certain number of confirmations before they may be used for staking, so spending funds interrupts your staking revenue. * Some people expect that [staking revenue will be taxed differently than mining revenue](https://medium.com/@bendavenport/a-stake-to-the-heart-57fcd8ec323b). * Some PoS systems can be gamed for profit by trying a vast number of block candidates so that the staker gets blocks more often than their stake should qualify them for. Such an incentive may turn those PoS systems into PoW schemes under the hood. * Some researchers argue that ["by depending only on resources within the system, proof of stake cannot be used to form a distributed consensus, since it depends on the very history it is trying to form to enforce loss of value"](https://download.wpsoftware.net/bitcoin/pos.pdf). So, while the Ethereum Foundation keeps giving (and missing) new [delivery dates](https://twitter.com/martybent/status/896775534658686979) for an incomplete research project, there seems to be little interest among Bitcoin contributors in discussing Rube Goldberg contraptions. And then, beyond the general skepticism toward PoS, it wouldn't be feasible to just switch to it: *"Even if there somehow was a workable solution that had desirable properties and security proofs, it would be working under a vastly different security model than PoW… and nobody can just decide to make such a change without enormous community consensus for such an invasive change."* –Pieter Wuille
In addition to the other answers: Bitcoin investors also want a very conservative approach to updating Bitcoin. Messing with the core idea would increase the perceived risk for something that wants to be a store of value.
95,362
Private key x = pubkey B; (x (+ or -) 1) = pubkeys A, C. I only know the public keys A and C. I am looking for a way to know whether the public key sequence is A B C or C B A. B = 0xa5e42a634fa42f4f22c756429a06fd104a12a0c3a61ae4b738b1716913c82732, A = 0xb616c736dd3d768e2e7b30b6e71caa3cd58359127af62bc633716eb2e782cca4, C = 0xc357ffa5f463c7a2b0ba760ceef0c7a7ae77042cae93788ce39a797693e0cc64. I don't know the value of the private key x; the public key corresponding to x is B. Input: B. Output: x (+1) = A or C?
2020/04/20
[ "https://bitcoin.stackexchange.com/questions/95362", "https://bitcoin.stackexchange.com", "https://bitcoin.stackexchange.com/users/105236/" ]
I think there are some very convincing theoretical arguments to be made, but there is also a very practical consideration: right now, a very large portion of BTC is held in the cold wallets of popular exchange platforms. Hardcore bitcoiners will shake their heads and declare *"Not your keys, not your coins!"*, but this apparently has not stopped traders and normal users alike from keeping their coins with a custodial third party. Looking at the ['bitcoin rich list'](https://bitcoin.stackexchange.com/questions/91242/bitcoin-rich-list-reliable-or-not/91243#91243), we can confirm the huge number of coins held by exchanges. This fact would put exchange operators in an undue position of power over the network: **by staking coins owned by their users, the exchange operators can obtain a large, centralized point of control over the network's consensus operations.** There are already risks present when allowing a third-party custodian to manage your coins/private keys, **but switching to a PoS system adds an entirely new and very serious type of risk!** This risk is existential in nature: if an exchange operator were to abuse their control of this huge number of coins somehow (by staking maliciously in some way or another), it would affect every user of the system, not just the users of that exchange. This problem is only amplified by the fact that you not only have to trust the exchange operators not to act maliciously, you also have to trust them to secure their system against theft and intrusion. It is bad enough when hackers steal funds; giving hackers the ability to attack the network consensus as well is, in my opinion, an untenable addition of risk. In case you aren't convinced: this risk is not theoretical. An attack like this recently happened on an altcoin network (Steem, mid-late Feb 2020). It appears that exchanges colluded to stake the coins they held custody of, in an effort to disrupt network consensus. A quick web search [brings up this article about it](https://cryptocoingrowth.com/2020/03/02/steem-goes-down-after-major-exchanges-hijack-consensus-mechanism/).
Proof of stake just doesn't work the same as mining from an economic-incentive standpoint. Miners make real-world investments, in advance, in equipment that becomes less valuable as difficulty increases. Miners have no guarantee that their investment will pay off; they merely have a probability of finding a good proof of work. Staking chains are vulnerable to new attacks, like "long range" attacks, "fake stake" attacks, etc. Staking is just as easy to pool and manipulate as mining. Proof-of-stake systems have some good solutions, but not all the problems are solved. Until they are, Bitcoin definitely won't transition. A more realistic transition would be to proof-of-burn, where a P2SH burn is locked to a height and gets you a decaying probability of being able to mine some future block.
95,362
Private key x = pubkey B; (x (+ or -) 1) = pubkeys A, C. I only know the public keys A and C. I am looking for a way to know whether the public key sequence is A B C or C B A. B = 0xa5e42a634fa42f4f22c756429a06fd104a12a0c3a61ae4b738b1716913c82732, A = 0xb616c736dd3d768e2e7b30b6e71caa3cd58359127af62bc633716eb2e782cca4, C = 0xc357ffa5f463c7a2b0ba760ceef0c7a7ae77042cae93788ce39a797693e0cc64. I don't know the value of the private key x; the public key corresponding to x is B. Input: B. Output: x (+1) = A or C?
2020/04/20
[ "https://bitcoin.stackexchange.com/questions/95362", "https://bitcoin.stackexchange.com", "https://bitcoin.stackexchange.com/users/105236/" ]
Proof of Stake is basically a case of having your cake and eating it, too. PoW is a simple workaround to a coordination problem that was previously thought to be unsolvable. It sort of "cheats" by providing an economic solution to a distributed-systems challenge, introducing a real cost as a disincentive to unwanted behavior as well as using a reward system both to bootstrap itself and to incentivize security. The advantages of Bitcoin's PoW system include that the group of block authors is truly open to anyone with computational resources, that the system converges on one ground truth because there is a real cost to producing a competing chaintip, and that it is simple enough for its security model to be well understood. PoS is more similar to the approaches that were pursued before the publication of Bitcoin. PoS is naturally divergent, as there is no real cost to staking. The "nothing at stake" problem allows stakers to work on multiple chaintips and only publish the next block from the chain most favorable to them. There are different ways of approaching the vastly different security model of PoS. [!["Casper summany: FIx stuff with Staking. FIx the problems in that with bonding and checkpoints. Fix problems with that with being fuzzily forgiving about slashing. Make it all 'rigorous' by doing real proofs of something somewhere" – Bram Cohen](https://i.stack.imgur.com/bWZYl.png)](https://i.stack.imgur.com/bWZYl.png) Source: [Bram Cohen describing ETH's PoS research](https://twitter.com/bramcohen/status/956263457392795648) For example, ETH's effort to switch to PoS has been in research since at least 2015. As of the latest I've read, ETH's current PoS proposal piles multiple layers of complexity on top of the staking to achieve convergence. Stakers have to register as "Validators", of which there are a limited number, put up a sizeable collateral that can be slashed in retaliation for misbehavior, and additionally maintain frozen capital to stake in the first place. More mitigations are in place to punish validator malfunction and to recover the system from such breakdowns. Other approaches to and issues with PoS include: * Some systems introduce a central party that rubber-stamps the latest block (e.g. Peercoin). A central coordinating party costs the system its censorship resistance. * It's difficult to fairly launch a PoS system, since stakers have to hold funds in the system to author blocks. Many PoS systems are started as airdrops, Initial Coin Offerings (ICOs), or a proof-of-burn auction. * Staking requires some representation of the private key to be online at all times, which may make it easier to redirect some of the staking power (in early PoS systems it had to be the actual private key, so not only staking power but actual funds could get stolen). Not participating in staking means that your share of the monetary supply is being inflated away. * Some systems require coins to have a certain number of confirmations before they may be used for staking, so spending funds interrupts your staking revenue. * Some people expect that [staking revenue will be taxed differently than mining revenue](https://medium.com/@bendavenport/a-stake-to-the-heart-57fcd8ec323b). * Some PoS systems can be gamed for profit by trying a vast number of block candidates so that the staker gets blocks more often than their stake should qualify them for. Such an incentive may turn those PoS systems into PoW schemes under the hood. * Some researchers argue that ["by depending only on resources within the system, proof of stake cannot be used to form a distributed consensus, since it depends on the very history it is trying to form to enforce loss of value"](https://download.wpsoftware.net/bitcoin/pos.pdf). So, while the Ethereum Foundation keeps giving (and missing) new [delivery dates](https://twitter.com/martybent/status/896775534658686979) for an incomplete research project, there seems to be little interest among Bitcoin contributors in discussing Rube Goldberg contraptions. And then, beyond the general skepticism toward PoS, it wouldn't be feasible to just switch to it: *"Even if there somehow was a workable solution that had desirable properties and security proofs, it would be working under a vastly different security model than PoW… and nobody can just decide to make such a change without enormous community consensus for such an invasive change."* –Pieter Wuille
I think there are some very convincing theoretical arguments to be made, but there is also a very practical consideration: right now, a very large portion of BTC is held in the cold wallets of popular exchange platforms. Hardcore bitcoiners will shake their heads and declare *"Not your keys, not your coins!"*, but this apparently has not stopped traders and normal users alike from keeping their coins with a custodial third party. Looking at the ['bitcoin rich list'](https://bitcoin.stackexchange.com/questions/91242/bitcoin-rich-list-reliable-or-not/91243#91243), we can confirm the huge number of coins held by exchanges. This fact would put exchange operators in an undue position of power over the network: **by staking coins owned by their users, the exchange operators can obtain a large, centralized point of control over the network's consensus operations.** There are already risks present when allowing a third-party custodian to manage your coins/private keys, **but switching to a PoS system adds an entirely new and very serious type of risk!** This risk is existential in nature: if an exchange operator were to abuse their control of this huge number of coins somehow (by staking maliciously in some way or another), it would affect every user of the system, not just the users of that exchange. This problem is only amplified by the fact that you not only have to trust the exchange operators not to act maliciously, you also have to trust them to secure their system against theft and intrusion. It is bad enough when hackers steal funds; giving hackers the ability to attack the network consensus as well is, in my opinion, an untenable addition of risk. In case you aren't convinced: this risk is not theoretical. An attack like this recently happened on an altcoin network (Steem, mid-late Feb 2020). It appears that exchanges colluded to stake the coins they held custody of, in an effort to disrupt network consensus. A quick web search [brings up this article about it](https://cryptocoingrowth.com/2020/03/02/steem-goes-down-after-major-exchanges-hijack-consensus-mechanism/).
95,362
Private key x = pubkey B (x (+ or -) 1) = pubkey A, C I only know the public keys A and C I am looking for a way to know if the public key sequence is A B C or C B A B = 0xa5e42a634fa42f4f22c756429a06fd104a12a0c3a61ae4b738b1716913c82732 A = 0xb616c736dd3d768e2e7b30b6e71caa3cd58359127af62bc633716eb2e782cca4 C = 0xc357ffa5f463c7a2b0ba760ceef0c7a7ae77042cae93788ce39a797693e0cc64 I don't know the private key of x value. The public key corresponding to x is B Input: B Output x (+1) = A or C
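For what it's worth, the relationship described in the question above is ordinary secp256k1 point arithmetic: if private keys differ by 1, the corresponding public keys differ by the generator point G, so one can test whether B + G or B - G matches A or C. Below is a minimal pure-Python sketch under stated assumptions: the hex values are taken to be x-coordinates of curve points (with the y-parity unknown, we lift to the even-y point, so the +/- direction is only determined up to sign), the lift raises an error if an x is not on the curve, and if the points are unrelated both checks simply report "neither". None of this recovers any private key.

```python
# Sketch: check whether B + G or B - G matches A or C on secp256k1.
# Pure Python, illustrative only; assumes the values are x-coordinates.

P  = 2**256 - 2**32 - 977                      # secp256k1 field prime
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def inv(a):                                    # modular inverse (Fermat)
    return pow(a, P - 2, P)

def lift_x(x):                                 # recover even-y point for x
    y_sq = (x * x * x + 7) % P
    y = pow(y_sq, (P + 1) // 4, P)             # sqrt works since P % 4 == 3
    if (y * y) % P != y_sq:
        raise ValueError("x is not on the curve")
    return (x, y if y % 2 == 0 else P - y)

def add(p, q):                                 # affine point addition
    if p == q:                                 # doubling case
        lam = 3 * p[0] * p[0] * inv(2 * p[1]) % P
    else:                                      # assumes p != -q
        lam = (q[1] - p[1]) * inv(q[0] - p[0]) % P
    x = (lam * lam - p[0] - q[0]) % P
    return (x, (lam * (p[0] - x) - p[1]) % P)

B = 0xa5e42a634fa42f4f22c756429a06fd104a12a0c3a61ae4b738b1716913c82732
A = 0xb616c736dd3d768e2e7b30b6e71caa3cd58359127af62bc633716eb2e782cca4
C = 0xc357ffa5f463c7a2b0ba760ceef0c7a7ae77042cae93788ce39a797693e0cc64

Bp = lift_x(B)
plus_x  = add(Bp, (Gx, Gy))[0]                 # x-coordinate of B + G
minus_x = add(Bp, (Gx, P - Gy))[0]             # x-coordinate of B - G

for label, x in (("B+G", plus_x), ("B-G", minus_x)):
    match = "A" if x == A else "C" if x == C else "neither"
    print(label, "matches:", match)
```

Note the design point: this only works because consecutive private keys differ by the public constant G; it says nothing about, and cannot break, the discrete-log relation between a single private key and its public key.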
2020/04/20
[ "https://bitcoin.stackexchange.com/questions/95362", "https://bitcoin.stackexchange.com", "https://bitcoin.stackexchange.com/users/105236/" ]
Proof of Stake is basically a case of having your cake and eating it, too. PoW is a simple work-around to a coordination problem that was previously thought to be unsolvable. It sort of "cheats" by providing an economic solution to a distributed systems challenge: it introduces a real cost as a disincentive to unwanted behavior and uses a reward system both to bootstrap itself and to incentivize security. The advantages of Bitcoin's PoW system include that the group of block authors is truly open to anyone with computational resources, that the system converges on one ground-truth because there is a real cost in producing a competing chaintip, and that it is simple enough for its security model to be well understood. PoS is more similar to the approaches that were pursued before the publication of Bitcoin. PoS is naturally divergent, as there is no real cost in staking. The "Nothing at Stake" problem allows stakers to work on multiple chaintips and only publish the next block from the chain most favorable to them. There are different ways of approaching the vastly different security model of PoS. [!["Casper summany: FIx stuff with Staking. FIx the problems in that with bonding and checkpoints. Fix problems with that with being fuzzily forgiving about slashing. Make it all 'rigorous' by doing real proofs of something somewhere" – Bram Cohen](https://i.stack.imgur.com/bWZYl.png)](https://i.stack.imgur.com/bWZYl.png) Source: [Bram Cohen describing ETH's PoS research](https://twitter.com/bramcohen/status/956263457392795648) For example, ETH's effort to switch to PoS has been in research since at least 2015. From the latest I've read, ETH's current PoS proposal piles multiple layers of complexity on top of the staking to achieve convergence. Stakers have to register as "Validators", of which there are a limited number, put up sizeable collateral that can be slashed in retaliation for misbehavior, and additionally maintain frozen capital to stake in the first place. More mitigations are in place to punish validator malfunction and to recover the system from such breakdowns. Other approaches to and issues with PoS include: * Some systems introduce a central party that rubberstamps the latest block (e.g. Peercoin). A central coordinating party costs the system its censorship resistance. * It's difficult to fairly launch a PoS system, since stakers have to hold funds in the system to author blocks. Many PoS systems either get started via airdrops, Initial Coin Offerings (ICOs), or proof-of-burn auctions. * Staking requires some representation of the private key to be online at all times, which may make it easier to redirect some of the staking power (in early PoS systems it had to be the actual private key, so not only staking power but actual funds could get stolen). Not participating in staking means that your share of the monetary supply is being inflated away. * Some systems require coins to have a certain number of confirmations before they are allowed to be used for staking, so spending funds interrupts your staking revenue. * Some people expect that [staking revenue will be taxed differently than mining revenue](https://medium.com/@bendavenport/a-stake-to-the-heart-57fcd8ec323b). * Some PoS systems can be gamed for profit by trying a vast number of block candidates to make the staker get blocks more often than their stake should qualify them for. Such an incentive may turn those PoS systems into de facto PoW schemes under the hood.
* Some researchers argue that ["by depending only on resources within the system, proof of stake cannot be used to form a distributed consensus, since it depends on the very history it is trying to form to enforce loss of value"](https://download.wpsoftware.net/bitcoin/pos.pdf). So, while the Ethereum Foundation keeps giving (and missing) new [delivery dates](https://twitter.com/martybent/status/896775534658686979) for an incomplete research project, there seems to be little interest among Bitcoin contributors in discussing Rube Goldberg contraptions. And then, beyond the general skepticism toward PoS, it wouldn't be feasible to just switch to it: > > "Even if there somehow was a workable solution that had desirable > properties and security proofs, it would be working under a vastly > different security model than PoW… and nobody can just decide to make > such a change without enormous community consensus for such an > invasive change." –Pieter Wuille > > >
I think there are at least four reasons: 1. The miners are stakeholders in the bitcoin ecosystem. Mining solves a problem for them. Taking away PoW mining would make bitcoin no longer work for one of its most important groups of stakeholders. 2. Non-miners are in bitcoin because they like what bitcoin is. If they want some other consensus scheme, they know where to find it. There is certainly room in the market for at least one PoW chain, and that's what bitcoin is. 3. Major changes impose costs on every participant in the ecosystem. Every implementation has to implement the new rules. Everyone has to test that the new stuff doesn't break anything they're relying on. 4. There isn't a consensus in the community that PoS can provide the same level of security as PoW at lower cost. That's the claim PoS advocates make, but it's far from an accepted truth.
95,362
Private key x = pubkey B (x (+ or -) 1) = pubkey A, C I only know the public keys A and C I am looking for a way to know if the public key sequence is A B C or C B A B = 0xa5e42a634fa42f4f22c756429a06fd104a12a0c3a61ae4b738b1716913c82732 A = 0xb616c736dd3d768e2e7b30b6e71caa3cd58359127af62bc633716eb2e782cca4 C = 0xc357ffa5f463c7a2b0ba760ceef0c7a7ae77042cae93788ce39a797693e0cc64 I don't know the private key of x value. The public key corresponding to x is B Input: B Output x (+1) = A or C
2020/04/20
[ "https://bitcoin.stackexchange.com/questions/95362", "https://bitcoin.stackexchange.com", "https://bitcoin.stackexchange.com/users/105236/" ]
I think there are at least four reasons: 1. The miners are stakeholders in the bitcoin ecosystem. Mining solves a problem for them. Taking away PoW mining would make bitcoin no longer work for one of its most important groups of stakeholders. 2. Non-miners are in bitcoin because they like what bitcoin is. If they want some other consensus scheme, they know where to find it. There is certainly room in the market for at least one PoW chain, and that's what bitcoin is. 3. Major changes impose costs on every participant in the ecosystem. Every implementation has to implement the new rules. Everyone has to test that the new stuff doesn't break anything they're relying on. 4. There isn't a consensus in the community that PoS can provide the same level of security as PoW at lower cost. That's the claim PoS advocates make, but it's far from an accepted truth.
In addition to other answers, Bitcoin investors also prefer a very conservative approach to updating Bitcoin. Messing with the core idea would increase the perceived risk of something that aims to be a store of value.
95,362
Private key x = pubkey B (x (+ or -) 1) = pubkey A, C I only know the public keys A and C I am looking for a way to know if the public key sequence is A B C or C B A B = 0xa5e42a634fa42f4f22c756429a06fd104a12a0c3a61ae4b738b1716913c82732 A = 0xb616c736dd3d768e2e7b30b6e71caa3cd58359127af62bc633716eb2e782cca4 C = 0xc357ffa5f463c7a2b0ba760ceef0c7a7ae77042cae93788ce39a797693e0cc64 I don't know the private key of x value. The public key corresponding to x is B Input: B Output x (+1) = A or C
2020/04/20
[ "https://bitcoin.stackexchange.com/questions/95362", "https://bitcoin.stackexchange.com", "https://bitcoin.stackexchange.com/users/105236/" ]
Proof of Stake is basically a case of having your cake and eating it, too. PoW is a simple work-around to a coordination problem that was previously thought to be unsolvable. It sort of "cheats" by providing an economic solution to a distributed systems challenge: it introduces a real cost as a disincentive to unwanted behavior and uses a reward system both to bootstrap itself and to incentivize security. The advantages of Bitcoin's PoW system include that the group of block authors is truly open to anyone with computational resources, that the system converges on one ground-truth because there is a real cost in producing a competing chaintip, and that it is simple enough for its security model to be well understood. PoS is more similar to the approaches that were pursued before the publication of Bitcoin. PoS is naturally divergent, as there is no real cost in staking. The "Nothing at Stake" problem allows stakers to work on multiple chaintips and only publish the next block from the chain most favorable to them. There are different ways of approaching the vastly different security model of PoS. [!["Casper summany: FIx stuff with Staking. FIx the problems in that with bonding and checkpoints. Fix problems with that with being fuzzily forgiving about slashing. Make it all 'rigorous' by doing real proofs of something somewhere" – Bram Cohen](https://i.stack.imgur.com/bWZYl.png)](https://i.stack.imgur.com/bWZYl.png) Source: [Bram Cohen describing ETH's PoS research](https://twitter.com/bramcohen/status/956263457392795648) For example, ETH's effort to switch to PoS has been in research since at least 2015. From the latest I've read, ETH's current PoS proposal piles multiple layers of complexity on top of the staking to achieve convergence. Stakers have to register as "Validators", of which there are a limited number, put up sizeable collateral that can be slashed in retaliation for misbehavior, and additionally maintain frozen capital to stake in the first place. More mitigations are in place to punish validator malfunction and to recover the system from such breakdowns. Other approaches to and issues with PoS include: * Some systems introduce a central party that rubberstamps the latest block (e.g. Peercoin). A central coordinating party costs the system its censorship resistance. * It's difficult to fairly launch a PoS system, since stakers have to hold funds in the system to author blocks. Many PoS systems either get started via airdrops, Initial Coin Offerings (ICOs), or proof-of-burn auctions. * Staking requires some representation of the private key to be online at all times, which may make it easier to redirect some of the staking power (in early PoS systems it had to be the actual private key, so not only staking power but actual funds could get stolen). Not participating in staking means that your share of the monetary supply is being inflated away. * Some systems require coins to have a certain number of confirmations before they are allowed to be used for staking, so spending funds interrupts your staking revenue. * Some people expect that [staking revenue will be taxed differently than mining revenue](https://medium.com/@bendavenport/a-stake-to-the-heart-57fcd8ec323b). * Some PoS systems can be gamed for profit by trying a vast number of block candidates to make the staker get blocks more often than their stake should qualify them for. Such an incentive may turn those PoS systems into de facto PoW schemes under the hood.
* Some researchers argue that ["by depending only on resources within the system, proof of stake cannot be used to form a distributed consensus, since it depends on the very history it is trying to form to enforce loss of value"](https://download.wpsoftware.net/bitcoin/pos.pdf). So, while the Ethereum Foundation keeps giving (and missing) new [delivery dates](https://twitter.com/martybent/status/896775534658686979) for an incomplete research project, there seems to be little interest among Bitcoin contributors in discussing Rube Goldberg contraptions. And then, beyond the general skepticism toward PoS, it wouldn't be feasible to just switch to it: > > "Even if there somehow was a workable solution that had desirable > properties and security proofs, it would be working under a vastly > different security model than PoW… and nobody can just decide to make > such a change without enormous community consensus for such an > invasive change." –Pieter Wuille > > >
Proof of stake just doesn't work the same as mining from an economic-incentive standpoint. Miners make real-world investments, in advance, in equipment that becomes less valuable as difficulty increases. Miners have no guarantee that their investment will pay off; they merely have a probability of finding a good proof of work. Staking chains are vulnerable to new attacks, like "long range" attacks, "fake stake" attacks, etc. Staking is just as easy to pool and manipulate as mining. Proof-of-stake systems have good solutions to some of these problems, but not all of them are solved. Until they are, Bitcoin definitely won't transition. A more realistic transition would be to proof-of-burn, where a P2SH burn is locked to a height and grants a decaying probability of being able to mine some future block (a rough numeric sketch follows this answer).
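As a rough sketch of the decaying-probability idea just mentioned: a burn locked at some height gives block-production weight that fades as the chain advances past it. The exponential shape and the half-life constant below are illustrative assumptions, not a specification.

```python
# Hedged sketch of a decaying proof-of-burn weight: the burn's influence
# halves every `half_life` blocks after the burn is locked in.

def burn_weight(burn_amount, burn_height, tip_height, half_life=52_560):
    age = tip_height - burn_height          # blocks since the burn locked
    if age < 0:
        return 0.0                          # burn not yet locked in
    return burn_amount * 0.5 ** (age / half_life)

for age in (0, 52_560, 105_120):
    print(age, burn_weight(1.0, 0, age))    # prints 1.0, 0.5, 0.25
```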
95,362
Private key x = pubkey B (x (+ or -) 1) = pubkey A, C I only know the public keys A and C I am looking for a way to know if the public key sequence is A B C or C B A B = 0xa5e42a634fa42f4f22c756429a06fd104a12a0c3a61ae4b738b1716913c82732 A = 0xb616c736dd3d768e2e7b30b6e71caa3cd58359127af62bc633716eb2e782cca4 C = 0xc357ffa5f463c7a2b0ba760ceef0c7a7ae77042cae93788ce39a797693e0cc64 I don't know the private key of x value. The public key corresponding to x is B Input: B Output x (+1) = A or C
2020/04/20
[ "https://bitcoin.stackexchange.com/questions/95362", "https://bitcoin.stackexchange.com", "https://bitcoin.stackexchange.com/users/105236/" ]
Bitcoin should switch to BFT (Byzantine Fault Tolerant) PoS, which is secure by definition. Most people who don't like PoS are thinking of "vanilla" or "chain-based" PoS protocols, which are certainly more insecure than PoW. Ethereum 2.0 is currently using Casper in the Beacon Chain, and other coins are also using BFT PoS protocols. I will try to summarize BFT PoS in a few lines (a small sketch of the first two rules follows this answer): * Each block must be signed by 2/3 of validators, weighted by staking power. That provides absolute finality in just one block. * There is a penalty system that punishes evil behavior or validator inactivity, which makes it MUCH MORE secure than PoW. If you sign 2 blocks at the same height, that's evil behavior, and your staking deposit is burned automatically. If 1/3 of validators are inactive and the network halts, the penalty system starts to slowly burn their deposits until the network can restart. * Even if you have 99.9% of staking power, you can't censor transactions for free, because as long as there is a single honest validator in the network, he will include the transaction you want to censor in his block. If the attacker doesn't want to sign the honest validator's block, his staking deposit receives a small percentage penalty, but a small percentage of millions of dollars is a lot of money. The honest validator also receives a penalty, but his deposit is small, so it's not a problem. * If somehow the attacker has ALL staking power, starts censoring for free, and doesn't allow other validators to register their staking deposits to create blocks, the community as a whole could agree to burn all staking deposits and restart the network, which makes every coin holder much richer than before. Regards
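Here is a hedged, minimal sketch of the first two rules in the answer above: finality once signatures backing a block reach 2/3 of total staking power, and automatic burning of a deposit on double-signing at one height. All names, structures, and amounts are illustrative assumptions, not any specific chain's implementation.

```python
# Toy model of two BFT-PoS rules: 2/3-stake finality and double-sign slashing.

from dataclasses import dataclass

@dataclass(frozen=True)
class Vote:
    validator: str
    height: int
    block_hash: str

deposits = {"v1": 32, "v2": 32, "v3": 32}   # validator -> staked amount
seen = {}                                   # (validator, height) -> hash

def observe(vote):
    """Record a vote; burn the deposit on double-signing at one height."""
    prev = seen.setdefault((vote.validator, vote.height), vote.block_hash)
    if prev != vote.block_hash:             # equivocation detected
        burned = deposits.pop(vote.validator, 0)
        print(f"slashed {vote.validator}, burned deposit of {burned}")

def is_final(votes, block_hash, height):
    """True once >= 2/3 of total staking power signed this block."""
    backing = sum(deposits.get(v.validator, 0) for v in votes
                  if v.height == height and v.block_hash == block_hash)
    return 3 * backing >= 2 * sum(deposits.values())

votes = [Vote("v1", 1, "a"), Vote("v2", 1, "a")]
for v in votes:
    observe(v)
print("final:", is_final(votes, "a", 1))    # 64 of 96 staked -> True
observe(Vote("v1", 1, "b"))                 # double-sign -> v1 slashed
```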
In addition to other answers, Bitcoin investors also prefer a very conservative approach to updating Bitcoin. Messing with the core idea would increase the perceived risk of something that aims to be a store of value.
24,504,341
This is my first topic. I've done a quick search to make sure I'm not posting an unnecessary topic that already exists. I'm using a Windows 7 32-bit system, and I'm trying to install JDK 8u05. I downloaded the .exe file (curious why there is no .rar archive available for download) from the Oracle main website, and when I try to run it, nothing happens. It just loads for a few seconds and then nothing. I tried restarting, re-downloading, turning off UAC, and running as an administrator. Any suggestions? Thanks in advance
2014/07/01
[ "https://Stackoverflow.com/questions/24504341", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3792782/" ]
Just solved this: I needed to switch users. It seems the user account I was working on didn't have the necessary rights. Thanks anyway!
**Downloading the Installer** If you save the self-installing executable file to disk without running it from the download page at the web site, note the file size specified on the download page. After the download has completed, verify that you have downloaded the complete file. **Running the JDK Installer** You must have administrative permissions in order to install the JDK on Microsoft Windows. The file jdk-8version-windows-i586-i.exe is the JDK installer for 32-bit systems. The file jdk-8version-windows-x64.exe is the JDK installer for 64-bit systems. If you downloaded either file instead of running it directly from the web site, double-click the installer's icon. Then, follow the instructions the installer provides. When finished with the installation, you can delete the downloaded file to recover disk space. Installers for JDK 7u6 and later install the JavaFX SDK and integrate it into the JDK installation directory. Installers for JDK 7u2 to 7u5 install the JDK first, then start the JavaFX SDK installer, which installs JavaFX SDK in the default directory C:\Program Files\Oracle\JavaFX 2.0 SDK or C:\Program Files (x86)\Oracle\JavaFX 2.0 SDK on 64-bit operating systems. If you want to install the JavaFX SDK (version 2.0.2) with JDK 7u1 or earlier, see <http://docs.oracle.com/javafx/2/installation/jfxpub-installation.htm> for more information. **Starting to Use the JDK** Use the Java item in the Windows Start menu to get access to essential Java information and functions, including help, API documentation, the Java Control Panel, checking for updates, and Java Mission Control. If you are new to developing and running programs in the Java programming language, see <http://download.oracle.com/javase/tutorial> for some guidance. Note especially the tutorial trails under the heading Trails Covering the Basics. You can also download the JDK documentation from <http://www.oracle.com/technetwork/java/javase/downloads/index-jsp-138363.html> page.
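As a small aid for the "verify that you have downloaded the complete file" step above, here is a hedged helper sketch. The filename is an assumption, and the expected size and digest are placeholders to be copied from the download page, if Oracle publishes them for your release.

```python
# Hedged download-verification helper: compare the local installer's
# byte size and SHA-256 against the values listed on the download page.

import hashlib
import os

path = "jdk-8u5-windows-i586.exe"   # assumed local filename
expected_size = 0                   # placeholder: size listed on the page
expected_sha256 = ""                # placeholder: digest listed on the page

if not os.path.exists(path):
    raise SystemExit(f"{path} not found; adjust the path first")

print("size matches:", os.path.getsize(path) == expected_size)

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)
print("sha256:", digest.hexdigest())
print("digest matches:", digest.hexdigest() == expected_sha256)
```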
16,006
> > Related: > > * <http://meta.askubuntu.com/questions/15996/tag-synonym-request-asus-related-stuff> (in dispute) > * <http://meta.askubuntu.com/questions/7733/do-we-need-brand-manufacturer-tags> > > > This is a call for community discussion on the matter of manufacturer/brand-specific tags with relation to Ask Ubuntu. --- **History** It was brought up in chat and Meta requests to merge the [asus-pc](https://askubuntu.com/questions/tagged/asus-pc "show questions tagged 'asus-pc'") and [asus-laptop](https://askubuntu.com/questions/tagged/asus-laptop "show questions tagged 'asus-laptop'") tags into [asus](https://askubuntu.com/questions/tagged/asus "show questions tagged 'asus'") and set up tag synonyms. The justification was that we do not need additional Asus tags for tagging Asus-related questions. Another user has made an argument that the [asus-pc](https://askubuntu.com/questions/tagged/asus-pc "show questions tagged 'asus-pc'") and [asus-laptop](https://askubuntu.com/questions/tagged/asus-laptop "show questions tagged 'asus-laptop'") tags are more useful than the Asus tag, and is quoting [this question on Super User](https://meta.superuser.com/questions/8402/manufacturer-company-tags-are-back-again). The editing of questions to add in [asus-pc](https://askubuntu.com/questions/tagged/asus-pc "show questions tagged 'asus-pc'") and [asus-laptop](https://askubuntu.com/questions/tagged/asus-laptop "show questions tagged 'asus-laptop'") has been getting negative attention in review queues. That is, reviews of suggested tag revisions such as these have been receiving rejections recently. --- It can be generally argued that there are cases where a generic manufacturer tag is bad. This applies to Logitech, for instance, which makes mice, keyboards, webcams, and many other peripheral devices. This may also apply to cases such as Intel or AMD, which both create CPUs, GPUs, and other devices that need to be handled individually. There are also cases where having individual product tags is poor compared to having an all-inclusive tag. Consider: if we made a tag for every single Dell computer model, we'd have a thousand tags that don't add anything. The same applies to Apple devices. That said, if a company only produces computers, it may not necessarily be a good idea to explicitly separate it into laptop, desktop, server, etc. subcategories. To quote the "Do we need brand/manufacturer tags" question's answers from three years ago, a core question comes to mind with regards to these types of tags: > > Who would filter using (the tag)? Who would think this is an expertise area? > > > With this question in mind, there is an argument for either side of the discussion - whether it's keeping the tags merged, which has been done today, or whether we start separating this tag out. --- **The Question** This discussion thread here brings up a very specific question for the community to answer to decide how to proceed on this individual case: With regards to the [asus](https://askubuntu.com/questions/tagged/asus "show questions tagged 'asus'") tag, should we leave them all lumped together under the general manufacturer tag, or should we split them up into individual subcategory tags (such as [asus-laptop](https://askubuntu.com/questions/tagged/asus-laptop "show questions tagged 'asus-laptop'"), [asus-pc](https://askubuntu.com/questions/tagged/asus-pc "show questions tagged 'asus-pc'"), etc.)?
2016/08/29
[ "https://meta.askubuntu.com/questions/16006", "https://meta.askubuntu.com", "https://meta.askubuntu.com/users/10616/" ]
I've stated my opinion before, in one of the ASUS Meta questions, but I'm going to state it again, just in a more complex way, and with a bit of backtracking :p. The tag [asus](https://askubuntu.com/questions/tagged/asus "show questions tagged 'asus'") says it's for every ASUS electronic device, and that's where there might be a problem, since ASUS doesn't only make computers. It also makes motherboards, tablets, phones and maybe other things not listed in its store. So maybe it's a bit of a generalization to group everything together. **However**, ASUS phones and tablets don't tend to run Ubuntu. Some of the models probably have Touch ports, but then those should require their own tags ([asus-tablet](https://askubuntu.com/questions/tagged/asus-tablet "show questions tagged 'asus-tablet'"), [asus-phone](https://askubuntu.com/questions/tagged/asus-phone "show questions tagged 'asus-phone'")), as the hardware and necessary instructions are likely very different from their desktops. Also, I doubt we're going to be getting questions specifically about ASUS motherboards, unless the computers are custom-built, in which case the motherboard is still probably not very important. But here's another problem. There are quite a few product lines of ASUS laptops. This I did not realize when I was just looking up the ZenBook series. They have Chromebooks, gaming PCs, workstations, 2-in-1s, eeePCs, and a few others. Obviously, some of these, like the Chromebook, are special and need special instructions to run Ubuntu properly. There's also the Chromebox, which is a desktop. Here's a new proposition. I say we don't look at this completely objectively and equally. We don't use the argument "we'd have to create one for every type"; we don't need to do that in this case. Most of the product lines are pretty similar: ROG, X, Vivo, Zen, Multimedia, even Transformer -- these are all pretty similar in general terms. The desktops seem to be mostly similar, internally, as well (except the Chromebox). Since most of the desktops and laptops aren't very different, would it make sense to create just one [asus-computer](https://askubuntu.com/questions/tagged/asus-computer "show questions tagged 'asus-computer'") tag (or something like that) and have it apply to all the normal models? Then, we could make [asus-chrome](https://askubuntu.com/questions/tagged/asus-chrome "show questions tagged 'asus-chrome'") (to include desktop and laptops -- I don't care if it is [asus-chromebook](https://askubuntu.com/questions/tagged/asus-chromebook "show questions tagged 'asus-chromebook'") and [asus-chromebox](https://askubuntu.com/questions/tagged/asus-chromebox "show questions tagged 'asus-chromebox'") instead) to include those special devices. If ASUS happens to make others, or I'm missing one, then we add that too. So, hopefully I can say this concisely. What if we do this: instead of deciding on whether or not we should make a tag for every single model, why don't we do this intelligently (in the programming/coding sense)? Why don't we stop using the argument, "we can't make this tag because then we'd have to make all the others," and use, "we can't make this tag because it isn't special hardware"? Let's think about each individual situation and decide. Let's make a tag for a specific model, but only if that specific model actually needs a tag (special hardware/situation). **We need conditionals if we want tag-making to make sense.** I failed at the concise solution, but I hope I got my point across.
As the original poster of the synonym request, I feel like my justification is needed in this regard. After noticing (and rejecting) edits in the review queue that were changing [asus](https://askubuntu.com/questions/tagged/asus "show questions tagged 'asus'") to [asus-laptop](https://askubuntu.com/questions/tagged/asus-laptop "show questions tagged 'asus-laptop'") or [asus-computer](https://askubuntu.com/questions/tagged/asus-computer "show questions tagged 'asus-computer'"), I decided enough was enough and submitted the meta post mentioned in the original question. It seems excessive to have three tags for *the exact same thing*. While tags such as [asus-computer](https://askubuntu.com/questions/tagged/asus-computer "show questions tagged 'asus-computer'") would be acceptable, there's no reason to then ALSO have tags like [asus](https://askubuntu.com/questions/tagged/asus "show questions tagged 'asus'") and [asus-laptop](https://askubuntu.com/questions/tagged/asus-laptop "show questions tagged 'asus-laptop'"). Tags should be as easy to find as possible, and should be as short as necessary to convey the point. As long as there's no need to divide [asus](https://askubuntu.com/questions/tagged/asus "show questions tagged 'asus'"), we shouldn't. If the need comes, we can move posts over to a more appropriate tag (see Zacharee's post). However, these tags should only be used ***when they're appropriate***, which is a problem with this site. [asus](https://askubuntu.com/questions/tagged/asus "show questions tagged 'asus'") (and its derivatives) should not be used unless the question actually is Asus specific. If it can be applied to any other system, [asus](https://askubuntu.com/questions/tagged/asus "show questions tagged 'asus'") should not be applied to the post in question. The reason I suggested the tag edit and the synonyms was to remedy the issue of the tagging discrepancy and unify everything under the single [asus](https://askubuntu.com/questions/tagged/asus "show questions tagged 'asus'") tag, as there was no need (in my mind looking at the questions) to actually have a split as of yet.
34,000
I was surprised to discover my dictionary had this entry for *dilemma*: > > a situation in which a difficult choice has to be made between two or more alternatives, esp. equally undesirable ones > > > The notion of *dilemma* meaning *two or more* flies against what I was taught about the word. The very idea of a [*false dilemma*](http://en.wikipedia.org/wiki/False_dilemma) is specifically based on the number two. Has my dictionary merely updated its definition to encapsulate the many people who use *dilemma* for more than two equal choices? Or was someone in my youth being unnecessarily pedantic?
2011/07/13
[ "https://english.stackexchange.com/questions/34000", "https://english.stackexchange.com", "https://english.stackexchange.com/users/6006/" ]
First, the words "trilemma" and "multilemma" have been used. I know. I did it in a freshman writing class in 1982-83. They were footnoted with an explanation of their meaning relative to "dilemma". Since I was an avid Latin student circa 1970, using "dilemma" when there are multiple unpleasant choices went against the grain. The adjunct, a bitter wannabe, let these variations pass without comment. As to "dissection", the prefix in this case is not "di-", meaning "two", but "dis-", meaning apart, as in "discombobulated".
A dilemma is just an (unpleasant/difficult) choice, and most such choices involve only two options, but that does not mean that they can only have two options. I suppose it was made worse with the "*on the horns of*" precursor, because most beasts only have two horns, but the horns don't represent the choices, they represent the unpleasantness. "*On the spike of a dilemma*" would work just the same. *Fred's dilemma: Should he do A, B, C, or D?* No problem. I've seen a similar error with "dissect", where people believed it meant "cut in two".
34,000
I was surprised to discover my dictionary had this entry for *dilemma*: > > a situation in which a difficult choice has to be made between two or more alternatives, esp. equally undesirable ones > > > The notion of *dilemma* meaning *two or more* flies against what I was taught about the word. The very idea of a [*false dilemma*](http://en.wikipedia.org/wiki/False_dilemma) is specifically based on the number two. Has my dictionary merely updated its definition to encapsulate the many people who use *dilemma* for more than two equal choices? Or was someone in my youth being unnecessarily pedantic?
2011/07/13
[ "https://english.stackexchange.com/questions/34000", "https://english.stackexchange.com", "https://english.stackexchange.com/users/6006/" ]
Interesting - I first encountered the expression **false dichotomy**, which I think expresses the intent more accurately despite being slightly pompous. I was then mildly surprised to find the fallacy more popularly written and spoken of as a *false dilemma*, since, as you point out, a *dilemma* is not necessarily, and certainly not intrinsically, limited to two options. I also prefer dichotomy since by definition it suggests a division into two non-overlapping or mutually exclusive parts, and since conflicting opinions are almost never mutually exclusive - the possibility of mediation presupposes the existence of common ground - it more clearly calls out the contrived nature of such thinking. It might be cynical, but I suppose that *false dilemma* has been popularly adopted simply because *dilemma is close enough*, and for the most part ordinary people don't care for precision as much as convenience and familiarity.
A dilemma is just an (unpleasant/difficult) choice, and most such choices involve only two options, but that does not mean that they can only have two options. I suppose it was made worse with the "*on the horns of*" precursor, because most beasts only have two horns, but the horns don't represent the choices, they represent the unpleasantness. "*On the spike of a dilemma*" would work just the same. *Fred's dilemma: Should he do A, B, C, or D?* No problem. I've seen a similar error with "dissect", where people believed it meant "cut in two".
34,000
I was surprised to discover my dictionary had this entry for *dilemma*: > > a situation in which a difficult choice has to be made between two or more alternatives, esp. equally undesirable ones > > > The notion of *dilemma* meaning *two or more* flies against what I was taught about the word. The very idea of a [*false dilemma*](http://en.wikipedia.org/wiki/False_dilemma) is specifically based on the number two. Has my dictionary merely updated its definition to encapsulate the many people who use *dilemma* for more than two equal choices? Or was someone in my youth being unnecessarily pedantic?
2011/07/13
[ "https://english.stackexchange.com/questions/34000", "https://english.stackexchange.com", "https://english.stackexchange.com/users/6006/" ]
The [etymology for *dilemma*](http://www.etymonline.com/index.php?search=dilemma&searchmode=none) reveals that the original meaning of the word was specific to two (di-) premises (lemmas). In fact, Etymology Online states > > It should be used only of situations where someone is forced to choose between two alternatives, both unfavorable to him. > > > So yes, there are those who would argue that the word is only "properly" used for two unpleasant alternatives. I would speculate that your dictionary has been updated to include more modern usage, which is less specific about the number of choices to be made, perhaps because the "important" part of the meaning is that a person must make an unpleasant choice.
First, the words "trilemma" and "multilemma" have been used. I know. I did it in a freshman writing class in 1982-83. They were footnoted with an explanation of their meaning relative to "dilemma". Since I was an avid Latin student circa 1970, using "dilemma" when there are multiple unpleasant choices went against the grain. The adjunct, a bitter wannabe, let these variations pass without comment. As to "dissection", the prefix in this case is not "di-", meaning "two", but "dis-", meaning apart, as in "discombobulated".
34,000
I was surprised to discover my dictionary had this entry for *dilemma*: > > a situation in which a difficult choice has to be made between two or more alternatives, esp. equally undesirable ones > > > The notion of *dilemma* meaning *two or more* flies against what I was taught about the word. The very idea of a [*false dilemma*](http://en.wikipedia.org/wiki/False_dilemma) is specifically based on the number two. Has my dictionary merely updated its definition to encapsulate the many people who use *dilemma* for more than two equal choices? Or was someone in my youth being unnecessarily pedantic?
2011/07/13
[ "https://english.stackexchange.com/questions/34000", "https://english.stackexchange.com", "https://english.stackexchange.com/users/6006/" ]
Interesting - I first encountered the expression **false dichotomy**, which I think expresses the intent more accurately despite being slightly pompous. I was then mildly surprised to find the fallacy more popularly written and spoken of as a *false dilemma*, since, as you point out, a *dilemma* is not necessarily, and certainly not intrinsically, limited to two options. I also prefer dichotomy since by definition it suggests a division into two non-overlapping or mutually exclusive parts, and since conflicting opinions are almost never mutually exclusive - the possibility of mediation presupposes the existence of common ground - it more clearly calls out the contrived nature of such thinking. It might be cynical, but I suppose that *false dilemma* has been popularly adopted simply because *dilemma is close enough*, and for the most part ordinary people don't care for precision as much as convenience and familiarity.
First, the words "trilemma" and "multilemma" have been used. I know. I did it in a freshman writing class in 1982-83. They were footnoted with an explanation of their meaning relative to "dilemma". Since I was an avid Latin student circa 1970, using "dilemma" when there are multiple unpleasant choices went against the grain. The adjunct, a bitter wannabe, let these variations pass without comment. As to "dissection", the prefix in this case is not "di-", meaning "two", but "dis-", meaning apart, as in "discombobulated".
34,000
I was surprised to discover my dictionary had this entry for *dilemma*: > > a situation in which a difficult choice has to be made between two or more alternatives, esp. equally undesirable ones > > > The notion of *dilemma* meaning *two or more* flies against what I was taught about the word. The very idea of a [*false dilemma*](http://en.wikipedia.org/wiki/False_dilemma) is specifically based on the number two. Has my dictionary merely updated its definition to encapsulate the many people who use *dilemma* for more than two equal choices? Or was someone in my youth being unnecessarily pedantic?
2011/07/13
[ "https://english.stackexchange.com/questions/34000", "https://english.stackexchange.com", "https://english.stackexchange.com/users/6006/" ]
First, the words "trilemma" and "multilemma" have been used. I know. I did it in a freshman writing class in 1982-83. They were footnoted with an explanation of their meaning relative to "dilemma". Since I was an avid Latin student circa 1970, using "dilemma" when there are multiple unpleasant choices went against the grain. The adjunct, a bitter wannabe, let these variations pass without comment. As to "dissection", the prefix in this case is not "di-", meaning "two", but "dis-", meaning apart, as in "discombobulated".
Classically, the expression was "on the horns of a dilemma". When you had to choose between two equally unattractive options, it was described with reference to a mythical two-horned beast. I'm sure your dictionary is going with the current usage, which allows more than two options. If we can believe Wikipedia, the story is described [here](http://en.wikipedia.org/wiki/Wikipedia%3aHorns_of_a_dilemma)
34,000
I was surprised to discover my dictionary had this entry for *dilemma*: > > a situation in which a difficult choice has to be made between two or more alternatives, esp. equally undesirable ones > > > The notion of *dilemma* meaning *two or more* flies against what I was taught about the word. The very idea of a [*false dilemma*](http://en.wikipedia.org/wiki/False_dilemma) is specifically based on the number two. Has my dictionary merely updated its definition to encapsulate the many people who use *dilemma* for more than two equal choices? Or was someone in my youth being unnecessarily pedantic?
2011/07/13
[ "https://english.stackexchange.com/questions/34000", "https://english.stackexchange.com", "https://english.stackexchange.com/users/6006/" ]
Interesting - I first encountered the expression **false dichotomy**, which I think expresses the intent more accurately despite being slightly pompous. I was then mildly surprised to find the fallacy more popularly written and spoken of as a *false dilemma*, since, as you point out, a *dilemma* is not necessarily, and certainly not intrinsically, limited to two options. I also prefer dichotomy since by definition it suggests a division into two non-overlapping or mutually exclusive parts, and since conflicting opinions are almost never mutually exclusive - the possibility of mediation presupposes the existence of common ground - it more clearly calls out the contrived nature of such thinking. It might be cynical, but I suppose that *false dilemma* has been popularly adopted simply because *dilemma is close enough*, and for the most part ordinary people don't care for precision as much as convenience and familiarity.
As an updated dictionary indicated to you, and as other sources demonstrated to me too, the word *dilemma* can be used for more than two alternatives. You can view it as if you're using it in a recurring binary sense: you have more than two options, but you consider them in pairs over and over again until you've covered them all, much like how simple algorithms find the largest number in a set by repeated pairwise comparison (a tiny sketch follows this answer). Moreover, another way of referring to the *false dilemma* fallacy is to call it *the fallacy of the excluded middle*. And clearly, the "middle" does not necessarily have to be only between two extremes; it can also be between two *sets* of extreme options. And by way of this, treating a literal "dilemma" as something that tolerates more than just two options can help us realize a third, previously unknown option. These are not just my own philosophical thoughts about it; I refer you to this phrase from Dictionary.com's web page on the word *dilemma*: "But even logicians disagree on whether certain situations are dilemmas or mere syllogisms." And the *Usage note* section of the same source will help you see, without doubt, that this is the correct, modern understanding of the word *dilemma*.
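A tiny sketch of the recurring-binary idea above (the options and their "badness" scores are made up): a running best-so-far is compared with each remaining option two at a time, exactly how a simple max-finding loop works.

```python
# Reduce a many-option choice to repeated binary comparisons,
# the same shape as a simple max/min-finding loop.

def pick_least_bad(options, badness):
    best = options[0]
    for candidate in options[1:]:
        if badness[candidate] < badness[best]:   # one binary comparison
            best = candidate
    return best

print(pick_least_bad(["A", "B", "C"], {"A": 3, "B": 1, "C": 2}))  # -> B
```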
34,000
I was surprised to discover my dictionary had this entry for *dilemma*: > > a situation in which a difficult choice has to be made between two or more alternatives, esp. equally undesirable ones > > > The notion of *dilemma* meaning *two or more* flies against what I was taught about the word. The very idea of a [*false dilemma*](http://en.wikipedia.org/wiki/False_dilemma) is specifically based on the number two. Has my dictionary merely updated its definition to encapsulate the many people who use *dilemma* for more than two equal choices? Or was someone in my youth being unnecessarily pedantic?
2011/07/13
[ "https://english.stackexchange.com/questions/34000", "https://english.stackexchange.com", "https://english.stackexchange.com/users/6006/" ]
The [etymology for *dilemma*](http://www.etymonline.com/index.php?search=dilemma&searchmode=none) reveals that the original meaning of the word was specific to two (di-) premises (lemmas). In fact, Etymology Online states > > It should be used only of situations where someone is forced to choose between two alternatives, both unfavorable to him. > > > So yes, there are those who would argue that the word is only "properly" used for two unpleasant alternatives. I would speculate that your dictionary has been updated to include more modern usage, which is less specific about the number of choices to be made, perhaps because the "important" part of the meaning is that a person must make an unpleasant choice.
Interesting - I first encountered the expression **false dichotomy**, which I think expresses the intent more accurately despite being slightly pompous. I was then mildly surprised to find the fallacy more popularly written and spoken of as a *false dilemma*, since, as you point out, a *dilemma* is not necessarily, and certainly not intrinsically, limited to two options. I also prefer dichotomy since by definition it suggests a division into two non-overlapping or mutually exclusive parts, and since conflicting opinions are almost never mutually exclusive - the possibility of mediation presupposes the existence of common ground - it more clearly calls out the contrived nature of such thinking. It might be cynical, but I suppose that *false dilemma* has been popularly adopted simply because *dilemma is close enough*, and for the most part ordinary people don't care for precision as much as convenience and familiarity.
34,000
I was surprised to discover my dictionary had this entry for *dilemma*: > > a situation in which a difficult choice has to be made between two or more alternatives, esp. equally undesirable ones > > > The notion of *dilemma* meaning *two or more* flies against what I was taught about the word. The very idea of a [*false dilemma*](http://en.wikipedia.org/wiki/False_dilemma) is specifically based on the number two. Has my dictionary merely updated its definition to encapsulate the many people who use *dilemma* for more than two equal choices? Or was someone in my youth being unnecessarily pedantic?
2011/07/13
[ "https://english.stackexchange.com/questions/34000", "https://english.stackexchange.com", "https://english.stackexchange.com/users/6006/" ]
Interesting - I first encountered the expression **false dichotomy**, which I think expresses the intent more accurately despite being slightly pompous. I was then mildly surprised to find the fallacy more popularly written and spoken of as a *false dilemma*, since, as you point out, a *dilemma* is not necessarily, and certainly not intrinsically, limited to two options. I also prefer dichotomy since by definition it suggests a division into two non-overlapping or mutually exclusive parts, and since conflicting opinions are almost never mutually exclusive - the possibility of mediation presupposes the existence of common ground - it more clearly calls out the contrived nature of such thinking. It might be cynical, but I suppose that *false dilemma* has been popularly adopted simply because *dilemma is close enough*, and for the most part ordinary people don't care for precision as much as convenience and familiarity.
Classically, the expression was "on the horns of a dilemma". When you had to choose between two equally unattractive options, it was described with reference to a mythical two-horned beast. I'm sure your dictionary is going with the current usage, which allows more than two options. If we can believe Wikipedia, the story is described [here](http://en.wikipedia.org/wiki/Wikipedia%3aHorns_of_a_dilemma)
34,000
I was surprised to discover my dictionary had this entry for *dilemma*: > > a situation in which a difficult choice has to be made between two or more alternatives, esp. equally undesirable ones > > > The notion of *dilemma* meaning *two or more* flies against what I was taught about the word. The very idea of a [*false dilemma*](http://en.wikipedia.org/wiki/False_dilemma) is specifically based on the number two. Has my dictionary merely updated its definition to encapsulate the many people who use *dilemma* for more than two equal choices? Or was someone in my youth being unnecessarily pedantic?
2011/07/13
[ "https://english.stackexchange.com/questions/34000", "https://english.stackexchange.com", "https://english.stackexchange.com/users/6006/" ]
The [etymology for *dilemma*](http://www.etymonline.com/index.php?search=dilemma&searchmode=none) reveals that the original meaning of the word was specific to two (di-) premises (lemmas). In fact, Etymology Online states > > It should be used only of situations where someone is forced to choose between two alternatives, both unfavorable to him. > > > So yes, there are those who would argue that the word is only "properly" used for two unpleasant alternatives. I would speculate that your dictionary has been updated to include more modern usage, which is less specific about the number of choices to be made, perhaps because the "important" part of the meaning is that a person must make an unpleasant choice.
As an updated dictionary indicated to you, and as other sources demonstrated to me too, the word *dilemma* can be used for more than two alternatives. You can view it as if you're using it in a recurring binary sense: you have more than two options, but you consider them in pairs over and over again until you've covered them all, much like how simple algorithms find the largest number in a set by repeated pairwise comparison. Moreover, another way of referring to the *false dilemma* fallacy is to call it *the fallacy of the excluded middle*. And clearly, the "middle" does not necessarily have to be only between two extremes; it can also be between two *sets* of extreme options. And by way of this, treating a literal "dilemma" as something that tolerates more than just two options can help us realize a third, previously unknown option. These are not just my own philosophical thoughts about it; I refer you to this phrase from Dictionary.com's web page on the word *dilemma*: "But even logicians disagree on whether certain situations are dilemmas or mere syllogisms." And the *Usage note* section of the same source will help you see, without doubt, that this is the correct, modern understanding of the word *dilemma*.
34,000
I was surprised to discover my dictionary had this entry for *dilemma*: > > a situation in which a difficult choice has to be made between two or more alternatives, esp. equally undesirable ones > > > The notion of *dilemma* meaning *two or more* flies against what I was taught about the word. The very idea of a [*false dilemma*](http://en.wikipedia.org/wiki/False_dilemma) is specifically based on the number two. Has my dictionary merely updated its definition to encapsulate the many people who use *dilemma* for more than two equal choices? Or was someone in my youth being unnecessarily pedantic?
2011/07/13
[ "https://english.stackexchange.com/questions/34000", "https://english.stackexchange.com", "https://english.stackexchange.com/users/6006/" ]
First, the words "trilemma" and "multilemma" have been used. I know. I did it in a freshman writing class in 1982-83. They were footnoted with an explanation of their meaning relative to "dilemma". Since I was an avid Latin student circa 1970, using "dilemma" when there are multiple unpleasant choices went against the grain. The adjunct, a bitter wannabe, let these variations pass without comment. As to "dissection", the prefix in this case is not "di-", meaning "two", but "dis-", meaning apart, as in "discombobulated".
As an updated dictionary indicated to you, and as other sources demonstrated to me too, the word *dilemma* can be used for more than two alternatives. You can view it as if you're using it in a recurring binary sense: you have more than two options, but you consider them in pairs over and over again until you've covered them all, much like how simple algorithms find the largest number in a set by repeated pairwise comparison. Moreover, another way of referring to the *false dilemma* fallacy is to call it *the fallacy of the excluded middle*. And clearly, the "middle" does not necessarily have to be only between two extremes; it can also be between two *sets* of extreme options. And by way of this, treating a literal "dilemma" as something that tolerates more than just two options can help us realize a third, previously unknown option. These are not just my own philosophical thoughts about it; I refer you to this phrase from Dictionary.com's web page on the word *dilemma*: "But even logicians disagree on whether certain situations are dilemmas or mere syllogisms." And the *Usage note* section of the same source will help you see, without doubt, that this is the correct, modern understanding of the word *dilemma*.
187,521
A couple of decades ago I graduated from a Russian university with an MS in physics, and my MS thesis contained a critical flaw. In short, the thesis was about static perturbations in a certain physical system, but the system itself is unstable in the very same model, with the instability length being comparable to the characteristic size of the static perturbations in question. The whole investigation didn't make any scientific sense, because the assumed physical system can't be physically realized in the first place. I discovered the flaw a few months before submitting my thesis. It happened rather accidentally: I wanted to formally prove that the system is stable, but the result of my calculations showed that the opposite is true. I had never heard from my supervisor, who had given me the problem for my thesis, or from his colleagues that the system may be unstable. Everyone simply did not even think about that possibility. After I discovered the flaw, I faced the dilemma as to what to do about it. My final choice was to tell no one, not even my supervisor, and simply go on to get my MS degree, deliberately failing to mention my stability analysis and its outcome in my thesis and thereby concealing the flaw. I did so because getting my MS degree asap and moving abroad for PhD studies was my highest priority. If I had raised the issue about the flaw, I would have had to start my MS project over, if allowed at all, and spend two more years to get my MS degree. I should have performed the stability analysis at the very beginning of my MS project, but I didn't, and it's partially the fault of my supervisor, who directed my work in a very rigid way, giving me very specific tasks and deadlines. He never told me to check whether the system is stable. The official research plan, which he and I signed, did not contain any mention of a stability analysis. It was my own initiative to try to prove that the system is stable, because I felt that this was needed to make my investigation complete. I didn't even talk to my supervisor about my idea to perform the stability analysis. After I discovered the flaw, I was sure that if I talked to my supervisor about it, he would say the whole MS project had to be canceled. A product of the Soviet era, he was ruthlessly strict in terms of norms and ethics and had little compassion towards students. After I submitted my thesis, my supervisor insisted that I write and publish an article based on the thesis. I didn't want to do it, but I had to. After all, I needed good recommendation letters from my supervisor, so I had to obey. The article was published in a reputable American journal and was later cited about 20 times. Writing that article was the most unpleasant experience in my scientific career. To clarify, neither my thesis nor the paper claimed that the system is stable. That the system is stable was an implicit inherent assumption of the model, and it was quite a popular model at that time. The model was invented and used for other purposes well before I even started my MS project. That is, my advisor gave me a known model that no one knew to be faulty at the time, and asked me to use it for a new purpose. As explained above, I accidentally discovered and deliberately concealed that the assumption that the system is stable is wrong and can be shown to be wrong in the framework of the very same model, so the model is inherently self-contradictory regardless of the purpose of its use.
As I expected, no one found the flaw, so I successfully got my MS degree in Russia, moved abroad, got a Western PhD degree, and some years later published an article explaining the flaw. In that article, I explicitly wrote that the model and all articles based on it are invalid science. I cited some articles, including my article based on the MS thesis, as examples of invalid science. No one published a comment in response. In private conversations, my colleagues confirmed that my conclusion about the flaw is correct. And the faulty model practically stopped being used after that. Many years have passed since then, and I have built a solid career and have articles published in Physical Review Letters, even as the first author, but I still feel uneasy about the fact that I started my academic career with a misleading MS thesis and deliberately concealed the flaw in order to graduate smoothly. I understand that what I did was research misconduct, but the question I'm still struggling to find the answer to is whether my research misconduct was ethically justifiable under the circumstances. My colleagues say it was, but I'm unsure whether they are frank about it, so I really want to hear what other people have to say. I want truly impartial answers from people who do not know me. This is why I'm posting my question here. Here are some additional details: 1. If I had not concealed the flaw, I would almost certainly not have become a scientist at all, because I could not afford two more undergraduate years in Russia. My parents didn't want to help me financially any further, so I had to get my MS degree asap and move abroad. At that time (late 1990s), living in Russia was very hard because of an economic crisis. Besides, even if I had found a way to finance the additional undergraduate years in Russia, the delay of my graduation would have harmed my chances of winning the prestigious Western PhD stipend that I won. 2. Formally speaking, my MS thesis and the article based on it might be seen as not containing any flaw, because I was given a specific physical model and investigated static perturbations within the framework of that model as requested; the fact that the model is faulty is a separate, although related, thing. The message of my thesis was essentially that if we take that model and make those calculations, we get those results. It was a valid message per se. I was just an undergraduate student who had to do what the supervisor said. He gave me the model and requested certain calculations. I did them absolutely accurately and wrote up the results. 3. My MS project wasn't a significant research project anyway. It was rather a training project to learn how to do calculations and write up results. Even if the model were not faulty, the article would not have had any considerable impact. No one used the results of that project. People merely cited my paper. 4. The only harm due to my concealing the flaw was that a number of scientists continued using the same faulty model for other purposes, not suspecting that the model is faulty. If I had told them that the model is faulty, they would have spent their research efforts on something more useful. But that would have put my own career in danger, because exposing the flaw too early might have resulted in a retraction of my MS degree and a subsequent termination of my PhD studies. I didn't want to take that risk. **Was my research misconduct ethically justifiable under the circumstances?
Or should I have reported the flaw right after I discovered it, even at the huge expense explained above? Or what should I have done after I discovered the flaw?**
2022/08/04
[ "https://academia.stackexchange.com/questions/187521", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/111943/" ]
Honestly, this sounds more like a philosophical / personal ethics question than a professional ethics question. The "academic ethics" answers are what you already know: withholding important, relevant information when you publish something is definitely wrong. And, having done that, calling attention to it later (which you did) was the right thing to do. Beyond that, I think you're left in the messy world of being an imperfect human with competing needs, obligations, and motivations.
Your assumptions about the cost of telling your advisor about the flaw are very likely wrong. **Discovering that a widely used physical model is flawed is a contribution that is definitely strong enough for a master's thesis.** So, to answer your question, you should have told your advisor, submitted your proof of instability as your thesis, received a well-deserved excellent grade, and gone on with your career. The second-best option would have been to tell him after your graduation was sealed but before submitting the paper. Regardless: for a young student, feeling confused and scared in that situation is understandable, and, in legal parlance, it is clearly a mitigating circumstance. It does not nullify the fact that there was misconduct, though. Even if nobody "used" your result, it did its share of the damage: the more papers based on the model are published, the harder it becomes to conceive that it might be invalid. However, by now, if there was any such damage, you have clearly undone it. And even the most heinous crimes, which this one is not, usually have a statute of limitations.
Just to make the advice formal: you were most likely wrong not to bring the issue to your advisor when you noticed it. But, given your point 2, that might not have changed anything. On the other hand, it might have delayed your degree while you came to a more complete result. Still, with few exceptions, such as those that literally harm other people, such errors in the past can and should be left in the past. This is especially true if you have learned from them and don't intend to repeat them. Panic is understandable and usually forgivable in such cases. In some religions, for example, there is a concept of "forgiveness" that doesn't require public confession. No one is perfect. No one always does the right thing. But if we learn from our mistakes, we can do better the next time. So, your ethics at the time were questionable, and you likely committed a violation. But let it rest. And you aren't responsible for the fact that others used the same faulty model. That is on them. Had you developed the model yourself and hidden its flaws (misrepresented them), the issue would be more serious. But if the model was accepted at the time, then this is an ordinary artifact of scientific enquiry. You may actually have an opportunity here, though it would be awkward to exploit it: if that model is still being used, leading to suboptimal results, you could make your misgivings known. It shouldn't require a confession of guilt to do so, either. --- You are probably too hard on yourself in point 1.
You had a tough choice to make under difficult conditions. To me, what you did seems reasonable; I might have acted similarly in your place. You did publish the flaw, just not as soon as you discovered it. Unless the model was being used in real-life systems and people came to physical harm because of the flaw during that publishing delay, I don't think you have too much to beat yourself up about. It might have been possible to include the flaw in your thesis (I personally feel it would have made the thesis stronger, not weaker, given that other people were using the model without being aware of the flaw), but I guess we'll never know how things would have gone for you if you'd done that.
I don't think that your MS thesis has a "fatal flaw". While I do recognize that physical realizability is nice, it's not necessary. There is the entire field of abstract mathematics, and I've never heard anyone seriously claim that it's immoral and/or "academic malpractice" to be a mathematician. You never claimed that the system was stable, just that *if* it is, *then* it will behave in the way you described. In fact, many contributions to physics describe situations that are actually physically impossible. To give my favorite example: the Schwarzschild (simple black hole) solution of general relativity (and the associated properties, like the Schwarzschild radius) implicitly requires a completely empty universe, which is impossible, yet it is well known and important.
You are in the clear, provided you have never *published* anything you knew to be flawed at the time you handed in your final checked page proofs. The grey area here is that MSc and PhD theses do count as a publication of sorts, although most people will know that there are usually many loose ends and that (too) many theses are extremely rough around the edges. If the thesis is available online or on a shelf in the university, then someone may consult it and be led astray or, at the very least, waste some of their time. And this is something you should feel uneasy about (but not too uneasy... there is a strange herd instinct in academia whereby everybody keeps working on the same thing everybody else does, long after the flaws have become well known to anyone able to understand the point). You can write a short erratum, outlining that there is an important caveat on the results of the thesis, and ask the library or whoever manages your MSc thesis as a public resource to ensure that the erratum is physically or electronically merged with the thesis. This is essentially how you would fix a flaw in a publication in a physical journal, where the flaw has become apparent to you after its appearance in the public domain. There is no logical reason why this should not work, except that you are operating within the Russian system, which can be horribly inflexible and unreasonable... As to whether you earned the MSc: yes and no. You demonstrated sufficient ability to the satisfaction of the evaluators at the time, so yes. Still, you ought to have had the scientific courage and frankness to inform your supervisor, so no. You were afraid to speak up at the time, fearing it would engender a whole lot of trouble for you with no positive outcome for anyone in the end. I understand that. I have had a Russian PhD student who was traumatised by her former Russian supervisor, and she described him in almost the same words you use to describe yours.
One option is to focus on the future, not the past. Can you do something to make sure no one junior to you is in that position? An ethical test I was taught is to imagine that what you did is one day printed as a scandal story in the New York Times. To put it in a bad light: **"Famous professor concealed fatal flaw in early research study. Morals questioned"** What would you want your truthful response to be? You could already have in the article: > > "Friends of Dr. X note that they were in a precarious position at the > time, not in charge of the direction of the research, and published > their findings once they were in a secure professional position and able to do > so." > > > But even more than that, one might hope to be able to add: > > "Students of Dr. X rushed to defend them. 'Dr. X has always run their > lab so that no student would ever be in that position.' The student > continued, 'Dr. X always has us double-check our studies for those > kinds of fundamental flaws, and ensures that anyone whose study does > fall apart just before submission is helped to graduate, or given the > extra funding they need. They have become a champion of data honesty > in the field, and they passionately support journals that help > scientists publish their failures, to encourage honesty in research.'" > > > I've laid it on a bit thick, and I'm not in the sciences, but hopefully this is helpful: is there something you can do to take the responsibility you feel (earned or not) and pay it forward?
Your assumptions about the cost of telling your advisor about the flaw are very likely wrong. **Discovering that a widely used physical model is flawed is a contribution which is definitely strong enough for a master's thesis.** So, to answer your question, you should have told your advisor, submitted your proof of instability as a thesis, received a well-deserved excellent grade, and gone on with your career. The second-best thing would be to tell them after your graduation was sealed, but before submitting the paper. Regardless: for a young student, feeling confused and scared about the situation is understandable, and, in legal parlance, it is clearly a mitigating circumstance. It does not nullify the fact that there was misconduct, though. Even if nobody "used" your result, it did its share of the damage: the more papers about the model are published, the less conceivable it is that it may be invalid. However, by now, if there were any such damage, you have clearly undone it. And even the most heinous crimes, which this one is not, usually have a statute of limitations.
187,521
A couple of decades ago I graduated from a Russian university with an MS in physics, and my MS thesis contained a critical flaw. In short, the thesis was about static perturbations in a certain physical system, but the system itself is unstable in the very same model, with the instability length being comparable to the characteristic size of the static perturbations in question. The whole investigation didn't make any scientific sense, because the assumed physical system can't be physically realized in the first place. I discovered the flaw a few months before submitting my thesis. It happened rather accidentally: I wanted to formally prove that the system is stable, but the result of my calculations showed that the opposite is true. I had never heard from my supervisor, who had given me the problem for my thesis, or from his colleagues that the system may be unstable. Everyone simply did not even think about that possibility. After I discovered the flaw, I faced the dilemma as to what to do about it. My final choice was to tell no one, even my supervisor, and simply go on to get my MS degree, deliberately failing to mention my stability analysis and its outcome in my thesis and thereby concealing the flaw. I did so because getting my MS degree asap and moving abroad for PhD studies was my highest priority. If I had raised the issue about the flaw, I would have had to start my MS project over, if allowed at all, and spend two more years to get my MS degree. I should have performed the stability analysis at the very beginning of my MS project, but I didn't, and it's partially a fault of my supervisor, who directed my work in a very rigid way, giving me very specific tasks and deadlines. He never told me to check whether the system is stable. The official research plan, which he and I signed, did not contain any mention of a stability analysis. It was my own initiative to try to prove that the system is stable, because I felt that this was needed to make my investigation complete. I didn't even talk to my supervisor about my idea to perform the stability analysis. After I discovered the flaw, I was sure that if I talked to my supervisor about it, he would say the whole MS project had to be canceled. A product of the Soviet era, he was ruthlessly strict in terms of norms and ethics and had little compassion towards students. After I submitted my thesis, my supervisor insisted that I write and publish an article based on the thesis. I didn't want to do it, but I had to. After all, I needed good recommendation letters from my supervisor, so I had to obey. The article was published in a reputable American journal and was later cited about 20 times. Writing that article was the most unpleasant experience in my scientific career. To clarify, neither my thesis nor the paper claimed that the system is stable. That the system is stable was an implicit inherent assumption of the model, and it was quite a popular model at that time. The model was invented and used for other purposes well before I even started my MS project. That is, my advisor gave me a known model that no one knew to be faulty at the time, and asked me to use it for a new purpose. As explained above, I accidentally discovered and deliberately concealed that the assumption that the system is stable is wrong and can be shown to be wrong in the framework of the very same model, so the model is inherently self-contradictory regardless of the purpose of its use. 
As I expected, no one found the flaw, so I successfully got my MS degree in Russia, moved abroad, got a Western PhD degree, and some years later published an article explaining the flaw. In that article, I explicitly wrote that the model and all articles based on it are invalid science. I cited some articles, including my article based on the MS thesis, as examples of invalid science. No one published a comment in response. In private conversations, my colleagues confirmed that my conclusion about the flaw is correct. And the faulty model practically stopped being used after that. Many years have passed since then, and I have built a solid career and have articles published in Physical Review Letters, even as the first author, but I still feel uneasy about the fact that I started my academic career with a misleading MS thesis and deliberately concealed the flaw in order to graduate smoothly. I understand that what I did was research misconduct, but the question I'm still struggling to find the answer to is whether my research misconduct was ethically justifiable under the circumstances. My colleagues say it was, but I'm unsure whether they are frank about it, so I really want to hear what other people have to say. I want truly impartial answers from people who do not know me. This is why I'm posting my question here. Here are some additional details: 1. If I had not concealed the flaw, I would almost certainly not have become a scientist at all, because I could not afford two more undergraduate years in Russia. My parents didn't want to help me financially any further, so I had to get my MS degree asap and move abroad. At that time (late 1990s), living in Russia was very hard because of an economic crisis. Besides, even if I had found a way to finance the additional undergraduate years in Russia, the delay of my graduation would have harmed my chances of winning the prestigious Western PhD stipend that I won. 2. Formally speaking, my MS thesis and the article based on it might be seen as not containing any flaw, because I was given a specific physical model and investigated static perturbations within the framework of that model as requested; the fact that the model is faulty is a separate, although related, thing. The message of my thesis was essentially that if we take that model and make those calculations, we get those results. It was a valid message per se. I was just an undergraduate student who had to do what the supervisor said. He gave me the model and requested certain calculations. I did them absolutely accurately and wrote up the results. 3. My MS project wasn't a significant research project anyway. It was rather a training project to learn how to do calculations and write up results. Even if the model were not faulty, the article would not have had any considerable impact. No one used the results of that project. People merely cited my paper. 4. The only harm from my concealing the flaw was that a number of scientists continued using the same faulty model for other purposes, unsuspecting that the model is faulty. If I had told them that the model is faulty, they would have spent their research efforts on something more useful. But that would have put my own career in danger, because exposing the flaw too early might have resulted in a retraction of my MS degree and a subsequent termination of my PhD studies. I didn't want to take that risk. **Was my research misconduct ethically justifiable under the circumstances?
Or should I have reported the flaw right after I discovered it, even at the huge expense explained above? Or what should I have done after I discovered the flaw?**
2022/08/04
[ "https://academia.stackexchange.com/questions/187521", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/111943/" ]
Just to make the advice formal, you were most likely wrong in not bringing the issue to your advisor when you noticed it. But, given your point 2, that might not have changed anything. On the other hand, it might have delayed your degree while you came to a more complete result. But, with few exceptions, such as those that literally harm other people, such errors in the past can and should be left in the past. This is especially true if you have learned from them and don't intend to repeat them. Panic is understandable and usually forgivable for such things. In some religions, for example, there is a concept of "forgiveness" that doesn't require public confession. No one is perfect. No one always does the right thing. But if we learn from our mistakes we can do better the next time. So, your ethics at the time are questionable, and you likely committed a violation. But, let it rest. And, you aren't responsible for the fact that others used the same faulty model. That is on them. Had you developed the model yourself and hidden its flaws (misrepresented them), then the issue would be more serious. But if it was accepted at the time, then it is an artifact of scientific enquiry. You may have an opportunity, actually, though it would be awkward to exploit it. If that model is still being used, leading to suboptimal results, you could make your misgivings known. It shouldn't require a confession of guilt to do so, either. --- You are probably too hard on yourself in point 1.
There are already better answers in this thread, but as mentioned elsewhere, this has more to do with personal ethics/philosophy. In this sense, citing Nietzsche seems pertinent (needless to say, when taken with a grain of salt): > > One is healthy when one can laugh at the earnestness and zeal with > which one has been hypnotized by any single detail of our life, and > the bite of conscience is like a dog biting on a stone. > > > That which happened already happened, and moreover, could not possibly have happened otherwise. Another point that I didn't see emphasized here is that rather than brooding away and being remorseful about this event, which has absolutely no value after all this time and, what is more, benefits no one, you could actually think of ways of benefiting others to make up for whatever you feel you have to make up for. Indeed, you seem to be in a position to spread some valuable lessons. E.g. you could sensitize coworkers to be lenient and flexible with students in such situations and, what is more, lecture students about both sides of the coin, which you elaborate most clearly in your question here. Finally, it might be worth mentioning (another ethical standpoint) that great things frequently have muddy/dark beginnings. I would highly recommend studying Heraclitus' unity of opposites at some point.
187,521
A couple of decades ago I graduated from a Russian university with an MS in physics, and my MS thesis contained a critical flaw. In short, the thesis was about static perturbations in a certain physical system, but the system itself is unstable in the very same model, with the instability length being comparable to the characteristic size of the static perturbations in question. The whole investigation didn't make any scientific sense, because the assumed physical system can't be physically realized in the first place. I discovered the flaw a few months before submitting my thesis. It happened rather accidentally: I wanted to formally prove that the system is stable, but the result of my calculations showed that the opposite is true. I had never heard from my supervisor, who had given me the problem for my thesis, or from his colleagues that the system may be unstable. Everyone simply did not even think about that possibility. After I discovered the flaw, I faced the dilemma as to what to do about it. My final choice was to tell no one, even my supervisor, and simply go on to get my MS degree, deliberately failing to mention my stability analysis and its outcome in my thesis and thereby concealing the flaw. I did so because getting my MS degree asap and moving abroad for PhD studies was my highest priority. If I had raised the issue about the flaw, I would have had to start my MS project over, if allowed at all, and spend two more years to get my MS degree. I should have performed the stability analysis at the very beginning of my MS project, but I didn't, and it's partially a fault of my supervisor, who directed my work in a very rigid way, giving me very specific tasks and deadlines. He never told me to check whether the system is stable. The official research plan, which he and I signed, did not contain any mention of a stability analysis. It was my own initiative to try to prove that the system is stable, because I felt that this was needed to make my investigation complete. I didn't even talk to my supervisor about my idea to perform the stability analysis. After I discovered the flaw, I was sure that if I talked to my supervisor about it, he would say the whole MS project had to be canceled. A product of the Soviet era, he was ruthlessly strict in terms of norms and ethics and had little compassion towards students. After I submitted my thesis, my supervisor insisted that I write and publish an article based on the thesis. I didn't want to do it, but I had to. After all, I needed good recommendation letters from my supervisor, so I had to obey. The article was published in a reputable American journal and was later cited about 20 times. Writing that article was the most unpleasant experience in my scientific career. To clarify, neither my thesis nor the paper claimed that the system is stable. That the system is stable was an implicit inherent assumption of the model, and it was quite a popular model at that time. The model was invented and used for other purposes well before I even started my MS project. That is, my advisor gave me a known model that no one knew to be faulty at the time, and asked me to use it for a new purpose. As explained above, I accidentally discovered and deliberately concealed that the assumption that the system is stable is wrong and can be shown to be wrong in the framework of the very same model, so the model is inherently self-contradictory regardless of the purpose of its use. 
As I expected, no one found the flaw, so I successfully got my MS degree in Russia, moved abroad, got a Western PhD degree, and some years later published an article explaining the flaw. In that article, I explicitly wrote that the model and all articles based on it are invalid science. I cited some articles, including my article based on the MS thesis, as examples of invalid science. No one published a comment in response. In private conversations, my colleagues confirmed that my conclusion about the flaw is correct. And the faulty model practically stopped being used after that. Many years have passed since then, and I have built a solid career and have articles published in Physical Review Letters, even as the first author, but I still feel uneasy about the fact that I started my academic career with a misleading MS thesis and deliberately concealed the flaw in order to graduate smoothly. I understand that what I did was research misconduct, but the question I'm still struggling to find the answer to is whether my research misconduct was ethically justifiable under the circumstances. My colleagues say it was, but I'm unsure whether they are frank about it, so I really want to hear what other people have to say. I want truly impartial answers from people who do not know me. This is why I'm posting my question here. Here are some additional details: 1. If I had not concealed the flaw, I would almost certainly not have become a scientist at all, because I could not afford two more undergraduate years in Russia. My parents didn't want to help me financially any further, so I had to get my MS degree asap and move abroad. At that time (late 1990s), living in Russia was very hard because of an economic crisis. Besides, even if I had found a way to finance the additional undergraduate years in Russia, the delay of my graduation would have harmed my chances of winning the prestigious Western PhD stipend that I won. 2. Formally speaking, my MS thesis and the article based on it might be seen as not containing any flaw, because I was given a specific physical model and investigated static perturbations within the framework of that model as requested; the fact that the model is faulty is a separate, although related, thing. The message of my thesis was essentially that if we take that model and make those calculations, we get those results. It was a valid message per se. I was just an undergraduate student who had to do what the supervisor said. He gave me the model and requested certain calculations. I did them absolutely accurately and wrote up the results. 3. My MS project wasn't a significant research project anyway. It was rather a training project to learn how to do calculations and write up results. Even if the model were not faulty, the article would not have had any considerable impact. No one used the results of that project. People merely cited my paper. 4. The only harm from my concealing the flaw was that a number of scientists continued using the same faulty model for other purposes, unsuspecting that the model is faulty. If I had told them that the model is faulty, they would have spent their research efforts on something more useful. But that would have put my own career in danger, because exposing the flaw too early might have resulted in a retraction of my MS degree and a subsequent termination of my PhD studies. I didn't want to take that risk. **Was my research misconduct ethically justifiable under the circumstances?
Or should I have reported the flaw right after I discovered it, even at the huge expense explained above? Or what should I have done after I discovered the flaw?**
2022/08/04
[ "https://academia.stackexchange.com/questions/187521", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/111943/" ]
Just to make the advice formal, you were most likely wrong in not bringing the issue to your advisor when you noticed it. But, given your point 2, that might not have changed anything. On the other hand, it might have delayed your degree while you came to a more complete result. But, with few exceptions, such as those that literally harm other people, such errors in the past can and should be left in the past. This is especially true if you have learned from them and don't intend to repeat them. Panic is understandable and usually forgivable for such things. In some religions, for example, there is a concept of "forgiveness" that doesn't require public confession. No one is perfect. No one always does the right thing. But if we learn from our mistakes we can do better the next time. So, your ethics at the time are questionable, and you likely committed a violation. But, let it rest. And, you aren't responsible for the fact that others used the same faulty model. That is on them. Had you developed the model yourself and hidden its flaws (misrepresented them), then the issue would be more serious. But if it was accepted at the time, then it is an artifact of scientific enquiry. You may have an opportunity, actually, though it would be awkward to exploit it. If that model is still being used, leading to suboptimal results, you could make your misgivings known. It shouldn't require a confession of guilt to do so, either. --- You are probably too hard on yourself in point 1.
The general question: --------------------- > > Is it ethically justifiable to conceal a fatal conceptual flaw in a thesis > > > No, it's not justifiable to do so, regardless of the circumstances. In some extreme circumstances (e.g. gun-to-your-head, not the circumstances of your predicament) it might be excusable, but not justifiable. > > to avoid an unaffordable 2-year setback if the flaw is the advisor's fault? > > > People who are not you, nor your advisor, nor in the same department/university as the both of you, should not have faulty research misrepresented to them because of inter-personal or intra-institutional issues between the authors of the research. --- Your specific case ------------------ I agree with Ernest Bredar's [answer](https://academia.stackexchange.com/a/187577/7319): Your M.Sc. was not fatally flawed. It was *somewhat* flawed. Not to mention the fact that your analysis might apply in a somewhat-similar situation where the system *is* stable. > > I didn't want to do it, but I had to. > > > You didn't have to. > > I still feel uneasy about the fact that I started my academic career with a misleading MS thesis and deliberately concealed the flaw in order to graduate smoothly. > > > Luckily, your ethical misdeed did not seem to lead researchers along invalid paths, and once you published the extra paper, you may not have been "absolved", but you cut off the possibility of future "damage" from your action, which is about the best you could hope for. Yes, you did something wrong. No, you probably would not have lost your M.Sc. over it (AFAICT). You'll just have to acknowledge that you are not a morally perfect person - and also, that not all crimes, let alone misdeeds, are punished; so you can't be purified or excused by suffering or punishment. Try to use your sense of guilt as a motivator to do right by others, and to encourage your students to be honest and forthcoming, on the one hand, and forgiving on the other. That's a sort of penance, or atonement, that to me seems fitting.
64,248
How would you distinguish the man from the machine?
2011/04/01
[ "https://softwareengineering.stackexchange.com/questions/64248", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/18476/" ]
Challenge it to a game of "Global Thermonuclear War". Or perhaps a game of tic-tac-toe versus itself.
If you were to answer dishonestly, how would you answer this question?
64,248
How would you distinguish the man from the machine?
2011/04/01
[ "https://softwareengineering.stackexchange.com/questions/64248", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/18476/" ]
**What is the airspeed velocity of an unladen swallow?**
What would an M look like if you were standing on your head?
64,248
How would you distinguish the man from the machine?
2011/04/01
[ "https://softwareengineering.stackexchange.com/questions/64248", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/18476/" ]
I'd just ask him "If you could pose a question to a turing test candidate, what would it be?".
Anything ironic. So far, machines are totally incapable of interpreting jokes and irony. Then again, some people are too, so you may get some false negatives ;-)
64,248
How would you distinguish the man from the machine?
2011/04/01
[ "https://softwareengineering.stackexchange.com/questions/64248", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/18476/" ]
"Are you Watson?" :-p Jokes aside, I think it is impossible to determine man from machine with a single question, especially without any context info.
"Sorry I'm late. Got held up at my mother's funeral." Would any intelligent being other than a human respond to that as a human would? I think not.
64,248
How would you distinguish the man from the machine?
2011/04/01
[ "https://softwareengineering.stackexchange.com/questions/64248", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/18476/" ]
"Will your answer to this question be negative?" Note: the original Turing test proposal was for the computer to pretend to be a woman, the interviewer to be a man, and the test limited to five minutes. If the man was unable to determine if the computer was a woman or not in five minutes, we would have to conclude that the computer was intelligent, "because the converse is not polite".
I would ask anything where there isn't a clear-cut answer and which usually involves strong or varied opinions and/or emotions from human participants. For example: * What do you think of the current situation in Libya? * What are your thoughts on the recent disaster in Japan? * How do you think we should resolve the humanitarian crisis in the Ivory Coast? * Why do you think Coldplay became so popular? * What do you think about Charlie Sheen? * What new technologies should we foresee in the next twenty years?
64,248
How would you distinguish the man from the machine?
2011/04/01
[ "https://softwareengineering.stackexchange.com/questions/64248", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/18476/" ]
You’re in a desert walking along in the sand when all of the sudden you look down, and you see a tortoise, it’s crawling toward you. You reach down, you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t, not without your help. But you’re not helping. Why is that?
*How are you feeling today?* and go on with empathic conversation.
64,248
How would you distinguish the man from the machine?
2011/04/01
[ "https://softwareengineering.stackexchange.com/questions/64248", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/18476/" ]
"Will your answer to this question be negative?" Note: the original Turing test proposal was for the computer to pretend to be a woman, the interviewer to be a man, and the test limited to five minutes. If the man was unable to determine if the computer was a woman or not in five minutes, we would have to conclude that the computer was intelligent, "because the converse is not polite".
Ask a logical question which requires infinite recursion for evaluation and hope the programmers weren't smart enough to account for that kind of question.
64,248
How would you distinguish the man from the machine?
2011/04/01
[ "https://softwareengineering.stackexchange.com/questions/64248", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/18476/" ]
Challenge it to a game of "Global Thermonuclear War". Or perhaps a game of tic-tac-toe versus itself.
**What is the airspeed velocity of an unladen swallow?**
64,248
How would you distinguish the man from the machine?
2011/04/01
[ "https://softwareengineering.stackexchange.com/questions/64248", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/18476/" ]
"Why are manhole covers round?" Perhaps followed up with "Where do you see yourself in 5 years?" --- EDIT: I've come to think of that Douglas Hofstadter has done a delightful piece on this exact subject (including the highest rated answer) and found an online version at <http://www.cse.unr.edu/~sushil/class/ai/papers/coffeehouse.html>. Especially the scenario where he tries to disclose Nicolai in the "Post Scriptum" section is a fantastic read. I believe I read this in Metamagical Themes.
Why might a guy say to another guy "Oh, be a fine girl kiss me"?
64,248
How would you distinguish the man from the machine?
2011/04/01
[ "https://softwareengineering.stackexchange.com/questions/64248", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/18476/" ]
"Will your answer to this question be negative?" Note: the original Turing test proposal was for the computer to pretend to be a woman, the interviewer to be a man, and the test limited to five minutes. If the man was unable to determine if the computer was a woman or not in five minutes, we would have to conclude that the computer was intelligent, "because the converse is not polite".
What would an M look like if you were standing on your head?
287,622
Stars can be crushed by gravity and create black holes or neutron stars. Why doesn't the same happen with any planet if it is in the same space time? Please explain it in simple way. Note: I am not a physicist but have some interest in physics.
2016/10/20
[ "https://physics.stackexchange.com/questions/287622", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/133633/" ]
In very simple terms which I hope you will understand. The gravitational force of attraction depends on mass and distance. For the atoms which make up the Earth there are two forces acting on them, the gravitational attraction due to all the other atoms and the Coulomb/electrostatic repulsive force between the electrons orbiting the atoms. The electron shells repel one another. As mass increases the gravitational attractive force increases and the atoms come closer together and the repulsion between the electron shells increases to balance the increased gravitational attraction. If the mass increases even more the Coulomb repulsive force cannot balance the increased gravitational attractive force and the atom collapses with protons and electrons combining to form neutrons. You then have an entity composed of neutrons - a neutron star. There is still the gravitational attractive force between neutrons but now the repulsive force is provided by the strong nuclear force between the neutrons - neutrons do not like to be "squashed". Increase the mass even more and the gravitational attractive force increases and so does the repulsive force between neutrons by the neutrons coming closer together. Eventually if you increase the mass even more the repulsive force between the neutrons is not sufficient to balance the gravitational attractive force between the neutrons and so you get a further collapse into a black hole. So the simple answer to your question is that the gravitational forces between the atoms which make up a planet are not large enough to initiate catastrophic collapse because the mass of a planet is not large enough.
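To put a hedged number on why planetary masses never come close to winning this tug-of-war (an illustration added here, not part of the original answer): for two protons, the electrostatic repulsion exceeds the gravitational attraction by roughly a factor of $10^{36}$, and since both forces fall off as $1/r^2$, the ratio is independent of separation. Gravity only wins after an enormous amount of electrically neutral mass is piled together:

```latex
\frac{F_\text{Coulomb}}{F_\text{gravity}}
  = \frac{e^{2}/(4\pi\varepsilon_{0})}{G\,m_{p}^{2}}
  = \frac{(1.6\times10^{-19}\,\mathrm{C})^{2}\,(9.0\times10^{9}\,\mathrm{N\,m^{2}\,C^{-2}})}
         {(6.7\times10^{-11}\,\mathrm{N\,m^{2}\,kg^{-2}})\,(1.7\times10^{-27}\,\mathrm{kg})^{2}}
  \approx 1.2\times10^{36}
```

This is why a planet's worth of mass merely compresses rock into a sphere, while a sufficiently massive star's worth can overwhelm the electromagnetic (and eventually even the neutron) resistance described above.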
Planets *are* crushed by gravity! That's why, for example, Earth is a densely packed spherical rock rather than a loose cloud of dust. There's just not *enough* crushing 'force' to do more than that.
287,622
Stars can be crushed by gravity and create black holes or neutron stars. Why doesn't the same happen with any planet if it is in the same space time? Please explain it in simple way. Note: I am not a physicist but have some interest in physics.
2016/10/20
[ "https://physics.stackexchange.com/questions/287622", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/133633/" ]
Planets *are* crushed by gravity! That's why, for example, Earth is a densely packed spherical rock rather than a loose cloud of dust. There's just not *enough* crushing 'force' to do more than that.
You must understand that there are two factors involved here: the first is gravity, which tries to pull the planet's matter together and crush it, and the second resists this crushing. For example, the Pauli exclusion principle gives rise to repulsion in some cases, and in stars nuclear reactions also resist the crushing. This interplay of two different factors leads to crushing in some cases but not in all.
287,622
Stars can be crushed by gravity and create black holes or neutron stars. Why doesn't the same happen with any planet if it is in the same space time? Please explain it in simple way. Note: I am not a physicist but have some interest in physics.
2016/10/20
[ "https://physics.stackexchange.com/questions/287622", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/133633/" ]
In very simple terms which I hope you will understand. The gravitational force of attraction depends on mass and distance. For the atoms which make up the Earth there are two forces acting on them, the gravitational attraction due to all the other atoms and the Coulomb/electrostatic repulsive force between the electrons orbiting the atoms. The electron shells repel one another. As mass increases the gravitational attractive force increases and the atoms come closer together and the repulsion between the electron shells increases to balance the increased gravitational attraction. If the mass increases even more the Coulomb repulsive force cannot balance the increased gravitational attractive force and the atom collapses with protons and electrons combining to form neutrons. You then have an entity composed of neutrons - a neutron star. There is still the gravitational attractive force between neutrons but now the repulsive force is provided by the strong nuclear force between the neutrons - neutrons do not like to be "squashed". Increase the mass even more and the gravitational attractive force increases and so does the repulsive force between neutrons by the neutrons coming closer together. Eventually if you increase the mass even more the repulsive force between the neutrons is not sufficient to balance the gravitational attractive force between the neutrons and so you get a further collapse into a black hole. So the simple answer to your question is that the gravitational forces between the atoms which make up a planet are not large enough to initiate catastrophic collapse because the mass of a planet is not large enough.
The particles which make up atoms are electrically charged, and they repel each other when they get too close to each other. Gravitational forces only attract one particle to another, and never repel, but they're extremely weak compared to the electrical force. To create a black hole, the gravitational force needs to overcome these repulsive forces between particles. For objects like the earth and the sun, the repulsive forces are much greater than the gravitational force.
287,622
Stars can be crushed by gravity and create black holes or neutron stars. Why doesn't the same happen with any planet if it is in the same space time? Please explain it in simple way. Note: I am not a physicist but have some interest in physics.
2016/10/20
[ "https://physics.stackexchange.com/questions/287622", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/133633/" ]
Planets *are* crushed by gravity! That's why, for example, Earth is a densely packed spherical rock rather than a loose cloud of dust. There's just not *enough* crushing 'force' to do more than that.
The particles which make up atoms are electrically charged, and they repel each other when they get too close to each other. Gravitational forces only attract one particle to another, and never repel, but they're extremely weak compared to the electrical force. To create a black hole, the gravitational force needs to overcome these repulsive forces between particles. For objects like the earth and the sun, the repulsive forces are much greater than the gravitational force.
194,785
I have published an application and it went live after sometime. Now I have enabled automatic updates in the play store settings. I can see my app there and it's showing me update button. But why it is not automatically updated? Also i haven't got any notification about new update available? Can anyone please tell how automatic updates work? App starts downloading update the time it is available or play store check updates after some interval?
2018/04/16
[ "https://android.stackexchange.com/questions/194785", "https://android.stackexchange.com", "https://android.stackexchange.com/users/196661/" ]
When an update to an application is published in the Play Store, updates start rolling out. Not all users will see the update immediately; I think the figure is a few hours for the update to be available throughout the country or world. The Play Store app checks for updates periodically, I think a few times daily. You can configure toggles to: 1. Notify when updates are available 2. Automatically install updates 3. Notify when automatic updates have occurred A user will only be alerted for an update if the application is installed from the Play Store. As a developer, you may have installed from adb, in which case the application will not update from the Play Store. You can always reinstall the Play Store version, which will be updated.
Yes, as answered above, updates don't happen right away. It could take up to 24 hours for an update to reach all users. The Play Store generally updates apps when the device is idle and plugged in.
15,497
AI experts like Ben Goertzel and Ray Kurzweil say that AGI will be developed in the coming decade. Are they credible?
2019/09/17
[ "https://ai.stackexchange.com/questions/15497", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/17601/" ]
As a riff on my answer to [this question](https://ai.stackexchange.com/questions/7875/is-the-singularity-something-to-be-taken-seriously/7888#7888), which is about the broader concern of the development of the singularity, rather than the narrower concern of the development of AGI: I can say that among the AI researchers I interact with, it is far more common to view the development of AGI in the next decade as speculation (or even wild speculation) than as settled fact. This is borne out by [surveys of AI researchers](https://nickbostrom.com/papers/survey.pdf), with 80% thinking "The earliest that machines will be able to simulate learning and every other aspect of human intelligence" is in "more than 50 years" or "never", and just a few percent thinking that such forms of AI are "near". It's possible to quibble over what exactly is meant by AGI, but it seems likely that for us to reach AGI, we'd need to simulate human-level intelligence in at least most of its aspects. The fact that AI researchers think this is very far off suggests that they also think AGI is not right around the corner. I suspect that the reasons AI researchers are less optimistic about AGI than Kurzweil or others in tech (but not in AI) are rooted in the fact that we still don't have a good understanding of what human intelligence *is*. It's difficult to simulate something that we can't pin down. Another factor is that most AI researchers have been working in AI for a long time. There are countless past proposals for AGI frameworks, and *all* of them have been not just wrong, but in the end, more or less hopelessly wrong. I think this creates an innate skepticism of AGI, which may perhaps be unfair. Nonetheless, expert opinion on this one is pretty well settled: no AGI this decade, and maybe not ever!
I wouldn't take anything Ray Kurzweil says especially seriously. Actual AI experts spend large quantities of time reading the existing scientific literature, and working to expand it. Because Kurzweil doesn't spend much of his time actually *learning* about AI, he has plenty of time in which to talk about it. Loudly. This is harmful to research, because 1) a lot of the uninformed predictions he and others make resemble doomsday scenarios, and 2) the predictions of *good* things have insanely optimistic time frames attached, and when they don't come true, research funding may be lost because AI hasn't lived up to what people thought it promised. AI research has been progressing very rapidly in the last decade, but if we're being honest, a lot of the credit for that has to go to the people who develop research-grade graphics cards. The ability to perform *massive* amounts of linear algebra in parallel has allowed us to use techniques that we've known about for a couple decades, but that were too computationally expensive to be practical at the time. And because those techniques are now practical, a lot of current research is applying those techniques to new problems, and modifying and improving them based on what we've learned. (I don't want to understate the contributions here; there have been a lot of *really* clever ideas developed in the last ten years. But it's mostly consistent iterative improvement of techniques that already existed, rather than completely revolutionary ideas.) To make human-equivalent AIs, we'll probably need to make a few of those giant conceptual leaps. And each of those leaps will then need to be followed up by a decade or two of iterative improvement, because that's how the process works. Case in point, the revolutionary idea that eventually led to all the Deep Learning models out there today was [this one](https://www.nature.com/articles/323533a0), dated 1986. First, there was the revolutionary idea. It was followed up by a bunch of work that built on it and expanded it in new directions. The work eventually stagnated because of hardware constraints. Then hardware scientists and engineers made some advances that let us continue work, and only then did we finally start getting the major applications that we're seeing today. We know human-level intelligence is possible, since humans manage it. I have little doubt that we'll figure out how to do it with AI eventually (maybe in my lifetime, maybe not). But if you want Kurzweil's predictions to be even remotely plausible, you might want to add a zero to the end of most of his time frames.
15,497
AI experts like Ben Goertzel and Ray Kurzweil say that AGI will be developed in the coming decade. Are they credible?
2019/09/17
[ "https://ai.stackexchange.com/questions/15497", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/17601/" ]
As a riff on my answer to [this question](https://ai.stackexchange.com/questions/7875/is-the-singularity-something-to-be-taken-seriously/7888#7888), which is about the broader concern of the development of the singularity, rather than the narrower concern of the development of AGI: I can say that among the AI researchers I interact with, it is far more common to view the development of AGI in the next decade as speculation (or even wild speculation) than as settled fact. This is borne out by [surveys of AI researchers](https://nickbostrom.com/papers/survey.pdf), with 80% thinking "The earliest that machines will be able to simulate learning and every other aspect of human intelligence" is in "more than 50 years" or "never", and just a few percent thinking that such forms of AI are "near". It's possible to quibble over what exactly is meant by AGI, but it seems likely that for us to reach AGI, we'd need to simulate human-level intelligence in at least most of its aspects. The fact that AI researchers think this is very far off suggests that they also think AGI is not right around the corner. I suspect that the reasons AI researchers are less optimistic about AGI than Kurzweil or others in tech (but not in AI) are rooted in the fact that we still don't have a good understanding of what human intelligence *is*. It's difficult to simulate something that we can't pin down. Another factor is that most AI researchers have been working in AI for a long time. There are countless past proposals for AGI frameworks, and *all* of them have been not just wrong, but in the end, more or less hopelessly wrong. I think this creates an innate skepticism of AGI, which may perhaps be unfair. Nonetheless, expert opinion on this one is pretty well settled: no AGI this decade, and maybe not ever!
My simple answer is **NO**. Let me elaborate. If you closely observe nature, you see that nothing changes drastically all of a sudden. Even when it does, it doesn't stay for long. The field of AI has just started, and it needs a lot more evolution to achieve AGI. Though AI is solving many directed problems like face recognition, speech recognition and many more (applications are innumerable), all of these can be considered Narrow AI. They solve a particular task. For AI to reach the state where it can do better than humans in all aspects, not only do we need breakthroughs in algorithms, we also need many more breakthroughs in electronics and physics. Please read this article. The summary is that experts (around 350) estimate that there's a 50% chance that AGI will occur by 2060. So, there is a very bleak chance that AGI will become a reality in the next decade. <https://blog.aimultiple.com/artificial-general-intelligence-singularity-timing/>
15,497
AI experts like Ben Goertzel and Ray Kurzweil say that AGI will be developed in the coming decade. Are they credible?
2019/09/17
[ "https://ai.stackexchange.com/questions/15497", "https://ai.stackexchange.com", "https://ai.stackexchange.com/users/17601/" ]
I wouldn't take anything Ray Kurzweil says especially seriously. Actual AI experts spend large quantities of time reading the existing scientific literature, and working to expand it. Because Kurzweil doesn't spend much of his time actually *learning* about AI, he has plenty of time in which to talk about it. Loudly. This is harmful to research, because 1) a lot of the uninformed predictions he and others make resemble doomsday scenarios, and 2) the predictions of *good* things have insanely optimistic time frames attached, and when they don't come true, research funding may be lost because AI hasn't lived up to what people thought it promised. AI research has been progressing very rapidly in the last decade, but if we're being honest, a lot of the credit for that has to go to the people who develop research-grade graphics cards. The ability to perform *massive* amounts of linear algebra in parallel has allowed us to use techniques that we've known about for a couple decades, but that were too computationally expensive to be practical at the time. And because those techniques are now practical, a lot of current research is applying those techniques to new problems, and modifying and improving them based on what we've learned. (I don't want to understate the contributions here; there have been a lot of *really* clever ideas developed in the last ten years. But it's mostly consistent iterative improvement of techniques that already existed, rather than completely revolutionary ideas.) To make human-equivalent AIs, we'll probably need to make a few of those giant conceptual leaps. And each of those leaps will then need to be followed up by a decade or two of iterative improvement, because that's how the process works. Case in point, the revolutionary idea that eventually led to all the Deep Learning models out there today was [this one](https://www.nature.com/articles/323533a0), dated 1986. First, there was the revolutionary idea. It was followed up by a bunch of work that built on it and expanded it in new directions. The work eventually stagnated because of hardware constraints. Then hardware scientists and engineers made some advances that let us continue work, and only then did we finally start getting the major applications that we're seeing today. We know human-level intelligence is possible, since humans manage it. I have little doubt that we'll figure out how to do it with AI eventually (maybe in my lifetime, maybe not). But if you want Kurzweil's predictions to be even remotely plausible, you might want to add a zero to the end of most of his time frames.
My simple answer is **NO**. Let me elaborate. If you closely observe nature, you see that nothing changes drastically all of a sudden. Even when it does, it doesn't stay for long. The field of AI has just started, and it needs a lot more evolution to achieve AGI. Though AI is solving many directed problems like face recognition, speech recognition and many more (applications are innumerable), all of these can be considered Narrow AI. They solve a particular task. For AI to reach the state where it can do better than humans in all aspects, not only do we need breakthroughs in algorithms, we also need many more breakthroughs in electronics and physics. Please read this article. The summary is that experts (around 350) estimate that there's a 50% chance that AGI will occur by 2060. So, there is a very bleak chance that AGI will become a reality in the next decade. <https://blog.aimultiple.com/artificial-general-intelligence-singularity-timing/>
67,927
In my (first) attempt to remove the crank arms from a bike, I broke the crank puller (Super B TB-6485). I applied more force than I was comfortable with, thinking the threads had perhaps rusted over time. When attaching the puller to the crank, I pulled back the driver a fair bit from the coupler, screwed in the coupler, tightened it lightly with a spanner, and then proceeded to turn the driver down as far as I could. I then applied as much pressure as I could, and I heard it snap. Uncoupled everything, and the two bits seen in images 1-3 fell to the ground. What did I do wrong here? Can the tool be used without the parts that fell out? Should I have removed them beforehand? I'm honestly a little confused as to their function. Thanks in advance. Images below. [![Puller assembled](https://i.stack.imgur.com/NgkIu.jpg)](https://i.stack.imgur.com/NgkIu.jpg) [![Puller disassembled](https://i.stack.imgur.com/a2E7Y.jpg)](https://i.stack.imgur.com/a2E7Y.jpg) [![Pin with 2.5mm hex socket (3/32") snapped at the thread](https://i.stack.imgur.com/ST4DE.jpg)](https://i.stack.imgur.com/ST4DE.jpg) [![Square tapered spindle scratched, but seems sound](https://i.stack.imgur.com/B81HO.jpg)](https://i.stack.imgur.com/B81HO.jpg)
2020/05/07
[ "https://bicycles.stackexchange.com/questions/67927", "https://bicycles.stackexchange.com", "https://bicycles.stackexchange.com/users/49342/" ]
You shouldn't use the little adapter that you can disassemble from the tool for a square taper crank. Check, but you should find that it has a larger diameter than fits through the square hole in the cranks. It is used for Octalink, ISIS etc cranks with a larger spindle. What I think happened is you got a little way with the tool, then the adapter bottomed out against the crank itself (not the spindle). You were then effectively trying to pull the tool apart against itself, and succeeded. You probably don't need to buy another. You should be able to use the disassembled puller to remove the crank, and the adapter should still work on suitable cranks, just not this crank or other square taper ones.
I have owned a Park Tool version of this tool since 1988, and it has given me great results. If your tool works like mine, here is how to use it: After removing the cap from the crank, one side of that threaded cylinder should thread down into the crank. Screw it in all the way. Then take the handle part and screw it into the cylinder. Eventually you will feel it bottom out on the end of the spindle. Keep turning, and you will notice it pulling the crank off. After one or two more turns, it should slide off. I am not sure what the other two parts included with your tool do. Maybe adapters for oddball crank designs?
1,650,314
I need in column D the total of column A, B, and C that in D1 i should have =SUM(A1:C1) in D2 I should have =SUM(A2:C2) etc. and it should continue the whole column by it self Thanks!! M.K.
2021/05/20
[ "https://superuser.com/questions/1650314", "https://superuser.com", "https://superuser.com/users/1360361/" ]
Use the fill handle. 1. Type in cell D1: =SUM(A1:C1) 2. Press Ctrl+Enter (which keeps you in the same cell) 3. There is a little green box (called the fill handle) in the corner of the cell. Double-click that box (or drag down as far as you want). That will make D2: =SUM(A2:C2) and so on...
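If the sheet is regenerated regularly and you'd rather script this than use the fill handle each time, a small script can write the same per-row formulas for you. Here is a minimal sketch using Python with the openpyxl library; the filename `data.xlsx` is a hypothetical stand-in for your workbook:

```python
# Minimal sketch: write =SUM(An:Cn) into column D for every data row.
# Assumes a workbook named "data.xlsx" (hypothetical) with values in A:C.
from openpyxl import load_workbook

wb = load_workbook("data.xlsx")
ws = wb.active  # first worksheet

# max_row is the last row in the sheet that contains data
for row in range(1, ws.max_row + 1):
    ws.cell(row=row, column=4, value=f"=SUM(A{row}:C{row})")

wb.save("data.xlsx")
```

Excel evaluates the stored formulas the next time the file is opened, so column D stays live just as if you had dragged the fill handle down.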
Agree with Dave. Here is a .gif for clarity: [![enter image description here](https://i.stack.imgur.com/14Ukl.gif)](https://i.stack.imgur.com/14Ukl.gif)
8,302
Which nations have the best bonus to assist in a conquest victory?
2010/10/01
[ "https://gaming.stackexchange.com/questions/8302", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/2074/" ]
**Japan**. While other nations have bonuses that certainly assist in producing an army, or movement of troops, Japan's bonus directly relates to combat. Since you'll be fighting a lot for a conquest victory, your forces will certainly be fighting hurt - and the ability to always fight at full strength certainly makes it the best option.
Japan is the best. However, I find the Mongol Keshik unit to be very good at taking down city defenses, and it can help in an early-game conquest, especially since Keshiks can move after attacking: you end up moving in, hitting the city, and then moving out of harm's way. Two or three will deplete a city's defenses enough to allow ground troops to take the city.
3,554
I have a lot of Excel datasheet (in one file) I want to batch input to Access but I can't find a sample code to do it the name of datasheet in access is same as in excel is there any code of VBscript can do it? maybe in access or excel vba? or any simple software?
2011/06/29
[ "https://dba.stackexchange.com/questions/3554", "https://dba.stackexchange.com", "https://dba.stackexchange.com/users/2205/" ]
The simplest way I can think of is probably writing some VBA code in the Excel workbook. How much effort that is depends on factors like how many columns you have in each sheet, what format the data is in, how similar each sheet is, and how often you are going to have to do this. Does all the data from each sheet go into one table or multiple tables in the Access database? Another option could be to write some code that exports each sheet's data to a CSV file, combine them, and then import into Access.
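As a hedged sketch of what that CSV route could look like (the filename is hypothetical, and the final step still goes through Access's text-file import wizard or a saved import specification), pandas can dump every worksheet in a few lines:

```python
# Sketch: export every worksheet of one Excel file to its own CSV,
# ready for Access's "External Data -> Text File" import wizard.
# "workbook.xlsx" is a hypothetical filename for illustration.
import pandas as pd

# sheet_name=None loads all sheets as a dict: {sheet name -> DataFrame}
sheets = pd.read_excel("workbook.xlsx", sheet_name=None)

for name, df in sheets.items():
    # One CSV per sheet, named after the sheet so it matches the Access table name
    df.to_csv(f"{name}.csv", index=False)
```

Naming each CSV after its sheet keeps the mapping to the identically named Access tables obvious during the import step.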
I would strongly recommend going the CSV route. Exporting from Excel to CSV is simply a matter of doing a "Save As...". Importing it into Access is a [pretty simple matter](http://www.brighthub.com/computing/windows-platform/articles/27511.aspx) as well. (If that link dies, there are many more like it on Google.)
1,574,962
I want to know is there any way to expire the viewstate after a particuler given time.
2009/10/15
[ "https://Stackoverflow.com/questions/1574962", "https://Stackoverflow.com", "https://Stackoverflow.com/users/165873/" ]
Have you tried using Session instead? It provides better security than placing an expiration date into the ViewState, which can be modified by the user. Sessions have a default expiration of 20 minutes, but you can modify that.
No, there isn't an expiration feature for the ViewState. But you could store a datetime value in a ViewState variable and check it later in your own expiration logic.
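Since ViewState has no built-in timer, the pattern is: stamp the state with a creation time, then compare that stamp against a lifetime on each postback. Here is that pattern sketched in Python purely to illustrate the logic; in a real page it would live in the ASP.NET code-behind, with the timestamp stored in the ViewState collection:

```python
# Illustration of the expiry pattern described above (not ASP.NET code):
# stamp the state when it is created, reject it when it is too old.
from datetime import datetime, timedelta

STATE_LIFETIME = timedelta(minutes=20)  # assumed expiry window

def stamp_state(state: dict) -> None:
    """Record the creation time alongside the state (ViewState analogue)."""
    state["created_at"] = datetime.utcnow()

def state_is_expired(state: dict) -> bool:
    """On 'postback', compare the stored timestamp against the lifetime."""
    created = state.get("created_at")
    return created is None or datetime.utcnow() - created > STATE_LIFETIME
```

Keep in mind the caveat from the other answer: anything stored client-side can be tampered with unless ViewState encryption/MAC is enabled, which is why Session is the safer home for the timestamp.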
1,574,962
I want to know is there any way to expire the viewstate after a particuler given time.
2009/10/15
[ "https://Stackoverflow.com/questions/1574962", "https://Stackoverflow.com", "https://Stackoverflow.com/users/165873/" ]
Have you tried using Session instead? It provides better security than placing an expiration date into the ViewState, which can be modified by the user. Sessions have a default expiration of 20 minutes, but you can modify that.
It sounds as if you need to use cookies (not exactly like ViewState, but they can hold data too). You can force cookies to expire or set an expiration date, but you cannot do the same with ViewState.
25,061
I manage a few development teams at the moment. Each team develops different projects for different clients. Right now I'm trying to let team leads work with clients directly without me, but the problem is that none of them has good experience designing UI. I do. We have a designer, who I'm trying to teach how to create UI the way I do. The problem is that she is neither into business nor into development, so her designs are quite typical: as beautiful as useless and/or difficult to implement. So, what do I do with designs? It consumes a significant portion of my time each week... Let's summarize: 1. The designer creates interfaces that are difficult to develop. Okay, I've handled to her my own sketch design system and introduced to main UI frameworks that we use. She is getting better at this. 2. The designer isn't into business. She doesn't know when it makes sense to spend money on more complex things and when we should just make it simple and improve when needed. Okay, I may be able to put this responsibility on the Product Owner. 3. The designer doesn't know all the intricacies of a product, PO's vision and user feedback. What do I do about it? If I'll ask her to consume all this information, then she'll be spending more time learning the product, than actually designing it.
2018/10/14
[ "https://pm.stackexchange.com/questions/25061", "https://pm.stackexchange.com", "https://pm.stackexchange.com/users/33819/" ]
I've done this a couple of times, though it wasn't always a webapp and wasn't for scaling reasons. Here's what we did: * Treat it like a new app. In our case, we renamed the app when we did that. * Put it in its own repo. * Freeze development on the old one except for critical bugfixes, so all resources can be devoted to the migration. * Pare the feature set if possible, at least for the first release. E.g., if you have features that are rarely if ever used, don't commit to them. Obviously do this in consultation with stakeholders, explaining that you're trying to manage scope to keep the schedule reasonable. * Define acceptance criteria very clearly. Help your stakeholders understand why the behavior, appearance, or results may not be identical, because of under-the-hood stuff they don't normally think about and/or because the new framework has different things built in. * Anticipate that you will find bugs in the existing code that should be corrected in the new code. However, bear in mind that you may need to patch the new code to reproduce the old code's bad behavior for ease of acceptance testing, with the understanding that after acceptance, you'll remove those patches and get the new behavior approved. * Resist the temptation to make changes or improvements in behavior during the migration. File tickets for them; maybe leave architectural hooks for them, or comments in the code; but don't try to "migrate to new framework" and "change existing behavior" at the same time. * Plan to write a commissioning report at the end that explains how you vetted the new code against the old code and addresses any changes that were made, including an appendix with any relevant test reports. Write it as you go; it'll save you grief later. * Allow yourself a generous amount of slack in your schedule, because you WILL likely run into unanticipated technical difficulties. It's always better to underpromise and overdeliver. * Make sure your stakeholders are aware of that schedule, and buy into it up front. This manages expectations and gives you something to point to when they get impatient that it's taking so long. A couple of technical points: We already had a large set of regression tests, and the concern was mostly about numerical results. We put all our patches-to-reproduce-bugs into the same module with an obvious name, and kept the correct code present in comments, so that it would be easy to remove the patches when done. There's a tension between doing the easy things first so you can prove the concept and get a subset up and running and under review as soon as possible, and doing the most complicated things first so that you don't end up having to re-architect halfway through. The usual agile assumption to do the simplest thing that works and refactor as necessary as you go along is a bit of a mismatch to this situation, in my opinion, because in this case you DO know "you're gonna need it". I might suggest having part of the team invest in analysis of the most complicated bits while the initial infrastructure, technology selection/learning curve, etc., is going on. I mention this because this was the reason one of my migration projects ran significantly over the planned schedule: we did the easy things first, and then had to shoehorn complexity in under time pressure. Hope this helps! I'll be interested in other answers as well.
I don't think there's a "right" answer to this. Vicki's answer is perfectly valid. I'm going to give another answer that I think is also right. Start with making a list of the biggest problems your application has now. Maybe that is database redundancy, load time, architectural fragility, test coverage, whatever. Now sort that list into an improvement backlog. Now start at the top and work your way down. Easy, right? Well, there are a few things to consider: 1) Is your application's code loosely coupled or, better yet, decoupled? How many pieces can you separate out? If you have a lot of tightly-coupled components, you're probably stuck in a full-replace approach like Vicki suggests, or you have to put in some effort in improving the architecture and maintainability of your application before you can do anything else. 2) Small partial fixes or big fixes? Let's take your React example. Let's assume for a moment that your existing UI communicates with web services for business logic. You could just replace the whole UI side and let React communicate with the web services. That may also mean some adjusting of the web services for REST if you used SOAP or the Microsoft web service protocol before. This is a big fix for the problem of resources. On the other hand, a small fix might be as simple as adding hardware or a load-balancing solution. Doing that may not permanently fix the problem, but may drive the problem much lower on the backlog. For the cost of a server or three you just bought yourself a year of breathing room in your application, and you can focus on other problems that are harder to solve than buying some extra CPUs. 3) Focus on solving problems, not implementing designs. For each of those items, the goal is to improve load speed X amount or reduce errors to Y threshold. If you are just implementing a predefined solution bit-by-bit, you may be better off taking a full replacement approach, because you're basically doing that anyway. This approach only pays off if you're getting the benefits of solving each problem as you go. As I said, there isn't necessarily a "right" answer in which of these approaches to pick or how you'd go through this approach. In this approach, you iteratively make the best choices at the time and get benefits as you go. In the other approach, you get to start from a clean slate with no constraints, but you only find out at the end if your new approach solves the problem. You have to look at your situation and decide which is right.
154,819
According to MW, [*full-blown*](http://www.merriam-webster.com/dictionary/full-blown) means > > having all of the qualities that are associated with a particular thing or type of person : **fully developed** > > > Having used it in this sense recently and noting its similarity (in sound and meaning) to the more easily explainable [*full-grown*](http://www.thefreedictionary.com/full-grown), I wondered why *blown* is in *full-blown*. My first suspicion was the *blown* was an analogy to the product of some craft, like glass-blowing. I was unable to find an entry dedicated to *full-blown* on Etymonline, though I did find [some entries that referred to it](http://www.etymonline.com/index.php?search=full-blown), particularly [**blow v.2**](http://www.etymonline.com/index.php?term=blow&allowed_in_frame=0): > > "to bloom, blossom" (intransitive), from Old English blowan "to flower, blossom, flourish," from Proto-Germanic \*blæ- (cf. Old Saxon bloian, Old Frisian bloia, Middle Dutch and Dutch bloeien, Old High German bluoen, German blühen), from PIE \*bhle-, extended form of \*bhel- (2) "to blow, inflate, swell" (see bole). **This word is the source of the blown in full-blown**. > > > Google [nGrams](https://books.google.com/ngrams/graph?content=fullblown%2C+full-blown&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cfullblown%3B%2Cc0%3B.t1%3B%2Cfull%20-%20blown%3B%2Cc0) suggests that a horticultural origin may make sense with *full-blown*. The oldest reference to *full-blown* I found via nGrams was from ["Satirical, humourous & familiar pieces"](http://books.google.com/books?id=J6QVAAAAYAAJ&pg=PA23&dq=%22fullblown+rose%22&hl=en&sa=X&ei=pnARU8HIAanlyQGKkIHACw&ved=0CDoQ6AEwATi4Aw#v=onepage&q=%22fullblown%20rose%22&f=false) printed by G.Nicholson and Co., 1795: > > "You must know that in my person I am tall and thin, with a fair complexion and light flaxen hair; but of such extreme susceptibility of shame, that, on the smallest subject of confusion, my blood all rushes into my cheeks, and I appear a perfect full-blown rose. > > > Assuming the origin of *full-blown* is from this sense of *blooming* (earlier or contradictory examples welcome), are there other uses of *blow* or *blown* familiar to the modern ear that retain or allude to this meaning?
2014/03/01
[ "https://english.stackexchange.com/questions/154819", "https://english.stackexchange.com", "https://english.stackexchange.com/users/-1/" ]
The OED defines *full-blown* as ‘filled with wind, puffed out (lit. and fig.)’. The contemporary Oxford Dictionaries defines it as ‘fully developed’. The OED’s earliest citation is from 1615: > > With cheeks full blowne Each man will wish the case had beene his > owne. > > > The earliest floral use comes later in the same century: > > Some did the Way with full-blown Roses spread. > > > The OED’s etymological note on the term refers to definition 22a of ‘blow’: > > To swell (up or out) by sending a current of air into; to inflate, > puff up. > > > One current use is in the expression *full-blown AIDS*.
To **blow** is certainly the verb that is used when a rose goes from bud to bloom, (hence becoming *full-blown*); "One thing is certain, and the Rest is Lies/ The Flower that once has blown forever dies." (Fitzgerald, *Rubaiyat of Omar Khayyam*, XXVIII; pointless debate continues whether this is actually a free translation or Fitzgerald writing his own poem on the same subject as the original). The OED has citations from 1000 AD on, and believes this is the meaning in Shakespeare's "I know a bank whereon the wild thyme blows", though the only citation where the word unambiguously means 'blossom' is "April, May, and June, while that trees blowen." from 1400. The problem is that flowers do grow, blossom, and dance in the wind; any of the three can be described as *blowing*, even if a poet should care about the difference.
154,819
According to MW, [*full-blown*](http://www.merriam-webster.com/dictionary/full-blown) means > > having all of the qualities that are associated with a particular thing or type of person : **fully developed** > > > Having used it in this sense recently and noting its similarity (in sound and meaning) to the more easily explainable [*full-grown*](http://www.thefreedictionary.com/full-grown), I wondered why *blown* is in *full-blown*. My first suspicion was the *blown* was an analogy to the product of some craft, like glass-blowing. I was unable to find an entry dedicated to *full-blown* on Etymonline, though I did find [some entries that referred to it](http://www.etymonline.com/index.php?search=full-blown), particularly [**blow v.2**](http://www.etymonline.com/index.php?term=blow&allowed_in_frame=0): > > "to bloom, blossom" (intransitive), from Old English blowan "to flower, blossom, flourish," from Proto-Germanic \*blæ- (cf. Old Saxon bloian, Old Frisian bloia, Middle Dutch and Dutch bloeien, Old High German bluoen, German blühen), from PIE \*bhle-, extended form of \*bhel- (2) "to blow, inflate, swell" (see bole). **This word is the source of the blown in full-blown**. > > > Google [nGrams](https://books.google.com/ngrams/graph?content=fullblown%2C+full-blown&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cfullblown%3B%2Cc0%3B.t1%3B%2Cfull%20-%20blown%3B%2Cc0) suggests that a horticultural origin may make sense with *full-blown*. The oldest reference to *full-blown* I found via nGrams was from ["Satirical, humourous & familiar pieces"](http://books.google.com/books?id=J6QVAAAAYAAJ&pg=PA23&dq=%22fullblown+rose%22&hl=en&sa=X&ei=pnARU8HIAanlyQGKkIHACw&ved=0CDoQ6AEwATi4Aw#v=onepage&q=%22fullblown%20rose%22&f=false) printed by G.Nicholson and Co., 1795: > > "You must know that in my person I am tall and thin, with a fair complexion and light flaxen hair; but of such extreme susceptibility of shame, that, on the smallest subject of confusion, my blood all rushes into my cheeks, and I appear a perfect full-blown rose. > > > Assuming the origin of *full-blown* is from this sense of *blooming* (earlier or contradictory examples welcome), are there other uses of *blow* or *blown* familiar to the modern ear that retain or allude to this meaning?
2014/03/01
[ "https://english.stackexchange.com/questions/154819", "https://english.stackexchange.com", "https://english.stackexchange.com/users/-1/" ]
The OED defines *full-blown* as ‘filled with wind, puffed out (lit. and fig.)’. The contemporary Oxford Dictionaries defines it as ‘fully developed’. The OED’s earliest citation is from 1615: > > With cheeks full blowne Each man will wish the case had beene his > owne. > > > The earliest floral use comes later in the same century: > > Some did the Way with full-blown Roses spread. > > > The OED’s etymological note on the term refers to definition 22a of ‘blow’: > > To swell (up or out) by sending a current of air into; to inflate, > puff up. > > > One current use is in the expression *full-blown AIDS*.
"Unambiguously"? How about Thomas Gray's line from "Ode on the Death of a Favorite Cat"-- 'Twas on a lofty vase's side Where China's gayest art had dyed The azure flowers that blow.
154,819
According to MW, [*full-blown*](http://www.merriam-webster.com/dictionary/full-blown) means > > having all of the qualities that are associated with a particular thing or type of person : **fully developed** > > > Having used it in this sense recently and noting its similarity (in sound and meaning) to the more easily explainable [*full-grown*](http://www.thefreedictionary.com/full-grown), I wondered why *blown* is in *full-blown*. My first suspicion was the *blown* was an analogy to the product of some craft, like glass-blowing. I was unable to find an entry dedicated to *full-blown* on Etymonline, though I did find [some entries that referred to it](http://www.etymonline.com/index.php?search=full-blown), particularly [**blow v.2**](http://www.etymonline.com/index.php?term=blow&allowed_in_frame=0): > > "to bloom, blossom" (intransitive), from Old English blowan "to flower, blossom, flourish," from Proto-Germanic \*blæ- (cf. Old Saxon bloian, Old Frisian bloia, Middle Dutch and Dutch bloeien, Old High German bluoen, German blühen), from PIE \*bhle-, extended form of \*bhel- (2) "to blow, inflate, swell" (see bole). **This word is the source of the blown in full-blown**. > > > Google [nGrams](https://books.google.com/ngrams/graph?content=fullblown%2C+full-blown&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cfullblown%3B%2Cc0%3B.t1%3B%2Cfull%20-%20blown%3B%2Cc0) suggests that a horticultural origin may make sense with *full-blown*. The oldest reference to *full-blown* I found via nGrams was from ["Satirical, humourous & familiar pieces"](http://books.google.com/books?id=J6QVAAAAYAAJ&pg=PA23&dq=%22fullblown+rose%22&hl=en&sa=X&ei=pnARU8HIAanlyQGKkIHACw&ved=0CDoQ6AEwATi4Aw#v=onepage&q=%22fullblown%20rose%22&f=false) printed by G.Nicholson and Co., 1795: > > "You must know that in my person I am tall and thin, with a fair complexion and light flaxen hair; but of such extreme susceptibility of shame, that, on the smallest subject of confusion, my blood all rushes into my cheeks, and I appear a perfect full-blown rose. > > > Assuming the origin of *full-blown* is from this sense of *blooming* (earlier or contradictory examples welcome), are there other uses of *blow* or *blown* familiar to the modern ear that retain or allude to this meaning?
2014/03/01
[ "https://english.stackexchange.com/questions/154819", "https://english.stackexchange.com", "https://english.stackexchange.com/users/-1/" ]
To **blow** is certainly the verb that is used when a rose goes from bud to bloom, (hence becoming *full-blown*); "One thing is certain, and the Rest is Lies/ The Flower that once has blown forever dies." (Fitzgerald, *Rubaiyat of Omar Khayyam*, XXVIII; pointless debate continues whether this is actually a free translation or Fitzgerald writing his own poem on the same subject as the original). The OED has citations from 1000 AD on, and believes this is the meaning in Shakespeare's "I know a bank whereon the wild thyme blows", though the only citation where the word unambiguously means 'blossom' is "April, May, and June, while that trees blowen." from 1400. The problem is that flowers do grow, blossom, and dance in the wind; any of the three can be described as *blowing*, even if a poet should care about the difference.
"Unambiguously"? How about Thomas Gray's line from "Ode on the Death of a Favorite Cat"-- 'Twas on a lofty vase's side Where China's gayest art had dyed The azure flowers that blow.
11,981
Recently, a proofreader suggested an edit for my story: > > a ~~laughter~~ laugh escaped my throat. > > > The [New Oxford American Dictionary](http://www.oxforddictionaries.com/us/definition/american_english/laughter) suggests: > > **laughter** > > *noun* > > [mass noun] > > > > the action or sound of laughing: > *he roared with laughter* > > > and > > **laugh** > > *noun* > > > > **1** an act of laughing: > *she gave a loud, silly laugh* > > > Following the definitions it would seem as if the correction goes in the opposite direction here. What's the difference in usage between the nouns *laugh* and *laughter*? Are there some subtle differences in connotations, associations, undertones, or usage patterns that I failed to notice?
2013/11/07
[ "https://ell.stackexchange.com/questions/11981", "https://ell.stackexchange.com", "https://ell.stackexchange.com/users/30/" ]
Ditto Matt, but maybe it will help if I say it a slightly different way. "Laughter" is a mass noun, referring to the concept of laughing in general. "Laugh" is a countable noun. A person could let out one laugh or two laughs, but he couldn't make one laughter. On the other hand you could say, "Laughter is not appropriate at a funeral." You are referring to the idea in general. You wouldn't say, "Laugh is not appropriate at a funeral." You might say, "A laugh is not appropriate", though that would be awkward, because you really are talking about the general idea. It's like the difference between "humanity" (in the sense of the sum total of all human beings) versus "person". You could say, "Humanity is becoming too dependent on technology", meaning all people, in general. Or you could say, "This person is becoming too dependent on technology", meaning one particular person. You wouldn't say, "A humanity is ..." doing whatever, as that would imply that there are other "humanities" out there somewhere.
The rule here is that the article ***a*** should not be used with uncountable mass nouns such as ***laughter***, whereas it is OK to use it with the ordinary noun ***laugh***. Hence you can either say: > > ***Laughter*** escaped my throat > > > ***A laugh*** escaped my throat > > > ***A laugh*** differs from ***laughter*** in that it refers to a single "Ha" ([example](https://www.youtube.com/watch?v=WLNkbRvraho)), whereas ***laughter*** would normally be a longer series of laughs ([example](https://www.youtube.com/watch?v=NkEOvVa6KeQ)).
66,657
In the *Fargo* episode *The Gift of the Magi*, Ronald Reagan (portrayed by Bruce Campbell) is campaigning for the presidency and Lou works as his security detail. They have a conversation in the toilet where Reagan compares (rather inappropriately) Lou's service in Vietnam to his own role in a WW2 movie. Later, when Lou [asks](http://www.springfieldspringfield.co.uk/view_episode_scripts.php?tv-show=fargo-2014&episode=s02e05): > > Governor, I don't mean to, uh What we did over there, the war? Um and > now? My wife's got lymphoma. Uh, Stage III. And, uh, lately, the state > of things, uh Well, sometimes, I late at night I wonder if maybe the > sickness of this world, if it isn't inside my wife somehow. The-- the > cancer. I don't-- I don't know what I'm saying, except Do you really > think we'll get out of this mess we're in? > > > Reagan is unable to give any answer, apart from some standard political nonsense and patting Lou's back. The (almost) whole scene can be seen [here](http://www.mrctv.org/videos/fxs-fargo-paints-reagan-bumbling-idiot-and-fake). This depiction of Reagan makes him look like an unpleasant, not very social or wise person. This seems to be in contrast with what can be read on [Wikipedia](https://en.wikipedia.org/wiki/Ronald_Reagan#Cultural_and_political_image): > > Reagan's ability to connect with Americans earned him the > laudatory moniker "The Great Communicator." (...) Reagan was known to > joke frequently during his lifetime, displayed humor throughout his > presidency, and was famous for his storytelling. > > > Moreover, in [this article](http://www.hollywoodreporter.com/live-feed/fargo-boss-noah-hawley-ronald-837575) the showrunner Noah Hawley says that Reagan has a key role in the show, but as an indicator of changes. He clearly states that presenting his views on Reagan was not his intention: > > I don't have a moral opinion on Reagan; that's not part of the show. > > > Taking the above facts into consideration, why did the showrunners show Ronald Reagan in a negative light?
2017/01/11
[ "https://movies.stackexchange.com/questions/66657", "https://movies.stackexchange.com", "https://movies.stackexchange.com/users/19042/" ]
It seems that the show was trying to portray both the warmth and approachability you describe AND the way he was perceived as somewhat superficial. Specifically addressed by [**Bruce Campbell**](http://www.hollywoodreporter.com/live-feed/fargo-bruce-campbell-ronald-reagan-838622) in an interview with *The Hollywood Reporter*. > > **HR:** I love the scene with Reagan and Lou at the urinal, because that scene seems to capture both Reagan's believable empathy, but also how superficial and empty he could come across. How did you approach that scene? > > > **BC:** *(Laughs.)* He doesn't have an answer! He doesn't have all the answers. We can say that we have all the answers. We can get up there and give speeches and tell people, "You know if you want a great country again, here's what we have to do," but it doesn't stop people from getting cancer. It doesn't stop their lives from being discarded. Speeches aren't going to stop anything. So yes, the theory is great. "Let's pick ourselves up by our bootstraps" and he honestly believes that as an American you can overcome anything, even your wife who's dying. He couldn't abandon his approach, but it does show a little bit of the fallibility of it, that it is a pie in the sky theory. Instead of being the president goes, "I feel your pain, all you poor people, we're going to help you right now," that was not the approach. If you were poor, that was your fault. Americans can do anything. Why are you poor? "You just have to work a little harder." He's still stuck with the attitude at the time, "Well, if you just roll up your sleeves and sweat a bit..." > > >
There are plenty of horrible people who get to positions of great influence because they push the right buttons on people. Those people might also be considered "great communicators," which is essentially what a demagogue does. Not to say that Reagan was a horrible person... also, not to say that he wasn't. This is to say that Wikipedia having a laudatory statement about someone, Reagan especially, doesn't necessarily translate to fact. The conservatives have created a mythological version of Reagan that in no way maps to the actual person who held office as president or his actions while president. It's entirely likely that a version of him that shows him with flaws is as likely to be accurate as a version that shows him as some kind of kindly saint. Keep in mind that any show with Bruce Campbell playing a part is highly likely to be exaggerated in some way or form for panache and effect. That's kind of his style.
125,241
Which of the following sentence is correct/apt to notate the past action which was not executed till date? Sentence 1: > > A request was made for grant of permission to X on April 2013 but it was not done yet. > > > Sentence 2: > > A request has made for grant of permission to X on April 2013 but it was not done yet. > > > Sentence 3: > > A request has been made for granting of permission to X on April 2013 but it was not done yet. > > >
2013/09/04
[ "https://english.stackexchange.com/questions/125241", "https://english.stackexchange.com", "https://english.stackexchange.com/users/51141/" ]
There are several points here. When a sentence refers to an event at a particular time in the past, the past tense, rather than the present perfect construction, is used, at least in British English, as in the first part of sentence 1. The present perfect construction, again in British English, is used to describe a past event that has current relevance, or which refers to the time up until now. That requires the second part of the sentence to read *. . . but it has not been done yet*. *A request . . . for grant of permission to* is a little unusual, but it may perhaps be used formulaically in certain legal contexts. Elsewhere, it might be more usual to find *a request . . . to grant permission to*. In the second half of the sentence, it might be preferable to repeat *made*, rather than use *done*. When a month, rather than a specific date is referred to, the preposition used is *in* rather than *on*. The adverb *yet* is perhaps better placed after *has not*, rather than at the end of the sentence. Putting all this together, the sentence would be more likely to occur in British English as: > > A request was made to grant permission to X in April 2013, but it has > not yet been made . > > > I’ve specified British English, because I know that American English makes different choices between the past tense and the present perfect construction, and a speaker of American English might give a different answer.
This question is horribly confusing to a native speaker. *A grant of permission* does exist in certain contexts, mostly legalese, where it is not exactly the same as *the granting of permission*, or just *permission* which is normal. *Permission to X* only works where X is a verb; if a noun, the phrase is *permission for X*. 'A request was made' is (unless done for effect) less natural and forceful than 'I made a request'. The request may have been made 'in April 2013' or 'on April 3rd', but not 'on April'. And 'not executed till date' may mean it has not been done yet, or that it was only done today. All these points will affect the choice of words; until you clarify the question, any answer is just a guess.
26,480,818
Yesterday, I upgraded to iOS 8.1 and now I cannot build apps in Xcode 5.1.1. Is there or is there going to be an iOS 8.1 SDK that I can install on Xcode 5.1.1?
2014/10/21
[ "https://Stackoverflow.com/questions/26480818", "https://Stackoverflow.com", "https://Stackoverflow.com/users/992809/" ]
There is not going to be an iOS 8.1 SDK that you can install on Xcode 5.1.1. Apple doesn't support older versions of Xcode. You can build apps that are compatible with iOS 8.1 with Xcode 5.x, but you can't use the new features of iOS 8.1.
It's good to know that you can ONLY update to Xcode 6.x if you have Mavericks or Yosemite installed. Also, Xcode 5.1 will NOT allow iPads with iOS 8.x installed to be used as development devices, so you are stuck with using only the simulators. While facing the same issue I was trying to update to Xcode 6.1, just to find out that it won't install on my Mountain Lion system. This means it's probably time to upgrade my OS X first.
26,480,818
Yesterday, I upgraded to iOS 8.1 and now I cannot build apps in Xcode 5.1.1. Is there or is there going to be an iOS 8.1 SDK that I can install on Xcode 5.1.1?
2014/10/21
[ "https://Stackoverflow.com/questions/26480818", "https://Stackoverflow.com", "https://Stackoverflow.com/users/992809/" ]
There is not going to be an iOS 8.1 SDK that you can install on Xcode 5.1.1. Apple doesn't support older versions of Xcode. You can build apps that are compatible with iOS 8.1 with Xcode 5.x, but you can't use the new features of iOS 8.1.
I have a dual boot machine that boots to 10.8.5 and 10.10.5. On the 10.10.5 installation, I have Xcode 6.x and Xcode 7.2 (latest as of this writing). I just built an iOS 8.x application using "Mountain Lion/Xcode 5.1.1/iOS SDK 8.1" and am debugging on an iPod Touch running 8.4.1. In my 10.10 boot, I run Xcode 7 and debug devices on iOS 9.2. With some limitations, it is possible to use newer iOS SDKs with older Xcodes. It depends on how many new language features are used in the SDK headers. Apple SDK headers tend to be very well designed. Simply copy the iOS 8 and device support files from the newer Xcode(s) to the older Xcode. Obviously, you will not be able to use 8.1 features in your Storyboards and XIBs. For example: Copy the iOS 8.x SDK from Xcode 6.x into Xcode 5.1.1 SOURCE FOLDER /Volumes/MACOSX10.10/Applications/Xcode6.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs DESTINATION FOLDER /Volumes/MACOSX10.8/Applications/Xcode5.1.1.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs And copy the appropriate Device Support files from the newer Xcode into the older Xcode: SOURCE FOLDER /Volumes/MACOSX10.10/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport DESTINATION FOLDER /Volumes/MACOSX10.8/Applications/Xcode5.1.1.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport This method of munging Xcode has worked in a variety of forms since iOS 4. In general, it is almost always possible to use the +1 newer SDK with the previous version of Xcode. This is because (in theory) there is a switchover period when Apple is using the older toolchain to compile the newer tools. ALSO, after installing the new SDKs, it may be necessary to recreate your project schemes before the hardware appears in the drop down on the Xcode toolbar. Usually what I do is go into "Manage Schemes", select all the existing schemes and delete them. Then, click "Autocreate Schemes Now". This is a general technique for moving projects from one Xcode installation to another and seems to be important for hardware recognition.
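As a concrete (hypothetical) rendering of those copy steps, assuming both volumes are mounted; the SDK and device-support folder names are assumptions, so list the source directories first to confirm them:

```sh
# Copy the iOS 8.1 SDK from Xcode 6.x into Xcode 5.1.1 (paths as above).
sudo cp -R \
  "/Volumes/MACOSX10.10/Applications/Xcode6.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS8.1.sdk" \
  "/Volumes/MACOSX10.8/Applications/Xcode5.1.1.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/"

# Copy the matching device support files; the build suffix in the folder
# name varies, so check it with `ls` before copying.
sudo cp -R \
  "/Volumes/MACOSX10.10/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/8.1 (12B411)" \
  "/Volumes/MACOSX10.8/Applications/Xcode5.1.1.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/"
```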
252,881
I'm running tests from the same database as I use in development. I recall that when using Rails I used a separate database for testing (mainly because the tutorial I was going by said so). It made sense though. I was wondering if this is the common way to do things or whether there is an alternative. Can anyone shed a little light on this?
2014/08/10
[ "https://softwareengineering.stackexchange.com/questions/252881", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/142254/" ]
Quite common, but… it is a result of misusing PHPUnit. It is important to distinguish unit tests from integration tests and acceptance tests. The main aim of PHPUnit is writing unit tests. A unit test is supposed to test isolated "units" (functions, public methods). Everything else, including the database, is emulated via [Test Doubles](http://phpunit.de/manual/current/en/test-doubles.html) (Mocks). So, if you need to test a method of your controller, you "mock" the model classes so that they return pre-defined (or randomly generated) answers. If you need to test the model, you mock the db-access classes and, taking this to the extreme, if you need to test the db driver, you mock the socket layer to return specific byte sequences. On the other hand, it is possible to use PHPUnit for acceptance tests (though there are better tools for that). Acceptance tests in web applications are supposed to fake web requests and check that the application returns proper responses. It is very desirable to check the full stack, so requests should hit the database eventually. In this case it is a good idea to use a separate database, so that tests do not harm production data.
I used to (not in PHP admittedly, but this applies to every DB) test stored procedures on the dev DB. DBs have transactions. So start one, truncate tables, insert test data, run tests and then rollback. Your DB will be back to its old state. Simple.
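A sketch of that pattern in plain SQL; the table and fixture values are invented, and note that some engines (e.g. MySQL) implicitly commit on TRUNCATE, so DELETE is the safer choice inside a transaction there:

```sql
BEGIN;                                   -- open the transaction
DELETE FROM test_orders;                 -- clear the table under test
INSERT INTO test_orders (id, total)      -- load known fixture data
VALUES (1, 9.99), (2, 14.50);

-- ... exercise the stored procedure and assert on its results here ...

ROLLBACK;                                -- the DB is back to its old state
```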
333,488
With the Spring'21 release, Salesforce have added an In-App Learning icon to the global header. [![enter image description here](https://i.stack.imgur.com/Yt24r.png)](https://i.stack.imgur.com/Yt24r.png) Is there a way to remove this from the header?
2021/02/06
[ "https://salesforce.stackexchange.com/questions/333488", "https://salesforce.stackexchange.com", "https://salesforce.stackexchange.com/users/359/" ]
Unfortunately, you cannot remove the In-App Learning icon. All users will have access to view the new icon in the global header and the panel. Check more details [here](https://help.salesforce.com/articleView?id=release-notes.rn_general_inapplearning.htm&release=230&type=5). Thanks
Currently there is no way to remove the icon, but there is an active idea to allow it to be hidden. Please vote for this idea, "Option to hide In-App Learning": <https://trailblazer.salesforce.com/ideaView?id=0874V0000015KoMQAU> And if you are a Salesforce Partner, take a look of this: <https://partners.salesforce.com/0D54V00005GZZMx>
20,052
Since they are in fact new to that particular site. They do appear in the frontpage view, but I'm wondering if they benefit from appearing in the "New Questions" list. It has its pitfalls, like what if a 2-year-old question is migrated (and more). And it has its advantages, like helping people who only monitor the "New Questions" list, since the frontpage list can often be too chaotic. What are your thoughts?
2009/09/03
[ "https://meta.stackexchange.com/questions/20052", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/22459/" ]
I have wanted either that or a new listing of "recently migrated" (something the mods would be very thankful for because of the inevitability of cleanup requirements). Keeping it in the middle of the list has a tendency for it to get lost in the shuffle and it may not get the attention it needs.
I think this is a great idea. Even if the question is very old, the fact that it was on the wrong site means that it was not viewed by the correct audience. If a question is moved to Server Fault it is still a new question there and deserves a fair shot at getting answered.
20,052
Since they are in fact new to that particular site. They do appear in the frontpage view, but I'm wondering if they benefit from appearing in the "New Questions" list. It has its pitfalls, like what if a 2-year-old question is migrated (and more). And it has its advantages, like helping people who only monitor the "New Questions" list, since the frontpage list can often be too chaotic. What are your thoughts?
2009/09/03
[ "https://meta.stackexchange.com/questions/20052", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/22459/" ]
**Yes**, I think the timestamp should be reset. I mean, *it's a new question*... it just happens to have a closed, locked, doppelganger on another site. Give it a fresh start.
I think this is a great idea. Even if the question is very old, the fact that it was on the wrong site means that it was not viewed by the correct audience. If a question is moved to Server Fault it is still a new question there and deserves a fair shot at getting answered.
6,807
Why are there no downvotes for comments, as there are for questions and answers? People sometimes use the comments to give answers as well.
2018/10/25
[ "https://electronics.meta.stackexchange.com/questions/6807", "https://electronics.meta.stackexchange.com", "https://electronics.meta.stackexchange.com/users/194168/" ]
Because people should not give answers in the comment section. Such answers should be flagged and deleted, not downvoted. Comments on Stack Exchange are an afterthought and essentially a "second-class citizen". As you can see when you try to write one, the explanatory text states that they are for asking for clarification and explicitly *not* for answering. You cannot accept a comment as the right answer, and you cannot edit mistakes in it. They can also be removed at any time.
This is site wide (through all of SE's networks) and is not going to change as it would affect all sites. This has already been covered here in the meta: [Allow downvoting comments](https://meta.stackexchange.com/questions/3615/allow-downvoting-comments)
213,722
An interviewer asked me this question: > > Tables are created with appropriate normalization rules; however, the database is performing slowly. [I.e., the select and insert statements are taking time to do their operations.] What areas do we need to look at to improve the database performance? > > > Obviously this is a vague question. What sorts of things might be wrong with a database that is running slowly, even when normalized?
2013/10/08
[ "https://softwareengineering.stackexchange.com/questions/213722", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/46506/" ]
Sounds to me that your interviewer was not looking for a data-scientist answer but was simply looking to make sure you understand that "normalization" != "performance". So I'll keep this answer at the level that I'm guessing he wanted. Normalization means minimizing redundancy in stored data. Instead, you set up relationships (often with foreign key constraints) between multiple tables. However, while normalization might lead to a smaller amount of stored data, it often creates performance problems because now many queries end up joining multiple tables. Same thing with adding data, where you might now have to update multiple tables at once. Often, speed gains can be achieved by de-normalizing the data. You are storing more and there might be duplicates, but when it comes to running the most frequently used queries, all your data would now be in one table. Getting results from one table is usually much easier on the hardware than having to join multiple tables.
Making INSERT statements run faster is a bit of an arcane art. But that's probably not the focus. The point of a database isn't putting data into it; it's getting it back out in interesting and useful ways. So the main things to focus on are SELECT statements. The first thing I would look at is checking the query plans on slow queries. See if you have any table scans that are taking up a significant percentage of your time. A table scan is when the database engine has to examine every row individually to see if it meets a WHERE criterion. If you find one of these, you can make the query run faster by indexing the table on the appropriate WHERE criteria. This can take search times from O(N) down to O(log N) or even O(1). Some databases will make it easy on you: their query plan analyzer will point out that you're missing an index and suggest what you ought to create. Also, check out the joins on your query. Make sure they aren't using too broad of joining criteria, and be careful that you're not using left outer joins when a full join would work. Both of these issues can cause a poorly-written query to produce too many rows and take longer to run. If you don't have missing indices or bad joins, a more advanced trick is *denormalization*: setting up columns on tables that duplicate data that can be found in other tables, to allow you to avoid joins or aggregates that can be expensive. This has to be done carefully, though, with triggers so that the data remains in sync, and is best done only if you know what you're doing and if there aren't any better alternatives available.
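To make that concrete with a hypothetical example (table and column names invented; the EXPLAIN spelling varies by engine): if the plan shows a scan on a WHERE column, an index turns it into a seek, and you can re-check the plan afterwards.

```sql
-- Suppose the plan shows a full table scan for this query:
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- Indexing the WHERE column lets the engine seek instead of scan:
CREATE INDEX ix_orders_customer_id ON orders (customer_id);
```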
213,722
An interviewer asked me this question: > > Tables are created with appropriate normalization rules; however, the database is performing slowly. [I.e., the select and insert statements are taking time to do their operations.] What areas do we need to look at to improve the database performance? > > > Obviously this is a vague question. What sorts of things might be wrong with a database that is running slowly, even when normalized?
2013/10/08
[ "https://softwareengineering.stackexchange.com/questions/213722", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/46506/" ]
I would talk about how there are many things which can be done to improve performance. The first thing is always to investigate if the correct indexes are in place. Of particular concern in a normalized database is making sure FKs are indexed. Likely this would fix many performance issues. Other things to look at would be rewriting the SQL code to use more performant techniques such as getting rid of cursors and correlated subqueries and making where clauses sargable. You would want to review the worst performing queries individually. You would also want to review queries that are frequently run (especially if multiple users run them simultaneously), as a small change in those could multiply through the system. If your worst queries are coming from an ORM, they might need to be rewritten as stored procs so they can be performance tuned. You might also want to make sure you have a performance problem. What you might have is really a blocking problem, where performant code is being blocked by other processes and has to wait. Then you would look at hardware; if you have underpowered hardware and network connections, likely no other change is going to fix that. In a large enterprise system, you might consider data partitioning. Denormalization is a technique to improve performance, but it is the **last** thing you would want to consider. First, you have the risk to the data of changing the structure that drastically. Converting the data to the new structure is something that can go very badly wrong if a mistake is made, and it is more time-consuming to make this type of structural change than any of the other possible performance improvements. It would also be irresponsible to denormalize without creating triggers to make sure the data stays in sync as it is changed in the denormalized tables. This may mean selects are improved but action queries are slower, so performance may not be improved as much as you think. It is also a concern that in denormalizing, you may be making the tables significantly wider, and that can affect performance negatively if you have wide tables.
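To make the "sargable" point concrete, a hypothetical example (the date function's name varies by engine): wrapping the column in a function hides it from any index, while an equivalent range predicate can use one.

```sql
-- Non-sargable: the function on the column defeats an index on order_date.
SELECT * FROM orders WHERE YEAR(order_date) = 2013;

-- Sargable rewrite: same rows, but an index on order_date can be used.
SELECT * FROM orders
WHERE order_date >= '2013-01-01'
  AND order_date <  '2014-01-01';
```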
Making INSERT statements run faster is a bit of an arcane art. But that's probably not the focus. The point of a database isn't putting data into it; it's getting it back out in interesting and useful ways. So the main things to focus on are SELECT statements. The first thing I would look at is checking the query plans on slow queries. See if you have any table scans that are taking up a significant percentage of your time. A table scan is when the database engine has to examine every row individually to see if it meets a WHERE criterion. If you find one of these, you can make the query run faster by indexing the table on the appropriate WHERE criteria. This can take search times from O(N) down to O(log N) or even O(1). Some databases will make it easy on you: their query plan analyzer will point out that you're missing an index and suggest what you ought to create. Also, check out the joins on your query. Make sure they aren't using too broad of joining criteria, and be careful that you're not using left outer joins when a full join would work. Both of these issues can cause a poorly-written query to produce too many rows and take longer to run. If you don't have missing indices or bad joins, a more advanced trick is *denormalization*: setting up columns on tables that duplicate data that can be found in other tables, to allow you to avoid joins or aggregates that can be expensive. This has to be done carefully, though, with triggers so that the data remains in sync, and is best done only if you know what you're doing and if there aren't any better alternatives available.
213,722
An interviewer asked me this question: > > Tables are created with appropriate normalization rules; however, the database is performing slowly. [I.e., the select and insert statements are taking time to do their operations.] What areas do we need to look at to improve the database performance? > > > Obviously this is a vague question. What sorts of things might be wrong with a database that is running slowly, even when normalized?
2013/10/08
[ "https://softwareengineering.stackexchange.com/questions/213722", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/46506/" ]
Making INSERT statements run faster is a bit of an arcane art. But that's probably not the focus. The point of a database isn't putting data into it; it's getting it back out in interesting and useful ways. So the main things to focus on are SELECT statements. The first thing I would look at is checking the query plans on slow queries. See if you have any table scans that are taking up a significant percentage of your time. A table scan is when the database engine has to examine every row individually to see if it meets a WHERE criterion. If you find one of these, you can make the query run faster by indexing the table on the appropriate WHERE criteria. This can take search times from O(N) down to O(log N) or even O(1). Some databases will make it easy on you: their query plan analyzer will point out that you're missing an index and suggest what you ought to create. Also, check out the joins on your query. Make sure they aren't using too broad of joining criteria, and be careful that you're not using left outer joins when a full join would work. Both of these issues can cause a poorly-written query to produce too many rows and take longer to run. If you don't have missing indices or bad joins, a more advanced trick is *denormalization*: setting up columns on tables that duplicate data that can be found in other tables, to allow you to avoid joins or aggregates that can be expensive. This has to be done carefully, though, with triggers so that the data remains in sync, and is best done only if you know what you're doing and if there aren't any better alternatives available.
Specifically, in the Query Execution Plan look for actions that are table scans instead of index seeks. That is a hint that you might want to add an index to, say, a column that represents a foreign key (indexes on foreign keys don't get created automatically). Other options would be to put your data files on different physical disks. Using RAID for your partitions might work as well. At the very least, you want to separate log files from the data files, so that writing to the log does not impact the write time to the data file. More advanced scenarios include clustering and sharding to allow the load for searches to be spread across multiple nodes.
213,722
An interviewer asked me this question: > > Tables are created with appropriate normalization rules; however, the database is performing slowly. [I.e., the select and insert statements are taking time to do their operations.] What areas do we need to look at to improve the database performance? > > > Obviously this is a vague question. What sorts of things might be wrong with a database that is running slowly, even when normalized?
2013/10/08
[ "https://softwareengineering.stackexchange.com/questions/213722", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/46506/" ]
I would talk about how there are many things which can be done to improve performance. The first thing is always to investigate if the correct indexes are in place. Of particular concern in a normalized database is making sure FKs are indexed. Likely this would fix many performance issues. Other things to look at would be rewriting the SQL code to use more performant techniques such as getting rid of cursors and correlated subqueries and making where clauses sargable. You would want to review the worst performing queries individually. You would also want to review queries that are frequently run (especially if multiple users run them simultaneously), as a small change in those could multiply through the system. If your worst queries are coming from an ORM, they might need to be rewritten as stored procs so they can be performance tuned. You might also want to make sure you have a performance problem. What you might have is really a blocking problem, where performant code is being blocked by other processes and has to wait. Then you would look at hardware; if you have underpowered hardware and network connections, likely no other change is going to fix that. In a large enterprise system, you might consider data partitioning. Denormalization is a technique to improve performance, but it is the **last** thing you would want to consider. First, you have the risk to the data of changing the structure that drastically. Converting the data to the new structure is something that can go very badly wrong if a mistake is made, and it is more time-consuming to make this type of structural change than any of the other possible performance improvements. It would also be irresponsible to denormalize without creating triggers to make sure the data stays in sync as it is changed in the denormalized tables. This may mean selects are improved but action queries are slower, so performance may not be improved as much as you think. It is also a concern that in denormalizing, you may be making the tables significantly wider, and that can affect performance negatively if you have wide tables.
Sounds to me that your interviewer was not looking for a data-scientist answer but was simply looking to make sure you understand that "normalization" != "performance". So I'll keep this answer at the level that I'm guessing he wanted. Normalization means minimizing redundancy in stored data. Instead, you set up relationships (often with foreign key constraints) between multiple tables. However, while normalization might lead to a smaller amount of stored data, it often creates performance problems because now many queries end up joining multiple tables. Same thing with adding data, where you might now have to update multiple tables at once. Often, speed gains can be achieved by de-normalizing the data. You are storing more and there might be duplicates, but when it comes to running the most frequently used queries, all your data would now be in one table. Getting results from one table is usually much easier on the hardware than having to join multiple tables.
213,722
An interviewer asked me this question: > > Tables are created with appropriate normalization rules; however, the database is performing slowly. [I.e., the select and insert statements are taking time to do their operations.] What areas do we need to look at to improve the database performance? > > > Obviously this is a vague question. What sorts of things might be wrong with a database that is running slowly, even when normalized?
2013/10/08
[ "https://softwareengineering.stackexchange.com/questions/213722", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/46506/" ]
Sounds to me that your interviewer was not looking for a data-scientist answer but was simply looking to make sure you understand that "normalization" != "performance". So I'll keep this answer at the level that I'm guessing he wanted. Normalization means minimizing redundancy in stored data. Instead, you set up relationships (often with foreign key constraints) between multiple tables. However, while normalization might lead to a smaller amount of stored data, it often creates performance problems because now many queries end up joining multiple tables. Same thing with adding data, where you might now have to update multiple tables at once. Often, speed gains can be achieved by de-normalizing the data. You are storing more and there might be duplicates, but when it comes to running the most frequently used queries, all your data would now be in one table. Getting results from one table is usually much easier on the hardware than having to join multiple tables.
Specifically, in the Query Execution Plan look for actions that are table scans instead of index seeks. That is a hint that you might want to add an index to, say, a column that represents a foreign key (indexes on foreign keys don't get created automatically). Other options would be to put your data files on different physical disks. Using RAID for your partitions might work as well. At the very least, you want to separate log files from the data files, so that writing to the log does not impact the write time to the data file. More advanced scenarios include clustering and sharding to allow the load for searches to be spread across multiple nodes.
213,722
An interviewer asked me this question: > > Tables are created with appropriate normalization rules; however, the database is performing slowly. [I.e., the select and insert statements are taking time to do their operations.] What areas do we need to look at to improve the database performance? > > > Obviously this is a vague question. What sorts of things might be wrong with a database that is running slowly, even when normalized?
2013/10/08
[ "https://softwareengineering.stackexchange.com/questions/213722", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/46506/" ]
I would talk about how there are many things which can be done to improve performance. The first thing is always to investigate if the correct indexes are in place. Of particular concern in a normalized database is making sure FKs are indexed. Likely this would fix many performance issues. Other things to look at would be rewriting the SQL code to use more performant techniques such as getting rid of cursors and correlated subqueries and making where clauses sargable. You would want to review the worst performing queries individually. You would also want to review queries that are frequently run (especially if multiple users run them simultaneously), as a small change in those could multiply through the system. If your worst queries are coming from an ORM, they might need to be rewritten as stored procs so they can be performance tuned. You might also want to make sure you have a performance problem. What you might have is really a blocking problem, where performant code is being blocked by other processes and has to wait. Then you would look at hardware; if you have underpowered hardware and network connections, likely no other change is going to fix that. In a large enterprise system, you might consider data partitioning. Denormalization is a technique to improve performance, but it is the **last** thing you would want to consider. First, you have the risk to the data of changing the structure that drastically. Converting the data to the new structure is something that can go very badly wrong if a mistake is made, and it is more time-consuming to make this type of structural change than any of the other possible performance improvements. It would also be irresponsible to denormalize without creating triggers to make sure the data stays in sync as it is changed in the denormalized tables. This may mean selects are improved but action queries are slower, so performance may not be improved as much as you think. It is also a concern that in denormalizing, you may be making the tables significantly wider, and that can affect performance negatively if you have wide tables.
Specifically, in the Query Execution Plan look for actions that are table scans instead of index seeks. That is a hint that you might want to add an index to, say, a column that represents a foreign key (indexes on foreign keys don't get created automatically). Other options would be to put your data files on different physical disks. Using RAID for your partitions might work as well. At the very least, you want to separate log files from the data files, so that writing to the log does not impact the write time to the data file. More advanced scenarios include clustering and sharding to allow the load for searches to be spread across multiple nodes.
14,664,018
I have a JSF page with charts without AJAX support. I noticed that the charts load very slowly. Is it possible, for example, to load the body of the JSF page and to display "Loading..." inside the DIVs where the charts are positioned while the charts are being loaded? I use PrimeFaces for chart generation.
2013/02/02
[ "https://Stackoverflow.com/questions/14664018", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1103606/" ]
The [downloads API](https://developer.chrome.com/extensions/downloads.html) (as of now, available on the dev channel only) seems to support your request. Each invocation of the [chrome.downloads.download](https://developer.chrome.com/extensions/downloads.html#method-download) method constitutes a download of its own inside a separate thread (or process, for that matter).
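A hedged sketch against that API, as it might look in an extension's background script; the URLs are placeholders, and the callback receives the new download's id as documented for chrome.downloads.download:

```typescript
// Each call starts an independent download managed by the browser.
const urls = [
  "https://example.com/a.zip",
  "https://example.com/b.zip",
];

for (const url of urls) {
  chrome.downloads.download({ url }, (downloadId) => {
    console.log(`started download ${downloadId} for ${url}`);
  });
}
```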
Look at Metalink Downloader: <https://chrome.google.com/webstore/detail/metalink-downloader/jnpljlobbiggcdikagmiepniibjdinap> Metalink Downloader can set 10 simultaneous connections to download from one url.
62,729,716
I have a Domain class with a Float field named hfMonto. When I update it in the controller using hfObject.properties = params, with params being: [![enter image description here](https://i.stack.imgur.com/xta95.png)](https://i.stack.imgur.com/xta95.png) the hfMonto value in the table ends up rounded: [![enter image description here](https://i.stack.imgur.com/RpxK5.png)](https://i.stack.imgur.com/RpxK5.png) And the version field keeps increasing every time. Setting the Hibernate logging to trace shows no errors or warnings. I'm running Grails 3.3.11 on top of Java 1.8.0\_252-8u252-b09-1ubuntu1-b09. Any hints? Thanks. **UPDATE: it only happens with numbers bigger than 1,000,000.**
2020/07/04
[ "https://Stackoverflow.com/questions/62729716", "https://Stackoverflow.com", "https://Stackoverflow.com/users/253944/" ]
I finally made it work by changing the database field to *double* and the Domain attribute to *BigDecimal*.
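Roughly what that looks like in the domain class; the class name and the `scale` constraint here are illustrative, not taken from the original code:

```groovy
class Payment {
    // BigDecimal is exact, so values above 1,000,000 keep their
    // fractional part instead of being rounded the way Float was
    BigDecimal hfMonto

    static constraints = {
        hfMonto scale: 2   // matches the double/decimal column in the database
    }
}
```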
My suggestion is to check the database field: sometimes the field type in the actual database does not match the Hibernate mapping. Also make sure that what your frontend returns is what you are expecting; sometimes the frontend returns a string that chops off the floating-point part.
1,650
**Background**: If I like a particular song, I listen to it while doing other things until I get sick of that song. This creates an extremely strong bond between the two things: the song and the activity. The activities are usually programming or video games. Whenever I hear the song again, the emotions and/or visuals of whatever I was doing flood my brain, and I can remember them vividly. ### Questions * How does hearing a song trigger memories associated with that song? * Is there anything special about songs and memory associations? * Has any particular research been conducted on the association between songs and memory?
2012/09/16
[ "https://cogsci.stackexchange.com/questions/1650", "https://cogsci.stackexchange.com", "https://cogsci.stackexchange.com/users/1186/" ]
I believe this phenomenon is well known in cognitive science; that is how our memory works. The simplest explanation would be the Hebbian learning rule: "Neurons that fire together, wire together." So, you can imagine some neurons firing when you hear the music and some when you see the game. Now, if these used to fire together, they are probably connected. Somehow you've learned that these stimuli come together, and your brain recalls them all even when only the auditory stimulus is presented. Try reading more about episodic memory: > > For example, memories of people’s faces, the taste of the wine, the music that was playing, etc, might all be part of the memory of a particular dinner with friends. By repeatedly reactivating or “playing back” this particular activity pattern in the various regions of the cortex, they become so strongly linked with one another that they no longer need the hippocampus to act as their link, and the memory of the music that was playing that night, for example, can act as an index entry, and may be enough to bring back the entire scene of the dinner party. > [Episodic and semantic memory](http://www.human-memory.net/types_episodic.html) > > > However, auditory triggers are possibly not the strongest available. According to an idea called the "Proustian phenomenon", the strongest would be smell. An example study on that matter: [Proust revisited: Odours as triggers of aversive memories](http://www.tandfonline.com/doi/abs/10.1080/02699931.2011.555475) --- Reference: ---------- Toffolo, M.B.J., Smeets, M.A.M., & van den Hout, M.A. (2012). Proust revisited: Odours as triggers of aversive memories. *Cognition & Emotion, 26*(1), 83-92.
According to network models of memory, when information is stored in memory, it is not stored separately and by itself, but together with all the other aspects of the situation that you perceived. For example, if you listen to music, you do not memorize only the tune, but also your mood, the causes of that mood (your girlfriend next to you), the smells and sights of the place, even what you did before and after listening to the music. Each of these pieces of information is stored as a node in the "semantic network" that is your memory, and each time things happen in conjunction, the connection between these nodes is strengthened a little bit. Now, if you retrieve information from memory (e.g. you hear the music again, or come to the concert hall and remember the place or tune), the node for that information gets activated, and that activation spreads along the connections of that node to other nodes and activates them as well. So, for example, listening to that piece of music again will remind you of all the notable situations, but also of things that get activated through other activated nodes. For example, remembering your girlfriend sitting next to you that one time will also remind you of how she broke up with you a few weeks later, and thus the music may make you sad, although when you listened to it the first time it was a happy situation. If you google "semantic network" you will find comprehensive explanations, illustrations and examples. As to the answer by Oriesok Vlassky explaining episodic memory: the two models do not contradict each other but rather complement each other well, so I would say the answer is his and mine combined.
1,650
**Background**: If I like a particular song, I listen to it while doing other things until I get sick of that song. This creates an extremely strong bond between the two things: the song and the activity. The activities are usually programming or video games. Whenever I hear the song again, the emotions and/or visuals of whatever I was doing flood my brain, and I can remember them vividly. ### Questions * How does hearing a song trigger memories associated with that song? * Is there anything special about songs and memory associations? * Has any particular research been conducted on the association between songs and memory?
2012/09/16
[ "https://cogsci.stackexchange.com/questions/1650", "https://cogsci.stackexchange.com", "https://cogsci.stackexchange.com/users/1186/" ]
I believe this phenomenon is well known in cognitive science; that is how our memory works. The simplest explanation would be the Hebbian learning rule: "Neurons that fire together, wire together." So, you can imagine some neurons firing when you hear the music and some when you see the game. Now, if these used to fire together, they are probably connected. Somehow you've learned that these stimuli come together, and your brain recalls them all even when only the auditory stimulus is presented. Try reading more about episodic memory: > > For example, memories of people’s faces, the taste of the wine, the music that was playing, etc, might all be part of the memory of a particular dinner with friends. By repeatedly reactivating or “playing back” this particular activity pattern in the various regions of the cortex, they become so strongly linked with one another that they no longer need the hippocampus to act as their link, and the memory of the music that was playing that night, for example, can act as an index entry, and may be enough to bring back the entire scene of the dinner party. > [Episodic and semantic memory](http://www.human-memory.net/types_episodic.html) > > > However, auditory triggers are possibly not the strongest available. According to an idea called the "Proustian phenomenon", the strongest would be smell. An example study on that matter: [Proust revisited: Odours as triggers of aversive memories](http://www.tandfonline.com/doi/abs/10.1080/02699931.2011.555475) --- Reference: ---------- Toffolo, M.B.J., Smeets, M.A.M., & van den Hout, M.A. (2012). Proust revisited: Odours as triggers of aversive memories. *Cognition & Emotion, 26*(1), 83-92.
[Janata's (2009) study](http://cercor.oxfordjournals.org/content/19/11/2579.full) might be of interest to you. Specifically, the paper proposes that the medial prefrontal cortex (MPFC) "...associates music and memories when we experience emotionally salient episodic memories that are triggered by familiar songs from our personal past." **References**: Janata, P. (2009). The neural architecture of music-evoked autobiographical memories. *Cerebral Cortex, 19*(11), 2579-2594. <http://cercor.oxfordjournals.org/content/19/11/2579.full>
262,390
In English, do you have a proverb like “big fish eat small fish”, meaning “justice belongs to the stronger”? For example, suppose there is a successful new startup. Big companies start to eye the smaller one, and finally they acquire the small startup even though it wants to stay independent. The startup couldn’t keep its independence because of its limited financial resources.
2015/07/26
[ "https://english.stackexchange.com/questions/262390", "https://english.stackexchange.com", "https://english.stackexchange.com/users/105551/" ]
We have an allied proverb sometimes referred to as "the New Golden Rule": > > He who has the gold rules. > > > Doyle, Mieder & Shapiro, *The [Yale] Dictionary of Modern Proverbs* (2012), expresses this saying somewhat differently: > > He who has the gold makes the rules. > > > or > > Whoever has the gold rules. > > > The earliest citation of its first formulation is from 1967 (referring to an earlier *Wizard of Id* cartoon). --- A much older and very well known expression of the same idea is "**Might makes right**," which I suppose includes the right under big-fish law to eat little fish. Or again, to invoke the spirit of Anatole France, "The law, in its majestic equality, permits big fish and little fish alike to gulp each other down." --- And to top things off, we do have the saying "Big fish eat little fish" in English. Here is the entry for that proverb in Martin Manser, *The Facts on File Dictionary of Proverbs* (2002): > > **big fish eat little fish** Small organizations or insignificant people tend to be swallowed up or destroyed by those that are greater and more powerful ... The proverb was first recorded in a text dating from before 1200. In Shakespeare's play *Pericles* (2:1), the following exchange occurs between two fishermen: "'Master, I marvel how the fishes live in the sea.' 'Why, as men do a-land—the great ones eat up the little ones.'" > > >
An old proverb, **"The weakest go to the wall"** (or "...goes to the wall", as in Romeo and Juliet), looks like what you are looking for. > > * "go to the wall" - Lose a conflict, be defeated [TFD](http://idioms.thefreedictionary.com/go+to+the+wall) > > > There is also **"The house always wins."**, an old saying in which "the house" means a casino, where every game is statistically in the house's favor.
262,390
In English, do you have a proverb like “big fish eat small fish”, meaning “justice belongs to the stronger”? For example, suppose there is a successful new startup. Big companies start to eye the smaller one, and finally they acquire the small startup even though it wants to stay independent. The startup couldn’t keep its independence because of its limited financial resources.
2015/07/26
[ "https://english.stackexchange.com/questions/262390", "https://english.stackexchange.com", "https://english.stackexchange.com/users/105551/" ]
We have an allied proverb sometimes referred to as "the New Golden Rule": > > He who has the gold rules. > > > Doyle, Mieder & Shapiro, *The [Yale] Dictionary of Modern Proverbs* (2012), expresses this saying somewhat differently: > > He who has the gold makes the rules. > > > or > > Whoever has the gold rules. > > > The earliest citation of its first formulation is from 1967 (referring to an earlier *Wizard of Id* cartoon). --- A much older and very well known expression of the same idea is "**Might makes right**," which I suppose includes the right under big-fish law to eat little fish. Or again, to invoke the spirit of Anatole France, "The law, in its majestic equality, permits big fish and little fish alike to gulp each other down." --- And to top things off, we do have the saying "Big fish eat little fish" in English. Here is the entry for that proverb in Martin Manser, *The Facts on File Dictionary of Proverbs* (2002): > > **big fish eat little fish** Small organizations or insignificant people tend to be swallowed up or destroyed by those that are greater and more powerful ... The proverb was first recorded in a text dating from before 1200. In Shakespeare's play *Pericles* (2:1), the following exchange occurs between two fishermen: "'Master, I marvel how the fishes live in the sea.' 'Why, as men do a-land—the great ones eat up the little ones.'" > > >
Another saying that can convey the concept you are referring to is: ***[The law of the jungle](http://idioms.thefreedictionary.com/the+law+of+the+jungle):*** > > * the way in which only the strongest and cleverest people in a society stay alive or succeed. > > > + I was brought up on the streets where the law of the jungle applies, so I soon learnt how to look after myself. > > > Cambridge Idioms Dictionary