114,907
Jane is one of my female colleagues. She doesn't get on well with a third colleague, Joe. As he does every morning, Joe comes in to our office to say hello. Right after that, I receive an instant message from Jane through the office chat (in a private channel) saying:

> I don't want to be mean but does he shower in the morning?

I know that Joe can sometimes smell sweaty, especially when it's hot, but it has never been a major problem for me. I don't know how to respond to that without being rude to either of them, while staying as professional as I can.

Notes:

* Jane is from China; we work in western Europe.
* Jane can be really sensitive to body odours.
* Joe does not work near Jane, and the morning greeting is about the only physical contact they have every day.
* It's not the first time she has said something rude about somebody else in the company: I always try to change the subject or not to respond, but it's the first time it's so blatantly mean.
2018/06/28
[ "https://workplace.stackexchange.com/questions/114907", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/88713/" ]
~~Close her question as off-topic.~~ Dismiss her gossip:

> Joe does not share his daily routine with me, so I wouldn't know about it. You should probably ask him directly if you are so curious.

If she continues bothering you, you can choose to be more stern:

> Jane, I do not want to have this talk. If it really bothers you, you can talk to Joe directly.
Good answers here already, though I can't help but feel one is missing: it might not be about Joe at all. Maybe she just has difficulties socialising and will use any occasion to talk (or email) you, hoping it will lead to a conversation? Maybe she would like to befriend you or she might even be in love with you? How are other interactions between you and her?
I am surprised that none of the other answers have proposed what I would consider the simplest response to comments of this nature: no response at all.

At work, if somebody asks you a question or makes a statement that you find awkward, offensive or otherwise unpleasant, and it has no direct link to or effect upon your work, you *are not* obliged to respond and you *always* have the option of simply saying nothing and letting the comment hang.

In my experience it's an effective way of making it very clear you do not wish to discuss a topic, in a completely neutral way that doesn't compromise your professionalism or draw any sort of opinion out of you either way. This is, I believe, what you want.

It can be a bit socially awkward, especially in person (though less so in chat), but that is on the other person: they brought the awkward situation about, and it's not your duty to rescue it and make it less embarrassing for them. In any case, something that may alleviate this in particularly awkward cases is responding immediately with your own question on a completely different topic, just to avoid silence and to divert both of your attention in a more positive direction.
I would simply say

> Doesn't bother me

and leave it there. It is not rude to either one, and more likely than not Jane will get the hint that you are not interested in discussing Joe behind his back.
Honesty. If it isn't an issue for you, or you don't smell it, tell her so. In general, it is good to stay out of interpersonal squabbles and gossip.

Is the colleague cycling to work? That would be a good thing to tell her too. It might even be an argument for getting communal showers at work, though that is a whole large can of worms on its own, so tread carefully if at all. Also, don't use the colleague as an example of why showers are needed. In fact, you shouldn't even be the one raising the topic of getting showers if there are none, since you don't have an issue yourself.
152,017
Many academic positions are advertised for PhD students. Does it make sense to apply for such a position *even if you already have a PhD*, in the hope that they might also consider a post-doc instead? Why would the hiring institution *not* want a post-doc instead of a PhD student? Why would a prospective post-doc *not* want such a position?

In case somebody wonders why I'm asking:

* Often, there are many more projects advertised for prospective PhD students than for prospective post-docs.
* Many of the projects seem to be scientifically challenging enough to be interesting also for a post-doc, especially if one is switching fields or completing the project in a shorter time frame.
* I cannot see why institutions would be opposed to hiring post-docs instead of PhD students, considering the much higher qualification they bring. I can imagine that there are some limitations attached to funding, though.

For context, I am in an engineering / computer science field in Europe, where PhD and post-doc positions offer comparable salaries.
2020/07/21
[ "https://academia.stackexchange.com/questions/152017", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/61059/" ]
While it might sometimes be possible to convert the funding from a PhD to a postdoc position, in general I don't think it's a good idea to apply as a postdoc for a PhD position. That being said, you can always contact the PI and ask whether they have another position for a postdoc.

> * Often, there are many more projects advertised for prospective PhD students than for prospective post-docs.

This is due to several reasons:

* First, not every PhD student continues as a postdoc: a lot of them go to industry, some of them quit or fail, and some of them achieve a mediocre PhD which doesn't allow them to get a postdoc job. So mathematically there must be more PhD positions than postdoc positions: if, say, only 20% of PhDs continue as postdocs, there must be roughly five times more PhD positions offered than postdoc ones.
* Universities have a duty to teach students; they don't have a duty to hire temporary research staff. The number of PhD students taught is an important target indicator for an institution, whereas the number of postdocs is more a result of its success at grant applications.

> * Many of the projects seem to be scientifically challenging enough to be interesting also for a post-doc, especially if one is switching fields or completing the project in a shorter time frame.
> * I cannot see why institutions would be opposed to hiring post-docs instead of PhD students, considering the much higher qualification they bring. I can imagine that there are some limitations attached to funding, though.

I've heard the very rough estimate that one year of postdoc is equivalent to three years of PhD in terms of research productivity (incidentally, postdocs are often paid around three times more in countries where the PhD is a grant). There's some truth to the idea that a postdoc can do the same job as a PhD student faster.

However, there are also important differences which can make a PI choose a PhD student rather than a postdoc:

* A PhD topic can be more prospective and evolve over time.
* The PI might want the PhD student to test their own research ideas, so they want to supervise the work closely. A postdoc, on the contrary, is usually more independent.
* Supervising PhD students is a must for a PI to advance their career.
* Co-supervising a PhD student is a common way to start a collaboration with a colleague, whereas a postdoc doesn't need much supervision (if any).

It's important to understand that universities are not like companies: they are not looking for the best quality/price ratio for a task. The goal of funding a PhD is not only to answer a research question; it's also to teach somebody how to do research so that they can contribute to society later. This is why the main reason for an institution to hire PhD students is simply that it's an essential part of their mission, and why they receive public money. This is also why there are indeed very often limitations on the funding: a lot of PhD funding comes from national or international programs meant to increase research capacity by training researchers.
> Does it make sense to apply for such a position even if you already have a PhD, in the hope that they might also consider a post-doc instead?

Yes, in general this makes sense, because there is a chance that the position can be converted into a postdoc position. Whether that's indeed the case will depend on the nature of the funding: in some cases it will be possible, in others not. Therefore, before you apply, it's best to contact the PI and ask them whether such a conversion is possible.

Benefits:

1. You are now already on the PI's radar, which is a positive thing if you're a good candidate.
2. The PI might appreciate that you proactively sought contact. That's a good behavior pattern.
3. If the particular position cannot be turned into a postdoc, you will know earlier and save the time of applying and waiting.

Drawbacks:

1. Apparently none.

> Why would the hiring institution not want a post-doc instead of a PhD student?

There can be rules, for example if dedicated funding is allocated for PhD students. I know that this is the case for some positions in Germany. Beyond rules, there can be some benefits in hiring a PhD student: for example, the number of graduated PhD students is a success metric in hiring decisions.

> Why would a prospective post-doc not want such a position?

From the perspective of the post-doc, I don't see how the position (if a conversion is possible) would be different from any other post-doc position. This assumes that the institution does not try to lowball your salary because the position was initially meant for a PhD student (which would be a red flag).
Things are going to differ country to country, field to field and situation to situation, but often PhD studentships are funded by outside funding bodies (interestingly, this is not the case in the US). Those funding bodies are not primarily interested in the research that is produced from a PhD project, but in the education provided. All those PhD students that don't go on to become postdocs? Many of them go and use their PhD in industry, which is an economic boost.

Most PIs I know would indeed rather have a postdoc than a student, but in the end it's not the PI that controls the money or the priorities; it's whoever is providing the funding. That said, sometimes a PI might have a non-externally-funded position, or funding from a more fungible source. Normally they would already be advertising a postdoc position, but I have known a PI convert three years of a PhD studentship into one year of postdoc (postdocs cost three times as much as PhD students here).
It does not make any sense, at least in Europe; I don't know, maybe it is different somewhere else. Don't waste the paper on such an application; protect the planet. While it is possible to hold the very naive view that more competence and experience always make the better candidate, the scientific world also has many other restrictions directed against "professional PhD students".

I remember trying this really hard out of desperation, many years ago. At that time I had already held postdoctoral positions at leading universities and had publications in good journals, but eventually I was forced to yield and go to industry, like everybody does. At the same time, people straight out of my university (same master's degree, same specialty, very comparable grades) were finding PhD positions no problem.

The only way it could possibly work is if the professor has multiple options and can convert a PhD grant into something that suits you. But I have never seen this happen.
However, there are also important differences which can make a PI choose a PhD student rather than a postdoc: * a PhD topic can be more prospective and evolve over time. * the PI might want the PhD student to test their own research ideas, so they want to supervise the work closely. On the contrary, a postdoc is usually more independent. * supervising PhDs is a must for a PI to advance their career. * co-supervising a PhD student is a common way to start a collaboration with a colleague, whereas a postdoc doesn't need much supervision (if any). It's important to understand that universities are not like companies: they are not looking for the best quality/price ratio for a task. The goal of funding a PhD is not only to answer a research question, it's also to teach somebody how to do research so that they can contribute to society later. This is why the main reason for an institution to hire PhD students is simply that it's an essential part of their mission; that's why they receive public money. This is also why there are indeed very often limitations on the funding, simply because a lot of PhD funding comes from national or international programs meant to increase research capacity by training researchers.
Things are going to differ country to country, field to field and situation to situation, but often PhD studentships are funded by outside funding bodies (interestingly, this is not the case in the US). Those funding bodies are not primarily interested in the research that is produced from a PhD project, but in the education provided. All those PhD students that don't go on to become postdocs? Many of them go and use their PhD in industry, which is an economic boost. Most PIs I know would indeed rather have a postdoc than a student, but in the end it's not the PI that controls the money or the priorities, it's whoever is providing the funding. That said, sometimes a PI might have a non-externally funded position, or funding from a more fungible source. Normally they would already be advertising a postdoc position, but I have known a PI convert 3 years of a PhD student into 1 year of postdoc (postdocs cost 3x as much as PhDs here) before.
152,017
Many academic positions are advertised for PhD students. Does it make sense to apply for such a position *even if you already have a PhD*, in the hope that they might also consider a post-doc instead? Why would the hiring institution *not* want a post-doc instead of a PhD student? Why would a prospective post-doc *not* want such a position? In case somebody wonders why I'm asking: * Often, there are many more projects advertised for prospective PhD students than for prospective post-docs. * Many of the projects seem to be scientifically challenging enough to be interesting also for a post-doc, especially if one is switching fields or completing the project in a shorter time frame. * I cannot see why institutions would be opposed to hiring post-docs instead of PhD students, considering the much higher qualification they bring. I can imagine that there are some limitations attached to funding, though. For context, I am in an engineering / computer science field in Europe, where PhD and post-doc positions offer comparable salaries.
2020/07/21
[ "https://academia.stackexchange.com/questions/152017", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/61059/" ]
While it might sometimes be possible to convert the funding from a PhD to a postdoc position, in general I don't think it's a good idea to apply as a postdoc for a PhD position. That being said, you can always contact the PI and ask whether they would have another position for a postdoc. > > * Often, there are many more projects advertised for prospective PhD students than for prospective post-docs. > > > This is due to several reasons: * First not every PhD student continues as a postdoc: a lot of them go to industry, some of them quit or fail... and some of them achieve a mediocre PhD which doesn't allow them to get a postdoc job. So mathematically there must be more PhD positions than postdoc positions: if say only 20% of PhDs continue as postdocs, there must be roughly 5 times more PhD positions offered than postdoc ones. * Universities have a duty to teach students, they don't have a duty to hire temporary research staff. The number of PhD students taught is an important target indicator for an institution, whereas the number of postdocs is more a result of their success at grant applications. > > * Many of the projects seem to be scientifically challenging enough to be interesting also for a post-doc, especially if one is switching fields or completing the project in a shorter time frame. > * I cannot see why institutions would be opposed to hiring post-docs instead of PhD students, considering the much higher qualification they bring. I can imagine that there are some limitations attached to funding, though. > > > I've heard the very vague estimation that one year of postdoc is equivalent to 3 years of PhD in terms of research productivity (incidentally, they are often paid around 3 times more in countries where the PhD is a grant). There's some truth to the idea that a postdoc can do the same job as a PhD faster. 
However, there are also important differences which can make a PI choose a PhD student rather than a postdoc: * a PhD topic can be more prospective and evolve over time. * the PI might want the PhD student to test their own research ideas, so they want to supervise the work closely. On the contrary, a postdoc is usually more independent. * supervising PhDs is a must for a PI to advance their career. * co-supervising a PhD student is a common way to start a collaboration with a colleague, whereas a postdoc doesn't need much supervision (if any). It's important to understand that universities are not like companies: they are not looking for the best quality/price ratio for a task. The goal of funding a PhD is not only to answer a research question, it's also to teach somebody how to do research so that they can contribute to society later. This is why the main reason for an institution to hire PhD students is simply that it's an essential part of their mission; that's why they receive public money. This is also why there are indeed very often limitations on the funding, simply because a lot of PhD funding comes from national or international programs meant to increase research capacity by training researchers.
It does not make any sense, at least in Europe. I don't know, maybe it is different somewhere else. Do not waste the paper on such an application; protect the planet. While it is possible to have a very naive view that more competence and experience always make the better candidate, the scientific world also has many other restrictions directed against "professional PhD students". I remember trying this really hard out of desperation, many years ago. At that time I already had postdoctoral positions in leading universities and publications in good journals, but eventually I was forced to yield and go to industry, like everybody does. At the same time, people straight from my university (same master's degree, same specialty, very comparable grades) were finding PhD positions with no problem. The only way it could possibly work is if the professor has multiple options and can convert a PhD grant into something that suits you. But I have never seen this happen.
152,017
Many academic positions are advertised for PhD students. Does it make sense to apply for such a position *even if you already have a PhD*, in the hope that they might also consider a post-doc instead? Why would the hiring institution *not* want a post-doc instead of a PhD student? Why would a prospective post-doc *not* want such a position? In case somebody wonders why I'm asking: * Often, there are many more projects advertised for prospective PhD students than for prospective post-docs. * Many of the projects seem to be scientifically challenging enough to be interesting also for a post-doc, especially if one is switching fields or completing the project in a shorter time frame. * I cannot see why institutions would be opposed to hiring post-docs instead of PhD students, considering the much higher qualification they bring. I can imagine that there are some limitations attached to funding, though. For context, I am in an engineering / computer science field in Europe, where PhD and post-doc positions offer comparable salaries.
2020/07/21
[ "https://academia.stackexchange.com/questions/152017", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/61059/" ]
Things are going to differ country to country, field to field and situation to situation, but often PhD studentships are funded by outside funding bodies (interestingly, this is not the case in the US). Those funding bodies are not primarily interested in the research that is produced from a PhD project, but in the education provided. All those PhD students that don't go on to become postdocs? Many of them go and use their PhD in industry, which is an economic boost. Most PIs I know would indeed rather have a postdoc than a student, but in the end it's not the PI that controls the money or the priorities, it's whoever is providing the funding. That said, sometimes a PI might have a non-externally funded position, or funding from a more fungible source. Normally they would already be advertising a postdoc position, but I have known a PI convert 3 years of a PhD student into 1 year of postdoc (postdocs cost 3x as much as PhDs here) before.
It does not make any sense, at least in Europe. I don't know, maybe it is different somewhere else. Do not waste the paper on such an application; protect the planet. While it is possible to have a very naive view that more competence and experience always make the better candidate, the scientific world also has many other restrictions directed against "professional PhD students". I remember trying this really hard out of desperation, many years ago. At that time I already had postdoctoral positions in leading universities and publications in good journals, but eventually I was forced to yield and go to industry, like everybody does. At the same time, people straight from my university (same master's degree, same specialty, very comparable grades) were finding PhD positions with no problem. The only way it could possibly work is if the professor has multiple options and can convert a PhD grant into something that suits you. But I have never seen this happen.
26,224
I had a question during the PSM1 assessment: > > The IT manager asks Development team for a status report describing the progress throughout the Sprint. The Development team asks the Scrum Master for advice. The Scrum Master should/will: > (Select two best options) > > > * Create and deliver the report to the manager herself. * Tell the development team to figure it out themselves * Tell the development team to fit the report into the Sprint Backlog. * Talk to the IT manager and explain that the progress in Scrum comes from inspecting an increment at the Sprint Review. * Ask the Product Owner to send the manager the report. Can anyone suggest the proper answers out of the given options?
2019/04/17
[ "https://pm.stackexchange.com/questions/26224", "https://pm.stackexchange.com", "https://pm.stackexchange.com/users/28200/" ]
Status reports are not a thing in Scrum. The "manager" can either be a stakeholder of the Product Owner, in which case they'll be updated through that channel, or they can be hands-off and only manage what is left, i.e. the organisation, not the projects. Either way, the correct answer is: > > Talk to the IT manager and explain that the progress in Scrum comes from inspecting an increment at the Sprint Review. > > > If I had to pick a second fit, I'd pick "Create and deliver the report to the manager herself", because the data is all there. The board and burndown chart (or whatever the team uses to visualize progress) should be updated daily, and if the manager is not happy with that visualization, maybe because the physical board is in a different building or the tool is not accessible to them, I would see it as the Scrum Master's job to present the same data in another visualization to the manager. Maybe as a screenshot of the board, an automated email, or a graph of the burndown.
As per the Scrum Guide, under "Monitoring Progress Toward a Goal": > > At any point in time, the total work remaining to reach a goal can be summed. The Product Owner tracks this total work remaining at least every Sprint Review. The Product Owner compares this amount with work remaining at previous Sprint Reviews to assess progress toward completing projected work by the desired time for the goal. This information is made transparent to all stakeholders. > > > I would go with the PO providing the report if it is absolutely a necessity in the organisation. The SM can definitely coach the organisation on inspect & adapt and on how delivery is more important than reports, while also making sure the artifacts are transparent to everyone.
13,212,382
Working on a motion detector [js library](https://github.com/alonisser/WebcamSwiper) built with [WebRTC](http://www.webrtc.org/) + canvas. When I run the app I immediately get very high CPU usage. I optimized the loops etc., but the basic problem seems to be accessing the camera, i.e. WebRTC. **Is there a way to make WebRTC behave better?** Perhaps another configuration? Or is there something I'm missing? Could this be some JS memory leak I'm handling wrong? What am I doing wrong? You can check another demo [here](http://swiper.4p-tech.co.il) with the same lib, and a different one using WebRTC with the same problem [here](http://enotionz.github.com/jscii/)
2012/11/03
[ "https://Stackoverflow.com/questions/13212382", "https://Stackoverflow.com", "https://Stackoverflow.com/users/690204/" ]
The demo looks like it does motion detection by inspecting the video image's pixels. It appears to render the video to a canvas and then retrieve the canvas image data. This is slow because it's just a slow thing to do: there are lots of pixels, frames come through quickly, and it's a high-CPU job. It's made worse by the fact that JavaScript is not always very efficient at this kind of data-heavy processing. So I don't think the slowness is inherent to WebRTC. It's just heavy JavaScript.
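For what it's worth, the per-pixel frame-differencing idea behind this kind of detector can be sketched language-agnostically. Here is a minimal NumPy version (the actual library is JavaScript; the tiny 4x4 frames and the threshold of 30 below are made up purely for illustration):

```python
import numpy as np

def motion_score(prev_frame, curr_frame, threshold=30):
    """Return the fraction of pixels whose brightness changed by more
    than `threshold` between two grayscale frames (uint8 arrays)."""
    # Widen to int16 first so the subtraction cannot wrap around.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(np.count_nonzero(diff > threshold)) / diff.size

# Two tiny synthetic 4x4 "frames": a bright 2x2 block moves one pixel right.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = np.zeros((4, 4), dtype=np.uint8)
prev[1:3, 0:2] = 200   # block occupies columns 0-1
curr[1:3, 1:3] = 200   # same block shifted to columns 1-2

score = motion_score(prev, curr)
# Columns 0 and 2 changed in rows 1-2, i.e. 4 changed pixels out of 16.
print(score)  # 0.25
```

The expensive part in the browser is exactly this loop over every pixel of every frame, which is why downscaling the frames or skipping frames tends to help more than any WebRTC setting.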
Have you tried using [Web Workers](http://www.html5rocks.com/en/tutorials/workers/basics/) for the computation? There is a [demo using web workers for motion tracking](http://htmlfive.appspot.com/static/tracker1.html) (Firefox only). CPU usage does seem high in this demo, but the worker's reported fps is way over the framerate of the video, so there might be some benefit in trying to limit the worker to 30fps.
13,103,922
If I have signal values x[T] and filter coefficients b[i], I can perform filtering using convolution. ![enter image description here](https://i.stack.imgur.com/dALkg.png) Suppose I have the spectrum of x (after FFT) and I need to perform filtering using the filter coefficients. How can I do this? I heard that in the frequency domain it becomes multiplication rather than convolution (time domain), but I can't find an equation to use. I have 614000 values in the y = fft(x[T]) vector and 119 filter coefficients (generated using fdatool), so I can't multiply them directly ... Thanks.
2012/10/27
[ "https://Stackoverflow.com/questions/13103922", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1149797/" ]
You will need the spectrum of the zero-padded signal x and the FFT of the FIR filter kernel b zero-padded to the same length in order to do fast convolution by multiplication. Otherwise, without zero-padding, you will end up doing circular convolution instead of linear convolution. For long data vectors and relatively short FIR filter kernels, and to avoid needing really long FFTs and IFFTs, you might want to look into overlap-add or overlap-save fast-convolution algorithms.
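To make the zero-padding step concrete, here is a small NumPy sketch (the lengths 600 and 119 are scaled down from the 614000-sample signal for illustration). Both the signal and the kernel are padded to length `len(x) + len(b) - 1`, so the circular convolution performed by FFT multiplication equals the linear one, which the script checks against direct time-domain convolution:

```python
import numpy as np

def fft_filter(x, b):
    """Linear convolution of signal x with FIR kernel b via the FFT."""
    n = len(x) + len(b) - 1        # length needed to avoid wrap-around
    X = np.fft.rfft(x, n)          # the n argument zero-pads x
    B = np.fft.rfft(b, n)          # the n argument zero-pads b
    return np.fft.irfft(X * B, n)  # back to the time domain

rng = np.random.default_rng(0)
x = rng.standard_normal(600)   # stand-in for the long signal
b = rng.standard_normal(119)   # stand-in for the 119 FIR coefficients

y_fft = fft_filter(x, b)
y_direct = np.convolve(x, b)   # reference time-domain convolution
print(np.allclose(y_fft, y_direct))  # True
```

At the full 614000-sample length a single FFT of size ~614118 still works, but overlap-add/overlap-save breaks the signal into blocks so each FFT stays short.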
You need to transform your filter to the frequency domain as well. Then, in the frequency domain, you can use element-wise multiplication. <http://en.wikipedia.org/wiki/Convolution_theorem>
5,128,539
I am trying to create a simple animation: just a picture that needs to move from left to right while rotating around the vertical axis. I tried to have the first position normal and the final position scaled to -100% in width, but it also distorts and moves up and down. I have already done this in ActionScript, but not just graphically. Thanks
2011/02/26
[ "https://Stackoverflow.com/questions/5128539", "https://Stackoverflow.com", "https://Stackoverflow.com/users/585806/" ]
CodeModel and Eclipse JDT worked for [this fellow](https://stackoverflow.com/questions/121324/a-java-api-to-generate-java-source-files), and he ultimately chose CodeModel.
Try this: <http://code.google.com/p/javaparser/>. I use it to rewrite huge portions of my existing code base; ASTs can be constructed with it, and it is a readable wrapper written over the Eclipse core...
1,378
I was interested in searching for a question asking about the link between food colouring and hyperactivity (and other harmful effects). A search didn't really return anything obvious, but way down the list (where most people wouldn't bother looking), I found a relevant question: [Do food additives ("E Number") cause hyperactivity?](https://skeptics.stackexchange.com/questions/502/is-any-of-the-e-number-additives-generally-accepted-as-harmful) It would be great to have a duplicate question about the link between food colouring and hyperactivity that links to it. Is it ok to create one?
2012/02/03
[ "https://skeptics.meta.stackexchange.com/questions/1378", "https://skeptics.meta.stackexchange.com", "https://skeptics.meta.stackexchange.com/users/5343/" ]
I have edited the title and my answer (which is the accepted answer) to better reflect the actual question being asked. Truth be told, asking whether "X is harmful" is hardly informative. As I mentioned in my answer, at a concentration high enough, even vitamin C can be lethal. The question is now specifically about hyperactivity, and the title reflects that. It should now be easier to find it through search, both here and on Google. I have also deleted the answers which addressed the weaker claim ("Is X harmful?").
Erm - why would you want to do that? The question already exists. If you want it to be easier to find, can you provide an answer that has new or useful information which has been missed so far? If not, review the tags - are they appropriate for the search you carried out? Is the question not as good as it should be? You can suggest edits. Anyway - duplicates are a bad thing, mmm'kay.
13,079
I've seen a security question/requirement that a website login should return the same error message for an invalid password as for a non-existent user. The idea being that this makes it impossible to discover valid usernames by scanning the error messages. In cases where self-signup is available, does this make any difference? You can't allow signup with the same username twice, so an attacker could just attempt to sign up with usernames until a valid username is found. Thanks
2012/03/24
[ "https://security.stackexchange.com/questions/13079", "https://security.stackexchange.com", "https://security.stackexchange.com/users/19690/" ]
No, you have no choice for the signup process because the user name must be unique. However, to mitigate the attack you describe, your system should force a delay between attempts. For example, a 15-second delay will probably not be noticeable to humans (because it will take longer than that to choose a new user name), but an attacker is limited to 4 guesses per minute. On a related note, there's an urban legend about a web site that required its users to have unique *passwords* as well as unique user names. Needless to say, this did *not* enhance security. :-)
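As a rough sketch of that fixed-delay idea (the `LoginThrottle` class and its in-memory store are invented for illustration; a real system would persist attempt timestamps and likely throttle by IP as well):

```python
import time

class LoginThrottle:
    """Enforce a fixed delay between attempts per username."""
    def __init__(self, delay_seconds=15.0):
        self.delay = delay_seconds
        self._last_attempt = {}   # username -> monotonic timestamp

    def attempt_allowed(self, username, now=None):
        now = time.monotonic() if now is None else now
        last = self._last_attempt.get(username)
        if last is not None and now - last < self.delay:
            return False          # too soon: reject before any real check
        self._last_attempt[username] = now
        return True

throttle = LoginThrottle(delay_seconds=15.0)
# Simulated clock: the second attempt 5 s later is rejected,
# the third one 20 s after the first is allowed again.
print(throttle.attempt_allowed("alice", now=0.0))    # True
print(throttle.attempt_allowed("alice", now=5.0))    # False
print(throttle.attempt_allowed("alice", now=20.0))   # True
# With a 15-second delay, an attacker gets at most 60 / 15 = 4 guesses per minute.
```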
Yes, it does make a difference. I could scan the site testing for names until a valid name is found, then hammer that with a brute-force password attempt. Not knowing a valid username means I have to guess that any username might be valid, which increases my uncertainty and increases the number of test cases. If I know what account I want to brute-force, then no, it doesn't matter that there is no difference in messages.
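To put rough numbers on that uncertainty (the counts below are entirely hypothetical, chosen only to show the multiplication):

```python
# Hypothetical sizes of the attacker's search space.
candidate_usernames = 10_000      # names the attacker must consider
candidate_passwords = 1_000_000   # passwords in the dictionary

# If one valid username can be confirmed first, the password search
# runs against that single account:
known_target_attempts = candidate_passwords

# If valid usernames cannot be confirmed, every (username, password)
# pair becomes a test case:
unknown_target_attempts = candidate_usernames * candidate_passwords

print(unknown_target_attempts // known_target_attempts)  # 10000
```

In other words, hiding which usernames exist multiplies the brute-force work by the number of candidate usernames, which is exactly the uncertainty the answer describes.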
13,079
I've seen a security question/requirement that a website login should return the same error message for an invalid password as for a non-existent user. The idea being that this makes it impossible to discover valid usernames by scanning the error messages. In cases where self-signup is available, does this make any difference? You can't allow signup with the same username twice, so an attacker could just attempt to sign up with usernames until a valid username is found. Thanks
2012/03/24
[ "https://security.stackexchange.com/questions/13079", "https://security.stackexchange.com", "https://security.stackexchange.com/users/19690/" ]
No, you have no choice for the signup process because the user name must be unique. However, to mitigate the attack you describe, your system should force a delay between attempts. For example, a 15-second delay will probably not be noticeable to humans (because it will take longer than that to choose a new user name), but an attacker is limited to 4 guesses per minute. On a related note, there's an urban legend about a web site that required its users to have unique *passwords* as well as unique user names. Needless to say, this did *not* enhance security. :-)
@Yoav said it very well. Yes, it's important to protect your customers' privacy and (probably) your customer list. I think a good mitigation is to combine the answers from @Adam and @twobeers. 1. Require an email verification step every time a username/email is created or changed. Don't reveal whether or not the account exists. 2. Rate-limit any forms that can modify usernames/emails. The combination of these two protects your customers' identity and limits abuse of the email forms.
13,079
I've seen a security question/requirement that a website login should return the same error message for an invalid password as for a non-existent user. The idea being that this makes it impossible to discover valid usernames by scanning the error messages. In cases where self-signup is available, does this make any difference? You can't allow signup with the same username twice, so an attacker could just attempt to sign up with usernames until a valid username is found. Thanks
2012/03/24
[ "https://security.stackexchange.com/questions/13079", "https://security.stackexchange.com", "https://security.stackexchange.com/users/19690/" ]
If there's **at least one** way to discover which user names exist on the site, it means that you can try to brute-force/dictionary-attack/social-engineer those specific accounts. In the case you described, it makes no tangible security difference to hide the error reason on the login page, since it's trivial to discover the real reason otherwise. Depending on what's more important to you, you can either make your login process more user (and attacker) friendly, or instead try to secure your registration (or any other) process that might reveal which accounts exist on your system. When defending against such cases related to the authentication process for a typical web application, you should normally take into account the following routes: * **Signup/Registration** - this can reveal which accounts exist, as well as allow flooding your system with fake/stale accounts, name-squatting, etc. Captcha can usually provide good protection, as well as timeouts, but won't stop manual, slower attacks. I would also suggest not giving hints about username availability or the success of the registration process. You can just say "Thanks for registering. A confirmation email will be sent shortly to confirm the account" (or something like that). * **Login page** - this is the obvious point, and where most applications already have fairly standard protection, including the good practice of non-revealing errors etc. Slowing down the login process, or monitoring abuse, can also help. Lockout for failed logins is also a possibility, but then you're more prone to denial-of-service attacks. * **Forgot password** - this is often neglected when considering information leakage. When someone puts their email/username in the forgot-password field, you should respond with the same message regardless of whether the email/account is known or not.
Note that this might create support issues (*"I put in my email and your system said it's sending me a reset email, but I didn't get anything..."*) * **Account details/email changes** - many applications allow you to, e.g., change your email address or even account name. This could also leak whether or not a given account or email address already exists. The same rules apply here.
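A minimal sketch of the uniform-response idea for the forgot-password route (the `find_user` and `send_reset_email` helpers below are placeholders standing in for a real user store and mailer, not any actual API):

```python
# Placeholder user store for the sketch.
REGISTERED = {"alice@example.com"}

def find_user(email):
    """Look the address up in the (placeholder) user store."""
    return email if email in REGISTERED else None

def send_reset_email(email):
    """Queue the reset mail; a no-op in this sketch."""
    pass

def forgot_password(email):
    """Return the same message whether or not the account exists,
    so the response cannot be used to enumerate registered addresses."""
    user = find_user(email)
    if user is not None:
        send_reset_email(user)
    # Identical reply either way:
    return "If an account exists for this address, a reset email has been sent."

print(forgot_password("alice@example.com") == forgot_password("nobody@example.com"))  # True
```

To keep the timing side channel small as well, the actual mail sending should happen out of band (e.g. on a queue), so both branches take roughly the same time to respond.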
Yes, it does make a difference. I could scan the site testing for names until a valid name is found, then hammer that with a brute-force password attempt. Not knowing a valid username means I have to guess that any username might be valid, which increases my uncertainty and increases the number of test cases. If I know what account I want to brute-force, then no, it doesn't matter that there is no difference in messages.
13,079
I've seen a security question/requirement that a website login should return the same error message for an invalid password as for a non-existent user. The idea being that this makes it impossible to discover valid usernames by scanning the error messages. In cases where self-signup is available, does this make any difference? You can't allow signup with the same username twice, so an attacker could just attempt to sign up with usernames until a valid username is found. Thanks
2012/03/24
[ "https://security.stackexchange.com/questions/13079", "https://security.stackexchange.com", "https://security.stackexchange.com/users/19690/" ]
Yes, it does make a difference. I could scan the site testing for names until a valid name is found, then hammer that with a brute-force password attempt. Not knowing a valid username means I have to guess that any username might be valid, which increases my uncertainty and increases the number of test cases. If I know what account I want to brute-force, then no, it doesn't matter that there is no difference in messages.
@Yoav said it very well. Yes, it's important to protect your customers' privacy and (probably) your customer list. I think a good mitigation is to combine the answers from @Adam and @twobeers. 1. Require an email verification step every time a username/email is created or changed. Don't reveal whether or not the account exists. 2. Rate-limit any forms that can modify usernames/emails. The combination of these two protects your customers' identity and limits abuse of the email forms.
13,079
I've seen a security question/requirement that a website login should return the same error message for an invalid password as for a non-existent user. The idea being that this makes it impossible to discover valid usernames by scanning the error messages. In cases where self-signup is available, does this make any difference? You can't allow signup with the same username twice, so an attacker could just attempt to sign up with usernames until a valid username is found. Thanks
2012/03/24
[ "https://security.stackexchange.com/questions/13079", "https://security.stackexchange.com", "https://security.stackexchange.com/users/19690/" ]
If there's **at least one** way to discover which user names exist on the site, it means that you can try to brute-force/dictionary-attack/social-engineer those specific accounts. In the case you described, it makes no tangible security difference to hide the error reason on the login page, since it's trivial to discover the real reason otherwise. Depending on what's more important to you, you can either make your login process more user (and attacker) friendly, or instead try to secure your registration (or any other) process that might reveal which accounts exist on your system. When defending against such cases related to the authentication process for a typical web application, you should normally take into account the following routes: * **Signup/Registration** - this can reveal which accounts exist, as well as allow flooding your system with fake/stale accounts, name-squatting, etc. Captcha can usually provide good protection, as well as timeouts, but won't stop manual, slower attacks. I would also suggest not giving hints about username availability or the success of the registration process. You can just say "Thanks for registering. A confirmation email will be sent shortly to confirm the account" (or something like that). * **Login page** - this is the obvious point, and where most applications already have fairly standard protection, including the good practice of non-revealing errors etc. Slowing down the login process, or monitoring abuse, can also help. Lockout for failed logins is also a possibility, but then you're more prone to denial-of-service attacks. * **Forgot password** - this is often neglected when considering information leakage. When someone puts their email/username in the forgot-password field, you should respond with the same message regardless of whether the email/account is known or not.
Note that this might create support issues (*"I put in my email and your system said it's sending me a reset email, but I didn't get anything..."*) * **Account details/email changes** - many applications allow you to, e.g., change your email address or even account name. This could also leak whether or not a given account or email address already exists. The same rules apply here.
@Yoav said it very well. Yes, it's important to protect your customers' privacy and (probably) your customer list. I think a good mitigation is to combine the answers from @Adam and @twobeers. 1. Require an email verification step every time a username/email is created or changed. Don't reveal whether or not the account exists. 2. Rate-limit any forms that can modify usernames/emails. The combination of these two protects your customers' identity and limits abuse of the email forms.
120,917
In my division (of a very large company) it seems common that someone's line manager is also their project manager. Not always, but often. This seems counterintuitive to me: if I have an issue with my project manager, do I then report it to my line manager? This is in France - I have been working in other countries where I would have found that crazy. But now I wonder what is normal or acceptable. Edit: maybe an addition to define the terms as I understand them (I might be wrong) * line manager: the person in charge of personal objectives, how I fit in the division, what projects I will work on * project manager: the person in charge of my day-to-day tasks
2018/10/17
[ "https://workplace.stackexchange.com/questions/120917", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/93479/" ]
**Yes, it frequently happens this way.** There are lots of reasons for it. A company may be small enough that neither of those jobs is enough to occupy a person full time. Some companies like to keep reporting structures small and simple - the person who assesses your performance is also the person who handles your day-to-day activity. Reporting to two people sometimes creates unclear boundaries. If most of your work is project-based then your line manager may end up with very little to do, especially if projects are long and ongoing, such as when each project is basically "produce the next version of our product". There are still plenty of ways to handle problems that may arise with your boss. Talking to HR and talking to your boss' boss are two of the most common.
In my experience, it's not uncommon in Japan, US, Germany and UK.
120,917
In my division (of a very large company) it seems common that someone's line manager is also their project manager. Not always, but often. This seems counterintuitive to me: if I have an issue with my project manager, do I then report it to my line manager? This is in France - I have been working in other countries where I would have found that crazy. But now I wonder what is normal or acceptable. Edit: maybe an addition to define the terms as I understand them (I might be wrong) * line manager: the person in charge of personal objectives, how I fit in the division, what projects I will work on * project manager: the person in charge of my day-to-day tasks
2018/10/17
[ "https://workplace.stackexchange.com/questions/120917", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/93479/" ]
In my US-Midwest experience no, but I have seen many project managers be promoted to managers and directors of development groups over technical people. Good luck.
In my experience, it's not uncommon in Japan, US, Germany and UK.
120,917
In my division (of a very large company) it seems common that someone's line manager is also their project manager. Not always, but often. This seems counterintuitive to me: if I have an issue with my project manager, do I then report it to my line manager? This is in France - I have been working in other countries where I would have found that crazy. But now I wonder what is normal or acceptable. Edit: maybe an addition to define the terms as I understand them (I might be wrong) * line manager: the person in charge of personal objectives, how I fit in the division, what projects I will work on * project manager: the person in charge of my day-to-day tasks
2018/10/17
[ "https://workplace.stackexchange.com/questions/120917", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/93479/" ]
"Commonplace" is a tough metric to put a measure on. I have worked in companies that have used both models successfully. The scenario you describe with separate managers, I have heard referred to as Matrix Management. This has worked well when the projects were short lived and individual contributors moved fluidly from project to project, where each project has its own subject matter expert. In this model, each contributor has a single manager watching over their performance as a whole so the individual project managers can focus on their deliverables. For companies where projects are longer lived, the Matrix model can create unnecessary overhead since there will not be numerous sources of evaluation that a single person needs to coalesce. The project manager will already have full insight in to the performance and activities of each project team member. Whether or not it is commonplace is more likely to be a function of how the company works and the scale of projects that contributors might work on. Each company would have its own best fit. Either model can be very successful and both are frequently used.
In my experience, it's not uncommon in Japan, US, Germany and UK.
120,917
In my division (of a very large company) it seems common that someone's line manager is also their project manager. Not always, but often. This seems counterintuitive to me: if I have an issue with my project manager, do I then report it to my line manager? This is in France - I have been working in other countries where I would have found that crazy. But now I wonder what is normal or acceptable. Edit: maybe an addition to define the terms as I understand them (I might be wrong) * line manager: the person in charge of personal objectives, how I fit in the division, what projects I will work on * project manager: the person in charge of my day-to-day tasks
2018/10/17
[ "https://workplace.stackexchange.com/questions/120917", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/93479/" ]
**Yes, it frequently happens this way.** There are lots of reasons for it. A company may be small enough that neither of those jobs is enough to occupy a person full time. Some companies like to keep reporting structures small and simple - the person who assesses your performance is also the person who handles your day-to-day activity. Reporting to two people sometimes creates unclear boundaries. If most of your work is project-based then your line manager may end up with very little to do, especially if projects are long and ongoing, such as when each project is basically "produce the next version of our product". There are still plenty of ways to handle problems that may arise with your boss. Talking to HR and talking to your boss' boss are two of the most common.
In my US-Midwest experience no, but I have seen many project managers be promoted to managers and directors of development groups over technical people. Good luck.
120,917
In my division (of a very large company) it seems common that someone's line manager is also their project manager. Not always, but often. This seems counterintuitive to me: if I have an issue with my project manager, do I then report it to my line manager? This is in France - I have been working in other countries where I would have found that crazy. But now I wonder what is normal or acceptable. Edit: maybe an addition to define the terms as I understand them (I might be wrong) * line manager: the person in charge of personal objectives, how I fit in the division, what projects I will work on * project manager: the person in charge of my day-to-day tasks
2018/10/17
[ "https://workplace.stackexchange.com/questions/120917", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/93479/" ]
**Yes, it frequently happens this way.** There are lots of reasons for it. A company may be small enough that neither of those jobs is enough to occupy a person full time. Some companies like to keep reporting structures small and simple - the person who assesses your performance is also the person who handles your day-to-day activity. Reporting to two people sometimes creates unclear boundaries. If most of your work is project-based then your line manager may end up with very little to do, especially if projects are long and ongoing, such as when each project is basically "produce the next version of our product". There are still plenty of ways to handle problems that may arise with your boss. Talking to HR and talking to your boss' boss are two of the most common.
"Commonplace" is a tough metric to put a measure on. I have worked in companies that have used both models successfully. The scenario you describe with separate managers, I have heard referred to as Matrix Management. This has worked well when the projects were short lived and individual contributors moved fluidly from project to project, where each project has its own subject matter expert. In this model, each contributor has a single manager watching over their performance as a whole so the individual project managers can focus on their deliverables. For companies where projects are longer lived, the Matrix model can create unnecessary overhead since there will not be numerous sources of evaluation that a single person needs to coalesce. The project manager will already have full insight in to the performance and activities of each project team member. Whether or not it is commonplace is more likely to be a function of how the company works and the scale of projects that contributors might work on. Each company would have its own best fit. Either model can be very successful and both are frequently used.
120,917
In my division (of a very large company) it seems common that someone's line manager is also their project manager. Not always, but often. This seems counterintuitive to me: if I have an issue with my project manager, do I then report it to my line manager? This is in France - I have been working in other countries where I would have found that crazy. But now I wonder what is normal or acceptable. Edit: maybe an addition to define the terms as I understand them (I might be wrong) * line manager: the person in charge of personal objectives, how I fit in the division, what projects I will work on * project manager: the person in charge of my day-to-day tasks
2018/10/17
[ "https://workplace.stackexchange.com/questions/120917", "https://workplace.stackexchange.com", "https://workplace.stackexchange.com/users/93479/" ]
"Commonplace" is a tough metric to put a measure on. I have worked in companies that have used both models successfully. The scenario you describe with separate managers, I have heard referred to as Matrix Management. This has worked well when the projects were short lived and individual contributors moved fluidly from project to project, where each project has its own subject matter expert. In this model, each contributor has a single manager watching over their performance as a whole so the individual project managers can focus on their deliverables. For companies where projects are longer lived, the Matrix model can create unnecessary overhead since there will not be numerous sources of evaluation that a single person needs to coalesce. The project manager will already have full insight in to the performance and activities of each project team member. Whether or not it is commonplace is more likely to be a function of how the company works and the scale of projects that contributors might work on. Each company would have its own best fit. Either model can be very successful and both are frequently used.
In my US-Midwest experience no, but I have seen many project managers be promoted to managers and directors of development groups over technical people. Good luck.
79,524
I have Lang's 2 volume set on "Cyclotomic fields", and Washington's "Introduction to Cyclotomic Fields", but I feel I need something more elementary. Maybe I need to read some more on algebraic number theory, I do not know. So I would appreciate suggestions of books, or chapters in a book, lecture notes, etc. that would give me an introduction. I am specifically interested in connection of cyclotomic fields and Bernoulli numbers. Thank you.
2011/11/06
[ "https://math.stackexchange.com/questions/79524", "https://math.stackexchange.com", "https://math.stackexchange.com/users/11069/" ]
I would just start by looking at Marcus' Number Fields for the basic algebraic number theory. It also contains tons of exercises. If you read the first 4 chapters, you should have the necessary background for most of Washington's book. I'm not familiar with Lang. I started studying algebraic number theory last summer by going through Marcus' book.
pki's suggestion is good. A couple of other books worth a look are Pollard and Diamond, The Theory of Algebraic Numbers (in the MAA Carus Mathematical Monographs series), and Stewart and Tall, Algebraic Number Theory. Ireland and Rosen, A Classical Introduction to Modern Number Theory, doesn't get as far into algebraic number theory as the others, but it is well-written and has a chapter on cyclotomic fields and a chapter on Bernoulli numbers.
6,337,625
Very new to Drupal. I need to create a section of my website that contains a directory of 300 companies. I think it would be best to create separate database tables and not include this data in Drupal's node architecture. Is there a typical/best-practice way to manage custom data in Drupal?
2011/06/13
[ "https://Stackoverflow.com/questions/6337625", "https://Stackoverflow.com", "https://Stackoverflow.com/users/461880/" ]
You might want to take a look at the Drupal 7 [Entity API](http://www.istos.it/blog/drupal/drupal-entities-part-1-moving-beyond-nodes). If you want to use your own database schema within Drupal, that would be the way to do it. I have been using it for a while, and it sounds exactly like what you are looking for.
The way I did it was to get the module that allows you to use PHP in pages (PHP filter) and then write a simple database wrapper. Drupal provides a good theme framework that makes it pretty easy to make themed tables with support for sorting, etc.
43,974,135
This weekend I was working on a game I am developing. On Level 2 a number of players will get the best score (9 moves). Rather than placing everyone with the same score in 1st position, the first player to get 9 moves is positioned 1st, the second player to get 9 moves is positioned 2nd and so on. Is it possible to make them all tied in 1st place since they all have the same best score?
2017/05/15
[ "https://Stackoverflow.com/questions/43974135", "https://Stackoverflow.com", "https://Stackoverflow.com/users/646818/" ]
I can't find anything in Apple's documentation that specifically defines behavior for tie scores, but I think that's the expected behavior. What you'll have to do is [retrieve the leaderboard](https://developer.apple.com/library/content/documentation/NetworkingInternet/Conceptual/GameKit_Guide/Achievements/Achievements.html#//apple_ref/doc/uid/TP40008304-CH7-SW30) and design your own UI for displaying it.
Ah, the classic **first come, first served** scenario. As @mmd1080 recommends, an in-game leaderboard would be the solution. You can also take the long road and request that this feature (a tie condition) be added to iOS Game Center, since you are the developer of the application.
131,433
I've read [JRM's Syntax-rules Primer](http://www.xs4all.nl/~hipster/lib/scheme/gauche/define-syntax-primer.txt) for the Merely Eccentric and it has helped me understand syntax-rules and how it's different from common-lisp's define-macro. syntax-rules is only one way of implementing a syntax transformer within define-syntax. I'm looking for two things, the first is more examples and explanations of syntax-rules and the second is good sources for learning the other ways of using define-syntax. What resources do you recommend?
2008/09/25
[ "https://Stackoverflow.com/questions/131433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19784/" ]
The JRM Syntax-rules primer is quite good, but Chapter 36 of Programming Languages: Application and Interpretation, by Shriram Krishnamurthi (<http://www.cs.brown.edu/~sk/Publications/Books/ProgLangs/>) also has good coverage of writing Scheme macros. That material has been used and improved over several short articles, tech reports, etc., over the past 10 years, so it's not a 'this was true about the X implementation of Scheme in 1983 that is no longer accessible' paper.
The list of resources at [The Scheme Cookbook](https://web.archive.org/web/20150321052219/http://schemecookbook.org/Cookbook/GettingStartedMacros) is a great place to start. If you prefer papers, then don't hesitate to visit [readscheme.org](https://web.archive.org/web/20160306064729/http://readscheme.org/).
12,939
I've been trying to identify the music that is heard while one talks diplomacy with *Qin Shi Huang* in *Civilization IV*. The melody has been stuck in my head for a while now, and I'd like to know what it is I keep whistling. Sadly, it's not listed on [the Wikipedia article about music in *Civilization IV*](http://en.wikipedia.org/wiki/Music_in_Civilization_IV). Since I have not played any other games in the series, it is entirely possible the theme is a remake of music from an older game in the *Civilization* series. Does anyone know more about this music theme? For reference: I'm talking about this song:
2010/12/19
[ "https://gaming.stackexchange.com/questions/12939", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/5581/" ]
It is an arrangement of the title theme to the Super NES conversion of the original Civilisation. [If this link doesn't work I'm sure you can find it](https://www.youtube.com/watch?v=3FJ5v0acZkk)
I believe that's an original track for Civilization 4. It's not based on an existing song, as I didn't find any references to the track on any album. But to be sure, I would simply send an email to the game's support team; I think they could confirm this. As for the music the game already has, you could just download that YouTube video and extract the sound. Alternatively, if you want, I can get the game from a friend and try to locate/rip the track for you. However, I think you can find music similar to this one easily. Just search for Chinese classical music on YouTube.
29,002,680
I created a Kibana 4 dashboard a few days ago and named it "test dashboard". But when I load it now, it is empty, as shown in the screenshot below. ![enter image description here](https://i.stack.imgur.com/Vf1H4.png) It was working perfectly and showing various analytics when I created it. Any clues?
2015/03/12
[ "https://Stackoverflow.com/questions/29002680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Have you indexed any data in the last 15 minutes? If not, you need to adjust the time range in the upper right to include a period that has data.
The only thing I'd add to Vineeth's answer is that you access the time scales by clicking on the time scale shown in the top right corner of the Kibana screen (in this case 'last 15 minutes')
29,002,680
I created a Kibana 4 dashboard a few days ago and named it "test dashboard". But when I load it now, it is empty, as shown in the screenshot below. ![enter image description here](https://i.stack.imgur.com/Vf1H4.png) It was working perfectly and showing various analytics when I created it. Any clues?
2015/03/12
[ "https://Stackoverflow.com/questions/29002680", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
By default Kibana tries to show the last 15 minutes' worth of data. You might not have indexed any data within that 15-minute window. In this case, you can change the time filter as shown below - ![enter image description here](https://i.stack.imgur.com/VhuxP.png)
The only thing I'd add to Vineeth's answer is that you access the time scales by clicking on the time scale shown in the top right corner of the Kibana screen (in this case 'last 15 minutes')
58,989
Is it possible to delete only one step in the middle of history (keeping all steps after it) without juggling with multiple files?
2015/09/02
[ "https://graphicdesign.stackexchange.com/questions/58989", "https://graphicdesign.stackexchange.com", "https://graphicdesign.stackexchange.com/users/32915/" ]
No, it's not possible to delete a step in your history without deleting all steps after it.
No, not possible. History only stores the list of steps, not its actions.
36,362
I am seeing an emerging design pattern in web apps that is used for helping new users get oriented to a page or application. It consists of showing a diagram with succinct helper-text over a semi-transparent overlay, sometimes with arrows pointing to specific controls on the page. One of the best example of this I have seen is in UX Pin, an online wireframing/design tool. Has anyone ever utilized this pattern - and if so, what is it called? Or how did *you* refer to it? I am also interested in learning how it is accomplished. Is there a tool or plug-in that might be useful for achieving this effect, and is it possible to do this in a reusable fashion without placing static text in a transparent png?
2013/03/14
[ "https://ux.stackexchange.com/questions/36362", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/28539/" ]
I don't think there is a specific name for it, as it probably depends on its implementation. I've seen 2 versions: a fully static one which must be closed first, and one with "holes" in the overlay which allow users to interact with the page underneath. Here is a pattern with some proposed keywords to combine below: *Pattern* Location + Piece of UI + Representation *Keywords* * **Location**: on-page, on-screen, etc. * **Piece of UI**: help, helper, guide, on-boarding * **Representation**: overlay, screen Examples: on-page helper overlay, on-screen guide, etc.
I would call it a **tooltip** *with a semi transparent background*. ![enter image description here](https://i.stack.imgur.com/6tRev.png) At least this is what webappers calls it, where they implement the tooltip with jQuery and CSS described in their article [Simple Transparent Tooltips with jQuery and CSS](http://www.webappers.com/2008/10/17/simple-transparent-tooltips-with-jquery-and-css/).
36,362
I am seeing an emerging design pattern in web apps that is used for helping new users get oriented to a page or application. It consists of showing a diagram with succinct helper-text over a semi-transparent overlay, sometimes with arrows pointing to specific controls on the page. One of the best example of this I have seen is in UX Pin, an online wireframing/design tool. Has anyone ever utilized this pattern - and if so, what is it called? Or how did *you* refer to it? I am also interested in learning how it is accomplished. Is there a tool or plug-in that might be useful for achieving this effect, and is it possible to do this in a reusable fashion without placing static text in a transparent png?
2013/03/14
[ "https://ux.stackexchange.com/questions/36362", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/28539/" ]
I would call it a **tooltip** *with a semi transparent background*. ![enter image description here](https://i.stack.imgur.com/6tRev.png) At least this is what webappers calls it, where they implement the tooltip with jQuery and CSS described in their article [Simple Transparent Tooltips with jQuery and CSS](http://www.webappers.com/2008/10/17/simple-transparent-tooltips-with-jquery-and-css/).
It can also be called a **wizard**. Semi transparent overlay is just a css property and you can modify that based on the visual attributes of your platform. Wizard can be used for multiple functionalities: 1. First run experience/ initial few times experience 2. Performing multiple steps to complete an action 3. Informing the user about what are the new feature releases for the platform. Wizards can be static i.e. on a fixed location on the page or can move to a particular part of the page to give more context to the user. The wizard would typically be static for point 2 mentioned above and can be mobile for 1 and 3. **Example of First Run Experience of Google Data Studio:** [![FRE Google Data Studio](https://i.stack.imgur.com/xQtvu.png)](https://i.stack.imgur.com/xQtvu.png) **Example of Performing multiple steps to complete an action - Microsoft PowerApps:** [![Multistep form Microsoft PowerApps](https://i.stack.imgur.com/yiqEw.png)](https://i.stack.imgur.com/yiqEw.png) **Example of informing users about new feature releases - Confluence by Atlassian:** [![Confluence new feature release](https://i.stack.imgur.com/cGHEQ.png)](https://i.stack.imgur.com/cGHEQ.png) **Documentation of Wizard Patterns with code snippets:** * [Clarity Design System Wizards](https://clarity.design/documentation/wizards) * [SAP Fiori Design Guidelines Wizard](https://experience.sap.com/fiori-design-web/wizard/)
36,362
I am seeing an emerging design pattern in web apps that is used for helping new users get oriented to a page or application. It consists of showing a diagram with succinct helper-text over a semi-transparent overlay, sometimes with arrows pointing to specific controls on the page. One of the best example of this I have seen is in UX Pin, an online wireframing/design tool. Has anyone ever utilized this pattern - and if so, what is it called? Or how did *you* refer to it? I am also interested in learning how it is accomplished. Is there a tool or plug-in that might be useful for achieving this effect, and is it possible to do this in a reusable fashion without placing static text in a transparent png?
2013/03/14
[ "https://ux.stackexchange.com/questions/36362", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/28539/" ]
I don't think there is a specific name for it, as it probably depends on its implementation. I've seen 2 versions: a fully static one which must be closed first, and one with "holes" in the overlay which allow users to interact with the page underneath. Here is a pattern with some proposed keywords to combine below: *Pattern* Location + Piece of UI + Representation *Keywords* * **Location**: on-page, on-screen, etc. * **Piece of UI**: help, helper, guide, on-boarding * **Representation**: overlay, screen Examples: on-page helper overlay, on-screen guide, etc.
It can also be called a **wizard**. Semi transparent overlay is just a css property and you can modify that based on the visual attributes of your platform. Wizard can be used for multiple functionalities: 1. First run experience/ initial few times experience 2. Performing multiple steps to complete an action 3. Informing the user about what are the new feature releases for the platform. Wizards can be static i.e. on a fixed location on the page or can move to a particular part of the page to give more context to the user. The wizard would typically be static for point 2 mentioned above and can be mobile for 1 and 3. **Example of First Run Experience of Google Data Studio:** [![FRE Google Data Studio](https://i.stack.imgur.com/xQtvu.png)](https://i.stack.imgur.com/xQtvu.png) **Example of Performing multiple steps to complete an action - Microsoft PowerApps:** [![Multistep form Microsoft PowerApps](https://i.stack.imgur.com/yiqEw.png)](https://i.stack.imgur.com/yiqEw.png) **Example of informing users about new feature releases - Confluence by Atlassian:** [![Confluence new feature release](https://i.stack.imgur.com/cGHEQ.png)](https://i.stack.imgur.com/cGHEQ.png) **Documentation of Wizard Patterns with code snippets:** * [Clarity Design System Wizards](https://clarity.design/documentation/wizards) * [SAP Fiori Design Guidelines Wizard](https://experience.sap.com/fiori-design-web/wizard/)
14,415
In 7 Wonders some stages of building a wonder provide special powers. For example, stage 2 of Olympia (side A) allows you to build one free structure per age. If you have built 2 wonders in this case, can you build 2 free structures per age?
2014/01/26
[ "https://boardgames.stackexchange.com/questions/14415", "https://boardgames.stackexchange.com", "https://boardgames.stackexchange.com/users/6710/" ]
**Each player has a single Wonder board, and each Wonder stage can only be built once.** The [Quickstart rules](http://rprod.com/uploads/file/7WONDERS_QUICKRULES_US_COLOR.pdf) cover this on the first page under **Actions**, but it is also covered in the [main rules](http://rprod.com/uploads/file/7WONDERS_RULES_US_COLOR.pdf) under **Game Overview**, Clarifications. You may only build the stages in order from left to right, and each stage can only be built once per game. > > **build a stage of their Wonder** (in the order given by the board, from left to right): the card is placed partially under the board, face down. > > > 2.b. **Build a Stage of a Wonder** - [...] each stage can only be built once per game. (page 5, main rules) > > > For example, if you have **The Lighthouse of Alexandria**, building the 3 stages of side (A) would give you 3VP, one Raw Material of your choice per build step, and 7VP in stage order. Building the 3 stages of side (B) would give you a Raw Material of your choice per build step, a Production Good of your choice per build step, and 7VP in stage order. ![Lighthouse of Alexandria](https://i.stack.imgur.com/cSWvK.jpg)
It seems like you're seriously misinterpreting something because your question doesn't make any sense as stated. The best guess I can make: You can only build each wonder stage once. You cannot build the same wonder stage multiple times to get multiple copies of its ability.
14,415
In 7 Wonders some stages of building a wonder provide special powers. For example, stage 2 of Olympia (side A) allows you to build one free structure per age. If you have built 2 wonders in this case, can you build 2 free structures per age?
2014/01/26
[ "https://boardgames.stackexchange.com/questions/14415", "https://boardgames.stackexchange.com", "https://boardgames.stackexchange.com/users/6710/" ]
It seems like you're seriously misinterpreting something because your question doesn't make any sense as stated. The best guess I can make: You can only build each wonder stage once. You cannot build the same wonder stage multiple times to get multiple copies of its ability.
When building multiple wonders, special powers accumulate. So if you build Olympia's special power twice, you get 2 free structures per age. However, for other structures such as Halicarnassus, when you build its special ability you only get to use it in the age that you built it.
14,415
In 7 Wonders some stages of building a wonder provide special powers. For example, stage 2 of Olympia (side A) allows you to build one free structure per age. If you have built 2 wonders in this case, can you build 2 free structures per age?
2014/01/26
[ "https://boardgames.stackexchange.com/questions/14415", "https://boardgames.stackexchange.com", "https://boardgames.stackexchange.com/users/6710/" ]
**Each player has a single Wonder board, and each Wonder stage can only be built once.** The [Quickstart rules](http://rprod.com/uploads/file/7WONDERS_QUICKRULES_US_COLOR.pdf) cover this on the first page under **Actions**, but it is also covered in the [main rules](http://rprod.com/uploads/file/7WONDERS_RULES_US_COLOR.pdf) under **Game Overview**, Clarifications. You may only build the stages in order from left to right, and each stage can only be built once per game. > > **build a stage of their Wonder** (in the order given by the board, from left to right): the card is placed partially under the board, face down. > > > 2.b. **Build a Stage of a Wonder** - [...] each stage can only be built once per game. (page 5, main rules) > > > For example, if you have **The Lighthouse of Alexandria**, building the 3 stages of side (A) would give you 3VP, one Raw Material of your choice per build step, and 7VP in stage order. Building the 3 stages of side (B) would give you a Raw Material of your choice per build step, a Production Good of your choice per build step, and 7VP in stage order. ![Lighthouse of Alexandria](https://i.stack.imgur.com/cSWvK.jpg)
When building multiple wonders, special powers accumulate. So if you build Olympia's special power twice, you get 2 free structures per age. However, for other structures such as Halicarnassus, when you build its special ability you only get to use it in the age that you built it.
7,832,160
I am new to Control-M. I have a workflow that I developed in Informatica. When I run the workflow in Informatica Workflow Manager it works fine, but when I schedule it in Control-M it fails with the error: > > The Repository Service marked the session or session instance as > impacted, and the Integration Service is not configured to run > impacted sessions > > > What could be the reason for this? If anyone knows, please help; I would be very grateful. Regards.
2011/10/20
[ "https://Stackoverflow.com/questions/7832160", "https://Stackoverflow.com", "https://Stackoverflow.com/users/424310/" ]
An impacted session means that the mapping that the session is configured for has been changed, but the session has not been refreshed since. You can fix this easily by right-clicking on the session and choosing 'Refresh Mapping', and then validating the session. You can alternatively configure the Integration Service to run impacted sessions, but this is not recommended as it will not stop misconfigured sessions from running.
Another possible fix: After the source or target definitions were updated in the mapping within Developer PowerCenter, I did this in Workflow Manager. Checkout workflow > right click task > Refresh Mapping > Validate Then I had to go and fix the connection strings for each source and target that I changed in Developer PowerCenter.
7,832,160
I am new to Control-M. I have a workflow that I developed in Informatica. When I run the workflow in Informatica Workflow Manager it works fine, but when I schedule it in Control-M it fails with the error: > > The Repository Service marked the session or session instance as > impacted, and the Integration Service is not configured to run > impacted sessions > > > What could be the reason for this? If anyone knows, please help; I would be very grateful. Regards.
2011/10/20
[ "https://Stackoverflow.com/questions/7832160", "https://Stackoverflow.com", "https://Stackoverflow.com/users/424310/" ]
Another possible fix: After the source or target definitions were updated in the mapping within Developer PowerCenter, I did this in Workflow Manager. Checkout workflow > right click task > Refresh Mapping > Validate Then I had to go and fix the connection strings for each source and target that I changed in Developer PowerCenter.
Check out workflow -> Validate -> **Ctrl+S** -> Check in workflow
7,832,160
I am new to Control-M. I have a workflow that I developed in Informatica. When I run the workflow in Informatica Workflow Manager it works fine, but when I schedule it in Control-M it fails with the error: > > The Repository Service marked the session or session instance as > impacted, and the Integration Service is not configured to run > impacted sessions > > > What could be the reason for this? If anyone knows, please help; I would be very grateful. Regards.
2011/10/20
[ "https://Stackoverflow.com/questions/7832160", "https://Stackoverflow.com", "https://Stackoverflow.com/users/424310/" ]
I had the same problem, but the solution was a little bit different. I had one of the shortcut components checked out. So even though I had checked in everything from the mapping to the workflow, the Repository Service had marked the session as impacted. After checking in the shortcuts I was able to run the mapping. Regards, Vic
This might be because you are running the workflow in checkout mode. * If you run it manually, it will use the checked-out version. * If you run it through Control-M, it will use the previous checked-in version, which might be impacted. Please check whether the workflow is checked in or checked out.
7,832,160
I am new to Control-M. I have a workflow that I developed in Informatica. When I run the workflow in Informatica Workflow Manager it works fine, but when I schedule it in Control-M it fails with the error: > > The Repository Service marked the session or session instance as > impacted, and the Integration Service is not configured to run > impacted sessions > > > What could be the reason for this? If anyone knows, please help; I would be very grateful. Regards.
2011/10/20
[ "https://Stackoverflow.com/questions/7832160", "https://Stackoverflow.com", "https://Stackoverflow.com/users/424310/" ]
This might be because you are running the workflow in checkout mode. * If you run it manually, it will use the checked-out version. * If you run it through Control-M, it will use the previous checked-in version, which might be impacted. Please check whether the workflow is checked in or checked out.
Check 2 things: 1. Edit Session -> Mapping -> Source -> Connections. Is the connection correct? If it isn't, choose the correct connection. 2. Edit Session -> Config Object -> Error Handling -> Override Tracing. After getting the session log, it is easy to forget to set "Override tracing" back to "None" from "Verbose".
7,832,160
I am new to Control-M. I have a workflow that I developed in Informatica. When I run the workflow in Informatica Workflow Manager it works fine, but when I schedule it in Control-M it fails with the error: > > The Repository Service marked the session or session instance as > impacted, and the Integration Service is not configured to run > impacted sessions > > > What could be the reason for this? If anyone knows, please help; I would be very grateful. Regards.
2011/10/20
[ "https://Stackoverflow.com/questions/7832160", "https://Stackoverflow.com", "https://Stackoverflow.com/users/424310/" ]
An impacted session means that the mapping that the session is configured for has been changed, but the session has not been refreshed since. You can fix this easily by right-clicking on the session and choosing 'Refresh Mapping', and then validating the session. You can alternatively configure the Integration Service to run impacted sessions, but this is not recommended as it will not stop misconfigured sessions from running.
Check 2 things: 1. Edit Session -> Mapping -> Source -> Connections. Is the connection correct? If it isn't, choose the correct connection. 2. Edit Session -> Config Object -> Error Handling -> Override Tracing. After getting the session log, it is easy to forget to set "Override tracing" back to "None" from "Verbose".
7,832,160
I am new to Control-M. I have a workflow that I developed in Informatica. When I run the workflow in Informatica Workflow Manager it works fine, but when I schedule it in Control-M it fails with the error: > > The Repository Service marked the session or session instance as > impacted, and the Integration Service is not configured to run > impacted sessions > > > What could be the reason for this? If anyone knows, please help; I would be very grateful. Regards.
2011/10/20
[ "https://Stackoverflow.com/questions/7832160", "https://Stackoverflow.com", "https://Stackoverflow.com/users/424310/" ]
I had the same problem, but the solution was a little bit different. I had one of the shortcut components checked out. So even though I had checked in everything from the mapping to the workflow, the Repository Service had marked the session as impacted. After checking in the shortcuts I was able to run the mapping. Regards, Vic
Check out workflow -> Validate -> **Ctrl+S** -> Check in workflow
7,832,160
I am new to Control-M. I have a workflow that I developed in Informatica. When I run the workflow in Informatica Workflow Manager it works fine, but when I schedule it in Control-M it fails with the error: > > The Repository Service marked the session or session instance as > impacted, and the Integration Service is not configured to run > impacted sessions > > > What could be the reason for this? If anyone knows, please help; I would be very grateful. Regards.
2011/10/20
[ "https://Stackoverflow.com/questions/7832160", "https://Stackoverflow.com", "https://Stackoverflow.com/users/424310/" ]
Another possible fix: After the source or target definitions were updated in the mapping within Developer PowerCenter, I did this in Workflow Manager. Checkout workflow > right click task > Refresh Mapping > Validate Then I had to go and fix the connection strings for each source and target that I changed in Developer PowerCenter.
Check 2 things: 1. Edit Session -> Mapping -> Source -> Connections. Is the connection correct? If it isn't, choose the correct connection. 2. Edit Session -> Config Object -> Error Handling -> Override Tracing. After getting the session log, it is easy to forget to set "Override tracing" back to "None" from "Verbose".
7,832,160
I am new to Control-M. I have a workflow that I developed in Informatica. When I run the workflow in Informatica Workflow Manager it works fine, but when I schedule it in Control-M it fails with the error: > > The Repository Service marked the session or session instance as > impacted, and the Integration Service is not configured to run > impacted sessions > > > What could be the reason for this? If anyone knows, please help; I would be very grateful. Regards.
2011/10/20
[ "https://Stackoverflow.com/questions/7832160", "https://Stackoverflow.com", "https://Stackoverflow.com/users/424310/" ]
Another possible fix: After the source or target definitions were updated in the mapping within Developer PowerCenter, I did this in Workflow Manager. Checkout workflow > right click task > Refresh Mapping > Validate Then I had to go and fix the connection strings for each source and target that I changed in Developer PowerCenter.
This might be because you are running the workflow in checkout mode. * If you run it manually, it will use the checked-out version. * If you run it through Control-M, it will use the previous checked-in version, which might be impacted. Please check whether the workflow is checked in or checked out.
7,832,160
I am new to Control-M. I have a workflow that I developed in Informatica. When I run the workflow in Informatica Workflow Manager it works fine, but when I schedule it in Control-M it fails with the error: > > The Repository Service marked the session or session instance as > impacted, and the Integration Service is not configured to run > impacted sessions > > > What could be the reason for this? If anyone knows, please help; I would be very grateful. Regards.
2011/10/20
[ "https://Stackoverflow.com/questions/7832160", "https://Stackoverflow.com", "https://Stackoverflow.com/users/424310/" ]
I had the same problem, but the solution was a little bit different. I had one of the shortcut components checked out. So even though I had checked in everything from the mapping to the workflow, the Repository Service had marked the session as impacted. After checking in the shortcuts I was able to run the mapping. Regards, Vic
Check 2 things: 1. Edit Session -> Mapping -> Source -> Connections. Is the connection correct? If it isn't, choose the correct connection. 2. Edit Session -> Config Object -> Error Handling -> Override Tracing. After getting the session log, it is easy to forget to set "Override tracing" back to "None" from "Verbose".
7,832,160
I am new to Control-M. I have a workflow that I developed in Informatica. When I run the workflow in Informatica Workflow Manager it works fine, but when I schedule it in Control-M it fails with the error: > > The Repository Service marked the session or session instance as > impacted, and the Integration Service is not configured to run > impacted sessions > > > What could be the reason for this? If anyone knows, please help; I would be very grateful. Regards.
2011/10/20
[ "https://Stackoverflow.com/questions/7832160", "https://Stackoverflow.com", "https://Stackoverflow.com/users/424310/" ]
An impacted session means that the mapping that the session is configured for has been changed, but the session has not been refreshed since. You can fix this easily by right-clicking on the session and choosing 'Refresh Mapping', and then validating the session. You can alternatively configure the Integration Service to run impacted sessions, but this is not recommended as it will not stop misconfigured sessions from running.
Check out workflow -> Validate -> **Ctrl+S** -> Check in workflow
99,091
Looking through Material Design components, specifically [text input fields](https://material.google.com/components/text-fields.html), the guidelines are really clear for input fields. However, it's unclear how to lay out non-input text fields. Say I've got a form like this: [![enter image description here](https://i.stack.imgur.com/Lh7wz.png)](https://i.stack.imgur.com/Lh7wz.png) That's fine if the user is expected to edit all the fields; however, I've often got this kind of information read-only (or it doesn't make sense to edit it ever, for example if it's reporting information back from an operation or calculation). However, the idea of the "label" for the "input field" that hovers over it is established. If I were not to use the outlined input text box I'd end up with something awfully messy if things were combined - for example, if I could only edit the employee number & floor area: [![enter image description here](https://i.stack.imgur.com/YP0cW.png)](https://i.stack.imgur.com/YP0cW.png) Or should I emulate the "floating label" without the line underneath that indicates user entry? [![enter image description here](https://i.stack.imgur.com/fKSzh.png)](https://i.stack.imgur.com/fKSzh.png)
2016/09/09
[ "https://ux.stackexchange.com/questions/99091", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/86951/" ]
**Consistency is key.** [Angular Material - Demo - Input](https://material.angularjs.org/1.1.0/demo/input) Your second image would break the consistent visual that is reinforced on the form. The "floating label" emulation would keep the consistency, but only having the input underline removed may not clearly show that the building is not an editable field. Perhaps gray out the text similar to the floating label. This may not be required, and just omitting the field underline may be enough. If you look at the disabled input field example in the Material demo I linked, you can see that the value is grayed out, but also underlined with a dotted line. The grayed-out text informs the user that the input is not currently editable; the dotted line signifies that the user may be able to change it in the future. I'd also have the input label be "Building", instead of "Building Name". You have that on the first two images, but not on the third. Hope this helps!
Color is the most obvious way to show the user that there is a difference between editable and read-only fields. Unfortunately, the options you illustrated all look disjointed to me. I feel like each entry should have the underline; otherwise it's hard to tell the difference between non-form items (like the title) and form items. You could try changing the foreground color to light grey, but you'd have to find a different color for your titles (perhaps the app's primary color? something non-grayscale).
99,091
Looking through Material Design components, specifically [text input fields](https://material.google.com/components/text-fields.html), the guidelines are really clear for input fields. However, it's unclear how to lay out non-input text fields. Say I've got a form like this: [![enter image description here](https://i.stack.imgur.com/Lh7wz.png)](https://i.stack.imgur.com/Lh7wz.png) That's fine if the user is expected to edit all the fields; however, I've often got this kind of information read-only (or it doesn't make sense to edit it ever, for example if it's reporting information back from an operation or calculation). However, the idea of the "label" for the "input field" that hovers over it is established. If I were not to use the outlined input text box I'd end up with something awfully messy if things were combined - for example, if I could only edit the employee number & floor area: [![enter image description here](https://i.stack.imgur.com/YP0cW.png)](https://i.stack.imgur.com/YP0cW.png) Or should I emulate the "floating label" without the line underneath that indicates user entry? [![enter image description here](https://i.stack.imgur.com/fKSzh.png)](https://i.stack.imgur.com/fKSzh.png)
2016/09/09
[ "https://ux.stackexchange.com/questions/99091", "https://ux.stackexchange.com", "https://ux.stackexchange.com/users/86951/" ]
**Consistency is key.** [Angular Material - Demo - Input](https://material.angularjs.org/1.1.0/demo/input) Your second image would break the consistent visual that is reinforced on the form. The "floating label" emulation would keep the consistency, but only having the input underline removed may not clearly show that the building is not an editable field. Perhaps gray out the text similar to the floating label. This may not be required, and just omitting the field underline may be enough. If you look at the disabled input field example in the Material demo I linked, you can see that the value is grayed out, but also underlined with a dotted line. The grayed-out text informs the user that the input is not currently editable; the dotted line signifies that the user may be able to change it in the future. I'd also have the input label be "Building", instead of "Building Name". You have that on the first two images, but not on the third. Hope this helps!
You could use a text field component that is disabled. As per the MD spec: > > Disabled text fields are uneditable. They have a dotted input line and > less opacity so that they appear less tappable. > > > [![enter image description here](https://i.stack.imgur.com/PgLo3.png)](https://i.stack.imgur.com/PgLo3.png) Reference: <https://material.io/guidelines/components/text-fields.html#text-fields-states>
23,313
We recently added a new server to our system with a 6 TB RAID array for file storage of completed audio and video projects. We have traditionally been a Microsoft house, so I used Windows Server 2008 for the new machine. When I went to make my first backup, I attached a little 4 TB GTech box to use for the backup, as we were only using about 2.5 TB on the server array at this point in time. I was then going to take that offsite when the backup was complete. At this point I discovered that the Microsoft backup had changed drastically in Server 2008. No longer can you select which files to back up; instead you have to back up a full volume. When I read up on it, I decided this was OK, as it is supposed to be faster and produces a mountable virtual drive file as the result. But then I discovered that because of limitations in Shadow Copy, the new Microsoft software can only back up volumes of less than 2 TB!!! I even used an MSDN incident to confirm this with them. I have tried to shy away from third-party backup apps, as I have always believed that basic backup functionality should be built into the OS, and Microsoft has traditionally been good about providing backwards compatibility to restore backups made years ago on a newer system. With third-party tools you are always having to deal with renewal charges and version changes, and then if you decide to switch companies or the company folds, you have to worry about transitioning old backups, etc. And tools such as rsync can take days to complete a full backup or restore when you are talking about terabytes of files. So I wanted to know the community's thoughts on a few options I am considering, understanding that budget is definitely a concern: A) Trying to move the data off the system and repartition. This is not ideal for the structure our data takes. In the world of audio and video, 2 TB is a starting place. But it could be possible.
B) Switch the server to a Debian Server box, but I am unsure which tools should be used for backup in that environment, and whether I might run into similar problems. C) Using a third party tool. D) Other thoughts?
2009/06/10
[ "https://serverfault.com/questions/23313", "https://serverfault.com", "https://serverfault.com/users/9005/" ]
> > B) Switch the server to a Debian > Server box, but I am unsure which > tools should be used for backup in > that environment, and whether I might > run into similar problems. > > > I recommend against it. If you are proficient in Windows, doing a "quick hack" to switch over isn't a solution that will make your customers or you happy. A Debian backup solution would be to * create squashfs images (not too well tested) * use Bacula (I consider it the best open-source backup tool) Bacula's feature set is impressive: it has a native Windows client, support options if you need them for compliance, and a couple of plugins to back up Exchange/MySQL (/Oracle - not sure about that). If you don't mind setting up a new backup infrastructure, give Bacula a try; I think it's worth the effort if done with the right mindset (yes, it will cause work; it's not a fire & forget installation) > > C) Using a third party tool. > D) Other thoughts? > > > I don't do Windows normally, but I remember reading something about the changed backup utilities that Microsoft provides, and that there is a Windows 2000-style backup tool available somewhere from Microsoft. Maybe that is enough of a pointer to find the right thing.
I found out the exact same thing the hard way. For now, to make the backup I am using another server on the domain with ntbackup to mount and back up the data for offsite redundancy. The problem is that open files will get skipped, which is why shadow snapshots were a good idea. We are currently going to explore Symantec Backup (I think they bought Veritas?). Anyway, I loved Veritas back in 2005 so I would like to keep something similar. Plus, didn't Veritas write ntbackup in conjunction with MS? Switching to a Linux flavour is not a bad idea, but do you have the know-how to admin it? Good luck.
23,313
We recently added a new server to our system with a 6 TB RAID array for file storage of completed audio and video projects. We have traditionally been a Microsoft house, so I used Windows Server 2008 for the new machine. When I went to make my first backup, I attached a little 4 TB GTech box to use for the backup, as we were only using about 2.5 TB on the server array at this point in time. I was then going to take that offsite when the backup was complete. At this point I discovered that the Microsoft backup had changed drastically in Server 2008. No longer can you select which files to back up; instead you have to back up a full volume. When I read up on it, I decided this was OK, as it is supposed to be faster and produces a mountable virtual drive file as the result. But then I discovered that because of limitations in Shadow Copy, the new Microsoft software can only back up volumes of less than 2 TB!!! I even used an MSDN incident to confirm this with them. I have tried to shy away from third-party backup apps, as I have always believed that basic backup functionality should be built into the OS, and Microsoft has traditionally been good about providing backwards compatibility to restore backups made years ago on a newer system. With third-party tools you are always having to deal with renewal charges and version changes, and then if you decide to switch companies or the company folds, you have to worry about transitioning old backups, etc. And tools such as rsync can take days to complete a full backup or restore when you are talking about terabytes of files. So I wanted to know the community's thoughts on a few options I am considering, understanding that budget is definitely a concern: A) Trying to move the data off the system and repartition. This is not ideal for the structure our data takes. In the world of audio and video, 2 TB is a starting place. But it could be possible.
B) Switch the server to a Debian Server box, but I am unsure which tools should be used for backup in that environment, and whether I might run into similar problems. C) Using a third party tool. D) Other thoughts?
2009/06/10
[ "https://serverfault.com/questions/23313", "https://serverfault.com", "https://serverfault.com/users/9005/" ]
> > B) Switch the server to a Debian > Server box, but I am unsure which > tools should be used for backup in > that environment, and whether I might > run into similar problems. > > > I recommend against it. If you are proficient in Windows, doing a "quick hack" to switch over isn't a solution that will make your customers or you happy. A Debian backup solution would be to * create squashfs images (not too well tested) * use Bacula (I consider it the best open-source backup tool) Bacula's feature set is impressive: it has a native Windows client, support options if you need them for compliance, and a couple of plugins to back up Exchange/MySQL (/Oracle - not sure about that). If you don't mind setting up a new backup infrastructure, give Bacula a try; I think it's worth the effort if done with the right mindset (yes, it will cause work; it's not a fire & forget installation) > > C) Using a third party tool. > D) Other thoughts? > > > I don't do Windows normally, but I remember reading something about the changed backup utilities that Microsoft provides, and that there is a Windows 2000-style backup tool available somewhere from Microsoft. Maybe that is enough of a pointer to find the right thing.
I would separate system and data. Back up the system with the Windows Backup solution, and copy the data with robocopy. You won't be able to store more than one backup for long, and at some point you will have to buy new external storage to hold one full backup. Remember that this external storage should only be connected during the backup process and then be kept as far as possible from the production data. On Linux, using built-in tools, I would do a cp, a gzipped tar, or a dd if the source and destination are the same size. It won't help with what you are looking for (not using third-party software), though.
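On the Linux side, the "gzipped tar" the answer above mentions might look like the sketch below; the demo directory is a throwaway stand-in for your real data path:

```shell
# Stand-in data directory; substitute your real data path.
mkdir -p /tmp/demo-data
echo "project file" > /tmp/demo-data/a.txt

# Create a compressed snapshot of the data directory.
tar -czf /tmp/data-backup.tar.gz -C /tmp/demo-data .

# Verify the archive is readable before trusting it offsite.
tar -tzf /tmp/data-backup.tar.gz > /dev/null && echo "archive OK"
```

Listing the archive after creating it is cheap insurance: a truncated or corrupt archive fails the `-t` pass immediately rather than at restore time.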
21,684,589
I recently decided to try out VS2013 before making the jump from VS2010 and I found that I have to download the help files too. After searching online, it appears there is no ISO of the VS2013 documentation. I rather found documentation for [VS2012](http://www.microsoft.com/en-us/download/details.aspx?id=34794) and this [VS Help Downloader](http://vshd2012.codeplex.com/). What I would like to know is: > > Does Visual Studio 2013 use the same documentation as Visual Studio 2012? > > > I need to be sure of this before I commence downloading the fairly large VS2012 Documentation ISO on my slow internet connection.
2014/02/10
[ "https://Stackoverflow.com/questions/21684589", "https://Stackoverflow.com", "https://Stackoverflow.com/users/117870/" ]
The short answer is "yes", any VS2012 Help Viewer Content you have downloaded/installed will be used by the VS2013 Help Viewer. Additionally, when you use VS2013's Help Viewer to install help from online you may notice that only VS2012 help is (currently) available (there is no VS2013-specific Help Viewer content, yet.) By default VS2013 uses MSDN Online, which is a change from prior VS versions (which relied on a local Help and/or MSDN installation.) This is why the VS2013 installation behaves differently and does not prompt for your Help preferences when installation finishes. **If you just want to download for offline use** rather than constantly load content from online, from VS2013 navigate to Main Menu -> Help > Set Help Preference -> Launch in Help Viewer, this should prompt you to download the documentation if not already installed. You can also use the downloader built into the latest Help Viewer to download any content you're missing. **If you want to download help once and store it for later** such as for multiple installations, limited/paid-for bandwidth, etc then you can consider using the [Visual Studio 2012/2013 Help Downloader](http://vshd2012.codeplex.com/) (an open-source project on CodePlex.) This allows you to download the content files and then import them into any installation of Help Viewer.
I have not used VS 2012; I am using VS 2013. After a hectic search I found the following link on microsoft.com to documentation for VS 2012 and VS 2013. Whatever your choice, you can go for it: <http://www.microsoft.com/en-us/download/details.aspx?id=34794>
6,220,362
I've got an SQL Express database I need to extract some data from. I have three fields: ID, NAME, DATE. In the DATA column there are values like "654;654;526". Yes, semicolons included. Now those numbers relate to another table (two fields - ID and NAME). The numbers in the DATA column relate to the ID field in the 2nd table. How can I, via SQL, do a replace or lookup so that instead of getting the numbers 654;653;526 I get the NAME field instead? See the photo - it might explain this better: <http://i.stack.imgur.com/g1OCj.jpg>
2011/06/02
[ "https://Stackoverflow.com/questions/6220362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/411490/" ]
Redesign the database unless this is a third-party database you are supporting. This will never be a good design and should never have been built this way. This is one of those times you bite the bullet and fix it before things get worse, which they will. You need a related table to store the values in. One of the very first rules of database design is never to store more than one piece of information in a field. And hopefully those aren't your real field names; they are atrocious too. You need more descriptive field names. If it is a third-party database, you need to look up a split function or create your own. You will want to transform the data to a relational form in a temp table or table variable to use in the join later.
The following may help: [How to use GROUP BY to concatenate strings in SQL Server?](https://stackoverflow.com/questions/273238/how-to-use-group-by-to-concatenate-strings-in-sql-server)
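The split-then-join approach the first answer recommends can be sketched concretely. Below is a minimal Python/sqlite3 illustration; all table and column names (`main`, `lookup`, `main_ref`) are invented stand-ins for the question's schema, not the asker's real names:

```python
import sqlite3

# Hypothetical tables mirroring the question: "main" holds a
# semicolon-delimited DATA column, "lookup" maps ID -> NAME.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE main (id INTEGER, name TEXT, data TEXT);
    CREATE TABLE lookup (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO main VALUES (1, 'row-a', '654;653;526');
    INSERT INTO lookup VALUES (654, 'Alpha'), (653, 'Beta'), (526, 'Gamma');
""")

# Step 1: split the delimited column into one (main_id, ref_id) row each --
# the "related table" / temp-table form the answer recommends.
conn.execute("CREATE TABLE main_ref (main_id INTEGER, ref_id INTEGER)")
for main_id, data in conn.execute("SELECT id, data FROM main").fetchall():
    for ref in data.split(";"):
        conn.execute("INSERT INTO main_ref VALUES (?, ?)", (main_id, int(ref)))

# Step 2: an ordinary join now resolves IDs to names.
rows = conn.execute("""
    SELECT m.main_id, l.name
    FROM main_ref m JOIN lookup l ON l.id = m.ref_id
    ORDER BY l.name
""").fetchall()
print(rows)  # [(1, 'Alpha'), (1, 'Beta'), (1, 'Gamma')]
```

Once the data is in the related-table form, every downstream query becomes a plain join, which is exactly why normalizing is preferable to string surgery.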
6,220,362
I've got an SQL Express database I need to extract some data from. I have three fields: ID, NAME, DATE. In the DATA column there are values like "654;654;526". Yes, semicolons included. Now those numbers relate to another table (two fields - ID and NAME). The numbers in the DATA column relate to the ID field in the 2nd table. How can I, via SQL, do a replace or lookup so that instead of getting the numbers 654;653;526 I get the NAME field instead? See the photo - it might explain this better: <http://i.stack.imgur.com/g1OCj.jpg>
2011/06/02
[ "https://Stackoverflow.com/questions/6220362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/411490/" ]
Redesign the database unless this is a third-party database you are supporting. This will never be a good design and should never have been built this way. This is one of those times you bite the bullet and fix it before things get worse, which they will. You need a related table to store the values in. One of the very first rules of database design is never to store more than one piece of information in a field. And hopefully those aren't your real field names; they are atrocious too. You need more descriptive field names. If it is a third-party database, you need to look up a split function or create your own. You will want to transform the data to a relational form in a temp table or table variable to use in the join later.
This can be done, but it won't be nice. You should create a scalar-valued function that takes in the string of IDs and returns a string of names.
6,220,362
I've got an SQL Express database I need to extract some data from. I have three fields: ID, NAME, DATE. In the DATA column there are values like "654;654;526". Yes, semicolons included. Now those numbers relate to another table (two fields - ID and NAME). The numbers in the DATA column relate to the ID field in the 2nd table. How can I, via SQL, do a replace or lookup so that instead of getting the numbers 654;653;526 I get the NAME field instead? See the photo - it might explain this better: <http://i.stack.imgur.com/g1OCj.jpg>
2011/06/02
[ "https://Stackoverflow.com/questions/6220362", "https://Stackoverflow.com", "https://Stackoverflow.com/users/411490/" ]
Redesign the database unless this is a third-party database you are supporting. This will never be a good design and should never have been built this way. This is one of those times you bite the bullet and fix it before things get worse, which they will. You need a related table to store the values in. One of the very first rules of database design is never to store more than one piece of information in a field. And hopefully those aren't your real field names; they are atrocious too. You need more descriptive field names. If it is a third-party database, you need to look up the split function or create your own. You will want to transform the data to a relational form in a temp table or table variable to use in the join later.
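The "transform the data to a relational form" step the answer recommends can be sketched as follows (a Python illustration with made-up rows; in T-SQL the same shape would be produced by a split function filling a temp table that can then be joined):

```python
# Hypothetical rows from the first table: (id, data) with packed values.
rows = [(1, "654;653;526"), (2, "654")]

# Normalize: one (row_id, item_id) pair per packed value,
# exactly the shape a proper junction table would hold.
pairs = [(row_id, int(item)) for row_id, data in rows for item in data.split(";")]
print(pairs)  # -> [(1, 654), (1, 653), (1, 526), (2, 654)]
```

Once the values are one-per-row like this, the lookup against the second table is an ordinary join.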
This denormalized structure is similar to the way values were stored in the quasi-object-relational database known as PICK. Cool database, in many respects ahead of its time, though in other respects, a dinosaur. If you want to return the multiple names as a delimited string, it's easy to do with a scalar function. If you want to return the multiple rows as a table, your engine has to support functions that return a type of TABLE.
627,062
Let's say one has a class that performs a certain type of task. And let's say that there are a number of variations of that task. The actions are the same, just a few parameters change (*e.g.*, for soft boiled egg, action = boil, time = 5 min.; for hard boiled egg, action = boil, time = 11 min., *etc*.). Number of parameters that vary is about 10. I see there are three ways to do this: * Use a switch and set the params in code based on type. * Save the parameters in a database or file and retrieve them based on task type. * Subclass the task, overriding the parameters of the parent class and instantiate subclassed objects to perform the task in question. The first option is clumsy. But how do I decide between the other two? 1) Retrieve parameters from file or db. * PRO: No need for subclassing or factory. Simple. * CON: Requires additional query or file access. Parameters no longer visible in code. 2) Subclass the task. * PRO: Does not require additional query or file access. Parameters maintained in code. * CON: Proliferation of classes and need to make factory. Have I correctly identified the pros and cons? What other criteria should I use to decide the issue? Please advise. THANKS!
2009/03/09
[ "https://Stackoverflow.com/questions/627062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/49529/" ]
My instinct says that if the objects are of logically different classes they need to be of physically different classes. Option 1 might be acceptable for specific problems, but option 2 is way out. A class should be self-defined without deference to runtime data. (There are probably exceptions to that too, but I like hyperbole.)
I think your CON may be understated for the DB-parameters option. If the number of parameters that can vary is about 10, you're still likely going to be dealing with if/switch logic inside the routine that pulls those parameters. Given your description, the overhead of writing subclasses and a factory method sounds like a lesser evil than the logic involved in handling variable parameters.
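A minimal sketch of the subclass-plus-factory option the answer favors (hypothetical names, following the question's egg-boiling example):

```python
# Each variant fixes its parameters in code; a small table acts as the factory.
class BoilTask:
    action = "boil"
    minutes = 0

    def describe(self) -> str:
        return f"{self.action} for {self.minutes} min"

class SoftBoiledEgg(BoilTask):
    minutes = 5

class HardBoiledEgg(BoilTask):
    minutes = 11

TASKS = {"soft": SoftBoiledEgg, "hard": HardBoiledEgg}  # factory lookup table

def make_task(kind: str) -> BoilTask:
    return TASKS[kind]()

print(make_task("hard").describe())  # -> boil for 11 min
```

The dict-based factory keeps the "proliferation of classes" cost down to one line per new variant.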
627,062
Let's say one has a class that performs a certain type of task. And let's say that there are a number of variations of that task. The actions are the same, just a few parameters change (*e.g.*, for soft boiled egg, action = boil, time = 5 min.; for hard boiled egg, action = boil, time = 11 min., *etc*.). Number of parameters that vary is about 10. I see there are three ways to do this: * Use a switch and set the params in code based on type. * Save the parameters in a database or file and retrieve them based on task type. * Subclass the task, overriding the parameters of the parent class and instantiate subclassed objects to perform the task in question. The first option is clumsy. But how do I decide between the other two? 1) Retrieve parameters from file or db. * PRO: No need for subclassing or factory. Simple. * CON: Requires additional query or file access. Parameters no longer visible in code. 2) Subclass the task. * PRO: Does not require additional query or file access. Parameters maintained in code. * CON: Proliferation of classes and need to make factory. Have I correctly identified the pros and cons? What other criteria should I use to decide the issue? Please advise. THANKS!
2009/03/09
[ "https://Stackoverflow.com/questions/627062", "https://Stackoverflow.com", "https://Stackoverflow.com/users/49529/" ]
My instinct says that if the objects are of logically different classes they need to be of physically different classes. Option 1 might be acceptable for specific problems, but option 2 is way out. A class should be self-defined without deference to runtime data. (There are probably exceptions to that too, but I like hyperbole.)
I had the chance to work with two similar systems on two different projects. The first went through the subclass route, and the second through the configuration route (DB in our case). The first made the team feel they had a lot of power to customize it, as its subclasses could also have very specific behaviors. With time, understanding the system wasn't that direct, as these bits of behavior are hidden in the subclasses. On the second system, the configuration read pretty clearly: it stated what was expected from each specific case, both on data and behaviors. Testing the classes became focused on testing that they handled these settings well. With time some base configurations were added, which enabled overriding specific features when needed. To reduce the amount of code being handled by the main class, some behaviors were moved to specific classes, and they were loaded based on the configuration. I found the second approach pretty powerful in a highly dynamic environment, where new data and new understanding of how things were going to work kept coming in. Settings and behaviors were clearly defined and handled by the classes, with no open keys holding data that the system didn't understand. That said, this doesn't mean that this configuration needs to be maintained externally. You could easily have code that adds the different configurations to a list of active configurations. If you use that with the builder pattern, giving defaults to it, you can have a similar base-configurations approach with not that much code.
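The configuration route with base defaults and per-type overrides can be sketched like this (hypothetical names; the configuration is kept in code rather than a DB, as the answer suggests is also possible):

```python
# One task class; behavior is driven entirely by data.
BASE = {"action": "boil", "minutes": 5}          # base configuration
CONFIGS = {
    "soft": {},                                  # inherits every default
    "hard": {"minutes": 11},                     # overrides one setting
}

def task_params(kind: str) -> dict:
    # Merge base defaults with the per-type overrides.
    return {**BASE, **CONFIGS[kind]}

print(task_params("hard"))  # -> {'action': 'boil', 'minutes': 11}
```

Adding a new variant is then a data change, not a code change, which is what makes this route attractive in a fast-moving project.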
253,365
Would it be incorrect to roll back an approved suggested edit that obviously shouldn't have been approved? Sometimes an edit gets approved because the reviewers don't take time to consider it well enough, and sometimes it appears to just be a flat-out wrong review. For example, this suggested edit was approved, but it clearly should have been a comment or another answer entirely: <https://stackoverflow.com/review/suggested-edits/4711938> Thus, it seems as though a rollback should be made since it was not the original intent of the author. Would rolling back be appropriate in such a case? It seems to violate the system of checks and balances for suggested edits. *Non-Duplicates:* * [Rollback an edit that skirts a SO rule?](https://meta.stackoverflow.com/questions/252307/rollback-an-edit-that-skirts-a-so-rule) (The linked question is addressing edits that violate Stack Overflow rules, which this question is not.) * [Rollback button in suggested edits queue](https://meta.stackoverflow.com/questions/251333/rollback-button-in-suggested-edits-queue) (The linked question is addressing adding a rollback button rather than its proper use.)
2014/05/03
[ "https://meta.stackoverflow.com/questions/253365", "https://meta.stackoverflow.com", "https://meta.stackoverflow.com/users/3150271/" ]
> > It seems to violate the system of checks and balances for suggested edits. > > > Yes, but that system is already screwed. If you are thinking about whether or not to roll it back, your opinion is already better than two robo-reviewers. If you find a blatantly terrible edit like that, certainly roll it back. If you aren't quite sure, don't do anything before getting a second and third opinion on chat or here on MSO.
I'd suggest that you roll it back, but with a comment to the editor, who appears to be the original question asker, that he should edit his question or post his own answer.
253,365
Would it be incorrect to roll back an approved suggested edit that obviously shouldn't have been approved? Sometimes an edit gets approved because the reviewers don't take time to consider it well enough, and sometimes it appears to just be a flat-out wrong review. For example, this suggested edit was approved, but it clearly should have been a comment or another answer entirely: <https://stackoverflow.com/review/suggested-edits/4711938> Thus, it seems as though a rollback should be made since it was not the original intent of the author. Would rolling back be appropriate in such a case? It seems to violate the system of checks and balances for suggested edits. *Non-Duplicates:* * [Rollback an edit that skirts a SO rule?](https://meta.stackoverflow.com/questions/252307/rollback-an-edit-that-skirts-a-so-rule) (The linked question is addressing edits that violate Stack Overflow rules, which this question is not.) * [Rollback button in suggested edits queue](https://meta.stackoverflow.com/questions/251333/rollback-button-in-suggested-edits-queue) (The linked question is addressing adding a rollback button rather than its proper use.)
2014/05/03
[ "https://meta.stackoverflow.com/questions/253365", "https://meta.stackoverflow.com", "https://meta.stackoverflow.com/users/3150271/" ]
> > It seems to violate the system of checks and balances for suggested edits. > > > Yes, but that system is already screwed. If you are thinking about whether or not to roll it back, your opinion is already better than two robo-reviewers. If you find a blatantly terrible edit like that, certainly roll it back. If you aren't quite sure, don't do anything before getting a second and third opinion on chat or here on MSO.
Go ahead & roll back questionable edits. The checks & balances clearly include the option for *someone else* to roll things back *AFTER* edits are approved. It’s not a statement of who is good, bad or otherwise, but a statement that we are all human and the process of approving edits can be mechanical at best. So if you are not caught in that cycle and see something amiss, don’t feel like you are playing politics. Just do it.
2,817,525
A known bug with Windows XP security update KB978037 can occur with Visual Studio 2005 (and 2008) where, if you restart a debugging session on a console app, the console window sometimes doesn't get closed even though the owner process no longer exists. The problem is discussed further here: [Visual studio debug console sometimes stays open and is impossible to close](https://stackoverflow.com/questions/2402875/visual-studio-debug-console-sometimes-stays-open-and-is-impossible-to-close) These zombie windows then cannot be closed via the taskbar or via Task Manager, and typically require a power off/on to get rid of them. Over the period of even a single day you can accumulate quite a few of them, which clog up your taskbar and are generally annoying. I thought I would knock up a simple C++ Win32 utility to attempt to call DestroyWindow() on these windows, passing the window handle as a command-line argument and converting it to an HWND. I'm converting the handle from a string by parsing it as a DWORD and then casting the DWORD to an HWND. This appears to be working, since calling GetWindowInfo() on the handle succeeds. However, calling DestroyWindow() on the handle fails with error 5 (access denied), presumably because the caller process (i.e. my app) doesn't own the window in question. Any ideas as to how I might get rid of the zombie windows, either via the above approach or any other alternative short of rebooting? I'm in a corporate environment, so installing/uninstalling updates/service packs etc. isn't an option.
2010/05/12
[ "https://Stackoverflow.com/questions/2817525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/84538/" ]
I know this is not what you want to hear, but the only way to get rid of these windows *is* by rebooting the computer. Apparently the Windows team is trying to fix it: <http://blogs.msdn.com/debugger/archive/2010/03/11/help-my-console-windows-won-t-go-away.aspx> but not much seems to be happening.
This issue is still a problem with Visual Studio 2010 on Windows 7! [See this thread](https://stackoverflow.com/questions/7995493/visual-studio-cannot-build-a-simple-project-more-than-once)
2,817,525
A known bug with Windows XP security update KB978037 can occur with Visual Studio 2005 (and 2008) where, if you restart a debugging session on a console app, the console window sometimes doesn't get closed even though the owner process no longer exists. The problem is discussed further here: [Visual studio debug console sometimes stays open and is impossible to close](https://stackoverflow.com/questions/2402875/visual-studio-debug-console-sometimes-stays-open-and-is-impossible-to-close) These zombie windows then cannot be closed via the taskbar or via Task Manager, and typically require a power off/on to get rid of them. Over the period of even a single day you can accumulate quite a few of them, which clog up your taskbar and are generally annoying. I thought I would knock up a simple C++ Win32 utility to attempt to call DestroyWindow() on these windows, passing the window handle as a command-line argument and converting it to an HWND. I'm converting the handle from a string by parsing it as a DWORD and then casting the DWORD to an HWND. This appears to be working, since calling GetWindowInfo() on the handle succeeds. However, calling DestroyWindow() on the handle fails with error 5 (access denied), presumably because the caller process (i.e. my app) doesn't own the window in question. Any ideas as to how I might get rid of the zombie windows, either via the above approach or any other alternative short of rebooting? I'm in a corporate environment, so installing/uninstalling updates/service packs etc. isn't an option.
2010/05/12
[ "https://Stackoverflow.com/questions/2817525", "https://Stackoverflow.com", "https://Stackoverflow.com/users/84538/" ]
[Microsoft's hot fix for Windows XP and Windows 2003](http://support.microsoft.com/kb/982551)
This issue is still a problem with Visual Studio 2010 on Windows 7! [See this thread](https://stackoverflow.com/questions/7995493/visual-studio-cannot-build-a-simple-project-more-than-once)
4,993,254
I am building an app with the functionality to publish messages to users' walls when specific actions run on my website. What I have done is (briefly): 1. Registered my own app on Facebook 2. Added a login button on my website with permission to publish: Log in on Facebook 3. Downloaded the facebook-php-sdk library This is where I start having problems: I do not know how to do what I want to do next. What I want to do: when a user logs in to Facebook via my website, I want a file on my site to be called, where I can update the user's data in my own database as well. Isn't that what the canvas URL is meant for? As it is now, it seems no file at all is called on my site when I click on Login.
2011/02/14
[ "https://Stackoverflow.com/questions/4993254", "https://Stackoverflow.com", "https://Stackoverflow.com/users/616341/" ]
Confirmed. Keep-Alive is a mechanism to prevent costly TCP connection negotiation. Your PHP process/thread starts as normal and needs to allocate all resources as usual. Regarding a high-load situation, it might be wise to keep the keep-alive period fairly short: all connection requests compete for the free connection slots of your server. If all slots are in use by keep-alive connections, other users might not be able to connect. But, as usual, the optimal number of slots and a good keep-alive period depend on your specific load situation.
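As a rough illustration of the mechanism (in Python's standard library rather than PHP/Apache): two requests sent over one http.client connection reuse the same TCP socket when the server speaks HTTP/1.1 keep-alive, so no second handshake occurs.

```python
# Demonstrates HTTP keep-alive: two requests, one TCP connection.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1 keeps connections alive by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
first = conn.getresponse()
first.read()
sock_first = conn.sock              # underlying TCP socket after request 1

conn.request("GET", "/")            # second request on the same connection
second = conn.getresponse()
second.read()
sock_second = conn.sock

print(sock_first is sock_second)    # True: no new TCP handshake was needed
conn.close()
server.shutdown()
```

The PHP process serving each request neither knows nor cares that the socket was reused; that bookkeeping lives entirely in the web server, which is exactly the point of the answer above.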
No. HTTP keep-alive saves the resources of the TCP connection. PHP and MySQL will not even be aware that the connection is open. The next request will be faster because the time spent resolving the IP address and opening a new TCP connection is saved; all of this remains with Apache.
300,797
I listen to a podcast that I like, but every episode ends with > > Our listeners are what make [podcast name] possible. > > > which makes me cringe a little each time I hear it. Is it just me, or is the sentence wrong? And if so, what is the correct form - should both verbs be singular? And is there a difference between UK and US usage? (The podcast is based in the USA.) I found a [similar question](https://english.stackexchange.com/questions/130381/), but it does not apply exactly (and the [one it links to](https://english.stackexchange.com/questions/67553/) only discusses cases where the sentence starts with a time interval). Thank you.
2016/01/19
[ "https://english.stackexchange.com/questions/300797", "https://english.stackexchange.com", "https://english.stackexchange.com/users/156128/" ]
"Our listeners are what make our podcast possible" is grammatical. (But it took a little while for me to figure that out; thank you to everyone else who left comments and answers!) Like you, I felt uncomfortable with it after you brought it up, and I'll discuss the reasons for that below, but they are based on semantics rather than purely on the grammatical structure. As a subject, "our listeners" triggers plural agreement on the corresponding verb in all varieties of English that I know of. In this sentence, that verb is "are." It isn't anything like a collective noun: collective nouns, such as "collection" or "group," are *singular* in form (morphology) but plural in meaning. But "listeners" is clearly *plural* in form, as it has the plural suffix *-s*. As you said, the answers to the question about ['Is it “5–6 weeks are a lot of time” or “5–6 weeks is a lot of time”?'](https://english.stackexchange.com/questions/67553/is-it-5-6-weeks-are-a-lot-of-time-or-5-6-weeks-is-a-lot-of-time?lq=1) only say it is possible to use singular verbs with nominally-plural subjects that are "quantities or measurements" (usually "of time, money, distance, weight"). There is no measurement involved in the noun phrase "our listeners," so I don't think it's natural to use a singular verb with it. So that's what I'd say about the first verb in the sentence. But the verb "make/makes" is part of a later, distinct relative clause whose subject is the word "what." So to figure out which to use between "make" and "makes," we need to determine whether "what" is singular or plural here. This is a tricky question, since relative pronouns always have one form, but can take different types of agreement depending on the situation. A relevant situation that I have just thought of is the transparency of the relative pronoun "who" to person agreement, discussed in the following post: ["You who is" OR "you who are"](https://english.stackexchange.com/questions/304258/you-who-is-or-you-who-are). 
I think that I would *not* use person agreement in sentences like "Our listeners are what make [podcast name] possible." Here's what I mean. To simplify matters, assume we're talking to a single person, "Sally." I would say > > Sally, you are what makes [podcast name] possible. > > > rather than > > Sally, you are what make [podcast name] possible. > > > In fact, a [Google Ngram search](https://books.google.com/ngrams/graph?content=you%20are%20what%20make%2C%20you%20are%20what%20makes&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cyou%20are%20what%20makes%3B%2Cc0) turns up no examples of the structure "you are what make"; "you are what makes" *is* attested, although rarely, and only consistently after the 1980s. On the other hand, as FumbleFingers has discussed, it is *true* that "these are what make" is more frequent in the Google corpus than "these are what makes," and also, ["they are what make" is more frequent than "they are what makes.](https://books.google.com/ngrams/graph?content=they+are+what+makes%2Cthey+are+what+make&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cthey%20are%20what%20makes%3B%2Cc0%3B.t1%3B%2Cthey%20are%20what%20make%3B%2Cc0)" [![enter image description here](https://i.stack.imgur.com/qnVgt.png)](https://i.stack.imgur.com/qnVgt.png) My interpretation of this is that in constructions like this with "what," the verb after "what" does not have to agree with the subject of the preceding clause. However, it evidently can inflect for plurality. It does seem this is an exception to the default situation where "what" triggers singular agreement on a following verb. 
The [English Forums explanation](https://www.englishforums.com/English/AllPluralSingular/xkgvr/post.htm) that FumbleFingers linked to in a comment gives examples where a plural verb can/must be used if the complement includes a plural noun phrase (such as "What they desperately want are clothes and shelter"), but the relevant clause in the sentence you're asking about ("what make/makes [podcast name] possible") doesn't have a plural noun phrase complement. FumbleFingers' [grammarphobia](http://www.grammarphobia.com/blog/2012/06/subject-complement.html) link also doesn't seem to cover sentences of this exact type. But perhaps it could simply be considered the inversion of "What make [podcast name] possible are our viewers," which is a sentence whose form is discussed (and confirmed to be grammatical) in the grammarphobia article. However, it doesn't seem safe to assume that we can invert the sentence and leave the verb the same. [This question](https://english.stackexchange.com/questions/114123/singular-or-plural-verb-after-what) seems to indicate that if a relative pronoun such as "what" starts a "free relative clause," the word "what" does not have to agree in plurality with the preceding noun "listeners." From what I can tell, it is a [free relative clause](https://books.google.com/books?id=wWm8EbQwCZAC&pg=PA126&lpg=PA126&dq=bound%20relative%20clause%20definition&source=bl&ots=vvdGE4YZ96&sig=ZNFcxTEl7fhrO9d8GuUG7B-HJ-M&hl=en&sa=X&ved=0ahUKEwjNq-i4lbbKAhUE52MKHTknDwcQ6AEIKzAF#v=onepage&q&f=false) in your sentence (I'm not even sure if bound relative clauses with "what" are possible in standard English; they usually use other relative pronouns such as "which" or "who"). This would mean that it's grammatically possible to treat "what" as either singular or plural; it appears that speakers usually do treat "what" as plural in this grammatical context (judging from the NGram that [Fumblefingers](https://english.stackexchange.com/a/300865/77227) has provided). 
However, even if a plural verb is more commonly chosen, it does seem to be possible for "what" to be taken as singular here and get singular verb agreement. FumbleFinger's ngram link indicates that a notable minority of speakers take this option in the Google Books corpus, and I think the evident opacity of the structure in modern usage to *personal* agreement (as shown in sentences using the pronoun "you") provides some analogical support to the idea that it can be opaque to *number* agreement as well. When I first read your post, my reaction was that I would say "Our listeners are what makes our podcast possible." Here are some other example sentences where I would use a singular verb rather than agreeing with the plural subject of the main clause: * They are what is wrong with this place. * They are all that makes my life worth living. Different speakers may have different intuitions about this, however. Maybe a good way to think of it is this: just as you could say "our viewers are the thing that makes our podcast great," you could also say "our viewers are the things that make our podcast great." Both are grammatically correct, although they may have slightly different shades of meaning. And since "what" can be plural or singular, it can mean either "the thing that" or "the things that." An interesting point besides this: despite the fact that I would use a singular "makes" in the OP's sentence, I agree that only plural "make" is appropriate in a sentence like > > Blue grass pastures, fields of clover; These are what make Mansfield > grow. 
> > > –["What Makes Mansfield Grow,"](https://books.google.com/books?id=a9E973EAKxYC&pg=PA8&dq=%22these%20are%20what%20make%22&hl=en&sa=X&ved=0ahUKEwiSmP_l_LjKAhVX2GMKHUKvBMMQ6AEIJDAB#v=onepage&q=%22these%20are%20what%20make%22&f=false) from *Around Mansfield* by the Mansfield Historical Society This is because I would not be comfortable saying "Blue grass pastures, fields of clover; These are the thing that makes Mansfield grow." Semantically, these are not one thing, or even one type of thing.
I'm astonished to see that as I write, the only response is 4 users (one commenter and 3 upvoters) claiming **our listeners** is singular, and another comment effectively endorsing the singular usage by converting the noun phrase to ***a*** group/collection of our listeners. I can only assume this sort of nonsense somehow arises from the AmE [tendency to treat collective nouns as singular](https://english.stackexchange.com/questions/1338/are-collective-nouns-always-plural-or-are-certain-ones-singular). But it's hard to see how ***listeners*** could be thought of as a "collective noun" in the same way as ***class, team, family*** etc. And I'm sure not even the most committed proponent of this grammatical principle could endorse, say, *Our users **is** seriously mistaken*. --- TL;DR: Obviously ***listeners*** is plural, regardless of whether they're ***ours*** or not. If anyone has doubts about the plurality of ***what***, check out [grammarphobia](http://www.grammarphobia.com/blog/2012/06/subject-complement.html), where it says... > > *Note that “what” is construed as **singular when the complement is singular**, and **plural when the complement is plural**.* > > > --- I love "AmE/BrE usage split" questions. *These are what **make** ELU interesting*. But according to NGrams, even Americans aren't likely to say *these are what **makes** ELU interesting*... [![](https://i.stack.imgur.com/OsKvK.png)](https://books.google.com/ngrams/graph?content=these%20are%20what%20make%2Cthese%20are%20what%20makes&year_start=1900&year_end=2000&corpus=17&smoothing=3&share=&direct_url=t1%3B%2Cthese%20are%20what%20make%3B%2Cc0%3B.t1%3B%2Cthese%20are%20what%20makes%3B%2Cc0) And even though it's fine to say what's between your ears ***is*** your brains, no-one says [Our brains **is** what **makes** us (human)](https://www.google.com/search?tbm=bks&q=%22our%20brains%20is%20what%20makes%20us%22). 
Where that link just has a couple of "accidental collocations" - the standard form is [*Our brains **are** what **make** us human*](https://www.google.com/search?tbm=bks&q=%22our%20brains%20are%20what%20make%20us%22).
300,797
I listen to a podcast that I like, but every episode ends with > > Our listeners are what make [podcast name] possible. > > > which makes me cringe a little each time I hear it. Is it just me, or is the sentence wrong? And if so, what is the correct form - should both verbs be singular? And is there a difference between UK and US usage? (The podcast is based in the USA.) I found a [similar question](https://english.stackexchange.com/questions/130381/), but it does not apply exactly (and the [one it links to](https://english.stackexchange.com/questions/67553/) only discusses cases where the sentence starts with a time interval). Thank you.
2016/01/19
[ "https://english.stackexchange.com/questions/300797", "https://english.stackexchange.com", "https://english.stackexchange.com/users/156128/" ]
I'm astonished to see that as I write, the only response is 4 users (one commenter and 3 upvoters) claiming **our listeners** is singular, and another comment effectively endorsing the singular usage by converting the noun phrase to ***a*** group/collection of our listeners. I can only assume this sort of nonsense somehow arises from the AmE [tendency to treat collective nouns as singular](https://english.stackexchange.com/questions/1338/are-collective-nouns-always-plural-or-are-certain-ones-singular). But it's hard to see how ***listeners*** could be thought of as a "collective noun" in the same way as ***class, team, family*** etc. And I'm sure not even the most committed proponent of this grammatical principle could endorse, say, *Our users **is** seriously mistaken*. --- TL;DR: Obviously ***listeners*** is plural, regardless of whether they're ***ours*** or not. If anyone has doubts about the plurality of ***what***, check out [grammarphobia](http://www.grammarphobia.com/blog/2012/06/subject-complement.html), where it says... > > *Note that “what” is construed as **singular when the complement is singular**, and **plural when the complement is plural**.* > > > --- I love "AmE/BrE usage split" questions. *These are what **make** ELU interesting*. But according to NGrams, even Americans aren't likely to say *these are what **makes** ELU interesting*... [![](https://i.stack.imgur.com/OsKvK.png)](https://books.google.com/ngrams/graph?content=these%20are%20what%20make%2Cthese%20are%20what%20makes&year_start=1900&year_end=2000&corpus=17&smoothing=3&share=&direct_url=t1%3B%2Cthese%20are%20what%20make%3B%2Cc0%3B.t1%3B%2Cthese%20are%20what%20makes%3B%2Cc0) And even though it's fine to say what's between your ears ***is*** your brains, no-one says [Our brains **is** what **makes** us (human)](https://www.google.com/search?tbm=bks&q=%22our%20brains%20is%20what%20makes%20us%22). 
Where that link just has a couple of "accidental collocations" - the standard form is [*Our brains **are** what **make** us human*](https://www.google.com/search?tbm=bks&q=%22our%20brains%20are%20what%20make%20us%22).
"Our listeners are what make X." I think, here X may be [noun](http://www.englishgrammar-a2z.com/p/noun.html) (any name of podcast channel, episode, or radio channel etc).
300,797
I listen to a podcast that I like, but every episode ends with > > Our listeners are what make [podcast name] possible. > > > which makes me cringe a little each time I hear it. Is it just me, or is the sentence wrong? And if so, what is the correct form - should both verbs be singular? And is there a difference between UK and US usage? (The podcast is based in the USA.) I found a [similar question](https://english.stackexchange.com/questions/130381/), but it does not apply exactly (and the [one it links to](https://english.stackexchange.com/questions/67553/) only discusses cases where the sentence starts with a time interval). Thank you.
2016/01/19
[ "https://english.stackexchange.com/questions/300797", "https://english.stackexchange.com", "https://english.stackexchange.com/users/156128/" ]
I'm astonished to see that as I write, the only response is 4 users (one commenter and 3 upvoters) claiming **our listeners** is singular, and another comment effectively endorsing the singular usage by converting the noun phrase to ***a*** group/collection of our listeners. I can only assume this sort of nonsense somehow arises from the AmE [tendency to treat collective nouns as singular](https://english.stackexchange.com/questions/1338/are-collective-nouns-always-plural-or-are-certain-ones-singular). But it's hard to see how ***listeners*** could be thought of as a "collective noun" in the same way as ***class, team, family*** etc. And I'm sure not even the most committed proponent of this grammatical principle could endorse, say, *Our users **is** seriously mistaken*. --- TL;DR: Obviously ***listeners*** is plural, regardless of whether they're ***ours*** or not. If anyone has doubts about the plurality of ***what***, check out [grammarphobia](http://www.grammarphobia.com/blog/2012/06/subject-complement.html), where it says... > > *Note that “what” is construed as **singular when the complement is singular**, and **plural when the complement is plural**.* > > > --- I love "AmE/BrE usage split" questions. *These are what **make** ELU interesting*. But according to NGrams, even Americans aren't likely to say *these are what **makes** ELU interesting*... [![](https://i.stack.imgur.com/OsKvK.png)](https://books.google.com/ngrams/graph?content=these%20are%20what%20make%2Cthese%20are%20what%20makes&year_start=1900&year_end=2000&corpus=17&smoothing=3&share=&direct_url=t1%3B%2Cthese%20are%20what%20make%3B%2Cc0%3B.t1%3B%2Cthese%20are%20what%20makes%3B%2Cc0) And even though it's fine to say what's between your ears ***is*** your brains, no-one says [Our brains **is** what **makes** us (human)](https://www.google.com/search?tbm=bks&q=%22our%20brains%20is%20what%20makes%20us%22). 
Where that link just has a couple of "accidental collocations" - the standard form is [*Our brains **are** what **make** us human*](https://www.google.com/search?tbm=bks&q=%22our%20brains%20are%20what%20make%20us%22).
"Our listeners are what make/makes [podcast name] possible." It probably depends on what is inferred: "Our listeners are [the things or people who] make this podcast possible." "Our listeners are [the reason that] makes this podcast possible." I think both answers are acceptable.
300,797
I listen to a podcast that I like, but every episode ends with > > Our listeners are what make [podcast name] possible. > > > which makes me cringe a little each time I hear it. Is it just me, or is the sentence wrong? And if so, what is the correct form - should both verbs be singular? And is there a difference between UK and US usage? (The podcast is based in the USA). I found a [similar question](https://english.stackexchange.com/questions/130381/), but it does not apply exactly (and the [one it links to](https://english.stackexchange.com/questions/67553/) only discusses cases where the sentence starts with a time interval). Thank you.
2016/01/19
[ "https://english.stackexchange.com/questions/300797", "https://english.stackexchange.com", "https://english.stackexchange.com/users/156128/" ]
"Our listeners are what make our podcast possible" is grammatical. (But it took a little while for me to figure that out; thank you to everyone else who left comments and answers!) Like you, I felt uncomfortable with it after you brought it up, and I'll discuss the reasons for that below, but they are based on semantics rather than purely on the grammatical structure. As a subject, "our listeners" triggers plural agreement on the corresponding verb in all varieties of English that I know of. In this sentence, that verb is "are." It isn't anything like a collective noun: collective nouns, such as "collection" or "group," are *singular* in form (morphology) but plural in meaning. But "listeners" is clearly *plural* in form, as it has the plural suffix *-s*. As you said, the answers to the question about ['Is it “5–6 weeks are a lot of time” or “5–6 weeks is a lot of time”?'](https://english.stackexchange.com/questions/67553/is-it-5-6-weeks-are-a-lot-of-time-or-5-6-weeks-is-a-lot-of-time?lq=1) only say it is possible to use singular verbs with nominally-plural subjects that are "quantities or measurements" (usually "of time, money, distance, weight"). There is no measurement involved in the noun phrase "our listeners," so I don't think it's natural to use a singular verb with it. So that's what I'd say about the first verb in the sentence. But the verb "make/makes" is part of a later, distinct relative clause whose subject is the word "what." So to figure out which to use between "make" and "makes," we need to determine whether "what" is singular or plural here. This is a tricky question, since relative pronouns always have one form, but can take different types of agreement depending on the situation. A relevant situation that I have just thought of is the transparency of the relative pronoun "who" to person agreement, discussed in the following post: ["You who is" OR "you who are"](https://english.stackexchange.com/questions/304258/you-who-is-or-you-who-are). 
I think that I would *not* use person agreement in sentences like "Our listeners are what make [podcast name] possible." Here's what I mean. To simplify matters, assume we're talking to a single person, "Sally." I would say > > Sally, you are what makes [podcast name] possible. > > > rather than > > Sally, you are what make [podcast name] possible. > > > In fact, a [Google Ngram search](https://books.google.com/ngrams/graph?content=you%20are%20what%20make%2C%20you%20are%20what%20makes&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cyou%20are%20what%20makes%3B%2Cc0) turns up no examples of the structure "you are what make"; "you are what makes" *is* attested, although rarely, and only consistently after the 1980s. On the other hand, as FumbleFingers has discussed, it is *true* that "these are what make" is more frequent in the Google corpus than "these are what makes," and also, ["they are what make" is more frequent than "they are what makes.](https://books.google.com/ngrams/graph?content=they+are+what+makes%2Cthey+are+what+make&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cthey%20are%20what%20makes%3B%2Cc0%3B.t1%3B%2Cthey%20are%20what%20make%3B%2Cc0)" [![enter image description here](https://i.stack.imgur.com/qnVgt.png)](https://i.stack.imgur.com/qnVgt.png) My interpretation of this is that in constructions like this with "what," the verb after "what" does not have to agree with the subject of the preceding clause. However, it evidently can inflect for plurality. It does seem this is an exception to the default situation where "what" triggers singular agreement on a following verb. 
The [English Forums explanation](https://www.englishforums.com/English/AllPluralSingular/xkgvr/post.htm) that FumbleFingers linked to in a comment gives examples where a plural verb can/must be used if the complement includes a plural noun phrase (such as "What they desperately want are clothes and shelter"), but the relevant clause in the sentence you're asking about ("what make/makes [podcast name] possible") doesn't have a plural noun phrase complement. FumbleFingers' [grammarphobia](http://www.grammarphobia.com/blog/2012/06/subject-complement.html) link also doesn't seem to cover sentences of this exact type. But perhaps it could simply be considered the inversion of "What make [podcast name] possible are our viewers," which is a sentence whose form is discussed (and confirmed to be grammatical) in the grammarphobia article. However, it doesn't seem safe to assume that we can invert the sentence and leave the verb the same. [This question](https://english.stackexchange.com/questions/114123/singular-or-plural-verb-after-what) seems to indicate that if a relative pronoun such as "what" starts a "free relative clause," the word "what" does not have to agree in plurality with the preceding noun "listeners." From what I can tell, it is a [free relative clause](https://books.google.com/books?id=wWm8EbQwCZAC&pg=PA126&lpg=PA126&dq=bound%20relative%20clause%20definition&source=bl&ots=vvdGE4YZ96&sig=ZNFcxTEl7fhrO9d8GuUG7B-HJ-M&hl=en&sa=X&ved=0ahUKEwjNq-i4lbbKAhUE52MKHTknDwcQ6AEIKzAF#v=onepage&q&f=false) in your sentence (I'm not even sure if bound relative clauses with "what" are possible in standard English; they usually use other relative pronouns such as "which" or "who"). This would mean that it's grammatically possible to treat "what" as either singular or plural; it appears that speakers usually do treat "what" as plural in this grammatical context (judging from the NGram that [Fumblefingers](https://english.stackexchange.com/a/300865/77227) has provided). 
However, even if a plural verb is more commonly chosen, it does seem to be possible for "what" to be taken as singular here and get singular verb agreement. FumbleFinger's ngram link indicates that a notable minority of speakers take this option in the Google Books corpus, and I think the evident opacity of the structure in modern usage to *personal* agreement (as shown in sentences using the pronoun "you") provides some analogical support to the idea that it can be opaque to *number* agreement as well. When I first read your post, my reaction was that I would say "Our listeners are what makes our podcast possible." Here are some other example sentences where I would use a singular verb rather than agreeing with the plural subject of the main clause: * They are what is wrong with this place. * They are all that makes my life worth living. Different speakers may have different intuitions about this, however. Maybe a good way to think of it is this: just as you could say "our viewers are the thing that makes our podcast great," you could also say "our viewers are the things that make our podcast great." Both are grammatically correct, although they may have slightly different shades of meaning. And since "what" can be plural or singular, it can mean either "the thing that" or "the things that." An interesting point besides this: despite the fact that I would use a singular "makes" in the OP's sentence, I agree that only plural "make" is appropriate in a sentence like > > Blue grass pastures, fields of clover; These are what make Mansfield > grow. 
> > > –["What Makes Mansfield Grow,"](https://books.google.com/books?id=a9E973EAKxYC&pg=PA8&dq=%22these%20are%20what%20make%22&hl=en&sa=X&ved=0ahUKEwiSmP_l_LjKAhVX2GMKHUKvBMMQ6AEIJDAB#v=onepage&q=%22these%20are%20what%20make%22&f=false) from *Around Mansfield* by the Mansfield Historical Society This is because I would not be comfortable saying "Blue grass pastures, fields of clover; These are the thing that makes Mansfield grow." Semantically, these are not one thing, or even one type of thing.
"Our listeners are what make X." I think, here X may be [noun](http://www.englishgrammar-a2z.com/p/noun.html) (any name of podcast channel, episode, or radio channel etc).
300,797
I listen to a podcast that I like, but every episode ends with > > Our listeners are what make [podcast name] possible. > > > which makes me cringe a little each time I hear it. Is it just me, or is the sentence wrong? And if so, what is the correct form - should both verbs be singular? And is there a difference between UK and US usage? (The podcast is based in the USA). I found a [similar question](https://english.stackexchange.com/questions/130381/), but it does not apply exactly (and the [one it links to](https://english.stackexchange.com/questions/67553/) only discusses cases where the sentence starts with a time interval). Thank you.
2016/01/19
[ "https://english.stackexchange.com/questions/300797", "https://english.stackexchange.com", "https://english.stackexchange.com/users/156128/" ]
"Our listeners are what make our podcast possible" is grammatical. (But it took a little while for me to figure that out; thank you to everyone else who left comments and answers!) Like you, I felt uncomfortable with it after you brought it up, and I'll discuss the reasons for that below, but they are based on semantics rather than purely on the grammatical structure. As a subject, "our listeners" triggers plural agreement on the corresponding verb in all varieties of English that I know of. In this sentence, that verb is "are." It isn't anything like a collective noun: collective nouns, such as "collection" or "group," are *singular* in form (morphology) but plural in meaning. But "listeners" is clearly *plural* in form, as it has the plural suffix *-s*. As you said, the answers to the question about ['Is it “5–6 weeks are a lot of time” or “5–6 weeks is a lot of time”?'](https://english.stackexchange.com/questions/67553/is-it-5-6-weeks-are-a-lot-of-time-or-5-6-weeks-is-a-lot-of-time?lq=1) only say it is possible to use singular verbs with nominally-plural subjects that are "quantities or measurements" (usually "of time, money, distance, weight"). There is no measurement involved in the noun phrase "our listeners," so I don't think it's natural to use a singular verb with it. So that's what I'd say about the first verb in the sentence. But the verb "make/makes" is part of a later, distinct relative clause whose subject is the word "what." So to figure out which to use between "make" and "makes," we need to determine whether "what" is singular or plural here. This is a tricky question, since relative pronouns always have one form, but can take different types of agreement depending on the situation. A relevant situation that I have just thought of is the transparency of the relative pronoun "who" to person agreement, discussed in the following post: ["You who is" OR "you who are"](https://english.stackexchange.com/questions/304258/you-who-is-or-you-who-are). 
I think that I would *not* use person agreement in sentences like "Our listeners are what make [podcast name] possible." Here's what I mean. To simplify matters, assume we're talking to a single person, "Sally." I would say > > Sally, you are what makes [podcast name] possible. > > > rather than > > Sally, you are what make [podcast name] possible. > > > In fact, a [Google Ngram search](https://books.google.com/ngrams/graph?content=you%20are%20what%20make%2C%20you%20are%20what%20makes&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cyou%20are%20what%20makes%3B%2Cc0) turns up no examples of the structure "you are what make"; "you are what makes" *is* attested, although rarely, and only consistently after the 1980s. On the other hand, as FumbleFingers has discussed, it is *true* that "these are what make" is more frequent in the Google corpus than "these are what makes," and also, ["they are what make" is more frequent than "they are what makes.](https://books.google.com/ngrams/graph?content=they+are+what+makes%2Cthey+are+what+make&year_start=1800&year_end=2000&corpus=15&smoothing=3&share=&direct_url=t1%3B%2Cthey%20are%20what%20makes%3B%2Cc0%3B.t1%3B%2Cthey%20are%20what%20make%3B%2Cc0)" [![enter image description here](https://i.stack.imgur.com/qnVgt.png)](https://i.stack.imgur.com/qnVgt.png) My interpretation of this is that in constructions like this with "what," the verb after "what" does not have to agree with the subject of the preceding clause. However, it evidently can inflect for plurality. It does seem this is an exception to the default situation where "what" triggers singular agreement on a following verb. 
The [English Forums explanation](https://www.englishforums.com/English/AllPluralSingular/xkgvr/post.htm) that FumbleFingers linked to in a comment gives examples where a plural verb can/must be used if the complement includes a plural noun phrase (such as "What they desperately want are clothes and shelter"), but the relevant clause in the sentence you're asking about ("what make/makes [podcast name] possible") doesn't have a plural noun phrase complement. FumbleFingers' [grammarphobia](http://www.grammarphobia.com/blog/2012/06/subject-complement.html) link also doesn't seem to cover sentences of this exact type. But perhaps it could simply be considered the inversion of "What make [podcast name] possible are our viewers," which is a sentence whose form is discussed (and confirmed to be grammatical) in the grammarphobia article. However, it doesn't seem safe to assume that we can invert the sentence and leave the verb the same. [This question](https://english.stackexchange.com/questions/114123/singular-or-plural-verb-after-what) seems to indicate that if a relative pronoun such as "what" starts a "free relative clause," the word "what" does not have to agree in plurality with the preceding noun "listeners." From what I can tell, it is a [free relative clause](https://books.google.com/books?id=wWm8EbQwCZAC&pg=PA126&lpg=PA126&dq=bound%20relative%20clause%20definition&source=bl&ots=vvdGE4YZ96&sig=ZNFcxTEl7fhrO9d8GuUG7B-HJ-M&hl=en&sa=X&ved=0ahUKEwjNq-i4lbbKAhUE52MKHTknDwcQ6AEIKzAF#v=onepage&q&f=false) in your sentence (I'm not even sure if bound relative clauses with "what" are possible in standard English; they usually use other relative pronouns such as "which" or "who"). This would mean that it's grammatically possible to treat "what" as either singular or plural; it appears that speakers usually do treat "what" as plural in this grammatical context (judging from the NGram that [Fumblefingers](https://english.stackexchange.com/a/300865/77227) has provided). 
However, even if a plural verb is more commonly chosen, it does seem to be possible for "what" to be taken as singular here and get singular verb agreement. FumbleFinger's ngram link indicates that a notable minority of speakers take this option in the Google Books corpus, and I think the evident opacity of the structure in modern usage to *personal* agreement (as shown in sentences using the pronoun "you") provides some analogical support to the idea that it can be opaque to *number* agreement as well. When I first read your post, my reaction was that I would say "Our listeners are what makes our podcast possible." Here are some other example sentences where I would use a singular verb rather than agreeing with the plural subject of the main clause: * They are what is wrong with this place. * They are all that makes my life worth living. Different speakers may have different intuitions about this, however. Maybe a good way to think of it is this: just as you could say "our viewers are the thing that makes our podcast great," you could also say "our viewers are the things that make our podcast great." Both are grammatically correct, although they may have slightly different shades of meaning. And since "what" can be plural or singular, it can mean either "the thing that" or "the things that." An interesting point besides this: despite the fact that I would use a singular "makes" in the OP's sentence, I agree that only plural "make" is appropriate in a sentence like > > Blue grass pastures, fields of clover; These are what make Mansfield > grow. 
> > > –["What Makes Mansfield Grow,"](https://books.google.com/books?id=a9E973EAKxYC&pg=PA8&dq=%22these%20are%20what%20make%22&hl=en&sa=X&ved=0ahUKEwiSmP_l_LjKAhVX2GMKHUKvBMMQ6AEIJDAB#v=onepage&q=%22these%20are%20what%20make%22&f=false) from *Around Mansfield* by the Mansfield Historical Society This is because I would not be comfortable saying "Blue grass pastures, fields of clover; These are the thing that makes Mansfield grow." Semantically, these are not one thing, or even one type of thing.
"Our listeners are what make/makes [podcast name] possible." It probably depends on what is inferred: "Our listeners are [the things or people who] make this podcast possible." "Our listeners are [the reason that] makes this podcast possible." I think both answers are acceptable.
300,797
I listen to a podcast that I like, but every episode ends with > > Our listeners are what make [podcast name] possible. > > > which makes me cringe a little each time I hear it. Is it just me, or is the sentence wrong? And if so, what is the correct form - should both verbs be singular? And is there a difference between UK and US usage? (The podcast is based in the USA). I found a [similar question](https://english.stackexchange.com/questions/130381/), but it does not apply exactly (and the [one it links to](https://english.stackexchange.com/questions/67553/) only discusses cases where the sentence starts with a time interval). Thank you.
2016/01/19
[ "https://english.stackexchange.com/questions/300797", "https://english.stackexchange.com", "https://english.stackexchange.com/users/156128/" ]
"Our listeners are what make/makes [podcast name] possible." It probably depends on what is inferred: "Our listeners are [the things or people who] make this podcast possible." "Our listeners are [the reason that] makes this podcast possible." I think both answers are acceptable.
"Our listeners are what make X." I think, here X may be [noun](http://www.englishgrammar-a2z.com/p/noun.html) (any name of podcast channel, episode, or radio channel etc).
16,044,792
I'm trying to determine the role of ASP.NET worker threads. My IIS 7 installation defaulted to allowing a maximum of 25 worker threads, whereas I would have otherwise set it to 1. When a user requests an `.aspx` page, I understand that that request will retrieve a worker thread. But does the loading of each of the images on that page also grab a worker thread? And once an image is retrieved, is the worker thread that retrieved it also responsible for transmitting it to the user (via blocking-tcp-sockets?)?
2013/04/16
[ "https://Stackoverflow.com/questions/16044792", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1362135/" ]
Let's walk through how a web request to an ASPX page looks to a user and their browser. The user navigates their browser to the ASPX page. On the server, IIS recognizes this as an ASP.NET request and routes it to the .NET handlers for processing, which includes taking a worker thread, processing the page, and delivering the resulting HTML back to the user's browser. This does not include delivering the actual images, JavaScript files, CSS files, and other external resources - just the resulting HTML from the page itself goes back to the user's browser. When the user's browser renders the page, it will then make additional requests for the other resources on the page - images, JavaScript files, etc. When IIS receives the requests for these files, it will process them as static content, and therefore the ASP.NET handlers (and their worker processes) are not involved in processing or delivering the content. Note that you *can* configure IIS to use the .NET handlers to process these types of requests, but for static content, IIS won't do that out-of-the-box.
IIS 7's installer includes "Common Http Features->Static Content" when installing. This module is responsible for handling the static content and I do not believe it uses any worker threads. One worker thread seems a bit meager though, even for a test server. If your code goes into a long process (say a long query) you will be blocked from running other pages waiting out a single worker process. What prompted you to wish to set it to 1?
16,044,792
I'm trying to determine the role of ASP.NET worker threads. My IIS 7 installation defaulted to allowing a maximum of 25 worker threads, whereas I would have otherwise set it to 1. When a user requests an `.aspx` page, I understand that that request will retrieve a worker thread. But does the loading of each of the images on that page also grab a worker thread? And once an image is retrieved, is the worker thread that retrieved it also responsible for transmitting it to the user (via blocking-tcp-sockets?)?
2013/04/16
[ "https://Stackoverflow.com/questions/16044792", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1362135/" ]
Let's walk through how a web request to an ASPX page looks to a user and their browser. The user navigates their browser to the ASPX page. On the server, IIS recognizes this as an ASP.NET request and routes it to the .NET handlers for processing, which includes taking a worker thread, processing the page, and delivering the resulting HTML back to the user's browser. This does not include delivering the actual images, JavaScript files, CSS files, and other external resources - just the resulting HTML from the page itself goes back to the user's browser. When the user's browser renders the page, it will then make additional requests for the other resources on the page - images, JavaScript files, etc. When IIS receives the requests for these files, it will process them as static content, and therefore the ASP.NET handlers (and their worker processes) are not involved in processing or delivering the content. Note that you *can* configure IIS to use the .NET handlers to process these types of requests, but for static content, IIS won't do that out-of-the-box.
I am not sure which IIS version you are talking about, so [here is something](http://forums.iis.net/t/1139229.aspx/1) I read some time back on v5.1. > > When using ASPCompat and session state, the runtime may serialize requests to the same session to a single thread. > > > Otherwise, by default, if you make 12 requests to your application to the sleeping page, ASP.NET will make each subsequent request wait until a thread is freed up by a previous request. You can control this behavior via configuration settings in machine.config, where the number of concurrent threads is the difference between maxWorkerThreads and minFreeThreads. Please also double-check that these settings aren't set such that your application is only capable of processing one ASP.NET request concurrently. > > >
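As a rough sketch of where the settings named above live: in classic ASP.NET (2.x era), `maxWorkerThreads` is an attribute of the `<processModel>` element and `minFreeThreads` of `<httpRuntime>`, both in machine.config. The element placement and the values below are illustrative, not recommendations from the linked post:

```xml
<!-- machine.config (illustrative values only) -->
<system.web>
  <!-- maxWorkerThreads is per-CPU; autoConfig must be false for it to take effect -->
  <processModel autoConfig="false" maxWorkerThreads="100" maxIoThreads="100" />
  <!-- minFreeThreads is a pool-wide floor; concurrency ~= maxWorkerThreads*CPUs - minFreeThreads -->
  <httpRuntime minFreeThreads="88" minLocalRequestFreeThreads="76" />
</system.web>
```

Setting `minFreeThreads` close to the total worker-thread count is what would reproduce the "only one concurrent ASP.NET request" symptom the quote warns about.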
16,044,792
I'm trying to determine the role of ASP.NET worker threads. My IIS 7 installation defaulted to allowing a maximum of 25 worker threads, whereas I would have otherwise set it to 1. When a user requests an `.aspx` page, I understand that that request will retrieve a worker thread. But does the loading of each of the images on that page also grab a worker thread? And once an image is retrieved, is the worker thread that retrieved it also responsible for transmitting it to the user (via blocking-tcp-sockets?)?
2013/04/16
[ "https://Stackoverflow.com/questions/16044792", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1362135/" ]
Let's walk through how a web request to an ASPX page looks to a user and their browser. The user navigates their browser to the ASPX page. On the server, IIS recognizes this as an ASP.NET request and routes it to the .NET handlers for processing, which includes taking a worker thread, processing the page, and delivering the resulting HTML back to the user's browser. This does not include delivering the actual images, JavaScript files, CSS files, and other external resources - just the resulting HTML from the page itself goes back to the user's browser. When the user's browser renders the page, it will then make additional requests for the other resources on the page - images, JavaScript files, etc. When IIS receives the requests for these files, it will process them as static content, and therefore the ASP.NET handlers (and their worker processes) are not involved in processing or delivering the content. Note that you *can* configure IIS to use the .NET handlers to process these types of requests, but for static content, IIS won't do that out-of-the-box.
Yes, if you have blocking calls in your ASP.NET app code. No, if you respond to the request with data that is all in memory. Static images should not be served via ASP.NET; a static file server like IIS or nginx should do this much faster.
27,481
I need more desk space, so I want to move my desktop tower to the floor. Are there any platforms that I could just lay on the carpet? I want to make sure there's enough room underneath the case for ventilation, and not to have any issues with static etc.
2009/08/21
[ "https://superuser.com/questions/27481", "https://superuser.com", "https://superuser.com/users/3183/" ]
You could try something like [this](http://www.google.com/products/catalog?q=floor+computer+stand&cid=3909432297628997841&sa=title#p). It has wheels and is a little bit above the floor, giving your computer room to breathe.
There are plenty - I recently picked one up for $10. If you have an IKEA desk there are also under-hang supports, although these won't take the wider cases you can get. Despite that, I keep my computer on my desk; it seems to get far less dust in it.
27,481
I need more desk space, so I want to move my desktop tower to the floor. Are there any platforms that I could just lay on the carpet? I want to make sure there's enough room underneath the case for ventilation, and not to have any issues with static etc.
2009/08/21
[ "https://superuser.com/questions/27481", "https://superuser.com", "https://superuser.com/users/3183/" ]
You could try something like [this](http://www.google.com/products/catalog?q=floor+computer+stand&cid=3909432297628997841&sa=title#p). It has wheels and is a little bit above the floor, giving your computer room to breathe.
If you want to go really cheap you can grab some leftover construction supplies, like bricks and small pieces of lumber, that you or a friend have lying around. Clean them off and use them to make a stand for your computer. If you don't have access to this kind of thing it might not be worth it to you, but free is free.
27,481
I need more desk space, so I want to move my desktop tower to the floor. Are there any platforms that I could just lay on the carpet? I want to make sure there's enough room underneath the case for ventilation, and not to have any issues with static etc.
2009/08/21
[ "https://superuser.com/questions/27481", "https://superuser.com", "https://superuser.com/users/3183/" ]
You could try something like [this](http://www.google.com/products/catalog?q=floor+computer+stand&cid=3909432297628997841&sa=title#p). It has wheels and is a little bit above the floor, giving your computer room to breathe.
If your system functions fine sitting on your desk, you shouldn't have any problems with it sitting directly on the floor. (Although be aware that it will accumulate dust internally at a higher rate). Wheels are very useful for under-desk systems; makes pulling them out to get at the back connectors much easier (provided all the cables have enough slack). Either add-on ones (see ephilip's answer), or if a new-/re-build is an option look at something like Akasa's Eclipse or Mirage cases where they're part of the deal.
40,651,710
I am working on a project that revolves around EF. I have the core layers of my project separated into different projects. Unfortunately I constantly get errors that are happening because I am missing a reference to EntityFramework.SqlServer.dll in my project. Adding the .dll file to my build folder fixes this issue, but I want to solve it by a "using" statement in my code, however I cannot do that as I am missing a reference to Entity Framework in my project. I was wondering how do I add one? \*PS If I search for EntityFramework in the "Add reference" panel, I don't get EF as an option. \*\*I could always install it with NuGet, but wouldn't that be redundant, since I already have it installed?
2016/11/17
[ "https://Stackoverflow.com/questions/40651710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5493420/" ]
I had the same issue on an N-tier solution where the DAL uses EF. I solved it by adding a reference to the EntityFramework.SqlServer.dll in the web project. Go to *References --> Add Reference -->* in the dialog, choose *COM* and press *Browse*. Then go to your project which is using EF and go to the project's bin folder where the EF references are stored. Select the *EntityFramework.SqlServer.dll* and add it. Then right click on the newly added reference --> *Properties* --> set *Copy Local* to true[![enter image description here](https://i.stack.imgur.com/CbOuN.png)](https://i.stack.imgur.com/CbOuN.png)
I have run into a similar situation. The way I resolved it is as follows: go to References -> under Assembly references, select Browse and go to the current project's packages folder, then browse to the EntityFramework.dll and EntityFramework.SqlServer.dll of the specific version that I am targeting my application to. This resolved the build issues.
40,651,710
I am working on a project that revolves around EF. I have the core layers of my project separated into different projects. Unfortunately I constantly get errors that are happening because I am missing a reference to EntityFramework.SqlServer.dll in my project. Adding the .dll file to my build folder fixes this issue, but I want to solve it by a "using" statement in my code, however I cannot do that as I am missing a reference to Entity Framework in my project. I was wondering how do I add one? \*PS If I search for EntityFramework in the "Add reference" panel, I don't get EF as an option. \*\*I could always install it with NuGet, but wouldn't that be redundant, since I already have it installed?
2016/11/17
[ "https://Stackoverflow.com/questions/40651710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5493420/" ]
I had the same issue on an N-tier solution where the DAL uses EF. I solved it by adding a reference to the EntityFramework.SqlServer.dll in the web project. Go to *References --> Add Reference -->* in the dialog, choose *COM* and press *Browse*. Then go to your project which is using EF and go to the project's bin folder where the EF references are stored. Select the *EntityFramework.SqlServer.dll* and add it. Then right click on the newly added reference --> *Properties* --> set *Copy Local* to true[![enter image description here](https://i.stack.imgur.com/CbOuN.png)](https://i.stack.imgur.com/CbOuN.png)
I also ran into this problem. I was accessing EF via a separate Data Access assembly, and EF worked perfectly when I called it from a test console app from within the SAME project, but it gave errors when trying to access it from my main application via an instance of my DAL object. I added the 2 references to the main application but it still gave errors. Then I realised what the actual issue was: I needed to bring across the EF configuration sections from the App.config in my DAL project into the main application. Once I'd done this, everything worked perfectly. Basically you need to copy across the contents of the connectionStrings and entityFramework sections; otherwise, EF doesn't know how to connect itself. Of course, if you're creating the EF connection in code inside your DAL, then you may not have this problem.
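For illustration, the two App.config sections that answer refers to look roughly like this in an EF6 project. The connection-string name, server, and database below are placeholders, not values from the original; copy your real sections from the DAL project's App.config:

```xml
<configuration>
  <configSections>
    <!-- EF adds this section declaration when installed via NuGet -->
    <section name="entityFramework"
             type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework"
             requirePermission="false" />
  </configSections>
  <connectionStrings>
    <!-- Placeholder connection string; use the one from your DAL project -->
    <add name="MyDbContext"
         connectionString="Data Source=.;Initial Catalog=MyDb;Integrated Security=True"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
  <entityFramework>
    <providers>
      <!-- This registration is why EntityFramework.SqlServer.dll must be present -->
      <provider invariantName="System.Data.SqlClient"
                type="System.Data.Entity.SqlServer.SqlProviderServices, EntityFramework.SqlServer" />
    </providers>
  </entityFramework>
</configuration>
```

Both the `connectionStrings` and `entityFramework` sections need to end up in the startup application's config file, since EF reads the config of the executing application, not of the class library.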
40,651,710
I am working on a project that revolves around EF. I have the core layers of my project separated into different projects. Unfortunately I constantly get errors that are happening because I am missing a reference to EntityFramework.SqlServer.dll in my project. Adding the .dll file to my build folder fixes this issue, but I want to solve it by a "using" statement in my code, however I cannot do that as I am missing a reference to Entity Framework in my project. I was wondering how do I add one? \*PS If I search for EntityFramework in the "Add reference" panel, I don't get EF as an option. \*\*I could always install it with NuGet, but wouldn't that be redundant, since I already have it installed?
2016/11/17
[ "https://Stackoverflow.com/questions/40651710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5493420/" ]
I had the same issue on an N-tier solution where the DAL uses EF. I solved it by adding a reference to EntityFramework.SqlServer.dll in the web project. Go to *References --> Add Reference -->* in the dialog, choose *COM* and press *Browse*. Then go to the project which is using EF and open that project's bin folder, where the EF references are stored. Select *EntityFramework.SqlServer.dll* and add it. Then right-click on the newly added reference --> *Properties* --> set *Copy Local* to true. [![enter image description here](https://i.stack.imgur.com/CbOuN.png)](https://i.stack.imgur.com/CbOuN.png)
You can also use NuGet: right-click on your project -> Manage NuGet Packages, then search for "EntityFramework". Install the [Entity Framework package from Microsoft.](https://i.stack.imgur.com/M6E7y.png) It will add both the EntityFramework and EntityFramework.SqlServer packages.
40,651,710
I am working on a project that revolves around EF. I have the core layers of my project separated into different projects. Unfortunately I constantly get errors that are happening because I am missing a reference to EntityFramework.SqlServer.dll in my project. Adding the .dll file to my build folder fixes this issue, but I want to solve it by a "using" statement in my code, however I cannot do that as I am missing a reference to Entity Framework in my project. I was wondering how do I add one? \*PS If I search for EntityFramework in the "Add reference" panel, I don't get EF as an option. \*\*I could always install it with NuGet, but wouldn't that be redundant, since I already have it installed?
2016/11/17
[ "https://Stackoverflow.com/questions/40651710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5493420/" ]
You can also use NuGet: right-click on your project -> Manage NuGet Packages, then search for "EntityFramework". Install the [Entity Framework package from Microsoft.](https://i.stack.imgur.com/M6E7y.png) It will add both the EntityFramework and EntityFramework.SqlServer packages.
I have run into a similar situation. The way I resolved it is as follows: go to References -> under assembly references, select Browse, go to the current project's packages folder, and browse to the EntityFramework.dll and EntityFramework.SqlServer.dll of the specific version that I am targeting my application to. This resolved the build issues.
40,651,710
I am working on a project that revolves around EF. I have the core layers of my project separated into different projects. Unfortunately I constantly get errors that are happening because I am missing a reference to EntityFramework.SqlServer.dll in my project. Adding the .dll file to my build folder fixes this issue, but I want to solve it by a "using" statement in my code, however I cannot do that as I am missing a reference to Entity Framework in my project. I was wondering how do I add one? \*PS If I search for EntityFramework in the "Add reference" panel, I don't get EF as an option. \*\*I could always install it with NuGet, but wouldn't that be redundant, since I already have it installed?
2016/11/17
[ "https://Stackoverflow.com/questions/40651710", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5493420/" ]
You can also use NuGet: right-click on your project -> Manage NuGet Packages, then search for "EntityFramework". Install the [Entity Framework package from Microsoft.](https://i.stack.imgur.com/M6E7y.png) It will add both the EntityFramework and EntityFramework.SqlServer packages.
I also ran into this problem. I was accessing EF via a separate Data Access assembly, and EF worked perfectly when I called it from a test console app from within the SAME project, but it gave errors when trying to access it from my main application via an instance of my DAL object. I added the 2 references to the main application but it still gave errors. Then I realised what the actual issue was: I needed to bring across the EF configuration sections from the App.config in my DAL project into the main application. Once I'd done this, everything worked perfectly. Basically you need to copy across the contents of the connectionStrings and entityFramework sections; otherwise, EF doesn't know how to connect itself. Of course, if you're creating the EF connection in code inside your DAL, then you may not have this problem.
6,035
Early home computers and game consoles output video to TV sets. The NTSC color clock frequency is 3.58 MHz. This informed the design of some video systems: <http://pineight.com/mw/index.php?title=Dot_clock_rates> In particular, the Atari 2600 and Intellivision have one pixel per color clock, which is an obviously reasonable way to do it. In the Apple II, the pixel clock is exactly twice the color clock. That makes sense because it has an option to turn *off* the color clock to generate reasonably crisp black-and-white text, then turn it on for the ability to generate artifact colors. This arrangement is very economical on parts count, which was important at the time. The Atari 800 also has a pixel clock exactly twice the color clock, but as far as I know, it does *not* have the option to turn off the color clock. I'm trying to figure out what advantage it gains from this. Specifically, I know if you run at an exact multiple of the color clock you can generate artifact colors, but surely you would get strictly better results by running exactly at the color clock and spending the memory and bandwidth on coloring fewer pixels? For example, say the Atari is operating in a mode with 1 bit per pixel, and generating artifact colors. Would it not be better off halving the pixel resolution and using 2 bits per pixel to just generate the wider range of colors directly? There is a theory that says it makes sense to subsample the chroma information, in other words run the luma information at twice the frequency, because the luma information is more important, but as far as I can see based on e.g. <https://en.wikipedia.org/wiki/Apple_II_graphics#Color_on_the_Apple_II> the result of this is that the luma information simply gets converted into artifact colors, and you might as well have done this directly. 
The situation with other machines like the NES looks even worse; it outputs somewhat more than one pixel per color clock, but less than two, so the extra resolution will just convert into uncontrollable color fringing. On the face of it, it looks like reducing the NES to a 3.58 MHz pixel clock would have produced better results for lower cost. What advantage was there in going higher than one pixel per color clock (in machines that weren't going to turn off the color clock to generate black-and-white text like the Apple II) that I am missing?
2018/03/21
[ "https://retrocomputing.stackexchange.com/questions/6035", "https://retrocomputing.stackexchange.com", "https://retrocomputing.stackexchange.com/users/4274/" ]
The pixel clock has to be fast enough to generate the number of pixels you want to display horizontally within the 56 microsecond scan line interval. At 3.58MHz, you only get about 200 pixels. This was fine for the Atari 2600 et al, which had 160 horizontal pixels, but the other systems you mentioned had higher horizontal resolution, so had to use a faster pixel clock. Edited to clarify in response to comment: This is actually useful. A standard definition TV is actually able to display more detail horizontally in terms of *luminance* (i.e. brightness) than it can in *chrominance* (i.e. "colour"). An NTSC TV *is* limited to approximately 200 transitions of colour in each horizontal line, due to the 3.58MHz colour signal bandwidth, but it can manage somewhere between 400 and 700 brightness transitions per line, depending on the quality of the electronics and how good the signal it's receiving is. PAL and SECAM have slightly higher figures, but in the grand scheme of things the difference is small. This is why the last generation of systems that were designed primarily for TV output tended to have between 256 and 320 horizontal pixels (e.g. the Commodore 64 or Sinclair Spectrum) -- these were the most convenient figures that could reliably be displayed. 480 or 512 pixels might have been interesting, but a lot of users would never have been able to see the detail added by such high resolutions, so it wasn't commercially useful to provide it. On the other end of the spectrum (no pun intended), DVD was designed with a horizontal resolution of 720 pixels because that was the *lowest* convenient resolution that was universally acknowledged to be *beyond* the capabilities of NTSC and PAL TVs. 
But it still encoded the picture in a format that had a lower chrominance resolution than luminance - it uses [4:2:0](https://en.wikipedia.org/wiki/Chroma_subsampling#4:2:0) subsampling, which actually has less resolution in the vertical direction than a TV signal has (although it probably has more in the horizontal direction).
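The arithmetic in that answer can be sanity-checked with a quick sketch (the 56 µs active-line figure is the one the answer uses, not a value I am asserting independently):

```python
# At one pixel per NTSC colour (subcarrier) cycle, a ~56 us active
# scan line only has room for about 200 pixels.
color_clock_hz = 3.58e6   # NTSC colour subcarrier, approximately
active_line_s = 56e-6     # active scan line interval quoted above

pixels_per_line = color_clock_hz * active_line_s
print(round(pixels_per_line))  # 200
```

This is why 160-pixel machines like the Atari 2600 fit comfortably at one pixel per colour clock, while 256- to 320-pixel machines needed a faster pixel clock.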
Short Answer: ============= **There is no relation.** What seems like a relation is an unrelated coincidence. --- Long Answer: ============ First of all, there is no colour clock. The mentioned frequency of 3.58 MHz is **not** a colour clock, but the carrier frequency used to **modulate** the encoded colour signal atop the basic B&W signal. There is no relation to RAM speed, pixel generation or the like. In particular, nothing needs to be adjusted to this clock, either directly or in any multiple thereof. Computers are digital and use digital clocking. TV isn't. The reason why this frequency is used so often in home computers is simple: cost. A 3.58 MHz crystal is *dirt cheap* compared to more 'logical' values. And that's not just by some pennies; they were the single most produced value. For example, in 1980 (just checked some magazines) a 4 MHz crystal (and next to any other) was around 4-5 USD, while a 3.58 MHz one could be acquired at 0.87 USD. That's quite a lot to be saved for a mere 10% less speed. Further, depending on the kind of video generation, the 3.58 MHz was needed to encode the colour signal. So instead of having two crystals, one for CPU and memory and the other for signal generation, one was sufficient - saving even more. --- Technical background: *lines and pixel*, and *resolution vs. colour*. ===================================================================== The colour signal in itself doesn't define any pixels, as it again is analogue. The colour carrier frequency is about 227.5 times the line frequency. Together with a usable line length of 52 of 64 µs, this gives ~185 complete colour changes. In a digital system this would be the same as a maximum of 370 'colour' pixels. Now colour is just some frosting atop a b&w base signal. This signal gives us the intensity of a spot and is again analogue. There is no pixelation. Sure, in Y a 'pixel' is formed by the lines used to draw the picture.
So while this makes discrete steps along the vertical, horizontally any stepping between 1 (one pixel per line) and infinite is possible (\*1). Due to the available analogue bandwidth in a real-world transmission system, it is less than 320 vertical lines for all existing (classic) TV systems on this planet (\*2). In today's terms that may be described as 640 pixels. In reality there are usually fewer than 550 usable. So if we really want to talk about **what amount of pixels is possible**, we need to take both numbers into account. Up to ~370 distinct, non-interacting colour positions and up to ~550 distinct B&W positions are possible. As a result, any system producing up to 370 pixels per line can be displayed on a TV-based CRT system. Each of these pixels will be able to have any (displayable) colour at any (possible) intensity (\*3). With more than ~370 pixels per line, a classic colour TV will no longer be able to guarantee a distinct colour for each pixel. For example, an orange pixel next to a yellow pixel might, even at 500 pixels per line, still come out well defined, while a blue one instead will tend to look more like green. It then depends on the pixel ahead whether it will become blue over time or not. A sequence of blue and yellow dots will look like bluish green and yellowish green instead. (No, this is not the (in)famous NTSC colour bleeding, though it looks similar.) (It also isn't the restriction on adjacent colours on an Apple II, as that is given through the encoding used by Woz.) So, long story short: **there is no direct relation between pixel clock, memory clock and colour carrier.** If at all, it's within design decisions taken by whoever made a computer following certain design goals - usually price, thus reduction of components. --- \*1 - Okay, the X resolution is limited by the upper frequency at which the electron beam can be modulated, and even before that by the upper frequency the television signal allows.
As a third limiter, usually somewhere in between, the colour mask further limits arbitrary changes. \*2 - Horizontal resolution is described in classic analogue TV as the number of vertical black lines on a white background that can still be displayed. Or in other words, the (sine) signal which can still be encoded as a full transition between minimum and maximum intensity - which happens to be exactly the frequency assigned to a channel; for most TV systems, something between 4 and 6 MHz. \*3 - Well, since an analogue signal cannot flip from one state to the exact opposite in zero time, there will be, in both cases (colour and intensity), border effects, which will become more and more visible as frequency (changes) close in on maximum distance and maximum frequency.
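The ~185/~370 figures quoted in that answer follow directly from the ratios it gives (227.5 colour-carrier cycles per line, of which the active 52 µs of a 64 µs line are usable):

```python
# Reproduce the answer's figures for colour resolution per scan line.
cycles_per_line = 227.5    # colour carrier cycles per full scan line
active_fraction = 52 / 64  # usable (active) portion of the line

usable_cycles = cycles_per_line * active_fraction  # complete colour changes
max_colour_pixels = 2 * usable_cycles              # distinct 'colour' pixels

print(round(usable_cycles), round(max_colour_pixels))  # 185 370
```

Each full carrier cycle allows one complete colour transition and back, hence the factor of two between colour changes and addressable colour positions.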
6,035
Early home computers and game consoles output video to TV sets. The NTSC color clock frequency is 3.58 MHz. This informed the design of some video systems: <http://pineight.com/mw/index.php?title=Dot_clock_rates> In particular, the Atari 2600 and Intellivision have one pixel per color clock, which is an obviously reasonable way to do it. In the Apple II, the pixel clock is exactly twice the color clock. That makes sense because it has an option to turn *off* the color clock to generate reasonably crisp black-and-white text, then turn it on for the ability to generate artifact colors. This arrangement is very economical on parts count, which was important at the time. The Atari 800 also has a pixel clock exactly twice the color clock, but as far as I know, it does *not* have the option to turn off the color clock. I'm trying to figure out what advantage it gains from this. Specifically, I know if you run at an exact multiple of the color clock you can generate artifact colors, but surely you would get strictly better results by running exactly at the color clock and spending the memory and bandwidth on coloring fewer pixels? For example, say the Atari is operating in a mode with 1 bit per pixel, and generating artifact colors. Would it not be better off halving the pixel resolution and using 2 bits per pixel to just generate the wider range of colors directly? There is a theory that says it makes sense to subsample the chroma information, in other words run the luma information at twice the frequency, because the luma information is more important, but as far as I can see based on e.g. <https://en.wikipedia.org/wiki/Apple_II_graphics#Color_on_the_Apple_II> the result of this is that the luma information simply gets converted into artifact colors, and you might as well have done this directly. 
The situation with other machines like the NES looks even worse; it outputs somewhat more than one pixel per color clock, but less than two, so the extra resolution will just convert into uncontrollable color fringing. On the face of it, it looks like reducing the NES to a 3.58 MHz pixel clock would have produced better results for lower cost. What advantage was there in going higher than one pixel per color clock (in machines that weren't going to turn off the color clock to generate black-and-white text like the Apple II) that I am missing?
2018/03/21
[ "https://retrocomputing.stackexchange.com/questions/6035", "https://retrocomputing.stackexchange.com", "https://retrocomputing.stackexchange.com/users/4274/" ]
I think you're conflating a few issues: 1. being in-phase with the colour subcarrier; 2. being sampled at a rate less than or equal to the colour subcarrier; and 3. being sampled at an integer division of the colour subcarrier. Being in-phase has exactly one effect: the artefacts on horizontal edges are consistent from one line to the next. The edges do not demonstrate chroma crawl. Being at a rate less than or equal to the colour subcarrier also has exactly one effect: the true colour is going to be displayed somewhere, at least instantaneously, for each pixel. Being at any integer divisor of the colour clock rate buys a third separate advantage: the pixel looks identical no matter where you put it on the display. Nothing you can do is going to get you sharp pixels. All you're doing is picking which sort of artefacts you want. However, you buy yourself a substantial disadvantage for being in-phase: NTSC signals are not normally in-phase *by careful design*. Being 50% out of phase makes the colour subcarrier's interference with the luminance signal *much less visible*. This stems from the original design requirement that a colour signal be viewable, without undue ugliness, on an unfiltered black-and-white set from before the specification of colour. You buy yourself a substantial disadvantage for being less than or equal to the colour subcarrier frequency: low-resolution graphics. You also buy yourself at least two substantial disadvantages if you optimise for being exactly on the colour subcarrier: 1. your allegation that adding luminance information above and beyond the colour subcarrier frequency doesn't have a visible effect becomes true because the information you can add gets trapped in the vestigial parts of the subcarrier filtering.
The actual rule is that with real-life filters, luminance information is liable to be lost only exactly when it is a multiple or divisor of the colour subcarrier — a comb filter is often considered the gold standard for chroma/luma separation and it has that name *exactly because* its frequency domain response graph spikes at integer intervals; and 2. you've optimised for something that isn't actually a constant supposing you ever want to ship in a PAL country. A chip you didn't mention is the Texas Instruments TMS 9918, which is in-phase but uses a non-integer divisor of the colour subcarrier (specifically, each pixel is 2/3rds of an NTSC cycle long). TI considered the constancy of horizontal colour artefacts [*to be a bug not a feature*](http://spatula-city.org/~im14u2c/vdp-99xx/e1/Message_on_Modification_to_9918_from_1982.pdf), dubbing it the rainbow effect. The linked memo shows a suggested modification that switches to ordinary chroma crawl. The non-engineers were apparently filing it as a bug report. So, to summarise, if you are at exactly the colour subcarrier rate: * you look much worse on old and/or cheap black and white sets that don't filter the chroma, which the NTSC spec says they shouldn't have to; * you lose the fine luminance information as a simple practical consequence of the frequency response of ordinary separation filters; * except in PAL countries of course, where all the software you optimised for the results of your decision suddenly has the opposite design decision. And to throw in a bonus argument of lesser weight: the SCART connector dates from the '70s. Even by 1982 all but the very cheapest European micros offered full RGB output — cf. the Oric or the Electron. Fixating on the subset of users that have a magically clean RF connection or a TV with composite connectors but not a SCART or S-Video socket isn't a very long-sighted strategy.
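The TMS 9918 ratio mentioned above can be checked numerically. The exact subcarrier value below is the standard NTSC definition (315/88 MHz), which the answer itself does not spell out:

```python
# Each TMS 9918 pixel is said to be 2/3 of an NTSC colour cycle long,
# so the pixel clock must be 3/2 the colour subcarrier frequency.
ntsc_subcarrier_hz = 315e6 / 88          # 3.579545... MHz, standard NTSC
pixel_clock_hz = ntsc_subcarrier_hz * 3 / 2

print(pixel_clock_hz / 1e6)  # ~5.37 MHz
```

A non-integer ratio like this means a given bit pattern produces different colour artefacts depending on its horizontal position, which is exactly the "rainbow effect" the linked TI memo describes.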
The pixel clock has to be fast enough to generate the number of pixels you want to display horizontally within the 56 microsecond scan line interval. At 3.58MHz, you only get about 200 pixels. This was fine for the Atari 2600 et al, which had 160 horizontal pixels, but the other systems you mentioned had higher horizontal resolution, so had to use a faster pixel clock. Edited to clarify in response to comment: This is actually useful. A standard definition TV is actually able to display more detail horizontally in terms of *luminance* (i.e. brightness) than it can in *chrominance* (i.e. "colour"). An NTSC TV *is* limited to approximately 200 transitions of colour in each horizontal line, due to the 3.58MHz colour signal bandwidth, but it can manage somewhere between 400 and 700 brightness transitions per line, depending on the quality of the electronics and how good the signal it's receiving is. PAL and SECAM have slightly higher figures, but in the grand scheme of things the difference is small. This is why the last generation of systems that were designed primarily for TV output tended to have between 256 and 320 horizontal pixels (e.g. the Commodore 64 or Sinclair Spectrum) -- these were the most convenient figures that could reliably be displayed. 480 or 512 pixels might have been interesting, but a lot of users would never have been able to see the detail added by such high resolutions, so it wasn't commercially useful to provide it. On the other end of the spectrum (no pun intended), DVD was designed with a horizontal resolution of 720 pixels because that was the *lowest* convenient resolution that was universally acknowledged to be *beyond* the capabilities of NTSC and PAL TVs. 
But it still encoded the picture in a format that had a lower chrominance resolution than luminance - it uses [4:2:0](https://en.wikipedia.org/wiki/Chroma_subsampling#4:2:0) subsampling, which actually has less resolution in the vertical direction than a TV signal has (although it probably has more in the horizontal direction).
6,035
Early home computers and game consoles output video to TV sets. The NTSC color clock frequency is 3.58 MHz. This informed the design of some video systems: <http://pineight.com/mw/index.php?title=Dot_clock_rates> In particular, the Atari 2600 and Intellivision have one pixel per color clock, which is an obviously reasonable way to do it. In the Apple II, the pixel clock is exactly twice the color clock. That makes sense because it has an option to turn *off* the color clock to generate reasonably crisp black-and-white text, then turn it on for the ability to generate artifact colors. This arrangement is very economical on parts count, which was important at the time. The Atari 800 also has a pixel clock exactly twice the color clock, but as far as I know, it does *not* have the option to turn off the color clock. I'm trying to figure out what advantage it gains from this. Specifically, I know if you run at an exact multiple of the color clock you can generate artifact colors, but surely you would get strictly better results by running exactly at the color clock and spending the memory and bandwidth on coloring fewer pixels? For example, say the Atari is operating in a mode with 1 bit per pixel, and generating artifact colors. Would it not be better off halving the pixel resolution and using 2 bits per pixel to just generate the wider range of colors directly? There is a theory that says it makes sense to subsample the chroma information, in other words run the luma information at twice the frequency, because the luma information is more important, but as far as I can see based on e.g. <https://en.wikipedia.org/wiki/Apple_II_graphics#Color_on_the_Apple_II> the result of this is that the luma information simply gets converted into artifact colors, and you might as well have done this directly. 
The situation with other machines like the NES looks even worse; it outputs somewhat more than one pixel per color clock, but less than two, so the extra resolution will just convert into uncontrollable color fringing. On the face of it, it looks like reducing the NES to a 3.58 MHz pixel clock would have produced better results for lower cost. What advantage was there in going higher than one pixel per color clock (in machines that weren't going to turn off the color clock to generate black-and-white text like the Apple II) that I am missing?
2018/03/21
[ "https://retrocomputing.stackexchange.com/questions/6035", "https://retrocomputing.stackexchange.com", "https://retrocomputing.stackexchange.com/users/4274/" ]
I think you're conflating a few issues: 1. being in-phase with the colour subcarrier; 2. being sampled at a rate less than or equal to the colour subcarrier; and 3. being sampled at an integer division of the colour subcarrier. Being in-phase has exactly one effect: the artefacts on horizontal edges are consistent from one line to the next. The edges do not demonstrate chroma crawl. Being at a rate less than or equal to the colour subcarrier also has exactly one effect: the true colour is going to be displayed somewhere, at least instantaneously, for each pixel. Being at any integer divisor of the colour clock rate buys a third separate advantage: the pixel looks identical no matter where you put it on the display. Nothing you can do is going to get you sharp pixels. All you're doing is picking which sort of artefacts you want. However, you buy yourself a substantial disadvantage for being in-phase: NTSC signals are not normally in-phase *by careful design*. Being 50% out of phase makes the colour subcarrier's interference with the luminance signal *much less visible*. This stems from the original design requirement that a colour signal be viewable, without undue ugliness, on an unfiltered black-and-white set from before the specification of colour. You buy yourself a substantial disadvantage for being less than or equal to the colour subcarrier frequency: low-resolution graphics. You also buy yourself at least two substantial disadvantages if you optimise for being exactly on the colour subcarrier: 1. your allegation that adding luminance information above and beyond the colour subcarrier frequency doesn't have a visible effect becomes true because the information you can add gets trapped in the vestigial parts of the subcarrier filtering.
The actual rule is that with real-life filters, luminance information is liable to be lost only exactly when it is a multiple or divisor of the colour subcarrier — a comb filter is often considered the gold standard for chroma/luma separation and it has that name *exactly because* its frequency domain response graph spikes at integer intervals; and 2. you've optimised for something that isn't actually a constant supposing you ever want to ship in a PAL country. A chip you didn't mention is the Texas Instruments TMS 9918, which is in-phase but uses a non-integer divisor of the colour subcarrier (specifically, each pixel is 2/3rds of an NTSC cycle long). TI considered the constancy of horizontal colour artefacts [*to be a bug not a feature*](http://spatula-city.org/~im14u2c/vdp-99xx/e1/Message_on_Modification_to_9918_from_1982.pdf), dubbing it the rainbow effect. The linked memo shows a suggested modification that switches to ordinary chroma crawl. The non-engineers were apparently filing it as a bug report. So, to summarise, if you are at exactly the colour subcarrier rate: * you look much worse on old and/or cheap black and white sets that don't filter the chroma, which the NTSC spec says they shouldn't have to; * you lose the fine luminance information as a simple practical consequence of the frequency response of ordinary separation filters; * except in PAL countries of course, where all the software you optimised for the results of your decision suddenly has the opposite design decision. And to throw in a bonus argument of lesser weight: the SCART connector dates from the '70s. Even by 1982 all but the very cheapest European micros offered full RGB output — cf. the Oric or the Electron. Fixating on the subset of users that have a magically clean RF connection or a TV with composite connectors but not a SCART or S-Video socket isn't a very long-sighted strategy.
Short Answer: ============= **There is no relation.** What seems like a relaiton is non related coincidence. --- Long Answer: ============ First of all, there is no colour clock. The mentioned frequency of 3.58 MHz is **not** a colour clock, but the carrier frequency used to **modulate** the encoded colour signal atop the basic B&W signal. There is no relation to RAM speed, pixel generation or alike. Especially nothing that needs to be adjusted to this clock, either direct or in any multiple thereof. Computers are digital and use digital clocking. TV isn't. The reason why this frequency is used that often in home computers is simple: cost. A 3.58 MHz crystal is *dirt cheap* compared to more 'logical' values. And that's not just by some pennies. They where the single most produced value. For example, in 1980 (just checked some magazines) a 4 MHz crystal (and next to any other) was around 4-5 USD, while a 3.58 MHz could be acquired at 0.87 USD. That's quite a lot to be saved for a mere 10% less speed. Further, depending on the kind of video generation, the 3.58 MHz were needed to encode the colour signal. So instead of having two crystals, one for CPU and Memory, the other for signal generation, one was sufficient - saving even more. --- Technical background: *lines and pixel*, and *resolution vs. colour*. ===================================================================== The colour signal in itself doesn't define any pixels, as it again is analogue. The colour carrier frequency is about 227.5 times the line frequency. Together with a usable line length of 52/64 this gives ~185 complete colour changes. In a digital system this would be the same as a maximum of 370 'colour' pixels. Now colour is just some frosting atop a b&w base signal. This signal gives us the intensity of a spot and is again analogue. There is no pixelation. Sure, on Y a 'pixel' is formed by the lines used to draw the picture. 
So while this makes discrete steps along the vertical, horizontally any stepping between 1 (one pixel per line) and infinite is possible (\*1). Due to analogue available bandwidth in a real world transmission system it's for all existing (classic) TV systems on this planet less than 320 vertical lines (\*2). In today's terms that may be described as 640 pixels. In reality there are usually less than 550 usable. So if we really want to talk about **what amount of pixels is possible**, we need to take both numbers into account. Up to ~370 distinct, non interacting colour positions and up to ~550 distinct B&W positions are possible. As a result any system producing with up to 370 pixel can be displayed on a TV based CRT system. Each of these pixels will be able to have any (displayable) colour at any (possible) intensity (\*3). With more than ~370 pixels per line, a classic colour TV will no longer be able to guarantee a distinct colour to each pixel. For example an orange pixel next to a yellow pixel might, even at 500 pixels per line, still come out well defined, while a blue instead will tend to look more like green. Now it depends on one pixel ahead if it will become blue over time or not. A sequence of blue and yellow dots will look like bluish green and yellowish green instead. (No, this is not the (in)famous NTSC colour bleeding, though looking similar.) (It also isn't the restriction of adjacent colours on an Apple II, as this is given through the encoding used by Woz.) So, long story short: **there is no direct relation between pixel clock, memory clock and colour carrier.** If at all, it's within design decisions taken by whoever made a computer following certain design goals - usually price, thus reduction of components. --- \*1 - Okay, the X resolution is limited by the upper frequency the electron beam can be modulated, and even before that by the upper frequency the television signal allows. 
As a third limiter, usually somewhere in between, the colour mask further limits arbitrary changes.

\*2 - Horizontal resolution is described in classic analogue TV as the number of vertical black lines on a white background that can still be displayed. Or in other words, the (sine) signal that can still be encoded as a full transition between minimum and maximum intensity - which happens to be exactly the frequency assigned to a channel. For most TV systems that is something between 4 and 6 MHz.

\*3 - Well, since an analogue signal cannot flip from one state to the exact opposite in zero time, there will be, in both cases (colour and intensity), border effects, which become more and more visible as the frequency of changes closes in on the maximum.
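The arithmetic above is easy to check. A quick back-of-envelope sketch in Python, using the 227.5 carrier-to-line ratio and the 52/64 usable-line fraction quoted in the text (treat these as the answer's own round figures, not exact broadcast-standard constants):

```python
# Back-of-envelope check of the figures in the answer above.
carrier_hz = 3.579545e6        # ~3.58 MHz colour subcarrier
cycles_per_line = 227.5        # carrier is ~227.5x the line frequency
usable_fraction = 52 / 64      # usable portion of one scan line (per the text)

line_rate_hz = carrier_hz / cycles_per_line        # line frequency, ~15.7 kHz
colour_cycles = cycles_per_line * usable_fraction  # full colour cycles per line
colour_pixels = 2 * colour_cycles                  # distinct colour positions

print(round(line_rate_hz))   # -> 15734
print(round(colour_cycles))  # -> 185
print(round(colour_pixels))  # -> 370
```

The ~185 cycles and ~370 colour positions match the numbers used throughout the answer.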
57,010
When I want to create a new document in SharePoint, the only type of document I can create is an MS Word document. I want to be able to create other Office document types like PowerPoint, Access, Excel, Visio, etc. Please provide me the steps.
2013/01/15
[ "https://sharepoint.stackexchange.com/questions/57010", "https://sharepoint.stackexchange.com", "https://sharepoint.stackexchange.com/users/13711/" ]
you need to add another document template: <http://office.microsoft.com/en-us/office365-sharepoint-online-enterprise-help/add-multiple-office-templates-to-a-document-library-HA102409514.aspx>
I developed a web part that creates new Office files in web apps on any site. You can download it here - select "Free Web Part That Creates New Office Documents In Office 365 Web Apps" on this page: <http://wb2-web.sharepoint.com/Pages/o365answers.aspx> If you're using Office 365, you can't create Visio etc files because 365 isn't a true cloud solution and doesn't offer the hosted software you'd need to do that. The Office web apps work by making a copy of a template, which is what this web part does for you.
105,795
What are some quick and easy ways to determine if my web site is healthy or poor when it comes to accessibility for screen readers and other assistive devices? I don't have a screen reader, but I need a way to test so that I can improve the site in this area. Are there any coding strategies or methods for testing this?
2017/04/26
[ "https://webmasters.stackexchange.com/questions/105795", "https://webmasters.stackexchange.com", "https://webmasters.stackexchange.com/users/52407/" ]
There's a free screen reader program for Windows called NVDA. It can be downloaded here: <https://www.nvaccess.org/download/> Macs come with a screen reader program called Voiceover, WebAIM has an article on how to use it: <http://webaim.org/articles/voiceover/> I found it very enlightening as a sighted person to try navigating some websites via a screen reader. Experiencing the frustration first-hand of dealing with a badly coded website really helped drive home the importance of doing this correctly.
Here's a site I use for stuff like this sometimes. It's not aimed at accessibility, per se, but it's a good way to see how your site renders on various platforms. <http://browsershots.org/> If you need further assistance with font-sizing, check this out: <https://medium.com/@madhum86/css-font-sizing-pixels-vs-em-vs-rem-vs-percent-vs-viewport-units-b1485716afe7>
205,071
Desperately trying to find a receptacle to match these 10-years-old cords which come from a Beckman Coulter FC500 cytometer. Anyone know the name? The receptacle previously used is labelled "AMP" with a squiggly M. Requested infos: Diameter of connector barrel at the end is 4.2mm/.165". Length of gold cap portion is 15.5mm/.610" The connector seems to be push insertion, with a cheap friction-lock style retention clip on the receptacle (easily bent). I've included the bag labelled "FOA" but it's electrical, not fiber optic! Could just be a reused bag... [![enter image description here](https://i.stack.imgur.com/6PzDT.jpg)](https://i.stack.imgur.com/6PzDT.jpg) [![enter image description here](https://i.stack.imgur.com/opFFZ.jpg)](https://i.stack.imgur.com/opFFZ.jpg) [![enter image description here](https://i.stack.imgur.com/2ZxZc.jpg)](https://i.stack.imgur.com/2ZxZc.jpg) [![enter image description here](https://i.stack.imgur.com/xIXnd.jpg)](https://i.stack.imgur.com/xIXnd.jpg) [![enter image description here](https://i.stack.imgur.com/Gcf7L.jpg)](https://i.stack.imgur.com/Gcf7L.jpg)
2015/12/08
[ "https://electronics.stackexchange.com/questions/205071", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/7038/" ]
More precisely, they are called "Coaxicon" by TE; here's a brochure: <http://www.farnell.com/datasheets/595245.pdf>. Beware that they come in several sizes. It looks like yours are the "miniature" version (there's also "subminiature" and "size 8"). It seems you want the PCB-mount socket (there's also a cable-mount version): [![enter image description here](https://i.stack.imgur.com/RhqoP.png)](https://i.stack.imgur.com/RhqoP.png)
They are TE connectors, available at Digi-Key: <http://www.digikey.com/product-search/en?mpart=201145-4&vendor=17>. They are typically used with backplane connectors, it seems. [![enter image description here](https://i.stack.imgur.com/87E9K.jpg)](https://i.stack.imgur.com/87E9K.jpg)
47,433
Recently I've wanted to write and read (mostly write) parquet files with Golang. Are there any good tools and libraries? Thanks
2017/12/06
[ "https://softwarerecs.stackexchange.com/questions/47433", "https://softwarerecs.stackexchange.com", "https://softwarerecs.stackexchange.com/users/35724/" ]
This project can be used for reading/writing parquet files: <https://github.com/xitongsys/parquet-go>
There's also <https://github.com/xitongsys/parquet-go>, which to me seems to have a slightly simpler API
1,238
Is there any risk of wort infection by not sanitizing hops before dry hopping?
2010/01/17
[ "https://homebrew.stackexchange.com/questions/1238", "https://homebrew.stackexchange.com", "https://homebrew.stackexchange.com/users/445/" ]
There is an infection risk any time you open up your fermenter and especially when you throw stuff into it. If you dry hop at the right time you reduce that risk. * The alcohol built up protects against infection * The hops already in the beer act as a preservative * The pH is unfriendly to new growth * Most of the easy to eat sugars are already consumed For these reasons, it is important to dry hop late in the fermentation. You need to allow alcohol and carbon dioxide to build up in solution to fight off invaders. Listen to the [Dec 8, 2008 episode](http://thebrewingnetwork.com/shows/Brew-Strong/Brew-Strong-12-08-08-Dry-Hopping) of Brew Strong for techniques and technicalities. The same rules apply to many other fermentation additions. Other herbs and chips can be thrown right in. The more surface area the addition has - the more nooks & crannies - the larger risk of infection because there are more places for baddies to hide. [I recommend](https://homebrew.stackexchange.com/questions/940/sanitizing-oak-chips/946#946) soaking wood chips in alcohol of some sort, for example.
I've put hops for dry-hopping into a sterilized mesh bag (note\* the bag was boiled in a pot of water and cooled with lid on to RT before adding the hops). I then squirt the outside of the bag with a squirt-bottle filled with grain alcohol. This kept things sterile and allowed for hop infusion without messy hop bits ending up in the final beer.
14,765,806
I have added a webform to my Drupal 7 website. Now I want to put a webform on another page (two webforms on different pages). How can I configure the webform for this?
2013/02/08
[ "https://Stackoverflow.com/questions/14765806", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1716110/" ]
Have you used the webform as a node, or as a block? If you have used the webform as a block, go to admin/block and configure it there. From that page you can configure where you want the block to display the webform.
Create another webform just as you created the previous one. Log in to your website with admin privileges and click on **Add Content > Webform** in the admin menu. Create a new webform here. After it is saved, you can add the needed form elements. This new webform is a node and will be listed in the content list page. If you want to render this webform as a block, go to the particular **webform > Edit > Form settings** and, in the Advanced settings tab, check the "**Available as block**" checkbox. Hope this helps..
28,632,163
I have some simple Matlab code. It loads a file and draws a graph using data in this file. I want the graph to be drawn when I press an HTML button. I have read some recommendations on the web/blogs. Do I have to use a COM server? **EDIT** My app is not a web app yet. I run it on a local PC; it is just an .htm file. I use Google Chrome for now.
2015/02/20
[ "https://Stackoverflow.com/questions/28632163", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4319615/" ]
The answer to your question is no, it's not necessary: there are workarounds to avoid using a COM server (although that approach is well explained in the official [documentation](http://www.mathworks.com/help/matlab/matlab_external/call-matlab-functions-from-a-web-application.html)). It depends on whether your webapp is a Java application, PHP, etc. A simple approach using PHP would be to use [passthru](http://php.net/manual/en/function.passthru.php) to run your Matlab app, and later you could link to the generated file. Related questions: * [Can I run MATLAB code on a web site?](https://stackoverflow.com/questions/1451503/can-i-run-matlab-code-on-a-web-site) * [How to run MATLAB code from Java?](https://stackoverflow.com/questions/2130539/how-to-run-matlab-codes-from-java-end)
One way to do it would be to generate C++ code out of the Matlab code using [Matlab coder](https://www.mathworks.com/products/matlab-coder.html); then there are a few ways to run C/C++ code in the browser, see the links below: [Running C in A Browser](https://stackoverflow.com/questions/25713194/running-c-in-a-browser) and [Native client](https://developer.chrome.com/native-client) It is worthwhile to mention that this might not be the best idea, and you might face many fundamental issues, since you are practically converting Matlab code twice into different programming languages. Currently there are many issues with Google Native Client, including the fact that you cannot run it on every browser, and other services are essentially better than it. Finally, if you have simple code I would recommend you rewrite it in JS instead of going through all that trouble.
98,919
Let's say you are a visa national flying into a UK airport with a valid entry clearance. Between exiting the airplane and passing through passport control you have a medical emergency that requires you to be urgently transported to a hospital. What would then happen with your immigration status? Possible options I see: 1. The immigration officers will search your luggage, find your passport and stamp it - could be tricky if you have your passport on you in the ambulance. 2. They ask you to report back to the airport once you're healthy for processing. 3. They send a policeman to watch your hospital ward 24/7 until you are able to go back to the airport. 4. They give you some sort of emergency entry clearance and you don't have to deal with it in any way.
2017/07/27
[ "https://travel.stackexchange.com/questions/98919", "https://travel.stackexchange.com", "https://travel.stackexchange.com/users/9009/" ]
I worked in the A&E at Ashford hospital in Middlesex about 20 years ago. Ashford is a couple of miles from Heathrow, and all people requiring hospital assessment from Terminal 4 were brought to us. I can recall only a couple of times where people were brought from arriving flights not having gone through immigration, but on both occasions they were accompanied by a police officer. They were admitted to the hospital, so I don't know what happened after they left the Emergency department, except the police officer went off to the ward with them.
This is about the US rather than the UK, but [here is a relevant article](http://med.stanford.edu/news/all-news/2013/07/this-is-not-a-test-in-caring-for-airplane-crash-victims-training-and-teamwork-prevailed.html) about the treatment at Stanford Hospital of passengers traveling on [Asiana Airlines flight 214](https://en.wikipedia.org/wiki/Asiana_Airlines_Flight_214), which crashed just before the runway at the San Francisco airport. From the article: > > And when they were healthy enough to leave the hospital, they couldn't > simply be discharged because they had not yet cleared U.S. Immigration > and Customs Enforcement. > > > "They came straight from the runway," Weiss said. "They weren't > officially in the United States yet." Adding to the complication was > the fact that seven were minors unaccompanied by parents. > > > Sitting in open patient rooms and lounges, social workers, > translators, Red Cross and customs officers, as well as a > representative from Asiana Airlines, worked past midnight clearing > patients for release. > > >
121,077
I have searched on Google but have found no result and have been unable to install it yet. I want to install yum on Fedora Core 6 on my server in order to install ffmpeg and ffmpeg-php. How can I install it without errors? I have an SSH connection, so I have to use the command prompt.
2010/03/10
[ "https://serverfault.com/questions/121077", "https://serverfault.com", "https://serverfault.com/users/-1/" ]
You do not absolutely need yum to get or install packages. You might be able to find the needed packages (even yum, if you wish) on the archived mirror servers. You can find these under <http://fedoraproject.org>. Have you thought of installing the latest Fedora release?
You say "without errors", so I take it you have tried a few things already. What have you tried, and where did it go wrong? Have you tried this guide? <http://www.mjmwired.net/resources/mjm-fedora-fc6.html#yum> It seems to be what you're looking for, but it's quite old (published 11 October 2006, updated 31 May 2007).
197,436
I was wondering if I can save my village on my iPad mini 2 just like on PC. Does it have the same features as on PC?
2014/12/22
[ "https://gaming.stackexchange.com/questions/197436", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/96246/" ]
***There are numerous differences between Minecraft PC and PE.*** If you would like a comparison, [this site does a great job](http://minemum.com/which-version) of describing the basic changes, [while this site](http://www.supercheats.com/minecraft-pocket-edition/walkthrough/minecraft-pe-version-differences) goes into more depth on the changes, including the console version. To answer your first question: yes, you **can** save in Minecraft PE --- Sources/Additional Info: ======================== [Which Minecraft Version Should We Get?](http://minemum.com/which-version) [Minecraft PE Version Differences](http://www.supercheats.com/minecraft-pocket-edition/walkthrough/minecraft-pe-version-differences)
Yes — if you are playing the paid version on the iPad Mini 2, it should automatically save for you. Personally, if you have a computer and you like Minecraft, just buy it on your computer.
197,436
I was wondering if I can save my village on my iPad mini 2 just like on PC. Does it have the same features as on PC?
2014/12/22
[ "https://gaming.stackexchange.com/questions/197436", "https://gaming.stackexchange.com", "https://gaming.stackexchange.com/users/96246/" ]
***There are numerous differences between Minecraft PC and PE.*** If you would like a comparison, [this site does a great job](http://minemum.com/which-version) of describing the basic changes, [while this site](http://www.supercheats.com/minecraft-pocket-edition/walkthrough/minecraft-pe-version-differences) goes into more depth on the changes, including the console version. To answer your first question: yes, you **can** save in Minecraft PE --- Sources/Additional Info: ======================== [Which Minecraft Version Should We Get?](http://minemum.com/which-version) [Minecraft PE Version Differences](http://www.supercheats.com/minecraft-pocket-edition/walkthrough/minecraft-pe-version-differences)
Yes, just click "Quit to Title"; nothing bad will happen.