qid int64 1 74.7M | question stringlengths 12 33.8k | date stringlengths 10 10 | metadata list | response_j stringlengths 0 115k | response_k stringlengths 2 98.3k |
|---|---|---|---|---|---|
855 | What is a portfolio, and what should it consist of? Please include details such as the format, use of, number of items, types of items, etc. | 2011/02/10 | [
"https://graphicdesign.stackexchange.com/questions/855",
"https://graphicdesign.stackexchange.com",
"https://graphicdesign.stackexchange.com/users/69/"
] | A portfolio should sum up the work you have done and the impression you are trying to give. Generally speaking, if you are a web designer, a portfolio of print work would be less relevant than your web design work, so you would promote your web design work over your print work.
Think of it like a colourful CV/resume. You would tailor it to the job you are applying for. If you are less experienced, you would show the aspects of your portfolio that reflect relevant knowledge applicable to the field you are aiming at. With more experience in a specific field, you can then select the best work of that field alone. | A portfolio is a place where one includes their best work to show it off. Portfolio pages help in the selection criteria for business services. [Here](http://www.slideshare.net/webdesigntips/what-is-portfolio-website-design) is a slide that covers almost everything about what a portfolio is and what it requires, including its importance. |
68,788 | One standard approach to showing compactness of first-order logic is to show completeness, of which compactness is an easy corollary. I am told this approach is deprecated nowadays, as Compactness is seen as the "deeper" result.
Could someone explain this intuition? In particular, are there logical systems which are compact but not complete?
EDIT: Andreas brings up an excellent point, which is that the Completeness theorem is really a combination of two very important but almost unrelated results. First, the Compactness theorem. Second, the recursive enumerability of logical validities. Note that neither of these results depends on the details of the syntactic or axiomatic system.
What is the connection between these two aspects of completeness? Are there logical systems that have one of these properties but not the other? | 2011/06/25 | [
"https://mathoverflow.net/questions/68788",
"https://mathoverflow.net",
"https://mathoverflow.net/users/9896/"
] | The point is that we care about the models, rather than about the proofs. The compactness theorem---the claim that a theory is satisfiable iff every finite subset of it is satisfiable---is fundamentally connected to the models, and the possibility of truth in these models. To use it, you need to understand your theory, the models of your theory and the models of finite pieces of your theory. And this is what we want to be thinking about and what we know about. In particular, when working with models, we can use all the mathematical tools and constructions at our disposal, with no need to remain inside any first-order language or formal system (well, perhaps we have set theory as our background system). We are free to reason about the models via reducts and ultrapowers and limits of systems of morphisms and so on, using any mathematical method at all.
The completeness theorem, in contrast, is fundamentally connected with the details of a formal deduction system. And so when using it, one is thinking about whether certain tautologies might be provable or not, or whether a certain formal consequence is allowed in the system or not.
But when we are studying a certain first-order class of groups or rings or whatever, such details about the proof system might seem to be an irrelevant distraction. | I was about to say that the answers given so far are all wrong and misleading, but thankfully I recalled that I am not a mathematician :-)
There are mainly two approaches to the concept of "logic".
1. The classical (or mathematical) approach to logic. Roughly speaking, a logic consists of two classes: a set of formulae and a class of models, together with a satisfiability relation saying what formulae are true in what models. Then, we may develop a proof system (or various proof systems) for the logic, which helps us --- in a systematic and coherent way --- derive satisfiability of formulae. Desirable properties of such proof systems are soundness (what we have derived is true) and completeness (what is true, we can derive). There is also compactness (if something follows from a theory then it follows from a finite subset of the theory), which refers to the logic itself (here: the satisfiability relation). This is how mathematicians are taught logic.
2. The modern (or computer-science) approach to logic. Logic is a kind of formal system (deductive system). To help prove facts about such a system, we may introduce the concept of "models" (or various concepts of models) for the logic. Desirable properties of such classes of models are that the deductive system over them is sound and complete (that is --- for a given system --- we develop the appropriate concept of models such that the proof system is sound and complete; if we add/remove some axioms/rules of the system then we have to restrict/extend our class of models; this is most easily seen in temporal logics --- for example, LTL is sound and complete in *linear* models). There is also compactness (if something follows from a theory then it follows from a finite subset of the theory), which refers to the logic itself (here: the proof system; if a logic allows only finitary proofs, then it is obviously compact). Simply put, in this approach the system is fundamental. This is how computer scientists are taught logic.
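In symbols --- writing $\vdash$ for derivability in the proof system and $\models$ for the satisfiability (semantic consequence) relation --- the three properties above can be summarized as:

```latex
% Soundness: whatever we derive is true
T \vdash \varphi \;\Longrightarrow\; T \models \varphi
% Completeness: whatever is true, we can derive
T \models \varphi \;\Longrightarrow\; T \vdash \varphi
% Compactness (semantic form): a consequence of T is a consequence
% of some finite subtheory T_0
T \models \varphi \;\Longrightarrow\; \exists\, T_0 \subseteq T,\ T_0 \text{ finite},\ T_0 \models \varphi
```

(The "modern" form of compactness replaces $\models$ by $\vdash$ here, and it holds automatically whenever proofs are finite objects.)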
Of course, in the presence of soundness and completeness, classical compactness and modern compactness coincide.
So, moving back to your question --- I do agree with other answers saying that completeness and compactness are just very different concepts, so neither is "deeper". However, I do not think that the classification of what belongs to models and what belongs to proofs is that obvious --- it is all about how you think of logic. |
68,788 | One standard approach to showing compactness of first-order logic is to show completeness, of which compactness is an easy corollary. I am told this approach is deprecated nowadays, as Compactness is seen as the "deeper" result.
Could someone explain this intuition? In particular, are there logical systems which are compact but not complete?
EDIT: Andreas brings up an excellent point, which is that the Completeness theorem is really a combination of two very important but almost unrelated results. First, the Compactness theorem. Second, the recursive enumerability of logical validities. Note that neither of these results depends on the details of the syntactic or axiomatic system.
What is the connection between these two aspects of completeness? Are there logical systems that have one of these properties but not the other? | 2011/06/25 | [
"https://mathoverflow.net/questions/68788",
"https://mathoverflow.net",
"https://mathoverflow.net/users/9896/"
] | Really, compactness is seen as the deeper result? I have to say that I am also more interested in the models than in the formulas and deduction systems, and hence like
to teach students proofs of the compactness theorem that do not use the completeness
theorem. Typically I prove compactness using ultraproducts.
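For reference, the ultraproduct argument alluded to here can be sketched in a few lines (this is the standard proof via Łoś's theorem, not anything specific to this answer):

```latex
% Let T be finitely satisfiable, and let I be the set of finite subsets of T.
% For each \Delta \in I choose a model M_\Delta \models \Delta, and for each
% \varphi \in T let A_\varphi = \{\Delta \in I : \varphi \in \Delta\}.
% The sets A_\varphi have the finite intersection property, so there is an
% ultrafilter U on I containing every A_\varphi. By Łoś's theorem,
\{\Delta \in I : M_\Delta \models \varphi\} \supseteq A_\varphi \in U
\quad\Longrightarrow\quad
\prod_{\Delta \in I} M_\Delta \big/ U \;\models\; \varphi
% for every \varphi \in T, so the ultraproduct is a model of all of T.
```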
On the other hand, I still believe that the completeness theorem for first order logic
is the most important theorem of mathematical logic.
The theorem tells you that in principle there are computer-checkable proofs for all "true"
theorems. I know that many mathematicians are not really concerned with having
a solid foundation for the concept of a "proof" (you know it when you see it).
But having this formal concept of proof in the background helps tremendously when
you want to fight off people who present their $n$-th proof of the inconsistency
of PA or ZFC, or that there are no infinite sets or that CH holds.
Also, the completeness theorem does explain why we can do mathematics the way we do, even though nobody really writes formal proofs of anything, ever, unless the correctness of the proofs is seriously challenged (see the FLYSPECK project at <http://code.google.com/p/flyspeck/>).
Also, I think that the proof of the completeness theorem is deeper than that of the compactness theorem. In some sense the proof of the completeness theorem (I am thinking of the proof where one builds a canonical model of a maximally consistent Henkin theory) is more straightforward than, for example, the ultraproduct proof of the compactness theorem,
but it is more complicated in the details and certainly less accessible to mainstream mathematics than for instance the ultraproduct proof of the compactness theorem. | Here is an interesting quote from Bruno Poizat's "A Course in Model Theory":
>
> The compactness theorem, in the forms of Theorems 4.5 and 4.6, is due to Gödel; in fact, as explained in the beginning of Section 4.3 [Henkin's Method], the theorem was for Gödel a simple corollary (we could even say an unexpected corollary, a rather strange remark!) of his "completeness theorem" of logic, in which he showed that a finite system of rules of inference is sufficient to express the notion of consequence (see Chapter 7). It could also have been taken from [Herbrand 1928] or [Gentzen 1934], in which results of the same sort were proven.
>
>
> This unfortunate compactness theorem was brought in by the back door, and we might say that its original modesty still does it wrong in logic textbooks. In my opinion it is a much more essential and primordial [theorem] (and thus also less sophisticated) than Gödel's completeness theorem, which states that we can formalize deduction in a certain arithmetic way; it is an error in method to deduce it from the latter.
>
>
> If we do it this way, it is by a very blind fidelity to the historic conditions that witnessed its birth. The weight of this tradition is apparent even in a work like [Chang-Keisler 1973], which was considered a bible of model theory in the 1970s; it begins with syntactic developments that have nothing to do with anything in the succeeding chapters. This approach---deducing Compactness from the possibility of axiomatizing the notion of deduction---once applied to the propositional calculus gives the strangest proof on record of the compactness of $2^\omega$!
>
>
> It is undoubtedly more "logical," but it is inconvenient, to require the student to absorb a system of formal deduction, ultimately quite arbitrary, which can be justified only much later when we can show that it indeed represents the notion of semantic consequence. We should not lose sight of the fact that the formalisms have no raison d'être except insofar as they are adequate for representing notions of substance.
>
>
>
See also this [old answer of mine](https://mathoverflow.net/questions/9309/in-model-theory-does-compactness-easily-imply-completeness/11014#11014) where the same quote appears. |
68,788 | One standard approach to showing compactness of first-order logic is to show completeness, of which compactness is an easy corollary. I am told this approach is deprecated nowadays, as Compactness is seen as the "deeper" result.
Could someone explain this intuition? In particular, are there logical systems which are compact but not complete?
EDIT: Andreas brings up an excellent point, which is that the Completeness theorem is really a combination of two very important but almost unrelated results. First, the Compactness theorem. Second, the recursive enumerability of logical validities. Note that neither of these results depends on the details of the syntactic or axiomatic system.
What is the connection between these two aspects of completeness? Are there logical systems that have one of these properties but not the other? | 2011/06/25 | [
"https://mathoverflow.net/questions/68788",
"https://mathoverflow.net",
"https://mathoverflow.net/users/9896/"
] | Really, compactness is seen as the deeper result? I have to say that I am also more interested in the models than in the formulas and deduction systems, and hence like
to teach students proofs of the compactness theorem that do not use the completeness
theorem. Typically I prove compactness using ultraproducts.
On the other hand, I still believe that the completeness theorem for first order logic
is the most important theorem of mathematical logic.
The theorem tells you that in principle there are computer-checkable proofs for all "true"
theorems. I know that many mathematicians are not really concerned with having
a solid foundation for the concept of a "proof" (you know it when you see it).
But having this formal concept of proof in the background helps tremendously when
you want to fight off people who present their $n$-th proof of the inconsistency
of PA or ZFC, or that there are no infinite sets or that CH holds.
Also, the completeness theorem does explain why we can do mathematics the way we do, even though nobody really writes formal proofs of anything, ever, unless the correctness of the proofs is seriously challenged (see the FLYSPECK project at <http://code.google.com/p/flyspeck/>).
Also, I think that the proof of the completeness theorem is deeper than that of the compactness theorem. In some sense the proof of the completeness theorem (I am thinking of the proof where one builds a canonical model of a maximally consistent Henkin theory) is more straightforward than, for example, the ultraproduct proof of the compactness theorem,
but it is more complicated in the details and certainly less accessible to mainstream mathematics than for instance the ultraproduct proof of the compactness theorem. | I was about to say that the answers given so far are all wrong and misleading, but thankfully I recalled that I am not a mathematician :-)
There are mainly two approaches to the concept of "logic".
1. The classical (or mathematical) approach to logic. Roughly speaking, a logic consists of two classes: a set of formulae and a class of models, together with a satisfiability relation saying what formulae are true in what models. Then, we may develop a proof system (or various proof systems) for the logic, which helps us --- in a systematic and coherent way --- derive satisfiability of formulae. Desirable properties of such proof systems are soundness (what we have derived is true) and completeness (what is true, we can derive). There is also compactness (if something follows from a theory then it follows from a finite subset of the theory), which refers to the logic itself (here: the satisfiability relation). This is how mathematicians are taught logic.
2. The modern (or computer-science) approach to logic. Logic is a kind of formal system (deductive system). To help prove facts about such a system, we may introduce the concept of "models" (or various concepts of models) for the logic. Desirable properties of such classes of models are that the deductive system over them is sound and complete (that is --- for a given system --- we develop the appropriate concept of models such that the proof system is sound and complete; if we add/remove some axioms/rules of the system then we have to restrict/extend our class of models; this is most easily seen in temporal logics --- for example, LTL is sound and complete in *linear* models). There is also compactness (if something follows from a theory then it follows from a finite subset of the theory), which refers to the logic itself (here: the proof system; if a logic allows only finitary proofs, then it is obviously compact). Simply put, in this approach the system is fundamental. This is how computer scientists are taught logic.
Of course, in the presence of soundness and completeness, classical compactness and modern compactness coincide.
So, moving back to your question --- I do agree with other answers saying that completeness and compactness are just very different concepts, so neither is "deeper". However, I do not think that the classification of what belongs to models and what belongs to proofs is that obvious --- it is all about how you think of logic. |
68,788 | One standard approach to showing compactness of first-order logic is to show completeness, of which compactness is an easy corollary. I am told this approach is deprecated nowadays, as Compactness is seen as the "deeper" result.
Could someone explain this intuition? In particular, are there logical systems which are compact but not complete?
EDIT: Andreas brings up an excellent point, which is that the Completeness theorem is really a combination of two very important but almost unrelated results. First, the Compactness theorem. Second, the recursive enumerability of logical validities. Note that neither of these results depends on the details of the syntactic or axiomatic system.
What is the connection between these two aspects of completeness? Are there logical systems that have one of these properties but not the other? | 2011/06/25 | [
"https://mathoverflow.net/questions/68788",
"https://mathoverflow.net",
"https://mathoverflow.net/users/9896/"
] | The point is that we care about the models, rather than about the proofs. The compactness theorem---the claim that a theory is satisfiable iff every finite subset of it is satisfiable---is fundamentally connected to the models, and the possibility of truth in these models. To use it, you need to understand your theory, the models of your theory and the models of finite pieces of your theory. And this is what we want to be thinking about and what we know about. In particular, when working with models, we can use all the mathematical tools and constructions at our disposal, with no need to remain inside any first-order language or formal system (well, perhaps we have set theory as our background system). We are free to reason about the models via reducts and ultrapowers and limits of systems of morphisms and so on, using any mathematical method at all.
The completeness theorem, in contrast, is fundamentally connected with the details of a formal deduction system. And so when using it, one is thinking about whether certain tautologies might be provable or not, or whether a certain formal consequence is allowed in the system or not.
But when we are studying a certain first-order class of groups or rings or whatever, such details about the proof system might seem to be an irrelevant distraction. | I think that everything important that can be said about the
differences between Compactness and Completeness Theorems and their
proofs from the technical point of view has been said. (I like best the detailed and elucidating answer given by Joel David Hamkins at [In model theory, does compactness easily imply completeness?](https://mathoverflow.net/questions/9309/in-model-theory-does-compactness-easily-imply-completeness).) On
the other hand, one of the most important differences between these
theorems is a non-technical one, and indeed some previous answers
contain hints to this effect. Indeed, the Completeness Theorem has an obvious metamathematical (or even philosophical) flavour, as opposed to the Compactness Theorem. Actually, it is about the relation between the
two most important mathematical notions, i.e., those of proof and
truth.
And here I would like to argue with those (Carl Mummert and Stefan
Geschke) who claim that sometimes the Completeness Theorem is used in
everyday mathematics. Actually, as I see it, it is *about* everyday
mathematics, but it *does not belong* to everyday mathematics.
Actually, contrary to what Carl Mummert says, I doubt that, in
everyday mathematics, anybody at any time uses the completeness theorem in
either an explicit or implicit way. Obviously, one can successfully
work in any field of mathematics (that is not intimately connected to logic) without any knowledge of mathematical logic. (Clearly she or
he has to have a good sense of logic, but this is a completely
different matter.) In other words (unlike Carl Mummert), I cannot
imagine any "difficulties in an alternate world where mathematicians have to distinguish between 'true in all groups' and 'provable from the axioms of a group'". The reason is simple. I do not think that
anyone proves "that a group identity is derivable from the axioms
of a group by working semantically and showing that the identity holds
in every group." Though I am not a group theorist, I think that no
group theorist is interested in the statements that are provable from
the axioms of group theory *alone*. (On the other hand, of course, the
most important elementary statements needed to begin group theory at
all are usually derived directly from the axioms.) Most mathematicians
work in intuitive set theory and freely make use of the different
possibilities that this rich theory offers (independently of whether she or he is aware of the existence of ZFC). (Actually, the
notion of a group itself is defined as a model, that is, generally in
terms of sets rather than a first order theory. And, of course, this
kind of definition is very practical, since otherwise every course on
groups would have to be preceded by an introduction to logic.) I think that
the pure first order theory of groups has only theoretical or didactic
significance for being a nice widely known example of a first order
theory.
Likewise, I do not agree with Stefan Geschke that "the completeness
theorem does explain why we can do mathematics the way we do." Just
the other way around. Clearly, metamathematics is the study of real
mathematics by exact mathematical means. Therefore, its notions are
intended to mimic those of everyday informal mathematics as faithfully
as this is possible. So a metamathematical result cannot explain or
justify anything. What it can do is to describe in exact terms and
clarify the way mathematics is normally done (and, of course, to draw
consequences *about* everyday mathematics from the results of this
description). But its results do not affect the way mathematics is
normally done. Obviously, we would do everyday mathematics in exactly
the same way if the Completeness Theorem did not hold. Just as those
mathematicians do who never have heard of this theorem. And indeed, we
do arithmetic in exactly the same way as mathematicians before Gödel
(who might well have thought that true arithmetic was recursively axiomatizable) did. |
68,788 | One standard approach to showing compactness of first-order logic is to show completeness, of which compactness is an easy corollary. I am told this approach is deprecated nowadays, as Compactness is seen as the "deeper" result.
Could someone explain this intuition? In particular, are there logical systems which are compact but not complete?
EDIT: Andreas brings up an excellent point, which is that the Completeness theorem is really a combination of two very important but almost unrelated results. First, the Compactness theorem. Second, the recursive enumerability of logical validities. Note that neither of these results depends on the details of the syntactic or axiomatic system.
What is the connection between these two aspects of completeness? Are there logical systems that have one of these properties but not the other? | 2011/06/25 | [
"https://mathoverflow.net/questions/68788",
"https://mathoverflow.net",
"https://mathoverflow.net/users/9896/"
] | This is a side comment. There are several answers that explain why compactness is so important in model theory, and I agree with what they say. But I want to point out that the "in model theory" part is key here. In the overall study of logic, not restricted to model theory, both compactness and completeness are important, and each of those has areas of logic that favor it. Model theory, being a semantic field, naturally identifies more with semantic notions.
In mathematics outside logic, I think there is more implicit use of completeness than of compactness. Every time I prove that an identity is derivable from the axioms of a group by working semantically and showing that the identity holds in every group, I am implicitly using the completeness theorem. It is easy to miss this or take it for granted, because the completeness theorem is so well known.
There are systems that do not have complete deduction systems; one example is second-order logic with full second-order semantics. In this system it is perfectly possible for something to be true in every model without being provable in our usual proof system. Therefore, when we study this system in logic, we have to keep a close watch on whether we have shown something is provable, or just shown that it is logically valid.
Imagine the difficulties in an alternate world where mathematicians have to distinguish between "true in all groups" and "provable from the axioms of a group". The completeness theorem is what lets us ignore this. By comparison, it's more difficult to see reflections of the compactness theorem in everyday mathematics. | Compactness is a "semantic" theorem, whose statement involves no "syntactic" concepts such as proofs or provability. So it seems one should not need the latter concepts to prove compactness (and of course, one does not). |
68,788 | One standard approach to showing compactness of first-order logic is to show completeness, of which compactness is an easy corollary. I am told this approach is deprecated nowadays, as Compactness is seen as the "deeper" result.
Could someone explain this intuition? In particular, are there logical systems which are compact but not complete?
EDIT: Andreas brings up an excellent point, which is that the Completeness theorem is really a combination of two very important but almost unrelated results. First, the Compactness theorem. Second, the recursive enumerability of logical validities. Note that neither of these results depends on the details of the syntactic or axiomatic system.
What is the connection between these two aspects of completeness? Are there logical systems that have one of these properties but not the other? | 2011/06/25 | [
"https://mathoverflow.net/questions/68788",
"https://mathoverflow.net",
"https://mathoverflow.net/users/9896/"
] | The point is that we care about the models, rather than about the proofs. The compactness theorem---the claim that a theory is satisfiable iff every finite subset of it is satisfiable---is fundamentally connected to the models, and the possibility of truth in these models. To use it, you need to understand your theory, the models of your theory and the models of finite pieces of your theory. And this is what we want to be thinking about and what we know about. In particular, when working with models, we can use all the mathematical tools and constructions at our disposal, with no need to remain inside any first-order language or formal system (well, perhaps we have set theory as our background system). We are free to reason about the models via reducts and ultrapowers and limits of systems of morphisms and so on, using any mathematical method at all.
The completeness theorem, in contrast, is fundamentally connected with the details of a formal deduction system. And so when using it, one is thinking about whether certain tautologies might be provable or not, or whether a certain formal consequence is allowed in the system or not.
But when we are studying a certain first-order class of groups or rings or whatever, such details about the proof system might seem to be an irrelevant distraction. | Here is an interesting quote from Bruno Poizat's "A Course in Model Theory":
>
> The compactness theorem, in the forms of Theorems 4.5 and 4.6, is due to Gödel; in fact, as explained in the beginning of Section 4.3 [Henkin's Method], the theorem was for Gödel a simple corollary (we could even say an unexpected corollary, a rather strange remark!) of his "completeness theorem" of logic, in which he showed that a finite system of rules of inference is sufficient to express the notion of consequence (see Chapter 7). It could also have been taken from [Herbrand 1928] or [Gentzen 1934], in which results of the same sort were proven.
>
>
> This unfortunate compactness theorem was brought in by the back door, and we might say that its original modesty still does it wrong in logic textbooks. In my opinion it is a much more essential and primordial [theorem] (and thus also less sophisticated) than Gödel's completeness theorem, which states that we can formalize deduction in a certain arithmetic way; it is an error in method to deduce it from the latter.
>
>
> If we do it this way, it is by a very blind fidelity to the historic conditions that witnessed its birth. The weight of this tradition is apparent even in a work like [Chang-Keisler 1973], which was considered a bible of model theory in the 1970s; it begins with syntactic developments that have nothing to do with anything in the succeeding chapters. This approach---deducing Compactness from the possibility of axiomatizing the notion of deduction---once applied to the propositional calculus gives the strangest proof on record of the compactness of $2^\omega$!
>
>
> It is undoubtedly more "logical," but it is inconvenient, to require the student to absorb a system of formal deduction, ultimately quite arbitrary, which can be justified only much later when we can show that it indeed represents the notion of semantic consequence. We should not lose sight of the fact that the formalisms have no raison d'être except insofar as they are adequate for representing notions of substance.
>
>
>
See also this [old answer of mine](https://mathoverflow.net/questions/9309/in-model-theory-does-compactness-easily-imply-completeness/11014#11014) where the same quote appears. |
4,870,911 | I have experience integrating the Facebook and Twitter APIs, but only for feed posting.
I want a full Facebook application which shows the friends list, their messages, and events.
And I want the same thing for Twitter and LinkedIn.
If anybody has an idea or code for this, please give me a link or post it.
Thanks | 2011/02/02 | [
"https://Stackoverflow.com/questions/4870911",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/591879/"
] | For the LinkedIn API, you can refer to this:
<http://developer.linkedin.com/thread/1169?tstart=30> | You can use UIWebView to load Twitter and Facebook. It will let you exploit the entire functionality offered by these two social networking sites. Though it's not impossible to build an application the way you want it to be, using a web view will save your time as well as effort. Since you are already aware of the FB and Twitter integration, I am sure you already know what all we can do from an iPhone application, viz. setting status, reading posts, etc. I am not quite sure about LinkedIn; just in case you come across any solution for this, let me know about it too. |
375,594 | Do measurements of time-scales for decoherence disprove some versions of Copenhagen or MWI?
Since these discussions of interpretations of quantum mechanics often shed more heat than light, I want to state some clear definitions.
**standard qm** = linearity; observables are self-adjoint operators; wavefunction evolves unitarily; complete sets of observables exist
**MWI-lite** = synonym for standard qm
**MWI-heavy** = standard qm plus various statements about worlds and branching
**CI** = standard qm plus an additional axiom describing a nonunitary collapse process associated with observation
Many people who have formulated or espoused MWI-heavy or CI seem to have made statements that branching or collapse would be an instantaneous process. (Everett and von Neumann seem to have subscribed to this.) In this case, MWI-heavy and CI would be vulnerable to falsification if it could be proved that the relevant process was not instantaneous.
Decoherence makes specific predictions about time scales. Are there experiments verifying predictions of the time-scale for decoherence that could be interpreted as falsifying MWI-heavy and CI (or at least some versions thereof)?
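(As one concrete example of such a prediction --- a standard textbook estimate, not something taken from the experiments discussed here --- Zurek's order-of-magnitude formula relates the decoherence time $\tau_D$ of a spatial superposition of separation $\Delta x$ to the relaxation time $\tau_R$:

```latex
\tau_D \;\sim\; \tau_R \left(\frac{\lambda_{\mathrm{th}}}{\Delta x}\right)^{2},
\qquad
\lambda_{\mathrm{th}} \;=\; \frac{\hbar}{\sqrt{2 m k_B T}}
```

For macroscopic masses and separations, $\tau_D$ is many orders of magnitude shorter than $\tau_R$, which is what makes direct time-scale measurements delicate.)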
I'm open to well-reasoned answers that cite recent work and argue, e.g., that MWI-heavy and MWI-lite are the same except for irrelevant verbal connotations, or that processes like branching and collapse are inherently unobservable and therefore statements about their instantaneous nature are not empirically testable. It seems possible to me that the instantaneousness is:
* not empirically testable even in principle.
* untestable for all practical purposes (FAPP).
* testable, but only with technologies that date to ca. 1980 or later.
An example somewhat along these lines is an experiment by Lee et al. ("Generation of room-temperature entanglement in diamond with broadband pulses", can be found by googling) in which they put two macroscopic diamond crystals in an entangled state and then detected the entanglement (including phase) in 0.5 ps, which was shorter than the 7 ps decoherence time. This has been interpreted by [Belli et al.](https://arxiv.org/abs/1601.07927) as ruling out part of the parameter space for objective collapse models. If the coherence times were made longer (e.g., through the use of lower temperatures), then an experiment of this type could rule out the parameters of what is apparently the most popular viable version of this type of theory, [GRW](https://en.wikipedia.org/wiki/Ghirardi%E2%80%93Rimini%E2%80%93Weber_theory). Although this question isn't about objective collapse models, this is the same sort of general thing I'm interested in: using decoherence time-scales to rule out interpretations of quantum mechanics. | 2017/12/21 | [
"https://physics.stackexchange.com/questions/375594",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
] | I am not aware of any experimental evidence, so this probably does not qualify as an answer. However I can offer a reference that addresses this question theoretically:
* Armen E. Allahverdyan, Roger Balian, Theo M. Nieuwenhuizen (2011) *Understanding quantum measurement from the solution of dynamical models*, <https://arxiv.org/abs/1107.2138>
and by the same group, but more recently:
* A.E. Allahverdyan, R. Balian, T.M. Nieuwenhuizen. (2017) *A sub-ensemble theory of ideal quantum measurement processes.* Annals of Physics, 376C, [Sciencedaily URL](https://www.sciencedaily.com/releases/2017/03/170315115118.htm), full article: <https://arxiv.org/abs/1303.7257>
Essentially they do what the OP describes in the question. They take a dynamical model of a macroscopic system and solve its unitary evolution under the Schrödinger equation. Then they look at whether some "measurement-like structure" emerges just from the many-body dynamics, without collapse.
There is one **main difference from decoherence**, where usually only a system and an environment are considered (e.g. the [Leggett-Caldeira model](http://www.scholarpedia.org/article/Caldeira-Leggett_model), also cf. [wiki article on quantum dissipation](https://en.wikipedia.org/wiki/Quantum_dissipation)). In the work mentioned above, a macroscopic system that mimics a **detector** is included. Like the environment this is also a macroscopic system, but unlike the environment it has some special properties that allow it to record information. In the first paper this is done by considering a ferro-magnet, whose spontaneous symmetry breaking allows it to have a macroscopic polarization, which is essentially a deterministic property after equilibration (simply because the flip probability is very low).
As far as I am aware this is far from a solution to the measurement problem; some open issues are mentioned in the articles themselves. At least it goes in the right direction, however: in particular, it starts addressing the question of **measurement timescales**, which can maybe also pave the way for experimental investigations thereof. | Do measurements of time-scales for decoherence disprove some versions of Copenhagen or MWI?
No.
From Decoherence on wikipedia (emphasis mine):
>
> Decoherence has been used to understand the collapse of the wavefunction in quantum mechanics. **Decoherence does not generate actual wave function collapse**. It only provides an explanation for the observation of wave function collapse, as the quantum nature of the system "leaks" into the environment. That is, components of the wavefunction are decoupled from a coherent system, and acquire phases from their immediate surroundings. **A total superposition of the global or universal wavefunction still exists (and remains coherent at the global level), but its ultimate fate remains an interpretational issue.** Specifically, decoherence does not attempt to explain the measurement problem. Rather, decoherence provides an explanation for the transition of the system to a mixture of states that seem to correspond to those states observers perceive.
>
>
>
As Wolpertinger said, to disprove Copenhagen or MWI you should challenge the postulate that the measurement act is instantaneous, by taking into account both detector and probe. I'm not an expert on this, so I cannot add much. I just wanted to point out that decoherence is not enough to solve the measurement problem.
Some further relevant quotes:
>
> The discontinuous "wave function collapse" postulated in the Copenhagen interpretation to enable the theory to be related to the results of laboratory measurements cannot be understood as an aspect of the normal dynamics of quantum mechanics via the decoherence process. Decoherence is an important part of some modern refinements of the Copenhagen interpretation. Decoherence shows how a macroscopic system interacting with a lot of microscopic systems (e.g. collisions with air molecules or photons) moves from being in a pure quantum state—which in general will be a coherent superposition (see Schrödinger's cat)—to being in an incoherent improper mixture of these states. [...]
> However, decoherence by itself may not give a complete solution of the measurement problem, since all components of the wave function still exist in a global superposition, which is explicitly acknowledged in the many-worlds interpretation. All decoherence explains, in this view, is why these coherences are no longer available for inspection by local observers. **To present a solution to the measurement problem in most interpretations of quantum mechanics, decoherence must be supplied with some nontrivial interpretational considerations** [...]
>
>
> |
19,940,141 | I'm studying algorithms to find connected components of a graph, but I still don't know why it is important to find connected components. In which applications do we use connected components of a graph?
Edit:
I want to know which graph analyses depend on the connected components of a graph. That is, if I find the connected components of a graph, which analyses become easier? For example, if I find the connected components, can I cluster the graph more easily? If yes, which graph analyses can I do better?
Thanks. | 2013/11/12 | [
"https://Stackoverflow.com/questions/19940141",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2812081/"
] | As you've already found out, any user can write to the event log with existing event sources.
You have to treat the event log as unsafe input, that could potentially have been manipulated and that you need to handle carefully.
Without more details on what you are trying to accomplish it's hard to give any more specific advice. | I may also add on to Anders Abel. Not only can it be written to, it can also be cleared without notice. This may have serious adverse affects to your program if not handled correctly.
There are ways to "secure" the event log using user permissions etc, however this doesn't stop applications or the system from changing the EventLog. |
17,641 | It is a frequent idiom for a group of characters to interact while eating Chinese food. Generally of the carry out variety, with chop sticks being plunged into the deep fold-up boxes. It seems to me that this is intended to bring a particular thematic element, but I can't quite put my finger on it. I see it way too often for it not to have a specific significance. | 2014/03/03 | [
"https://movies.stackexchange.com/questions/17641",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/3361/"
] | I do think you have something in thinking that Chinese food has a particular significance as take-out. There's an aura of a comfortable, but pressured, environment, where people are banding together to get stuff done. But, enough of my personal opinion and on to some quotes and particular examples. Much of this is cribbed from TV Tropes.
* **Jewish people and Chinese food:** Starting with the trope of Jewish people going out for Chinese on Christmas ([Peking Duck Christmas](http://tvtropes.org/pmwiki/pmwiki.php/Main/PekingDuckChristmas)), there is a broader association in all cases. For example, in *My Favorite Year*, the more urbane of two put-upon assistants bonds with the romantic interest by ordering Chinese food when they have to work through the night, saying "Catherine, Jews know two things: suffering, and where to find great Chinese food."
* **Chinese food and long hours/stuff to do:** This is particularly associated with *The West Wing* where the characters order Chinese food when they're facing a long creative challenge, like writing jokes for the Correspondent's Dinner
* **Chinese food and comfort:** *Orange is the New Black* uses the theme this way by centering the main character's romantic choices around a quote about Chinese food. After having a wild and tempestuous relationship with Alex, a drug smuggler, the main character Piper is told by her more settled best friend that eventually, she's going to want "someone who knows when to order Chinese food".
In summary, Chinese food is associated with familiar but stressed situations, particularly among more intellectual and/or Jewish characters; and can imply both "something needs to be done" and "something needs to get fixed emotionally". | I think sometimes people look for symbolism a little too much in cinema. In this case, I contend that there is **no** symbolism in Chinese food. It just happens to be an easily recognizable take-out food that can be recognized without branding. Also, hamburgers or pizza have been used but there are a lot of challenges with that. You have to control the environment, and when someone takes a bite you can't just replace it. But if you throw some rice or noodles in a box and someone takes a bite, you can refill it. |
17,641 | It is a frequent idiom for a group of characters to interact while eating Chinese food. Generally of the carry out variety, with chop sticks being plunged into the deep fold-up boxes. It seems to me that this is intended to bring a particular thematic element, but I can't quite put my finger on it. I see it way too often for it not to have a specific significance. | 2014/03/03 | [
"https://movies.stackexchange.com/questions/17641",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/3361/"
] | I do think you have something in thinking that Chinese food has a particular significance as take-out. There's an aura of a comfortable, but pressured, environment, where people are banding together to get stuff done. But, enough of my personal opinion and on to some quotes and particular examples. Much of this is cribbed from TV Tropes.
* **Jewish people and Chinese food:** Starting with the trope of Jewish people going out for Chinese on Christmas ([Peking Duck Christmas](http://tvtropes.org/pmwiki/pmwiki.php/Main/PekingDuckChristmas)), there is a broader association in all cases. For example, in *My Favorite Year*, the more urbane of two put-upon assistants bonds with the romantic interest by ordering Chinese food when they have to work through the night, saying "Catherine, Jews know two things: suffering, and where to find great Chinese food."
* **Chinese food and long hours/stuff to do:** This is particularly associated with *The West Wing* where the characters order Chinese food when they're facing a long creative challenge, like writing jokes for the Correspondent's Dinner
* **Chinese food and comfort:** *Orange is the New Black* uses the theme this way by centering the main character's romantic choices around a quote about Chinese food. After having a wild and tempestuous relationship with Alex, a drug smuggler, the main character Piper is told by her more settled best friend that eventually, she's going to want "someone who knows when to order Chinese food".
In summary, Chinese food is associated with familiar but stressed situations, particularly among more intellectual and/or Jewish characters; and can imply both "something needs to be done" and "something needs to get fixed emotionally". | I don't think Chinese food, in particular, is being used as a theme. I think it's delivery food in general. It evokes the sense that these people don't have time to go anywhere or do anything other than work on the project they're currently working on.
In America, there are really only two universal foods that are delivered: pizza and Chinese. That is changing, and it varies by region, but those are the two standards.
So, the question then would be, "Why choose Chinese over pizza?"
1. Sometimes they don't. Sometimes it's pizza.
2. Some people might consider Chinese a "higher class" of food than a greasy pizza.
3. (My personal opinion) When the characters have that A-HA! moment, it's much more effective visually to point a pair of chopsticks while saying, "Yes! That's exactly it!" than it is to point a floppy piece of pizza. |
17,641 | It is a frequent idiom for a group of characters to interact while eating Chinese food. Generally of the carry out variety, with chop sticks being plunged into the deep fold-up boxes. It seems to me that this is intended to bring a particular thematic element, but I can't quite put my finger on it. I see it way too often for it not to have a specific significance. | 2014/03/03 | [
"https://movies.stackexchange.com/questions/17641",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/3361/"
] | I do think you have something in thinking that Chinese food has a particular significance as take-out. There's an aura of a comfortable, but pressured, environment, where people are banding together to get stuff done. But, enough of my personal opinion and on to some quotes and particular examples. Much of this is cribbed from TV Tropes.
* **Jewish people and Chinese food:** Starting with the trope of Jewish people going out for Chinese on Christmas ([Peking Duck Christmas](http://tvtropes.org/pmwiki/pmwiki.php/Main/PekingDuckChristmas)), there is a broader association in all cases. For example, in *My Favorite Year*, the more urbane of two put-upon assistants bonds with the romantic interest by ordering Chinese food when they have to work through the night, saying "Catherine, Jews know two things: suffering, and where to find great Chinese food."
* **Chinese food and long hours/stuff to do:** This is particularly associated with *The West Wing* where the characters order Chinese food when they're facing a long creative challenge, like writing jokes for the Correspondent's Dinner
* **Chinese food and comfort:** *Orange is the New Black* uses the theme this way by centering the main character's romantic choices around a quote about Chinese food. After having a wild and tempestuous relationship with Alex, a drug smuggler, the main character Piper is told by her more settled best friend that eventually, she's going to want "someone who knows when to order Chinese food".
In summary, Chinese food is associated with familiar but stressed situations, particularly among more intellectual and/or Jewish characters; and can imply both "something needs to be done" and "something needs to get fixed emotionally". | I think that movies and TV shows use those Chinese takeout boxes just because it’s easy for production. They could be empty boxes and they wouldn't actually have to prepare any food for the scene or multiple scenes at that. |
17,641 | It is a frequent idiom for a group of characters to interact while eating Chinese food. Generally of the carry out variety, with chop sticks being plunged into the deep fold-up boxes. It seems to me that this is intended to bring a particular thematic element, but I can't quite put my finger on it. I see it way too often for it not to have a specific significance. | 2014/03/03 | [
"https://movies.stackexchange.com/questions/17641",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/3361/"
] | I think sometimes people look for symbolism a little too much in cinema. In this case, I contend that there is **no** symbolism in Chinese food. It just happens to be an easily recognizable take-out food that can be recognized without branding. Also, hamburgers or pizza have been used but there are a lot of challenges with that. You have to control the environment, and when someone takes a bite you can't just replace it. But if you throw some rice or noodles in a box and someone takes a bite, you can refill it. | I don't think Chinese food, in particular, is being used as a theme. I think it's delivery food in general. It evokes the sense that these people don't have time to go anywhere or do anything other than work on the project they're currently working on.
In America, there are really only two universal foods that are delivered: pizza and Chinese. That is changing, and it varies by region, but those are the two standards.
So, the question then would be, "Why choose Chinese over pizza?"
1. Sometimes they don't. Sometimes it's pizza.
2. Some people might consider Chinese a "higher class" of food than a greasy pizza.
3. (My personal opinion) When the characters have that A-HA! moment, it's much more effective visually to point a pair of chopsticks while saying, "Yes! That's exactly it!" than it is to point a floppy piece of pizza. |
17,641 | It is a frequent idiom for a group of characters to interact while eating Chinese food. Generally of the carry out variety, with chop sticks being plunged into the deep fold-up boxes. It seems to me that this is intended to bring a particular thematic element, but I can't quite put my finger on it. I see it way too often for it not to have a specific significance. | 2014/03/03 | [
"https://movies.stackexchange.com/questions/17641",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/3361/"
] | I think sometimes people look for symbolism a little too much in cinema. In this case, I contend that there is **no** symbolism in Chinese food. It just happens to be an easily recognizable take-out food that can be recognized without branding. Also, hamburgers or pizza have been used but there are a lot of challenges with that. You have to control the environment, and when someone takes a bite you can't just replace it. But if you throw some rice or noodles in a box and someone takes a bite, you can refill it. | I think that movies and TV shows use those Chinese takeout boxes just because it’s easy for production. They could be empty boxes and they wouldn't actually have to prepare any food for the scene or multiple scenes at that. |
17,641 | It is a frequent idiom for a group of characters to interact while eating Chinese food. Generally of the carry out variety, with chop sticks being plunged into the deep fold-up boxes. It seems to me that this is intended to bring a particular thematic element, but I can't quite put my finger on it. I see it way too often for it not to have a specific significance. | 2014/03/03 | [
"https://movies.stackexchange.com/questions/17641",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/3361/"
] | I don't think Chinese food, in particular, is being used as a theme. I think it's delivery food in general. It evokes the sense that these people don't have time to go anywhere or do anything other than work on the project they're currently working on.
In America, there are really only two universal foods that are delivered: pizza and Chinese. That is changing, and it varies by region, but those are the two standards.
So, the question then would be, "Why choose Chinese over pizza?"
1. Sometimes they don't. Sometimes it's pizza.
2. Some people might consider Chinese a "higher class" of food than a greasy pizza.
3. (My personal opinion) When the characters have that A-HA! moment, it's much more effective visually to point a pair of chopsticks while saying, "Yes! That's exactly it!" than it is to point a floppy piece of pizza. | I think that movies and TV shows use those Chinese takeout boxes just because it’s easy for production. They could be empty boxes and they wouldn't actually have to prepare any food for the scene or multiple scenes at that. |
19,733 | In this society a lot of people say "I'm a Buddhist" but they're just saying that. So my question is, how can a person become a real Buddhist? And how do we know he's a real Buddhist? | 2017/03/22 | [
"https://buddhism.stackexchange.com/questions/19733",
"https://buddhism.stackexchange.com",
"https://buddhism.stackexchange.com/users/8305/"
] | You become a real Buddhist when you walk the path of Dhamma. The Buddhist teaching is to be experienced here and now and to be verified here and now. There are many ways this has been put forward:
* Sila - Samadhi - Panna, where the latter is realisation at the experiential level or at the level of wisdom
* Pariyatti - Patipatti - Pativedha where the latter is the experience
* sutta-maya-panna - cinta-maya-panna - bhavana-maya panna where the latter is experiential wisdom
When you have the 1st vision of the Dhamma your faith never changes. This is because nobody can convince you otherwise of what you have already seen for yourself. | * Have unwavering faith in Buddha
* Have unwavering faith in Dhamma
* Have unwavering faith in Sangha
* Be relentless in eliminating Sakkaaya Ditti |
19,733 | In this society a lot of people say "I'm a Buddhist" but they're just saying that. So my question is, how can a person become a real Buddhist? And how do we know he's a real Buddhist? | 2017/03/22 | [
"https://buddhism.stackexchange.com/questions/19733",
"https://buddhism.stackexchange.com",
"https://buddhism.stackexchange.com/users/8305/"
] | You become a real Buddhist when you walk the path of Dhamma. The Buddhist teaching is to be experienced here and now and to be verified here and now. There are many ways this has been put forward:
* Sila - Samadhi - Panna, where the latter is realisation at the experiential level or at the level of wisdom
* Pariyatti - Patipatti - Pativedha where the latter is the experience
* sutta-maya-panna - cinta-maya-panna - bhavana-maya panna where the latter is experiential wisdom
When you have the 1st vision of the Dhamma your faith never changes. This is because nobody can convince you otherwise of what you have already seen for yourself. | To become a true Buddhist one should have a pleasant mind towards, and confidence in, the Supreme Buddha. This confidence should be rooted (mulajata), and it should be well established (patitthita). To develop this kind of unshakeable confidence, it is important to know about the knowledge of The Buddha. A "Sotāpanna"/stream-entry Buddhist is a true Buddhist.
The Pali Canon recognizes four levels of Awakening, the first of which is called “Sotāpanna”/stream entry. This gains its name from the fact that a person who has attained this level has entered the "stream" flowing inevitably to nibbana. He/she is guaranteed to achieve full awakening within seven lifetimes at most, and in the interim will not be reborn in any of the lower realms.
The practices leading to stream entry are encapsulated in four factors:
* Association with people of integrity is a factor for stream entry.
* Listening to the true Dhamma is a factor for stream entry.
* Appropriate attention is a factor for stream entry.
* Practice in accordance with the Dhamma is a factor for stream entry.
The Sotapanna is free from the following three fetters (samyojana):
1. The wrong view that the aggregates of physical and mental phenomena are ego or self. (sakkāya-ditthi or personality-belief).
2. Any doubt about the Buddha, the Dhamma, the Sangha and the discipline (vicikicchā or sceptical doubt).
3. Belief that methods other than that of cultivating the qualities of the eightfold noble path and developing insight into the four noble truths will bring eternal peace (silabbataparāmāsa or belief in mere rite and ritual).
Furthermore, his observation of the five precepts remains pure and absolute, as a matter of course. For these reasons a Sotāpanna is well secured from being reborn in the unhappy existences of the four lower worlds. He will lead the happy life in the world of human beings and devas for seven existences at the most and during this period he will attain Arahantship and nibbāna.
[](https://i.stack.imgur.com/12oiQ.jpg) |
19,733 | In this society a lot of people say "I'm a Buddhist" but they're just saying that. So my question is, how can a person become a real Buddhist? And how do we know he's a real Buddhist? | 2017/03/22 | [
"https://buddhism.stackexchange.com/questions/19733",
"https://buddhism.stackexchange.com",
"https://buddhism.stackexchange.com/users/8305/"
] | You become a real Buddhist when you walk the path of Dhamma. The Buddhist teaching is to be experienced here and now and to be verified here and now. There are many ways this has been put forward:
* Sila - Samadhi - Panna, where the latter is realisation at the experiential level or at the level of wisdom
* Pariyatti - Patipatti - Pativedha where the latter is the experience
* sutta-maya-panna - cinta-maya-panna - bhavana-maya panna where the latter is experiential wisdom
When you have the 1st vision of the Dhamma your faith never changes. This is because nobody can convince you otherwise of what you have already seen for yourself. | I don't know about "Buddhism" as a name. There are those who honor the Buddha by practicing the eightfold path. There are those that honor the Buddha by giving garlands and incense but both are wholesome and real "Buddhist" practices alright. Even someone who has little idea what they are doing can technically be a "real" Buddhist. It's just a word.
What is more real than any other Buddhism? What do I know? Maybe it's the Buddha's actual core teaching and those enlightened enough to recognize the Buddha's actual core teaching in the Suttas that is "Real Buddhism".
I think that, "really", The Buddha's teaching is always Buddhism, but Buddhism isn't always The Buddha's actual core teaching, or in many instances, Buddhism isn't even anywhere near the Buddha's actual core teaching.
**We should** be able to **talk about** things like **this...**
...and how we feel about things without fear of being misunderstood, persecuted, oppressed, corrected and censored by whoever has enough nerve to think they understand themselves enough to judge other forum members as "wrong". How can this be good karma? Can a "real" Buddhist just decide that we can be judge, jury and executioner and that is congruent with the Dhamma?
Who in this forum really understands enough to transgress punitive measures against other fellow human beings and still at the same time be a "real" Buddhist?
**Anyone who justifies their transgressions against another person because the person violated the "group's rules" can never be sure they are being fair or appropriate when they punish unless they are enlightened.** Maybe I am missing something... I certainly know that I am no saint; this is just my little opinion. -Metta
19,733 | In this society a lot of people say "I'm a Buddhist" but they're just saying that. So my question is, how can a person become a real Buddhist? And how do we know he's a real Buddhist? | 2017/03/22 | [
"https://buddhism.stackexchange.com/questions/19733",
"https://buddhism.stackexchange.com",
"https://buddhism.stackexchange.com/users/8305/"
] | You become a real Buddhist when you walk the path of Dhamma. The Buddhist teaching is to be experienced here and now and to be verified here and now. There are many ways this has been put forward:
* Sila - Samadhi - Panna, where the latter is realisation at the experiential level or at the level of wisdom
* Pariyatti - Patipatti - Pativedha where the latter is the experience
* sutta-maya-panna - cinta-maya-panna - bhavana-maya panna where the latter is experiential wisdom
When you have the 1st vision of the Dhamma your faith never changes. This is because nobody can convince you otherwise of what you have already seen for yourself. | I think a real Buddhist at the very least follows the five precepts. |
19,733 | In this society a lot of people say "I'm a Buddhist" but they're just saying that. So my question is, how can a person become a real Buddhist? And how do we know he's a real Buddhist? | 2017/03/22 | [
"https://buddhism.stackexchange.com/questions/19733",
"https://buddhism.stackexchange.com",
"https://buddhism.stackexchange.com/users/8305/"
] | You become a real Buddhist when you walk the path of Dhamma. The Buddhist teaching is to be experienced here and now and to be verified here and now. There are many ways this has been put forward:
* Sila - Samadhi - Panna, where the latter is realisation at the experiential level or at the level of wisdom
* Pariyatti - Patipatti - Pativedha where the latter is the experience
* sutta-maya-panna - cinta-maya-panna - bhavana-maya panna where the latter is experiential wisdom
When you have the 1st vision of the Dhamma your faith never changes. This is because nobody can convince you otherwise of what you have already seen for yourself. | One "becomes a Buddhist" by seeking refuge in the 3 jewels: the Buddha, the dharma, and the sangha. By seeking refuge we mean actively learning and using that knowledge.
After the initial point of inquiry into "Buddhism?", you start to learn about it through literature, asking people, or videos: all part of the dharma. Then as you learn, you envelop yourself in the Buddhist community ... the sangha. But it all starts with the initial seeking of refuge.
That initial step is all that is "required" to be a real Buddhist. |
2,519,632 | What's a good book, or a good way to learn HTML, XHTML, and CSS? I recently graduated from school and I only know Java and C++.
I would like to learn some web development so I am looking for some good books and resources. | 2010/03/25 | [
"https://Stackoverflow.com/questions/2519632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69514/"
] | If you ask me, whatever introductory level book you can find at your public library on HTML or CSS or "web programming" is a great place to start as far as books go. Typically, those kinds of books are a little bit "stale", in that you're likely to find something that's not the latest and greatest version of HTML. But for the most part very few parts of the HTML spec get deprecated or removed from version to version, so anything you'd learn from slightly obsolete books will generally still apply.
There's a ton of great stuff online too about every specific topic you could want, for the most part, you could get by without buying any books
Here's a link (<http://jwinblad.com/webprogramming/webdesign.php>) to some of my personal bookmarks on web-development that I like to keep handy like the specifications for CSS and HTML that enumerate every possible tag or CSS property and give you a brief description of what each one means and is used for.
Of course, actually trying out different tags and CSS experimentally is sometimes much more helpful in learning. If there's a website that does something cool, you can oftentimes learn how they do their cool feature by viewing the source-code of the page or its style sheet using the tools provided in your web-browser. Create a hello-world demo-page and then work from there on adding extra tags and a style sheet and so on. If there's something specific you want to do, you can search for tips on how to do that particular thing.
If you already know Java and C++, it should not be difficult to learn HTML/XHTML and CSS. But if you're looking at learning this with the hope of it being a career direction or paying job, you will probably want to delve into more than just HTML; nobody seems to be looking for people to write webpages that look like they came out of 1998 or 2001, and you can get nicer-looking stuff than that with almost no HTML knowledge using WYSIWYG tools. Once you get the basics of HTML understood and know where to look up tags and CSS descriptors, you may want to branch out either into a client-side scripting language like JavaScript or a server-side programming language or framework (PHP, Ruby on Rails, etc.) or a trendy web technology like Flash. It kind of depends what your goals are in learning web programming. | I'd recommend [Professional CSS: Cascading Style Sheets for Web Design](https://rads.stackoverflow.com/amzn/click/com/047017708X). There apparently is a second edition now, but my first edition has:
>
> Chapter 2: Best Practices for XHTML and CSS
>
>
>
The book (first edition) basically comprises case studies of CSS and XHTML implementations at ESPN, the PGA Championship, and the University of Florida, with many great tips and explanations of why things are done a certain way. |
2,519,632 | What's a good book, or a good way to learn HTML, XHTML, and CSS? I recently graduated from school and I only know Java, and C++.
I would like to learn some web development so I am looking for some good books and resources. | 2010/03/25 | [
"https://Stackoverflow.com/questions/2519632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69514/"
] | If you are interested in online resources, The [SitePoint Reference](http://reference.sitepoint.com/) seems good. It covers HTML, CSS, and JavaScript. The information seems clear, and there is the capability to add user notes as well.
If you prefer printed material, I started out with HTML for Dummies - despite the common opinion on For Dummies books, they are actually useful for picking up a new subject. I keep handy the HTML/XHTML Definitive Guide and the CSS Definitive Guide - both from O'Reilly. Those two are good for references.
For JavaScript, I recommend Simply JavaScript from SitePoint, and Dom Scripting from Friends of Ed. | If you know programming languages like Java, then I'd recommend checking out the [HTML4 spec](http://www.w3.org/TR/REC-html40/) on the W3C site.
It is as close as you'll come to the official docs.
I'd also recommend learning the differences between HTML and XHTML, why XHTML has no benefits to today's web (IE, content types, error handling too unforgiving) and also I'd look into HTML5, just to keep current.
Here is a [quick overview](http://jwinblad.com/webprogramming/html.php) of differences between HTML and XHTML that I found whilst surfing [Jessica's](https://stackoverflow.com/users/272208/jessica) [website](http://jwinblad.com/). |
2,519,632 | What's a good book, or a good way to learn HTML, XHTML, and CSS? I recently graduated from school and I only know Java, and C++.
I would like to learn some web development so I am looking for some good books and resources. | 2010/03/25 | [
"https://Stackoverflow.com/questions/2519632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69514/"
] | If you are interested in online resources, The [SitePoint Reference](http://reference.sitepoint.com/) seems good. It covers HTML, CSS, and JavaScript. The information seems clear, and there is the capability to add user notes as well.
If you prefer printed material, I started out with HTML for Dummies - despite the common opinion on For Dummies books, they are actually useful for picking up a new subject. I keep handy the HTML/XHTML Definitive Guide and the CSS Definitive Guide - both from O'Reilly. Those two are good for references.
For JavaScript, I recommend Simply JavaScript from SitePoint, and Dom Scripting from Friends of Ed. | I'm working through Head First HTML with CSS & XHTML. I have found it to be quite useful in starting me off with these technologies.
2,519,632 | What's a good book, or a good way to learn HTML, XHTML, and CSS? I recently graduated from school and I only know Java, and C++.
I would like to learn some web development so I am looking for some good books and resources. | 2010/03/25 | [
"https://Stackoverflow.com/questions/2519632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69514/"
] | IMHO, the best way to learn it is by doing it: make a plan for a website and have a go at making it happen (repeat as required). By the time you put HTML, CSS, JavaScript, and eventually a server-side framework together, it can be a bit of a dark art, and there is much learning that can only happen when you are actually doing it (and feeling the pain of IE 6).
As mentioned by others, Sitepoint, Smashingmagazine, W3Schools (to name a few) are all handy references.
Would also suggest learning jquery as you learn javascript, some good starting tutorials here <http://docs.jquery.com/Tutorials>.
Also install firebug in firefox so you can start digging under the hood of sites you like.
With regard to books, from personal experience, I have a stack of outdated technology specific books that I have not touched for quite a few years. The ones that focus on why rather than how get much higher rotation.
If you have learnt java and c++, the mechanics of the technologies shouldn't be too hard to pick up, but many programmers tend to suck at things related to UI, so if you were to get a book, I would recommend "Don't Make Me Think" or other books related to usability and interface design.
HTH. Good Luck. | If you're looking for reference information, [W3 Schools](http://www.w3schools.com/) is a great place to start. [Smashing Magazine](http://www.smashingmagazine.com/) is great for pretty much everything to do with web development. I'd also recommend [A List Apart,](http://www.alistapart.com/) which often has great articles about some of the more difficult CSS concepts. And last, but certainly not least, I'd check out the articles on [24 ways](http://24ways.org); while they only have 24 updates every year (in December), they are written by some of the best people in the industry.
And since you're interested in web development, you'll probably end up wanting to learn some JavaScript as well. ppk's site [quirksmode.org](http://www.quirksmode.org) is a great place for that.
Well, I hope this can be some help to you, and wish you best of luck. Also, of course, you can always ask any question that you have here at Stack Overflow. |
2,519,632 | What's a good book, or a good way to learn HTML, XHTML, and CSS? I recently graduated from school and I only know Java, and C++.
I would like to learn some web development so I am looking for some good books and resources. | 2010/03/25 | [
"https://Stackoverflow.com/questions/2519632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69514/"
] | IMHO, the best way to learn it is by doing it: make a plan for a website and have a go at making it happen (repeat as required). By the time you put HTML, CSS, JavaScript, and eventually a server-side framework together, it can be a bit of a dark art, and there is much learning that can only happen when you are actually doing it (and feeling the pain of IE 6).
As mentioned by others, Sitepoint, Smashingmagazine, W3Schools (to name a few) are all handy references.
Would also suggest learning jquery as you learn javascript, some good starting tutorials here <http://docs.jquery.com/Tutorials>.
Also install firebug in firefox so you can start digging under the hood of sites you like.
With regard to books, from personal experience, I have a stack of outdated technology specific books that I have not touched for quite a few years. The ones that focus on why rather than how get much higher rotation.
If you have learnt java and c++, the mechanics of the technologies shouldn't be too hard to pick up, but many programmers tend to suck at things related to UI, so if you were to get a book, I would recommend "Don't Make Me Think" or other books related to usability and interface design.
HTH. Good Luck. | I'd recommend [Professional CSS: Cascading Style Sheets for Web Design](https://rads.stackoverflow.com/amzn/click/com/047017708X). There apparently is a second edition now, but my first edition has:
>
> Chapter 2: Best Practices for XHTML and CSS
>
>
>
The book (first edition) basically comprises case studies of CSS and XHTML implementations at ESPN, the PGA Championship, and the University of Florida, with many great tips and explanations of why things are done a certain way. |
2,519,632 | What's a good book, or a good way to learn HTML, XHTML, and CSS? I recently graduated from school and I only know Java, and C++.
I would like to learn some web development so I am looking for some good books and resources. | 2010/03/25 | [
"https://Stackoverflow.com/questions/2519632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69514/"
] | If you ask me, whatever introductory-level book you can find at your public library on HTML or CSS or "web programming" is a great place to start as far as books go. Typically, those kinds of books are a little bit "stale", in that you're likely to find something that's not the latest, greatest version of HTML, but for the most part very few parts of the HTML spec get deprecated or removed from version to version, so anything you'd learn from a slightly obsolete book will generally still exist.
There's a ton of great stuff online too about every specific topic you could want; for the most part, you could get by without buying any books.
Here's a link (<http://jwinblad.com/webprogramming/webdesign.php>) to some of my personal bookmarks on web-development that I like to keep handy like the specifications for CSS and HTML that enumerate every possible tag or CSS property and give you a brief description of what each one means and is used for.
Of course, actually trying out different tags and CSS experimentally is sometimes much more helpful in learning. If there's a website that does something cool, you can oftentimes learn how they do their cool feature by viewing the source code of the page or its style sheet using the tools provided in your web browser. Create a hello-world demo page and then work from there on adding extra tags and a style sheet and so on. If there's something specific you want to do, you can search for tips on how to do that particular thing.
If you already know Java and C++, it should not be difficult to learn HTML/XHTML and CSS. But if you're looking at learning this with the hope of it being a career direction or paying job, you will probably want to delve into more than just HTML; nobody seems to be looking for people to write webpages that look like they came out of 1998 or 2001, and you can get nicer-looking stuff than that with almost no HTML knowledge using WYSIWYG tools. Once you get the basics of HTML understood and know where to look up tags and CSS descriptors, you may want to branch out either into a client-side scripting language like JavaScript or a server-side programming language or framework (PHP, Ruby on Rails, etc.) or a trendy web technology like Flash. It kind of depends what your goals are in learning web programming. | IMHO, the best way to learn it is by doing it: make a plan for a website and have a go at making it happen (repeat as required). By the time you put HTML, CSS, JavaScript, and eventually a server-side framework together, it can be a bit of a dark art, and there is much learning that can only happen when you are actually doing it (and feeling the pain of IE 6).
As mentioned by others, Sitepoint, Smashingmagazine, W3Schools (to name a few) are all handy references.
Would also suggest learning jquery as you learn javascript, some good starting tutorials here <http://docs.jquery.com/Tutorials>.
Also install firebug in firefox so you can start digging under the hood of sites you like.
With regard to books, from personal experience, I have a stack of outdated technology specific books that I have not touched for quite a few years. The ones that focus on why rather than how get much higher rotation.
If you have learnt java and c++, the mechanics of the technologies shouldn't be too hard to pick up, but many programmers tend to suck at things related to UI, so if you were to get a book, I would recommend "Don't Make Me Think" or other books related to usability and interface design.
HTH. Good Luck. |
2,519,632 | What's a good book, or a good way to learn HTML, XHTML, and CSS? I recently graduated from school and I only know Java, and C++.
I would like to learn some web development so I am looking for some good books and resources. | 2010/03/25 | [
"https://Stackoverflow.com/questions/2519632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69514/"
] | If you know programming languages like Java, then I'd recommend checking out the [HTML4 spec](http://www.w3.org/TR/REC-html40/) on the W3C site.
It is as close as you'll come to the official docs.
I'd also recommend learning the differences between HTML and XHTML, why XHTML has no benefits to today's web (IE, content types, error handling too unforgiving) and also I'd look into HTML5, just to keep current.
Here is a [quick overview](http://jwinblad.com/webprogramming/html.php) of differences between HTML and XHTML that I found whilst surfing [Jessica's](https://stackoverflow.com/users/272208/jessica) [website](http://jwinblad.com/). | I'm working through Head First HTML with CSS & XHTML. I have found it to be quite useful in starting me off with these technologies.
2,519,632 | What's a good book, or a good way to learn HTML, XHTML, and CSS? I recently graduated from school and I only know Java, and C++.
I would like to learn some web development so I am looking for some good books and resources. | 2010/03/25 | [
"https://Stackoverflow.com/questions/2519632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69514/"
] | If you know programming languages like Java, then I'd recommend checking out the [HTML4 spec](http://www.w3.org/TR/REC-html40/) on the W3C site.
It is as close as you'll come to the official docs.
I'd also recommend learning the differences between HTML and XHTML, why XHTML has no benefits to today's web (IE, content types, error handling too unforgiving) and also I'd look into HTML5, just to keep current.
Here is a [quick overview](http://jwinblad.com/webprogramming/html.php) of differences between HTML and XHTML that I found whilst surfing [Jessica's](https://stackoverflow.com/users/272208/jessica) [website](http://jwinblad.com/). | If you're looking for reference information, [W3 Schools](http://www.w3schools.com/) is a great place to start. [Smashing Magazine](http://www.smashingmagazine.com/) is great for pretty much everything to do with web development. I'd also recommend [A List Apart,](http://www.alistapart.com/) which often has great articles about some of the more difficult CSS concepts. And last, but certainly not least, I'd check out the articles on [24 ways](http://24ways.org); while they only have 24 updates every year (in December), they are written by some of the best people in the industry.
And since you're interested in web development, you'll probably end up wanting to learn some JavaScript as well. ppk's site [quirksmode.org](http://www.quirksmode.org) is a great place for that.
Well, I hope this can be some help to you, and wish you best of luck. Also, of course, you can always ask any question that you have here at Stack Overflow. |
2,519,632 | What's a good book, or a good way to learn HTML, XHTML, and CSS? I recently graduated from school and I only know Java, and C++.
I would like to learn some web development so I am looking for some good books and resources. | 2010/03/25 | [
"https://Stackoverflow.com/questions/2519632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69514/"
] | If you ask me, whatever introductory-level book you can find at your public library on HTML or CSS or "web programming" is a great place to start as far as books go. Typically, those kinds of books are a little bit "stale", in that you're likely to find something that's not the latest, greatest version of HTML, but for the most part very few parts of the HTML spec get deprecated or removed from version to version, so anything you'd learn from a slightly obsolete book will generally still exist.
There's a ton of great stuff online too about every specific topic you could want; for the most part, you could get by without buying any books.
Here's a link (<http://jwinblad.com/webprogramming/webdesign.php>) to some of my personal bookmarks on web-development that I like to keep handy like the specifications for CSS and HTML that enumerate every possible tag or CSS property and give you a brief description of what each one means and is used for.
Of course, actually trying out different tags and CSS experimentally is sometimes much more helpful in learning. If there's a website that does something cool, you can oftentimes learn how they do their cool feature by viewing the source code of the page or its style sheet using the tools provided in your web browser. Create a hello-world demo page and then work from there on adding extra tags and a style sheet and so on. If there's something specific you want to do, you can search for tips on how to do that particular thing.
If you already know Java and C++, it should not be difficult to learn HTML/XHTML and CSS. But if you're looking at learning this with the hope of it being a career direction or paying job, you will probably want to delve into more than just HTML; nobody seems to be looking for people to write webpages that look like they came out of 1998 or 2001, and you can get nicer-looking stuff than that with almost no HTML knowledge using WYSIWYG tools. Once you get the basics of HTML understood and know where to look up tags and CSS descriptors, you may want to branch out either into a client-side scripting language like JavaScript or a server-side programming language or framework (PHP, Ruby on Rails, etc.) or a trendy web technology like Flash. It kind of depends what your goals are in learning web programming. | If you're looking for reference information, [W3 Schools](http://www.w3schools.com/) is a great place to start. [Smashing Magazine](http://www.smashingmagazine.com/) is great for pretty much everything to do with web development. I'd also recommend [A List Apart,](http://www.alistapart.com/) which often has great articles about some of the more difficult CSS concepts. And last, but certainly not least, I'd check out the articles on [24 ways](http://24ways.org); while they only have 24 updates every year (in December), they are written by some of the best people in the industry.
And since you're interested in web development, you'll probably end up wanting to learn some JavaScript as well. ppk's site [quirksmode.org](http://www.quirksmode.org) is a great place for that.
Well, I hope this can be some help to you, and wish you best of luck. Also, of course, you can always ask any question that you have here at Stack Overflow. |
2,519,632 | What's a good book, or a good way to learn HTML, XHTML, and CSS? I recently graduated from school and I only know Java, and C++.
I would like to learn some web development so I am looking for some good books and resources. | 2010/03/25 | [
"https://Stackoverflow.com/questions/2519632",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/69514/"
] | If you are interested in online resources, The [SitePoint Reference](http://reference.sitepoint.com/) seems good. It covers HTML, CSS, and JavaScript. The information seems clear, and there is the capability to add user notes as well.
If you prefer printed material, I started out with HTML for Dummies - despite the common opinion on For Dummies books, they are actually useful for picking up a new subject. I keep handy the HTML/XHTML Definitive Guide and the CSS Definitive Guide - both from O'Reilly. Those two are good for references.
For JavaScript, I recommend Simply JavaScript from SitePoint, and Dom Scripting from Friends of Ed. | If you're looking for reference information, [W3 Schools](http://www.w3schools.com/) is a great place to start. [Smashing Magazine](http://www.smashingmagazine.com/) is great for pretty much everything to do with web development. I'd also recommend [A List Apart,](http://www.alistapart.com/) which often has great articles about some of the more difficult CSS concepts. And last, but certainly not least, I'd check out the articles on [24 ways](http://24ways.org); while they only have 24 updates every year (in December), they are written by some of the best people in the industry.
And since you're interested in web development, you'll probably end up wanting to learn some JavaScript as well. ppk's site [quirksmode.org](http://www.quirksmode.org) is a great place for that.
Well, I hope this can be some help to you, and wish you best of luck. Also, of course, you can always ask any question that you have here at Stack Overflow. |
37,271,401 | Disclaimer: I am a total noob when it comes to anything .Net, but I have to get stuck in for a project I'm working on.
I see there are already some posts on this here, but no complete answer on how to resolve this. I get this warning:
>
> Can't find PInvoke DLL 'sqlceme35.dll'
>
>
>
when trying to deploy to a **Windows Mobile 6.5.3 emulator** from Visual Studio (I'm coding in C#). I am obviously using Sql Server CE for the application. I see it deploys just fine to emulators running older versions of Windows Mobile (namely 5.0).
Could anybody please explain this? | 2016/05/17 | [
"https://Stackoverflow.com/questions/37271401",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6344656/"
] | sqlceme35.dll is not part of a standard Windows Mobile SDK installation and needs to be installed separately (see <https://www.microsoft.com/en-us/download/details.aspx?id=8831>) and deployed manually (copy and install CAB file from your PC after install, see <https://msdn.microsoft.com/en-us/library/13kw2t64%28v=vs.90%29.aspx>).
In your case you need to install the cab files from the wce500 subdirectory. ( "drive:\Program Files\Microsoft SQL Server Compact Edition\v3.5\Devices\wce400 or wce500\CPU architecture type").
Although the site <https://technet.microsoft.com/en-us/library/ms172361%28v=sql.105%29.aspx> states that SQL Server CE runtimes will be deployed automatically, this is not always the case, so it is best to manually install the runtimes before running an app that depends on them. | C:\Program Files (x86)\Microsoft SQL Server Compact Edition\v3.5\Devices\wce500\armv4i
Get the following cab files:
* sqlce.wce5.armv4i.CAB
* sqlce.repl.ppc.wce5.armv4i.CAB
* sqlce.ppc.wce5.armv4i.CAB
* sqlce.dev.ENU.wce5.armv4i.CAB
Install those onto the Program files directory of your pocket pc. Once they are installed, you will then see a folder called "Microsoft SQL Server Compact Edition". It will have the dlls that your application uses. |
271,142 | Yennefer and Geralt are supposed to have an especially close romantic relationship. Yet throughout the Witcher games, Geralt freely sleeps with a number of other female characters.
I cannot recall Yennefer ever discussing how she feels about this during the games. Do we ever get an insight on her opinions about Geralt's promiscuity? | 2016/06/23 | [
"https://gaming.stackexchange.com/questions/271142",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/63056/"
] | At the very least, her actions give us insight into her disapproval. That said, Geralt and Yennefer's relationship is not necessarily a natural one.
Geralt and Yennefer's Relationship
----------------------------------
In Witcher lore, Yennefer first appears in "[The Last Wish](http://witcher.wikia.com/wiki/The_Last_Wish_(short_story) "\"The Last Wish\" (short story) @ Witcher Wiki")". Without the need for unnecessary spoilers, Geralt frees a djinn, and is granted three wishes. Towards the end of the story, Geralt uses his last wish *"so that Geralt and Yennefer, despite their many differences, could not live without each other."*
The love affair is only lightly referenced during the first two games. To quote Geralt, *"My amnesia prevents me from remembering our relations in the past, but I have the impression I once loved a sorceress, deeply..."*. In the second game, it only comes to light *at the end* that Yennefer may actually be alive. It is suggested that she may also suffer amnesia.
While you can woo other women during Witcher 3, Yennefer is mostly still under the spell of the djinn. Further towards the end of the game, Yennefer will ask you to help her trap another djinn, during the quest The Last Wish. She intends to use the djinn to remove the spell of the previous djinn, to find out whether she and Geralt truly love each other. If you complete the quest, you have the option to tell Yennefer you still love her, or tell her you do not.
It is clear that she still has feelings for Geralt. If you choose to tell her you no longer have feelings, the quest log reveals that *"The truth came as a brutal shock to Yennefer, though she was not the kind to let this show."* If you choose to tell her you still love her, there is a high probability that you will be moving towards the "Yennefer ending".
Reaction to promiscuity
-----------------------
If you try to romance both Yennefer *and* Triss, you will be treated to a scene before the search for the Sunstone, where they both try to seduce Geralt, together. Unfortunately, it is a ruse, and the two chain Geralt to a bed and leave him there, naked. From that point on, conversing with either character makes it clear that they have lost all interest in pursuing a romance with Geralt.
Further speculation
-------------------
There does not appear to be any reaction to other cases of promiscuity, either from using brothels, or romancing DLC characters.
Within the scope of Arqade, it is up to you to make your own assumptions on whether these other cases were simply unknown to Yennefer, or whether she was happy with Geralt exploring alternate sexual partners, but not alternate romantic partners. There is also the potential argument that all other encounters are simply not canon to the main story, and are merely available for the purpose of providing player choice.
Further information
-------------------
As you likely know, the three The Witcher games are based on a series of like-named novels and short stories. While asking solely in the context of the games is permitted on this site, we have an alternate Stack Exchange site that will take questions regarding the entire Witcher series, including the video games, TV series, and novels. If you would like to explore the possibility of further insight into the novel series, [SciFi Exchange](https://scifi.stackexchange.com/ "SciFi Stack Exchange (questions regarding SciFi & Fantasy)") would be a good place to ask. | I'll add a bit more information from the books: Yennefer once asks Geralt "has he been faithful to her", to which he replies "Yes, I've been always thinking only about you", which seems to satisfy her. She seems to get a bit upset with Triss having had a brief affair with Geralt, but she settles the matter after having a "strong talk" with her friend.
That said, Yennefer is no saint herself: in the "A Shard of Ice" story, she and Geralt meet her old lover, a mage called Istredd.
>
> "My deep friendship with Yennefer," continued Istredd, "started quite some time ago, witcher. It has long been a friendship without obligations, based on long or short, but more orless regular, periods spent with one another. This type of casual relationship is often practiced amongst our profession.
>
>
>
Geralt and Istredd have at one stage a heated discussion about who is better suited as her partner. The mage points to his wealth and status and says that Geralt is just a "plaything". Annoyed Geralt replies:
>
> The witcher thought for a moment and decided to finish it.
> "Because," he burst out, "Last night she made love with me and not you."
> Istredd picked up the skull, stroking it.
> "A ha," the magician said slowly, "Fine. Well. She made love with me this morning.
>
>
>
In other words, it seems that both Geralt and Yennefer are people for whom **casual** sex with other partners is not a barrier to a true romantic relationship. |
271,142 | Yennefer and Geralt are supposed to have an especially close romantic relationship. Yet throughout the Witcher games, Geralt freely sleeps with a number of other female characters.
I cannot recall Yennefer ever discussing how she feels about this during the games. Do we ever get an insight on her opinions about Geralt's promiscuity? | 2016/06/23 | [
"https://gaming.stackexchange.com/questions/271142",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/63056/"
] | At the very least, her actions give us insight into her disapproval. That said, Geralt and Yennefer's relationship is not necessarily a natural one.
Geralt and Yennefer's Relationship
----------------------------------
In Witcher lore, Yennefer first appears in "[The Last Wish](http://witcher.wikia.com/wiki/The_Last_Wish_(short_story) "\"The Last Wish\" (short story) @ Witcher Wiki")". Without the need for unnecessary spoilers, Geralt frees a djinn, and is granted three wishes. Towards the end of the story, Geralt uses his last wish *"so that Geralt and Yennefer, despite their many differences, could not live without each other."*
The love affair is only lightly referenced during the first two games. To quote Geralt, *"My amnesia prevents me from remembering our relations in the past, but I have the impression I once loved a sorceress, deeply..."*. In the second game, it only comes to light *at the end* that Yennefer may actually be alive. It is suggested that she may also suffer amnesia.
While you can woo other women during Witcher 3, Yennefer is mostly still under the spell of the djinn. Further towards the end of the game, Yennefer will ask you to help her trap another djinn, during the quest The Last Wish. She intends to use the djinn to remove the spell of the previous djinn, to find out whether she and Geralt truly love each other. If you complete the quest, you have the option to tell Yennefer you still love her, or tell her you do not.
It is clear that she still has feelings for Geralt. If you choose to tell her you no longer have feelings, the quest log reveals that *"The truth came as a brutal shock to Yennefer, though she was not the kind to let this show."* If you choose to tell her you still love her, there is a high probability that you will be moving towards the "Yennefer ending".
Reaction to promiscuity
-----------------------
If you try to romance both Yennefer *and* Triss, you will be treated to a scene before the search for the Sunstone, where they both try to seduce Geralt, together. Unfortunately, it is a ruse, and the two chain Geralt to a bed and leave him there, naked. From that point on, conversing with either character makes it clear that they have lost all interest in pursuing a romance with Geralt.
Further speculation
-------------------
There does not appear to be any reaction to other cases of promiscuity, either from using brothels, or romancing DLC characters.
Within the scope of Arqade, it is up to you to make your own assumptions on whether these other cases were simply unknown to Yennefer, or whether she was happy with Geralt exploring alternate sexual partners, but not alternate romantic partners. There is also the potential argument that all other encounters are simply not canon to the main story, and are merely available for the purpose of providing player choice.
Further information
-------------------
As you likely know, the three Witcher games are based on a series of novels and short stories of the same name. While asking solely in the context of the games is permitted on this site, we have an alternate Stack Exchange site that takes questions on the entire Witcher series, including the video games, TV series and novels. If you would like to explore the possibility of further insight from the novel series, [SciFi Exchange](https://scifi.stackexchange.com/ "SciFi Stack Exchange (questions regarding SciFi & Fantasy)") would be a good place to ask. | The couple doesn't care about the other being sexually intimate with other partners, but Geralt was willing to kill Istredd for a while after finding out that he might lose Yen to him. |
271,142 | Yennefer and Geralt are supposed to have an especially close romantic relationship. Yet throughout the Witcher games, Geralt freely sleeps with a number of other female characters.
I cannot recall Yennefer ever discussing how she feels about this during the games. Do we ever get an insight on her opinions about Geralt's promiscuity? | 2016/06/23 | [
"https://gaming.stackexchange.com/questions/271142",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/63056/"
] | I'll add some more information from the books: Yennefer once asks Geralt whether he has been faithful to her, to which he replies "Yes, I've always been thinking only about you", which seems to satisfy her. She seems to get a bit upset about Triss having had a brief affair with Geralt, but she settles the matter after having a "strong talk" with her friend.
That said, Yennefer is not a saint herself: in the story "A Shard of Ice" she and Geralt meet her old lover, a mage called Istredd.
>
> "My deep friendship with Yennefer," continued Istredd, "started quite some time ago, witcher. It has long been a friendship without obligations, based on long or short, but more orless regular, periods spent with one another. This type of casual relationship is often practiced amongst our profession.
>
>
>
At one stage Geralt and Istredd have a heated discussion about who is better suited to be her partner. The mage points to his wealth and status and says that Geralt is just a "plaything". Annoyed, Geralt replies:
>
> The witcher thought for a moment and decided to finish it.
> "Because," he burst out, "Last night she made love with me and not you."
> Istredd picked up the skull, stroking it.
> "A ha," the magician said slowly, "Fine. Well. She made love with me this morning.
>
>
>
In other words, it seems that both Geralt and Yennefer are people for whom **casual** sex with other partners is not a barrier to a true romantic relationship. | The couple doesn't care about the other being sexually intimate with other partners, but Geralt was willing to kill Istredd for a while after finding out that he might lose Yen to him. |
134,753 | This is about the 1980 movie *Superman II* with Reeve and Hackman.
The only information I noticed in the film itself was this line from Superman's mother:
>
> The one danger we have considered is that the Phantom Zone might - we cannot predict - just might be cracked by a nuclear explosion in space. I cannot say I am glad you asked me that -
>
>
>
This almost makes it sound like a nuke *anywhere* in space would shatter the Phantom Zone, but if that was the case it ought to only take one supernova or warring space-capable civilization anywhere in the universe to shatter it, in which case it wouldn't have lasted very long.
So what's special about the part of outer space around Earth that makes the Phantom Zone gateway nukeable from there? Did the gateway follow Superman's ~~christmas ornament~~ spaceship all the way from Krypton to Earth or something? | 2016/07/16 | [
"https://scifi.stackexchange.com/questions/134753",
"https://scifi.stackexchange.com",
"https://scifi.stackexchange.com/users/35327/"
In this movie the Phantom Zone is a prison that travels through space, and you could argue that the Zone was caught in Krypton's explosion and followed a similar path to Kal-El's ship.
Furthermore, later in the movie, Superman grabs the nuke in Paris and disposes of it by throwing it in deep space where it just happens to explode near the wayward Phantom Zone. Yes, it's a plot contrivance but all of fiction is a contrivance.
And it's not a nuke anywhere in space; it's just that the Zone is strong, but not strong enough to survive a big enough explosion. | Well, perhaps it happened when the rocket ship was sent from Krypton. If you remember, in the new version with the added scenes, the ship passed by them in space. So maybe they were caught in the rocket's trajectory; even though the ship left them behind because it was much faster, they kept drifting the same way it was going, eventually reaching our galaxy. Maybe? |
134,753 | This is about the 1980 movie *Superman II* with Reeve and Hackman.
The only information I noticed in the film itself was this line from Superman's mother:
>
> The one danger we have considered is that the Phantom Zone might - we cannot predict - just might be cracked by a nuclear explosion in space. I cannot say I am glad you asked me that -
>
>
>
This almost makes it sound like a nuke *anywhere* in space would shatter the Phantom Zone, but if that was the case it ought to only take one supernova or warring space-capable civilization anywhere in the universe to shatter it, in which case it wouldn't have lasted very long.
So what's special about the part of outer space around Earth that makes the Phantom Zone gateway nukeable from there? Did the gateway follow Superman's ~~christmas ornament~~ spaceship all the way from Krypton to Earth or something? | 2016/07/16 | [
"https://scifi.stackexchange.com/questions/134753",
"https://scifi.stackexchange.com",
"https://scifi.stackexchange.com/users/35327/"
] | It's not explained how, or indeed why, the Phantom Zone was *so close to Earth* in either the film or its official novelisation. That being said, if we look at the earlier (1976) script treatment, it's explained that **the Phantom Zone is intended to remain in orbit around Krypton.**
>
> **JOR-EL**: (a trifle uneasily) *No one, not even the scientists of Krypton, can predict all eventualities throughout all eternity. The
> one danger we have considered is that the Phantom Zone might, just
> might -- we cannot know -- be cracked by a major nuclear explosion in
> space.*
>
>
> EXTREMELY TIGHT CLOSEUP LEX LUTHOR -- He sits bolt upright, his eyes
> shining.
>
>
> **JOR-EL**: *But our computers show this possibility as .000000002 in one million. Earthlings cannot travel this far in space, nor would
> they ever have reason to cause such an explosion. So I think we can
> consider the Phantom Zone secure.*
>
>
> [Superman II - '76 Treatment](http://www.supermanhomepage.com/movies/supermanii_scriptment_2_76.txt)
>
>
>
It follows that the Phantom Zone was likely pushed out of orbit by the explosion of Krypton, and then followed the same trajectory as Kal-El's pod toward Earth. This is also the standard explanation given as to why so much Kryptonite has landed on Earth. | Well, perhaps it happened when the rocket ship was sent from Krypton. If you remember, in the new version with the added scenes, the ship passed by them in space. So maybe they were caught in the rocket's trajectory; even though the ship left them behind because it was much faster, they kept drifting the same way it was going, eventually reaching our galaxy. Maybe? |
42,090 | Running an update check on all the installed modules on my site takes quite some time, and sometimes even exceeds PHP's runtime limits. Leaving the module enabled makes using the web based administration very difficult, as many different actions cause it to make a full run.
Is there a way to leave the module enabled, but only have full updates triggered by cron instead of every time the status or module pages are visited in admin?
I'm using Drupal 6.26. | 2012/09/08 | [
"https://drupal.stackexchange.com/questions/42090",
"https://drupal.stackexchange.com",
"https://drupal.stackexchange.com/users/942/"
In regard to the **Update** module, my solution is never to use it on a live website, only in staging or development environments.
Also [Elysia Cron](http://drupal.org/project/elysia_cron) lets you choose when you want to run all the hook\_crons. I think this is a "must have" module in all Drupal projects. | Yes, of course. I don't think it is built in, so you actually have a pretty cool idea for an easy but useful custom module. I actually made something very close to this not too long ago.
You create a module, say updateManager, and in its .info file add Elysia Cron as a dependency.
One of the goals of the module could be to have it add a text field on the update configuration page, to easily set the date and time you want update.php to run.
This field would support crontab syntax (`0 * * * *`).
* Thanks to the Elysia Cron API, it is easy to set, store, retrieve and play with cron tasks in your code. If you want, you could even just create a cron task here that overrides the main cron task and would be listed in the Elysia Cron task panel.
	+ In your custom module you can easily add the field to the 'update configuration form' with a form\_alter on the form.
	+ Another option is to turn off updates, and do it with a cron task that runs the update via Drush. |
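A low-tech sketch of the cron-only idea discussed above, assuming Drush is installed and that your Drush version provides the `pm-refresh` (`rf`) command; the binary and site paths are placeholders. With the in-page check effectively disabled, a system crontab entry refreshes the release data off-peak instead:

```
# Hypothetical crontab entry: refresh Drupal's update status nightly at 03:00,
# so admin pages never trigger the slow check themselves.
0 3 * * * /usr/bin/drush -r /var/www/example rf > /dev/null 2>&1
```

Running it from the system crontab also sidesteps PHP's web-request time limits, since Drush uses the CLI configuration.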
424,254 | We're planning to use standard ASP.NET user authentication for our application. However, by default this requires us to have our user database on our web server in the App\_Data folder.
This is usually a **big** no-no for us - our databases are all behind the firewall and all access is done via a WCF Service layer.
If the database was on a different server, but directly accessible from the web server, then this still violates our usual architecture rules.
Should we worry about our user database living on our web server? Does ASP.NET offer an out-of-the-box alternative?
NOTE: We're using .NET 3.5 and SQL Server 2005 | 2009/01/08 | [
"https://Stackoverflow.com/questions/424254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/39709/"
] | You can install the necessary db tables etc. in any SQL Server database.
Use the aspnet\_regsql.exe Wizard found in C:\WINDOWS\Microsoft.NET\Framework....... to set up the target database.
Then simply update the connection strings in the provider configurations in the web.config. | Yes and Yes.
1. If you ever need to move to multiple web servers you shouldn't have the user data on one of those servers.
2. There are multiple was to do this, but check out this link for details on one [MSDN How To: Use Forms Authentication with SQL Server in ASP.NET 2.0](http://msdn.microsoft.com/en-us/library/ms998317.aspx) |
424,254 | We're planning to use standard ASP.NET user authentication for our application. However, by default this requires us to have our user database on our web server in the App\_Data folder.
This is usually a **big** no-no for us - our databases are all behind the firewall and all access is done via a WCF Service layer.
If the database was on a different server, but directly accessible from the web server, then this still violates our usual architecture rules.
Should we worry about our user database living on our web server? Does ASP.NET offer an out-of-the-box alternative?
NOTE: We're using .NET 3.5 and SQL Server 2005 | 2009/01/08 | [
"https://Stackoverflow.com/questions/424254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/39709/"
] | You can install the necessary db tables etc. in any SQL Server database.
Use the aspnet\_regsql.exe Wizard found in C:\WINDOWS\Microsoft.NET\Framework....... to set up the target database.
Then simply update the connection strings in the provider configurations in the web.config. | Yes, you should worry. No, there is no out-of-the-box solution. ASP.NET only ships with a SQL Membership Provider and an Active Directory membership provider [(reference)](https://web.archive.org/web/20210513220018/http://aspnet.4guysfromrolla.com/articles/120705-1.aspx). You will have to use a custom membership provider to provide your functionality. |
424,254 | We're planning to use standard ASP.NET user authentication for our application. However, by default this requires us to have our user database on our web server in the App\_Data folder.
This is usually a **big** no-no for us - our databases are all behind the firewall and all access is done via a WCF Service layer.
If the database was on a different server, but directly accessible from the web server, then this still violates our usual architecture rules.
Should we worry about our user database living on our web server? Does ASP.NET offer an out-of-the-box alternative?
NOTE: We're using .NET 3.5 and SQL Server 2005 | 2009/01/08 | [
"https://Stackoverflow.com/questions/424254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/39709/"
] | You can install the necessary db tables etc. in any SQL Server database.
Use the aspnet\_regsql.exe Wizard found in C:\WINDOWS\Microsoft.NET\Framework....... to set up the target database.
Then simply update the connection strings in the provider configurations in the web.config. | You can create your own custom membership provider by overriding the methods and properties of the following abstract class: `public abstract class MembershipProvider`. Once you override them, you can use any valid data source to authenticate the user. For example, you can use MySQL, SQL Server or even an XML file to authenticate your users. These provider models are really cool. |
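To make the accepted suggestion concrete: after running aspnet\_regsql.exe against a database behind the firewall, the membership provider is pointed at it from web.config. The fragment below is a hedged sketch, not a drop-in file; the server, database and application names are placeholders:

```xml
<!-- Hypothetical fragment: membership data lives on a separate SQL Server. -->
<connectionStrings>
  <add name="MembershipDb"
       connectionString="Data Source=DBSERVER;Initial Catalog=AspNetUsers;Integrated Security=True" />
</connectionStrings>
<system.web>
  <membership defaultProvider="SqlProvider">
    <providers>
      <clear />
      <add name="SqlProvider"
           type="System.Web.Security.SqlMembershipProvider"
           connectionStringName="MembershipDb"
           applicationName="/MyApp" />
    </providers>
  </membership>
</system.web>
```

Note that routing all access through a WCF service layer instead, as the question's architecture rules require, would mean writing a custom `MembershipProvider` that calls the service rather than using `SqlMembershipProvider` directly.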
2,423,203 | What is the difference between the server and client HotSpot VMs? Is there any reason to switch a production environment to -server? Please share your practical experience. Is there any performance boost? Related to Oracle UCM 10g | 2010/03/11 | [
"https://Stackoverflow.com/questions/2423203",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/290570/"
] | Yes, there can be a *huge* performance boost in some cases. When benchmarking my Protocol Buffers implementation, I was comparing it against the Java implementation - and I was really pleased, until I switched on -server... and saw the Java performance double. I don't know the details of everything it does, but basically it lets the JIT work harder, as it expects the code to be running for longer.
I wouldn't expect that to be the case in every application of course, but it can make a big difference. Of course, it won't have much effect unless your application is already CPU-bound on the JVM. I have no experience with Oracle UCM, so couldn't say how much effect it will have on your specific use. Have you already performed appropriate analysis of where the bottleneck in your system is? | The server VM collects stats for a longer time than the client VM before converting Java bytecode to native code. A **bit** more here: <http://java.sun.com/j2se/1.3/docs/guide/performance/hotspot.html#server> |
109,568 | In the Netflix movie *1922* we see the father and the son eat dinner right after talking with the sheriff about the missing wife.
I noticed that they were holding knives in their right hands and forks in their left. Then the father finished cutting up a piece of steak, maybe, before switching the fork to the right hand. I wouldn't have thought much about it if the son hadn't done the same.
Was it a common practice back in the 1920s for people to keep switching knives and forks while eating?
Edit:
-----
After seeing the numerous comments posted to this question, I would like to point out one thing:
I know that most people are right handed and it would make sense for the dominant hand to exert force to cut the food using the knife, so it is logical for the knife to be held by the right hand and the left hand transfers the food from the plate to the mouth, which is a simpler task compared to cutting up food.
I should also point out that, where I come from, we only use the right hand to transfer food from the plate to the mouth, for cultural/religious/traditional reasons. But the majority of foods don't require cutlery (or if they do, people use their bare hands anyway). Again, that is where I come from; some do use cutlery, and when they do, they hold the knives with the left hand and the spoon/fork with the right hand, for the reasons above.
Now some "modernized" people follow the European style strictly, meaning knives only in the right hand and the fork in the left hand. But they are looked down on by the majority of our society for using the wrong hand to eat. Some of the "modernized" people will appease both parties by holding the knife with their right hand and, right after cutting up the food, switching the fork to the right hand so that only the "righteous" hand does the feeding, and it's a win-win.
That last part is what triggered me to ask the question, because that's just what happened in the movie. Although I doubt that the father and son switched forks to their right hands for social or religious reasons. Maybe it has something to do with something else; I don't know.
"https://movies.stackexchange.com/questions/109568",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/18998/"
] | As other posters have said, the use of a knife and fork is a cultural artifact. How they were used will vary by culture. As some have pointed out, culture is a very regional as well as generational thing. So are manners, gentility, and hospitality, to name a few. In a country as diverse as the US, it varies greatly and changes over time. In some areas, the above method is still taught, but not always applied.
There once was a time, in or before the antebellum era, when wearing white gloves to eat at formal dinners was the cultural norm. The cleanliness of your gloves after the meal was a testament to how cultured you were.
Conversely, there are still older, native Texans who subscribe to the long held Texas tradition of “Never put your knife in your gun hand.” This is done whether eating or fighting, regardless of which hand is your gun hand. This goes right along with the belief in Texas that having a dull knife (or none at all) or a tough steak is uncivilized. | Eating and other customs change with geography and time.
On the subject of eating utensils, my answer to this question shows how some persons reacted to the use of different ones:
<https://skeptics.stackexchange.com/questions/42159/did-the-catholic-church-forbid-the-use-of-forks-in-medieval-times/42167#42167> |
109,568 | In the Netflix movie *1922* we see the father and the son eat dinner right after talking with the sheriff about the missing wife.
I noticed that they were holding knives in their right hands and forks in their left. Then the father finished cutting up a piece of steak, maybe, before switching the fork to the right hand. I wouldn't have thought much about it if the son hadn't done the same.
Was it a common practice back in the 1920s for people to keep switching knives and forks while eating?
Edit:
-----
After seeing the numerous comments posted to this question, I would like to point out one thing:
I know that most people are right handed and it would make sense for the dominant hand to exert force to cut the food using the knife, so it is logical for the knife to be held by the right hand and the left hand transfers the food from the plate to the mouth, which is a simpler task compared to cutting up food.
I should also point out that, where I come from, we only use the right hand to transfer food from the plate to the mouth, for cultural/religious/traditional reasons. But the majority of foods don't require cutlery (or if they do, people use their bare hands anyway). Again, that is where I come from; some do use cutlery, and when they do, they hold the knives with the left hand and the spoon/fork with the right hand, for the reasons above.
Now some "modernized" people follow the European style strictly, meaning knives only in the right hand and the fork in the left hand. But they are looked down on by the majority of our society for using the wrong hand to eat. Some of the "modernized" people will appease both parties by holding the knife with their right hand and, right after cutting up the food, switching the fork to the right hand so that only the "righteous" hand does the feeding, and it's a win-win.
That last part is what triggered me to ask the question, because that's just what happened in the movie. Although I doubt that the father and son switched forks to their right hands for social or religious reasons. Maybe it has something to do with something else; I don't know.
"https://movies.stackexchange.com/questions/109568",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/18998/"
] | Specifically addressing the question.
The practice of cutting food such as a steak with a knife in the dominant hand (more commonly the right hand in the USA), and fork in the non-dominant hand, then switching to just a fork in the dominant hand to eat is a common practice in USA dining today, not just in the 1920s.
Both this style and the current common European style of keeping the knife in the dominant hand are acceptable etiquette in USA dining. In my personal experience the 'switch' style is more common today in the midwest USA where I live.
The following article suggests, without evidence, that Americans are slowly abandoning this practice. However the article does at least substantiate that the practice is common.
<https://slate.com/human-interest/2013/06/fork-and-knife-use-americans-need-to-stop-cutting-and-switching.html> | As other posters have said, the use of a knife and fork is a cultural artifact. How they were used will vary by culture. As some have pointed out, culture is a very regional as well as generational thing. So are manners, gentility, and hospitality, to name a few. In a country as diverse as the US, it varies greatly and changes over time. In some areas, the above method is still taught, but not always applied.
There once was a time, in or before the antebellum era, when wearing white gloves to eat at formal dinners was the cultural norm. The cleanliness of your gloves after the meal was a testament to how cultured you were.
Conversely, there are still older, native Texans who subscribe to the long held Texas tradition of “Never put your knife in your gun hand.” This is done whether eating or fighting, regardless of which hand is your gun hand. This goes right along with the belief in Texas that having a dull knife (or none at all) or a tough steak is uncivilized. |
109,568 | In the Netflix movie *1922* we see the father and the son eat dinner right after talking with the sheriff about the missing wife.
I noticed that they were holding knives in their right hands and forks in their left. Then the father finished cutting up a piece of steak, maybe, before switching the fork to the right hand. I wouldn't have thought much about it if the son hadn't done the same.
Was it a common practice back in the 1920s for people to keep switching knives and forks while eating?
Edit:
-----
After seeing the numerous comments posted to this question, I would like to point out one thing:
I know that most people are right handed and it would make sense for the dominant hand to exert force to cut the food using the knife, so it is logical for the knife to be held by the right hand and the left hand transfers the food from the plate to the mouth, which is a simpler task compared to cutting up food.
I should also point out that, where I come from, we only use the right hand to transfer food from the plate to the mouth, for cultural/religious/traditional reasons. But the majority of foods don't require cutlery (or if they do, people use their bare hands anyway). Again, that is where I come from; some do use cutlery, and when they do, they hold the knives with the left hand and the spoon/fork with the right hand, for the reasons above.
Now some "modernized" people follow the European style strictly, meaning knives only in the right hand and the fork in the left hand. But they are looked down on by the majority of our society for using the wrong hand to eat. Some of the "modernized" people will appease both parties by holding the knife with their right hand and, right after cutting up the food, switching the fork to the right hand so that only the "righteous" hand does the feeding, and it's a win-win.
That last part is what triggered me to ask the question, because that's just what happened in the movie. Although I doubt that the father and son switched forks to their right hands for social or religious reasons. Maybe it has something to do with something else; I don't know.
"https://movies.stackexchange.com/questions/109568",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/18998/"
] | Specifically addressing the question.
The practice of cutting food such as a steak with a knife in the dominant hand (more commonly the right hand in the USA), and fork in the non-dominant hand, then switching to just a fork in the dominant hand to eat is a common practice in USA dining today, not just in the 1920s.
Both this style and the current common European style of keeping the knife in the dominant hand are acceptable etiquette in USA dining. In my personal experience the 'switch' style is more common today in the midwest USA where I live.
The following article suggests, without evidence, that Americans are slowly abandoning this practice. However the article does at least substantiate that the practice is common.
<https://slate.com/human-interest/2013/06/fork-and-knife-use-americans-need-to-stop-cutting-and-switching.html> | Eating and other customs change with geography and time.
On the subject of eating utensils, my answer to this question shows how some persons reacted to the use of different ones:
<https://skeptics.stackexchange.com/questions/42159/did-the-catholic-church-forbid-the-use-of-forks-in-medieval-times/42167#42167> |
5,568,111 | This is a philosophical question, actually.
I've been programming in PHP for a year now and, as you all know, PHP is very liberal with datatypes.
I was wondering: given that, is it a good practice to allow methods to return different kind of values?
Example: I usually set the variable that will be returned to a default value (usually false) and, if everything goes well during the execution of the method, the variable gets the value produced by the successful execution.
Disclaimer: I try to document everything using PHPDoc. | 2011/04/06 | [
"https://Stackoverflow.com/questions/5568111",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/204790/"
] | Seems like a correct approach: returning boolean false allows the method/function to return other logical "false" values such as 0 or the empty string as valid while still allowing the caller to check its result with the `===` operator. | >
> I was wondering: given that, is it a
> good practice to allow methods to
> return different kind of values?
>
>
>
No, it's not, neither in PHP nor in other languages. Better to add another method for a specific type of result. |
5,568,111 | This is a philosophical question, actually.
I've been programming in PHP for a year now and, as you all know, PHP is very liberal with datatypes.
I was wondering: given that, is it a good practice to allow methods to return different kind of values?
Example: I usually set the variable that will be returned to a default value (usually false) and, if everything goes well during the execution of the method, the variable gets the value produced by the successful execution.
Disclaimer: I try to document everything using PHPDoc. | 2011/04/06 | [
"https://Stackoverflow.com/questions/5568111",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/204790/"
] | It's a common approach, at least in PHP, and it isn't a bad practice.
It can be documented without problem using the PHPDoc convention: <http://manual.phpdoc.org/HTMLSmartyConverter/HandS/phpDocumentor/tutorial_tags.return.pkg.html> | >
> I was wondering: given that, is it a
> good practice to allow methods to
> return different kind of values?
>
>
>
No, it's not, neither in PHP nor in other languages. Better to add another method for a specific type of result. |
5,568,111 | This is a philosophical question, actually.
I've been programming in PHP for a year now and, as you all know, PHP is very liberal with datatypes.
I was wondering: given that, is it a good practice to allow methods to return different kind of values?
Example: I usually set the variable that will be returned to a default value (usually false) and, if everything goes well during the execution of the method, the variable gets the value produced by the successful execution.
Disclaimer: I try to document everything using PHPDoc. | 2011/04/06 | [
"https://Stackoverflow.com/questions/5568111",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/204790/"
] | Seems like a correct approach: returning boolean false allows the method/function to return other logical "false" values such as 0 or the empty string as valid while still allowing the caller to check its result with the `===` operator. | It's a common approach, at least in PHP, and it isn't a bad practice.
It can be documented without problem using the PHPDoc convention: <http://manual.phpdoc.org/HTMLSmartyConverter/HandS/phpDocumentor/tutorial_tags.return.pkg.html> |
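The `===` caveat in the accepted approach is easy to demonstrate: a sentinel return of `false` only works if callers use a strict check, because valid results such as `0` are also falsy. Here is a small sketch of the same pattern, written in Python for illustration (`find_index` is a made-up helper; `is False` plays the role of PHP's `===` comparison):

```python
def find_index(haystack, needle):
    """Return the position of needle in haystack, or False if absent.

    Mimics a PHP strpos()-style API: sentinel False on failure,
    a value on success.
    """
    position = haystack.find(needle)
    return False if position == -1 else position


# Position 0 is a valid result but is falsy, so a loose truth test misfires:
hit = find_index("abc", "a")
print(bool(hit))                         # False -- looks like "not found"
print(hit is False)                      # False -- strict check: it WAS found
print(find_index("abc", "z") is False)   # True  -- genuinely not found
```

This is exactly why mixing return types forces every caller to remember the strict comparison; the dissenting answer's suggestion of a separate method (or an exception) avoids the trap entirely.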
148,787 | I was wondering what "core" means? CPU or memory or both? | 2010/06/04 | [
"https://superuser.com/questions/148787",
"https://superuser.com",
"https://superuser.com/users/9265/"
] | A core is a processing unit. It may or may not have a number of caches (small quick memory) of its own, depending on the design of the chip. What most people consider 'the memory' (ie the main RAM) is not directly connected to the idea of a core. | Some further reading:
* <http://en.wikipedia.org/wiki/Multi-core>
* <http://en.wikipedia.org/wiki/Central_processing_unit>
From what I understand, one or more cores are part of a processor (CPU).
Memory can refer to the cache (a smaller, faster type of memory) or the RAM. The RAM isn't part of the CPU. |
148,787 | I was wondering what "core" means? CPU or memory or both? | 2010/06/04 | [
"https://superuser.com/questions/148787",
"https://superuser.com",
"https://superuser.com/users/9265/"
] | Like most terms, it depends on context.
* The term **cpu core** is frequently used these days. It refers to one of the independent processing units of [a multi-core processor](http://en.wikipedia.org/wiki/Multi-core_processor).
* The term **core memory** is a leftover from an early form of random access memory (RAM). [Magnetic core memory](http://en.wikipedia.org/wiki/Magnetic_core_memory) was first patented in 1947 and was used in early computers through the 50s and 60s. According to Wikipedia's article, magnetic core memory was replaced by integrated silicon RAM chips in the 1970's. Unlike modern silicon RAM, core memory was non-volatile -- it retained its contents indefinitely without power. | A core is a processing unit. It may or may not have a number of caches (small quick memory) of its own, depending on the design of the chip. What most people consider 'the memory' (ie the main RAM) is not directly connected to the idea of a core. |
148,787 | I was wondering what "core" means? CPU or memory or both? | 2010/06/04 | [
"https://superuser.com/questions/148787",
"https://superuser.com",
"https://superuser.com/users/9265/"
] | The word "core" has multiple meanings. These days, it's mostly used to refer to the [actual processing units](http://en.wikipedia.org/wiki/Multi-core) within the CPU (now that they tend to have more than one), but it used to be that "core" referred to the [amount of memory](http://en.wikipedia.org/wiki/Magnetic_core_memory), not processing units, in a machine. Hence the term "[core dump](http://en.wikipedia.org/wiki/Core_dump)," which refers to a readout of memory as of just before a crash. | A core is a processing unit. It may or may not have a number of caches (small quick memory) of its own, depending on the design of the chip. What most people consider 'the memory' (ie the main RAM) is not directly connected to the idea of a core. |
148,787 | I was wondering what "core" means? CPU or memory or both? | 2010/06/04 | [
"https://superuser.com/questions/148787",
"https://superuser.com",
"https://superuser.com/users/9265/"
] | Like most terms, it depends on context.
* The term **cpu core** is frequently used these days. It refers to one of the independent processing units of [a multi-core processor](http://en.wikipedia.org/wiki/Multi-core_processor).
* The term **core memory** is a leftover from an early form of random access memory (RAM). [Magnetic core memory](http://en.wikipedia.org/wiki/Magnetic_core_memory) was first patented in 1947 and was used in early computers through the 50s and 60s. According to Wikipedia's article, magnetic core memory was replaced by integrated silicon RAM chips in the 1970's. Unlike modern silicon RAM, core memory was non-volatile -- it retained its contents indefinitely without power. | Some further reading:
* <http://en.wikipedia.org/wiki/Multi-core>
* <http://en.wikipedia.org/wiki/Central_processing_unit>
From what I understand, one or more cores are part of a processor (CPU).
Memory can refer to the cache (a small and faster type of memory) or the RAM. The RAM isn't part of the CPU. |
148,787 | I was wondering what "core" means? CPU or memory or both? | 2010/06/04 | [
"https://superuser.com/questions/148787",
"https://superuser.com",
"https://superuser.com/users/9265/"
] | The word "core" has multiple meanings. These days, it's mostly used to refer to the [actual processing units](http://en.wikipedia.org/wiki/Multi-core) within the CPU (now that they tend to have more than one), but it used to be that "core" referred to the [amount of memory](http://en.wikipedia.org/wiki/Magnetic_core_memory), not processing units, in a machine. Hence the term "[core dump](http://en.wikipedia.org/wiki/Core_dump)," which refers to a readout of memory as of just before a crash. | Some further read :
* <http://en.wikipedia.org/wiki/Multi-core>
* <http://en.wikipedia.org/wiki/Central_processing_unit>
From what I understand, one or more cores are part of a processor (CPU).
Memory can refer to the cache (a small and faster type of memory) or the RAM. The RAM isn't part of the CPU. |
148,787 | I was wondering what "core" means? CPU or memory or both? | 2010/06/04 | [
"https://superuser.com/questions/148787",
"https://superuser.com",
"https://superuser.com/users/9265/"
] | Like most terms, it depends on context.
* The term **cpu core** is frequently used these days. It refers to one of the independent processing units of [a multi-core processor](http://en.wikipedia.org/wiki/Multi-core_processor).
* The term **core memory** is a leftover from an early form of random access memory (RAM). [Magnetic core memory](http://en.wikipedia.org/wiki/Magnetic_core_memory) was first patented in 1947 and was used in early computers through the 50s and 60s. According to Wikipedia's article, magnetic core memory was replaced by integrated silicon RAM chips in the 1970's. Unlike modern silicon RAM, core memory was non-volatile -- it retained its contents indefinitely without power. | The word "core" has multiple meanings. These days, it's mostly used to refer to the [actual processing units](http://en.wikipedia.org/wiki/Multi-core) within the CPU (now that they tend to have more than one), but it used to be that "core" referred to the [amount of memory](http://en.wikipedia.org/wiki/Magnetic_core_memory), not processing units, in a machine. Hence the term "[core dump](http://en.wikipedia.org/wiki/Core_dump)," which refers to a readout of memory as of just before a crash. |
64,998,115 | I have created 3 different notebook using pyspark code in Azure synapse Analytics. Notebook is running using spark pool.
There is only one spark pool for all 3 notebook. when these 3 notebook run individually, spark pool starts for all 3 notebook by default.
The issue which i am facing is related to spark pool. It is taking 10 minutes to start in each notebook. The Vcores assigned is 4 and executor is 1.
Can somebody please help me to know how can we boost the start of spark pool in azure synapse Analytics. | 2020/11/25 | [
"https://Stackoverflow.com/questions/64998115",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14310176/"
] | I have this problem a lot too. It takes 4-5 minutes in my experience as well.
If it takes longer, make sure you publish (save) your notebook first, then reload the page. Sometimes that refreshes the underlying Livy session. | The performance of your Apache Spark pool jobs depends on multiple factors. These performance factors include:
* How your data is stored
* How the cluster is configured (Small, Medium, Large)
* The operations that are used when processing the data.
Common challenges you might face include:
* Memory constraints due to improperly sized executors.
* Long-running operations
* Tasks that result in cartesian operations.
There are also many optimizations that can help you overcome these challenges, such as caching and allowing for data skew.
The following [article Optimize Apache Spark jobs (preview) in Azure Synapse Analytics](https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-performance) describes common Spark job optimizations and recommendations. |
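For illustration only, here is a sketch of session-level Spark properties that are commonly tuned for the sizing and skew issues mentioned above. These are standard open-source Apache Spark property names; whether a given Synapse pool honors each of them is an assumption to verify against the linked article:

```
spark.dynamicAllocation.enabled        true   # let Spark scale executors with load
spark.dynamicAllocation.minExecutors   1
spark.dynamicAllocation.maxExecutors   4
spark.sql.shuffle.partitions           200    # tune down for small data to cut task overhead
```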
19,101 | I would like to organize a game playing session for 140 people in the same room. I could just split them up into lots of small groups and organize a tournament. But then I started to wonder, are there any massively multiplayer non-online games? My restriction is that any material needed will have to be printed out. | 2014/09/26 | [
"https://boardgames.stackexchange.com/questions/19101",
"https://boardgames.stackexchange.com",
"https://boardgames.stackexchange.com/users/6874/"
] | Once the second Pledge enters the battlefield, the first one again starts granting protection from white to the Gnomes, which causes the second aura to fall off. This happens as a state-based action just after the second Pledge resolves. | See Kevin's answer: the second Pledge causes the first to also grant protection from white.
If somehow both Pledges are colorless, and then both stop being colorless at end of turn, then my original answer applies:
Both Pledges will go to the graveyard, for exactly the reason you state. As soon as the Gnome's effect ends, state-based effects are checked and both auras are enchanting an illegal permanent. The Gnomes have two "instances" of protection from white, each of which doesn't remove one copy of Pledge of Loyalty but does remove the other. Since all state-based effects are checked at once, both Pledges go to the graveyard simultaneously.
19,101 | I would like to organize a game playing session for 140 people in the same room. I could just split them up into lots of small groups and organize a tournament. But then I started to wonder, are there any massively multiplayer non-online games? My restriction is that any material needed will have to be printed out. | 2014/09/26 | [
"https://boardgames.stackexchange.com/questions/19101",
"https://boardgames.stackexchange.com",
"https://boardgames.stackexchange.com/users/6874/"
] | Let's walk through this. Our starting state is:
>
> **Gnome** is enchanted by **Colorless Pledge**.
>
>
>
The following events occur:
* You cast a second Pledge, targeting Gnome
* Pledge resolves and attaches to Gnome
The state of the board is now:
>
> **Gnome** is enchanted by **Colorless Pledge** and **White Pledge**. Gnome has Protection from White (except Colorless Pledge) and Protection from White (except White Pledge).
>
>
>
Now, the active player would receive priority, and so state based actions are checked.
* All White auras except for Colorless Pledge are moved to the graveyard. White Pledge is moved to the graveyard.
* All White auras except for White Pledge are moved to the graveyard. Colorless Pledge is unaffected.
Your final board state is:
>
> **Gnome** is enchanted by a **Colorless Pledge**
>
>
>
At the end of the turn, the Colorless Pledge becomes White again, but remains attached because an exception was specifically made for it. | See Kevin's answer: the second Pledge causes the first to also grant protection from white.
If somehow both Pledges are colorless, and then both stop being colorless at end of turn, then my original answer applies:
Both Pledges will go to the graveyard, for exactly the reason you state. As soon as the Gnome's effect ends, state-based effects are checked and both auras are enchanting an illegal permanent. The Gnomes have two "instances" of protection from white, each of which doesn't remove one copy of Pledge of Loyalty but does remove the other. Since all state-based effects are checked at once, both Pledges go to the graveyard simultaneously.
19,101 | I would like to organize a game playing session for 140 people in the same room. I could just split them up into lots of small groups and organize a tournament. But then I started to wonder, are there any massively multiplayer non-online games? My restriction is that any material needed will have to be printed out. | 2014/09/26 | [
"https://boardgames.stackexchange.com/questions/19101",
"https://boardgames.stackexchange.com",
"https://boardgames.stackexchange.com/users/6874/"
] | Once the second Pledge enters the battlefield, the first one again starts granting protection from white to the Gnomes, which causes the second aura to fall off. This happens as a state-based action just after the second Pledge resolves. | Let's walk through this. Our starting state is:
>
> **Gnome** is enchanted by **Colorless Pledge**.
>
>
>
The following events occur:
* You cast a second Pledge, targeting Gnome
* Pledge resolves and attaches to Gnome
The state of the board is now:
>
> **Gnome** is enchanted by **Colorless Pledge** and **White Pledge**. Gnome has Protection from White (except Colorless Pledge) and Protection from White (except White Pledge).
>
>
>
Now, the active player would receive priority, and so state based actions are checked.
* All White auras except for Colorless Pledge are moved to the graveyard. White Pledge is moved to the graveyard.
* All White auras except for White Pledge are moved to the graveyard. Colorless Pledge is unaffected.
Your final board state is:
>
> **Gnome** is enchanted by a **Colorless Pledge**
>
>
>
At the end of the turn, the Colorless Pledge becomes White again, but remains attached because an exception was specifically made for it.
225,782 | The human body contains mana, the life force that can be transferred for use as a source for magic. The individual's capacity for mana slowly grows with time as the person ages into adulthood, finally reaching its limits during middle age, and then steadily declines. A mage must take care to control the amount of mana they utilize, as using too much at one time can sap their life force and lead to their death. However, a mage can substitute the mana of others for a magical source with no cost to themselves, transferring all the risk to the victim and leaving their own mana intact for use in less risky endeavors. The Incan empire is a brutal regime that is built upon magecraft, the use of magic to shape the natural world. It expands its territory through the conquest of its neighbors, taking captives as slaves to use as fuel for their rituals. These victims are ritually sacrificed on an altar by Incan priests, who use the mana released from the death of the victim as fuel to power their spells. However, this has the potential of backfiring. This system creates many enemies among Incan neighbors, who may band together or fight to the death against them, knowing what will happen to their people. It also can lead to slave revolts among captives. Luckily, magecraft seemingly provides an alternative solution.
Within each male sperm cell is a microscopic organism known as animalcule, a complete preformed individual representing miniature versions of human beings. These preformed humans develop and enlarge into fully formed human beings through the process of conception and birth. Magecraft allows individuals to bypass this long and convoluted process to create life in order to create a perfect servant loyal to its creator, known as a homunculus. These homunculi are grown within a specially built cauldron designed to hold magic brews. This brew is filled with various ingredients, such as eye of newt, as well as other lay ingredients, such as cow intestines and the "seed" of a male. The resulting "child" emerges from this concoction as a fully grown adult, bound to obey its master's commands. Although they are intelligent, homunculi lack free will and individuality, making them the perfect servant.
The Incan empire has considered the potential of swapping out captives for homunculi for the purpose of using them in their rituals. On the surface, the benefits are obvious. Creating a literal slave race bound to your will would make methods of control much easier and cheaper, mitigating risks. As they are created from magic as adults, they contain all the mana they need at birth without having to go through the long timeframe of aging to an appropriate level, saving time. In addition, they can be grown in bulk, as the ingredients aren't exactly rare. As such, replacing captives with homunculi in the flesh economy is sound in theory. What would prevent this system from working? | 2022/03/16 | [
"https://worldbuilding.stackexchange.com/questions/225782",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/52361/"
] | You already established your rule why it wouldn't work:
>
> The individual's capacity for mana slowly grows with time as the
> person ages into adulthood, finally reaching its limits during middle
> age, and then steadily declines.
>
>
>
The questions that need to be answered are:
* What is mana? Actual life force, spirit/soul, blood, calories, bodyheat?
* How does one replenish mana? A potion, food and drink, time?
* Can homunculi ingredients be used as mana?
* Does a created homunculus only contain the ingredient's mana at birth with the capacity of an adult? | ### Despite being magical constructs, a crowd of Homunculi contain much less extractable Mana than any single human would
Life energy and magical energy are not the same in quality or quantity. Mages can cast powerful spells because a small amount of life energy can be turned into large amounts of magical energy. As you mention, overexerting can cause the loss of life, so the conversion is not infinite.
Homunculi are made of magical energy, not living energy, so even though they may be pumped full of different amounts of energy for different constructs and roles given, for the use of rituals, life energy simply works to a far greater effect.
In theory, yes, you could farm and mass produce, optimizing the process and finding cheaper alternatives for resource costs, and make a sustainable source of ritual fuel over a period of time.
Ooooor...
You can just sacrifice one person to get a month's worth of the same production in a few minutes. Even more so when you have plenty of living batteries free to be scooped up and held in cages for when they are needed.
### This may be a turning point in itself
Since the Sacrificial tribe isn't bothering to try farming mana from Homunculi, in order to resist the growing threat neighboring countries are pooling their resources to build a large-scale operation to get enough fuel to resist and fight back against the enemy.
Without sacrificing people themselves, it may be the only way to level the field in having enough mana to not be rolled over by sheer force.
They may discover methods of, rather than producing more mana, using it more effectively by building specialized Homunculi designed specifically to focus, multiply, or enhance mana being extracted from sacrificed constructs.
They learn in the process though that it is in some way tainted and causes environmental problems with local magic to harness mana in such a way. |
225,782 | The human body contains mana, the life force that can be transferred for use as a source for magic. The individual's capacity for mana slowly grows with time as the person ages into adulthood, finally reaching its limits during middle age, and then steadily declines. A mage must take care to control the amount of mana they utilize, as using too much at one time can sap their life force and lead to their death. However, a mage can substitute the mana of others for a magical source with no cost to themselves, transferring all the risk to the victim and leaving their own mana intact for use in less risky endeavors. The Incan empire is a brutal regime that is built upon magecraft, the use of magic to shape the natural world. It expands its territory through the conquest of its neighbors, taking captives as slaves to use as fuel for their rituals. These victims are ritually sacrificed on an altar by Incan priests, who use the mana released from the death of the victim as fuel to power their spells. However, this has the potential of backfiring. This system creates many enemies among Incan neighbors, who may band together or fight to the death against them, knowing what will happen to their people. It also can lead to slave revolts among captives. Luckily, magecraft seemingly provides an alternative solution.
Within each male sperm cell is a microscopic organism known as animalcule, a complete preformed individual representing miniature versions of human beings. These preformed humans develop and enlarge into fully formed human beings through the process of conception and birth. Magecraft allows individuals to bypass this long and convoluted process to create life in order to create a perfect servant loyal to its creator, known as a homunculus. These homunculi are grown within a specially built cauldron designed to hold magic brews. This brew is filled with various ingredients, such as eye of newt, as well as other lay ingredients, such as cow intestines and the "seed" of a male. The resulting "child" emerges from this concoction as a fully grown adult, bound to obey its master's commands. Although they are intelligent, homunculi lack free will and individuality, making them the perfect servant.
The Incan empire has considered the potential of swapping out captives for homunculi for the purpose of using them in their rituals. On the surface, the benefits are obvious. Creating a literal slave race bound to your will would make methods of control much easier and cheaper, mitigating risks. As they are created from magic as adults, they contain all the mana they need at birth without having to go through the long timeframe of aging to an appropriate level, saving time. In addition, they can be grown in bulk, as the ingredients aren't exactly rare. As such, replacing captives with homunculi in the flesh economy is sound in theory. What would prevent this system from working? | 2022/03/16 | [
"https://worldbuilding.stackexchange.com/questions/225782",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/52361/"
] | **TL/DR: Mana is not released from the death of the victim. Mana is borrowed from the victim, then given back. Mages also get exhausted after using too much mana.**
**To answer your question, stealing mana from homunculi is dangerous - when the homunculus dies (this cannot be avoided), the stolen mana becomes dead and can severely weaken or even kill the user.**
Mana is innate. Humans are born with it, and their capacity for mana grows with the passing of time until it reaches its peak - then slowly declines (much like, say, a person's height). Hence, the strongest mana comes from the user themselves, so the most powerful spells are cast only by the user's own mana.
However, it IS possible to take mana - not without consequences, of course. Mana, when borrowed from a victim, can be used to shield the user's own mana.
When a user takes a victim's mana, the user can cast spells with the victim's mana to protect his *own* mana. The victim, on the other hand, is left without mana. He is powerless until the user returns the victim's mana.
Here is where another factor should be added: exhaustion. During normal circumstances, mana can run out if a user casts too many spells. When the tank is empty, the user will have to wait before his mana regenerates again.
However, when a user takes a victim's mana, the stolen mana is immediately returned to the victim when the user's tank runs empty.
Mana can be returned in a variety of ways. Remember that humans only have so much capacity to store mana. The first way to return mana is willingly - that is, if a user decides to give the mana back to the victim. The second way is burnout: if a user runs out of borrowed mana, then that mana is returned to the victim by default. The third way is by death: if the mana is too much for the user to handle, the user dies because their capacity is not large enough to hold their own mana *and* the stolen mana at the same time.
This is one of the main reasons why homunculi are not used for mana swapping. Homunculi already have a full capacity for mana when they are created; hence, many of the mages who steal mana from homunculi die when their capacity is overstuffed.
But what happens to the mages who *don't* die that way? Let's say a mage was actually strong enough to hold all the homunculus' mana at once. What would happen to him? This brings me to my next question.
What if the victim dies while the user still has the victim's mana? The user can't return the stolen mana to anyone, so the dead mana is stuck inside the user's body. Here, one of three things can happen.
Firstly, the dead mana can eat away at the user's own mana, so as all the mana decays, the user is eventually left with nothing.
Secondly, the dead mana could merge with the user's own mana, resulting in an abomination. In this case, the user can no longer cast spells correctly due to his warped mana.
Third (and arguably the most gruesome), the dead mana becomes infected and spreads throughout the rest of the user's body, corrupting his mana until he dies a slow and painful death.
Now, it wouldn't be outrageous to say that homunculi die when their mana is stolen - after all, mana is their very foundation. Therefore, mages tend to stay away from swapping their mana with homunculi. The consequences are simply too grave. | Mana from different sources do different things.
================================================
Why not use animal sacrifices instead of humans? Because while each animal's death releases a comparable *amount* of mana as a human's death, it's not the sort of mana the Sacrificer tribe needs for those spells and rituals where they use human sacrifices.
Animal mana is useful...to an extent
------------------------------------
You can draw moderate amounts of mana from animals (and humans) without killing them and small amounts without harming them. That's one reason why witches and wizards sometimes have familiars - they can tap into their familiars' mana for certain spells and brews and rituals.
The Sacrificer tribe does, in fact, use the mana from killing some animals for other spells, just like many other tribes do. But for example when you want to unleash an earthquake on a neighboring tribe, augury mana won't do, nor will clairvoyance mana, so you can't use the mana released by killing birds and rabbits and cows and sheep. A bear might release suitable mana, but they're dangerous and difficult to capture alive.
Human mana is very versatile
----------------------------
Mana drawn from humans is very versatile and can be channeled and transformed into mana for most any magical effect. Much like stem cells can develop into any type of cell the body needs. Once transformed, that mana is no longer versatile. You might be able to reuse some of it for something similar, but you can't turn animation mana into scrying mana or fire mana into wind mana.
Homunculus mana is the wrong kind
---------------------------------
The mana used to create a homunculus is the sort used for animating inanimate objects and growing plants and animals. Killing/destroying it releases that mana, but it's no longer versatile. It can only be used for similar uses - growing crops, moving statues, etc. And you get less usable mana out than you put in.
In theory you could sacrifice a large group of homunculi and reapply some of the released mana to animate a giant statue or grow a vegetation wall. But that's not the sort of magic the Sacrificer tribe is interested in doing. Nor are they patient enough to build up the needed homunculi for the task when there's an easier, faster alternative. |
225,782 | The human body contains mana, the life force that can be transferred for use as a source for magic. The individual's capacity for mana slowly grows with time as the person ages into adulthood, finally reaching its limits during middle age, and then steadily declines. A mage must take care to control the amount of mana they utilize, as using too much at one time can sap their life force and lead to their death. However, a mage can substitute the mana of others for a magical source with no cost to themselves, transferring all the risk to the victim and leaving their own mana intact for use in less risky endeavors. The Incan empire is a brutal regime that is built upon magecraft, the use of magic to shape the natural world. It expands its territory through the conquest of its neighbors, taking captives as slaves to use as fuel for their rituals. These victims are ritually sacrificed on an altar by Incan priests, who use the mana released from the death of the victim as fuel to power their spells. However, this has the potential of backfiring. This system creates many enemies among Incan neighbors, who may band together or fight to the death against them, knowing what will happen to their people. It also can lead to slave revolts among captives. Luckily, magecraft seemingly provides an alternative solution.
Within each male sperm cell is a microscopic organism known as animalcule, a complete preformed individual representing miniature versions of human beings. These preformed humans develop and enlarge into fully formed human beings through the process of conception and birth. Magecraft allows individuals to bypass this long and convoluted process to create life in order to create a perfect servant loyal to its creator, known as a homunculus. These homunculi are grown within a specially built cauldron designed to hold magic brews. This brew is filled with various ingredients, such as eye of newt, as well as other lay ingredients, such as cow intestines and the "seed" of a male. The resulting "child" emerges from this concoction as a fully grown adult, bound to obey its master's commands. Although they are intelligent, homunculi lack free will and individuality, making them the perfect servant.
The Incan empire has considered the potential of swapping out captives for homunculi for the purpose of using them in their rituals. On the surface, the benefits are obvious. Creating a literal slave race bound to your will would make methods of control much easier and cheaper, mitigating risks. As they are created from magic as adults, they contain all the mana they need at birth without having to go through the long timeframe of aging to an appropriate level, saving time. In addition, they can be grown in bulk, as the ingredients aren't exactly rare. As such, replacing captives with homunculi in the flesh economy is sound in theory. What would prevent this system from working? | 2022/03/16 | [
"https://worldbuilding.stackexchange.com/questions/225782",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/52361/"
] | Put quite simply, sacrificing an homunculus will not gain a net increase in mana.
Creating an homunculus costs mana. Sacrificing an homunculus gains mana. However, because no process is 100% efficient, it takes more mana than a homunculus contains to create one, and sacrificing it yields less mana than it contains. The cycle is therefore a net loss.
Sacrificing humans, for all that they tend to not want to be sacrificed, yields a net gain, even if relatively small due to the inevitable devaluation of human life that sacrificing people in job lots would cause. It doesn't cost mana to grow a human.
The math is simple: Human sacrifice = net gain. Homunculus sacrifice = net loss. No doubt it was tried, and found to be disappointing. | A child's mana (or a homunculus's) is linked to their parent's. This is a part of the natural order that helps parents train their children in magic, and also is partially responsible for the devastation felt when losing a child. Normally a parent may have a dozen children with a few surviving to adulthood. The mana drain on the parent from these few losses spread out over several years is noticeable, but recoverable. However, the natural order is less forgiving at scale, when mages start trying to use bulk homunculi. Each homunculus spent drains the parent a little bit, and too many too quickly can be fatal. Thus, a hybrid system was created where slaves are captured, and they are used to generate homunculi, who are not linked to the mage syphoning their mana.
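The net-gain/net-loss argument can be made concrete with toy numbers (the efficiency figures below are illustrative assumptions, not anything established in the setting): because no conversion is perfectly efficient, a create-then-sacrifice cycle always loses mana, while a captive costs the mage nothing to "grow".

```python
# Illustrative efficiencies (assumed): no magical process is 100% efficient.
CREATION_EFFICIENCY = 0.8   # fraction of spent mana that ends up inside the homunculus
SACRIFICE_EFFICIENCY = 0.8  # fraction of contained mana recovered on sacrifice


def net_mana_from_homunculus(mana_spent: float) -> float:
    """Recovered minus invested mana for one create-and-sacrifice cycle."""
    contained = mana_spent * CREATION_EFFICIENCY
    recovered = contained * SACRIFICE_EFFICIENCY
    return recovered - mana_spent


def net_mana_from_captive(contained: float) -> float:
    """A captive costs the mage no mana to grow, so any yield is pure gain."""
    return contained * SACRIFICE_EFFICIENCY


print(net_mana_from_homunculus(100.0))  # -36.0: a net loss whenever efficiencies < 1
print(net_mana_from_captive(100.0))     # 80.0: always a net gain
```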
225,782 | The human body contains mana, the life force that can be transferred for use as a source for magic. The individual's capacity for mana slowly grows with time as the person ages into adulthood, finally reaching its limits during middle age, and then steadily declines. A mage must take care to control the amount of mana they utilize, as using too much at one time can sap their life force and lead to their death. However, a mage can substitute the mana of others for a magical source with no cost to themselves, transferring all the risk to the victim and leaving their own mana intact for use in less risky endeavors. The Incan empire is a brutal regime that is built upon magecraft, the use of magic to shape the natural world. It expands its territory through the conquest of its neighbors, taking captives as slaves to use as fuel for their rituals. These victims are ritually sacrificed on an altar by Incan priests, who use the mana released from the death of the victim as fuel to power their spells. However, this has the potential of backfiring. This system creates many enemies among Incan neighbors, who may band together or fight to the death against them, knowing what will happen to their people. It also can lead to slave revolts among captives. Luckily, magecraft seemingly provides an alternative solution.
Within each male sperm cell is a microscopic organism known as animalcule, a complete preformed individual representing miniature versions of human beings. These preformed humans develop and enlarge into fully formed human beings through the process of conception and birth. Magecraft allows individuals to bypass this long and convoluted process to create life in order to create a perfect servant loyal to its creator, known as a homunculus. These homunculi are grown within a specially built cauldron designed to hold magic brews. This brew is filled with various ingredients, such as eye of newt, as well as other lay ingredients, such as cow intestines and the "seed" of a male. The resulting "child" emerges from this concoction as a fully grown adult, bound to obey its master's commands. Although they are intelligent, homunculi lack free will and individuality, making them the perfect servant.
The Incan empire has considered the potential of swapping out captives for homunculi for use in their rituals. On the surface, the benefits are obvious. Creating a literal slave race bound to your will would make methods of control much easier and cheaper, mitigating risks. As they are created from magic as adults, they contain all the mana they need at birth without having to go through the long timeframe of aging to an appropriate level, saving time. In addition, they can be grown in bulk, as the ingredients aren't exactly rare. As such, replacing captives with homunculi in the flesh economy is sound in theory. What would prevent this system from working? | 2022/03/16 | [
"https://worldbuilding.stackexchange.com/questions/225782",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/52361/"
] | **Good luck sacrificing homunculi.**
They are freaking incredibly tough. You think one is dead but you have got no mana from it. Then you check and not only is it not dead but it is getting back up. Excessive measures to ensure death prove not excessive enough. People tell stories about times you get the mana from the sacrifice and then later you lose it again because the homunculus has claimed it back and is going about its business.
People die so easily in comparison. The priests got lazy - a little stabby stabby and you got your mana. Easier to stay with people. | **MAKE THEM FEEL IT**
The homunculus suffers for its master. It does work, it takes hits. It dies for the master. But it still feels all the pain because someone has to.
In rare cases the pain the homunculus feels can be shared by the master.
It most commonly happens when the homunculus is unconscious. If it is unconscious but not yet destroyed, then any suffering its body feels is transferred to the master. Although the physical impact is absorbed, the homunculus is not consciously present to absorb the subjective experience of the impact, so the master feels the pain instead. E.g. the master may feel his leg to be broken even though it remains intact. This is more severe with homunculi that have already endured excessive suffering, and whose masters have accrued much karmic debt.
Forming a connection between masters and their homunculi can increase the cost of the sacrifice to the point where it is no longer economical to sacrifice them for mana, because it also costs life force. Some people in your story might do it, but it's not generally profitable. You don't get back more than the value of the thing you sacrificed, or else the system is broken. |
225,782 | (same question as above) | 2022/03/16 | [
"https://worldbuilding.stackexchange.com/questions/225782",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/52361/"
] | Disaster followed.
They sacrificed homunculi, and there were earthquakes, storms, drought, and other disasters. Whether the gods were offended, or the mana was bad, or the whole thing was a total coincidence, anyone who suggests trying it again will find himself the next sacrifice. | You already established the rule for why it wouldn't work:
>
> The individual's capacity for mana slowly grows with time as the
> person ages into adulthood, finally reaching its limits during middle
> age, and then steadily declines.
>
>
>
The questions that need to be answered are:
* What is mana? Actual life force, spirit/soul, blood, calories, body heat?
* How does one replenish mana? A potion, food and drink, time?
* Can homunculi ingredients be used as mana?
* Does a created homunculus only contain the ingredients' mana at birth, with the capacity of an adult? |
225,782 | (same question as above) | 2022/03/16 | [
"https://worldbuilding.stackexchange.com/questions/225782",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/52361/"
] | Homunculus mana makes the absorber servile, much like the homunculus it comes from. Even the most ruthless tyrant ends up a will-less servant carrying everyone's water after they try it. Not ideal for ambitious young Incas.
You could riff on it and have it that someone who tries it actually ends up as a very nice, humble person. | You already established the rule for why it wouldn't work:
(same answer as above) |
225,782 | (same question as above) | 2022/03/16 | [
"https://worldbuilding.stackexchange.com/questions/225782",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/52361/"
] | (same answer as above) | **Homunculi don't have full mana because they're created only from sperm.**
Basing this on @Ivella's comment. The homunculi can function well for mundane purposes, but because they are not created from both sperm and egg, they don't have the full mystical characteristics of a natural creature.
It's even possible that the mages aren't even clear on this. In a pre-technological culture that believes, like @Ivella says, that a person "comes from a sperm" (actually they'd say "from the semen", since they haven't observed sperm cells), they don't realize there is a female contribution which is also crucial. |
225,782 | (same question as above) | 2022/03/16 | [
"https://worldbuilding.stackexchange.com/questions/225782",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/52361/"
] | (same answer as above) | The most direct approach would be that, due to their different origins, homunculi don't possess mana, in the same way they don't possess free will.
If that doesn't work, they can have only the minimum needed for survival. Any working would be fatal to them, and produce a barely useful minimum of mana, possibly even less than it took to create them. Combine this with the effort of creating them, and you'd have to breed (and temporarily feed) a battalion of single-use homunculi for a single spell. |
225,782 | (same question as above) | 2022/03/16 | [
"https://worldbuilding.stackexchange.com/questions/225782",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/52361/"
] | (same answer as above) | He's A Soulja (But Not Really)
------------------------------
So, you want to know why your artificially created being who has no free will or individuality cannot be used to generate mana? It is simply because, unlike humans, homunculi do not have souls. And every mage has been told since they were a wee initiate to the arcane arts that the soul is where mana is generated.
To be clear, "soul" in this case does not necessarily mean anything other than "that thing which lets humans generate usable mana". Think of it as shorthand for some known but hard to study phenomenon. In this case, your mages know that humans generate mana that they can tap into via sacrifice but homunculi do not. So there is obviously some difference between the two. The other differences are having free will or not, and how they are created/born. Taken altogether it would be pretty simple for a mage to determine "The homunculi process causes them to be born missing some fundamental aspect of humanity which allows us to generate mana. We call this aspect 'the soul'".
The nice thing about this from a storytelling perspective is that it opens up a couple of different plot hooks you could explore. Maybe someone creates a homunculus that is able to generate mana and has to keep the knowledge secret from his rivals. Maybe a homunculus develops free will but has to hide it from its evil master. Maybe the real problem with the creation process is a lack of feminine energy, which gets discovered by a rogue female mage. You don't even have to really delve into the religious or spiritual ramifications of souls, since they are mostly used as shorthand for "human sapience". |
225,782 | (same question as above) | 2022/03/16 | [
"https://worldbuilding.stackexchange.com/questions/225782",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/52361/"
] | Put quite simply, sacrificing an homunculus will not gain a net increase in mana.
Creating an homunculus costs mana. Sacrificing an homunculus gains mana. However, because no process is 100% efficient, it takes more mana than a homunculus contains to create one, and sacrificing it yields less mana than it contains. The cycle is therefore a net loss.
Sacrificing humans, for all that they tend to not want to be sacrificed, yields a net gain, even if relatively small due to the inevitable devaluation of human life that sacrificing people in job lots would cause. It doesn't cost mana to grow a human.
The math is simple: Human sacrifice = net gain. Homunculus sacrifice = net loss. No doubt it was tried, and found to be disappointing. | Mana only exists in free-willed spirits. Male sperm contains the animalcule, but the female egg contains the anima that inhabits the animalcule. If you try to create a homunculus out of both male and female gametes, it turns out to be no different than a fast-grown, rebellious normal human.
But that is not all: as I said, mana is tied to the free-willed spirit itself, to the point that it is almost inversely proportional to innocence. So if the spirit (or mind, if you are a materialist) has not developed with the magically grown body, it will have the mana of a newborn baby. |
38,563,204 | I am searching for a way to make my table sortable by clicking on the column names, but I cannot figure it out. I tried different ways that I found on the internet, like installing angular2-datatable (npm install angular-data-table) and doing an import {DataTableDirectives} from 'angular2-datatable/datatable'; and installing easy-table (npm install ng2-easy-table), but they didn't work for me. I am also looking for a datepicker that works in all browsers but couldn't find one.
Do you guys have any suggestions? I am working with Typescript and Angular2.
Please help!! | 2016/07/25 | [
"https://Stackoverflow.com/questions/38563204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6434325/"
] | The best way is to use the PrimeNG grid. That makes for a better UI.
<http://www.primefaces.org/primeng/#/> | I am also unable to install angular-datatable because of a permission-denied error while accessing the **.config** directory.
You can add using the following command:
**sudo chown -R user\_name:group\_name /path/to/home/directory/.config**
This will transfer ownership to **user\_name**. Let me know if you find any other errors. |
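None of the libraries above is strictly required for a simple sortable table: a click-to-sort handler can be written by hand. The sketch below is a minimal illustration in plain TypeScript; the `Row` type, the `sortRows` name, and the sample data are assumptions for the example, not part of any library's API.

```typescript
// Illustrative sketch: sort table rows by a clicked column key.
type Row = Record<string, string | number>;

function sortRows(rows: Row[], key: string, ascending: boolean): Row[] {
  // Copy first so the original array (e.g. component state) stays untouched.
  return [...rows].sort((a, b) => {
    // localeCompare with { numeric: true } orders "27" before "31" numerically.
    const cmp = String(a[key]).localeCompare(String(b[key]), undefined, { numeric: true });
    return ascending ? cmp : -cmp;
  });
}

// A column-header click handler would toggle the direction and re-assign:
const people: Row[] = [
  { name: "Carol", age: 31 },
  { name: "Alice", age: 27 },
  { name: "Bob", age: 45 },
];
const byName = sortRows(people, "name", true);
```

In an Angular 2 component, a `(click)` binding on each `<th>` could then call something along these lines and re-assign the sorted array to the property the template iterates over.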
4,740,178 | What provider and driver offer the best performance when connection to SQL Server using ADO?
I'm connecting MS Access 2007 to SQL Server 2008.
Provider Options:
1. OLE DB provider for ODBC (MSDASQL.1) (default provider)
2. OLE DB provider for SQL Server (SQLOLEDB)
3. There may be other options that I'm not aware of
Driver Options:
1. SQL Server (version 2000.85.1132.00 - SQLSRV32.DLL 4/14/2008)
2. SQL Server Native Client 10.0 (version 2007.100.2531.00 SQLNCLI10.DLL 3/30/2009)
3. There may be other options that I'm not aware of. | 2011/01/19 | [
"https://Stackoverflow.com/questions/4740178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/504958/"
] | You should test it in your specific environment to be sure. Whatever the performance differences are, they should be negligible. | Microsoft's Official Statement is here:
<http://msdn.microsoft.com/de-de/library/ms130978.aspx>
In short: For an ADO-Connection, use SQLOLEDB.
MSDASQL has been deprecated for a long, long time and does not handle varchar(max) fields well, for example.
When using the Native Client, you will have to specify DataTypeCompatibility=80, which removes many of the new features anyway, so you will gain little. Also, the Native Client will probably not be on your client computers by default, so you will need to install it. |
48,415 | Images, as I call them, are an important part of your prose.
Now, let's look at examples of bad images:
From [Onision's (from now on, Onii-san) book, Reaper's Creek](https://www.youtube.com/watch?v=oFGMBvIJ0iQ) :
>
> Maybe that current lead him to a gathering of logs, and assuming he had not already drowned, he was sucked under the logs, causing him to rapidly cease existing in the world as we knew it.
>
>
>
I might add other examples later.
I think you get the gist of it. An image here isn't painted with just colors but with mental connections and **associations**.
Unfortunately, Onii-san's writing is beyond mortifyingly atrocious, and while the terrible mental image was easy to spot, it didn't imply what would make for a good mental image. In fact, when I found a good mental image it also didn't help:
>
> I welcomed the silence like a warm blanket on a cold night.
>
>
>
Even a broken clock shows the right time twice a day.
**So, what makes a mental image good, impactful and vivid, instead of a laughing stock?** | 2019/10/06 | [
"https://writers.stackexchange.com/questions/48415",
"https://writers.stackexchange.com",
"https://writers.stackexchange.com/users/25507/"
] | What makes it good is a good use of sensory information that the reader can recognize: "a warm blanket on a cold night" invokes a particular sensory feeling we can relate to.
It has to be consistent with the character, and what the character knows, but ALSO something the reader can recognize. Saying *"The animal reminded him of a vinglebeast"* may be consistent with his experience, but doesn't help the reader at all.
It should not be overblown, or "purple" prose, meaning prose that is so flowery, ornate, or poetic that it draws attention to itself, thus breaking the reader's reverie to focus on the description. Two ways to avoid this: fewer adjectives and shorter length; readers expect descriptive prose to be easily digestible, not a paragraph long.
On the other end of the spectrum, it should not be cliché, a description we have heard so often it **also** breaks reverie. "It hit him like a ton of bricks" was certainly original at some point, and it must have been wildly successful imagery to be quoted so often that it **became** a cliché, but at this point it is just tired.
Which leaves you in the middle somewhere, original enough to avoid cliché, but not **so** original the reader notices it, or has to parse it, or for whatever reason it breaks their reverie.
It needs to aid the reader's imagination of the scene, or mental or physical state of the character. What is the character feeling? What sensation or emotion? What do they see? What do they smell?
These are some guidelines of what NOT to do. There isn't exactly a formula for good imagination aids, that is part of the art of writing and applying your own imagination. | Descriptive Words
-----------------
You are looking for descriptive adjectives and adverbs. If you look at the two passages you've provided, you'll notice that the first is almost devoid of adjectives and adverbs, while the second uses description, as well as an entire adverbial phrase (I think that's what it's called - essentially everything after 'silence').
Why does this work? Without description, you are using nouns and verbs. X happened, Y failed, and Z began. With description, you give the reader an idea of *what something was like*. X happened quickly. Y failed completely. Z began with gusto. These are of course basic examples.
Which Words?
------------
Some words are better than others. I remember one example on this very topic I once read. It was an excerpt (I'll provide a link if I can find it), about a steamboat owner seeing his new steamboat in the dark of night for the first time. If I just stick to the nouns and verbs, you have something like this:
>
> He saw the boat, moored at the dock, its paint and smokestacks visible against the night.
>
>
>
Add in some basic adjectives and adverbs, as well as some more descriptive verbs, and things get better:
>
> He could just see his new steamboat, moored at the dock hidden in the rushes, its black paint and smokestacks gleaming in the starlight.
>
>
>
A bit better. We're starting to get a good picture. Words/phrases like 'could just see', 'gleaming', 'black paint', and 'starlight' really help. But there's one final trick. If you assume that the story you are telling is related from a PoV (which it should be), then it makes sense that the PoV's emotions will color the narrative, right? How do you show that? Pause the tale and tell the reader what the PoV character is thinking? Have another character comment on the PoV's emotional state? There's a better way: use only descriptive words which *suggest **how*** the character feels. In the excerpt, the owner feels a great sense of pride in his new boat. Here is the actual excerpt, as best as I can recall. Pay attention to the choice of words, particularly in the last half:
>
> He could just make out his new steamboat, moored at the dock hidden in the rushes. Its black paint gleamed in the starlight, and its tall smokestacks reached towards the sky, threatening to pierce the deep blackness.
>
>
>
Words like 'gleamed', 'tall', 'threatening to pierce', and even 'reached' to some extent all suggest how the owner sees his new boat: with pride, as something powerful.
How does this help you?
-----------------------
Writing this way does take some practice. I would certainly recommend you read some good classical literature. I'm talking things like Charles Dickens, Robert Louis Stevenson, Jane Austen, and all the other greats. They use language as a tool (sometimes a bit too much - but it's a wealth of information for your question), and there's a lot you can learn from them.
When you are writing, try to determine what you are trying to *say* about the nouns and verbs you are using. Even if you aren't trying to convey the emotions of a character, you should still want to invoke emotions in your reader. That's ultimately what draws a picture: enough description to get the reader's imagination working. Take your first passage. Someone drowns in logs. But how are we, the readers, supposed to feel? Afraid? Sad? Maybe happy for some reason? Identify that, and identify the language which will get that across. I find using a resource like thesaurus.com works great for finding words to convey what you want.
Best of luck in your writing! |
307,842 | My wife needs to know, for academic reasons, what to call such a game. She's writing about using games to learn things, so here's your chance to help improve the general perception of video games ;)
In my childhood, we called the games within a game "minigames" or "the puzzle games inside the game." But I don't know what to call the "container" game, which has the other (mini)games within it.
Examples of games that contain others:
* Final Fantasy 7 (PlayStation 1) contains a Chocobo race, a motorcycle race with Cloud or with a submarine, etc.
* Star Ocean - The Second Story (PlayStation 1) contains a bunny race, a cooking competition, etc.
What do you call the overarching game? | 2017/04/27 | [
"https://gaming.stackexchange.com/questions/307842",
"https://gaming.stackexchange.com",
"https://gaming.stackexchange.com/users/5353/"
] | TLDR: You wouldn't call it anything special.
The vernacular in regards to the overarching game is to use a possessive when referring to the minigame. So for instance, Final Fantasy's Chocobo Races. There is no special word or phrase given to the "main game" as you say.
[The wiki entry for minigame](https://en.wikipedia.org/wiki/Minigame) does not mention a phrase or word to describe the "main game" either. [IGN](http://www.ign.com/articles/2013/12/22/the-best-games-within-games) has a phrase that pertains to the minigame calling it "The Game Within The Game" but no phrase that describes the main game either. After further research I am unable to find anything suitable.
Perhaps something to look into would be: [Gamification](https://en.wikipedia.org/wiki/Gamification). I know that this doesn't have to do with the name of the overarching game, but it could perhaps lead you to some sort of answer.
This is a little different from including things like Chocobo Racing or Gwent (in The Witcher series) inside a larger game. Gamification basically means putting game elements around something that usually doesn't have them, to increase user enjoyment or engagement.
**Minigame Compilations**
-------------------------
These are games where the entire goal/intent of the game is to overcome a wide variety of puzzles and challenges.
There may be little to no game mechanics *outside* of the puzzles/challenges. Plot/Story may non-existent, or only exist in a bare-minimum fashion to drive some sort of progression & unlock more or harder minigames/challenges.
While Wikipedia calls these 'Minigame Compilations', a large subset of games in this category would also be classified as '**Party Games**', and purely singleplayer versions may also be referred to as '**Puzzle Games**'.
Note that not all 'group-play' games are minigame compilations (eg. Mario Kart), nor does every Puzzle game feature multiple different minigames (eg. SpaceChem), so your partner may not be able to use 'Party Game' or 'Puzzle Game' for her academic paper.
Some examples include:
* [Wii Sports](https://en.wikipedia.org/wiki/Wii_Sports) (Sport-based minigames)
* [Crash Bash](https://en.wikipedia.org/wiki/Crash_Bash) (Party Game, loose single-player story)
* [Mario Party](https://en.wikipedia.org/wiki/Mario_Party) (Party Game)
* [Gizmos & Gadgets!](https://en.wikipedia.org/wiki/Gizmos_%26_Gadgets!) - (Educational, Science-based minigames, loose singleplayer story)
* [1-2-Switch](https://en.wikipedia.org/wiki/1-2-Switch) (Party Game)
See also: Wikipedia's [List of Minigame Compilations](https://en.wikipedia.org/wiki/Category:Minigame_compilations)
---
Game Packs & Collections
------------------------
This classification is where the outer game acts as more of a 'launcher' for a collection of larger games (rather than minigames/challenges).
These types of games are generally called '**Packages**', '**Packs**', **Compilations** or '**Collections**', the marketing folk among us would probably call it '**[Product Bundling](https://en.wikipedia.org/wiki/Product_bundling)**'.
Some examples include:
* [The Jackbox Party Pack](https://en.wikipedia.org/wiki/The_Jackbox_Party_Pack) - a collection of trivia-likes and other group/party games.
* [Rare Replay](https://en.wikipedia.org/wiki/Rare_Replay) - a 30th anniversary collection of Rare video games ([Rare](https://en.wikipedia.org/wiki/Rare_(company)) the company, not as in common vs rare).
* [NES](https://en.wikipedia.org/wiki/NES_Classic_Edition)/[SNES](https://en.wikipedia.org/wiki/Super_NES_Classic_Edition) Mini/Classics - reproduction consoles loaded with classic Nintendo titles.
* [Mega Games 6](https://segaretro.org/Mega_Games_6) - A '6 in 1' compilation cartridge for the Sega Genesis/Mega Drive
* Various 'Sega Mega Drive Genesis Collections/Packs', eg [For PS2](https://en.wikipedia.org/wiki/Sega_Genesis_Collection), and [For Windows/Steam](https://en.wikipedia.org/wiki/Sega_Genesis_Classics_Pack). - Because one can not own enough copies of Sega games.
See also: Wikipedia's [List of Video Game Compilations](https://en.wikipedia.org/wiki/Category:Video_game_compilations).
The features of the 'outer game' can vary greatly, depending on the implementation. Most will have basic 'launcher' stuff like the ability to browse games and tweak some minor graphics and sound options, but other features *can* include:
* **Advanced graphics & sound** - Scanlines & filters, colour/palette swapping, zoom/scaling, aspect ratio manipulation etc
* **Achievements and Challenge Modes**
* **Progression/unlocks** - eg. 'behind the scenes', concept art, cheats, mods/tweaks, bonus levels and even occasionally entire 'hidden' games.
* **Pause, Save state and 'rewind' functionality** that may have not been possible on the original game.
* **Cheat engines and Mod integration**
---
Conclusion
----------
While larger games containing incidental minigames may not have an explicit name, there are games out there that almost purely focus on this idea, and those **can** be grouped and named: '**Minigame Compilation**', or loosely 'Party'/'Puzzle' Games' would generally fit this scenario. While not quite what you're after, Game Compilation or Pack is also worth noting due to following a similar concept but with more rich games than just puzzles and challenges. |
1,717,805 | Here is the question I have been posed:
"What is the best way to handle invalid credentials when logging into a site? Do we tell the user if their username was invalid? Or likewise if their password is invalid?"
I did some searching, but I'm having trouble finding a site with some best practices for this, to refer them to.
My Question for the community here:
Does anyone here know a site that has some good guidelines/best practices for this? | 2009/11/11 | [
"https://Stackoverflow.com/questions/1717805",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9406/"
] | Just say that their credentials were incorrect.
Telling them that one piece of information was correct means that you're helping hackers discover user names at the very least.
If I enter:
>
> admin
>
>
> Password1
>
>
>
for example and I get a response that the password was invalid, I now know that there is a user called "admin" on your system. I can now just vary the password in an effort to gain access.
If the response was "invalid user name or password" then I'm no wiser about whether there is a user called "admin" or not. | The most authoritative discussion I can find on this issue is from the "Web Security Testing Cookbook," Recipe 12.8.
The book points out:
1. You should provide a generic message indicating either the username or password was incorrect; revealing that just the username is correct allows attackers to enumerate valid usernames.
2. Account lockout functionality, after X number of tries, also carries the same risk; attackers can lockout accounts to find out if the usernames were valid or not.
You can read the whole "recipe" via Google Books here:
<http://books.google.com/books?id=VmrSJ3V-s_MC&lpg=PA249&ots=cU7V62FQOA&dq=web%20security%20reveal%20valid%20username&pg=PA248#v=onepage&q=web%20security%20reveal%20valid%20username&f=false> |
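The generic-message advice above can be sketched in a few lines. This is a hypothetical illustration (the user store, hashing scheme, and messages are made up; a real system would use a salted password KDF such as bcrypt, not bare SHA-256):

```python
import hashlib
import hmac

# Hypothetical in-memory user store: username -> password hash.
_USERS = {"admin": hashlib.sha256(b"correct horse").hexdigest()}

GENERIC_ERROR = "Invalid username or password."

def login(username, password):
    # Compare against a dummy hash even for unknown users, so the
    # response time does not leak whether the username exists.
    stored = _USERS.get(username, hashlib.sha256(b"dummy").hexdigest())
    supplied = hashlib.sha256(password.encode()).hexdigest()
    ok = hmac.compare_digest(stored, supplied) and username in _USERS
    # Same message for "bad username" and "bad password":
    return "Welcome!" if ok else GENERIC_ERROR
```

Both failure modes produce an identical response, so an attacker cannot tell whether "admin" exists.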
102,183 | I am looking for best practices that big organizations follow for code check-in and validations.
Currently we follow these steps,
- Developer writes code
- Developer does some initial tests
- Code is awaiting validation now
- Technical lead reviews the code (possible bugs, see if coding convention is followed etc)
- Once approved by technical lead, the code goes in QA state
- Once QA approves, the code is checked into the trunk.
We are now moving to a new project and I was looking for some best practices that would ease the process. We have custom made software that maintains the code status.
Thanks,
Ali | 2011/08/18 | [
"https://softwareengineering.stackexchange.com/questions/102183",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34681/"
] | It seems like bad practice to me that there are no check-ins until the code is production ready. I would have a production branch and only cut over to it when the code has gone through all of those steps. My version of your process would be something like this:
* Developer writes code, checks it in.
* Does initial testing, checks in fixes
* Code gets reviewed, suggested changes (if any) are checked in.
* Reviewed by QA, any changes/fixes here are checked in.
* Code is cut over to the main branch, ready to go run free in the wild.
In your example, it sounds like a check in would only be made once every few days, where checking in is something you should be doing multiple times a day. | The code should always be in source control. The new code can be committed to a branch, reviews, changes, improvements are done there.
QA builds can be made from the branch.
After final approval, merge to trunk.
102,183 | I am looking for best practices that big organizations follow for code check-in and validations.
Currently we follow these steps,
- Developer writes code
- Developer does some initial tests
- Code is awaiting validation now
- Technical lead reviews the code (possible bugs, see if coding convention is followed etc)
- Once approved by technical lead, the code goes in QA state
- Once QA approves, the code is checked into the trunk.
We are now moving to a new project and I was looking for some best practices that would ease the process. We have custom made software that maintains the code status.
Thanks,
Ali | 2011/08/18 | [
"https://softwareengineering.stackexchange.com/questions/102183",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34681/"
] | It seems like bad practice to me that there are no check-ins until the code is production ready. I would have a production branch and only cut over to it when the code has gone through all of those steps. My version of your process would be something like this:
* Developer writes code, checks it in.
* Does initial testing, checks in fixes
* Code gets reviewed, suggested changes (if any) are checked in.
* Reviewed by QA, any changes/fixes here are checked in.
* Code is cut over to the main branch, ready to go run free in the wild.
In your example, it sounds like a check in would only be made once every few days, where checking in is something you should be doing multiple times a day. | I'd agree that code should be checked in as often as possible, but don't allow check ins that would break the build. Continuous integration is a very good tool to use as well IMO. Requiring all checkins pass the build process and unit tests (and even test coverage if possible) is a good way to ensure that people aren't just throwing stuff over the wall.
Prototypes and other long-running features should go into separate branches as needed and could have less strict rules. |
102,183 | I am looking for best practices that big organizations follow for code check-in and validations.
Currently we follow these steps,
- Developer writes code
- Developer does some initial tests
- Code is awaiting validation now
- Technical lead reviews the code (possible bugs, see if coding convention is followed etc)
- Once approved by technical lead, the code goes in QA state
- Once QA approves, the code is checked into the trunk.
We are now moving to a new project and I was looking for some best practices that would ease the process. We have custom made software that maintains the code status.
Thanks,
Ali | 2011/08/18 | [
"https://softwareengineering.stackexchange.com/questions/102183",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34681/"
] | It seems like bad practice to me that there are no check-ins until the code is production ready. I would have a production branch and only cut over to it when the code has gone through all of those steps. My version of your process would be something like this:
* Developer writes code, checks it in.
* Does initial testing, checks in fixes
* Code gets reviewed, suggested changes (if any) are checked in.
* Reviewed by QA, any changes/fixes here are checked in.
* Code is cut over to the main branch, ready to go run free in the wild.
In your example, it sounds like a check in would only be made once every few days, where checking in is something you should be doing multiple times a day. | I agree with the others that only a single check in is not very good. You should check in all the time, unfortunately "Enterprise" VCS seem to make it difficult to check in, which is near-suicide as far as I can tell. Major integration hassles invariably result.
One thing I would add: do a "diff" on the code in version control, and the code about-to-be-checked-in. At the very least, seeing the diff will let you write more cogent check-in comments. Doing a "diff" before check-in can prevent you from overwriting someone else's changes, or other horrible mistakes. |
35,468,029 | Trying to debug a stored procedure on a local SQL Server Express instance. I am running SSMS As Administrator. My login is in the sysadmin server role. My connection user is in the sysadmin server role. I get the message "Unable to start the Transact-SQL debugger, could not connect to the Database Engine instance 'localhost\sqlexpress'." | 2016/02/17 | [
"https://Stackoverflow.com/questions/35468029",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2132098/"
] | Yes - I switched to using a Windows Authentication connection and it works now. So the answer is Yes, debugging works with SQL Express. | I believe the Express version does not have the debugger. I personally have never found the debugger of value anyway because problems with t-sql tend to be of the "did I get correct results" variety (which are often debugged by trying different variations of where clauses and joins until you find the culprit).
They often do not concern problems of state while stepping through a series of steps. If you are doing something with many steps in one proc, then add a test or debug variable and write code to populate what you want to see at that point in time (might be a variable, might be a select query, just depends on what you are trying to do) when running in test mode.
174,251 | Under this sequence of events:
1. Someone posts a question
2. I post an answer
3. OP comments on my answer
4. A third party adds a comment to my answer
5. I add a comment to my answer
Does the OP receive automatic notifications for either comment #4 or #5? The actual post in question is [using content\_for and and ajax/Jquery to partially update webpage](https://stackoverflow.com/questions/15713470/using-content-for-and-and-ajax-jquery-to-partially-update-webpage/15713497?noredirect=1#comment22319719_15713497). (I added a manual ping to the OP at the end, because I wasn't sure.)
The best source I could find for when automatic notifications occur I found at [When exactly do I get comment notifications?](https://meta.stackexchange.com/questions/125208/when-exactly-do-i-get-comment-notifications). If I'm reading this correctly, the OP will not get automatic notifications for either of those above events. Is this correct? That post is also over a year old, so I don't know if it's up-to-date with respect to how it works.
I apologize if this is a dupe, but finding details about exactly when automatic notifications occur is difficult. | 2013/03/30 | [
"https://meta.stackexchange.com/questions/174251",
"https://meta.stackexchange.com",
"https://meta.stackexchange.com/users/155650/"
] | >
> Does the OP receive automatic notifications for either comment #4 or #5?
>
>
>
No. The OP is notified on your answer only if that third user pings the OP.
>
> If I'm reading this correctly, the OP will not get automatic notifications for either of those above events. Is this correct?
>
>
>
Yes! | >
> No, the OP will not get notified unless you use @ plus the OP's name;
> that is what marks the comment as targeted at them. As the owner of
> the question, you do not need to be @-mentioned yourself to receive
> notifications. Also note that you can only ping one other person per
> comment with @.
>
>
> |
26,293,227 | I am a student programmer, and the topic of my degree work is to finalize one of the input methods for touchscreen devices used by visually impaired people (including the blind).
I want to make my application work correctly with TalkBack, but I don't know how to do it. I've found the accessibility package, but it's not clear to me how it integrates with TalkBack.
"https://Stackoverflow.com/questions/26293227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4122043/"
] | You can start with simple layout with ImageView and add android:contentDescription="your string" as a parameter in xml. Then turn on talkback and click on that image to see what happens. | As an application developer, you don't need to specifically integrate your app with TalkBack. Instead, you should focus on providing correct data to the accessibility framework. This will ensure that your application works not only with TalkBack, but also with Braille and switch-based accessibility services.
See the Android Developer guide on [Making Applications Accessible](https://developer.android.com/guide/topics/ui/accessibility/apps.html) for an overview of what steps you need to take to ensure your application works correctly with accessibility services.
You may also want to watch the Google I/O 2012 talk [Making Android Apps Accessible](http://www.youtube.com/watch?v=q3HliaMjL38), which covers basic application accessibility. |
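A minimal layout sketch of the contentDescription advice above (the resource names are placeholders, not from a real project); TalkBack reads the description aloud when the view gains accessibility focus:

```xml
<!-- Hypothetical layout fragment; @drawable/logo and @string/logo_description
     are placeholder resource names. -->
<ImageView
    android:id="@+id/logo"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/logo"
    android:contentDescription="@string/logo_description" />
```

For purely decorative images, setting `android:contentDescription="@null"` tells accessibility services to skip the view instead of announcing it.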
26,293,227 | I am a student programmer, and the topic of my degree work is to finalize one of the input methods for touchscreen devices used by visually impaired people (including the blind).
I want to make my application work correctly with TalkBack, but I don't know how to do it. I've found the accessibility package, but it's not clear to me how it integrates with TalkBack.
"https://Stackoverflow.com/questions/26293227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4122043/"
] | You can start with simple layout with ImageView and add android:contentDescription="your string" as a parameter in xml. Then turn on talkback and click on that image to see what happens. | Use **android:contentDescription="Generic Image"** in any View with any custom content.
Note: when using a ViewGroup, be careful about clicks passing through to child views.
Here is an example: <https://github.com/dotrinhdev/AndroidTalkback>
26,293,227 | I am a student programmer, and the topic of my degree work is to finalize one of the input methods for touchscreen devices used by visually impaired people (including the blind).
I want to make my application work correctly with TalkBack, but I don't know how to do it. I've found the accessibility package, but it's not clear to me how it integrates with TalkBack.
"https://Stackoverflow.com/questions/26293227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4122043/"
] | As an application developer, you don't need to specifically integrate your app with TalkBack. Instead, you should focus on providing correct data to the accessibility framework. This will ensure that your application works not only with TalkBack, but also with Braille and switch-based accessibility services.
See the Android Developer guide on [Making Applications Accessible](https://developer.android.com/guide/topics/ui/accessibility/apps.html) for an overview of what steps you need to take to ensure your application works correctly with accessibility services.
You may also want to watch the Google I/O 2012 talk [Making Android Apps Accessible](http://www.youtube.com/watch?v=q3HliaMjL38), which covers basic application accessibility. | Use **android:contentDescription="Generic Image"** in any View with any custom content.
Note: when using a ViewGroup, be careful about clicks passing through to child views.
Here is an example: <https://github.com/dotrinhdev/AndroidTalkback>
4,018 | I am using ESRI based GIS software, Postgresql/PostGIS/ArcSDE DB and we have a Mincom Ellipse asset management system.
Currently, all our spatial information regarding asset sites has been recorded as points; this has been versatile because it accommodates mapping at various scales. Now that we are integrating our Asset Management System with our GIS database, the asset management guys want the GIS features to reflect the structure, e.g. a building footprint as a polygon instead of a point.
My question, in terms of spatial data management: should I be maintaining two sets of data, one for the asset representations and one for various mapping tasks?
Thanks
DB | 2010/12/01 | [
"https://gis.stackexchange.com/questions/4018",
"https://gis.stackexchange.com",
"https://gis.stackexchange.com/users/1357/"
] | I feel like you might have a couple of questions in your question. For the question in your title, you don't provide enough information about your GIS or asset management system to answer.
However, I think this is a good question, but certainly not limited to asset management.
>
> Do I now have to create a polygon
> layer for my dams to be used with the
> asset management system for viewing at
> 1:1,000 then a point layer for mapping
> purposes when producing a map of the
> same sites at 1:100,000?
>
>
>
Currently, we have both the building outlines and point features in our Esri geodatabases. We're just starting a Cityworks implementation, but it looks like the point features are what we are using to relate our tables to (*since we maintain the point features but the building outlines are maintained by a different agency*).
Having both the point and polygon geometries for the same feature isn't uncommon. For Esri geodatabases, these have to go into different featureclasses. You can't mix geometry types in Esri featureclasses (at least not in a way that is recognized by Esri software).
---
**Update:**
Since you are using an Esri geodatabase, you might be able to use cartographic representations. I haven't used them (until a minute ago), but it looks like it works. In my screen shot, I'm displaying one layer with the building cartographic representation and a second layer with the actual feature geometry. If you apply the scale ranges, you can have buildings change from poly to point symbols as you zoom out. I will say that the user interface for cartographic reps feels less refined than the rest of ArcMap and ArcCatalog.
 | >
> Do I now have to create a polygon
> layer for my dams to be used with the
> asset management system for viewing at
> 1:1,000 then a point layer for mapping
> purposes when producing a map of the
> same sites at 1:100,000?
>
>
>
One alternative might be to develop a [custom renderer](http://help.arcgis.com/en/sdk/10.0/arcobjects_net/componenthelp/index.html#//0012000004wn000000) that displays points for dams when zoomed out beyond a certain scale. |
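The scale-switching idea in that last answer is independent of any particular renderer API. A hypothetical sketch of the decision rule (the threshold value is illustrative, not taken from the answers above):

```python
POINT_THRESHOLD = 10_000  # switch beyond 1:10,000 (illustrative value)

def representation_for_scale(scale_denominator, threshold=POINT_THRESHOLD):
    """Pick which stored geometry to draw at a map scale of 1:denominator.

    A larger denominator means the map is zoomed further out, so the
    polygon footprint collapses to a point symbol.
    """
    return "point" if scale_denominator > threshold else "polygon"
```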
4,018 | I am using ESRI based GIS software, Postgresql/PostGIS/ArcSDE DB and we have a Mincom Ellipse asset management system.
Currently, all our spatial information regarding asset sites has been recorded as points; this has been versatile because it accommodates mapping at various scales. Now that we are integrating our Asset Management System with our GIS database, the asset management guys want the GIS features to reflect the structure, e.g. a building footprint as a polygon instead of a point.
My question, in terms of spatial data management: should I be maintaining two sets of data, one for the asset representations and one for various mapping tasks?
Thanks
DB | 2010/12/01 | [
"https://gis.stackexchange.com/questions/4018",
"https://gis.stackexchange.com",
"https://gis.stackexchange.com/users/1357/"
] | I feel like you might have a couple of questions in your question. For the question in your title, you don't provide enough information about your GIS or asset management system to answer.
However, I think this is a good question, but certainly not limited to asset management.
>
> Do I now have to create a polygon
> layer for my dams to be used with the
> asset management system for viewing at
> 1:1,000 then a point layer for mapping
> purposes when producing a map of the
> same sites at 1:100,000?
>
>
>
Currently, we have both the building outlines and point features in our Esri geodatabases. We're just starting a Cityworks implementation, but it looks like the point features are what we are using to relate our tables to (*since we maintain the point features but the building outlines are maintained by a different agency*).
Having both the point and polygon geometries for the same feature isn't uncommon. For Esri geodatabases, these have to go into different featureclasses. You can't mix geometry types in Esri featureclasses (at least not in a way that is recognized by Esri software).
---
**Update:**
Since you are using an Esri geodatabase, you might be able to use cartographic representations. I haven't used them (until a minute ago), but it looks like it works. In my screen shot, I'm displaying one layer with the building cartographic representation and a second layer with the actual feature geometry. If you apply the scale ranges, you can have buildings change from poly to point symbols as you zoom out. I will say that the user interface for cartographic reps feels less refined than the rest of ArcMap and ArcCatalog.
 | Depending upon scale I would show features differently.
To avoid storing multiple geometries for your feature, you can use the geometric centroids of buildings to compute a single point to represent the asset at much larger scales; this allows you to store a single geometry for each asset.
But it does depend on your GIS software as to how this is implemented. |
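The centroid suggestion above can be sketched in plain Python, with no GIS library; PostGIS users would get the same result from ST_Centroid, so the function below is only an illustrative stand-in using the standard shoelace formula:

```python
# Sketch of the suggestion above: derive a single representative point for
# an asset from its building footprint by computing the polygon's geometric
# centroid (the shoelace formula; PostGIS exposes this as ST_Centroid).

def polygon_centroid(ring):
    """Centroid of a simple (non-self-intersecting) polygon.

    `ring` is a list of (x, y) vertices; the closing edge back to the
    first vertex is handled implicitly.
    """
    area = cx = cy = 0.0
    n = len(ring)
    for i in range(n):
        x0, y0 = ring[i]
        x1, y1 = ring[(i + 1) % n]
        cross = x0 * y1 - x1 * y0   # signed cross-product of the edge
        area += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    area *= 0.5
    return (cx / (6.0 * area), cy / (6.0 * area))

# A 10 m x 20 m rectangular building footprint:
print(polygon_centroid([(0, 0), (10, 0), (10, 20), (0, 20)]))  # (5.0, 10.0)
```

One caveat: the centroid of a concave footprint (an L-shaped building, say) can fall outside the polygon, which is why some workflows prefer a point-on-surface function such as st\_pointonsurface instead.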
4,018 | I am using ESRI based GIS software, Postgresql/PostGIS/ArcSDE DB and we have a Mincom Ellipse asset management system.
Currently all our spatial information regarding asset sites has been recorded as points; this has been versatile because it accommodates mapping at various scales. Now that we are integrating our Asset Management System with our GIS database, the asset management guys want the GIS features to reflect the structure, e.g. a building footprint as a polygon instead of a point.
My question is about spatial data management: should I be maintaining two sets of data, one for the asset representations and one for various mapping tasks?
Thanks
DB | 2010/12/01 | [
"https://gis.stackexchange.com/questions/4018",
"https://gis.stackexchange.com",
"https://gis.stackexchange.com/users/1357/"
] | I suggest you have one table that contains both the polygon and point data. This table would have (at minimum):
* an id column that is a foreign key to the matching asset record,
* a geometry column that contains the polygon geometry and
* a geometry column that contains the point geometry.
Create a trigger that updates the point column based on inserts/changes in the polygon column using st\_pointonsurface.
Create two views, one that has only the polygon columns and one that contains only the point column (include the id column and any others in the views, of course). These views are what you register with SDE.
This way you should be able to worry only about keeping the polygon data up-to-date. If there's no polygon, you can still put in a point. Remember to filter out records with null geometries from the views. | >
> Do I now have to create a polygon
> layer for my dams to be used with the
> asset management system for viewing at
> 1:1,000 then a point layer for mapping
> purposes when producing a map of the
> same sites at 1:100,000?
>
>
>
One alternative might be to develop a [custom renderer](http://help.arcgis.com/en/sdk/10.0/arcobjects_net/componenthelp/index.html#//0012000004wn000000) that displays points for dams when zoomed out beyond a certain scale. |
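The one-table design from the first answer (both geometries in one record, a trigger keeping the point in sync with the polygon, and two single-geometry views) can be sketched in miniature. This is a hypothetical Python analogue with invented names, not actual ArcSDE/PostGIS code; a simple vertex average stands in for st\_pointonsurface:

```python
# Hypothetical sketch (class and column names invented) of the one-table
# design: each asset row stores both a polygon and a point geometry, a
# trigger analogue refreshes the point whenever the polygon changes, and
# two view analogues each expose a single geometry type with null
# geometries filtered out, as the answer advises.

class AssetTable:
    def __init__(self):
        # id -> {"polygon": list of (x, y) or None, "point": (x, y) or None}
        self.rows = {}

    def upsert_polygon(self, asset_id, polygon):
        """Trigger analogue: inserting/updating a polygon refreshes the point."""
        xs = [p[0] for p in polygon]
        ys = [p[1] for p in polygon]
        point = (sum(xs) / len(xs), sum(ys) / len(ys))  # stand-in for st_pointonsurface
        self.rows[asset_id] = {"polygon": polygon, "point": point}

    def upsert_point(self, asset_id, point):
        """If there's no polygon, you can still put in a point."""
        row = self.rows.setdefault(asset_id, {"polygon": None, "point": None})
        row["point"] = point

    def polygon_view(self):
        """View analogue registered for large-scale (detailed) mapping."""
        return {i: r["polygon"] for i, r in self.rows.items() if r["polygon"] is not None}

    def point_view(self):
        """View analogue registered for small-scale (overview) mapping."""
        return {i: r["point"] for i, r in self.rows.items() if r["point"] is not None}

table = AssetTable()
table.upsert_polygon("dam-1", [(0, 0), (4, 0), (4, 2), (0, 2)])
table.upsert_point("dam-2", (10.0, 10.0))  # asset with no footprint yet
print(table.point_view())    # both assets appear as points
print(table.polygon_view())  # only dam-1 has a footprint
```

The point of the design is that only the polygon column needs manual maintenance; the point column and both "views" stay consistent automatically.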
4,018 | I am using ESRI based GIS software, Postgresql/PostGIS/ArcSDE DB and we have a Mincom Ellipse asset management system.
Currently all our spatial information regarding asset sites has been recorded as points; this has been versatile because it accommodates mapping at various scales. Now that we are integrating our Asset Management System with our GIS database, the asset management guys want the GIS features to reflect the structure, e.g. a building footprint as a polygon instead of a point.
My question is about spatial data management: should I be maintaining two sets of data, one for the asset representations and one for various mapping tasks?
Thanks
DB | 2010/12/01 | [
"https://gis.stackexchange.com/questions/4018",
"https://gis.stackexchange.com",
"https://gis.stackexchange.com/users/1357/"
] | I suggest you have one table that contains both the polygon and point data. This table would have (at minimum):
* an id column that is a foreign key to the matching asset record,
* a geometry column that contains the polygon geometry and
* a geometry column that contains the point geometry.
Create a trigger that updates the point column based on inserts/changes in the polygon column using st\_pointonsurface.
Create two views, one that has only the polygon columns and one that contains only the point column (include the id column and any others in the views, of course). These views are what you register with SDE.
This way you should be able to worry only about keeping the polygon data up-to-date. If there's no polygon, you can still put in a point. Remember to filter out records with null geometries from the views. | Depending upon scale I would show features differently.
To save storing multiple geometries for each feature, you can use the geometric centroid of a building to compute a single point that represents the asset when mapping at smaller, zoomed-out scales; this allows you to store a single geometry per asset.
But it does depend on your GIS software as to how this is implemented. |
4,018 | I am using ESRI based GIS software, Postgresql/PostGIS/ArcSDE DB and we have a Mincom Ellipse asset management system.
Currently all our spatial information regarding asset sites has been recorded as points; this has been versatile because it accommodates mapping at various scales. Now that we are integrating our Asset Management System with our GIS database, the asset management guys want the GIS features to reflect the structure, e.g. a building footprint as a polygon instead of a point.
My question is about spatial data management: should I be maintaining two sets of data, one for the asset representations and one for various mapping tasks?
Thanks
DB | 2010/12/01 | [
"https://gis.stackexchange.com/questions/4018",
"https://gis.stackexchange.com",
"https://gis.stackexchange.com/users/1357/"
] | >
> Do I now have to create a polygon
> layer for my dams to be used with the
> asset management system for viewing at
> 1:1,000 then a point layer for mapping
> purposes when producing a map of the
> same sites at 1:100,000?
>
>
>
One alternative might be to develop a [custom renderer](http://help.arcgis.com/en/sdk/10.0/arcobjects_net/componenthelp/index.html#//0012000004wn000000) that displays points for dams when zoomed out beyond a certain scale. | Depending upon scale I would show features differently.
To save storing multiple geometries for each feature, you can use the geometric centroid of a building to compute a single point that represents the asset when mapping at smaller, zoomed-out scales; this allows you to store a single geometry per asset.
But it does depend on your GIS software as to how this is implemented. |
118,971 | The company I work at recently hosted a web service in Windows Azure and announced that. Now trade online magazines say lots of meaningless stuff like "company X moves to the cloud", "company X drops desktops for the cloud", etc.
Looks like there're lots of materials out there (starting with [Wikipedia](http://en.wikipedia.org/w/index.php?title=Cloud_computing&oldid=459932705)) that are very lengthy and talk a lot about "services" and "low entry price" and other stuff but I've read all that and don't see how they could be helpful for a layman in drawing a line between a service in a cloud and Stack Exchange that is also a service but is run on brick-and-mortar servers in a colocation.
Now from my experience with Windows Azure the real difference is the following. With a cloud the service owner rents hardware, network bandwidth and right to use the middleware (Windows 2008 that is used in Azure roles for example) on demand and also there's some maintenance assistance (like if the computer where a role is running crashes another computer is automatically found and the role is redeployed). Without a cloud the service owner will have to deal with all that on his own.
Will that be the right distinction? | 2011/11/10 | [
"https://softwareengineering.stackexchange.com/questions/118971",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/587/"
] | Yes, pretty much.
With the "cloud" (as in "cloud providers"), you are renting the diskspace, bandwidth, CPU and memory owned by the provider and the means to use them from your software. They give you the infrastructure and you don't own the hardware.
There are other forms of cloud computing that don't involve these providers, where you (the organisation) owns the hardware as well.
In either regard, this mostly means that your software is running on a distributed network of computers, available on the Internet. | Cloud computing begins with *renting* hardware, from hard disks to servers. However, it goes much further than that. This is not to say there isn't any hype about it, but I am trying to define the key difference between being in the cloud and not.
In my office we have a set of servers which I can access from anywhere. Does this qualify as a cloud? No! And the same is true for many data centers as they stand.
The core element of cloud computing is, of course, the hardware infrastructure (servers and disk space) used exclusively through the public internet. However, what really matters is how this is managed. A critical infrastructure element (though I doubt anyone would disagree if you said it is a must) is virtualization.
In what I think of as a real cloud, all these servers are combined into a pool of resources tied together by a framework in which virtual machines are created. One can create, archive and delete machines, and transfer hard disk space from one machine to another, much as you mount disks on real machines. These technologies allow the data and OS of a machine to move seamlessly from one physical server to another, and they come with various redundancy options and management consoles for the services.
Understand that in the good old days (as well as today), one could get personal homepages and company websites on hosting space. That isn't quite a cloud.
Though I agree that nowadays anyone with a static IP thinks he has created a cloud, and indeed the word *cloud* has been misused to the extent that there is no real definition of it now! |
118,971 | The company I work at recently hosted a web service in Windows Azure and announced that. Now trade online magazines say lots of meaningless stuff like "company X moves to the cloud", "company X drops desktops for the cloud", etc.
Looks like there're lots of materials out there (starting with [Wikipedia](http://en.wikipedia.org/w/index.php?title=Cloud_computing&oldid=459932705)) that are very lengthy and talk a lot about "services" and "low entry price" and other stuff but I've read all that and don't see how they could be helpful for a layman in drawing a line between a service in a cloud and Stack Exchange that is also a service but is run on brick-and-mortar servers in a colocation.
Now from my experience with Windows Azure the real difference is the following. With a cloud the service owner rents hardware, network bandwidth and right to use the middleware (Windows 2008 that is used in Azure roles for example) on demand and also there's some maintenance assistance (like if the computer where a role is running crashes another computer is automatically found and the role is redeployed). Without a cloud the service owner will have to deal with all that on his own.
Will that be the right distinction? | 2011/11/10 | [
"https://softwareengineering.stackexchange.com/questions/118971",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/587/"
] | No. Cloud computing is not merely a way to rent resources.
Cloud is all about services that:
* are delivered over the network (possibly the Internet)
* are fully controlled by API
* are fully automatable and automated
* require no human interaction for control
* are delivered as a commodity
* are billed like a utility: for measured usage
* require no capital expenditure or up-front payment
* have seemingly infinite capacity
* permit at-will immediate allocation of arbitrarily many units of the service
* permit at-will immediate disposal of arbitrarily many units of the service
NIST has a [full definition](http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf) of what a cloud service is. | Cloud computing does not only provide resource renting.
It also offers a fault tolerance layer, should the rented resources fail. Serious cloud providers work hard to deliver a service without interruption. |
118,971 | The company I work at recently hosted a web service in Windows Azure and announced that. Now trade online magazines say lots of meaningless stuff like "company X moves to the cloud", "company X drops desktops for the cloud", etc.
Looks like there're lots of materials out there (starting with [Wikipedia](http://en.wikipedia.org/w/index.php?title=Cloud_computing&oldid=459932705)) that are very lengthy and talk a lot about "services" and "low entry price" and other stuff but I've read all that and don't see how they could be helpful for a layman in drawing a line between a service in a cloud and Stack Exchange that is also a service but is run on brick-and-mortar servers in a colocation.
Now from my experience with Windows Azure the real difference is the following. With a cloud the service owner rents hardware, network bandwidth and right to use the middleware (Windows 2008 that is used in Azure roles for example) on demand and also there's some maintenance assistance (like if the computer where a role is running crashes another computer is automatically found and the role is redeployed). Without a cloud the service owner will have to deal with all that on his own.
Will that be the right distinction? | 2011/11/10 | [
"https://softwareengineering.stackexchange.com/questions/118971",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/587/"
] | While it's hyped as something new, cloud computing is really a new marketing twist on the time-sharing distributed computing model that emerged in the mid-to-late 1960s. Of course, there have been huge technical improvements but, when you look at it closely, it's not too different from hooking up to a mainframe via an acoustic coupler and a teletype terminal to access applications and data. These systems were huge moneymakers back in their day, but the Apple II and IBM PC put an end to that. Now, through cloud computing, this business model is seeing a renaissance. | Cloud computing begins with *renting* hardware, from hard disks to servers. However, it goes much further than that. This is not to say there isn't any hype about it, but I am trying to define the key difference between being in the cloud and not.
In my office we have a set of servers which I can access from anywhere. Does this qualify as a cloud? No! And the same is true for many data centers as they stand.
The core element of cloud computing is, of course, the hardware infrastructure (servers and disk space) used exclusively through the public internet. However, what really matters is how this is managed. A critical infrastructure element (though I doubt anyone would disagree if you said it is a must) is virtualization.
In what I think of as a real cloud, all these servers are combined into a pool of resources tied together by a framework in which virtual machines are created. One can create, archive and delete machines, and transfer hard disk space from one machine to another, much as you mount disks on real machines. These technologies allow the data and OS of a machine to move seamlessly from one physical server to another, and they come with various redundancy options and management consoles for the services.
Understand that in the good old days (as well as today), one could get personal homepages and company websites on hosting space. That isn't quite a cloud.
Though I agree that nowadays anyone with a static IP thinks he has created a cloud, and indeed the word *cloud* has been misused to the extent that there is no real definition of it now! |
118,971 | The company I work at recently hosted a web service in Windows Azure and announced that. Now trade online magazines say lots of meaningless stuff like "company X moves to the cloud", "company X drops desktops for the cloud", etc.
Looks like there're lots of materials out there (starting with [Wikipedia](http://en.wikipedia.org/w/index.php?title=Cloud_computing&oldid=459932705)) that are very lengthy and talk a lot about "services" and "low entry price" and other stuff but I've read all that and don't see how they could be helpful for a layman in drawing a line between a service in a cloud and Stack Exchange that is also a service but is run on brick-and-mortar servers in a colocation.
Now from my experience with Windows Azure the real difference is the following. With a cloud the service owner rents hardware, network bandwidth and right to use the middleware (Windows 2008 that is used in Azure roles for example) on demand and also there's some maintenance assistance (like if the computer where a role is running crashes another computer is automatically found and the role is redeployed). Without a cloud the service owner will have to deal with all that on his own.
Will that be the right distinction? | 2011/11/10 | [
"https://softwareengineering.stackexchange.com/questions/118971",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/587/"
] | Yes, pretty much.
With the "cloud" (as in "cloud providers"), you are renting the diskspace, bandwidth, CPU and memory owned by the provider and the means to use them from your software. They give you the infrastructure and you don't own the hardware.
There are other forms of cloud computing that don't involve these providers, where you (the organisation) owns the hardware as well.
In either regard, this mostly means that your software is running on a distributed network of computers, available on the Internet. | Cloud computing says absolutely nothing about who owns the resources. Cloud computing is an architecture for developing distributed, network-based applications. There are a number of cloud computing service providers out there, such as Azure Services Platform, Amazon Web Services, Google App Engine, and a number of others. However, using someone else's service is not a prerequisite for developing a cloud computing infrastructure.
The idea behind cloud computing is that you put services and applications on networked devices. You could utilize a hosting service, which would shift maintenance and support to other entities. You could also create your own infrastructure for cloud computing. In addition, there is nothing that says that cloud computing must be public. Yes, you can put your applications and services on the public Internet (with the appropriate security for your applications), but you can also create private clouds within your organization.
In the end, with cloud computing, you don't know where or what you are accessing. You see a service or application without any knowledge of what is behind it. The entire cloud is of no consequence to clients - you know the things you can use exist and are accessible, and you use them. They could be in a "server room", or you could be accessing a distributed grid of sensors and workstations. It really doesn't matter. |
118,971 | The company I work at recently hosted a web service in Windows Azure and announced that. Now trade online magazines say lots of meaningless stuff like "company X moves to the cloud", "company X drops desktops for the cloud", etc.
Looks like there're lots of materials out there (starting with [Wikipedia](http://en.wikipedia.org/w/index.php?title=Cloud_computing&oldid=459932705)) that are very lengthy and talk a lot about "services" and "low entry price" and other stuff but I've read all that and don't see how they could be helpful for a layman in drawing a line between a service in a cloud and Stack Exchange that is also a service but is run on brick-and-mortar servers in a colocation.
Now from my experience with Windows Azure the real difference is the following. With a cloud the service owner rents hardware, network bandwidth and right to use the middleware (Windows 2008 that is used in Azure roles for example) on demand and also there's some maintenance assistance (like if the computer where a role is running crashes another computer is automatically found and the role is redeployed). Without a cloud the service owner will have to deal with all that on his own.
Will that be the right distinction? | 2011/11/10 | [
"https://softwareengineering.stackexchange.com/questions/118971",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/587/"
] | No. Cloud computing is not merely a way to rent resources.
Cloud is all about services that:
* are delivered over the network (possibly the Internet)
* are fully controlled by API
* are fully automatable and automated
* require no human interaction for control
* are delivered as a commodity
* are billed like a utility: for measured usage
* require no capital expenditure or up-front payment
* have seemingly infinite capacity
* permit at-will immediate allocation of arbitrarily many units of the service
* permit at-will immediate disposal of arbitrarily many units of the service
NIST has a [full definition](http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf) of what a cloud service is. | Cloud computing begins with *renting* hardware, from hard disks to servers. However, it goes much further than that. This is not to say there isn't any hype about it, but I am trying to define the key difference between being in the cloud and not.
In my office we have a set of servers which I can access from anywhere. Does this qualify as a cloud? No! And the same is true for many data centers as they stand.
The core element of cloud computing is, of course, the hardware infrastructure (servers and disk space) used exclusively through the public internet. However, what really matters is how this is managed. A critical infrastructure element (though I doubt anyone would disagree if you said it is a must) is virtualization.
In what I think of as a real cloud, all these servers are combined into a pool of resources tied together by a framework in which virtual machines are created. One can create, archive and delete machines, and transfer hard disk space from one machine to another, much as you mount disks on real machines. These technologies allow the data and OS of a machine to move seamlessly from one physical server to another, and they come with various redundancy options and management consoles for the services.
Understand that in the good old days (as well as today), one could get personal homepages and company websites on hosting space. That isn't quite a cloud.
Though I agree that nowadays anyone with a static IP thinks he has created a cloud, and indeed the word *cloud* has been misused to the extent that there is no real definition of it now! |
118,971 | The company I work at recently hosted a web service in Windows Azure and announced that. Now trade online magazines say lots of meaningless stuff like "company X moves to the cloud", "company X drops desktops for the cloud", etc.
Looks like there're lots of materials out there (starting with [Wikipedia](http://en.wikipedia.org/w/index.php?title=Cloud_computing&oldid=459932705)) that are very lengthy and talk a lot about "services" and "low entry price" and other stuff but I've read all that and don't see how they could be helpful for a layman in drawing a line between a service in a cloud and Stack Exchange that is also a service but is run on brick-and-mortar servers in a colocation.
Now from my experience with Windows Azure the real difference is the following. With a cloud the service owner rents hardware, network bandwidth and right to use the middleware (Windows 2008 that is used in Azure roles for example) on demand and also there's some maintenance assistance (like if the computer where a role is running crashes another computer is automatically found and the role is redeployed). Without a cloud the service owner will have to deal with all that on his own.
Will that be the right distinction? | 2011/11/10 | [
"https://softwareengineering.stackexchange.com/questions/118971",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/587/"
] | Cloud computing begins with *renting* hardware, from hard disks to servers. However, it goes much further than that. This is not to say there isn't any hype about it, but I am trying to define the key difference between being in the cloud and not.
In my office we have a set of servers which I can access from anywhere. Does this qualify as a cloud? No! And the same is true for many data centers as they stand.
The core element of cloud computing is, of course, the hardware infrastructure (servers and disk space) used exclusively through the public internet. However, what really matters is how this is managed. A critical infrastructure element (though I doubt anyone would disagree if you said it is a must) is virtualization.
In what I think of as a real cloud, all these servers are combined into a pool of resources tied together by a framework in which virtual machines are created. One can create, archive and delete machines, and transfer hard disk space from one machine to another, much as you mount disks on real machines. These technologies allow the data and OS of a machine to move seamlessly from one physical server to another, and they come with various redundancy options and management consoles for the services.
Understand that in the good old days (as well as today), one could get personal homepages and company websites on hosting space. That isn't quite a cloud.
Though I agree that nowadays anyone with a static IP thinks he has created a cloud, and indeed the word *cloud* has been misused to the extent that there is no real definition of it now! | Cloud computing does not only provide resource renting.
It also offers a fault tolerance layer, should the rented resources fail. Serious cloud providers work hard to deliver a service without interruption. |
118,971 | The company I work at recently hosted a web service in Windows Azure and announced that. Now trade online magazines say lots of meaningless stuff like "company X moves to the cloud", "company X drops desktops for the cloud", etc.
Looks like there're lots of materials out there (starting with [Wikipedia](http://en.wikipedia.org/w/index.php?title=Cloud_computing&oldid=459932705)) that are very lengthy and talk a lot about "services" and "low entry price" and other stuff but I've read all that and don't see how they could be helpful for a layman in drawing a line between a service in a cloud and Stack Exchange that is also a service but is run on brick-and-mortar servers in a colocation.
Now from my experience with Windows Azure the real difference is the following. With a cloud the service owner rents hardware, network bandwidth and right to use the middleware (Windows 2008 that is used in Azure roles for example) on demand and also there's some maintenance assistance (like if the computer where a role is running crashes another computer is automatically found and the role is redeployed). Without a cloud the service owner will have to deal with all that on his own.
Will that be the right distinction? | 2011/11/10 | [
"https://softwareengineering.stackexchange.com/questions/118971",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/587/"
] | While it's hyped as something new, cloud computing is really a new marketing twist on the time-sharing distributed computing model that emerged in the mid-to-late 1960s. Of course, there have been huge technical improvements but, when you look at it closely, it's not too different from hooking up to a mainframe via an acoustic coupler and a teletype terminal to access applications and data. These systems were huge moneymakers back in their day, but the Apple II and IBM PC put an end to that. Now, through cloud computing, this business model is seeing a renaissance. | Cloud computing does not only provide resource renting.
It also offers a fault tolerance layer, should the rented resources fail. Serious cloud providers work hard to deliver a service without interruption. |
118,971 | The company I work at recently hosted a web service in Windows Azure and announced that. Now trade online magazines say lots of meaningless stuff like "company X moves to the cloud", "company X drops desktops for the cloud", etc.
Looks like there're lots of materials out there (starting with [Wikipedia](http://en.wikipedia.org/w/index.php?title=Cloud_computing&oldid=459932705)) that are very lengthy and talk a lot about "services" and "low entry price" and other stuff but I've read all that and don't see how they could be helpful for a layman in drawing a line between a service in a cloud and Stack Exchange that is also a service but is run on brick-and-mortar servers in a colocation.
Now from my experience with Windows Azure the real difference is the following. With a cloud the service owner rents hardware, network bandwidth and right to use the middleware (Windows 2008 that is used in Azure roles for example) on demand and also there's some maintenance assistance (like if the computer where a role is running crashes another computer is automatically found and the role is redeployed). Without a cloud the service owner will have to deal with all that on his own.
Will that be the right distinction? | 2011/11/10 | [
"https://softwareengineering.stackexchange.com/questions/118971",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/587/"
] | Cloud computing says absolutely nothing about who owns the resources. Cloud computing is an architecture for developing distributed, network-based applications. There are a number of cloud computing service providers out there, such as Azure Services Platform, Amazon Web Services, Google App Engine, and a number of others. However, using someone else's service is not a prerequisite for developing a cloud computing infrastructure.
The idea behind cloud computing is that you put services and applications on networked devices. You could utilize a hosting service, which would shift maintenance and support to other entities. You could also create your own infrastructure for cloud computing. In addition, there is nothing that says that cloud computing must be public. Yes, you can put your applications and services on the public Internet (with the appropriate security for your applications), but you can also create private clouds within your organization.
In the end, with cloud computing, you don't know where or what you are accessing. You see a service or application without any knowledge of what is behind it. The entire cloud is of no consequence to clients - you know the things you can use exist and are accessible, and you use them. They could be in a "server room", or you could be accessing a distributed grid of sensors and workstations. It really doesn't matter. | While it's hyped as something new, cloud computing is really a new marketing twist on the time-sharing distributed computing model that emerged in the mid-to-late 1960s. Of course, there have been huge technical improvements but, when you look at it closely, it's not too different from hooking up to a mainframe via an acoustic coupler and a teletype terminal to access applications and data. These systems were huge moneymakers back in their day, but the Apple II and IBM PC put an end to that. Now, through cloud computing, this business model is seeing a renaissance. |
118,971 | The company I work at recently hosted a web service in Windows Azure and announced that. Now trade online magazines say lots of meaningless stuff like "company X moves to the cloud", "company X drops desktops for the cloud", etc.
Looks like there're lots of materials out there (starting with [Wikipedia](http://en.wikipedia.org/w/index.php?title=Cloud_computing&oldid=459932705)) that are very lengthy and talk a lot about "services" and "low entry price" and other stuff but I've read all that and don't see how they could be helpful for a layman in drawing a line between a service in a cloud and Stack Exchange that is also a service but is run on brick-and-mortar servers in a colocation.
Now from my experience with Windows Azure the real difference is the following. With a cloud the service owner rents hardware, network bandwidth and right to use the middleware (Windows 2008 that is used in Azure roles for example) on demand and also there's some maintenance assistance (like if the computer where a role is running crashes another computer is automatically found and the role is redeployed). Without a cloud the service owner will have to deal with all that on his own.
Will that be the right distinction? | 2011/11/10 | [
"https://softwareengineering.stackexchange.com/questions/118971",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/587/"
] | Cloud computing says absolutely nothing about who owns the resources. Cloud computing is an architecture for developing distributed, network-based applications. There are a number of cloud computing service providers out there, such as Azure Services Platform, Amazon Web Services, Google App Engine, and a number of others. However, using someone else's service is not a prerequisite for developing a cloud computing infrastructure.
The idea behind cloud computing is that you put services and applications on networked devices. You could utilize a hosting service, which would shift maintenance and support to other entities. You could also create your own infrastructure for cloud computing. In addition, there is nothing that says that cloud computing must be public. Yes, you can put your applications and services on the public Internet (with the appropriate security for your applications), but you can also create private clouds within your organization.
In the end, with cloud computing, you don't know where or what you are accessing. You see a service or application without any knowledge of what is behind that service or application. The entire cloud is of no consequence to clients - you know that the things you can use exist and are accessible, and you use them. They could be in a "server room", or you could be accessing a distributed grid of sensors and workstations. It really doesn't matter. | Cloud computing provides more than just resource renting.
It also offers a fault tolerance layer, should the rented resources fail. Serious cloud providers work hard to deliver a service without interruption. |
118,971 | The company I work at recently hosted a web service in Windows Azure and announced that. Now trade online magazines say lots of meaningless stuff like "company X moves to the cloud", "company X drops desktops for the cloud", etc.
Looks like there're lots of materials out there (starting with [Wikipedia](http://en.wikipedia.org/w/index.php?title=Cloud_computing&oldid=459932705)) that are very lengthy and talk a lot about "services" and "low entry price" and other stuff but I've read all that and don't see how they could be helpful for a layman in drawing a line between a service in a cloud and Stack Exchange that is also a service but is run on brick-and-mortar servers in a colocation.
Now from my experience with Windows Azure the real difference is the following. With a cloud, the service owner rents hardware, network bandwidth and the right to use the middleware (Windows 2008, which is used in Azure roles, for example) on demand, and there's also some maintenance assistance (for instance, if the computer where a role is running crashes, another computer is automatically found and the role is redeployed). Without a cloud the service owner has to deal with all that on his own.
Will that be the right distinction? | 2011/11/10 | [
"https://softwareengineering.stackexchange.com/questions/118971",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/587/"
] | Cloud computing says absolutely nothing about who owns the resources. Cloud computing is an architecture for developing distributed, network-based applications. There are a number of cloud computing service providers out there, such as Azure Services Platform, Amazon Web Services, Google App Engine, and a number of others. However, using someone else's service is not a prerequisite for developing a cloud computing infrastructure.
The idea behind cloud computing is that you put services and applications on networked devices. You could utilize a hosting service, which would shift maintenance and support to other entities. You could also create your own infrastructure for cloud computing. In addition, there is nothing that says that cloud computing must be public. Yes, you can put your applications and services on the public Internet (with the appropriate security for your applications), but you can also create private clouds within your organization.
In the end, with cloud computing, you don't know where or what you are accessing. You see a service or application without any knowledge of what is behind that service or application. The entire cloud is of no consequence to clients - you know that the things you can use exist and are accessible, and you use them. They could be in a "server room", or you could be accessing a distributed grid of sensors and workstations. It really doesn't matter. | Cloud computing begins with *renting* hard disks and servers. However, it goes well beyond that. This is not to say there isn't any hype about it, but I am trying to define the key difference between being in the cloud and not!
In my office we have a set of servers, which I can access from anywhere. Does this qualify as a cloud? No! And the same is true for many data centers.
The core element of cloud computing is, of course, the hardware infrastructure (servers and disk space) accessed through the public internet. However, what matters is how this is managed. A critical infrastructure element (few would disagree if you called it mandatory) is virtualization.
In (what I consider) a real cloud, all these servers are combined into a pool of resources tied together by a framework in which virtual machines are created. One can create, archive, and delete machines, and transfer disk space from one machine to another much like mounting disks in physical machines. These technologies allow the data and OS of a virtual machine to move seamlessly from one physical server to another, and they come with various redundancy options and management consoles for services.
Understand that in the good old days (as well as today), one would get personal homepages and company websites on hosted space. That isn't quite a cloud.
Though I agree that nowadays anyone with a static IP thinks he has created a cloud, and indeed the word *cloud* has been misused to the point that it has no real definition anymore!
103,513 | In my story, I want a creature to be immune to bullets, but giving it bulletproof skin seems too obvious. I want it so that you could shoot the creature a few times at its seemingly vital spots (the head, heart, etc.), but it won't die right away and can still escape or attack you. The best way to make sure it dies quickly is by draining most of its blood, which means you have to get close enough to cause a large wound with a bladed weapon.
Yes, I know you can just use a really big gun or explosives to achieve the same effect, but in this scenario, the options are limited to something relatively cheap and lightweight that doesn't cause a lot of collateral damage. To give you some idea, the creature is about as agile, as strong, and as big as a polar bear.
Preferably the reason it's immune to bullets is based on real animals, or at least something that could exist in the natural world. | 2018/01/30 | [
"https://worldbuilding.stackexchange.com/questions/103513",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/41106/"
] | What about a dense skeleton?
The skull would protect the brain, of course, and you could have a ribcage where the ribs are more tightly packed (or even an overlapping dual layer, allowing expansion in both layers but still protecting against vital shots) so that the lungs, heart et al. are protected. You still need a circulatory system, so the idea of large trauma from a blade still works.
Sure, you could get a shot into the arms or legs, but that would not be fatal. Even a stomach wound wouldn't kill the animal immediately, meaning that it could still attack you.
It rules out the possibility of the brain, heart or lungs being shot, but still allows firearms to do damage generally, just not in critical areas that would cause immediate death.
This does have some precedent in nature; certain herd animals have hardened skulls, and there are plenty of dinosaurs with hardened skeletons as they used parts of their bodies as clubs, rams or spears. | How big is this creature allowed to be?
I'd like to take this in a different direction than the other answers: **make the creature "sparse"** (as the crown of a tree) and/or large (as a coral reef), so that point-effect weapons (such as bullets) have a very low chance of doing significant damage.
Maybe combined with vital organs somehow being distributed or in unforeseeable places, and you really have to hack it to pieces in order to kill it.
I realize this makes it look more like a *very lively plant* than an animal, but depending on your reality ... that line can be surprisingly blurry anyway. |
103,513 | In my story, I want a creature to be immune to bullets, but giving it bulletproof skin seems too obvious. I want it so that you could shoot the creature a few times at its seemingly vital spots (the head, heart, etc.), but it won't die right away and can still escape or attack you. The best way to make sure it dies quickly is by draining most of its blood, which means you have to get close enough to cause a large wound with a bladed weapon.
Yes, I know you can just use a really big gun or explosives to achieve the same effect, but in this scenario, the options are limited to something relatively cheap and lightweight that doesn't cause a lot of collateral damage. To give you some idea, the creature is about as agile, as strong, and as big as a polar bear.
Preferably the reason it's immune to bullets is based on real animals, or at least something that could exist in the natural world. | 2018/01/30 | [
"https://worldbuilding.stackexchange.com/questions/103513",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/41106/"
] | Your creatures may have a reasonably thick layer of [ballistic gel](https://en.wikipedia.org/wiki/Ballistic_gelatin) or blubber-like material under their skin, whose purpose is to slow down bullets and absorb most of their kinetic energy, sparing the vital organs from damage.
You can either change the density of the material (shortcoming: it makes the thing heavier) or its viscosity to improve the effectiveness of the dissipation. | **Self-Regenerative tissue**
Your creature has a very fast immune and regenerative system, making beheading (or very large-caliber shots to the head), and thus severing all neural links, the only way to kill it.
**disclaimer: your creature might have strong bonds to young-adult self destructive women, cigars, muscle cars and alcohol.**
[regeneration by chemical reaction is possible](https://www.sciencedaily.com/releases/2016/04/160428152117.htm)
[](https://i.stack.imgur.com/931LF.jpg) |
103,513 | In my story, I want a creature to be immune to bullets, but giving it bulletproof skin seems too obvious. I want it so that you could shoot the creature a few times at its seemingly vital spots (the head, heart, etc.), but it won't die right away and can still escape or attack you. The best way to make sure it dies quickly is by draining most of its blood, which means you have to get close enough to cause a large wound with a bladed weapon.
Yes, I know you can just use a really big gun or explosives to achieve the same effect, but in this scenario, the options are limited to something relatively cheap and lightweight that doesn't cause a lot of collateral damage. To give you some idea, the creature is about as agile, as strong, and as big as a polar bear.
Preferably the reason it's immune to bullets is based on real animals, or at least something that could exist in the natural world. | 2018/01/30 | [
"https://worldbuilding.stackexchange.com/questions/103513",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/41106/"
] | Um, why not use an actual polar bear? Or something very similar; bears, especially the big bears (polar and grizzly), are notoriously hard to kill with small-calibre rounds: the combination of muscle layers, fat, and fur over their primary body cavity makes getting at their organs really unlikely with handgun rounds. They also have very thick muscles and bones in their skulls, so headshots often lodge near the surface, breaking bone but not punching through. | How big is this creature allowed to be?
I'd like to take this in a different direction than the other answers: **make the creature "sparse"** (as the crown of a tree) and/or large (as a coral reef), so that point-effect weapons (such as bullets) have a very low chance of doing significant damage.
Maybe combined with vital organs somehow being distributed or in unforeseeable places, and you really have to hack it to pieces in order to kill it.
I realize this makes it look more like a *very lively plant* than an animal, but depending on your reality ... that line can be surprisingly blurry anyway. |
103,513 | In my story, I want a creature to be immune to bullets, but giving it bulletproof skin seems too obvious. I want it so that you could shoot the creature a few times at its seemingly vital spots (the head, heart, etc.), but it won't die right away and can still escape or attack you. The best way to make sure it dies quickly is by draining most of its blood, which means you have to get close enough to cause a large wound with a bladed weapon.
Yes, I know you can just use a really big gun or explosives to achieve the same effect, but in this scenario, the options are limited to something relatively cheap and lightweight that doesn't cause a lot of collateral damage. To give you some idea, the creature is about as agile, as strong, and as big as a polar bear.
Preferably the reason it's immune to bullets is based on real animals, or at least something that could exist in the natural world. | 2018/01/30 | [
"https://worldbuilding.stackexchange.com/questions/103513",
"https://worldbuilding.stackexchange.com",
"https://worldbuilding.stackexchange.com/users/41106/"
] | Um, why not use an actual polar bear? Or something very similar; bears, especially the big bears (polar and grizzly), are notoriously hard to kill with small-calibre rounds: the combination of muscle layers, fat, and fur over their primary body cavity makes getting at their organs really unlikely with handgun rounds. They also have very thick muscles and bones in their skulls, so headshots often lodge near the surface, breaking bone but not punching through. | Your creatures may have a reasonably thick layer of [ballistic gel](https://en.wikipedia.org/wiki/Ballistic_gelatin) or blubber-like material under their skin, whose purpose is to slow down bullets and absorb most of their kinetic energy, sparing the vital organs from damage.
You can either change the density of the material (shortcoming: it makes the thing heavier) or its viscosity to improve the effectiveness of the dissipation. |
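The density trade-off in the ballistic-gel answer can be made concrete with a toy calculation (my own illustrative sketch, not part of the original answer): under a simple quadratic-drag model, the layer thickness needed to slow a bullet scales inversely with the medium's density, so doubling the gel's density halves the required thickness. All numbers below (bullet mass, speeds, drag coefficient, "safe" velocity) are assumed values for illustration only.

```python
import math

def stopping_depth(m, v0, v_safe, rho, cd, area):
    """Depth needed for quadratic drag (F = 0.5*rho*cd*A*v^2) to slow a
    projectile of mass m from v0 down to v_safe:
        x = (m / (0.5*rho*cd*A)) * ln(v0 / v_safe)
    """
    k = 0.5 * rho * cd * area
    return (m / k) * math.log(v0 / v_safe)

# Assumed 9 mm-class bullet: 8 g, 360 m/s muzzle velocity,
# slowed to a (hypothetical) harmless 50 m/s.
m = 0.008                              # kg
area = math.pi * 0.0045 ** 2           # frontal area of a 9 mm circle, m^2
d_gel = stopping_depth(m, 360, 50, 1060, 0.3, area)        # gel-like density
d_dense = stopping_depth(m, 360, 50, 2 * 1060, 0.3, area)  # doubled density

print(f"gel layer: {d_gel:.2f} m, denser layer: {d_dense:.2f} m")
```

Note that this model ignores the gel's yield strength, so real stopping depths are much shorter than it predicts; the point is only the inverse-density scaling.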