qid int64 1 74.7M | question stringlengths 12 33.8k | date stringlengths 10 10 | metadata list | response_j stringlengths 0 115k | response_k stringlengths 2 98.3k |
|---|---|---|---|---|---|
466,916 | We know that the speed of light depends on the density of the medium it is travelling through. It travels faster through less dense media and slower through more dense media.
When we produce sound, a series of rarefactions and compressions are created in the medium by the vibration of the source of sound. Compressions have high pressure and high density, while rarefactions have low pressure and low density.
If light is made to propagate through such a disturbance in the medium, does it experience refraction due to changes in the density of the medium? Why don't we observe this? | 2019/03/17 | [
"https://physics.stackexchange.com/questions/466916",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/181963/"
] | Actually, this effect was discovered in 1932, with light diffracted by ultrasound waves.
In order to get observable effects you need ultrasound
with wavelengths in the μm range (i.e. not much longer than light waves),
and thus sound frequencies in the MHz range.
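As a quick sanity check on those numbers, here is a minimal sketch (Python; the speed of sound in water is assumed to be ~1500 m/s):
```python
# Acoustic wavelength for a given ultrasound frequency: lambda = v / f.
# Assumed speed of sound in water (~1500 m/s); in air it is ~343 m/s.
v_water = 1500.0  # m/s
for f_mhz in (1, 10, 100):
    f = f_mhz * 1e6                       # Hz
    wavelength_um = v_water / f * 1e6     # metres -> micrometres
    print(f"{f_mhz:>3} MHz -> {wavelength_um:,.0f} um")
# 1 MHz -> 1,500 um; 10 MHz -> 150 um; 100 MHz -> 15 um
```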
See for example here:
* [On the Scattering of Light by Supersonic Waves](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1076242/)
by Debye and Sears in 1932
> [figure](https://i.stack.imgur.com/5ZhcG.png)
* [Propriétés optiques des milieux solides et liquides soumis aux
vibrations élastiques ultra sonores](https://hal.archives-ouvertes.fr/jpa-00233115)
(Optical properties of solid and liquid media subjected to ultrasonic elastic vibrations)
by Lucas and Biquard in 1932
translated from French:
>
> **Abstract** : This article describes the main optical properties presented by solid and liquid media, subjected to ultra sonic elastic vibrations whose frequencies range from 600,000 to 30 million per second. These ultra sounds were obtained by Langevin's method using piezoelectric quartz excited with high frequency. Under these conditions, and according to the relative sizes of the elastic wavelengths, the light wavelengths, and the opening of the light beam passing through the medium studied, different optical phenomena are observed. In the case of the smallest elastic wavelengths of up to a few tenths of a millimeter, grating-like light diffraction patterns are observed when the incident light rays run parallel to the elastic wave planes. ...
>
> [figure](https://i.stack.imgur.com/3IP90.png)
* [The diffraction of light by high frequency sound waves: Part I](https://link.springer.com/article/10.1007/BF03035840)
by Raman and Nagendra Nath in 1935
>
> A theory of the phenomenon of the diffraction of light by sound-waves of high frequency in a medium, discovered by Debye and Sears and Lucas and Biquard, is developed.
> | You can see the effect of density change on refractive index due to heating of air. For a simple example, light a candle and look through the air column directly above the flame. The flame heats air which rises, but the flow is turbulent, so you'll see objects on the other side of the air column shimmer as the stream of hot air wavers from side to side.
You can see this effect when you look across a paved surface on a hot sunny day.
You won't see this effect with sound, at least not at typical listening levels because the density changes are too small (as noted in one of the other answers). |
466,916 | We know that the speed of light depends on the density of the medium it is travelling through. It travels faster through less dense media and slower through more dense media.
When we produce sound, a series of rarefactions and compressions are created in the medium by the vibration of the source of sound. Compressions have high pressure and high density, while rarefactions have low pressure and low density.
If light is made to propagate through such a disturbance in the medium, does it experience refraction due to changes in the density of the medium? Why don't we observe this? | 2019/03/17 | [
"https://physics.stackexchange.com/questions/466916",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/181963/"
] | I have seen it with standing waves in water, in a PhyWe demonstration experiment. The frequency was 800 kHz, which gives a distance between nodes of about a millimeter. The standing wave is in a cuvette, between the head of a piezo hydrophone transducer and the bottom. When looking through the water, one sees the varying index of refraction as a "waviness" of the background.
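The node spacing follows from half the acoustic wavelength; a minimal check (assuming a sound speed in water of ~1480 m/s):
```python
# Node spacing of a standing wave is half the acoustic wavelength:
# d = lambda / 2 = v / (2 * f).
v_water = 1480.0   # m/s, assumed speed of sound in water
f = 800e3          # Hz, the drive frequency quoted above
node_spacing_mm = v_water / (2 * f) * 1e3
print(f"node spacing ~ {node_spacing_mm:.2f} mm")   # ~0.93 mm
```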
I could not find a description of this online, but I found this about demonstration experiments in air: <https://docplayer.org/52348266-Unsichtbares-sichtbar-machen-schallwellenfronten-im-bild.html> | You can see the effect of density change on refractive index due to heating of air. For a simple example, light a candle and look through the air column directly above the flame. The flame heats air which rises, but the flow is turbulent, so you'll see objects on the other side of the air column shimmer as the stream of hot air wavers from side to side.
You can see this effect when you look across a paved surface on a hot sunny day.
You won't see this effect with sound, at least not at typical listening levels because the density changes are too small (as noted in one of the other answers). |
466,916 | We know that the speed of light depends on the density of the medium it is travelling through. It travels faster through less dense media and slower through more dense media.
When we produce sound, a series of rarefactions and compressions are created in the medium by the vibration of the source of sound. Compressions have high pressure and high density, while rarefactions have low pressure and low density.
If light is made to propagate through such a disturbance in the medium, does it experience refraction due to changes in the density of the medium? Why don't we observe this? | 2019/03/17 | [
"https://physics.stackexchange.com/questions/466916",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/181963/"
] | A few factors contribute to this:
* Air's index of refraction is very close to 1, so optical effects arising from changes in its pressure will be weak;
* Even loud sounds involve small pressure changes. The Wolfram Alpha database lists 200 pascals as the sound pressure of a jet airplane at 100 meters, which works out to a ~0.5% pressure difference between peak and trough (quantified in the sketch after this list);
* Sound waves do not create a sharp boundary between high- and low-pressure regions;
* Sources of loud sounds typically cause other phenomena that obscure this. Combustion creates light and heat, and rapid pressure release can make water vapour in the air condense into an opaque cloud.
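To put a rough number on "too small": for a gas, n − 1 is approximately proportional to density, and treating the relative density swing as the relative pressure swing gives a minimal sketch:
```python
# For gases, (n - 1) scales roughly with density (Gladstone-Dale relation).
# Approximate the relative density swing by the relative pressure swing.
n_air = 1.000293          # refractive index of air at visible wavelengths
p_atm = 101_325.0         # Pa, standard atmospheric pressure
p_sound = 200.0           # Pa, jet airplane at 100 m (figure from above)
dn = (n_air - 1) * (2 * p_sound / p_atm)     # peak-to-trough index swing
print(f"refractive index swing ~ {dn:.1e}")  # ~1.2e-06
```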
Even with all that, it *is* possible to magnify the effect using a distant point light, either by merely [observing the refracted patterns](https://en.wikipedia.org/wiki/Shadowgraph) or by creating a setup where [half of the refocused image is blocked](https://en.wikipedia.org/wiki/Schlieren_photography). Using the second technique it is [possible to observe a clap of the hands](https://www.youtube.com/watch?v=px3oVGXr4mo). | You can see the effect of density change on refractive index due to heating of air. For a simple example, light a candle and look through the air column directly above the flame. The flame heats air which rises, but the flow is turbulent, so you'll see objects on the other side of the air column shimmer as the stream of hot air wavers from side to side.
You can see this effect when you look across a paved surface on a hot sunny day.
You won't see this effect with sound, at least not at typical listening levels because the density changes are too small (as noted in one of the other answers). |
143,631 | I'm sifting through some incorrect permission issues and discovered the [namei](http://man7.org/linux/man-pages/man1/namei.1.html) command for Linux. Homebrew doesn't currently have a Mac port.
>
> namei - follow a pathname until a terminal point is found
Is there a command or series of commands that can be used to accomplish the same thing on OS X? | 2014/08/30 | [
"https://apple.stackexchange.com/questions/143631",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/8007/"
] | What's officially "supported" and what's possible don't match. I have a late-2012 rMBP and got 4K out of it at 30Hz.
I took a screenshot as proof:

Just a normal mini-displayport<->displayport cable was used.
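A rough bandwidth check makes 30Hz plausible (a sketch: 24 bits per pixel assumed, blanking overhead ignored; DisplayPort 1.1 over four HBR lanes carries roughly 8.6 Gbit/s of payload):
```python
# Uncompressed video bandwidth: width * height * refresh * bits-per-pixel.
w, h, bpp = 3840, 2160, 24
for hz in (30, 60):
    gbps = w * h * hz * bpp / 1e9
    print(f"4K @ {hz} Hz ~ {gbps:.1f} Gbit/s")
# ~6.0 Gbit/s at 30 Hz fits under ~8.6 Gbit/s; ~11.9 Gbit/s at 60 Hz does not.
```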
More details in my answer here: <https://apple.stackexchange.com/a/147765/39878>
or on this blog post: <http://www.mattburns.co.uk/blog/2014/09/30/running-the-4k-aoc-u2868pqu-and-intel-hd4000-graphics/> | Only 2013 Macs (and upwards) are [compatible with 4K](http://support.apple.com/kb/HT6008).
Current retina MacBook Pros ([13" and 15"](https://www.apple.com/macbook-pro/specs-retina/)) are compatible with 4K, but only at 24Hz. |
143,631 | I'm sifting through some incorrect permission issues and discovered the [namei](http://man7.org/linux/man-pages/man1/namei.1.html) command for Linux. Homebrew doesn't currently have a Mac port.
>
> namei - follow a pathname until a terminal point is found
Is there a command or series of commands that can be used to accomplish the same thing on OS X? | 2014/08/30 | [
"https://apple.stackexchange.com/questions/143631",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/8007/"
] | Only 2013 Macs (and upwards) are [compatible with 4K](http://support.apple.com/kb/HT6008).
Current retina MacBook Pros ([13" and 15"](https://www.apple.com/macbook-pro/specs-retina/)) are compatible with 4K, but only at 24Hz. | Here's your answer: <http://support.apple.com/kb/HT6008>
This document from Apple explains it. |
143,631 | I'm sifting through some incorrect permission issues and discovered the [namei](http://man7.org/linux/man-pages/man1/namei.1.html) command for Linux. Homebrew doesn't currently have a Mac port.
>
> namei - follow a pathname until a terminal point is found
Is there a command or series of commands that can be used to accomplish the same thing on OS X? | 2014/08/30 | [
"https://apple.stackexchange.com/questions/143631",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/8007/"
] | What's officially "supported" and what's possible don't match. I have a late-2012 rMBP and got 4K out of it at 30Hz.
I took a screenshot as proof:

Just a normal mini-displayport<->displayport cable was used.
More details in my answer here: <https://apple.stackexchange.com/a/147765/39878>
or on this blog post: <http://www.mattburns.co.uk/blog/2014/09/30/running-the-4k-aoc-u2868pqu-and-intel-hd4000-graphics/> | Max supposed supported resolution on that card for an external monitor is 2560x1600, I'm afraid.
The 2013 can do 4k, but not the 2012. |
143,631 | I'm sifting through some incorrect permission issues and discovered the [namei](http://man7.org/linux/man-pages/man1/namei.1.html) command for Linux. Homebrew doesn't currently have a Mac port.
>
> namei - follow a pathname until a terminal point is found
Is there a command or series of commands that can be used to accomplish the same thing on OS X? | 2014/08/30 | [
"https://apple.stackexchange.com/questions/143631",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/8007/"
] | The maximum officially supported resolution on that card for an external monitor is 2560x1600, I'm afraid.
The 2013 models can do 4K, but not the 2012. | Here's your answer: <http://support.apple.com/kb/HT6008>
This document from Apple explains it. |
143,631 | I'm sifting through some incorrect permission issues and discovered the [namei](http://man7.org/linux/man-pages/man1/namei.1.html) command for Linux. Homebrew doesn't currently have a Mac port.
>
> namei - follow a pathname until a terminal point is found
Is there a command or series of commands that can be used to accomplish the same thing on OS X? | 2014/08/30 | [
"https://apple.stackexchange.com/questions/143631",
"https://apple.stackexchange.com",
"https://apple.stackexchange.com/users/8007/"
] | What's officially "supported" and what's possible don't match. I have a late-2012 rMBP and got 4K out of it at 30Hz.
I took a screenshot as proof:

Just a normal mini-displayport<->displayport cable was used.
More details in my answer here: <https://apple.stackexchange.com/a/147765/39878>
or on this blog post: <http://www.mattburns.co.uk/blog/2014/09/30/running-the-4k-aoc-u2868pqu-and-intel-hd4000-graphics/> | Here's your answer: <http://support.apple.com/kb/HT6008>
This document from Apple explains it. |
14,720 | The first time Walt sold Gus meth was toward the end of season 2 of *Breaking Bad*, after they met in Los Pollos Hermanos. Gus's trait of being a careful man is highlighted several times. One example was his straight-up refusal to even speak to Walt because he saw that Jesse was high. However, he goes on to do the transaction. Walt gets his money and Gus gets the meth. At this stage Gus didn't know Walt had a brother-in-law in the DEA.
Then, in the final episode of season 2, Gus visits the DEA office as he's sponsoring a fun run. While in the office he spots a picture of Walter. He asks Hank who the picture is of, and Hank explains that it's his brother-in-law.
The next time we see Gus is when Walt goes to Los Pollos Hermanos to tell him he's retiring. Gus straight out offers him 3 million (I believe? I need to check the figure) for 3 months' work. I find it a bit hard to believe, seeing as there was such emphasis on him being a careful man, that he would talk to him in this way the very next time he sees him after finding out about Hank.
Why did he take this action? | 2013/10/22 | [
"https://movies.stackexchange.com/questions/14720",
"https://movies.stackexchange.com",
"https://movies.stackexchange.com/users/3203/"
] | I always figured it was because Gus realized at that moment that Walt and he shared similar methodologies, and Gus realized that in order for Walt to remain hidden from the DEA, he must also be an incredibly careful man.
He misjudged Walt based on Jesse's condition, and this was the moment he realized there was more to the man.
As a more sinister aside, Gus gained a huge amount of *leverage* over Walt by learning such personal information. If he was serious about going into partnership with Walt, sure, he could have gotten Mike to do some investigative work anyway, but the opportunity presented itself and he simply took it.
Gus is clearly trying to gain some kind of insight into the workings of the DEA; it's no coincidence he's sponsoring them for a fun run. He's proactively ingratiating himself into their operations, either for intelligence, to better camouflage his operation, or possibly a mixture of both.
Having Walt onside is a risk, but a calculated one, and possibly one in which the benefits (to Gus at least) outweigh the danger. | By that time Gus had invested massive amounts of money in building the meth lab beneath the laundry, and probably had a huge list of customers waiting on the blue meth with their hands on their wallets. Also, Gus had yet to find a replacement for Walt, so letting go of Walt then would have meant a massive financial loss. Apparently it was a loss Gus (and his partners?) were unwilling to take. However, Gus becomes much more careful with Walt afterwards and puts in motion plans to replace him as meth cook. |
8,639 | I have a few questions about a few verses, Genesis 48:15-16.
>
> And [Jacob] blessed Joseph and said, “The God before whom my fathers Abraham and Isaac walked, the God who has been my shepherd all my life long to this day, the angel who has redeemed me from all evil, bless the boys; and in them let my name be carried on, and the name of my fathers Abraham and Isaac; and let them grow into a multitude in the midst of the earth.”
1. Who is the Angel? It seems like he's talking about God as the angel is attributed with redemption of some sort. Is it accurate to call God an angel? If not, who else could the angel be?
2. What sort of redemption could Jacob have been talking about? Was he talking about the promise of the seed (Salvation from evil of future generations) from Genesis 3:15? Or just a general salvation from earthly evils during his lifetime? Or did he have some pre-law, pre-messianic concept of a salvation from sin? | 2014/03/21 | [
"https://hermeneutics.stackexchange.com/questions/8639",
"https://hermeneutics.stackexchange.com",
"https://hermeneutics.stackexchange.com/users/2150/"
] | The first thing we need to understand is that the Hebrew word מַלְאָךְ (*mal'akh*) literally means "messenger." It can refer to human messengers ([Hag. 1:13](http://www.blbclassic.org/Bible.cfm?b=Hag&c=1&v=13&t=KJV#conc/13)) as well as spiritual messengers ([Gen. 22:11](http://www.blbclassic.org/Bible.cfm?b=Gen&c=22&v=11&t=KJV#conc/11); the latter is what we commonly refer to as "angels"). A related noun מַלְאָכוּת (*mal'akhut*) derived from the same triliteral root מל"ך means "message" ([Hag. 1:13](http://www.blbclassic.org/Bible.cfm?b=Hag&c=1&v=13&t=KJV#conc/13)). The English word "angel" comes from a loose transliteration of the Greek word ἄγγελος (*angelos*). But, like the Hebrew word מַלְאָךְ, it also means "messenger" and can refer to human ([Jam. 2:25](http://www.blbclassic.org/Bible.cfm?b=Jas&c=2&v=25&t=KJV#conc/25)) and spiritual messengers ([Matt. 1:20](http://www.blbclassic.org/Bible.cfm?b=Mat&c=1&v=20&t=KJV#conc/20)).
All that being said, now we can interpret Gen. 48:15-16.
>
> טו וַיְבָרֶךְ אֶת יוֹסֵף וַיֹּאמַר הָאֱלֹהִים אֲשֶׁר הִתְהַלְּכוּ אֲבֹתַי לְפָנָיו אַבְרָהָם וְיִצְחָק הָאֱלֹהִים הָרֹעֶה אֹתִי מֵעוֹדִי עַד הַיּוֹם הַזֶּה טז הַמַּלְאָךְ הַגֹּאֵל אֹתִי מִכָּל רָע יְבָרֵךְ אֶת הַנְּעָרִים וְיִקָּרֵא בָהֶם שְׁמִי וְשֵׁם אֲבֹתַי אַבְרָהָם וְיִצְחָק וְיִדְגּוּ לָרֹב בְּקֶרֶב הָאָרֶץ
>
>
> 15 And he blessed Yosef, and said, "The God, before whom my fathers Avraham and Yitzchak walked, the God who shepherds me ever since until today, 16 the messenger who redeems me from all evil, bless the children, and let my name be named on them, and the name of my fathers, Avraham and Yitzchak, and let them grow into a multitude in the midst of the earth.
We must focus on the idea of redemption from evil. This is not a function of any mere human messenger. In the Tanakh, humans redeem property ([Lev. 25:25](http://www.blbclassic.org/Bible.cfm?b=Lev&c=25&v=25&t=KJV#conc/25)), houses ([Lev. 27:15](http://www.blbclassic.org/Bible.cfm?b=Lev&c=27&v=15&t=KJV#conc/15)), fields ([Lev. 27:19](http://www.blbclassic.org/Bible.cfm?b=Lev&c=27&v=15&t=KJV#conc/19)), relatives via Levirate marriage ([Ruth 3:9](http://www.blbclassic.org/Bible.cfm?b=Rth&c=3&v=9&t=KJV#conc/9)), etc. However, it is Yahveh who redeems His peoples' soul ([Psa. 69:18](http://www.blbclassic.org/Bible.cfm?b=Psa&c=69&v=18&t=KJV#conc/18)) and life ([Psa. 103:4](http://www.blbclassic.org/Bible.cfm?b=Psa&c=103&v=1&t=KJV#conc/4); [Lam. 3:58](http://www.blbclassic.org/Bible.cfm?b=Lam&c=3&v=58&t=KJV#conc/58)); Yahveh redeems His people from the power of the grave ([Hos. 13:14](http://www.blbclassic.org/Bible.cfm?b=Hos&c=13&v=14&t=KJV#conc/14)) and from death ([Hos. 13:14](http://www.blbclassic.org/Bible.cfm?b=Hos&c=13&v=14&t=KJV#conc/14)). Numerous times, Yahveh is referred to as "the redeemer" (הַגֹּאֵל) ([Isa. 47:4](http://www.blbclassic.org/Bible.cfm?b=Isa&c=47&v=4&t=KJV#conc/4)) of His people.
[Keil and Delitzsch](http://www.studylight.org/com/kdo/view.cgi?bk=0&ch=48) wrote,
>
> This triple reference to God, in which the Angel who is placed on an equality with Ha-Elohim cannot possibly be a created angel, but must be the "Angel of God," i.e., God manifested in the form of the Angel of Jehovah, or the "Angel of His face" (Isaiah 63:9)...
So, is the מַלְאַךְ יַהְוֶה (*mal'akh Yahveh*), God Himself?
In [Gen. 28:18-22](http://www.blbclassic.org/Bible.cfm?b=Gen&c=28&v=1&t=KJV#conc/18), Ya'akov anoints a stone and makes a vow to God, saying, "If God will be with me, and will keep me in this way that I go, and will give me bread to eat, and raiment to put on, so that I come again to my father's house in peace, then Yahveh shall be my God."
Notice that Ya'akov makes a vow to Yahveh, i.e. God.
A few chapters later, in [Gen. 31:11-13](http://www.blbclassic.org/Bible.cfm?b=Gen&c=31&v=1&t=KJV#conc/11), Ya'akov states,
>
> And **the angel of God** spoke to me in a dream, saying, "Ya'akov!" And I said, "Here I am!" And he said, "Now lift up your eyes, and see, all the rams which leap upon the cattle are ringstraked, speckled, and grisled, for I have seen all that Laban does to you. **I am the God of Beit-El** ("the House of God"), where you anointed the pillar, and **where you vowed a vow to me**. Now arise! Get out of this land, and return to the land of your kindred!"
Notice how "the angel of God" (lit. "messenger of God") identifies himself as "the God of Beit-El" and then says that Ya'akov "vowed a vow to me." When we go back to [Gen. 28:18-22](http://www.blbclassic.org/Bible.cfm?t=KJV&b=Gen&c=28&v=18&x=0&y=0#conc/18), you'll see that Ya'akov vowed a vow to Yahveh, God.
Therefore, the messenger who redeems Ya'akov from evil could be none other than Yahveh Himself, especially because such a function (i.e., redemption from evil) is something that only Yahveh can do, being "the redeemer of Israel" ([Isa. 49:7](http://www.blbclassic.org/Bible.cfm?b=Isa&c=49&t=KJV#conc/7)). | Jesus walked with Abraham
Jesus was in Jacobs heart
Jesus is the wrestler
We know God as Jesus mostly, the son of God, word of God, core of God, essence of God. Character of God in flesh and blood life form. Son of man and son of God.
God or Yeah is spirit of life generating life from eternity to eternity.
Yeahwhoo is God the father who is spirit of generating.
Yeahwhy is God pouring out of himself through his apostles and prophets via dreams/visions/angels. The generated emanate.
Yeahshuah is gods saving of mankind, eternal life, hard to "get" core/heart of God that was not perceived by most because Jesus prefers subtle glory as he is the once and future king.
King as God
Son as being part of God.
Human as he was cut off from God while mankind went with idols of angels and themselves as god.
The first generation is the hue of God. The spectrum of colors of his own.
Within spectrum one color is most like God. One spirit has a spirit the same or most the same to God.
That generating we know as holy spirit as it's ongoing in stages unveiling the plan of God through or to angelic apostles and their prophets. All pointing to Christ forward and back.
Michael is an angel who is like God in character.
And he is the agent of God within the spectrum of color visible to human eye.
Spirit but visible to those who can perceive him.
God uses his hand a lot in the bible to demonstrate his direct action.
The hand of God represents God.
Like cowboy terms a hand is a servant of his master and Jesus is a servant to his father's power/will.
The hand that rocks the cradle, the baby in the cradle, the covenant between them.
The God of the 7 seals/covenants
The God of law of his will and testament.
The God of eternal life.
One god, being, force, reality, truth, King. |
8,639 | I have a few questions about a few verses, Genesis 48:15-16.
>
> And [Jacob] blessed Joseph and said, “The God before whom my fathers Abraham and Isaac walked, the God who has been my shepherd all my life long to this day, the angel who has redeemed me from all evil, bless the boys; and in them let my name be carried on, and the name of my fathers Abraham and Isaac; and let them grow into a multitude in the midst of the earth.”
1. Who is the Angel? It seems like he's talking about God as the angel is attributed with redemption of some sort. Is it accurate to call God an angel? If not, who else could the angel be?
2. What sort of redemption could Jacob have been talking about? Was he talking about the promise of the seed (Salvation from evil of future generations) from Genesis 3:15? Or just a general salvation from earthly evils during his lifetime? Or did he have some pre-law, pre-messianic concept of a salvation from sin? | 2014/03/21 | [
"https://hermeneutics.stackexchange.com/questions/8639",
"https://hermeneutics.stackexchange.com",
"https://hermeneutics.stackexchange.com/users/2150/"
] | The first thing we need to understand is that the Hebrew word מַלְאָךְ (*mal'akh*) literally means "messenger." It can refer to human messengers ([Hag. 1:13](http://www.blbclassic.org/Bible.cfm?b=Hag&c=1&v=13&t=KJV#conc/13)) as well as spiritual messengers ([Gen. 22:11](http://www.blbclassic.org/Bible.cfm?b=Gen&c=22&v=11&t=KJV#conc/11); the latter is what we commonly refer to as "angels"). A related noun מַלְאָכוּת (*mal'akhut*) derived from the same triliteral root מל"ך means "message" ([Hag. 1:13](http://www.blbclassic.org/Bible.cfm?b=Hag&c=1&v=13&t=KJV#conc/13)). The English word "angel" comes from a loose transliteration of the Greek word ἄγγελος (*angelos*). But, like the Hebrew word מַלְאָךְ, it also means "messenger" and can refer to human ([Jam. 2:25](http://www.blbclassic.org/Bible.cfm?b=Jas&c=2&v=25&t=KJV#conc/25)) and spiritual messengers ([Matt. 1:20](http://www.blbclassic.org/Bible.cfm?b=Mat&c=1&v=20&t=KJV#conc/20)).
All that being said, now we can interpret Gen. 48:15-16.
>
> טו וַיְבָרֶךְ אֶת יוֹסֵף וַיֹּאמַר הָאֱלֹהִים אֲשֶׁר הִתְהַלְּכוּ אֲבֹתַי לְפָנָיו אַבְרָהָם וְיִצְחָק הָאֱלֹהִים הָרֹעֶה אֹתִי מֵעוֹדִי עַד הַיּוֹם הַזֶּה טז הַמַּלְאָךְ הַגֹּאֵל אֹתִי מִכָּל רָע יְבָרֵךְ אֶת הַנְּעָרִים וְיִקָּרֵא בָהֶם שְׁמִי וְשֵׁם אֲבֹתַי אַבְרָהָם וְיִצְחָק וְיִדְגּוּ לָרֹב בְּקֶרֶב הָאָרֶץ
>
>
> 15 And he blessed Yosef, and said, "The God, before whom my fathers Avraham and Yitzchak walked, the God who shepherds me ever since until today, 16 the messenger who redeems me from all evil, bless the children, and let my name be named on them, and the name of my fathers, Avraham and Yitzchak, and let them grow into a multitude in the midst of the earth.
We must focus on the idea of redemption from evil. This is not a function of any mere human messenger. In the Tanakh, humans redeem property ([Lev. 25:25](http://www.blbclassic.org/Bible.cfm?b=Lev&c=25&v=25&t=KJV#conc/25)), houses ([Lev. 27:15](http://www.blbclassic.org/Bible.cfm?b=Lev&c=27&v=15&t=KJV#conc/15)), fields ([Lev. 27:19](http://www.blbclassic.org/Bible.cfm?b=Lev&c=27&v=15&t=KJV#conc/19)), relatives via Levirate marriage ([Ruth 3:9](http://www.blbclassic.org/Bible.cfm?b=Rth&c=3&v=9&t=KJV#conc/9)), etc. However, it is Yahveh who redeems His peoples' soul ([Psa. 69:18](http://www.blbclassic.org/Bible.cfm?b=Psa&c=69&v=18&t=KJV#conc/18)) and life ([Psa. 103:4](http://www.blbclassic.org/Bible.cfm?b=Psa&c=103&v=1&t=KJV#conc/4); [Lam. 3:58](http://www.blbclassic.org/Bible.cfm?b=Lam&c=3&v=58&t=KJV#conc/58)); Yahveh redeems His people from the power of the grave ([Hos. 13:14](http://www.blbclassic.org/Bible.cfm?b=Hos&c=13&v=14&t=KJV#conc/14)) and from death ([Hos. 13:14](http://www.blbclassic.org/Bible.cfm?b=Hos&c=13&v=14&t=KJV#conc/14)). Numerous times, Yahveh is referred to as "the redeemer" (הַגֹּאֵל) ([Isa. 47:4](http://www.blbclassic.org/Bible.cfm?b=Isa&c=47&v=4&t=KJV#conc/4)) of His people.
[Keil and Delitzsch](http://www.studylight.org/com/kdo/view.cgi?bk=0&ch=48) wrote,
>
> This triple reference to God, in which the Angel who is placed on an equality with Ha-Elohim cannot possibly be a created angel, but must be the "Angel of God," i.e., God manifested in the form of the Angel of Jehovah, or the "Angel of His face" (Isaiah 63:9)...
So, is the מַלְאַךְ יַהְוֶה (*mal'akh Yahveh*), God Himself?
In [Gen. 28:18-22](http://www.blbclassic.org/Bible.cfm?b=Gen&c=28&v=1&t=KJV#conc/18), Ya'akov anoints a stone and makes a vow to God, saying, "If God will be with me, and will keep me in this way that I go, and will give me bread to eat, and raiment to put on, so that I come again to my father's house in peace, then Yahveh shall be my God."
Notice that Ya'akov makes a vow to Yahveh, i.e. God.
A few chapters later, in [Gen. 31:11-13](http://www.blbclassic.org/Bible.cfm?b=Gen&c=31&v=1&t=KJV#conc/11), Ya'akov states,
>
> And **the angel of God** spoke to me in a dream, saying, "Ya'akov!" And I said, "Here I am!" And he said, "Now lift up your eyes, and see, all the rams which leap upon the cattle are ringstraked, speckled, and grisled, for I have seen all that Laban does to you. **I am the God of Beit-El** ("the House of God"), where you anointed the pillar, and **where you vowed a vow to me**. Now arise! Get out of this land, and return to the land of your kindred!"
Notice how "the angel of God" (lit. "messenger of God") identifies himself as "the God of Beit-El" and then says that Ya'akov "vowed a vow to me." When we go back to [Gen. 28:18-22](http://www.blbclassic.org/Bible.cfm?t=KJV&b=Gen&c=28&v=18&x=0&y=0#conc/18), you'll see that Ya'akov vowed a vow to Yahveh, God.
Therefore, the messenger who redeems Ya'akov from evil could be none other than Yahveh Himself, especially because such a function (i.e., redemption from evil) is something that only Yahveh can do, being "the redeemer of Israel" ([Isa. 49:7](http://www.blbclassic.org/Bible.cfm?b=Isa&c=49&t=KJV#conc/7)). | Israel means Inheritance. The sons of God in heaven are the Elohim, called 'Principles'; they are responsible for collecting the inheritance from Earth (spiritual Israel in the finality) redeemed through Christ. Thus the one who wrestled with the Patriarch Jacob was none other than Michael the Archangel, who introduced the name "Israel" |
119,699 | I recently signed up for [Up bank](https://up.com.au/).
The process went like this:
1. Go to the website, download the app to your phone.
2. Enter your phone number.
3. Enter the SMS-sent verification code.
4. Enter your address.
5. Enter your Australian Driver's License number.
That's it: you now have an account you can deposit money into, and they're sending a card in the mail.
I'm curious how this fits Australian KYC laws. This seems easy to abuse; for example, lists of stolen driver's license numbers could be used to create bank accounts. (Admittedly, using a phone number is a second part of KYC, as getting an Australian phone number requires an in-person ID check.) | 2020/01/28 | [
"https://money.stackexchange.com/questions/119699",
"https://money.stackexchange.com",
"https://money.stackexchange.com/users/20732/"
] | We can't tell their exact policy, but most banks have a tiered or stepped underwriting process.
Example:
* Level 1 - Requirements: valid phone number, driver licence and address. Allowed to: add money to the account.
* Level 2 - Requirements: 100-point check (scanned passport etc.). Allowed to: withdraw up to $5k from the account.
and on and on.
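A minimal sketch of how such tiers might be modelled (illustrative only; the level names and limits here are invented, not Up's actual policy):
```python
# Hypothetical tiered KYC limits; names and figures invented for illustration.
TIERS = {
    1: {"requires": "phone + driver licence + address", "withdrawal_limit": 0},
    2: {"requires": "100-point check (scanned passport etc.)", "withdrawal_limit": 5_000},
}

def max_withdrawal(verification_level: int) -> int:
    """Return the withdrawal limit for a customer's verification level."""
    return TIERS[verification_level]["withdrawal_limit"]

print(max_withdrawal(1))  # 0: can deposit, but not withdraw yet
print(max_withdrawal(2))  # 5000
```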
There is a trade-off between easy onboarding and security, and this is the modern way to manage it. | According to their own website:
>
> Up is designed, developed and delivered through a collaboration
> between Ferocia Pty Ltd ABN 67 152 963 712 ("Ferocia") and Bendigo and
> Adelaide Bank Limited ABN 11 068 049 178
So [Bendigo and Adelaide Bank Limited](https://abr.business.gov.au/ABN/View?id=11068049178) is the bank behind the "Up" brand. It's a digital bank like many others. I'm pretty sure the security concerns you raise are regulated by the Australian government and also addressed by internal processes using modern algorithms, artificial intelligence, etc. |
119,699 | I recently signed up for [Up bank](https://up.com.au/).
The process went like this:
1. Go to the website, download the app to your phone.
2. Enter your phone number.
3. Enter the SMS-sent verification code.
4. Enter your address.
5. Enter your Australian Driver's License number.
That's it: you now have an account you can deposit money into, and they're sending a card in the mail.
I'm curious how this fits Australian KYC laws. This seems easy to abuse; for example, lists of stolen driver's license numbers could be used to create bank accounts. (Admittedly, using a phone number is a second part of KYC, as getting an Australian phone number requires an in-person ID check.) | 2020/01/28 | [
"https://money.stackexchange.com/questions/119699",
"https://money.stackexchange.com",
"https://money.stackexchange.com/users/20732/"
] | It's done with an electronic instant [DVS Check](https://www.idmatch.gov.au/), a [credit ping](https://www.equifax.com.au/business-enterprise/solutions/aml-compliance) (not a full check) and [safe harbour](https://www.austrac.gov.au/business/how-comply-and-report-guidance-and-resources/customer-identification-and-verification/customer-identification-know-your-customer-kyc).
If you used a stolen driver's license you could theoretically sign up (provided it's not been reported stolen), but you would be committing identity fraud. It is possible to resolve, but it's a [nightmare for the victim](https://www.abc.net.au/news/2019-09-06/drivers-licence-identity-theft-leaves-victims-exposed/11439668). | According to their own website:
>
> Up is designed, developed and delivered through a collaboration
> between Ferocia Pty Ltd ABN 67 152 963 712 ("Ferocia") and Bendigo and
> Adelaide Bank Limited ABN 11 068 049 178
So [Bendigo and Adelaide Bank Limited](https://abr.business.gov.au/ABN/View?id=11068049178) is the bank behind the "Up" brand. It's a digital bank like many others. I'm pretty sure the security concerns you raise are regulated by the Australian government and also addressed by internal processes using modern algorithms, artificial intelligence, etc. |
879,621 | I have some files that are uuencoded, and I need to decode them, using either .NET 2.0 or Visual C++ 6.0. Any good libraries/classes that will help here? It looks like this is not built into .NET or MFC. | 2009/05/18 | [
"https://Stackoverflow.com/questions/879621",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/64257/"
] | Try uudeview, [here](http://www.fpx.de/fp/Software/UUDeview/). It is an open source library which works well and will also handle yenc files in addition to uuencoded ones. You can use it with C/C++ or write an interop wrapper for C# without much trouble. | Code Project has a .NET library + source code for uuencoding/decoding. The actual algorithm itself is quite widely disseminated over the web and is quite short.
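For reference, the core decode step is small enough to sketch (Python here, purely to illustrate the algorithm; the linked library is for .NET):
```python
def uudecode_line(line: str) -> bytes:
    """Decode one body line of a uuencoded file.

    Each character carries 6 bits as (ord(c) - 32); the first character
    gives the number of payload bytes on the line. '`' also decodes to 0.
    """
    dec = lambda c: (ord(c) - 32) & 0x3F
    count = dec(line[0])
    acc = bits = 0
    out = bytearray()
    for c in line[1:]:
        acc = (acc << 6) | dec(c)
        bits += 6
        if bits >= 8:
            bits -= 8
            out.append((acc >> bits) & 0xFF)
    return bytes(out[:count])

assert uudecode_line("#0V%T") == b"Cat"   # classic example line
```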
The Code Project link: <http://www.codeproject.com/KB/security/TextCoDec.aspx>
Short intro from the article:
>
> This article presents a class library
> for encoding/decoding files and/or
> text in several algorithms in .NET.
> Some of the features of this library:
>
> * Encoding/decoding text in Quoted Printable
> * Encoding/decoding files and text in Base64
> * Encoding/decoding files and text in UUEncode
> * Encoding/decoding files in yEnc
> |
11,600 | From reading the rules it would appear there are two kinds of damage and then straight *loss of life*:
>
> 118.3 If an effect causes a player to gain or lose life, that player's life total is adjusted accordingly.
From reading that I would guess that effects like *Extort* are not sources of damage. You simply lose the life.
>
> 119.2a Damage may be dealt as a result of combat. Each attacking and blocking creature deals combat damage equal to its power during the combat damage step.
So damage from creatures during combat is combat damage.
>
> 119.2b Damage may be dealt as an effect of a spell or ability. The spell or ability will specify which object deals that damage.
And that is direct damage from an object.
So the arguments are these:
Does loss of life as outlined by rule 118.3 (at top) count as being dealt damage? Do triggered abilities that redirect or reduce damage affect loss of life?
My second related question: you put [Arcane Teachings](http://gatherer.wizards.com/Pages/Card/Details.aspx?multiverseid=130530) on a [Blinding Angel](http://gatherer.wizards.com/pages/card/Details.aspx?multiverseid=83007) and tap her to deal one damage to your opponent. Am I right in saying that because the damage was direct damage (not dealt during combat), the triggered ability preventing the damaged player's combat phase is not triggered? | 2013/03/28 | [
"https://boardgames.stackexchange.com/questions/11600",
"https://boardgames.stackexchange.com",
"https://boardgames.stackexchange.com/users/5081/"
] | This is cleared up in rule 118.2:
>
> Damage dealt to a player normally causes that player to lose that much life. See rule 119.3.
So you actually have it backwards. **Damage to a player is loss of life**, not the other way around. When a player is "dealt damage," they lose that much life. (See below for more on "normally.") This is emphasized in rule 119.1a:
>
> Damage can't be dealt to an object that's neither a creature nor a planeswalker.
Spells that specifically say "lose life" cannot be reduced by spells that redirect damage. Damage is caused as "a result of combat" (119.2a) or "as an effect of a spell or ability" (119.2b). Spells that cause loss of life do not cause damage. Instead, they go around damage and just cause the loss of life. There are two ways to cause damage (quoted above as combat damage and damage from spells), and when these objects would inflict damage on a player, that player loses that much life.
**If a player takes damage, they lose that much life; if a player loses life, they lose that much life.** Additionally, if a creature takes damage, it takes that much damage; creatures do not lose life. Think of life as the currency of players, which damage can impact in a negative way.
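A toy model of that relationship (a sketch only; the class and method names are invented for illustration, not anything from the comprehensive rules):
```python
# Toy model: damage is one *cause* of life loss, but life loss needn't be damage.
class Player:
    def __init__(self, life=20):
        self.life = life
        self.poison = 0

    def deal_damage(self, amount, infect=False):
        if infect:
            self.poison += amount    # rule 119.3b: infect gives poison counters
        else:
            self.lose_life(amount)   # rules 118.2/119.3a: damage causes life loss

    def lose_life(self, amount):
        self.life -= amount          # rule 118.3: plain life loss, no damage dealt

p = Player()
p.deal_damage(3)   # e.g. a burn spell: damage, hence life loss
p.lose_life(2)     # e.g. Extort: life loss that was never damage
print(p.life)      # 15
```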
This distinction between damage and loss of life is important. They are intentionally kept separate for cards like [Griselbrand](http://magiccards.info/avr/en/106.html). The designers wouldn't want you to be able to prevent the loss of life caused by his ability by casting a simple damage reduction spell, like [Reflect Damage](http://magiccards.info/query?q=reflect%20damage&v=card&s=cname), so loss of life is kept as a separate concept.
If a triggered ability says "whenever a player is dealt damage," it would not trigger when that player is affected by a spell that causes loss of life.
In regards to the Blinding Angel example, **you are correct.** Blinding Angel did not deal combat damage so its ability would not trigger.
---
The word "normally" in rule 118.2 refers to one way in which damage is modified, laid out specifically in 119.3b. This rules lays out an important exception to loss of life:
>
> Damage dealt to a player by a source with infect causes that player to get that many poison counters.
This is an exception; players lose life when they are dealt damage 99.9% of the time. | The relationship can be summed up by two rules:
>
> 119.2a Damage may be dealt as a result of combat. Each attacking and blocking creature deals combat damage equal to its power during the combat damage step.
>
>
> 119.3a Damage dealt to a player by a source without infect causes that player to lose that much life.
So combat damage is a special subset of damage, and non-infect damage to a player causes life loss. Neither of the reverse statements is true. There are many ways to deal damage without it being combat damage. There are also many ways to cause a player to lose life without that player taking damage.
119.2a is the definition of combat damage. To count as combat damage (such as for [Fog](http://gatherer.wizards.com/Pages/Search/Default.aspx?name=%2b%5bFog%5d) or [Curiosity](http://gatherer.wizards.com/Pages/Search/Default.aspx?name=%2b%5bCuriosity%5d)), the damage must be dealt as a result of combat, not just during the combat phase. This is specifically damage dealt by attacking and blocking creatures during the turn-based action at the beginning of a combat damage step (see below for more details).
119.3a is why damage to a player changes their life total. Damage and life loss are two totally separate game mechanics, joined only by this single rule. There is nothing to make the relationship go the other way. So, for example, [Lightning Bolt](http://gatherer.wizards.com/Pages/Search/Default.aspx?name=%2b%5bLightning%20Bolt%5d) will trigger the ability on [Exquisite Blood](http://gatherer.wizards.com/Pages/Search/Default.aspx?name=%2b%5bExquisite%20Blood%5d), but [Circle of Protection: Black](http://gatherer.wizards.com/Pages/Search/Default.aspx?name=%2b%5bCircle%5d%2b%5bof%5d%2b%5bProtection%5d%2b%5bBlack%5d) cannot prevent the life loss from [Kaervek's Spite](http://gatherer.wizards.com/Pages/Search/Default.aspx?name=%2b%5bKaervek%27s%5d%2b%5bSpite%5d).
---
### Details on combat damage
>
> 510.1. First, the active player announces how each attacking creature assigns its combat damage, then the defending player announces how each blocking creature assigns its combat damage. This turn-based action doesn't use the stack.
>
>
> 510.1a Each attacking creature and each blocking creature assigns combat damage equal to its power. Creatures that would assign 0 or less damage this way don't assign combat damage at all.
>
>
> 510.2. Second, all combat damage that's been assigned is dealt simultaneously. This turn-based action doesn't use the stack. No player has the chance to cast spells or activate abilities between the time combat damage is assigned and the time it's dealt.
> |
4,905 | I am currently working on Super OSD - an on screen display project. <http://code.google.com/p/super-osd> has all the details.
At the moment I'm using a dsPIC MCU to do the job. This is a very powerful DSP (40 MIPS @ 80 MHz, three-register single-cycle operations and a MAC unit) and, importantly, it comes in a DIP package (because I'm using a breadboard to prototype it.) I'm really getting every last bit of performance out of it running the OSD - the chip has about 200ns or 10 cycles per pixel on the output stage so the code has to be very optimised in this part (for this reason it will always be written in assembly.)
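Rough arithmetic on that per-pixel budget (a sketch; instruction timing is idealised):
```python
# How many instructions fit in the per-pixel window at 40 MIPS?
mips = 40e6            # instructions per second (dsPIC at 80 MHz)
pixel_time = 200e-9    # seconds per pixel, from the estimate above
print(mips * pixel_time)   # 8.0 -> only ~8-10 instructions per pixel
```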
Now I am considering using an FPGA for this, because the parallel architecture of such a chip makes it possible to have simple logic running the OSD. Things like drawing lines and algorithmic code would be handled by an MCU, but the actual output would be done with an FPGA. And some simple things, like setting pixels or drawing horizontal and vertical lines, I would like to integrate onto the FPGA to improve speed.
I have some questions:
1. Will it cost significantly more? The cheapest FPGAs I found were ~£5 each and the dsPIC is £3 each. So it will cost more, but by how much?
2. The dsPIC fits in an SO28 package. I would not like to go bigger than SO28 or TQFP44. Most FPGAs I've seen come in BGA or TQFP>100 packages, which aren't an option at the moment, due to the sheer size and the difficulty of soldering them myself.
3. How much current is used by an FPGA? The dsPIC solution currently consumes about 55mA +/- 10mA, which is okay at the moment. Would an FPGA consume more or less? Is it variable, or is it pretty much static, like the dsPIC?
4. I need at least 12KB of graphics memory to store the OSD graphics. Do FPGAs have this kind of memory available on the chip, or is this only available with external chips? | 2010/10/06 | [
"https://electronics.stackexchange.com/questions/4905",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/1225/"
] | In principle this is a good candidate for an FPGA-based design. Regarding your requirements:
ad 1. The FPGA will most likely be more expensive; by how much depends on the device you choose. At first glance the smallest Spartan 3 from Xilinx (XC3S50AN) will be more than enough for this task (~10£ from Farnell). I think you can assume this is the upper boundary for the cost (it has 56kB RAM inside, so it is more than you need). You may find a cheaper device either in Xilinx's offering or from their competitors Altera and Lattice.
ad 2. The package is the tough issue; I have not seen an FPGA with a smaller footprint either. However, maybe you can use a CPLD device (for the sake of argument, CPLDs are small FPGAs), which may come in a smaller package (PLCC or QFN). On the plus side they will be cheaper (even a single $); on the negative side they most likely will not have RAM inside. With a CPLD you would probably need an external SRAM chip.
ad 3. FPGA and CPLD current consumption is highly dependent on the programmed design. However, there is a good chance that an FPGA design, and especially a CPLD design, would consume less than your current solution.
ad 4. FPGAs do have that kind of memory inside; CPLDs most certainly do not. This may be solved by an external SRAM chip (or two). For example:
|SRAM 1| <-->
              |CPLD| <--> |uC|
|SRAM 2| <-->
In such an arrangement, while the uC is writing to SRAM 1, the CPLD will be displaying data from SRAM 2. The CPLD should be able to handle both tasks simultaneously.
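The idea is classic double buffering; a minimal sketch of the swap logic (buffer size and swap trigger are illustrative):
```python
# Ping-pong (double) buffering: one buffer is displayed while the other is
# written; the roles swap at a frame boundary.
front = bytearray(12 * 1024)   # being displayed (SRAM 2 in the diagram)
back  = bytearray(12 * 1024)   # being written by the uC (SRAM 1)

def end_of_frame():
    """Swap buffers so the freshly written frame becomes the visible one."""
    global front, back
    front, back = back, front

back[0] = 0xFF      # the uC draws into the back buffer...
end_of_frame()      # ...and the swap makes it the displayed buffer
```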
Of course you can solve this in other ways too:
1) use a faster microcontroller (an ARM, for example)
2) use a device with some programmable fabric and a uC inside (for example, the FPSLIC from Atmel; however, I have never used such devices and I know very little about them)
Standard disclaimer -> as designs are open problems, with many constraints and possible solutions, whatever I wrote above may not be true for your case. I believe it is worth checking those options, though. | You could use a CPLD rather than an FPGA, such as one of the Altera MAX II parts. They are available in QFP44 packages, unlike FPGAs. They are actually small FPGAs, but Altera plays down that aspect. CPLDs have an advantage over most FPGAs in that they have on-chip configuration memory; FPGAs generally require an external flash chip. There are other CPLDs, of course, but I like the MAX II.
It's impossible to say what the current consumption will be, as it depends on clock speeds and the amount of logic that is actually in use.
FPGAs usually have a limited amount of on-chip memory you can use, but you will need external memory with a CPLD.
Another option would be an [XMOS](http://www.xmos.com/) chip, but the smallest one (the XS1-L1) is in a QFP64 package. It has plenty of on-chip RAM - 64k. |
4,905 | I am currently working on Super OSD - an on screen display project. <http://code.google.com/p/super-osd> has all the details.
At the moment I'm using a dsPIC MCU to do the job. This is a very powerful DSP (40 MIPS @ 80 MHz, three-register single-cycle operations and a MAC unit) and, importantly, it comes in a DIP package (because I'm using a breadboard to prototype it.) I'm really getting every last bit of performance out of it running the OSD - the chip has about 200ns or 10 cycles per pixel on the output stage so the code has to be very optimised in this part (for this reason it will always be written in assembly.)
Now I am considering using an FPGA for this, because the parallel architecture of such a chip makes it possible to have simple logic running the OSD. Things like drawing lines and algorithmic code would be handled by an MCU, but the actual output would be done with an FPGA. And some simple things, like setting pixels or drawing horizontal and vertical lines, I would like to integrate onto the FPGA to improve speed.
I have some questions:
1. Will it cost significantly more? The cheapest FPGAs I found were ~£5 each and the dsPIC is £3 each. So it will cost more, but by how much?
2. The dsPIC fits in an SO28 package. I would not like to go bigger than SO28 or TQFP44. Most FPGAs I've seen come in BGA or TQFP>100 packages, which aren't an option at the moment, due to the sheer size and the difficulty of soldering them myself.
3. How much current is used by an FPGA? The dsPIC solution currently consumes about 55mA +/- 10mA, which is okay at the moment. Would an FPGA consume more or less? Is it variable, or is it pretty much static, like the dsPIC?
4. I need at least 12KB of graphics memory to store the OSD graphics. Do FPGAs have this kind of memory available on the chip, or is this only available with external chips? | 2010/10/06 | [
"https://electronics.stackexchange.com/questions/4905",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/1225/"
] | You could use a CPLD rather than an FPGA, such as one of the Altera MAX II parts. They are available in QFP44 packages, unlike FPGAs. They are actually small FPGAs, but Altera plays down that aspect. CPLDs have an advantage over most FPGAs in that they have on-chip configuration memory; FPGAs generally require an external flash chip. There are other CPLDs, of course, but I like the MAX II.
It's impossible to say what the current consumption will be, as it depends on clock speeds and the amount of logic that is actually in use.
FPGAs usually have a limited amount of on-chip memory you can use, but you will need external memory with a CPLD.
Another option would be an [XMOS](http://www.xmos.com/) chip, but the smallest one (the XS1-L1) is in a QFP64 package. It has plenty of on-chip RAM - 64k. | My inclination would be to use something to buffer the timing between the processor and the display. Having hardware that can show an entire frame of video without processor intervention may be nice, but perhaps overkill. I would suggest that the best compromise between hardware and software complexity would probably be to make something with two or three independent 1024-bit shift registers (two bits per pixel, to allow for black, white, gray, or transparent), and a means of switching between them. Have the PIC load up a shift register, and then have the hardware start shifting that one out while it sets a flag so the PIC can load the next one. With two shift registers, the PIC would have 64us between the time it is told a shift register is available and the time all the data has to be shifted. With three shift registers, the PIC would have to average one line every 64us, but it could tolerate a delay of up to 64us.
Note that while a 1024-bit FIFO would be just as good as two 1024-bit shift registers, and in a CPLD a FIFO only costs one macrocell per bit, plus some control logic, in most other types of logic two bits of shift register will be cheaper than one bit of FIFO.
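To see where the 64 µs figure comes from (a sketch; PAL/NTSC-style line timing is assumed, with a roughly 64 µs horizontal line period):
```python
# A PAL video line lasts ~64 us; shifting 1024 pixels out in that window
# sets the shift clock, and the PIC must refill one register per line.
line_period_us = 64.0
pixels_per_line = 1024
shift_clock_mhz = pixels_per_line / line_period_us   # MHz (us cancels)
print(f"shift clock ~ {shift_clock_mhz:.0f} MHz")    # 16 MHz
```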
An alternative approach would be to connect a CPLD to an SRAM, and make a simple video subsystem with that. Aesthetically, I like the on-the-fly video generation, and if somebody made nice cheap 1024-bit shift-register chips it's the approach I'd favor, but using an external SRAM may be cheaper than using an FPGA with enough resources to make multiple 1024-bit shift registers. For your output resolution it will be necessary to clock out data at 12M pixels/sec, or 3MBytes/sec. It should be possible to arrange things to allow for data to be clocked in at a rate of up to 10mbps without too much difficulty by interleaving memory cycles; the biggest trick would be preventing data corruption if a sync pulse doesn't come at the precise moment expected. |
4,905 | I am currently working on Super OSD - an on screen display project. <http://code.google.com/p/super-osd> has all the details.
At the moment I'm using a dsPIC MCU to do the job. This is a very powerful DSP (40 MIPS @ 80 MHz, three-register single-cycle operations and a MAC unit) and, importantly, it comes in a DIP package (because I'm using a breadboard to prototype it.) I'm really getting every last bit of performance out of it running the OSD - the chip has about 200ns or 10 cycles per pixel on the output stage so the code has to be very optimised in this part (for this reason it will always be written in assembly.)
Now I am considering using an FPGA for this, because the parallel architecture of such a chip makes it possible to have simple logic running the OSD. Things like drawing lines and algorithmic code would be handled by an MCU, but the actual output would be done with an FPGA. And some simple things, like setting pixels or drawing horizontal and vertical lines, I would like to integrate onto the FPGA to improve speed.
I have some questions:
1. Will it cost significantly more? The cheapest FPGAs I found were ~£5 each and the dsPIC is £3 each. So it will cost more, but by how much?
2. The dsPIC fits in an SO28 package. I would not like to go bigger than SO28 or TQFP44. Most FPGAs I've seen come in BGA or TQFP>100 packages, which aren't an option at the moment, due to the sheer size and the difficulty of soldering them myself.
3. How much current is used by an FPGA? The dsPIC solution currently consumes about 55mA +/- 10mA, which is okay at the moment. Would an FPGA consume more or less? Is it variable, or is it pretty much static, like the dsPIC?
4. I need at least 12KB of graphics memory to store the OSD graphics. Do FPGAs have this kind of memory available on the chip, or is this only available with external chips? | 2010/10/06 | [
"https://electronics.stackexchange.com/questions/4905",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/1225/"
] | In principle this is a good candidate for an FPGA-based design. Regarding your requirements:
ad 1. The FPGA will most likely be more expensive; by how much depends on the device you choose. At first glance the smallest Spartan 3 from Xilinx (XC3S50AN) will be more than enough for this task (~10£ from Farnell). I think you can assume this is the upper boundary for the cost (it has 56kB RAM inside, so it is more than you need). You may find a cheaper device either in Xilinx's offering or from their competitors Altera and Lattice.
ad 2. The package is the tough issue; I have not seen an FPGA with a smaller footprint either. However, maybe you can use a CPLD device (for the sake of argument, CPLDs are small FPGAs), which may come in a smaller package (PLCC or QFN). On the plus side they will be cheaper (even a single $); on the negative side they most likely will not have RAM inside. With a CPLD you would probably need an external SRAM chip.
ad 3. FPGA and CPLD current consumption is highly dependent on the programmed design. However, there is a good chance that an FPGA design, and especially a CPLD design, would consume less than your current solution.
ad 4. FPGAs do have that kind of memory inside; CPLDs most certainly do not. This may be solved by an external SRAM chip (or two). For example:
|SRAM 1| <-->
              |CPLD| <--> |uC|
|SRAM 2| <-->
In such an arrangement, while the uC is writing to SRAM 1, the CPLD will be displaying data from SRAM 2. The CPLD should be able to handle both tasks simultaneously.
Of course you can solve this in other ways too:
1) use a faster microcontroller (an ARM, for example)
2) use a device with some programmable fabric and a uC inside (for example, the FPSLIC from Atmel; however, I have never used such devices and I know very little about them)
Standard disclaimer -> as designs are open problems, with many constraints and possible solutions, whatever I wrote above may not be true for your case. I believe it is worth checking those options, though. | The cheapest solution with the lowest learning curve would be to move to a higher-powered processor, an ARM most likely.
Programming an FPGA/CPLD in VHDL/Verilog is a pretty steep learning curve coming from C for many people. They also aren't overly cheap parts.
Using a decently capable ARM, maybe an LPC1769 (Cortex-M3), you would also likely be able to replace the PIC18 in your design.
As for the through-hole issue, as long as you can get the SoC in an exposed-pin QFP-type package, [just grab some of these adapters](http://www.futurlec.com/SMD_Adapters.shtml) for the needed pinout for your prototyping. |
4,905 | I am currently working on Super OSD - an on screen display project. <http://code.google.com/p/super-osd> has all the details.
At the moment I'm using a dsPIC MCU to do the job. This is a very powerful DSP (40 MIPS @ 80 MHz, three-register single-cycle operations and a MAC unit) and, importantly, it comes in a DIP package (because I'm using a breadboard to prototype it.) I'm really getting every last bit of performance out of it running the OSD - the chip has about 200ns or 10 cycles per pixel on the output stage so the code has to be very optimised in this part (for this reason it will always be written in assembly.)
Now I am considering using an FPGA for this, because the parallel architecture of such a chip makes it possible to have simple logic running the OSD. Things like drawing lines and algorithmic code would be handled by an MCU, but the actual output would be done with an FPGA. And some simple things, like setting pixels or drawing horizontal and vertical lines, I would like to integrate onto the FPGA to improve speed.
I have some questions:
1. Will it cost significantly more? The cheapest FPGAs I found were ~£5 each and the dsPIC is £3 each. So it will cost more, but by how much?
2. The dsPIC fits in an SO28 package. I would not like to go bigger than SO28 or TQFP44. Most FPGAs I've seen come in BGA or TQFP>100 packages, which aren't an option at the moment, due to the sheer size and the difficulty of soldering them myself.
3. How much current is used by an FPGA? The dsPIC solution currently consumes about 55mA +/- 10mA, which is okay at the moment. Would an FPGA consume more or less? Is it variable, or is it pretty much static, like the dsPIC?
4. I need at least 12KB of graphics memory to store the OSD graphics. Do FPGAs have this kind of memory available on the chip or is this only available with external chips? | 2010/10/06 | [
"https://electronics.stackexchange.com/questions/4905",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/1225/"
] | In principle this is a good candidate for an FPGA-based design. Regarding your requirements:
ad 1. The FPGA most likely will be more expensive; by how much depends on the device you choose. At first glance the smallest Spartan 3 from Xilinx (XC3S50AN) will be more than enough for this task (~£10 from Farnell). I think you can assume this is the upper boundary for the cost (it has 56kB of RAM inside, so it is more than you need). You may find a cheaper device either in Xilinx's offering or from their competitors Altera and Lattice.
ad 2. The package is the tough issue; I have not seen an FPGA with a smaller footprint either. However, maybe you can use a CPLD device (for the sake of argument, CPLDs are small FPGAs), which may come in a smaller package (PLCC or QFN). On the plus side they will be cheaper (even a single $); on the negative side they most likely will not have RAM inside. With a CPLD you would probably need an external SRAM chip.
ad 3. FPGA and CPLD current consumption is highly dependent on the programmed design. However, there is a good chance that an FPGA design, and especially a CPLD design, would consume less than your current solution.
ad 4. FPGAs do have that kind of memory inside; CPLDs most certainly do not. This may be solved by an external SRAM chip (or two). For example:
|SRAM 1| <--> |CPLD| <--> |uC|
|SRAM 2| <-->
In such an arrangement, while the uC is writing to SRAM 1, the CPLD will be displaying data from SRAM 2. The CPLD should be able to handle both tasks simultaneously.
Of course you can solve this in other ways too:
1) use a faster microcontroller (an ARM, for example)
2) use a device with some programmable fabric and a uC inside (for example, the FPSLIC from Atmel; however, I have never used such devices and know very little about them)
Standard disclaimer: as designs are open problems with many constraints and possible solutions, whatever I wrote above may not be true for your case. I believe it is worth checking those options, though. | My inclination would be to use something to buffer the timing between the processor and the display. Having hardware that can show an entire frame of video without processor intervention may be nice, but perhaps overkill. I would suggest that the best compromise between hardware and software complexity would probably be to make something with two or three independent 1024-bit shift registers (two bits per pixel, to allow for black, white, gray, or transparent), and a means of switching between them. Have the PIC load up a shift register, and then have the hardware start shifting that one out while it sets a flag so the PIC can load the next one. With two shift registers, the PIC would have 64us between the time it is told a shift register is available and the time all the data has to be shifted. With three shift registers, the PIC would have to average one line every 64us, but it could tolerate a delay of up to 64us.
Note that a 1024-bit FIFO would be just as good as two 1024-bit shift registers, and in a CPLD a FIFO only costs one macrocell per bit plus some control logic; in most other types of logic, however, two bits of shift register will be cheaper than one bit of FIFO.
An alternative approach would be to connect a CPLD to an SRAM, and make a simple video subsystem with that. Aesthetically, I like the on-the-fly video generation, and if somebody made nice cheap 1024-bit shift-register chips it's the approach I'd favor, but using an external SRAM may be cheaper than using an FPGA with enough resources to make multiple 1024-bit shift registers. For your output resolution it will be necessary to clock out data at 12M pixels/sec, or 3MBytes/sec. It should be possible to arrange things to allow for data to be clocked in at a rate of up to 10Mbps without too much difficulty by interleaving memory cycles; the biggest trick would be preventing data corruption if a sync pulse doesn't come at the precise moment expected. |
4,905 | I am currently working on Super OSD - an on screen display project. <http://code.google.com/p/super-osd> has all the details.
At the moment I'm using a dsPIC MCU to do the job. This is a very powerful DSP (40 MIPS @ 80 MHz, three-register single-cycle operations and a MAC unit) and, importantly, it comes in a DIP package (because I'm using a breadboard to prototype it.) I'm really getting every last bit of performance out of it running the OSD - the chip has about 200ns or 10 cycles per pixel on the output stage so the code has to be very optimised in this part (for this reason it will always be written in assembly.)
Now I was considering using an FPGA for this because due to the parallel architecture of such a chip it is possible to have a simple logic program running the OSD. Things like drawing lines and algorithmic code would be handled by an MCU, but the actual output would be done with an FPGA. And some simple things like setting pixels or drawing horizontal and vertical lines I would like to integrate onto the FPGA, to improve speed.
I have some questions:
1. Will it cost significantly more? The cheapest FPGAs I found were ~£5 each and the dsPIC is £3 each. So it will cost more, but by how much?
2. The dsPIC fits in a SO28 package. I would not like to go bigger than SO28 or TQFP44. Most FPGAs I've seen come in BGA or TQFP>100 packages, which aren't an option at the moment, due to the sheer size, and the difficulty of soldering them myself.
3. How much current is used by an FPGA? The dsPIC solution currently consumes about 55mA +/- 10mA, which is okay at the moment. Would an FPGA consume more or less? Is it variable, or is it pretty much static, like the dsPIC?
4. I need at least 12KB of graphics memory to store the OSD graphics. Do FPGAs have this kind of memory available on the chip or is this only available with external chips? | 2010/10/06 | [
"https://electronics.stackexchange.com/questions/4905",
"https://electronics.stackexchange.com",
"https://electronics.stackexchange.com/users/1225/"
] | The cheapest solution with the lowest learning curve would be to move to a higher-powered processor, most likely an ARM.
Programming an FPGA/CPLD in VHDL/Verilog is a pretty steep learning curve coming from C for many people. They also aren't particularly cheap parts.
Using a decently capable ARM, maybe an LPC1769 (Cortex-M3), you would also likely be able to replace the PIC18 in your design.
As for the through-hole issue, as long as you can get the SoC in an exposed-pin QFP-type package, [just grab some of these adapters](http://www.futurlec.com/SMD_Adapters.shtml) with the needed pinout for your prototyping. | My inclination would be to use something to buffer the timing between the processor and the display. Having hardware that can show an entire frame of video without processor intervention may be nice, but perhaps overkill. I would suggest that the best compromise between hardware and software complexity would probably be to make something with two or three independent 1024-bit shift registers (two bits per pixel, to allow for black, white, gray, or transparent), and a means of switching between them. Have the PIC load up a shift register, and then have the hardware start shifting that one out while it sets a flag so the PIC can load the next one. With two shift registers, the PIC would have 64us between the time it is told a shift register is available and the time all the data has to be shifted. With three shift registers, the PIC would have to average one line every 64us, but it could tolerate a delay of up to 64us.
Note that a 1024-bit FIFO would be just as good as two 1024-bit shift registers, and in a CPLD a FIFO only costs one macrocell per bit plus some control logic; in most other types of logic, however, two bits of shift register will be cheaper than one bit of FIFO.
An alternative approach would be to connect a CPLD to an SRAM, and make a simple video subsystem with that. Aesthetically, I like the on-the-fly video generation, and if somebody made nice cheap 1024-bit shift-register chips it's the approach I'd favor, but using an external SRAM may be cheaper than using an FPGA with enough resources to make multiple 1024-bit shift registers. For your output resolution it will be necessary to clock out data at 12M pixels/sec, or 3MBytes/sec. It should be possible to arrange things to allow for data to be clocked in at a rate of up to 10Mbps without too much difficulty by interleaving memory cycles; the biggest trick would be preventing data corruption if a sync pulse doesn't come at the precise moment expected. |
14,178,709 | I'm working on an Android application that stores data in a SQLite database. My question is, where does this database file get stored on the filesystem when you're using an emulator?
And can we modify that database file using some tool? How? | 2013/01/06 | [
"https://Stackoverflow.com/questions/14178709",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1563048/"
] | The first one will block an ASP.Net request thread until you get a result from the database.
The second one will release the ASP.Net thread immediately, then grab another one when the result comes in.
Therefore, the second one is more scalable.
*Note that this answer assumes that the chain of asynchrony you're calling is correctly written and ends in actual async socket operations.* | >
> The first one will block an ASP.NET request thread until the task completes.
>
> The second will release the thread immediately, and then grab another one when the task is complete.
>
>
>
What is the advantage?
A thread is actually a pretty costly resource. It consumes OS resources, and a thread has a stack that contains all the variables of all the methods that were called before it got to your methods. Let's say your server is beefy enough that it can handle 100 threads. You can handle 100 requests. Let's say it takes 100ms to handle each request. That gives you 1000 requests per second.
Say it turns out that your GetProductAsync() call takes 90ms of those 100ms. It's not uncommon for a database or service to take up most of the time. Making these calls async means that you now only need each of your threads for 10ms. Suddenly, you can support 10000 requests per second on the same server.
So the "advantage of an async controller" could be 10x more requests per second.
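As a rough back-of-the-envelope sketch of the arithmetic above (the thread count and timings are the illustrative figures from this answer, not measurements):

```python
# Back-of-the-envelope throughput math from the example above.
threads = 100          # threads the server can run concurrently
request_ms = 100       # total time to handle one request
db_ms = 90             # portion spent waiting on the database

# Synchronous: each thread is occupied for the full 100ms per request.
sync_rps = threads * (1000 / request_ms)             # -> 1000 requests/sec

# Asynchronous: each thread is only occupied for the 10ms of CPU work.
async_rps = threads * (1000 / (request_ms - db_ms))  # -> 10000 requests/sec

print(sync_rps, async_rps)  # 1000.0 10000.0
```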
Of course, it all depends on how scalable your backend is too, but why introduce bottlenecks when .NET does all the hard work for you? There's a lot more to it than just async, and as always the devil is in the details. There are lots of resources to help, e.g. <http://msdn.microsoft.com/en-us/library/vstudio/hh191443.aspx> |
197,303 | [](https://i.stack.imgur.com/RogId.png)
Is there a **clean** way to convert a 2D vector (from the relative position between two objects) to a 1D index using Blender drivers? It should be extendable to add more items later. It is useful for controlling a grease pencil frame with the 'Time Offset' fixed modifier.
[](https://i.stack.imgur.com/4fSHi.png) | 2020/10/10 | [
"https://blender.stackexchange.com/questions/197303",
"https://blender.stackexchange.com",
"https://blender.stackexchange.com/users/51375/"
] | I don't know if there is a grease-pencil-specific solution, but in a shader node you could use a formula like
Index = Row \* GridWidth + Column
to get an index from a 2D point. | To translate a global position into a position in a grid:
Basically, if the grid can have numeric identities in reading order...
This relies on the grid being Cartesian, and made of one-meter cells. I placed the corner of the top-left cell at the origin. Conversion to polar coordinates is probably possible, but more complicated (need to know I'm on the right track first so there's no wasted work, you know ;-)
#round(pos.x-0.5)+round(-pos.y-0.5)\*GRID\_X\_SIZE |
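Putting the two answers together, a minimal Python sketch of the mapping might look like this; it assumes a Cartesian grid of one-meter cells with the top-left corner at the origin, as described above, and `GRID_WIDTH` (the column count, matching `GRID_X_SIZE` in the driver expression) is a hypothetical value:

```python
# Minimal sketch combining the two formulas above: world position -> (row, col)
# -> 1D index in reading order. Assumes one-meter cells, top-left corner at origin.
GRID_WIDTH = 4  # hypothetical number of columns (GRID_X_SIZE above)

def grid_index(x: float, y: float) -> int:
    col = round(x - 0.5)   # x grows to the right
    row = round(-y - 0.5)  # y grows upward, rows count downward
    return row * GRID_WIDTH + col  # Index = Row * GridWidth + Column

print(grid_index(2.5, -1.5))  # cell in row 1, column 2 -> index 6
```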
37,873,391 | I am working on a client application with Alfresco and need to capture changes to docs in the user's Alfresco account. From further reading I came to know that I need to set some properties in the ***alfresco-global.properties*** file to enable the change-log audit. So is there any way I can do this using an API, without requiring the user to do it? Please help | 2016/06/17 | [
"https://Stackoverflow.com/questions/37873391",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4327283/"
] | For Community there is no direct way to do this other than using add-ons or writing your own custom code.
There are some approaches you can use with the JavaScript API of Alfresco.
There is an Open Source module [here](https://github.com/loftuxab/alfresco-jmx) using JMX and a paid one [here](http://www.contezza.nl/store/p26/Contezza_Alfresco_Admin_Console.html) using a custom Share page. | I'm not sure something like that is possible, other then using JMX. I'd be happy is someone would prove me wrong, though.
<http://docs.alfresco.com/5.1/concepts/jmx-intro-config.html> |
3,253,289 | Please guide me, how do you enable autocomplete functionality in VS C++? By auto-complete, I mean that when I put a dot after a control name, the editor should display a dropdown menu to select from.
Thank you. | 2010/07/15 | [
"https://Stackoverflow.com/questions/3253289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/389134/"
] | Start writing, then just press CTRL+SPACE and there you go ... | Include the class that you are using within your text file, then IntelliSense will know where to look when you type within your text file. This works for me.
So it’s important to check the Unreal API to see where the included class is so that you have the path to type on the include line. Hope that makes sense. |
3,253,289 | Please guide me, how do you enable autocomplete functionality in VS C++? By auto-complete, I mean that when I put a dot after a control name, the editor should display a dropdown menu to select from.
Thank you. | 2010/07/15 | [
"https://Stackoverflow.com/questions/3253289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/389134/"
] | It's enabled by default. Probably you just tried on an expression that failed to autocomplete.
In case you deactivated it somehow... you can enable it in the Visual Studio settings. Just browse to the Editor settings, then to the subgroup C/C++ and activate it again... it should read something like "List members automatically" or "Auto list members" (sorry, I have the German Visual Studio).
Upon typing something like std::cout. a dropdown list with possible completions should pop up. | Include the class that you are using within your text file, then IntelliSense will know where to look when you type within your text file. This works for me.
So it’s important to check the Unreal API to see where the included class is so that you have the path to type on the include line. Hope that makes sense. |
3,253,289 | Please guide me, how do you enable autocomplete functionality in VS C++? By auto-complete, I mean that when I put a dot after a control name, the editor should display a dropdown menu to select from.
Thank you. | 2010/07/15 | [
"https://Stackoverflow.com/questions/3253289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/389134/"
] | When you press ctrl + space, look in the Status bar below. It will display a message saying IntelliSense is unavailable for C++ / CLI, if it doesn't support it. The message will look like this:
[](https://i.stack.imgur.com/cW8sS.png) | It's enabled by default. Probably you just tried on an expression that failed to autocomplete.
In case you deactivated it somehow... you can enable it in the Visual Studio settings.
[Step 1: Go to settings](https://i.stack.imgur.com/98hG5.png)
[Step 2: Search for complete and enable all the auto complete functions](https://i.stack.imgur.com/LPA4j.png)
I believe that should help. |
3,253,289 | Please guide me, how do you enable autocomplete functionality in VS C++? By auto-complete, I mean that when I put a dot after a control name, the editor should display a dropdown menu to select from.
Thank you. | 2010/07/15 | [
"https://Stackoverflow.com/questions/3253289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/389134/"
] | Start writing, then just press CTRL+SPACE and there you go ... | All the answers were missing Ctrl-J (which enables and disables autocomplete). |
3,253,289 | Please guide me, how do you enable autocomplete functionality in VS C++? By auto-complete, I mean that when I put a dot after a control name, the editor should display a dropdown menu to select from.
Thank you. | 2010/07/15 | [
"https://Stackoverflow.com/questions/3253289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/389134/"
] | All the answers were missing Ctrl-J (which enables and disables autocomplete). | * Goto => Tools >> Options >> Text Editor >> C/C++ >> Advanced >>
IntelliSense
* Change => Member List Commit Aggressive to True |
3,253,289 | Please guide me, how do you enable autocomplete functionality in VS C++? By auto-complete, I mean that when I put a dot after a control name, the editor should display a dropdown menu to select from.
Thank you. | 2010/07/15 | [
"https://Stackoverflow.com/questions/3253289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/389134/"
] | VS is kinda funny about C++ and IntelliSense. There are times it won't notice that it's supposed to be popping up something. This is due in no small part to the complexity of the language, and all the compiling (or at least parsing) that'd need to go on in order to make it better.
If it doesn't work for you at all, and it used to, and you've checked the VS options, [maybe this can help](http://www.windows-tech.info/4/d59787d312b9935a.php). | It's enabled by default. Probably you just tried on an expression that failed to autocomplete.
In case you deactivated it somehow... you can enable it in the Visual Studio settings.
[Step 1: Go to settings](https://i.stack.imgur.com/98hG5.png)
[Step 2: Search for complete and enable all the auto complete functions](https://i.stack.imgur.com/LPA4j.png)
I believe that should help. |
3,253,289 | Please guide me, how do you enable autocomplete functionality in VS C++? By auto-complete, I mean that when I put a dot after a control name, the editor should display a dropdown menu to select from.
Thank you. | 2010/07/15 | [
"https://Stackoverflow.com/questions/3253289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/389134/"
] | * Goto => Tools >> Options >> Text Editor >> C/C++ >> Advanced >>
IntelliSense
* Change => Member List Commit Aggressive to True | 'ctrl'+'space' will open C/C++ autocomplete. |
3,253,289 | Please guide me, how do you enable autocomplete functionality in VS C++? By auto-complete, I mean that when I put a dot after a control name, the editor should display a dropdown menu to select from.
Thank you. | 2010/07/15 | [
"https://Stackoverflow.com/questions/3253289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/389134/"
] | It's enabled by default. Probably you just tried on an expression that failed to autocomplete.
In case you deactivated it somehow... you can enable it in the Visual Studio settings. Just browse to the Editor settings, then to the subgroup C/C++ and activate it again... it should read something like "List members automatically" or "Auto list members" (sorry, I have the German Visual Studio).
Upon typing something like std::cout. a dropdown list with possible completions should pop up. | I came across the following post:
<http://blogs.msdn.com/b/raulperez/archive/2010/03/19/c-intellisense-options.aspx>
The issue is that the "IntelliSense" option in C++ is disabled.
This link explains the IntelliSense database configuration and options.
After enabling the database you must close and reopen Visual Studio.
For autocomplete, use 'Ctrl'+'Space'. |
3,253,289 | Please guide me, how do you enable autocomplete functionality in VS C++? By auto-complete, I mean that when I put a dot after a control name, the editor should display a dropdown menu to select from.
Thank you. | 2010/07/15 | [
"https://Stackoverflow.com/questions/3253289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/389134/"
] | When you press ctrl + space, look in the Status bar below. It will display a message saying IntelliSense is unavailable for C++ / CLI, if it doesn't support it. The message will look like this:
[](https://i.stack.imgur.com/cW8sS.png) | Have you tried Visual Assist X? It sort of lights up the VS editor. |
3,253,289 | Please guide me, how do you enable autocomplete functionality in VS C++? By auto-complete, I mean that when I put a dot after a control name, the editor should display a dropdown menu to select from.
Thank you. | 2010/07/15 | [
"https://Stackoverflow.com/questions/3253289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/389134/"
] | Start writing, then just press CTRL+SPACE and there you go ... | It's enabled by default. Probably you just tried on an expression that failed to autocomplete.
In case you deactivated it somehow... you can enable it in the Visual Studio settings. Just browse to the Editor settings, then to the subgroup C/C++ and activate it again... it should read something like "List members automatically" or "Auto list members" (sorry, I have the German Visual Studio).
Upon typing something like std::cout. a dropdown list with possible completions should pop up. |
953,146 | Is there a way to run .NET-based applications without the .NET Framework installed? Is there a way to do this? Is there software that can achieve this? Commercial software is also possible.
Added:
Has anyone any experience with [VMWare thin client](http://www.vmware.com/products/thinapp/overview.html)? | 2009/06/04 | [
"https://Stackoverflow.com/questions/953146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12452/"
] | In the interests of completeness, here is something to consider along with the responses re Mono.
Have you thought about maybe writing the app in native code instead? That way you can simply just deploy your exe. If you need a RAD environment for productivity, then tools like Delphi or C++ Builder will give you a very FCL-like feel (Delphi's VCL was architected by Anders Hejlsberg before he moved to MS, so it is probably no coincidence that C# feels very familiar to Delphites). | If you mean "Can I run a .NET application without having to install a framework at all?" then the answer is no, you cannot.
If you mean "Can I run a .NET application without having to install Microsoft's .NET framework and CLR?" then the answer is only if you can find an alternative, and Mono is the only one I know of. |
953,146 | Is there a way to run .NET-based applications without the .NET Framework installed? Is there a way to do this? Is there software that can achieve this? Commercial software is also possible.
Added:
Has anyone any experience with [VMWare thin client](http://www.vmware.com/products/thinapp/overview.html)? | 2009/06/04 | [
"https://Stackoverflow.com/questions/953146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12452/"
] | You can use Mono to static-link all the framework DLLs you need.
Of course, that limits you to the Mono implementation of the framework, which is getting better but is still incomplete in a few places.
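For what it's worth, Mono's `mkbundle` tool is the usual route for this kind of static linking: an invocation along the lines of `mkbundle --deps --static -o myapp myapp.exe` (the file names here are placeholders) merges the application, its assemblies, and the Mono runtime into a single native executable. The exact flags have changed between Mono releases, so check the `mkbundle` documentation for your version.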
---
**Update:**
Based on your various comments, my best suggestion is to use version 2.0 of the framework. That will install just fine on Windows 2000 with no trouble, and you can target it from Visual Studio 2008 if you need to.
---
I'm also a little curious as to your Windows 2000 requirement. Are you deploying to business or home environments?
Almost no home users have Windows 2000. Home users ended up with (shudder) Windows ME instead, which was released about the same time, and for that reason have almost completely moved on to Windows XP. You're more likely to see a Windows 98 machine in a home than Windows 2000, and not even Microsoft still supports Windows 98.
On the other hand, an awful lot of businesses still use Windows 2000 machines in large numbers. But business environments don't usually have a problem installing the .NET Framework. They can even add it to machines automatically via group policy deployment if they have to. | If you mean "Can I run a .NET application without having to install a framework at all?" then the answer is no, you cannot.
If you mean "Can I run a .NET application without having to install Microsoft's .NET framework and CLR?" then the answer is only if you can find an alternative, and Mono is the only one I know of. |
953,146 | Is there a way to run .NET-based applications without the .NET Framework installed? Is there a way to do this? Is there software that can achieve this? Commercial software is also possible.
Added:
Has anyone any experience with [VMWare thin client](http://www.vmware.com/products/thinapp/overview.html)? | 2009/06/04 | [
"https://Stackoverflow.com/questions/953146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12452/"
] | You can use Mono to static-link all the framework DLLs you need.
Of course, that limits you to the Mono implementation of the framework, which is getting better but is still incomplete in a few places.
---
**Update:**
Based on your various comments, my best suggestion is to use version 2.0 of the framework. That will install just fine on Windows 2000 with no trouble, and you can target it from Visual Studio 2008 if you need to.
---
I'm also a little curious as to your Windows 2000 requirement. Are you deploying to business or home environments?
Almost no home users have Windows 2000. Home users ended up with (shudder) Windows ME instead, which was released about the same time, and for that reason have almost completely moved on to Windows XP. You're more likely to see a Windows 98 machine in a home than Windows 2000, and not even Microsoft still supports Windows 98.
On the other hand, an awful lot of businesses still use Windows 2000 machines in large numbers. But business environments don't usually have a problem installing the .NET Framework. They can even add it to machines automatically via group policy deployment if they have to. | My team faced a similar problem. We needed to run our .NET 3.5 WPF app under Windows PE, which has no usable .NET framework. I evaluated all the options and found Xenocode PostBuild to be the best.
Its GUI is a bit counterintuitive and there were some bumps in the road getting it working, but it's been reliable since.
If you go that route, be advised you need to make sure your code is fully debugged before you generate the unmanaged executable, as you cannot debug the resulting app (unless you like assembler).
Also note that embedding the .NET framework makes for a big executable. ~20MB for 2.0, and ~40MB for 3.5. |
953,146 | Is there a way to run .NET-based applications without the .NET Framework installed? Is there a way to do this? Is there software that can achieve this? Commercial software is also possible.
Added:
Has anyone any experience with [VMWare thin client](http://www.vmware.com/products/thinapp/overview.html)? | 2009/06/04 | [
"https://Stackoverflow.com/questions/953146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12452/"
] | There are several different tools out there; a couple I have tried are:
* [XenoCode Postbuild](http://www.xenocode.com/) (now [Spoon Studio](http://spoon.net/studio/)) (now [TurboStudio](https://turbo.net/studio))
* [Salamander .NET Linker](http://www.remotesoft.com/linker/)
You can find more by doing a search for "[.NET Linker](http://www.google.com/#hl=en&q=.net+linker)."
The two above, which I tried, seemed to work ok, but I never widely tested my code built with them. I tried them mostly out of curiosity.
My .NET apps are mostly used by IT departments. Installing the .NET framework is no big deal for them.
If you want to write software more targeted at end users then the .NET install may turn them off. | You did not mention the type of software that you were looking to run so I figured I would add my two cents.
Microsoft has released Silverlight, a .NET based browser plugin, and they have been working with Novell to put out a version of Silverlight based upon the Mono compiler mentioned above called Moonlight. Microsoft natively supports Windows and Mac OS X 10.5.
If you want more information here are some links:
<http://en.wikipedia.org/wiki/Microsoft_Silverlight>
<http://www.microsoft.com/silverlight/> |
953,146 | Is there a way to run .NET-based applications without the .NET Framework installed? Is there a way to do this? Is there software that can achieve this? Commercial software is also possible.
Added:
Has anyone any experience with [VMWare thin client](http://www.vmware.com/products/thinapp/overview.html)? | 2009/06/04 | [
"https://Stackoverflow.com/questions/953146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12452/"
] | **Use Mono, it is developed by Novell and is open-source**
Edit: The question was about running without an installed runtime regardless of "supplier". Even so, here is a link to Mono's Wikipedia entry. Enjoy.
<http://en.wikipedia.org/wiki/Mono_(software)> | My team faced a similar problem. We needed to run our .NET 3.5 WPF app under Windows PE, which has no usable .NET framework. I evaluated all the options and found Xenocode PostBuild to be the best.
Its GUI is a bit counterintuitive and there were some bumps in the road getting it working, but it's been reliable since.
If you go that route, be advised you need to make sure your code is fully debugged before you generate the unmanaged executable, as you cannot debug the resulting app (unless you like assembler).
Also note that embedding the .NET framework makes for a big executable. ~20MB for 2.0, and ~40MB for 3.5. |
953,146 | Is there a way to run .NET-based applications without the .NET Framework installed? Is there a way to do this? Is there software that can achieve this? Commercial software is also possible.
Added:
Has anyone any experience with [VMWare thin client](http://www.vmware.com/products/thinapp/overview.html)? | 2009/06/04 | [
"https://Stackoverflow.com/questions/953146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12452/"
] | There are several different tools out there; a couple I have tried are:
* [XenoCode Postbuild](http://www.xenocode.com/) (now [Spoon Studio](http://spoon.net/studio/)) (now [TurboStudio](https://turbo.net/studio))
* [Salamander .NET Linker](http://www.remotesoft.com/linker/)
You can find more by doing a search for "[.NET Linker](http://www.google.com/#hl=en&q=.net+linker)."
The two above, which I tried, seemed to work ok, but I never widely tested my code built with them. I tried them mostly out of curiosity.
My .NET apps are mostly used by IT departments. Installing the .NET framework is no big deal for them.
If you want to write software more targeted at end users then the .NET install may turn them off. | You can use Mono to static-link all the framework DLLs you need.
Of course, that limits you to the Mono implementation of the framework, which is getting better but is still incomplete in a few places.
---
**Update:**
Based on your various comments, my best suggestion is to use version 2.0 of the framework. That will install just fine on Windows 2000 with no trouble, and you can target it from Visual Studio 2008 if you need to.
---
I'm also a little curious as to your Windows 2000 requirement. Are you deploying to business or home environments?
Almost no home users have Windows 2000. Home users ended up with (shudder) Windows ME instead, which was released about the same time, and for that reason have almost completely moved on to Windows XP. You're more likely to see a Windows 98 machine in a home than Windows 2000, and not even Microsoft still supports Windows 98.
On the other hand, an awful lot of businesses still use Windows 2000 machines in large numbers. But business environments don't usually have a problem installing the .NET Framework. They can even add it to machines automatically via group policy deployment if they have to. |
953,146 | Is there a way to run .NET-based applications without the .NET Framework installed? Is there a way to do this? Is there software that can achieve this? Commercial software is also possible.
Added:
Has anyone any experience with [VMWare thin client](http://www.vmware.com/products/thinapp/overview.html)? | 2009/06/04 | [
"https://Stackoverflow.com/questions/953146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12452/"
] | There are several different tools out there; a couple I have tried are:
* [XenoCode Postbuild](http://www.xenocode.com/) (now [Spoon Studio](http://spoon.net/studio/)) (now [TurboStudio](https://turbo.net/studio))
* [Salamander .NET Linker](http://www.remotesoft.com/linker/)
You can find more by doing a search for "[.NET Linker](http://www.google.com/#hl=en&q=.net+linker)."
The two above, which I tried, seemed to work ok, but I never widely tested my code built with them. I tried them mostly out of curiosity.
My .NET apps are mostly used by IT departments. Installing the .NET framework is no big deal for them.
If you want to write software more targeted at end users then the .NET install may turn them off. | **Use Mono, it is developed by Novell and is open-source**
Edit: The question was about running without an installed runtime regardless of "supplier". Even so, here is a link to Mono's Wikipedia entry. Enjoy.
<http://en.wikipedia.org/wiki/Mono_(software)> |
953,146 | Is there a way to run .NET-based applications without the .NET Framework installed? Is there a way to do this? Is there software that can achieve this? Commercial software is also possible.
Added:
Has anyone any experience with [VMWare thin client](http://www.vmware.com/products/thinapp/overview.html)? | 2009/06/04 | [
"https://Stackoverflow.com/questions/953146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12452/"
] | My team faced a similar problem. We needed to run our .NET 3.5 WPF app under Windows PE, which has no usable .NET framework. I evaluated all the options and found Xenocode PostBuild to be the best.
Its GUI is a bit counterintuitive and there were some bumps in the road getting it working, but it's been reliable since.
If you go that route, be advised you need to make sure your code is fully debugged before you generate the unmanaged executable, as you cannot debug the resulting app (unless you like assembler).
Also note that embedding the .NET framework makes for a big executable. ~20MB for 2.0, and ~40MB for 3.5. | [This](http://social.msdn.microsoft.com/Forums/en-US/csharpgeneral/thread/3f69fb09-8837-4024-9d07-c1844f4afd6a/) is one of the better explanations (among the many) I have found:
>
> As a practical matter, it's not possible. Theoretically, a compiler could examine
> all the classes your application is using and include that code in your
> application, and compile the whole thing to native code. But, that still doesn't
> account for the CLR itself which contains core functionality like the garbage
> collector, assembly loader, metadata reader, etc. All these things are in native
> code, so they would have to be duplicated.
>
>
> In addition, not all methods in .NET classes are in managed code. If you look at
> the disassembled code in Reflector, you'll see that some of the methods are marked
> with the MethodImplAttributes.InternalCall flag. This means that the actual
> implementation of the method is internal to the CLR. Any system that compiled C#
> (or any other .NET language) to native code would have to duplicate all of this,
> and that would be a herculean effort. And the resulting app would likely be quite
> large.
>
>
> |
953,146 | Is there a way to run .NET-based applications without the .NET Framework installed? Is there a way to do this? Is there software that can achieve this? Commercial software is also possible.
Added:
Has anyone any experience with [VMWare thin client](http://www.vmware.com/products/thinapp/overview.html)? | 2009/06/04 | [
"https://Stackoverflow.com/questions/953146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12452/"
] | My team faced a similar problem. We needed to run our .NET 3.5 WPF app under Windows PE, which has no usable .NET framework. I evaluated all the options and found Xenocode PostBuild to be the best.
Its GUI is a bit counterintuitive and there were some bumps in the road getting it working, but it's been reliable since.
If you go that route, be advised you need to make sure your code is fully debugged before you generate the unmanaged executable, as you cannot debug the resulting app (unless you like assembler).
Also note that embedding the .NET framework makes for a big executable. ~20MB for 2.0, and ~40MB for 3.5. | The only alternative to the .NET Framework I know is Mono (for Linux). |
953,146 | Is there a way to run .NET-based applications without the .NET Framework installed? Is there a way to do this? Is there software that can achieve this? Commercial software is also possible.
Added:
Has anyone any experience with [VMWare thin client](http://www.vmware.com/products/thinapp/overview.html)? | 2009/06/04 | [
"https://Stackoverflow.com/questions/953146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12452/"
] | You can use Mono to static-link all the framework DLLs you need.
Of course, that limits you to the Mono implementation of the framework, which is getting better but is still incomplete in a few places.
---
**Update:**
Based on your various comments, my best suggestion is to use version 2.0 of the framework. That will install just fine on Windows 2000 with no trouble, and you can target it from Visual Studio 2008 if you need to.
---
I'm also a little curious as to your Windows 2000 requirement. Are you deploying to business or home environments?
Almost no home users have Windows 2000. Home users ended up with (shudder) Windows ME instead, which was released about the same time, and for that reason have almost completely moved on to Windows XP. You're more likely to see a Windows 98 machine in a home than Windows 2000, and not even Microsoft still supports Windows 98.
On the other hand, an awful lot of businesses still use Windows 2000 machines in large numbers. But business environments don't usually have a problem installing the .NET Framework. They can even add it to machines automatically via group policy deployment if they have to. | This really sounds like more trouble than it's worth when you are working with an OS that supports .NET.
.NET 2.0, I think, even comes down as a Windows Update these days; it's only 26MB, and you only install it once. If you want something that's Win32-native, go back to unmanaged C++.
Also check out: [SmallestDotNet](http://www.hanselman.com/smallestdotnet/) (although not windows 2000, it mentions that "Soon, Microsoft will release a super-small download for XP SP2 machines that have no version of the .NET Framework".) |
533,975 | I know the typical usage is *on* but I'm wondering what's more comprehensive:
>
> The number of species in the planet
>
>
>
or
>
> The number of species on the planet
>
>
> | 2020/05/11 | [
"https://english.stackexchange.com/questions/533975",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/384798/"
] | The number of species on the planet would be correct. On and in play an important role here, as the species don't live in the planet, but on it. Hope this helps. | A quick search of the iWeb corpus says that *on* is more frequent than *in* by a ratio of 100:1. If you're going for something more all-encompassing, ***sharing*** the planet or ***inhabiting*** the planet are good choices. For something with a bit more flair, ***occupying*** the planet or ***enjoying*** the planet might work. |
283,728 | **Goal**: I'm trying to implement a new lightning record page for accounts, and I want to only assign it to a small group of employees. I can't assign it to an entire profile.
I'm aware that it's possible to assign a record page to a profile in a specific app, when the record is of a specific record type. However, I wish to assign the record page to specific users, or perhaps only to users where their "department" field has a certain value. I *could* create a new profile and only assign it to the chosen users, however the org has a lot of profiles already, so I need to avoid additional profiles.
Is anyone aware of a way to achieve my goal? | 2019/11/04 | [
"https://salesforce.stackexchange.com/questions/283728",
"https://salesforce.stackexchange.com",
"https://salesforce.stackexchange.com/users/29134/"
] | One solution is to edit the existing lightning record page. Add the components that the targeted audience should see, and then provide an advanced filter for each of those components, such that they will only be visible to them. For the existing components that you wish to hide, add the "reverse" filter.
On each component:
1. Under component visibility, select "Add filter"
2. Choose advanced
3. Under "Field" click "Select"
4. Select User, and then the desired field to use.
This technique makes it possible to only show components to a small group of people without having to create new profiles. | If this were a custom Visualforce page, you'd create a group and add each of the members you want to assign it to. You'd also create a permission set for the controller and page and associate it with the Group. However, we're talking about a Lightning page, which doesn't have an Apex Controller. Consequently, I'm not entirely certain if the same strategy will work.
If you created an App that used the Lightning Page, then you could assign the Page as the default page for users with a profile who use that App, then create a permission set to use the app which you assign to the Group. That might be another approach.
What it sounds like you essentially want to do is override the default page for these particular users so they'll use the new page instead of the default page. I've not seen this particular use case addressed in Lightning Documentation that I'm aware of without some kind of record type being involved.
@Andreas86 seems to have provided the best solution. If possible, I would still use Groups and membership in the Group as a visibility requirement for which components will or will not be seen. If you want Components to appear in different locations, you can duplicate them on a page and apply different visibility filters to each one such that a user will never see both. |
6,063,144 | My friend has a blurred image of a thief's license plate. Is it possible to run an algorithm on these pixels to determine the most likely characters that the pixels represent?
(The fact that it's a license plate is irrelevant; the solution should work, in principle, on any photographed text that is difficult to decipher.)
Please help me find my dog!
Update: My friend sent me two still images; they are very poor. He doesn't have any shots of the license plates, unfortunately.
Image 1:
[An image of the van. I'm trying to decipher the text above the word 'rentals'.](http://www.sugarpitch.com/van_front.png)
Image 2:
[This is the same van, with presumably the same text. It's in the upper right-hand corner of the image](http://www.sugarpitch.com/van_side.png) | 2011/05/19 | [
"https://Stackoverflow.com/questions/6063144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/761578/"
] | It depends on why/how it's blurred. There are a number of things you could try though: one would be a simple sharpening with an unsharp mask. Another I've found surprisingly effective at times is to simply invert the colors in a photo -- sometimes things that are really hard to read normally just pop right out when inverted.
For a one-time task like this, however, you probably want to use existing tools (e.g., Photoshop or The Gimp), rather than writing new code. It'll take a long time and a lot of effort to match what they already have just waiting to be used. | I'd recommend a sharpening, followed by a Sobel filter to find edges, then perform your OCR on it.
Refs:
<http://en.wikipedia.org/wiki/Sobel_operator>
<http://www.bythom.com/sharpening.htm> |
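As an illustrative sketch of these suggestions combined (unsharp-mask sharpening, the inversion trick from the other answer, and a Sobel edge pass before OCR), assuming the `cv2` (OpenCV) and NumPy packages; the file name and parameters such as the blur sigma are placeholders, not tuned values:

```python
# Sketch of the pipeline suggested above: unsharp mask -> (optional) inversion
# -> Sobel edges, as a preprocessing step before OCR. Parameters are guesses.
import cv2
import numpy as np

img = cv2.imread("van_front.png", cv2.IMREAD_GRAYSCALE)

# Unsharp mask: sharpened = 1.5 * original - 0.5 * blurred
blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=3)
sharpened = cv2.addWeighted(img, 1.5, blurred, -0.5, 0)

# Inversion trick: hard-to-read text sometimes pops out in negative
inverted = 255 - sharpened

# Sobel edges in x and y, combined by gradient magnitude
gx = cv2.Sobel(sharpened, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(sharpened, cv2.CV_64F, 0, 1, ksize=3)
edges = cv2.convertScaleAbs(np.hypot(gx, gy))

cv2.imwrite("edges.png", edges)
```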
6,063,144 | My friend has a blurred image of a thief's license plate. Is it possible to run an algorithm on these pixels to determine the most likely characters that the pixels represent?
(The fact that it's a license plate is irrelevant; the solution should work, in principle, on any photographed text that is difficult to decipher.)
Please help me find my dog!
Update: My friend sent me two still images; they are very poor. He doesn't have any shots of the license plates, unfortunately.
Image 1:
[An image of the van. I'm trying to decipher the text above the word 'rentals'.](http://www.sugarpitch.com/van_front.png)
Image 2:
[This is the same van, with presumably the same text. It's in the upper right-hand corner of the image](http://www.sugarpitch.com/van_side.png) | 2011/05/19 | [
"https://Stackoverflow.com/questions/6063144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/761578/"
] | Play with Photoshop. Try different sharpening filters, in different strengths and different orders. Also play with posterization. Revert to the original image frequently. Look for what works. Use your eyes. If you can't see the answer (after applying filters), OCR probably won't either. | I'd recommend a sharpening, followed by a Sobel filter to find edges, then perform your OCR on it.
Refs:
<http://en.wikipedia.org/wiki/Sobel_operator>
<http://www.bythom.com/sharpening.htm> |
6,063,144 | My friend has a blurred image of a thief's license plate. Is it possible to run an algorithm on these pixels to determine the most likely characters that the pixels represent?
(The fact that it's a license plate is irrelevant; the solution should work, in principle, on any photographed text that is difficult to decipher.)
Please help me find my dog!
Update: My friend sent me two still images; they are very poor. He doesn't have any shots of the license plates, unfortunately.
Image 1:
[An image of the van. I'm trying to decipher the text above the word 'rentals'.](http://www.sugarpitch.com/van_front.png)
Image 2:
[This is the same van, with presumably the same text. It's in the upper right-hand corner of the image](http://www.sugarpitch.com/van_side.png) | 2011/05/19 | [
"https://Stackoverflow.com/questions/6063144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/761578/"
] | Motion blur can be removed, because all of the information is still in the photograph. But in this case, I'm not sure any form of image processing is going to help.
I apologize if you already tried this, but have you looked through the "rentals" section of the phone book to see if you can find a company with a similar logo? I assume this is a van from a rental equipment or "rent-to-own" business, not a car rental agency. | I'd recommend a sharpening, followed by a Sobel filter to find edges, then perform your OCR on it.
Refs:
<http://en.wikipedia.org/wiki/Sobel_operator>
<http://www.bythom.com/sharpening.htm> |
6,063,144 | My friend has a blurred image of a thief's license plate. Is it possible to run an algorithm on these pixels to determine the most likely characters that the pixels represent?
(The fact that it's a license plate is irrelevant; the solution should work, in principle, on any photographed text that is difficult to decipher.)
Please help me find my dog!
Update: My friend sent me two still images; they are very poor. He doesn't have any shots of the license plates, unfortunately.
Image 1:
[An image of the van. I'm trying to decipher the text above the word 'rentals'.](http://www.sugarpitch.com/van_front.png)
Image 2:
[This is the same van, with presumably the same text. It's in the upper right-hand corner of the image](http://www.sugarpitch.com/van_side.png) | 2011/05/19 | [
"https://Stackoverflow.com/questions/6063144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/761578/"
] | It depends on why/how it's blurred. There are a number of things you could try though: one would be a simple sharpening with an unsharp mask. Another I've found surprisingly effective at times is to simply invert the colors in a photo -- sometimes things that are really hard to read normally just pop right out when inverted.
For a one-time task like this, however, you probably want to use existing tools (e.g., Photoshop or The Gimp), rather than writing new code. It'll take a long time and a lot of effort to match what they already have just waiting to be used. | Theoretically it is possible under ideal conditions. But it requires that you know the transform from the original to the blurred image.
Image compression, non-linearities in the camera, limited resolution, and noise may get in the way. If you're lucky, a standard Photoshop sharpening filter will do. |
6,063,144 | My friend has a blurred image of a thief's license plate. Is it possible to run an algorithm on these pixels to determine the most likely characters that the pixels represent?
(The fact that it's a license plate is irrelevant; the solution should work, in principle, on any photographed text that is difficult to decipher.)
Please help me find my dog!
Update: My friend sent me two still images; they are very poor. He doesn't have any shots of the license plates, unfortunately.
Image 1:
[An image of the van. I'm trying to decipher the text above the word 'rentals'.](http://www.sugarpitch.com/van_front.png)
Image 2:
[This is the same van, with presumably the same text. It's in the upper right-hand corner of the image](http://www.sugarpitch.com/van_side.png) | 2011/05/19 | [
"https://Stackoverflow.com/questions/6063144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/761578/"
] | Play with Photoshop. Try different sharpening filters, in different strengths and different orders. Also play with posterization. Revert to the original image frequently. Look for what works. Use your eyes. If you can't see the answer (after applying filters), OCR probably won't either. | Theoretically it is possible under ideal conditions. But it requires that you know the transform from the original to the blurred image.
Image compression, non-linearities in the camera, limited resolution, and noise may get in the way. If you're lucky, a standard Photoshop sharpening filter will do. |
6,063,144 | My friend has a blurred image of a thief's license plate. Is it possible to run an algorithm on these pixels to determine the most likely characters that the pixels represent?
(The fact that it's a license plate is irrelevant; the solution should work, in principle, on any photographed text that is difficult to decipher.)
Please help me find my dog!
Update: My friend sent me two still images; they are very poor. He doesn't have any shots of the license plates, unfortunately.
Image 1:
[An image of the van. I'm trying to decipher the text above the word 'rentals'.](http://www.sugarpitch.com/van_front.png)
Image 2:
[This is the same van, with presumably the same text. It's in the upper right-hand corner of the image](http://www.sugarpitch.com/van_side.png) | 2011/05/19 | [
"https://Stackoverflow.com/questions/6063144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/761578/"
] | Motion blur can be removed, because all of the information is still in the photograph. But in this case, I'm not sure any form of image processing is going to help.
I apologize if you already tried this, but have you looked through the "rentals" section of the phone book to see if you can find a company with a similar logo? I assume this is a van from a rental equipment or "rent-to-own" business, not a car rental agency. | Theoretically it is possible under ideal conditions. But it requires that you know the transform from the original to the blurred image.
Image compression, non-linearities in the camera, limited resolution, and noise may get in the way. If you're lucky, a standard Photoshop sharpening filter will do. |
6,553 | There are many photos of the ocean, which in some parts is littered with all sorts of waste, often plastic bags, containers etc.
**I just wonder: how do such large amounts of plastic land in the ocean/sea?** Isn't waste disposed of at secure sites, so such a bag would either be recycled or, in the worst case, end up in some landfill?
Of course rogue companies or entities could occasionally dump some trash into the ocean, but the massive amounts of it seem surprising to me. Besides, often the argument is to not use plastic bags, containers etc. Simply USING them shouldn't put them in the sea.
**So what does, and can it be stopped?** | 2018/04/15 | [
"https://sustainability.stackexchange.com/questions/6553",
"https://sustainability.stackexchange.com",
"https://sustainability.stackexchange.com/users/5629/"
] | Carelessness, wind and rain.
Humans are careless, so leave all manner of plastic outside in their yards. A gust of wind catches the plastic and blows it onto a nearby road. It rains, and the plastic is washed into the gutter and stormwater system, where it travels through drains, culverts, creeks, rivers and ultimately out to sea.
That's why pollution at outflows is always worst after a storm... where wind and rain go hand in hand.
Commercial and industrial premises are just as bad as residential ones — if not worse — but the cause is still the same: Carelessness, wind and rain.
Of course we can (if we want) specifically blame winds for toppling garbage bins set out for collection, and animals for digging through trash, and the homeless for rummaging through dumpsters and leaving the lids open, and kids for unwrapping their toys outside, and transport companies for poorly wrapped pallets, and construction companies for a plethora of loose materials on building sites, and children for posting 'lost dog' signs on electricity poles, and nationalists for flying flags, and Christians for decorating outdoor Christmas trees, and any of a million other groups of people for putting/leaving lightweight plastics outside where the elements can relocate them... but the root cause is still the same for all of them — so it makes more sense to focus on the root cause.
Unfortunately, having garbage that 'disappears on its own accord' is probably seen as a *good thing* by the 'average' person, not a bad thing, so I doubt you can change people's behaviour enough to make a measurable difference. As long as 'trash' is something people 'put outside' and ultimately 'someone' or 'something' makes it 'disappear', this problem will persist.
"Out of sight, out of mind" is a powerful enemy. | There are many ways how plastic waste ends up in the sea. A non-complete list would include the following: Micro-plastics from our washing processes, they are in almost everything like shampoo, peelings, cosmetics, etc., but also our clothes contain a lot of plastics which leaves the washing machines continuously.
Next are ships losing loads, leaving old fishing nets in the oceans, and of course (even though it is forbidden internationally) there are still a lot of ships simply dumping waste into the sea.
A big problem is underdeveloped countries, especially the poor suburbs of big cities, where there is neither sensibility nor education about plastic waste, nor are there treatment, collection, or deposit facilities to recycle plastic waste.
Also, a lot of people (not only in underdeveloped countries) leave plastic waste at beaches, where the next high tide takes it out to sea.
However, even in developed countries we have an enormous exportation of plastic waste (previously collected and assumed to be recycled or deposited locally) to other countries, declared to be "recycled more cheaply" there. In many cases nobody really knows what happens with this waste once it has reached its destination country, if it reaches those countries at all.
Many people are thinking about fishing plastics out of the sea, but avoiding the introduction of plastic waste is a major topic as well. As it is difficult to educate so many people, the general avoidance of plastic may be an important step. |
16,817 | I'm working on a 2D platformer in XNA. One of the things I'd like to be a main design characteristic is procedural content generation. The first step of that is to procedurally generate the terrain. So, I've done loads of research on how to generate Perlin noise, smooth it out, play with parameters and all that jazz. I've also spent loads of time at pcg.wikidot.com. The problem I have is generating a level that is 100% connected. Meaning, it's possible for the character to get from the left part of the level all the way to the right part.
Right now, I use the perlin noise to generate a sprite when the noise value at that point is < 0.
Here is a video of the issue I'm having (please excuse the crazy background issues and the fact my character gets stuck when I generate a new level).
<http://screencast.com/t/uWJsIGLoih>
As you can see in the video, the terrain looks interesting enough, but you can't do a whole lot because the Perlin noise compartmentalizes the level into small, random chambers.
I attempted to perform a random walk and overlay the Perlin noise on top of it, so I could guarantee a path from left to right. The issue with that is illustrated below:
<http://screencast.com/t/ilLvxdp3>
So my question is: What kinds of things can I do to ensure the player can get from the left part of the level to the right part of the level? | 2011/09/03 | [
"https://gamedev.stackexchange.com/questions/16817",
"https://gamedev.stackexchange.com",
"https://gamedev.stackexchange.com/users/9705/"
] | That video's just confusing, and I can't work out what you're trying to show with it to be honest. But I would suggest Perlin noise is probably the wrong algorithm for the job. It's designed for smoothly undulating continuous entities, whereas you're trying to make scattered and discrete entities. What would probably work better is using random values to re-arrange and adjust hand-designed arrangements of platforms to ensure that the resulting area is playable.
There's some information on how Spelunky did this here: <http://tinysubversions.com/2009/09/spelunkys-procedural-space/> | Try modifying the perlin value based on height, i.e. at the bottom add 1 to the noise value, at the top subtract 1 from the noise value, and in between linearly interpolate between 1 and -1. This way, at the bottom you will almost be guaranteed to get a value >= 0 and at the top you will be almost guaranteed to get a value <= 0.
This is assuming values < 0 are supposed to be air. If I understand your description correctly, you'll have to switch it around.
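For concreteness, here is a minimal sketch of that bias (the method name is invented; it assumes y grows downward, screen-style, and uses the asker's convention that noise < 0 means solid):

```csharp
// Minimal sketch of the height bias (invented names; assumes y grows
// downward, screen-style). With the convention that noise < 0 spawns a
// solid tile, we subtract the bias: near the bottom (t = 1) almost any
// noise value ends up solid, near the top (t = -1) almost none does, and
// mid-level cells keep the usual Perlin variation.
bool IsSolid(float noise, int y, int mapHeight)
{
    float t = 2f * y / (mapHeight - 1) - 1f;   // -1 at top row, +1 at bottom row
    return noise - t < 0f;
}
```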
Cheers. |
16,817 | I'm working on a 2D platformer in XNA. One of the things I'd like to be a main design characteristic is procedural content generation. The first step of that is to procedurally generate the terrain. So, I've done loads of research on how to generate Perlin noise, smooth it out, play with parameters and all that jazz. I've also spent loads of time at pcg.wikidot.com. The problem I have is generating a level that is 100% connected, meaning it's possible for the character to get from the left part of the level all the way to the right part.
Right now, I use Perlin noise to generate a sprite when the noise value at that point is < 0.
Here is a video of the issue I'm having (please excuse the crazy background issues and the fact my character gets stuck when I generate a new level).
<http://screencast.com/t/uWJsIGLoih>
As you can see in the video, the terrain looks interesting enough, but you can't do a whole lot because the Perlin noise compartmentalizes the level into small, random chambers.
I attempted to perform a random walk and overlay the Perlin noise on top of it, so I could guarantee a path from left to right. The issue with that is illustrated below:
<http://screencast.com/t/ilLvxdp3>
So my question is: What kinds of things can I do to ensure the player can get from the left part of the level to the right part of the level? | 2011/09/03 | [
"https://gamedev.stackexchange.com/questions/16817",
"https://gamedev.stackexchange.com",
"https://gamedev.stackexchange.com/users/9705/"
] | In using the word *connectedness*, you've come within a hair's breadth of the tool best suited to determining a solution: graph theory.
Connectedness is a property of graphs. Graphs can be either connected or disconnected (as you're experiencing when a level splits into separate components). Any game level, in any number of dimensions, can be represented as a graph, and logically, this is often the best way to manipulate them. Your game world is a graph in terms of the adjacency between the individual building blocks in your world; and also at the level of connectivity between your various areas. You can use the former to derive the latter.
There is a crucial point to consider when working with (2D) levels as graphs, and that is *planarity*. Depending on your requirements, planarity may or may not be a concern. Given your use of noise, I expect the latter; however, I outline the options here so that you know what they are.
**Planar graph** - simplest example is a labyrinth. A labyrinth differs from a maze in that it contains no branchings -- it is *unicursal*. If you were to take a solid block of shrubbery(!), and generate a labyrinth through it, then at no point could a turning that the labyrinth takes run into an existing path. It's a bit like a game of Snake, really -- if the path is the snake's body, it cannot be allowed to bite/intersect itself. *Further*, you could have a planar maze; this would branch, but at no point could the branches be allowed to intersect existing parts of the maze already generated, just as with a labyrinth.
**Non-planar graph** - Simplest example is a city street map. A city is essentially a maze. However, it is a highly-connected maze in that there are many individual road routes to get from one place to another. Moreover, a non-planar graph embedding allows crossings, which is exactly what intersections are. And as we know, a city is not a city without intersections. They are integral to traffic flow. In games, this can be good or bad, depending on your goals. Good level flow allows AI to act more easily, and exploration to be freer; while on the other hand it also allows a player to get from startpoint to goal quickly -- potentially too quickly.
This brings us to your approach, which is to use noise. Depending on how the Perlin noise output is interpreted, it can have some level of connectedness at the macro scale, but it is not designed for 1-connectedness (a single connected graph). This leaves you a few options.
1. Drop the use of Perlin noise and instead generate a random, planar (non-crossing), connected graph. This provides maximum flow control. However this approach is non-trivial, because graph planarity requires the identification and removal of the Kuratowski subgraphs K3,3 and K5, as well as producing a subsequent planar embedding; neither of which is easy to implement. This is without a doubt the hardest approach, but it had to be mentioned first to know where you stand. All other methods are a shortcut of some sort, around this method, which is the fundamental math behind maze generation.
2. Drop the use of Perlin noise and instead generate a random, non-planar graph [embedded](http://en.wikipedia.org/wiki/Graph_embedding) within a planar surface (AKA a planar embedding) -- this is how games like Diablo and the roguelikes can be made to work easily, as they both use a grid structure to subdivide a planar space (in fact, the vast majority of levels in the roguelikes DO allow crossings, evident in the number of four way intersections). Algorithms producing the connectivity between cells or template rooms are often called "carvers" or "tunnellers", because they carve empty space out of a block of solid rock, incrementally.
3. Do as option (2), but avoid crossings. Thus both the embedding (level geometry) and the topology (level flow) are planar. You will have to be careful not to generate yourself into dead-ends, if you wish to avoid crossings.
4. Generate your map using noise. Then, using a flood fill algorithm on every cell in your unconnected level (which is a graph, albeit multipart and grid-based), you can deduce all the unconnected, discrete subgraphs within that larger disconnected graph. Next, consider how you want to connect each individual subgraph. If you prefer to avoid crossings, I suggest a sequential connection of these. If not, you can connect them any way you wish. (A minimal flood-fill sketch follows after this list.)
In order to do this organically, rather than producing hard, straight passages, I would use some sort of coherence function to meld the closest points of each pair of subgraphs (if linking sequentially). This will make the join more "liquid", which is in keeping with the typical Perlin output. The other way you could join areas would be to nudge them closer together, so there is some minimal overlap of the empty spaces.
5. Generate an excessively large map using noise. Isolate all subgraphs as described in option 4. Determine which is the most interesting, according to certain criteria (could be size, or something else, but size would be easiest). Pick out and use only that subgraph, which is already completely self-connected. The difficulty with this approach is that you may find it hard to control the size of your resultant graphs, unless you brute-force generate a really large map, or many smaller ones, to pick your perfect subgraph. This is because the size of the subgraphs really depends on the Perlin parameters used, and how you interpret the result.
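Here is a minimal sketch of the flood-fill step from option 4 (the grid representation and names are assumptions, not the asker's actual code): it labels every connected pocket of air, after which pockets with different labels are exactly the separate chambers that still need joining.

```csharp
using System.Collections.Generic;

// Label each connected pocket of air with a distinct integer (0 = solid
// or unvisited). Afterwards, two air cells belong to the same chamber iff
// they carry the same label; 'next' ends up as the number of pockets.
int[,] LabelAirPockets(bool[,] solid)
{
    int w = solid.GetLength(0), h = solid.GetLength(1);
    var label = new int[w, h];
    int next = 0;
    for (int sx = 0; sx < w; sx++)
        for (int sy = 0; sy < h; sy++)
        {
            if (solid[sx, sy] || label[sx, sy] != 0) continue;
            next++;                                  // found a new pocket
            var stack = new Stack<(int X, int Y)>();
            stack.Push((sx, sy));
            label[sx, sy] = next;
            while (stack.Count > 0)
            {
                var (x, y) = stack.Pop();
                foreach (var (nx, ny) in new[] { (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1) })
                {
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (solid[nx, ny] || label[nx, ny] != 0) continue;
                    label[nx, ny] = next;            // claim before pushing
                    stack.Push((nx, ny));
                }
            }
        }
    return label;
}
```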
As an aside to the last two, something I'm sure you have already done, but just in case not: Create a minimal Perlin noise test case in Flash. Play around with parameters until you get a higher degree of connectivity between your "island" areas. I don't think this could ever solve your problem 100% across all generations, since Perlin noise has no inherent guarantee of connectedness. But it could improve connectedness.
Whatever you don't understand, ask and I will clarify. | Try modifying the perlin value based on height, i.e. at the bottom add 1 to the noise value, at the top subtract 1 from the noise value, and in between linearly interpolate between 1 and -1. This way, at the bottom you will almost be guaranteed to get a value >= 0 and at the top you will be almost guaranteed to get a value <= 0.
This is assuming values < 0 are supposed to be air. If I understand your description correctly, you'll have to switch it around.
Cheers. |
16,817 | I'm working on a 2D platformer in XNA. One of the things I'd like to be a main design characteristic is procedural content generation. The first step of that is to procedurally generate the terrain. So, I've done loads of research on how to generate Perlin noise, smooth it out, play with parameters and all that jazz. I've also spent loads of time at pcg.wikidot.com. The problem I have is generating a level that is 100% connected, meaning it's possible for the character to get from the left part of the level all the way to the right part.
Right now, I use Perlin noise to generate a sprite when the noise value at that point is < 0.
Here is a video of the issue I'm having (please excuse the crazy background issues and the fact my character gets stuck when I generate a new level).
<http://screencast.com/t/uWJsIGLoih>
As you can see in the video, the terrain looks interesting enough, but you can't do a whole lot because the Perlin noise compartmentalizes the level into small, random chambers.
I attempted to perform a random walk and overlay the Perlin noise on top of it, so I could guarantee a path from left to right. The issue with that is illustrated below:
<http://screencast.com/t/ilLvxdp3>
So my question is: What kinds of things can I do to ensure the player can get from the left part of the level to the right part of the level? | 2011/09/03 | [
"https://gamedev.stackexchange.com/questions/16817",
"https://gamedev.stackexchange.com",
"https://gamedev.stackexchange.com/users/9705/"
] | General approaches to this problem:
1. Construct the map in a way that guarantees connectedness from the start. Many of the [dungeon generators on PCG wiki](http://pcg.wikidot.com/pcg-algorithm%3adungeon-generation) work this way.
2. Generate a potentially disconnected map, and then write something (maybe a pathfinder) that checks for connectedness. Throw away the maps that don't work.
3. Generate a potentially disconnected map, and then fix it up whenever it's not connected. Whenever your checking algorithms find an area that's impassable, blast tunnels, build bridges, add teleporters, etc. (a toy tunnel-carving sketch follows after this list).
4. Generate a potentially disconnected map, and then give the player tools to connect things when needed. Turn the disconnected map bug into a feature. Terraria, Minecraft, etc. do this.
5. Generate a potentially disconnected map, and then make the player restart or move to a different level if the level can't be completed. I believe some roguelikes do this.
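As referenced in approach 3 above, here is a toy tunnel-carving fix-up (the grid representation and names are assumptions, not from the original answer): given one cell inside each of two disconnected air pockets, blast a simple L-shaped tunnel between them, which merges the pockets.

```csharp
using System;

// Carve an L-shaped tunnel between cells a and b (true = solid, false = air).
// Horizontal leg at a's row first, then vertical leg at b's column; the two
// legs meet at (b.X, a.Y), so the pockets end up connected.
void CarveTunnel(bool[,] solid, (int X, int Y) a, (int X, int Y) b)
{
    for (int x = Math.Min(a.X, b.X); x <= Math.Max(a.X, b.X); x++)
        solid[x, a.Y] = false;
    for (int y = Math.Min(a.Y, b.Y); y <= Math.Max(a.Y, b.Y); y++)
        solid[b.X, y] = false;
}
```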
I agree with Kylotan that a noise function probably isn't ideal, but noise functions can do a lot and you might be able to fix it up in some way. See [this article](http://accidentalnoise.sourceforge.net/minecraftworlds.html) for some ideas. | Try modifying the perlin value based on height, i.e. at the bottom add 1 to the noise value, at the top subtract 1 from the noise value, and in between linearly interpolate between 1 and -1. This way, at the bottom you will almost be guaranteed to get a value >= 0 and at the top you will be almost guaranteed to get a value <= 0.
This is assuming values < 0 are supposed to be air. If I understand your description correctly, you'll have to switch it around.
Cheers. |
16,817 | I'm working on a 2D platformer in XNA. One of the things I'd like to be a main design characteristic is procedural content generation. The first step of that is to procedurally generate the terrain. So, I've done loads of research on how to generate Perlin noise, smooth it out, play with parameters and all that jazz. I've also spent loads of time at pcg.wikidot.com. The problem I have is generating a level that is 100% connected, meaning it's possible for the character to get from the left part of the level all the way to the right part.
Right now, I use Perlin noise to generate a sprite when the noise value at that point is < 0.
Here is a video of the issue I'm having (please excuse the crazy background issues and the fact my character gets stuck when I generate a new level).
<http://screencast.com/t/uWJsIGLoih>
As you can see in the video, the terrain looks interesting enough, but you can't do a whole lot because the Perlin noise compartmentalizes the level into small, random chambers.
I attempted to perform a random walk and overlay the Perlin noise on top of it, so I could guarantee a path from left to right. The issue with that is illustrated below:
<http://screencast.com/t/ilLvxdp3>
So my question is: What kinds of things can I do to ensure the player can get from the left part of the level to the right part of the level? | 2011/09/03 | [
"https://gamedev.stackexchange.com/questions/16817",
"https://gamedev.stackexchange.com",
"https://gamedev.stackexchange.com/users/9705/"
] | In using the word *connectedness*, you've come within a hair's breadth of the tool best suited to determining a solution: graph theory.
Connectedness is a property of graphs. Graphs can be either connected or disconnected (as you're experiencing when a level splits into separate components). Any game level, in any number of dimensions, can be represented as a graph, and logically, this is often the best way to manipulate them. Your game world is a graph in terms of the adjacency between the individual building blocks in your world; and also at the level of connectivity between your various areas. You can use the former to derive the latter.
There is a crucial point to consider when working with (2D) levels as graphs, and that is *planarity*. Depending on your requirements, planarity may or may not be a concern. Given your use of noise, I expect the latter; however, I outline the options here so that you know what they are.
**Planar graph** - simplest example is a labyrinth. A labyrinth differs from a maze in that it contains no branchings -- it is *unicursal*. If you were to take a solid block of shrubbery(!), and generate a labyrinth through it, then at no point could a turning that the labyrinth takes run into an existing path. It's a bit like a game of Snake, really -- if the path is the snake's body, it cannot be allowed to bite/intersect itself. *Further*, you could have a planar maze; this would branch, but at no point could the branches be allowed to intersect existing parts of the maze already generated, just as with a labyrinth.
**Non-planar graph** - Simplest example is a city street map. A city is essentially a maze. However, it is a highly-connected maze in that there are many individual road routes to get from one place to another. Moreover, a non-planar graph embedding allows crossings, which is exactly what intersections are. And as we know, a city is not a city without intersections. They are integral to traffic flow. In games, this can be good or bad, depending on your goals. Good level flow allows AI to act more easily, and exploration to be freer; while on the other hand it also allows a player to get from startpoint to goal quickly -- potentially too quickly.
This brings us to your approach, which is to use noise. Depending on how the Perlin noise output is interpreted, it can have some level of connectedness at the macro scale, but it is not designed for 1-connectedness (a single connected graph). This leaves you a few options.
1. Drop the use of Perlin noise and instead generate a random, planar (non-crossing), connected graph. This provides maximum flow control. However this approach is non-trivial, because graph planarity requires the identification and removal of the Kuratowski subgraphs K3,3 and K5, as well as producing a subsequent planar embedding; neither of which is easy to implement. This is without a doubt the hardest approach, but it had to be mentioned first to know where you stand. All other methods are a shortcut of some sort, around this method, which is the fundamental math behind maze generation.
2. Drop the use of Perlin noise and instead generate a random, non-planar graph [embedded](http://en.wikipedia.org/wiki/Graph_embedding) within a planar surface (AKA a planar embedding) -- this is how games like Diablo and the roguelikes can be made to work easily, as they both use a grid structure to subdivide a planar space (in fact, the vast majority of levels in the roguelikes DO allow crossings, evident in the number of four way intersections). Algorithms producing the connectivity between cells or template rooms are often called "carvers" or "tunnellers", because they carve empty space out of a block of solid rock, incrementally.
3. Do as option (2), but avoid crossings. Thus both the embedding (level geometry) and the topology (level flow) are planar. You will have to be careful not to generate yourself into dead-ends, if you wish to avoid crossings.
4. Generate your map using noise. Then, using a flood fill algorithm on every cell in your unconnected level (which is a graph, albeit multipart and grid-based), you can deduce all the unconnected, discrete subgraphs within that larger disconnected graph. Next, consider how you want to connect each individual subgraph. If you prefer to avoid crossings, I suggest a sequential connection of these. If not, you can connect them any way you wish.
In order to do this organically, rather than producing hard, straight passages, I would use some sort of coherence function to meld the closest points of each pair of subgraphs (if linking sequentially). This will make the join more "liquid", which is in keeping with the typical Perlin output. The other way you could join areas would be to nudge them closer together, so there is some minimal overlap of the empty spaces.
5. Generate an excessively large map using noise. Isolate all subgraphs as described in option 4. Determine which is the most interesting, according to certain criteria (could be size, or something else, but size would be easiest). Pick out and use only that subgraph, which is already completely self-connected. The difficulty with this approach is that you may find it hard to control the size of your resultant graphs, unless you brute-force generate a really large map, or many smaller ones, to pick your perfect subgraph. This is because the size of the subgraphs really depends on the Perlin parameters used, and how you interpret the result.
As an aside to the last two, something I'm sure you have already done, but just in case not: Create a minimal Perlin noise test case in Flash. Play around with parameters until you get a higher degree of connectivity between your "island" areas. I don't think this could ever solve your problem 100% across all generations, since Perlin noise has no inherent guarantee of connectedness. But it could improve connectedness.
Whatever you don't understand, ask and I will clarify. | That video's just confusing, and I can't work out what you're trying to show with it to be honest. But I would suggest Perlin noise is probably the wrong algorithm for the job. It's designed for smoothly undulating continuous entities, whereas you're trying to make scattered and discrete entities. What would probably work better is using random values to re-arrange and adjust hand-designed arrangements of platforms to ensure that the resulting area is playable.
There's some information on how Spelunky did this here: <http://tinysubversions.com/2009/09/spelunkys-procedural-space/> |
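To make that concrete, here is a rough sketch of the template approach (the `RoomTemplate` format and all names are invented for illustration; the linked write-up describes Spelunky's actual scheme): every hand-designed room advertises which of its edges are open, and rooms are laid out left-to-right so the corridor stays open by construction while random selection still provides variety.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Each hand-made room declares which edges are open; picks must keep the
// corridor open, so the resulting row is traversable by construction.
class RoomTemplate
{
    public string Name;                  // stands in for the hand-made tile layout
    public bool OpenLeft, OpenRight;
}

static class RowBuilder
{
    static readonly Random Rng = new Random();

    public static List<RoomTemplate> Build(List<RoomTemplate> palette, int count)
    {
        var row = new List<RoomTemplate>();
        for (int i = 0; i < count; i++)
        {
            bool needLeft = i > 0;              // must connect back to the previous room
            bool needRight = i < count - 1;     // must stay open for the next room
            var fits = palette
                .Where(t => (!needLeft || t.OpenLeft) && (!needRight || t.OpenRight))
                .ToList();                      // assumes the palette covers every case
            row.Add(fits[Rng.Next(fits.Count)]);
        }
        return row;
    }
}
```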
16,817 | I'm working on a 2D platformer in XNA. One of the things I'd like to be a main design characteristic is procedural content generation. The first step of that is to procedurally generate the terrain. So, I've done loads of research on how to generate Perlin noise, smooth it out, play with parameters and all that jazz. I've also spent loads of time at pcg.wikidot.com. The problem I have is generating a level that is 100% connected, meaning it's possible for the character to get from the left part of the level all the way to the right part.
Right now, I use Perlin noise to generate a sprite when the noise value at that point is < 0.
Here is a video of the issue I'm having (please excuse the crazy background issues and the fact my character gets stuck when I generate a new level).
<http://screencast.com/t/uWJsIGLoih>
As you can see in the video, the terrain looks interesting enough, but you can't do a whole lot because the Perlin noise compartmentalizes the level into small, random chambers.
I attempted to perform a random walk and overlay the Perlin noise on top of it, so I could guarantee a path from left to right. The issue with that is illustrated below:
<http://screencast.com/t/ilLvxdp3>
So my question is: What kinds of things can I do to ensure the player can get from the left part of the level to the right part of the level? | 2011/09/03 | [
"https://gamedev.stackexchange.com/questions/16817",
"https://gamedev.stackexchange.com",
"https://gamedev.stackexchange.com/users/9705/"
] | General approaches to this problem:
1. Construct the map in a way that guarantees connectedness from the start. Many of the [dungeon generators on PCG wiki](http://pcg.wikidot.com/pcg-algorithm%3adungeon-generation) work this way.
2. Generate a potentially disconnected map, and then write something (maybe a pathfinder) that checks for connectedness. Throw away the maps that don't work.
3. Generate a potentially disconnected map, and then fix it up whenever it's not connected. Whenever your checking algorithms find an area that's impassable, blast tunnels, build bridges, add teleporters, etc.
4. Generate a potentially disconnected map, and then give the player tools to connect things when needed. Turn the disconnected map bug into a feature. Terraria, Minecraft, etc. do this.
5. Generate a potentially disconnected map, and then make the player restart or move to a different level if the level can't be completed. I believe some roguelikes do this.
I agree with Kylotan that a noise function probably isn't ideal, but noise functions can do a lot and you might be able to fix it up in some way. See [this article](http://accidentalnoise.sourceforge.net/minecraftworlds.html) for some ideas. | That video's just confusing, and I can't work out what you're trying to show with it to be honest. But I would suggest Perlin noise is probably the wrong algorithm for the job. It's designed for smoothly undulating continuous entities, whereas you're trying to make scattered and discrete entities. What would probably work better is using random values to re-arrange and adjust hand-designed arrangements of platforms to ensure that the resulting area is playable.
There's some information on how Spelunky did this here: <http://tinysubversions.com/2009/09/spelunkys-procedural-space/> |
16,817 | I'm working on a 2D platformer in XNA. One of the things I'd like to be a main design characteristic is procedural content generation. The first step of that is to procedurally generate the terrain. So, I've done loads of research on how to generate Perlin noise, smooth it out, play with parameters and all that jazz. I've also spent loads of time at pcg.wikidot.com. The problem I have is generating a level that is 100% connected, meaning it's possible for the character to get from the left part of the level all the way to the right part.
Right now, I use Perlin noise to generate a sprite when the noise value at that point is < 0.
Here is a video of the issue I'm having (please excuse the crazy background issues and the fact my character gets stuck when I generate a new level).
<http://screencast.com/t/uWJsIGLoih>
As you can see in the video, the terrain looks interesting enough, but you can't do a whole lot because the Perlin noise compartmentalizes the level into small, random chambers.
I attempted to perform a random walk and overlay the Perlin noise on top of it, so I could guarantee a path from left to right. The issue with that is illustrated below:
<http://screencast.com/t/ilLvxdp3>
So my question is: What kinds of things can I do to ensure the player can get from the left part of the level to the right part of the level? | 2011/09/03 | [
"https://gamedev.stackexchange.com/questions/16817",
"https://gamedev.stackexchange.com",
"https://gamedev.stackexchange.com/users/9705/"
] | That video's just confusing, and I can't work out what you're trying to show with it to be honest. But I would suggest Perlin noise is probably the wrong algorithm for the job. It's designed for smoothly undulating continuous entities, whereas you're trying to make scattered and discrete entities. What would probably work better is using random values to re-arrange and adjust hand-designed arrangements of platforms to ensure that the resulting area is playable.
There's some information on how Spelunky did this here: <http://tinysubversions.com/2009/09/spelunkys-procedural-space/> | A simple approach that's easy for you to adopt given your existing code is to keep generating random maps until one meets your connectedness constraint.
You can likely quickly check your connectedness constraint using a flood fill from the starting position. Does it fill all the way to a legal end-point?
You can likely do thousands of such checks per second, so it's likely a perfectly acceptable hack.
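A sketch of that generate-and-test loop (`generate` and `isReachable` are hypothetical stand-ins for your existing Perlin generator and a flood-fill reachability check, not real APIs):

```csharp
using System;

// Keep rolling fresh maps until one passes the reachability test. The
// attempt cap is just a safety net against a generator whose parameters
// make connected maps vanishingly rare.
bool[,] GenerateConnectedLevel(Func<bool[,]> generate, Func<bool[,], bool> isReachable)
{
    for (int attempt = 0; attempt < 1000; attempt++)
    {
        bool[,] level = generate();      // e.g. your Perlin-based generator
        if (isReachable(level))          // e.g. flood fill from start to exit
            return level;
    }
    throw new InvalidOperationException("No connected level after 1000 tries.");
}
```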
16,817 | I'm working on a 2D platformer in XNA. One of the things I'd like to be a main design characteristic is procedural content generation. The first step of that is to procedurally generate the terrain. So, I've done loads of research on how to generate Perlin noise, smooth it out, play with parameters and all that jazz. I've also spent loads of time at pcg.wikidot.com. The problem I have is generating a level that is 100% connected, meaning it's possible for the character to get from the left part of the level all the way to the right part.
Right now, I use Perlin noise to generate a sprite when the noise value at that point is < 0.
Here is a video of the issue I'm having (please excuse the crazy background issues and the fact my character gets stuck when I generate a new level).
<http://screencast.com/t/uWJsIGLoih>
As you can see in the video, the terrain looks interesting enough, but you can't do a whole lot because the Perlin noise compartmentalizes the level into small, random chambers.
I attempted to perform a random walk and overlay the Perlin noise on top of it, so I could guarantee a path from left to right. The issue with that is illustrated below:
<http://screencast.com/t/ilLvxdp3>
So my question is: What kinds of things can I do to ensure the player can get from the left part of the level to the right part of the level? | 2011/09/03 | [
"https://gamedev.stackexchange.com/questions/16817",
"https://gamedev.stackexchange.com",
"https://gamedev.stackexchange.com/users/9705/"
] | In using the word *connectedness*, you've come within a hair's breadth of the tool best suited to determining a solution: graph theory.
Connectedness is a property of graphs. Graphs can be either connected or disconnected (as you're experiencing when a level splits into separate components). Any game level, in any number of dimensions, can be represented as a graph, and logically, this is often the best way to manipulate them. Your game world is a graph in terms of the adjacency between the individual building blocks in your world; and also at the level of connectivity between your various areas. You can use the former to derive the latter.
There is a crucial point to consider when working with (2D) levels as graphs, and that is *planarity*. Depending on your requirements, planarity may or may not be a concern. Given your use of noise, I expect the latter; however, I outline the options here so that you know what they are.
**Planar graph** - simplest example is a labyrinth. A labyrinth differs from a maze in that it contains no branchings -- it is *unicursal*. If you were to take a solid block of shrubbery(!), and generate a labyrinth through it, then at no point could a turning that the labyrinth takes run into an existing path. It's a bit like a game of Snake, really -- if the path is the snake's body, it cannot be allowed to bite/intersect itself. *Further*, you could have a planar maze; this would branch, but at no point could the branches be allowed to intersect existing parts of the maze already generated, just as with a labyrinth.
**Non-planar graph** - Simplest example is a city street map. A city is essentially a maze. However, it is a highly-connected maze in that there are many individual road routes to get from one place to another. Moreover, a non-planar graph embedding allows crossings, which is exactly what intersections are. And as we know, a city is not a city without intersections. They are integral to traffic flow. In games, this can be good or bad, depending on your goals. Good level flow allows AI to act more easily, and exploration to be freer; while on the other hand it also allows a player to get from startpoint to goal quickly -- potentially too quickly.
This brings us to your approach, which is to use noise. Depending on how the Perlin noise output is interpreted, it can have some level of connectedness at the macro scale, but it is not designed for 1-connectedness (a single connected graph). This leaves you a few options.
1. Drop the use of Perlin noise and instead generate a random, planar (non-crossing), connected graph. This provides maximum flow control. However this approach is non-trivial, because graph planarity requires the identification and removal of the Kuratowski subgraphs K3,3 and K5, as well as producing a subsequent planar embedding; neither of which is easy to implement. This is without a doubt the hardest approach, but it had to be mentioned first to know where you stand. All other methods are a shortcut of some sort, around this method, which is the fundamental math behind maze generation.
2. Drop the use of Perlin noise and instead generate a random, non-planar graph [embedded](http://en.wikipedia.org/wiki/Graph_embedding) within a planar surface (AKA a planar embedding) -- this is how games like Diablo and the roguelikes can be made to work easily, as they both use a grid structure to subdivide a planar space (in fact, the vast majority of levels in the roguelikes DO allow crossings, evident in the number of four way intersections). Algorithms producing the connectivity between cells or template rooms are often called "carvers" or "tunnellers", because they carve empty space out of a block of solid rock, incrementally.
3. Do as option (2), but avoid crossings. Thus both the embedding (level geometry) and the topology (level flow) are planar. You will have to be careful not to generate yourself into dead-ends, if you wish to avoid crossings.
4. Generate your map using noise. Then, using a flood fill algorithm on every cell in your unconnected level (which is a graph, albeit multipart and grid-based), you can deduce all the unconnected, discrete subgraphs within that larger disconnected graph. Next, consider how you want to connect each individual subgraph. If you prefer to avoid crossings, I suggest a sequential connection of these. If not, you can connect them any way you wish.
In order to do this organically, rather than producing hard, straight passages, I would use some sort of coherence function to meld the closest points of each pair of subgraphs (if linking sequentially). This will make the join more "liquid", which is in keeping with the typical Perlin output. The other way you could join areas would be to nudge them closer together, so there is some minimal overlap of the empty spaces.
5. Generate an excessively large map using noise. Isolate all subgraphs as described in option 4. Determine which is the most interesting, according to certain criteria (could be size, or something else, but size would be easiest). Pick out and use only that subgraph, which is already completely self-connected. The difficulty with this approach is that you may find it hard to control the size of your resultant graphs, unless you brute-force generate a really large map, or many smaller ones, to pick your perfect subgraph. This is because the size of the subgraphs really depends on the Perlin parameters used, and how you interpret the result.
As an aside to the last two, something I'm sure you have already done, but just in case not: Create a minimal Perlin noise test case in Flash. Play around with parameters until you get a higher degree of connectivity between your "island" areas. I don't think this could ever solve your problem 100% across all generations, since Perlin noise has no inherent guarantee of connectedness. But it could improve connectedness.
Whatever you don't understand, ask and I will clarify. | General approaches to this problem:
1. Construct the map in a way that guarantees connectedness from the start. Many of the [dungeon generators on PCG wiki](http://pcg.wikidot.com/pcg-algorithm%3adungeon-generation) work this way.
2. Generate a potentially disconnected map, and then write something (maybe a pathfinder) that checks for connectedness. Throw away the maps that don't work.
3. Generate a potentially disconnected map, and then fix it up whenever it's not connected. Whenever your checking algorithms find an area that's impassable, blast tunnels, build bridges, add teleporters, etc.
4. Generate a potentially disconnected map, and then give the player tools to connect things when needed. Turn the disconnected map bug into a feature. Terraria, Minecraft, etc. do this.
5. Generate a potentially disconnected map, and then make the player restart or move to a different level if the level can't be completed. I believe some roguelikes do this.
I agree with Kylotan that a noise function probably isn't ideal, but noise functions can do a lot and you might be able to fix it up in some way. See [this article](http://accidentalnoise.sourceforge.net/minecraftworlds.html) for some ideas. |
16,817 | I'm working on a 2D platformer in XNA. One of the things I'd like to be a main design characteristic is procedural content generation. The first step of that is to procedurally generate the terrain. So, I've done loads of research on how to generate Perlin noise, smooth it out, play with parameters and all that jazz. I've also spent loads of time at pcg.wikidot.com. The problem I have is generating a level that is 100% connected, meaning it's possible for the character to get from the left part of the level all the way to the right part.
Right now, I use Perlin noise to generate a sprite when the noise value at that point is < 0.
Here is a video of the issue I'm having (please excuse the crazy background issues and the fact my character gets stuck when I generate a new level).
<http://screencast.com/t/uWJsIGLoih>
As you can see in the video, the terrain looks interesting enough, but you can't do a whole lot because the Perlin noise compartmentalizes the level into small, random chambers.
I attempted to perform a random walk and overlay the Perlin noise on top of it, so I could guarantee a path from left to right. The issue with that is illustrated below:
<http://screencast.com/t/ilLvxdp3>
So my question is: What kinds of things can I do to ensure the player can get from the left part of the level to the right part of the level? | 2011/09/03 | [
"https://gamedev.stackexchange.com/questions/16817",
"https://gamedev.stackexchange.com",
"https://gamedev.stackexchange.com/users/9705/"
] | In using the word *connectedness*, you've come within a hair's breadth of the tool best suited to determining a solution: graph theory.
Connectedness is a property of graphs. Graphs can be either connected or disconnected (as you're experiencing when a level splits into separate components). Any game level, in any number of dimensions, can be represented as a graph, and logically, this is often the best way to manipulate them. Your game world is a graph in terms of the adjacency between the individual building blocks in your world; and also at the level of connectivity between your various areas. You can use the former to derive the latter.
There is a crucial point to consider when working with (2D) levels as graphs, and that is *planarity*. Depending on your requirements, planarity may or may not be a concern. Given your use of noise, I expect the latter; however, I outline the options here so that you know what they are.
**Planar graph** - simplest example is a labyrinth. A labyrinth differs from a maze in that it contains no branchings -- it is *unicursal*. If you were to take a solid block of shrubbery(!), and generate a labyrinth through it, then at no point could a turning that the labyrinth takes run into an existing path. It's a bit like a game of Snake, really -- if the path is the snake's body, it cannot be allowed to bite/intersect itself. *Further*, you could have a planar maze; this would branch, but at no point could the branches be allowed to intersect existing parts of the maze already generated, just as with a labyrinth.
**Non-planar graph** - Simplest example is a city street map. A city is essentially a maze. However, it is a highly-connected maze in that there are many individual road routes to get from one place to another. Moreover, a non-planar graph embedding allows crossings, which is exactly what intersections are. And as we know, a city is not a city without intersections. They are integral to traffic flow. In games, this can be good or bad, depending on your goals. Good level flow allows AI to act more easily, and exploration to be freer; while on the other hand it also allows a player to get from startpoint to goal quickly -- potentially too quickly.
This brings us to your approach, which is to use noise. Depending on how the Perlin noise output is interpreted, it can have some level of connectedness at the macro scale, but it is not designed for 1-connectedness (a single connected graph). This leaves you a few options.
1. Drop the use of Perlin noise and instead generate a random, planar (non-crossing), connected graph. This provides maximum flow control. However this approach is non-trivial, because graph planarity requires the identification and removal of the Kuratowski subgraphs K3,3 and K5, as well as producing a subsequent planar embedding; neither of which is easy to implement. This is without a doubt the hardest approach, but it had to be mentioned first to know where you stand. All other methods are a shortcut of some sort, around this method, which is the fundamental math behind maze generation.
2. Drop the use of Perlin noise and instead generate a random, non-planar graph [embedded](http://en.wikipedia.org/wiki/Graph_embedding) within a planar surface (AKA a planar embedding) -- this is how games like Diablo and the roguelikes can be made to work easily, as they both use a grid structure to subdivide a planar space (in fact, the vast majority of levels in the roguelikes DO allow crossings, evident in the number of four way intersections). Algorithms producing the connectivity between cells or template rooms are often called "carvers" or "tunnellers", because they carve empty space out of a block of solid rock, incrementally.
3. Do as option (2), but avoid crossings. Thus both the embedding (level geometry) and the topology (level flow) are planar. You will have to be careful not to generate yourself into dead-ends, if you wish to avoid crossings.
4. Generate your map using noise. Then, using a flood fill algorithm on every cell in your unconnected level (which is a graph, albeit multipart and grid-based), you can deduce all the unconnected, discrete subgraphs within that larger disconnected graph. Next, consider how you want to connect each individual subgraph. If you prefer to avoid crossings, I suggest a sequential connection of these. If not, you can connect them any way you wish.
In order to do this organically, rather than producing hard, straight passages, I would use some sort of coherence function to meld the closest points of each pair of subgraphs (if linking sequentially). This will make the join more "liquid", which is in keeping with the typical Perlin output. The other way you could join areas would be to nudge them closer together, so there is some minimal overlap of the empty spaces.
5. Generate an excessively large map using noise. Isolate all subgraphs as described in option 4. Determine which is the most interesting, according to certain criteria (could be size, or something else, but size would be easiest). Pick out and use only that subgraph, which is already completely self-connected. The difficulty with this approach is that you may find it hard to control the size of your resultant graphs, unless you brute-force generate a really large map, or many smaller ones, to pick your perfect subgraph. This is because the size of the subgraphs really depends on the Perlin parameters used, and how you interpret the result.
As an aside to the last two, something I'm sure you have already done, but just in case not: Create a minimal Perlin noise test case in Flash. Play around with parameters until you get a higher degree of connectivity between your "island" areas. I don't think this could ever solve your problem 100% across all generations, since Perlin noise has no inherent guarantee of connectedness. But it could improve connectedness.
Whatever you don't understand, ask and I will clarify. | A simple approach that's easy for you to adopt given your existing code is to keep generating random maps until one meets your connectedness constraint.
You can likely quickly check your connectedness constraint using a flood fill from the starting position. Does it fill all the way to a legal end-point?
You can likely do thousands of such checks per second, so it's likely a perfectly acceptable hack.
16,817 | I'm working on a 2D platformer in XNA. One of the things I'd like to be a main design characteristic is procedural content generation. The first step of that is to procedurally generate the terrain. So, I've done loads of research on how to generate Perlin noise, smooth it out, play with parameters and all that jazz. I've also spent loads of time at pcg.wikidot.com. The problem I have is generating a level that is 100% connected, meaning it's possible for the character to get from the left part of the level all the way to the right part.
Right now, I use Perlin noise to generate a sprite when the noise value at that point is < 0.
Here is a video of the issue I'm having (please excuse the crazy background issues and the fact my character gets stuck when I generate a new level).
<http://screencast.com/t/uWJsIGLoih>
As you can see in the video, the terrain looks interesting enough, but you can't do a whole lot because the Perlin noise compartmentalizes the level into small, random chambers.
I attempted to perform a random walk and overlay the Perlin noise on top of it, so I could guarantee a path from left to right. The issue with that is illustrated below:
<http://screencast.com/t/ilLvxdp3>
So my question is: What kinds of things can I do to ensure the player can get from the left part of the level to the right part of the level? | 2011/09/03 | [
"https://gamedev.stackexchange.com/questions/16817",
"https://gamedev.stackexchange.com",
"https://gamedev.stackexchange.com/users/9705/"
] | General approaches to this problem:
1. Construct the map in a way that guarantees connectedness from the start. Many of the [dungeon generators on PCG wiki](http://pcg.wikidot.com/pcg-algorithm%3adungeon-generation) work this way.
2. Generate a potentially disconnected map, and then write something (maybe a pathfinder) that checks for connectedness. Throw away the maps that don't work.
3. Generate a potentially disconnected map, and then fix it up whenever it's not connected. Whenever your checking algorithms find an area that's impassable, blast tunnels, build bridges, add teleporters, etc.
4. Generate a potentially disconnected map, and then give the player tools to connect things when needed. Turn the disconnected map bug into a feature. Terraria, Minecraft, etc. do this.
5. Generate a potentially disconnected map, and then make the player restart or move to a different level if the level can't be completed. I believe some roguelikes do this.
I agree with Kylotan that a noise function probably isn't ideal, but noise functions can do a lot and you might be able to fix it up in some way. See [this article](http://accidentalnoise.sourceforge.net/minecraftworlds.html) for some ideas. | A simple approach that's easy for you to adopt given your existing code is to keep generating random maps until one meets your connectedness constraint.
You can likely quickly check your connectedness constraint using a flood fill from the starting position. Does it fill all the way to a legal end-point?
You can likely do thousands of such checks per second, so it's likely a perfectly acceptable hack.
30,693 | I am looking for an answer to something I am lost about; hopefully I can find some help. I recently moved into a new home. The previous owners had a gas dryer and no outlets for an electric dryer. I had the gas line removed as it was faulty, and am now trying to install an electrical line for my electric dryer. I purchased the correct 3 wire outlet, a new 220v breaker since there was not an extra one in the breaker box, and 15ft of indoor copper building wire (which is 4 wires). I wired the cable to the outlet using the 2 hot wires (red and black) and the white wire (neutral). I left the bare copper wire (ground) out, as from my understanding the dryer is already grounded and this wire is not needed. The new breaker is installed and now I need to wire the outlet to the breaker. Here is my issue. I can run the red and black wires to the breaker, no problem, but there does not appear to be an open spot on the neutral bar, where all the other white wires are running to. There is another bar directly below it, which I assume is the ground bar, that has open slots. The top bar has a large black wire running from it to the outside along with the 2 main wires coming into the house. The bottom bar has a black wire that runs to a screw a few inches away on the breaker box itself. With there being no open slots for my neutral wire to go, where should I put it?
Can I put it on the bottom bar?
Can I tie it in with another neutral wire?
Perhaps one of the other neutral wires can be moved to the other bar?
Edit: I just noticed that behind the two bars at the bottom, there is a copper piece of metal right in the center that appears to connect the 2 bars, at least I assume so.
Edit 2: Just thought I would update everyone on the situation. I added a 220V 30 amp breaker to the breaker box. I ran the red and black wires to this breaker. I then moved the green wire running to the neutral bus down to the ground bus. This green wire was labeled as upstairs kitchen with its associated breaker. I checked the kitchen equipment; it is all running normally. I then hooked the white neutral wire coming from the dryer outlet into the now open spot on the neutral bar. I then turned on the breaker and my dryer appears to be working normally as well. Thank you all for the help. I will of course be updating my dryer hookups to the 4 wire standard as soon as possible, and I will probably be adding an additional neutral bar to my breaker box as well. I will update this post if I run into any problems.
 | 2013/08/15 | [
"https://diy.stackexchange.com/questions/30693",
"https://diy.stackexchange.com",
"https://diy.stackexchange.com/users/14553/"
] | Without being able to see the cables as they enter the cabinet; or the ability to touch or trace them, here is what I assume is going on.
Definitions:
============

Grounded (neutral) from the service
-----------------------------------
A typical single split phase service is made up of 3 wires. Two ungrounded (hot) conductors, and one grounded (neutral) conductor. The ungrounded (hot) conductors will connect to the main service panel through a disconnect (usually a large breaker), while the grounded (neutral) connects to the neutral lug. The neutral lug will be bonded (electrically connected) to the neutral bus bar, and all grounded (neutral) branch circuit conductors will terminate at the neutral bus.
Grounding Electrode Conductor
-----------------------------
This conductor is used to connect the grounding electrode (ground rod, etc.), to the grounding bus in the panel. All equipment grounding conductors will be connected to this bus.
Bonding Jumper
--------------
The bonding jumper is used to bond (electrically connect), the un-energized metal parts of the panel to the grounding system.
Assumption:
===========
Since it appears that (what I assume is) the grounding electrode conductor terminates at the neutral bus, I'm also assuming that this is the main service disconnect. This leads me to believe that the neutral and grounding buses are bonded (electrically connected). In which case, technically, grounded (neutral) branch circuit conductors *can* terminate at the grounding bus.
So you have two options:
1. Terminate the grounded (neutral) from the new circuit to the grounding bus.
2. Move the green wire that is terminated on the neutral bus, to the grounding bus. Then terminate the grounded (neutral) from the new circuit, to the freed up slot on the neutral bus.
Additional Information and Code Compliance:
===========================================
Number of Conductors
--------------------
Since this is a new circuit, it has to be installed to current code standards.
>
> National Electrical Code 2011
> =============================
>
>
> ARTICLE 250 — GROUNDING AND BONDING
> -----------------------------------
>
>
> ### VI. Equipment Grounding and Equipment Grounding Conductors
>
>
> **250.140 Frames of Ranges and Clothes Dryers.** Frames of electric ranges, wall-mounted ovens, counter-mounted cooking units, clothes dryers, and outlet or junction boxes that are part of the circuit for these appliances shall be connected to the equipment grounding conductor in the manner specified by 250.134 or 250.138.
>
>
>
Which in this case means installing a [NEMA 14](http://en.wikipedia.org/wiki/NEMA_connector#NEMA_14) receptacle for the dryer, and a proper grounding conductor.

You'll have to follow the dryer manufacturer's installation instructions for upgrading to a 4 wire cord. For more information see [this answer](https://diy.stackexchange.com/a/25527/33), and [this answer](https://diy.stackexchange.com/a/30517/33).
Since you've said that you're already using 4 wire cable, you'll simply have to terminate the grounding conductor in the cable to the grounding bus in the service panel. Then connect the other end of the grounding conductor to the grounding terminal in the dryer receptacle.
Size of Conductors
------------------
You'll also want to be sure that you're using the proper size breaker and conductors. In the case of a dryer, you'll typically use a 30 ampere breaker and 10 AWG conductors (depending on the length of the run). However, you'll want to check the dryer manufacturer's installation instructions to verify this. | It is hard for me to tell if the two bars are physically connected from this picture, and I would be surprised if they were. They should not be. The ground and neutral are connected together at the service pole, not the main panel. Why? The high voltage main line on the pole is 17,000 VAC; the secondary of the transformer on the utility pole is, for residential, 240 VAC. These voltage "potentials" are not physically connected, but magnetically connected. In fact, the secondary can be "any" voltage potential based on other aspects of the circuit. So... the electric utility connects a ground wire to one side of the transformer to "pull" the voltage of that wire to ground potential. We have all seen a ground rod below the main panel; this is the same concept as the ground wire on the "pole".
So now we have a wire that is at ground potential (voltage) which we call the neutral. We won't get shocked by touching this wire (if we ground it correctly, otherwise, ouch).
In your picture, I see a large stranded wire on the right connected to the top bus bar, which also has multiple white wires (for other circuits) connected to this same bus. I believe this large stranded wire is the ground wire coming from the pole. Then I see a black wire, screwed to the metal panel casing coming from the left, wired to the bottom bar, or bus. This is confusing; the "ground" and "neutral" should be separate (even though they are the same at the pole).
So the answer: put all the "ground" wires on the same bus. Put all the "neutral" wires on the same bus. Then it's easy: connect your 4-wire to the associated bus bars.
If your bus bars are full, you can gang the extra wires by using a single wire "from" the bus to multiple wires connected using a wire nut (this is the same concept as putting only 1 wire on a breaker that is rated for only a single wire). Or add another bus (either ground or neutral) to the panel as needed (i.e., screw another bus next to the one that's full).
And you're done!
2,475,744 | What do you think is an interesting topic in distributed systems?
I should pick a topic and present it on Monday. At first I chose to talk about Wuala, but after reading about it, I don't think it's that interesting.
So what is an interesting (new) topic in distributed systems that I can research?
Sorry if this is the wrong place to post. | 2010/03/19 | [
"https://Stackoverflow.com/questions/2475744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/274359/"
] | Take for example a database like [Cassandra](http://cassandra.apache.org/) with the following features:
* Decentralized: Every node in the cluster is identical. There are no network bottlenecks. There are **no single points of failure**.
* Elastic: Read and write throughput both increase **linearly** as new machines are added, with no downtime or interruption to applications.
* Fault Tolerant: Data is automatically replicated to multiple nodes for fault-tolerance. Replication across multiple data centers is supported. Failed nodes can be replaced with **no downtime**.
* **Consistent**, Eventually: Cassandra implements an eventually consistent model and includes sophisticated features such as Hinted Handoff and Read Repair to minimize inconsistency windows.
* Highly Available: Writes and reads offer a **tunable ConsistencyLevel**, all the way from "writes never fail" to "block for all replicas to be readable," with the quorum level in the middle. (A toy sketch of the quorum rule follows this list.)
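As a toy illustration of that tunable ConsistencyLevel (this is not Cassandra's actual code; all names are invented): a value lives on N replicas, a write stamps a fresh version onto W of them, and a read asks R of them and keeps the newest version seen. Whenever R + W > N, the read set must overlap the write set, so the latest write is always observed.

```csharp
using System;
using System.Linq;

// Toy quorum register: Write(value, w) stamps a new version onto w replicas;
// Read(r) queries r replicas and returns the newest version among them.
class QuorumRegister
{
    private readonly (long Version, string Value)[] replicas;
    private long clock;

    public QuorumRegister(int n) => replicas = new (long, string)[n];

    public void Write(string value, int w)
    {
        long version = ++clock;
        for (int i = 0; i < w; i++)          // a real system picks w *live* nodes
            replicas[i] = (version, value);
    }

    public string Read(int r)
    {
        // Deliberately read from the other end of the array so the overlap
        // (or lack of it) between the read set and the write set is visible.
        return replicas.Skip(replicas.Length - r)
                       .OrderByDescending(rep => rep.Version)
                       .First().Value;
    }
}
```

With N = 3, a Write with w = 2 followed by a Read with r = 2 always returns the new value (2 + 2 > 3), while a Read with r = 1 may return stale data: that dial is exactly the consistency trade-off described in the list above.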
I think you could hold a semester of lectures on just solving problems encountered creating such a system and/or making it high-performance. As a bonus, the topic is of wide interest (anyone writing applications for the web, basically) and already partly known, so you have a good chance to capture the attention of a crowd of developers. | The consensus problem.
1. The Byzantine Generals Problem in the synchronous environment.
2. The whole idea of the impossibility proof by FLP for asynchronous systems.
3. The sincere effort of Lamport to find the best possible solution to the problem in the asynchronous setting, leading to Paxos.
2,475,744 | What do you think is an interesting topic in distributed systems?
I should pick a topic and present it on Monday. At first I chose to talk about Wuala, but after reading about it, I don't think it's that interesting.
So what is an interesting (new) topic in distributed systems that I can research?
Sorry if this is the wrong place to post. | 2010/03/19 | [
"https://Stackoverflow.com/questions/2475744",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/274359/"
] | Take for example a database like [Cassandra](http://cassandra.apache.org/) with the following features:
* Decentralized: Every node in the cluster is identical. There are no network bottlenecks. There are **no single points of failure**.
* Elastic: Read and write throughput both increase **linearly** as new machines are added, with no downtime or interruption to applications.
* Fault Tolerant: Data is automatically replicated to multiple nodes for fault-tolerance. Replication across multiple data centers is supported. Failed nodes can be replaced with **no downtime**.
* **Consistent**, Eventually: Cassandra implements an eventually consistent model and includes sophisticated features such as Hinted Handoff and Read Repair to minimize inconsistency windows.
* Highly Available: Writes and reads offer a **tunable ConsistencyLevel**, all the way from "writes never fail" to "block for all replicas to be readable," with the quorum level in the middle.
I think you could hold a semester of lectures on just solving problems encountered creating such a system and/or making it high-performance. As a bonus, the topic is of wide interest (anyone writing applications for the web, basically) and already partly known, so you have a good chance to capture the attention of a crowd of developers. | Coordinated checkpointing is interesting. To recover from a failure, a system must be returned to a correct state, so distributed systems record and recover their state through checkpointing and logging.
With checkpointing, the system records its state from time to time, and when an error occurs the system reverts to the most recent checkpoint.
A record of the system's state is also called a distributed snapshot. With coordinated checkpointing, processes write, in sync, records of all input and output since the previous snapshot. The coordination is necessary because without it you get a domino effect: unable to determine a consistent global state, you keep having to trace events backwards until you reach the system's initial state. |
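The classic algorithm behind this coordination is the Chandy-Lamport snapshot: a process records its own state, sends a marker on every outgoing channel, and records each incoming channel until the marker arrives on it. A minimal sketch of the marker rule for one process that has already started its snapshot (names are illustrative):

```python
class SnapshotProcess:
    """Chandy-Lamport marker handling for one process (toy sketch)."""

    def __init__(self, out_channels):
        self.out_channels = out_channels
        self.recorded_state = None
        self.recording = {}  # in-channel -> messages still in transit

    def start_snapshot(self, local_state, in_channels):
        # Record our own state; a marker must then go out on every channel.
        self.recorded_state = local_state
        self.recording = {ch: [] for ch in in_channels}
        return [("MARKER", ch) for ch in self.out_channels]

    def on_message(self, channel, msg):
        if msg == "MARKER":
            # Channel state = messages seen between our snapshot and the marker.
            return ("channel_recorded", channel, self.recording.pop(channel, []))
        if channel in self.recording:
            self.recording[channel].append(msg)  # in-flight during the snapshot
        return ("delivered", msg)

p = SnapshotProcess(out_channels=["to_B"])
print(p.start_snapshot({"balance": 100}, in_channels=["from_B"]))
print(p.on_message("from_B", "transfer:10"))   # recorded as in-flight
print(p.on_message("from_B", "MARKER"))        # channel recording complete
```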
8,587,045 | I need to parse the content type of a typed URL:
1. if it's an image then do something.
2. if it's a web page then scan this page for images and fill array with images' SRC attributes (and order it by size).
How can I do this using only JS?
How can I do this using only ASP.NET and C#? | 2011/12/21 | [
"https://Stackoverflow.com/questions/8587045",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | For the activity feed, we use <https://github.com/justquick/django-activity-stream> Documentation: <http://justquick.github.com/django-activity-stream/>
For the js widget and live notifications, we use <https://github.com/subsume/django-subscription> yourlabs example; it depends on Redis, but you can easily add a model backend if you really want to. Redis is a good choice; it's half a megabyte of dependency. Documentation: <http://django-social.rtfd.org>
There is no application that does meta-notifications ("notification grouping") properly, but a lot of research has been done. Basically you need another app, with a MetaNotification model, and something (management command, signal ...) that will visit notifications and create MetaNotification instances. Then you should display MetaNotification lists rather than Activity or notification lists.
Finally, if you want configurable email notifications then you can use django-notifications: <https://github.com/jtauber/django-notification> or this app which looks nicer: <http://www.tomaz.me/django-notifications/>
I'm not aware of any app that does it all. It *is* going to be some work for you.
"It's a long way to the top if you wanna rock'n'roll" or as I like to say "patience and perseverance" :) | <https://pypi.python.org/pypi/feedly> allows you to build newsfeed and notification systems using Cassandra and/or Redis. Examples of what you can build are applications like the Facebook newsfeed, your Twitter stream or your Pinterest following page. |
44,530 | Why are there no Mach 4+ fighter aircraft? It seems that such aircraft would have massive advantages when it comes to being able to fire longer range missiles and evade return fire. | 2017/10/10 | [
"https://aviation.stackexchange.com/questions/44530",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/16245/"
] | Today's fighters are designed with a max speed around Mach 2 (max speed is generally calculated with no external stores and 50% fuel; the operational top speed is way lower). They keep this ability to pass Mach 1 only to be able to respond fast enough to intercept / air policing missions.
Countries do not design very fast fighters (and planes in general) because :
1. It's extremely hard to achieve.
2. It does not give any advantage.
3. It has no use anymore.
Back in the Cold War period, radars and SAMs were not very sophisticated yet: they did not see very far, and ground-to-air missiles were not able to get very high. So to protect their airspace from potential intruders / enemy bombers, countries relied on interceptor aircraft: a fast jet able to catch up with the target before it got to its objective.
At this time, going very fast and very high was an advantage: SAMs were not a threat. The US made a very successful reconnaissance plane exploiting that: the [SR-71 Blackbird](https://en.wikipedia.org/wiki/Lockheed_SR-71_Blackbird) (max speed: Mach 3.3). Many missiles were fired towards the SR-71; none reached its target.
Speed was used for intercept and reconnaissance / intel.
But radars and SAMs improved, getting more range, becoming able to threaten very fast planes. Also satellite images got better and better. The need for very fast interceptors decreased as radar range increased, and the need for very fast recon aircraft decreased as satellite images got better.
Now why does going very very fast not give any advantage to a fighter? Because you cannot turn.
The thing to keep in mind is G force: for the same turn rate, the faster you go, the more g you have to take. And a human can sustain a very limited amount of g. This means a plane going at Mach 4 will barely be able to turn without blacking out its pilot. And if you can't turn, you won't evade enemy missiles, which negates the extra missile range you gained.
And then there is the technical challenge: dealing with as little drag as possible (that means no missiles / bombs / fuel under the wings), heat, airframe deformation and wear... all for an advantage which isn't there anymore.
Despite all that, the US is suspected of working on a Mach 6+ aircraft, probably unpiloted.
(Please keep in mind that I simplified a lot of things so as not to make this post too complex) | Fuel consumption and combat range: going to Mach 4 would be a problem for fuel consumption.
Some legends say there is such a thing, but we do not know of it :-) |
44,530 | Why are there no Mach 4+ fighter aircraft? It seems that such aircraft would have massive advantages when it comes to being able to fire longer range missiles and evade return fire. | 2017/10/10 | [
"https://aviation.stackexchange.com/questions/44530",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/16245/"
] | In addition to the other answers: at high speed, the aircraft is heated by air compression. At Mach 2.2, skin temperatures become so high that you can't safely use aluminium anymore; you have to switch to steel or titanium, both of which are expensive to manufacture. The SR-71 was built in titanium. It also leaked like a sieve while on the ground; heat expansion at speed sealed those leaks. The SR-71 had to use a special fuel to make that an acceptable risk.
At Mach 4 this issue gets worse again, and you may have to use active cooling or fragile heat shielding to keep your airframe from melting. | Fuel consumption and combat range: going to Mach 4 would be a problem for fuel consumption.
Some legends say there is such a thing, but we do not know of it :-) |
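The compression heating in the answer above can be estimated from the total (stagnation) temperature of air, T0 = T(1 + 0.2 M^2). A quick back-of-the-envelope check, assuming a standard-atmosphere ambient of about 217 K at cruise altitude (real skin temperatures run somewhat below this ideal value):

```python
def stagnation_temp_k(mach: float, ambient_k: float = 216.65) -> float:
    """Total temperature of air (gamma = 1.4): T0 = T * (1 + 0.2 * M**2)."""
    return ambient_k * (1 + 0.2 * mach ** 2)

for m in (2.2, 3.3, 4.0):
    t0 = stagnation_temp_k(m)
    print(f"Mach {m}: ~{t0:.0f} K ({t0 - 273.15:.0f} °C)")
# Mach 2.2: ~426 K (153 °C) -- near aluminium's practical limit
# Mach 3.3: ~689 K (415 °C) -- titanium territory (SR-71)
# Mach 4.0: ~910 K (637 °C) -- active cooling / heat shielding
```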
44,530 | Why are there no Mach 4+ fighter aircraft? It seems that such aircraft would have massive advantages when it comes to being able to fire longer range missiles and evade return fire. | 2017/10/10 | [
"https://aviation.stackexchange.com/questions/44530",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/16245/"
] | The [planned successor to the SR-71](https://en.wikipedia.org/wiki/Project_Isinglass) was supposed to reach Mach 4 to 5 but was never completed because of projected costs equivalent to almost 20 billion in today's dollars.
But that was a reconnaissance platform. A fighter has to do more than fly fast and shoot pretty pictures.
**Agility** would come first on this list: The realistically possible [turn rate of a Mach 5 airplane](https://aviation.stackexchange.com/questions/36167/can-hypersonic-aircraft-be-agile-without-the-g-forces-harming-the-pilot/36241#36241) would let it complete a full circle in about 800 seconds, that is almost a quarter of an hour, just for a single circle. If you want to turn any quicker, you [better slow down](https://aviation.stackexchange.com/questions/32599/what-is-the-average-speed-used-by-modern-jet-fighters-when-in-dogfight/34696#34696), complete your turn, and speed up again.
[](https://i.stack.imgur.com/S54uH.jpg)
([NASA](https://www.hq.nasa.gov/office/pao/History/x15conf/what.html)) A Mach 4+ turn needs a lot of ground; the SR-71 routinely crossed multiple smaller countries to execute a 180.
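The 800-second figure follows from basic turn kinematics: turn rate is lateral acceleration over speed, so a full circle takes 2πv/a. A quick check under an assumed sustainable lateral acceleration (all values illustrative):

```python
import math

def full_circle_seconds(mach, lateral_g, speed_of_sound=295.0):
    """Time for a 360-degree turn: t = 2*pi*v / a (a_sound ~295 m/s at altitude)."""
    v = mach * speed_of_sound
    a = lateral_g * 9.81
    return 2 * math.pi * v / a

print(round(full_circle_seconds(5.0, 1.2)))  # ~787 s -- the "about 800 seconds"
print(round(full_circle_seconds(0.9, 8.0)))  # ~21 s  -- a subsonic dogfighter
```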
**Visual contact** is another requirement in today's muddled conflicts. In order to avoid collateral damage (and embarrassing political situations), it is often demanded that pilots visually identify a target before opening fire. Try that at Mach 4! Stand-off weapons will be of no use in this situation, and top speed will be irrelevant.
**Small size** may come as a surprise, but the faster aircraft [tend to be bigger and heavier](https://en.wikipedia.org/wiki/North_American_XF-108_Rapier), too. This will make them much more expensive, and fewer will be procured. The specialty materials needed for the extreme flight envelope will make them maintenance-intensive, too. This will make them high-value assets that cannot be risked in combat. In the end, the fighting will be done by the [smaller, more numerous and less expensive](https://aviation.stackexchange.com/questions/32599/what-is-the-average-speed-used-by-modern-jet-fighters-when-in-dogfight/34696#34696) platforms. Why then fund such an expensive diva in the first place?
**Operating altitude**: If you look at [the envelope of the SR-71](https://aviation.stackexchange.com/questions/39215/what-are-the-limiting-factors-for-high-altitude-planes-e-g-u2-or-sr71-prevent/39223#39223), you will notice that flight speeds above Mach 3 required it to climb above 65,000 ft. Only then is air density low enough to lower drag sufficiently for prolonged Mach 3 flight. A Mach 4 design would need to climb to 90,000 ft to play out its design speed. What altitude will the adversary be at? | Fuel consumption and combat range: going to Mach 4 would be a problem for fuel consumption.
Some legends say there is such a thing, but we do not know of it :-) |
44,530 | Why are there no Mach 4+ fighter aircraft? It seems that such aircraft would have massive advantages when it comes to being able to fire longer range missiles and evade return fire. | 2017/10/10 | [
"https://aviation.stackexchange.com/questions/44530",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/16245/"
] | Probably the best answer to this question is that, up to this point, there really has not been a need to build fighters capable of those speeds to counter the air forces of unfriendly nations.
Prior to the mid to late 1970s, the USAF was quietly making preparations at the behest of several aircraft OEMs to make the transition from manned aircraft to high speed missiles for both strategic and tactical uses, given the tremendous punishment both US and modern Western air power endured against Soviet radar-guided missiles in conflicts like Vietnam and Yom Kippur. The costs associated with developing a high speed manned aircraft which could evade these threats were prohibitive when compared with the costs of developing a disposable missile to counter it or any future threat, hence the reason the XB-70, among other programs, was cancelled. However, with the advent of stealth technology in the late 1970s and 1980s, the radar threat was effectively nullified and the exact details of how to build effective stealth aircraft remained out of adversarial hands for the past 20 years, giving us a tactical advantage over existing platforms and making the expenditure of national treasure on a high speed manned platform a moot point.
But that's not to say high speed aircraft are dead. The Lockheed Skunk Works is currently developing a new unmanned reconnaissance aircraft, dubbed SR-72, capable of Mach 6+, using a combined cycle engine. Hypersonic aircraft have been proposed over the years for troop or ordnance delivery and there is good evidence that the US and China are currently developing hypersonic air-to-air, surface to air, and ballistic missiles.
It may be that the quantum leap that stealth technology gave us is what delayed the development of high supersonic and hypersonic manned combat aircraft, and that, as potential threat nations develop and perfect their own stealth technology, nullifying the advantages it gave us, we will begin to search for other options, like additional speed, to retain the tactical advantage. | Fuel consumption and combat range: going to Mach 4 would be a problem for fuel consumption.
Some legends say there is such a thing, but we do not know of it :-) |
44,530 | Why are there no Mach 4+ fighter aircraft? It seems that such aircraft would have massive advantages when it comes to being able to fire longer range missiles and evade return fire. | 2017/10/10 | [
"https://aviation.stackexchange.com/questions/44530",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/16245/"
] | Today's fighters are designed with a max speed around Mach 2 (max speed is generally calculated with no external stores and 50% fuel; the operational top speed is way lower). They keep this ability to pass Mach 1 only to be able to respond fast enough to intercept / air policing missions.
Countries do not design very fast fighters (and planes in general) because :
1. It's extremely hard to achieve.
2. It does not give any advantage.
3. It has no use anymore.
Back in the Cold War period, radars and SAMs were not very sophisticated yet: they did not see very far, and ground-to-air missiles were not able to get very high. So to protect their airspace from potential intruders / enemy bombers, countries relied on interceptor aircraft: a fast jet able to catch up with the target before it got to its objective.
At this time, going very fast and very high was an advantage: SAMs were not a threat. The US made a very successful reconnaissance plane exploiting that: the [SR-71 Blackbird](https://en.wikipedia.org/wiki/Lockheed_SR-71_Blackbird) (max speed: Mach 3.3). Many missiles were fired towards the SR-71; none reached its target.
Speed was used for intercept and reconnaissance / intel.
But radars and SAMs improved, getting more range, becoming able to threaten very fast planes. Also satellite images got better and better. The need for very fast interceptors decreased as radar range increased, and the need for very fast recon aircraft decreased as satellite images got better.
Now why does going very very fast not give any advantage to a fighter? Because you cannot turn.
The thing to keep in mind is G force: for the same turn rate, the faster you go, the more g you have to take. And a human can sustain a very limited amount of g. This means a plane going at Mach 4 will barely be able to turn without blacking out its pilot. And if you can't turn, you won't evade enemy missiles, which negates the extra missile range you gained.
And then there is the technical challenge: dealing with as little drag as possible (that means no missiles / bombs / fuel under the wings), heat, airframe deformation and wear... all for an advantage which isn't there anymore.
Despite all that, the US is suspected of working on a Mach 6+ aircraft, probably unpiloted.
(Please keep in mind that I simplified a lot of things so as not to make this post too complex) | In addition to the other answers: at high speed, the aircraft is heated by air compression. At Mach 2.2, skin temperatures become so high that you can't safely use aluminium anymore; you have to switch to steel or titanium, both of which are expensive to manufacture. The SR-71 was built in titanium. It also leaked like a sieve while on the ground; heat expansion at speed sealed those leaks. The SR-71 had to use a special fuel to make that an acceptable risk.
At Mach 4 this issue gets worse again, and you may have to use active cooling or fragile heat shielding to keep your airframe from melting. |
44,530 | Why are there no Mach 4+ fighter aircraft? It seems that such aircraft would have massive advantages when it comes to being able to fire longer range missiles and evade return fire. | 2017/10/10 | [
"https://aviation.stackexchange.com/questions/44530",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/16245/"
] | Today's fighters are designed with a max speed around Mach 2 (max speed is generally calculated with no external stores and 50% fuel; the operational top speed is way lower). They keep this ability to pass Mach 1 only to be able to respond fast enough to intercept / air policing missions.
Countries do not design very fast fighters (and planes in general) because :
1. It's extremely hard to achieve.
2. It does not give any advantage.
3. It has no use anymore.
Back in the Cold War period, radars and SAMs were not very sophisticated yet: they did not see very far, and ground-to-air missiles were not able to get very high. So to protect their airspace from potential intruders / enemy bombers, countries relied on interceptor aircraft: a fast jet able to catch up with the target before it got to its objective.
At this time, going very fast and very high was an advantage: SAMs were not a threat. The US made a very successful reconnaissance plane exploiting that: the [SR-71 Blackbird](https://en.wikipedia.org/wiki/Lockheed_SR-71_Blackbird) (max speed: Mach 3.3). Many missiles were fired towards the SR-71; none reached its target.
Speed was used for intercept and reconnaissance / intel.
But radars and SAMs improved, getting more range, becoming able to threaten very fast planes. Also satellite images got better and better. The need for very fast interceptors decreased as radar range increased, and the need for very fast recon aircraft decreased as satellite images got better.
Now why does going very very fast not give any advantage to a fighter? Because you cannot turn.
The thing to keep in mind is G force: for the same turn rate, the faster you go, the more g you have to take. And a human can sustain a very limited amount of g. This means a plane going at Mach 4 will barely be able to turn without blacking out its pilot. And if you can't turn, you won't evade enemy missiles, which negates the extra missile range you gained.
And then there is the technical challenge: dealing with as little drag as possible (that means no missiles / bombs / fuel under the wings), heat, airframe deformation and wear... all for an advantage which isn't there anymore.
Despite all that, the US is suspected of working on a Mach 6+ aircraft, probably unpiloted.
(Please keep in mind that I simplified a lot of things so as not to make this post too complex) | Probably the best answer to this question is that, up to this point, there really has not been a need to build fighters capable of those speeds to counter the air forces of unfriendly nations.
Prior to the mid to late 1970s, the USAF was quietly making preparations at the behest of several aircraft OEMs to make the transition from manned aircraft to high speed missiles for both strategic and tactical uses, given the tremendous punishment both US and modern Western air power endured against Soviet radar-guided missiles in conflicts like Vietnam and Yom Kippur. The costs associated with developing a high speed manned aircraft which could evade these threats were prohibitive when compared with the costs of developing a disposable missile to counter it or any future threat, hence the reason the XB-70, among other programs, was cancelled. However, with the advent of stealth technology in the late 1970s and 1980s, the radar threat was effectively nullified and the exact details of how to build effective stealth aircraft remained out of adversarial hands for the past 20 years, giving us a tactical advantage over existing platforms and making the expenditure of national treasure on a high speed manned platform a moot point.
But that's not to say high speed aircraft are dead. The Lockheed Skunk Works is currently developing a new unmanned reconnaissance aircraft, dubbed SR-72, capable of Mach 6+, using a combined cycle engine. Hypersonic aircraft have been proposed over the years for troop or ordnance delivery and there is good evidence that the US and China are currently developing hypersonic air-to-air, surface to air, and ballistic missiles.
It may be that the quantum leap that stealth technology gave us is what delayed the development of high supersonic and hypersonic manned combat aircraft, and that, as potential threat nations develop and perfect their own stealth technology, nullifying the advantages it gave us, we will begin to search for other options, like additional speed, to retain the tactical advantage. |
44,530 | Why are there no Mach 4+ fighter aircraft? It seems that such aircraft would have massive advantages when it comes to being able to fire longer range missiles and evade return fire. | 2017/10/10 | [
"https://aviation.stackexchange.com/questions/44530",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/16245/"
] | The [planned successor to the SR-71](https://en.wikipedia.org/wiki/Project_Isinglass) was supposed to reach Mach 4 to 5 but was never completed because of projected costs equivalent to almost 20 billion in today's dollars.
But that was a reconnaissance platform. A fighter has to do more than fly fast and shoot pretty pictures.
**Agility** would come first on this list: The realistically possible [turn rate of a Mach 5 airplane](https://aviation.stackexchange.com/questions/36167/can-hypersonic-aircraft-be-agile-without-the-g-forces-harming-the-pilot/36241#36241) would let it complete a full circle in about 800 seconds, that is almost a quarter of an hour, just for a single circle. If you want to turn any quicker, you [better slow down](https://aviation.stackexchange.com/questions/32599/what-is-the-average-speed-used-by-modern-jet-fighters-when-in-dogfight/34696#34696), complete your turn, and speed up again.
[](https://i.stack.imgur.com/S54uH.jpg)
([NASA](https://www.hq.nasa.gov/office/pao/History/x15conf/what.html)) A Mach 4+ turn needs a lot of ground; the SR-71 routinely crossed multiple smaller countries to execute a 180.
**Visual contact** is another requirement in today's muddled conflicts. In order to avoid collateral damage (and embarrassing political situations), it is often demanded that pilots visually identify a target before opening fire. Try that at Mach 4! Stand-off weapons will be of no use in this situation, and top speed will be irrelevant.
**Small size** may come as a surprise, but the faster aircraft [tend to be bigger and heavier](https://en.wikipedia.org/wiki/North_American_XF-108_Rapier), too. This will make them much more expensive, and fewer will be procured. The specialty materials needed for the extreme flight envelope will make them maintenance-intensive, too. This will make them high-value assets that cannot be risked in combat. In the end, the fighting will be done by the [smaller, more numerous and less expensive](https://aviation.stackexchange.com/questions/32599/what-is-the-average-speed-used-by-modern-jet-fighters-when-in-dogfight/34696#34696) platforms. Why then fund such an expensive diva in the first place?
**Operating altitude**: If you look at [the envelope of the SR-71](https://aviation.stackexchange.com/questions/39215/what-are-the-limiting-factors-for-high-altitude-planes-e-g-u2-or-sr71-prevent/39223#39223), you will notice that flight speeds above Mach 3 required it to climb above 65,000 ft. Only then is air density low enough to lower drag sufficiently for prolonged Mach 3 flight. A Mach 4 design would need to climb to 90,000 ft to play out its design speed. What altitude will the adversary be at? | In addition to the other answers: at high speed, the aircraft is heated by air compression. At Mach 2.2, skin temperatures become so high that you can't safely use aluminium anymore; you have to switch to steel or titanium, both of which are expensive to manufacture. The SR-71 was built in titanium. It also leaked like a sieve while on the ground; heat expansion at speed sealed those leaks. The SR-71 had to use a special fuel to make that an acceptable risk.
At Mach 4 this issue gets worse again, and you may have to use active cooling or fragile heat shielding to keep your airframe from melting. |
44,530 | Why are there no Mach 4+ fighter aircraft? It seems that such aircraft would have massive advantages when it comes to being able to fire longer range missiles and evade return fire. | 2017/10/10 | [
"https://aviation.stackexchange.com/questions/44530",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/16245/"
] | In addition to the other answers: at high speed, the aircraft is heated by air compression. At Mach 2.2, skin temperatures become so high that you can't safely use aluminium anymore; you have to switch to steel or titanium, both of which are expensive to manufacture. The SR-71 was built in titanium. It also leaked like a sieve while on the ground; heat expansion at speed sealed those leaks. The SR-71 had to use a special fuel to make that an acceptable risk.
At Mach 4 this issue gets worse again, and you may have to use active cooling or fragile heat shielding to keep your airframe from melting. | Probably the best answer to this question is that, up to this point, there really has not been a need to build fighters capable of those speeds to counter the air forces of unfriendly nations.
Prior to the mid to late 1970s, the USAF was quietly making preparations at the behest of several aircraft OEMs to make the transition from manned aircraft to high speed missiles for both strategic and tactical uses, given the tremendous punishment both US and modern Western air power endured against Soviet radar-guided missiles in conflicts like Vietnam and Yom Kippur. The costs associated with developing a high speed manned aircraft which could evade these threats were prohibitive when compared with the costs of developing a disposable missile to counter it or any future threat, hence the reason the XB-70, among other programs, was cancelled. However, with the advent of stealth technology in the late 1970s and 1980s, the radar threat was effectively nullified and the exact details of how to build effective stealth aircraft remained out of adversarial hands for the past 20 years, giving us a tactical advantage over existing platforms and making the expenditure of national treasure on a high speed manned platform a moot point.
But that's not to say high speed aircraft are dead. The Lockheed Skunk Works is currently developing a new unmanned reconnaissance aircraft, dubbed SR-72, capable of Mach 6+, using a combined cycle engine. Hypersonic aircraft have been proposed over the years for troop or ordnance delivery and there is good evidence that the US and China are currently developing hypersonic air-to-air, surface to air, and ballistic missiles.
It may be that the quantum leap that stealth technology gave us is what delayed the development of high supersonic and hypersonic manned combat aircraft, and that, as potential threat nations develop and perfect their own stealth technology, nullifying the advantages it gave us, we will begin to search for other options, like additional speed, to retain the tactical advantage. |
44,530 | Why are there no Mach 4+ fighter aircraft? It seems that such aircraft would have massive advantages when it comes to being able to fire longer range missiles and evade return fire. | 2017/10/10 | [
"https://aviation.stackexchange.com/questions/44530",
"https://aviation.stackexchange.com",
"https://aviation.stackexchange.com/users/16245/"
] | The [planned successor to the SR-71](https://en.wikipedia.org/wiki/Project_Isinglass) was supposed to reach Mach 4 to 5 but was never completed because of projected costs equivalent to almost 20 billion in today's dollars.
But that was a reconnaissance platform. A fighter has to do more than fly fast and shoot pretty pictures.
**Agility** would come first on this list: The realistically possible [turn rate of a Mach 5 airplane](https://aviation.stackexchange.com/questions/36167/can-hypersonic-aircraft-be-agile-without-the-g-forces-harming-the-pilot/36241#36241) would let it complete a full circle in about 800 seconds, that is almost a quarter of an hour, just for a single circle. If you want to turn any quicker, you [better slow down](https://aviation.stackexchange.com/questions/32599/what-is-the-average-speed-used-by-modern-jet-fighters-when-in-dogfight/34696#34696), complete your turn, and speed up again.
[](https://i.stack.imgur.com/S54uH.jpg)
([NASA](https://www.hq.nasa.gov/office/pao/History/x15conf/what.html)) A Mach 4+ turn needs a lot of ground; the SR-71 routinely crossed multiple smaller countries to execute a 180.
**Visual contact** is another requirement in today's muddled conflicts. In order to avoid collateral damage (and embarrassing political situations), it is often demanded that pilots visually identify a target before opening fire. Try that at Mach 4! Stand-off weapons will be of no use in this situation, and top speed will be irrelevant.
**Small size** may come as a surprise, but the faster aircraft [tend to be bigger and heavier](https://en.wikipedia.org/wiki/North_American_XF-108_Rapier), too. This will make them much more expensive, and fewer will be procured. The specialty materials needed for the extreme flight envelope will make them maintenance-intensive, too. This will make them high-value assets that cannot be risked in combat. In the end, the fighting will be done by the [smaller, more numerous and less expensive](https://aviation.stackexchange.com/questions/32599/what-is-the-average-speed-used-by-modern-jet-fighters-when-in-dogfight/34696#34696) platforms. Why then fund such an expensive diva in the first place?
**Operating altitude**: If you look at [the envelope of the SR-71](https://aviation.stackexchange.com/questions/39215/what-are-the-limiting-factors-for-high-altitude-planes-e-g-u2-or-sr71-prevent/39223#39223), you will notice that flight speeds above Mach 3 required it to climb above 65,000 ft. Only then is air density low enough to lower drag sufficiently for prolonged Mach 3 flight. A Mach 4 design would need to climb to 90,000 ft to play out its design speed. What altitude will the adversary be at? | Probably the best answer to this question is that, up to this point, there really has not been a need to build fighters capable of those speeds to counter the air forces of unfriendly nations.
Prior to the mid to late 1970s, the USAF was quietly making preparations at the behest of several aircraft OEMs to make the transition from manned aircraft to high speed missiles for both strategic and tactical uses, given the tremendous punishment both US and modern Western air power endured against Soviet radar-guided missiles in conflicts like Vietnam and Yom Kippur. The costs associated with developing a high speed manned aircraft which could evade these threats were prohibitive when compared with the costs of developing a disposable missile to counter it or any future threat, hence the reason the XB-70, among other programs, was cancelled. However, with the advent of stealth technology in the late 1970s and 1980s, the radar threat was effectively nullified and the exact details of how to build effective stealth aircraft remained out of adversarial hands for the past 20 years, giving us a tactical advantage over existing platforms and making the expenditure of national treasure on a high speed manned platform a moot point.
But that's not to say high speed aircraft are dead. The Lockheed Skunk Works is currently developing a new unmanned reconnaissance aircraft, dubbed SR-72, capable of Mach 6+, using a combined cycle engine. Hypersonic aircraft have been proposed over the years for troop or ordnance delivery and there is good evidence that the US and China are currently developing hypersonic air-to-air, surface to air, and ballistic missiles.
It may be that the quantum leap that stealth technology gave us is what delayed the development of high supersonic and hypersonic manned combat aircraft, and that, as potential threat nations develop and perfect their own stealth technology, nullifying the advantages it gave us, we will begin to search for other options, like additional speed, to retain the tactical advantage. |
216,825 | I installed the video module and activated video and video ui modules using Drupal 7:
<https://www.drupal.org/project/video>
I created a content type with a video field. Now I can upload videos with this content type.
But I am not able to embed a YouTube video. I cannot find any settings or any embed widget as shown here.
I want to upload videos and I want to embed videos (youtube, vimeo..).
But I cannot find any way to embed a video using this video module.
Do I have to install further modules?
On the module page you can read:
>
> Video module allows you to embedded videos from YouTube, Vimeo,
> Facebook, Vine etc (Drupal 8 only) and upload videos and play using
> HTML5 video player.
>
>
>
I cannot find out how to embed a video using this module. Can you help?
I do not want to use another module; I want to find out how to embed videos using THIS module. Bear this in mind when you write your answer. Thank you. | 2016/10/04 | [
"https://drupal.stackexchange.com/questions/216825",
"https://drupal.stackexchange.com",
"https://drupal.stackexchange.com/users/61816/"
] | If you install the WYSIWYG module, you can enable a button for embedding videos that have been uploaded using the Video module, and then you can use that button to embed videos into content. | That's not what the Video module is for. It's for handling uploaded videos.
To get a video embed field use [Video Embed Field](https://www.drupal.org/project/video_embed_field).
>
> Video Embed field creates a simple field type that allows you to embed videos from YouTube and Vimeo and show their thumbnail previews simply by entering the video's url.
>
>
> |
26,693 | I have a SharePoint 2010 publishing site collection with a Term Store that I want to use for navigation. I understand the process of get the collection of published pages and examining the managed metadata column for a specific term or terms, but is it possible to find all items tagged with a specific term without iterating through every single page? Kind of like flipping the one to many relationship of items, pages, etc. to terms around backwards so for each term there could be one or more items using it.
If this doesn't make sense I'll clarify. | 2012/01/11 | [
"https://sharepoint.stackexchange.com/questions/26693",
"https://sharepoint.stackexchange.com",
"https://sharepoint.stackexchange.com/users/6367/"
] | You cannot change a template once you have created the site. You can either make the desired changes to the existing site (new lists, libraries, page configuration) or you can create a new site with the desired template and then move existing content. If it is document content you can use the Explorer view to cut and paste documents or use the Send To feature to move the content. It is also possible to use custom or 3rd party tools to copy the documents and content from a source to a destination site. This may also make it possible to translate any of the meta-data as needed. | I just want to clarify before I answer. When you say "Site Template", are you referring to a file with a .wsp extension that lives in the solution gallery?
If that is indeed what you mean, you're going to have a bit of work to change an existing site template. You have to back up your database, save all your subsites as site templates, note your permissions, delete all of the sites, then recreate the site using the template you want, then recreate all its subsites using your subsite site templates with the right permissions on all.
Not an easy task. |
18,073,778 | I've just learned a few languages (for 2 years now), and now I want to make programs with graphic interfaces. Thing is, I just don't know which languages to use.
What languages/programs (and what methods of these programs) are used to make programs with a graphic interface? (I know that C# and Java are graphical, but I don't know what methods...)
What languages/programs (and what methods of these programs) are used to make applications for iPhone, Android, and whatever?
What languages/programs (and what methods of these programs) are used to make/edit videos?
Thanks a lot! | 2013/08/06 | [
"https://Stackoverflow.com/questions/18073778",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2650265/"
] | Almost all programming languages have libraries that help you create a GUI (Graphical User Interface). Most programming languages, including C++, C#, and Java are general-purpose programming languages - you can use them to program whatever you want.
For Java for example, see this tutorial: [Creating a GUI With JFC/Swing](http://docs.oracle.com/javase/tutorial/uiswing/).
If you want to write an [Android app](http://developer.android.com/index.html), you'll program in Java.
For [iOS and Mac OS X](https://developer.apple.com/), you'll most likely write your app in Objective-C. | Pretty much all higher-level languages support graphical interfaces; you just have to do your research to find out how to use a GUI in each language. Applications used on the iPhone are written in Objective-C, and Android uses Java for its apps. |
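All of these toolkits share the same pattern: create widgets, wire up callbacks, run an event loop. As a generic illustration (using Python's built-in tkinter rather than Swing or Cocoa, purely to keep the example self-contained and runnable):

```python
import tkinter as tk

def on_click():
    label.config(text="Button clicked!")

root = tk.Tk()            # main window
root.title("Minimal GUI")

label = tk.Label(root, text="Hello")                          # a widget
button = tk.Button(root, text="Click me", command=on_click)   # callback wiring
label.pack()
button.pack()

root.mainloop()           # event loop: the same idea in Swing, WPF, Cocoa
```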
18,073,778 | I've just learned a few languages (for 2 years now), and now I want to make programs with graphic interfaces. Thing is, I just don't know which languages to use.
What languages/programs (and what methods of these programs) are used to make programs with a graphic interface? (I know that C# and Java are graphical, but I don't know what methods...)
What languages/programs (and what methods of these programs) are used to make applications for iPhone, Android, and whatever?
What languages/programs (and what methods of these programs) are used to make/edit videos?
Thanks a lot! | 2013/08/06 | [
"https://Stackoverflow.com/questions/18073778",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2650265/"
] | Almost all programming languages have libraries that help you create a GUI (Graphical User Interface). Most programming languages, including C++, C#, and Java are general-purpose programming languages - you can use them to program whatever you want.
For Java for example, see this tutorial: [Creating a GUI With JFC/Swing](http://docs.oracle.com/javase/tutorial/uiswing/).
If you want to write an [Android app](http://developer.android.com/index.html), you'll program in Java.
For [iOS and Mac OS X](https://developer.apple.com/), you'll most likely write your app in Objective-C. | Your question is quite vague. But I'll give you some advice. Before asking this kind of question on Stack Overflow, you really should do a search on your own with Google.
About graphic interfaces in Java, you can use [swing](https://www.google.lu/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&sqi=2&ved=0CEsQFjAB&url=http://download.oracle.com/javase/tutorial/uiswing&ei=1qIAUrmUN4SMswbc3IGYDw&usg=AFQjCNEtbuJqikUwLj2JRyfcUkFgDrCI8wD), which is the most famous way to do it (especially if you're a beginner and want to familiarize yourself with GUI development concepts). But there are a lot of other GUI libraries; for example, if you want to work in 3D you have the [openGL lib](http://opengl.j3d.org/) or [jMonkey](http://jmonkeyengine.org/) (which uses OpenGL).
About [Android](http://developer.android.com/guide/topics/ui/index.html), it has its own SDK in java.
About iOS (iPhone), it is made with Objective-C.
And about C#, I don't know a lot about it, but if you do a quick search on Google you can find things like [this](http://msdn.microsoft.com/en-us/library/ms173080%28v=vs.90%29.aspx).
5,929,747 | I'm looking for a compression algorithm which works with symbols smaller than a byte. I did some quick research on compression algorithms, and it's been hard to find out the size of the symbols they use. Anyway, there are streams with symbols smaller than 8 bits. Is there a parameter for DEFLATE to define the size of its symbols? | 2011/05/08 | [
"https://Stackoverflow.com/questions/5929747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/743945/"
] | **plaintext symbols smaller than a byte**
The original descriptions of LZ77 and LZ78 describe them in terms of a sequence of decimal digits (symbols that are approximately half the size of a byte).
If you google for "DNA compression algorithm", you can get a bunch of information on algorithms specialized for compression files that are almost entirely composed of the 4 letters A G C T, a dictionary of 4 symbols, each one about 1/4 as small as a byte.
Perhaps one of those algorithms might work for you with relatively little tweaking.
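The simplest version of such a specialized scheme just packs each base into 2 bits instead of 8, a 4:1 reduction before any statistical coding is applied. A minimal sketch (real DNA compressors add repeat modelling and much more):

```python
ENC = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
DEC = {v: k for k, v in ENC.items()}

def pack(seq: str) -> bytes:
    """Pack 4 bases per byte (length assumed divisible by 4 for brevity)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for base in seq[i:i + 4]:
            b = (b << 2) | ENC[base]
        out.append(b)
    return bytes(out)

def unpack(data: bytes, n: int) -> str:
    bases = []
    for b in data:
        for shift in (6, 4, 2, 0):
            bases.append(DEC[(b >> shift) & 0b11])
    return "".join(bases[:n])

s = "ACGTACGT"
assert unpack(pack(s), len(s)) == s   # 8 text bytes -> 2 packed bytes
```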
The LZ77-style compression used in LZMA may appear to use two bytes per symbol for the first few symbols that it compresses.
But after compressing a few hundred plaintext symbols
(the letters of natural-language text, or sequences of decimal digits, or sequences of the 4 letters that represent DNA bases, etc.), the two-byte compressed "chunks" that LZMA puts out often represent a dozen or more plaintext symbols.
(I suspect the same is true for all similar algorithms, such as the LZ77 algorithm used in DEFLATE).
If your files use only a restricted alphabet of much less than all 256 possible byte values,
in principle a programmer could adapt a variant of DEFLATE (or some other algorithm) that could make use of information about that alphabet to produce compressed files a few bits smaller in size than the same files compressed with standard DEFLATE.
However, many byte-oriented text compression algorithms -- LZ77, LZW, LZMA, DEFLATE, etc. build a dictionary of common long strings, and may give compression performance (with sufficiently large source file) within a few percent of that custom-adapted variant -- often the advantages of using a standard compressed file format is worth sacrificing a few percent of potential space savings.
**compressed symbols smaller than a byte**
Many compression algorithms, including some that give the best known compression on benchmark files, output compressed information bit-by-bit (such as most of the PAQ series of compressors, and some kinds of arithmetic coders), while others output variable-length compressed information without regard for byte boundaries (such as Huffman compression).
Some ways of describing arithmetic coding talk about pieces of information, such as individual bits or pixels, that are compressed to "less than one bit of information".
EDIT:
The "counting argument" explains why it's not possible to compress all possible bytes, much less all possible bytes and a few common sequences of bytes, into codewords that are all less than 8 bits long.
Nevertheless, several compression algorithms can and often do represent some bytes or (more rarely) some sequences of bytes, each with a codeword that is less than 8 bits long, by "sacrificing" or "escaping" less-common bytes that end up represented by other codewords that (including the "escape") are more than 8 bits long.
Such algorithms include:
* The Pike [Text compression using 4 bit coding](http://en.wikibooks.org/wiki/Data_Compression/Dictionary_compression#Text_compression_using_4_bit_coding)
* byte-oriented Huffman
* several [combination algorithms](http://en.wikibooks.org/wiki/Data_Compression/Order/Entropy#combination) that do LZ77-like parsing of the file into "symbols", where each symbol represents a sequence of bytes, and then Huffman-compressing those symbols -- such as DEFLATE, LZX, LZH, LZHAM, etc.
The Pike algorithm uses the 4 bits "0101" to represent 'e' (or in some contexts 'E'), the 8 bits "0000 0001" to represent the word " the" (4 bytes, including the space before it) (or in some contexts " The" or " THE"), etc.
It has a small dictionary of about 200 of the most-frequent English words,
including a sub-dictionary of 16 extremely common English words.
When compressing English text with byte-oriented Huffman coding, the sequence "e " (e space) is compressed to two codewords with a total of typically 6 bits.
Alas, when Huffman coding is involved, I can't tell you the exact size of those "small" codewords, or even tell you exactly what byte or byte sequence a small codeword represents, because it is different for every file.
Often the same codeword represents a different byte (or different byte sequence) at different locations in the same file.
The decoder decides which byte or byte sequence a codeword represents based on clues left behind by the compressor in the headers, and on the data decompressed so far.
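This is why no fixed table of "small" codewords can be given: Huffman rebuilds the code from each file's own symbol frequencies. A minimal byte-oriented sketch that computes just the code lengths (standard heap construction; illustration only):

```python
import heapq
from collections import Counter

def huffman_code_lengths(data: bytes) -> dict:
    """Map each byte to its Huffman code length for this data's frequencies."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate one-symbol input
        return {sym: 1 for sym in freq}
    heap = [(w, i, {sym: 0}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)      # two lightest subtrees
        w2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

lengths = huffman_code_lengths(b"this is an example of huffman coding")
print(min(lengths.values()), max(lengths.values()))  # frequent bytes get short codes
```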
With range coding or arithmetic coding, the "codeword" may not even be an integer number of bits. | You may want to look into a Golomb code. A Golomb code uses a divide-and-conquer approach to compress the input. It's not dictionary compression, but it's worth mentioning.
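For the record, a Golomb code with parameter m writes a value n as a unary quotient n // m followed by the remainder in binary; it is optimal for geometrically distributed inputs. A minimal sketch of the Rice variant, where m is a power of two:

```python
def rice_encode(n: int, k: int) -> str:
    """Golomb-Rice code with m = 2**k: unary quotient, then k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

for n in range(6):
    print(n, rice_encode(n, k=2))
# 0 -> 000, 1 -> 001, 2 -> 010, 3 -> 011, 4 -> 1000, 5 -> 1001
```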
49,030,629 | I would like to know what the Python interpreter is doing in my production environments.
Some time ago I wrote a simple tool called [live-trace](https://github.com/guettli/live-trace) which runs a daemon thread which collects stacktraces every N milliseconds.
But signal handling in the interpreter itself has one disadvantage:
>
> Although Python signal handlers are called asynchronously as far as the Python user is concerned, they can only occur between the “atomic” instructions of the Python interpreter. This means that signals arriving during long calculations implemented purely in C (such as regular expression matches on large bodies of text) may be delayed for an arbitrary amount of time.
>
>
>
Source: <https://docs.python.org/2/library/signal.html>
How could I work around above constraint and get a stacktrace, even if the interpreter is in some C code for several seconds?
Related: <https://github.com/23andMe/djdt-flamegraph/issues/5> | 2018/02/28 | [
"https://Stackoverflow.com/questions/49030629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/633961/"
] | Have you tried [Pyflame](https://github.com/uber/pyflame)? It's based on ptrace, so it shouldn't be affected by CPython's signal handling subtleties. | Maybe the [perf-tool](https://github.com/brendangregg/perf-tools) from Brendan Gregg can help |
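For comparison, here is roughly what an in-process sampler like the question's live-trace does: a daemon thread wakes every N milliseconds and dumps the other threads' stacks via sys._current_frames(). Note that this sketch inherits exactly the limitation the question quotes: the sampler thread cannot run while another thread holds the GIL inside a long C call, which is why ptrace-based tools help.

```python
import sys
import threading
import time
import traceback

def sampler(interval=0.05, out=sys.stderr):
    """Daemon thread: periodically dump the stack of every other thread."""
    me = threading.get_ident()
    while True:
        time.sleep(interval)
        for tid, frame in sys._current_frames().items():
            if tid == me:
                continue
            out.write(f"--- thread {tid} ---\n")
            out.write("".join(traceback.format_stack(frame)))

# Start sampling; the daemon flag means it won't block interpreter exit.
threading.Thread(target=sampler, daemon=True).start()
```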
49,030,629 | I would like to know what the Python interpreter is doing in my production environments.
Some time ago I wrote a simple tool called [live-trace](https://github.com/guettli/live-trace) which runs a daemon thread which collects stacktraces every N milliseconds.
But signal handling in the interpreter itself has one disadvantage:
>
> Although Python signal handlers are called asynchronously as far as the Python user is concerned, they can only occur between the “atomic” instructions of the Python interpreter. This means that signals arriving during long calculations implemented purely in C (such as regular expression matches on large bodies of text) may be delayed for an arbitrary amount of time.
>
>
>
Source: <https://docs.python.org/2/library/signal.html>
How could I work around above constraint and get a stacktrace, even if the interpreter is in some C code for several seconds?
Related: <https://github.com/23andMe/djdt-flamegraph/issues/5> | 2018/02/28 | [
"https://Stackoverflow.com/questions/49030629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/633961/"
] | I use [py-spy](https://github.com/benfred/py-spy) with [speedscope](https://github.com/jlfwong/speedscope) now. It is a very cool combination.
[](https://i.stack.imgur.com/R5ITf.png)
py-spy works on Windows/Linux/macOS, can output flame graphs on its own, and is actively developed; e.g. subprocess profiling support was added in October 2019. | Have you tried [Pyflame](https://github.com/uber/pyflame)? It's based on ptrace, so it shouldn't be affected by CPython's signal handling subtleties. |
49,030,629 | I would like to know what the Python interpreter is doing in my production environments.
Some time ago I wrote a simple tool called [live-trace](https://github.com/guettli/live-trace) which runs a daemon thread which collects stacktraces every N milliseconds.
But signal handling in the interpreter itself has one disadvantage:
>
> Although Python signal handlers are called asynchronously as far as the Python user is concerned, they can only occur between the “atomic” instructions of the Python interpreter. This means that signals arriving during long calculations implemented purely in C (such as regular expression matches on large bodies of text) may be delayed for an arbitrary amount of time.
>
>
>
Source: <https://docs.python.org/2/library/signal.html>
How could I work around above constraint and get a stacktrace, even if the interpreter is in some C code for several seconds?
Related: <https://github.com/23andMe/djdt-flamegraph/issues/5> | 2018/02/28 | [
"https://Stackoverflow.com/questions/49030629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/633961/"
] | I use [py-spy](https://github.com/benfred/py-spy) with [speedscope](https://github.com/jlfwong/speedscope) now. It is a very cool combination.
[](https://i.stack.imgur.com/R5ITf.png)
py-spy works on Windows/Linux/macOS, can output flame graphs on its own, and is actively developed; e.g. subprocess profiling support was added in October 2019. | Maybe the [perf-tool](https://github.com/brendangregg/perf-tools) from Brendan Gregg can help |
30,069 | To be precise, I know that *allow* means *to permit*, and *allow for* is more like *to make something possible, to enable, to make a provision for*, but I'm still in doubt when I have to decide whether to use the preposition *for* or not.
For example, in the sentence taken from Google Dictionary entry used to explain usage of *allow*:
>
> They agreed to a ceasefire to allow talks with the government.
>
>
>
I'm not sure what I would use in this example — maybe even *allow for*. | 2011/06/16 | [
"https://english.stackexchange.com/questions/30069",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/627/"
] | I agree with Robusto, I think. There is a semantic difference between "allow" and "allow for". "B did X, allowing Y" implies that by doing X, B directly caused Y to happen. However, "B did X, allowing for Y" implies that doing X may or may not, in fact, actually cause Y; Y may happen with or without X, or Y may require something else to happen besides or in addition to X.
Short non-sequitur, but consider a sentence in the context of carpentry. "He spaced the boards a quarter-inch from the wall, allowing expansion". Those who know carpentry know that you don't "allow" boards to expand; they simply swell and shrink with temperature and humidity regardless of what you do. Instead, you must "allow for" the boards to expand by taking an action ensuring that WHEN they expand, there is no adverse consequence. So, the correct statement, in context, is "He spaced the boards a quarter-inch from the wall, allowing for expansion".
Back to your OP, "a ceasefire allowing talks" implies that talks will not happen without a ceasefire happening first. "A ceasefire allowing for talks" may imply that talks are already happening, or that they could happen regardless of a ceasefire, but that the ceasefire facilitates those talks. Either may be correct, depending on the situation being described.
So, "allow" denotes permission, and thus a direct cause/effect. "Allow for" denotes either facilitation or "proaction" in anticipation of, and thus breaks the direct cause/effect relationship. | I believe that stylistically your instincts are correct. While omitting the "for" may be grammatical, since the verb "allow" refers to the agreement, the apposition of "cease fire" makes it sound as if that were the direct agent, not the reason the real agents used. "They" (meaning the warring parties) were the ones who did the allowing. |
30,069 | To be precise, I know that *allow* means *to permit*, and *allow for* is more like *to make something possible, to enable, to make a provision for*, but I'm still in doubt when I have to decide whether to use the preposition *for* or not.
For example, in the sentence taken from Google Dictionary entry used to explain usage of *allow*:
>
> They agreed to a ceasefire to allow talks with the government.
>
>
>
I'm not sure what I would use in this example — maybe even *allow for*. | 2011/06/16 | [
"https://english.stackexchange.com/questions/30069",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/627/"
] | I agree that 'allow' means 'to permit', or 'to enable'. However, I see 'allow for' as a shortened form of 'make allowance for' (i.e., 'make provision for'), which I think holds a very different meaning to the word 'enable'.
I use 'allow for' only when referring to a possible scenario or event that merits a contingency plan...
For example:
* The expansion cavity of the furnace
roof is necessary to **allow for**
thermal expansion of the refractory
lining; inadequate spacing will
inevitably give rise to spalling of the
bricks.
* "Did you seek to **allow for** that anomaly, or was it sheer luck?
* "Should we bother to **allow for** such an unlikely, but potentially catastrophic event?" | I believe that stylistically your instincts are correct. While omitting the "for" may be grammatical, since the verb "allow" refers to the agreement, the apposition of "cease fire" makes it sound as if that were the direct agent, not the reason the real agents used. "They" (meaning the warring parties) were the ones who did the allowing. |
30,069 | To be precise, I know that *allow* means *to permit*, and *allow for* is more like *to make something possible, to enable, to make a provision for*, but I'm still in doubt when I have to decide whether to use the preposition *for* or not.
For example, in the sentence taken from Google Dictionary entry used to explain usage of *allow*:
>
> They agreed to a ceasefire to allow talks with the government.
>
>
>
I'm not sure what I would use in this example — maybe even *allow for*. | 2011/06/16 | [
"https://english.stackexchange.com/questions/30069",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/627/"
] | I believe that stylistically your instincts are correct. While omitting the "for" may be grammatical, since the verb "allow" refers to the agreement, the apposition of "cease fire" makes it sound as if that were the direct agent, not the reason the real agents used. "They" (meaning the warring parties) were the ones who did the allowing. | The verb *allow* suggests permission in a legal or official sense.
The phrase *allow for* means the decision is not strictly determined by the rules but is left to your own judgment.
They correspond to the Chinese words: allow = 允许, allow for = 许可.
That is all. |
30,069 | To be precise, I know that *allow* means *to permit*, and *allow for* is more like *to make something possible, to enable, to make a provision for*, but I'm still in doubt when I have to decide whether to use the preposition *for* or not.
For example, in the sentence taken from Google Dictionary entry used to explain usage of *allow*:
>
> They agreed to a ceasefire to allow talks with the government.
>
>
>
I'm not sure what I would use in this example — maybe even *allow for*. | 2011/06/16 | [
"https://english.stackexchange.com/questions/30069",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/627/"
] | I agree with Robusto, I think. There is a semantic difference between "allow" and "allow for". "B did X, allowing Y" implies that by doing X, B directly caused Y to happen. However, "B did X, allowing for Y" implies that doing X may or may not, in fact, actually cause Y; Y may happen with or without X, or Y may require something else to happen besides or in addition to X.
Short non-sequitur, but consider a sentence in the context of carpentry. "He spaced the boards a quarter-inch from the wall, allowing expansion". Those who know carpentry know that you don't "allow" boards to expand; they simply swell and shrink with temperature and humidity regardless of what you do. Instead, you must "allow for" the boards to expand by taking an action ensuring that WHEN they expand, there is no adverse consequence. So, the correct statement, in context, is "He spaced the boards a quarter-inch from the wall, allowing for expansion".
Back to your OP, "a ceasefire allowing talks" implies that talks will not happen without a ceasefire happening first. "A ceasefire allowing for talks" may imply that talks are already happening, or that they could happen regardless of a ceasefire, but that the ceasefire facilitates those talks. Either may be correct, depending on the situation being described.
So, "allow" denotes permission, and thus a direct cause/effect. "Allow for" denotes either facilitation or "proaction" in anticipation of, and thus breaks the direct cause/effect relationship. | I agree that 'allow' means 'to permit', or 'to enable'. However, I see 'allow for' as a shortened form of 'make allowance for' (i.e., 'make provision for'), which I think holds a very different meaning to the word 'enable'.
I use 'allow for' only when referring to a possible scenario or event that merits a contingency plan...
For example:
* The expansion cavity of the furnace roof is necessary to **allow for** thermal expansion of the refractory lining; inadequate spacing will inevitably give rise to spalling of the bricks.
* "Did you seek to **allow for** that anomaly, or was it sheer luck?"
* "Should we bother to **allow for** such an unlikely, but potentially catastrophic event?" |
30,069 | To be precise, I know that *allow* means *to permit*, and *allow for* is more like *to make something possible, to enable, to make a provision for*, but I'm still in doubt when I have to decide whether to use the preposition *for* or not.
For example, in the sentence taken from the Google Dictionary entry used to explain the usage of *allow*:
>
> They agreed to a ceasefire to allow talks with the government.
>
>
>
I'm not sure what I would use in this example — maybe even *allow for*. | 2011/06/16 | [
"https://english.stackexchange.com/questions/30069",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/627/"
] | I agree with Robusto, I think. There is a semantic difference between "allow" and "allow for". "B did X, allowing Y" implies that by doing X, B directly caused Y to happen. However, "B did X, allowing for Y" implies that doing X may or may not, in fact, actually cause Y; Y may happen with or without X, or Y may require something else to happen besides or in addition to X.
Short non-sequitur, but consider a sentence in the context of carpentry. "He spaced the boards a quarter-inch from the wall, allowing expansion". Those who know carpentry know that you don't "allow" boards to expand; they simply swell and shrink with temperature and humidity regardless of what you do. Instead, you must "allow for" the boards to expand by taking an action ensuring that WHEN they expand, there is no adverse consequence. So, the correct statement, in context, is "He spaced the boards a quarter-inch from the wall, allowing for expansion".
Back to your OP, "a ceasefire allowing talks" implies that talks will not happen without a ceasefire happening first. "A ceasefire allowing for talks" may imply that talks are already happening, or that they could happen regardless of a ceasefire, but that the ceasefire facilitates those talks. Either may be correct, depending on the situation being described.
So, "allow" denotes permission, and thus a direct cause/effect. "Allow for" denotes either facilitation or "proaction" in anticipation of, and thus breaks the direct cause/effect relationship. | >
> allow means to permit, and allow for is more like to make something
> possible, to enable, to make a provision for, but I'm still in doubt
> when I have to decide whether to use the preposition for or not.
>
>
>
Refining that characterization somewhat, consider also that the forms "allow..." and "allow for..." are suggestive of verb (or verb and helping word) moods. To say allow a thing most likely reflects an imperative verb mood (in the sense of a command, albeit a passive and/or implied command). To allow a thing could also be indicative, a simple statement of happenstance...especially in past tense: e.g., "... you *allowed* the sink to overflow." (Obviously, you don't normally allow *for* a sink to overflow....unless, perhaps, you are installing a floor drain; but that still would not entail indicative mood, as demonstrated next.)
Allowing *for* (a thing), on the other hand, explicitly connotes that the allowance more than equals the need; that there is doubt as to the sufficiency of that which will or might be needed to be allowed. Such doubt sets the mood of the phrase *allow for* as subjunctive. (In the floor drain example, the indefinite capacity of the floor drain reveals uncertainty as to how much overflow will actually need to be accommodated...so, still, subjunctive.)
So all you need to do is figure out the mood in which *allow* is used, then modify, or not, accordingly.
The prevailing verb moods are: indicative, imperative, subjunctive, and infinitive:
[moods of verbs](http://www.dailywritingtips.com/english-grammar-101-verb-mood/) |
30,069 | To be precise, I know that *allow* means *to permit*, and *allow for* is more like *to make something possible, to enable, to make a provision for*, but I'm still in doubt when I have to decide whether to use the preposition *for* or not.
For example, in the sentence taken from the Google Dictionary entry used to explain the usage of *allow*:
>
> They agreed to a ceasefire to allow talks with the government.
>
>
>
I'm not sure what I would use in this example — maybe even *allow for*. | 2011/06/16 | [
"https://english.stackexchange.com/questions/30069",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/627/"
] | I agree with Robusto, I think. There is a semantic difference between "allow" and "allow for". "B did X, allowing Y" implies that by doing X, B directly caused Y to happen. However, "B did X, allowing for Y" implies that doing X may or may not, in fact, actually cause Y; Y may happen with or without X, or Y may require something else to happen besides or in addition to X.
Short non-sequitur, but consider a sentence in the context of carpentry. "He spaced the boards a quarter-inch from the wall, allowing expansion". Those who know carpentry know that you don't "allow" boards to expand; they simply swell and shrink with temperature and humidity regardless of what you do. Instead, you must "allow for" the boards to expand by taking an action ensuring that WHEN they expand, there is no adverse consequence. So, the correct statement, in context, is "He spaced the boards a quarter-inch from the wall, allowing for expansion".
Back to your OP, "a ceasefire allowing talks" implies that talks will not happen without a ceasefire happening first. "A ceasefire allowing for talks" may imply that talks are already happening, or that they could happen regardless of a ceasefire, but that the ceasefire facilitates those talks. Either may be correct, depending on the situation being described.
So, "allow" denotes permission, and thus a direct cause/effect. "Allow for" denotes either facilitation or "proaction" in anticipation of, and thus breaks the direct cause/effect relationship. | The verb allow means a kind of feeling of something in law or official emotion .
The phrase allow for means the decision you made is not absolutely decided by the rules but is decided by the mind of yourself。
They are same with the chinese word allow=允许 ,allow for=许可.
that is all |
30,069 | To be precise, I know that *allow* means *to permit*, and *allow for* is more like *to make something possible, to enable, to make a provision for*, but I'm still in doubt when I have to decide whether to use the preposition *for* or not.
For example, in the sentence taken from the Google Dictionary entry used to explain the usage of *allow*:
>
> They agreed to a ceasefire to allow talks with the government.
>
>
>
I'm not sure what I would use in this example — maybe even *allow for*. | 2011/06/16 | [
"https://english.stackexchange.com/questions/30069",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/627/"
] | I agree that 'allow' means 'to permit', or 'to enable'. However, I see 'allow for' as a shortened form of 'make allowance for' (i.e., 'make provision for'), which I think holds a very different meaning to the word 'enable'.
I use 'allow for' only when referring to a possible scenario or event that merits a contingency plan...
For example:
* The expansion cavity of the furnace roof is necessary to **allow for** thermal expansion of the refractory lining; inadequate spacing will inevitably give rise to spalling of the bricks.
* "Did you seek to **allow for** that anomaly, or was it sheer luck?"
* "Should we bother to **allow for** such an unlikely, but potentially catastrophic event?" | >
> allow means to permit, and allow for is more like to make something
> possible, to enable, to make a provision for, but I'm still in doubt
> when I have to decide whether to use the preposition for or not.
>
>
>
Refining that characterization somewhat, consider also that the forms "allow..." and "allow for..." are suggestive of verb (or verb and helping word) moods. To say allow a thing most likely reflects an imperative verb mood (in the sense of a command, albeit a passive and/or implied command). To allow a thing could also be indicative, a simple statement of happenstance...especially in past tense: e.g., "... you *allowed* the sink to overflow." (Obviously, you don't normally allow *for* a sink to overflow....unless, perhaps, you are installing a floor drain; but that still would not entail indicative mood, as demonstrated next.)
Allowing *for* (a thing), on the other hand, explicitly connotes that the allowance more than equals the need; that there is doubt as to the sufficiency of that which will or might be needed to be allowed. Such doubt sets the mood of the phrase *allow for* as subjunctive. (In the floor drain example, the indefinite capacity of the floor drain reveals uncertainty as to how much overflow will actually need to be accommodated...so, still, subjunctive.)
So all you need to do is figure out the mood in which *allow* is used, then modify, or not, accordingly.
The prevailing verb moods are: indicative, imperative, subjunctive, and infinitive:
[moods of verbs](http://www.dailywritingtips.com/english-grammar-101-verb-mood/) |
30,069 | To be precise, I know that *allow* means *to permit*, and *allow for* is more like *to make something possible, to enable, to make a provision for*, but I'm still in doubt when I have to decide whether to use the preposition *for* or not.
For example, in the sentence taken from the Google Dictionary entry used to explain the usage of *allow*:
>
> They agreed to a ceasefire to allow talks with the government.
>
>
>
I'm not sure what I would use in this example — maybe even *allow for*. | 2011/06/16 | [
"https://english.stackexchange.com/questions/30069",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/627/"
] | I agree that 'allow' means 'to permit', or 'to enable'. However, I see 'allow for' as a shortened form of 'make allowance for' (i.e., 'make provision for'), which I think holds a very different meaning to the word 'enable'.
I use 'allow for' only when referring to a possible scenario or event that merits a contingency plan...
For example:
* The expansion cavity of the furnace roof is necessary to **allow for** thermal expansion of the refractory lining; inadequate spacing will inevitably give rise to spalling of the bricks.
* "Did you seek to **allow for** that anomaly, or was it sheer luck?"
* "Should we bother to **allow for** such an unlikely, but potentially catastrophic event?" | The verb allow means a kind of feeling of something in law or official emotion .
The phrase allow for means the decision you made is not absolutely decided by the rules but is decided by the mind of yourself。
They are same with the chinese word allow=允许 ,allow for=许可.
that is all |
30,069 | To be precise, I know that *allow* means *to permit*, and *allow for* is more like *to make something possible, to enable, to make a provision for*, but I'm still in doubt when I have to decide whether to use the preposition *for* or not.
For example, in the sentence taken from the Google Dictionary entry used to explain the usage of *allow*:
>
> They agreed to a ceasefire to allow talks with the government.
>
>
>
I'm not sure what I would use in this example — maybe even *allow for*. | 2011/06/16 | [
"https://english.stackexchange.com/questions/30069",
"https://english.stackexchange.com",
"https://english.stackexchange.com/users/627/"
] | >
> allow means to permit, and allow for is more like to make something
> possible, to enable, to make a provision for, but I'm still in doubt
> when I have to decide whether to use the preposition for or not.
>
>
>
Refining that characterization somewhat, consider also that the forms "allow..." and "allow for..." are suggestive of verb (or verb and helping word) moods. To say allow a thing most likely reflects an imperative verb mood (in the sense of a command, albeit a passive and/or implied command). To allow a thing could also be indicative, a simple statement of happenstance...especially in past tense: e.g., "... you *allowed* the sink to overflow." (Obviously, you don't normally allow *for* a sink to overflow....unless, perhaps, you are installing a floor drain; but that still would not entail indicative mood, as demonstrated next.)
Allowing *for* (a thing), on the other hand, explicitly connotes that the allowance more than equals the need; that there is doubt as to the sufficiency of that which will or might be needed to be allowed. Such doubt sets the mood of the phrase *allow for* as subjunctive. (In the floor drain example, the indefinite capacity of the floor drain reveals uncertainty as to how much overflow will actually need to be accommodated...so, still, subjunctive.)
So all you need to do is figure out the mood in which *allow* is used, then modify, or not, accordingly.
The prevailing verb moods are: indicative, imperative, subjunctive, and infinitive:
[moods of verbs](http://www.dailywritingtips.com/english-grammar-101-verb-mood/) | The verb *allow* refers to permission in a legal or official sense.
The phrase *allow for* means the decision you make is not strictly determined by the rules but by your own judgment.
They correspond to the Chinese words allow = 允许 and allow for = 许可.
That is all. |
154,644 | My company doesn't have that many developers. There is just one lead who manages 18 of us in a company with a total white-collar workforce of maybe 70. Salary is also limited to increases of just 3% a year. Basically, if you want to advance, you need to leave within a two-year period. They survive by having good pay at any given experience level, but also have a low average tenure (currently just 10 months) because of the pay and lack of upward mobility.
I joined about a year ago and am thinking about my future as I learn all this information about low tenure and few increases. Waiting to become lead isn't an option, as the current lead is someone who would have a hard time moving to an equivalent job because of his background. He lacks the traditional CS degree, and only his current job title reflects that he does development, so he will be there for a long time.
Because much of our work is quite simple, and because of the salary/advancement issue, I want to spend more time developing myself professionally. Conferences, blogs, resume-driven development, and all that good stuff. Stuff some node.js into some older projects as a microservice or something.
The problem is, each developer works on their own project or projects. The bus factor for most things is 1, as management wants us to each be responsible for something. When people leave, that makes management irate, as a system is then untended for several weeks. As a result, they check our LinkedIn profiles regularly to see if they have changed.
What can one do to develop professionally without pissing off the current employer? | 2020/03/08 | [
"https://workplace.stackexchange.com/questions/154644",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/115424/"
] | This is far from an exhaustive answer, but in those cases you listed, the primary issue seems to be that the individuals did not have effective relationships with their superiors (or the superiors above them).
#1 should have been taken care of with a simple 5-minute talk with the PM about how they were off for a month, their team knew they were off, and so by assigning work to them anyway and allowing it to fail, they were deliberately sabotaging the project to try and make them look bad.
For #2, maternity leave is more difficult because you're off for a long period. But why was there no continuity plan for who should look after their clients while they were on leave? What was their boss doing the entire time? How did none of the clients have the individual's number or a way to contact them?
---
The general point is that **you need people on your side**, and you especially need people higher up in your chain of command to be on your team, or at least to have a favourable impression of you.
Neither of these situations seems especially complex or intelligent; they could have been easily headed off *had there been a single person at the company looking out for the individuals involved*.
You don't need to be friends with everyone, or even most people, but you do need to be on good terms with your boss, your boss's boss should at least be aware of your existence, and you should make sure you have a minimum of at least 1 friend and ally in the workplace.
Alternatively, since these are the same company, it's possible that the whole thing is completely dysfunctional and full of conniving, manipulative, backstabbing individuals, where the way you climb the corporate ladder is by being even more ruthless and manipulative than anyone else. If that's the case, then the only solution is to quit and find a job at a different, more sane company.
---
tl;dr, to explicitly answer your original question: You need your boss (and preferably their boss as well) to be supportive and on your side. And while you can be on leave, you should spend at least some minimum amount of time each week (literally just a couple of hours) checking in with work and staying abreast of what's going on. If you do all of that, then you should be alright. | While these are fairly sad stories to hear, my experience is a bit different. I have a mental disorder which not only labels me as unreliable but also requires me to take a certain amount of sick leave each year.
You may call me lucky, but I never had any problems with coming back to work. For one part, I make myself quite knowledgeable on subjects that would usually take someone else over a month to take over. For another part, I also tend to choose work environments that are accepting despite the stigma my illness carries, and I make sure to make a very good impression. So usually, people are eager for me to come back from sick leave, despite me not working any extra hours.
In a non-dysfunctional workplace there is enough important work for everybody to be able to take responsibility and make an impact. Sure, people will try to cope with your absence, and in doing so they may put in place the very mechanism that could get you replaced; but if you are able to put yourself in a position to keep your job in a company that has a lot to do, you keep it.
154,644 | My company doesn't have that many developers. There is just one lead who manages 18 of us in a company with a total white-collar workforce of maybe 70. Salary is also limited to increases of just 3% a year. Basically, if you want to advance, you need to leave within a two-year period. They survive by having good pay at any given experience level, but also have a low average tenure (currently just 10 months) because of the pay and lack of upward mobility.
I joined about a year ago and am thinking about my future as I learn all this information about low tenure and few increases. Waiting to become lead isn't an option, as the current lead is someone who would have a hard time moving to an equivalent job because of his background. He lacks the traditional CS degree, and only his current job title reflects that he does development, so he will be there for a long time.
Because much of our work is quite simple, and because of the salary/advancement issue, I want to spend more time developing myself professionally. Conferences, blogs, resume-driven development, and all that good stuff. Stuff some node.js into some older projects as a microservice or something.
The problem is, each developer works on their own project or projects. The bus factor for most things is 1, as management wants us to each be responsible for something. When people leave, that makes management irate, as a system is then untended for several weeks. As a result, they check our LinkedIn profiles regularly to see if they have changed.
What can one do to develop professionally without pissing off the current employer? | 2020/03/08 | [
"https://workplace.stackexchange.com/questions/154644",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/115424/"
] | The simple answer is: you cannot. Period. Career-wise, promotions and the way forward go to people who play the game best - from being nice to their boss to actually delivering, in various degrees (in some companies delivering is actually not important at all). If you are a worse player than someone else you lose; that is simply how the game is played. You could just as well ask how you can win a gold medal in any Olympic category when you are not as good as the winner.
The point is that the higher you go, the fewer positions there are to be filled, and thus the competition gets harder. Obviously you should not play it stupid - in your example the first person was NOT labelled unreliable, he WAS unreliable. He failed to maintain clear communication with his team and his superior. You said yourself - "had always straddled the line". It may well be this was done on purpose because the team was tired of him or felt betrayed. From a Senior Engineer I do not expect "always straddling the line" and permanent issues. How can he compete with someone who does not do that, does not have the issues, and puts more time in? He cannot.
Same with the second person. Maternity leave. Now, you do not say which country this is, but in most countries maternity leave is not a holiday (i.e. one or two weeks) but significantly longer. You state: "arranged to have him made the point man. He did this in part by sifting through her paper calendar book of notes that she left on her desk when she went away." This sounds like bollocks to me. If a salesperson goes away for 6 months or so for maternity leave, do you really expect the clients not to have a contact point? The notes she left on her desk (WTF? Not in the files? She just disappears for half a year without cleaning up her desk? FISHY) are not HER notes, they are the company's notes. Someone else in the team saw a chance, and she obviously was:
* Not maintaining a good enough relationship with the clients, and
* Not preparing for her maternity leave well enough by informing the clients.
If she had had a good enough relationship with the clients, she would have informed them - you will find very few clients will just dump a sales contact they work well with under those circumstances. This smells like deflection - someone else had to handle her clients while she was away and did such a better job that he is now the contact. Also - how come all this happened without the team coordinator making sure she was not left without clients? The whole story smells like week-old fish. Deflection by someone grinding their way up, possibly by not doing a stellar job and now blaming someone else - typical deflection.
The simple question at the end sums it up:
"How can I compete career-wise with those who have no obligations outside work and are willing to do whatever it takes to win?"
You cannot. You will always be second fiddle to someone who can put more energy and focus into his job than you AND is willing to do what it takes. You can mitigate this to a degree by being in a job where teamwork counts more and the career path is not cut-throat (seriously, MOST jobs are in this category), but if you are in a job where results count first and someone else delivers more - then no, you do not get a winning medal for being second. And your boss is quite likely just paying lip service to team building etc. - he has his own career to handle and he prefers to build that on stronger players.
Your two examples, though, are bad - both hint at far more underlying problems than you tell us, because both are seriously out of line for "doing more". Both smell like there are untold realities that would make this whole thing look very different if you asked the people around why they did it. | While these are fairly sad stories to hear, my experience is a bit different. I have a mental disorder which not only labels me as unreliable but also requires me to take a certain amount of sick leave each year.
You may call me lucky, but I never had any problems with coming back to work. For one part, I make myself quite knowledgeable on subjects that would usually take someone else over a month to take over. For another part, I also tend to choose work environments that are accepting despite the stigma my illness carries, and I make sure to make a very good impression. So usually, people are eager for me to come back from sick leave, despite me not working any extra hours.
In a non-dysfunctional workplace there is enough important work for everybody to be able to take responsibility and make an impact. Sure, people will try to cope with your absence, and in doing so they may put in place the very mechanism that could get you replaced; but if you are able to put yourself in a position to keep your job in a company that has a lot to do, you keep it.
154,644 | My company doesn't have that many developers. There is just one lead who manages 18 of us in a company with a total white-collar workforce of maybe 70. Salary is also limited to increases of just 3% a year. Basically, if you want to advance, you need to leave within a two-year period. They survive by having good pay at any given experience level, but also have a low average tenure (currently just 10 months) because of the pay and lack of upward mobility.
I joined about a year ago and am thinking about my future as I learn all this information about low tenure and few increases. Waiting to become lead isn't an option, as the current lead is someone who would have a hard time moving to an equivalent job because of his background. He lacks the traditional CS degree, and only his current job title reflects that he does development, so he will be there for a long time.
Because much of our work is quite simple, and because of the salary/advancement issue, I want to spend more time developing myself professionally. Conferences, blogs, resume-driven development, and all that good stuff. Stuff some node.js into some older projects as a microservice or something.
The problem is, each developer works on their own project or projects. The bus factor for most things is 1, as management wants us to each be responsible for something. When people leave, that makes management irate, as a system is then untended for several weeks. As a result, they check our LinkedIn profiles regularly to see if they have changed.
What can one do to develop professionally without pissing off the current employer? | 2020/03/08 | [
"https://workplace.stackexchange.com/questions/154644",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/115424/"
] | This is far from an exhaustive answer, but in those cases you listed, the primary issue seems to be that the individuals did not have effective relationships with their superiors (or the superiors above them).
#1 should have been taken care of with a simple 5-minute talk with the PM about how they were off for a month, their team knew they were off, and so by assigning work to them anyway and allowing it to fail, they were deliberately sabotaging the project to try and make them look bad.
For #2, maternity leave is more difficult because you're off for a long period. But why was there no continuity plan for who should look after their clients while they were on leave? What was their boss doing the entire time? How did none of the clients have the individual's number or a way to contact them?
---
The general point is that **you need people on your side**, and you especially need people higher up in your chain of command to be on your team, or at least to have a favourable impression of you.
Neither of these situations seems especially complex or intelligent; they could have been easily headed off *had there been a single person at the company looking out for the individuals involved*.
You don't need to be friends with everyone, or even most people, but you do need to be on good terms with your boss, your boss's boss should at least be aware of your existence, and you should make sure you have a minimum of at least 1 friend and ally in the workplace.
Alternatively, since these are the same company, it's possible that the whole thing is completely dysfunctional and full of conniving, manipulative, backstabbing individuals, where the way you climb the corporate ladder is by being even more ruthless and manipulative than anyone else. If that's the case, then the only solution is to quit and find a job at a different, more sane company.
---
tl;dr, to explicitly answer your original question: You need your boss (and preferably their boss as well) to be supportive and on your side. And while you can be on leave, you should spend at least some minimum amount of time each week (literally just a couple of hours) checking in with work and staying abreast of what's going on. If you do all of that, then you should be alright. | The simple answer is: you cannot. Period. Career-wise, promotions and the way forward go to people who play the game best - from being nice to their boss to actually delivering, in various degrees (in some companies delivering is actually not important at all). If you are a worse player than someone else you lose; that is simply how the game is played. You could just as well ask how you can win a gold medal in any Olympic category when you are not as good as the winner.
The point is that the higher you go, the fewer positions there are to be filled, and thus the competition gets harder. Obviously you should not play it stupid - in your example the first person was NOT labelled unreliable, he WAS unreliable. He failed to maintain clear communication with his team and his superior. You said yourself - "had always straddled the line". It may well be this was done on purpose because the team was tired of him or felt betrayed. From a Senior Engineer I do not expect "always straddling the line" and permanent issues. How can he compete with someone who does not do that, does not have the issues, and puts more time in? He cannot.
Same with the second person. Maternity leave. Now, you do not say which country this is, but in most countries maternity leave is not a holiday (i.e. one or two weeks) but significantly longer. You state: "arranged to have him made the point man. He did this in part by sifting through her paper calendar book of notes that she left on her desk when she went away." This sounds like bollocks to me. If a salesperson goes away for 6 months or so for maternity leave, do you really expect the clients not to have a contact point? The notes she left on her desk (WTF? Not in the files? She just disappears for half a year without cleaning up her desk? FISHY) are not HER notes, they are the company's notes. Someone else in the team saw a chance, and she obviously was:
* Not maintaining a good enough relationship with the clients, and
* Not preparing for her maternity leave well enough by informing the clients.
If she had had a good enough relationship with the clients, she would have informed them - you will find very few clients will just dump a sales contact they work well with under those circumstances. This smells like deflection - someone else had to handle her clients while she was away and did such a better job that he is now the contact. Also - how come all this happened without the team coordinator making sure she was not left without clients? The whole story smells like week-old fish. Deflection by someone grinding their way up, possibly by not doing a stellar job and now blaming someone else - typical deflection.
The simple question at the end sums it up:
"How can I compete career-wise with those who have no obligations outside work and are willing to do whatever it takes to win?"
You cannot. You will always be second fiddle to someone who can put more energy and focus into his job than you AND is willing to do what it takes. You can mitigate this to a degree by being in a job where teamwork counts more and the career path is not cut-throat (seriously, MOST jobs are in this category), but if you are in a job where results count first and someone else delivers more - then no, you do not get a winning medal for being second. And your boss is quite likely just paying lip service to team building etc. - he has his own career to handle and he prefers to build that on stronger players.
Your two examples, though, are bad - both hint at far more underlying problems than you tell us, because both are seriously out of line for "doing more". Both smell like there are untold realities that would make this whole thing look very different if you asked the people around why they did it. |
154,644 | My company doesn't have that many developers. There is just one lead who manages 18 of us in a company with a total white-collar workforce of maybe 70. Salary is also limited to increases of just 3% a year. Basically, if you want to advance, you need to leave within a two-year period. They survive by having good pay at any given experience level, but also have a low average tenure (currently just 10 months) because of the pay and lack of upward mobility.
I joined about a year ago and am thinking about my future as I learn all this information about low tenure and few increases. Waiting to become lead isn't an option, as the current lead is someone who would have a hard time moving to an equivalent job because of his background. He lacks the traditional CS degree, and only his current job title reflects that he does development, so he will be there for a long time.
Because much of our work is quite simple, and because of the salary/advancement issue, I want to spend more time developing myself professionally. Conferences, blogs, resume-driven development, and all that good stuff. Stuff some node.js into some older projects as a microservice or something.
The problem is, each developer works on their own project or projects. The bus factor for most things is 1, as management wants us to each be responsible for something. When people leave, that makes management irate, as a system is then untended for several weeks. As a result, they check our LinkedIn profiles regularly to see if they have changed.
What can one do to develop professionally without pissing off the current employer? | 2020/03/08 | [
"https://workplace.stackexchange.com/questions/154644",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/115424/"
] | This is far from an exhaustive answer, but in those cases you listed, the primary issue seems to be that the individuals did not have effective relationships with their superiors (or the superiors above them).
#1 should have been taken care of with a simple 5-minute talk with the PM about how they were off for a month, their team knew they were off, and so by assigning work to them anyway and allowing it to fail, they were deliberately sabotaging the project to try and make them look bad.
For #2, maternity leave is more difficult because you're off for a long period. But why was there no continuity plan for who should look after their clients while they were on leave? What was their boss doing the entire time? How did none of the clients have the individual's number or a way to contact them?
---
The general point is that **you need people on your side**, and you especially need people higher up in your chain of command to be on your team, or at least to have a favourable impression of you.
Neither of these situations seems especially complex or intelligent; they could have been easily headed off *had there been a single person at the company looking out for the individuals involved*.
You don't need to be friends with everyone, or even most people, but you do need to be on good terms with your boss, your boss's boss should at least be aware of your existence, and you should make sure you have a minimum of at least 1 friend and ally in the workplace.
Alternatively, since these are the same company, it's possible that the whole thing is completely dysfunctional and full of conniving, manipulative, backstabbing individuals, where the way you climb the corporate ladder is by being even more ruthless and manipulative than anyone else. If that's the case, then the only solution is to quit and find a job at a different, more sane company.
---
tl;dr, to explicitly answer your original question: You need your boss (and preferably their boss as well) to be supportive and on your side. And while you can be on leave, you should spend at least some minimum amount of time each week (literally just a couple of hours) checking in with work and staying abreast of what's going on. If you do all of that, then you should be alright. | (I disagreed with the existing answers, so wanted to add my own, but I upvoted them as I think they are valid viewpoints even though I don't agree!)
I'm aware I am making a few assumptions here:
1. I'm assuming you are quite early in your career (is it your first 'career' type of job?) from some of the comments you made about maybe starting a family 'someday' etc.
2. I'm assuming you are in the UK or another Commonwealth English-speaking country, specifically not in the USA, due to your spelling of the word "labelled", references to 'maternity leave', etc.
**Your situation isn't standard in companies**
In short... the kind of behaviour in Case 1 and Case 2 as you described them... isn't normal for most companies. It is pretty dysfunctional behaviour.
In Case 1 (I commented on your Q to ask for clarification) it seems that the team (and I use the word 'team' quite loosely!) continued to assign work to someone who was medically excused from work for the moment, then remained silent in sprint review meetings when asked about that subject, presumably to "show up" the absent team member.
What a weirdly passive-aggressive thing to do! I could sort of understand one passive-aggressive inclined person doing something like this, in a misguided kind of way. But this was an orchestrated act as a team. I'm not sure if that is passive-aggressive or just plain mobbing actually.
(BTW in genuine Scrum this shouldn't be able to happen, because each Scrum team member 'voluntarily' commits to take on particular tasks during the sprint.)
In Case 2 (maternity leave) have you asked yourself why it is that a junior colleague told clients she had left the company?
**A frame challenge...**
As such I think you are asking the wrong question in your circumstances (but I agree it is a good question in general!) because of false assumptions. To put it in user story terms you are essentially asking "As a person with commitments in life outside of the company... I want to prove my value to the company... So that I am considered on equal terms with the 'careerists'."
And that is a valid question, I don't deny it! But I am answering in the context of the additional info you provided. In most normal companies what would happen is that a month or a few months here and there in a few years long career would be covered for, handled by management, and accepted as a normal part of business.
**Conclusion**
It seems to me that there's a kind of "Hunger Games" (I've actually never watched it myself, but I know what it is through pop-culture) dystopian culture at your company, in which resources are scarce and everyone is in competition with each other.
Search out "scarcity mindset", e.g. the link below. It's a situation of believing that there's only so much to go around so 'peers' are competitors, essentially.
<https://www.psychologytoday.com/gb/blog/science-choice/201504/the-scarcity-mindset>
These kinds of cultures sometimes develop spontaneously, but are typically due to some event in the past (like surprise mass layoffs) influencing the way people think of their colleagues (collaborators, or rivals for "the few jobs we have remaining"...)
The responses you've described in your Case 1 and Case 2 are not normal in most workplaces, as I said. Case 1 is just plain wrong. Case 2 depends on interpretation -- it's not unreasonable to go through someone's hand-written 'rolodex' to find contacts if the person was not so good at keeping centralized electronic records. But saying to clients that Person 2 had left the company (what?!) is not normal and is unacceptable.
It seems to me that here you have a situation where for whatever reason (and if you think about it, you can probably identify the reason) people are constantly in competition with each other, undermining other colleagues at every opportunity, maternity leave (etc) is an opportunity to be seized upon rather than just covered for a few months in a mildly inconvenient way, etc.
I did find it striking that in the cases you described it was "**junior**" people undermining someone senior to them, in order to discredit them in some way.
I wonder if that's the only way people are able to move up in this company? Not by the honest route of proving themselves through increased knowledge, project exposure, value to multiple teams etc., but instead just by discrediting someone else so that they can be promoted in the person's place? (And I wonder how that will end for them when they inevitably have to take time off for maternity, mental health, a broken leg, or whatever it is?!)
I'm sorry it isn't a neat answer in terms of "here are some steps you can take", but I would suggest really considering whether this company is dysfunctional, whether you want to continue working in a place like that, and so on.
154,644 | My company doesn't have that many developers. There is just one lead who manages 18 of us in a company with a total white-collar workforce of maybe 70. Salary is also limited to increases of just 3% a year. Basically, if you want to advance, you need to leave within a two-year period. They survive by having good pay at any given experience level, but also have a low average tenure (currently just 10 months) because of the pay and lack of upward mobility.
I joined about a year ago and am thinking about my future as I learn all this information about low tenure and few increases. Waiting to become lead isn't an option, as the current lead is someone who would have a hard time moving to an equivalent job because of his background. He lacks the traditional CS degree, and only his current job title reflects that he does development, so he will be there for a long time.
Because much of our work is quite simple, and because of the salary/advancement issue, I want to spend more time developing myself professionally. Conferences, blogs, resume-driven development, and all that good stuff. Stuff some node.js into some older projects as a microservice or something.
The problem is, each developer works on their own project or projects. The bus factor for most things is 1, as management wants us to each be responsible for something. When people leave, that makes management irate, as a system is then untended for several weeks. As a result, they check our LinkedIn profiles regularly to see if they have changed.
What can one do to develop professionally without pissing off the current employer? | 2020/03/08 | [
"https://workplace.stackexchange.com/questions/154644",
"https://workplace.stackexchange.com",
"https://workplace.stackexchange.com/users/115424/"
] | The simple answer is: you cannot. Period. Career-wise, promotions and the way forward go to people who play the game best - from being nice to their boss to actually delivering, in various degrees (in some companies delivering is actually not important at all). If you are a worse player than someone else you lose; that is simply how the game is played. You could just as well ask how you can win a gold medal in any Olympic category when you are not as good as the winner.
The point is that the higher you go, the fewer positions there are to be filled, and thus the competition gets harder. Obviously you should not play it stupid - in your example the first person was NOT labelled unreliable, he WAS unreliable. He failed to maintain clear communication with his team and his superior. You said yourself - "had always straddled the line". It may well be this was done on purpose because the team was tired of him or felt betrayed. From a Senior Engineer I do not expect "always straddling the line" and permanent issues. How can he compete with someone who does not do that, does not have the issues, and puts more time in? He cannot.
Same with the second person. Maternity leave. Now, you do not say which country this is, but in most countries maternity leave is not a holiday (i.e. one or two weeks) but significantly longer. You state: "arranged to have him made the point man. He did this in part by sifting through her paper calendar book of notes that she left on her desk when she went away." This sounds like bollocks to me. If a salesperson goes away for 6 months or so for maternity leave, do you really expect the clients not to have a contact point? The notes she left on her desk (WTF? Not in the files? She just disappears for half a year without cleaning up her desk? FISHY) are not HER notes, they are the company's notes. Someone else in the team saw a chance, and she obviously was:
* Not maintaining a good enough relationship with the clients, and
* Not preparing for her maternity leave well enough by informing the clients.
If she had had a good enough relationship with the clients, she would have informed them - you will find very few clients will just dump a sales contact they work well with under those circumstances. This smells like deflection - someone else had to handle her clients while she was away and did such a better job that he is now the contact. Also - how come all this happened without the team coordinator making sure she was not left without clients? The whole story smells like week-old fish. Deflection by someone grinding their way up, possibly by not doing a stellar job and now blaming someone else - typical deflection.
The simple question at the end sums it up:
"How can I compete career-wise with those who have no obligations outside work and are willing to do whatever it takes to win?"
You cannot. You will always be second fiddle to someone who can put more energy and focus into his job than you AND is willing to do what it takes. You can mitigate this to a degree by being in a job where teamwork counts more and the career path is not cut-throat (seriously, MOST jobs are in this category), but if you are in a job where results count first and someone else delivers more - then no, you do not get a winning medal for being second. And your boss is quite likely just paying lip service to team building etc. - he has his own career to handle and he prefers to build that on stronger players.
Your two examples, though, are bad - both hint at far more underlying problems than you tell us, because both are seriously out of line for "doing more". Both smell like there are untold realities that would make this whole thing look very different if you asked the people around why they did it. | (I disagreed with the existing answers, so wanted to add my own, but I upvoted them as I think they are valid viewpoints even though I don't agree!)
I'm aware I am making a few assumptions here:
1. I'm assuming you are quite early in your career (is it your first 'career' type of job?) from some of the comments you made about maybe starting a family 'someday' etc.
2. I'm assuming you are in the UK or another Commonwealth English-speaking country, specifically not in the USA, due to your spelling of the word "labelled", references to 'maternity leave', etc.
**Your situation isn't standard in companies**
In short... the kind of behaviour in Case 1 and Case 2 as you described them... isn't normal for most companies. It is pretty dysfunctional behaviour.
In Case 1 (I commented on your Q to ask for clarification) it seems that the team (and I use the word 'team' quite loosely!) continued to assign work to someone who was medically excused from work for the moment, then remained silent in sprint review meetings when asked about that subject, presumably to "show up" the absent team member.
What a weirdly passive-aggressive thing to do! I could sort of understand one passive-aggressive inclined person doing something like this, in a misguided kind of way. But this was an orchestrated act as a team. I'm not sure if that is passive-aggressive or just plain mobbing actually.
(BTW in genuine Scrum this shouldn't be able to happen, because each Scrum team member 'voluntarily' commits to take on particular tasks during the sprint.)
In Case 2 (maternity leave) have you asked yourself why it is that a junior colleague told clients she had left the company?
**A frame challenge...**
As such I think you are asking the wrong question in your circumstances (but I agree it is a good question in general!) because of false assumptions. To put it in user story terms you are essentially asking "As a person with commitments in life outside of the company... I want to prove my value to the company... So that I am considered on equal terms with the 'careerists'."
And that is a valid question, I don't deny it! But I am answering in the context of the additional info you provided. In most normal companies what would happen is that a month or a few months here and there in a few years long career would be covered for, handled by management, and accepted as a normal part of business.
**Conclusion**
It seems to me that there's a kind of "Hunger Games" (I've actually never watched it myself, but I know what it is through pop-culture) dystopian culture at your company, in which resources are scarce and everyone is in competition with each other.
Search out "scarcity mindset", e.g. the link below. It's a situation of believing that there's only so much to go around so 'peers' are competitors, essentially.
<https://www.psychologytoday.com/gb/blog/science-choice/201504/the-scarcity-mindset>
These kinds of cultures sometimes develop spontaneously, but are typically due to some event in the past (like surprise mass layoffs) influencing the way people think of their colleagues (collaborators, or rivals for "the few jobs we have remaining"...)
The responses you've described in your Case 1 and Case 2 are not normal in most workplaces, as I said. Case 1 is just plain wrong. Case 2 depends on interpretation -- it's not unreasonable to go through someone's hand-written 'rolodex' to find contacts if the person was not so good at keeping centralized electronic records. But saying to clients that Person 2 had left the company (what?!) is not normal and is unacceptable.
It seems to me that here you have a situation where for whatever reason (and if you think about it, you can probably identify the reason) people are constantly in competition with each other, undermining other colleagues at every opportunity, maternity leave (etc) is an opportunity to be seized upon rather than just covered for a few months in a mildly inconvenient way, etc.
I did find it striking that in the cases you described it was "**junior**" people undermining someone senior to them, in order to discredit them in some way.
I wonder if that's the only way people are able to move up in this company? Not by the honest route of proving themselves through increased knowledge, project exposure, value to multiple teams etc., but instead just by discrediting someone else so that they can be promoted in the person's place? (And I wonder how that will end for them when they inevitably have to take time off for maternity, mental health, a broken leg, or whatever it is?!)
I'm sorry it isn't a neat answer in terms of "here are some steps you can take", but I would suggest really considering whether this company is dysfunctional, whether you want to continue working in a place like that, and so on.
9,594 | There is not *[was not]* a tag for nonduality. Would someone please make a correspondence between nonduality and Buddhism as to "stage" or "attainment", qualifications, or whatever is applicable?
**EDIT:** I was thinking of Nonduality as a stage, but it is apparently seen more as a position or way of describing things? Mariana Caplan, in the book *"Eyes Wide Open - Cultivating Discernment on the Spiritual Path"*, has this paragraph at the top of page 163 (paperback):
>
> Ngakpa Chogyam, a Tibetan Buddhist teacher from Wales, offers a
> perspective on nonduality that includes all of life as a direct
> expression of the nondual core of truth. He explains that nonduality,
> or emptiness, has two facets: one is empty, or nondual, and the other
> is form, or duality. Therefore, duality is not illusory but is one
> *aspect* of nonduality. Like the two sides of a coin, the formless reality has two dimensions -- one is form, the other is formless. When
> we perceive duality as separate from nonduality (or nonduality as
> separate from duality), we do not engage the world of manifestation
> from the perspective of oneness, and thereby we fall into an erroneous
> relationship with it. From this perspective it is not "life" or
> duality that is maya, or illusion: rather it is our relationship to
> the world that is illusory.
>
>
>
This accords with the Heart Sutra. So, I was actually asking about **the Experience of this**, rather than whether it is true or not. *"Both is, and is not. Neither is, nor is not."* (Buddha)
**Second Addition:** I find a correspondence between the **formal / post-formal operations** distinction and the observation that some people get stuck when thinking of abstractions like nonduality, and others do not. Some people are more literal and fundamental, and others are more mystical. I think this is the key to understanding differences, and post-formal thought is an ability that develops through use. Here is a link to [a teacher's experience](http://siobhancurious.com/2007/09/08/formal-operational-vs-post-formal-thinking-in-adolescents-and-emerging-adults/). | 2015/06/16 | [
"https://buddhism.stackexchange.com/questions/9594",
"https://buddhism.stackexchange.com",
"https://buddhism.stackexchange.com/users/-1/"
] | There's an article on that subject, [Dhamma and Non-duality](http://www.accesstoinsight.org/lib/authors/bodhi/bps-essay_27.html), by Bhikkhu Bodhi.
The following is basically all direct quotes from that article, except very summarized (I'm extracting sentences and sentence fragments).
---
**Non-dual system**
For the Vedanta, non-duality (advaita) means the absence of an ultimate distinction between the Atman, the innermost self, and Brahman, the divine reality, the underlying ground of the world.
The Mahayana schools, despite their great differences, concur in upholding a thesis that, from the Theravada point of view, borders on the outrageous. This is the claim that there is no ultimate difference between samsara and Nirvana, defilement and purity, ignorance and enlightenment. For the Mahayana, the enlightenment which the Buddhist path is designed to awaken consists precisely in the realization of this non-dualistic perspective.
**Not a non-dual system**
As for the Theravada tradition:
* Virtue
+ Non-duality: the adept isn't bound by rules because "The sage has transcended all conventional distinctions of good and evil"
+ Theravada: "the liberated one lives restrained by the rules of the Vinaya, seeing danger in the slightest faults"
* Meditation:
+ Non-duality: defilements are mere appearances devoid of intrinsic reality
+ Theravada: hindrances are "causes of blindness, causes of ignorance, destructive to wisdom, not conducive to Nibbana"
* Wisdom:
+ Non-duality: concrete phenomena, in their distinctions and their plurality, are mere appearance, while true reality is the One: either a substantial Absolute (the Atman, Brahman, the Godhead, etc.), or a metaphysical zero (Sunyata, the Void Nature of Mind, etc.). For such systems, liberation comes with the arrival at the fundamental unity in which opposites merge and distinctions evaporate like dew
+ Theravada: wisdom not in the direction of an all-embracing identification with the All, but toward disengagement and detachment, release from the All
---
So to answer your question, if I understand the article, it's saying that "non-duality" in Buddhism includes statements like "nirvana and samsara are the same" and "everything is equally empty" ... but non-duality is not a feature of Theravada Buddhism (it's a feature of Mahayana Buddhism, and of Tantrayana). | The experience of form is very sobering: when we encounter a person who is angry or destitute, we experience that person's wrath or suffering.
When we experience the world as empty, we see that the angry or destitute person is empty in that their physical existence depends on conditions, and the way we see the situation depends on our own conditioned mind.
If we did not understand what the person was saying, our experience would be totally different.
And if someone told us that some of these destitute people were actually stingy millionaires, and we believed those stories, our perspective would change again.
These are simple examples of how the world is illusory: it is perceived in a way that is conditioned by the mind.
Pursuing this further, we can say that physical forms are empty, and visual forms are empty as well, because they are illusory. The world is empty!
If we engage only with the formless and see emptiness correctly, we become complacent and rest in equanimity; if we do not see emptiness correctly, it becomes coldness, or even cruelty.
Engaging only with the formless is an extreme. Just try telling a tree that is falling in your direction that it is empty, and see what the result is!
Engaging only with form is what most people do: believing things to be solid, and that there is a "person" purposely doing all those things to annoy us.
When we see the middle way, taking both form and the formless as true aspects of reality (conventional and absolute), then we can act compassionately, with the perception of oneness.
How? That is another question! |
23,844,104 | Is it possible to execute a commit only for a selected table?
The problem is that I do not know whether other tables are updated by the process, and I want to prevent my commit from updating them as well.
It is mass processing, and the commit is (logically) called only at the very end of the process.
So is there a way to update one specific table instead of all of them?
Case: a mass-processing run, like the one I work with, performs its updates at the very end.
Our code is just a BAdI implementation with several functions. To get specific data, it is necessary to make a request to another system; this request must not leave any traces in the history log, so it is strictly necessary to roll it back to keep it from persisting in the system.
This request is used for work item entries. If I process a first batch of data sets and generate a work item, that work item is registered to be triggered after the run. When I process the second batch of data sets and make the request to the other system, I have to roll the request back, but the work item entry from the first batch is then deleted (rolled back) as well.
Could those updates be made with a direct commit? -> No.
Can't I simply collect the updates in internal tables and apply them at the end of the run? -> No, because those classes don't belong to us.
Why don't I simply do the rollback at the end of the run? -> Those classes don't belong to us, and otherwise the whole mass-processing structure would have to be changed. | 2014/05/24 | [
"https://Stackoverflow.com/questions/23844104",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3167232/"
] | As a programmer you have control and should know what is being updated. If you structure your code into LUWs (logical units of work), you can control which tables are updated and at what point you catch an error while still allowing the appropriate rollback. So the answer specifically about the commit statement was given above, but as a programmer you can group your statements to accomplish the same thing. | AFAIK, the moment COMMIT is called, all the DML after the previous commit and preceding that COMMIT statement is committed to the database. So if there are further commits after yours, they are not influenced by your commit, nor can your commit influence theirs, unless of course there is a rollback. |
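To make "structure your code into LUWs" concrete, here is a minimal ABAP sketch. The `Z_*` function modules are hypothetical placeholders, not real SAP APIs; the point is only the grouping: register the work item update in the update task and close the LUW with `COMMIT WORK` before the remote request, so that the later `ROLLBACK WORK` discards only the request.

```abap
" A minimal sketch, assuming hypothetical Z_* function modules.
" Close the LUW before the remote request, so ROLLBACK WORK discards
" only the request, not the earlier work item registration.

" Register the work item update for the first batch; it runs at COMMIT WORK.
CALL FUNCTION 'Z_REGISTER_WORK_ITEM' IN UPDATE TASK
  EXPORTING
    iv_batch_key = lv_batch1_key.

COMMIT WORK.  " closes this LUW; the registered update is now safe

" Remote request that must leave no trace.
CALL FUNCTION 'Z_REMOTE_REQUEST' DESTINATION 'OTHER_SYSTEM'
  EXPORTING
    iv_query = lv_query.

ROLLBACK WORK.  " discards only changes made since the last COMMIT WORK
```

Whether this fits depends on whether an intermediate `COMMIT WORK` is acceptable inside the surrounding mass-processing framework, which the question suggests may not be the case.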
Every HTML document is an XML document. In the current project there are a lot of HTML tags which are not properly closed. This is a Ruby on Rails application. I want to add an after filter that parses the whole HTML output and raises an error if parsing detects that the document is not well-formed.
In this case, well-formed means that all tags are properly closed. What is a good Ruby parser to use here that is also fast? | 2009/07/08 | [
"https://Stackoverflow.com/questions/1098799",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/131849/"
] | HTMLTidy seems to be the most popular plugin for other languages, and there is a RoR version available too.
<http://blog.cosinux.org/pages/rails-tidy> | [markup\_validity](http://tenderlovemaking.com/2009/06/12/easy-markup-validation/) provides some (X)HTML validation features.
You can also use nokogiri [as described here](http://groups.google.com/group/nokogiri-talk/browse_frm/thread/31145249155a90e9/ac645b6df4ed65c5?lnk=gst&q=well+formed#ac645b6df4ed65c5). |
Every HTML document is an XML document. In the current project there are a lot of HTML tags which are not properly closed. This is a Ruby on Rails application. I want to add an after filter that parses the whole HTML output and raises an error if parsing detects that the document is not well-formed.
In this case, well-formed means that all tags are properly closed. What is a good Ruby parser to use here that is also fast? | 2009/07/08 | [
"https://Stackoverflow.com/questions/1098799",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/131849/"
] | HTMLTidy seems to be the most popular plugin for other languages, and there is a RoR version available too.
<http://blog.cosinux.org/pages/rails-tidy> | Why would you close your tags? It's only going to slow you down!
<http://blog.errorhelp.com/2009/06/27/the-highest-traffic-site-in-the-world-doesnt-close-its-html-tags/> |