text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Are you sure?
This action might not be possible to undo. Are you sure you want to continue?
2
The Nature Of Fasting
By: Shaykh Al-Islam Taqiuddin Ahmad
`Abdul-Halim Ibn Taymiyyah
Darussalam Publishers and Distributers
3
, ´ · , | ¸ - ¸, =|· _ · - . · · · « · ;· , =|· , ´ , · - . · ·
{Festinq is prcs·ribcd tor you es it wes
prcs·ribcd tor tbosc bctorc you.}
4
Publishers Note
All praise is due to Allah, and may He grant peace and blessings upon His Last
Messenger Muhammad (salAllahu ‘alayhi wa sallam).
This is a translation one of the smaller publications from the’ Al lay out work.
In the end all praise is due to Allah, and upon Him we depend.
Abdul-Malik Mujahid
General Manager
Darussalam
5
Contents
THE NATURE OF THINGS THAT BREAK THE FAST…………….. 6
THE NATURE OF THINGS THAT DO NOT BREAK THE FAST… 16
Questions and Answers
Fasting the Cloudy day and the Day of Doubt…………………………..37
Fasting and Shortening the Prayer for the Traveler………………….. 38
Fasting for the Traveler: Better or Worse……………………………... 41
Must One Intend to Fast the Night Before……………………………... 41
Is the Intention Necessary Every Day………………………………….. 43
How Fast is the Fast to be Broken……………………………………… 43
Eating After the Earlier Adhan…………………………………………. 43
If Fasting Causes Fainting and Madness………………………………. 44
The Case of A Pregnant Woman……………………………………….. 45
6
The Nature of Things that Break the Fast
,, -` , ¸ ` -` , · ,` ,
In tlc ¦-mc ot All-l. Mo·t Cr-·iou·, Mo·t Mcr·itul
Praise be to Allah. We praise Him, seek His Help, and ask his Forgiveness. We seek
refuge in Allah from the evil of our souls and the evil of our deeds. Whomever Allaah
guides, there is no one to mislead him. And whomever He misleads, there will be no
guide for him.
We testify that there is none worthy of worship except Allah, ascribing no partners to
Him. And we testify that Muhammad is His Servant and His Messenger (salAllahu
‘alayhi wa sallam).
What breaks one’s fast is of two kinds: One type will break the fast according to the texts
and consensus of the scholars. This includes: Eating, drinking, and sexual intercourse.
Allah, the Almighty, said:
¦ _ ¸ · , . ,' ' ` , ,` - ¸ · · ,` . ` , - ¸ · ` ¸ ,` , . ` , - ` , ´ ¸` , , _` - ,` , ,` , , ¯ , ` , ´ ` · . ¯ · ,` · ` , , ` ¸` ·,` , , .. ·
¸` , ¦
¦So now l-vc ·cxu-l rcl-tion· witl tlcm -nd ·cck tl-t wli·l All-l l-· ord-incd tor you
,ott·¦rin¿,, -nd c-t -nd drink until tlc wlitc tlrc-d ,li¿lt, ot tlc d-wn -¦¦c-r· to you
di·tin·t trom tlc ll-·k tlrc-d ,d-rknc·· ot ni¿lt,, tlcn ·om¦lctc your t-·t till tlc ni¿ltt-ll.¦
Thus, Allah permitted sexual relations (during the night of the fast), so, it is inferred from
that fasting it to abstain from: Sexual intercourse, eating and drinking. Since Allah the
Almighty, said before this:
` , ´ ` · ¸ · ¸, _ · . ¯ ¯ ` · , . ` , ´` , · . ¯¦
¦l-·tin¿ i· ¦rc··rilcd tor you -· it w-· ¦rc··rilcd tor tlo·c lctorc you.¦
Then it is understood that fasting was known to them as abstaining from eating, drinking
and sexual intercourse, and that the word (fast) was known to them before Islam and they
acted according to it with this meaning, as recorded in the Two Sahihs from ‘A’ishah
(May Allah be pleased with her): “The Quraysh used to fast the day of ‘Ashura’ in the
pre-Islamic era.” (Al-Bukhari and Muslim)
It has been narrated through many routes [of narration] that before prescribing the fast in
the month of Ramadhan, the Messenger of Allah (salAllahu ‘alayhi wa sallam) ordered
7
fasting on the day of ‘Ashura’ and he sent a herald to proclaim that. Thus, it is inferred
that the word (Fasting) was known to them.
It is also established by the texts and the consensus of the Muslims that the menstruation
blood invalidates the fast, hence, the menstruating woman does not fast, but she makes it
up.
It is textually established from the narration of Luqayt bin Saburah that the Prophet
(salAllahu ‘alayhi wa sallam) said to him:
,,Ix-¿¿cr-tc in inl-lin¿ -nd cxl-lin¿ ot w-tcr ,in your no·c, unlc·· you -rc t-·tin¿.,, (Abu
Dawud And At-Tirmidhi)
It is inferred from this, that water reaching the stomach through the nose breaks one’s fast.
And this is the opinion of the majority of the scholars.
There are two Hadiths in the Sunan, one of them is narrated by Hisham bin Hasan, from
Muhammad bin Sirin, from Abu Hurayrah (radyAllahu`anhu) who said: “The Messenger
of Allah (salAllahu ‘alayhi wa sallam) said:
,,Vlocvcr i· ovcr¦owcrcd ly vomit wlilc t-·tin¿, lc doc· not l-vc to m-kc it u¦. Iut it
lc vomit· ,intcntion-lly,, tlcn lc m-kc· it u¦.,, (Abu Dawud And At-Tirmidhi)
This Hadith is not confirmed according to a group of the scholars. They say: “It is the
words of Abu Hurayrah.” Abu Dawud said: “I heard Ahmad bin Hanbal saying:
‘It is not of any worth.”
Al-Khattabi said, “Meaning it is not preserved. At Tirmidhi said: ‘I asked Muhammad
bin Isma`il [Al Bukhari] about this Hadith and he said that he did know it except through
‘Isa bin Yunus, and he [Muhammad] added: “I do not think it is preserved.” He also
narrated that Yahya bin Kathir narrated on the authority of `Umar bin Al-Hakam that Abu
Hurayrah’s opinion was that vomit does not break fast.”
Al-Khattabi said: “Abu Dawud mentioned that Hafs bin Ghiyath narrated it from Hisham
just as it was narrated by ‘Isa bin Yunus.” He [Al-Khattabi] said, “I do not know that
there is any difference between the scholars over the question that whoever was
overpowered by vomit does not have to make it up, nor that who intentionally vomits,
then he has to make it up. They only differed over the atonement. The majority of them
said: ‘He has only to make it up.’ But ‘Ata’ said: ‘He has to make it up and to do the
atonement.’ This was quoted from Al-Awza’i, and it is the saying of Abu Thawr.”
I (Ibn Taymiyyah) say: This is implied also by one of the two narrations form Ahmad
answering about the atonement for cupping. Since if it was necessary for the cupped,
then even more so for intentional vomiting. But what is apparent from his school is that
atonement is not obligatory except in the case of sexual intercourse as stated by Ash
Shafi`i.
8
Those who do not affirm the Hadith in question do so because it has not reached them
through a dependable route. They indicate that it has a deficiency, in that it was narrated
exclusively by ‘Isa bin Yunus. But as is clear, he is not alone with it, rather it was also
narrated by Hafs bin Ghiyath, and the other Hadith supports it.
That is the Hadith recorded by Ahmad and the Sunan compilers, like At-Tirmidhi, on the
authority of Abu Ad-Darda’ that the Prophet (salAllahu ‘alayhi wa sallam) vomited and
broke his fast (Ahmad and Abu Dawud) That was mentioned to Thawban who said: “He
(Abu Ad-Darda’) has told the truth. I, myself, poured the water for his ablution.” But the
wording of Ahmad is: “The Messenger of Allah (salAllahu ‘alayhi wa sallam) vomited
and performed ablution.” Recorded by Ahmad on the authority of Husayn Al-Mu`alim.
(Ahmad and At-Tirmidhi).
Al-Athram said: “I said to Ahmad, “They have contradicted each other with this Hadith.
Ahmad said: “But Husayn Al-Mu`alim’s narration is good.” At Tirmidhi said: “This
Hadith of Husayn is the most correct thing on this topic.”
Accordingly, the obligation of ablution for vomiting was inferred from it. Yet it does not
support this. For he may have intended that ablution is legislated for that, since it says
nothing but that he performed ablution, and merely performing it does not prove that it is
obligatory. Instead, it only proves that ablution in such case is legitimate. If it is said: “It
is desirable,” then such would be applicable from the Hadith.
Similarly, in the case of what was narrated from some companions about ablution in the
case of bleeding, there is nothing in such narrations to prove doing so is obligatory. But
it shows only that it is desirable. There is nothing among the Shari`ah proofs to support
requiring that.
Rather, Ad-Daraqutni and others recorded from Humayd that Anas said: “The Messenger
of Allah (salAllahu ‘alayhi wa sallam) was cupped, and did not perform ablution. He
washed only the location of the cupping.”
Ibn Al-Jawzi recorded it in his book entitled: Hujjatul-Mukhalaf, and he did not weaken
it, although his habit is to act upon the disparaging remarks reported wherever possible.
As for the narrated Hadith which says:
,,¯lrcc ,tlin¿·, do not lrc-k tlc t-·t· Vomitin¿, ·u¦¦in¿, -nd wct drc-m·.,, (At-Tirmidhi)
In another wording:
,,¯lcy l-vc not lrokcn [tlcir t-·t|· ¦ot tlc onc wlo vomit·, nor tlc onc wlo l-· - wct
drc-m, nor tlc onc ·u¦¦cd.,, (Abu Dawud)
Its chain is confirmed. What is narrated by Ath-Thawri and others, from Zayd bin Aslam,
from a man among his companions, from a man among the companions of the Prophet
9
(salAllahu ‘alayhi wa sallam) saying: “The Messenger of Allah (salAllahu ‘alayhi wa
sallam)..” this was recorded by Abu Dawud, and this man is not known. ‘Abdur Rahman
bin Zayd bin Aslam reported it from his father from ‘Ata’ from Abu Sa`id from the
Prophet; but ‘Abdur-Rahman is weak according to the scholars of ‘Ilm Ar-Rijal.
(Knowledge of the men of Hadith)
I say: His two Marfu` narrations from Zayd do not contradict his Mursal narration, rather
it supports them. So the Hadith is confirmed from Zayd bin Aslam, but contains the
wording:
,,Vlcn onc i· ovcr¦owcrcd witl vomitin¿,,
And others have reported it from Zayd bin Aslam in Mursal form.
Yahya bin Ma`in said: “The Hadith of Zayd bin Aslam is nothing.” And if it were
correct, it would mean: “Whoever was overpowered by vomit.” Because he connected it
with having a wet dream, and one does not have a wet dream by choice, since he is asleep,
so it does not break ones fast according to the consensus.
As for the Hadith about cupping, it is either abrogated or abrogating; due to the Hadith of
Ibn ‘Abbas which says that the Messenger of Allah (salAllahu ‘alayhi wa sallam) was
cupped while fasting and in a state of Ihram, (Ahmad, Abu Dawud and At-Tirmidhi).
And perhaps vomiting, if it is included under the meaning of intentional vomiting, then it
may also be abrogated. This supports the view that the prohibition of cupping came later.
It is known that if there are two contradicting texts, one changing the rule and other
remaining upon it, the one changing is given preference since it is abrogating the other,
and the earlier is more likely to be the abrogated.
As to him who masturbates then ejaculates, he breaks his fast. The wet dream only
applies to the one who ejaculates while asleep.
By analogy, a group of scholars thought that no emission breaks the fast, and that the one
who intentionally vomits only breaks his fast since it is likely that some of the vomit will
return (to the stomach). Others say that the mere fact that menstruation breaks the fast
contradicts such analogy.
As we have explained about the fundamentals, there is nothing in the Shari`ah that
contradicts sound analogy.
If it is said: “You have said that the one who intentionally breaks his fast, his doing so is
one of the major sins, and the one who intentionally delays the day prayer until the night
without any excuse, his deed is considered one of the major sins, and that it would not
after that be acceptable from him according to the most apparent of the two sayings of the
scholars.
10
But the one who missed the Friday prayer or throwing the pebbles (During Hajj) or other
cases of acts of worship whose time is limited. For such things he has been ordered to
make up.
It is also narrated in the Hadith about the one who has sexual intercourse in the day of
Ramadhan that the Messenger of Allah commanded him to make it up?”
Then the response to this is that he commanded him (the one overpowered by vomit) to
make it up since man only vomits uncontrollably, like the patient who gets better by
vomiting, or the one intentionally vomiting after eat some doubtful food, as was done by
Abu Bakr when he knew that the food he had eaten was earned by a soothsayer.
So if one who vomits has an excuse for doing so, then what he has done is permissible,
and thus, he entered the category of the sick who are entitled to making it up. He is not
one of those who broke their fast without excuse.
As for his command to the one that had sexual intercourse (in the day of the month of
Ramadhan) of making it up, it is a weak Hadith. More than one of the major scholars of
Hadith classify it as weak.
This Hadith is confirmed by many routes in the Two Sahihs via the narration of Abu
Hurayrah and ‘A’ishah, and none of them mentioned the command to make it up. Had
the Messenger of Allah (salAllahu ‘alayhi wa sallam) commanded him to make it up,
they would not have neglected to mention it; since it is a legislative ruling that must be
clarified. Since the Messenger of Allah (salAllahu ‘alayhi wa sallam) did not command
him to make it up, it follows that making it up would not have been acceptable from him.
This is a proof that he intentionally broke his fast, hence he was neither forgetful nor
ignorant.
As for the one who has sexual intercourse during the days of Ramadhan out of
forgetfulness, there are three views from the Madhhab of Ahmad and others, and there
are three narrations mentioned about it.
1. Neither making it up nor atonement are required. This is the view of Ash-Shafi`i,
Abu Hanifah, and most of the others.
2. He must make it up, without atonement. This is the view of Malik.
3. He must do both. This is the popular position of Ahmad.
The first is more obvious as has been properly explained in its appropriate place.
It is confirmed by evidence in the Book and the Sunnah that whoever does a prohibited
deed mistakenly, or out of forgetfulness, then Allah will not punish him for that, so his
status is like the status of one who did not at all, there is no sin upon him, and the one
who does not commit a sin is neither considered disobedient nor having a committed a
prohibited act. In his case he did what he was commanded and did not do what he was
11
prohibited. Such a case does not invalidate his worship, his worship is only invalidated
when he does not do what he was commanded or he does what he has been prohibited
from him.
Contrarily; Hajj is not invalidated by doing any of the forbidden things mistakenly or out
of forgetfulness, not sexual intercourse or other than that, and this is the more apparent of
the views of Ash-Shafi`i.
As for the atonement and the ransom, they become obligatory because they replace the
value of the thing destroyed. Likewise, if a boy, an insane person, or sleeping person
were to destroy it, he becomes responsible for replacing it.
The ransom in case of killing game mistakenly or out of forgetfulness has the status of
ransom for accidental murder. The atonement for accidental murder is obligatory based
upon texts of the Qur’an and the consensus of the Muslims.
As for the other violations during Hajj, such as clipping one’s nails, shortening one’s
moustache, using perfume and wearing normal clothing, they cannot be classified under
this topic. Even if one pays ransom for such actions, this does not make it similar to the
ransom of killing game; since the latter is a means of replacing the value of the thing that
was destroyed. Thus, the one doing a prohibited deed mistakenly or out of forgetfulness
pays ransom alone, only in the case of killing game.
There are different opinions among people here:
1. This is one of them, it is the saying of the Dhahiriyah.
2. The second view includes both things during forgetfulness, as said by Abu
Hanifah, and Ahmad, and Al-Qadhi and his companions also chose it.
3. The third view makes a distinction between what causes damage, such as killing
game, shaving hair, and clipping nails; and what does not cause damage, such as
perfume and dress. This is the view of Ash-Shafi`i and Ahmad in the second
narration from him, and a group of his companions chose it as well. This view is
better than the other, but removing hair and clipping nails should be classified
along with the dress and perfume, not under killing game. In this case this view
would even be better.
4. The fourth view is that mistakenly killing game should not be included. This is
according to a narration from Ahmad. This applies even more so to removing
hair and clipping nails.
Contrarily, if the fasting person eats, drinks, or has sexual intercourse out of forgetfulness
or mistakenly, he does not have to make it up. This is the view of a group of
predecessors and a group of those after them.
Some of them said that in the case of forgetfulness or being mistaken the fast is broken.
This was the view of Malik. Abu Hanifah said: “This is his analogy. But he is
contradicted by the Hadith of Abu Hurayrah about forgetting”[1]
12
Some others say that the one eating mistakenly has broken his fast while the one who has
forgotten has not. This is the view of Abu Hanifah, Ash Shafi`i and Ahmad. Abu
Hanifah gave preference to the position of the one who forgot. As for the followers of
Ash-Shafi`i and Ahmad they say: “Forgetfulness does not break the fast because it is
uncontrollable. On the contrary, in the case of the mistaken, it was possible for him not
to break his fast until he was certain that the sun had set, and that he refrain from eating
when he is not sure about the beginning of Fajr.”
This distinction is deficient, and the opposite is actually the case. According to the
Sunnah, the fasting person is commanded to hasten to break his fast and to delay pre-
dawn meal (As-Suhur). In case of overcast, a long time must pass before one is sure
whether it is time to break the fast or not. This may cause him to miss the (reward of)
hastening to break his fast and of performing the Maghrib prayer which he is required to
expedite. If he is not sure of sunset, he would have to postpone the Maghrib prayer until
he is sure of its time. In this case he may postpone it until the dusk goes away and still be
unsure.
It is reported from Ibrahim An-Nakha`i and others among the predecessors – and it is the
view of Abi Hanifah – that in the case of cloudy weather, they considered it
recommended to delay the Maghrib and the Thuhr prayers, and advancing the ‘Isha’ and
the ‘Asr’ prayers. There are also texts in that regard from Ahmad and others.
Some of the followers of Abu Hanifah thought that this was done in an attempt to pray
when the two times meet. This is not the case, because immediately after this
“precautionary time of meeting” is the time of ‘Asr and ‘Isha’. But this was done
because these two sets of prayers may be combined in the case of some excuse, and
cloudy weather is a case of an excuse. So the first of the two prayers is delayed and the
second of them is advanced, for the sake of two benefits:
1. The first is to ease the matter for people to perform them one time for fear of rain,
so it is like the case of combining for rain.
2. The second is to be sure of time of Maghrib. The same with combining Thuhr and
`Asr according to the most apparent of the two views, and this is one of the two
reports from Ahmad. This means combining them due to thick mud, strong wind
and the like as the scholars state. This is the view of Malik, and the most apparent
of the two sayings in the Madhhab of Ahmad.
In addition, the potential wrong committed by advancing ‘Asr and ‘Isha’ is preferred to
that of advancing Thuhr and Maghrib; since performing a prayer before its time is not
permissible under any circumstances. While it is permissible to perform them during the
time of Thuhr and Maghrib; since this is their due time in the case of an excuse. The
state of uncertainty is a case of an excuse. Thus, to combine two prayers in the case of
uncertainty is more reasonable than praying them individually during a time of doubt.
13
This is related to what is said by those who say it was done in order to catch the time the
two prayers meet. But such a time only occurs in the case of prayers that share a time.
Do not see that they did not recommend delaying Fajr, ‘Isha’ or ‘Asr? If the reason for
all of this was actually for fear of performing the prayer before its time, then this would
have also applied to Fajr, `Asr and ‘Isha’.
It is narrated that the Prophet (salAllahu ‘alayhi wa sallam) urged us to hasten peforming
the `Asr prayer:
,,Ix¦cditc tlc ¦r-ycr on tlc ·loudy d-y. Indccd, wlocvcr lc-vc· tlc ‘A·r ¦r-ycr, li· dccd
will lc in v-in.,, (Al-Bukhari and Ibn Majah.)
If it is said: “If it is desirable to delay Maghrib during cloudy weather, then breaking the
fast would also be delayed.” Then we say that it is only desirable to delay along with
advancing ‘Isha’, such that they are prayed before dusk disappears. But if one delays it
until he fears the disappearance of dusk, then this is not recommended, nor is it
recommended to delay breaking the fast until that time.
Thus, the legislated combining for rainy weather is the combining in advance, at the time
of Maghrib. It is not recommended to delay Maghrib until the disappearance of dusk.
This would cause a great hardship on people, while combining has been legislated to ease
matter for the Muslims.
Both the delay and the advancement that are recommended do not mean the performance
of the two prayers without any time in between. Thuhr is delayed and ‘Asr advanced, but
there may be a short period of time between them. The same with Maghrib and ‘Isha’,
the pray one, and wait for the other, but only a time in which none would need to go to
their home and then return. This type of combining is allowed. It is not conditional based
upon instantaneous succession, according to the most correct view as a we mentioned in
different place.
It is confirmed in Sahih Al-Bukhari from Asma’ bint Abi Bakr, may Allah be pleased
with her, who said: “One day, during the lifetime of the Prophet (salAllahu ‘alayhi wa
sallam) we broke our fast in Ramadhan on a cloudy day, then, the sun appeared again.”
Two rulings are inferred from this narration:
1. It is not recommended to delay breaking the fast during cloudy weather until one
is sure of sunset. They did not do this, nor did the Prophet (salAllahu ‘alayhi wa
sallam) order them to. It is well known that the Messenger of Allah (salAllahu
‘alayhi wa sallam) along with his companions, may Allah be pleased with them,
are well aware of the rulings, and they are the more compliant to Allah, the
Almighty, and to His Messenger (salAllahu ‘alayhi wa sallam) than those who
followed them.
2. Making up the fast is not obligatory. Since, had the Prophet (salAllahu ‘alayhi wa
sallam) commanded them to make it up, it would have been popular among them
14
and it would have been conveyed to us, just as their breaking their fast was
conveyed to us. Since it was not conveyed, it is inferred that He (salAllahu
‘alayhi wa sallam) did not command it.
If it is said: “Hisham bin ‘Urwah was asked, ‘Were they commanded to make it up.’ And
he said: ‘Isn’t making it up essential?’” Then the response is that this is mere view of
Hisham, then they reported it with the Hadith he narrated. Proving that he had no
knowledge about that is what Ma`mar narrated: “I heard Hisham saying: ‘I do not know
whether they made it up or not.” Al-Bukhari reported this. Hisham narrated this Hadith
from his wife, Fatimah bint Al-Mundhir from Asma’ (may Allah be pleased with her).
Hisham also narrated from his father ‘Urwah that they were not commanded to make it
up, and ‘Urwah is more knowledgeable than his son. This is the view of Ishaq bin
Rahwiyah. Ahmad said: “By analogy, it does not break his fast. We only left this view
because of the Hadith of ‘Umar.
Ishaq bin Rahwiyah is a colleague of Ahmad bin Hanbal, and he is in agreement with his
Madhhab, in its fundamentals and its branches. And many of their sayings are in accord.
Al-Kawsakh asked his questions from Ahmad and Ishaq as did others. Similarly At-
Tirmidhi combined the sayings of Ahmad and Ishaq, for he reported both of their sayings
from the issues of Al-Kawsakh.
Abu Za`rah, Abu Hatim, Ibn Qutaybah, and other Imams of knowledge and Sunnah and
Hadith used to learn the Madhhab of Ahmad and Ishaq and give preference to their views
over the views of the others. Other Imams of Hadith such as Al-Bukhari, Muslim, At-
Tirmidhi, An Nasa’i and others who followed them are all among those who took
knowledge and Fiqh from them, as well as Dawud who took from the companions of
Ishaq.
Whenever Ahmad bin Hanbal was asked about Ishaq, he used to say: “Am I asked about
Ishaq? Nay, Ishaq should be asked about me.”
Ash-Shafi`i, Ahmad bin Hanbal, Ishaq, Abu ‘Ubayd, Abu Thawr, Muhammad bin Nasr
Al-Marwazi, Dawud bin ‘Ali and their like are all Fuqaha’ of Hadith, may Allaah be
pleased with them all.
Additionally Allah the Almighty said in His Book:
,` - ¸ · · ,` . ` , - ¸ · ` ¸ ,` , . ` , - ` , ´ ¸` , , _` - ,` , ,` , , ¯ ,¦
1 (2:187)
The verse along with the authentic Hadiths of the Prophet (salAllahu ‘alayhi wa sallam)
clearly state the command to eat until Fajr appears plainly. Thus, even in the state of
doubt concerning Fajr, one is commanded to eat, as has been clarified.
15
* * *
[1] The wording of the Hadith as follows: “Whoever forgets while fasting then eats or
drinks, let him complete his fast; since Allah has fed him and caused him to drink.”
(Agreed upon)
16
The Nature of Things that Do Not Break the Fast
As for kohl, injections, drops dropped into urethra, and the stomach and brain treatments,
the scholars differ over them. Some of them are of the opinion that none of them break
the fast. Some ruled out kohl, some others ruled out eye drops, and a group ruled out
kohl and eye drops and considered all else as breaking the fast. The most apparent view
is that none of these break the fast.
The fast is one pillar of the religion of Islam about which all people, the specialized as
well as the average must be aware. Had these things been prohibited by Allah and His
Messenger for the fasting person, or had they invalidated one’s fast, it would have been
necessary for the Messenger of Allah (salAllahu ‘alayhi wa sallam) to clearly explain
that. If the Messenger of Allah (salAllahu ‘alayhi wa sallam) had clarified it, the
companions, may Allaah be pleased with them, would have reported it to the nation as
they have with the rest of the legislative matters.
Since none of the knowledgeable scholars reported a Hadith – neither Sahih, weak,
connected, or Mursal – from the Prophet (salAllahu ‘alayhi wa sallam) then it is known
that he mentioned nothing in this concern.
The reported Hadith related to the kohl is weak. It is recorded by Abu Dawud in his
Sunan. No one else recorded it. It is not included in the Musnad of Ahmad nor in other
reliable compilations of Hadith.
The wording of the Hadith is as follows: “Abu Dawud said: An Nufayli narrated to us:
‘Ali bin Thabit narrated to us; ‘Abdur-Rahman bin An-Nu`man bin Ma`bid bin Hawdhah
narrated to us on the authority of his father, on the authority of his grandfather from the
Prophet (salAllahu ‘alayhi wa sallam) “That he commanded using kohl when going to
sleep, and said:
,,lct tlc t-·tin¿ ¦cr·on ·t-y - w-y trom it.,, (Abu Dawud)
Abu Dawud said: “Yahya bin Ma`in said to me: ‘This Hadith is Munkar.’” Meaning the
Hadith related to the kohl.
Al-Mundhiri said: “Abdur-Rahman is weak.” And Abu Hatim Ar-Razi said: “He
(‘Abdur-Rahman) is truthful, but who knows his father and his trustworthiness and
memorization abilities?”
The same with Ma`bid; he was contradicted by another weak Hadith compiled by At-
Tirmidhi with his chain, to Anas bin Malik. `Abdul-A`la bin Wasil narrated to us, Al-
Hassan bin Atiyyah narrated to us, Abu `Atikah narrated to us from Anas bin Malik who
said: “A man came to the Prophet (salAllahu ‘alayhi wa sallam) and said: ‘My eye is sore.
Am I permitted to apply kohl while fasting?’ The Prophet said: “Yes.”
17
At-Tirmidhi comments on this Hadith saying: “its chain of narrators is not strong.
Nothing is correct from the Prophet on this topic, and Abu `Atikah is weak.” (At-
Tirmidhi)
These are the words of At-Tirmidhi, Al-Bukhari said about Ma`bid: “His narrations are
Munkar.” An-Nasa’I said: “He is not trustworthy.” And Ar Razi said; “His narrations are
skipped.”
Those who said that these things like injections, and stomach and brain treatment break
the fast have no proof from the Prophet (salAllahu ‘alayhi wa sallam) they deduced their
view from analogy. Their strongest proof is the saying of the Messenger of Allaah
(salAllahu ‘alayhi wa sallam):
,,And cx-¿¿cr-tc in inl-lin¿ -nd cxl-lin¿ ot w-tcr [into your no·c| cx·c¦t it you -rc
t-·tin¿.,, (Abu Dawud And At-Tirmidhi)
They argue that this proves that whatever one causes to reach the brain breaks the fast.
The same analogy would include whatever one causes to reach the stomach through
injections etc., whether it reaches the stomach through normal passageway or whatever
way it enters the stomach.
Those who ruled out eye drops say the eye drops do not reach directly to the stomach but
they seep; but what goes through the urethra/rectum is like what reaches the stomach
through the mouth and the nose.
Those who ruled out kohl say that the eye is not a normal passageway like the two ducts
(the sex organ and the anus); but kohl is absorbed in the body like water and oil.
Those who said that kohl breaks one’s fast that it reaches inside the throat till the fasting
person expectorates it because there is a passageway from the eye to the throat.
As has been explained, all of them rely upon analogy. The fast is not invalidated this
way for the following reasons.
1. Analogy is an argument if its conditions are correct, we have previously
mentioned the principle that all of judgements of the Shar`iah are clarified by
texts, correct analogy will only prove what is also proven by texts.
So when we know that the Messenger of Allaah (salAllahu ‘alayhi wa sallam) neither
imposed nor prohibited anything, then we know that it is neither unlawful nor obligatory
and that the analogy that states that it is obligatory or prohibited is a false analogy.
We are well aware that there is neither in the Book nor in the Sunnah what shows that
such things break the fast. Thus, we know that they do not break the fast.
18
2. The rulings which the Ummah needs to be aware of must be made publicly clear
by the Messenger (salAllahu ‘alayhi wa sallam) and transmitted throughout the
Ummah. If this has taken place, then it is known that such rulings are not from
the religion of Allah.
For example, we know that fasting any other month besides the month of Ramadhan has
not been made obligatory, and there is no pilgrimage to a house other than the House of
Allah, and there is no daily obligatory prayer other than the five prayers.
It is known also that the Messenger of Allah did not impose making Ghusl for merely
fondling one’s wife if there is not ejaculation of semen. He did not impose ablution for
being scared nearly to death, although there might be some excretions that occur. He also
did not order two Rak`ahs to be performed after the Sa`i between As-Safa and Al-
Marwah as he ordered after Tawaf around the House.
In this way, we came to know that semen is not filthy; since it was not reported through
any reliable chain of narrators that the Messenger of Allah (salAllahu ‘alayhi wa sallam)
commanded the Muslims to wash it off, neither from their bodies nor from their clothing,
although most of the people would experience it. The Messenger of Allah (salAllahu
‘alayhi wa sallam) commanded menstruating woman to wash menstruation blood from
her underwear, although there is no dire need for this. He did not command the male
Muslims to wash semen, nether from their bodies nor from their clothes.
The Hadith narrated by some Fiqh scholars, which says: “The dress should be washed
from urine, stool, semen, prosthetic fluid, and blood.” Is not the Prophet’s (salAllahu
‘alayhi wa sallam) saying. None of the reliable compilations of the Hadith contain it.
None of the specialists in the science of Hadith collected it with a reliable chain of
narrators. It is narrated from ‘Amar. It seems that it is his own saying.
As for the washing and scratching the semen from the clothes of the Messenger of Allah
(salAllahu ‘alayhi wa sallam) by ‘Aishah (may Allah be pleased with her), it is not an
evidence that doing so is an obligation. Since the clothing is also washed to remove dirt,
mucus, and spittle.
The commandment of the Messenger of Allah (salAllahu ‘alayhi wa sallam) for anything
is the only factor to consider any affair obligatory; especially when we know that the
Messenger did not command the Muslims to wash it off of their clothing. Besides, it was
not reported that he ordered ‘Aishah to wash it. He only silently approved of her doing
so. This implies its permissibility, or that it is an agreeable endorsement or a
recommendation. As far as an obligation is concerned, there must be a proof.
In this way we came to know that he (salAllahu ‘alayhi wa sallam) did not impose the
ablution because of touching women nor from filthy matters that come out of the organs
other than via the two ducts. It has not been narrated with a good chain of narrators that
he commanded such action; although the people used to vomit, and were being cupped,
and were wounded in the battlefield. The vein of some of his companions was cut, and
19
bled, yet none reported that he (salAllahu ‘alayhi wa sallam) commanded them to
perform ablution because of it.
In addition, the people used to touch their wives lustfully and without lust, yet none of the
Muslims reported that he (salAllahu ‘alayhi wa sallam) commanded them to perform
ablution because of this. Besides, the Qur’an does not refer to this; since what is intended
by the touching is sexual intercourse as has been explained in its proper place.
His commandment to perform ablution for touching the sexual organ is merely a
recommendation, regardless whether it excites the man or not. It is also recommended
for one who gets excited when he touches a women to perform ablution. This ruling
applies to the one who contemplates sexual desire, then his sexual organ becomes erect.
Performance of ablution when one is excited carries the ruling of the performance of the
ablution when one gets angry. This is recommended because of the narration in the
Sunan that the Prophet (salAllahu ‘alayhi wa sallam) said:
,,An¿cr i· trom S-t-n. S-t-n i· ·rc-tcd trom tlc tirc. And tlc tirc i· di·tin¿ui·lcd ly w-tcr.
¯lu·, it onc ot you ¿ct· -n¿ry, lct lim ¦crtorm -llution.,, (Ahmad and Abu Dawud)
Uncontrollable sexual desire is from Satan. So the Prophet’s (salAllahu ‘alayhi wa
sallam) order to perform ablution for what has been touched by fire is because what was
touched by fire mixes with the body, and thus the order for ablution is one of the
recommendations. There is nothing in the texts proving that this is abrogated, rather they
prove that it is not obligatory. Saying that it is recommended is more just than the other
views; the view that it is obligatory, and the view that it is abrogated. This is one of the
two views in the Madhhab of Ahmad and others.
In this way we come to know that both the urine and the dung of the animal whose flesh
is eaten is not filthy, for these people were shepherds of camels and sheep. They used to
sit and pray in their animal pens which would be full of their dung. Had these places
been considered as Al-Hushuwsh (the places where one answers the call of nature), the
Messenger of Allah (salAllahu ‘alayhi wa sallam) would have commanded them to avoid
them and not to stain their bodies and clothes in them nor pray in them.
It is confirmed that the Messenger of Allah (salAllahu ‘alayhi wa sallam) and his
companions performed prayer in sheep pens, and he commanded praying in sheep pens
but he (salAllahu ‘alayhi wa sallam) prohibited praying in camels pens. Thus, it is
known that this was not due to the filth of the dung, but to the reason for which he
commanded performing ablution after eating camel meat. When he was asked about
ablution after eating the meat of sheep, he (salAllahu ‘alayhi wa sallam) said”
,,It you wi·l, ¦crtorm -llution. And it you wi·l, do not ¦crtorm it.,, (Muslim)
He (salAllahu ‘alayhi wa sallam) further said:
20
,,·-mcl· wcrc ·rc-tcd trom tlc }inn. ¯lcrc i· - dcvil on tlc lum¦ ot c-·l ·-mcl.,, (Abu
Dawud, Ibn Majah, the second half from Ahmad)
He (salAllahu ‘alayhi wa sallam) also said:
,,Io-·tin¿ -nd l-u¿ltinc·· -rc -mon¿ tlc l-dd-din[1| ·-mcl owncr·, tr-nquility i· witl tlc
·lcc¦ lcrdcr·.,, (Al-Bukhari and Muslim)
Since camels have devilish characteristics that Allah the Almighty and his Messenger
(salAllahu ‘alayhi wa sallam) dislike, people were commanded to perform ablution after
eating their meat, and prayer in their pens was prohibited just as he prohibited its
performance in bathrooms because they are the dwelling places of the devils.
Prayer should be avoided in the dwelling place of evil souls, and evil bodies, nay evil
souls are beloved by the evil bodies.
Since devils attend Al-Hushuwsh, prayer in them is more worthy to be avoided than in the
bath house, and camels pens, and more worthy than polluted land.
There is no specific text concerning Al-Hushuwsh because it was well known to the
Muslims, thus, they did not need a specific clarification.
For this reason, none of the Muslims used to sit in Al-Hushuwsh or pray in them. They
would go out in the open to answer the call of nature before they had water closets in
their houses.
When they heard his prohibition of praying in the bath houses or camel pens, they knew
with all the more reason that prayer in Al-Hushuwsh was strictly prohibited. Although
there is a Hadith narrated which prohibits the performance of the prayer in the grave yard,
the slaughter house, the dump, Al-Hushuwsh, the middle of the road, the camel pens, and
the surface of the Ka`bah. (At-Tirmidhi and Ibn Majah)
But the scholars of Hadith are in disagreement about it. The companions of Ahmad have
two opinions: Some of them see this to hold the position of a prohibition, while others
say that the Hadith is not confirmed.
I did not find either a prohibition or a commandment in Ahmad’s sayings, although he
disliked the performances of prayer in places of punishment. This was reported by his
son ‘Abdullah in a Hadith recorded in the Musnad from ‘Ali, it was also recorded by Abu
Dawud. It mentions Al-Hushuwsh, camel pens and the bath houses. These three were
also mentioned by Al-Kharqi and others.
The ruling concerning this depends on one’s view. He may clarify it by analogy to the
other mentioned texts, or he may affirm the Hadiths. Whoever disagrees would have to
negate the Hadith and explain the distinction. Besides, the prohibition can be a
prohibition of dislike or a prohibition of unlawfulness.
21
Since these are rulings concerning average everyday practices, and they must be clarified
publicly by the Prophet (salAllahu ‘alayhi wa sallam), and they must be conveyed by the
Ummah, and it is well known that kohl and other common things – such as oil, taking a
bath, scent and perfume are used by most people – then were it that such things break the
fast, the Prophet (salAllahu ‘alayhi wa sallam) would have explained it. Since he did not
explain this, we come to know that it can be classified under scent, musk and oil. Scent
may pass to the brain through the nose. Tissues absorb the oil which strengthens man.
He gains strength from the scent. Since the fasting person was not prohibited from such
things, then this proves that fragrance, oil, and kohl are permissible.
During the life time of the Prophet the Muslims used to suffer wounds whether in the
Jihad or otherwise; they also suffered wounds in the stomach and the head and used to be
treated with the prescribed medical substances. Had this been breaking the fast, he
(salAllahu ‘alayhi wa sallam) would have clarified this. Since he (salAllahu ‘alayhi wa
sallam) did not prohibit the fasting person from doing this, we know that it does not
break the fast.
3. Affirming that something breaks the fast by analogy must be done by an analogy
that is correct. This will either be by comparing two things that are in the same
category, or by eliminating any distinguishing factors between them. So either
the evidence supports the reason for the basic case, then it is applied to its
branches, or it is known that there is no difference between them in their
characteristics from the view of the Shari`ah.
As for the case in question, the analogy is negated. This is because there is nothing
among the evidences stating that what Allah and His Messenger (salAllahu ‘alayhi wa
sallam) appointed as breaking the fast is that which goes to the brain or the body, or what
reaches through a passageway (other than the normal one), or reaches the stomach
(through passageways other than the mouth) etc. This seems to be what the people of this
view want to impose as criteria on Allah and His Messenger (salAllahu ‘alayhi wa
sallam).
They are saying: “Allah and His Messenger (salAllahu ‘alayhi wa sallam) made eating
and drinking among the things that break the fast because they have the joint meaning of
substances used to treat stomach and head ailments that reach the brain and the stomach.
Or other things that reach the stomach like kohl, something injected, or what passes
through the urethra (rectum) etc.”
Since there is no proof from Allah and His Messenger from which to derive this
description, then the saying, “Allah and His Messenger (salAllahu ‘alayhi wa sallam)
only ranked these as things that break the fast for this reason” is a saying without
knowledge.
22
And the statement, “Allah made it unlawful for the fasting person to do this” then this is a
claim of “this is lawful and this is unlawful” without knowledge, claiming that Allah has
said something without knowledge. This is not permissible.
Any of the scholars who thinks that the joint meaning determines the judgement, then he
is in the same category as one who believes that his Madhhab is correct when it actually
is not. Or he is like the one who tries to prove something that was not even mentioned by
the Messenger (salAllahu ‘alayhi wa sallam). This is their own Ijtihad (judgement) for
which they will rewarded; but it is not required the Muslim to consider their saying a
Shari`ah evidence that is necessary to be followed.
4. Analogy can only be correct if the Shari`ah does not mention the reason for the
ruling after we study its basic qualities, and it will not apply except in the case of
the same qualities. Since we have affirmed that the basic reason corresponds to or
is akin to, or resembles contrary to what these people say, then it must be
investigated. And if we find that there are two basic descriptions, then it is not
possible that we say this judgement applies to one of them but not the other.
It is well known that both the texts and the consensus affirm that eating, drinking, sexual
intercourse, and menstruation break the fast. We also have known that the Prophet
(salAllahu ‘alayhi wa sallam) has forbidden the one performing the ablution to be
excessive in rinsing his nose if he is fasting. The rinsing of the nose as an analogy is their
strongest argument as previously explained. But it is still a weak analogy. This is
because when one inhales water into the nostrils, water descends to the throat then to his
stomach. So the result is the same as the result when drinking with the mouth. His body
is nourished by this water, his thirst is removed, the food in his stomach is digested, all
just the same as in the case of drinking water.
If the texts have not mentioned a prohibition of this, then reason leads one to know that
this falls under the same category as drinking, there is no distinction between the two
cases except for the means by which the water enters the mouth, and that is not relevant
because the mere entrance of water in the mouth does not break the fast. So it does not
break the fast nor does it fall under the category of something that does due to the
absence of the results in question. Rather, it is a means that leads to breaking the fast.
But this is not the case with kohl, and injections, and stomach and brain medicines, for
kohl does not provide nourishment, nor does anyone put kohl in their stomach, nor via
the nose nor mouth. Similarly, injections do not provide any nourishment in anyway, it
only goes through the branches of the body. Just like inhaling some type of laxative or
being frightened until one expels what is in his stomach.
So the injection does not reach the stomach, and the medicine that reaches the stomach in
the case of treatments from stomach surgery or for the brain are not like what reaches the
stomach for nourishment.
Allah, the Almighty, said
23
` , ´ ` · ¸ · ¸, _ · . ¯ ¯ ` · , . ` , ´` , · . ¯
¦l-·tin¿ i· ¦rc··rilcd tor you -· it w-· ¦rc··rilcd tor tlo·c lctorc you.¦
The Messenger of Allah (salAllahu ‘alayhi wa sallam) said:
,,l-·tin¿ i· - ·licld.,, (Al-Bukhari)
He (salAllahu ‘alayhi wa sallam) also said:
,,Indccd, S-t-n ru·lc· tlrou¿l tlc llood ot tlc ·on ot Ad-m ·o ·on·tri·t lim ly lun¿cr
-nd t-·t.,, (Al-Bukhari and Muslim, the second part is without a source)
The fasting person has been prohibited from eating and drinking because they are the
cause of strength. Thus, eating and drinking produce much blood, the medium in which
Satan rushes. Such blood is the blood produced from food. It is not produced from the
injection, nor from kohl. It is not produced from the medications taken by the one treated
for stomach or head wounds, but it is produced from the water inhaled through the nose,
hence the prohibition of exaggerating when rinsing the nose keeps one’s fast intact.
When such meanings are found among the fundamentals to be confirmed by the texts and
by consensus, then their claim that the Shari`ah applies the judgement to what they have
described is contradicted by these descriptions, and when there is a contradiction on a
fundamental matter it invalidates every type in its category.
If such meanings are contained in the firm fundamental by the text and consensus, thus,
their allegation that the Legislator (Allah) bases His Rulings concerning what breaks the
fast in accordance of theirs, turns out to contradict what is mentioned. A contradiction in
the fundamental voids all kinds of analogy as long as it is not clarified that the description
they claim is for another reason beside that.
5. It is well known that the text and the consensus establish the prohibition of the
fasting person from eating, drinking, and sexual intercourse. It is authentically
reported that the Prophet (salAllahu ‘alayhi wa sallam) said:
,,Indccd, S-t-n ru·lc· tlrou¿l tlc ·on ot Ad-m’· llood.,, (Al-Bukhari and Muslim)
Undoubtedly, blood is produced from eating and drinking; so when one eats or drinks, the
ducts of Satan (the veins) widen. So it is said, ‘so constrict the ducts by hunger.’ Some
say this is a narration (from the Prophet (salAllahu ‘alayhi wa sallam)).
Similarly the Prophet (salAllahu ‘alayhi wa sallam) said;
24
,,Vlcn l-m-dl-n lc¿in· tlc ¿-tc· ot l-r-di·c -rc o¦cncd, tlc ¿-tc· ot tlc lirc -rc ·lo·cd,
-nd tlc dcvil· -rc ·l-incd.,, (Al-Bukhari and Muslim)
If these ducts were constrained and the hearts rushed to do the good deeds by which the
gates of Paradise are opened, and abstained from evil deeds then the devils are chained,
so their effects lessens due to being chained. They will be unable to do what they used to
do in the months other than Ramadhan. The Messenger of Allah (salAllahu ‘alayhi wa
sallam) did not say that they would be killed or die. He said: “Chained”. The chained
devil may cuase some harm, but his harm in the month of Ramadhan is less and weaker
than in other months. This depends on the perfection of the fast. Thus, he whose fast is
perfect repels the devil better than the one whose fast is imperfect. This is the wisdom
behind prohibiting the fasting person from eating and drinking. The ruling is stabled only
on what is in agreement to it.
The Shari`ah proves (that the fast is broken in) cases with these qualities and effects, and
these things are negated in the case of injections and kohl and their like.
If it is said: “Kohl descends into the body and changes into blood.”
Then the reply is that this is like what was said about humidity that ascends from the nose
to the brain then changes into blood, and the oil absorbed by the body as well. The
prohibited is only what reaches the stomach like nourishment, and changes into blood
that circulates the body.
6. We have made this a sixth category, so we have compared the kohl, injections,
and the like with the incense, and oil etc., by finding the common characteristic
between them i.e. they do not nourish and produce blood from the stomach. This
characteristic is that which necessitates that such a thing does not break the fast.
And this may be found in the cases of dispute.
The branch may be based upon two fundamentals, each of them joined to what has the
qualities that resemble it according to what is relevant to the Shari`ah, and we have
already mentioned what is relevant here to the Shari`ah.
If it is said: “What would be the cause if one were to eat dust, pebbles or the like which
are not nourishing and provide no benefit?” Then the reply is that these are processed by
the stomach and changed into blood which supports the body, but they are incomplete
forms of nourishment. This is like the case of the one who has taken poison or something
else that causes him harm. And like the one who ate too much food and suffered
indigestion. Prohibition of such things in the fast is more obvious; since it is prohibited
while not fasting. This is like the prohibition of a husband from having sexual
intercourse with his wife so it is more obvious that he is prohibited from adultery.
If it is said: “Then sexual intercourse breaks the fast and menstruation blood breaks the
fast; although there is no relation between them.”
25
Then the reply is that these rulings are established by the text and the consensus, so there
is no need to use analogies to prove them. The reasons may vary, thus, the prohibition of
eating and drinking is for one reason and the prohibition of sexual intercourse while
fasting and the fact that it breaks the fast is for another reason. The fast is broken by
menstruation for a certain reason, yet we do not say that menstruation is forbidden. This
is because things that break by text and consensus are divided into matters of choice that
are not lawful for the servant, like eating and sexual intercourse, and things that there is
no choice for like menstruation. So in this way there are different reasons.
We say in the case of sexual intercourse, in essence it is the cause of ejaculation of semen
which carries the same ruling as that of intentionally vomiting. Menstruation, and
cupping, we will explain later, Allah willing. Sexual intercourse is a process of emission.
It is not a process of replenishing like eating and drinking. But from the view that it is
one of the two lusts, it carries the same ruling as eating and drinking.
In the authentic Hadith, the Prophet (salAllahu ‘alayhi wa sallam) said that Allah the
Almighty said,
,,l-·tin¿ i· tor Mc, -nd I rcw-rd tor it. Onc lc-vc· li· dc·irc -nd tood tor My ·-kc.,, (Al-
Bukhari and Muslim)
The human leaving what he desires for the sake of Allah is the objective that the worship
will be rewarded for. In the same way that the one in a state of Ihram is rewarded for
leaving usual dress, perfume and the like of physical luxuries.
Sexual intercourse is one of the most loved lusts of the body. It pleases the soul and
brings it delight. It moves the lust, the blood, and the body as a whole more than eating.
Since Satan flows through man’s blood, and nourishment produces the blood which is his
medium, then when man eats or drinks, his soul tends to lust, and its will and love for the
acts of worship weakens. This is more apparent in the case of sexual intercourse.
Its effect in strengthening the soul’s will for desire and weakening it for worship is
greater.
In fact sexual intercourse is the dominant desire, it is a greater desire than that of eating
and drinking. For such reason one having sexual intercourse during the day of Ramadhan
has to pay the ransom equal to that for Zihar. Emancipation or its equivalent is
obligatory on him according to the Sunnah and the consensus. Because it is more grave,
its temptation is stronger, the corruption resulting from it is more severe. This is the
greater of the two reasons for the prohibition of sexual intercourse.
As for as it weakening the body, being a form of emission, this is another reason. In this
view it becomes like eating and menstruation, but it is graver, so it spoils the fast more
than eating and menstruation.
26
If we think about the wisdom behind menstruation in accordance with the analogy, then
we say that the Shari`ah delivers justice in all affairs, exaggeration in worship is a form
of injustice which it prohibits, for it orders moderation in worship.
For this reason, it commands the expedition of breaking the fast and delaying the pre-
dawn meal. This is why it prohibited uninterrupted fasting. The Messenger of Allah
(salAllahu ‘alayhi wa sallam) said:
,,¯lc lc·t ·tylc ot t-·t i· tl-t ot D-wud: lc u·cd to t-·t cvcry otlcr d-y, -nd lc did not tlcc
wlcn t-·in¿ tlc cncmy.,, (Al-Bukhari and Muslim)
Thus, justice in acts of worship is one of the greatest objectives of the Shari`ah. For this
reason, Allah the Almighty says:
{ ¸, ` ·` ' . -`, . · . ¸ ,` ` · . , ` , ´ ` · ¸ - ' · . , ~ ,` · , -` . ,` ·¯ ¸, ,', ' ,¦
¦O you wlo lclicvc: M-kc not unl-wtul tlc wlolc·omc tlin¿· tl-t l-vc lccn m-dc
l-wtul to you. And tr-n·¿rc·· not. Indccd, All-l doc· not likc tlc tr-n·¿rc··or·.¦ (5.87)
So prohibited the lawful has been categorized as an antagonist to justice. He also said:
{ ¸ · ` , · . , , ` ,` , ` . - ' ¸ . , ~ ` , ,` , · ` ·` , - ,` · · ¸, ¸ · ¸ , = · _ · ¯ · ¸, ¦
¦` ·` · ,` ,` ` · , , , ` , · ` - ' ,¦
¦lor tlc wron¿ doin¿ ot tlc }cw·, Vc m-dc unl-wtul tor tlcm ·crt-in ¿ood tood· wli·l
l-d lccn l-wtul tor tlcm, -nd tor tlcir lindcrin¿ m-ny trom All-l’· V-y. And tlcir
t-kin¿ lil-, tlou¿l tlcy wcrc torliddcn trom t-kin¿ it.¦ (4:160,161)
Since they were unjust, their punishment was the prohibition of the good, lawful food.
To the contrary, for the just and moderate nation (the Muslims) all the wholesome things
were made lawful to them, while all the filthy things were made unlawful for them.
Since this is the case, the fasting person has been prohibited from food and drink which
strengthens and nourishes him, he must be prohibited from the emission of what weakens
him and empties the substance with which he is nourished. Otherwise it will be harmful
for him and at odds with his act of worship, not a just form of it.
Emissions are of two types:
A type one has no control to prevent, or its emission causes him no harm. These are not
prohibited for him, like urine and stool, their emission causes him no harm, and he is not
able to prevent them, when it is time, they will come out. Their evacuation is not harmful
to him but rather beneficial to him.
Vomiting emits the food and drink that the stomach turns into nourishment, the same
with masturbation because of the desire associated with it, it is the process of the
27
emission of semen which is internally turned into blood, so it is an emission of the blood
which nourishes him. For this reason, excessive emission of semen may be harmful for
humans, even coming out red.
The blood that is discharged during menstruation is a form of blood emission, so the
menstruating woman is able to fast another time outside of her menstrual period when her
blood does not flow.
Her fasting in such circumstances is just because the blood that strengthens her body is
not being discharged. If she were to fast during her period then the blood which gives her
strength would be discharged as well, weakening her body, then her fasting would no
longer be just, so she has been commanded to fast while she is not menstruating.
The case with Mustahadhah is different. Since her bleeding takes a long time, and it is
impossible for her to be commanded to make the fast up later; since during another time
she may also be bleeding. Thus, her bleeding is uncontrollable exactly as the case of
uncontrollable vomiting, blood discharge due to abscesses, a wet dream and the like.
These things have the limited time based upon which one may exercise some control.
Therefore these are not considered to negate the fast as is the case with menstrual blood.
Contrary to this is the drainage of blood through cupping, venesection and the like. The
scholars differ over whether cupping breaks the fast or not. There are many Hadiths
mentioned from the Prophet (salAllahu ‘alayhi wa sallam) saying:
,,¯lc onc ·u¦¦in¿ -nd tlc onc ·u¦¦cd l-vc lrokcn [tlc t-·t|.,, (Abu Dawud, At-Tirmidhi, Al-
Bukhari without a complete chain)
The preserving Hadith scholar Imams have explained them. Among the companions
many did not like the fasting person to be cupped, and some of them would only do it
during the night. The people of Basrah would close the cupping shops when the month
of Ramadhan began. The view that cupping breaks the fast is that of most of the Fiqh
scholars of Hadith such as Ahmad bin Hanbal, Ishaq bin Rahwiyah, Ibn Khuzaymah, Ibn
Al-Mundhir and others.
The Fiqh scholars among the people of Hadith are the closest in following the
commandments of Muhammad (salAllahu ‘alayhi wa sallam). Those who do not see that
cupping breaks the fast base their opinion on the narration reported in the Sahih which
says: “The Prophet was cupped while fasting and in Ihram.”
Ahmad and others criticized the wording “while fasting.” They say that the established
narration is that he was cupped while in the state of Ihram.
Ahmad said: “Yahya bin Sa`id said: Shu`bah said, `Al-Hakam on the authority of
Miqsam from Ibn `Abbas that the Prophet (salAllahu ‘alayhi wa sallam) was cupped
while fasting and in the state of Ihram.
28
Muhanna said: “I asked Ahmad abou the Hadith of Habib bin Ash-Shadid on the
authority of Maymun bin Mihran from Ibn `Abbas that the Prophet (salAllahu ‘alayhi wa
sallam) was cupped while fasting and in the state of Ihram. Ahmad said: ‘It is not
correct.’” Yahya bin Sa`id Al-Ansari rejected it saying that the Hadiths of Maymun bin
Mihran from Ibn `Abbas amount to only about fifteen Hadiths.
Al-Athram said: “I heard Abu `Abdullah mentioning this Hadith, and saying it is weak.”
He also said. “The books of Al-Ansari were lost during the turmoil and he used to narrate
from the books of his slave. This is one of those.’”
Muhanna also said: “I asked Ahmad abou the Hadith of Qubaysah on the authority of
Sufyan on the authority of Hammad on the authority of Sa`id bin Jubayr from Ibn `Abbas
which says: “The Prophet (salAllahu ‘alayhi wa sallam) was cupped while fasting and in
the state of Ihram.’ Ahmad said: ‘It is a mistake on the part of Qubaysah.’ I asked Yahya
about Qubaysah. He said: ‘He is a trustworthy, but he is mistaken in what he narrates
from Sufyan from Sa`id.’”
Muhanna said: “I asked Ahmad about the Hadith of Ibn `Abbas which says: ‘The Prophet
(salAllahu ‘alayhi wa sallam) was cupped while in the state of Ihram and fasting.’
Ahmad said: ‘It does not include “fasting.” It includes only “in the state of Ihram,”
similar was mentioned by Sufyan from `Amr bin Dinar from Tawus from Ibn `Abbas.
And from `Abdur-Razzaq from Ma`mar from Ibn Khaytham from Sa`id bin Jubayr from
Ibn `Abbas. These followers of Ibn `Abbas did not mention anything about “while he
was fasting.”
This is what has been mentioned by Imam Ahmad, and it is what was agreed upon by the
Two Shaykhs, Al-Bukhari and Muslim. For this reason they rejected the narration which
mentions: “while fasting.” They agreed only on the narration that mentions “while in the
state of Ihram.” As mentioned by Imam Ahmad. The narration in the Two Sahihs says:
“The Prophet (salAllahu ‘alayhi wa sallam) was cupped while in the state of Ihram.”
Some interpreted the Hadith mentioning cupping in ways that are weak; such as they
were backbiting, and that it was something else they did that broke their fast.
Their best argument in this regard is the saying of Ash-Shafi`i and others that it was
abrogated. This saying was on the eighteenth of Ramadhan, his cupping while fasting
and being in Ihram was after this; since the Ihram was after the month of Ramadhan. But
this view is also weak because his cupping while he was in Ihram and fasting has nothing
to do with being after the month of Ramadhan when he said:
,,¯lc onc ·u¦¦in¿ -nd tlc onc ·u¦¦cd l-vc lrokcn [tlc t-·t|.,,
The Messenger of Allah (salAllahu ‘alayhi wa sallam) adopted the state of Ihram in the
sixth year, on the year of Al-Hudaybiyah in the month of Dhul-Qa`dah. He also adopted
the state of Ihram for `Umrat Al-Qadha’ (The `Umrah to be made up) in the year that
29
succeeded the sixth year in the month of Dhul-Qa`dah. In the eith year he adopted Ihram
from Al-Ji`ranah in the month of Dhul-Qa`dah, during the year of the Conquest. Then he
adopted the state of Ihram in the tenth year for the Farwell Pilgrimage.
Thus, the narration that mentions cupping while fasting does not clarify in which one of
the four times of Ihram he was cupped.
The abrogation argument should be based on two conditions.
1. The cupping took place in his last Hajj or during his `Umrah from Al-Ji`ranah,
since his saying:
,,¯lc onc ·u¦¦in¿ -nd tlc onc ·u¦¦cd l-vc lrokcn [tlc t-·t|.,,
Was in the eighth year, the year of Conquest. Based on this, we say that `Umrah during
which he was cupped was either in the sixth year or the seventh year, either in the year of
the make up `Umrah or the year of Al-Hudaybiyah.
2. It is known that when he was cupped, he did not break his fast. There is nothing
gin the Hadith to prove this. This fast was not in the month of Ramadhan because
he did not adopt Ihram in the month of Ramadhan. He must have been fasting
while traveling, but fast while traveling was not obligatory. Rather what is
confirmed from him (salAllahu ‘alayhi wa sallam) in the Sahih is that breaking
the fast while traveling was the latter of his two practices. He traveled during the
year of the Conquest of Makkah while fasting; upon reaching Kadid he broke his
fast while he people were watching him. It is not known that he fasted after this
that he would fast while traveling. We also do not know of him fasting while in
Ihram for Hajj. All of this supports the view that he was cupped while in Ihram
before the year of the Conquest of Makkah when he said:
,,¯lc onc ·u¦¦in¿ -nd tlc onc ·u¦¦cd l-vc lrokcn [tlc t-·t|.,,
This was said during the year of the Conquest whitout a doubt, as reported in the most
authentic Hadiths.
Ahmad said: Isma`il informed us from Khalid Al-Hadhdha’ from Abi Qilabah from Al-
Ash`ath from Shaddad bin `Aws that in the year of the Conquest he passed with the
Prophet by a man who was being cupped in Baqi` after eighteen days passed during the
month of Ramadhan. At that time the Messenger of Allah |.,,
Imam Ahmad also said: Isma`il narrated to us saying that Hisham Ad-Distawa’I on the
authority of Yahya bin Abi Kathir from Abi Qilabah from Abi Asma’ from Thawban who
30
said: “The Messenger of Allah (salAllahu ‘alayhi wa sallam) came to a man while he was
being cupped during the month of Ramadhan and said:
,,¯lc onc ·u¦¦in¿ -nd tlc onc ·u¦¦cd l-vc lrokcn [tlc t-·t|.,,
He also said: Abu Al-Jawab narrated to us from `Amar bin Zuriq from ‘Ata’ bin As-Sa’ib
who said: Al-Hasan narrated to me from Ma`qil bin Sinnan Al-Ashja`I who said: “The
Prophet (salAllahu ‘alayhi wa sallam) passed by me while I was being cupped on the
eighteenth day of the month of Ramadhan and said:
,,¯lc onc ·u¦¦in¿ -nd tlc onc ·u¦¦cd l-vc lrokcn [tlc t-·t|.,,
At-Tirmidhi mentioned it from `Ali bin Al-Madani who said: “The most relibable Hadith
on this topic is the Hadith of Thawban, and the Hadith of Shaddad bin `Aws.”
At-Tirmidhi said: “I asked Al-Bukhari who said: ‘On this topic, there is not a Hadith
more authentic than the Hadith of Shaddad bin `Aws and the Hadith of Thawban.’ I said:
‘What about the contradictions?’ He said: “Both of them are Sahih in my opinion; since
Yahya bin Sa`id narrated it from Abi Qilabah from Abi Asma’ from Thawban, and he
also narrated from Abu Al-Ash`asth from Shaddad as two Hadiths together.’”
What Al-Bukhari said is among the clearest proofs to the soundness of the two Hadiths
Abi Qilabah reported. As for him saying that there is some confusion in the narration it is
because it was narrated with two different chains of narrators.
So it is clear that – the Imam – Yahya bin Sa`id narrated this from Abi Qilabah with this
chain of narrated and with the other. Such practice only means that the Hadith has
multiple routes.
Az-Zuhri narrated this Hadith from Sa`id from Abu Hurayrah one time, and sometimes
from someone else from Abu Hurayrah. Thus, this is the abrogating Hadith even though
the time is not known.
When two pieces of information contradict each other, one changing from the basic case,
and the other remaining according to it, then the changing one is the abrogating, so that
the ruling should not be changed twice. If we suppose that his cupping (the cupping of
the Prophet) was before he prohibited the fasting person from cupping, then the ruling did
not change the matter. And if we suppose it was after the prohibition, it means that the
ruling must have changed twice.
Besides, if the fasting was not obligatory, it may be that he broke his fast out for the need
to be cupped. They used to break their voluntary fast for things less important than this.
He would enter his house, if they said: “We have food,” he would say: “Bring it, for I
have been fasting since the morning.”
31
Ibn `Abbas who did not know what the Messenger of Allah (salAllahu ‘alayhi wa sallam)
intended only saw him, or he was told by someone who saw him that he had been fasting
since the morning and he was cupped. This does not necessitate that they knew whether
he intended to continue his fast or not. It seems as if those who claim that the Hadith is
abrogated are limited to argue it from these two ways, one is that it is not a proof, and the
other is that it is abrogated.
But what proves and supports that the Hadith in question is the abrogating one is what
was reported from Ad-Daraqutni, that Al-Baghawi informed us: ‘Uthman bin Abi
Shaybah said: Khalid bin Mukhlid informed us from ‘Abdullah bin Al-Muthanna from
Thabit from Anas bin Malik who said: “We first disliked cupping for the fasting person
when the Messenger of Allah (salAllahu ‘alayhi wa sallam) passed by Ja`far bin Abu
talib while he was being cupped and said:
,,Iotl tlc·c two l-vc lrokcn tlcir t-·t.,,
Then the Messenger of Allah (salAllahu ‘alayhi wa sallam) permitted cupping for the
fasting person, and Anas would be cupped while fasting.
Ad-Daraqutni said: “All of them (the narrators) are trustworthy and I know no of any
deficiency in it.”
Abu Al-Faraj Ibn Al-Jawzi said: “Ahmad bin Hanbal said: ‘Khalid bin Mukhlid has many
Munkar Hadiths.” I say, a proof that this Hadith is one of his Munkar Hadiths is that
none of the reliable compilations of Hadith narrated it, although it seems to meet the
criteria of Al-Bukhari.
What is popular among the people of Basrah is that cupping breaks the fast, and Ja`far
bin Abi Talib only arrived from Ethiopia in the year of Khaybar, at the end of the sixth
year or at the beginning of the seventh year, since Khaybar took place during this period
in the seventh year, and they say it was during the year of Mu’tah expedition before the
year of the Conquest of Makkah. But he did not attend the Conquest of Makkah, so he
observed fast only one time with the Prophet in the seventh year. If this ruling was
legislated during that year, it would have been circulated and widespread.
The Hadith in question was after this during the eighth year. If it is preserved, it is
understood that the Prophet (salAllahu ‘alayhi wa sallam) said this for two successive
years. No confirmed statement has been conveyed from him allowing cupping after that.
Thus, it seems to be a Mudraj Hadith from Anas who did not say that, or that Anas did
not hear this from the Prophet (salAllahu ‘alayhi wa sallam) but was informed that the
Messenger of Allah (salAllahu ‘alayhi wa sallam) permitted it. Or it may be that some of
the Tabi‘in reported it.
A proof that this Hadith is not preserved – neither from Anas nor Thabit – is what Al-
Bukhari narrated in his Sahih on the authority of Thabit saying: “Anas bin Malik was
asked: `Did you (companions) dislike cupping for the fasting person?’ He said: ‘No
32
except for fear of weakness.’” There is another nattation that adds the words: “Duing the
lifetime of the Prophet (salAllahu ‘alayhi wa sallam).”
This is Thabit who narrated cupping on the authority of Anas, and there is nothing in this
except that they disliked cupping for the sake of the weakness it causes. If he knew that
it breaks the fast he would not say this, and if he knew that there was permission for it
then he would not say they disliked what the Prophet (salAllahu ‘alayhi wa sallam)
permitted. So it is known that Anas only knew what he thought which was that the
companions who only disliked cupping because of the weakness it causes.
This meaning is sound, and it is the reason for breaking the fast just as it is broken by
intentionally vomiting and by the menstruation blood of the woman.
What supports the view that breaking the fast by cupping is the abrogating rule, is what is
reported from most of his close companions who accompanied him in both residence and
travelling and who knew what the others did not know of his affairs like Bilal, ‘A’ishah,,
may Allah be pleased with them. And like Usamah, and Thawban his two freed slaves
may Allah be pleased with them, the Ansar (the helpers) who were his entourage, like
Rafi` bin Khadij and Shaddad bin `Aws.
In the Musnad of Imam Ahmad; ‘Abdur-Razzaq narrated to us, Ma`mar narrated to us
from Yahya bin kathir from `Abdullah bin Qaridh from As-Sa`ib bin Yazid on the
authority of Rafi` bin Khadij bin Hanbal said: “The most authentic Hadith in this topic is the Hadith of
Rafi`.”[2]
[Ahmad said: “Yahya bin Sa`id narrated to us from Ash`ath Al-Harani from Usamah bin
Zayd also said: “Yazid bin Harun narrated to us from Abu Al-‘Ala’ from Qatadah from
Shahr bin Hawshab from Bilal|.,,
He also said: “`Ali bin `Abdullah narrated to us that `Abdul-Wahab Ath-Thaqafi said:
Yunus bin `Ubayd narrated from Al-Hasan from Abu Hurayrah|.,,
33
Ahmad said: “Abu An-Nadhr narrated to us; Abu Mu`awiyah narrated to us, from Sufyan,
from Layth, from ‘Ata’ from, ‘Aishah’ who said that Allah’s Messenger |.,,
Although it is said that Al-Hasan Al-Basri did not hear from ‘Usamah nor from Abu
Hurayrah, he has many Hadiths from the companions on this topic by which he used to
make verdicts, such narrations are Ma`qal bin Sinnan, `Usamah and Abu Hurayrah, this is
why Al-Bukhari said: “Al-Hasan would.” The people of Basrah used to close the cupping
shops at the beginning of the month of Ramadhan, as mentioned by Ahmad and others.
Anas bin Malik was the last to die in Basrah, and all the scholars of Basrah used to follow
his views. Had there been a Sunnah with Anas from the Prophet stating his permission of
cupping after the prohibition of it, then it would have been known and followed by the
people of Basrah; especially when it is mentioned that Thabit is the one who heard it
from Anas. Thabit was one of Basrah’s well known scholars and one of the closes to Al-
Has. A further support to this is that Abu Qilabah is one of the closest
companions of Anas and he is the one who narrates the Hadith:
,,¯lc onc ·u¦¦in¿ -nd tlc onc ·u¦¦cd l-vc lrokcn [tlc t-·t|.,,. This was mentioned by Al-Kharqi
since he mentioned “when one is cupped” among the times that break one’s fast,
but he did not mention when one cups.
But what is popularly written from Ahmad and the majority of his companions is that
both matters break the fast, and this is what is proven by the texts, so there is no way to
avert it, even if we did not understand the wisdom behind it.
• It breaks the fast only in case of the one cupped whose blood is drained, cases of
venesection and other things that are not covered by the term “cupping” do not
break the fast. This is the saying of Al-Qadhi and his companions. This is the
view mentioned by the author of Al-Muharrar. Then, according to this saying,
does slitting the ear fall under the category of cupping or not? The later scholars
differ over this. Some of them said that is a kind of cupping. This is what was
said by our Shaykh Abu Muhammad Al-Maqdisi, and it is supported by all the
scholars` statements. For there is nothing in their sayings that make an exception
34
for such incisions, and if it was something that did not fall under the category of
cupping then they would have mentioned it therefore it is known that, according
to them, incision area a kind form of cupping. Our Shaykh Abu Muhammad said:
“This is what is correct.” Up to his saying. [3]
[“Some of them said these cuts are not of a type of cupping since the process is even
weaker than venesection. If they say that venesection does not break the fast, then such
incisions would fall into two different categories. This is the view of Abu `Abdullah bin
Hamdan.
But the first view is more correct since incisions are a type of cupping or similar to it
every way since cupping is not limited to the leg, it is done on the head, the neck, the
nape, etc.
Those who differentiate between them say: “the one making an incision does not suck the
blood from the vein as the one cupping does, thus, he is not the same the one cupping,
and therefore he can not be considered the same as one being cupped either.’.
As for the case of the one being cupped, it includes what was known and what was
unknown; since the meaning denotes all of them, contrary to the meaning of cupper.
It may be said that it is included by the meaning of one cupping, but the one cupping
sucks the blood and is liable to have some of the blood enter his throat as we discussed
before.
Some others say that the fast is broken by the one cupping, but the one cupping sucks the
blood and is liable to have some of the blood enter his throat as we discussed before.
Some others say that the fast is broken by the one making incisions. This is the view of
those who consider the term cupping to include both meanings.
As for those who said that cupping but not venesection breaks one’s fast, they allege that
it is an act of worship, so we do not have to know the wisdom behind it, and analogies for
it cannot be made.
Ibn `Aqil said: “The fast of the one cupped is broken due to the cut in the skin, even if
there was no discharge of blood; since this is included in the meaning of the word
cupping.’ This is the weakest view.]
35
• This is the right view and the choice of Abu Al-Muzaffer bin Hubayrah, the just
and the knowledgeable, that both cupping, venesection and the like break the fast.
That is because the meaning (action) done in cupping is done in venesection
legislatively, reasonably, and naturally. Since the Prophet (salAllahu ‘alayhi wa
sallam) has persuaded cupping and even ordered it, his persuasion would include
what falls under its meaning like venesection and the like.
The extreme heat in hot lands agitates the blood which rises under the skin, then it is
drained by cupping. As for cold lands, the blood sinks in the veins due to the cold; since
like things attract. Likewise; stomachs get hot in winter and cold in summer. Thus, the
inhabitants of cold lands practice venesection and incisions in the veins, while those of
the hot countries practice cupping. There is no difference whatsoever between the,
neither legislatively nor from the view of reason.”
We have clarified that the view that cupping breaks the fast is in accordance with both the
fundamentals and analogy, and that it is in the same category as menstruation blood,
intentional vomiting, and masturbation.
Based on this, any way that one intends to drain their blood would break their fast, just as
it is broken by intentional vomiting by any means; whether one puts his hand inside his
mouth, inhales what helps throw up the food in his stomach, or if one puts his hand under
his belly to vomit intentionally. These are different ways to vomit intentionally. These
are different ways to vomit intentionally, and the other cases are different ways to drain
blood. Similarly the drainage of blood by any means mentioned, this one or that one is
the same from the view that it is an attempt to purify.
Thus, the perfection of the Shari`ah, its justice, and its moderation is clarified, and that
what has been established by texts and their meanings conform and concur with each
other. Allah, the Almighty, says:
¦ _ · ¯ · · ` - ·, · ,` - , · ,` , · · ` ¸ · . ¯ ` , , ¦
¦U-d it lccn trom otlcr tl-n All-l, tlcy would ·urcly l-vc tound tlcrcin m-ny -
·ontr-di·tion.¦(4:82)
As for the one cupping, he removes the air in the cup by sucking the air from it. Then the
air attracts the blood therein. It happens that some tiny drops of the blood ascend with
the air and reach his throat while he is unaware of it.
The rule in the case of something subtle but probable is that of presumption. For
example; the sleeping one who passes wind while not knowing is commanded to perform
the ablution. This applies to one cupping even more so because some of the drained
blood may reach his stomach through his saliva while he does not perceive it. The blood
falls into the category of the greatest fast breakers. It is unlawful by itself since it
encourages lusts and transgressing the just limits. The fasting person is commanded to
reduce its quantity (by fasting). The (suckled) blood increases the blood of the one
cupping, and this is prohibited. For this reason the fast of the one sleeping is nullified
36
even though he is not sure he passed wind, since it may occur while he is unaware. The
case is the same since the blood may enter the throat of the cupper while he is unaware of
it. As for the one making an incision, he is not a cupper; since he is not liable to have
blood in his throat, his fast is not broken.
Therefore if a cupper does not suck on the cup, or has someone suck the blood instead of
him, or he drains the blood a different way, then his fast would not be broken. The
Prophet’s (salAllahu ‘alayhi wa sallam) statement was meant only for the familiar cupper,
although the wording is general. Even if he (salAllahu ‘alayhi wa sallam) meant only a
specific person, the ruling still applies to all others under his category. The legislative
rule is that if a duty is applied to one person in the community then it applies to all of the
community. Thus, anyone or anything not falling under definition of the word is not
intended by the word and the concerned ruling is not applied to him in accordance with
the Shari`ah and with reason.
* * *
[1] One who owns two hundred or more camels [Al-Qamus Al-Muhit]
[2] The section between brackets appears to have been added from a different section of
the Fatawa of Ibn Taymiyyah. And Allah knows best.
[3] See the previous note
37
Question and Answers
Fasting the Cloudy day and the Day of Doubt
Shaykh al-Islam, may Allah have mercy upon him, was asked about fasting on the day of
cloud cover, whether it is obligatory or not, and on the day of doubt, is it prohibited or
not.
He answered:
As for the case of a cloudy day when the crescent cannot be seen or it is difficult to see it,
then the scholars have a number of sayings about it, that is, these are from the Madhhab
of Ahmad as well as others.
1. That fasting it is prohibited. But this prohibition is either a type of Tahrim
(absolutely unlawful), or Tanzih (severely discouraged) according to two views.
This is what is well known in the Madhhab of Malik and Ash-Shafi`i. It is also
the view of Ahmad in one of the reports and it was the choice of a group of his
companions like Abu Al-Khattab, Ibn `Aqil, Abu al-Qasim bin Mandih al-
Asbahani and others.
2. That fasting it is obligatory as chosen by Al-Qadhi, Al-Kharqi, and others among
the companions of Ahmad. It is even said that this is the most famous of what is
reported from Ahmad. But what is confirmed from Ahmad for any who is
familiar with his texts and statements is that he considered it recommended to fast
on the cloudy day, following `Abdullah bin `Umar did not oblige people to fast it,
but he would do so out of precaution. This was narrated on the authority of
`Umar, `Ali, Mu`wiyah, Abu Hurayrah, Ibn `Umar, ‘A’ishah, ‘Asma’ and others.
Among them were some who would not fast it, as in the case of the majority of the
companions. Others would prohibit fasting on it, like `Amar bin Yasir, and others. So
Ahmad, may Allah be pleased with him, would fast it out of precaution.
As for it being an obligation to fast it, this has no foundation among the words of Ahmad
or his companions. Rather many of his followers believed that it was obligatory
according to his Madhhab and they support that view.
3. It is allowable to fast it or not. This is the Madhhab of Abi Hanifah and others. It
is also the Madhhab of Ahmad in the clear texts from him. It is the Madhhab of
many of the companions and the Tabi`in if not the majority of them.
Just as it is known that there is something obstructing the observance of dawn, then
refraining or not (from eating, drinking and sexual intercourse) is allowable i.e. one is
38? Or if he is in
doubt whether the amount due for Zakah is one hundred or one hundred and twenty, so
he pays the higher amount?
All the rules of the Shari`ah uphold the fact precaution is neither obligatory nor unlawful.
One has the choice to fast it either with a general intention or the particular one. That is
to intend fasting it is a day of Ramadhan, if it is Ramadhan, otherwise not. This is
allowed in the Madhhab of Abu Hanifah, and Ahmad in accordance with the most correct
of the two narrations from him. This was reported by Al-Marwazi and others. It is also
the choice of Al-Kharqi in his explanation to Al-Mukhtasar and the choice of Abu Al-
Barakat and others.
As for the second opinion, it is that it is not allowed except with the intention that it is a
day of Ramadhan; this is according to one of the two narrations from Ahmad. This was
the view chosen by Al-Qadhi and a group of his companions.
* * *
Fasting and Shortening the Prayer for the Traveler
He was asked also wa
sallam).
They disputed over traveling for an act of disobedience, like one who travels for highway
robbery and the like, for which there are two views, and they also disputed over
shortening the prayer.
39 i· not -n -·t ot ri¿ltcou·nc·· to t-·t wlilc tr-vclin¿.,, (Al:
{ ,` ` · ` , ´ , ` , ,`, . , ,` ` , ` , ´ , ` · ` , ,`, , - ' ¸ ·`, ' ` ¸ · ·` · · ¸ , _ · ` , ' ., , · . ¯ ¸ · ,¦
¦And wlocvcr i· ill or on - }ourncy, tlc ·-mc numlcr ,ot t-·tin¿ d-y· mi··cd mu·t lc m-dc
u¦, trom otlcr d-y·. All-l intcnd· c-·c tor you, -nd Uc doc· not w-nt to m-kc tlin¿·
ditti·ult tor you.¦
It is recorded in the Musnad that the Prophet (salAllahu ‘alayhi wa sallam) said:
,,Indccd, All-l likc· tl-t Ui· ¦crmi··ion lc -do¦tcd, }u·t -· Uc l-tc· tl-t -·t· ot
di·olcdicn·c lc ·ommittcd.,, (Ahmad)
40:
,,It you lrc-k your t-·t, tli· i· ¿ood. And it you t-·t, tlcrc i· no l-rm.,, (Al-Bukhari and
Muslim)
In another Hadith he (salAllahu ‘alayhi wa sallam) said:
,,¯lc lc·t -mon¿ you -rc tlo·c wlo ·lortcn tlcir ¦r-ycr· -nd do not t-·t wlilc tr-vclin¿.,,
(-
known. (Al--
known
41.
Fasting for the Traveler: Better or Worse
He was asked about a traveler during Ramadhan who suffered neither hunger, thirst, nor
toil. What is better for him; to fast or break his fast?
He answered:
As for the traveler he breaks his fast according to the consensus of the Muslims, even if
there is no inconvenience. Breaking the fast is better for him. If he fasts, it is allowed
according to the majority of scholars; but a group of them say that there is no reward for
him.
* * *
Must One Intend to Fast the Night Before?
He was asked about an Imam of a congregational Masjid who was a Hanafi, who
mentioned to his congregation that he has a book which states that fasting in Ramadhan
without an intention before ‘Isha’ of the previous night, or after it, or the time of the pre-
dawn meal, then such fasting will not be rewarded. Is this correct or not?
He answered:
Praise be to Allah. Every Muslim should believe that the fast is obligatory for him, and he
wants to fast the month of Ramadhan. If he knows that the next day is a day of
Ramadhan, he has to intend fasting and to know that the intention is only by the heart.
Everyone who knows what he wants should intend it whether he utters his intention or
does not utter it.
42
Utterance of the intention is not obligatory according to the consensus of the Muslims.
Masses of the Muslims fast while having this intention, and their fast is sound without
any dispute between the scholars.
As for determining the intention for (fasting) the month of Ramadhan: Is it obligatory?
There are three views in the Madhhab of Ahmad:
1. The fast will not be rewarded for without the intention to fast the month of
Ramadhan. If he fasts with a general intention, or specific, or an intention for a
voluntary fast, or to fulfill a vow, he should be rewarded for none of these, as is
popular in the Madhhab of Ash Shafi`i, and Ahmad in one of the narrations.
2. It is worthy of reward in general, according to the Madhhab of Abu Hanfiah.
3. Fasting with an intention that is not specified is worthy of reward, except in the case
of the intention of Ramadhan. This is the third narration from Ahmad. It is the
choice of Al-Kharqi and Abu Al-Barakat.
The truth of this issue is that the intention follows the knowledge. If he knows that the
coming days is a day of Ramadhan, he has to define the intention in accordance with it.
If he intends to observe an optional fast, or undefined fast, his fast is not liable to be
rewarded. Since Allah, the Almighty, commanded him to intend fulfilling an obligatory
fast which is the month of Ramadhan, which he knows is obligatory. If he does not do
the obligatory action, he does not meet his obligation.
But if he does not know that the coming day is of the month of Ramadhan, he is not
required to define the intention. Whoever (of the scholars) considered it obligatory while
he did not know, he is obliging the existence of two contradicting things simultaneously.
If it is said that his fast is allowable, and he fasts in such case with either a specified or
unspecified intention, then his fast is worthy of reward. But if he intends an optional fast,
then he comes to know that it was of the month of Ramadhan, his fast is liable to be
meritorious. His case is like the case of a man who had some amount of money due from
him without knowledge of it, then he gave it anyway as a form of charity. Later it
became known that he actually owed that amount to whom he paid it. Thus he is not
required to pay it again. He will say: “What I gave you was what I owed you.” And
Allah knows the realities of all things.
As for the narration from Ahmad which states that the people are to follow the Imam in
his intention, and that fasting and breaking the fast is to be done in accordance with what
the people know – this is based upon what is recorded in the Sunan; The Prophet
(salAllahu ‘alayhi wa sallam) said:
,,Your t-·t i· on tlc d-y you t-·t. Your lrc-kin¿ t-·t i· on tlc d-y you lrc-k your t-·t.
Your d-y ot ‘Id AlAdl- i· on tlc d-y you ·clclr-tc it.,, (Abu Dawud and At-Tirmidhi)
* * *
43
Is the Intention Necessary Every Day?
He was asked: “What about one intending fast; does he need to form the intention every
day or not?”
He answered:
Everyone who knows that the coming day is a day of the month of Ramadhan and he
knows that he is to fast it, then he has intended to whether he proclaimed it or not. This is
what the masses of the Muslims do, all of them intend to fast. )
* * *
How Fast is the Fast to be Broken?
He was asked: about the sunset: “Is it permissible for the fasting person to break his fast
as soon as the sun sets?”
He answered:
If the whole disc of the sun disappears, the fasting person permitted to break his fast, it
does not matter if the red color still remains in the horizon.
When the whole disc disappears, darkness appears from the east. The Prophet (salAllahu
‘alayhi wa sallam) said:
,,Vlcn tlc ni¿lt ·omc· trom lcrc, -nd tlc d-y cnd· u¦ tlcrc, -nd tlc ·un l-· ·ct, indccd
tlc t-·t i· to lc lrokcn.,, (Al-Bukhari and Muslim)
* * *
Eating After the Earlier Adhan
He was asked about a fasting person who ate after the Adhan of Subh prayer during
Ramadhan: “What would the case be?”
He answered:
44
Praise be to Allah. If the Mu’adhdhin was calling the Adhan before Fajr has entered – as
Bilal would call the Adhan before Fajr began during the time of the Prophet (salAllahu
‘alayhi wa sallam), and as the Mu’adhdhins do in Damascus and in other countries before
Fajr begins – then there is no harm in eating or drinking after this for a little while.
If he is in doubt about whether Fajr has entered or not, then he is permitted to eat and
drink until it is clear that it has entered. If after that he learns that he had eaten after Fajr
had begun then there is a difference of opinion over whether it is obligatory for him to
make it up or not.
The most apparent view is that it is not obligatory for him to make it up as is confirmed
from `Umar. A group of predecessors and the successors have the same view. But
making it up is the popular ruling according to the Madhhabs of the Four Imams. And
Allah knows best.
* * *
If Fasting Causes Fainting and Madness
He was asked about a man who, whenever he wants to fast, he faints, and speaks
incomprehensibly. He may continue for days in this state. Some people accuse him of
madness, although this is not apparent from him?
He answered:
Praise be to Allah. If fasting causes such illness for him, he is permitted to break his fast
and make it up. If this happens whenever he fasts, then he is unable to fast. Hence he is
required to feed a poor person for everyday he breaks the fast. And Allah knows best.
* * *
The Case of A Pregnant Woman
He, may Allah have mercy on him, was asked about a pregnant woman who saw
discharge similar to that of menstruation. The blood seems regular. The midwives
advised her to break her fast for the embroyo’s health. The woman felt no pain. Is she
permitted to break her fast or not?
45
He answered:
If the pregnant woman fears any harm may befall her embryo, she is permitted to break
her fast, then make up a day for a day and feed a poor person for each day of the days she
broke her fast a pound of bread with its condiment. .
* * *
46 | https://www.scribd.com/doc/99378413/The-Nature-of-Fasting-1-By-Shaykh-Al-Islam-Taqiuddin-Ahmad | CC-MAIN-2018-05 | refinedweb | 16,605 | 66.67 |
I
If you would like to receive an email when updates are made to this post, please register here
RSS
PingBack from
Is there a list of fixes included in the service pack available at all?
@J:
In relation to ignoring the reboot prompt, which SP1 does that apply to? TFS or VS2008?
That looks like a good list for VS & .NET. I will produce a list for TFS in the next couple of days.
The reboot issues applies to .NET 3.5 SP1 and any other SP that includes it. That means VS, but not TFS.
How long (roughly as you like), I've been running my install for nearly 4 hours, (The only indication I have it's actually running is through ProcMon)
The install for Visual Studio Team System 2008 Development Edition SP1 took approximately 2 hours on my machine. Nothing fancy, Vista Enterprise, 2GB RAM, 3.2 GHZ Pentium D.
Much longer than I though it would when I began, however, I did not have any issues. I did notice a prompt to reboot from Windows Update, but I ignored it until the install was complete.
That's a little longer than what I have heard is "typical" - not sure why. However, the install is not fast no matter how you slice it. Over an hour is pretty normal.
I had an issue installing TFS SP1. Kept getting a unique key failure when trying to create an index on the dbo.Constants table in one of the database tables. There were somehow 4-5 entries with the same username and RemovedDate (or DateRemoved, I am at home now and don't remember the exact name) which caused the setup to have a fatal error and left TFS in an unusable state until I figured out what was going on by looking at the msi log file. To resolve the issue, I simply changed the minute value of each duplicate entry to make them unique and then re-ran the installation. Not sure if this is a known bug or what but you may want to look into it.
Thanks for the info. I'll ask someone to look into it.
Will there be a seperate installation for the Team Explorer. We have a team where there is a need for installing the Team Explorer to get access to work items, sources, etc. but member do not work with Visual Studio for coding etc. Installing a 800mb sp1 where the original installation is 387mb seems overkill.
Other thing: when installing I only had 600mb free space. Installation could only start when I had 2.3gb available. After freeing up until 2.3gb I ended up with having 1.7gb free after installation. Is the 2.3gb really needed?
No, there won't be a separate Team Explorer patch. You must use the VS patch. However, if you only have the Team Explorer installed, it will only update the Team Explorer components.
I believe the need for all of that extra space is for temporary files that it uses during setup.
Brian Harry on VS/VSTS/TFS/.NET 3.5 SP1 is shipping! and More things to know about installing SP1 Etienne...
Brian, our developers would like to upgrade to SP1 right away. Meaning we should probably upgrade our build machines as well so we compile against the same versions as they do.
However, I think we're going to wait a few weeks before we update our TFS 2008 server to SP1 so that we can test the install on our DEV TFS instance first.
Have you heard of any issues with Developer Machines (running VS 2008 SP1) or Build Machines (running VS 2008 SP1, .NET 3.5 SP1, and Team Build SP1) against a TFS 2008 server running RTM?
Long installs might be the result of the prompt for the original installation disk "popping under" the install progress dialog box.
After getting SP1 up and running - I went to add the infragistics Silverlight controls to the toolbox -- as soon as the openfile dialog box posts, VS2008 vanishes with two separate errors appearing in the event log....NET Runtime version 2.0.50727.3053 - Fatal Execution Engine Error (6AFA0F92) (0)
Log Name: Application
Source: .NET Runtime
Date: 8/13/2008 2:10:04 PM
Event ID: 1023
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: guycox-vistalt.alex.robbinsgioia.com
Description:
.NET Runtime version 2.0.50727.3053 - Fatal Execution Engine Error (6ACF5E00) (80131506)
Event Xml:
<Event xmlns="">
<System>
<Provider Name=".NET Runtime" />
<EventID Qualifiers="0">1023</EventID>
<Level>2</Level>
<Task>0</Task>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime="2008-08-13T18:10:04.000Z" />
<EventRecordID>2562</EventRecordID>
<Channel>Application</Channel>
<Computer>guycox-vistalt.alex.robbinsgioia.com</Computer>
<Security />
</System>
<EventData>
<Data>.NET Runtime version 2.0.50727.3053 - Fatal Execution Engine Error (6ACF5E00) (80131506)</Data>
</EventData>
</Event>
Mac, the only thing you are suggesting that would worry me at all is Team Build SP1 against a pre-SP1 server. It will probably work but all of our testing is always done using matched versions of TFS and Team build. You'll be blazing a bit of new territory in that respect.
GuyO,
We have confirmed the bug and it is in the UI framework in the .NET Framework. The team is still assessing the issue. If this issue is a real problem for you, you could contact customer service and request a hotfix. That will initiate a process internally. We don't honor all hotfix requests but our goal is to honor at least 90% of them.
Once SP1 for TFS is installed, in a dual-server config, what's the proper upgrade order for SQL Server? Does the reporting server on the AT need to be upgraded first; does the database server need to go first; does it matter?
Thanks!
Hi onovotny,
If you plan to Upgrade to SQL Server 2008, the order I recommend is
Update the Database Server
Update the Reporting Services on the AT
Repair TFS
I wound up doing that (but with RS first since the SQL docs said it was a supported config).
I ran into issues with the repair though with WMI exceptions. I use three FQDN's (wss. for sharepoint/rs, tfs. for tfs and tswa. for web access). The tfrsconfig command kept failing, so I wound up renaming it and putting a dummy do-nothing program in its place. I also had an issue with CreateDS which I solved the same way. After the repair finished, I restored the original commands and ran used a modified command line that was in the msi log but pointing to the correct name...
I actually skipped tfrsconfig since it's already setup right, but I did re-run CreateDS as it created a new datasource name from what was there.
Now things seem to work with the exception of TSWA. TSWA's Reports tab shows an error that it didn't get the response it was expecting.
Reports from the IDE work though...
Did you apply SP1 to the client on the TSWA server?
Yes, that's the first thing I did.
Installing the TSWA SP1 CTP did the trick and the Reports tab works again.
Mac, running an SP1 build agent/server against an RTM TFS 2008 server will work correctly. Also, a TFS SP1 server will work with an RTM build agent/server. We tested it both ways.
Buck
I see GuyO already posted a similiar issue but an FYI...I just installed vs 2008 sp1 and am attempting to add the wpf datagrid control to my toolbox but when doing so, VS immediately closes down and I also get the 2 errors in my event viewer:
Event Type: Error
Event Source: .NET Runtime
Event Category: None
Event ID: 1023
Date: 8/18/2008
Time: 11:01:46 AM
User: N/A
Computer: RFEUERM
.NET Runtime version 2.0.50727.3053 - Fatal Execution Engine Error (7A035E00) (80131506)
.NET Runtime version 2.0.50727.3053 - Fatal Execution Engine Error (7A2E0F92) (0)
Hi,
For me, VS2008 SP1 crashes and I get exactly the same errors as Rick in the event log when the WPF designer loads (i.e. when openíng a XAML file). The designer worked at first but now it crashes every time, even after reboot.
Any solution/workaround for this problem yet?
Btw, I'm running on Win 2003.
Now I've tried to uninstall and reinstall SP1 and also to remove all thirp-party add-ons (suggested somewhere else) but to no effect. Only thing left now, I guess, is to remove SP1 and wait for a resolution to the problem.
I read (somwhere, can't find it again) that if you have both the client and app tier on the same machine, you need to install the VS SP1 first and THEN the TFS SP1...
I did a 'slipstreamed' install of TFS w/ SP1 but now I need to install Team Explorer on the TFS box.
What's going to happen if I install Team Explorer on there and then VS SP1? Will it break? Is there a way to do a 'slipstreamed' install of Team Explorer w/ VS SP1?
This is a VS/VC/PSDK issue, not a TS issue, but everybody talking about plans to upgrade their build servers should take a chill pill:
Issue 2.3.1.13 in the release notes is the tip of an iceberg, the manifests for any DLL or EXE built against the CRT and/or MFC import libs have the RTM version number embedded in them, leading to a failure when the built program is deployed. Connect issue 362504.
One can't help wondering what role native code scenarios play in the VS test plans. One also wonders, perhaps knowing it to be "just one of those things", how the ATL problem came to be written up in the release notes without somebody thinking to check all the other import libraries. But perpaps the root causes are unrelated. And by "unrelated" I mean related only through the cultural antipathy of the VS team toward native code tools and toward we who depend on those tools.
Actually it probably isn't fair to characterize it as antipathy, it's probably just apathy. But even that is a shallow analysis: I've come to believe that thinking of VS as an application rather than a mission-critical systems program (when it is really both) has allowed the developers and the QA apparatus to focus on feature churn to the exclusion of core/legacy functionality and the interactions among features. Having had a lot of time to meditate on this, my considered opinion is that this imbalance is probably the root cause of this deplorable situation. It just feels better to cast aspersions on the morals and motives of individuals I've never met for some reason. Probably because their decisions have caused me a great deal of inconvenience and embarassment of late.
Oh well, that's my $0.02 worth.
Why am I posting here? Because the light is on and someone is home here. They shoot the messenger from behind hard cover over in the other camp.
VERY SLOW INSTALL...
The progress bar stayed at 100% for over an hour on a dual core processor system.
This does not give us a good feeling about the quality of this product!
i get the same fatal execution error today as the above when i click f5...
we have wpf in our windowsapp - but there is another error msg before :
- System
- Provider
[ Name] Application Error
- EventID 1000
[ Qualifiers] 0
Level 2
Task 100
Keywords 0x80000000000000
- TimeCreated
[ SystemTime] 2008-08-22T11:25:12.000Z
EventRecordID 17269
Channel Application
Computer [removed}
Security
- EventData
devenv.exe
9.0.30729.1
488f2b50
unknown
0.0.0.0
00000000
c0000005
330001ca
348
01c9043c88a15c93
UPdateing my rant about the library dependencies: I've updated the connect ticket. There are two real problems, neither is a show-stopper.
One, the rules that determine how DLLs are chosen at runtime are extremely complex and there is very poor visibilitiy into them. Depends.exe, the best tool available as far as I can tell, has a bug and a design deficiency. The bug is what caused me to chase down this rabbit hole in the first place.
But that's a good thing, because otherwise it would have been a while before I noticed that our build outputs showed dependencies on older versions of the libraries. That means that our cusomters could end up running our program against the wrong libraries, creating a huge regression risk.
I'm still trying to undestand how all the moving parts interrelate but I'm moving forward again, will try once again to get an automated build going.
It's been a long strange trip. TFS source control is a huge win. Everything else has been a huge disappointment.
One the .NET Runtime version 2.0.50727.3053 error uninstall powercommand tools for visual studio.
This has been backed up by multiple people,
Here's one that could be a symptom of installing SP1 on the development machine while the server remains at RTM:
I'm trying to do a large merge. I get a few hundred conflicts and spend a couple of days, between other tasks, resolving them. All resolved now. No intervening checkins on either branch. But when I try to check in my thousands of pending changes I get the checkin progress dialog briefly, followed by the conflict resolution dialog. It's list is empty and it has the "(!) All conflicts resolved but no files checked in due to initial conflicts." message across the top.
I've tried exiting and restarting the IDE, no joy. I've merged many similar batches along this same merge path prior to installing SP1. I'm hoping to upgrade the server soon but right now I'm full time banging my head against our build automation conundrum.
I think I'll try backing SP1 out of the machine that has the merge pending and see what happens.
Nah, that didn't help. I tried the checkin from the command line and got the warnings: two files which had been deleted in the target branch were silently pended as merge, edit. I undid them and re-executed the merge, it pended them silently again.
That was with SP1 deinstalled from VS TeamDev and Team Explorer but I left the Framework installed. Anyway, the checkin completed once I identified the problem files.
I checked, there was nothing in the source control Output window; no indication from inside the IDE as to what was really wrong.
Both of these were files changed earlier in the source branch and I must have skipped them in previous merges. I just don't recall having seen it act this way; the empty conflict resolution dialog seems new.
I have the same problem as Rick and Guy. Where can I get the hotfix for the net runtime error? I have been trying to install VS2008 for the last two weeks.
Thanks...
Sorry for being gone so long. Had other stuff I had to get done. Rick, Guy, Hutty, I am checking on the status of the hotfix now. I'll post as soon as I find out.
Sorry about that.
Jim, you should be able to install SP1 again and it will just update the components that still need it.
swn1, I'm sorry you've had so many issues. I am passing on your various pieces of feedback to all of the right people in DevDiv. I'll let you know what I hear back.
Hello swn1,
sorry about problems during the merge operation.
The files resurrected during the merge are definitely confusing, it would be nice to do history on both the source and target item, to see what changes need to be propagated.
The empty Resolve Dialog is disturbing. How did you perform checkin initially - I understand it was inside VS - was it from the Pending Changes Toolwindow, Solution Explorer or Source Control Explorer? Am I correct, that when you did checkin from command line, resolve dialog contained conflicts? It would indicate bug in code that displays Resolve dialog inside VS.
Thanks Brian for the feedback.
Michal -
TFS source control gets easily confused when renames, undeletes, and other namespace operations are composed, with each other or with edits. I've found that I sometimes have no choice but to check in a preliminary changeset to conform the target namespace to the source before attempting a merge.
A mathematician would recognize this as a broken "group" problem and provide some high-level analysis to help resolve it. I've seen very recently some blog text from somebody on the team who understands the math -- he was talking about SCC namespaces in a way that actually made sense -- so you've got the resources in-house.
I always use the Pending Changes window to launch checkins; it was what I discovered first and it gives me the most confidence / best chance at not messing it up. I multitask a lot and way too frequently have changes belonging to different tasks pending.
A way to partition pending changes into groups (proto-changesets) would be a great feature.
When I did the checkin from the command line it did not give me a conflict dialog at all, if I remember correctly. But that's not reliable, I may have dismissed it. In any case, there was console output including text in yellow that described the underlying problem and identified the two files that were involved. With that clue I was able to work around it.
As I mentioned, that diagnostic output did not appear in the Output window within VS.
Hope this helps,
-swn
Just completed the sp1 install.
First did vs2008 sp1 and that took 45 min.
Then did the TFS 2008 sp1 and that took about 25 min. The progress bar reached 100% at least several minutes before the installation completed so be patient.
After the install the team service was not running. Restarted it and then had to enable the build agent as well.
-- Steven
Thanks for the feedback swn1. We know there are issues with conflict resolution and management. We are in process of reworking that for our next release and I think you will find both to be much better.
The request for multiple pending changesets has been on our backlog for a while. I'm not sure when we will get to it but we won't forget it.
Thanks,
swn1, Sorry it took so long but here's a response to your VC issue. I can't take credit for it - the VC team put it together for me :)
Firstly, thanks very much for taking the time to post and highlight this issue. It is true that there was recently a change to the default behavior of binding to versions of the CRT/MFC/etc. I would like to point out that this was not an accidental change but rather one that the VC team took after a lot of customer feedback and internal evaluation. Changing default behavior is never an easy choice, because of the “surprise” that it creates. I could go into lots of detail about the pros and cons of the situation but it is probably best if I just point you to the articles that the VC team posted around this decision:
•
•
As you can see from the articles there is also a way to specify your behavior using a set of defines if you prefer the original behavior, again I will leave it to the articles to provide the complete set of options. I also see that the resolution on the Connect site was probably frustrating, just hearing that something is “By Design” is not always a great customer experience – sorry if that added to your annoyance with this issue. As always we really value your feedback on our decisions, if you have any follow up comments or questions please feel free to post again, either here or if you prefer to go straight to the horse’s mouth, over on the VC team blog. Since the original VC Blog article is now closed for comments, you could use this more recent SP1 article.
I've been having extreme problems with Team Build and I wanted to put up some yellow tape around the big open hole so nobody else fall in it:
With the build server and SCC server as VMs on the same big VM server, MSBuild and TFSBuildService crash (seemingly) randomly. I rebuilt my build server as a stand-alone machine and all is well: the clouds parted and birds began to sing.
After more than a year of messing with it I finally have a reliable automated build system that operates on my source tree as-is.
There's a long list of other configuration and install sequence differences between the working server and all the failed incarnations but another user on the forum reported a similar finding and I tried an awful lot of permutations before giving up on the VM. I'm fairly sure it will turn out to be the zero-latency networking exposing a race in the TFS client code common to those two programs.
A much happier camper now,
swn,
I'm the PM for Team Build and I'd like to get some details on the configuration you were using when you were seeing spurious failures. I think you were running the application tier and the build service in separate VMs on the same host. Is that right? Could you provide some specifics on the configuration?
* What virtual machine manager were you using?
* What software was running on each VM?
* How much memory, hard disk space, etc. was allocated for each VM?
* Were there any event log entries coinciding with the spurious failures?
If you'd like to contact me directly, you can initiate a conversation through my MSDN blog at:
Regards,
Jim Lamb
Team Build PM
Any updates on when the fix for the checkin e-mail bug with TSWA will be available on code gallery?
Service Packs for VSTS/TFS do not only contain bug fixes, but quite a few nice new features as you can
Josh, we are in final production and should have it available for download in a week or two.
GuyO, are you still seeing the fatal error after re-install everything? I am the PM from Windows Forms team and would like to gather more info about how to repro this problem.
-Wenbin
Hi GuyO, hope you can see this post. The crash you mentioned after SP1 is installed still repro?
Somewhat belatedly, I've been getting the same error as GuyO and others - turned out to be a conflict with PowerCommands for VS 2008.
More details on a workaround can be found here:
I had similar issues.
I had checked in some delete files the previous day. The next day I got latest from SC. Made some changes. Tried to checkin and the deleted changes from the previous day showed up again, alsong with my new changes. With several attempts to checkin failing with the following: "All conflicts resolved but no files checked in due to initial conflicts." And no conflicts listed.
To resolve I had a co-worker checkin a changeset. I got latest on that changeset. Performed Undo operations on the deletes. Got latest again. Checked in successfully.
This solve the problem for me:
I first encountered this issue well after I had installed PowerCommands (which appears to be the cause of the issue) but directly after installing StyleCop. I believe the StyleCop install made a path or config change that precipitated the issue.
i got this problem when install azure sdk, but after reinstall .net framework 3.5 sp1 then it's OK | http://blogs.msdn.com/bharry/archive/2008/08/12/more-things-to-know-about-installing-sp1.aspx | crawl-002 | refinedweb | 3,997 | 72.97 |
Little Bit
The tiny micro:bit single-board computer from the BBC can be used both as an alternative and as a handy companion to the Raspberry Pi. Get started with this simple yet versatile machine.
Lead Image © victor kuznetsov, 123RF.com
The tiny micro:bit single-board computer from the BBC can be used both as an alternative and as a handy companion to the Raspberry Pi. Get started with this simple yet versatile machine.
The Raspberry Pi single-board computer (SBC) and its countless variants can be used for any project imaginable: from learning to code and building a personal server, to exploring the Internet of Things, to building robots. Because it's a computer that runs a full-blown operating system, Raspberry Pi requires some technical skills and effort to master, and it can be overkill for some projects. Of course, Arduino provides a simpler and more approachable physical computing platform. But wouldn't it be great if you could find a device that combines the versatility of Raspberry Pi with the simplicity of Arduino?
Enter BBC micro:bit [1]. It's probably unfair to call this tiny machine (Figure 1) a mere cross between Raspberry Pi and Arduino, because micro:bit is a unique and innovative device in its own right, and it has several advantages that make it a worthy alternative to both Raspberry Pi and Arduino. The most obvious advantage is the price: At ~£13 (~$17), micro:bit is cheaper than most Raspberry Pi and Arduino models. Unlike Raspberry Pi, you can use micro:bit right out of the box without going through the rigmarole of downloading an image file, burning it onto an SD card, and then booting and configuring the system. This also means it's practically impossible to make micro:bit unbootable or brick the board altogether. Better still, micro:bit has on-board physical buttons, an LED array, and sensors.
The board supports the MicroPython dialect of the Python scripting language, so you build simple projects right away with a minimum of effort. Similar to Raspberry Pi, micro:bit features a GPIO port, so it's possible to use the board with sensors, motors, and other devices. Add in low power consumption, and you have a board that can be put to a variety of practical uses. The best part is that you can use a Raspberry Pi to program and manage micro:bit.
To get started with micro:bit, you need several additional parts, including a USB data cable, a battery holder, and an edge connector breakout board. Both the battery holder and the breakout board are optional, but they can be useful for many projects. The battery holder allows you to power the micro:bit board with two AAA batteries, and the breakout board makes access to GPIO pins easier.
Although you can use the default web-based coding environment at BBC micro:bit World [2] to write micro:bit scripts, you can also program locally using the dedicated Mu editor [3]. To install it on Raspberry Pi, grab the latest
mu.bin binary file from the Mu editor's website and enter:
chmod +x mu.bin sudo usermod -a -G dialout <username>
The first line makes the file executable, and the second line makes it possible to push scripts to micro:bit directly from the editor. Replace
<username> with your actual username (e.g., pi).
With all the pieces in place, connect the Raspberry Pi to the micro:bit board via the micro-USB port, launch the Mu editor, and you are ready to write your first MicroPython script.
The version of MicroPython installed on micro:bit includes the microbit module that provides an API for all the board's hardware components. So all micro:bit scripts should start with the
from microbit import * statement that imports the module (Figure 2). Because the board comes with an array of LEDs, you can start with something more interesting than a simple blinking LED as your first project. For example, you can make the LED array display a scrolling text message using the
display.scroll() method. The simple script below scrolls the "Oh Hai, World!" text on the LED array:
from microbit import * display.scroll("Oh Hai, World!")
To scroll this message infinitely, you can add a
while loop as follows:
from microbit import * while True: display.scroll("Oh Hai World!")
Paste the code above into a new text file in the Mu editor, save the script, make sure that micro:bit is connected to the Raspberry Pi, and then press the Flash button to compile the script and transfer it to micro:bit. The yellow LED on the micro:bit should blink during the transfer, and once the operation is complete, you should see the message scrolling on the LED array.
The LED array can be used to display simple images, too, and the
display.show() method allows you to do just that. Better still, MicroPython comes with a library of ready-made images for use in your scripts. This library includes faces, arrows, and objects that you can integrate into your scripts. Displaying an image is as easy as specifying it in the
display.show() routine; for example:
display.show(Image.MEH) display.show(Image.UMBRELLA)
Each LED in the array can be controlled individually using the
display.set_pixel method that requires three values: pixel row, pixel column, and pixel brightness. So the
display.set_pixel(2, 2, 9) display.set_pixel(2, 2, 0)
commands, for example, set the brightness of the middle LED (rows and columns are numbered 0 through 4) to the maximum level in the first line (thus turning the LED on) and to the minimum level in the second line (thus turning the LED off). The script in Listing 1 [4] demonstrates how the
display.set_pixel() routine can be used to control individual LEDs.
Listing 1
Controlling Individual LEDs
01 from microbit import * 02 from random import randint 03 while True: 04 val = 9 05 x = randint(0, 4) 06 y = randint(0,4) 07 display.set_pixel(x, y, val) 08 while (val > 0): 09 display.set_pixel(x, y, val) 10 sleep(500) 11 val = val-1 12 display.set_pixel(x, y, 0)
The script uses the
randint method to generate random numbers – between
0 and
4 in this case. These numbers are then used to specify pixel position. The
val variable determines the brightness level, and it's set to
9 from the start. The
while (val > 0) loop uses the
val variable as a counter gradually to reduce the LED's brightness and then turn it off. Flash the script to micro:bit, and you should see random LEDs turning on and gradually fading away.
The micropython module identifies the two physical on-board buttons as
button_a and
button_b, and it has three button-related methods. The first one is the
button.is_pressed() routine that returns True if the button is pressed and returns False otherwise. The script in Listing 2 illustrates how this method works. When button A is pressed for half a second, the middle LED turns on; otherwise it stays off.
Listing 2
Programmable Buttons
01 from microbit import * 02 03 while True: 04 sleep(500) 05 if button_a.is_pressed(): 06 display.set_pixel(2, 2, 9) 07 sleep(1000) 08 else: 09 display.set_pixel(2, 2, 0)
The other two button-related methods are
button.was_pressed() (returns True if the button was pressed since the device booted or the last time the method was called) and
button.get_presses() (returns the total number of times the button was pressed).
In addition to the LED array and buttons, micro:bit also features a compass and an accelerometer, and the microbit module has several methods for accessing both components. The
accelerometer.get_x(),
accelerometer.get_y(), and
accelerometer.get_z() methods return acceleration measurements in the x, y, and z axes as integer numbers. Using these methods, you can whip up a simple script like that in Listing 3 to transform micro:bit into a no-frills digital level.
Listing 3
Accelerometer
01 from microbit import * 02 03 while True: 04 x = accelerometer.get_x() 05 if x > 50: 06 display.show(Image.ARROW_W) 07 elif x < -50: 08 display.show(Image.ARROW_E) 09 else: 10 display.show(Image.TARGET)
The script reads the x-axis measurements, and depending on the obtained values, the LED array displays the appropriate arrow images that help you level the board. When the board is leveled, the LED array shows an image that looks like a target. The default
50 and
-50 reference values control the level's sensitivity. Increase the values to make the level less sensitive and decrease to boost sensitivity.
The accelerometer also supports basic gestures, such as
"up",
"down",
"left",
"right",
"face up",
"face down", and
"shake", and the
accelerometer.current_gesture() method can be used to detect these gestures. The code in Listing 4 displays the butterfly image when you shake the board.
Listing 4
The "shake" Gesture
01 from microbit import * 02 03 while True: 04 gesture = accelerometer.current_gesture() 05 if gesture == "shake": 06 display.show(Image.BUTTERFLY) 07 else: 08 display.show(Image.ASLEEP)
The strips at the bottom of the micro:bit board are not decorations: They are GPIO pins. The micro:bit has 25 GPIO connectors, and five of them are wider than the others and have holes, which makes it easier to use them with banana plugs and crocodile clips. The pins labeled 0, 1, and 2 on the front side of the board (in Figure 1, the first three holes, left to right) are digital pins with analog-to-digital converters (ADCs), which means that they can be used with various analog devices and sensors – something you can't do with Raspberry Pi without additional work. If you are familiar with the GPIO on Raspberry Pi, you won't have problems using GPIO pins on micro:bit, and the microbit module provides several methods for working with pins. These include
pin.is_touched() (returns True if the pin is in fact touched),
pin.read_digital() (reads the digital input of the pin; returns 1 if the pin is high, and 0 if it's low), and
pin.write_digital() (sets the pin to high or low using the 1 and 0 values). The following script is a simple and fun example of using micro:bit's GPIO pin:
from microbit import * while True: if pin0.is_touched(): display.show(Image.SURPRISED) else: display.show(Image.ASLEEP)
It displays different messages depending on whether it is being touched or not.
Pages: 6
Price $15.99
(incl. VAT) | https://www.raspberry-pi-geek.com/Archive/2016/19/Programming-the-micro-bit-with-MicroPython | CC-MAIN-2021-10 | refinedweb | 1,772 | 63.49 |
Important: Please read the Qt Code of Conduct -
QHelpEngine how-to
Hello,:
@();
@
but it shows a empty index and a empty content.
Application has no advice output.
Where I did wrong?
Please help me.
Best Regards
Willy
Does it work with the docs on the hard disk?
Did you add the file to the .qrc configuration file?
Are the files present in the resources? You can use QDir on file path ":/" to look at the contents.
bq. Does it work with the docs on the hard disk?
Did you add the file to the .qrc configuration file?
Are the files present in the resources? You can use QDir on file path “:/” to look at the contents.
It doesn't work with docs on the hard disk because I tried to add the absolute path of .qhc file (for example: QHelpEngine("/home/willy/help.qhc")) and it doesn't work.
I added it to .qrc configuration file and the file is present in the resources.
I don't understand why I have to use QDir.
You can use QDir to list the bundled resources (are all expected files in there, etc.). It does not help in using the help engine, though.
Does your help collection work with Qt Assistant? You can test it by adding "-collectionFile xxx" on the command line.
Strange, If I add collection file by Preferences panel of Qt Assistant I see it, but if I use -collectionFile command line option I don't see it...
It works for me, but only if I add data to the keywords and toc sections in the XML configuration file.
This is the help.qhp file:
@
<?xml version="1.0" encoding="UTF-8"?>
<QtHelpProject version="1.0">
<namespace>wchords-client</namespace>
<virtualFolder>help</virtualFolder>
<customFilter name="wchords-client">
<filterAttribute>wchords-client</filterAttribute>
<filterAttribute>all</filterAttribute>
</customFilter>
<filterSection>
<filterAttribute>wchords-client</filterAttribute>
<filterAttribute>all</filterAttribute>
<toc>
<section title="WChords-client Manual" ref="description.html">
<section title="Import Tool" ref="import_tool.html" />
</section>
</toc>
<keywords>
<keyword name="import" id="wchords-client::import" ref="import_tool.html" />
<keyword name="description" id="wchords-client::description" ref="description.html" />
</keywords>
<files>
<file>.html</file>
<file>images/.jpg</file>
</files>
</filterSection>
</QtHelpProject>
@
It seems correct for me...
The file looks ok to me (though I'm not an expert in creating help content).
Can you put together a small sample, just two or three help files, the config files etc. and put a ZIP on pastbin ore something similar. We can have a look then.
Ok I have put all in a zip file containing my help system. It's very small and available at
I did
@
qcollectiongenerator help.qhcp -o help.qhc
@
to generate help collection.
Just tried it on the command line, everything is fine:
@
/Applications/Assistant.app/Contents/MacOS/Assistant -collectionFile help.qhc
@
Yes it works to me too. But why it doesn't work with this code?
@();
@
Maybe I got wrong in something...
Ok, two things:
First: you must call
@
he->setupData();
@
before you can use the help data. It returns a bool if the setup was ok. Use "QHelpEngineCore::error() ": to retrieve an error message (QHelpEngine dervives from QHelpEngineCore).
Second, the more important part: You cannot use Qt resources for storing the help content. You must put it into regular files and distribute them along your application.
The reason for the latter is simple: The compiled help files are actually SQLite databases. Qt does not intercept the file name but hands it over directly to the SQLite functions (by calling toUtf8() on the "path"), which in turn try to open that file - and will eventually fail, of course.
It doesn't work anyway and error() leaves no error. This is the new code:
@
QHelpEngine* he = new QHelpEngine("./help.qhc");
he->setupData();();
@
I took care to call program in the same help.qhc directory.
Try it with an absolute path.
For runtime you can use something like this:
@
QString helpPath = QApplication::applicationDirPath() + "/help_files/help.qhc";
@
Be aware, that this does not work on the Mac due to the applications bundles used there.
I followed your suggest and this is the new code:
@
QFileInfo fileInfo("help.qhc");
if (fileInfo.exists()) {
QHelpEngine* he = new QHelpEngine(fileInfo.absoluteFilePath()); he->setupData(); qDebug() << fileInfo.absoluteFilePath();(); }else{ qDebug() << "File doesn't exist"; }
@
It outputs the correct absolute file name(see line 6) and there is no error.
But it doesn't work....
Do you have help.qch in that directory too? It's needed!
Yes, I checked it with
@
qDebug() << fileInfo.absoluteFilePath();
@
that returns the correct absolute file name of the file.
Ahm... yo need both files:
help.qhc
help.qch
I know, that suffixes are hard to differentiate!
Ahem sorry, I just tried to put that file in the same directory and it seems to work even though it has strange behavior. I have to study the situation...
However thanks :)
Sorry to bring up an old thread but I have just spent an embarrassing number of hours on this issue myself. My problem and solution were exactly the same as willypuzzle's, namely that both the .qhc and .qch files must be present.
Looking at those files now, even with my tiny test help pages it's apparent from the file sizes that the .qch contains the actual content, but this is easily overlooked for new users dealing with the plethora of similar file extensions for the first time. Even when the .qch is missing the .qhc is found and loads successfully, and setupData() is successful too, which unfortunately draws attention away from the missing file being the source of the problem.
I think it would be good if it was made clearer in the documentation that both these files are necessary. Specifically, new users will likely arrive at this page as a first reference:
From this I got the impression that the compressed help was merely a intermediate step, and the collection was the sole final outcome of the process.
[SOLVED]
I don't know it is appropriate to ask the question here (if not please say so i'll start a new topic)
I have the problem with an error, it happens when calling the setCollectionFile() funtion of QHelpEngine.
@void HelperWindow::initialize()
{
HelperWindow::setWindowFlags(Qt::WindowStaysOnTopHint);
m_helpEngine->setCollectionFile(QApplication::applicationDirPath() + "doc/athleticsmanager.qhc"); m_helpEngine->setupData();
}@
When i look deeper into the code behind this function i get this:
@if (fileName == collectionFile())
return;@
It's in the collectionFile() function, that only does
@return d->collectionHandler->collectionFile();@
, where i get "Segmentation Fault"
I dont really know what to do now, is there someone who can help me with this problem?
I encounter quietly the same problem:
- I am able to load my "doc.qhc" using the Qt Assistant and the content widget contains my documentation sections
- my "doc.qhc" and "doc.qch" are located in the same directory
- when building my help widget (see code below), no error is encountered
- my content widget is empty (the content model has 0 rows)
@
QFileInfo info("doc.qhc");
if (info.exists() == false)
{
qDebug << "Help file does not exist";
}
m_engine = new QHelpEngine(info.absoluteFilePath());
if (m_engine->setupData() == false)
{
qDebug << "Help engine setup failed";
}
QGridLayout* layout = new QGridLayout;
QSplitter* helpPanel = new QSplitter(Qt::Horizontal);
helpPanel->insertWidget(0, m_engine->contentWidget());
m_webView = new QWebView;
helpPanel->insertWidget(1, m_webView);
layout->addWidget(helpPanel, 0, 0);
setLayout(layout);
connect(m_engine->contentWidget(), SIGNAL(linkActivated(QUrl)), SLOT(SetHelpSource(QUrl)));
qDebug() << m_engine->contentModel()->rowCount());
@
I had a look on the Qt Assistant source code and it seems to do the same thing to load a collection file...or I missed something.
Any suggestion?
Thanks in advance for your help | https://forum.qt.io/topic/3880/qhelpengine-how-to | CC-MAIN-2021-21 | refinedweb | 1,267 | 59.4 |
I am trying to implement a simple stack with Python using arrays. I was wondering if someone could let me know what's wrong with my code.
class myStack:
def __init__(self):
self = []
def isEmpty(self):
return self == []
def push(self, item):
self.append(item)
def pop(self):
return self.pop(0)
def size(self):
return len(self)
s = myStack()
s.push('1')
s.push('2')
print(s.pop())
print s
I corrected a few problems below. Also, a 'stack', in abstract programming terms, is usually a collection where you add and remove from the top, but the way you implemented it, you're adding to the top and removing from the bottom, which makes it a queue.
class myStack: def __init__(self): self.container = [] # You don't want to assign [] to self - when you do that, you're just assigning to a new local variable called `self`. You want your stack to *have* a list, not *be* a list. def isEmpty(self): return self.size() == 0 # While there's nothing wrong with self.container == [], there is a builtin function for that purpose, so we may as well use it. And while we're at it, it's often nice to use your own internal functions, so behavior is more consistent. def push(self, item): self.container.append(item) # appending to the *container*, not the instance itself. def pop(self): return self.container.pop() # pop from the container, this was fixed from the old version which was wrong def size(self): return len(self.container) # length of the container s = myStack() s.push('1') s.push('2') print(s.pop()) print s | https://codedump.io/share/xLBwkWxapVpY/1/implementing-stack-with-python | CC-MAIN-2017-47 | refinedweb | 272 | 68.36 |
upscli_list_next man page
upscli_list_next — retrieve list items from a UPS
Synopsis
#include <upsclient.h>
int upscli_list_next(UPSCONN_t *ups, unsigned int numq, const char **query, unsigned int *numa, char ***answer)
Description
The upscli_list_next() function takes the pointer ups to a UPSCONN_t state structure, and the pointer query to an array of numq query elements. It performs a read from the network and expects to find either another list item or the end of a list.
You must call upscli_list_start(3) before calling this function.
This function will return 1 and set values in numa and answer if a list item is received. If the list is done, it will return 0, and the values in numa and answer are undefined.
Calling this function after it returns something other than 1 is undefined.
Query Formatting
You may not change the values of numq or query between the call to upscli_list_start(3) and the first call to this function. You also may not change the values between calls to this function.
Answer Formatting
The contents of numa and answer work just like a call to upscli_get(3). The values returned by upsd(8) are identical to a single item request, so this is not surprising.
Error Checking
This function checks the response from upsd(8) against your query. If the response is not part of the list you have requested, it will return an error code.
When this happens, upscli_upserror(3) will return UPSCLI_ERR_PROTOCOL.
Return Value
The upscli_list_next() function returns 1 when list data is present, 0 if the list is finished, or -1 if an error occurs.
It is possible to have an empty list. The function will return 0 for its first call in that case.
See Also
upscli_list_start(3), upscli_strerror(3), upscli_upserror(3)
Referenced By
upsclient(3), upscli_get(3), upscli_list_start(3). | https://www.mankier.com/3/upscli_list_next | CC-MAIN-2017-43 | refinedweb | 301 | 63.8 |
This still needs to be updated. It's not working as it is right now.
Search Criteria
Package Details: alacarte-xfce 3.11.91-1
Dependencies (8)
Required by (1)
- gnome-panel-git (requires alacarte) (optional)
Sources (3)
- alacarte.desktop
-
- unicode-fixes.patch
Latest Comments
lopardo commented on 2015-11-05 13:00
freddie commented on 2015-01-03 14:02
Is it any difference between offical repo alacarte or this except exo,gtk3?? because it looks the same and behave just like alacarte-xfce. So is this package even needed anymore??.
taotedice commented on 2013-04-26 16:07
plp:
Right - thanks for checking that code. I do not think the solution is to implement a previous version.
Question about the fundamental purpose of alacarte, now (ver 3.7.x): It is primarily a gnome3 support package, and I don't think the new gnome3 DE has much use for menu 'folders' any longer - true? Maybe the code that handles 'folders' has been ignored, since it is no longer really necessary for the gnome3 interface? Just a thought...
ShyPixie commented on 2013-04-26 12:43
Add python2-gobject in depends.
Traceback (most recent call last):
File "/usr/bin/alacarte", line 21, in <module>
from Alacarte.MainWindow import main
File "/usr/lib/python2.7/site-packages/Alacarte/MainWindow.py", line 20, in <module>
import gi
ImportError: No module named gi
plp commented on 2013-04-26 05:40
taotedice:
I don't think this is the problem. I looked into MenuEditor.py, and it looks like the $XDG_MENU_PREFIX bug has already been fixed upstream in 3.7.90. (Which actually makes sense, as nothing would have worked with XFCE if it hadn't been.)
Here's the actual code:
def get_default_menu():
prefix = os.environ.get('XDG_MENU_PREFIX', '')
return prefix + 'applications.menu'
class MenuEditor(object):
def __init__(self, basename=None):
basename = basename or get_default_menu()
Unless I'm mistaken, the problem with Ubuntu's patches is that they are for a very old version of Alacarte (3.5.5) that will probably not even run under Arch. So, we either have to go through all of them and try to re-write them for 3.7.90, or revert back to 3.5.5, apply Ubuntu's patches to it, and then try to modify it to make it work.
I don't know which you guys think would be the best option.
taotedice commented on 2013-04-26 00:44
Good suggestion jlacroix.
Xubuntu 12.10's original menu editor, alacarte-3.5.5, had similar issues and was patched: '40-xdg-menu-prefix: updated and reenabled to bring back support of $XDG_MENU_PREFIX'. The patched code does work. I'm not familiar enough with the alacarte package to apply it myself to the current version, unless I do more homework...
Link to the Ubuntu alacarte page describing the issue and patch:
jlacroix commented on 2013-04-25 20:30
For what it may be worth, Alacarte is working perfectly fine in Xubuntu 12.10 and 13.04. It may be worth browsing their notes or bug reports to find out how they made it work.
plp commented on 2013-04-25 17:15
taotedice:
I also noticed that menus don't work but never got around to investigating why that is. The .directory files created by Alacarte appear to be valid, but XFCE doesn't want to honour them.
If you want to try and figure out what's happening, please do so and let me know about your findings. If you can figure this out, I can help you patch Alacarte to make menus work.
taotedice commented on 2013-04-25 09:02
Thanks for maintaining this package. I have the current xfce, gnome and alacarte-xfce packages installed. As of now I can add and remove items (applications) from the menu, but cannot add or remove menus; I would like to add the 'Science' and Engineering' menu folders. Is this functionality normal?
The new entries are being created in my ~/.local/share/desktop-directories/ as 'alacarte-made.directory' (for example), but the entry doesn't show up in the menu. Is there something else I need to do to get allow menu folder editing? Thanks.
plp commented on 2013-04-25 05:32
It's OK, I can handle it.
Though I'm more interested in saving whatever money I can. This way, my children won't go hungry and will be able to continue enjoying their childhood by playing League of Legends and Call of Duty all day and complaining about how slow their 30 Mbps Internet is.
Anonymous comment on 2013-04-24 23:38
If you want I can take this package over so you and whoever else can revolt against your capitalist government.
plp commented on 2013-04-24 19:56
Bump version 3.7.90.
I somehow managed to make the time needed to do this, after all. :-)
plp commented on 2013-04-24 19:07
It looks like glib 2.36.1 is causing havock everywhere.
The problem will probably be resolved by upgrading to the latest version, and I hope I'll be able to do this soon. However, I'll be very busy the next few days. So, I'm going to disown the package in the hope that someone is going to pick it up and do it. If noone does by the time I'm free to tackle it, I'll own it again.
Anonymous comment on 2013-04-24 19:01 25, in <module>
from Alacarte import util
File "/usr/lib/python2.7/site-packages/Alacarte/util.py", line 28, in <module>
from gi._glib import GError
ImportError: cannot import name GError
doesn't start, flagged
plp commented on 2013-03-29 14:45
laracraft304: Done.
BTW, sorry I haven't had time to upgrade to the latest version. You see, I live in Cyprus. You might have heard the news...
ShyPixie commented on 2013-03-28 13:19
And add provides=("alacarte=$pkgver")
ShyPixie commented on 2013-03-28 13:16
Change "../../alacarte.desktop" by "$srcdir/alacarte.desktop"
plp commented on 2013-03-14 11:01
New release 3.7.3-4.
Now with .desktop menu entry. :-)
plp commented on 2013-03-14 10:06
Sorry, I accidentally uploaded an old version of the package. Please try again.
Jristz commented on 2013-03-14 10:04
Aditionally I not find a .desktop for launch the alacarte menueditor
plp commented on 2013-03-14 09:52
Updated to version 3.7.3.
plp commented on 2013-03-13 20:22
Noted. If no one else has an objection, I'll adopt this by tomorrow.
There's also a new version available, 3.7.90, which I'll try to push within the next few days.
Kotus commented on 2013-03-13 18:38
Thanks, now patch is valid, but i got this error message by running alacarte:
""
The problem resolves by installing "pyxml", so you need to add this package in dependency list of PKGBUILD.
Thank you for your contribution for this package, I think you should take over maintance of this package because many will have benefit from this updated package.
Once again, thanks a lot.
plp commented on 2013-03-13 17:57
Sorry Kotus, this website's comment box appears to eat whitespace. Can you try copying it from here?
The patch is important because of a bug in Python's self.dom.toprettyxml() that will corrupt your xfce-applications.menu if you try to add or edit anything using Alacarte.
Kotus commented on 2013-03-13 17:35
@plp
Thank you for updated PKGBUILD
I just installed your PKGBUILD and it works as a charm, but the patch you provided don't work so I had to coment out the patch section of the PKGBUILD
When I try to build with patch enabled i get:
"patching file Alacarte/MenuEditor.py
patch: **** malformed patch at line 5: import xml.dom.minidom"
plp commented on 2013-01-21 10:47
Hi, I got tired of having an old version of alacarte-xfce on my system, so I tried to make a PKGBUILD for the latest version 3.7.3.
It didn't work correctly at first: it put extra whitespace in xfce-applications.menu, and it kept throwing Unicode-related exceptions. So, I dag into the code and tried to fix it. Now, it sort of works OK for me, though I'm sure there are still bugs out there.
Maybe someone can try this out and let me know?
Here's the revised PKGBUILD:
=== BEGIN
# Maintainer: Bartek Piotrowski <barthalion@gmail.com>
# Contributor: 3ED <krzysztof1987 at googlemail>
# Contributor: Jan de Groot <jgc@archlinux.org>
# Contributor: pressh <pressh@gmail.com>
pkgname=alacarte-xfce
_realname=${pkgname/-xfce/}
pkgver=3.7.3
pkgrel=2
pkgdesc="Menu editor for Xfce (with debian patchset)"
arch=(any)
license=('LGPL')
url=""
depends=('gnome-menus' }/3.7/${_realname}-${pkgver}.tar.xz toprettyxml-and-unicode-fixes.patch)
md5sums=('59e1e9041400e57a77e896460555067a' '4bb6b558687f6ffde356758a6c4a7c28')
build() {
cd "${srcdir}/${_realname}-${pkgver}"
patch -p1 -i $srcdir/toprettyxml-and-unicode-fixes
}
=== END
And here's toprettyxml-and-unicode-fixes.patch:
=== BEGIN
diff -aur alacarte-3.7.3/Alacarte/MenuEditor.py alacarte-3.7.3.new/Alacarte/MenuEditor.py
--- alacarte-3.7.3/Alacarte/MenuEditor.py 2013-01-11 02:50:22.000000000 +0200
+++ alacarte-3.7.3.new/Alacarte/MenuEditor.py 2013-01-21 12:12:43.338832019 +0200
@@ -21,6 +21,7 @@
import xml.dom.minidom
import xml.parsers.expat
from gi.repository import GMenu, GLib
+from xml.dom.ext import PrettyPrint
from Alacarte import util
def get_default_menu():
@@ -54,7 +55,7 @@
def save(self):
with codecs.open(self.path, 'w', 'utf8') as f:
- f.write(self.dom.toprettyxml())
+ PrettyPrint(self.dom, stream=f)
def restoreToSystem(self):
self.restoreTree(self.tree.get_root_directory())
@@ -262,6 +263,7 @@
out_path = os.path.join(util.getUserItemPath(), file_id)
contents, length = keyfile.to_data()
+ contents = unicode(contents, 'utf8')
with codecs.open(out_path, 'w', 'utf8') as f:
f.write(contents)
@@ -402,6 +404,7 @@
file_id = util.getUniqueFileId(keyfile.get_string(GLib.KEY_FILE_DESKTOP_GROUP, 'Name'), '.desktop')
contents, length = keyfile.to_data()
+ contents = unicode(contents, 'utf8')
path = os.path.join(util.getUserItemPath(), file_id)
with codecs.open(path, 'w', 'utf8') as f:
@@ -424,6 +427,7 @@
util.fillKeyFile(keyfile, kwargs)
contents, length = keyfile.to_data()
+ contents = unicode(contents, 'utf8')
path = os.path.join(util.getUserDirectoryPath(), file_id)
with codecs.open(path, 'w', 'utf8') as f:
=== END
jlacroix commented on 2013-01-06 02:23
The package builds and installs fine, but when I make changes in Alacarte, it has no effect on the menu at all.
Barthalion commented on 2012-08-17 15:29
I've switched to menulibre. I doubt if Alacarte will work with Xfce again, so if you want to maintain the package, contact me.
Barthalion commented on 2012-08-17 15:27
I've switched to menulibre. I doubt if Alacarte will work with Xfce again, so feel free to take it.
GuestOne commented on 2012-08-05 20:45
You can fix my bug packing the latest dev version of gnome-menus and add it to dependencies.
GuestOne commented on 2012-08-04 17:22
Crash to startup with:'
Barthalion commented on 2012-07-31 10:19
Done, thank you.
However, it still doesn't work as it's supposed to. I recommend switching to menulibre.
ShyPixie commented on 2012-07-30 21:52
it has not worked here.
Use this PKGBUILD:
patch here:
benoliver999 commented on 2012-07-29 18:43
Yep, menulibre works a treat for me too, and doesn't have all the dependencies.
JesusMcCloud commented on 2012-07-22 13:20
I can recommend menulibre as alternative
oboedad55 commented on 2012-07-21 21:13
Hi, I can't get this new version to build. I get this error; ==> ERROR: A failure occurred in build().
Not much help...
Spike29 commented on 2012-07-20 16:44
Indeed, gnome-menus>=3.5.3 is required, but is not in Arch repos yet.
I didn't notice that when I declared package out-of-date :/
ork commented on 2012-07-20 13:05
I can't build the package. The ./configure output gives me this error:
configure: error: Package requirements (libgnome-menu-3.0 >= 3.5.3 pygobject-3.0) were not met:
Requested 'libgnome-menu-3.0 >= 3.5.3' but version of libgnome-menu is 3.4.2
Spike29 commented on 2012-07-18 11:59
Hi, version 3.4.5 is out.
But it doesn't seem to fix bug bug #677343 :(
taotedice commented on 2012-07-05 23:43
ditto - thanks, but still no joy.
Spike29 commented on 2012-07-05 18:48
Thanks for the update.
Unfortunately, it still doesn't work for me either.
Barthalion commented on 2012-07-05 18:09
Still doesn't edit menu for me, but as usually YMMV.
Spike29 commented on 2012-07-05 09:26
Hi, alacarte 3.5.3 is available :
The bug #677343 cyberpatrol mentioned seems to be fixed.
Spike29 commented on 2012-07-05 09:26
Hi, alacarte 3.5.3 is available :
The bug #677343 cyberpatrol mentioned seems to be fixed.
pezcurrel commented on 2012-06-30 15:57
Hi, on my system alacarte-xfce 0.13.4-2 installs and launches ok, but has many problems making it almost useless:
- can't create new folders (dialog window appears, I fill it, click "create", no result)
- ticking-unticking the checkbox for visibility on/off doesn't produce any change
- can't create new separators
- can't change launchers-separators order by clicking the up-down buttons
- can't drag and drop launchers to sort them or copy them in other folders
I only get errors notification when trying to create new separators...
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/Alacarte/MainWindow.py", line 293, in on_new_separator_button_clicked
self.editor.createSeparator(parent, after=after)
File "/usr/lib/python2.7/site-packages/Alacarte/MenuEditor.py", line 221, in createSeparator
self.positionItem(parent, ('Separator',), before, after)
File "/usr/lib/python2.7/site-packages/Alacarte/MenuEditor.py", line 553, in positionItem
index = contents.index(after) + 1
ValueError: <GMenuTreeEntry at 0x1bdacc0> is not in list
...and when trying to sort menu entries...
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/Alacarte/MainWindow.py", line 461, in on_move_down_button_clicked
self.editor.moveItem(item, item.get_parent(), after=after)
File "/usr/lib/python2.7/site-packages/Alacarte/MenuEditor.py", line 279, in moveItem
file_id = self.copyItem(item, new_parent)
File "/usr/lib/python2.7/site-packages/Alacarte/MenuEditor.py", line 258, in copyItem
util.fillKeyFile(keyfile, dict(Categories=[], Hidden=False))
File "/usr/lib/python2.7/site-packages/Alacarte/util.py", line 35, in fillKeyFile
keyfile.set_string_list(DESKTOP_GROUP, key, item)
File "/usr/lib/python2.7/site-packages/gi/types.py", line 43, in function
return info.invoke(*args, **kwargs)
TypeError: set_string_list() takes exactly 5 arguments (4 given)
...which also makes the selected entry disappear.
Previously I had 0.13.2 installed and it had no problems. I have its aur pkg saved, but it no longer installs because the patches makepkg tries to dl from patch-tracker.debian.org are no longer available.
Anonymous comment on 2012-06-24 18:15
Maybe you should try it again when gnome-menus in [extra] is updated to 3.5.2.
Barthalion commented on 2012-06-24 17:15
I tried patches posted by cyberpatrol, but even if they have anything to bug, they don't fix anything.
Anonymous comment on 2012-06-07 00:36
The bug, Spike29 mentioned, is an upstream bug. The AUR package alacarte is affected, too, I guess.
Here are the relevant bug reports:
headkase commented on 2012-06-04 19:55
I get the same error as Spike29 when running alacarte-xfce 0.13.4-2 as a normal user. If I run it as root then it launches, but, then I'm not editing my users menus obviously.
Spike29 commented on 2012-06-04 08:38
Hi, alacarte-xfce successfully built with gnome-menus dependency replacing gnome-menus2, but then it doesn't launch.
I've got the following errors :
Spike29 commented on 2012-06-03 15:23
Yes, please add gnome-menus.
I have the following error while compiling :
"configure: error: Package requirements (libgnome-menu-3.0 >= 3.2.0.1 pygobject-3.0) were not met:
No package 'libgnome-menu-3.0' found"
Spike29 commented on 2012-06-03 14:43
Barthalion commented on 2012-06-03 13:51
There is gnome-menus2 already.
Anonymous comment on 2012-06-03 13:50
Add gnome-menus to build-depends.
Spike29 commented on 2012-06-03 13:24
Source link is dead.
Spike29 commented on 2012-06-03 13:23
Source link is dead.
Anonymous comment on 2012-03-27 18:31
python2-gobject2 is a dep
Anonymous comment on 2012-02-20 15:09
You should remove
groups=('xfce4-goodies')
from the PKGBUILD since groups only work in the binary repos but not with AUR packages.
Barthalion commented on 2012-01-20 08:29
Then use diff - you should notice line with sed, which prepare alacarte for Xfce menus.
Anonymous comment on 2012-01-20 08:23
What is the difference to the package alacarte? The source file and the PKGBUILD seem to be or do the same.
Barthalion commented on 2011-10-20 04:03
You was faster, I noticed location change today. ;) Thanks, and updated of course.
Anonymous comment on 2011-10-19 14:26
Applied patches with new locations found on debian website, problem solved. Here's the updated PKGBUILD I used.
pkgname=alacarte-xfce
_realname=${pkgname/-xfce/}
pkgver=0.13.2
pkgrel=2
pkgdesc="Menu editor for Xfce (with debian patchset)"
arch=(any)
license=('LGPL')
url=""
depends=('gnome-menus2' }/0.13/${_realname}-${pkgver}.tar.bz2{_realname}/${pkgver}-3/01-new_item_location.patch{_realname}/${pkgver}-3/02-fix_delete_undo.patch{_realname}/${pkgver}-3/03-bind_textdomain_codeset.patch{_realname}/${pkgver}-3/10_settings_menu.patch)
sha256sums=('9fa36e5181b1eea947b184cb0f79d796b25cc5a5f122819a1ac2ff01bc7ee4ed'
'3a1d48d8104b7b9c6274906bf4a4f336ce4c96316d382e78a38f4bbe82d00172'
'd7637ee59cae0501f803514b9c26c4d9806c2b61ea948670ec3ac20b169c8e44'
'46c260029ae5b001648776f5b89806f1126c502bd828a879d1002495088742e8'
'64610f00ed9f0f78c28d6cadbb00e59ca5dc18e1675a8011141199bcecf33deb')
build() {
cd "${srcdir}/${_realname}-${pkgver}"
patch -Np1 < "${srcdir}/01-new_item_location.patch"
patch -Np1 < "${srcdir}/02-fix_delete_undo.patch"
patch -Np1 < "${srcdir}/03-bind_textdomain_codeset.patch"
patch -Np1 < "${srcdir}/10_settings_menu
}
Anonymous comment on 2011-10-19 14:15
New patch locations can be found here:
Anonymous comment on 2011-10-19 14:11
Package compiles but does not work. Application crashes immediately at launch, I presume this is because of the missing patches as my googling seems to indicate.
$ alacarte'
Barthalion commented on 2011-10-17 13:25
Temporarily I commented lines associated with patches. If someone have backups of these files, send links here, please.
Aerion commented on 2011-10-17 05:33
The patches cannot be found. When following the links, the following error appears:
There was an error processing ur request
can not find diff file for alacarte-xfce / -xfce
mukhametshin commented on 2011-10-05 21:59
It works now, thanks a lot!
Barthalion commented on 2011-10-05 14:13
Fixed - package uses now gnome-menus2[1] from AUR.
[2]
Barthalion commented on 2011-10-05 03:58
Package "libgnome-menu" never existed, there is gnome-menus and downgrade will help. I will provide package like xfce-menus soon.
mukhametshin commented on 2011-10-04 19:10
Please update the dependences, there is no more package named "libgnome-menu". The build function call returns an error:
...
checking pkg-config is at least version 0.9.0... yes
checking for ALACARTE... no
configure: error: Package requirements (libgnome-menu >= 2.27.92) were not met:
No package 'libgnome-menu' found
Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.
Alternatively, you may set the environment variables ALACARTE_CFLAGS
and ALACARTE_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
==> alacarte-xfce.
Neschur commented on 2011-10-03 15:17
Thanks, works.
Barthalion commented on 2011-10-03 14:04
Probably gnome-menus downgrade will help, but it's short-term fix.
Neschur commented on 2011-10-02 18:02
I am also use xfce. Recently gnome update to 3.2 and updated some packages.
Neschur commented on 2011-10-02 17:59
I am also use xfce. Recently gnome update to 3.2 and alacarte not start in xfce.
Barthalion commented on 2011-10-02 17:17
I'm not using it with Gnome, only with Xfce as description and package name suggest.
Neschur commented on 2011-10-02 17:07
not launch after update from gnome3.0 to gnome3.2
Error:...
File "/usr/lib/python2.7/site-packages/Alacarte/MainWindow.py", line 19, in <module>
Barthalion commented on 2011-08-31 05:47
Of course.
Jristz commented on 2011-08-30 22:00
this build w actual gnome-menus and gnome3 ans xfce 4.8???????????
gadget3000 commented on 2011-08-07 15:59
Hi magicrhesus. If this project is dead and doesn't build you should let aur-general@archlinux.org know and request it's deletion. Send them a link to this page and a reason why it can be deleted.
b9anders commented on 2011-05-14 19:37
this error is also reported in the arch bugs with the following comment:
"is broken because the Settings category doesn't exist in gnome 3.
i think alacarte is a dead project and shouldn't be used"
If it's dead, it doesn't look like this well ever get fixed. Shame, we finally had a menu editor for xfce.
Jristz commented on 2011-05-13 05:46
alacarte-xfce-devel depend of exo-devel, but exo-devel in AUR is outdated
if is secure change exo-devel to exo, changed, instead, update another solution or update exo-devel
Wbuild correctly usin exo from stable version but launch the app mark this error:'
Jristz commented on 2011-05-11 01:58
alacarte-xfce-devel depend of exo-devel, but exo-devel in AUR is outdated
if is secure change exo-devel to exo, changed, instead, update another solution or update exo-devel
xdevla commented on 2010-12-28 15:21
@twa022: from my side, I have no issues
@Barthalion: I ran out of time for now
Barthalion commented on 2010-12-27 07:47
Why don't include it in xfce4-devel repo?
twa022 commented on 2010-12-23 05:55
The applications that have the line "OnlyShowIn=XFCE;" don't show up in the Menu Editor list while the "OnlyShowIn=GNOME;" ones do.
Also, when I unmark an application, it still shows up in the "Other" menu. | https://aur.archlinux.org/packages/alacarte-xfce/?comments=all | CC-MAIN-2016-36 | refinedweb | 3,767 | 59.4 |
> tfs.rar > tfsclean3.c
/* tfsclean3.c: * * --- NOT READY YET --- * * This version of defragmentation is power-hit safe and requires * that there be double the amount of flash as is needed for use by * TFS. The basic idea is similar to tfsclean2.c... * Copy all of the good files over to the "other" flash bank, then have * TFS use the "other" bank as the storage area. * The idea is that the defrag is simply a copy of the good stuff to * the alternate flash block. This requires that after the * good stuff is copied, the now-dirty flash block must be erased in * the background prior to the next tfsclean() call. The fact that * there is no sector erase is what makes this faster. * * If both of these flash banks are in the same flash device, then * having a background erase in progress means that it must be an * interruptible erase (device specific). This is necessary because * while the background erase is in progress there may be a need to * interact with the flash and most devices don't let you do both at the * same time. * * Note that this "background-erase" is what makes this method the * fastest defrag method. It does require that the erase operation be * interruptible, and it requires that the application will provide the * hooks to do */ #include "config.h" #include "cpu.h" #include "stddefs.h" #include "genlib.h" #include "tfs.h" #include "tfsprivate.h" #include "flash.h" //#include "monflags.h" #if INCLUDE_TFS /* int tfsfixup(int verbose, int dontquery) { return(TFSERR_NOTAVAILABLE); } */ #if DEFRAG_TEST_ENABLED int dumpDhdr(DEFRAGHDR *dhp) { return(TFSERR_NOTAVAILABLE); } int dumpDhdrTbl(DEFRAGHDR *dhp, int ftot) { return(TFSERR_NOTAVAILABLE); } #endif /* _tfsclean(): * This is an alternative to the complicated defragmentation above. * It simply scans through the file list and copies all valid files * to RAM; then flash is erased and the RAM is copied back to flash. * <<< WARNING >>> * THIS FUNCTION SHOULD NOT BE INTERRUPTED AND IT WILL BLOW AWAY * ANY APPLICATION CURRENTLY IN CLIENT RAM SPACE. */ /* int _tfsclean(TDEV *tdp, int notused, int verbose) { TFILE *tfp; uchar *tbuf; ulong appramstart; int dtot, nfadd, len, err, chkstat; if (TfsCleanEnable < 0) return(TFSERR_CLEANOFF); appramstart = getAppRamStart(); // Determine how many "dead" files exist. dtot = 0; tfp = (TFILE *)tdp->start; while(validtfshdr(tfp)) { if (!TFS_FILEEXISTS(tfp)) dtot++; tfp = nextfp(tfp,tdp); } if (dtot == 0) return(TFS_OKAY); printf("Reconstructing device %s with %d dead file%s removed...\n", tdp->prefix, dtot,dtot>1 ? "s":""); tbuf = (uchar *)appramstart; tfp = (TFILE *)(tdp->start); nfadd = tdp->start; while(validtfshdr(tfp)) { if (TFS_FILEEXISTS(tfp)) { len = TFS_SIZE(tfp) + sizeof(struct tfshdr); if (len % TFS_FSIZEMOD) len += TFS_FSIZEMOD - (len % TFS_FSIZEMOD); nfadd += len; err = tfsmemcpy(tbuf,(uchar *)tfp,len,0,0); if (err != TFS_OKAY) return(err); ((struct tfshdr *)tbuf)->next = (struct tfshdr *)nfadd; tbuf += len; } tfp = nextfp(tfp,tdp); } // Erase the flash device: err = _tfsinit(tdp); if (err != TFS_OKAY) return(err); // Copy data placed in RAM back to flash: err = AppFlashWrite((ulong *)(tdp->start),(ulong *)appramstart, (tbuf-(uchar*)appramstart)); if (err < 0) return(TFSERR_FLASHFAILURE); // All defragmentation is done, so verify sanity of files... chkstat = tfscheck(tdp,verbose); return(chkstat); } */ #endif | http://read.pudn.com/downloads6/sourcecode/embed/20897/tfs/tfsclean3.c__.htm | crawl-002 | refinedweb | 507 | 55.44 |
This is the mail archive of the cygwin mailing list for the Cygwin project.
Hi Andrew, Desktop LAN Adapter IP: 192.168.0.2 (static) Subnet Mask: 255.255.255.0 no DNS server Wifi Adapter 192.168.254.18 (dynamic) 255.255.255.0 gateway: 192.168.254.254 Laptop LAN 192.168.0.1 (static) 255.255.255.0 no DNS Wifi 192.168.254.19 (dynamic) 255.255.255.0 192.168.254.254 I don't think these namespaces overlap? Note that my ethernet cable is standard 5e, not crossover. I am directly wiring the computers without a switch or hub. This is not supposed to work, and indeed it doesn't work very well. I am content at this point to use wifi's slower transfer speed, since the big 250 GB transfer is done. So I would rather stop using the ethernet cable, as it is a likely failure point. Testing with ethernet cable only, wifi disabled on both: I re-attached the ethernet cable. After manually changing the network type from "public" to "private (work)" on both computers, I was able to access Windows shared folders, as before. Network is still labeled "unidentified" on both machines, as before. Synctoy runs about the same. It finishes the 250 GB sync with about 1000 errors. Not sure whether it's faster; maybe. Cygwin ping works from laptop to desktop. It sends 4 packets, time 0 ms. However, from desktop to laptop, it sends 13 successful pings averaging .5 ms and then freezes. It never presents the statistical summary. Ctrl-c does not break the loop. Had to restart the Cygwin window. I did not observe this behavior before. Ping from desktop windows CLI succeeds normally. Cygwin telnet fails to cross the cable both ways, with error "unable to connect to remote host: Connection timed out". Telnetting localhost gives "connection refused". I don't know how to set up a telnet server. Cygwin ssh to localhost / ip address succeeds. E.g. "ssh localhost" Connection to other machine fails with error "connect to host 192.168.0.x port 22: Connection timed out". Command was e.g. "ssh 192.168.0.1" Thanks for the advice. Please let me know what to do next. Adieu. On Mon, Jul 6, 2015 at 10:26 AM, Andrey Repin <anrdaemon@yandex.ru> wrote: > Greetings, Spinfusion! > >>. > > What are assigned IP addresses on both links? > Your further hints indicate that you have name resolution conflict due to > overlapping network address space. > Disconnect from WiFi on both hosts and try only cable. > >>. > >> Cygwin functionality: >> ping works. >> telnet error: connection timed out >> ssh error: connection timed out >> ssh connection to localhost - works > >>. > > > -- > With best regards, > Andrey Repin > Monday, July 6, 2015 18:24:38 > > Sorry for my terrible english... > -- Problem reports: FAQ: Documentation: Unsubscribe info: | http://cygwin.com/ml/cygwin/2015-07/msg00099.html | CC-MAIN-2019-30 | refinedweb | 471 | 71.21 |
Codename TextBox
The Need for Code Generation
Have you ever wanted to generate code like the wizards do, i.e. start with a template, mix in some symbols and boom, out comes the code? If you’re building a custom AppWizard, you define code like so:
int WINAPI WinMain(HINSTANCE hinst, HINSTANCE, LPSTR, int nShow) { $$IF(coinit) // Initialize COM CoInitialize(0); $$ENDIF // Initialize the ATL module _Module.Init(0, hinst); $$IF(axhost) // Initialize support for control containment AtlAxWinInit(); $$ENDIF // Create and show the main window HMENU hMenu = LoadMenu(_Module.GetResourceInstance(), MAKEINTRESOURCE(IDR_$$ROOT$$)); ...
This is fine if you’ve got the MFC-based interpreter building your code and you’re willing to live within the boundaries of a very small set of features. The ATL Object Wizard-style of generation is similar, i.e.
class [!ClassName] : public CAxDialogImpl<[!ClassName]> { public: [!ClassName]() { } [!crlf] ...
Again, only good if you’re running under the ObjectWizard and again, somewhat limited. What you really want is to be able to do things ASP-style, e.g.
<%@ language=vbscript %> <% ' test.cpp.asp %> <% greeting = Request.QueryString("greeting") if len(greeting) = 0 then greeting = "Hello, World." %> // test.cpp <% if Request.QueryString("iostream") <> "" then %> #include <iostream> using namespace std; <% else %> #include <stdio.h> <% end if %> int main() { <% if Request.QueryString("iostream") <> "" then %> cout << "<%= greeting %>" << endl; <% else %> printf("<%= greeting %>\n"); <% end if %> return 0; }
In this case, you get the same effect as the other two, but you’ve got the full power of a scripting language. However, for this to work, you had to run under ASP… until now…
TextBox
TextBox is a ASP-like script host that will process any file you give it looking for:
- text blocks
- script blocks (<% script %>)
- output blocks (<%= output %>)
- An optional language block as the first line in the file only (<%@ language= language %>)
(TextBox defaults to vbscript and currently only works with vbscript and jscript).
TextBox pre-processes the text to turn the whole thing into into script and hands it to the scripting engine for execution, outputting the result to standard out. Whatever features of the scripting language you want to use, feel free.
Usage
To provide for I/O, TextBox emulates ASP somewhat. It provides two intrinsics, the request object and the response object. The Request object has a single property, QueryString, that works just like ASP. The Response object has a single property, Write, just like ASP. In fact, if you only use Request.QueryString and Response.Write, you should be able to test your script files using ASP.
To set name/value pairs for use by the script via the Request object, the usage of TextBox is like so:
usage: textbox <file> [name=value]
For example, to interpret the file above, any of the following command lines would work:
textbox test.cpp.asp textbox test.cpp.asp greeting="Double Wahoo!" textbox test.cpp.asp iostream=true greeting="Double Wahoo!"
The first would yield the following output:
// test.cpp #include <stdio.h> int main() { printf("Hello, World.\n"); return 0; }
while the last would yield the following:
// test.cpp #include <iostream> using namespace std; int main() { cout << "Double Wahoo!" << endl; return 0; }
Errors and Debugging
If the scripting engine finds an error, it will notify TextBox, who will notify you. However, if you’ve got script debugging enabled on your machine, the scripting engine will ask you if you’d like to fire up the debugger, showing you exactly the offending code.
Not Just Code
Of course, TextBox is good for the generation of any text, not just code.
Download
TextBox is available for download. It’s just a prototype, so please adjust your expectations accordingly. If you have any comments, please send them to csells@sellsbrothers.com.
Copyright (c) 1998-2001, Chris Sells All rights reserved. NO WARRANTIES EXTENDED. Use at your own risk. | http://sellsbrothers.com/12659/ | CC-MAIN-2017-47 | refinedweb | 635 | 66.44 |
Using XAML IValueConverter to Do Creative Things in C#
Value Converters are present in all flavours of XAML, from WPF to UWP; but what are they? Value Converters are used in conjunction with bindings to convert a value from one type to another, or simply change the value. Value converters are added in XAML, but created in C#. Here's an example of how one might look.
Say you have a property that looks like this on an object that is set to your data context…
public bool IsReady { get; set; }
And you have a TextBlock in your XAML that could look like the following…
<TextBlock Visibility="{Binding IsReady}" Text="Well hello there!"/>
Given that the visibility property is not of type bool, instead, it is of type Visibility;, which is derived from the class UIElement. This binding will cause you problems. Step in the Value Converter to save the day. In my project, I've created a class named BooleanToVisibilityConverter, which implements the interface IValueConverter; and looks like the following:
public class BooleanToVis) { throw new NotImplementedException(); } }
Let's examine this converter. Notice that I've not implemented the ConvertBack method. This is because the binding from our property IsReady to the property on the TextBlock is one way. It's going from IsReady to TextBlock.Visibility; therefore, we only need to implement the Convert method.
The first thing I'm doing in this convert method is to check to make sure the value being passed is of type bool. If it isn't, I'm going to return Visibility.Collapsed as a default; this will make the text block disappear on the UI. Then, I'm simply checking the state of the Boolean and returning Visibility.Collapsed or Visibility.Visible accordingly.
Boolean to visibility is actually a very common use for a converter, and frameworks such as WPF come with a converter to do this out of the box. But, how do we get this converter to our text block? At the moment, this converter hasn't been applied to our binding we saw earlier, and the first thing we need to do is declare our converter in the XAML as a resource. In my WPF application, I'm going to do this in the resources of the main window like this…
<Window.Resources> <local:BooleanToVisibilityConverter x: </Window.Resources>
Then, for the binding on the text block's visibility property, I'm going to do this…
<TextBlock Visibility="{Binding IsReady, Converter={StaticResource BooleanToVisibilityConverter}}" Text="Well hello there!"/>
When we run this application, you'll find that the text block is not visible at runtime, but if we add this line to the constructor of our MainWindow.xaml.cs…
public bool IsReady { get; set; } public MainWindow() { IsReady = true; InitializeComponent(); }
…we can change that. Be sure to set the IsReady property true before the call to InitializeComponent, because without any property change notification you will not see any change.
And that's the basics covered. If you are new to XAML or WPF, you can find an example of what's been covered so far on GitHub.
Let's Aim to Do More
Let's move on from a basic implementation of a converter; what else can we do that can really help in a project? Consider the following situation…
You have a TextBox on your UI, through which the user enters some text. However, when the user enters certain words, or perhaps even a number, you want the TextBox to do something visually. That something could be changing the colour of the text to red, but appear black normally.
Now, there are a number of ways of doing this, and the first is to build a custom TextBox. That, however, is an option that will take a bit of time, and probably more than several lines of code to complete. Then, you need to apply the custom TextBox in your XAML, and the project could have several such controls. This could be a valid option for you, but another could be to create a property your Textbox.Foreground property can bind to. Another valid option, but has its downfalls—such as extra code in the object assigned to your data context that will be needed to handle the property and change it.
That's not the option I'm going with in this article. I'm going to use a Converter, and I've constructed one that looks like this…
public class TextContainsNumbersToBrushConverter : IValueConverter { public Brush NormalBrush { get; set; } public Brush HighlightBrush { get; set; } public object Convert( object value, Type targetType, object parameter, CultureInfo culture) { if (value == null) return NormalBrush; var text = value.ToString(); bool result = text .Any(c => char.IsDigit(c)); return result ? HighlightBrush : NormalBrush; } public object ConvertBack( object value, Type targetType, object parameter, CultureInfo culture) { throw new NotImplementedException(); } }
I can declare this converter in the XAML like so…
<local:TextContainsNumbersToBrushConverter x:
Then add a TextBox to my XAML on the main window like this…
<TextBox Text="{Binding Text, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" Foreground="{Binding Text, Converter= {StaticResource TextToBrushConverter}}" FontSize="32" Width="200" VerticalAlignment="Center"/>
Unlike the basic implementation we looked at earlier in this article, you will need to implement INotifyPropertyChanged on the property the TextBox.Text is bound to, which is of type string, to have this example work. There is comprehensive documentation on the interface INotifyPropertyChanged available on MSDN that can assist you with this.
When we run the application, and enter some text—without digits—into the TextBox, we can see the results in Figure 1.
Figure 1: The application running without digits in the text
Then, let's add a digit to any given position in the text, as shown in Figure 2…
Figure 2: The application running with digits in the text
The flexibly of the value converter is something I've come to rely on many times over the years when working with XAML, and I continue to make regular use of it. I hope it can solve problems for you, too. If you have any questions on this topic, you can find me on Twitter @GLanata.
Excellent explanation !Posted by Saravanan A on 12/08/2016 07:17am
Thanks for your excellent explanation. I expected what you described. But, I have an compile error below. I search many ways but no luck. Here is the error: Unknown type 'TextContainsNumbersToBrushConverter' in XML namespace 'using:ProjectName' Please help me.Reply | http://www.codeguru.com/columns/dotnet/using-xaml-ivalueconverter-to-do-creative-things-in-c.html | CC-MAIN-2017-09 | refinedweb | 1,067 | 52.9 |
XML Pointer Language XPointer
- General Model
- Xpointer Forms
- Functions
- Using Xpointers
- Future Developments
- Conclusions. According to RFC 3023 [Murata+ 01], XML documents are associated with a number of MIME types.1 For all these di8erent types of XML resources, it is possible to specify a fragment identifier, which is separated from the URI of the resource itself by a crosshatch (#) character. As defined in RFC 2396 [Berners-Lee+ 98] (the standard for URI syntax), a fragment identifier is not an actual part of a URI but is often used in conjunction with one in the so-called URI reference.
Thus, XPointer can be used for specifying references that point to parts of an XML document, and not to the whole document. As a simple example, while the URI references the technical reports page of the W3C (as shown in Figure 6.1), the URI reference(id('xptr')) specifically points to the entry for the XPointer standard on that page.2 This mechanism makes it possible to create links that are much more specific than through the use of URIs only. There are, however, several things to keep in mind, as follows:
Figure 6.1 Snapshot of W3C's technical reports page
The resource must be XML. XPointer is a mechanism for addressing into XML documents, so the resource identified by the URI must be XML.3 In the example just given, this is true since W3C makes its pages available in XHTML, the XML variant of HTML. However, the vast majority of documents on the Web are not XML, and consequently XPointer cannot be used to address into them. While it is assumed that XML resources will become more popular in the near future (in particular since XHTML is the successor of HTML), as long as non-XML browsers are still widely used,4 HTML is likely to remain the most popular language.
As a side note, HTML also supports fragment identifiers, but they are limited to pointing to IDs only (as opposed to the XPath-based addressing capabilities of XPointer). HTML uses its own extremely simple syntax for fragment identifiers, which works by giving the ID as the fragment identifier, so the XML example just given would be equivalent to the HTML version (In this case, there is a simple correspondence between the XML and the HTML fragment identifier, because both address the fragment using its ID.)
The resource must remain available. Of course, a fragment identifier makes sense only as long as the resource is still available. This brings up the well-known problem of broken links in the Web, and it is independent from specifying fragment identifiers. However, because fragment identifiers are often used with URIs, this issue must be addressed. Resources on the Web often have an astonishingly short life span [Dasen & Wilde 01]; and while some resources disappear (i.e., no longer exist or at least are no longer available via a known URI), others are moved to a new URI without having automatic redirections set up by the Web server operator.
The ID must remain the same. In cases where the fragment identifier uses an ID within the document, it will work correctly only as long as the ID remains valid within the document (and, in the example just given, continues to identify the element representing the XPointer entry within the document). However, since we do not have control over the W3C's document management and identification policy, we have no guarantee that the ID will always be the same and that it will always identify the element we want to reference. The basic dilemma behind this is that the resource (the W3C's Web page) and the reference to it (our fragment identifier) are handled by di8erent entities, which do not necessarily cooperate (or even know each other; for example, even though we know the W3C's Web pages, the W3C probably does not know that we used their ID as an example in this book).
The client has to support XPointers. Even if all previous requirements are satisfied (i.e., the document is XML, it is still available via the URI, and the XPointer can still be interpreted meaningfully), the application processing the URI with the fragment identifier must implement the XPointer standard. At the time of writing, this is not the case for almost all available software, though we hope this will change in the near future. In comparison to XLink and XPath, XPointer is lagging behind in the standardization process; and as long as there is no stable standard, it cannot be implemented.
The major browsers in their most current versions (at the time of writing, Internet Explorer 5.5, Navigator 6, and Opera 5) all support XML in the sense that they are able to not only download and interpret XML documents, but also to display them using style sheet mechanisms (CSS and/or XSL). There is, however, no support for XPointer currently. Nevertheless, as soon as XPointer reaches recommendation status, we hope to see XPointer support (as well as XLink support) in the next releases of the major browsers.
These are the requirements that must be met when using XPointer. We believe that in the near future XPointer (along with many other XML-based technologies) will become widely supported and a popular technology. For an illustration of how XPointer may not only become useful for hypermedia applications (which are the focus of this book) but also for other relatively simple cases of usage, consider the following scenario:
You find an interesting quote on the Web, possibly in an XHTML resource, that you would like to send to a friend. Instead of copying the quote into an e-mail (which would mean taking the quote out of context) or simply sending the resource's URI (which would make it necessary to somehow indicate exactly which part of the resource you mean), you select the quote with the mouse and then choose the "Generate XPointer" option from your browser's menu, which automatically generates a URI reference that exactly identifies the selected quote. You paste this URI reference into the e-mail and send it to your friend. This way, you have exactly identified the quote that was important to you without taking it out of context. Upon receiving the URI reference, your friend's browser not only requests and displays the resource containing the quote but also automatically highlights the quote identified by the XPointer part of the URI reference.
This example depends on the browser's ability to generate XPointers. Ideally, it would do so in a clever way, because for each subresource there is a multitude of possibilities for creating an XPointer identifying it. We will discuss this important issue in detail later in this chapter (in sections 6.4.3 and 6.4.4), but by now it should be clear that XPointer can provide a lot of value in an XML-based Web.
We now look at the details of XPointer. Section 6.1 discusses the general data model of XPointer, which is a generalization of XPath's data model. After this introductory section, we go into the details of how XPointers may be used as fragment identifiers, described in section 6.2. The next issue is XPointer's extensions to XPath and, in particular, the additional functions that XPointer defines. These functions are discussed in section 6.3.
After this rather formal discussion of XPointer, we then spend some time considering possible usage scenarios and how XPointer may be applied in the best possible way (section 6.4). Finally, even though XPointer is a very new standard, in section 6.5 we briefly describe our view of what XPointer's future may look like.
6.1 General Model
One of the most important aspects of XPointer is that it defines a generalization of the XPath concepts of nodes, node types, and node sets (as described in section 5.1) to the XPointer concepts of locations, location types, and location sets.6 As a reminder, nodes, node types, and node sets in XPath are used to describe concepts that can be identified as nodes in a document's tree representation, as described by the XML Infoset. XPath functionality, such as filtering an axis output by predicate, is generally de-fined in terms of operations on nodes and node sets (an exception are the string functions, but these are rather limited and always operate on strings within one text node).
XPointer's goal is to define a mechanism for XML fragment identifiers. A very common usage scenario is a user selecting arbitrary document content with a mouse and then wishing to have an XPointer generated that identifies exactly that content (e.g., to use the XPointer for creating a link pointing to that content). Since this selection can span multiple elements and furthermore may start in the middle of the text of one element and end in the middle of another, it is impossible to identify this content with XPath's constructs of nodes or strings. XPointer's solution to this problem is an extension of XPath's data model, described in section 6.1.1. To make the concepts of XPointer's data model easier to understand, we give some examples in section 6.1.2 of how this model maps to real-world scenarios.
6.1.1 XPointer Data Model
XPointer generalizes the concept of XPath nodes to locations, and, in essence, this generalization defines each location to be an XPath node, a point, or a range.7 The following definitions are taken from the XPointer specification8 and show how XPath's definition of a NodeType is extended by the concepts point and range:
Based on these definitions, XPointer also defines the location set as a generalization of XPath's node set. This definition allows XPath node tests to select locations of type point and range from a location set that might include locations of all three types. All locations generated by XPath constructs are nodes, but XPointer constructs can also generate points and ranges. The concepts of points and ranges are defined in the next two sections.
Point
A location of type point is defined by a node, called the container node, and a non-negative integer, called the index. It can represent the location preceding any individual character, or preceding or following any node in the information set constructed from an XML document. Two points are identical if they have the same container node and index. Each point can be either a node point or a character point, which are defined as follows:
Node point. If the container node of a point is of a node type that can have child nodes (that is, when the container node is an element node or a root node), then the index is an index into the child nodes, and.
Character point. When the container node of a point is of a node type that cannot have child nodes (i.e., text nodes, attribute nodes, namespace nodes, comment nodes, and processing instruction nodes), the point immediately after the nth character of the string value.
Figure 6.2 shows the relationship of container nodes, node points, and character points for an example of an element containing text, then another element (which also contains text), and then some more text.
Figure 6.2 Container nodes, node points, and character points. Reproduced
from the XPointer specification [DeRose+ 01a] by kind permission of Steven
DeRose, spring 2002.
XPointer's goal is to make XPath's concepts applicable to locations and not only to nodes, and thus the following properties for applying XPath's concepts to points are defined: The self and descendant-or-self axes of a point contain the point itself. The parent axis of a point is a location set containing a single location, the container node. The ancestor axis contains the point's container node and its ancestors. The ancestor-or-self axis contains the point itself, the point's container node, and its ancestors.
The child, descendant, preceding-sibling, following-sibling, preceding, following, attribute, and namespace axes of points are always empty.
Range
A range is defined by two points, a startpoint and an endpoint.9 A range represents all of the XML structure and content between the startpoint and the endpoint. A range whose start- and endpoints are equal is called a collapsed range. If the container node of one point of a range is a node of a type other than element, text, or root, then the container node of the other point of the range must be the same node.10 The axes of a range are identical to the axes of its startpoint.
As a side note, remember that node sets are only one of the object types defined in XPath (the others being boolean, number, and string), and the other XPath object types also exist in XPointer. However, there is one important exception, and this is the result type of a whole XPointer. While an XPath can evaluate to any object type (consider the rather simple XPath 2+3, which evaluates to a number), XPointer requires an XPointer to always evaluate to a location set (which is logical, given that there is no way that objects other than location sets could be interpreted as fragment identifiers pointing into documents).
In order to understand some of the functions XPointer provides, it is also essential to introduce the concept of the covering range. By definition, a covering range is a range that wholly encompasses a location. This means that the concept of a covering range can be applied to any location, whether it be a node, a point, or a range. Basically, a covering range is the smallest possible range covering a given location. In detail, this is defined as follows:
For a range location, the covering range is identical to the range.
For an attribute or namespace location (both node locations), the container node of the start point and end point of the covering range is the attribute or namespace location; the index of the startpoint of the covering range is 0; and the index of the endpoint of the covering range is the length of the string value of the attribute or namespace location.
For the root location (a node location), the container node of the startpoint and endpoint of the covering range is the root node; the index of the start point of the covering range is 0; and the index of the endpoint of the covering range is the number of children of the root location.
For a point location, the start- and endpoints of the covering range are the point itself.
For any other kind of location, the container node of the startpoint and endpoint of the covering range is the parent of the location; the index of the startpoint of the covering range is the number of preceding sibling nodes of the location; and the index of the endpoint is one greater than the index of the startpoint.
In summary, XPointer's extensions to XPath's data model include the extension of the concept of nodes and node sets to that of locations and location sets (with the location being a node, a point, or a range). XPointer also introduces the concept of a covering range to support mapping of locations to ranges. For each location, there is a well-defined covering range.
In addition to these extensions of XPath's data model, XPointer also extends the concept of document order, as introduced in section 5.1. In general, the concept is extended not only to arrange nodes in a well-defined order but also to include point and range locations. Figure 6.2 shows how the locations of a document fragment are arranged in XPointer's document order. For defining rules regarding how to determine document order, XPointer introduces the concept of an immediately preceding node and then uses it to define how every possible combination of the relevant location types (i.e., nodes, points, and ranges) have to be compared to establish the document order of these locations.
6.1.2 XPointer Data Model Examples
To make these abstract definitions more understandable, we give some examples of how the concepts of locations, points, ranges, and nodes relate to each other. Our scenario is a browser that allows a user to create XPointers by selecting content with a mouse and then using a menu option for generating an XPointer identifying that content.
Marking a Point Within a Document
The simplest use is to mark a point within the document and generate an XPointer for this point (e.g., for creating an XPointer that is attached to an e-mail saying "please insert your text here"). To do this, the browser generates an XPointer that is a point. Depending on where the point has been selected, it is either a node point (if it has been marked within the root node or an element node) or a character point (in all other cases). Depending on how the browser was implemented, di8erent XPointers that refer to the same point could be created. Consider the case where a user selects the point before the first character of a paragraph. Depending on the browser's implementation, this could result in the following:
a node point into the paragraph's parent element node, identifying the point before the paragraph child,
a node point with index zero within the paragraph's element node, or
a character point with index zero within the text node of the paragraph's text.
All these cases make sense. In the first, the point could be used to insert elements before the paragraph element. In the second, the point could be used to insert elements into the paragraph before the first character; while in the third, it could be used to insert characters before the first character within the paragraph's text. It is entirely dependent on the implementation and application how these cases should be treated. Indeed, one possibility would be for the browser to present the user with a choice as to which is the appropriate XPointer.11
Selecting text within one node is the equivalent of selecting two points,12 the start and the end point of the selection, that lie inside the same node (e.g., the same element, attribute, or comment). From the user's point of view, it may not be apparent whether or not the two points are located within the same node (e.g., if two elements are not visibly separated by any formatting); but for the browser, this is easy to determine. Based on the selection of the user, the browser generates an XPointer defining a range between the selected start and end points. Since in this case the selected text lies within one node, the two points can easily be used to construct an XPointer range, which spans between the two points.
Selecting Text That Spans a Number of Nodes
If the start and the end point lie in di8erent nodes, then the range defined by the two points also spans multiple nodes. According to the range constraints defined by XPointer, this is allowed only if the containing nodes of the startpoint and the endpoint are element, text, or root nodes (in all other cases, the start and endpoint must lie within the same node). A possible selection that spans a number of nodes in the case of an XHTML-like paragraph may select text that is all inside the paragraph but still spans multiple nodes. This may be because in between there are other element nodes (e.g., nodes representing emphasized text or hyperlinks). This situation e8ectively places the start and the endpoints in di8erent text nodes (making them both children of the same paragraph node).
To further generalize this scenario, the start point and the end point may occur in entirely di8erent subtrees of the document tree, for example, within a paragraph of the first chapter and in a table cell of the third chapter. This still would represent a valid range, spanning from the character point in the text node representing the first chapter's paragraph to the node point in the table row node directly after the selected table cell.
Making Multiple Selections
Even though today's user interfaces in most cases do not support this type of interaction, it would be perfectly reasonable to implement an interface that allows multiple non-contiguous selections. This could be used to create an XPointer that references multiple ranges within the same resource (e.g., attached to an e-mail saying "what do you think about these three statements?"). In a case such as this, it is mainly a question of the design of the user interface as to how multiple selections could be implemented in a user-friendly way. From XPointer's point of view, the multiple selections could be easily combined to yield a single location set.
The preceding scenarios illustrate some typical cases in which XPointer concepts are relevant.13 So far we have not discussed how exactly the hypothetical browser maps the concepts to actual XPointers, and as a first step toward resolving this issue, we first have to discuss the possible forms of XPointers. | https://www.informit.com/articles/article.aspx?p=29016&seqNum=4 | CC-MAIN-2021-10 | refinedweb | 3,554 | 55.98 |
Last, collections in Java, garbage collectors, exception handling, Java applets, Swing, JDBC, Remote Method Invocation (RMI), Servlets and JSP.
Let’s go…!
Table of Contents
- Object Oriented Programming (OOP)
- General Questions about Java
- Java Threads
- Java Collections
- Garbage Collectors
- Exception Handling
- Java Applets
- Swing
- JDBC
- Remote Method Invocation (RMI)
- Servlets
- JSP.
Object-oriented programming contains many significant features, such as encapsulation, inheritance, polymorphism and abstraction. We analyze each feature separately in the following sections.
Encapsulation
Encaps.
You can refer to our tutorial here for more details and examples on encapsulation.
Polymorphism
Polymorphism is the ability of programming languages to present the same interface for differing underlying data types. A polymorphic type is a type whose operations can also be applied to values of some other type.
Inheritance
Inheritance provides an object with the ability to acquire the fields and methods of another class, called base class. Inheritance provides re-usability of code and can be used to add additional features to an existing class, without modifying it.
Abstraction
Abstraction
Abstraction
1..
5. What are the Data Types supported by Java ? What is Autoboxing and Unboxing ? The eight primitive data types supported by the Java programming language are:
- byte
- short
- int
- long
- float
- double
- boolean
- char.
6. What is Function Overriding and Overloading in Java ?.
7. What is a Constructor, Constructor Overloading in Java and Copy-Constructor ?.
8. Does Java support multiple inheritance ? No, Java does not support multiple inheritance. Each class is able to extend only on one class, but is able to implement more than one interfaces.
9. What sub-class a main method.
Also check out the Abstract class and Interface differences for JDK 8.
10. What are pass by reference and pass by value ?.
Java Threads
11. What is the difference between processes and threads ? A process is an execution of a program, while a
Thread is a single execution sequence within a process. A process can contain multiple threads. A
Thread is sometimes called a lightweight process.
12. Explain different ways of creating a thread. Which one would you prefer and why ? There are three ways that can be used in order for a
Thread to be created:
- A class may extend the
Threadclass.
- A class may implement the
Runnableinterface.
- An application can use the.
13. Explain the available thread states in a high-level..
14. What).
15. How does thread synchronization occurs inside a monitor ? What levels of synchronization can you apply ?.
16. What’s a deadlock ? A condition that occurs when two processes are waiting for each other to complete, before proceeding. The result is that both processes wait endlessly.
17. How.
Java Collections
18. What are the basic interfaces of Java Collections Framework ?.
19. Why Collection doesn’t extend Cloneable and Serializable interfaces ?.
20. What is an Iterator ? The
Iterator interface provides a number of methods that are able to iterate over any
Collection. Each Java
Collection contains the
iterator method that returns an
Iterator instance. Iterators are capable of removing elements from the underlying collection during the iteration. 21. What differences exist between Iterator and ListIterator ?.
22. What is difference between fail-fast and fail-safe ?.
23. How HashMap works in Java ?.
24. What is the importance of hashCode() and equals() methods ? In Java, a
HashMap uses the
hashCode and
equals methods to determine the index of the key-value pair and to detect duplicates. More specifically, the
hashCode method.
25. What differences exist between HashMap and Hashtable ?.
26. What is difference between Array and ArrayList ? When will you use Array over ArrayList ? The
Array and
ArrayList classes differ on the following features:
Arrayscan contain primitive or objects, while an
ArrayListcan contain only objects.
Arrayshave fixed size, while an
ArrayListis dynamic.
- An
ArrayListprovides more methods and features, such as
addAll,
removeAll,
iterator, etc.
- For a list of primitive data types, the collections use autoboxing to reduce the coding effort. However, this approach makes them slower when working on fixed size primitive data types.
27. What is difference between ArrayList and LinkedList ?.
28. What is Comparable and Comparator interface ? List their differences. Java provides the
Comparable interface,.
29. What is Java Priority Queue ?.
30. What do you know about the big-O notation and can you give some examples with respect to different data structures ?.
31. What is the tradeoff between using an unordered array versus an ordered array ?).
32. What are some of the best practices relating to the Java Collection framework ?
-.
33. What’s the difference between Enumeration and Iterator interfaces ?.
34.).
Garbage Collectors
35. What is the purpose of garbage collection in Java, and when is it used ? The purpose of garbage collection is to identify and discard those objects that are no longer needed by the application, in order for the resources to be reclaimed and reused.
36. What does System.gc() and Runtime.gc() methods do ? These methods can be used as a hint to the JVM, in order to start a garbage collection. However, this it is up to the Java Virtual Machine (JVM) to start the garbage collection immediately or later in time.
37. When is the finalize() called ? What is the purpose of finalization ? The finalize method is called by the garbage collector, just before releasing the object’s memory. It is normally advised to release resources held by the object inside the finalize method.
38. If an object reference is set to null, will the Garbage Collector immediately free the memory held by that object ? No, the object will be available for garbage collection in the next cycle of the garbage collector.
39. What is structure of Java Heap ? What is Perm Gen space in Heap ?.
40.).
41. When does an Object becomes eligible for Garbage collection in Java ? A Java object is subject to garbage collection when it becomes unreachable to the program in which it is currently used.
42. Does Garbage collection occur in permanent generation space in JVM ?.
Exception Handling
43. What are the two types of Exceptions in Java ? Which are the differences between them ?.
44. What is the difference between Exception and Error in java ?
Exception and
Error classes are both subclasses of the
Throwable class. The
Exception class is used for exceptional conditions that a user’s program should catch. The
Error class defines exceptions that are not excepted to be caught by the user program.
45. What is the difference between throw and throws ?.
45..
46. What will happen to the Exception object after exception handling ? The
Exception object will be garbage collected in the next garbage collection.
47. How does finally block differ from finalize() method ?.
Java Applets
48. What is an Applet ? A java applet is program that can be included in a HTML page and be executed in a java enabled client browser. Applets are used for creating dynamic and interactive web applications.
49. Explain the life cycle of an Applet. An applet may undergo the following states:
Init: An applet is initialized each time is loaded.
Start: Begin the execution of an applet.
Stop: Stop the execution of an applet.
Destroy: Perform a final cleanup, before unloading the applet.
50. What happens when an applet is loaded ? First of all, an instance of the applet’s controlling class is created. Then, the applet initializes itself and finally, it starts running.
51. What is the difference between an Applet and a Java Application ?.
52. What are the restrictions imposed on Java applets ? Mostly due to security reasons, the following restrictions are imposed on Java applets:
- An applet cannot load libraries or define native methods.
- An applet cannot ordinarily read or write files on the execution host.
- An applet cannot read certain system properties.
- An applet cannot make network connections except to the host that it came from.
- An applet cannot start any program on the host that’s executing it.
53. What are untrusted applets ? Untrusted applets are those Java applets that cannot access or execute local system files. By default, all downloaded applets are considered as untrusted.
54. What is the difference between applets loaded over the internet and applets loaded via the file system ?.
55. What is the applet class loader, and what does it provide ?.
56..
Swing
57. What is the difference between a Choice and a List ?.
58. What is a layout manager ? A layout manager is the used to organize the components in a container.
59. What is the difference between a Scrollbar and a JScrollPane ? A
Scrollbar is a
Component, but not a
Container. A
ScrollPane is a
Container. A
ScrollPane handles its own events and performs its own scrolling.
60. Which Swing methods are thread-safe ? There are only three thread-safe methods: repaint, revalidate, and invalidate.
61. Name three Component subclasses that support painting. The
Canvas,
Frame,
Panel, and Applet classes support painting.
62. What is clipping ? Clipping is defined as the process of confining paint operations to a limited area or shape.
63. What is the difference between a MenuItem and a CheckboxMenuItem ? The
CheckboxMenuItem class extends the
MenuItem class and supports a menu item that may be either checked or unchecked.
64. How are the elements of a BorderLayout organized ? The elements of a
BorderLayout are organized at the borders (North, South, East, and West) and the center of a container.
65. How are the elements of a GridBagLayout organized ? The elements of a
GridBagLayout are organized according to a grid. The elements are of different sizes and may occupy more than one row or column of the grid. Thus, the rows and columns may have different sizes.
66. What is the difference between a Window and a Frame ? The
Frame class extends the Window class and defines a main application window that can have a menu bar.
67. What is the relationship between clipping and repainting ? When a window is repainted by the AWT painting thread, it sets the clipping regions to the area of the window that requires repainting.
68. What is the relationship between an event-listener interface and an event-adapter class ? An event-listener interface defines the methods that must be implemented by an event handler for a particular event. An event adapter provides a default implementation of an event-listener interface.
69. How can a GUI component handle its own events ? A GUI component can handle its own events, by implementing the corresponding event-listener interface and adding itself as its own event listener.
70..
71. What is the design pattern that Java uses for all Swing components ? The design pattern used by Java for all Swing components is the Model View Controller (MVC) pattern.
JDBC
72. What is JDBC ? JDBC is an abstraction layer that allows users to choose between databases. JDBC enables developers to write database applications in Java, without having to concern themselves with the underlying details of a particular database.
73..
74. What is the purpose Class.forName method ? This method is used to method is used to load the driver that will establish a connection to the database.
75. What is the advantage of PreparedStatement over Statement ? PreparedStatements are precompiled and thus, their performance is much better. Also, PreparedStatement objects can be reused with different input values to their queries.
76. What is the use of CallableStatement ? Name the method, which is used to prepare a CallableStatement.();
77. What does Connection pooling mean ?.
Remote Method Invocation (RMI)
78. What is RMI ?.
79. What is the basic principle of RMI architecture ?.
80. What are the layers of RMI Architecture ?.
81. What is the role of Remote Interface in RMI ?.
82..
83. What is meant by binding in RMI ?.
84. What is the difference between using bind() and rebind() methods of Naming Class ? The bind method bind is responsible for binding the specified name to a remote object, while the rebind method is responsible for rebinding the specified name to a new remote object. In case a binding exists for that name, the binding is replaced.
85. What are the steps involved to make work a RMI program ? The following steps must be involved in order for a RMI program to work properly:
- Compilation of all source files.
- Generatation of the stubs using rmic.
- Start the rmiregistry.
- Start the RMIServer.
- Run the client program.
86. What is the role of stub in RMI ?:
- It initiates a connection to the remote JVM containing the remote object.
- It marshals the parameters to the remote JVM.
- It waits for the result of the method invocation and execution.
- It unmarshals the return value or an exception if the method has not been successfully executed.
- It returns the value to the caller.
87. What is DGC ? And how does it work ?.
88..
89. Explain Marshalling and demarshalling. When an application wants to pass its memory objects across a network to another host or persist it to storage, the in-memory representation must be converted to a suitable format. This process is called marshalling and the revert operation is called demarshalling.
90. Explain Serialization and Deserialization..
Servlets
91. What is a Servlet ?.
92. Explain the architechure of a Servlet..
93. What is the difference between an Applet and a Servlet ?.
94. What is the difference between GenericServlet and HttpServlet ?.
95. Explain the life cycle of a Servlet..
96. What is the difference between doGet() and doPost() ?.
97. What is meant by a Web Application ?.
98. What is a Server Side Include (SSI) ?.
99..
100.. See example here.
101. What is the structure of the HTTP response ?.
102. What is a cookie ? What is the difference between session and cookie ?.
103. Which protocol will be used by browser and servlet to communicate ? The browser communicates with a servlet by using the HTTP protocol.
104. What is HTTP Tunneling ?.
105. What’s the difference between sendRedirect and forward methods ?.
106. What is URL Encoding and URL Decoding ? The URL encoding procedure is responsible for replacing all the spaces and every other extra special character of a URL, into their corresponding Hex representation. In correspondence, URL decoding is the exact opposite procedure.
JSP
107. What is a JSP Page ?.
108. How are the JSP requests handled ?.
109. What are the advantages of JSP ?.
110. What are Directives ? What are the different types of Directives available in JSP ?.
111. What are JSP actions ?.
112. What are Scriptlets ? In Java Server Pages (JSP) technology, a scriptlet is a piece of Java-code embedded in a JSP page. The scriptlet is everything inside the tags. Between these tags, a user can add any valid scriplet.
113. What are Decalarations ? Declarations are similar to variable declarations in Java. Declarations are used to declare variables for subsequent use in expressions or scriptlets. To add a declaration, you must use the sequences to enclose your declarations.
114. What are Expressions ? A JSP expression is used to insert the value of a scripting language expression, converted into a string, into the data stream returned to the client, by the web server. Expressions are defined between
<% = and %> tags.
115. What is meant by implicit objects and what are they ? JSP implicit objects are those Java objects that the JSP Container makes available to developers in each page. A developer can call them directly, without being explicitly declared. JSP Implicit Objects are also called pre-defined variables.The following objects are considered implicit in a JSP page:
application
page
request
response
session
exception
out
config
pageContext
Still with us? Wow, that was a huge article about different types of questions that can be used in a Java interview. If you enjoyed this, then subscribe to our newsletter to enjoy weekly updates and complimentary whitepapers! Also, check out our courses for more advanced training!
So, what other Java interview questions are there? Let us know in the comments and we will include them in the article! Happy coding!
That list seems really good. I study computer science and it’ll definitely come in handy once I start looking for a job.
One small error on hashCode and equals: if 2 instances are equal, they must provide the same hashCode, BUT two different instances may return the same hashCode.
Having hashCode always return 1 or some other fixed number satisfies the contract, although it would kill the performance of HashMap and other hash related functionality.
The hashCode just determines in which hashmap bucket the instance is put, and within that bucket the correct instance is then looked up with the much more expensive equals.
perfect post, thanks for sharing.
Nice Post.Thanks for sharing this with us.
This is correct, but many topics are outdated here.
Yeah, there should be some questions about Java 8 features. It will tell if the candidate puts any effort into self improvement.
Really perfect post, thanks for sharing with us.
This is a quality peace of work. Thank you for the effort.
Thanks to the author
I appreciate your efforts to write everything in one place
good very useful to all fresh engineering graduates and experianced to get into job easily.
Thanks a lot, it is very useful
Very good work. It would be really great if you could add more questions especially on the new versions of Java. Also, few implementation examples can be posted for different concept. You have done a great job.
It is very useful for job seekers to get a great job. thanks for sharing with us.
Thanks so much for this compilation. It is very useful.
Thanks, but found some inaccuracies:
Q33 Other threads are able to modify Collection of some Iterator, but next called method on Iterator will throw ConcurrentModificationException
Q37 It is recommended not to override finalize() method in order to release resource as JVM does not guarantee this method to be invoked.
Thanks, I refreshed my java knowledge once again.
Thanks … very useful
Thanks to the author ! Keep updating !
nice job. Thank You. This tutorial is very useful for people who seek job in programming. I just love coding.
nice infrmation
nice infrmation
Super…information
Very useful to us
Thanks a lot for your comments!
very nice info shared. I am preparing for interviews and it is helping me a lot. how to download these questions pdf?
Great source of updated info. Keep it up. Thanks.
Thanks a lot!
where is the link to download this list?
link for download
Your answer to the HashMap question (24) is wrong. A HashMap works even if the keys all return hashcode 1. Hashcode defines the buckets, equals is used for equality checking. There are more problems with both the questions and the answers.
Thanks a lot for your feedback, I corrected my response in question 24.
Moreover, you are more than welcome to discuss your problems about both the questions and answers.
Where is the download Link?
where can we download this?plz send me the link on my mail id.
thanks
There is a detailed explanation of HashMap internals at and JDK6 source code is at.
Rainer is correct in that HashMap will accept multiple keys with the same hashCode. I tested this with a class where all instances have a hashCode of 1. To some extent HashMap compensates for this by hashing the hashCodes of keys. Multiple keys can have the same hashCode since it is used to determine the bucket only in which is stored a list of Entity objects which are distinguished by the result of equals on the values. The Entity class is defined as an implementation of Map.Entry with the addition of a next field which is included in its constructor and used for iteration.
Thanks a lot for your feedback, I corrected my response in question 24.
Hi,
You should correct the answer to the 13 question (Explain the available thread states in a high-level? ). It is quite wrong.
It does not exist a state called Running or Sleepeing.
Take a look:
BR
I was describing the states of a thread in the perspective of the operating system, but it was not clear.
I updated my response, in order to describe the states of a thread, as being regarded by the Java Virtual Machine (JVM).
Question 39 actually includes two questions the second of which is not answered. Please answer this second question, which is “What is Perm Gen space in Heap?”. Thanks
Regarding question 39 “What is Perm Gen space in Heap?”, according to one reply at, “In Sun’s JVM, the permanent generation is not part of the heap. It’s a different space for class definitions and related data, as well as where interned strings live.” The Java VM Specification Java SE7 Edition () does not mention “permanent generation”, “Perm Gen”. or “PermGen” (case invariant). However it does mention the Method Area in terms that resemble Perm Gen functionality and according to a reply at, the Method Area could be considered to be a subset of Perm Gen. This induces me think that Perm Gen is a logical construct that maps to more than one real memory area or subarea.
Also, Perm Gen implementation may depend on the specific JVM. For example, according to a reply at, “JVMs like JRocket don’t have PermGen at all, everything is stored in heap. Only in such context can you call PermGen a “special part” of heap.”
For another example, in Oracle’s Java 8 JVMs, PermGen is replaced by Metaspace, according to. However neither “metaspace”, “Perm Gen”, or “PermGen” (case invariant) are mentioned in the Java VM Specification Java SE8 Edition (). I conclude that Metaspace is a logical construct like Perm Gen which has a JVM option interface and corresponds to operations over more than one memory areas or regions in them.
It would be useful to clarify the distinction between actual JVM run-time areas, as documented in the JVM specification manuals, and logical mappings to them made available through JVM options. But this would be difficult to do exhaustively and authoritatively due to a probable lack of detailed documentation and difficulty in understanding JVM source code if it is available. Some useful clues might be obtained by inspecting OpenJDK sources which are available at.
However, I believe that this level of knowledge is not necessary for most Java developer positions and would apply only only for Java JVM developer positions.
thank you!
just Awesome
Where is the PDFlink?
Hello! sir.. I want to have some knowledge about what is some when a program is run…How the object is created ??? How the loaders work?? Everything. Any help we be praised.
where is the link?
Question 37. It’s a very bad idea to release resources in finalize() method because there is no garanties that this method will be invoked.
This is true. It is best to avoid the finalize method as its behavior is unreliable
Most efficient for the guys.. who studies a day before Viva……thanks man!
really helpful. thnk you so much
awsome work ,it realy worthfull for every one …thank u.
Awesome..The way I needed. Thanks heaps
Nice one mate. Very informative
great work.. i wish if you could include the questions about struts and hibernate also!
115 Java Interview Questions and Answers – The ULTIMATE List (PDF Download)”
How and where to download ?
Where is the link to download the PDF?
I am already a subscriber. #help
So I’m not alone in fact that there is no (promised) pdf around?
btw the list is really good
Awesome content. Very usefull and interesting.
Perfect start for beginners. I also find this video very useful for freshers. Covers the important interview questions in video format.
Where is the pff if you already a member? Hope this time, I would get an answer for my question
very nice job, thank you a lot. it will be helpful for my comming interveiws
I would like to know if this post helped on your interviews.
Hey Kevin, i would like to know if this post helped on your interviews.
Very nice questions…. very helpful…
It is a nice collection of questions. Thank you a lot.
Nice collection. A few comments:
– Question 4>A static method may access an instance variable, but only if gets access to an instance of a class
static int mymethod(A a) { return a.instanceVar + 2; }
– Question 8> Technically, Java does permit a form of multiple inheritance, namely multiple inheritance of type. See here for more info
– Question 44> “excepted” should be “expected”
Very nice collection of interview questions…. very helpful for interview…
Really, where is the PDF Link ?
It’s been asked for many times but there is no reply.
Yoosoof…really ? Try to be grateful for this site instead of making demands…really.
As for the others complaining it is outdated – why dont you come up with a site yourselves.
Thanks Maneas !
This is not compliant for the site.
Exact opposite, I’m one of the fans of the site and many articles published on it.
I just tried to find out why I can not reach PDF versions of some articles, even having membership of newsletter. But I could not see any kind of explanation that Byron noticed with below comment.
This is why I wrote my comment.
I think, wahoo, you should try to be more understanding about what people are trying to say.
Hello all,
In order to get the eBook – the PDF version of this article – you have to subscribe to our newsletter. This is clearly stated at the beginning of the post where the subscription form resides.
If you are already a member of our newsletter and did not receive the eBook, please shoot us an email at our support address and we will help you out.
Best regards,
Byron
Thanks Byron,
You gave the exact answer of my question.
As I mentioned in the reply of wahoo’s comment, I am already a member of newsletter, and got many PDF versions of articles.
But for some articles, like this one, I could’t get it.
I will send an email.
Regards
Please mail me the pdf for the Java interview Questions PDF.
Nice article…
Please mail me the pdf of Java interview questions
good effort its really helpful
Thank you so much! It should be incredibly useful for job interviews.
Thanks a lot guys
Very good post…thank you very much
Good one
Very, very good list of questions and answers. Good job!
Good Collection of Java Interview questions.
Well written and to the point. Thank you!
Give a compelling (technical) argument why the tit-for-tat policy as used in BitTorrent is far from optimal for file sharing in the Internet
thanks… it helped a lot
I am really thankful to the author for a such a great resource. Tomorrow I have an interview for JAVA So this guide would be really helpful. Inspired from your article if you do guest posting do let me know.
Good Job
You didn’t send the link to the PDF file for the ebook….please do so.
Very good post…thank you very much
Does ny one know what kind of interview questions can be asked from 10+ yrs of exp in java.any sample collection?
regist
can I get the PDF?
where is my pdf version please?
Why there are no questions about design? I always ask about Design Patters on Java job interviews.
The question about use of MVC in Swing doesn’t make much sense. The question should be “Explain what is MVC and how it works. Why to use it?”
Kindly send me the PDF version of this article… I need it urgently…
Very nice Information….Carry On…
Thank You
Thank u sir!!
for your post!!
Really really good……super guys……..
Hello,
Thanks for this informative informative as I have a Java interview Monday
Awesome list of posts. Thanks for sharing. I Found one android app as well that contains similar questions and answers. Must try out.
Q6. Overridden methods must have the same name, argument list, and return type.
It doesn’t need to have the same type. It might be a subtype.
Hi, Can you please help me getting the most commonly asked Java Interview Question with their answers or best possible answers. Thanks
This is beautiful for a last minute review before an interview for Java. It also serves as good indexing mechanism to read up on. This post is phenomenal!
awesome! post, Thank you so much such for sharing such an informative artical
Thank you for your post. It’s really helpful.
thank u sir to provide all question in one page really this is very useful for me again thank u
thank you so much for this info after revising this i felf like an expert to my friends as i was able to answer every question fluently to them, nice work…
Very great Java material ! And how can I download the pdf version of this ?
Thanks for sharing!!
very well written and structured , Keep up the good work , Thank you for the effort and sharing .
Thanks a lot, very good information provided.
Can you please send me the PDF.
Thank you
How can i download the pdf? I subscribed before.
package com;
import java.util.ArrayList;
import java.util.List;
public class ABPTest {
public static void main(String[] args) {
// TODO Auto-generated method stub
}
}
abstract class ABP{
//INPUT
private int serachDepth;
private Node root;
abstract public int getVal(Node p, Node root);//TO-DO******************************
//OUTPUT
List nextMoves = new ArrayList();
//Alpha Beta Pruning Function
void f(Node p){
if(p.d == serachDepth ||p.isLeaf() ){
if(p.d%2==0){//MAX
p.v=Math.max(p.v,getVal(p,getRoot()));
}else{//MIN
p.v=Math.min(p.v,getVal(p,getRoot()));
}
return;
}
//DFS children
for(Node c:p.getChildren()){
c.d=p.d+1;
if(c.d%2==0){//MAX
c.v=Neg_Infinity;
}else{//MIN
c.v=Pos_Infinity;
}
c.a=p.a;
c.b=p.b;
f(c);
if(p.d%2==0){//MAX
p.v=Math.max(p.v,c.v);
p.a=Math.max(p.a,c.v);
if(p.d==0){nextMoves.add(c);}
if(p.v>=p.b){break;}
}else{//MIN
p.v=Math.min(p.v,c.v);
p.b=Math.min(p.b,c.v);
if(p.d==0){nextMoves.add(c);}
if(p.v<=p.a){break;}
}
}
}
public int getSerachDepth() {
return serachDepth;
}
public void setSerachDepth(int serachDepth) {
this.serachDepth = serachDepth;
}
public Node getRoot() {
return root;
}
public void setRoot(Node root) {
this.root = root;
}
public List getNextMoves() {
return nextMoves;
}
public void setNextMoves(List nextMoves) {
this.nextMoves = nextMoves;
}
int Pos_Infinity = 999999999;
int Neg_Infinity = -999999999;
}
abstract class Node{
int v;//Heuristic Value
int a;//Alpha
int b;//Beta
int d;//Depth
Object board;
abstract boolean isLeaf();//TO-DO***************************
abstract List getChildren();//TO-DO***************************
}
Nice list. Are you sure this is correct?
The method that prepares a CallableStatement is the following:
CallableStament.prepareCall ();
I didn’t see a static version of this call in the class, and all examples online show it being made by calling prepareCall () on a Connection instance, like this:
conn.prepareCall (SQLString);
nice
I Refreshed my java knowledge.
thank U!!!
Nice post!!
Just wanted to point out Java actually has 4 access modifiers, default, public, private, protected. Default is forgotten a lot of the times, I recently forgot about it in a recent interview. Cheers.
where is the pdf dl link?
Hello Twinkle,
You need to join our newsletter here:
After you enter a VALID email address, you will need to verify your subscription.
A verification email will be sent to you and after you successfully verify your email,
you will get a final welcome email which will contain the links to download the ebooks!
Very Handy before attending a interview.
Great Job author.
Keep continuing the good work.
Can we expect any Docker and Kubernetes configuration and set-up in future.
if i had seen that note before, i would have the dream work. Thank you man.
Thank you very much.
Those questions are very useful.
They helped me to get hired.
Nice information! Thanks for sharing.
One of these questions, CyclicBarrier vs CountDownLatch are asked to me on screening round, I managed to answer it well, thanks to internet and you guys.
How can I get the PDF? I had subscribed….
Greate JOB!
For PDF: In chrome browser, hit Ctrl+P and select simplified version. You get the needed info only.
I found those questions and answers very useful, they are very good for revising knowledge. Thank you for the effort and sharing. Let the code be with you :). | https://www.javacodegeeks.com/2014/04/java-interview-questions-and-answers.html | CC-MAIN-2016-50 | refinedweb | 5,425 | 68.06 |
Workflow 5: Explore NOAA ObsPack#
The NOAA ObsPack products are collections of observation data from many sites which have been collated and standardised. ObsPack data products are prepared by NOAA in consultation with data providers. If you’re using the OpenGHG Hub we cache the NOAA ObsPack data to make retrieval quick and easy. Below we will demonstrate how NOAA ObsPack data can can be explored and plotted.
1. Search, retrieve and plot#
Ww can query the object store and find all the flask data for example
from openghg.client import search search(species="ch4", measurement_type="flask", data_type="NOAA", network="NOAA")
Or we can do an all in one search and retrieve using
get_obs_surface. Here we find CH4 data from Estevan Point, British Columbia, retrieve it and plot it.
from openghg.client import get_obs_surface data = get_obs_surface(site="HPB", species="ch4", network="NOAA")
As there isn’t any ranking data set (see tutorial 2)
get_obs_surface doesn’t know which inlet to select, we need to tell it.
data = get_obs_surface(site="HPB", species="ch4", inlet="93m", data_type="NOAA")
data.plot_timeseries() | https://docs.openghg.org/tutorials/cloud/5_Explore_NOAA_ObsPack.html | CC-MAIN-2022-33 | refinedweb | 178 | 53.61 |
We can start by writing a brute force algorithm that tries to enumerate all possible valid configurations of crosswalks. If we think about how to build this up in a brute force manner, we could start by considering the cow in field $N$ on the top side. We could either build a crosswalk with it or ignore it. If we ignore it, then we have to consider the leftmost $N-1$ fields on the top side and the leftmost $N$ fields on the right side. If we do build a crosswalk though, we should use the rightmost field that would still be a valid crosswalk. After building it, we have the leftmost $N-1$ fields in the top side and, assuming that the crosswalk goes to field $i$ on the bottom side, the leftmost $i-1$ fields on the bottom side.
There could be exponentially many configurations of crosswalks, but for a fixed pair $(a, b)$ of having the leftmost $a$ fields on the top side and leftmost $b$ fields on the bottom side available for building crosswalks, we just have to maintain the maximum number of crosswalks that can be built using just those fields.
Here is Brian Dean's code, which does this iteratively instead of recursively.
#include <iostream> #include <fstream> #include <algorithm> using namespace std; int N; int A[1000][1000]; int S[1000], T[1000]; int main(void) { ifstream fin("nocross.in"); ofstream fout("nocross.out"); fin >> N; for (int i=0; i<N; i++) fin >> S[i]; for (int i=0; i<N; i++) fin >> T[i]; A[0][0] = abs(S[0]-T[0])<=4; for (int i=1; i<N; i++) A[i][0] = max(A[i-1][0], (int)(abs(S[i]-T[0]) <= 4)); for (int i=1; i<N; i++) A[0][i] = max(A[0][i-1], (int)(abs(S[0]-T[i]) <= 4)); for (int i=1; i<N; i++) for (int j=1; j<N; j++) A[i][j] = max( max(A[i-1][j], A[i][j-1]), A[i-1][j-1]+(abs(S[i]-T[j])<=4)); fout << A[N-1][N-1] << "\n"; return 0; } | http://usaco.org/current/data/sol_nocross_gold_feb17.html | CC-MAIN-2018-17 | refinedweb | 362 | 70.77 |
synchronator
Trying Synchronator from Github.
How can I connect this module with Pythoniata?
import DropboxSetup
ImportError: No module named 'DropboxSetup'
Thanky.
- markhamilton1
WARNING
Please exercise caution using anything dropbox related at this time. It appears that the people that maintain the requests library have introduced changes that are not backwards compatible and the dropbox package has not been updated to support the latest version of requests.
In addition the new version of requests is attempting to reference a version of urllib3 that StaSH does not have access to or is not in the repo stash is using.
All of this leads to applications like Synchronator being broken until all of these libraries are made consistent with each other.
If someone knows how to fix these issues please let me know, otherwise I am in communication with the developers at requests and dropbox trying to get this resolved.
Can you use an older version of requests?
@markhamilton1 thank you for your diligence in fereting out solutions to the compatibility issues.9
I
Updated dropbox and requests modules before reading your warning consequently Synchronator and DropboxSetup are broken. Do you have a suggestion?
can anyone post tracebacks of the issues?
I will point out that I was able to run synchronator, and it seemed to be working. I used the requests and dropbox modules installed in the beta (2.9.0 and 6.4.0 respectively). The script ran, and started copying files, though I cancelled it before it got too far.
I am a power user of Synchronator as I do cross development on Windows/Pythonista.
I just checked my versions of requests and dropbox, which work fine in my case:
requests: 2.18.1
dropbox: 8.0.0 | https://forum.omz-software.com/topic/4430/synchronator/? | CC-MAIN-2022-40 | refinedweb | 288 | 65.83 |
GIS Library - Rename file functions. More...
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <grass/gis.h>
Go to the source code of this file.
GIS Library - Rename file functions.
(C) 2001-2008 by the GRASS Development Team
This program is free software under the GNU General Public License (>=v2). Read the file COPYING that comes with GRASS for details.
Definition in file rename.c.
Rename a database file.
The file or directory oldname under the database element directory in the current mapset is renamed to newname.
Bug: This routine does not check to see if the newname name is a valid database file name.
Definition at line 62 of file rename.c.
References G__file_name(), G__name_is_fully_qualified(), G_mapset(), and G_rename_file().
Referenced by Vect_rename().
Rename a file in the filesystem.
**The file or directory oldname is renamed to newname.
Definition at line 35 of file rename.c.
Referenced by G_rename(). | http://grass.osgeo.org/programming6/rename_8c.html | crawl-003 | refinedweb | 153 | 64.17 |
Prerequisites
Basic knowledge of JavaScript and Vue.js will help you get the best out of this tutorial. To make it easy for everyone to follow along, I will endeavor to break down any complex implementation. In addition, you will also need to ensure that you have Node runtime and npm installed on your computer. Click here to install Node and follow this link to install npm if otherwise.
Introduction
Irrespective of the size of your web application, a voice, and video chat feature is an addon that will not only allow your users to communicate in real time, have a face to face interaction or meeting without necessarily being in the same location or region but also improves the engagement and interactivity of your application. While the implementation of voice and video chat might sound so cool, trust me, you don’t want to build this from scratch. This is where an awesome tool like CometChat really shines.
So rather than build a backend for your chat application from scratch, you can easily build the entire functionalities using CometChat API which will enable you to build communication features like voice and video chat in real-time.
Together in this tutorial, we will build a voice and video chat application by leveraging on some of the awesome API made available by CometChat. You will be able to run this application on two separate windows(browser) locally and have the ability to make, receive and reject a call amongst other things after a successful implementation. Once we are done, you would have built an application similar to:
This application will be built with Vue.js and CometChat Pro SDK. The complete source code for this tutorial can be found here on GitHubif you will prefer to head straight into the code.
Getting started
To begin, we will create and install a new Vue.js application using an awesome tool named Vue CLI. This is a standard tool created by the team at Vue.js to help and allow developers to quickly scaffold a new project without hassle. Run the following command from the terminal to install it globally on your computer:
npm install -g @vue/cli
Once the installation is complete, proceed to use the vue command to create a new Vue.js project as shown here:
vue create comet-voice-video
Choose the “manually select features” options by pressing Enter on your keyboard and check the features you will be needing for this project by pressing space on your computer to select one. As shown below you should select Babel, Router, and Linter / Formatter:
For other instructions, type y to use history mode for the router. Ideally, the default mode for Vue-Router is hash(#) mode as it uses the URL hash to simulate a full URL so that page won’t be reloaded when the URL changes. Choosing history mode here will help to get rid of the hash mode in order to achieve URL navigation without a page reload and add this configuration to the router file that will be generated automatically for this project. In addition, select ESLint with error prevention only in order to pick a linter / formatter config. Next, select Lint on save for additional lint features and save your configuration in a dedicated config file for future projects. Type a name for your preset, I named mine vuecomet:
Immediately after the configuration, Vue CLI will start the installation of the application and install all its required dependencies in a new folder named comet-voice-video.
Start the application
Now that the installation of the new application is completed, move into the new project and start the development server with:
// move into the app cd comet-voice-video // start the server npm run serve
View the welcome page of the application on:
In addition, since we will be depending on CometChat Pro to easily build our application, let’s install the SDK before proceeding with the video chat implementation. Stop the development server from running by hitting CTRL + C on your machine and run the following command from the project directory:
npm install @cometchat-pro/chat --save
Now we can easily import CometChat object wherever we want to use CometChat within our application like this:
import { CometChat } from '@cometchat-pro/chat';
Create the CometChat Pro account, APP ID and API Key
Since we will be leveraging on the hosted service of CometChat to successfully build our voice and video chat application, head over to the website here and create a free CometChat Pro account. Fill in all the required information to set up a trial account.
Log in to view your CometChat dashboard and let’s create a new project. This will give us access to a unique APP ID and an API Key
In the ‘Add New App’ dialog, enter a name and click the plus sign to create a new application. Once you are done, click on the Explore button for the new app created. You will be redirected to a new page as shown below:
Next, from the left side menu, go to “API Keys” tab and you will see a page similar to this:
Immediately after you created a new application from your dashboard, CometChat automatically generated an API Key for the new demo application for you. You don’t need to create a new one as this will suffice and give you full access to the functionality offered by CometChat. Don’t forget to note or better still, copy the automatically-generated Full access API Key and application ID as we will need these shortly.
Now that we are done setting up all the necessary tools and credentials needed to successfully create our application, we will start building properly in a bit.
What we want to achieve
Before we start building the application properly, let’s quickly discuss the application structure and how we intend to structure the flow.
Basically, we want users to log in from different locations and be able to chat using voice and video once we host our application on a live server, but for the sake of this tutorial, we will use two different windows locally. Once the user logs in:
we will redirect to a different page where he or she can input the UID of another user and start a video chat. Each user of CometChat is uniquely identified using his or her UID, you can consider this or relate it with a typical unique primary ID of the user from your database, which gives an opportunity to identify such user:
Initialize CometChat
To begin, the typical workflow when using CometChat requires that the settings for CometChat must be initialized by calling the init() method before any other method from CometChat. To start, create a new file named .env within the root directory of the application and paste the following code in it:
// .env VUE_APP_COMMETCHAT_API_KEY=YOUR_API_KEY VUE_APP_COMMETCHAT_APP_ID=YOUR_APP_ID
This will make it very easy to reference and use our application credentials within our project. Don’t forget to replace
YOUR_API_KEY and
YOUR_APP_ID placeholder with the appropriate credentials as obtained from your CometChat dashboard.
Next, navigate to ./src/App.vue file which is the root component for Vue.js applications and replace its content with:
// ./src/App.vue <template> <div id="app"> <router-view/> </div> </template> <script> import { CometChat } from "@cometchat-pro/chat"; import "./App.css"; export default { data() { return {}; }, created() { this.initializeApp(); }, methods: { initializeApp() { var appID = process.env.VUE_APP_COMMETCHAT_APP_ID; CometChat.init(appID).then( () => { console.log("Initialization completed successfully"); }, error => { console.log("Initialization failed with error:", error); } ); } } }; </script>
What we have done here is to include the functional component that will render any matched component for a given path fromVue Router. We will configure the router later in this tutorial. Next, within the section, we imported the CometChat object and a CSS file that we will create next. Lastly, we initialize CometChat by passing the application ID as a parameter.</p> <p>Now create a new file named <code>App.css</code> within <code>./src/App.css</code> and paste the following content in it:</p> <p></p> <div class="highlight"><pre class="highlight plaintext"><code>// ./src/App.css @import ''; @import ''; ; } #auth { width: 600px; margin: 0 auto; } #callScreen { width: 500px; height: 500px; } .home { width: 600px; margin: 0 auto; } </code></pre></div> <p></p> <p>We imported the CDN files for <a href="">Bootstrap</a> and <a href="">Font awesome</a> and then proceeded to add some default style for the application. Feel free to modify this content as you deem fit.</p> <h3> <a name="login-component" href="#login-component" class="anchor"> </a> Login component </h3> <p>One of the key concept when building chat applications with CometChat is to ensure that users are authenticated before they can have access to use CometChat and start a chat. To ensure this, we will create a Login component that will handle the logic for authenticating a user and redirecting such user to the appropriate page for a chat.</p> <p>To begin, create a new folder named auth within the views folder and within the newly created folder, create a new file and call it <code>Login.vue</code>. Open this new file and paste the following contents:</p> <p></p> <div class="highlight"><pre class="highlight plaintext"><code>// ./src/views/auth/Login.vue <template> <div id="auth"> <div id="nav"> <router-linkLogin</router-link> </div> <p> Enter your username to start video chat </p> <p>Create an account through your CometChat dashboard or login with one of our test users (superhero1, superhero2)</p> <form v-on:submit. <div class="form-group"> <input name="username" id="username" class="form-control" placeholder="Enter your username" v- </div> <div class="form-group"> <button type="submit" class="btn btn-success"> Login <span v-</span></button> </div> </form> </div> </template> <script> import { CometChat } from "[@cometchat]()-pro/chat"; export default { data() { return { username: "", showSpinner: false }; }, methods: { authLoginUser() { var apiKey = process.env.VUE_APP_COMMETCHAT_API_KEY; this.showSpinner = true; CometChat.login(this.username, apiKey).then( () => { this.showSpinner = false; this.$router.push({ name: "home" }); }, error => { this.showSpinner = false; console.log("Login failed with error:", error.code); } ); } } }; </script> </code></pre></div> <p></p> <p>What we have done here is pretty simple. First, we included an HTML form and added an input field that will accept the username of a user during the authentication process. Once the form is submitted, it will be processed using a method named authLoginUser().</p> <p>Next within the <script> tag, we imported the CometChat object and created the authLoginUser() method attached to the form. Within the authLoginUser() method, we called the login() method from CometChat and passed our application API and the username to uniquely identify the user.</p> <p>Once the user logs into the application, we used the <a href="">Vue Router</a> to redirect the user to the home page which will be handled by a different component name HomeComponent. We will create that in the next section.</p> <h3> <a name="home-component" href="#home-component" class="anchor"> </a> Home component </h3> <p>In chronological order, what we want to achieve with this component is:</p> <ul> <li>Display the <strong>UID</strong> of the currently logged in user</li> <li>Initiate a video chat</li> <li>Accept or reject a call</li> <li>And finally, be able to log out of the application.</li> </ul> <p>Let’s start adding these functionalities. To begin, open <code>./src/views/Home.vue</code> and replace its content with:</p> <p></p> <div class="highlight"><pre class="highlight plaintext"><code>// ./src/views/Home.vue <template> <div class="home"> <div id="nav"> <button class="btn btn-success" [@click]()="logoutUser">Logout</button> </div> <div class="form-group"> <div class="form-group"> <p>Welcome <b>{{ this.username}}</b>, your UID is <b>{{ this.uid }}</b> <br> Enter the receiver Id to start a chat </p> <p v- <b class="text-danger"> Receiver ID is required </b> </p> <input type="text" class="form-control" placeholder="Enter receiver UID" v- </div> <div v- <button class="btn btn-success" [@click]()="acceptCall">Accept Call</button> <button class="btn btn-success" [@click]()="rejectCall">Reject Call</button> </div> <div v- <button class="btn btn-secondary"> Ongoing Call ... </button> </div> <div v-else> <button [@click]()="startVideoChat" class="btn btn-secondary"> Start Call <span v-</span> </button> </div> </div> <div id="callScreen"></div> </div> </template> </code></pre></div> <p></p> <p>Here, we added contents to display the username and the UID of the logged in user. Next, we included an input field that will accept the UID of the receiver in order to uniquely identify such user and be able to initiate or send a call request.</p> <p>Once a call request has been received, the receiver has an option of either rejecting or accepting the call. We have included click events for this process and will define the methods next.</p> <p><strong>Get the logged in user</strong></p> <p>First, let’s retrieve the details of the currently logged in user by adding this <script> section to our code. Place the contents below within the Home.vue immediately after the closing tag of the <template> section:</p> <p></p> <div class="highlight"><pre class="highlight plaintext"><code>// ./src/views/Home.vue <script> import { CometChat } from "[@cometchat]()-pro/chat"; export default { name: "home", data() { return { username: "", uid: "", session_id: "", receiver_id: null, error: false, showSpinner: false, incomingCall: false, ongoingCall: false }; }, created() { this.getLoggedInUser(); }, methods: { getLoggedInUser() { var user = CometChat.getLoggedinUser().then( user => { this.username = user.name; this.uid = user.uid; }, error => { this.$router.push({ name: "homepage" }); } ); }, } }; </script> </code></pre></div> <p></p> <p>Basically, we imported the CometChat method, defined some properties and corresponding initial values within the data option. And finally, once the component is created, we called a method named <code>getLoggedInUser()</code> to automatically retrieve the details of a logged in user and update the view with it.</p> <p><strong>Start video chat</strong></p> <p>To initiate a chat from the app, we will add this method to the function associated with the Vue instance:</p> <p></p> <div class="highlight"><pre class="highlight plaintext"><code>// ./src/views/Home.vue <script> import { CometChat } from "[@cometchat]()-pro/chat"; export default { ... methods: { ... startVideoChat() { if (!this.receiver_id) this.error = true; this.showSpinner = true; var receiverID = this.receiver_id; var callType = CometChat.CALL_TYPE.VIDEO; var receiverType = CometChat.RECEIVER_TYPE.USER; var call = new CometChat.Call(receiverID, callType, receiverType); CometChat.initiateCall(call).then( outGoingCall => { this.showSpinner = false; console.log("Call initiated successfully:", outGoingCall); // perform action on success. Like show your calling screen. }, error => { console.log("Call initialization failed with exception:", error); } ); } } }; </script> </code></pre></div> <p></p> <p>This method will be used to specify the type of call and that of a receiver. For our application, this will be a <code>CALL_TYPE.VIDEO</code> and <code>RECEIVER_TYPE.USER</code> respectively. Finally, we passed these details through an object of call into the <code>initiateCall()</code> method from the CometChat API to send a call request.</p> <p><strong>Listen and receive calls</strong></p> <p>Once a call has been initiated, we need to listen to the call events within our application and based on the actions that will be carried out by the user to either receive or reject the call, we will call the appropriate method. To do this successfully, we need to register the CallListener listener using the <code>addCallListener()</code> method from CometChat. Add the following contents within the created() method:</p> <p></p> <div class="highlight"><pre class="highlight plaintext"><code>// ./src/views/Home.vue <script> import { CometChat } from "[@cometchat]()-pro/chat"; export default { ... created() { ... let globalContext = this; var<pre class="highlight plaintext"><code>// ./src/views/Home.vue <script> import { CometChat } from "[@cometchat]()-pro/chat"; export default { ... methods: { ... acceptCall() { let globalContext = this; this.ongoingCall = true; this.incomingCall = false; var sessionID = this.session_id; CometChat.acceptCall(sessionID).then( call => { console.log("Call accepted successfully:", call); console.log("call accepted now...."); // start the call using the startCall() method console.log(globalContext.ongoingCall); => { /* Notification received here if current ongoing call is ended.*/ console.log("Call ended:", call); globalContext.ongoingCall = false; globalContext.incomingCall = false; /* hiding/closing the call screen can be done here.*/ } }) ); }, error => { console.log("Call acceptance failed with error", error); // handle exception } ); } } }; </script> </code></pre></div> <p></p> <p>Here, once an incoming call has been accepted, we then called the startCall() method. With this in place, the sender and the recipient can start a chat. Next, add another method to reject calls:</p> <p></p> <div class="highlight"><pre class="highlight plaintext"><code>// ./src/views/Home.vue <script> import { CometChat } from "[@cometchat]()-pro/chat"; export default { ... methods: { ... rejectCall() { var sessionID = this.session_id; var globalContext = this; var status = CometChat.CALL_STATUS.REJECTED; CometChat.rejectCall(sessionID, status).then( call => { console.log("Call rejected successfully", call); globalContext.incomingCall = false; globalContext.ongoingCall = false; globalContext.<pre class="highlight plaintext"><code>// ./src/views/Home.vue <script> import { CometChat } from "[@cometchat]()-pro/chat"; export default { ... methods: { ... logoutUser() { CometChat.logout().then( success => { console.log("Logout completed successfully"); this.$router.push({ name: "homepage" }); console.log(success); }, error => { //Logout failed with exception console.log("Logout failed with exception:", { error }); } ); } } }; </script> </code></pre></div> <p></p> <p>The <code>CometChat.logout()</code> will log the user out of CometChat and also from our application.</p> <p>In case you miss anything, the complete contents of the Home.vue file can be <a href="">found here</a>.</p> <h3> <a name="update-router" href="#update-router" class="anchor"> </a> Update router </h3> <p>Earlier, when we installed Vue.js, we also selected an option of installing Vue Router with it. So open <code>./src/router.js</code> and replace the default content in it with the following:</p> <p></p> <div class="highlight"><pre class="highlight plaintext"><code>// ./src/router.js import Vue from 'vue' import Router from 'vue-router' import Home from './views/Home.vue' import Login from './views/auth/Login.vue' Vue.use(Router) export default new Router({ mode: 'history', base: process.env.BASE_URL, routes: [ { path: '/', redirect: 'login', name: 'homepage' }, { path: '/login', name: 'login', component: Login, }, { path: '/home', name: 'home', component: Home } ] }) </code></pre></div> <p></p> <p>What we have done here is to map each route within our application to the respective component that will handle the logic. This is a basic format of configuring Vue Router for a Vue.js application. Click <a href="">here</a> to find out more about Vue Router.</p> <h3> <a name="test-the-application" href="#test-the-application" class="anchor"> </a> Test the application </h3> <p>Now that you are done adding all the required logic and components, you can now run the application and test it out. Restart the application with npm run serve and navigate to <a href=""></a> to view the app.</p> <p><img src="*qLJzgX9dJ-ZnIB5U.png" alt=""></p> <p>Open the application in two separate windows and login with any two of the test users: superhero1, superhero2, superhero3, superhero4 or superhero5</p> <p><img src="*h2sDQ5lxWHDxWN6p.gif" alt=""></p> <p>Once you are able to log in from both window, enter the UID of one user and click on <strong>Start Call</strong> and once the call request is received by the other user, two new action buttons will be displayed. Click <strong>Accept Call</strong> to accept a call</p> <p><img src="*JgKRdftJdW8sg0CR.gif" alt=""></p> <p>or <strong>Reject Call</strong> to reject a call:</p> <p>If you have followed the tutorial diligently up until now, then you should have a functional voice and video chat application running on your computer.</p> <h3> <a name="conclusion" href="#conclusion" class="anchor"> </a> Conclusion </h3> <p>As seen in this tutorial, you will agree with me that it is so easy to quickly implement a video chat feature in your project. This can be easily integrated into an existing Vue.js application or a new one entirely.</p> <p>Feel free to explore the source code as linked at the beginning of this tutorial and add more features as you deem fit.</p> <p>With CometChat, you can easily extend the functionalities of this application by adding more awesome features. <a href="">Check the official documentation of CometChat Pro</a> to read more about what its got to offer.</p> <p>I do hope you found this tutorial quite helpful? Can’t wait to see what you will build with the knowledge that you have acquired here.</p> <p>Originally published on <a href="">CometChat</a></p>
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/yemiwebby/add-reliable-voice-and-video-chat-with-vue-js-and-cometchat-3nj7 | CC-MAIN-2021-39 | refinedweb | 3,428 | 54.83 |
Concept
ma.
oop concept
FILE CONCEPT
serialization concept
programming concept
Thread concept
IO concept
Multiarray concept
Explain normalization concept?
Explain normalization concept? Explain normalization concept
RANDOM ACCESS FILE CONCEPT
FILE HANDLING CONCEPT
wrapper class concept
Does javaScript have the concept level scope?
Does javaScript have the concept level scope? Does javaScript have the concept level scope
Java + Timer concept - Java Beginners
concept. Hi Friend,
Try the following code:
import java.awt.
Connected Mode and Dissconnect mode coding concept.
Connected Mode and Dissconnect mode coding concept. Tell Me about Connected Mode and Dissconnect mode coding concept. also give me some syantax
registration form using oop concept
SOCKET PROGRAMING IN THE JAVA FOR NETWORKING CONCEPT
Can you suggest any good book to learn struts
Can you suggest any good book to learn struts Can you suggest any good book to learn struts
JSP Code - JSP-Servlet
JSP Code Hi,
Do we have a datagrid equivalent concept in JSP...,
Please visit the following links:
Thanks
JSP - JSP-Servlet
jsp? Hi friend,
Here is the coding to show image in an html... concept is only the file name is stored in the data base...; Visit for more information.
Rose India ? An Exciting New Concept towards Working from Home
Rose India – An Exciting New Concept towards Working from Home
Are you searching for an opportunity to make some money while... be profitably explored by the latent professionals of various fields having good -
A JDBC Connection Pooling Concept
JSP Enumeration
JSP Enumeration
Enumeration, a concept of core java... to display the days of week using the
Enumeration in jsp. An Enumeration object
JSP Tags
JSP Tags Defined JSP Tags?
Tags are a vital concept in Java Server Pages (JSP). Below is a list of tags used in JSP. This section discusses Declaration Tag and Expression Tag in detail; syntax, usage with examples
RetController.java (do get) (my file for reference for a test.. IS LOGIC good Enough ?
RetController.java (do get) (my file for reference for a test.. IS LOGIC good Enough ? try
{
Connection conn=Create_Connection.conOpen();
System.out.println(conn);
String submit
OOPs Concept
OOPs Concept
Introduction: In this section, we will discuss...
through the method overloading concept in java.
Method
Why the Google Penguin update is good for SEO
update is good for SEO, well specially for white-hat SEO and not black-hat....
The recent update is a good sign for White Hat SEO who has been working hard
BRICK-BREAKER game concept - Java Beginners
JSP Code - JSP-Servlet
to display only 10 records per pages in jsp, then how can i achieve this concept...JSP Code Hi,
I have a problem in limiting the number of row... of JSP page
Roll No
Name
Marks
Grade
JSP - JSP-Servlet
....... hai!
man u have a concept of threads.
declare a thread
ADD ROW - JSP-Servlet
ADD ROW Hi Sir,
How to use add row and delete row concept in jsp . Hi Friend,
Please visit the following link:
Thanks
Simple clarification - JSP-Servlet
way to do so with JSP's? Whether it is conceptually right or wrong? Please... Raghavendran,
With JSPs only we cant do it. We have to use AJAX concept which uses ActionClasses, ServletRequests , Javascript etc. Usage of this concept is littlebit
thread with in the servlet..? - JSP-Servlet
thread with in the servlet Please explain me the concept of Java thread ..and it's use with Servlet.Thanks in advance Interview Question
JSP Interview Question What is JSP? Describe its concept.
Java Server Pages (JSP) is a server-side programming technology that enables the creation of dynamic web pages and applications.
A JSP is translated
Inheritance in Java 7
This tutorial describe concept of Inheritance. It is one of OOPs concept
Polymorphism in Java 7
This tutorial describe concept of Polymorphism. It is one of OOPs concept
Concept of Inheritance in Java
Concept of Inheritance in Java
Concept of Inheritance in Java is considered... class from parent class.
Let's understand the concept of inheritance in Java... the concept of Inheritance clearly you must have the idea of class and its
Hibernate Annotations
This tutorial describes the concept of annotations
Jsp Scope Variables - JSP-Interview Questions
Jsp Scope Variables what is the importance of page,session,request n application scope variables in JSP?Am not understanding where which scope.....please, explain me clearly with good java code of each variable..Thanks | http://roseindia.net/tutorialhelp/comment/19955 | CC-MAIN-2014-35 | refinedweb | 729 | 57.87 |
Hi all, I would like to check it is a bug in the SocketServer.py or error in the document about the UDPServer. The document says... handle () ...For stream services, self.request is a socket object; for datagram services, self.request is a string. .... However, I find that it doesn't really return a string when I try. I look at the SocketServer.py, it codes (at arround line 275)... def get_request(self): data, client_addr = self.socket.recvfrom(self.max_packet_size) return (data, self.socket), client_addr which actually return a turple. I wonder which one is correct and will be used in the future version, the document or the code? Any comment? Thanks, Wilson Tam | https://mail.python.org/pipermail/python-list/2000-November/037944.html | CC-MAIN-2016-30 | refinedweb | 114 | 63.96 |
Constructor is a special kind of method used to instantiate (construct) the object. It always has the same name as the class and does not have a return type, since it implicitly returns a new instance of the class.
To be accessible from another class not in its package it needs to be declared with the public access modifier. When a new instance of the MyRectangle class is created, by using the new syntax, the constructor method is called, which in the example below sets the fields to some default values.
package javaapplication19; public class JavaApplication19 { static class MyRectangle { int x, y; public MyRectangle(int a, int b){ x = a; y = b; } public int getArea() { return x * y; } } public static void main(String[] args) { MyRectangle r = new MyRectangle(10, 20); int area = r.getArea(); System.out.println(area); } } | http://codecrawl.com/2014/11/20/java-constructor/ | CC-MAIN-2017-04 | refinedweb | 138 | 53.34 |
Thank you everyone for your comments. They gave me the clue I needed. The below code is working for my situation, which transforms an existing XML by updating dao[@href]. <xsl:template <xsl:copy> <xsl:copy-of <xsl:apply-templates /> <xsl:attribute <!-- do stuff --> </xsl:attribute> </xsl:copy> </xsl:template> Nathan On Thu, Aug 15, 2013 at 11:25 AM, David Carlisle <davidc@xxxxxxxxx> wrote: > On 15/08/2013 15:41, Nathan Tallman wrote: >> >> I'd like to change the values of attribute "href", whenever it appears >> inside a "dao" element. From what I can tell, my template is correct, >> however when ever I stick anything inside xsl:attribute (replace <!-- >> do stuff here -->, nothing happens during the transformation. Am I >> missing something? >> >> <xsl:template >> <xsl:attribute >> <!-- do stuff here --> >> </xsl:attribute> >> </xsl:template> >> >> I'm using XSLT 2.0, Saxon 6.5.5. >> >> Thanks, >> Nathan > > > > In addition to the other comments re XSLT version and namespaces, a template > matching an attribute only does anything if you apply templates to that > attribute. The default processing does not do that: > > <xsl:template > ... stuff.. > <xsl:apply-templates/> > .. > </xsl:template> > > would apply templates to child nodes of <dao> but not to its attributes. > > David | http://www.oxygenxml.com/archives/xsl-list/201308/msg00057.html | CC-MAIN-2018-17 | refinedweb | 200 | 66.64 |
Re: random numbers according to user defined distribution ??
- From: Raymond Hettinger <python@xxxxxxx>
- Date: Thu, 7 Aug 2008 02:13:53 -0700 (PDT)
On Aug 6, 3:02 pm, Alex <axel.kow...@xxxxxx> wrote:
I wonder if it is possible in python to produce random numbers
according to a user defined distribution?
Unfortunately the random module does not contain the distribution I
need :-(
Sure there's a way but it won't be very efficient. Starting with an
arbitrary probability density function over some range, you can run it
through a quadrature routine to create a cumulative density function
over that range. Use random.random() to create a uniform variate x.
Then use a bisecting search to find x in the cumulative density
function over the given range.
from __future__ import division
from random import random
def integrate(f, lo, hi, steps=1000):
dx = (hi - lo) / steps
lo += dx / 2
return sum(f(i*dx + lo) * dx for i in range(steps))
def make_cdf(f, lo, hi, steps=1000):
total_area = integrate(f, lo, hi, steps)
def cdf(x):
assert lo <= x <= hi
return integrate(f, lo, x, steps) / total_area
return cdf
def bisect(target, f, lo, hi, n=20):
'Find x between lo and hi where f(x)=target'
for i in range(n):
mid = (hi + lo) / 2.0
if target < f(mid):
hi = mid
else:
lo = mid
return (hi + lo) / 2.0
def make_user_distribution(f, lo, hi, steps=1000, n=20):
cdf = make_cdf(f, lo, hi, steps)
return lambda: bisect(random(), cdf, lo, hi, n)
if __name__ == '__main__':
def linear(x):
return 3 * x - 6
lo, hi = 2, 10
r = make_user_distribution(linear, lo, hi)
for i in range(20):
print r()
--
Raymond
.
- Follow-Ups:
- References:
- Prev by Date: Re: benchmark
- Next by Date: Re: Calculate sha1 hash of a binary file
- Previous by thread: Re: random numbers according to user defined distribution ??
- Next by thread: Re: random numbers according to user defined distribution ??
- Index(es): | http://coding.derkeiler.com/Archive/Python/comp.lang.python/2008-08/msg00654.html | crawl-002 | refinedweb | 326 | 55.58 |
trace
Percentile
Interactive Tracing and Debugging of Calls to a Function or Method
A.
- Keywords
- programming, debugging
Usage
trace(what, tracer, exit, at, print, signature, where = topenv(parent.frame()), edit = FALSE) untrace(what, signature = NULL, where = topenv(parent.frame()))
tracingState(on = NULL) .doTrace(expr, msg) returnValue(default = NULL)
Arguments
- what
the name, possibly
quote()d, of a function to be traced or untraced. For
untraceor for
tracewith more than one argument, more than one name can be given in the quoted form, and the same action will be applied to each one. For “hidden” functions such as S3 methods in a namespace,
where = *typically needs to be specified as well.
- tracer
either a function or an unevaluated expression. The function will be called or the expression will be evaluated either at the beginning of the call, or before those steps in the call specified by the argument
at. See the details section.
- exit
either a
functionor an unevaluated expression. The function will be called or the expression will be evaluated on exiting the function. See the details section.
- at
optional numeric vector or list. If supplied,
tracerwill be called just before the corresponding step in the body of the function. See the details section.
If
TRUE(as per default), a descriptive line is printed before any trace expression is evaluated.
- signature
If this argument is supplied, it should be a signature for a method for function
what. In this case, the method, and not the function itself, is traced.
- edit
For complicated tracing, such as tracing within a loop inside the function, you will need to insert the desired calls by editing the body of the function. If so, supply the
editargument either as
TRUE, or as the name of the editor you want to use. Then
trace()will call
editand use the version of the function after you edit it. See the details section for additional information.
- where
where to look for the function to be traced; by default, the top-level environment of the call to
trace.
An important use of this argument is to trace functions from a package which are “hidden” or called from another package. The namespace mechanism imports the functions to be called (with the exception of functions in the base package). The functions being called are not the same objects seen from the top-level (in general, the imported packages may not even be attached). Therefore, you must ensure that the correct versions are being traced. The way to do this is to set argument
whereto a function in the namespace (or that namespace). The tracing computations will then start looking in the environment of that function (which will be the namespace of the corresponding package). (Yes, it's subtle, but the semantics here are central to how namespaces work in R.)
- on
logical; a call to the support function
tracingStatereturns
TRUEif tracing is globally turned on,
FALSEotherwise. An argument of one or the other of those values sets the state. If the tracing state is
FALSE, none of the trace actions will actually occur (used, for example, by debugging functions to shut off tracing during debugging).
- expr, msg
arguments to the support function
.doTrace, calls to which are inserted into the modified function or method:
expris the tracing action (such as a call to
browser()), and
msgis a string identifying the place where the trace action occurs.
- default
If
returnValuefinds no return value (e.g. a function exited because of an error, restart or as a result of evaluating a return from a caller function), it will return
defaultinstead.
Details an S4.)
Value.
Note.
References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
See Also
browser and
recover, the likeliest
tracing functions;
also,
quote and
substitute for
constructing general expressions.
Aliases
- trace
- untrace
- tracingState
- .doTrace
- returnValue
Examples
library(base)
# NOT RUN {) # } # NOT RUN { <!-- %% methods is loaded when needed --> # } # NOT RUN { <!-- %% if(.isMethodsDispatchOn()) { # non-simple use needs 'methods' package --> # } # NOT RUN { untrace(f) # (as it has changed f's body !) as.list(body(f)) as.list(body(f)[[3]]) # -> stop(..) is [[4]] ## Now call the browser there trace("f", quote(browser(skipCalls = 4)), at = list(c(3,4))) # } # NOT RUN { f(-1,2) # --> enters browser just before stop(..) # } # NOT RUN { ## trace a utility function, with recover so we ## can browse in the calling functions as well. trace("as.matrix", recover) ## turn off the tracing (that happened above) untrace(c("f", "as.matrix")) # } # NOT RUN { ## Useful to find how system2() is called in a higher-up function: trace(base::system2, quote(print(ls.str()))) # } # NOT RUN { ##-------- Tracing hidden functions : need 'where = *' ## ## 'where' can be a function whose environment is meant: trace(quote(ar.yw.default), where = ar) a <- ar(rnorm(100)) # "Tracing ..." untrace(quote(ar.yw.default), where = ar) ## trace() more than one function simultaneously: ## expression(E1, E2, ...) here is equivalent to ## c(quote(E1), quote(E2), quote(.*), ..) trace(expression(ar.yw, ar.yw.default), where = ar) a <- ar(rnorm(100)) # --> 2 x "Tracing ..." # and turn it off: untrace(expression(ar.yw, ar.yw.default), where = ar) # } #")) # } # NOT RUN { <!-- %%}% only if 'methods' --> # } | https://www.rdocumentation.org/packages/base/versions/3.6.2/topics/trace | CC-MAIN-2020-05 | refinedweb | 858 | 57.98 |
I have used secure string before but now I needed to protect other types of objects in memory, what is the best way to do this?
Start with the following example, taken from here:
using System; using System.Security.Cryptography; public class DataProtectionSample { // Create byte array for additional entropy when using Protect method. static byte [] s_aditionalEntropy = { 9, 8, 7, 6, 5 }; public static void Main() { // Create a simple byte array containing data to be encrypted. byte [] secret = { 0, 1, 2, 3, 4, 1, 2, 3, 4 }; //Encrypt the data. byte [] encryptedSecret = Protect( secret ); Console.WriteLine("The encrypted byte array is:"); PrintValues(encryptedSecret); // Decrypt the data and store in a byte array. byte [] originalData = Unprotect( encryptedSecret ); Console.WriteLine("{0}The original data is:", Environment.NewLine); PrintValues(originalData); } public static byte [] Protect( byte [] data ) { try { // Encrypt the data using DataProtectionScope.CurrentUser. The result can be decrypted // only by the same current user. return ProtectedData.Protect( data, s_aditionalEntropy, DataProtectionScope.CurrentUser ); } catch (CryptographicException e) { Console.WriteLine("Data was not encrypted. An error occurred."); Console.WriteLine(e.ToString()); return null; } } public static byte [] Unprotect( byte [] data ) { try { //Decrypt the data using DataProtectionScope.CurrentUser. return ProtectedData.Unprotect( data, s_aditionalEntropy, DataProtectionScope.CurrentUser ); } catch (CryptographicException e) { Console.WriteLine("Data was not decrypted. An error occurred."); Console.WriteLine(e.ToString()); return null; } } public static void PrintValues( Byte[] myArr ) { foreach ( Byte i in myArr ) { Console.Write( "\t{0}", i ); } Console.WriteLine(); } }
Use these methods in this example to protect objects held in memory, all .net objects can be represented with a byte array some with native methods to serialize and deserialize them. You could then use public get and set methods to retrieve and store the data.
The following shows how we can store a guid in a class using protected data:
public Guid secureGUID { get { return new Guid(Unprotect(_secureGUID)); } set { _secureGUID = Protect(value.ToByteArray()); } } private byte[] _secureGUID;
The other thing you will want to make sure you do is run the Array.Clear on dispose of the class, to ensure the array data has been overwritten with zeros in memory.
Hi Mike, thank you for the post it's very helpful. I have a question and hope you can reply. When I checked the dumped memory, the Guid property in the class is encrypted. But the Guid value can still be found from a "Guid" type from memory. I think that's because of the "get" method of Guid property. Am I correct? So that means if the Guid value is sensitive, we can still find it from memory, how to solve it?
Hope can get your response, thank you in advance.
-Eric | https://coderwall.com/p/vod8pa/net-securestring-but-what-about-other-data-types | CC-MAIN-2018-13 | refinedweb | 435 | 51.75 |
#include "libavcodec/avcodec.h"
#include "avformat.h"
#include "rtp.h"
Go to the source code of this file.
RTP packet contains a keyframe.
Definition at line 95 of file rtpdec.h.
Referenced by ff_rdt_parse_packet(), and rdt_parse_packet().
RTP marker bit was set for this packet.
Definition at line 96 of file rtpdec.h.
Referenced by rtp_parse_packet().
Definition at line 60 of file rtpdec.h.
Referenced by rtp_parse_mp4_au(), and rtsp_read_packet().
Structure listing useful vars to parse RTP packet payload.
Definition at line 58 of file rtpdec.c.
Referenced by av_register_all().
Definition at line 52 of file rtpdec.c.
Referenced by av_register_rdt_dynamic_payload_handlers(), and av_register_rtp_dynamic_payload_handlers().
some rtp servers assume client is dead if they don't hear from them.
.. so we send a Receiver Report to the provided ByteIO context (we don't have access to the rtcp handle from here)
Definition at line 168 of file rtpdec.c.
Referenced by rtsp_read_packet().
Return the rtp and rtcp file handles for select() usage to wait for several RTP streams at the same time.
Definition at line 305 of file rtpproto.c.
Referenced by udp_read_packet().
Return the local port used by the RTP connection.
Definition at line 293 of file rtpproto.c.
Referenced by make_setup_request(), and rtsp_cmd_setup().
Definition at line 545 of file rtpdec.c.
Referenced by 270 397 of file rtpdec.c.
Referenced by rtsp_read_packet().
Definition at line 315 of file rtpdec.c.
Referenced by rtsp_open_transport_ctx().
If no filename is given to av_open_input_file because you want to get the local port first, then you must call this function to set the remote server address.
Definition at line 57 of file rtpproto.c.
Referenced by make_setup_request().
from rtsp.c, but used by rtp dynamic protocol handlers.
from rtsp.c, but used by rtp dynamic protocol handlers.
This is broken out as a function because it is used in rtp_h264.c, which is forthcoming.
Definition at line 254 of file rtsp.c.
Referenced by parse_h264_sdp_line(), and sdp_parse_fmtp().
Definition at line 47 of file rtpdec.c.
Referenced by sdp_parse_rtpmap(). | http://ffmpeg.org/doxygen/0.5/rtpdec_8h.html | CC-MAIN-2016-50 | refinedweb | 331 | 55.4 |
Introduction: How to Make "HELLO WORLD' on a LCD Display(Arduino Nano)
This project is for beginners who just bought a 16x2 LCD display.
if you want to make a simple and small project of this LCD,then...
"HELLO WORLD" Is perfect for you!
see the schematic and build the circuit then copy paste the program on your Arduino IDE software
And then you have a simple LCD display circuit,displaying "HELLO WORLD!"
Step 1: Order Your Components:
the components required for this build are:
1xLCD Display(16x2)
1x220 ohm resistor
1xArduino Nano
1xBreadboard
1x10K Potentiometer
some jumper wires or single strand wires.
You can buy all your components from Amazon.com.
Step 2: The Schematic:
Build this small schematic...
the connections:
then.
Step 3: The Code:
The code:
#include <LiquidCrystal.h>//Don't forget to enter this library.setCursor(0, 1); // print the number of seconds since reset: lcd.print(millis() / 1000);
}
Step 4: SUCCESS!
Be the First to Share
Recommendations
2 Comments
2 years ago
I'm confused. STEP2 shows the schematic with Arduino Uno, not Nano.
Am I missing something?
4 years ago
I am very sad to say that I cannot upload instructables for some time cuz my Arduino nano got burned when I was working on my new project. | https://www.instructables.com/How-to-Make-HELLO-WORLD-on-a-LCD-DisplayArduino-Na/ | CC-MAIN-2021-10 | refinedweb | 214 | 75.61 |
In my previous post Introduction to C# Roslyn CTP, I gave a quick introduction about C# Roslyn CTP, and briefed how you could use C# as a Scripting Language using Roslyn. I also gave a quick introduction regarding parsing code using Roslyn APIs. Recommend you to read the same before starting this.
On a side note, you could also read few of my related C# vNext posts here.
In this post, we’ll re-look at few Roslyn features, based on the June 2012 CTP release – so that you can start leveraging Roslyn to write your own build tasks and pre processors. Also, note that you can compile the Syntax Debugger Visualizer sample that comes with the Roslyn CTP – Find it in the Shared folder in the CTP installation. Open the SyntaxDebuggerVisualizer project and compile the libraries, and place it in your Documents\<VisualStudioFolder>\Visualizers so that you can visualize syntax trees during debugging.
Update: Roslyn CTP September 2012 now has got released, and there are some breaking changes. I suggest you read this post about the new September 2012 CTP.
Parsing Syntax Trees
So, let us start by parsing some code. Create a new Roslyn CTP Console project in Visual Studio, and try parsing this code. Note that in the new June 2012 CTP, Roslyn has added support for Query expressions, anonymous types, iterators, Indexers, switch statements etc. A Full list of features added since Oct 2011 CTP is available here.
using System; using System.Collections.Generic; using System.Linq; using System.Reflection; using System.Reflection.Emit; using System.Text; using AForge.Genetic; using Roslyn.Compilers; using Roslyn.Compilers.CSharp; using Roslyn.Scripting; using Roslyn.Scripting.CSharp; namespace Roslyn.Console { class Program { static void Main(string[] args) { string code=@"class SimpleClass { public void SimpleMethod() { var list = new List<string>(); list.Add(""first""); list.Add(""second""); var result = from item in list where item == ""first"" select item; } }"; var tree = SyntaxTree.ParseCompilationUnit(code); }
Now, put a break point near the closing bracket of our Main method, and try bringing up the Syntax Visualizer, assuming have the Visualizer libraries copied as above.
And check out the Syntax Tree formed by Roslyn. The most interesting part is how the LINQ Expressions are parsed. So, this should provide a useful way to load Query Expressions dynamically (for implementing dynamic filters etc), once full version is available.
Walking The Syntax Tree
Now, let us quickly explore how you can walk the syntax tree. You can simply inherit your own SyntaxWalker in the Roslyn API, that internally implements the Visitor pattern for visiting all nodes in the tree. Let us write a quick ConsoleDumpWalker that can dump the above syntax tree to a console, and a quick extension method so that we can use the walker to dump any syntax tree to console.
public static class SyntaxTreeExtensions { public static void Dump(this SyntaxTree tree) { var writer = new ConsoleDumpWalker(); writer.Visit(tree.GetRoot()); } class ConsoleDumpWalker : SyntaxWalker { public override void Visit(SyntaxNode node) { int padding = node.Ancestors().Count(); //To identify leaf nodes vs nodes with children string prepend = node.ChildNodes().Count() > 0 ? "[-]" : "[.]"; //Get the type of the node string line = new String(' ', padding) + prepend + " " + node.GetType().ToString(); //Write the line System.Console.WriteLine(line); base.Visit(node); } } }And now, you can try dumping your syntax tree with the above Dump extension method. For example, dumping the above code we parsed in step 1 should show the syntax tree we found earlier in the Debug syntax visualizer.
var tree = SyntaxTree.ParseCompilationUnit(code); tree.Dump();
Modifying the Syntax Tree
Modifying Syntax Trees are equally easy. You can implement your own syntax re-writers, by inheriting from the SyntaxRewriter class. Here is a pretty naïve SyntaxRewriter, that appends an ‘I’ to the beginning of all interfaces that doesn’t start with an I in their name. In an actual scenario, you should also change the related implementations, but this is just for an example. So, here is our InterfaceRenameRewriter
//Our rewriter public class InterfaceRenameRewriter : SyntaxRewriter { public override SyntaxToken VisitToken(SyntaxToken token) { //If the token is an identifier name, and it's //parent is an interface declaration if (token.Kind == SyntaxKind.IdentifierToken && token.Parent.Kind==SyntaxKind.InterfaceDeclaration) { //If the name doesn't start with I - bluntly fix it. if (!token.GetText().StartsWith("I")) { return Syntax.Identifier("I" + token.GetText()); } } return base.VisitToken(token); } }And you can test our InterfaceRenameRewriter on some sample code.
string code = @"interface SomeInterface { //Some Simple Method public void SomeMethod(); }"; var root = SyntaxTree.ParseCompilationUnit(code).GetRoot(); var commentRewriter = new InterfaceRenameRewriter(); var newRoot = commentRewriter.Visit(root);
And if you inspect newRoot, you’ll find that the interface name got renamed as intended. Here we go.
You can experiment with more overrides available in the SyntaxRewriter, as you understood how to modify syntax trees. Another way to modify syntax trees is directly replacing an old node in the root with a new node, but that is quite simple.
So, in this post we explored a bit about Roslyn Syntax Trees, and we saw how to start bending your code to write your own build tasks and pre-processors- Oh yes, all of us will take a little bit of time and practice to really bend it like Anders and Eric – but you just kicked the ball. Happy coding. Checkout my other C# posts as well
| http://www.amazedsaint.com/2012/07/bending-your-code-like-anders-with-c.html | CC-MAIN-2020-05 | refinedweb | 884 | 57.27 |
Wikiversity:Colloquium/archives/December 2007
From Wikiversity
Open Education
This might be of interest to Wikiversity participants: The Cape Town Open Education Declaration.
--JWSchmidt 16:54, 1 December 2007 (UTC)
Audio registration
Here I have just record a recording help for registration on Wikiversity:
--Juan 21:00, 3 December 2007 (UTC)
- Related link: Wiki Campus Radio. --JWS 23:11, 3 December 2007 (UTC)
IRC meeting: What is Wikiversity?
If you are interested in participating, please coordinate with betawikiversity:User:ZaDiak and list yourself as interested at betawikiversity:Wikiversity:IRC meeting:What is Wikiversity?.
I think there are two matters "in the air" for this discussion. First, the request for a Greek language Wikiversity has been "conditionally approved" but is now held up...see m:Requests for new languages/Wikiversity Greek and related discussions such as this. Also, there was some discussion on the mail list, see: How do Wikiversity and Wikibooks relate to each other?.
--JWS 23:13, 4 December 2007 (UTC)
- Will an agenda be posted before the meeting? --mikeu 00:43, 5 December 2007 (UTC)
- agenda...I suggest you press ZaDiak on that. I think a rumor started that for some languages, people are rejecting Jimbo's decision that Wikiversity-type stuff has to be kicked out of Wikibooks....if so, then why bother making new Wikiversity websites? The request for a Greek Wikiversity got caught in this cross-fire. --JWSchmidt 01:24, 5 December 2007 (UTC)
- Ive added some brief comments to the talk page - regarding the time, it seems that most people are available around 23:00 UTC - should we start it then, and try and keep it to 1-1.5 hours? Cormaggio talk 12:55, 6 December 2007 (UTC)
- I just updated the main meeting page to say that the start time is 23:00 UTC this Saturday, 8 December 2007. It will be in IRC freenode channel wikiversity. --JWSchmidt 16:15, 6 December 2007 (UTC)
- I agree completely with JWS's eloquent post on the Wikiversity mailing list. Further, in my view, it is nonsensical to be talking about revising the main URL yet again now tht Wikiversity is nearing critical mass and a few Wikibookeans would like to poach some web traffic. I may not be around much as I am at my sister's for Christmas and my nephew and I are working on various Lunar Boom Town pieces and operational scenarios. I would far rather be discussing closed and open loop systems and chemistry with him with a few towards augmenting Lunar Boom Town than debating non management of non wiki universities with the non management and undefined leaders of our mobocracy. Back with larger footprint in Jan or Feb for reading club with Daanchr. Merry Christmas to all! user:mirwin
wiki in education (spectroscopy)
I found an interesting wiki website called The Science of Spectroscopy. It has examples from chemistry, medicine and art history. A description of the project is here. Uses a CC Attribution-Noncommercial-Share Alike license. Of particular interest are some of the links at Using wiki in education. Sadly, it does not appear to be very active. The last month of recent changes shows a lot of spam and crap.--mikeu 00:58, 5 December 2007 (UTC)
- I've previously suggested that we should run an adoption process for small educational websites.--JWSchmidt 01:09, 5 December 2007 (UTC)
- Yes, I was thinking we might want to import some of the material. Is the license at that site compatible with here?--mikeu 01:22, 5 December 2007 (UTC)
- The GFDL and any non-commercial-only license are not compatible. "Adoption" would involve telling the other website that we welcome their educational efforts/goals at Wikiversity. Their contributors would have to decide if they are comfortable with a license that allows commercial re-use. --JWSchmidt 03:34, 5 December 2007 (UTC)
- We have Wikiversity:Outreach and Wikiversity:Organizing Wikiversity#Outreach.
--JWS 15:12, 6 December 2007 (UTC)
use public libraries
There are two great sources for college courses given by exemplary professors, wherein one may self educate without cost. Through your public library access the general collection and go to keywords 'modern scholar' and teaching company'. Let's dialouge these courses. --Jurahd 17:29, 6 December 2007 (UTC)
- If you have links to online resources you can add them to Hunter-gatherers project or more specific Wikiversity pages for each topic. --JWSchmidt 18:35, 6 December 2007 (UTC)
Voice Acting Courses
--robin 17:10, 3 December 2007 (UTC) --robin 17:10, 3 December 2007 (UTC) Does Wikiversity have courses in Voiceover/Voice Acting? I am very interested in this field and would like to learn more.
- The Little Space Games learning projects and related game development learning trails offer an opportunity to publish practice sessions in voices. Specifically the Space Traffic Control scripts could use better voice pieces for sound track and game piece designers to use in synthesizing prototypes and final pieces for publication. has references to many other Wikiversity courses related to film that Robert Elliot facilitates. The most applicable to your stated desire may be I hope you find these or other links accessbile through them useful. Hopefully I will hear your efforts around cisLunarFreighter or Lunar Boom Town sometime soon. Do not be bashful about asking for specific scripts or providing an outline of the part you would prefer to try first. Sometimes scriptwriters are bashful and prefer to work towards products they know a voice actor will be interested in recording. Mirwin 17:05, 12 December 2007 (UTC)
- There used to be some small scripts here en:Developing_Scripts_and_Case_Studies_for_cisLunarFreighter/Radio_Communications/STC that requested voice talent readings. I guess someone moved or deleted them. You can either ask a custodian for assistance in finding them, use the history page to locate where they were, use the search engine to try to find them, or request someone provide some new scripts that might be used in the games Space Traffic Control and/or cisLunarFreighter when the game designers or producers get active again. Sorry for the confusion. Seems natural to wikis that when sections go inactive for a while someone comes along with a delete key to "help" "clean" it up so as not to scare away tidy minded newcomers. Mirwin 17:17, 12 December 2007 (UTC)
Time for a change: Featured Content
I'd like to propose that we think about changing the Featured Content on the front page ... think of it as a Xmas present for someone. This comes after realizing that the front page hasn't been edited in over 6 months!!! My current suggestion is for Bloom Clock but please make alternate suggestions. There are a few old candidates mentioned here: Wikiversity:Featured. Countrymike 20:47, 3 December 2007 (UTC)
- I think the whole main page could so with a make-over. I agree that the Bloom Clock project would be a good choice to feature on the main page. --JWSchmidt 23:08, 3 December 2007 (UTC)
Outreach
In the next few month I'm going to be working on some outreach programs. The Ladd Observatory just received a grant to purchase equipment to do programs in the local public schools. For instance, a Brown student would bring a solar telescope to a high school and teach the kids about the Sun. I'm also thinking about ways to work Wikiversity into what I'm doing. One idea that I'd like to try is to recruit some teachers in my area to come here and collaborate with me on creating lessons and activities related to astronomy. Another is to create some online lessons (for instance, see Observational astronomy/Planning) where teachers and students can download astronomy data and analyze the info in the classroom. Of course, anything created here could be used for other types of learning activity, but this is the focus of what I'll be working on.--mikeu 15:07, 11 December 2007 (UTC)
- I wonder if we can start to define a general strategy for Wikiversity Outreach. "recruit some teachers" <-- a first step would just be to make the Wikiversity main page more inviting to teachers.....I don't think we even have a direct link from the main page to Wikiversity:School and university projects and that page has never been developed as a resource to help teachers come to Wikiversity and make good use of wiki resources. We should also have some kind of workshop for crafting short invitations that could be sent to publications read by educators. Short notices and invitations that say basically what is said at the top of this thread would help spread the word that Wikiversity exists and help build collaborative editing groups for specific topic areas.--JWSchmidt 17:04, 11 December 2007 (UTC)
- I will try to get a solar furnace prototype done for Lunar Boom Town if my nephew is interested that consists of creating a solar oven from cardboad and foil. As Lunar Boom Town develops the prototype could be moved to a grade appropriate classroom/extra credit project that science teachers could link to as a resource while the Lunar Boom Town moves on to larger entrepreneurial appropriate furnace facilities that can melt and cast lunar regolith, glass, and aluminum into valuable spacecraft components. Some discussion of local solar fluxes and interference (atmosphere, intruding planetary masses,etc.) will no doubt also result. Mirwin 16:49, 12 December 2007 (UTC)
- Mike, this is really fantastic - I probably didn't give that impression on IRC yesterday (I was distracted!). I think John's proposal is also excellent - we need to document ways that Wikiversity is being used by classrooms/practitioners/learners, and to use these experiences to endeavour to become more useful. Perhaps Wikiversity:School and university projects could exist as part of a portal which would include those tutorials on how to create learning resources/communities - and would also include workspaces to back up the kind of activity that Mike outlines here - ie pages for teachers etc to figure out how to use Wikiversity (something which I think we're all still figuring out)... Cormaggio talk 16:01, 13 December 2007 (UTC)
New Main Page
I've created a candidate new main page at Main_Page/Draft_version_0.4. Comment is invited on the talk page. McCormack 19:16, 13 December 2007 (UTC)
- I like it.--mikeu 20:22, 13 December 2007 (UTC)
- It looks good, except for the lack of featured content, which I assume would be added at some future point? Also, the "Did you know?" box was an idea I liked, and it doesn't seem to have made its way into your version. --Luai lashire 01:54, 14 December 2007 (UTC)
- Did you know box: the "did-you-know" box wasn't really a box at all; theoretically it should have had a switch command which rotated content from one day to the next; but in fact it was a fixed comment about using v: prefixes. It was just dressed up to look like a switching "did-you-know" box. I took out the comment and added it into the development section, so the information is still there. Of course, we could go back to the switching-box idea, but then we need 6 more ideas for tips - one for each day of the week! McCormack 08:02, 14 December 2007 (UTC)
- I also would like to see the Did you know? on the main page. If for now it has too less content, then how about including now already in the design a space for it ? I am sure it will grow over time. ----Erkan Yilmaz (Wikiversity:Chat, wiki blog) 16:54, 14 December 2007 (UTC)
- Featured content: to my knowledge, there have only been two suggestions for featured content so far. The bloom block project, and the film project. The problem is that I don't have an adequate piece of text describing the bloom clock project (yet), and the film project template has aesthetic problems. I will probably work on this today. McCormack 08:03, 14 December 2007 (UTC)
History of the main page
The current main page was created in a large group effort at Wikiversity's foundation in August 2006. Since then it has been changed very little. So little, in fact, that many of the links and comments were still related to the founding of Wikiversity (e.g. comments about motto and logo contests; formation of policy). There have been calls for a new page, including calls from JWSchmidt, for quite some time now. There has also been ongoing, intermittent discussion and drafting. Earlier drafts: Main Page/Draft version 0.1, Main Page/Draft version 0.2, Main Page/Draft version 0.3. There is pretty well universal consensus we need a new page. I think it's something that is now so overdue that we have to fast-track the issue. McCormack 07:53, 14 December 2007 (UTC)
General comments about the proposed new main page
The proposed new main page is primarily a design change rather than a content change. It draws on the designs of the non-English-language wikiversities (French, German, Italian, Spanish), and pulls the English Wikiversity slightly more into harmony with the more recently designed pages of these other wikiversities. Particularly in draft version 0.3 there were also proposals for content changes (i.e. changes of wording, particularly with respect to the introductory text). These have been adopted into the current proposed new main page pretty well without change, except for formatting. Additionally I have removed some links which seem to be related to outdated events. Some areas of uncertainty were research and featured content. McCormack 07:53, 14 December 2007 (UTC)
The wikipedia Main Page throws a lot of contents into your face so that you immediately start reading it. The wikiversity main page keeps the contents at least one or two clicks away. Featured content is good. And featured schools and topics and all. Hillgentleman|Talk 07:56, 14 December 2007 (UTC)
- Hi, Hillgentleman. Are you talking about the old (current) one, or the new (proposed) one? Can you suggest changes to the proposed page? McCormack 07:58, 14 December 2007 (UTC)
- Both, really. Just open two browsers and compare the main pages of wikipedia and wikiversity. I find that wikipedia is more enticing. (But, of course wikipedia has years of accumulated content to back it up...) You can learn something new right away. Hillgentleman|Talk 08:08, 14 December 2007 (UTC)
- So what would you additionally like to see? I'm looking for suggestions. More featured content? More colour? More images? Please be concrete! Can you suggest some new featured content? McCormack 08:13, 14 December 2007 (UTC)
- Put more contents (and information about study groups, like Erkan's) on the front page, and not just links to them, so that people can learn things or find something to do immediately. Hillgentleman|Talk 08:30, 14 December 2007 (UTC)
- Can you give me a link to Erkan's study group? McCormack 08:38, 14 December 2007 (UTC)
- I am not a participant of their irc discussions. But two useful links are user talk:Erkan Yilmaz, user talk:daanschr and de:benutzer:Erkan Yilmaz.Hillgentleman|Talk 14:26, 14 December 2007 (UTC)
- I assume Hillgentleman has the Reading groups in mind, which started recently. ----Erkan Yilmaz (Wikiversity:Chat, wiki blog) 16:58, 14 December 2007 (UTC)
- I think so. It would be great if it is possible to sneak in sometimes. :-) Hillgentleman|Talk 03:34, 15 December 2007 (UTC)
Further ideas about the future of the main page
WV's static front page has been one of its less desirable features in the past, and if we are to promote the growth of WV, one thing we must do is the same as any other major website: show plenty of activity on the main page by changing its content (or parts of that content) frequently. I would suggest that in future, instead of going through lengthy approvals processes for every change, we have a small main page task force, mandated with the task of weekly or even daily changes to the main page. Only major changes, such as design overhauls, would be subject to a wider consensus creation. Of course, the main page task force would be accountable, with a talk page & c., and the community could call a halt to their activities at any time if it got too out of hand. What do people thing about this? McCormack 12:21, 14 December 2007 (UTC)
- Two ways to have "dynamic content" for the main page are to rotate featured content and to have something like a "going on" section. --JWSchmidt 13:52, 14 December 2007 (UTC)
- But in addition to this we need real people making real changes in response to WV's growth and activity. McCormack 14:49, 14 December 2007 (UTC)
- I think it would work well to have small teams responsible for small chunks of the main page intended to be dynamic .... such as what's new (Usable texts at Wikibooks?), featured content, editing tips, in the news (Maybe some Wikinews people would like to manage this space for us.), etc. Mirwin 20:53, 14 December 2007 (UTC)
The proposed featured content
As the featured content will be difficult to see all in one go once the switch is activated, I'm posting the initial featured content here. This was put together during an IRC discussion between 3 of us. But rather hurried. People may like to comment or suggest additional items. Perhaps descriptions could be improved? McCormack 14:48, 14 December 2007 (UTC)
- Observational. Part of the Astronomy Project.
- Bloom clock.
- Networked learning - Assisting you in developing online communication and internet learning skills. The project is based on the principles of networked learning where individuals establish an online identity and formulate relationships with other people and information to communicate and develop knowledge. Part of the Teaching and Learning Online project.
- Historical introduction to philosophy - Do our senses and thoughts reveal reality or just a shadow of reality? What is philosophy? Topics covered include: philosophy of religion - philosophy of mind - epistemology - ethics - free will / determinism - metaphysics - logic.
- Learning the basics of filmmaking - A well designed course on writing scripts, storyboards, and other aspects of narrative Filmmaking. The purpose of this course is to learn how to make a short motion picture starting with pre-production. As you walk through all the steps of making this movie, you will prepare yourself for entering film school.
- Technical writing - This course offers Level 1 and Level 2 courses in technical writing, plus a workshop on writing system requirement specifications. We're constantly updating and restructuring our content, and welcome your active participation in building and improving this learning community.
-!
Stub types - Stub sorting
Please see Wikiversity talk:Stub for possible way of stub sorting. --Gbaor 13:54, 14 December 2007 (UTC)
Did you know?
An idea that sort of developed from some idle discussion on IRC was to create a template that includes random useful tidbits, hints, facts, and other useful information about Wikiversity and wikis that people may not be aware of. This could then be included on the main page, user pages and on other pages. Being bold I decided to go ahead and give creating such a template a try. Here's what it looks like:
What do people think of the idea? Should this template be placed on the main page? So far only 2 entries exist, and I could use everyone's help to create more entries. --dark
lama 20:56, 9 December 2007 (UTC)
- I'm going to start some work space for developing this idea. --JWS 17:13, 10 December 2007 (UTC)
- Looks like an excellent idea to me. Many commercial games and software applications use this technique so they must think it works well with new and returning users. user:mirwin
I have a question: is it also possible to take randomly from (all or certain) WV-pages one sentence and display this in a(nother) box like this ? The idea is similar to the "Random page", but instead we bring excerpts to the user, which on interest could be clicked (and the user goes to the begin of that page). Rotating interval could be fast, perhaps every 30 secs a change. Of course then there exists the possibility that one of the displayed sentence is not "good" (whatever good may be). But I think this is also not bad, because then it is seen and can be optimized, if wished. What is your opinion on this ? ----Erkan Yilmaz (Wikiversity:Chat, wiki blog) 17:06, 14 December 2007 (UTC)
- This sounds like the "gadget" for preview of linked pages..for most linked pages the text at the start of the page is shown....so ya, I think all the code needed must exist. --JWSchmidt 19:39, 15 December 2007 (UTC)
- Well, let's see if we can convince someone to initiate this. ----Erkan Yilmaz (Wikiversity:Chat, wiki blog) 20:03, 15 December 2007 (UTC)
The blue thing
Reading black text on a blue background isn't a pleasant experience for some of us, or at least one of us (me). After a long day working in the blazing sunlight, it's actually hurting my eyes a bit to read black text on a darkish blue background. In fact, there's a big conversation goin on above, but I'm having a hard time reading it!
Please at least arrange an easy preferences option for easy-on-the-eyes reading. I'm going to attempt to revert the js change now in the meantime. --SB_Johnny | talk 22:47, 14 December 2007 (UTC)
- Much better white now :). In the future, I think we should carefully vette changes to the appearance that lower contrast between text and background... it's one thing for short comments, but the section on EW/WV had a lot of text that was embedded in various shades of blue, which got hard on the eyes after a few minutes of reading.
- This brings up another point I've been thinking of for a while though, namely that the edit window is also hard on the eyes because of the teeny-tiny text (not for eyes yet, but probably for others and eventually for me I'd guess, and larger text in the edit window certianly wouldn't bother me). In fact, the teeny-tiny text does cause me problems when it comes to counting colons or brackets in edit windows.--SB_Johnny | talk 13:17, 15 December 2007 (UTC)
interwiki image copy
I just discovered that there is a tool for copying images from Wikipedia to Commons called Move-to-commons assistant. It will also copy images from Wikibooks and Source. Then you can use a link from here to Commons to include the image in a learning project. --mikeu 02:33, 15 December 2007 (UTC)
- I just learned from darkcode that there is also a tool to find out which wikis a commons image is used on: CheckUsage
- For a learning project at Observational astronomy/Supernova I uploaded about 20 images to Category:Supernova Images here on wv. I didn't think that the bulk of those would have much general interest. But, I did leave a note at the commons page to see also the images here in case anyone finds them of use.--mikeu 13:28, 15 December 2007 (UTC)
- Hmmm. those seem to be uploaded as GFDL-SELF... you took those images yourself? We can probably use them as fair use here, but the licensing of the source document would be required before we could load those to commons (they'd be speedy deleted rather quickly as they stand now). --SB_Johnny | talk 13:38, 15 December 2007 (UTC)
- Yes, I took the images myself. What is wrong with choosing GFDL? That is the same license that appears at the bottom of the main page. Why should it be considered fair use? I clearly state on the upload page that "I, the copyright holder of this work, hereby publish it under the following license: GFDL" Commons also allows me to choose GFDL for uploading my own work. --mikeu 14:16, 15 December 2007 (UTC)
- Cool! Mikeu, can you get closeups of the moon ... like lunar landing sites? Spacecraft data is available through USGS but it would be nice to have locally generated top down views of lunar sites for Lunar Boom Town. Maybe only a couple with a description of how created so some of our participants can create additional views if they can access appropriate combination of telescopes and computer communication formats and links. 8) Mirwin 23:14, 16 December 2007 (UTC)
- Never mind. I will place links to at strategic locations in the land planning processes. Be good exercise for someone to figure out what resolution we need for planning purposes. 8)
Extending the search functionality ?
In de.WV we talked about: to add in the Wikiversity search a functionality to also search in WP. Perhaps with a checkbox the user can activate "search in xyz" (where xyz can be: WP, WB,...) to the already existing WV-search. The search results could be displayed in a new window or in a frame inside WV ? Well my question would be, who could be contacted to implement such a feature ? ----Erkan Yilmaz (Wikiversity:Chat, wiki blog) 14:01, 15 December 2007 (UTC)
- I heard from the French WV chat (merci dcrochet) that it could work with a script on tools.wikimedia to search in the WP SQL Datenbank, which is separate from the WV database (access needed). Found also this [2]. ----Erkan Yilmaz (Wikiversity:Chat, wiki blog) 14:53, 15 December 2007 (UTC)
- Easier yet would be to just add search links on the mediawiki page. I don't think we can really do what the toolserver does on this project, but maybe. --SB_Johnny | talk 15:25, 15 December 2007 (UTC)
- It seems there is some html implementation possible, see info at de.WP ----Erkan Yilmaz (Wikiversity:Chat, wiki blog) 20:07, 15 December 2007 (UTC)
Wikiversity is dead. Long live WikiEducator. (?)
The provocative title of this section reflects a statement in a post by a member of the advisory board to the Wikimedia Foundation, posted to an influential and public UNESCO forum yesterday. Quote: "In projects like Wikipedia and WikiEducator we have...". This reflects trends in the long-term dialogue and strategy to associate WikiEducator with Wikipedia, eclipsing Wikiversity, and implying that WikiEducator is really the project which assumes the mantle and prestige that belongs to Wikiversity. None of this is ever directly stated - Wikiversity is merely "eclipsed" and "forgotten" in a strategy of manipulating the dialogue of open education, while WikiEducator slips conveniently into the empty space.
In a series of short posts, I would like to collaborate with others at Wikiversity in establishing what WikiEducator is, whether the eclipsing of Wikiversity is justified, and how we should respond on the UNESCO forum.
- Hi Cormaggio -- for the record, I have copied a full copy of the text I posted on the UNESCO forum below. This was in response to a growing debate on the closed approaches adopted by the group who authored the Cape Town Open Education declaration. WikiEducator has started an open discussion on the Cape Town Declaration over here. I will post a more detailed response under a separate heading to clarify the wide range of issues raised in this discussion. --Mackiwg 05:13, 14 December 2007 (UTC)
Wayne wrote:
"I too have expressed concerns that an Open Education Declaration was developed in a somewhat closed fashion. Developing the declaration in an open wiki, for example, would have gone a long way to promote inclusiveness and wider opportunities for debate and refinement. Lets hope that the authors of the declaration will learn from the experience.
You make an important distinction between the differences between free software and free content (free used here as in freedom of speech <smile>.). DRM and proprietary file formats are clearly a risk to the sustainable growth of the free content movement. As you point out, the challenge for free content is that it is not the medium itself and can be locked behind closed formats or formats which are not editable.
In projects like Wikipedia and WikiEducator we have adopted the free cultural works definition as our mechanism to deal with the differences between free software and free content.
See:.
This requires us to ensure that content is made available under free file formats. Hence, you will not be able to upload a MSWord document on WikiEducator. The important point being that our meaning of free content is derived and founded on the essential freedoms as opposed to an arbitrary license choice.
Working for development, in particular the millennium development goals associated with the eradication of poverty WE does not support the NC restriction. We do not wish to curtail the rights of an individual to earn a living. So for example, an entrepreneur might find ways in which to add services in widening the distribution channels of free content. We encourage and support this kind of initiative. (An attempt to work towards Red Hat equivalents of free content.) Legally -- all modifications to the content must be shared back with the community. Granted - this will be difficult to monitor -- but I prefer a democratic system where we presume innocence until proven guilty. (Unlike DRM - which presumes guilt before any transgressions in copying material are made!). That said -- the orginal WikiEducator materials will always be freely available in free file formats which are editable.
- The context of the quote is pretty well irrelevant. It's the style of dialogue that is being adopted over a long period of time. In "side-comments" somehow WikiEducator becomes privileged and associated with greater projects, while Wikiversity becomes eclipsed. To see the strategy, one really has to have read a huge number of documents being produced about Open Educational Resources. We are not talking about specific messages here, but the way in which these messages are being tilted over the course of time. Mackiwg tried to change the subject here again to the Cape Town Declaration, but this is not the issue here. The issue is his long term commitment to Wikimedia projects and the possible conflicts of interest. I do not feel comfortable with people represented on Wikimedia boards actively competing against Wikimedia projects elsewhere, and as a simple foot-soldier, I wonder how we can be sure that Wikimedia's governing bodies will support Wikiversity 100% when these conflicts of interest exist. Think about the quoted section: "At Wikipedia and WikiEducator we..." - there is no common "we" behind these projects; an illusion of endorsement or association was being created. McCormack 05:42, 14 December 2007 (UTC)
- "how we can be sure that Wikimedia's governing bodies will support Wikiversity 100% when these conflicts of interest exist" <-- Of course, the answer is that we cannot. We have to continue to push for open governance of the Wikimedia Foundation and when we have the chance, we have to vote for Board of Trustee members who are actually members (foot-soldiers) of the Wikimedia community and who do not have conflicts of interest. When Board members who have conflicts of interest ignore their obligation to admit their conflicts of interest, the Wikimedia community should launch recall efforts and have the offenders removed from the Board of Trustees. The advisory board is another matter. The Wikimedia community has to hold the Board of Trustees to their obligation to make use of the advisory board as a resource to aid the Foundation's efforts and goals. By its nature, an advisory board will include people who have their main interests outside of Wikimedia, but such people can constructively work with the Wikimedia Foundation towards common goals. --JWSchmidt 06:39, 14 December 2007 (UTC)
- "In projects like Wikipedia and WikiEducator we have adopted the free cultural works definition" <-- I think there is a "we" here, exactly as stated. The "we" is those projects with a deep interest in free culture. It would be more accurate to say "Wikimedia" rather than "Wikipedia", but it is common practice to say "Wikipedia" when talking to people who have a hard time recognizing the distinction. --JWSchmidt 06:51, 14 December 2007 (UTC)
- The Board of Trustees are elected by the WMF community -- Advisory Board members do not have voting rights or any say on WMF community decisions. Our role is to support and promote the attainment of WMFs goals and have no more say than the foot soldiers. In helping WMF achieve its goals, COL's WikiEducator, has assisted with the development of Wiki ==> pdf technology which all WMF projects will be able to implement in the near future. We have invested real dollars in free software which will add considerable value to all WMF projects - an excellent example of the free knowledge community collaborating in widening access to the sum of all human knowledge -- especially learners who may not have access to the Internet. See WMF's press release - I look forward to seeing this technology implemented on Wikversity. --Mackiwg 07:28, 14 December 2007 (UTC)
- I don't like the word footsoldier. I have no intention to have to fight a war on the side of Wikiversity or Wikimedia versus some other group of people.--Daanschr 11:24, 14 December 2007 (UTC)
- I think the term "footsoldier" was being used to make a distinction between people who are just editors of the Wikimedia wiki projects (the "footsoldiers") and people who have decision-making powers on the Board of Trustees (the "generals"). It is a metaphorical use of a military term in a non-military context. --JWSchmidt 15:38, 14 December 2007 (UTC)
rather long comments, sorry!
- I suppose it would be nice to see the "post by a member of the advisory board", but independent of that, we can still discuss the relationship between Wikiversity and Wikieducator. "WikiEducator is really the project which assumes the mantle and prestige that belongs to Wikiversity" <-- What "belongs to Wikiversity" is what the Wikiversity community has built and if I had to put that into a short phrase maybe, "a chance to explore new ways to use wiki technology to support learning". In particular, the Wikiversity community has established itself as the Wikimedia project where we are free to put communities of learners at center stage and where we have won some freedom to move beyond the traditional Wikimedia "nothing original" doctrine. A central interest in community and freedom to explore a wide variety of content is a strong foundation upon which to build an exploration of ways to use wiki technology to support learning.
- As a Wikimedia Foundation project, Wikiversity is constrained in terms of what we do by the larger Wikimedia community. The Foundation was born from the success of Wikipedia and Jimbo defined the Foundation in terms of using wiki technology to collect and develop educational content. The Wikiversity mission was designed with two main parts that can be called the "conservative" part and the "radical" part. The "conservative" part is very similar to the stated goal of WikiEducator: "a free version of the education curriculum", "free content for use in schools, polytechnics, universities, vocational education institutions". The "conservative" part of the mission fits best with the "Wikipedia model" for creating and hosting educational content. The more radical part of the Wikiversity mission is to host learning projects and communities. This focus on wiki-based communities of learners is "radical" from the perspective of Wikipedia because the Wikipedia project has a long history of putting first the static encyclopedia content while putting less importance on the parallel issue of fostering collaborative learning communities. The Wikiversity emphasis on being "a place to come and interact and help each other figure out how to learn things" (source) does not really grow naturally from the past success of Wikipedia and is an important challenge for Wikiversity.
- In practical terms, this means that the Wikiversity community finds itself experimenting with many different approaches such as reading groups as possible ways to build communities of like-minded learners. The development of learning communities at Wikiversity is naturally linked to our Wikimedia Foundation sister projects in many ways. Wikiversity learning resources hyperlink to resources at the other projects and many Wikiversity participants are Wikimedians who edit and improve resources at Wikipedia and Wikibooks. Also, the sister projects are an important source of Wikiversity participants. Many people come to edit at Wikiversity after first running into the limitations of other projects like Wikipedia. A natural process is for a start to be made on a Wikiversity learning resource and then make links from Wikipedia to Wikiversity so as to inform WIkipedia users about what is available at Wikiversity. In these ways, Wikiversity is created and grows as an integral part of the Wikimedia family.
- Sometimes I wonder what would have happened if when Wikipedia launched the Encyclopædia Britannica had launched a wiki ("Encyclopædia Britannica Wiki") and invited the world to edit Encyclopædia Britannica pages. It might be useful to think about the relationship between Wikiversity and WikiEducator in these terms. There are professionals who get paid to develop curriculum, just like there are professionals who get paid to create encyclopedia content. So unlike the case of Wikipedia where there was no "Encyclopædia Britannica Wiki", Wikiversity is developing in parallel with WikiEducator, a wiki that is oriented towards professional educators. Does it make sense to think of this situation in terms of WikiEducator taking something away from Wikiversity? I do not understand how that can be. Wikiversity and WikiEducator are two complementary education-oriented wikis. Wikiversity has natural orientation towards making use of existing Wikimedia content and attracting Wikimedians to edit at Wikiversity. WikiEducator is naturally oriented towards attracting professional educators and addressing their traditional concerns. Since the content of both projects is copyleft and everything is just a hyperlink away, WikiEducator, Wikiversity and other free-content education-oriented wikis will all develop in parallel in a cooperative way.
- "eclipsing Wikiversity" <-- There is nothing that says the Wikipedia model (and the modifications of that model we are working on here at WIkiversity) of educational content development is the best. Wikiversity is trying to extend the Wikipedia model in new ways...what we are doing is experimental and who knows how successful our approach will be? It may be that Wikiversity will always be in the shadow of WikiEducator. It is natural for WikiEducator to associate its name with that of Wikipedia (and the proven success of Wikipedia) in order to compete for grant funding. There are vast amounts of money spent on curriculum development and if WikiEducator can channel some of that money towards the support of copyleft learning resources then that is a good thing for the Wikimedia goal of a world in which every single human being can freely share in the sum of all knowledge. Hopefully the new versions of the GFDL and the CC-by-sa licenses will be come fully compatible and all barriers between WikiEducator and Wikiversity will collapse, allowing both to build on its strengths while jointly contributing to the process by which wiki is applied as a tool to support learning. --JWS 15:46, 12 December 2007 (UTC)
Merging Wikiversity with WikiEducator
- Perhaps merging with WikiEducator is an option? I surved through their pages and discovered that WikiEducator and Wikiversity are very much alike. It isn't right that WikiEducator only likes to recruit educators, it is open for everybody just like Wikiversity.--Daanschr 20:02, 12 December 2007 (UTC)
- It appears to be open. But what institutions guarantee this? It would be a mistake to simply assume that because a site looks like Wikiversity, its people/managers function in the same manner. Most people who create websites maintain an intrusively high level of control; Wikimedia projects like Wikiversity are very unusual in their openness and inclusivity, but we take this for granted and forget it. McCormack 09:26, 13 December 2007 (UTC)
- It should be noted that high government officials from Oceania are involved in WikiEducator. That is definitely something different from Wikiversity. That could explain the prominence of WikiEducator in UNESCO and the denegrating remarks of the member of the board of the Wikimedia Foundation. Wikiversity is a collective of amateurs without any power in the world. WikiEducator seems to be different, if governments with representation in the UN are involved.--Daanschr 20:07, 12 December 2007 (UTC)
- The involvement of these people would only guarantee openness and inclusivity if they actually understood and monitored the day to day operations of the site, or acted as a board of appeal accessible to all users. Don't forget that the founders of a website can be well-connected and make a show of this, without it actually making the site any better. McCormack 09:26, 13 December 2007 (UTC)
- Some months ago, I did a study/comparison of Wikiversity and WikiEducator to find out what the difference was and what WikiEducator was. This study followed on from a discussion with Wayne Mackintosh which didn't really answer the question. At the time, the results of my study of the actual content of WikiEducator appeared to surprise him. The results of this comparison may be a little out of date now, but indicate a fundamental incompatibility and the inadvisability of merger.
- Administrative and structural issues. Some of the custodians at Wikiversity have done an excellent job of categorisation and generally ensuring good structure. By comparison, actual content pages at WikiEducator were rarely categorised or linked in to portals or the main page, making navigation a nightmare. Importing WikiEducator content would create significant administrative difficulties. In addition, the spam defences at WikiEducator (particularly as regards spam-bot creation of user accounts) do not seem to have been good, also resulting in a bit of a mess there.
- Licencing issues. WV uses GFDL, while WE uses CC-BY-SA. Although some people believe these to be broadly compatible, and some users dual-licence their contributions, there are nevertheless legal issues with any merger.
- Cultural issues. The mindset at Wikiversity follows the Jimmy Wales mould, with emphasis on tolerance and inclusivity. This is the mindset which has created a civil society throughout Wikimedia projects and which promotes a community with near democratic values and qualities. The mindset at Wikieducator is very much the opposite to Jimmy Wales, focussed on control. Rules and policies protecting users from admins are absent and content is pro-actively corrected. A civil society at Wikieducator will not emerge if the level of central control is retained. A merger would risk a shift in the culture of Wikiversity, which would be a loss.
- Point-of-view issues. In some texts relating to WikiEducator (but not all), its founders are very open about their use of WikiEducator to propagate a specific view of digital education. Wikiversity has avoided any such global adoption of a perspective, and rightly so. A merger would be accompanied with serious issues of dealing with POV on imported pages. However I can't imagine that WikiEducator would want to abandon its POV, or that its leadership would want to step down and relinquish control, which would be essential if WikiEducator merged into a WikiMedia project.
- Administrative and structural issues: Well, i've constantly disagreed with the structure of WV as replicating just the kind of structures that we should have been moving away from, so in this sense the job may have been a bit of a disservice to us in the long run. I also find the structuring to be sometimes more confusing than worthwhile -- there are more pages on WV about structure than there are about almost any other topic. Less is better. Countrymike 20:36, 13 December 2007 (UTC)
- "..focused on control..." <-- Can you back this up? I don't think this is true at all. Countrymike 20:36, 13 December 2007 (UTC)
- Actually the so called "leadership" (of which I guess I'm a part of being on the Interim Advisory Board) has been having many discussions lately about how any kind of long term advisory board may be established, what the terms are, etc. Originally such a board was 'appointed' due to such a small community or users being avialable with the stated goal that when users reached 2500 this would all be reviewed. We have also been talking about how to sustain WE beyond the capabilities of the Commonwealth of Learning with consideration being taken towards partnering with international agencies like UNESCO, which is probably why this discussion emerged in the first place. McCormack, please don't try to paint this in a brush whereby WE is seen to be a bunch of control freaks, thats just simply not the case or the ethos of that project.Countrymike 20:36, 13 December 2007 (UTC)
- I took a quick look on Moodle and discovered how enormously large this organization is. I don't think that WikiEducator will stand a chance versus Moodle, just like Citizendium and Encyclopedia Britannica are very small compared to Wikipedia. Wikiversity is a small organization as well, our largeness is dependent on our relationship with the Wikimedia Foundation. If Wikiversity is replaced by WikiEducator by the board of the Wikimedia Foundation, than it will be the virtual end of Wikiversity.
- I agree with the lack of structure on WikiEducator. It should be noted though that many of the topics and schools dramatically lack in activity and are thereby not much better than WikiEducator.
- I am in favour of openness and a civil society. Not only because i like inclusivity of views and persons, but also because an organization on the internet can't become notorious (usefull) without being open for as much of users and views as possible.--Daanschr 09:52, 13 December 2007 (UTC)
- Moodle is an entirely different kettle of fish :-) The most important thing is that Moodle is an open source project with multiple installations, no central installation and predominantly closed content. The closed content nature of Moodle is antithetical to what both WV and WE do. You might also like to read Help:Quiz/Wikiversity compared to Moodle (something I wrote quite a while back). McCormack 10:08, 13 December 2007 (UTC)
Moodle is a tool for school teachers, primarily. It could be seen as a community with nearly 2 million teachers who communicate with eachother on several forums. Who knows in what ways Moodle can be changed in the future. I am working in a computerroom and this school lets its students surfe the internet to any pages, so the use of media doesn't have to be limited to a single organization like Moodle or Wikiversity.
What should be adressed is the kind of people that will use Wikiversity or other websites dedicated to learning. A school is something different from persons like me who want to do something in their leisure time. Organizations, like governments and companies could use Wikiversity to communicate with common civilians. The kind of learning activity is important. Is it an activity dedicated to improving society, or is it just a way to spend a leisure time, or is it a way to learn something which is needed for a carreer or to organize things. I am in favour of openness to experimentation, because their are so many purposes and groups of participants possible.--Daanschr 10:44, 13 December 2007 (UTC)
Size comparison
Daanschr made some observations about size, which seem to be in need of correction. I just compared the stats of the two sites.
- Overall pages: Wikiversity is over three times larger.
- Page edits: Wikiversity is over twice as busy.
- Users: Wikiversity has 8 times the users.
I don't have access to site visitors and page hits, but my feeling is that Wikiversity is probably visited way over 10 times more than WikiEducator, perhaps far higher.
- Well, you know what they say ... bigger is not always better. Countrymike 00:27, 14 December 2007 (UTC)
If folk at Wikiversity want more detailed figures on WikiEducator statistics, you can take a look here. We're a very small project -- but growing rapidly. Our strategy for the coming year is to scale up content development building on our existing foundations. --Mackiwg 05:18, 14 December 2007 (UTC)
Response from a WikiEducator
- Well, I'm pretty much user number 2 on WikiEducator (I was there when we started it on a box laying around the office in Auckland, New Zealand) and having worked extensively on the site for quite some time (bit longer than i've been on WV) I'm always open for questions/dialogue whatever on the project. Personally, I think that the two projects have very different missions - WVs being quite astutely articulated by JWS in the above. I've never heard or read specifically of any strategy to align WE with the Wikimedia foundation although there are definite relations: the founder of WE is on the WM Board of Advisors and the site is hosted and supported by Erik Moeller who is on the Board of Trustees .. so make of that what you will. I've always thought that there could/should/would be greater synergies between the two projects -- but has proved easier said than done. For me the distinction mostly is that WE is for the most part about generating content, OERs, whatever and developing capacity. That's why you see all those high ranking educationalists from the Pacific -- we've been teaching them how to edit on MediaWiki/WikiEducator so that they can go back to the islands and get other people interested and trained. WV is about Learning Projects; there isn't anything close to a Bloom clock or a reading group and in fact when I started the reading group on Illich I purposefully chose WV over WE because it seems to suit WV more. One advantage that WE does have over WV is the amount of control over the code that it has; over the last couple of months WE has experimented with Liquid Threads and today is trialling out a pretty cool print function, enabling pages to be collected and printed. Its nice; its not for a Learning Project, but for a long structure piece of wiki content, its nice. My .02c worth would be for WV to push the Learning Project angle as much as possible, forget the content development and get away from all this School this, Topic this, etc that's crowding up the place. Countrymike 23:12, 12 December 2007 (UTC)
- In a tiny step towards closer collaboration I've had an interwiki link created from WikiEducator to Wikiversity. Can now use [[v:blah]] on WikiEducator to point towards Wikiversity pages. Would be wonderful if we could reciprocate. Countrymike 07:20, 13 December 2007 (UTC)
- I've never done this myself, but I think you can go to m:Interwiki map and propose an interwiki prefix for a new website such as the WikiEducator website. --JWSchmidt 16:57, 13 December 2007 (UTC)
- "School this, Topic this" <-- We could certainly have a discussion about "School this, Topic this"....there have been many such discussions in the past. In the "Wikipedia model", there is "actual wiki content" (encyclopedia articles) and there are also "wikiprojects". Wikipedia wikiprojects are meta-level pages where editors collaborate to create and manage content in particular topic areas. When Wikiversity was gestating within the Wikibooks project a significant number of pages were called "School of Foo" and "Department of Bar". Many of those pages were like wish lists for future Wikiversity content....kind of like a college course catalog. When the Wikiversity website launched and we were copying the old pages from Wikibooks, the school pages became pages in the "School:"namespace and the department pages became pages in the "Topic:" namespace. These pages remain as Wikiversity content development projects where editors can collaborate to plan and develop learning resources. Of course, if you find no use for "School this, Topic this" you never have to go to those pages. I suspect that most people at Wikipedia never bother with wikiprojects. Its one of those things that is there for the people who find them useful. The system of portal pages should give content browsers access to all the actual learning resources at Wikiversity. "push the Learning Project angle as much as possible, forget the content development" <-- This makes no sense to me because my belief is that the learning projects are important Wikiversity content. So saying "forget content development" means "don't create learning projects". --JWSchmidt 00:52, 13 December 2007 (UTC)
- I was too quick with my opinion about WikiEducator, most of what i said didn't make any sense. It is good that you clarified WikiEducator for us, Countrymike.
- It seems that we have a conflict between opposing ideas. The kind of organization that John wants, would overlap WikiEducator in many ways, while Countrymike wants to make a clear distinction between WikiEducator and Wikiversity.
- I don't mind the distinction between schools and topics. That doesn't have to be different from Countrymike's view of Wikiversity. The good thing about a distinction between schools is that people who are interested in a certain genre of learning and not in others can become a member of a learning group dedicated to that genre, which could be organized in a school. Another benefit of a distinction in schools is that experts have a clear place to go to with their expertise, in case Wikiversity becomes a large organization. Of course, not every learning group has to be organized in schools.
- During the meeting on Wikiversity chat, the desire was expressed to be open to experimentation, something unique within the Wikimedia Foundation. I hope this can be the case, and that we will wait for the kind of results of these experiments in the coming decades. At the moment, i am experimenting with reading groups, but these will take years and decades to develop.Daanschr 08:12, 13 December 2007 (UTC)
There's long been a debate about whether Wikiversity and Wikieducator could/should be merged - eg Leigh Blackall posted to his blog, which sparked responses from Teemu Leinonen [3] and myself. And in that discussion, as above, the issues of culture are raised; both WV and WE are influenced by their organisational backdrop - Wikimedia and Commonwealth of Learning, respectively. Both projects have compatible goals, technology, and (soon) licences - but the issue of culture is possibly the more difficult to see merging. In any case, I certainly have no problem in having both projects coexist, given that they will be focusing on different activities (or carrying them out in different ways). In terms of, as Brent suggests, having Wikiversity focus exclusively on learning projects as opposed to content (which, incidentally Teemu also suggests), I don't agree - since there is clearly educational content that does not belong on Wikimedia projects, and which therefore must be allowed to be developed here (even if there are many other OER projects out there). Similarly, I don't believe we should be intentionally limiting ourselves from supporting any particular type of user of an educational space - Wikiversity is surely open to professionals (not just to Wikimedia editors), even if there may be other sites for them to choose from. But to specifically adress McCormack's point, I don't feel that the prestige associated with Wikipedia "belongs" to Wikiversity - I think this is something that we will obviously benefit from, but which we must also earn on our own terms and merit. Cormaggio talk 16:55, 13 December 2007 (UTC)
- Cormaggio - good point. Reminds me of early debates within the Open Source movement where people finally got sick of the endless "I hate microsoft" monkey show and started suggesting that all those whiny slashdotters spent more time making linux compete on its own terms rather than bash the competition. WV will rise and fall based upon the quality of its contents. I think my "distinction" between WE and WV is not meant to be absolute (we should welcome some forms of content), but I guess I see one of the possible distinctions that WV can leverage at this point is focusing on learners and learning activities rather than teachers and content or curriculums. WE seems mostly about teachers (as is the recently discussed Cape Town Declaration), developing curriculum that can be printed out for face-to-face, not particularly something that WV has ever aspired to. Countrymike 20:23, 13 December 2007 (UTC)
- Well, there seem to be some French teachers of Wikiversity, who use Wikiversity for their class.
- The whole debate here is a bit confusing. There are a lot of claims made by several persons on what Wikiversity is and WikiEducator and these claims don't seem to fit together.Daanschr 21:46, 13 December 2007 (UTC)
- That's an interesting observation - and I've been interested to read how people are characterising the scope of one project in contrast with the other - something that also happens between Wikiversity and Wikibooks. It's not always the best way of defining the project, even though it does - always - make sense to clarify project goals and processes. It often helps here to go back to the approved Wikiversity project proposal (essentially our "founding document") - though Wikiversity has developed in strength and complexity since then. Cormaggio talk 12:11, 14 December 2007 (UTC)
A response from an institutional man
What an excellent discussion! --Leighblackall 21:42, 13 December 2007 (UTC) Thanks to CountryMike (Brent) for pointing me into it. And thanks Cormaggio for mentioning the post and discussion from my blog. I'm someone who works in an educational institution and am trying to build a critical awareness of FOS software, content and practices. It is a bit of a hell ride and I sometimes long for the freedom of freelance. In 2006 I started using Wikiversity to build content for the teacher training we do. Pages for blogging, RSS, wikis, podcasting, video, tagging etc. Almost all these things are very foreign ideas to the teachers I work for :( I started adding links to our institution's support, formal courses and qualifications and started to get a little flack from a Wikiversity user. At the time I was feeling very sensitive to criticism because I get it daily from people in my institution who are reluctant to consider FOS software, content and practices in their teaching. I constantly need to demonstrate worth and prove it. When criticism started coming in from Wikiversity I saw the writing on the wall.. this was not going to be sustainable. So I needed a space that would be supportive in every way of an institution trying to make steps towards FOS ethics and exchange. Wikieducator became that space. But all along I wish to be part of the Wikiversity project, and the Wikimedia foundation. I posted to my blog the desire for WV and WE to merge and form Wikilearner, but I think I'd like to retract that. I agree with CountryMike and Teemu. Let Wikiversity become Wikilearner. But what is to happen to Wikieducator? As the stats suggest, it will putter along while the majority gravitate to WV. Wikieducator plays an important role to the Institutions. It offers support for the Institutional culture, but more importantly it facilitates Institutional people into the more free and freelance world of WV - and that's a good thing. Eventually, I hope to be working a lot more in WV (that will hopefully become more of a Wikilearner) but I can't do that until the people I work for are ready to see that their content is not as important as their network and the learning communities that may become resources for their students to tap into. So Wikieducator is the interim (and it is already radical enough). As the people I work for become more comfortable with MediaWiki technology, they will start to engage with Wikimedia Foundation projects more. We already have 3 staff members who are writing the Anatomy and Physiology of Animals text book in Wikibooks! You see, as our teachers become as familiar and enthusiastic for the free world as you already are, you will see that free content will become an everyday thing and you will have lost your competitive edge.. we will start to need learning communities a whole lot more. I only hope that the freedom politics that is the more ugly side of the Wikimedia foundation generally will not stifle the growth of community. Many thanks for your thought provoking discussion, I hope I have added something of worth, and I look forward to the day when this institutionalised man may be free with the rest of you. --Leighblackall 21:42, 13 December 2007 (UTC)
- A school doesn't have to be a real school. Internet will change the whole meaning of learning. A school on Wikiversity can't be the same as a school in the face-to-face world.
- What do you mean with freedom politics? I have only written and discussed articles and didn't belong to the political organization of Wikipedia. The only place where i entered discussing bureaucratic issues was here on Wikiversity.--Daanschr 21:54, 13 December 2007 (UTC)
- "started to get a little flack from a Wikiversity user.....when criticism started coming in from Wikiversity I saw the writing on the wall.. this was not going to be sustainable." <-- This hit me like a brick between the eyes because I have a habit of telling people that Wikiversity is open to all kinds of learning experiments and approaches I recently made the rather sweeping statement that as far as I knew Wikiversity had never turned away someone's contribution of a learning resource. Is this the "flack"? --JWSchmidt 04:56, 14 December 2007 (UTC)
- Yes, that was the flack - but it developed into an email discussion between WiseWoman, Leigh and myself. I seem to remember we patched it up in the end, and that Leigh didn't seem to be left too aggrieved - I take his point that it motivated him to look for alternatives, but I'm kinda disappointed Leigh saw (sees?) it as "writing on the wall". (Leigh, I'll respond to your great comment elsewhere - just a clarification here.) Cormaggio talk 09:54, 14 December 2007 (UTC)
- Hi there Cormaggio. by "writing on the wall" I was talking about myself, my own energy levels. The flack from a Wikiversity user was justified in my view today, and I respect the "civil society", user generated, egalitarian status of Wikiversity. The writing on the wall was my own sanity as I juggled criticism from colleagues met with criticism from Wikiversity. Basically, WV was just a bit too radical for me (and the Institution) at the time. My post here says that that time will pass.. eventually.. the culture of the institutions will change, and come to acknowledge and respect that which Wikipedia et al has achieved... --Leighblackall 10:40, 14 December 2007 (UTC)
- Thanks for the clarification, Leigh - I can see into your mind and situation better now. :-) I think it's very interesting the way you've framed this discussion for yourself in terms of your being an "institutional man" (which brings with it another set of organisational/social/political constraints). I also very much appreciate your forward-looking perspective, acknowledging that open/free learning communities will become more important as open/free content begins to proliferate. ;-) (This isn't a "dichotomy", as Hillgentleman and JWSchmidt point out - but rather an appreciation that there are different educational processes to be facilitated.) In fact, I asked a question about this topic during the recent OpenLearn conference, and it seemed that the funding at the moment is towards the generation of content, but that the support for this open/free content (ie open/free learning communities) isn't really a huge feature on the OER community's radar (yet). Cormaggio talk 11:09, 14 December 2007 (UTC)
Statistics can be very misleading. Since I have enough content at Lunar Boom Town to begin playing and testing scenarios with my first grade nephew I have been rather idle about the Wikiveristy site. After the Christmas season is over I may get real active keying into Lunar Boom Town and cisLunarFreighter and Space Traffic Control what I have learned from playing various scenarios out with him. Personally I think it is not advisable to be advocating major changes or merges at either WikiEducator or Wikiversity since both sites have now spent years collecting interest towards critical mass. Better to be a bit patient and harvest some benefits of having developed viable communities at both sites around their respective cultures and target or emergent markets. Mirwin 23:18, 13 December 2007 (UTC)
A response from Wayne @ WikiEducator
I picked up on this discussion from WikiEducator's del.icio.us feed -- it's great to live in a connected world. I'd like to clarify a few points:
- I founded WikiEducator and posted my first edit on 13 February 2006. I set up Wikieducator shortly before taking up my current position as Education Specialist (eLearning and ICT policy) at COL. You can read up about our early history on Terra Incognita
- The WikiEducator's infrastructure is funded by the Commonwealth of Learning, an international agency dedicated to encourage the development and sharing of open learning and distance education knowledge, resources and technologies.
- I am a member of the WMF Advisory Board. WikiEducator is an independent project and not part of the WMF projects.
- WikiEducator works collaboratively with the free knowledge community in developing free content for education. We recently funded the development of Wiki ==> pdf functionality. See WMF's press release today. This technology is released as open source software which Wikiversity will be able to implement in support of their work. I hope that Wikiversity will assist us in testing the technology. WikiEducator offered to be the gunea pig for testing the early releases of this technology so that the large projects like Wikiversity and Wikibooks would not have to go through the pains associated with the early bugs of new software.
- WikiEducator is not competing with Wikiversity, Wikibooks or any other wiki project in the educational sphere. We are working together with the free knowledge community in widening access to the sum of all human knowledge. It's a big task and the more folk working on this -- the sooner we will achieve our objectives.
- The WikiEducator community have not discussed mergers with any of the WMF projects. User:McCormack's assertions that Wikiversity is dead. Long live WikiEducator. are unfounded. User:McCormack's assertions "that WikiEducator is really the project which assumes the mantle and prestige that belongs to Wikiversity" is UNTRUE. Prestige is something which is earned -- not commanded.
- I do think that it would be a worthwhile exercise for WikiVersity, Wikibooks and Wikieducator to convene some time in the near future to compare experiences and to see how our different approaches can contribute to the vision of the free knowledge community.
Hope this helps. --Mackiwg 06:02, 14 December 2007 (UTC)
- Clarifying your misattributions: my views are precisely the opposite of what you suggest. I suggest that WikiEducator has neither the prestige nor anything else that is derived from Wikipedia (so we agree). As you say, prestige has to be earned. That is why I object to even using "WikiEducator" in the same breath as "Wikipedia" (here we disagree). One way in which prestige can be earnt is through the establishment of a civil society, including institutional structures which prevent administrative abuse. Wikipedia has these in abundance. It is one of its greatest achievements, even if those structures are sometimes criticised. The best thing about Wikiversity is that it is linked into these structures of a civil, near-democratic society, and this is why, for me, the future is Wikiversity. Unfortunately WikiEducator does not have a civil society (yet), nor do I see it likely that one can form until the leadership strategies change. When a leadership style is too strong, it stifles democracy and becomes blind to the potential of its abuse. This is a universal problem with over-strong leadership, so it should hardly be a surprise to you if WikiEducator suffers from this problem - it's nothing personal. I think that exceptionally strong leaders have to create very concrete self-restraining mechanisms in order to protect the democratic structures around them. McCormack 08:37, 14 December 2007 (UTC)
- Further correction: the title of this section (Wikiversity is dead. Long live WikiEducator) which I created represents the opposite of my views. The title parodies the way in which WikiEducator seems to be marketing itself elsewhere. As a parody, it does, of course, state the matter in a provocative fashion that stimulates debate. The problem I see is that WikiEducator is attempting to "place" itself strategically where Wikiversity in fact is: right next to Wikipedia as a Wikimedia project. McCormack 08:37, 14 December 2007 (UTC)
- I withdraw from this kind of discussions here. I frankly do not know what this is all about and what's the need for all the fuss. For me learning on Wikiversity is mainly linked to leisure time and fun. Perhaps we can come up with some usefull concepts for the outside world. I am highly scepticle about the capabilities to dramatically change schooling and i don't see the need for it. I have the impression that you both believe in progress. I regard progress as something silly. Everything which is new becomes dull and old in the long run. For me, day to day communication with humans is the main reason to live. Not some squabling about freedom or schooling.--Daanschr 13:01, 14 December 2007 (UTC)
- "Have fun" is an important part of the culture of Wikiversity. Many people participate at Wikiversity because they can have fun exploring topics that are of great personal interest. "dramatically change schooling" <-- I do not see this as part of the Wikiversity mission, but I think it is fair to say that Wikipedia has broadened the concept of "encyclopedia" and I think wiki technology will also create new learning opportunities. To the extent that new sources of information and information sharing are being fashioned by tools like wiki, it is not unreasonable to speculate about how these advances might make possible some dramatic changes in schooling.....particularly in learning niches where conventional schooling has never been strongly established. "day to day communication with humans is the main reason to live" <-- Some of us in the Wikiversity community are trying to learn how to facilitate the growth of online learning communities where participants can come together and make use of new technologies to promote new forms of "day to day communication with humans". We do not need to frame this process in term of "progress" but I think we can seek useful metrics such as "efficiency". It is not as much fun when multiple independent communities of wiki editors duplicate their efforts at isolated websites. It is more fun when like-minded learners can find each other an collaborate efficiently. "what's the need for all the fuss" <-- Some people who are dedicated to using wiki to support learning ask if it makes sense for a Trustee of the Wikimedia Foundation to adopt a negative attitude towards Wikiversity while supporting WikiEducator. "squabling about freedom" <-- I think many Wikimedians are enthusiastic about the Foundation's mission, but there is room for disagreement about some of the details of how to facilitate that mission. It is understandable that Wikimedia is a magnet for some people who are free software crusaders. This has implications for Wikiversity when we are unable to fully use available computer technology because some new technologies are not free. Do we put learning first or do we put the crusade for free software first? If "squabbles" over such issues distress you, by all means, ignore them and just have fun editing. --JWSchmidt 16:20, 14 December 2007 (UTC)
- "when we are available computer technology because some new technologies are not free. Do we put learning first or do we put the crusade for free software first?" <-- Yea, you can always try to solve the problems in your own way, whatever the constraints are; if the constraints become a burden, you call the developers, or learn to be one yourself :-). Whether one should put learning first or free software first - here is an interesting and clear difference of wikiversity from the rest of wikimedia projects: it is to host learning communities and therefore it is not portable. There cannot be a mirror of wikiversity. Anybody can duplicate the contents but not the community. Hillgentleman|Talk 03:31, 15 December 2007 (UTC)
The dialogue and ideology of freedom
I quote from Wayne (above) about his vision of cooperation for Wikiversity, and his idea that Wikiversity should see how it "can contribute to the vision of the free knowledge community". For those who are new to Wayne's philosophy and way of speaking, "free", in his view, is always used in the idealogically loaded sense of libre. It is not an option, for his way of thinking, to question the truth of this very specific view of freedom. When Wayne states as a goal for Wikiversity that it should contribute to the "free knowledge community", it is bringing us closer to a world in which both the membership and editorial content of Wikiversity should be orientated towards a particularly ideological end. Wikiversity's current culture is founded on openness and inclusivity. A reorientation towards the libre knowledge community is precisely the cultural shift I warned about above in an earlier posting. Newcomers might want to express this cultural shift as a "loss of freedoms", and this is where the language trick becomes insidious: because the meaning of "freedom" has already been hijacked by libre-philosophy, it is difficult for non-libre-thinkers to express their commitments to things like the ancient freedom of association and freedom of speech in their original meanings. In the many meanings of freedom built up in philosophical and political thinking of the last 3000 years, we are talking, however, about a loss of freedom. McCormack 08:58, 14 December 2007 (UTC)
- I'm a bit confused here, McCormack - Wikiversity as a Wikimedia project is an explicitly ideological project: to make educational resources free (and this "free" has always explicitly incorporated the "libre" connotation, as well as lack of cost). It's true Wikiversity is based on openness and inclusivity (probably more so than any other WMF project), but there are clear boundaries of this inclusivity - and that would include content that is copyright or not free enough, in the same way that we would not include propoganda or allow uncivil behaviour (though in many cases, the 'limits' still need to be drawn, and will always be based in dialogue). Cormaggio talk 10:08, 14 December 2007 (UTC)
- A free learning process can include copyrighted materials including textbooks available from libraries or materials published online for free use or at a cost. Naturally our preference is free libre materials accessible to everyone via the internet. My personal preferences include free internet access for everyone on the planet but that will be a while coming. (If you think that is economically infeasible consider the cost of the current Iraqui occupation.[[4]] Others pragmatically look to libre cdroms to spread libre materials which are not "free" but must be financed by somebody with cold hard cash. The only limit that I see as useful to draw is that Wikiversity servers do not host non-free materials or materials so dependent on specific external non free materials as to be considered advertising spam. Mirwin 17:13, 16 December 2007 (UTC)
- I don't understand the selling of libre material. There is already an abundant of knowledge for sale. What does libre material have to add?--Daanschr 18:45, 16 December 2007 (UTC)
- The fact that you can turn around and redistribute it as is or modify it and redistribute the modified versions. Consider a cdrom with a video game. If it is propriety you can play it or give it away or resell it once. If it is free (libre) (say you bought it at a flea market for $3.00USD or downloaded from internet and burned yourself) then you can give it away as many times as you can afford to burn it on cd, you can burn cds and sell them at the same flea market where you bought it for less than $3 USD (hopefully more than price of burning CD slug and energy), you can loan it to a friend who wishes to burn copy or go into business remastering, then there are all permutations of how you can modify the data objects on it and then given away those modifications as long as you meet license requirements for attribution of previous authors, access to source code allowing modification, etc. i.e. passing along the freedoms you received from previous authors to new consumers. Mirwin 04:12, 18 December 2007 (UTC)
content and community
- Re:Leigh Blackall: (two screens above).
- I agree with Hillgentleman. There is a false dichotomy being imagined when people talk about content or community. Some learners use wiki technology to collaborate and create wiki content.....they learn by editing Wikiversity. Other learners use that content to explore their learning goals. We encourage everyone to join this creative loop, click the edit button and participate as part of active learning groups and projects. There is a dynamic positive feedback process by which content supports the community and the community creates content. Of course, Wikiversity does not have to re-create the wheel.....Wikiversity learning resources can always link to other resources outside of Wikiversity and many Wikiversity participants move freely between multiple websites as appropriate to their personal interests as well as the specialties and strengths of various websites. --JWSchmidt 07:09, 14 December 2007 (UTC)
- There are two ways to improve accessability: a good portal (Main Page) and a good content structure. Since wikiversity's goal is to be a place where anybody can come and learn anything (using contents on-site or off-site, on-line or off-line), which is a wide and not very specific goal, it is difficult to have a one-page summary.
- On the other hand, there are many ways to improve the content structure. Schools, Topics, and Dynamic page lists are very useful. There are a lot of interesting ideas in . Hillgentleman|Talk 07:05, 14 December 2007 (UTC)
- Still the main page is important; it should be optimized to guide the most people to do a little something useful. People like to browse wikipedia because they come away knowing a little more - even if it is just something trivial. Wikiversity should deepen this process in certain areas for those who want it. Hillgentleman|Talk 07:32, 14 December 2007 (UTC)
Neither-both here nor-and there
While it is of course deeply frustrating when a foundation Board member somehow forgets about the existence of an entire set of WikiMedia projects, it is in some ways understandable that he would want to highlight a large project that from the start has connections to real-world foundations and organizations. Neither project has really figured out a sensible way to make more of these connections, at least not yet. From the "about" page on WE, it looks like our missions do indeed overlap, but they've got a headstart due to their associations with CoL, and we have a headstart due to our status as a Wikimedia Foundation project. To their advantage/disadvantage, this means that they have a final "customer" who can guide them in designing their "product". To our advantage/disadvantage, we don't know who our "customers" are, and often aren't sure how our "product" will be used.
It's nice to hear input from our compatriots/rivals though, especially their views on the uniqueness of our learning community structure. I think the new main page and portal will help us a lot in explaining ourselves to the foundation and our fellow Wikimedians, but to build learning communities we might need to be a bit more proactive with outreach, rather than just leaving out the main page as an invitation and hoping people will happen to find it. We could outreach to a number of types of organizations, including:
- Universities (brick and mortar), schools, and other institutions of education
- Student groups
- Hobby or avocational groups
- Non-profit groups
- Business groups
- Corporations
- Labor unions
- Small and large businesses
- Religious groups
- Web forums and other online communities
I think so far we've mostly tried to reach out to other wikimedians (primarily wikibookians and wikipedians), but it might be the case that the members of those communities have already found the thing they want to do (otherwise they wouldn't be there, and thus wouldn't be on those projects waiting for someone to reach out). Universities are going to look at us with some degree of skepticism for now (as will most schools). Student groups usually like to have beer with their meetings. Non-profit groups are often overtasked already, but if we could at least in part be providing a web service for them, they might really like that.
Businesses are going to do what's to their advantage as businesses. If our collaborative learning and research projects can help them educate their employees or otherwise improve their growth potential, they will happily participate. Small business owners and independents may be able to use our project as continuing education and improvement, as well as networking.
Religious groups tend to be formed of people with a variety of interests outside of their religion, and making them feel welcome to organize (but not prostelytize) those interests on Wikiversity might help us grow our learning communities exponentially. Web forums and other online communities would probably be hit-or-miss, since like the other wikimedians they might already have found their niche.
The question is, how do we reach out? --SB_Johnny | talk 12:54, 15 December 2007 (UTC)
- "The question is, how do we reach out?" <-- This was identified as a key issue back before Wikiversity launched (for example, see some early discussions at Wikiversity talk:Moving Wikiversity forward). I think we should update Wikiversity:Community Portal to be more of a community workshop for efforts aimed at building collaborative learning groups at Wikiversity. We need a community archive of strategies and experiments for attracting and retaining participants. --JWSchmidt 20:05, 15 December 2007 (UTC)
- I think the most effective outreach is one on one with friends and relatives. Unfortunately until we hit critical mass with something for everyone this will be a very low percentage success rate. I think we need to function somewhat as an internet beacon where successful groups attract attention from search engines and other related online groups because they have something unique and useful to people with those interests. The sandbox sever project has potential to bring in a lot of interest once it is operational to the point we wish to advertise it at sites such as advogato or slash/dot where a lot of young hackers hangout. Likewise I think cisLunarFreighter could bring in a lot of game modders once it is an operational game ... probably two to three years down the road at current levels of interest and effort. Bloom Clock project sounds ready to advertise to gardening or bird watching groups. Mirwin 17:01, 16 December 2007 (UTC)
coures available with streams
hi
i wanted a list of courses available in different fields of every field available even at higher level basically i am looking for a course for my younger brother for his future studies. so that i can show him the different fields available and their future studies and what will he become in future
i could find a long list of different course no connected in seeries like after compiting 10th chose a field arts, commerse, science and then compliting 12th in that striem different coure or field will open to which he can apply to and how many years he has to spent to complite that course etc
if there is already such a link please tell me where it is
if possible could there be a list of all course available so that i can also show that to others and give them proper guidence
i am new to wikiversity
thank u for all your support
- It looks like School:Future studies is ready for anyone who is interested in the topic and ready to start in on the "ground floor" of developing learning resources for the topic at Wikiversity. Maybe you could start by doing a web search for established future studies programs at universities. --JWSchmidt 02:36, 18 December 2007 (UTC)
- Some of the Engineering specialties have pretty extensive lists of courses based upon college catelogs. Wikiversity is still in early stages of development. Personally I expect within 5 years you be able to find extensive material and learning projects here related to almost anything that can be studied at physical schools with rare exceptions of very specialized leading areas of knowledge. This prediction is based upon watching Wikipedia explode in volume, usage, and utility as the internet accessible public discovered how to participate and use it effectively. In my opinion, a good way to teach your brother how to use Wikiversity effectively is to encourage him to participate in areas of personal interest to him. He should get enough structure and discipline at his local physical schools to provide a generic general studies capability. Allow him to pursue personal interests at Wikiversity and he may create the very information you are currently searching for to use as guidance for him. If he does not then someone else will. Probably they will work on it together. Ultimately each individual must be responsible for the stream of links they pursue. Many people will provide generic starting points or learning trails that will improve over time with extensive usage. Mirwin 04:29, 18 December 2007 (UTC)
New research projects: need both etiquette and technical advice
As the bloom clock has finally come together to become less high maintenance (thanks in large part to the technical saavy of darklama), I'd like to start several new research projects. While these projects are agriculture-related (and hence won't make sense to most of you), they can help to define what sorts of research projects we will accept in the future, and how we handle outreach.
Here's what I have in mind:
- 1. An Entomology Clock: This one shouldn't be controversial, as it more or less just follows the example of the bloom clock ("entomology" is just the study of creepy-crawly things with many legs -- but butterflies too!). For this one I really just need some technical advice, because part of the clock will involve what the little buggers are feeding on, and I'd like it to somehow connect with the bloom clock and perhaps other research projects).
- 2. Goat de-wormers suitable for certified organic production: this sounds (and is) very esoteric, but controling intestinal worms in goats without the use of synthetic dewormers is actually a major problem. This would be a "product testing" research project, dealing with specific brand names and commercial formulations (as well as "old-time" remedies which unfortunately aren't as effective as they used to be). This might need to be combined with a goat grazing research project as well (since wormloads will be affected by the type of forage available).
- 3. Ornamental cultivar trials: For trying out new cultivars of ornamental plants that are developed by the breeders but don't have a long history of actual "behavior" in gardens. This would include susceptability to pests and diseases, shade tolerance, winter survival, adaptability to soils, weediness, etc.
- 4. Garden plants and browsing by white-tailed deer: White tailed deer are a rather serious horticultural problem in parts of the United States and Canada, and this project would try to collect data on what gets browsed and what doesn't.
The technical issues are really just this: one of the reasons the bloom clock works so well is that a certain kind of "results" are created as we go along, which is both convenient and (frankly) very satisfying. I can't see how #2, 3, and 4 can be designed that way, and I think the "instant gratification" afforded by the bloom clock's structure is a major advantage.
The etiquette issue is perhaps a bit more interesting, and is very much an issue with #2 (and to a lesser extent #3). In this project, we'd want to get in touch with both the developers of the products in question, as well as with goat herders who are trying to maintain an organic herd (for the record, I personally don't raise my herd organically, precisely because there isn't enough good information available about organic dewormers). I suspect we can get the herders if we can get the companies, but this would be a very new thing for Wikiversity: we'd essentially be using Wikiversity for product testing. I'm not convinced we should be hosting product testing for every concievable product, but I am convinced that we would reap-provide benefits from permitting this sort of project when it comes to agricultural projects, as it would be an excellent example of Wikiversity's mission of teaching-learning.
The other thing, however, is that this begs the question of the need for an OTRS. If a company wants to register itself as a user, it seems to me that we might want some way of confirming that they are who they say they are. We might in some instances want to confirm that the farmers are who they say they are too (particularly if the company in question wants to offer samples of a dewormer to wikiversity-participant-farmers).
There's a lot to think about here -- and I realize some of this might sound alarming to people who are experienced Wikipedians and see the prospect of businesses directly participating in the wiki as a Very Bad Thing -- but I'd like to point out that farmers and small business owners don't have time to go to "brick and mortar" universities, and small companies that create products for niche markets often don't have the capital required to fund research studies by "brick and mortar" universities. Maybe we can do that :). --SB_Johnny | talk 23:35, 18 December 2007 (UTC)
- Projects #2, 3, and 4, I don't know anything about and couldn't help with, but I will toss in my two cents and say that I personally don't have any problem with partnering with businesses, as long as there is no plastering of ads and logos all over Wikiversity. Cooperation yes, advertisement no. As for project #1, though, I think it is a great idea and one I would personally be interested in- moreover, I'm currently in a class called "The Insect Connection" and I know the teacher quite well, she's also my advisor, and I know she'd be fascinated by such a project. I don't know whether or not she'd be willing to trust wiki technology, but she would love the idea of an "Entomology Clock" and I might be able to get her involved (and potentially, her future students!). I've also been thinking for a while about an "Ornithology Clock", if you will, although I don't know if we'd call it that. It would be based on the Bloom Clock model, but also be similar to Cornell's Project Feederwatch. It might be too much to start several of these projects all at once, though, and I know I'm too busy to be involved in actually running one of them, unfortunetly. --Luai lashire 02:35, 19 December 2007 (UTC)
- "I'm not convinced we should be hosting product testing for every conceivable product" <-- Wikiversity should pay attention to the future of innovation and significant contributions from communities of online collaborators in some areas of research. Some research projects are orphans either because it is very hard for companies to profit from them or because nobody has taken the steps needed to devise a research strategy suited to the research problem. We should be trying to devise innovative ways to work with companies as well as public and private funding agencies to tackle appropriate research topics that might yield to methods that make use of a wiki-based community. Getting reliable date will require a good research project design for such Wikimedia wiki-based research projects, but I think it is something we should explore. Good record keeping is a fundamental foundation. An important "trick" for research is blinding. For example, you could have a company make two remedies (one might be a placebo) for the goats and not allow the owners of the goats to know which remedy they are getting....they would collect their results "in the blind". A neutral party (Wikiversity participants) would "break the code" and only then would it be possible to decide from the date which remedy had the best results. --JWSchmidt 03:45, 19 December 2007 (UTC)
- "I'm not convinced we should be hosting product testing for every conceivable product" <-- I think we should host projects as interested parties show up with sufficient resources to perform effectively in accordance with our evolving policies. Obviously we have to build capabilities. If someone wanted to start human trials at Wikiversity I would say they have quite a steep long slope to build the necessity capabilities to do such things ethically and as responsibly as possible, and as legally required. I think you have some great project ideas above and it appears to me that you have already started thinking clearly about how to implement them effectively at Wikiversity. I think real world identity authentication is going to be a critical first step. Somebody has to be accountable for data authenticity and verification or nobody is going to trust it when they get surprising (to them or perhaps everyone) results. Then the question of acquiring resources and effort to reverify vs. simply deleting questioned results will come up. We probably need to find an experienced agricultural college professor who has dealt with some of these issues before to help us along. JWSchmidt's experience and expertise appeared invaluable to me in moving the existing research policy along to something readable, understandable, and (appears to me so far anyway) effective as a starting point. The companys' you approach may have Ag college programs they already work with occasionally ... perhaps they can introduce us and request consideration of your projects for student/prof/research participation. I will think some more about this, it is a big important topic for Wikiversity to realize its full potential. I agree we should be looking for niches where we can productively contribute, not trying to compete with existing brick and mortar capabilities, but using our strength in distributed access to make new things possible. Mirwin 04:24, 19 December 2007 (UTC)
- Perhaps one thing that would help initialize such new projects is a dedicated Wikiversity technical development team area or mailing list. We need a way for people to request and discuss technical needs with people willing to help them accomplish the required technical development whether it is a bit of wiki syntax for easy data logging or prototyping of a sophisticated new capability such as the molecular modeling that Draicone and others are tackling on the sandbox server project for a chemistry class. We need to be routinely developing and testing new required capabilities in a trusted fashion such that new required capabilities can move to operational status on the WMF's server farms and mirrors in a timely fashion vs. the indefinite stall and perpetual confusion we currently seem to experience. Do not get me wrong, the current custodians have done a marvelous job of supporting newcomers. It is just that some things need teams of effective specialists pooling their skillsets to accomplish. Mirwin 04:32, 19 December 2007 (UTC)
Stub Policy
Perhaps we should ask people labeling articles as stubs to outline what they think is missing. For example see hydrogen. It is a discrete small set of physical constants, possibly complete for someone's purpose. Someone tagged it as a stub. I added a link to a rather lengthy complete looking Wikipedia article w:hydrogen. Is this stub now complete and non stubby so that the tag can be removed? What should be added to remove stub status? Hydrogen car technology? Hydrogen production from coal? Hydrogen technologies such as feul cells, bunsen burners, etc.? Personally I get tired of tags that seem to add little to material creation. I understand some people view them as useful for finding stubs .... I just wonder if content experts could not more usefully edit by following the learning trails or existing link maze and modifying materials with a knowledge of the context for which they are created or currently being used. That said, I am in favor of volunteers working/playing when, where, and how they please. I just wonder how one knows the web page is no longer "stubby" in the eyes of the tagger, creator, or community maintainers. Perhaps this is merely a matter of editing boldly and allowing the preponderance of community effort to drift the material towards its proper fate or categorization. What are some others views? Mirwin 17:55, 20 December 2007 (UTC)
- I would very much appreciate it if anyone finds uncategorized astronomy pages to please add them to Category:Physics and Astronomy stubs or at least to Category:Astronomy. Putting an article in either cat will get my attention that an article exists. I just discovered Liquid water on Europa which has an interesting bit of info. I would consider this a stub, although I didn't tag it as such. Instead I added an image and a link to wikipedia. There is no learning activity or jumping off point to start a discussion on the topic. To be honest, I'm not really sure where the creator of the page intended to go with it. --mikeu 18:35, 20 December 2007 (UTC)
- I'm not entirely sure that the idea of "stubs" is even helpful in a Wikiversity context. A page may not require very much content in order to be complete & useful within its context. It makes more sense to describe Wikiversity pages purely on the basis of coherent content and usefulness than on length. --Luai lashire 21:00, 20 December 2007 (UTC)
Changing my user name
Hi. How can I change my user name? a.z. 04:31, 21 December 2007 (UTC)
- Just make a new user account. If you like, you can link the userpage of your old account to the new one. --JWSchmidt 04:47, 21 December 2007 (UTC)
- Thanks, but the name I want has already been registered. It has no contributions, and I was hoping I would be able to have it. Is it possible to deregister it? a.z. 05:29, 21 December 2007 (UTC)
- It might be possible for you to get the name, but many Wikiversity usernames "belong" to Wikipedians who came here just to register the name they use at Wikipedia. You can try contacting User:Cormaggio or User:Sebmol. They have the tool that is needed to rename users. --JWSchmidt 06:07, 21 December 2007 (UTC)
Gadgets
Wikipedia now has some options in user preferences for "gadgets" We could request this for Wikiversity. --JWSchmidt 18:22, 10 December 2007 (UTC)
- Pro. Let's get it to "play" with it. Who doesn't want (it later), must not enable it :-) ----Erkan Yilmaz (Wikiversity:Chat, wiki blog) 17:41, 12 December 2007 (UTC)
Update
The navigation preview gadget is now available here (see your user preferences). Are there any others that we should have? --JWSchmidt 17:14, 18 December 2007 (UTC)
- Second gadget available: Enhanced Talk: Color codes discussions to make easier to follow, much like some forums or bulletin board do.
- Navigation popups: there appears an error message when previewing pictures: imagepage preview failed :( is the query.php extension installed?
- Though for 2 tried file formats (png+gif) the mini pic was shown anyway (tried with different browsers). More info also here. ----Erkan Yilmaz (Wikiversity:Chat, wiki blog) 09:13, 23 December 2007 (UTC)
I've never seen this error messageI had never seen it because I never tried to get a navigation popup for an image...now I see it. Are you saying that you always get that error message? Are you saying that you see the error message with all browsers? I wonder if seeing this error message has something to do with other preferences setting such as skin or Thumbnail size preference. --JWSchmidt 14:00, 23 December 2007 (UTC)
- Erkan Yilmaz (Wikiversity:Chat, wiki blog) 14:27, 23 December 2007 (UTC)
Am I doing something wrong
Following a discussion on IRC, the material previously placed here has been moved to Wikipedia arbitration committee/Pedophilia userbox wheel war. McCormack 16:42, 24 December 2007 (UTC)
School and university projects
I posted a note at w:Wikipedia talk:WikiProject Classroom coordination to get some collaboration started. I'll be watching w:Wikipedia:School and university projects and Wikiversity:School and university projects to see if we get any interest.--mikeu 15:04, 18 December 2007 (UTC)
- I added a section called Wikiversity:School_and_university_projects#Welcoming_committee, please sign up if you are interested.--mikeu 16:41, 26 December 2007 (UTC)
Ayn Rand quote on the main page
The Ayn Rand quote on the main page is a bit distasteful to my opinion. I see love as a normal condition of the body as it has been evolved. It is not a second hand emotion, while creation is something better. I prefer to be part of a moderate Wikiversity, not something radical extremistic.--Daanschr 15:45, 29 December 2007 (UTC)
- I suggest that we start a learning project for discussion and selection of quotes to be used at Template:QOTD. Also, what bothers me most about the current quote of the day ("The second handers offer substitutes for competence such as love, charm, kindness - easy substitutes - and there is no substitute for creation." -Ayn Rand) is the fact that it links directly to Wikipedia. I think it would be good for Wikiversity to have pages about all the people who we quote on the main page. We already have Wikiversity:Quote of the Day, but I would like to see a related page in the main namespace where people could discuss the quotes, what they mean and if using particular quotes are suited to Wikiversity. I started a new page for dealing with QOTD as a subpage of the main page learning project; see Main page learning project/QOTD. --JWSchmidt 16:22, 29 December 2007 (UTC)
- I didn't knew it was the quote of the day. Perhaps, that could be notified and that has been derived from Wikipedia as a solution?--Daanschr 16:52, 29 December 2007 (UTC)
- Sorry I was not clear. The choice of these quotes is made by the Wikiversity community. When I said, "it links directly to Wikipedia," I just meant that the name "Ayn Rand" is a link to Wikipedia. I think we should change that so the link is to a Wikiversity page. --JWSchmidt 17:01, 29 December 2007 (UTC)
- Isn't the general avoidance of WP links a little isolationist? After all, one of the great things about WV as a learning resource is the ease with which it can link into other Wikimedia projects for reference? Learning resources can do with encyclopedic references, and sometimes an encyclopedic reference is better for a learning resource than a circular link into another learning resource? McCormack 17:08, 29 December 2007 (UTC)
- I've never advocated "general avoidance of WP links".....I use them widely in my Wikiversity editing. In this case, my thinking was as follows: if we like a quote so much that we put it on our main page, we can use that as a starting point for involving visitors in a Wikiversity learning project. To do otherwise would just be to miss an opportunity to enhance participation at Wikiversity. --JWSchmidt 17:14, 29 December 2007 (UTC) | http://en.wikiversity.org/wiki/Wikiversity:Colloquium/archives/December_2007 | crawl-002 | refinedweb | 17,525 | 59.13 |
Damon Payne
This is, officially, not supported. VS2005 is required to debug CF2, and VS2005 does not target CE.net 4.2 devices. It was a good step forward when the CF2 service pack one came out with runtime support for CF2 and SqlMobile 3 on CE.net 4.2 devices but as someone who loves the debugger, lack of debug suport was a cruel omission. Often things that are not officially supported will still work in many cases though, so when SP2 came out I tried debugging on my DAP ce4.2 devices. "A file cannot be created when it already exists", an obscure error with hardly a mention on Google groups or MSDN forums, so I gave up.
Today I was debugging on a CE 5 device from this vendor. Somehow it got set to "limited user" mode, a proprietery feature of theirs. Limited user does not have access to read or write most things, including whatever VS2005 needs to start managed debugging on the device. Lo, I get the same error on my CE5 device which had previously been working for a week "A file cannot be created when it already exists". Changing back to "supervisor" mode on the device alleviated the issue. On a whim, I tried the first CE 4.2 device I came across in the office and I get the error.
Enabling "supervisor", and slowly but surely (much slower than CE5) it deploys to the device. Starting debugging fails though and I get an error saying it cannot find a version of the CLR compatible with my application, "Some devices do not support automatic CLR upgrade". Manually upgrading with NETCFv2.wce4.ARMV4.cab, System_SR_ENU.CAB, sqlce30.ppc.wce4.armv4.CAB, sqlce30.dev.ENU.ppc.wce4.armv4.CAB and I'm in business on CE 4.2. If anyone else has this problem I would encourage experimentation, it might work.
Edit: I should note that my target platform in VS2005 was "Windows CE 5.0 Device".
You gotta love Generics in .Net 2.0. I had to do some Object-sorting in memory with some List<T> stuff.
/// <summary>/// Compare objects of type T using a certain property/// </summary>/// <typeparam name="T"></typeparam>public class PropertyComparison<T> : System.Collections.Generic.IComparer<T>{/// <summary>/// What property of the objects to use to compare one to another/// </summary>/// <param name="property"></param>public PropertyComparison(string property){_propName = property;}private string _propName;private PropertyInfo _prop;public int Compare(T x, T y){if (null == _prop){_prop = x.GetType().GetProperty(_propName);}try{IComparable xVal = _prop.GetValue(x, null) as IComparable;IComparable yVal = _prop.GetValue(y, null) as IComparable;return xVal.CompareTo(yVal);}catch (System.NullReferenceException){throw new ApplicationException("Type " + _prop.PropertyType.ToString() + " is not IComparable");}}}
So then I would use it:
if (!string.IsNullOrEmpty(prop)){CarSpot.Mobile.Types.PropertyComparison<Vehicle> c = new CarSpot.Mobile.Types.PropertyComparison<Vehicle>(prop);try{_dataSource.Sort(c);
I have 398 movies in my netflix queue right now, I have been told I will never catch up and get the list down below 300.
People are so negative...
My custom home theater project has been on for a while now. Yesterday the ole city inspector stopped by and gave me a green sticker on the rough-in of the room. I live in a code compliant community which means if I ever want to sell this place I actually need to follow the rules as far as permits and Wisconsin Uniform Dwelling Code and such. Now I just need to finish fishing some wire-conduit and insulate, drywall, paint, enjoy. It will be a nice room, approx. 21' by 25' with 9' ceilings. This has been a long time in planning before the house was even started, one of several big plans coming to fruition right now.
Most people could stop reading now, but some may ask what equipment I have:
Later in 2006 or early 2007 I'll buy another 2 channel amp to do 7.1. On completion, all Wisconsin area nerds will be invited to screen some action films.
I cannot talk about a lot of the stuff that is going on at work, and I am suffering from Blog Withdrawl. I think I am going to merge my technical and non technical blogs here. Soon readers will be flooded with politics, audio, wedding planning, and more.
Theme by Mads Kristensen
Damon Payne is a Microsoft MVP specializing in Smart Client solution architecture.
Tweetses by damonpayne
Get notified when a new post is published. | http://www.damonpayne.com/2006/10/default.aspx | CC-MAIN-2013-20 | refinedweb | 753 | 56.35 |
Learning Objectives
- Correctly implement controller action handlers and helper functions, and call helper functions from controller code.
- Explain the difference between bound and unbound expressions.
- Explain how Visualforce global variables and Aura components global value providers are different.
- Stretch goal: Explain the difference between “c.”, “c.”, and “c:”. (No, that's not a typo.)
Controller Syntax
({ myAction : function(component, event, helper) { // add code for the action }, anotherAction : function(component, event, helper) { // more code }, })
This isn’t a class. It’s a JSON object containing a map of name:value pairs. It can contain only action handlers, which you’ll never invoke directly.
- Your action handlers are declared as anonymous functions assigned to the action name (here, myAction and anotherAction). code.
A quirk of JavaScript syntax means that the first comma in this example is mandatory, but the second one (which looks like a dangling mistake), is allowed but optional.
It’s a best practice to always add the optional comma after your last action handler, even though it’s not necessary, and might make some people’s OCD twitch. This is because at some later time you’ll probably add another action handler, and you’ll forget the comma. (Your humble author has done this too many times to admit publicly.) And then you’ll get a cryptic error, and spend time trying to figure it out. That’s a waste of time, and it’s totally avoidable.
Helper Syntax
myAction : function(component, event, helper) { helper.aHelperFunction(component, 42); },
({ aHelperFunction : function(component, value) { // helper code here }, })
- Helper functions can be declared to take any parameters you want. You call the helpers, not the framework.
- By convention, pass a reference to the component calling the helper as the first parameter to the helper.
- Helpers are singletons, and are shared across all component instances. Don’t keep component state in a helper. Think of helpers more like static methods in your Apex controllers.
- You can add non-functions to a helper, and have the values available to your controller action handlers—for example, an API credential you want to load only once. But keep in mind it’s a shared variable, and there’s nothing guarding against inappropriate access. Think—carefully—before you use this strategy.
Controllers vs. Helpers
So, what do you put in a controller action handler function, and what do you put in a helper?
Many people consider it a best practice to keep action handlers as simple and clear as possible, and abstract out the details into the helper. That’s a sensible approach, but there are others.
Our advice is to put code you want to reuse across action handlers into the helper. Beyond that, it’s a matter of taste and style, and your organization’s coding standards.
Expressions
Expression syntax in Visualforce and Aura component markup is similar. Here’s an example.
The message: {! messageText }
Is this Visualforce or Aura component markup? It’s Visualforce. The same markup throws an error if you use it in an Aura component.*
How can you tell the difference? The expression references messageText, which is presumably a property or getter method on the page’s controller, or maybe on a controller extension. The value is coming “from somewhere” but there are a number of possibilities. You can’t really tell with Visualforce.
Aura components don’t let you get away with pulling a value out of “somewhere.” Here’s the most likely corresponding Aura component markup.
The message: {! v.messageText }
The key difference, the “v.”, is subtle when you’re first getting to know Aura components. It’s the value provider for the view, which is the component itself, and it’s how you reference a component’s attributes. We tackle value providers in the next section. For the moment, just know that if you leave off the value provider, you get an error.
This is especially important to remember when you convert Visualforce markup to Aura components.
The markup looks similar, and it’s easy to copy-and-paste it into place. Don’t forget to add the
value provider in every expression.
Two other points we want to make about Aura component expressions and expression syntax. First, there are actually two different kinds of expressions in Aura components: bound and unbound, which use different expression delimiters.
- Bound expression: {! v.messageText }
- Unbound expression: {# v.messageText }
The difference in syntax is one character, the exclamation point vs. the hash mark. The
difference in behavior is either invisible, or profound, depending on context. The unbound
version acts most like a Visualforce expression.
The
different syntax provides the similar behavior—intuitive, right?
Bound expressions add some Aura component magic, creating a two-way binding between all uses of the value. If you change the value in one place, you change it everywhere. You can read up on the details in Data Binding Between Components.
<aura:component ...> ... The message: {!v.message} <lightning:button ... </aura:component>
The most obvious difference is the value provider, “v.” or “c.”. “v.” is the view, as we saw previously. “c.” is the controller. You use “c.” to reference actions in the controller. Here we bind an action to the button’s click event.
Although you can use a component attribute directly, as we do here with the {!v.message} string attribute, you reference functions in the controller. That is, you can’t call those functions directly. The result of evaluating the {!c.acknowledgeMessage} expression is a reference to a function, not a result value you can use directly. Adding the value provider doesn’t change that.
____________________
* There’s one exception, if messageText is a local <aura:iteration> variable, but let’s keep this simple.
Value Providers
After that last section, what’s left to say about value providers? Plenty. But here we’re going to call out one possible source of confusion. Another chute.
({ "echo" : function(cmp) { // ... var action = cmp.get("c.serverEcho"); action.setParams({ echoString : cmp.get("v.message") }); // ... } })
There are two things to notice. The first is the method used to access the value provider; you have to call get() with the name of the attribute you want to access, including the value provider. So, cmp.get("v.message").
The second is the chute. Look again at that first line of the function. What is
cmp.get("c.serverEcho") getting? It’s getting a
reference to the server-side action. In a component’s markup, “c.” represents the client-side controller. In the same component’s JavaScript
(action handler or helper), “c.” represents the
server-side controller.
Confused? It gets worse.
You’ll also see “c:” in
both markup and JavaScript. That’s c-colon, instead of c-period. “c:” is the default namespace. It represents Aura component code you’ve added to
your org.
We recognize there’s an opportunity for confusion, which is why we call it out here. Keep an eye out for “c” in all its forms: “c.”, “c.”, and “c:”.
Global Value Providers
There is one kind of Aura component value provider—global value providers—that looks like Visualforce global variables. For example, both Visualforce and Aura components have $Label and $Resource. There are two potential chutes here.
Not every global variable available in Visualforce is available as a global value provider in Aura components.
Some global variables and global value providers have the same name. Even so, they behave differently. (If you can’t think of reasons why, head back to Aura Components Core Concepts for a refresher. Before you take the challenge. Hint, hint!)
Don’t try to use a global value provider without reading the documentation for it in the Lightning Aura Components Developer Guide first.
Resources
- Handling Events with Client-Side Controllers
- Sharing JavaScript Code in a Component Bundle with Helpers
- Using Expressions
- Data Binding Between Components
- Data Binding in Lightning Components Performance Best Practices
- Value Providers | https://trailhead.salesforce.com/en/content/learn/modules/lex_dev_lc_vf_tips/lex_dev_lc_vf_tips_syntax | CC-MAIN-2019-18 | refinedweb | 1,312 | 59.5 |
Updating to Spark 3.0 in production
Breaking changes and expected improvements: a production point of view
With more than 70 jobs running with Spark and hundreds of gigabytes of data processed per day, Spark is a critical piece of our data pipelines.
At Teads, we use the official open-source Spark package and spawn AWS EMR clusters to run our jobs. Hence, it is essential to optimize our jobs to reduce billing. With an exciting two-fold speed-up promise, we had to give Spark 3 a try.
In this article, we first cover the main breaking changes we had to take into account to make our code compile with Spark 3. Then, we go through the new features and their performance impact on our production environment.
Breaking Changes
Let’s first discuss the four main breaking changes we faced when updating our codebase to Spark 3:
- Depreciation of
UserDefinedAggregateFunction
- Depreciation of untyped UserDefinedFunction, a.k.a.
udf(AnyRef, DataType)
- Change in behavior of
spark.emptyDataFrame
- Cassandra driver incompatibilities with third-party libraries.
Deprecation of UserDefinedAggregateFunction
A
UserDefinedAggregateFunction, commonly called UDAF, is particularly useful to define custom aggregations, such as averaging sparse arrays.
In Spark 3, the
sql.expressions.UserDefinedAggregateFunction API has been deprecated in favor of the
Aggregator API (based on
algebird Aggregator). The former suffers from a serialization and deserialization overhead with each merge into the buffer (merge of rows or temporary buffer). In this gist experiment, for 1000 rows, there is 1006 ser/deser. The newly introduced API aims to remove this cost.
On top of the fact that it is supposedly faster, this new API is also easier to write, read, and maintain. That makes it preferable from an engineering point of view.
As an example, here is a gist of a
Mean aggregation function with both UDAF and Aggregator.
Concrete impacts of this change observed in production are presented later in this post.
Deprecation of untyped UserDefinedFunction
We also had to face a change in the default behavior of untyped
UserDefinedFunction (aka
udf) between Spark 2.4 and Spark 3.0. This change is explained in this PR and can be illustrated as follows.
Given a UDF defined this way
val f = udf((x: Int) => x, IntegerType). For a null value in Spark 2, it returns null and in Spark 3 it returns the default value of the Java type. In this case, 0 for an
Int.
One should then be especially careful and rely on strong and wide tests as this difference of behavior can change the results. For instance, this code would behave differently depending on the Spark versions:
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._val f = udf((x: Int) => x + 1, IntegerType)Seq((None), (Some(0))).
toDF("value").
withColumn("incremented", f($"value")).
agg(sum("incremented")).
first.getLong(0) // 1 in Spark2, 2 in Spark3
This deprecation warning raises an exception, that can be removed (if you are sure you want the new behavior) by setting the configuration
spark.sql.legacy.allowUntypedScalaUDF=true.
Change in behavior of `spark.emptyDataFrame`
There is a known hack in Spark 2 that allows you to make a side effect on each executor of a cluster at runtime. The idea of this hack is to create an empty
DataFrame, repartition it by the number of executors and use
foreachPartitions to trigger your side effect as in the following example:
val nbExecutors = sparkSession.sparkContext.getConf.get("spark.executor.instances").toIntsparkSession.
emptyDataframe.
toDF.
repartition(nbExecutors).
foreachPartition {
(_: Iterator[Row]) => idempotentSideEffect()
}
Unfortunately, this hack does not work anymore in Spark 3. To understand why let’s compare the physical plans in both versions.
In Spark 2:
spark.emptyDataFrame.explain(true)== Optimized Logical Plan ==
Repartition 8, true
+- LogicalRDD false== Physical Plan ==
Exchange RoundRobinPartitioning(8)
+- Scan ExistingRDD empty[]== Physical Plan ==
LocalTableScan <empty>
In Spark 3:
== Optimized Logical Plan ==
LocalRelation <empty>== Physical Plan ==
LocalTableScan <empty>
We can observe with the
OptimizedLogicalPlan that the exchange on an empty
Dataframe is now optimized in Spark3 making the trick inefficient.
In a few words, the optimization comes from the inner representation of an empty
Dataframe. It used to be represented with an empty
RDD which prevents optimizations such as
PropagateEmptyRelation. In Spark 3, it is now using an empty
LocalRelation. You can dig more with this commit.
As a workaround, one can now use
spark.range(nbExecutors).repartition(nbExecutors) instead of
spark.emptyDataframe.repartition(nbExecutors) to trigger a side effect on each machine of a cluster at run time.
Cassandra driver incompatibilities between third-party libraries
In the latest version of
spark-cassandra-connector (3.0), the java driver was bumped to version 4.7.
Be careful before spending time to bump it to make sure all your third party libraries using Spark and Cassandra are up to date with the Java driver version 4.
For instance, the latest release (November 2020) of
cassandra-all is still using Cassandra's driver in version 3. Which makes libraries based on
cassandra-all incompatible with
spark-cassandra-connector.
Performance impacts observed in production
With this new Spark version, we observed some interesting speed-ups that are described in the following subsections.
Adaptive Query Execution
Adaptive Query Execution (AQE) is the main feature of Spark 3. In a few words, AQE comes with three new features for optimizing queries at runtime:
- Dynamically coalescing shuffle partitions (coalesce contiguous small tasks)
- Dynamically switching join strategies (change the join strategy based on runtime statistics)
- Dynamically optimizing skew joins (optimize joins with unbalanced data)
For more details about the new features, this blog post seems to be a good starting point.
Sadly, despite the interesting promises related to AQE, we did not observe huge improvement on our side yet. Hence, so far in production, we have only seen the impact of the dynamic coalescing of small partitions. It saved us up to 5% of the runtime. As an example, we studied the size in bytes of the tasks and their performance.
In Spark 2:
In Spark 3:
We can see the difference in behavior between Spark 2 and Spark 3 on a given stage of one of our jobs.
In Spark 2, the stage has 200 tasks (default number of tasks after a shuffle), 170 KB per task, and lasts 18 seconds.
In Spark 3, the stage has 50 tasks, 1450 KB, and lasts 5 seconds. A saving of 70% on this stage.
It is worth mentioning that the newly created stages can sometimes be harder to read and to follow because each stage resubmits its optimized DAG as a new job. For instance, for a single job with 4 stages, you might see 10 stages, 6 skipped and 4 jobs instead of 4 stages, 0 skipped, and 1 job.
It can be quite confusing when you want to track down your job.
One possible explanation of the very limited impacts that were observed can be found in the specific use cases we have, the fact that our jobs are hand-optimized, and the nature of our data.
Indeed,
- We have barely any skew joins.
- Our data load is not variable. We might see a factor 0.5 or 2 depending on the time of the year, but we never see an unexpected 10X load. Hence, we can choose our join strategies and the number of partitions during development.
Nevertheless, this is worth trying and digging. Do not hesitate to set
spark.sql.adaptive.enabled at true and share the performance improvement you get.
UDAF Aggregator
The main improvement we have seen comes from the new UDAF API. We observed a speedup between 10% and 15% for jobs impacted by this enhancement. This performance gain can be explained in two ways:
- No serialization and deserialization of the rows while aggregating (that also reduces a bit the GC pressure).
- Most of the
SortAggregatephysical operators turn into
ObjectHashAggregatewhich saves us a sort.
In Spark 2:
In Spark 3:
In this example, we can observe a 20% duration decrease of one stage, which is a pretty interesting gain. If you want to dig more into the differences between these two physical operators. I suggest you have a look at this detailed blog post.
Nested schema pruning
The
spark.sql.optimizer.nestedSchemaPruning.enabled configuration was available in Spark 2.4.1 and is now default in Spark 3 (see commit). This setting enables the pushdown predicate on nested fields for supported formats. It can improve reading performances by up to 70% in our case.
The good application of this parameter can be checked in the SQL tab:
In our
Scan parquet, we only read
user_context.vidinstead of the whole
user_context structure.
We do not see any reason why this parameter was set at false in the first place. Do not hesitate to enlighten us if you have more insights.
Pushdown predicates on CSV
As we barely use CSV at Teads, the following optimization had almost no impact on our production jobs.
However, it is worth mentioning Spark 3 now supports pushdown predicates on CSV with
spark.sql.csv.filterPushdown.enabled configuration (set to true by default).
If you want to check which format supports pushdown predicates, do not let the query plan trick you. For instance, if we have a CSV with only one column containing id, you will see that this query plan remains unchanged between the two Spark versions:
In Spark 2:
spark.read.option("header", "true").option("inferSchema", "true").csv("test.csv").filter($"id" > 5).explain()== Physical Plan ==
*(1) Project [id#55]
+- *(1) Filter (isnotnull(id#55) && (id#55 > 5))
+- *(1) FileScan csv [id#55] 3:
spark.read.option("header", "true").option("inferSchema", "true").csv("test.csv").filter($"id" > 5).explain(true)== Physical Plan ==
*(1) Project [id#25]
+- *(1) Filter (isnotnull(id#25) && (id#25 > 5))
+- *(1) FileScan csv [id#25] 2 it seems like the filters are pushed at the source level. However, the filter is never used in the source code. For implementation details, do not hesitate to check the source code and more especially this commit. You will notice
filters: Seq[Filter] is an argument of
buildReader, but is never used in Spark 2.
Mapreduce fileoutputcommitter algorithm version
mapreduce.fileoutputcommitter.algoritm is a parameter to control the algorithm used to write data. A known trick is to use version 2 to drastically improve the write performances.
Nothing new on the implementation side with Spark 3. However, the documentation has been fixed.
In this commit, the documentation about the default value has been fixed. It does not come from Spark but is inherited from the Hadoop version. So for Hadoop version >= 3.0.1, the default version is 2, otherwise, it is 1. This might not be true in the future, there are some discussions ongoing.
Conclusion
We successfully migrated our whole stack to Spark 3, including data pipelines, machine learning training, and almond kernel for notebooks.
At first, we were afraid of bumping a big framework that impacts our production a lot, but with an in-house job deployment system that enables us to roll out smoothly and only a few breaking changes in the code. The bump was easier than it seemed.
The gain is noticeable and our AWS billing has been reduced, but unfortunately not as much as we were expecting after reading some benchmarks. In databricks runtime, some queries from the TPC-ds benchmark are sped-up by 18 times.
However, with the newly introduced AQE, we expect to see more and more optimizations at runtime. We cannot wait to see the next Spark releases and optimizations to try again AQE.
If you want some tips on Spark performance improvements, do not hesitate to have a look at our other articles:
Spark performance tuning from the trenches
A collection of best practices and optimization tips for Spark 2.2.0
medium.com
Spark troubleshooting from the trenches
Troubleshooting tricks and external data source management
medium.com
Thanks to all those who reviewed this article. Especially Yann Moisan, Joseph Rocca, Benjamin Davy, and Han Ju.
References:
- Waiting for code physical aggregators operators
- Adaptive Query Execution introduction by databricks
- Benchmarks of Spark 3 in databricks runtime
- Use case of a User defined aggregate function at Teads
- Gist to count the number of ser deser in old Udaf
- Gist with the syntaxes for old Udaf and Aggregator Udaf
- UserDefinedAggregateFunction deprecation documentation
- UserDefinedFunction breaking change commit
- File output committer algorithm version documentation fix
- Nested schema pruning commit
- Spark-cassandra-connector GitHub
- Spark configurations
- Tpc-ds benchmark
- CSV pushdown predicates supported | https://medium.com/teads-engineering/updating-to-spark-3-0-in-production-f0b98aa2014d | CC-MAIN-2021-04 | refinedweb | 2,085 | 55.13 |
Icewind Dale II
Heart of Fury Guide by C.LE
Version: 4.3 | Updated: 08/14/12 | Search Guide | Bookmark Guide
=============================================================================== C h r i s L e e ' s H e a r t o f F u r y, P o w e r g a m i n g, a n d B e y o n d v 4.3 =============================================================================== ===============================================================================. Example: if you ever want to navigate back to the Table of Contents, search for (with an asterisk in front) '---'. Periodically, you'll find mentions of "find shortcuts" - the asterisk followed by the three digit number is exactly what they reference, only without the asterisk. The pattern behind the shortcut keys is simple: the first three letters of each section/sub-section or enough letters to be unique, separated by a colon, and terminated by a dash. ------------------------------------------------------------------------------- Special Note *SPE- Introduction & Contact Info *INT- (aka What the hell is this?) What Happens in Heart of Fury? *WHA- Basic Heart of Fury Mode Concepts *BAS- 1. AC *BAS:AC- a. Druid-based AC *BAS:AC:DRU- 2. Your base attack bonus *BAS:YOU- 3. DR *BAS:DR- 4. Saving throws *BAS:SAV- 5. Luck *BAS:LUC- 6. Damage vs Crowd Control *BAS:DAM- 7. Swords vs Magic *BAS:SWO Building your Party *BUI- 1. Decoy *BUI:DEC- 2. Buffers *BUI:BUF- 3. Crowd Control *BUI:CRO- 4. Other Roles: Damage/Healing *BUI:OTH- a. Maximizing physical damage *BUI:OTH:MAX- 5. Alignments: Good vs Not Good? *BUI:ALI- 6. Good and Bad Feats *BUI:GOODANDBADF- 7. Good and Bad Skills *BUI:GOODANDBADS- Key Racial Breakdown *KEY- 1. Human/Aasimar *KEY:HUM- 2. Drow *KEY:DRO- 3. Deep Gnome *KEY:DEE- Class Breakdown *CLA- 1. Barbarian *CLA:BARB- 2. Bard *CLA:BARD- 3. Cleric *CLA:CLE- a. Domains *CLA:CLE:DOM- 4. Druid *CLA:DRU- a. Forms *CLA:DRU:FOR- 5. Fighter *CLA:FIG- 6. Monk *CLA:MON- 7. Paladin *CLA:PAL- 8. Ranger *CLA:RAN- 9. Rogue *CLA:ROG- 10. Sorcerer *CLA:SOR- 11. Wizard *CLA:WIZ- Spells of Note *SPE- 1. Buffs/Support *SPE:BUF- 2. Crowd Control *SPE:CRO- 3. Damage *SPE:DAM- 4. A Word on Summons *SPE:AWO- Gearing Up *GEA- 1. Which weapon proficiency? *GEA:WHI- 2. Weapons of Note *GEA:WEA- a. High-Saving Throw Weapons *GEA:WEA:HIG- 3. Armor of Note *GEA:ARM- 4. Accessories of Note *GEA:ACC- Sample Parties *SAM- 1. 6-person Good Party *SAM:6PE- 2. 4-person Good Party *SAM:4PE- 3. 2-person Evil Party *SAM:2PE- 4. Playing a Smaller Party *SAM:PLA- ...and more! *AND- 1. Important Notes *AND:IMP- 2. Challenges *AND:CHA- Chapter-by-Chapter Notes *CHA- 1. Prologue *CHA:PRO- 2. One *CHA:ONE- 3. Two *CHA:TWO- 4. Three *CHA:THR- 5. Four *CHA:FOU- 6. Five *CHA:FIV- 7. Six *CHA:SIX- Appendix *APP- 1. History *APP:HIS- 2. My works *APP:MYW- =============================================================================== =============================================================================== Special Note *SPE- ------------------------------------------------------------------------------- Aside from a few minor bugs, Icewind Dale II is a remarkably stable game (after you install the official patch, that is). Unlike Baldur's Gate or Baldur's Gate II, you won't find any massive third-party fixpack to address outstanding issues. That being said, there *are* still a few minor issues that have been fixed, but it rarely ever gets publicity, so it behooves you to go to and download the "IWD2 Unofficial Item/Spell Patch by Gimble". Unzip it using your favorite archiver and toss the files into your IWD2's Override directory. Most of the issues are minor, but if you want a complete, solid play experience, it's still pretty good. The only unfortunate outstanding issue is that Improved Initiative is bugged and unfortunately cannot be fixed by conventional third-party override methods. Though, if someone can make a modified DLL or something to actually fix this, I'll gladly pay them a significant financial reward (let's put it at 1000 dollars 2012, adjusted for inflation). If the link above is broken, let me know, and I'll fix it. =============================================================================== =============================================================================== Introduction & Contact Info (aka What the hell is this?) *INT- ------------------------------------------------------------------------------- Icewind Dale II, in my opinion, is one of *the* most well-designed games ever made for the PC. It is also one of the most challenging, especially when you finish the game and decide to check off the "Heart of Fury mode" difficulty option to play again with your victorious party. However, there's a lack of good guides out there for this super hard difficulty mode, and the few that are out there have knowledge gaps, errors, and in some cases it almost seems like the writers themselves have never even played Heart of Fury (otherwise they would've noticed that some things they suggest don't work at all). Enter this guide! Hopefully you'll find this to be a veritable tome of all sorts of information for playing through Heart of Fury mode. Plus, I've even got extra stuff in case you want to challenge yourself even further (think Final Fantasy Tactics-style self-challenges). If you want to grab a hold of me, pop me an e-mail with the subject line beginning "IWD. =============================================================================== =============================================================================== What Happens in Heart of Fury? *WHA- ------------------------------------------------------------------------------- Courtesy of Ilya Nemetz when you turn on Heart of Fury mode, the following things happen: All enemies get +12 to their challenge rating (i.e. more experience yielded). All enemies get 3x base health (before constitution bonuses) All enemies get +12/13 to their to-hit rolls, though they retain their normal difficulty attacks per round information. All enemies get +10 to their saving throws. All enemies get +10 to all stats, which may improve AC, damage, and health on top of above bonuses. All enemies do 2x damage.** ** Ilya used various CLUAConsole tricks to check out enemy stats, so these should be pretty accurate. By his own admission though, the damage is a "bit" more complicated than just 2x, and I can confirm that ranged weapons and spells tend to do unmodified damage. =============================================================================== =============================================================================== Basic Heart of Fury Mode Concepts *BAS- ------------------------------------------------------------------------------- So, you might *think* you understand how the game works, but just by starting HOF mode, you'll notice that alot of ways IWD2 played in normal mode just don't apply anymore! ------------------------------------------------------------------------------- 1. AC *BAS:AC- At the end of normal difficulty, you might have some characters sitting comfortably at 30+ AC. They get hit on occasion, but nothing they can't handle. Then you start fighting goblins in the Prologue on HOF mode and notice that all of a sudden, these piddling creatures are basically hitting you on every single strike and hitting you *hard*. The monsters' base attack bonuses (BAB) drastically ramp up in HOF mode. As rechet's Powergaming guide so wonderfully points out, regular monsters' BAB bonuses (not counting specifically difficult monsters) easily go up to +52 for the first attack, which means that even with an astronomically high 50 AC, you'll still be hit 95% of the time by that first attack. Not to mention that the normal scaling down of BAB for successive attacks is only by 5, so on a second attack, that's still a potential maximum of +47, which will still hit you an oustanding 85% of the time with 50 AC. (Fortunately though, the number of attacks a monster gets doesn't seem changed from normal difficulty, so monsters won't have a ridiculous number of attacks/turn.) Not to mention that those buggers *hurt* when they hit. Stoneskin may have pretty much negated all damage on normal, but in HOF, melee damage skyrockets (ranged damage doesn't really scale up that much on HOF). Pathetic little critters will easily hit you up to 30 damage without critting, and the really big guys can easily wallop you for 50-60 damage without needing a critical. However, you *can* take advantage of one specific mechanic to get your AC to safe levels. And that's to abuse "generic" AC, which is the only type of AC bonus that stacks with itself (instead of simply using the highest value). rechet's guide covers this, but a complete listing of possible sources of generic AC is as follows: Innate: Deep Gnome (+4) Monk Wisdom Bonus (based on WIS) Monk AC Bonus (+1 per 5 monk levels, up to +6) Bard Song: War Chant of the Sith (+2) Feats: Expertise (up to +5) Dodge (+1) Deflect Arrows (+1 vs ranged) Spells: (Mass) Haste (+4) Tenser's Transformation (+4) Barkskin (up to +5) Items: bracers: Brazen Bands (normal+collector's edition only, +5) bracers: Indomitable Bands (HOF+collector's edition only, +5) necklace: Flame Dance Talisman (normal only, +1) necklace: Sunfire Talisman (HOF only, +3) head: Swing from the Masts (normal only, +1, Rogue only) head: Crow's Nest (HOF only, +3, Rogue only) Special: druid form: Air Elemental (+12) In addition, you can max out other sources, mainly Dexterity, Armor, and Deflection (there's also Shield bonus, but using a Shield will cancel out the best source of generic AC - the Monk Wisdom bonus). For these, these are the good sources. Dexterity: Race that has up to 20 starting DEX feet: Chimandrae's Slippers (+5 DEX) spell: Cat's Grace (+1d4+1) spell: Tenser's Transformation (+2d4) druid form: Air Elemental (base set to 29) Armor: Bracers of Armor +4 spell: Mage Armor (+4) spell: Spirit Armor (+6) spell: Shield (+7) Deflection: Farmer's Cloak (+3) Ring of Protection +3 Dagger of Warding (+3) Baron Sulo's hook (+3, dagger) Various spells (+4) spell: Divine Shell (+7) Note that no specific equippable Armor is mentioned. That's because if you really want to max out AC, the highest possible Armor-based AC (+11) is way too little considering it caps out your Dex-based AC too restrictively, so you're better off with a high Monk wisdom bonus and a high Dexterity bonus. There are also a few specific items/events worth mentioning, because these also help you attain high AC values through Wisdom. Wisdom: Potion of Holy Transference (+2 WIS, -1 DEX) Potion of Clear Purpose (+1 WIS, -2 CON) Banite Quest (+2 WIS)* Paladin Quest (+1 STR, +1 WIS)** Every God Ring (+5 WIS, Paladin/Cleric/Druid only) * You get this bonus if you are a Banite Cleric when you clear the glen of Undead in Kuldahar. ** You get this bonus if you are a Paladin and obtain the Holy Avenger sword. As you can see, there are some pretty strict class requirements that you must have to get the top AC. A reasonable selection of sources for AC might be (and remember, we only really need to shoot for 72, as since at that point monsters will always hit you on a roll of 20, there's no difference between 73 and 100000 AC most of the time)... 1 Paladin/15 Monk/1 Rogue/13 Conjurer Drow ... with 19 base DEX => 17 base DEX (2 Holy Transference) => 22 final DEX (Chimandrae's Slippers) ... with 18 base WIS => 30 base WIS (extra stat point every 4 levels, 2 Holy Transference, 2 Clear Purpose) => 33 base WIS (2 Paladin Quest) => 38 final WIS (Every God Ring) AC: 10 +6 (Dexterity) +14 (Monk Wisdom) +3 (Monk AC) +5 (Expertise) +5 (Indomitable Bands) +3 (Sunfire Talisman) +3 (Crow's Nest) +3 (Ring of Protection +3/Farmer's Cloak) +1 (Dodge) +4 (Haste) +4 (Mage Armor, up to +7 with Shield if necessary) +5 (Barkskin, a party member has to cast this) +2 (Bard song, a party member has to sing this) Total 68 That 68 is a bit shy of the ideal 72, but this character has a few options. Against high BAB monsters, s/he can cast Tenser's Transformation or Shield. Shield bestows an additional +7 off the bat, and a potential extra off the DEX bonus from Tenser's Transformation could bumps him/her to 72. Moreover, thanks to the Conjurer levels, s/he can cast Improved Invisibility (essentially giving a flat out 50% chance for monsters to miss even if they do roll a critical or something, though Blind-Fight Feat helps against this), Blink (a flat 50% chance for attacks against the character to fail, and Blind-Fight doesn't help against Blink), Blur (20% chance for attacks to miss, though it's unclear whether it stacks with Blink or Invisibility), and Mirror Image (essentially a buffer of 2d4 free "hits" the character can take). Moreover, other party members can cast spells like Symbol: Pain, Recitation, Prayer, Chant, and Emotion: Despair; these spells all penalize enemy attack rolls and essentially give your character "extra" AC. As you can see, there is a *bit* of flexibility: you can use Banite cleric levels instead of Paladin, you can trade off Wizard/Monk levels in favor of more Banite levels for Divine Shield, you can use a Deep Gnome, you could even experiment with using a Druid. However, it's pretty essentialy that your AC character have atleast 1 Monk level (for the Wisdom bonus), have some divine levels, and at least 1 rogue level. You can *try* and pass off without Wizard levels and rely on other party members to cast things like Mage Armor, Haste, and Improved Invisibility, but Mirror Image, Blink, Tenser's Transformation, and Shield are all self-cast only, so you should have a safely high AC (70+ without worrying about helper spells like Recitation or Emotion: Despair) and some good healing capabilities if you go that route. However, this does make clear that for AC to be effective at all in HOF, you pretty much need to focus all your efforts into a single character. If you try to have 2 characters with decent AC, you'll probably end up with 2 characters with AC in the high 40's - they might as well have 0 AC given how often they'll end up getting hit. All is not lost, though, for your non-AC characters. There are other mechanisms to keep them safe, which we'll talk about later, though Mirror Image (already mentioned here) is a pretty universally good protection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1a. Druid-based AC *BAS:AC:DRU- If you refer to section CLA:DRU:FOR-, you'll note that one of the druid's possible forms (Air Elemental) has a surprising AC (31 total, based off of 22 base and +9 dex from 29 dexterity). You'll also note that shapeshifting doesn't lose the Monk wisdom AC bonus, and any buffs cast _after_ shapeshifting still apply normally. Moreover, item bonuses to stats stick around. This leads to an alternate direction for AC that is based on heavy investment in Druid levels. There's not a lot of flexibility, though it's an interesting direction. Note that expertise and dodge feat don't help at all. 24 Druid/5 Monk/1 Dreadmaster of Bane Aasimar ...with 20 base WIS => 27 (+7 stat point) => 31 (2 Holy Transference) => 33 (2 Clear Purpose) => 37 (x2 Dreadmaster quest) => 42 (Every God Ring) ...with low base DEX => 29 (set from Air Elemental form) => 34 (Chimandrae's Slippers) AC: 22 (base Air Elemental) +12 (Dexterity) +16 (Monk Wisdom) +1 (Monk AC) +4 (Haste) +4 (Mage Armor) +2 (Bard Song) +5 (Barkskin, someone else has to cast this; won't work if self-cast before shape-shifting) Total 66 AC The interesting point of this is that it is possible to have virtually no overlap between a druid-based AC approach and a traditional decoy AC. Note that since a druid's physical (strength, dexterity, constitution) stats get _set_ upon shape-shifting, you can willy-nilly use all those +wisdom potions and destroy those stats without concern. Again, while 66 AC is short of the magical number, simply using Ghost Armor (+5 deflection) will put you in shooting range of the holy grail of 72. Adding Recitation/Prayer/Symbol: Pain/etc will carry you over, or simply using Spirit Armor instead of Mage Armor. The advantage to this is that, with a properly-built evil deep gnome, you can have two characters with nigh-impregnable ACs, which would not be possible otherwise. ------------------------------------------------------------------------------- 2. Your base attack bonus *BAS:YOU- Fortunately, monster AC's don't really go up that much on HOF. Yes, you'll occasionally run into monsters that are annoyingly hard to hit, but for the most part, even your pathetic Mages will probably be able to hit atleast twice a round at level 30. The basic consequence of this is that in many cases, you can start getting Power Attack for everyone who can use it and maxing out the value for +5 damage. Of course, you might not want too many people melee-ing, as it's hard to protect that many characters. This also means that you should be less worried about keeping Rapid Shot on at all times. It also means that, for the most part, you should start preferring items that do more damage over items that can hit better. A good example of this is Scales of Justice, a special HOF mode Axe that lets you switch into different "modes" - in one mode you can have +5 accuracy and +5 damage, in another you can have +10 damage. In most cases, keeping the +10 damage mode active is probably the best idea, as you're already probably going to be hitting every single one of your attacks. ------------------------------------------------------------------------------- 3. DR *BAS:DR- Damage reduction is important. It was almost abusively good in normal mode (the spells Iron Skins and Stoneskin pretty much granted you temporary immunity to attacks). In HOF, DR gets way worse, since monsters are busy hitting for ridiculous sums of damage. But even if monsters are doing upwards of 60 damage per hit, that 10/+5 DR may not be as good, but it's still a huge chunk of life you're saving every time you're hit. Here's a (probably) complete list of sources for DR. Note that DR has some funky rules about stacking. DR listed in the form of "5/+1" doesn't stack with other similar types of DR. This means that a character with 10/+2 DR and 5/+1 DR will only have 10 damage negated against an enemy with normal weapons, instead of 15. DR in the form of "Slashing resistance" or "Piercing resistence" *does* stack, and also stacks with "5/+1"-style DR. This means that a character with 5/+1 DR and 1/- Slashing resistance will have a total of 6 damage negated from an enemy with a normal slashing weapon. Innate: Barbarian (1 Slashing/Piercing/Bludgeoning/Missile at 11, +additional 1 every 3 levels) Monk (20/+1 at level 20) Bard Song: War Chant of the Sith (2/-)*** Items: bracers: Indomitable Bands (HOF+collector's edition only, 10/+2) bracers: Bands of Focus (normal only, 5/+1) bracers: Bands of the Master (HOF only, 15/+3) cloak: Mystra's Cloak (normal only, 5/+1, Wizard only) cloak: Mystra's Embrace (HOF only, 10/+2, Wizard only) armor: Abishai Hide (normal only, 5/+1) armor: Cornugan Hide (HOF only, 10/+2) armor: Phaen's Tattered Robes (HOF only, 1 Piercing/Bludgeoning) armor: (Imbued) Robe of Absorption (1 Slashing/Piercing/Bludgeoning) shield: Mooncalf's Shield (HOF only, permanent Protection from Arrows, in other words essentially 10/+5 against arrows) Spells: arcane: Stoneskin (10/+5) arcane: Iron Body (infinity/+3)* arcane: Protection from Arrows (up to 10/+5, only ranged)** arcane: Aegis (casts Stoneskin) cleric: Shield of Lathander (3/-, only 2 turns) cleric: Greater Shield of Lathander (30/-, only 3 turns) cleric/ranger: Iron Skins (10 Piercing/Bludgeoning/Slashing, doesn't stack with Stoneskin) divine: Armor of Faith (1/-) * Ostensibly it's suppossed to be 50/+3, but if you look at your character record after you cast this spell, you have an arbitrarily large number/+3 listed as your damage resistance. ** Unlike melee, ranged damage in HOF doesn't really scale upwards, so 10/+5 actually can completely negate ranged damage fairly easily. *** The notation for War Chant of the Sith is a bit misleading. While the game says "2/-", it really gives you 2 Slashing, 2 Bludgeoning, 2 Piercing, and 2 Missile, so it stacks with other such resistance as well as 5/+1 style damage resistance. It's important to note that while you'll frequently meet monsters that can beat +1 DR (as that means they only need a magical weapon to damage you fully), you start getting far less that can beat +2 and +3 DR (and remember that DR of x/- is unbreakable). Looking at the list, it's pretty much the status quo that the best you'll be able to do is 15/+3 for one character and 10/+2 for several others, plus or minus a few extra from a Bard song or from other miscellaneous resistances. It's possible to get a potion gift after Oswald leaves in his airship in Chapter 3 that may permanently increase your resistances (like giving you Slashing 1/-), but the potion you get is random from a list and you only get one per play through, so it's not something to hold out for. By far, however, the best source of DR is Iron Body. As a spell, it lasts a super long time, *actually* grants you complete imperviousness to any attack that doesn't come from a +3 or better source, and doesn't disappear after a set amount of attacks or damage has been absorbed (like Iron Skins or Stoneskin). ------------------------------------------------------------------------------- 4. Saving Throws *BAS:SAV- Monsters get really good at saving throws all of a sudden on HOF. The immediate effect is that your Fireballs and Lightning Bolts start doing way less damage on a consistent basis - this even means that they'll be completely useless against Monk/Rogue type characters that have Evasion/Improved Evasion. The secondary effect is that this essentially means that most spells that don't have an accompanying Spell Focus feat associated with them start sucking. Hard. Without a corresponding Spell Focus, you pretty much need to be casting level 7 and higher spells to have any chance of them sticking, and even then it's a pretty low success rate. A good example of this are the low-level Conjuration snares - Web and Stinking Cloud. On normal, these were a great way to incapacitate a whole swarm of incoming enemies while you gleefuly fireball them to oblivion. On HOF, even in really early parts of the game, you'll find yourself casting 4-6 layers of these spells and still see enemies waltz through easily without getting snared once. By contrast, Entangle, the level 1 spell druidic snare, stays relatively effective the entire game, simply because you can do Greater Spell Focus: Transmutation and effectively make it a level 5 spell compared to a level 3 spell like Stinking Cloud. That 2 spell level difference may not seem like much, but in some cases, it could mean the difference between an enemy failing *only* on a natural 1 (5% chance) or failing on rolls of 3 or lower (15% chance, or three times as often). If you do the math, 2 Entangles in this situation mean that the enemy has a 1 in 4 chance per round of being snared by atleast 1 of the 2 instances of the spell. To achieve the same effect with Stinking Clouds, you'd need 6 copies of Stinking Cloud going at once. The difference grows even starker with Entangle versus Web in a hypothetical situation where the enemy can roll a 4 or less with Entangle and still fail. With just *one* Entangle, you have a 20% chance of ensaring the enemy; with Web, you need 5 copies of the spell going at once just to match those odds. Even with the help of Spell Focus feats, enemies still have insanely high saving throws. This is where a suite of helper spells kick in. Malison gives a flat out -2 penalty to enemy saves and is the bread-and-butter of any HOF spellcasting strategy (short of degeneratively casting nothing but summons). The cleric spells Recitation, Prayer, and Chant give a -2, -1, and -1 (respectively) penalty to enemy saves. The advantage of Malison, Recitation, Prayer, and Chant is that these spells don't let the enemy save against their effects (though you may see them resisted via Spell Resistance on a rare occasion). There's also Emotion: Despair (-2 to saves), but that allows a saving throw and also may affect allies, so this is something to cast *after* the other spells. On the plus side, enemy spell DC's don't seem that much affected by the difficulty upgrade, especially compared to how much better your gear gets, so you'll be able to find yourself shrugging off way more spells/damage than before. On a side note, items that have effects that allow saving throws generally get dramatically worse in HOF. This also includes alot of spells that create item-like effects (like Lich Touch or Destruction). That's because, for the most part, monsters need only a 14 to save against these effects, which generally means that, except against the most vulnerable monsters (like trying a Fortitude save against skeletons), items only have a 5% chance of actually triggering their effects (when the enemies roll a natural 1). Moreover, Spell Focus feats don't help (so Lich Touch and Mordenkainen's Magic Missiles remain unaffected by Greater Spell Focus: Necromancy and Evocation, respectively). However, there are a few very rare exceptions to this general rule, which you can check out in section GEA:WEA:HIG-. ------------------------------------------------------------------------------- 5. Luck *BAS:LUC- Luck is a mysterious thing. Most of the time, you won't know about it nor even really care about its effects. It's also fairly rare. There are exactly four sources for luck in IWD2: the Luck spell (which the Luck potion also uses), the Bard Song Tymora's Melody (+1 to party), Young Ned's Knucky (+2, HOF only), and Tymora's Loop (+3, random drop). What Luck actually does is a bit of a mystery. There's quite a bit of misinformation out there, and I've even been mistaken in earlier versions of this guide. At the very least, Luck __actually__ alters dice rolls instead of simply giving them a bonus after the fact - so a Luck of +1 means that a 19 becomes a 20, a 1 becomes a 2, etc. What __kinds__ of dice rolls it affects is a bit harder to ascertain, but the ones I've managed to test and confirm follows. Luck does (Confirmed): Increase base weapon damage Increase To-Hit and Critical Threat rolls Increase healing effects recieved by the character Reduce spell damage recieved by the character Luck maybe (Difficult to confirm, hinted at by description): Increases skill checks Increases Spell Resistance rolls Luck definitely doesn't (Confirmed): Increase spell damage done by the character Increase "extra" weapon damage effects (like the +1d6 fire damage on "Flaming" or "Flaming Burst" weapons) Increase saving throw rolls A character's total Luck isn't displayed anywhere, so you just have to calculate it based on what items/spells/songs are going on. Suffice it to say that the earlier mentioned 4 sources are the only places you can get Luck. Basically, with a bit of Luck, physical damage characters will start having insane damage output. Imagine this - a guy with a keen axe, with Improved Critical, with Young Ned's Knucky, Tymora's Melody, and a Luck spell. This means that this guy effectively critically hits on a "roll" of atleast 15! (Though, because the dice are actually being modified by the roll, it'll look like your character is just rolling lots of 20s instead of actually critically hitting on a 15.) Factor in Executioner Eyes, and this guy is pretty much critically hitting on every other strike. Not to mention that when equipped with something like a Great Sword, the guy effectively does maximum damage with each hit (as each d6 in the 2d6 base damage gets shifted up by 6). The sheer damage output becomes *insane* at that level. Just be warned: Tymora's Loop, in particular, is a fairly rare random drop (like most completely random drops). I've played through IWD2 many times, and the number of times I've found it I can count on one hand. It is, however, probably the best single item in the game. If you're lucky enough to get two (one in normal, one in HOF), praise your lucky stars. Just imagine - two Tymora's Loop, Young Ned's Knucky, Tymora's Melody, and Luck is a total of +10 luck. How insane would that be??? ------------------------------------------------------------------------------- 6. Damage vs Crowd Control *BAS:DAM- Insane damage possibilities aside, one thing you immediately notice about HOF is that the monsters have more health. *Alot* more health. Suddenly, measly orcs are surviving through castings of Meteor Swarm. In short, when it comes to spells, once you hit HOF, pure damage spells start becoming much, much less effective and crowd control spells become much, much more effective. While you may need to empty out several spell levels worth of damage to clear out a modest pack of monsters, a single good cast of Symbol: Hopelessness, Mass Dominate, or Wail of the Banshee will more than do the job for you. Crowd control also means you greatly increase your party's survivability. Especially given the AC pointers in section BAS:AC-, most of your party is going to be really susceptible to enemies, so even Mirror Images will disappear quite rapidly under a barrage of never-miss arrows and swarming melee attackers - this is particularly devastating if those hits also, say, drain levels. However, if all the enemies are confused or fleeing in horror, for example, then maybe only one or two enemies will pose a threat at any given time, so not only will you be able to better protect your fragile characters, you'll also be able to better focus monster hate on the one or two characters designed to take it. NOTE: The only downside to holding/stunning an enemy is that, while they're helpless, you can not critically hit them, so you may need to adjust your targetting strategies to maximize your damage output. ------------------------------------------------------------------------------- 7. Swords vs Magic *BAS:SWO- As a corrollary to the above, as magic-based damage gets worse, weapon-based damage gets much better. Your base attack bonus (see BAS:YOU-) becomes sufficient for hitting monsters. You start maxing out the number of attacks you can make in a round. Finally, you start getting way better gear, higher stats, and are better able to push Power Attack to higher levels without affecting your accuracy. As such, while a spellcaster may be limited in how much burst damage they can output before they become an underpowered fighter, a single melee character with, say, dual Holy Avengers or a Massive Greataxe of Flame +5 can easily output upwards of 200 damage per round without having to worry about running out of steam. As a case study - one of my HOF parties contained a brute damage melee character equipped with Young Ned's Knucky, dual Cera Sumats, Power Attack +5, Weapon Specialization: Long Sword, and 26 Strength (thanks to the +6 STR belt). By herself, she contributed roughly 70% of all kills and all experience earned by the party - this even though I had other spellcasters who could cast Wail of the Banshee! Basically, once she started attacking an enemy, that enemy would be dead in a few rounds - it was not uncommon for me to see her critical several times in a row for upwards of 60 total damage per hit. So while I could get other spellcasters to burst out area of effect spells that hit for roughly 100 damage per monster (if I was lucky), this one melee character provided the sustained reckless damage that keeps the party moving from one fight to the next without needing to rest. =============================================================================== =============================================================================== Building your Party *BUI- ------------------------------------------------------------------------------- Time now to take the basic Heart of Fury mode concepts and put them to action! ------------------------------------------------------------------------------- 1. Decoy *BUI:DEC- One of the most important character concepts that pretty much any HOF party will need is a Decoy. That is, a character that can take all sorts of brutish punishment while other characters focus on slaying the enemy. There are several ways you can set up a Decoy: AC, Illusion magic, or Otiluke's Resilient Sphere. AC: Refer back to section BAS:AC- and BAS:AC:DRU-. This is probably the stablest way of setting up a Decoy - by having the character be naturally extremely hard to hit. With this kind of set up, you won't really even have to worry about fighting a tough monsters like the Guardian, as a character with a sufficient AC will be incredibly hard to touch. Illusion magic: Blink, Blur, and Improved Invisibility all give a character a flat out chance to avoid being hit, though Blind-Fight helps against Improved Invisibility. Mirror Image and Minor Mirror Image give the character a flat out way to avoid getting hit. With this route, however, you need to heavily prioritize Non-detection, whether the cloak or some other item/spell, as otherwise a single dinky Goblin Shaman can ruin your entire suite of protections with a single See Invibility. Otiluke's Resilient Sphere: I would consider this a bit "degenerative", "abusive", and "lame". You can cast ORS on your own party members (though you probably want to do this on characters with really low Reflex saves), and monsters attacking an ORS-protected party member won't notice that none of their attacks are doing anything, so they'll keep on uselessly attacking. NOTE - the official patch ostensibly fixes AI scripts to recognize when ORS is being used. In all but the ORS case, you also want a really high Spell Resistance. This enables you to fling tons of area of effect spells or recieve high-impact spells (namely Skull Traps with their uncapped damage) without any worry. Be warned abou trying to stack up needlessly high levels of resistance, though, the game caps your Spell Resistance at 50, so there's no point in being a Drow dual-wielding Light of Cera Sumat and Cera Sumat and having a Holy Aura buff. Base Spell Resistance: Deep Gnome (11 + Character Level) Drow (11 + Character Level) Level 13 Monk (10 + Character Level) divine spell: Spell Resistance (12 + Character Level) Stackable Spell Resistance Bonuses: Potion of Arcane Absorption (permanent +2)* Potion of Magic Resistance (permanent +1)* arcane/helm spell: Aegis (+3) divine spell: Holy Aura (+25) divine spell: Greater Shield of Lathander (+40)** longsword: Light of Cera Sumat (+30) longsword: Cera Sumat (+15) robe: Robe of the Evil Archmagi (+1)* * The game is a bit confused about the notations here, as the item descriptions are as "Magic Resistance 2/-", and other effects listed like that (like the ring Cold Steel Reflection) only provide __Magic Damage__ Resistance. These are probably bugs in the implementation of these items, but fortunately they're bugs in your favor. ** It's a great bonus, sure, but it only lasts 3 rounds. You also need to worry about Will/Fortitude saves. That's because no matter how insulated your protections, all you need is for your decoy to get hit by a single Charm Person or Finger of Death for your entire party to start falling apart. Sure, you could probably lose an ancilliary character and resurrect them mid-fight, but once your decoy is gone, you probably need to hit the quick-load. And unlike many other spells, a lot of enchantment (charm) and similar necromancy spells _ignore spell resistance_. What this means in practice is considering the various will-save and fort-save feats, as well as getting the tattoo from the Red Wizards in the Severed Hand in both normal and heart of fury mode. (Special thanks to sir rechet/jukka for point this fact about those spells out!) It is possible to go through the game without an actual Decoy (if a bit significantly more challenging), since all your summons get major buffs in Heart of Fury mode. Under this approach, though, you'll need to stock up *heavily* on the big summons like Shades, Animate Dead, Gate, and Shadow Conjuration, as the last thing you want to happen is a single enemy to cast Banishment to completely wipe out your army. (The Yuan-Ti spellcasters in Chult all have atleast one copy of Dismissal, for example). Moreover, for later battles, powerful enemies like Slayer Knights and Apocalyptic Boneguards will be able to mow through your summons with relative ease, so you definitely want a ready set of spells to resupply your army. ------------------------------------------------------------------------------- 2. Buffers *BUI:BUF- Buff and debuff spells become an important staple for a HOF party. Here's a quick selection of buff spells that you could apply to your entire party (a listed spell may only affect one target at a time, but its listing means that at the very least, you can target multiple party members over several casting). Innate: paladin: Aura of Courage (level 2)* bard: All Bard Songs Arcane: abjuration: Mind Blank conjuration: Mage Armor divination: Executioner Eyes enchantment: Emotion: Hope illusion: (Improved/Mass) Invisibility illusion: Invisibility Sphere transmutation: (Mass) Haste transmutation: Bull's Strength transmutation: Cat's Grace transmutation: Eagle's Splendor Divine (for clerics, unless otherwise listed): Bull's Strength Champion's Strength Exaltation Holy Aura Magic Circle Against Evil Negative Energy Protection Prayer Recitation Remove Fear Spell Resistance Strength of One druid: Aura of Vitality druid: Barkskin lathander: Aura of Vitality mask: Executioner's Eyes oghma: Eagle's Splendor oghma: Executioner's Eyes paladin: Spell Resistance * There appears to be a bug where the Aura doesn't actually do anything for your party members. :( Of these, probably the most important are Barkskin (for the +5 generic AC to put on your Decoy), the bard song War Chant of the Sith (for the +2 generic AC and the small boosts for healing), and Recitation/Prayer (not only for the massive +bonuses to your rolls, but the unpreventable penalties to any enemies in sight). Haste actually gets much worse in HOF, as characters that already have 5 attacks (or 4 attacks with 1 off-hand attack) won't get an extra attack from the spell. ------------------------------------------------------------------------------- 3. Crowd Control *BUI:CRO- You almost assuredly want atleast one character devoted to crowd control, and the more the merrier, as that means more redundancy and more effects going off at the same time. While a good chunk of enemies might resist that first Emotion: Fear, very few will probably resist two simultaneous ones. I'll cover this in more detail in the class/spell breakdowns (find shortcuts: CLA- and SPE:CRO-), but Bards, Druids, Clerics, and Wizards/Sorcerors are very well put to use trying to exercise crowd control instead of brute damage. ------------------------------------------------------------------------------- 4. Other Roles: Damage/Healing *BUI:OTH- Surprisingly, these roles are much less important than you may think, given really good representation in the other roles. With a good decoy and crowd control, you'll never need more than a couple of Heal spells, and maybe a Circle of Healing/Mass Heal or two. Similarly, with really good crowd control, it pretty much doesn't matter how much damage you can output, you've already won the fight. If all your enemies are wandering aimlessly confused or they're all frozen from Symbol: Hopelessness, then it doesn't really matter that you've got two characters with 8 strength trying to hack them down - they're going to go down no matter what. Of course, it's important to strike a balance. If your damage output is way too low, then you run the risk of running into a situation where you may be running low on spells and the monsters have just gotten pretty lucky saving against them, whereas if your damage output were a bit higher, they would've all been dead by now. Similarly, if your healing capabilities are too low, then you may be stuck in an ugly situation where a monster just got a lucky hit on your decoy while he was trying to cast Mirror Image. Suddenly, your decoy's spell is disrupted, he just lost 60-70 health, the game has just auto-paused because the decoy is about to die, and you're out of Heals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4a. Maximizing Physical Damage *BUI:OTH:MAX- Let's do a small exercise in maximizing physical damage. rechet did so in his Powergaming Party Guide (available at gamefaqs), but his analysis is actually flawed. He presents the following mix as the best build for outputting damage: (Half-Orc) 1 Paladin/4 Fighter/7 Cleric/12 Sorcerer/6 Rogue this is wrong. I will demonstrate that, contrary to what might be expected, the best build for outputting damage is in fact: (Half-Orc) 25 Cleric/4 Fighter/1 Paladin We start with a Half-Orc anyway beacuse it's the only race that lets you start with 20 Strength. To demonstrate our given class mix, we start with a full 30 cleric and see if we can do better. Why start with a cleric? Namely the spells Draw Upon Holy Might, Prayer, and Holy Power. Combined, they give an outstanding +20 damage per strike (+10 strength x1.5 for two-handed, +1 prayer, +4 holy power). Just from this base scenario (before I elaborate further), your cleric is up against the final bosses. He or she buffs him or her self with Draw Upon Holy Might, Prayer, and Holy Power and is equipped with Massive Greataxe of Flame +5 in addition to having Power Attack enabled. This is how the damage would work out: 22 (base average damage from weapon) +15 (from Strength of 30, x1.5 for two-handed bonus) + 4 (from Holy Power) + 1 (from Prayer) + 5 (from Power Attack) --- 47 per strike, attacks are +41/36/31/26/21 (30 +5 from weapon, +10 from Strength, +1 from Prayer, -5 from Power Attack) Now, couldn't we do better by adding in some other classes? Short answer is no. First, what about adding enough Fighter levels to get Weapon Specialization for the +2 damage? Well, the big difference between a Cleric at level 30 and a Cleric at level 26 is Draw Upon Holy Might gives +10 and +8 to Strength, respectively. That 2 point difference in Strength is a +1 difference in damage and to hit. Unfortunately, our strength is at a threshold where losing 2 strength translates into a loss of -1 hit and -2 damage (due to the 1.5x multiplier from a two-handed weapon). Second, what about adding a Paladin level for the +2 Strength per play through? We could, but it would be a wash anyway. We'd get +2 strength, but lose -1 strength from Draw Upon Holy Might, for a net +1 Strength, which buys us literally nothing. Third, what about combining the two? A 25 cleric/4 fighter/1 paladin? We have a net gain! We lose -2 Strength from losing 5 levels of cleric, but then gain +2 Strength from the paladin, but then get an additional +2 damage from the fighter! So how does this compare to the original formulation? Those Sorcerer levels were there for Tenser's Transformation. Now, the main benefit is the +2d4 Strength, which stacks on top of other Strength bonuses. However, on average this is only +5 Strength, and the cost for that is to lose 12 cleric levels, which in and of itself is a loss of +4 Strength from Draw Upon Holy Might. Moreover, because of rounding that average extra +1 Strength from Tenser's Transformation wouldn't actually result in extra damage. All this uncertainty and you give up the ability to recast Draw Upon Holy Might and Holy Power as needed (in case they get dispelled or run out of time, since DUHM only has a 10 round duration), since Tenser's blocks further spellcasting. You wouldn't even be able to cast Tenser's again in case you got a bad roll (and getting better than a 5 from a 2d4 is actually fairly uncommon, only on 6 out of 16 possible different rolls of the dice). What about Rogue levels? Well, they were there for "sneak attack" damage, which is absolutely silly. You sacrifice a lot of attack bonus to do that, and if you really want that extra sneak attack, you can just use one of many cleric offensive powers, which is way more than the piddling 3d6 you'd get from 6 levels of Rogue. This is mostly an illustrative example. With a bunch of support (i.e. other characters casting Symbol: Hopelessness, Holy Word, etc), you can use this character to great success, but on his or her own, he'll probably get destroyed very rapidly. ------------------------------------------------------------------------------- 5. Alignments: Good vs Not Good? *BUI:ALI- Alignment is probably one of the most important party-wide choices you can make. It's generally not a good idea to mix and match, as otherwise you start picking up more of the costs of being one or the other with less of the benefits. Good: Access to (Light of) Cera Sumat swords. Ability to cast Holy Word recklessly.* Evil: Ability to use Neutral/Evil only items, such as Bile of the Damned Massive Halberd of Hate +4 Robe of the Evil Archmagi Unholy Halberd of Chaos Xvimian Fang of Despair Immunity to Blasphemy/Unholy Blight. * See section SPE:CRO- for a full treatment. The main argument against mixing and matching is that the alignment specific spells (Holy Word/Holy Smite, Blasphemy/Unholy Blight) start becoming more useless for you to use and your party becomes more vulnerable to them. ------------------------------------------------------------------------------- 6. Good and Bad Feats *BUI:GOODANDBADF- I'll only be touching on feats that I think deserve special notice. Aegis of Rime/Aqua Mortis/Scion of Storms/Spirit of Flame It's not immediately obvious, but when the game says "all", it really does mean "all", which includes any elemental damage off of magical weapons, for example. Because of this, even if you may never be casting elemental damage spells, it may still be worth getting the appropriate feats, as it may just be equivalent to getting Weapon Specialization plus some resist. An example of this is getting Aegis of Rime while using Halberd of the North. The +20% Cold Damage on the Halberd is almost as good as Weapon Specialization and also gives you some cold resist. Armored Arcana There are some really good shields and armor in HOF mode, and you might want to seriously consider picking up some feat points here so you can use them without spell failure. Mooncalf's Shield is a good example, as it has 15% spell failure (exactly the amount that 3 levels of Armored Arcana can cancel out) but has a permanent Protection from Arrows, which essentially means near immunity to ranged attacks. Similarly, Milton Sixtoes' Armor of Absolute Self has a 15% spell failure but bestows permanent Mind Blank. Or even Cornugan Hide, which has a 20% spell failure rate (so you'll still have a low 5%, though luck items will help cancel that out), which bestows regenerative abilities and 10/+2 DR. Just be sure to pick up the proper armor/shield proficiency if necessary. Cleave It's much worse on HOF than on normal. I'm not entirely sure how this interacts with the flat 5 attack maximum imposed by the Infinity Engine, but at the very least, this means that if you kill an enemy with the first attack, that same high attack bonus will get applied to your next attack against the next enemy. However, it's never ever worth getting a second level of Cleave. Combat Casting Monsters start hitting harder and harder and harder, so this +4 bonus to your concentration checks becomes more and more useful, because the worst that can happen is getting a crucial spell like Heal or Mirror Image interrupted. Dash I think this feat is underrated. It helps you navigate faster, but more importantly, it means characters can outrun enemies or reach other party members (like to cast a touch spell like Heal) much faster, which is a hard-to-quantify benefit. Discipline A bonus to will saves *and* a bonus to concentration? Great! Dirty Fighting It's a "nice to have", but because enemies can save the effects so well in HOF mode, don't expect it to be a game-changer. Extra Smiting Since monsters have way more health in HOF, the extra damage you get out of smite evil gets far, far worse. Extra Turning Undead start having enormous amounts of hit die in HOF, so being able to turn undead gets worse and worse and worse. Stay away. Improved Critical Every character should get this when they can, no questions about it. Improved Initiative It's bugged, so it actually doesn't do anything. Stay away! Lingering Song This feat is what makes the Bard one of the best classes in the game. It's also potentially abusive - there's a bug that can let you arbitrarily stack any number of Bard songs using this. Simply turn on a Bard song, then click some other action (like a spell or a weapon) to disable it. Lingering Song will kick in. Then, click on the Bard song again, and repeat. A second Lingering Song will kick in. You can do this an arbitrary number of times (it helps if the game is paused) and, say, stack Tymora's Melody 20 times or use War Chant of the Sith to give all your characters 100+ AC (though these effects only last for 2 turns unless you keep on doing the Lingering Song trick over and over). If you enjoy a challenge, I would recommend against abusing this. Mercantile Background Stay away! While on normal you may have had trouble keeping enough money to keep your characters fully stocked, in HOF, you'll be *swimming* in riches. After doing the yuan-ti temple, for example, you'll be coming back with lots of +5 and +4 weapons that you'll be able to sell for a total of upwards of 1.7 million gold. Power Attack If a character is going to be melee-ing, this is undoubtedly one of the best feats to pick up, as it's a flat +5 damage in the end game. Rapid Shot See section BAS:YOU- if you need convincing about this. Spell Focus Pretty much every spell caster should be getting the Spell Focuses best suited for them. Enchantment/Transmutation for crowd controllers, Necromancy/Evocation for damage dealers. Spell Penetration This may actually be pretty bad depending on your play style. If you enjoy casting area of effect spells at enemies while your decoy keeps them busy, then you don't want this, as this just means you increase the chance of accidentally clobbering your own party member. If, on the other hand, you're more discreet about spell casting or primarily use spells that don't affect party members (like Chaos), then Spell Penetration is a must have to help affect tough, high SR enemies--a quirk about IWD2 is that enemies either have little to no SR or very high SR. Subvocal Casting Pretty much every caster should use this, though this feat won't help prevent bard songs from being silenced. Also, if you plan on having a caster who can use axes, then you won't need this feat for them, as a very good axe on normal and a super good axe on HOF bestow permanent immunity to silence. Weapon Specialization The only real reason why you want 4 levels of fighter. +2 damage may not sound like much, but that may mean a significant % increase in net damage per round, especially for ranged weapons that don't allow for Strength or Power Attack bonuses to damage. ------------------------------------------------------------------------------- 7. Good and Bad Skills *BUI:GOODANDBADS- I'll only be touching on skills that I think deserve special notice. Animal Empathy I think this is fairly underrated, as its essentially a re-usable, hard to resist charm animal. Unfortunately, animals start getting rarer as the game progresses, but you can also use this to charm enemy-summoned animals. Alchemy You really don't need this to be more than 10 or so with a modest Intelligence. Knowledge (Arcana) See Alchemy (though you need more like 15 base skill points here). Also, keeping stock of Identify spells helps here alot. Open Locks Most of the locks in the game can be broken open with a strength of 18 and a few tries, and you can use a Knock spell for the others, so this isn't a terribly vital skill. Plus, a medium-level Druid makes both Knock and Open Locks obsolete, as a Dire Bear can break open every lock in the game (though you may have to try a few times for some of them). Pick Pocket Note - previously I mentioned here that you needed a really high Pick Pocket score to get access to Young Ned's Knucky. Truth be told, that was an assumption I made, as previously I had always just gibbed him instantly using the enablecheatkeys CTRL-Y trick. (Sorry!) Upon actually testing this out, however, it seems like even with a really high legitimate Pick Pocket, while you can (with a 5% chance) steal some gold off him, the Knucky itself is virtually impossible to get through this means (I made about 50 attempts with a 48 Pick Pocket score, which is 33 plus 7 from Dexterity plus 8 from Master Thievery). In the Gearing Up section (find shortcut: GEA-), I mention specifically how to obtain the Knucky, and it certainly does not involve picking Jemeliah's pockets. In fact, it seems that in HOF mode, pick pocketing becomes __extremely__ difficult (having puny chances of success with even closed to maxed out stats), and as far as I can tell, aside from Jemeliah, no one else gets their loot upgraded. Spellcraft I think this is underrated, but you shouldn't really need more than 20 (including your Intelligence modifier) here. I personally think that getting information on what spells your enemies are casting is really helpful, especially when you see something like Gate being cast, so you can run in and let off a Holy Word before they finish. =============================================================================== =============================================================================== Key Racial Breakdown *KEY- ------------------------------------------------------------------------------- There are a few races that bear special mention for character creation. ------------------------------------------------------------------------------- 1. Human/Aasimar *KEY:HUM- In my opinion, the human and aasimar human subtype are the best overall races for your characters to use. Human One extra feat and two skill points at 1st level, an extra skill point per level, and any class is favorable for multi-classing. That extra feat helps quite a bit with spellcasters who are particularly feat-hungry (for all the elemental damage feats and spell focus feats, for example), and those extra skill points essentially means that even with a 3 intelligence, you have enough skill points to almost max out, say, both Concentration and Spellcraft. The biggest pay off, however, is the multiclassing bonus. Multiclassing in IWD2 (and especially HOF) is really powerful, and letting your favored class be whatever is your highest leveled class is *extremely* helpful and powerful. For example, if you want a multiclass Druid/Cleric, you may want to get the Druid levels first so you can get a maxed out Barkskin as quickly as possible. But no race offers Druid as a favored class! Enter the human, which means you can get to level 12 Druid and start working on the Cleric levels without worrying about an experience penalty. Aasimar The Aasimar is the ultimate sorcerer class. Thanks to the Aasimar, whatever sorcerers you have will have harder to resist spells (and more of them) as you'll be able to get a nice 20 charisma for an extra +1 to your spell DC's and an extra spell here and there. Plus, it's always nice to have a free fire spell on hand for dousing fallen Trolls. Extra note - I left this out before, but the tiefling (while not as good as a straight-out human or aasimar) gets a mention because they get a +2 to intelligence, so it's something to consider if you just have a straight out Wizard who doesn't need to multiclass. ------------------------------------------------------------------------------- 2. Drow *KEY:DRO- Drow are immensely powerful. Sure, the effective character level penalty is pretty steep, but with proper level squatting you'll be maxing out your experience at level 30 pretty early through HOF anyway. So why are Drow so good? Awesomeness Bonus to intelligence. One of only two races that allows for a 20 Intelligence, which not only means lots of skills, but really powerful Wizard spells. Bonus to charisma. Similar to the aasimar. Bonus to will saves. Makes the Drow really hard to affect with some of the tougher, more annoying effects (like Hold Person). Free proficiency with Long Swords. Depending on the character you're creating, this is almost like getting a free feat. Innate SR. See section BUI:DEC- on why this is so great. (Minor) Downsides Light blindness isn't *that* bad, since most of the game takes place indoors or underground, where it has no effect. Penalty to constitution is bad, but with all your other free stats you have a net plus of four stat points (though you won't be able to create a Drow with 18 Constitution). However, Drow are extremely limited in multiclassing options, so you'll need careful planning (and good attention to gender!) to avoid multiclassing penalties. ------------------------------------------------------------------------------- 3. Deep Gnome *KEY:DEE- A race tailor-made for being a decoy. They get an amazing +4 generic AC bonus, the ability to cast Mirror Image, Invisibility, and Blur for free once/day, +2 bonus to all saves, innate non-detection, and innate spell resistance. Wow! Of course, the major downside is that they have a net penalty to stat points (+2 to DEX/WIS, -2 STR, -4 CHA for a net of -2), but fortunately the bonuses they do get matter the most for a high AC (DEX and WIS). The second major downside is that they have the steepest effective character level penalty in the game, so you'll need really good planning with your level squatting and level ups so that you don't end up really dying for that last level on HOF mode. Finally, Deep Gnomes are limited to favoring Illusionists as their favored class, which fortunately isn't *that* bad as Illusionists make for a good decoy mage-type, but still requires good planning to avoid steep multiclassing penalties. =============================================================================== =============================================================================== Class Breakdown *CLA- ------------------------------------------------------------------------------- This is where I take apart each class and discuss their potential for HOF mode. I rate their relative merits on a 4 point scale, as described below. 4/4 - Must Have You pretty much need a __very__ good reason to not have as many levels of this class as possible. 3/4 - Pretty Good A definitely positive addition to your party, but this class is not absolutely essential for Heart of Fury success. 2/4 - Mediocre or Specialized Use The class has a lot of weaknesses, but there may still be some refined use case where you would want a few levels of it in your party. 1/4 - Just Plain Bad or Incredibly Specific Specialized Use There's almost no reason why you want to touch this class. Do this maybe if you're going for a novelty approach, or just can't part with this class concept. There still may be a remote use for this class, so you may still be able to squeeze it in if you want to. 0/4 - Stay Away! Absolutely terrible. There's no saving this class. ------------------------------------------------------------------------------- 1. Barbarian *CLA:BARB- Overall rating: 0/4 Unfortunately, the benefits a barbarian has (a higher hit die, damage resistanced starting at level 11) get canceled out really quickly by the enormous increase in damage that monsters do in HOF mode. Moreover, the short duration of Rage becomes a major pain in the butt, as you may end up going through an entire day's worth of Rage for just one fight. Moreover, a barbarian is really bad for multiclassing, as the real pay offs of the class (namely damage resistance and fatigue-less raging) only occur at the really high levels. ------------------------------------------------------------------------------- 2. Bard *CLA:BARD- Overall rating: 3/4 The Bard has a bunch of things going for it. First, the bard songs are simply amazing. I've already mentioned the insane benefits of both Tymora's Melody and War Chant of the Sith. But Tale of Curran Strongheart is also handy, as it bestows immunity to fear and can even be an instant-cast Remove Fear if you take advantage of Lingering Song (simply start singing it and then immediately do something else to trigger the effect). Unfortunately, Siren's Yearning has a low Will save DC of 14. Furthermore, the regeneration effect of War Chant of the Sith and the enthralling effect of Siren's Yearning don't trigger while they're lingering, so to get the full effect of War Chant of the Sith (and any effect from Siren's Yearning at all), you have to just keep your bard doing nothing but singing. Still, Lingering Song is abusively powerful, and you can arbitrarily stack songs. Turn on Tale of Curran Strongheart, for example, then immediately switch over to Tymora's Meldy, and you get instant immunity to fear for 2 rounds as well as a huge, party-wide Luck/saving throw bonus. Second, the bard actually makes for a formidable fighter. While the Base Attack Bonus progression of a bard is slower than a fighter, in HOF mode, the difference between a maxed out fighter (+30 BAB) and a maxed out bard (+22 BAB) is that the fighter will hit all the time and the bard will hit all the time but maybe every once and a while miss the last attack (both get 5 attacks). Plus, unlike a pure fighter, a Bard has immediate access to spells like Mirror Image and Improved Invisibility, thus vastly increasing his survivability over a normal fighter. Third, the bard has respectible spell casting abilities. In addition to the aforementioned important illusions, the bard also gets helpful spells like Dominate Person, Mass Haste, Shades, Great Shout, and even Mass Dominate, Power Word: Blind, and Wail of the Banshee! Unfortunately, because many of these spells are actually lower spell-leveled compared to a Wizard, enemies will have an easier time resisting some of them, but this is partially offset by the fact that charisma-boosting is easier than intelligence-boosting (at the very least you can just cast Eagle's Splendor on yourself). Fourth, the bard is great in multiples. Bard songs stack, so you could have one bard singing Tymora's Melody and another one War Chant of the Sith. Or both singing Tymora's Melody, or both singing Siren's Yearning. (It almost seems like having 6 of any given song going at once, while possible, seems abusive.) Fifth, there are alot of bard-specific items. These mainly come in the form of instruments which you can use like wands, except that they never run out (although they are limited to being used once/day). Two really good instruments, for example, are the Raging Winds horn (which instantly summons 3-6 powerful barbarian warriors) and Sephica's Prayer (which can cast Heal or Resurrection once/day, though you need atleast 13 Wisdom to use it). There's also 2 special instruments that you equip as a shield, the Lyre of Progression (normal only, +3 STR) and the Lyre of Inner Focus (HOF only, +3 STR, +2 CON). Finally, the bard multiclasses *extremely* well. You can just get 5 levels of bard, enough to pick up Tymora's Melody and some castings of Mirror Image. You can get 11 levels of bard, enough to pick up War Chant of the Sith and castings of Mirror Image/Improved Invisibility. Or you can even go all the way and get 30 levels of Bard to pick up castings of Mass Dominate/Wail of the Banshee/Power Word: Blind. The only downside is that no class other than humans and half-elves can treat the bard as a favored class, so you'll need some careful planning with multiclassing. ------------------------------------------------------------------------------- 3. Cleric *CLA:CLE- Overall rating: 3/4 for one cleric, each additional cleric is a 2/4. Some domains get higher or lower ratings (see next subsection; find shortcut: CLA:CLE:DOM-). Clerics are very versatile. However, the major problem with clerics is that multiples tend to get redundant very quickly (which is why I have decreasing scores for each successive cleric you add to your party). Moreover, to really get the most out of a cleric, you really need to focus on a cleric's competitive advantages, or else the cleric will be outclassed by more focused classes. Their competitive advantages are as follows: Highest spellcasting stat in the game Non-banite clerics are tied with druids in being able to get their wisdom up really high, thanks to Every God Ring (best stat boosting item in the game), the numerous Wisdom-boosting potions you'll find, and the fact that multi-classing to a Paladin will get you a free +1 Strength/Wisdom per play through. Banite clerics, however, get even more Wisdom goodness, as they get a futher +2 Wisdom per play through. Best casters of Symbol: Hopelessness This is one of the most powerful spells in the game, and clerics can do it better than any other class. A Drow Banite with an Every God Ring, two Potions of Holy Transference, and two Potions of Clear Purpose gets a whopping 42 Wisdom in the end game, which is a full +6 extra DC on top of the best a Sorcerer would be able to manage. Not to mention that the Banite gets an additional +1 DC from their domain perk since Symbol: Hopelessness is a Will-save-based spell. This means that a maximized Drow Banite can cast Symbol: Hopelessness with a mind-numbing spell DC of 35, compared to the (by comparison) paltry 28 a maximized Sorcerer can do. Best healers Clerics are the best healers in the game. Druids have Heal and Mass Heal at one higher spell level than Clerics, which means that they get fewer castings at lower levels. Among the best physical damage You may dispute this point by pointing to the Fighter, but any physical character can be made better with cleric levels, thanks to spells like Draw Upon Holy Might, Champion's Strength, and Holy Power. Holy Power, of note, is a full +4 damage per attack for the duration. This also has the side effect of making them excellent ranged weapon wielders, as weapons like Bows don't get affected by Power Attack, yet Holy Power will give them a damage bonus to go along with their higher attacks per round and attack rating. Many useful spells that don't require saving throws Some of this is in the form of spells like Recitation, which have global effects, some of this is because they spend a lot of time casting spells on themselves and their party members, or because they are casting summons. Point is, you don't need a lot of Spell Focus feats to realize their full potential. As long as you keep in mind all these aspects and spend time accentuating them instead of working against them, you will find that clerics add a lot to your party. Just don't get a Lathander cleric and hope you can accomplish much with the Meteor Swarm domain spell. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3a. Domains *CLA:CLE:DOM- Choosing a domain is a very important decisiont to make for a cleric, and as such, a discussion of the good and bad aspects of each deity's domains is warranted. Some domains are of particular note and thus will get a rating bonus or penalty. Painbearer of Ilmater Painbearers can be Lawful Good (thus allowing for a Paladin multi-class). They have many useful domain spells (Magic Circle Against Evil, Emotion: Hope, Holy Power, Stoneskin, Holy Word, Symbol of Pain). Furthermore, their domain ability "Ilmater's Endurance" (which increases Constitution by 6 for 1 round/level/day) is quite useful if you plan on getting a high-level cleric (to lengthen the duration of the effect), as 6 constitution at level 30 is equivalent to 90 extra health. As a side note, Painbearers do well for multi-classing, as both Monks and Paladins have special orders that allow them to mix-and-match with Painbearers. Morninglord of Lathander Morninglords can be Lawful Good (thus allowing for a Paladin multi-class). Their domain spells are a bit mixed in usefulness, though getting an extra Heal and Mass Heal is nothing to sniff at. Unfortunately, their domain abilities are useless, but this is largely made up for by the ability to be Lawful Good. Silverstar of Selune: -1 Rating (2/4, 1/4 for each other cleric) Pretty terrible. The domain abilities are irrelevant, and, aside from Elemental Legion, so are the domain spells. You can't even be Lawful Good, thus also negating one of the main reasons to have a Good-aligned cleric. Watcher of Helm You can still be Lawful Good, but that's pretty much all this domain has going for it. You __do__ get an extra casting of Greater Command out of the domain spells. You also get a casting of Iron Body and Aegis, which are more useful for a cleric than a mage, so if you're into using those spells, that's a plus. Lorekeeper of Oghma While you can be Good or Evil with this domain, you can't be Lawful Good, and you don't get the Banite Quest. Still, you get free castings of Identify (thus releaving you of the need to get Knowledge: Arcana and Alchemy), and the domain spells have a few really useful ones (Malison, Greater Command, Symbol of Hopelessness though at one less spell level, Executioner's Eyes, and Wail of the Banshee) amidst some otherwise useless ones. Getting Wail of the Banshee or Executioner's Eyes may even better than getting the +2 Strength/Wisdom from the Paladin quest (though the Banite bonus is still better). Dreadmaster of Bane: +1 Rating (4/4, 3/4 for each other cleric) If you're playing evil, you __absolutely__ must have at least one of these guys in your party. They get the benefit of a Banite-specific quest (that I've mentioned over and over) that nets you a +2 permanent Wisdom per play-through and have arguably the best domain ability in the game (all their spells requiring Will saves are at +1 DC). Not only that, Banites have really useful domain spells in the form of Emotion: Despair, Greater Command, Gate (available very early on since it's at spell level 7), Power Word: Blind, and Mass Dominate. Banites are well suited to being Decoys (due to their insanely high Wisdom scores) and debuffers (due to their insanely high DC's). Battleguard of Tempus: -1 Rating (2/4, 1/4 for each other cleric) Pretty bad. Relatively useless domain abilities (you can easily get Martial Weapon: Axe proficiency just by multi-classing), and all of the high-level domain spells are useless on Heart of Fury thanks to really high enemy health. The only consolation is that you can get a special book in Targos that lets you cast Champion's Strength for free once/day that only Battleguards can use. Demarch of Mask Relatively weak/useless domain abilities, a lack of any Paladin multi-classing possibilities, and no Banite quest bonus for Wisdom. Still, these shortcomings are compensated for by a whole slew of useful domain spells (Minor Mirror Image, Blur, Mirror Image, Emotion: Despair, Improved Invisibility, Shades, Mass Invisibility, Blasphemy, and Executioner's Eyes). The only domain spell level that misses out on something decent is the 5th, and even then you still have Shadow Conjuration you could use there. Stormlord of Talos No Paladin multi-classing and no Banite quest. You do get two handy domain abilities (5/- electrical resistance and "Destructive Blow", which gives +2 hit/damage for 1 round/level/day). Plus, while not much of the domain spells are useful, you still end up with some heavy hitters, namely Tremor (though only at spell level seven) and Wail of the Banshee. ------------------------------------------------------------------------------- 4. Druid *CLA:DRU- Overall Rating: 3/4 I think the druid gets an unfairly bad rap. On normal, I'd put the druid as the best class in the game (a party of six Druids casting Static Charge - which ignores Spell Resistance - would be brutal), but on HOF, alot of the really awesome druidic abilities starts getting worse and worse. First of all, the really focused, high damage Call Lightning and Static Charge spells start getting pretty bad since enemies start saving against them pretty regularly. Second of all, shape shifting becomes worse because your natural attacks start getting way, way, better. Third of all, the fact that Heal and Mass Heal are one spell level higher than on a cleric simply makes the druid worse at healing. However, the druid has one remarkable plus: Barkskin! This is a really helpful decoy spell in getting to those high AC numbers. It lasts a good amount of time and is relatively easy to recast in battle in case a Dispel Magic got rid of it. In addition to that, the druid has a decent array of crowd control spells. Entangle, Spike Growth, Spike Stones, and Tremor are all very good at crowd control (Entangle especially since it still slows down enemies that fail their saves), plus, they're all from Transmutation. That means with 2 feats into Spell Focus: Transmutation, you've vastly boosted the druid's disabling powers. Plus, since the druid's Tremor is a level 9 spell instead of a cleric's level 8, the druid can kick complete butt at knocking down a huge screen full of enemies (and Tremor affects undead). When all is said though, that may not be enough to make the druid worth taking all the way. Fortunately, though, the druid is perfect for multiclassing. Simply switch over after 12 levels (for a maxed out Barkskin) or even at some point later, and you'll retain most of the benefits of the druid while complementing it with the strengths of another class (or two). Note: It's good to point out that once you have a Druid who can shapeshift into a Dire Bear (which is possible by the level 12 breakpoint), you no longer need anyone with Open Locks, as with a massive 30 Strength, the shapeshifted Druid can break open every lock in the game (though will occasionally need several tries to accomplish it). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4a. Forms *CLA:DRU:FOR- As druids get higher level, they unlock progressively more shapeshifting forms. While not strictly useful for heart of fury mode, when trying to _get_ to heart of fury mode, they can be pretty handy to use. Note that shape-shifting _sets_ innate strength, dexterity, constitution, and DR to new values, but then items that boost them take effect. Armor-enhancing effects don't work, so you're stuck with the AC of the new form. If you also have less attacks than the new form, you gain that number of attacks. If you were ensnared (Entangle, Web) then shape-shifting gives you a few seconds where you are freed from the effects. You also heal hitpoints equal to your level. You don't even need to shift back into a human and then back into the animal to gain the last two benefits, you can just shift again into the same form even. Note that if you multi-class with a Monk, you get to carry over the Monk wisdom AC bonus. While you lose the effects of buffs cast _before_ a shapeshift, you keep them if cast _after_ a shapeshift. This has interesting ramifications. Arctic Boar [level 5] 15 Str, 10 Dex, 17 Con; 16 AC 2 attacks, 3d4 dmg, +2 to hit Has a chance of goring the enemy ('Gored!') upon a strike; if they fail a (low) Fortitude save, they take 1 bleeding damage every round. Also has 2 missile/bludgeoning/piercing/slashing DR and 5 cold resist. Boring Beetle [level 5, feat] 15 Str, 10 Dex, 10 Con; 17 AC 1 attack, 5d4 dmg, +2 to hit Moves slowly. Black Panther [level 5, feat] 18 Str, 10 Dex, 10 Con; 14 AC 3 attacks, 2d6 dmg, +4 to hit, 19-20/x2 crit Has a chance (10%) of doing +2d8 slashing damage. Winter Wolf [level 7] 13 Str, 18 Dex, 16 Con; 19 AC (15 base, +4 dex) 1 attack, 1d6 dmg, +1 to hit Has 15 cold resist, but -15 fire resist. Decently fast movement rate. Shambling Mount [level 8, feat] 18 Str, 10 Dex, 10 Con; 20 AC 2 attacks, 2d8 dmg, +4 to hit Has a chance of doing extra bludgeoning damage (apparently three attempts at +1d4 with 50% chance each). Also has a chance of entangling the enemy for 1 round. Has 20 electric resist, 6 fire resist. Moves quickly. Polar Bear [level 9] 27 Str, 13 Dex, 19 Con; 16 AC (15 base, +1 dex) 3 attacks, 1d10 dmg, +8 to hit, 19-20/x2 crit Dire Bear [level 12] 30 Str, 13 Dex, 19 Con; 18 AC (17 base, +1 dex) 3 attacks, 2d6 dmg, +10 to hit, 19-20/x2 crit Dire Panther [level 14] 24 Str, 21 Dex, 17 Con; 20 AC (15 base, +5 dex) 3 attacks, 2d6 dmg, +10 to hit, 19-20/x2 crit Has a chance (10%) of doing +2d8 slashing damage. Fire Elemental [level 16] 20 Str, 10 Dex, 10 Con; 18 AC 1 attack, 1d8 dmg, +5 to hit, 19-20/x2 crit. Has a chance of doing extra fire damage. Has 20 fire resist, but -15 cold resist. Earth Elemental [level 18] 20 Str, 10 Dex, 10 Con; 18 AC 1 attack, 4d8 dmg, +5 to hit, 19-20/x2 crit. Water Elemental [level 20] 20 Str, 10 Dex, 10 Con; 18 AC 1 attack, 4d8 dmg, +5 to hit, 19-20/x2 crit. Air Elemental [level 22] 18 Str, 29 Dex, 18 Con; 31 AC (22 base, +9 dex) 3 attacks, 3d8+6 dmg, +4 to hit, 19-20/x2 crit. Moves quickly. Also that AC is the basis for a druid-based AC build. ------------------------------------------------------------------------------- 5. Fighter *CLA:FIG- Overall Rating: 1/4 The fighter is pretty much good for multiclassing, and that's it. Its main strength is unlocking Weapon Specialization at level 4. You also get an insane number of extra bonus feats (one at level 1, one at level 2, then another at every other level after), but just the extras you get from getting up to level 4 is way more than you'll probably ever need. ------------------------------------------------------------------------------- 6. Monk *CLA:MON- Overall Rating: 2/4 The monk is tailor made for being a decoy. High AC, potentially innate SR, and even a potential for DR. Plus, there are several monk-specific items that are really good, like the Binding Sash of the Black Raven, which gives +2 to attack rolls and immunity to all sorts of mind-affecting spells. Interestingly though, the monk is probably not best played like a monk. In other words, using just your fists is probably not a good idea. The reason why your attack bonus starts getting really high in HOF mode is because you're equipping weapons that gives you up to +5 (and in some cases even more) to your attack rolls. Plus, unlike your fists, weapons will start doing all sorts of extra things, even if it's as minor as doing fire damage, or doing something as crazy as trying to cast Flesh to Stone on the target. Even if your fists can do 1d20 + 1d6 damage, on average that's only 14 damage, at the cost of having a much harder time for hitting the enemy. By contrast, a Power Attack Longsword +5 will have just as hard of a time hitting the enemy, but do 14.5 damage, but may also have extra effects that your fists can't do (like a Paladin/Monk will be able to get +45 SR off of dual-wielded Holy Avengers). Of course, the stunning effect is decent and potentially really hard to resist and only affects fists (though the stun only lasts 1 round), so with a certain set up, you may want to be using only your fists. Monks have a devil of a time multiclassing. You have to choose an order to even be allowed to gain levels in a monk again after choosing a different class, but with proper planning you may not need to worry about it. It's worth just taking 1 level as a monk just for the WIS bonus, but there are also good breakpoints at levels 5, 10, 15, and 20 (for varying AC bonuses and DR at level 20) or at 13 (for after getting SR). ------------------------------------------------------------------------------- 7. Paladin *CLA:PAL- Overall Rating: 2/4 The paladin has three exceedingly awesome points about it. One, you can use the amazing Holy Avenger(s). Two, you can do the Paladin Quest (which is part of the Holy Avenger thing). Three, at level 2, you gain a permanent immunity to fear and grant that immunity to other people within 10 feet.* The problem, then, is that two out of three of those points can be accomplished with just 1 level of Paladin, and the remaining point with just 2 levels. The paladin's spellcasting, while possessing a few key spells like Draw Upon Holy Might, Prayer, Recitation, and Holy Power, is otherwise mediocre, lacking the stuff that makes the Cleric pretty good and versatile. On top of that, the Paladin's ability to turn undead is meaningless as turning is fairly useless already. That being said, the Paladin does have a couple of other things going for it that might make it reasonably useful in a certain setup. First, Paladins get a couple of neat feats - the most important being Fiendslayer, which requires 8 Paladin levels and Weapon Focus in Large Swords. It gives you +2 hit and damage against chimeras, demons, dragons, and half-dragons. These happen to be the toughest enemy types in the game, and half-dragons, demons, and chimeras in particular dominate the last chapter of the game, so you effectively get a better version of Weapon Specialization against a lot of your enemies (Isair and Madae are demon/half-dragons, by the way). Second, with a large Charisma (Sorceror/Bard multi-class?) and a decent number of levels, Lay on Hands effectively becomes another Heal. With a 30 Charisma, just 10 Paladin levels would be enough to effectively function as a Heal for your more fragile characters (it would heal 100 health). * The Aura effect appears to be bugged in that it doesn't actually do anything for your party members. You'll see a symbol on your other party members in range, but they'll still get feared. WARNING! This isn't listed prominently anywhere in the game and really only known through Core 3e/3.e D&D rules, but Paladins cast spells at half their normal level (so a level 10 Paladin casts as if they were a level 5 spell caster). Do note that Holy Power/Prayer aren't sensitive to spell casting level (for other than duration), so Paladins do make good use of those spells regardless (though require a decently high level just to cast them several times per day). ------------------------------------------------------------------------------- 6. Ranger *CLA:RAN- Overall Rating: 1/4 Ah, alas. Unfortunately, the best part of being a ranger (free Ambidexterity and free Improved Two-Weapon Fighting when fighting without armor or with light armor) can be had by just having 1 level of the ranger or by, you know, just getting the feats manually. The divine spells suck (nothing like Holy Power which the Paladin gets). The favored enemy, while potentially really decent (who wouldn't like +7 to hit/damage against a hard group of monsters?), requires you to __heavily__ invest in a ranger to be remotely effective. It may be worth doing a 20 Ranger/10 arcane caster multiclass, as that way you can get some defensive illusion spells at your disposal. As for favored enemy, don't do what alot of online forums and guides tell you to do and pick Goblin as your first favored enemy. You can buy a Goblin Slayer knife in Targos in HOF which instantly gibs any and all goblins, thus rendering that favored enemy pick useless. Favored Enemy Priority: Trolls Undead Yuan-Ti Ogres Orcs Lizard Men Giants Trolls are consistently found throughout the game, save for chapter four and six, and they're annoyingly resistant to crowd control, especially stunning (in that they are both immune to Holy Word and while stunned they don't fall over for you to hit them with fire/acid, so they have an arbitrary amount of health while held). Undead are fairly prominent throughout all the chapters and they have an annoying tendency to have all sorts of crazy damage resistance. Moreover, on HOF, they are really hard to instantly slay with the various disruption weapons, and some of them are really good at saving against Control Undead, not to mention how bad turning undead becomes. Therefore, getting a good favored enemy bonus against them is probably your best bet, if only just to get as much of an advantage as you can against the endgame Apocalyptic Boneguards. Yuan-Ti, while pretty absent for most of the early game, dominate chapter five ridiculously and are a really annoying bunch of monsters, filled with annoying spellcasters and SR-backed warriors. After that, I just listed enemies in my percieved order of decreasing prevalence. WARNING! This isn't listed prominently anywhere in the game and really only known through Core 3e/3.e D&D rules, but Rangers cast spells at half their normal level (so a level 10 Ranger casts as if they were a level 5 spell caster). ------------------------------------------------------------------------------- 7. Rogue *CLA:ROG- Overall Rating: 0/4, 1/4 if you're really good at micromanagement or you're creating an AC-based Decoy. Unfortunately, unlike other Infinity Engine games, you really don't need a rogue anymore. A smart wizard can pick up the necessary Search or Disable Device skills. Furthermore, sneak attack is much worse on HOF. The problem is that there are two main approaches to using sneak attack. You can use it as the first strike in a 1 on 1, or you can run around trying to sneak attack as many enemies as possible. The problem is that you're basically spending up to 30 levels just to get 15d6 (or an average of 52.5 damage) damage for free the first time you attack an enemy. That damage might be significant on normal, but it's basically two or so free attacks with high level gear, and a much smaller fraction of the enemy's health. Thanks to multiclassing, you can do way better than that. There is, however, one very specific use for a Rogue - Crow's Nest, which is an item that gives +3 Generic AC and is thus essential for an AC-based Decoy, can only be equipped by a character with atleast one level of Rogue. ------------------------------------------------------------------------------- 8. Sorcerer *CLA:SOR- Overall Rating: 4/4 Ah, outstanding! Arcane spells are ridiculously diverse, powerful, and frankly you could have a party of nothing but sorcerors and clean up through HOF. Everything you really need is in arcane magic. Mirror Image, Improved Invisibility, Chaos, Mass Dominate, Wail of the Banshee, all sorts of summons, crap loads of damage spells, crap loads of crowd control. You can even have all sorts of crazy multiclassing possibilities, you just need to get as many levels as the highest spell you want to cast, or as many levels as you want to deal damage, or as many levels as you want your spells to last. For that matter, I've read some stuff that states that you should generally never go past level 20 unless you want the extra damage from Skull Trap, Delayed Blast Fireball, or Horrid Wilting. The reasoning behind this logic is that the rate at which you gain new spells dramatically slows down past level 20, so it's only worth if if you're absolutely trying to squeeze the final amount of damage out of Delayed Blast Fireball. However, that's a very short-sighted opinion - it overlooks spells that have durational components. There are lots of really important spells that get stronger just from having increased duration. The best example of this is Mass Dominate. At level 20, you get 20 rounds of domination, or 2 minutes worth. Going all the way to level 30 increases it to 30 rounds of domination, a full 50% increase in time. Think this doesn't matter? From personal, anecdotal evidence, the difference is that with only 20 rounds, you might not be able to have Mass Dominate last long enough to finish a fight, but 30 rounds is enough to finish a fight and then start picking off the remaining dominated monsters one at a time. ------------------------------------------------------------------------------- 9. Wizard *CLA:WIZ- Overall Rating: 3/4 Like a sorcerer, except with some really nice plusses, but also a few big minuses. Plusses: Extra feats. This can be really helpful since spellcasters have all sorts of crazy feat needs. A wizard also uses Intelligence for casting spells, which means that a wizard is well-suited for getting lots of skill points and spending them on all sorts of miscellaneous skills, like Search, Diplomacy, or Knowledge (Arcana). There are also two Wizard-specific items (Mystra's Cloak and Mystra's Embrace) that are pretty snazzy (see section 2c). Minuses: Wizards will always be slightly worse than Sorcerors for casting spells. They have less overall spells per day (although they have the flexibility to choose which spells they are, so you can pick up spells without worrying about them being too situational or becoming obsolete). Moreover, bonuses to Charisma are easier to find than bonuses to Intelligence. Plus, you're highly dependent on finding scrolls for your spells. This means that while having 1 wizard is really good, once you start having more, you start splitting a very finite supply of scrolls. In fact, there are some level 8 spells that you won't normally find on scrolls (like the ubiquitously mentioned Symbol: Hopelessness). You can try to get them as random drops through Battle Square in the Ice Temple (the higher Battle Square levels can drop higher level scrolls as a reward for finishing a session), but this is a time consuming and inconsistent way to deal with a class weakness. As such, as reflected in my rating, a wizard is just as good, if not better, than a sorceror at first, but with each extra wizard you add to your party, you decrease the quality of your wizards. The bonuses for having extra skills becomes redundant, and you start splitting the scrolls you find throughout the game. =============================================================================== =============================================================================== Spells of Note *SPE- ------------------------------------------------------------------------------- 1. Buffs/Support *SPE:BUF- Blur (illusion) A flat 20% chance to avoid attacks. Not completely spectacular on its own, but combined with, say, Mirror Image, it can greatly extend a character's life. Draw Upon Holy Might (evocation) A great self-buff for a cleric/paladin to use as it gives you a good boost to health and damage. While you may not necessary want a level 30 cleric or paladin, having one would allow this spell to grant an outstanding +10 strength, dexterity, and constitution (which among other things would translate into 150 extra health). Unfortunately, the duration is really short, but fortunately there's not much else you may want to take at this spell level anyway. Be warned that ability bonuses don't stack. Remember that paladins cast this at half-strength, e.g. a level 30 paladin casts this as a level 15 cleric. Eagle's Splendor (transmutation) A good early buff spell before you start getting good charisma-boosting equipment. At the very worst, it basically gives Bards/Sorcerors +1 to the DC, with a potential max of +3 to DC, in addition to any extra spell castings. Emotion: Hope (enchantment) One of the best buff spells you can get (giving a whopping +2 damage bonus in addition to other rolls), the only downside being that it also affects enemies if they're in the area of effect, so either cast this before combat or with very careful aiming. Exaltation (abjuration) One of a cleric's essential support spells because it's one of very few (I think in fact only) ways of getting rid of the effects of Hopelessness, which enemies start being able to use against you pretty effectively in the endgame. Holy Aura (abjuration) Not as good as it would be in normal as the bonus to AC is pretty useless, but the SR resist is very good (especially if you are capable of getting your party's SR high enough to start casting spells like Horrid Wilting at point-blank range). Holy Power (evocation) Grants a set +4 damage bonus to the caster (both clerics and paladins can cast this). A great way to boost damage, as with 5 attacks, that's an extra +20 damage per round. This is even better if the character is using a ranged weapon, as this effectively grants the bonuses of having Power Attack, which normally doesn't affect ranged weapons. Improved Invisibility (illusion) One of *the* best buffs you can cast. It lasts a long time, gives you bonus to attack (since the enemy doesn't get their dex bonus to AC), gives you 50% chance to evade attacks through concealment, and can get enemies to stop attacking you if you cast this while visible and targetted. The downside is that you need a cloak or something that grants Non-detection as a simple See Invisibility will dispel this. Another downside is that until the character buffed with this does something to be "visible", you can't cast anything on him or her. Note that for this purpose, you pretty much have to be doing something around an enemy, as just casting, say, Mirror Image while your party is safe isn't enough to qualify as becoming "visible". Invisibility (illusion) Less of a buff like improved invisibility and more of an escape spell. A lot of enemies pounding down on you and you're out of Mirror Images? Cast this and they'll find another target. Invisibility Sphere (illusion) Like invisibility, but good if you're caught off guard and need some regroup time for your entire party. Just be warned that the area of effected is *small*, so your party has to be pretty close. Iron Body (transmutation) Gives you arbitrary immunity to physical damage against weapons less than +3 (which is pretty much everything until the very end), a suite of other protections, a boost to strength, and so-so physical attacks. Unfortunately, it shuts down your ability to cast further arcane magic, but you won't need to cast them, as whoever is using this becomes a veritable tank. Of course, don't cast this around things like the Slayer Knights of Xvim, or else you'll just have a gimped mage/cleric who walks really slowly and takes lots of damage from +5 weapons. Magic Circle Against Evil (abjuration) Lasts a super duper long time and, more importantly, lets you use spells like Gate and grants you protection from enemies using spells like Gate. Just be warned that you need this defense up *before* the various summon spells are cast, or else it won't do any good. Mass Invisibility (illusion) Like invisibility sphere, but much more forgiving about the area of effect. Good if you just let off a Mass Dominate or some summon spells, as your minions will keep on attacking and immediately go visible (so they'll become targeted) while the rest of your party remains safely hidden and protected. Mind Blank (abjuration) It's so-so protections, but it lasts an entire day, so if you've got nothing else to memorize at this spell slot, use it. Mordenkainen's Sword (evocation) Turns your spellcasters into powerhouse attackers. Moreover, the damage they do can effectively bypass all sorts of damage resistance, so your spellcasters will probably even be able to outdamage your main damage engines in some cases. Prayer (conjuration) I've already mentioned this hundreds of times before, but I'll say it again: +1 to attack rolls, damage rolls, and saving throws for your party, and an unsavable -1 to enemy attack rolls, damage rolls, and saving throws. Just be warned that unlike in Baldur's Gate, the enemy actually needs to be in range to be affected by the negative effects of prayer. Recitation (abjuration) Very similar to a version of Prayer on crack. Unlike Prayer, it won't affect damage rolls or enemy damage rolls, but it will alter attack rolls and saving throws by +2 for you and -2 for enemies. Resurrection (conjuration) I suppose you can go through the game just reloading whenever a party member dies, but that ends up making the game *much* more tedious, especially in HoF, where all you need is 1 round of bad luck to knock out a character. Plus, unlike in every other Infinite Engine game, not only does the battle stay paused while in your inventory screen (unlike Baldur's Gate), but you can also equip armor, so resurrecting a character mid-fight doesn't mean that they'll just have to stay naked the entire time without risking disaster. Remove Fear (abjuration) Fear is probably one of the most common, persistent, and annoying effects in the game. The last thing your party needs is for a stray party member to get feared into an unexplored area where they end up waking up a horde of powerful monsters. Fortunately, not only does this cast blazingly fast and immediately remove fear, it also bestows temporary immunity to fear. Every person capable of casting this should have 1 or 2 copies memorized, as you want lots of redundancy in being able to cast this. ------------------------------------------------------------------------------- 2. Crowd Control *SPE:CRO- Banishment (abjuration) This is effectively a Wail of the Banshee directed at summons. Enemy summons, through some quirk of Heart of Fury, tend to be very vulnerable to spells, so in some cases, this is much more cost effective at clearing the screen of enemies. Chaos (enchantment) Wow! The massive -4 saving throw penalty is part of the spell and makes it essentially equivalent to a level 9 spell. Confusing your enemies is really good, as it makes them just wander around, attack randomly, or just stand in place. It can make the most outmatched battle become trivial to deal with. Confuse (enchantment) Like a low-powered version of Chaos. Not shabby if you want to use this on side skirmishes and save Chaos for the big guns. Control Undead (necromancy) Sort of like a small scale Mass Dominate geared strictly for undead creatures. In one critical way, this is actually better than Mass Dominate, because the control is triggered __constantly__. So even if you accidentally hit the controlled undead with a Delayed Blast Fireball, they'll still remain under your control, whereas creatures hit by mass dominate would go hostile just from a web spell. In this case, you'll see undead momentarily flicker red to hostile, but they'll switch back to green almost immediately. This is true even if you're busy attacking a controlled undead. Disintegrate (transmutation) An instant death spell that has the amazing benefit of killing undead or creatures who may be normally immune to death effects. In the former case, not only is handy to have an insta-kill against undead (who normally have low fortitude saves anyway), but is *really* useful when you start running into super tough undead like Apocalpytic Boneguards (though they have good saves). In the latter, giving yourself an option against creatures like the Guardian or Slayer Knights of Xvim is always nice. The only slight problem is that it takes some time for the projectile to hit the target, and even after the target is hit, it takes a bit of time for it to fade to nothingness. It's important to note for all you Baldur's Gate veterans that, unlike in those games, Disintegrate-ing an enemy in Icewind Dale II does __not__ destroy their items. Any enemy destroyed in this way will simply leave their treasures on the ground, so feel free to be reckless in your Disintegration. Dismissal (abjuration) One of the quirks about summons in Heart of Fury is that those created by enemies retain their normal-difficulty health die information. Given that, Dismissal is 95% of the time an instant-kill spell against enemy summons. Only the most powerful summons cast by the most powerful enemies will be able to shrug it off. Dominate Person (enchantment) A nice, localized version of Mass Dominate for when you really want to pick off a really annoying giant or some such. Fortunately, it also has a penalty to save (-2), so you'll have reasonable success with it. Moreover, this spell has one amazing distinction over Mass Dominate - it can dominate monsters that Mass Dominate would miss. Slayer Knights of Xvim are the best example of this, as they are normally completely ignored by Mass Dominate's effects, but are still vulnerable to being individually dominated via Dominate Person. Though, you'll still need Malison, Prayer/Recitation to give yourself a shot at breaking past their high Will save. Contrary to the spell name/description, you can use this to dominate non-humans (like animals) as well. Emotion: Despair (enchantment) A super short duration counterbalanced by its amazing penalties it bestows on its targets, as very few spells penalize both saving throws and attacks (usually one or the other). The area of effect is a bit limited, though. Emotion: Fear (enchantment) Fear is a pretty good effect to happen - unlike confusion, monsters still don't have a chance of continuing to attack you. It's a shame then, that fear is pretty much a clerical effect or is limited to Horror, which gives enemies an annoying +3 bonus to saves. Enter this spell. Not only is it not lame unlike Horror, it also can be spell focused for extra effectiveness, though the area of effect and duration are pretty limited. Entangle (transmutation) As I've mentioned in section BUI:SAV-, this is like a Web or Stinking Cloud which you can make better with spell focus. Plus, even if enemies make their save, they are still slowed by the spell. The only downside is that you can't cast this indoors or underground. Finger of Death (necromancy) Instant death spell that has the benefit, unlike Disintegrate, of having no projectile and acting instantly, so no slow fade-to-nothing effects (though remember that unlike Disintegrate, Finger of Death does nothing to undead and other special creatures). Great Shout (evocation) Its area of effect is pretty useful in cramped fighting quarters, and with proper Spell Focus, this spell effectively provides an extra way of stunning a huge swatch of creatures for a few rounds. Plus, it casts really quickly, so it can be useful for getting a character out of a bind. Greater Command (enchantment) It casts super quickly, has a wide area, can be spell focused, and instantly incapacitates enemies en masse. Sure, they'll wake up if you hit them, but this means you can focus on one enemy at a time. Hold Monster (enchantment) It's not a spectacular spell, but against low will save creatures, this stands a good chance of completely stunning them in their tracks. Because you can spell focus this, it's effectively a level 9 spell, which puts it as one better than Symbol of Hopelessness, though it doesn't affect more than one creature. Holy Word/Blasphemy (conjuration) A high-level cleric spell that has a near-instantaneous casting time and instantly stuns all non-good characters within range for 1 round, without save or SR checks. It's very hard to explain to you just how good of an effect that is unless you see it yourself. You can suddenly and immediately counter any spells being cast, you can stop the enemy long enough to cast buffs and crowd control spells without fear, and if there's only a few enemies around, then suddenly you can focus fire all your attacks on one enemy at a time (as attacks against a stunned creature always hit). Holy Word also gets much better the more clerics you have that can shout it out. With just two clerics, you can chain together a series of Holy Words so that while one casts it, the other casts a buff spell of some kind. Then the other casts it, while the first casts a different buff spell, etc. All the while, your other party members are busy laying waste to the perpetually stunned enemies. Needless to say, this also makes for effective anti-mage strategies. You can stun down an enemy mage before he or she has the chance to start casting big spells and quickly run in with a few melee attackers and dispatch the mage before he or she can recover. Holy Word is also great as an escape spell, and not just for the person casting it. Stunned enemies acquire new targets when they snap out of it, so simply casting this (at near-instant speed, need I remind you) and then moving all endangered, non-Decoy characters away will save you lots of reload/Resurrection headaches. Blasphemy is a much worse version of Holy Word (since it affects Good instead of Evil enemies), but since it also targets neutral enemies, you'll still be able to get some mileage out of it. Note that there are a few enemies that appear to be susceptible to Holy Word/Blasphemy, but actually aren't. These mainly tend to be Trolls, as they'll show as being affected by it, but they'll still move around and attack. Mass Dominate (enchantment) A ridiculously powerful spell to no end. When you use this, one of two things tends to happen: you gain control of nearly all of the creatures on screen, or you gain control of a chunk of the creatures on screen. If you convert a portion of the total visible monsters, you can use the controlled monsters as cannon fodder and extra damage. If you manage to convert all the monsters in sight, then you can just have them focus fire on each other one at a time. The only slight caveat is that you have to be careful about casting spells on your new minions. Anything that remotely negatively affects them will cause them to go hostile (even something as innocuous as a misplaced web spell or Emotion: Despair). Anything overly beneficial may come back to haunt you when the spell wears off (such as hitting all your minions with Improved Haste or Mass Heal). A good tactic is to cast Malison on enemies as they approach you while at the same time casting Mass Dominate, while every other party member does nothing (except maybe cast prayer/recitation). Malison will finish first before Mass Dominate. This way, since none of your party members are doing anything else, you'll not only convert a huge swath (if not all) of your enemies, there's also no chance that you'll accidentally automatically break domination with a stray arrow or some such. This is a good spell to complement Wail of the Banshee. Creatures with really good fortitude saves very rarely have very good will saves. Moreover, there may be many cases in which Wail of the Banshee will have no effect, whereas Mass Dominate will. Power Word: Blind (conjuration) Probably the only Power Word spell worth using in HOF mode, simply because most of the other ones have no effect if the enemy's health is too high. This one, however, not only still has an effect, but a rather useful effect too. Instantly blinding a swath of creatures means that they miss 50% of the time (although the Blind-Fight feat will diminish this). This also has the nice bonus of making spellcasters and ranged attackers stand around doing nothing, simply because they won't be able to see anything. Slow (transmutation) Even though it's just a level 3 spell, with spell focus, you can still get it to hit creatures with some consistency. The slow walk effect makes it easier for your party members to run out of harm's way. They also have a -2 to hit, making them have a harder time hitting your decoy. Best of all, though, is the fact that monsters lose their last attack while slowed. This can be as much of a 50% reduction in net damage output (for a monster with 2 attacks) and still a 20% damage reduction in the worst case (for a monster with 5 attacks). Symbol of Hopelessness (universal) Outstanding! Hopelessness is great because it's basically like being held except things like Freedom of Movement or Remove Paralysis can't deal with it. You can cause an entire screen full of enemies to stand still in their tracks, giving you lots of time to just relentlessly beat upon them. Note that every once and a while, instead of keeping an enemy in place, this spell will instead fear the enemy. I'm not quite sure what the odds of it holding versus fearing are, though. Symbol of Pain (universal) A pretty good debuff spell. It lasts a really long time and gives an outstanding -4 penalty to attack rolls, among other things, which is very helpful for decoy characters. The only problem is that being as its universal, you can't take spell focus feats to help make this harder to save. Tremor (transmutation) Awesome! Not only is it a level 8 (or 9 for druids) spell, but it can also be spell focused. It also only affects enemies and does a moderate amount of damage in addition to its awesome stunning/knockdown effect. Plus, it's probably one of very few crowd control spells that are effective against undead. Wail of the Banshee (necromancy) A powerhouse of a spell. Any creature without a big fortitude save will collapse instantaneously, dead. In many cases, this is all you need to deal with trivial side skirmishes. It doesn't deal with undead, so you'll need an alternate solution for them. ------------------------------------------------------------------------------- 3. Damage *SPE:DAM- Since most damage spells are pretty uniform, I'm just going to list the important damage spells with some notes. Spells of note: Delayed Blast Fireball Chain Lightning Horrid Wilting Meteor Swarm Skull Trap Acid Storm Cone of Cold ... more inconsequential spells afterwards Delayed Blast Fireball has the best damage potential of any spell, dealing 30d8 damage and having the benefit of being enhanced by the Spirit of Flame feat for +20% damage. Chain Lightning gets a high ranking simply because it's one of very, very few spells safe to use when the enemy has engaged your party in close quarters. Horrid Wilting has the extra benefit of using fortitude as the saving throw instead of reflex, which means you can hit enemies with (Improved) Evasion. There's an extra benefit/caveat in that it doesn't do anything at all to undead, so if you have some undead summoned, you can cast this recklessly without worrying about destroying them. Skull Trap has a really low area of effect, which is both a plus and a minus. A plus because there's less risk of accidentally hurting one of your own party members. A minus because you have to aim with great precision. In addition, skull trap only triggers by proximity, so if you miss just a smidgen, the skull will just float there until something triggers it. However, it deals a nicely hard-to-resist slashing damage. ------------------------------------------------------------------------------- 4. A Word on Summons *SPE:AWO- Amongst all the generic Summon Monster and Summon Nature's Ally and the other similar spells, there are a few that stand out. Animate Dead A cleric gets this as early as their third spell level, and it's a mainstay from that moment on. The Boneguards and Zombie Lords you were summoning in normal get appropriate upgrades in Heart of Fury, complete with damage resistance. Zombie Lords are resistant to fire and bludgeoning damage (though vulnerable to slashing), and Boneguards are resistant to slashing and piercing damage (though vulnerable to fire and bludgeoning). Plus, being undead, they are immune to a bunch of spells that other enemies might use against you (like Blasphemy). The summons stick around for a super long time (the longest of any spell) and, best of all, are immune to Horrid Wilting, so can serve as tanks while you blow away any enemy with that nice damage spell. Gate The Gelugon this calls in will remain useful for most of the game, having many attacks, extra Frost damage, immunity to basic weapons, a constant fear effect, and a super duper long duration. Just make sure that all your party members have Protection from Evil on them, or else the Gelugon will turn hostile on you. Note - for any of you Baldur's Gate veterans, it's important to note that unlike those games, any enemy killed by the demon called in by Gate (and other similar Protection From Evil-based summons, like all the (Lesser) Planar Allies and weaker demon spells) still grant you experience. Plus, they don't have any annoying area of effect spells that'll make you regret calling them in. Shades Brings forth super powerful monsters of all varieties in a slightly weaker, shadowy form. The "weaker" part hardly matters; you're still bringing in creatures like gigantic Frost Giants still capable of soaking up and dealing lots of damage. =============================================================================== =============================================================================== Gearing Up *GEA- ------------------------------------------------------------------------------- 1. Which weapon proficiency? *GEA:WHI- Let's face it, you don't necessarily want to waste a lot of feats picking up extra weapon proficiencies, so what are some good rules of thumb when it comes to picking up weapon proficiency? I personally believe that Martial Weapon: Axe is the best overall one you can pick up (which is extra great if you can get it for free). There are a lot of nasty melee axes, both one handed and two. Plus, the critical threat range is 20/x3, which couples very well with luck bonuses and Improved Critical (far better than for 19-20/x2 critical weapons). In fact, one of the most insane melee weapons is an axe, the Massive Greataxe of Flame +5. (Unfortunately, it's a random drop, so good luck getting one.) In addition to the good melee options, there are a suite of very good throwing axes for ranged characters and spellcasters alike. After that, you'll probably want one melee character get Focus'ed in Bastard Swords or Polearms. Pudu's Fiery Blight and Bastard Sword +3: Cold Fire are easy finds and are both among the best melee weapons you can get (though Pudu's Fiery Blight is at the end of the game). Halberd of the North is also easily available in a store in Kuldahar and is also pretty decent. Plus, choosing Bastard Swords leaves you open for the possibility of using the primo Miasmic and Heroism varieties. After that, Long Swords are probably next. It opens the possibility for using the Holy Avengers, and it just so happens that Long Swords are one of the most common magical weapon types you'll run across, so you'll hardly be short of options for them. Things get iffier next. Maces and Hammers are decent choices after that, but there's nothing spectacular to write home about, save for some jaw-dropping completely random drops in the shape of clubs (see the next session for more details). Plus, most two-handed options (Polearm aside from the ones I already mentioned, Great Sword, Quarterstaffs, and two-handed Hammers) aren't spectacular enough over two-handed axes or simply dual-wielding two one-handed weapons (unless you have a very, very high strength). For non-melee characters, Short Swords (which I believe all characters get anyway) and Bows are the tops, though Axes are still probably generally better, if only because unlike with bows, you can equip a shield with (most) throwing axes. Just keep in mind when creating a character that for throwing weapons, while Strength provides a damage bonus, Dexterity is still the stat to rely on for an attack bonus. In terms of short swords, there are lots of defensive and ranged daggers that you can put to good use (including the best +Intelligence item in the game, though good-aligned characters can't use it). Bows are great if only because you won't have to sink a ridiculous amount of money just to keep your party supplied with ammunition (you'd be surprised how quickly you can go through a quiver of +5 Arrows when you fire 5 per round) thanks to a plethora of Everlast Arrows. However, if you only have one character using a Sling, for example, that's not so bad. It's only when you have two that you start realizing that no amount of stocking up in advance will seem to be able to keep your party members armed with bullets to throw. ------------------------------------------------------------------------------- 2. Weapons of Note *GEA:WEA- Note that weapons I readily discuss elsewhere for specific purposes (like for a Decoy) won't get a re-mention here. WARNING - I'm not completely familiar with how Icewind Dale 2 randomizes the loot, but I've added a [??] next to items that I've never found and I vaguely suspect may be impossible to get, due to oversights in how gear is selected. "Baron" Sulo's Hook (dagger) Both a good decoy support weapon or just a nice weapon for your non-ranged weapon wielding casters to use, since it has a litany of nice defenses (even if non-decoys won't really enjoy the advantage of +3 deflection AC). This is available when you go deeper in to Fell Wood. Bastard Sword +3: Cold Fire (bastard sword) You'll always be able to find this as a set drop, which is good, because this gives you a nice staple for any Bastard Sword wielder to brandish. It's also one of the better one-handed melee weapons, dealing 1d10 + 3 damage plus 1d6 cold and 1d6 fire, for an average of 15.5 damage. The elemental versatility also means that you'll be able to take advantage of weaknesses pretty well. You'll find this early off one of the enemies in the fight against Saablic Tan on __normal__ difficulty. Bastard Sword of Heroism (bastard sword) If you're really lucky to get this random drop, then bastard sword proficiency should become something you should consider. Keen, sure striking, 1d10+3 damage, and an insane extra 3d6 slashing damage per hit. The earliest I've seen this is in one of the containers after the Tyrannar fight at the top of the Cleric Tower in the Severed Hand on __normal__. On Heart of Fury, there's a much greater chance that enemies (starting with the Saablic Tan fight) will just randomly drop this. Big Black Flying-Death (2h throwing axe) The only two handed throwing weapon in the game, and in terms of damage really lives up to its name. If you aren't concerned about wearing a shield, then this will transform anyone into a significant ranged damage force, dealing 1d10+3 damage, plus an additional 1d10 slashing damage, and the extra strength bonus associated with two handed weapons. This HOF weapon is available from Gerbash in Kuldahar. Club of Confusion (club) In addition to having solid base damage (1d6+5 and a 99% chance to deal 2d6 more, for what is essentially 15.5 base damage) and being keen, the best part about this weapon is that the 50% chance of confusing the target __does not allow for a save__. With a high attack rate, you'll be able to make sure that all the enemies you're attacking stay relatively docile (though confused enemies being attacked still have a tendency to fight back). You'll find this in the Mage Tower in the Severed Hand. Club of Dazing +5 (club) [??] Not a terribly exciting weapon, except for the fact that it takes a save higher than 36 to resist the stunning effect and that it's one of few weapons that have a 100% chance to proc this stun effect. (Some other weapons have a less than 100% chance even without mentioning so.) This means that you can easily stunlock an enemy with this weapon, which is just nice. Club of Destiny +5 (club) [??] It's just a lowly club, but it still deals a respectable 1d6+5 damage. More importantly, it permanently enhances the wielder with Luck, as if Luck or a potion of Luck was used on the character. Thus, it won't stack with those other spell or spell-equivalents, but it does mean you won't have to keep buffing someone to take advantage of the myriad plusses a luck bonus gets. Club of Freezing Flames +5 (club) [??] This gets a special mention because despite being a lowly club, it's one of the best melee weapons in the game. It deals 1d6 + 5 base damage, with an additional 2d6 fire and 2d6 frost (both with an extra +10% chance of 1d10 fire or frost), which comes out to a whopping average of 22.5 before the 2d10 total extra elemental burst damage chance. Not even the Bastard Sword of Heroism can top that. In fact, the only reason why the Massive Greataxe of Flame beats out this weapon is because you get extra Strength damage off of wielding the greataxe with two hands. Unfortunately, this is a random drop, but the plus side is that since it's a club, pretty much any melee user can pick it up immediately. Goblin Slayer (dagger) One of many great essential items available in Targos, instantly killing Goblins will let you breeze through the first chapter easily. Plus, it will keep on being useful as you run into various half-goblin warriors at progressively later stages in the game. This is available off the enchantress in Targos. Golden Heart of <Player Name> (long sword) One of the best long swords in the game and it's available the moment you start out in Targos in HOF mode (though for a hefty fee). It's a solid +5 sword, but also gives +2 Strength/Dexterity, +25 health, constant Haste, and constant Freedom of Movement. Constant Haste not only means you move really fast, but means you get the free +4 generic AC bonus without having to worry about buffing yourself (unlike the Boots of Speed which just doubles your movement rate). Freedom of Movement means you don't have to worry about getting held or stunned. Moreover, both these effects are good enough that you might have used up other item slots for them (like a Ring of Freedom of Movement and Boots of Speed), so using this sword effectively gives some spare item slots for even better items. Kegsplitter of Shaengarne Ford (1h axe) You can nab this in Targos after killing the goblins, and it's definitely an investment to make. Alone, it's not too great, but its special feature of "Slays Constructs" means it's a one-hit wonder against Iron Golems. Keep it in reserve for just that case. Halberd of the North It's available early on even in __normal__ difficulty off Conlan in Kuldahar (and you can get a second in HOF mode), but it's still one of the better weapons in the game. It does a solid 17 average damage per strike (5.5 base + 10.5 cold, with a 10% chance for a further 1d10) and is sure striking, though it offers no attack bonus. The combination of sure striking and the massive amounts of cold damage makes this a perfect weapon for dispatching Isair, as the sure striking does a good job of piercing through a lot of his defenses, and Isair has an extra weakness to cold damage. (Light of) Cera Sumat (long sword) Both normal and HOF versions require a battle of epic proportions to obtain and require a Paladin to equip, but it's well worth it. By far the best one handed weapons in the game, they not only output an insane amount of damage (Light of Cera Sumat does a whopping 1d8+10 plus +2d6 against evil creatures, in addition to a +10 attack bonus), but grant huge spell resistance. Unfortunately, unlimited Dispel Magic in IWD2 isn't as great as in Baldur's Gate and Baldur's Gate 2, but the other benefits of the two Holy Avengers are too great to ignore. Refer to a walkthrough if you need help finding these weapons. (Various) Maces of Disruption (mace) You'll find this in several forms, but for most of the game, they provide an excellent answer to undead. Even in HOF mode, alot of undead have terrible fortitude saves, relatively, so a Malison plus Recitation can put them back into range of being instantly slain en masse by the disruption effect. Even against demons and other outsiders with good fortitude saves, having an outright 5% chance to slay the enemy is nothing to be sad about (especially against really tough undead like Apocalpytic Boneguards). The earliest one you can get is on normal difficulty, completing rank 6 of the Battle Square in the Ice Temple. Masher (hammer) Deals a respectable 1d8+5 damage plus an elemental burst of your choice (acid is probably the best overall choice) of 1d6 plus 10% chance of 2d10. The best part is that every setting is effectively a double Keen, in which case the weapon's base critical threat range to 18-20/x3, which is positively ridiculous. Combine with Luck effects and Executioner's Eyes, and you can deal jaw-dropping amounts of damage. This is a random drop. Massive Greataxe of Flame +5 (2h axe) [??] By far probably the most damaging weapon in the game, doing a whopping 2d12+5, +1d6 fire, with a 10% chance for an additional +1d10 fire. Plus, it's a two-handed weapon so you get the extra strength bonus to damage. Unfortunately, as frequently noted, this is a purely random drop, so you can easily go many play throughs without seeing this. Miasmic Bastard Sword (bastard sword) It doesn't look terribly exciting off the top, since it only does a base 1d10 damage and has a bunch of conditionals for its extra effects. However, you'll quickly realize (and I note this below), that enemies need a high saving throw to resist the "Venom" and "Stunning" effects, so with a high base attack bonus and a full five attacks per round, you can disable enemies rapidly and keep them under multiple poison effects at once. Pudu's Fiery Blight (halberd) Wow! It does a solid amount of damage (1d10 + 5 plus 2d6 for an average of 17.5 plus a 10% chance of 2d10 burst), but more importantly, the stun effect is ridiculously hard to resist, so much so that against all but the toughest or luckiest of monsters, you can pretty much keep them stunlocked while you mercilessly eat away their health. In fact, barring two really huge negatives, this would be the hands-down best weapon in the game, as the high damage combined with the nearly-impossible-to-resist stunning would essentially let a melee character with 5 attacks go toe-to-toe with any monster in the game. The first negative is that this is available only at the __very__ end of the game, after you kill Pudu. Depending on how you play out the end quests, this may mean you still have a few more battles (and still two really hard ones coming up), but it definitely minimizes the time you get to use this. The second, more severe, drawback, is that since this is available at the very end of the game, you'll fight quite a bit of monsters that are completely immune to the stun effect, namely Slayer Knights of Xvim, Apocalpytic Boneguards, Isair, and Madae. Still, being able to let one character go one on one with most of the enemies in the top-of-the-war-tower battle is one hell of an endgame perk for someone who's been dedicatedly investing in Halberd proficiency/focus. Scales of Balance (1h axe) A notable axe simply because you can set it to Power mode to deal 1d8+10 damage in addition to having a chance (albeit small) to wound the target and deal 2 damage per round for the next 10 rounds. This is probably one of the most outright devastating one handed weapons you can easily get. This is available from one of the Underdark merchants. Scimitar: Blood Trails (large sword) If you read the next subsection, you'll note that monsters need to roll a 30 for their Fortitude saving throw to resist the effects of this weapon. You'll also be pleased to note that the effect they're trying to resist is a whopping 5 damage/round for 10 round wounding effect (most wounding effects are 1 damage/round or 2 damage/round at the most). With Malison/Recitation/Prayer, five attacks/round, and a high attack bonus, you can quickly rack up all sorts of insane periodic damage on an enemy, as each instance of the 5 damage per round stacks on top of each other! Mages, with their lower fortitude saves, will find themselves quickly bleeding to death. Plus, it sure doesn't hurt that this is also sure striking (ignoring most damage resistances) and also deals a +1d6 slashing damage per strike (though this just makes up for the fact that the base weapon damage is a mere 1d6). This drops off Iyachtu Xvim. Screaming Axe (1h throwing axe) Remarkably good spellcaster support weapon. Not only does it deal an insane amount of damage (1d6+5 and an additional 3d6 slashing), but it grants permanent immunity to silence spells while equipped, thus freeing up a feat slot from having to take subvocal casting. Just keep in mind that you can't get this (or the normal version, which also grants silence immunity) until Kuldahar, so you'll have to put up with getting silenced until then. On an amusing note, every time you throw the HOF version, the axe will actually shout out things like "Incoming!" and "Gotcha!" You can get this off Gerbash in Kuldahar. There's a caveat, though - both this and the normal version of this axe __do not__ get a bonus to damage from strength. At least this means you can put this on a character with 6 Strength with no ill effects. Stormshifter (1h throwing axe) Much better, in my opinion, than the normal equivalent (Cloudkiss), but only good if you're good at micromanaging. Otherwise, you may find that whoever is equipped with this will be hitting your decoy and eating up his or her mirror images. You get this for slaying the mini-boss at the end of level 1 of Dragon's Eye. Throwing Hammer of Thunder +2 (throwing hammer) The special distinction of being only one of two magical throwing hammers that return (and non-returning throwing hammers are insanely expensive, enough so that I think it was a mistake on the part of the developers). It does respectable damage, 1d4+2 plus 1d6 electric, amplified by any Strength bonus, but is also one of very few ways to deal bludgeoning damage from afar. Bludgeoning damage is generally very good as very few monsters have special resistence against it (unlike Slashing or Piercing damage, for example) and many monsters are particularly vulnerable to it (note that Slings don't actually do what would be classified as Bludgeoning damage). You can find this at various points in the game (some as random drops), but you can buy one for sure off the Underdark merchants. Xvimian Fang of Despair (dagger) Good characters can't use it, but it's the best +Intelligence boosting item in the game (+4). Not only that, but having a 20% chance to cast Emotion: Despair on a hit and a 5% chance to cast Flesh to Stone on a hit means that your spell caster can join the fray and pick off disabled enemies. Also note that, unlike what the game says, it's more a Hopelessness (special stun) effect than a Despair (penalty to rolls) effect, which makes it better an effect than you'd think. Too bad it's available so late in the game as a drop off an enemy mage (Saablic Tan, right before arriving at the Severed Hand). Ysha's Sting (throwing dagger) A returning throwing dagger that is already respectable at 1d4+5 damage, but also has the remarkably rare trait of not having a saving throw DC of 14 for its extra effect. In fact, it's fairly difficult to resist its venom effect, so you'll be spreading enormous of poison around with this weapon in tow, just be warned that the poison doesn't stack, it merely refreshes with each fresh injection. This is available in Chult off a table in the southwest section in the temple. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.a High-Saving Throw Weapons *GEA:WEA:HIG- I mentioned before that the vast majority of weapons and weapon-like effects are pretty much useless since they require a measly 14 to save against. I also mentioned that there were a few exceptions. There isn't really a pattern to them, other than the fact that these are all Heart of Fury-mode-only items. However, because I think you, my faithful reader, are special and deserving of my attention, I've gone through a lot of the weapons in the game and tested them out, just to see which are really worth using. There are a few gotchas - First, I didn't go through any ranged weapons. Second, I only tested out weapons that someone might conceivably want to use, so I didn't test out any normal-mode weapons, nor did I test out lame 1d8+2 weapons with a chance of doing something lame for a saving throw. How to read the following table: across from each listed weapon is the DC/Saving Throw for its special effects. This is the number that the enemy must roll with the specified save in order to evade them. Across from that are any special weapon-specific notes that I had to add. The way the weapons are sorted may not make much sense (why is the a "club" grouping separate from the "blunt"?) but that was because I was just going by what was in DSimpson's item listing, so eh, what are you gonna do. Finally, if a weapon in a given category isn't listed, it is safe to assume that the saving throw it needs for its effects is 14 or is so similarly low it is not worth your efforts anyway. [Weapon Name/Category] [DC/Saving Throw] [Notes] AXES - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Coward's Flight n/a *1* BLUNT WEAPONS - - - - - - - - - - - - - - - - - - - - - - - - - - - Mace of Stunning Frost Burst 40 Fort Stunning Star of Speed 40 Fort CLUBS - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Belib's Amazing Everlasting Torch n/a *2* Club of Dazing +5 37 Fort Club of Confusion n/a *3* DAGGERS - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Xvimian Fang of Despair 20 Will/see note *4* Ysha's Sting 37 Fort *5* Dagger of Closing Arguments n/a FLAILS - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Pustule's Flail of Boils 37 Fort/12 Fort *6* HALBERDS - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Holy Swizarnian Hammer of Lucerne 14 Fort *7* Life's Blood Drinker 37 Fort *8* Pudu's Fiery Blight 49 Fort *9* STAVES - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Ryomaru's Harmless Tanuki Staff 24 Fort (stun only) SWORDS - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Bastard Sword +2: Black Adder 24 Fort The Black Lamia's Tongue 27 Fort Bleeding Short Sword +4 27 Fort Charged Short Sword of Wounding +5 40 Fort Lolth's Cruel String see note *A* Miasmic Bastard Sword 36 Fort *B* Scimitar: Blood Trails 30 Fort Scimitar+4: Ichor 27 Fort *1* Neither the Panic or Slow effect on Coward's Flight allows for a save. *2* The burning blood effect always hits for 12/round for 3 rounds, and still retains its potential for berserking. As Ilya Nemetz mentions, this berserking can shut down enemy casters and make enemies attack each other. *3* The confusion effect on Club of Confusion has no save. *4* Even though the game says it's an Emotion: Despair effect, the graphics and the effects on the enemy mirror that of a Symbol: Hopelessness effect. The save listed is for that effect - the Flesh to Stone effect has a separate, low Fortitude save (10). *5* Ysha's Sting, when used as a melee weapon, has a pathetically low 10 Fort save for the same effect. *6* The Venom effect has the higher save; the Contagion/Dolorous Decay effect has the lower save. *7* While the save sucks on the holy smite effect, it still has a small, if piddling, effect even when the enemy saves. Unfortunately, it also seems that the holy smite triggers at *way* less than the listed 25% - it seems more like 5%. *8* The wounding effect on Life's Blood Drinker has the high save, but the vampiric effect has a lower 19 Fort save requirement. *9* The Lower Resistance effect on Pudu's Fiery Blight, like the spell, has no saving throw. It's also important to point out that my test creature (Frost Giant), which has a +27 Fortitude Save, could only succeed on a natural 20 to resist the stun (a natural 20 always succeeds). Suffice it to say that for most enemies the stun will work 95% of the time for most enemies. *A* The Cruel Sting has a low 15 Fort save, but the poisoning effect still has a minor effect even when the saving throw succeeds (2 damage per second for 6 seconds). *B* Instead of two saving throws, one for the poison effect and one for the stun effect, the Miasmic Bastard Sword has only one save for both - it just means that 25% of the time the enemy fails his or her save, he or she also gets stunned. ------------------------------------------------------------------------------- 3. Armor of Note *GEA:ARM- This is a much smaller list as in HOF, most armor is pretty useless for their main purpose (AC), and the character with the highest AC won't be using armor. Barbarian Shield (shield) Tied in overall effectiveness with the shield of duergar fortitude in boosting health, this one grants +1 Constitution. This means that a character who started with an odd number for their constution stat will get 30 extra health at the end of the game, twice as much as the duergar fortitude shield! Unfortunately, this constitution bonus can be negated by using some other item that grants more than +1 to constitution. Depending on the circumstances, though, this can be a really good shield to use. Barbarians that you summon using Raging Winds have a chance of leaving this shield on their corpse when they die in combat. Chain of Drakkas' Fury (none) Despite the fact that there's a punctuation error in the armor's name, this is a nice armor to use for any spellcaster or support attack character. It grants a +3 attack bonus and an extra attack per round (which is useful for the Wizard/Sorceror who will only end up with 3 base attacks at level 30). This is available off one of the soldiers in the Barracks in the Severed Hand. Cornugan Hide Armor (light armor) One of the best DR-granting items since it also combines with a nice regeneration effect. It has 20% arcane spell-casting failure, though, so arcane spellcasters with three ranks in armored arcana will still have a 5% failure rate. This is available for completing rank 3 of Battle Square in the Ice Temple. High Master's Robe (robe) The best intelligence-boosting item for good characters, giving +3. It also gives +3 Charisma and a (at this point) useless +6 bonus to Alchemy and Knowledge (Arcana). This can be found in a container in the Severed Hand. Milton Sixtoe's Armor of Absolute Self (light armor) Permanent mind blank and 15% arcane spell-casting failure rate. Not shabby. You find it in the treasury in the temple inside Chult. Mooncalf's Shield (shield) As often mentioned, this shield grants permanent Protection from Arrows, which effectively means near-immunity from ranged attacks. Get this in the Prologue when Targos gets attacked - normally there's a soldier standing in front of a shield display preventing you from getting this, but during the attack he'll move (or die). Shield of Duergar Fortitude (shield) One of the best hit-point boosting shield in the game, granting +15 hit points. You get it as a reward for clearing the River Caves of monsters. ------------------------------------------------------------------------------- 4. Accessories of Note *GEA:ACC- There are also a lot of accessories mentioned in section 2a (the section on getting a high AC). Bile of the Damned (amulet) Only non-good characters can use it, but it gives an outstanding +4 Strength and Wisdom. Available off Sheemish's special stash after you've set free the Aerial Servant inside Orrick's Tower. Dwarven Ogre (belt) Only fighters, barbarians, and rangers can use this, but it grants an amazing +6 strength and permanent blur (which is an outright 20% chance to evade attacks). Available off Sheemish's special stash after you've set free the Aerial Servant inside Orrick's Tower. Every God Ring (ring) There are lots of copies of this ring (one you can buy, one that drops off an enemy, and a final you find in the end game). Even then, still needs special mention because of its outstanding +5 Wisdom bonus. Only religious folk can use it, so keep that in mind if you're using a Monk to power up a decoy. You can find this at various points, but the earliest is buying it off Nathaniel in Kuldahar. High Tyrannar's Band (ring) A really good charisma-boosting item (+4), with a side effect of wisdom (also +4). You get it after you slay the mini-boss at the top of the Cleric's Tower in the Severed Hand. Lyre of Inner Focus (instrument/shield) An instrument you equip like a shield, bestowing an amazing +3 Strength and +2 Constitution. You can get this off of one of the Underdark merchants. Young Ned's Knucky (amulet) Super awesome! See section BAS:LUC- for more details. Jemeliah, a random NPC in the Targos general store, has it on him. It seems like this is virtually impossible to obtain via pick-pocketing, and, in fact, the only seemingly legitimate way to obtain this is to cast an instant death spell on Jemeliah. This is because if Jemeliah dies instantaneously, no one seems to care (which doesn't appear to be true of other characters). This means that you have to either cast Finger of Death or cast Destruction and then hit him with it (though you only have a 5% chance with Destruction due to its low save requirement). Disintegrate doesn't work because it takes a while for Jemeliah to disappear, and he becomes hostile the moment the Disintegrate bubble touches him. Raging Winds (instrument) A super fast way to summon a miniature army. These berserkers are pretty effective on HoF mode (even in the endgame) and are undyingly loyal (so don't worry about hitting them with spells by accident). On an amusing note, instead of saying something like "RAAAR" or "FOR TEMPUS", very rarely the barbarians will yell "Look at me! I'm a crazy frothing barbarian!". Glad to see Black Isle's sense of humor. This is available off Beodaewn's caravan. Sephica's Prayer (instrument) Gives you the ability to cast heal or resurrection, both once per day. An extra heal and a free resurrection is really useful. Just be warned that you need a minimum of 13 wisdom to use this. Available inside a container in the Severed Hand. Tymora's Loop (ring) MEGA AWESOME! See section BAS:LUC- for more details, but unfortunately it's a purely random drop. =============================================================================== =============================================================================== Sample Parties *SAM- ------------------------------------------------------------------------------- 1. 6-person Good Party *SAM:6PE- This is an all-purpose good-aligned party. There isn't too much tricky multiclassing, save for the Decoy. Properly played, you'll be able to breeze through all sorts of challenges in HOF mode - this party covers all the necessary bases while still providing some nice redundancy as well as some backup strategies. Decoy: (Lawful Good) Male Drow Monk of the Old Order 17 Paladin of Helm 2 Rogue 1 Conjurer 10 Drow for the SR, the extra stats, and male for the preferred class of wizard (for the conjurer levels). Paladin for the immunity to fear and the ability to use an Every God Ring. Rogue to use Crow's Nest. Many monk levels for lots of AC, conjurer for more AC-boosting effects as well as illusion spells like Improved Invisibility and Mirror Image. You have to be really careful about leveling this guy, or else you'll frequently run into multiclassing penalties. (A good tactic would be to level up the wizard levels first, get 1 level of Rogue, get 2 levels of Paladin, 1 level of Monk, then just level squat and get the remaining 16 levels of Monk in one shot). Insane damage: (Lawful Good) Aasimar Fighter 4 Paladin of Mystra 6 Diviner 20 Aasimar for the preferred class of Paladin and the extra stats. Fighter for the weapon specialization, extra feats, and Dwarven Ogre belt. Diviner levels to be able to cast all sorts of utility spells (Wail of the Banshee, Malison, Executioner's Eyes). Levels of Paladin of Mystra for dual Holy Avengers and extra base attack bonus. Support/Healing: (Good) Human Bard 11 Morninglord of Lathander 19 Human for the preferred cleric levels and extra skills. Bard levels for War Chant of Sith and some useful illusion magic. Cleric levels for healing. Crowd Control: (Good) Human Enchanter 30 You get massive skill points and feats for all sorts of support roles. You max out durations for crowd control spells. Also enables use of the Mystra-line of cloaks that bestow DR. Support/Healing: (Good) Human Druid 12 Painbearer of Ilmater 18 Druid-style crowd control and Barkskin. And another cleric character for more buffing and chaining together Holy Words. Damage: (Good) Aasimar Sorceror 30 Aasimar for a higher Charisma, and maxed out Sorceror to max out every single damage spell possible. ------------------------------------------------------------------------------- 2. 4-person Good Party *SAM:4PE- This is proof for all you skeptics that you can play with less than 6 characters with good success in HOF mode. The structure is a bit different than a 6-person, as you won't have the luxury of free character spaces to have a dedicated damage spell caster and a dedicated debuff spell caster. There's some tricky multiclassing here, especially since some levels you need to manually gain to help ameliorate severe HOF challenges (your Decoy, for example, won't have any mage levels for illusion spells until well into HOF). In addition, two of these characters want a Paladin level early so that they can benefit from both instances of finishing the Paladin quest (which, in addition to yielding the Holy Avenger swords, gives +1 Strength and Wisdom). You'll note that here I'll provide alot more information than on the 6-person party, as decisions about stats and items become far more important with a reduced number of characters. Decoy/Backup Healing: (Lawful Good) Deep Gnome Class Levels: Monk 1 Ranger 1 Paladin 2 Rogue 1 Morninglord of Lathander 14, Illusionist 11 Base Stats: 8 Str, 20 Dex, 8 Con, 14 Int, 20 Wis, 4 Cha Extra Stats: All 7 into Wisdom Notes: Drink one Holy Potion of Transference Important Items: Every God Ring, Chimandrae's (Warded) Slippers, Crow's Nest, Indomitable Bands, Farmer's Cloak, Sunfire Talisman, Light of Cera Sumat, Golden Heart of <Player Name> Pretty general decoy. Lots of AC, plenty of illusion magic (Blink is a staple), and extra Heals and Buffs via the cleric levels. Insane damage/Support/Healing: (Lawful Good) Human Class Levels: Painbearer of Ilmater 20 Paladin 1 Fighter 1 Sorcerer 8 Base Stats: 18 Str, 6 Dex, 16 Con, 4 Int, 18 Wis, 14 Cha Extra Stats: All 7 into Wisdom Notes: Drink one Holy Potion of Transference and one Potion of Clear Purpose Important Items: Dwarven Ogre, Ned's Lucky Knucky, High Tyrannar's Band, some weapon with elemental damage (Club of Freezing Flames, Halberd of the North, Massive Greataxe of Flame, etc.) There are two major reasons that this character is Human. The first is to enable 2 skill points/level even with a sub-10 Intelligence, which means the character can max out Concentration and get enough Spellcraft to pick up Aegis of Rime and Spirit of Flame (to boost elemental weapon damage). The second reason is to let this character multiclass without getting the steep -20% experience penalty. The Fighter level is there to let the character wield the Dwarven Ogre, but the Fighter level also means that multiclassing gets a little tricky without either a race that supports any multiclass (human/half-elf) or a cleric multiclass (female drow). Equip this character with a Dwarven Ogre, 18 base Strength for a total of 26 after Paladin quest bonuses, Prayer, Emotion: Hope, and Holy Power, a two-handed weapon, and Luck bonuses, and watch the damage skyrocket to enormous levels. With all the Wisdom, this character also makes a decent debuffer, having a total of 34 in the end game, being able to use Greater Command, Symbol: Hopelessness, and even Hold Person to decent effect. The Charisma lets the character use important decoy-like spells, and the High Tyrannar's band will give you 18, which gives you an oh-so-important extra 4th level Sorcerer spell for Improved Invisibility. Buffing/Crowd Control/Diplomat (Good) Aasimar Class Levels: Druid 12 Sorcerer 18 Base Stats: 8 Str, 14 Dex, 8 Con, 14 Int, 16 Wis, 20 Cha Extra Stats: All 7 into Charisma Important Items: Master's Robe (for the +3 Charisma) Going for twelve druid levels off the bat will be __incredibly__ useful, as having those druid levels will make your life significantly easier on normal (and help you realize why I would rate it the best class on normal difficulty). With proper level squatting, you'll be able to get 18 Sorcerer levels early on in HOF, and then you can start tossing around Mass Dominate, Chaos, Dominate Person, Power Word: Blind, etc. Damage/Support/Skills (Good) Tiefling Class Levels: Diviner 19 Bard 11 Base Stats: 11 Str, 15 Dex, 14 Con, 20 Int, 4 Wis, 14 Cha Extra Stats: All 7 into Intelligence Important Items: High Master's Robe, Lyre of Progression This class has so many skill points you won't know what to do with them all. Anyway, having the mage levels will help in normal, as this character is well suited to picking up spells like Disintegrate and Finger of Death thanks to all the extra Spell Focus that will come out of the extra feats from the Diviner levels. This class also benefits from reckless abuse of Lingering Song. Put both Tymora's Melody and War Chant of the Sith on your toolbar and memorize what function keys they map to; throughout every battle, alternate between the two each round. ------------------------------------------------------------------------------- 3. 2-person Evil Party *SAM:2PE- This party will let you reap the rewards of being sinister: access to some top-notch items, and the ability to completely shrug off Unholy Blight and Blasphemy (a perk you don't fully appreciate until you experience it for yourself). Plus, because evil clerics can become Banites, you get extra oomph from that too (as their religious bonus is arguably the best of all clerics, and they also get the +2 Wisdom per play though thanks to the Banite quest in the Kuldahar graveyard). It's my basic theory (that I haven't tested) that you __must__ be evil in Heart of Fury to do a 2-person party, as the extra Wisdom is essential for AC and maximizing the chance that your debuffs connect, plus the immunity to Blasphemy when you only have two characters is __just that important__. Decoy/Debuffer (Evil) Deep Gnome Class Levels: Monk 1 Rogue 1 Dreadmaster of Bane 20 Sorcerer 8 Base Stats: 8 Str, 20 Dex, 10 Con, 14 Int, 20 Wis, 2 Cha Extra Stats: All 7 into Wisdom Notes: Drink both Potions of Holy Transference, one Potion of Clear Purpose, both Potions of Arcane Absorption, both potions of Magic Resistance Important Items: Bile of the Damned, Chimandrae's (Warded) Slippers, Crow's Nest, Indomitable Bands, Farmer's Cloak, Sunfire Talisman You'll have a sick Wisdom with this class (40 in the end game), which not only means an insane Monk AC bonus, but an insane DC as well. You can drink another Potion of Clear Purpose if you really want to and use an Every God Ring instead of Bile of the Damned, but then you're giving up a precarious amount of health as well as the ability to do any damage at all with this class. Swiss Army Knife (Evil) Male Drow Class Levels: 4 Fighter 26 Diviner Base Stats: 18 Str, 8 Dex, 16 Con, 20 Int, 4 Wis, 6 Cha Extra Stats: 6 into Intelligence, 1 into Constitution Important Items: Dwarven Ogre, Ring of Hearty Strength (for the +1 Constitution), Xvimian Fang of Despair + some other hawt weapon for dual-wielding (Bastard Sword +3: Cold Fire, Bastard Sword of Heroism, Club of Confusion, etc.) OR Mooncalf Shield for protection from ranged weapons This is your all purpose class. Your other character is there to soak up enemy attacks and do some basic buffing/healing/debuffing, but this character is all about laying down the heavy stuff - Wail of the Banshee, Mass Dominate, Animate Dead, Shades, Dominate Person, Chaos, Slow, etc. I chose a Wizard-based mage instead of a Sorcerer simply because of the diversity in spells you need. Since you can learn as many spells as you want, this will let you stock up on spells you may only need situationally (like Control Undead or Meteor Swarm for those damn jellies). The biggest risk with this class is that you'll run out of some key spell (Wail of the Banshee, for example) or misprepare for a fight (got too many Dominate Persons when you were expecting Slayer Knights of Xvim, when you end up fighting a bunch of lesser creatures and Chaos would've been better). This class also needs to be able to melee, because there are just some situations where this character needs to get down and dirty, like when you've run out of Mordenkainen's Sword. Unfortunately, your base attack bonus isn't too great, so you probably shouldn't even use Power Attack. This class should also have a lot of back up weapons handy, just to handle all the possibilities (fire damage for trolls, disruption weapons, Kegsplitter, Goblin Slayer, frost damage for Isair and Madae). ------------------------------------------------------------------------------- 4. Playing a Smaller Party *SAM:PLA- Playing a smaller party introduces new types of challenges to your game play. By far, the hardest part about having a smaller party is playing the first parts of normal difficulty! This is because you'll still need to be level squatting, but at such low levels, all your characters will be missing any kind of useful ability for survival. In fact, you'll note that while going from 6 to 5 characters is only slightly harder, playing with progressively less characters becomes exponentially more difficult. The early game is particularly demanding - whereas smart play can outmatch the later game with a party that's smaller and still level squatting, the developers really planned out the first couple of chapters for a party of six being pushed to their extremes by armies of weak goblins. To help you along your way, here are a few pointers. Use a Deep Gnome Decoy It's not as important when you have a party of six or even four, but once you get less then that, there's a lot of pressure on you to have a character that can withstand lots of enemy attacks while your (much) smaller party (all of whom should be squatting at low levels) slowly and pathetically whittles down the enemy. The easiest way to do this is to take advantage of the Deep Gnome's +4 natural AC bonus and free casting of Mirror Image and Blur. Both of these go a long way into your decoy's survivability without requiring any kind of class- specific abilities. In fact, such is the usefulness of the Deep Gnome's innate abilities, that your decoy - arguably the most important part of a Heart of Fury mode party - can easily stay at extremely low levels (even level 1) much better than the rest of your party. Note that I don't recommend a druid decoy here since you won't be able to do that until the druid gains Air Elemental form. Get Castings of Fireball Level squat, but once you are able to, let a Sorcerer get up to level 6 and make sure Fireball is the spell you pick up. Even if you have other plans for this character, you will probably more than likely be able to spare a spell slot for Fireball. With an extremely small party and with everyone level squatting as much as possible, Fireball essentially is what enables you to play the early game without having to slam your head through the wall in frustration. Since a smaller party will get more experience (since the same amount is being divided amongst fewer characters), you'll even be able to get Fireball earlier than larger parties! Once you get Fireball, your fortunes change dramatically. Fights that were an exercise in hoping monsters didn't critically hit your decoy too many times to overwhelm the Mirror Image now become easily dispatched. Even otherwise impossible fights -- like the drums of the Goblin Warrens's outposts -- become fairly easy with a couple of well-aimed Fireballs. Invest in/Use Potions But what do you do when you're still low level to even use Fireball? This is especially problematic when you're fighting the Broken Tusk Clan in Shaengarde - with a party of three or less level 1 or level 2 characters, the Orc archers will eat you alive very quickly. Oswald has some Potion of Explosions and similar potions that you should not hesitate to spend money on. Smaller parties have lower cost requirements than larger parties, so you shouldn't worry about blowing a significant amount of your net worth on the potions. Use up Your Scrolls If you're advanced enough that you're playing with a reduced number of characters, you should already know what spells you need. You should then just use up any and all other scrolls you find - using a scroll of Melf's Minute Meteors instead of hording it or memorizing it can make the early game a lot easier. Side note - there appears to be a bug that can happen where you get stuck with 5 attacks/round even after Melf's wears off. Don't know how this can happen, but needless to say, so long as you don't reload the game, this is a _very_ useful outcome for you. Buy a Necklace of Missiles You can buy it off Beodaewn's caravan after Oswald crash lands (warning! It's expensive). It starts off with 50 charges and in many ways is better than just a Fireball (larger explosion, no Reflex save). With a good use of a Decoy and intelligent use of Mirror Images or Entangle/Web/Stinking Cloud, this single item will give a lot of success for a long time (I'm generally able to make good use of it well into the Underdark). =============================================================================== =============================================================================== ...and more! *AND- ------------------------------------------------------------------------------- 1. Important Notes *AND:IMP- This is just a grouping of various random notes and tricks that didn't really fit in elsewhere. Beware of Fireshield It's not altogether clear to me just how exactly monster scaling works in HoF, but it is important to note that, at least for fire shield, enemy levels skyrocket. A good example is fighting the Efreetis in the third level of Dragon Eye - hitting one can inflict upwards of 60 damage to the poor melee attacker. In these instances, it behooves you to keep your distance or have summons do the dirty work for you. Note that Mordenkainen's Sword counts as using a melee weapon, so you'll still get hurt severely by the Fireshield. Mirror Images, though, do block the Fireshield damage, so you can mitigate it that way. (Is there anything Mirror Images __can't__ do??) Caster Levels This is related to the above, but enemy levels are __high__ for purposes of certain uncapped spell effects. This won't come into play too often, because hopefully you're good at resisting effects. But I've seen Skull Trap do upwards of 170 damage (spelling instant death for my more fragile characters), and have had durational effects like Charm last for __extremely__ long periods of time. Collector's Edition Having the collector's edition is the only legitimate way to obtain the Brazen Bands/Indomitable Bands, which is by far the best source of generic AC in the game (a whopping +5). However, if you aren't blessed with such a copy, then you can get around this by using a console command. 1: You need to switch on the console. This is a lot easier than in previous Infinity Engine games; just open the configuration program and switch on "Enable Cheat Console." 2: While you're in the game itself (before Nym in the Wandering Village leaves), press control+tab to bring up the console, then type in: ctrlaltdelete:setglobal("IWD2_BONUS_PACK", "GLOBAL", 1) and press enter (you need to use all caps for the stuff in the quotes). Then Nym will sell the Avarine Decanter. Buy it, use it (by putting it in your quick slot), then, if you want the Brazen/Indomitable Bands, simply free the genie instead of using any of his services. Level squat If this isn't a familiar term for you, you should read this section carefully - it's rarely ever worth it to immediately level your characters. The reason is that the experience monsters give you is based on your average party level (rounding down). Thus, the more readily you level up your characters, the less experience they'll be getting. In fact, there are only two cases in which you should ever level your characters. The first is when the game has reached such an immense level of difficulty that you need to boost one of your characters up to a higher level. The second is you have a character that has some strict multiclassing requirements and you need to level them to avoid messing that up. A good example of the first is when level squatting towards the Ice Temple on normal. Once you reach the ice temple, having someone who can cast fireball or someone who can hit one of the Ice Golem Champions with some semblence of consistency becomes really important, so you might want to level up one of your characters. A good example of the second is the example decoy character in the sample party. If you've got 10 levels of Conjurer and 17 levels of Monk, if you level up 3 times at once (since you can't break up levels you gain in one shot), you won't be able to split them into 2 Paladin/1 Rogue, so here you need to level up two separate times, once to pick up the one level of rogue, and again to pick up 2 levels of paladin. Another example of the second would be a 11 Bard/19 Cleric split. Let's say you've got 9 levels of Cleric, but you've waited too long to level up so you've got 12 levels stored up. Because you can't split up these levels when you level up, you either have to do 12 levels of Bard or another 12 levels of Cleric. Either way, you've broken your character's development. A trick about level squatting is that because the game rounds __down__ your party's average level, you can try to find "sweet spots", where you're high enough level so that the game isn't insanely difficult, but low enough to be reaping a vast amount of experience. So if your 6-member party's total level is 35, the game treats your party's average as being 5 (since 35/6 is 5.83 which rounds down to 5). Thus, you are pretty much the equivalent of a level 6 party (one level will barely make a difference) but reap the experience benefits of a level 5 party (which sometimes means as much as twice as much experience from some enemies). Aggressive level squatting is __essential__. Ideally, you should max out all your characters' development midway through HoF. Otherwise, you may find yourself really scrounging for experience for the last few levels, as high level characters get piddling experience even against tough HoF monsters, which is doubly painful considering how much experience you need to level up at those high levels. Plus, in the case of a character like the decoy, every last level counts. In fact, resist the urge to level up your characters after the battle with Isair and Madae at the end of normal. If you were able to finish them off while level squatting, you'll more than be able to take care of the Prologue and Act I in HoF without difficulty and reap some good level-squatting-based experience benefits. Micromanage This is a really important skill. You've got a million things you need to be doing/checking at a given time. Bard song need a refresh? Is your decoy out of Mirror Images or is Otiluke's Resilient Sphere going to be expired soon? Is that a cleric likely to cast something like Blasphemy or a harmless one going to be casting things like Bless? Do you have any idle characters? Hopefully you've trained some of these skills through normal difficulty. If you're struggling to manage 6 characters efficiently, you might want to consider dropping down to 5 or 4. The game is still definitely possible with such reduced numbers (all you need is atleast one decoy and one crowd control/damage character, the extra just helps make the game easier). And, if you're not managing your characters, you're probably wasting them anyway. Or, at the very worst, you can just use characters 5 and 6 as bards whose sole duty is to go invisible, sit back, and strum some songs, thus letting you make better use of your 4 other characters. Mirror Image generation The spell description would have you believe that it's a completely random generation of 2-8 (or 2d4, as it was in Baldur's Gate I and Icewind Dale I). It's actually misleading, as it's based on the Baldur's Gate II version of the spell, which is dependent on caster level (it's something like 1d4 + 1 per so many caster levels). This means that high level (20+) casters will frequently max out their Mirror Images per casting, but lower level casters (5-) can get stuck with two. Outrun your enemies Your characters that aren't decoys will probably not be able to last more than a round or two going toe-to-toe with even just one enemy. Given this, a combination of Boots of Speed, Dash, and/or Haste is absolutely important, as you should immediately run the character to safety before casting protective spells. Your high Concentration skill is only there in the case of emergencies - don't expect it to save your life when you're trying to let off a crucial Invisibility or Mirror Image - your AC is so low that you're going to be probably hit atleast once for loads of damage before these finish. ------------------------------------------------------------------------------- 2. Challenges *AND:CHA- Well, you've conquered HoF! What's next? What about some challenges to make the game even more difficult and interesting? Plus, these challenges can get some things that you may have glossed over in earlier playthoughs to become important. Since I enjoy playing through IWD2, here are some of my thoughts on various challenges you can try to pick up, as well as some notes I have on them. Be warned that a lot of these are not intended for HoF difficulty, unless you're insane :). Here are some ideas for basic rules (things you can mix and match and combine with some of the bigger challenges): No ranged weapons allowed. Only two-handed weapons allowed (ranged weapons included). No melee weapons allowed. Use less characters (5, 4, 3, or even 2 characters). No level squatting allowed. No spells that fully heal (Heal, Mass Heal, Resurrection). These are some basics that force you to try alternate tactics. You may have not normally decided to use alot of two-handed weapons (of which there are many) without self-imposing such a rule on your play, and you may be surprised by how much damage your party is capable of outputting as a result. Here are some more drastic challenges to try out. No multiclassing: One of the flaws in 3e D&D is that some characters just plain suck in a system of multiclassing. The ranger is the best example as in virgin 3e, there was almost no reason to ever get more than one level in ranger (though this is improved somewhat in 3.5e). Moreover, some classes, like the Paladin and Ranger, have spellcasting abilities that are made irrelevant by just being able to multiclass into something like the cleric. By removing your ability to multiclass, though, you may want to use a complete Ranger - they get alot of free perks from the start and will pick up some spellcasting that isn't made completely obsolete by the ability to pick up a few druid levels. Similarly, do you really want yet another arcane caster when your ability to heal and go toe-to-toe with tough enemies will be severely impacted as a result? Party based on a single character type: This means creating a party whose classes are completely from the group of warriors (barbarian, fighter, paladin, ranger), priests (cleric, druid, monk), rogues (thief, bard), or wizards (wizards, sorcerors). These groupings are from AD&D times, and is a variant of having a "theme" party. Each party-type has their own unique strengths and weaknesses compared to other party types, though by far the wizard group has the easiest time at higher levels. The warrior type will have the easiest time early on, though they'll start running into some roadblocks mid-to-late game, as they'll be heavily reliant on your ability to find good weapons and armor, a steady stream of potions, and the need for a high Expertise/Power Attack. The priest group will have the best overall strength, being almost as capable as fighters early on, backed up by their healing, and having immense support spells in the end game, though their killing power will be pretty limited. The rogue group will be heavily reliant on using bards for crowd control and immense micromanaging of thieves, but, as I mentioned before, bards are nothing if not versatile and immensely powerful (though a simple casting of dispel magic from the enemy will probably cripple a bard's protections). The wizard group is by far the most powerful in the end (a group of sorcerors can even go into HoF and conquer it), but will have *immense* difficulty early on, when fighting things like Ice Golem Champions who have high SR and AC and when your spells, by comparison, are weak and your summons pathetic. Party based on a theme: A variant of the above. Maybe you're a party of tree huggers (druids and rangers only), or maybe you're a group of zealous helmites (paladins and clerics of helm only). Maybe it's a virulent group of mercenaries dedicated to stomping out magic in the world (barbarian, fighters, rogues, and monks). This is where your individual creativity and wackiness kicks in. =============================================================================== =============================================================================== Chapter-by-Chapter Notes *CHA- ------------------------------------------------------------------------------- This is where I just jot down some pointers and notes for various parts of your HOF campaign. ------------------------------------------------------------------------------- 1. Prologue *CHA:PRO- The Prologue is fairly trivial, especially considering how rough the final battle with Isair and Madae was at the end of normal. Just be sure about two things. First, if you're level squatting, keep level squatting as the Prologue is fairly trivial to get through. Second, get to the various stores ASAP and stock up on all the great HOF items that you can use (Goblin Slayer, Kegsplitter of Shaengarne Ford, Golden Heart of <Player Name>, etc). In fact, you can get through the ambush with just one character with a high attack bonus and Goblin Slayer. If you insisted on creating a Tempus cleric, remember to get Tome of the Lord of Battles in the locked cabinet in the medical tent. Be sure to check all the containers in Ulbrec's house, as two of the books are actually special items (Legends of Icewind Dale and Heart of Winter... do those sound familiar to you?), and Legends of Icewind Dale lets you cast two potentially useful spells three times/day (Heart of Winter is bugged in that it says it lets you cast Power Word: Blindness, but instead casts the much, much less useful level two spell Blindness). ------------------------------------------------------------------------------- 2. One *CHA:ONE- Shaengarne Ford will be your first test of skills, as you'll find yourself swamped with massive swarms of orcs. This is where, if you're not used to HOF tactics, you'll have a sudden and very steep learning curve. You have to heavily emphasize your Decoy (or whatever else you're using for tanking the enemies) and really start laying out the crowd control and debuffs, as otherwise you'll find the orcs being able to withstand spells like Meteor Swarm without budging. The Horde Fortress should be, comparatively, much easier, even easier than normal. The reason is that now you have Goblin Slayer, so now you can just easily slay those spawning Goblin Worg Riders whenever the drums start playing. The Goblin Slayer, in fact, will help you clear through half the Horde Fortress without needing a break (except maybe a Heal or two), and you're already used to dealing with Orcs from Shaengarne Ford. ------------------------------------------------------------------------------- 3. Two *CHA:TWO- Most of this chapter is fairly straightforward, though it's important to note that all the Ice Golem-type things count as constructs, so the Kegsplitter of Shaengarne Ford will slay them with one hit (no more struggling with blunt weapons like on normal!). Similarly, you may even find the Battle Square on HOF easier than on normal, as now you have better spells (like Wail of the Banshee) and better skills (like a high AC if you're a decoy). Notable ranks to complete are 2 (for Potion of Holy Transference), 3 (for Cornugan Hide Armor), and 9 (for Wand of Animate Dead). If you have a wizard who needs certain level 8 spells, you can try for them here, too, though it's not too time-effective. Be sure to buy the Raging Winds off Beodaewn's caravan before you do anything else to him, as it's an excellent Bard item (see Accessories of Note for more info, find shortcut: GEA:ACC-). Remember to revisit where Oswald was after he leaves and you finish the Ice Temple. Instead of the crashed Airship, you'll see a note from him as well as a good, permanent-effect potion of some kind. Hope for a good one (like the Potion of Arcane Enhancement, which gives a permanent +1 Intelligence and +1 spell resistance). This also happens on normal difficulty, so remember to check both both times you play through this section! ------------------------------------------------------------------------------- 4. Three *CHA:THR- The Wandering Village is pretty straightforward. Make sure to pick up the Avarine Decanter off Nim (see the "Important Notes" subsection in section 9) if you're using an AC-based decoy. Those butt-hard Will-o'-Wisps from normal are easily slayable in HOF once you realize that Wail of the Banshee is effective against them. The Frozen Marshes you'll find to be terribly annoying, as they're filled with Trolls and Trolls are fairly resilient to most HOF tactics (stunning is useless, they're immune to Holy Word, not effectively controllable, Wail of the Banshee is hard to trigger on them). For the River Caves, a good strategy is to send your Decoy (or some summons and a character that can go invisible) out first through the initial segment of the tunnel, while keeping the rest of your party sits back where the ropes drop them off, along with some protective summons. This is because, in case you forgot, Hook Horrors will spawn near the entrance and try to ambush you, and getting ambushed from all sides can be a recipe for doom on HOF. This is because, of course, any character not designed to take physical damage will easily get overwhelmed and annihilated in a few short rounds. Hopefully you also have some high-powered fire damage spells or lots of castings of Disintegrate, as the Ochre Jellies in the lower part of the River Caves will tear your party apart if you're not careful. These jellies split into an extra lower health version of itself every time you hit them, and the only real way to damage them is with fire. If you're careless and just let your party AI attack recklessly, you'll end up with a screenful (I've accidentally made upwards of thirty), all of them gleefully hitting your characters for upwards of sixty damage a pop. The solution is to either Malison/Prayer/Recitation them and hit them with Disintegrate, or send in a Decoy/bunch of summons and fling Meteor Swarm after Meteor Swarm and hope your front line is able to keep the jellies back. ------------------------------------------------------------------------------- 5. Four *CHA:FOU- At this point, if you already haven't, you should be earnestly checking all the containers, as the loot starts to get consistently upgraded (so you'll be finding progressively more +3/+4/+5 weapons as you get futher into the game, instead of the boring old Masterwork stuff). Remember, the Iron Golems guarding the tomb under the Black Raven Monastery can be dispatched easily with the Kegsplitter of Shaengarne Ford. Also, be sure to buy the "How to be an Adventurer (2nd Ed.)" if you're still lagging behind the full level 30 for your characters, as you should be close to maxing out by now. Remember those annoying Mind Golems in the Mind Flayer Citadel? Again, the trusty Kegsplitter of Shaengarne Ford will dispatch them easily. No more annoying can't-quicksave-Mind-Fog! The Underdark merchants feature all sorts of things you need, so be sure to pick them up (and don't talk to the ones in the lower left of the map unless you're at full health, as they'll ambush you instead). ------------------------------------------------------------------------------- 6. Five *CHA:FIV- If you have a Club of Disruption (or some other Disruption weapon), you'll still be able to get some use out of it (but unlike normal difficulty, you'll need to be using Prayer/Recital/Malison here). Make sure to pick up whatever you need from Nathaniel (like another Every God Ring), Sheemish, and Gerbash. If you're having a hard time trying to take out all the Yuan-Ti in Chult after you fail to convince Ojaiha to not attack Kuldahar, try to fight the battle without removing any of your Initiates Robes and swapping them for your actual armor. A good portion of the temple guards will only turn hostile if you aren't wearing the Initiates Robes, so you can cut down on the number of enemies that start swamping you by almost a third by doing this. The Guardian in Chult will hopefully not be too difficult. You can still try and Disintegrate him, but you have a microscopic chance in HOF mode. Hopefully, though, your Decoy will be able to toe-to-toe the dragon. Note that if you're relying on summons, you may be in for a hard time, as the Guardian can basically Dismiss summons at will. You'll find that in the third level of Dragon's Eye, alot of the Armored Skeletons have been replaced with Iron Golems, but no problem thanks to your trusty Kegsplitter. Similar to the River Caves, you need a copious amount of fire damage, as the only way I can figure out how to kill all the Mustard Jellies (and Olive Slimes) is via fire (or many, many, many Disintegrates). By my count, it took six Meteor Swarms to wipe them all out at once. The Efreetis will pose a problem for you, as their Fire Shields will do ridiculous damage to your melee attackers (see "Important Notes" in section 9), so make sure you have disposable summons at hand or effective ranged attacks. The Paladin quest is an epic battle, but you've got a few factors in your favor. First, your summons are much more effective than on normal (relative to the enemies). Second, you've got more defensive spells at your disposal. Third, Dismissal is just as good as it was on normal. That being said, summoning a few Animate Dead before the fight will be really useful as their HOF-beefed stats can take out Atalaclys the Lost (who spawns at the north end of the graveyard) fairly quickly. Stocking up on Dismissals is good because this will let you annihilate the various summons that Inhein-who-was-Taken will keep bringing in. Try to engage the ranged attacker (Jaiger of the Fanged Season) early, as otherwise he'll be able to pick off your fragile characters very quickly with his super-accurate arrows (Mirror Images don't do much against many super accurate arrows per round). Aside from that, try to keep the three melee guys - Broken Khree the monk, Kaervas Death's Head the dwarf, and Veddion Kairne the warrior - busy with summons and the like until you have the other, larger threats dealt with. With a really good Decoy, you'll be able to toe-to-toe these guys one at a time. That being said, Broken Khree is the easiest as his main strength on normal (AC) is useless against your super high attack bonuses on HOF mode. Veddion Kairne should go down next. Kaervas Death's Head will be your roughest final guy, as he has enormous damage resistances. If you're doing the favor for Nickademus (killing all the demons trapped in the Ice Temple), be sure to check out the boxes in the lower left room, as one of the potions there is a Potion of Magic Resistance, which gives the drinker a permanent +1 to their Spell Resistance. ------------------------------------------------------------------------------- 7. Six *CHA:SIX- If you're good, you have two really rough fights in this chapter. If you're evil, you have three somewhat rough fights in this chapter. The two fights in common are the one on top of the war tower and, of course, the epic final battle against Isair and Madae. Evil parties have an additional hard fight when trying to get the antidote at the top of the cleric's tower. There's also a potentially annoying fight against Xvim's avatar. Getting the Antidote/Top of the Cleric Tower If you're good and have two clerics, you can lock down the worst of the bosses with Holy Word while casting important defensive spells to make up for the fact that you're being ambushed from all sides - though you have to make sure you get those Holy Words off fast as Blasphemy and Symbol of Hopelessness gets tossed around here. If you're evil, you're in for a rougher fight, as you won't be able to buy yourself recovery time with Blasphemy, but fortunately you're also immune to the enemy's Blasphemy, though Hopelessness will still potentially annihilate you if you're not prepared. Iyachtu Xvim This fight can be pretty easy if you play it right. Simply have someone who can cast Improved Invisibility/Mirror Image also equip something that bestows Non-detection. In many cases, Iyachtu Xvim will get stuck casting Invisibility Purge over and over and over again, to no effect, all while you slowly whittle away his health through his massive resistances. If you don't want to be lame, then refrain from using Non-detection. That being said, the fight becomes much harder, as there's little room for navigation, and Iyachtu Xvim will gleefully make any non-AC based solution for a Decoy irrelevant. Note - it __is__ possible to hit Iyachtu Xvim with Symbol: Hopelessness, so keep that open as a viable strategy. However, he has a +20 Will save even after Malison, Prayer, and Recitation. This is where having a high wisdom Cleric comes in handy - a maxed out 30 Charisma Sorcerer has a 35% chance of landing it (a DC of 28), while a maxed out 42 Wisdom Banite Cleric has a 70% chance (a DC of 35). Top of the War Tower Best tactic is to cast Mass Invisibility as soon as you regain control of your characters while simultaneously casting (faster cast) summons. This way, your party will go invisible and be hidden from the massive ambush, while your summons will keep attacking and lose invisibility, thus causing all the enemies to retarget your summons. This is important as most of the enemies in this fight are immune to Holy Word, so you have no time-buyer if you're good. Once you're able to survive the initial ambush, regroup to the right side and then start dividing and conquering. Remember! Slayer Knights of Xvim make excellent Dominated pets (and they can also be stricken Hopeless). Be careful about Blasphemy, as it gets tossed around a bit in this fight. Be also careful about Dispel Magic - your party should be able to resist it, but a critical failure means you lose a lot of protections. More importantly, it gets cast repeatedly on the enemies, getting rid of all the debuffs you've been laying on them. Make it a point to knock out the mages quickly (with Disintegrate). Isair and Madae If you're good, it is imperative to have Mass Invisibility cast before you go downstairs from the War Tower fight. Madae will start off the fight with a Blasphemy or two, and being invisible from the start will mean that enemies won't be able to completely wreck your stunned party. Madae will use Blasphemy several times throughout both parts of the fight, so this is one place where being evil really pays off. Make aggressive use of Mass Heals here, as they will not only keep your party alive, but they will also keep your Monk allies alive (presumably you saved Ormis Dohor), and they are very important tanks, especially since they're all buffed up for Heart of Fury mode. Stock up heavily on Dismissals, as Madae loves high level summons, and there's a mage to the right of the battle (where you should go immediately) that also casts lots of high level summons. Keep Exaltation around, as Madae also loves abusing Symbol of Hopelessness, and Exaltation is the only spell that can deal with Hopelessness. When you finish the first part of the fight, load up on buffs like Mirror Image; Madae starts off the second part of the fight with more Blasphemies, so once again, being evil really pays off here. This time around, however, the pair is lower on defenses and annoying ability to call in powerful summons, so they're "just" surrounded by a pack of Slayer Knights of Xvim. Remember that these guys can be disabled with Symbol of Hopelessness or with Dominate Person. The latter will give you some fodder to toss at Isair and Madae. In both fights, the twins are particularly susceptible to cold damage from weapons. They're vulnerable enough that, combined with their insanely high normal damage resistance, the smallish frost damage on weapons like Bastard Sword +3: Cold Fire may deal more damage than the base amount. In particular, the Halberd of North and other weapons with Frost Burst deal their burst damage independently of the base frost damage, and both instances of the damage get beefed up. Between the two twins, though, I've found that Isair is more susceptible to physical damage, though his high-level Fireshield (which will deal around 55 damage per hit) makes him more painful to go after. Congratulations! You've beaten one of the hardest RPGs ever! =============================================================================== =============================================================================== Appendix *APP- ------------------------------------------------------------------------------- 1. History *APP:HIS- 2012.08.14 - Version 4.3 completed (minor) Updating "My Works" section (APP:MYW). 2012.08.14 - Version 4.2 completed Minor change regarding Heart of Fury enemy to-hit bonuses (thanks Ilya!). 2012.07.31 - Version 4.1 completed Corrected Heart of Fury enemy changes information (thanks Ilya!). Corrected some saves for section GEA:WEA:HIG (thanks Ilya!). Added Belib's Amazing Everlasting Torch to GEA:WEA:HIG (thanks Ilya!). Added Dagger of Closing Arguments to GEA:WEA:HIG (thanks Ilya!). Added Ryomaru's Harmless Tanuki Staff to GEA:WEA:HIG (thanks Ilya!). In other news, this update brought to you by Ilya Nemetz. 2011.12.02 - Version 4.0 completed Significant re-formatting of short cuts and the like (in line with my new BG guide). Re-formatting to fit into 80-wide instead of 70-wide. Added populous guide to 'my works'. Significant re-tooling of some sections (like max damage) thanks to contributions from sir rechet. Extra note about Painbearer multi-classing. Revision of BUI:DEC- to take into account rechet's note to me about charm and necromancy ignoring spell resistance. Added WHA- to take into account rechet's updated info on HOF mode. Added section on druid-based AC. Added extra notes for Necklace of Missiles. 2011.10.27 - Version 3.8 completed Changed 'other works' to 'my works' and added new guide. 2010.09.05 - Version 3.7 completed The scope of the guide has expanded a bit, so it's allow now a "Powergaming" guide. Added new section (3d.i) discussing maximum physical damage. Added new notes in light of 3d.i to Cleric and Paladin sections. Random typo/formatting tweaks. 2010.05.03 - Version 3.5 completed Removed erroneous note about Mass Dominate affecting Slayer Knights of Xvim. Added section about playing smaller parties. Added extra notes about Paladins. Added some copy changes about Rangers. 2009.11.01 - Version 3.4 completed Changed find shortcut system to use a shorter, four-key sequence. Modified Luck section with new notes. Amended Pick Pockets notes. Added mention on Spell Resistance cap (50). Reworked rating system in class section. Expanded Cleric section with info on specific domains. Added note about Paladin spellcasting. Added special note about Slayer Knight vulnerability to Dominate Person. Added extra info about Holy Word/Blasphemy. Added new section to Spells of Note: "A Word on Summons" Added note about drop rates for Bastard Sword of Heroism. Expanded section about Pudu's Fiery Blight. Removed redundant information about drop rate for Massive Greataxe of Flame +5. Added note about where to get Scimitar: Blood Trails. Fixed note about where to get Ysha's Sting. Added note about where to get a Barbarian Shield. Added note about where to get the Shield of Duergar Fortitude. Added note about where to get the Raging Winds. Fixed where you can find "Baron" Sulu's Hook (mixed it up with a different dagger in Chapter 1). Greatly expanded Sample Party section. Changed 4-person Evil Party to 2-person Evil Party. Expanded Important Notes section. Added "Caster Levels" and "Mirror Image generation" to Important Notes. Added notes about Heart of Winter and Legends of Icewind Dale to Chapter-by-Chapter guide. Added note about upgraded loot in Chapter-by-Chapter guide. Added note about Raging Winds in Chapter-by-Chapter guide. Added note about Ochre Jellies in Chapter-by-Chapter guide. Added note about Mustard Jellies in Chapter-by-Chapter guide. Expanded War Tower fight in Chapter-by-Chapter guide. Expanded Isair and Madae fight section in Chapter-by-Chapter guide. Various copy changes. 2009.08.31 - Version 3.1 completed Whoops, Destruction actually sucks (creates an item that has a low saving throw of 14); that's what you get when you ASSume. Added a note about Destruction creating an item-like effect in the Saving Throws section. Corrected notes about getting Young Ned's Knucky (I'll confess that previously I just gibbed him using the cheat keys). Added a find shortcut for the Table of Contents. Changed a strategic suggesion for Isair and Madae - Invisibility is not as effective as Mirror Image in protecting against Blasphemy. 2009.08.30 - Version 3.0 completed Woooooo new major version! Complete redo of the formatting in the document for better readability. Also reflects the fact that I added a new subsection a couple of versions ago. Fixed note about resist potions in the Damage Reduction section. Fixed typo about Malison giving -4 to saves instead of -2. Removed information in the Luck section concerning spells, as luck appears to not affect spells or spell-like effects. Moved discussion on Holy Word to Crowd Control section. Added note that Aura of Courage is bugged to Buff section. Added note about where the Bastard Sword +3: Cold Fire can be found. Fixed incorrect average damage for Pudu's Fiery Blight (was 15.5, should have been 17.5). Added note about Oswald's potion in the Chapter-by-Chapter guide. Added note about Nim in the Chapter-by-Chapter guide. Added note about reducing the number of attackers in Chult in the Chapter-by-Chapter guide. Added note about Potion of Magic Resistance in the Chapter-by- Chapter guide. Fixed incorrect find shorcut references. Added note in earlier history to indicate why a jump to 2.0 was warranted. Various minor fixes elsewhere. 2009.08.18 - Version 2.3 completed Forgot to mention that the "weapon proficiencies" section got some new stuff added. Added Pudu's Fiery Blight to items of note. Accidentally left out Ysha's Sting from the new weapon saving throws section. Polished up the weapon saving throws section to have actual numbers for everything, have all evil/neutral weapons tested, and fixed up the layout. Reworked and rerated the cleric section. Minor text fixes/changes. 2009.08.17 - Version 2.2 completed Added Bastard Sword +3: Cold Fire to items of note. Added Club of Confusion to items of note. Added Club of Dazing +5 to items of note. Added Club of Destiny +5 to items of note. Added Club of Freezing Flames +5 to items of note. Added Miasmic Bastard sword to items of note. Added Scimitar: Blood Trails to items of note. Removed Scimitar of the Soulless from items of note (saving throw sux) Added new subsection detailing weapon saving throws. Minor text changes. 2009.05.03 - Version 2.0 completed Woooooo new major version! New section: "Chapter-by-Chapter Notes". Added note to Pick Pockets (potions may not actually stack). Added note to Symbol of Hopelessness (chance to panic). Removed erroneous note on Skull Trap (does not ignore SR). Renamed "HOF Tactics and Notes" to "Important Notes". Added Farmer's Cloak to the AC deflection section. Added notes on where one can find the items of note. Expanded Sample Party section. Fixed "Collector's Edition" console command. Added note to Druid score. Added note to Crowd Control notes. 2009.03.22 - Version 1.4 completed Fixed Brazen Bands AC bonus from +3 to +5. Added note on Collector's Edition to HOF Tactics section. Fixed navigation shortcut for armor. Added a note on Otiluke's Resilient Sphere for the Decoy section. Added a note on summons in the Decoy section. Added Banishment and Dismissal to the Crowd Control section. Added a "Special Note" section. Fixed a few random mistakes. 2009.03.08 - Version 1.3 completed Fixed comments about +intelligence to also include Tieflings. Added extra notes for Mordenkainen's Sword. Reworded commment on Stunning Fist attack for Monk. Fixed various typos. 2009.02.21 - Version 1.2 completed Added an extra note for "Pick Pocket". Elaborated a bit on "Mass Dominate". Fixed various typos. 2008.12.22 - Version 1.1 completed Fixed some incorrect references to "Ned's Lucky Knuckle". Added notes for "Barbarian Shield". Fixed various typos. 2008.12.21 - Version 1.0 completed Woooooo it's done. ------------------------------------------------------------------------------- 2. My works *APP:MYW- ------------------------------------------------------------------------------- "I must believe that each generation regrets the passing of centuries-old monuments and nations that expired just before their coming. To see the look in elders' eyes when they speak in reverential tones of ancient cities, terrible generals, and the change that they affected - it plants a longing in one's heart for the unnattainable." - Maralie Fiddlebender ===============================================================================
FAQ Display Options: Printable Version | http://www.gamefaqs.com/pc/552350-icewind-dale-ii/faqs/55200 | CC-MAIN-2016-22 | refinedweb | 33,389 | 66.17 |
Hey guys,
So again, I'm trying to create one of the practice programs from my textbook without looking at the book's code. I want to test my skills...
But failure has befallen me and I've given up for tonight. Hopefully one of you can figure this one out.
So far, I have created three arrays: One for product numbers (int), one for the number of units sold (int), and another for sales (double). The directions say to display the list of products in the order of their sales from highest to lowest. The point of the exercise is to practice the selection sort algorithm.
Now, I'm not 100% sure that I did the sort algorithm correctly, so I ask that you don't point out any errors in that function unless you think that it is the cause of my problem:
Even with the dualSort function commented out (as well as the function call, of course), I cannot get the displaySales function to work. I get two linker errors:
Error 1 error LNK2019: unresolved external symbol "void __cdecl displaySales(double * const,int)" (?displaySales@@YAXQANH@Z) referenced in function _main C:\Users\Rob\Rob's Text Files\School Work\Fall 2011\Programming\Labs\Practice\Practice.obj
and
Error 2 error LNK1120: 1 unresolved externals C:\Users\Rob\Rob's Text Files\School Work\Fall 2011\Programming\Labs\Practice\Debug\Practice.exe 1
I've tried a whole bunch of things but I just don't know why I can't call this simple function. Perhaps I just need some sleep, but if anybody could give me a hand, I would very much appreciate it.
There is just one other question that I have: Why do I need to use the NUM_PROD constant as an argument in my function calls? I thought that the whole point of a global variable/constant was that you could reference it in any function. :SThere is just one other question that I have: Why do I need to use the NUM_PROD constant as an argument in my function calls? I thought that the whole point of a global variable/constant was that you could reference it in any function. :SCode://When sorting the Units Sold, also sort the Product Number, using the same index number. //Remember to properly swap the places of values in BOTH arrays. #include <iostream> using namespace std; //Function prototypes void dualSort(int [], double [], int); //highest to lowest int totalUnits(int[], int); //calc total units double totalSales(int[], int); //calc total sales void displaySales(double[], int); //TEMPORARY for testing sort function!!! //Global constant int const NUM_PROD = 9; int main() { int prodNum[NUM_PROD] = {914,915,916,917, //Product Numbers 918,919, 920, 921,922}; int units[NUM_PROD] = {842,416,127,514, //Number of units sold per month 437,269,97,492,212}; double sales[NUM_PROD] = {12.95,14.95,18.95, //total sales per month 16.95,21.95,31.95, 14.95,14.95,16.95}; cout << "Displaying non-sorted sales:" << endl; //call display sales displaySales(sales, NUM_PROD); //call sort function dualSort(prodNum, sales, NUM_PROD); cout << "Displaying sorted sales from HIGHEST to LOWEST:" << endl; displaySales(sales, NUM_PROD); return 0; } //dualSort function void dualSort(int num[],double sales[],int size) { //Hold the max values of sales double maxSale; int maxSaleIndex, startScan; for (startScan = 0; startScan < (size - 1); startScan++) { maxSaleIndex = startScan; for (int index = (startScan + 1); index < size; index++) { if (sales[index] > sales[maxSaleIndex]) { maxSale = sales[index]; maxSaleIndex = index; } } sales[maxSaleIndex] = sales[startScan]; sales[startScan] = maxSale; } } void displaySales(double sales[], double size) { for (int i = 0; i<size; i++) cout << sales[i] << " " << endl; } | https://cboard.cprogramming.com/cplusplus-programming/143110-linker-error.html | CC-MAIN-2017-39 | refinedweb | 602 | 57.3 |
Are you sure?
This action might not be possible to undo. Are you sure you want to continue?
8827
Credit For Prior Year Minimum Tax—Corporations
Attach to the corporation’s tax return.
OMB No. 1545-1257
Department of the Treasury Internal Revenue Service
Name
Employer identification number
1 2 3
Alternative minimum tax for 1994. Enter the amount from line 15 of the 1994 Form 4626 Minimum tax credit carryforward from 1994. Enter the amount from line 9 of the 1994 Form 8827 Enter any credit for fuel produced from a nonconventional source and any orphan drug credit not allowed for 1994. See instructions Add lines 1, 2, and 3 Enter the corporation’s 1995 regular income tax liability minus allowable tax credits. See instructions Enter the tentative minimum tax from line 13 of the 1995 Form 4626 Subtract line 6 from line 5. If zero or less, enter -0Minimum tax credit. Enter the smaller of line 4 or line 7. Also enter this amount on the appropriate line of the corporation’s income tax return (e.g., Form 1120, Schedule J, line 4e). If the corporation had a post-1986 ownership change or has preacquisition excess credits, see instructions Minimum tax credit carryforward to 1996. Subtract line 8 from line 4. See instructions
1 2 3 4 5 6 7
4 5
6 7 8
8 9 simpler, we would be happy to hear from you. You can write to the IRS at the address listed in the instructions for the tax return with which this form is filed.
Who Should File
Form 8827 should be filed by corporations that had: ● An AMT liability in 1994; ● A minimum tax credit carryforward from 1994 to 1995; or ● A nonconventional source fuel credit or an orphan drug credit not allowed for 1994 (see Line 3 below).
Line 8
If the corporation had a post-1986 ownership change (as defined in section 382(g)), the amount of pre-change minimum tax credits that can be applied against the corporation’s tax for any tax year ending after the ownership change may be limited. See section 383 and the related regulations. To figure the amount of the pre-change credit, the corporation must allocate the credit for the change year between the pre-change period and the post-change period. The corporation must use the same method of allocation (ratable allocation or closing-of-the books) for purposes of sections 382 and 383. See Regulations section 1.382-6 for details. Also, preacquisition excess credits of one corporation generally cannot be used to offset the tax attributable to recognized built-in gains of another corporation. See section 384 for details. If either limit applies, attach a computation of the minimum tax credit allowed. Enter that amount on line 8. Write “Sec. 383” or “Sec. 384” on the dotted line to the left of the line 8 entry space.
Specific Instructions
Line 3
Enter the sum of the orphan drug credit and the nonconventional source fuel credit not allowed for 1994 solely because of the limitations under section 28(d)(2)(B) and 29(b)(6)(B).
General Instructions
Section references are to the Internal Revenue Code.
Line 5
Enter the corporation’s 1995 regular income tax liability (as defined in section 26(b)) minus any credits allowed under Subchapter A, Part IV, subparts B, D, E, and F of the Internal Revenue Code (e.g., if you are filing Form 1120, subtract any credits on Schedule J, lines 4a through 4d, from the amount on Schedule J, line 3).
Purpose of Form
Form 8827 is used by corporations to figure the minimum tax credit, if any, for alternative minimum tax (AMT) incurred in prior tax years that began after 1986 and to figure any minimum tax credit carryforward.
Line 9
Keep a record of this amount to carry forward and use in future years.
Cat. No. 13008K
Form
8827
(1995)
Printed on recycled paper | https://www.scribd.com/document/542769/US-Internal-Revenue-Service-f8827-1995 | CC-MAIN-2018-26 | refinedweb | 662 | 62.58 |
XMonad.Layout.OneBig
Description
Provides layout named OneBig. It places one (master) window at top left corner of screen, and other (slave) windows at top
Usage
This module defines layout named OneBig. It places one (master) window at top left, and other (slave) windows at right and at bottom of master. It tries to give equal space for each slave window.
You can use this module by adding following in your
xmonad.hs:
import XMonad.Layout.OneBig
Then add layouts to your layoutHook:
myLayoutHook = OneBig (3/4) (3/4) ||| ...
In this example, master window will occupy 3/4 of screen width and 3/4 of screen height. | https://xmonad.github.io/xmonad-docs/xmonad-contrib-0.17.0.9/XMonad-Layout-OneBig.html | CC-MAIN-2022-27 | refinedweb | 107 | 75.91 |
This is your resource to discuss support topics with your peers, and learn from each other.
01-26-2011 02:16 PM
that might be it. try creating a new project using Flex Mobile Project instead and just try the swipe down on it and then push it into the playbook simulator.
also were you able to test on the simulator with this app prior to attempting the Swipe Down ?
01-26-2011 02:32 PM
01-26-2011 02:38 PM
JohnPinkerton wrote:
Yes, prior to attempting to Swipe Down it loaded fine in simulator. Still does, just the Swipe Down doesn't work.
I wouldn't be too sure. If it isn't even building a .bar file in debug mode, I'm puzzled how (and whether) it could be building a .bar file in normal mode.
Is there any chance that it's actually failing to build anything new in either case, but is installing some old .bar file that doens't have any of the swipe stuff in it? People using IDEs that they're not too familiar with often get caught by things like that, in my experience. (One reason I try to avoid them most of the time.)
(The best way to check that is to change some prominent text string in your source, rebuild, and reinstall. If you see that unique string you're definitely succeeding at building it that way... then change the string to yet another new one and try again in debug mode, being sure nothing else has changed.)
01-26-2011 02:55 PM
Yeah, I had a problem once before of my app not actually building (JRad helped me through that one!
)
Just to be sure, I've moved buttons around and changed some background colors during all this.
Would it be possible to do a majority of the imports/functions without having to do actionscript import?
I started out doing a lot in design mode for the layout, but have switched over to now working mostly in source view.
01-26-2011 04:28 PM
hey johnp,
unfortunately the one of the things that seperates AS3 structure from flex is that you have to do explicit imports. i think in flex you can use a majority of mx / spark library with namespaces and without importing anything. you get used to it after a while though - great discipline
01-26-2011 05:16 PM
So not giving up easily I've been constantly still trying to make this work.
I found this thread in which Austin claims to have SWIPE_DOWN working in Flex.
So, following imports for Flex:
import qnx.events.QNXApplicationEvent;
import qnx.system.QNXApplication;
Added this to my existing oncreationComplete function
QNXApplication.qnxApplication.addEventListener(QNX
ApplicationEvent.SWIPE_DOWN,openAddMenu);ApplicationEvent.SWIPE_DOWN,openAddMenu);
Then rather than go through all the Trace() headache, I did this:
private function openAddMenu(event:QNXApplicationEvent):void { Alert.show("HELLO SWIPE!"); }
Ran it to the PlayBook sim, drug down - BOOM there's the alert.
So it is recognizing the SWIPE DOWN action, now to just get a menu to slide down.
04-23-2011 09:57 AM - edited 04-23-2011 10:07 AM
Here is what I did.
In my Init class I added the listener. I also declared some default variables.
private var SLIDE_TIME:int = 1;
private var VISIBLE_Y:int = 100;
private function init():void {
// my screen start stuff
QNXApplication.qnxApplication.addEventListener(QNX
ApplicationEvent.SWIPE_DOWN, appMenuDisplay);ApplicationEvent.SWIPE_DOWN, appMenuDisplay);
// get the menu height
menuGroup.y = -menuGroup.height;
}
Then I created the functions to handle the event and show and hide the menu.
private function appMenuDisplay(event:QNXApplicationEvent):void { if(menuGroup.y != VISIBLE_Y){ showMenu(); } else { hideMenu(); } } public function showMenu():void { Tweener.addTween(menuGroup, {y:VISIBLE_Y, time:SLIDE_TIME, transition:"linear"}); }
public function hideMenu():void { Tweener.addTween(menuGroup, {y:-menuGroup.y, time:SLIDE_TIME, transition:"linear"}); }
My menu group is a Spark VGroup, for those of you who don't want to do pure action script.
In other examples, people were trying to catch whether or not the mouse was swiped up or down from the top bezel. With the SWIPE_DOWN call, it doesn't matter if the mouse goes up or down. I tested this with the Browser on the simulator. It doesn't matter if you swipe up or down from the top bezel, it just hides or closes the Browser based on if it was show or hidden before.
Edit....
Just realized after I posted this, that the VISIBLE_Y should be different than the height of the menu, but basically, the menu height is needed to hide the menu. So, the initial menu Y should be the negative of the height. And the Hide should be the negative of the height.
// get the menu height
menu.y = -menu.height; | https://supportforums.blackberry.com/t5/Adobe-AIR-Development/Swipe-Down-Event/m-p/757287 | CC-MAIN-2017-13 | refinedweb | 796 | 65.22 |
On 2009-04-07 16:05, P.J. Eby wrote: > At 02:30 PM 4/7/2009 +0200, M.-A. Lemburg wrote: >> >> Wouldn't it be better to stick with a simpler approach and look for >> >> "__pkg__.py" files to detect namespace packages using that O(1) >> check ? >> > >> > Again - this wouldn't be O(1). More importantly, it breaks system >> > packages, which now again have to deal with the conflicting file names >> > if they want to install all portions into a single location. >> >> True, but since that means changing the package infrastructure, I think >> it's fair to ask distributors who want to use that approach to also take >> care of looking into the __pkg__.py files and merging them if >> necessary. >> >> Most of the time the __pkg__.py files will be empty, so that's not >> really much to ask for. > > This means your proposal actually doesn't add any benefit over the > status quo, where you can have an __init__.py that does nothing but > declare the package a namespace. We already have that now, and it > doesn't need a new filename. Why would we expect OS vendors to start > supporting it, just because we name it __pkg__.py instead of __init__.py? I lost you there. Since when do we support namespace packages in core Python without the need to add some form of magic support code to __init__.py ? My suggestion basically builds on the same idea as Martin's PEP, but uses a single __pkg__.py file as opposed to some non-Python file yaddayadda.pkg. Here's a copy of the proposal, with some additional discussion bullets added: """ Alternative Approach: --------------------- Wouldn't it be better to stick with a simpler approach and look for "__pkg__.py" files to detect namespace packages using that O(1) check ? This would also avoid any issues you'd otherwise run into if you want to maintain this scheme in an importer that doesn't have access to a list of files in a package directory, but is well capable for the checking the existence of a file. Mechanism: ---------- If the import mechanism finds a matching namespace package (a directory with a __pkg__.py file), it then goes into namespace package scan mode and scans the complete sys.path for more occurrences of the same namespace package. The import loads all __pkg__.py files of matching namespace packages having the same package name during the search. One of the namespace packages, the defining namespace package, will have to include a __init__.py file. After having scanned all matching namespace packages and loading the __pkg__.py files in the order of the search, the import mechanism then sets the packages .__path__ attribute to include all namespace package directories found on sys.path and finally executes the __init__.py file. (Please let me know if the above is not clear, I will then try to follow up on it.) Discussion: ----------- The above mechanism allows the same kind of flexibility we already have with the existing normal __init__.py mechanism. * It doesn't add yet another .pth-style sys.path extension (which are difficult to manage in installations). * It always uses the same naive sys.path search strategy. The strategy is not determined by some file contents. * The search is only done once - on the first import of the package. * It's possible to have a defining package dir and add-one package dirs. * The search does not depend on the order of directories in sys.path. There's no requirement for the defining package to appear first on sys.path. * Namespace packages are easy to recognize by testing for a single resource. * There's no conflict with existing files using the .pkg extension such as Mac OS X installer files or Solaris packages. * Namespace __pkg__.py modules can provide extra meta-information, logging, etc. to simplify debugging namespace package setups. * It's possible to freeze such setups, to put them into ZIP files, or only have parts of it in a ZIP file and the other parts in the file-system. * There's no need for a package directory scan, allowing the mechanism to also work with resources that do not permit to (easily and efficiently) scan the contents of a package "directory", e.g. frozen packages or imports from web resources. Caveats: * Changes to sys.path will not result in an automatic rescan for additional namespace packages, if the package was already loaded. However, we could have a function to make such a rescan explicit. """ -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Apr 07 | https://mail.python.org/pipermail/python-list/2009-April/532176.html | CC-MAIN-2016-30 | refinedweb | 771 | 67.04 |
Asked by:
Combination without repitition
Question
Hi guys
I am hoping someone can help me with this PowerShell script.
The script lists the unique combination of 26 letter alphabet ($List) based on 3 letter ($k)
If I keep the $List shorter like A to K for instance, it works. But with 26 letter it doesn't work
Thank you
#########
$List = "A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q","R","S","T","U","V","W","X","Y","Z"
$k = 3
Add-Type @"
public class Shift {
public static int Right(int x, int count) { return x >> count; }
public static uint Right(uint x, int count) { return x >> count; }
public static long Right(long x, int count) { return x >> count; }
public static ulong Right(ulong x, int count) { return x >> count; }
public static int Left(int x, int count) { return x << count; }
public static uint Left(uint x, int count) { return x << count; }
public static long Left(long x, int count) { return x << count; }
public static ulong Left(ulong x, int count) { return x << count; }
}
"@
function CombinationWithoutRepetition ([int]$k, $List)
{
Function IsNBits ([long]$value, $k, $length)
{
$count = 0
for ($i = 0 ; $i -le $length ; $i++)
{
if ($value -band 1)
{
$count++
}
$value = [shift]::Right($value,1)
}
if ($count -eq $k)
{
return $true
}
else
{
return $false
}
}
Function BitsToArray ([long]$value, $List)
{
$res = @()
for ($i = 0 ; $i -le $List.length ; $i++)
{
if ($value -band 1)
{
$res += $List[$i]
}
$value = [shift]::Right($value,1)
}
return ,$res
}
[long]$i = [Math]::Pow(2, $List.Length)
$res = @()
for ([long]$value=0 ; $value -le $i ; $value++)
{
if ((IsNBits $value $k $List.Length) -eq $true)
{
#write-host $value
$res += ,(BitsToArray $value $List)
}
}
return ,$res
}
Clear-Host
$res = CombinationWithoutRepetition $k $List
$res.count
$res | Sort-Object | % { $_ -join ','}
Saturday, December 9, 2017 9:44 PM
- Moved by Bill_Stewart Friday, January 26, 2018 3:23 PM This is not "do my homework for me" forumn
All replies
- You fail to say what you are trying to accomplish. The code you posted does not work at all. It won't compile.
\_(ツ)_/Saturday, December 9, 2017 9:50 PM
This generates all combinations of a set selected by 'N'.
Function Get-Combinations { <# .Synopsis Generates combinations from an array of multi-dimensional arrays .Description Get-Combinations is a recursive function designed to return combination sets from defined arrays. All arrays are passed as a single parameter. .Parameter Object The multi-dimensional input array. All elements within the array are cast to System.String. .Parameter Seperator Joins each element using the specified character. .Parameter CurIndex The current outer-array index, used in recursion. .Parameter Return A composite return value, used in recursion. .Example Get-Combinations @($Array1, $Array2, $Array3) .Example Get-Combinations @("site", @("web", "app"), @("01", "02")) #> Param( [Object[]]$Object, [String]$Seperator, [UInt32]$CurIndex = 0, [String]$Return = "" ) $MaxIndex = $Object.Count - 1 $Object[$CurIndex] | ForEach-Object { [Array]$NewReturn = "$($Return)$($Seperator)$($_)".Trim($Seperator) If ($CurIndex -lt $MaxIndex) { $NewReturn = Get-Combinations $Object -CurIndex ($CurIndex + 1) -Return $NewReturn } $NewReturn } } $CharacterSet = ([Int][Char]"A")..([Int][Char]"Z") | ForEach-Object { [Char]$_ } Get-Combinations @($CharacterSet, $CharacterSet, $CharacterSet)
\_(ツ)_/Saturday, December 9, 2017 10:06 PM
Thanks for the reply :)
Your code result with repetition of letters, like AAA
I would like to have no repetition of any letter, like ABC, ABD etc
If you shorten $List on my screen like ($List = "A","B","C","D","E","F","G","H","I","J","K") it works
best regardsSaturday, December 9, 2017 10:21 PM
Use the code I posted and filter out all combinations with duplicate letters. This is the fastest way.
In any case your code should be a recursive function. The concept of "groups of N" is a recursive concept.
You can also edit the function to test each group for duplicate characters and discard it.
\_(ツ)_/Saturday, December 9, 2017 10:27 PM
Thanks jrv
Sorry my powershell skills are not that good. I have taken that script from internet.
Can you give me example of how I can filter out duplicate letters?
Or edit the function to test each group for duplicates?
thank youSaturday, December 9, 2017 10:35 PM
I this a homework question?
Hint:
PS D:\scripts> (('ABC').ToCharArray()|Group).Count -lt 3 False PS D:\scripts> (('ABA').ToCharArray()|Group).Count -lt 3 True PS D:\scripts>
\_(ツ)_/Saturday, December 9, 2017 10:38 PM
No, it's not :)
Thank you jvc
I will try that. Hope I can work that out :)
best regardsSaturday, December 9, 2017 10:54 PM
Here is an easier uniqueness rule:
(($_).ToCharArray() | Sort-Object -unique).Count -eq 3
\_(ツ)_/Saturday, December 9, 2017 10:58 PM
Hi jrv
Sorry, I tried but couldn't work out how I use that last uniqueness rule. I said, I am not very good with powershell :)
Can you please show me how/where I use it in your script?
best regardsSunday, December 10, 2017 1:27 AM
Can you first explain the purpose of this exercise?
\_(ツ)_/Sunday, December 10, 2017 1:32 AM
Sure jrv
I want to list all unique values of list selected by N so I can do a pattern analysis of some old text
In my case, position of letters doesn't matter. i.e ABC = BCA, so BCA shouldn't be listed.
26 character letters combination by N=3 should list, (28*27*26/3*2*1)=3276 results.
I should also be able to change N value
Best regardsSunday, December 10, 2017 3:40 PM
That does not explain the purpose of this. Why do you need to do this.
To get you results just add a filter for each rule. You can alos crate a set of all ascii codes and define a formula that generates the pure combinations or permutations of N.
\_(ツ)_/Sunday, December 10, 2017 6:54 PM
Hi jrv
Sorry but I don't know how to do that.
if you don't mind, can you please show me in the script?
thank youSunday, December 10, 2017 9:42 PM
Why do you need to do this?
\_(ツ)_/Sunday, December 10, 2017 9:47 PM
I want to analyse text, to see if there is any pattern how the letters used. why is this important?
I worked out your filter. If I do below, it sorts out the list but not exactly as I want it. I still get duplicates like ABC, BCA
Get-Combinations@($CharacterSet,$CharacterSet,$CharacterSet) |Where-Object{(($_).ToCharArray() |Sort-Object-unique).Count -eq3}
Can you show me how I can get all unique values?
thank you
Sunday, December 10, 2017 10:04 PM
To remove duplicates you need to sort each string and add it to a dictionary as the key. Duplicates will be rejected.
This seems to be a homework lesson as others have asked the almost identical question in other forums.
If you just need the results then here is a full solution:
Enter your data and the results will be calculated. There are links to the underlying code.
\_(ツ)_/Sunday, December 10, 2017 10:09 PM
thanks jrv
I have been searching the forums but couldn't find exactly what I was looking for.
That link you provided will help me. Thank you very much.
When I have time, I will have a go at creating powershell script myself for offline use :)
best reagrdsSunday, December 10, 2017 10:48 PM | https://social.microsoft.com/Forums/en-US/92cf0dfb-68d8-4e7e-8571-e9b769138c86/combination-without-repitition?forum=Offtopic | CC-MAIN-2022-27 | refinedweb | 1,249 | 63.29 |
Kevin Horan & Thomas Girke
Last update:
October 13, 2015
ChemmineOB provides an R interface to a subset of
cheminformatics functionalities implemented by the OpelBabel C++ project
(O'Boyle, Morley, and Hutchison, 2008; O'Boyle, Banck, James, Morley, Vandermeersch, and Hutchison, 2011).
(Cao, Charisi, Cheng, Jiang, and Girke, 2008; Backman, Cao, and Girke, 2011; Wang, Backman, Horan, and Gir,...)
result = stringp() OBDescriptor_GetStringValue(... , result$cast()) stringValue = result$value()
There are still many special cases however. The SWIG documentation can help, as well as browsing the generated R code in R/ChemmineOB.R.
sessionInfo()
R version 3.2.2 (2015-08-14) Platform: x86_64-pc-linux-gnu (64-bit) Running under: Ubuntu 14.04citations_1.0.6 ChemmineOB_1.8.0 knitrBootstrap_0.9.0
loaded via a namespace (and not attached):
[1] Rcpp_0.12.1 lubridate_1.3.3 XML_3.98-1.3
[4] digest_0.6.8 bitops_1.0-6 R6_2.1.1
[7] plyr_1.8.3 formatR_1.2.1 magrittr_1.5
[10] evaluate_0.8 httr_1.0.0 bibtex_0.4.0
[13] stringi_0.5-5 zlibbioc_1.16.0 RJSONIO_1.3-0
[16] tools_3.2.2 stringr_1.0.0 RefManageR_0.8.63 [19] RCurl_1.95-4.7 markdown_0.7.7 memoise_0.2.1
[22] knitr_1.11
This software was developed with funding from the National Science Foundation: ABI-0957099, 2010-0520325 and IGERT-0504249.
[1] T. W. Backman. "ChemMine tools: an online service for analyzing and clustering small molecules". In: Nucleic Acids Res 39.Web Server issue (Jul. 2011), pp. 486-491. URL:.
[2] Y. Cao. "ChemmineR: a compound mining framework for R". In: Bioinformatics 24.15 (Aug. 2008), pp. 1733-1734. URL:.
[3] N. O'Boyle. "Open Babel: An open chemical toolbox". In: Journal of Cheminformatics 3.1 (2011), p. 33. ISSN: 1758-2946. URL:.
[4] N. O'Boyle. "Pybel: a Python wrapper for the OpenBabel cheminformatics toolkit". In: Chemistry Central Journal 2.1 (2008), p. 5. ISSN: 1752-153X. URL:.
[5] Y. Wang. "fmcsR: Mismatch Tolerant Maximum Common Substructure Searching in R". In: Bioinformatics (Aug. 2013). URL:. | http://bioconductor.org/packages/release/bioc/vignettes/ChemmineOB/inst/doc/ChemmineOB.html | CC-MAIN-2015-48 | refinedweb | 332 | 54.08 |
#include <sys/conf.h> #include <sys/ddi.h> #include <sys/sunddi.h> int ddi_dev_regsize(dev_info_t *dip, uint_t rnumber, off_t *resultp);
Solaris DDI specific (Solaris DDI).
A pointer to the device's dev_info structure.
The ordinal register number. Device registers are associated with a dev_info and are enumerated in arbitrary sets from 0 on up. The number of registers a device has can be determined from a call to ddi_dev_nregs(9F).
Pointer to an integer that holds the size, in bytes, of the described register (if it exists).
The ddi_dev_regsize() function returns the size, in bytes, of the device register specified by dip and rnumber. This is useful when, for example, one of the registers is a frame buffer with a varying size known only to its proms.
The ddi_dev_regsize() function returns:
A successful return. The size, in bytes, of the specified register, is set in resultp.
An invalid (nonexistent) register number was specified.
The ddi_dev_regsize() function can be called from user, interrupt, or kernel context.
ddi_dev_nintrs(9F), ddi_dev_nregs(9F)
Writing Device Drivers for Oracle Solaris 11.2 | https://docs.oracle.com/cd/E36784_01/html/E36886/ddi-dev-regsize-9f.html | CC-MAIN-2021-21 | refinedweb | 177 | 51.85 |
The Apache Jackrabbit community is pleased to announce the release of
Apache Jackrabbit 2.0 alpha8. The release is available for download
at:
See the full release notes below for details about this release.
Release Notes -- Apache Jackrabbit -- Version 2.0-alpha8
JCR 2.0 feature completeness
----------------------------
The following 43 top level JCR 2.0 implementation issues are being tracked in
the Jackrabbit issue tracker.
Open (3 issues)
[JCR-1590] JSR 283: Locking
[JCR-2085] test case (TCK) maintenance for JCR 2.0
[JCR-2208] update tests so that both Query.XPATH and Query:SQL are ...
Resolved (40 issues)
[JCR-1564] JSR 283 namespace handling
[JCR-1565] JSR 283 lifecycle management
[JCR-1588] JSR 283: Access Control
-1712] JSR 283: | http://mail-archives.apache.org/mod_mbox/www-announce/200908.mbox/%3C510143ac0908190241haf4a91dx477039bc827c36f6@mail.gmail.com%3E | CC-MAIN-2015-32 | refinedweb | 120 | 69.18 |
The .NET Stacks #21: Azure Static Web Apps, .NET 6 feedback, and more!
This week, we take a look at Azure Static Web Apps, dotnet-monitor, and more.
Happy Monday! It looks like my pun from last week received some attention (and some new subscribers). I’m glad you all joined the newsletter before it gets code outside.
- Catch up on Azure Static Web Apps
- Provide input on .NET 6
- Last week in the .NET world
⛅ Catch up on Azure Static Web Apps
This week, Anthony Chu joined the ASP.NET community standup to talk about Azure Static Web Apps. Azure Static Web Apps has been at the top of my “I really need to take a look at this” list, so the timing was right. 😎
Static sites are definitely the new hotness, and have been for awhile. If content on your site doesn’t change that often, you’ll just need to serve up some static HTML files. For example, on my site I utilize a static site generator called Jekyll and host the content on GitHub Pages—this helps my site load super fast and with little overhead. Why introduce database overhead and the like if you don’t need it? (If it wasn’t clear I’m talking about you, WordPress.)
Many times, though, you’ll also need to call off to an API eventually—this is quite common with SPAs like Vue, Angular, React, and now Blazor: you have a super-lightweight front-end that calls off to some APIs. A common architecture is serving up files statically with a serverless backend, such as Azure Functions.
Enter Azure Static Web Apps. Introduced at Ignite this year, Azure Static Web Apps allow you to leverage this architecture with one easy solution. If you’re good with GitHub lock-in and are looking for a static hosting solution, it’s worth a look.
You can check out the docs for the full treatment, but Azure Static Web Apps offers web hosting (duh), native Azure Functions support, GitHub triggers over GitHub Actions, free renewable SSL certificates, custom domains, and a bunch of auth integrations, and fallback routes.
My favorite feature is GitHub PR testing sites. Once a PR kicks off, a GitHub Action executes and creates a temporary test staging site to view changes (it goes away once the PR is merged or discarded). This is a wonderful tool for you and others to test any changes to your app. Are you working on a complicated PR and want to have testers put your app through its paces? Send them the staging link.
It’s nice to have all this integrated in Azure, especially if that’s where you do a lot of business. But why not just use GitHub Pages if you don’t have a lot of complexity? That’s a fair question. With the just announced Blazor Web Assembly support, Azure Static Web Apps is the clear winner. With Azure Static Web Apps, the GitHub Actions step becomes aware of a Blazor WebAssembly app and can do Blazor-specific precompression steps. Otherwise, you’d have to integrate an additional workflow to your app. With Azure Static Web Apps, it’s available out of the box.
👨💻 Provide input on .NET 6
With the general release of .NET 5 not even a month away, Microsoft is setting its sights on .NET 6.
Check out this GitHub issue to provide feedback on what features you want to see. Unsurprisingly, Blazor AoT compilation is leading the charge. As a whole, the ASP.NET folks have noted that speeding up the developer feedback loop is a big priority for .NET 6.
🎂 Happy birthday, Mom
Before we get into the links: I quickly wanted to wish my mom a very Happy Birthday, as she turns … 30? I love you, Mom. And to think I would never get you back for all the embarrassing moments.
Here’s a picture of us when I was young. I think she’s trying to convince me to use Kubernetes.
🌎 Last week in the .NET world
🔥 The Top 3
- Michael Shpilt writes about best practices to keep a .NET app’s memory healthy.
- Patrick Smacchia talks about the new ‘and’ or ‘not’ C# 9 keywords.
- Andrew Lock continues his series on k8s by adding health checks with liveness, readiness, and startup probes.
📢 Announcements
- David Pine shares an amazing Azure Cosmos DB Repository .NET SDK.
- Tara Overfield runs through the October .NET Framework cumulative update.
📅 Community and events
- The .NET Conf has announced their speakers.
- The GitHub Docs are now open source.
- ESLint offers an interesting perspective on paying contributors.
- The .NET Docs Show talks to Julie Lerman.
- In the .NET community standups: Languages & Runtime talks about source generators, Machine Learning talks about contributing to ML.NET, and ASP.NET talks about Blazor support for Azure Static Web Apps.
😎 ASP.NET Core / Blazor
- Marinko Spasojevic writes about refresh tokens with Blazor WASM and ASP.NET Core Web API.
- Jon Hilton prerenders Blazor WASM with .NET 5, works with Blazor CSS isolation, and updates the HTML head from Blazor components.
- Chris Sainty builds a simple tooltip component in Blazor.
- Peter Vogel works with the Telerik UI for Blazor DataGrid.
- Bogdan Chorniy talks about Blazor form validation internals.
- Khalid Abuhakmeh discusses ASP.NET Razor TagHelpers.
- At the CodeMaze blog, how to publish Angular with ASP.NET Core.
⛅ The cloud
- Derek Legenzoff talks about how you can add search to an app with the new Azure Cognitive Search SDK.
- Damien Bowden uses Key Vault certs with Microsoft.Identity.Web and ASP.NET Core apps.
- Gunnar Peipman runs ASP.NET Core 5 RC apps on Azure App Service.
- Chris Noring deploys a Blazor app with Azure Static Web Apps.
📔 C#
- Matthew Jones works on code blocks, basic statements, and loops in C#, talks about operators in C#, and also works with casting, conversion, and parsing in C#.
- Jason Gaylord exports Bitly links using C#.
- Nick Randolph debugs C# 9 source generators.
- Jennifer Marsh does web scraping with C#.
- Anthony Giretti explores the System.Net.Http.Json namespace in .NET 5.
- Ian Griffiths warns against misusing ‘as’ in C# 8.
- Steve Gordon writes about
System.Threading.Channels.UnboundedChannel<T>(part 1, part 2).
- Kirtesh Shah discusses init-only properties in C# 9.
- Asma Khalid builds web URL query parameters in C#.
- Josef Ottosson tidies up HttpClient usage, and throttles outgoing HTTP requests in C#.
- Bruno Sonnino mocks non-virtual methods of a class.
📗 F#
- Jeremie Chassaing works with applicative computation expressions in F# in two posts.
- Alican Demirtas works with React components in F#.
- James Randall talks about creating FableTrek.
- Callum Linington gets a Node.js experience in F#.
🔧 Tools
- Dave Brock (ahem) starts a series on Docker for .NET developers.
- Mark Downie discusses cross-platform managed memory dump debugging in Visual Studio, collects dumps with dotnet-monitor, and finds the address of an object in Visual Studio.
- John Petersen discusses interactive unit testing with .NET Core and VS Code.
- Franco Tiveron deploys a .NET container with Azure DevOps.
- Jason Gaylord uses SpecFlow, nUnit and .NET Core in VSCode.
- Scott Hanselman uses autocomplete at the command line for dotnet, git, winget, and more, and also works with panes in Windows Terminal.
- Thomas Ardal compares .NET mocking libraries.
- Mark Seemann doesn’t squash his commits.
- Jon Smith creates a .NET Core global tool.
- Ron Powell asks: do I really need Kubernetes?
- Rick Strahl creates custom .NET project types with dotnet new project templates.
- Michał Białecki configures relationships in Entity Framework Core 5.
📱 Xamarin
- James Montemagno previews Xamarin.Essentials 1.6.
- Jean-Marie Alfonsi works with shadows and MaterialFrame blur.
- Steven Thewissen provides negative numeric input on Xamarin.Forms iOS.
- Nigel Ferrissey writes about when to Use Xamarin.Forms vs Xamarin Native.
🎤 Podcasts
- Scott Hanselman talks about normalizing failure with Susana Benavidez.
- .NET Rocks talks about GitHub Codespaces with Anthony van der Hoorn.
- The 6-Figure Developer podcast talks about .NET MAUI with Auri Rahimzadeh.
- The .NET Core Show talks Azure and live conferences with Andy Morrell.
🎥 Videos
- The ON.NET show discusses .NET microservices with DAPR, default literal expressions in C#, and C# 9 language features.
- The Xamarin Show talks about binding tools for Swift.
- Data Exposed discusses Azure SQL capacity planning. | https://www.daveabrock.com/2020/10/16/dotnet-stacks-21/ | CC-MAIN-2021-39 | refinedweb | 1,378 | 69.58 |
When I work on windows xp and I programmed my keylogger all it's work fine, but when I passed to windows 7 I get dozen of difficults, because the victim must click on Run As Administrator before, for that reason the keylogger doesn't work... without forget the difficults when sending the log file to Ftp server
My question now, is there a way to hack the UAC( User Access Control ) ?
I have write a code in order to get the handle of the UAC Window, but it doesn't work( maybe, because the UAC Windows freezes all )
Here's the simple code :
void PressKey( void ) __attribute__( ( destructor ) ); main( ) { ShellExecute( NULL,"runas","",NULL,NULL,SW_SHOW ); } void PressKey( void ) { keybd_event( VK_RETURN,( char )VK_RETURN,0x0,0x0 ); keybd_event( VK_RETURN,( char )VK_RETURN,KEYEVENTF_KEYUP,0x0 ); MessageBox( NULL,"Pressed","Info",0x0 ): }
I can do that ... why ???
Please give me a solution or just give me some steps that i will follow... ( source codes are welcome )
Thanks in advance ...
Edit : sorry, I forget the code here is :
#include <windows.h> #include <stdio.h> main() { while(true){ HWND hUAC = FindWindow(NULL,"Côntrole de compte d'utilisateur"); if( hUAC ) { MessageBox(0,"I got it !!","Yes !!!",0); break; } } }
I can't get the handle of the window !
Edited by Hamza_For_Ever_Islam, 24 August 2013 - 12:17 PM. | http://www.rohitab.com/discuss/topic/40305-qhacking-uac-on-windows-7/ | CC-MAIN-2022-21 | refinedweb | 217 | 68.91 |
OpenCV can be used to draw different collection graphics, including lines, rectangles, circles, ellipses, polygons and adding text to images. The drawing functions used include CV2 line(),cv2.circle(),cv2.rectangle() ,cv2.putText() and so on.
These drawing functions need to set parameters, such as:
• img: the image you want to draw.
• color: the color of the shape. For example: (0,255) a blue tuple is required, for example: (0,255). For grayscale images, you only need to pass in grayscale values.
• thickness: the thickness of the line. If a closed figure is set to - 1, the figure will be filled. The default value is 1
• linetype: type of line, 8 connection, anti aliasing, etc. The default is 8 connections. cv2.LINE_AA is anti aliasing, so it will look very smooth.
1. Draw lines
To draw a line, you just need to tell the function the start and end of the line. We will draw a blue line from the top left to the bottom right.
import numpy as np import cv2 img = np.zeros((512,512,3), np.uint8) # Create a black background # np. The zeros () function returns an array filled with 0 for a given shape and type # np. Zeros ((512512,3) constructs a 512 * 512 Numpy array and allocates three color spaces at the same time cv2.line(img,(0,0),(511,511),(0,255,0),5) # Specify two endpoints and draw a green line with 5 pixels
2. Draw a rectangle
To draw a rectangle, you need to tell the coordinates of the top left and bottom right vertices of the function. This time we will draw a blue rectangle in the upper right corner of the image.
cv2.rectangle(img,(384,0),(510,128),(255,0,0),3)
3. Draw a circle
To draw a circle, you only need to specify the coordinates of the center point of the circle and the size of the radius. We draw a circle in the rectangle above.
cv2.circle(img,(447,63), 63, (0,0,255), -1) #-1 indicates the fill color
4. Draw an ellipse
Drawing an ellipse is complicated. We need to input more parameters. One parameter is the position coordinate of the center point.
The next parameter is the length of the major and minor axes. The angle at which an ellipse rotates counterclockwise. The starting angle and ending angle of the elliptical arc in a clockwise direction. If it is 0 to 360, it is the whole ellipse. The following example is to draw an ellipse in the center of the picture.
cv2.ellipse(img,(256,256),(100,50),0,0,360,255,-1)
Draw half an ellipse
cv2.ellipse(img,(256,256),(100,50),0,0,180,255,-1)
5. Draw polygon
To draw a polygon, you need to point out the coordinates of each vertex. Use the coordinates of these points to build an array whose size is equal to the number of rows X1X2. The number of rows is the number of points. The data type of this array must be int32. Here, draw a white polygon with four vertices.
pts = np.array([[10,5],[20,30],[70,20],[50,10]], np.int32) pts = pts.reshape((-1,1,2)) cv2.polylines(img,[pts],True,(0,255,255))
5. Add text to the picture
To draw text on a picture, you need to set the following parameters:
• the text you want to draw
• where you want to draw
• font type (find supported fonts by looking at the documentation of cv2.putText())
• font size
• general attributes of text, such as color, thickness, type of line, etc. For a better look, it is recommended to use linetype = CV2 LINE_ AA.
Draw a red OpenCV on the image.
font = cv2.FONT_HERSHEY_SIMPLEX cv2.putText(img,'OpenCV',(10,500), font, 4,(255,255,255),2,cv2.LINE_AA)
import numpy as np import cv2 img = np.zeros((512,512,3), np.uint8) #Draw a straight line cv2.line(img,(0,0),(511,511),(255,0,0),5) #Draw rectangle cv2.rectangle(img,(384,0),(510,128),(0,255,0),3) #Draw a circle cv2.circle(img,(447,63), 63, (0,0,255), -1) #Draw an ellipse cv2.ellipse(img,(256,256),(100,50),0,0,180,255,-1) #Draw polygon pts = np.array([[10,5],[20,30],[70,20],[50,10]], np.int32) pts = pts.reshape((-1,1,2)) cv2.polylines(img,[pts],True,(255,255,255)) #Add text to the picture font = cv2.FONT_HERSHEY_SIMPLEX cv2.putText(img,'OpenCV',(10,500), font, 4,(0,0,255),2,cv2.LINE_AA) #Result presentation picture = 'example' cv2.namedWindow(picture) cv2.imshow(picture, img) cv2.waitKey(0) cv2.destroyWindow(picture) | https://algorithm.zone/blogs/python-opencv-drawing-function.html | CC-MAIN-2022-21 | refinedweb | 778 | 68.87 |
HackerRank's Nested Lists problem can be solved in many ways. One of the solutions that you might find in the discussion area of the problem is the following:
marksheet = [] for _ in range(0,int(input())): marksheet.append([input(), float(input())]) second_highest = sorted(list(set([marks for name, marks in marksheet])))[1] print('\n'.join([a for a,b in sorted(marksheet) if b == second_highest]))
However, there is a problem there. Do you see it? This solution performs 2 sorts and then it conducts a linear search.
We can do better than that. Here is a solution that sorts 1 time, ends the operation sooner, and performs a binary search.
from bisect import bisect, bisect_left from heapq import heappush, heappop n = 2 scores_names = [] scores_sorted = [] scores_names_sorted = [] ordinal_map = {} seen = set() enum = 1 if __name__ == '__main__': for _ in range(int(input())): name = input() score = float(input()) heappush(scores_names, (score, name)) for i in range(len(scores_names)): score_name = heappop(scores_names) scores_sorted.append(score_name[0]) scores_names_sorted.append(score_name) if score_name[0] not in seen: if enum > n: break seen.add(score_name[0]) ordinal_map[enum] = score_name[0] enum += 1 low = bisect_left(scores_sorted, ordinal_map[n]) high = bisect(scores_sorted, ordinal_map[n]) for i in range(low, high): print(scores_names_sorted[i][1])
It's a little bit more code and it would be an overkill for simple things. However, for bigger problems, this would perform drastically better.
Feel free to suggest an improvement.
Discussion (1)
My code... more Pynthonic... | https://dev.to/rmelikov/hacker-rank-s-nested-lists-problem-7e5 | CC-MAIN-2021-25 | refinedweb | 242 | 57.06 |
sorry, typo.... 'module' variable used below should be 'book', per the name declared near the start. Troy A. Griffitts wrote: > I'll leave the question alone as to the value of a relational database > for this data over using the SWORD API. > > You can do this a few ways depending on your familiarity of tools. > > If you can process raw text files, you can use of one the SWORD provided > export utilities to produce plain text data from a SWORD module: > mod2osis, mod2imp. > > If you are familiar with programming you can use your favorite > programming language and the SWORD bindings for such with a simple loop > like: > > #include <swmgr.h> > #include <swmodule.h> > using namespace sword; > > SWMgr library; > SWModule &book = *(library.getModuleByName("KJV")); > const char *sql = "insert into bookdata (bookid, entrykey, entrydata) > values (?, ?, ?)"; > // "prepare" your sql statement > for (book = TOP; !book.Error(); book++) { > // sqlStatement.bind(1, "KJV"); > // sqlStatement.bind(2, module.KeyText()); > // sqlStatement.bind(3, module.getRawEntry()); > // sqlStatement.execute(); > } > //sqlStatement.commit(); > > > Hope this helps. Please consider using / contributing to the usefulness > of the API itself to meet your purposes. I'm sure your additions would > add to the usefulness of the project for many others after you. > > -Troy. > > > > > > > > _______________________________________________ > sword-devel mailing list: sword-devel at crosswire.org > > Instructions to unsubscribe/change your settings at above page | http://www.crosswire.org/pipermail/sword-devel/2006-November/024631.html | CC-MAIN-2016-36 | refinedweb | 217 | 58.99 |
filtered pixel color at normalized coordinates (u, v).
Coordinates
u and
v go from 0.0 to 1.0, just like UV coordinates in meshes.
If coordinates are out of bounds (larger than 1.0 or smaller than 0.0),
they will be clamped or repeated based on the texture's wrap mode.
Texture coordinates start at lower left corner. UV of (0,0) lands exactly on the bottom left texel; and UV of ((width-1)/width, (height-1)/height) lands exactly on the top right texel.
Returned pixel color is bilinearly filtered.
The texture must have the read/write enabled flag set in the texture import settings, otherwise this function will fail. GetPixelBilinear is not available on Textures using Crunch texture compression.
See Also: GetPixel.
using UnityEngine;
public class Example : MonoBehaviour { // "Warp" a texture by squashing its pixels to one side. // This involves sampling the image at non-integer pixel // positions to ensure a smooth effect.
// Source image. public Texture2D sourceTex;
// Amount of "warping". public float warpFactor = 1.0f;
Texture2D destTex; Color[] destPix;
void Start() { // Set up a new texture with the same dimensions as the original. destTex = new Texture2D(sourceTex.width, sourceTex.height); destPix = new Color[destTex.width * destTex.height];
// For each pixel in the destination texture... for (var y = 0; y < destTex.height; y++) { for (var x = 0; x < destTex.width; x++) { // Calculate the fraction of the way across the image // that this pixel positon corresponds to. float xFrac = x * 1.0f / (destTex.width - 1); float yFrac = y * 1.0f / (destTex.height - 1);
// Take the fractions (0..1)and raise them to a power to apply // the distortion. float warpXFrac = Mathf.Pow(xFrac, warpFactor); float warpYFrac = Mathf.Pow(yFrac, warpFactor);
// Get the non-integer pixel positions using GetPixelBilinear. destPix[y * destTex.width + x] = sourceTex.GetPixelBilinear(warpXFrac, warpYFrac); } }
// Copy the pixel data to the destination texture and apply the change. destTex.SetPixels(destPix); destTex.Apply();
// Set our object's texture to the newly warped image. GetComponent<Renderer>().material.mainTexture = destTex; } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/2018.3/Documentation/ScriptReference/Texture2D.GetPixelBilinear.html | CC-MAIN-2019-43 | refinedweb | 343 | 54.59 |
I'm pretty new with multithreading and I was trying to increment a shared counter whithout using global variables, my goal is try to maximize the concurrency among the different threads and increment the variable until a number I give in arguments... Sorry if is a lame question, but I would like a help here, when I compile my code and run it i get a segmentation fault... I think the error is in the variable count that I create and the shared counter!
#include <stdio.h>
#include <pthread.h>
pthread_mutex_t mutex;
typedef struct {
long *cnt; /* pointer to shared counter */
long n; /* no of times to increment */
int nr;
pthread_t id; /* application-specific thread-id */
} targ_t;
void *sfun(void *arg) {
targ_t *est = (targ_t *) arg;
here:
pthread_mutex_lock(&mutex);
(*(est->cnt))++;
pthread_mutex_unlock(&mutex);
if(*(est->cnt)<est->n)
goto here;
return NULL;
}
int main(int argc, char *argv[]){
targ_t real[3];
int c=0;
long count=0;
real[0].cnt=&count;
pthread_mutex_init(&mutex, NULL);
for(c=0;c<3;c++){
real[c].n=atoi(argv[1]);
real[c].nr=c+1;
pthread_create(&real[c].id,NULL,&sfun,&real[c]);
}
for(c=0;c<3;c++){
pthread_join(real[c].id,NULL);
}
pthread_mutex_destroy(&mutex);
printf("OVERALL %lu\n", count);
return 0;
}
There are a number of problems identified in the comments:
here:and a
goto here;is not a particularly good idea. There are occasions (some, but not many, occasions) when it is appropriate to use
goto— this is not one of those rare occasions.
argv[1]to convert; could it be that you forgot to pass that argument?
real[0].cntbut you do not initialize
real[1].cntor
real[2].cnt, so those threads are accessing who knows what memory — it might be that they're using null pointers, or they might be pointers to anywhere in memory, allocated or not, aligned or not, writable or not.
<stdlib.h>
*(est->cnt)outside the scope of mutual exclusion.
This code fixes those and some other issues:
#include <stdio.h> #include <stdlib.h> #include <pthread.h> pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; typedef struct { long *cnt; /* pointer to shared counter */ long n; /* no of times to increment */ int nr; pthread_t id; /* application-specific thread-id */ } targ_t; static void *sfun(void *arg) { targ_t *est = (targ_t *)arg; while (1) { pthread_mutex_lock(&mutex); long cval = *est->cnt; if (cval < est->n) ++*est->cnt; pthread_mutex_unlock(&mutex); if (cval >= est->n) break; } return NULL; } int main(int argc, char *argv[]) { targ_t real[3]; long count = 0; if (argc != 2) { fprintf(stderr, "Usage: %s count\n", argv[0]); return(EXIT_FAILURE); } for (int c = 0; c < 3; c++) { real[c].cnt = &count; real[c].n = atoi(argv[1]); real[c].nr = c + 1; if (pthread_create(&real[c].id, NULL, &sfun, &real[c]) != 0) break; } for (int c = 0; c < 3; c++) pthread_join(real[c].id, NULL); pthread_mutex_destroy(&mutex); printf("OVERALL %lu\n", count); return 0; }
When run (for example, the program is
pth59):
$ pth59 100 OVERALL 100 $
Before moving the test (now on
cval) so that the read of
*est->cnt was done inside the scope of the mutex, I got an output
OVERALL 102 from the same command line. It is important to access shared variables with proper mutual exclusion, even if it is read-only access. | https://codedump.io/share/eZHRjR7D96Vd/1/how-can-i-have-a-shared-counter-in-multithreading-using-structs | CC-MAIN-2017-47 | refinedweb | 542 | 53.81 |
Bummer! This is just a preview. You need to be signed in with a Pro account to view the entire video.
Events and Messaging in JavaScript46:55 with Doug Neiner
This talk will serve as an introduction to event and messaging patterns in modern web sites and applications. Starting with a review of DOM Event handling and some common difficulties, this talk will advance through to discussing observable objects and wrap up with an introduction to the message bus. Following this logical progression you will become familiar with common eventing and messaging patterns in JavaScript and you will even be exposed to actual implementations of those patterns in popular JS frameworks.
- 0:00
[MUSIC]
- 0:13
>> So I need to participate in modern web conference today.
- 0:16
Got a lot to cover.
- 0:17
And so we'll jump right into it.
- 0:21
All right you all have this at the end so don't worry about writing anything down.
- 0:25
The only thing you'll want to write down towards the end is the email address, but
- 0:28
when I share the link it won't have that in there.
- 0:30
But anyone here, if you have a question about something that I talked about Or
- 0:35
just wanna e, e, you know, e-mail anything about JavaScript in general,
- 0:38
feel free to reach out to me directly.
- 0:40
I also love FontAwesome.
- 0:41
I started using the icons, realize I'd just give them a call-out.
- 0:44
I use it on all my projects and saw someone in the chat mention it earlier.
- 0:48
It's a blast.
- 0:50
I live in Iowa with my five kids and
- 0:52
my wife, and we're, you know, here at home right now,
- 0:55
so if you do hear any stomping or anything, it might be children upstairs.
- 1:00
All right, I work at appendTo, I'm not gonna belabor this point since Ralph
- 1:03
already covered this, but I love what I get to do,
- 1:05
work with some great people, I work from the comfort of my home.
- 1:08
So I really don't, I, just so many great things to say about appendTo and
- 1:11
I've really enjoyed the innovative things I've got to do while I'm here.
- 1:16
All right let's talk.
- 1:17
We're gonna talk about three primary things.
- 1:19
The DOM, we're gonna deep dive in the DOM events, kinda use it for a foundation for
- 1:23
messaging in our apps outside of the DOM.
- 1:25
We're gonna talk about the observer pattern and
- 1:28
how to use that with your objects.
- 1:29
And then right towards the end, we're gonna touch on the message bus.
- 1:32
This is certainly not gonna deep dive into architectural pieces about how to
- 1:37
arrange the entire, you know, entirety of your application.
- 1:40
All right, so, what is an event?
- 1:44
I went to the dictionary to make sure I could properly explain this and
- 1:47
this is what I found.
- 1:48
A thing that happens.
- 1:50
Especially one of importance.
- 1:52
Wow, man, I could, I could probably get in to writing definitions for
- 1:55
dictionaries if that's required.
- 1:57
A thing that happens.
- 1:57
So I found that very enlightening.
- 2:01
But it's true.
- 2:01
It's event is simply a thing that happens.
- 2:03
We have a lot of things that happen in our calendars.
- 2:06
We have a lot of things that happen in our life We share those with other people.
- 2:10
So in real life, we kinda understand the concept of an event.
- 2:13
But in the technical realm when we're talking about JavaScript events or DOM
- 2:17
events they're generally talking about a notification about the event I just wanna.
- 2:23
We're gonna be using the word.
- 2:24
I'm going to be using the word event today.
- 2:25
To use both together.
- 2:26
But really we're not handling the click someone already clicked.
- 2:30
We're handling the response to that.
- 2:31
And we're going to be able to respond to that notification.
- 2:34
And the same thing is true in our code.
- 2:36
We're not causing a, an event by announcing that it happened we're,
- 2:40
we're just notifying about what happened.
- 2:44
The important thing about an event is that it gives us voluntary visibility into what
- 2:48
would otherwise be an input only system.
- 2:51
I originally had closed system.
- 2:52
I didn't wanna be too specific.
- 2:54
Because a lot of times can have methods and really have
- 2:58
tight coupling between components that know about their internal methods.
- 3:02
But in general events give us that voluntary visibility into a unit.
- 3:06
Otherwise that would be operating on its own.
- 3:07
All right, so what do Facebook Twitter and Instagram have in common?
- 3:14
Well, they are your friends, or maybe not your friends, people that have added you,
- 3:18
and you've accepted that are notifying you about events in their life.
- 3:23
Again, it's voluntary.
- 3:24
You don't have to check those services.
- 3:26
You can voluntarily respond to those things, but these are a nice real-life
- 3:30
example of paying attention to things that happen, and then doing something,
- 3:33
maybe it's responding, maybe it's reading an article that was shared on Twitter.
- 3:37
But we have these examples in real life,
- 3:39
of things that are event-based, that we respond to.
- 3:44
All right, so, deep-diving into DOM Events, I'm sure this will be review for
- 3:48
many of you, but
- 3:48
there, hopefully you'll pick up things that you might not have known before.
- 3:52
If not, just to reiterate why DOM Events work is such a great platform for us.
- 3:57
You know, moving that out to our own application code, and
- 4:00
using some of the same patterns.
- 4:01
All right, a DOM event,
- 4:04
when you wanna be when someone clicks a button or it resizes their browser.
- 4:10
Xhr requests trigger events.
- 4:11
A number of things trigger events when we're dealing with HTML and the browser.
- 4:17
Going way back to early, early, early DOM specifications you would just add a,
- 4:22
a handler using an attribute and
- 4:23
then you'd have some sorta global function in your code that would do something and
- 4:26
you might even have the whole code right there in the onClick handler.
- 4:29
May not even be calling the function.
- 4:31
Obviously, big problems with this.
- 4:32
There's no separation between your HTML and
- 4:34
your JavaScript, so hopefully we're not doing that.
- 4:37
Anyone here's not doing that.
- 4:39
But it does, pollutes the global namespace.
- 4:41
There's number issues with this.
- 4:43
You can make it a little better by grabbing the element by its, you know,
- 4:46
an iclass name or id.
- 4:47
I just used id here.
- 4:49
And assigning it.
- 4:50
But it's still the same problem, just kind of changed a little bit.
- 4:53
Now we only can ever have one handler.
- 4:55
So if we attach it using this old method, which still works.
- 4:58
But if we attach it, and
- 4:59
someone else tries to add a collective event using the same pattern,
- 5:02
it would just remove ours, and theirs would be the one that gets used.
- 5:05
So this still isn't the solution.
- 5:07
So this is where, why we use addEventListener.
- 5:11
And I'll touch on the IE specific one in just a minute.
- 5:14
But this is why we addEventListener.
- 5:16
And there's just three parts: The event name, the callback that's gonna happen in,
- 5:19
and it'll pass a parameter with date details about that event.
- 5:23
And then the last parameter, which is going to signify whether you
- 5:26
want to have this execute in the capturing phase or bubbling phase,
- 5:30
which we'll talk about in a few slides.
- 5:33
This doesn't work in IE8 and under.
- 5:36
so, the cross-browser one, if you've been deep diving any of
- 5:38
the source code of libraries that abstract this, or maybe have written this yourself.
- 5:43
You test using feature detection, like Dave talked about earlier.
- 5:46
Say, does it have the method we want?
- 5:48
We bind the click handler.
- 5:50
And then, otherwise, we use, attach event, if it's present.
- 5:52
And that's the IE specific one for, again, IE8 and under.
- 5:56
The important thing is,
- 5:57
there's a lot of additional pieces here that don't function quite the same way.
- 6:01
The context of this inside of that callback attachment is not equal to
- 6:05
the element like you might be used to.
- 6:08
the, that doesn't get the parameter with the event,
- 6:10
that's available in window event.
- 6:11
So there's a number of changes which is why most of the time if
- 6:14
you're handling those older browsers using a DOM abstraction library like
- 6:18
jQuery makes a lot of sense.
- 6:20
Or something that does the same amount.
- 6:24
If you aren't targeting those versions, you're working in nine and
- 6:27
above, just use an addEventListener and then you can use both the capturing and
- 6:30
the bubbling we're gonna talk about, next.
- 6:34
Or soon, I should say.
- 6:35
Not next.
- 6:37
okay, so I'm, I'm coming into this talk with the understanding you've
- 6:40
probably been using events for a while in your JavaScript code.
- 6:43
You've been buying into clicks and mouseovers and
- 6:45
all these things to wire up your user interface.
- 6:48
so.
- 6:49
I'm going to just talk about some shared concepts because some of those same
- 6:53
concepts you're using will transition when we talk about messaging later in the talk.
- 6:57
The fact that it subscribed and
- 6:59
unsubscribed based on the event name that's a that will carry directly over.
- 7:04
We'll a lot of times call them topics if we're dealing with a message bus or
- 7:07
events with an event emitter.
- 7:09
But it's gonna be the same concept.
- 7:11
They often include a payload of information data additional meta
- 7:14
data about the event that happened.
- 7:16
This is a common pattern and
- 7:17
in a lot of times is why these events are hard to tell you that level of detail.
- 7:22
So you might be able to find out which key was pressed, on a key press.
- 7:25
[BLANK_AUDIO]
- 7:30
The event.
- 7:31
And then some, depending on what library your using to add events.
- 7:36
Some of them will give you the similar control over, you know stopping immediate
- 7:41
propagation or, or keeping or canceling it or changing default behavior.
- 7:45
Some of those will carry that over.
- 7:47
Your mileage may vary on the transferability of that.
- 7:51
Some of the key concepts.
- 7:53
That I wanna just deal with in this talk is making sure you truly
- 7:56
understand capturing and bubbling and,
- 7:58
and what happens where and for me, just understanding capturing really helps you
- 8:03
understand how jQuery can do some of the things it does, which is kind of cool.
- 8:06
And be able to do that in your own code, maybe without using the library.
- 8:10
One other challenge is just making sure you fully understand, based on that.
- 8:13
Event delegation how it works.
- 8:15
And then we'll deal with the specifics of,
- 8:17
of some other things with target and current target.
- 8:21
All right, so, we have a example structure here.
- 8:25
We're gonna start with our window.
- 8:26
Then we have our document.
- 8:27
Then we have the root element, which is HTML.
- 8:30
Then our body.
- 8:31
Then our header and then our logo.
- 8:34
So we have this structure of giving obviously just a slice of it.
- 8:37
But we'll use this to talk about capturing and bubbling.
- 8:41
So if we add an event listener to one of these elements using click we'll pass it
- 8:44
call back but we pass true instead of false.
- 8:47
That's that final parameter.
- 8:49
If we pass true it's going to capture an event during excuse me,
- 8:53
it's going to trigger the call back during the capturing phase of the event.
- 8:57
And this goes back, there's a huge history you can read about it on quirks.
- 9:00
Quirks mode on his website it talks about the history here.
- 9:03
But the idea is that it will start capturing starts at the top
- 9:06
most element in your, in your Dom.
- 9:09
In this case is gonna start with window, then document,
- 9:11
then go on down until it gets to the element that you started the event on.
- 9:17
So it starts, starts at the top.
- 9:19
Works the whole way down to the element that you might've clicked.
- 9:21
Which would be logo link.
- 9:22
Before it then switches and bubbles, which is the other direction and
- 9:27
then goes the whole way up.
- 9:28
90% of the time when you're doing something if you're using jQuery any of
- 9:32
those other pieces, you're using bubbling.
- 9:34
So that's where,
- 9:35
if you're already familiar with delegation and attaching handlers higher up.
- 9:39
You're expecting the event to move up and eventually get to where you can handle it.
- 9:43
Capturing does the exact opposite and happens first.
- 9:46
So it starts at the top, works its way down, and stops and
- 9:49
goes the other direction and works its way back up.
- 9:52
This is really cool we'll do a quick demo here.
- 9:57
I'll move over to the screen you're on.
- 9:59
If I click out here, just in the document, you're going to see it triggers on HTML.
- 10:05
I'm going to click this again, it will go away, so let's pay attention.
- 10:07
Capturing on window then document then HTML, and then it reverses and
- 10:11
goes back up.
- 10:12
I add the header, which is from our example.
- 10:15
We get Window document HTML header.
- 10:17
And then it goes the other direction back up, and that's the bubbling phase.
- 10:21
The whole way down to our anchor, if we had it included there.
- 10:24
So it gives you two different times where you can handle that event.
- 10:28
Once on capture, on the way down they're on the way up.
- 10:32
It's important to note, I'm gonna go back two slides here,
- 10:35
when you do provide this, if you're not on the element that was clicked.
- 10:39
So if this wasn't being bound to that logo, and it was one of the parents,
- 10:42
it will only fire this callback during the capturing phase.
- 10:46
In the bubbling, if we had it on one of the parents, would only fire the call,
- 10:50
callback during the bubbling phase.
- 10:52
If you had a event list and with either false or
- 10:54
true on the actual element that was clicked, it wouldn't matter.
- 10:57
It's gonna fire regardless because it's the during the context switch between
- 11:01
capturing and bubbling is when it's on the element that it was triggered on.
- 11:04
So whether you put true or false in that setting, it's going to work.
- 11:07
Either way.
- 11:11
The important thing here to know is that not all events bubble and
- 11:14
if you've been using JQuery a lot and it abstracts some of this and
- 11:17
makes all of the events bubble, you may not be familiar with this.
- 11:20
But not all events bubble.
- 11:21
You can check that using your event parameter, and
- 11:24
I just have mine named E here but you can just name that and
- 11:27
check for e.bubbles and it will tell you whether or not that event will bubble.
- 11:31
So if we attached an event listener or
- 11:33
focus to an input, it's gonna say false, that will not bubble by default.
- 11:37
And that's why jQuery gives you focus in and focus out.
- 11:40
It gives you that ability due to the fact that it won't natively bubble.
- 11:45
However, this is really cool, events that do not bubble can still be captured.
- 11:50
So if we were to add a focus handler to the document and
- 11:53
put true in the second part.
- 11:55
True right here.
- 11:57
Then when we input, focus on an input the document level so
- 12:02
in a way delegated it will pick up the fact that, that input has been captured.
- 12:06
And if you go and deep dive the source code of j Query you'll see that's
- 12:09
exactly how they're able to implement some of those.
- 12:12
Features that other, otherwise wouldn't bubble.
- 12:15
So that's kind of a, a fun thing but it's something you can use as well when you're
- 12:18
in those edge cases and you just can't get access to something that you need.
- 12:23
StopPropagation is literally just saying once it's on its way up, stop.
- 12:28
And so, if, if an element was chosen, and you said stop propagation on
- 12:32
that callback, the parent would not receive a notification about that event.
- 12:36
The difference between the two, stop immediate propagation, is that any
- 12:39
other event listeners bound to that same element would also be stopped and
- 12:44
they would not fire.
- 12:45
So, if you had three click events mount at different times in the first one
- 12:48
calls stopImmediatePropagation, the second two would not fire.
- 12:53
Now, as a gotcha Oh, you know, what,
- 12:56
we're gonna talk about the gotcha in a minute where the event delegations are.
- 12:58
I'm jumping ahead of myself.
- 13:00
Events bubble.
- 13:01
Right? I discovered that in the last few,
- 13:03
few minutes here.
- 13:04
Why is that useful?
- 13:05
Now that's where event delegation comes in.
- 13:08
And I'll read this right off the slide but it's the practice of leveraging event
- 13:12
bubbling and adding a single event to an ancestor.
- 13:15
You know example would be the document.
- 13:17
Instead of adding multiple events directly to the children.
- 13:21
So here's a picture of that.
- 13:23
If we had all these child items and we attach click handlers to each one of them,
- 13:27
we'd have to first receive a reference to them.
- 13:30
And then be able to attach the click handlers to each one of those.
- 13:34
As opposed to we could attach one flick handler to the container, and
- 13:37
it would automatically because of event bubbling,
- 13:39
get access to all of those separate click events.
- 13:42
So event delegation is really powerful.
- 13:44
It's a great concept to make sure you leverage in your code to
- 13:47
keep things clean and fast.
- 13:50
Some of the modern day frameworks do this for you automatically.
- 13:53
Back1 I know is one of them.
- 13:54
React from Facebook does this automatically.
- 13:56
And theirs is even more complicated cuz of their it's a,
- 14:01
it's not even, it's a synthetic events system.
- 14:05
Some of the benefits, I'm not gonna spend too much time on this right now,
- 14:07
but there's less DOM traversal during setup.
- 14:09
You're not having to get a reference to each item that you wanna apply an event
- 14:13
handler to.
- 14:14
There's no cleanup needed when the child elements are removed.
- 14:17
You don,t have to stop listening or, or unbind something.
- 14:21
New child elements work immediately.
- 14:22
So if you're loading data in via Ajax or using local templating all of those
- 14:27
event handlers don't have to be bound and rebound for that new code to work.
- 14:30
As long it matches the selector or
- 14:33
the test that you're running against it, it will work.
- 14:37
If you're familiar with JQuery and you've used delegated events before then you
- 14:40
might find this next part interesting because we'll I'll show how to do it
- 14:43
without JQuery, and so you can kinda get an idea of what it's doing.
- 14:46
It has a little bit more streamlined, it has a number of tests in there for
- 14:49
browser support and it will use things like matches and matches selector.
- 14:53
Where I'm just gonna do simple tests based on tag name or or maybe an attribute.
- 14:58
A lot of code here.
- 15:01
But we'll talk through it a little bit at a time.
- 15:03
So this is the code that we'd be running.
- 15:05
This would at the end of the day,
- 15:07
where it'd be attaching to the document click handler.
- 15:09
That we're looking for a button that has an atribute of data ID.
- 15:13
That's the longest road of what we're doing.
- 15:15
And then it's gonna do something with that ID.
- 15:18
So, the first step we're gonna have is we're going to bind an event handler to
- 15:21
the ancestor element.
- 15:22
So, in this case the ancestor element is document.documentElement.
- 15:26
This is a quick reference to each HTML.
- 15:27
So, you don't have to do find owner by tag name, or
- 15:30
do anything crazy to get the HTML element.
- 15:32
Just document.documentElement is the HTML element.
- 15:36
And is available as soon as you have code running.
- 15:40
So that's the first thing, we'll attach a click handler there, and
- 15:42
we'll make sure we have e so we can have it available, so
- 15:45
we can interact with the event object that will come through.
- 15:49
Then we're gonna try to decide as quickly as possible,
- 15:51
because this will handle all click events on the page.
- 15:53
We're gonna try to handle as quickly as possible if we want to do
- 15:57
something with this particular event.
- 15:59
So the thing I'm using here is,
- 16:00
I'm grabbing the target which we'll talk about in a minute.
- 16:03
Look at it's tagging, making it lower case because there will be a difference in
- 16:07
what that is name using a an XML base document versus and HTML bas document.
- 16:12
So, my check for being button.
- 16:14
And then, the next thing I'm gonna do is try to get it's data ID and
- 16:17
it doesn't have one, I'm gonna return.
- 16:19
Now, if we wanted to there's.
- 16:23
A rel, you know, it's not rel.
- 16:24
It's not super new.
- 16:24
But it's another way of doing this is a lot faster which is calling matches.
- 16:29
In the latest version of Chrome and Opera, you can just call matches.
- 16:32
In older versions you can look at can I look at details on this?
- 16:35
But there is available VF prefix.
- 16:37
So for IE9 and up, you can do this.
- 16:39
While there is MS modem web kit.
- 16:42
And that's another great use for modernizer to be able to get
- 16:45
the correct prefix for the uh,for method that you wanna use.
- 16:48
But, in this case, all we have to do is call target matches button and
- 16:51
say, does it have a attributed data ID?
- 16:53
If it does, then we wanna handle it.
- 16:56
And then I can confidently get the id and then do something with it.
- 16:59
So it kind of shortens up our code considerably.
- 17:05
All right, finally the last thing we're gonna do is cancel the default which if we
- 17:08
were delegating on a link for instance or
- 17:11
a button that would've submitted a form, we can prevent the default.
- 17:14
We obviously don't want to do that at the top of our method before we know if
- 17:16
we should be handling the delegation.
- 17:18
This is at the point at which we're back to almost using a normal event handler.
- 17:23
Once we've ruled out anything that doesn't match what we want we know we have it
- 17:27
now we can kind of continue as if we're a normal event handler.
- 17:32
One more thing, I won't belabor this for sure.
- 17:34
When I share the links later there's a link to an article I wrote years ago on
- 17:37
fourier coding that explains why and when you might want to use the term false or
- 17:41
when you're not gonna want to use it.
- 17:44
And so you can read that.
- 17:45
But it, long story short will stop propagation and
- 17:48
prevents default which breaks delegation if you're planning on using that.
- 17:54
All right, so one of the challenges with event delegation, one of the challenges
- 17:58
here is that if you were to have a click event that you want to handle
- 18:04
on this lowest level, let me actually select from the screen you can see,
- 18:07
the lowest level here, and you wanna delegate it to document.
- 18:12
But you wanna stop propagation so it doesn't maybe trigger a run on body or
- 18:15
trigger a run on HTML.
- 18:16
You can't do that.
- 18:17
Because if you consider bubbling, even though it's fired here,
- 18:21
that's the source element.
- 18:22
It's gonna, by the time your handler gets it,
- 18:25
it's already propagated the whole way up to the document.
- 18:27
And now you're handling the delegated version of that event.
- 18:30
So stopping propagation will keep it from moving to the window, but
- 18:34
it wouldn't stop it from moving up the tree here from header to body in HTML.
- 18:37
So that's one gotcha just to make sure you pay attention to.
- 18:41
The second one is that target, event target.
- 18:43
It may not be what you think it is, so let's talk about what that is, and,
- 18:47
and what it means.
- 18:49
Target is a reference to the element where the event originated.
- 18:52
So if you clicked on a link, it would be the anchor tag.
- 18:54
If you clicked on a button, it'd be the button.
- 18:57
Current target is a reference to the element that holds the event listener.
- 19:01
So in that case, it's document if we bound to that or one of the parent elements.
- 19:05
It will be the same as this, unless you've actively changed the.
- 19:08
Context of your callback, which happens when you're using it with a library or
- 19:12
with a component.
- 19:13
So e.currentTarget is the object that's the parent.
- 19:16
That's the one you have event delegated to, and then target's different.
- 19:19
There's some notes, if you're using attach event,
- 19:21
there's no equivalent to currentTarget, and
- 19:23
there is an equivalent to source element, which is available on window.event.
- 19:29
Let's look at some more code here.
- 19:30
So here's where I'm gonna use just a few Bootstrap classes for
- 19:32
our button, so it looks a little nicer.
- 19:35
Let's talk through the code, we're gonna grab just using query selector to
- 19:38
grab a single reference to this outside wrapper.
- 19:42
We're gonna add an event listener so we're delegating and event to this wrapper.
- 19:46
And we're gonna just check to make sure we clicked on a button, and
- 19:48
then we're gonna display the date.
- 19:50
So something very simple.
- 19:51
We click the button, the date's displayed, but we're doing it through delegation.
- 19:55
Run the demo.
- 19:57
Click the button, and I see the date, which is great.
- 19:59
This is what we want.
- 20:01
We click here.
- 20:02
Again, it doesn't work.
- 20:04
And so it's somewhat sporadic, we don't know why.
- 20:07
But let's go ahead and
- 20:08
check our, our we'll do some debugging here in the browser.
- 20:12
So again alert is not a debugging tool, unless you really, really need it.
- 20:17
But console log is.
- 20:18
So we're gonna use that.
- 20:19
We're just gonna, instead of doing anything else in here,
- 20:21
we're just gonna check what's the target, and what's the current target.
- 20:26
So if I click on the, this DIV out here that we're binding to,
- 20:28
it just says the target is DIV and the current target is DIV.
- 20:31
If I click on the button, you'll see it targets the button.
- 20:34
But actually I was clicking on the icon, and that's where our problem was.
- 20:37
Because the icon is not a button.
- 20:39
A parent of the icon is the button that we're interested, interested in.
- 20:44
Well, you're not gonna like the solution.
- 20:45
This isn't a, this isn't a good solution.
- 20:47
This isn't something I would use in production.
- 20:50
I'll show you.
- 20:50
It's a much more complicated, you'd abstract this out,
- 20:53
and once you've started going down that road, you should just use a library, and
- 20:56
I'll show you how to do it in jQuery.
- 20:58
But in this case I'm just gonna check whether it's a button, and
- 21:01
then I'm gonna check whether the parent's a button,
- 21:02
cuz I know that there's not gonna be multiple nest elements in my button.
- 21:06
So yeah, I am using the word target a lot.
- 21:08
I see that in check.
- 21:09
[LAUGH] Anyway this will fix our problem because we're checking whether it's
- 21:14
the icon or the button in, in both will work.
- 21:18
So now, once you've seen how to do it all yourself, this is why you go back to
- 21:22
the library and say wow that looks a lot whole cleaner, and it does it all for me.
- 21:25
So this is what it would look like with jQuery.
- 21:27
We just have the click event.
- 21:30
We checked. This would be the selector.
- 21:31
It's gonna put through to make sure it matches that selector.
- 21:34
And then if it works it will, it will run our, our callback.
- 21:38
So in this case this works out of the box of what we'd expect it to.
- 21:41
But I thought it's important that you know how to do it yourself,
- 21:44
if you ever need to.
- 21:45
Or just to understand what's going on behind the hood in some of
- 21:47
the abstractions you might be using.
- 21:52
All right so, just went into the very detailed pieces about targets and, and
- 21:56
bubbling and, and delegation.
- 21:59
But let's take a step back now and look at, we use these all the time right?
- 22:02
How else would we interface with our controls on a page if we
- 22:05
didn't have events and then the methods to control them?
- 22:08
One thing we never expect though is that a button on a page is gonna reach out and
- 22:12
talk to some of our internal app code.
- 22:14
We don't ever have that expectation.
- 22:16
We know that it's going to concern itself with what it needs to do, and
- 22:20
only what it needs to do.
- 22:21
So that's a great pattern where it's completely self contained it has a single
- 22:25
responsibility concept built right in and it's not gonna
- 22:30
do anything with our internal code, it has no knowledge of how it's being used.
- 22:33
It just, knows that it needs to work a certain way.
- 22:36
Which means our, we've also learned that components that expose meaningful events
- 22:40
are easy to work with.
- 22:41
So that's the output from the component, our meaningful events,
- 22:43
whether that be a key press or keydown or
- 22:46
keyup in an input field or whether that be something more abstracted.
- 22:51
Like a return to a XHR response.
- 22:53
But its gonna give us these meaningful outputs.
- 22:57
And then they also take a meaningful set of inputs and
- 23:00
that's the method API that you expect to work with.
- 23:03
And this is where our frustration lies between output and input, and
- 23:06
cross browser compatibility when they don't do the same thing.
- 23:09
When an event in one browser gives you more information or
- 23:12
gives it under a different attribute, in another browser And so,
- 23:15
when they're consistent and they work, man this is how we build stuff.
- 23:19
So that's fantastic.
- 23:20
These are great patterns.
- 23:21
It's interesting that this pattern has a name.
- 23:24
And the question is why don't we write our code like this everyday if it's so
- 23:28
great to work with.
- 23:28
We don't have an option with browser code.
- 23:31
We can't dig into the internals and do something different or
- 23:33
expect it to do something different.
- 23:35
So why don't we write code like this all the time?
- 23:39
Well it's called the observer pattern, and
- 23:41
it's when you have a reference to another element, and then you indicate that you
- 23:45
want to be notified when it has an event that you're interested, interested in.
- 23:49
So the object itself, that's holding all those subscriptions or,
- 23:53
or that will be triggering the events, doesn't care.
- 23:56
About who's subscribing, it just knows that someone subscribed, and
- 24:00
they wanna know when I trigger the event click.
- 24:02
Or when I trigger something different.
- 24:04
It doesnt' care if there's 100 or one subscribers.
- 24:07
It doesn't, it just honestly doesn't care.
- 24:09
So that item can be completely self contained without any knowledge of
- 24:12
who's doing the listening.
- 24:13
But can just broadcast those information, that information when it wants to.
- 24:19
All right. One major difference.
- 24:20
Just wanted to clarify this, that these events don't bubble.
- 24:23
Normally when you're dealing with events on components that
- 24:26
you've written yourself.
- 24:27
Maybe you're using a mix-in that will provide an event emitter,
- 24:31
something that you can use with your code.
- 24:33
They're not gonna bubble.
- 24:34
There's no document hierarchy for those to bubble through.
- 24:37
A notable exception, will be Angular.
- 24:40
If you're using their event system is built in a,
- 24:42
a structural way that reflects very similar to how the DOM works.
- 24:50
that, the other point of this is that delegation isn't something that,
- 24:53
technically delegation.
- 24:55
Well or key there because there's no bubbling.
- 24:57
So we use a message bus instead of delegation when we're
- 25:00
dealing with pure eventing and stuff like that in JavaScript.
- 25:04
All right, you're heard the term Pub/Sub or publish and subscribe.
- 25:07
Technically the observer pattern falls into this category.
- 25:11
But I, it does, I mean it clearly falls into this category.
- 25:14
I tend to use Pub/Sub when I'm talking about a separate message bus.
- 25:18
And event system is what I'm talking about.
- 25:20
You know what we might expect from a DOM event, or
- 25:22
from backbone events and so forth.
- 25:24
So it certainly publish and subscribe, you're triggering or
- 25:27
publishing an event and then other people are subscribing to it.
- 25:31
But if you hear it used a lot,
- 25:33
it may mean something bigger like a message bus as opposed to true eventing.
- 25:40
Okay, so here's where it's gonna be fun.
- 25:41
We're gonna jump into a, example application.
- 25:44
We're gonna build not actually build.
- 25:46
But we're gonna walk through it as if we're building it.
- 25:48
And kinda think through how we can try to integrate,
- 25:50
some of these eventing systems into our code, and see how that looks.
- 25:55
The nice thing is, with the observer pattern and a good API,
- 25:58
we can build our sites and applications as a collection of these discrete components.
- 26:03
That communicate with each other, using methods for input and events for output.
- 26:08
Of course we can use this pattern with non-visual components.
- 26:10
So those events don't have to be user interactions, it can be you know,
- 26:14
something from a web socket,
- 26:16
it can be one of your pieces of code is firing on, on a timer.
- 26:20
Any number of things can, can pose these events.
- 26:24
All right, so, why would we even bother, to add this to our code?
- 26:29
Well, components that are standalone, like I'm talking about,
- 26:32
I'm gonna walk through building, they're testable.
- 26:35
They don't require a huge infrastructure to test,
- 26:37
you load up that component, you test it's inputs.
- 26:40
Test the expected outputs, and you're good to go.
- 26:43
And then, of course, the components can be tested on their own, or
- 26:45
maybe tested how they integrate with your code.
- 26:48
But they're very easy to test.
- 26:49
They're completely separate.
- 26:50
And hopefully you guys,
- 26:52
when you're writing JavaScript are able to unit test that code.
- 26:55
So you have confidence in what you're doing.
- 26:57
It's reusable.
- 26:58
When a component knows intricacies about another component,
- 27:02
it has to always be used on tandem with that component,
- 27:04
unless you're building a lot of conditionals in there.
- 27:07
So our goal is that we have something that is able to be reusable, and be able
- 27:11
to maybe use throughout your application, not necessarily on another project.
- 27:15
It may be specific to your app.
- 27:17
But at least throughout the project without this brutal nature of oh no I've
- 27:21
gotta make sure I load that, and load this and then all of it will work correctly.
- 27:26
[SOUND] all right.
- 27:29
The example application we're gonna build which after I
- 27:32
gave a talk that included this application in Vienna and
- 27:34
someone told me apparently, all illustrations are of video searching apps.
- 27:38
I didn't know that, but it actually works really well for what we're doing.
- 27:41
so, we're going to work through building this example video app.
- 27:47
So you see we have a Search bar at the top,
- 27:50
it's gonna show us movies at the bottom.
- 27:53
Maybe you know things are gonna open our theater, whatever.
- 27:56
It's gonna show us that response.
- 27:58
And then when we click on one it's gonna show us a Details tab,
- 28:01
which is gonna look like this.
- 28:04
So it's, you know a very simple interface.
- 28:05
We've got the Search field, the Search button and then our Result view and
- 28:09
our Details view.
- 28:11
All right, lots of code.
- 28:12
We're gonna step through it, don't be overwhelmed.
- 28:15
Walk through it one by one here.
- 28:17
But this is the code.
- 28:18
This is just for the searching, that I'm showing here.
- 28:20
And then we'd, we'd have code for the other part as well.
- 28:23
But so here, don't take this and go write an app that looks like this.
- 28:26
I'm starting somewhere that would be where there's no components.
- 28:29
There's nothing that's been separated out.
- 28:30
It's just straight jQuery code.
- 28:32
So this is where we're gonna start and then we'll move toward a separate
- 28:36
components that admit events and accept inputs.
- 28:40
So, as we step through this.
- 28:43
Here's our search form.
- 28:44
We're just listening for the submit.
- 28:46
Please never bind to the click event on a submit button.
- 28:49
You can either bind to submit event on the form.
- 28:51
That way the user can't accidentally submit the form.
- 28:56
Then we're gonna prevent defaults so that form doesn't send to the server.
- 29:00
We're gonna look up our query variable which is the search field.
- 29:03
If there's not a query we'll return it.
- 29:05
We won't do anything else so
- 29:07
they can't hit Enter in an empty field and have it ex, execute a search.
- 29:11
Finally we're going to make our AJAX request to our endpoint here.
- 29:15
Passing the query.
- 29:17
And wait and handle the results all in this method.
- 29:20
And that's where, just assume between those three dots that you're seeing here,
- 29:23
that there's a magical templating library that's taking care of RenderMan's force.
- 29:27
I don't wanna get bogged down in the details here.
- 29:30
And then we're finally setting up for HTML and, and to, to stage point from earlier.
- 29:35
Hopefully at this point this is trusted HTML from our server,
- 29:38
not user input each time.
- 29:41
Okay. So just to reiterate what happened in
- 29:43
a visual way, clicking the Search button gets the search query,
- 29:46
ensures that one was provided, executes the AJAX request, and renders the results.
- 29:52
That's a lot of stuff for a Search button to do.
- 29:55
And then clicking on an individual movie fetches data over Ajax,
- 29:59
renders the results, and then changes the tab to show the details.
- 30:05
All right, so obviously there are some issues here.
- 30:06
We can't test any of these pieces on their own.
- 30:09
Just to test the display of results, we'd have to fill in the search box and
- 30:13
trigger a click, on the Search button.
- 30:15
That's the only way we could do it.
- 30:16
We'd have to simulate a click on the Search button,
- 30:18
after having filled in the search box and
- 30:20
then we can see if the events rendered correctly, the movies rendered correctly.
- 30:25
We'd have to have a live endpoint running, which isn't that's an issue, but
- 30:28
the mock, we can mock a request.
- 30:29
We certainly do that quite a bit in front end development.
- 30:33
And then that's all just so we can verify that the rendering function works,
- 30:35
cuz it's nested so deeply into that code.
- 30:39
And then to change our Details view maybe from a tab to a dialogue we'd have to
- 30:43
change the event handler for an individual movie.
- 30:45
I didn't show this code.
- 30:47
You can imagine,
- 30:47
if we have code that looked like that the rest of it would be similar.
- 30:51
And clicking on an event is the thing that shows the details.
- 30:54
Now the event for that click is responsible for
- 30:56
rendering the details which, as you say that loud you start to
- 30:59
realize this doesn't sound like a well architected piece of application.
- 31:04
All right.
- 31:05
Does it work?
- 31:06
Yeah. Theoretically it would work just fine.
- 31:08
As long as it didn't get anymore complicated it would work.
- 31:11
Can we make it better?
- 31:11
Yes. And that's what we're gonna talk
- 31:13
through in the next little bit.
- 31:16
All right so the first thing you'll wanna do is take a step back,
- 31:18
instead of just saying I need to wire this to this and do this to that, we wanna take
- 31:22
a step back and identify what components, what pieces are we going to need.
- 31:26
I do wanna call out,
- 31:27
I'm not talking specifically about the spec web components.
- 31:30
I'm talking about just modularizing your code into reusable chunks.
- 31:34
So just, I don't wanna get anyone caught up in the terms.
- 31:38
All right. So if we look at this,
- 31:40
we see there's this area that's related to Search Query and Execution.
- 31:44
That's one segment up at the top.
- 31:46
We have our Search Results for you.
- 31:49
We have a Details view, which I don't show on this screen again, but
- 31:51
you saw that earlier.
- 31:52
We have the Details view.
- 31:53
And then we have Tabs, those are just great,
- 31:55
they do one thing, which is switch between views.
- 31:58
And we've seen two different ways to do tabs today,
- 32:00
so you should have, plenty of ways to do that.
- 32:03
And then you may or
- 32:04
may not have a movie object depending on how complicated your app might be.
- 32:07
But we could potentially have something that's called a [INAUDIBLE].
- 32:11
Excuse me.
- 32:13
All right, so when we're at these components, we want them to have to obey,
- 32:17
a focused use which is called the single responsibility principle.
- 32:20
Where right now,
- 32:21
in our previous version here that we just showed, the Search button.
- 32:26
Are the form being submitted is responsible for getting the results and
- 32:29
rendering them and validating the query that was entered.
- 32:33
So it's doing a lot of things.
- 32:35
Nothing's related directly to what would visually appear to be its use.
- 32:39
And then we wanna make sure that whatever we build has a clear public API.
- 32:43
That's in the, it's not public in the sense of users using your app but public,
- 32:46
in the sense that the rest of your app can depend, that it has these methods and
- 32:50
fires these events.
- 32:54
So let's talk about that search component.
- 32:55
That's the one at the very top.
- 32:56
What is its purpose?
- 32:58
When we really sit down and look at it, what's its purpose?
- 33:00
Well, allow the use to enter a search term.
- 33:03
Make sense?
- 33:04
Start the search.
- 33:05
Okay it's gonna trigger the search.
- 33:06
Ad it wants to continue to display the search term, once the search is rendered.
- 33:11
So we don't wanna hit search and have the term go away.
- 33:13
We want it to be kind of at most like a header, and
- 33:15
show us what the results are for one that we're currently looking at.
- 33:19
The thing it is not supposed to do, it is is not supposed to request data, or
- 33:23
render results.
- 33:24
That doesn't make sense for a search component to do.
- 33:27
It would be like an input making AJAX request in html.
- 33:31
We just wouldn't expect that to happen.
- 33:33
So our search component, that's not part of it's core purpose.
- 33:38
So we see it's going to output, we're,
- 33:40
talk about the output first it's gonna publish a search requested event.
- 33:44
And this can provide the term hobbit.
- 33:46
This is our, not always hobbit, of course it will be whatever it searched for,
- 33:50
but, hobbit's a good term.
- 33:51
So we'll have search requested, and
- 33:53
then the term will be whatever they searched for.
- 33:55
That's when a form's submitted, and when it's filled in.
- 33:58
So, we wouldn't do anything if the form was empty or just blank space.
- 34:02
The input, it's gonna take two methods.
- 34:04
One is set search term.
- 34:05
That will set what should show up in that box, but it won't trigger anything.
- 34:10
And then trigger search.
- 34:11
And this allows us to programmatically,
- 34:14
cause a search to happen without filling in the input and clicking the button.
- 34:19
Right now we're using methods through an API.
- 34:20
It's a lot cleaner, and doesn't require our,
- 34:23
the rest of our app to understand how the internals of this component are running.
- 34:28
Okay, here's the internal responsibility.
- 34:30
That was the external responsibility.
- 34:32
That's what you have to maintain for the rest of your app.
- 34:34
Internally, we'll listen to the submit event on the form, and run triggerSearch.
- 34:38
Now we have a method to run, to do this.
- 34:41
The triggerSearch method can retrieve the value of the search box.
- 34:44
Make sure a value has been entered.
- 34:46
And then publish that message force.
- 34:51
Okay, so taking a step back, how do we publish the event?
- 34:56
Being somewhat, this talk being somewhat framework agnostic I'll show you how to do
- 34:59
it in Backbone and then I'll mention how to do it in angular.
- 35:02
You can of course do it a million different ways.
- 35:05
But at the end of the day you need a library that has an event emitter built
- 35:08
into it or you need to load an event emitter,
- 35:10
and I'll show you one called monologue in a minute.
- 35:13
So you need something that has an on, an off and a trigger or emit method so
- 35:17
you can subscribe to events, unsubscribe from events and
- 35:20
then have a way to trigger or emit those events.
- 35:23
So there's plenty of libraries out there that do this.
- 35:25
Some of them have it built in.
- 35:28
If we looked at it in Backbone I'm not showing you half of the code here but
- 35:32
you can imagine we have a backbone view,
- 35:34
there's a render method somewhere else that's doing stuff, and in here.
- 35:39
You're going to, when we click Trigger Search and
- 35:41
we, when we make that happen, we're going to grab our current context, which is
- 35:46
going to be the root of the control, find that search field, grab its value.
- 35:50
If you're working with a model back view,
- 35:52
you're just probably gonna get that off the model.
- 35:55
We're gonna use jQuery's trim method to go ahead and
- 35:58
just remove any blank space that might be in there.
- 36:00
Make sure a term exists.
- 36:01
Which is this line here.
- 36:03
And as long as there's a term that exists, we're gonna allow the code to continue.
- 36:06
And we're gonna call this.trigger, search.started.
- 36:10
And actually I meant to say requested here.
- 36:12
But search.requested with our term.
- 36:14
And that's how that will output Let me do this so
- 36:19
I don't have to remember to do it later, a second.
- 36:22
One of the features of the Blazen is this, requested.
- 36:25
There we go. See?
- 36:26
It's like I never even made the mistake.
- 36:28
All right so this is search requested.
- 36:31
And this will trigger that for our element.
- 36:33
If you're using something like Angular I know a lot of people are using this.
- 36:37
It has a concept of a message, or excuse me, an event emitter.
- 36:41
You can use emit on your scope which will publish an event up the scope hierarchy,
- 36:45
the whole way up to the, to the root.
- 36:47
You can use, if it's not stopped, somewhere along the way, broadcast,
- 36:50
which will take the context element and go the whole way down to any bits children.
- 36:54
Which is something that we don't really have a parallel with in the DOM
- 36:57
eventing we're looking at.
- 36:59
At then of course, if you really needed the root level and
- 37:01
everything you could go to rootScope and then broadcast it there.
- 37:05
If this doesn't make sense to people who don't use Angular, it's okay.
- 37:08
It's just meant for those who do use it to know how, how to where to find it.
- 37:12
And I have a link to the documentation, so
- 37:13
you could read more about the event emitter that's there.
- 37:15
And I didn't do it.
- 37:18
And I wanna thank Jonathan Creamer for helping me.
- 37:20
God, help with that aspect and making sure that that is useful for everyone.
- 37:24
A monologue is a completely standalone event emitter.
- 37:28
I use Postal, which is for a message bus.
- 37:30
And Monologue is kind of a companion library to that.
- 37:34
So it's something you could check out as, as well.
- 37:38
All right. The important thing.
- 37:39
Take the step back.
- 37:41
We've looked at everything.
- 37:41
We have these components.
- 37:42
The important thing is that you honor that external contract.
- 37:45
The events that you're gonna publish.
- 37:47
And the API that you have.
- 37:48
As long as you keep that consistent, you can change the internals of
- 37:52
that control 50 million times in your app and you won't break anything as long as
- 37:56
you publish the events when you should and you accept those two methods we mentioned.
- 38:00
So, now you can't look at it anymore as a search box and a button.
- 38:03
It's the search control.
- 38:05
It's like, the table.
- 38:06
It's like the form.
- 38:07
It's, it's a single component that does a single use, and here's its API.
- 38:11
You have to honor that or else this all breaks down.
- 38:14
Soon as your search box and button reaches out to a parent element and
- 38:17
calls some method it knows about, you've broken the, the reusability and
- 38:21
testability of your code.
- 38:23
So make sure you honor that external contract.
- 38:27
All right. We're not gonna go that deep into each of
- 38:29
the other objects, but we can expect that our
- 38:31
Results view would accept a method that's called updateResults.
- 38:34
And that's gonna pass in the data from the server.
- 38:37
That will, it will then be in, responsible for rendering.
- 38:40
Will have an output of movie selected, so when you do click on a movie,
- 38:43
the Results view is the thing that's gonna output that event for
- 38:46
you, knowing that a movie's selected.
- 38:47
You don't have to bind an event for your app, directly to that movie.
- 38:52
We'll have tabs that will maybe accept an input of changeTab, and
- 38:55
you can give it the idea the tab you wanna go to.
- 38:57
And it's gonna output when the tab.changed.
- 39:01
The code that I'm gonna show, the pseudocode, doesn't use this one, but
- 39:04
this makes sense.
- 39:05
You'd wanna know when the tab changes.
- 39:07
And then final the detail view is just going to have one piece which is
- 39:10
show movie.
- 39:11
We say show movie and give it some code.
- 39:12
It's gonna know how to run through that and properly display it.
- 39:16
So now we've kind of split this up into these components.
- 39:19
We have a tabs, search control, our results and details.
- 39:23
We have these separate components now that we can use.
- 39:25
And this again using any number of object patterns here.
- 39:28
As long as you have an event emitter, you'll be able to do this.
- 39:33
All right, here's what our updated code looks like.
- 39:36
It's a lot easier to understand.
- 39:37
We remember, keep in mind these variable names, tabs, search, results, and
- 39:42
details, cuz this is kind of a continuation of that code.
- 39:45
So if we listen using event emitter to the search requested event We can wait for
- 39:51
that, and under the data its gonna have that permit we need.
- 39:54
And so we can make an AJAX request with that term.
- 39:58
When Ajax results come back, we can tell our results to update the results.
- 40:03
We don't care about how it's going to do that.
- 40:06
We don't have the rendering logic in here like we had before.
- 40:09
We're just going to put this in here.
- 40:10
We could go even further and do something like amplify request using AmplifyJS to
- 40:15
abstract out that Ajax request.
- 40:18
There's a lot of things you could do, but this cleans it up considerably.
- 40:21
Now when a search is requested it will change the tab to results,
- 40:23
which that's the one that's showing.
- 40:25
So if details was showing before it will show the results, make our Ajax request,
- 40:29
and then render those, render that data.
- 40:31
Now the same thing for movies selected.
- 40:35
When a movie's clicked we're gonna change the tab to details, make a request for
- 40:38
the movie and then you show the movie.
- 40:41
But if you sit someone down here and they look at
- 40:43
this they don't even have to understand what your dom structure looks like.
- 40:46
They can read, read this code, understand what's going on and then when you need to
- 40:51
make a change to the detailed view, you're not looking at the event handler for
- 40:54
movie, you're looking to the detail component.
- 40:57
That makes it a lot easier to find where things are going wrong and
- 41:00
where you need to fix it.
- 41:00
All right, this is better, right?
- 41:04
There's probably more code now than there was before but
- 41:08
does it meet our needs of being maintainable?
- 41:10
Yeah, you know where stuff is and it, it has a single responsibility.
- 41:14
Is it testable?
- 41:14
Absolutely, these components are far more testable than they were before.
- 41:18
And it's even reusable.
- 41:19
We can use that search control to do other things throughout the app, and
- 41:23
we could make it even more generic if we wanted to.
- 41:26
So it gives us a lot of benefit for what we're working on.
- 41:30
All right.
- 41:31
Talking about earlier, with the dom event, event delegation is, is great and
- 41:35
you can do all this cool stuff with it.
- 41:37
Can you use it with event emitters?
- 41:40
Well, in. Let's talk about what delegation gets us.
- 41:42
Cuz I say in a general sense, yes, but I told you earlier you couldn't.
- 41:45
So let's clarify what I mean.
- 41:47
Delegation gave us the ability to listen for
- 41:49
events without a direct reference to the element.
- 41:52
So we could add, add the event handler higher up on the dom, and
- 41:55
then the event would bubble up.
- 41:57
And it also allowed us to filter what we wanted to do based on
- 42:00
characteristics of that element, and we can do that with delegation.
- 42:04
When we're talking about eventing and
- 42:05
doing this apart from the DOM we use a thing called a message bus.
- 42:10
And there's a number of different names for
- 42:11
this event aggregator, mediator, message broker.
- 42:15
They all have the same concept though,
- 42:16
which is they introduced a shared object that is used to
- 42:20
communicate between components instead of requiring a direct reference.
- 42:24
Because right now our
- 42:25
app required a direct reference to each of these components to wire them up.
- 42:28
Which is fine.
- 42:29
It's a great pattern.
- 42:30
So our app we'll come back to that slide in a second.
- 42:33
But our app looks like this.
- 42:34
There's the app then it has referenced each of the controls.
- 42:39
The message bus works like a shared observable object.
- 42:42
So it's going to have events that are emitted.
- 42:45
We call those topics generally, or messages rather, and
- 42:48
then the event name we call a topic.
- 42:52
The methods for these are often different, so unsubscribe, subscribe, and
- 42:55
publish, that's where you get kind of more of the pub/sub terms.
- 42:59
Though they certainly can still have on, off, and
- 43:01
emit as the API depending on the library that you're using.
- 43:03
But it takes our app from this and makes it look like this.
- 43:09
So, instead of everything, the app having a reference to everything else,
- 43:14
these items will have reference to that global message bus.
- 43:17
And then could publish something on the bus that then the app could pick up and
- 43:21
go back the other direction.
- 43:23
Again I mentioned we wouldn't go too deep into this but there are some differences.
- 43:27
One thing is that your methods need to support now,
- 43:32
your objects rather need to support being called via an action message.
- 43:35
When we're dealing with events on objects traditionally those aren't actions.
- 43:38
Those aren't going to say things like start search or, or
- 43:42
some things very specific like that.
- 43:44
They're going to say this happens.
- 43:45
So search requested or form submitted.
- 43:48
Anything like that.
- 43:49
That's kind of an event.
- 43:51
Message bus you now need to be able to process inputs through the same
- 43:55
channel that you're sending the outputs.
- 43:56
And that's where you get action messages like search trigger and
- 44:00
maybe that would handle that method trigger search.
- 44:05
So the action message, it needs to be kind of wired in and there's ways to do this.
- 44:10
But you, it's one difference between a true event emitter and
- 44:12
then something that can handle inputs through that same event emitter.
- 44:17
Two libraries you can check out.
- 44:18
One the AMPLIFYJS, the one appendTo's written.
- 44:21
And that one has a very simple pub/sub.
- 44:23
It kind of gives you that shared object.
- 44:25
There's someone, I can't remember the guy's name,
- 44:28
he tweets a one that fits in tweet.
- 44:30
There's another very short one that's used by quite of few people.
- 44:34
For message buses, things that are shared that way.
- 44:37
- 44:39
It gives you a lot of power.
- 44:41
It has it basically matches if you've ever used RabbitMQ or
- 44:44
something like that on the server, it has the same wildcard matching support.
- 44:48
It gives you a lot of power when you're building complex apps to put
- 44:51
those things together.
- 44:54
So those are some libraries that you can use.
- 44:56
All right, we are coming to the end of a large number of slides.
- 44:59
I just want to reiterate if some of this was new information.
- 45:03
Kind of putting it all together, what would be a good next step?
- 45:07
If you don't group your code in components at all right now and
- 45:09
you just kind of have that initial example where you have a lot of jQuery code that
- 45:13
maybe runs in document.ready and tries to wire everything up.
- 45:16
Just start looking at your application based on components.
- 45:19
Start breaking it up into reusable components, making it easier to maintain,
- 45:23
easier to test.
- 45:24
That's a great next step.
- 45:26
Don't worry about the message bus.
- 45:28
Maybe not even about the event emitters yet.
- 45:30
Just try to break your code up into more usable pieces.
- 45:34
If you already are doing components, but really aren't using those events, and
- 45:37
you're kind of one component knows the internals of another component and
- 45:40
they're interacting with each other that way.
- 45:42
A great next step would be to get an event emitter in there,
- 45:45
maybe you already have one, you're just not using it.
- 45:47
Get that in place and start using that to kind of separate the, the logic.
- 45:52
So this child component doesn't have to know about the internals of the parent.
- 45:56
That's a great next step.
- 45:56
And if you're already doing those two things you have components and
- 45:59
they're of using event admitting but
- 46:01
you're not using a message about something that's more global.
- 46:04
Look at your app.
- 46:04
See is it complex enough that it makes sense that I want to
- 46:07
have these as truly separate components.
- 46:10
The tool we're using here, Blazen, it uses a very complex message bus.
- 46:13
And part of that is we're forwarding information through, and
- 46:17
back out of an iframe.
- 46:18
And, it's filtering that for content, making sure the stuff correct.
- 46:21
But we're running everything in a sandbox, but
- 46:23
allowing people that are logged in to make edits from the parent page.
- 46:27
So, because we go through a message bus, and everything serializable,
- 46:31
we can send it directly to the other frame and back if you go to the server and back.
- 46:35
There's a number of things you can do when you use a message bust that you
- 46:38
just simply can't do another way or, or as cleanly another way.
- 46:42
There's a lot of really cool stuff we can do, but
- 46:45
that's it that's what I had for you.
- 46:47
I hope that there's some questions that I can answer.
- 46:49
I've actually got done on time which I was worried about given the number of slides I
- 46:53
was going through.
- 46:54
Any questions that I could answer? | https://teamtreehouse.com/library/events-and-messaging-in-javascript | CC-MAIN-2018-51 | refinedweb | 12,209 | 81.02 |
![if !IE]> <![endif]>
Compiling Multithreaded Code mul-tiple threads is the errno variable on Solaris. Solaris provides different implementations of this variable for single-threaded and multithreaded applications. In a single-threaded application, there is only one errno variable, so this can be an integer value. In a multi-threaded application, an errno variable needs to be defined for each thread. The com-piler flag -mt passes the compiler flag -D_REENTRANT, which makes the errno variable a multithread-aware macro.
Listing 5.13 shows an example of a code that calls errno in a multithreaded context. Both the main and child threads call fopen() with invalid parameters; the child thread attempts to open the current directory for writing, and the main thread attempts to write to an unspecified file. Both of these actions will result in the value for errno being set to an error value.
Listing 5.13 Example of Using errno in a Multithreaded Application
#include <stdio.h> #include <errno.h> #include <pthread.h>
void * thread1( void* param )
{
FILE *fhandle = fopen( ".", "w" );
if ( !fhandle ) { printf( " thread1 %4i\n", errno ); } else { fclose( fhandle ); }
}
int main()
{
pthread_t thread_data1; int i;
pthread_create( &thread_data1, 0, thread1, 0 ); FILE *fhandle = fopen( "", "r" );
if ( !fhandle ) { printf( " main %4i\n", errno ); } else { fclose( fhandle ); }
pthread_join( thread_data1, 0 );
}
Using Solaris Studio compilers on Solaris, the -mt flag ensures the correct behavior in multithreaded contexts. Listing 5.14 shows the results of compiling and running the application both with and without the flag.
Listing 5.14 Running a Multithreaded Code That Depends on errno on Solaris
$ cc errno.c $ ./a.out
thread1 2
main 2
$ cc -mt errno.c $ ./a.out
thread1 22
main 2
When the code is correctly compiled, both of the calls to errno produce a value that is correct for the calling thread. When the -mt flag is omitted, the same value for errno is printed for both threads.
It is also necessary to ensure that the correct libraries are linked into an application. Some support libraries include both single-threaded and multithreaded versions, so selecting the appropriate one is important. Some operating systems will explicitly require the Pthread library to be linked into the application. For example, Solaris 9 would require an explicit -lpthread compiler flag; however, this changed in Solaris 10, when the threading library was combined with the C runtime library, and the compiler flag was no longer necessary.
The same situation is true when building with gcc. The compiler has the flag -pthread, which both passes the flag -D_REENTRANT and causes linking with the POSIX threading library. However, not all platforms need to define _REENTRANT; it makes no difference to the Linux header files, so the only benefit is that the compiler will include the POSIX threading library.
Some libraries are not multithread-safe; they do not guarantee the correct answer when called by a multithreaded application. For instance, the Solaris Studio compilers provide libfast, which is not multithread-safe but offers better performance than the default malloc(). It is easy to produce multithread-safe libraries using mutexes to ensure that only a single thread can have access at a time. However, this does not produce a library with performance that scales as the number of threads increases.
The other point to be aware of when compiling code that calls the POSIX API is that it may be necessary to define particular variables in order to get the correct versions of functions. These requirements are usually documented under man standards. Linux does not typically require this; however, Solaris does. For example, use of the define - D_POSIX_C_SOURCE=199309L will assert that the code is written to the POSIX.1b-1993 standard. Failure to set the appropriate feature test macro will usually cause warnings of undefined functions or of incompatible types being passed into functions.
Related Topics
Copyright © 2018-2020 BrainKart.com; All Rights Reserved. Developed by Therithal info, Chennai. | https://www.brainkart.com/article/Compiling-Multithreaded-Code_9460/ | CC-MAIN-2019-30 | refinedweb | 649 | 54.93 |
Blink timer
In the previous demo, you change the output constantly and using the function
sleep to blink LEDs. Besides, you can use a timer to realize the same effect. So this example shows how to use the timer and interrupt mechanism on your board to make the red LED blink per second.
What you need
- SwiftIO Feather (or SwiftIO board)
Circuit
All you need is a compatible board. Connect it to your computer to download the code.
Example code
You can find the example code at the bottom left corner of IDE: /
SimpleIO /
BlinkTimer.
// Change the LED state every second by setting the interrupt.
// Import the library to enable the relevant classes and functions.
import SwiftIO
// Import the board library to use the Id of the specific board.
import MadBoard
// Initialize the red onboard LED and a timer to set interrupt.
let red = DigitalOut(Id.RED)
let timer = Timer()
// Raise the interrupt to turn on or off the LED every second.
timer.setInterrupt(ms: 1000) {
red.toggle()
}
while true {
}
Background
Interrupt
An interrupt ensures that the processor responds immediately to some events. When a given condition meets, the processor will stop its current job to perform other tasks with higher priority, called ISR. In this way, your board can do something else without waiting there doing nothing.
The interrupt can happen under several different conditions. You will use a timer to trigger it in this demo.
note
The function you choose for interrupt should be able to execute extremely quickly. Usually, we tend to change the state of pins or some values.
Timer
The timer is a part of the hardware on the board. It works just like an alarm clock. You set the time interval for the interrupt. If the time is up, the microcontroller will execute the specified program.
Code analysis
let timer = Timer()
Initialize a timer. It requires no parameters.
timer.setInterrupt(ms: 1000) {
red.toggle()
}
The method
setInterrupt() is used to set the interval and the ISR. It has 4 parameters in all:
setInterrupt(ms period: Int, mode: Mode = .period, start: Bool = true, _ callback: @escaping ()->Void)
- ms: it is the specific period in milliseconds.
- mode: it decides whether the ISR will be done once or repeatedly.
- start: it decides if the timer works as soon as you set it. If not, you can start it manually later.
- callback: it is the task your board will do. It is a void function and is the last parameter, so you can move this into a pair of curly brackets as
red.toggle()above.
The method
.toggle() can change the digital signal from one level to the other.
In this way, the red LED light will be switch on or off every second.
In the loop, you can make the board do something else when the interrupt has not worked.
Reference
DigitalOut - set whether the pin output a high or low voltage.
Timer - set the timer to make your board do some certain tasks at a fixed time.
MadBoard - find the corresponding pin id of your board. | https://docs.madmachine.io/tutorials/general/simpleio/blink-timer | CC-MAIN-2022-21 | refinedweb | 510 | 76.01 |
This solution is inspired by The_Duck with his C++ solution
However, I would give an explanation based on my own understanding.
The basic idea is that we can find the maximal coins of a subrange by trying every possible final burst within that range. Final burst means that we should burst balloon i as the very last one and burst all the other balloons in whatever order. dp[i][j] means the maximal coins for range [i...j]. In this case, our final answer is dp[0][nums.length - 1].
When finding the maximal coins within a range [start...end], since balloon i is the last one to burst, we know that in previous steps we have already got maximal coins of range[start .. i - 1] and range[i + 1 .. start], and the last step is to burst ballon i and get the product of balloon to the left of i, balloon i, and ballon to the right of i. In this case, balloon to the left/right of i is balloon start - 1 and balloon end + 1. Why? Why not choosing other balloon in range [0...start - 1] and [end + 1...length] because the maximal coins may need other balloon as final burst?
In my opinion, it's because this subrange will only be used by a larger range when it's trying for every possible final burst. It will be like [larger start.....start - 1, [start .. end] end + 1/ larger end], when final burst is at index start - 1, the result of this sub range will be used, and at this moment, start - 1 will be there because it's the final burst and end + 1 will also be there because is out of range. Then we can guarantee start - 1 and end + 1 will be there as adjacent balloons of balloon i for coins. That's the answer for the question in previous paragraph.
public class Solution { public int maxCoins(int[] nums) { if (nums == null || nums.length == 0) return 0; int[][] dp = new int[nums.length][nums.length]; for (int len = 1; len <= nums.length; len++) { for (int start = 0; start <= nums.length - len; start++) { int end = start + len - 1;); } } } return dp[0][nums.length - 1]; } private int getValue(int[] nums, int i) { // Deal with num[-1] and num[num.length] if (i < 0 || i >= nums.length) { return 1; } return nums[i]; }
}
First thsnks for your share the code .I think we could think this way,); }
why we use start-1and end+1。Because for each "i" ballon in the subrange [start.....end],we will burst " i " at
the end in this subrange.In this situation ,at the end ,the "i" ballon's left and right ballon must be start-1and end+1.So we try different i and record the max one to be the maximal coins of this subrange in the var dp[start][end].
Same logic but easy calculation and understanding.
public int maxCoins(int[] nums) {// DP based solution. O(N^2) space and // O(N^3) run time. if (nums.length == 0) return 0; int n = nums.length; int[] iNums = new int[n + 2]; iNums[0] = 1; // add 1 in the beginning. for (int i = 0; i < n; i++) iNums[i + 1] = nums[i]; iNums[iNums.length - 1] = 1; // add trailing 1. int[][] memo = new int[n + 2][n + 2]; for (int len = 1; len <= n; len++) { for (int i = 1; i + len < n + 2; i++) { for (int j = i; j < i + len; j++) { int val = iNums[i - 1] * iNums[j] * iNums[i + len]; // easy calculation. memo[i][i + len - 1] = Math.max(memo[i][i + len - 1], // previous subrange + current val + next subrange. val + memo[i][j - 1] + memo[j + 1][i + len - 1]); } } } return memo[1][n]; }
thanks a lot! your codes are a lot better and ez understanding than the top voted answer!
I think recursive way is more intuitive.
We divide the problem to two sub-problems.
花花酱 LeetCode 312. Burst Balloons
class Solution { public int maxCoins(int[] nums) { int n = nums.length; int[] newnums = new int[n + 2]; newnums[0] = 1; newnums[n + 1] = 1; for(int i = 0; i < n; i++){ newnums[i + 1] = nums[i]; } int[][] C = new int[n + 2][n + 2]; return helper(newnums, C, 1, n); } int helper(int[] nums, int[][] C, int s, int e){ if(s > e) return 0; if(C[s][e] > 0){ return C[s][e]; } for(int i = s; i <= e; i++){ int v = nums[s - 1] * nums[i] * nums[e + 1] + helper(nums, C, s, i - 1) + helper(nums, C, i + 1, e); C[s][e] = Math.max(C[s][e], v); } return C[s][e]; } }
Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect. | https://discuss.leetcode.com/topic/30745/java-dp-solution-with-detailed-explanation-o-n-3 | CC-MAIN-2017-51 | refinedweb | 795 | 82.65 |
Day 9: Tic tac toe
Who didn't play Tic tac toe with his friends? :)
What to expect
Today we will implement tic tac toe game in Nim, with 2 modes
- Human vs Human
- Human vs AI
Implementation
So, let's get it. The winner in the game is the first one who manages to get 3 cells on the board to be the same in the same column or row or diagonally first.
imports
import sequtils, tables, strutils, strformat, random, os, parseopt2 randomize()
Constraints and objects
As the game allow turns we should have a way to keep track of the next player
let NEXT_PLAYER = {"X":"O", "O":"X"}.toTable
Here we use a table to tell us the next player
Board
type Board = ref object of RootObj list: seq[string]
Here we define a simple class representing the board
- list is a sequence representing the cells
maybe cells is a better name
- please note list is just a sequenece of elements
0 1 2 3 4 5 6 7 8but we visualize it as
0 1 2 3 4 5 6 7 9
instead of using a 2d array for the sake of simplicity
let WINS = @[ @[0,1,2], @[3,4,5], @[6,7,8], @[0, 3, 6], @[1,4,7], @[2,5,8], @[0,4,8], @[2,4,6] ]
We talked
WIN patterns cells in the same row or the same column or in same diagonal
proc newBoard(): Board = var b = Board() b.list = @["0", "1", "2", "3", "4", "5", "6", "7", "8"] return b
this is the initializer of the board and sets the cell value to the string represention of its index
Winning
proc done(this: Board): (bool, string) = for w in WINS: if this.list[w[0]] == this.list[w[1]] and this.list[w[1]] == this.list[w[2]]: if this.list[w[0]] == "X": return (true, "X") elif this.list[w[0]] == "O": return (true, "O") if all(this.list, proc(x:string):bool = x in @["O", "X"]) == true: return (true, "tie") else: return (false, "going")
Here we check for the state of the game and the winner if all of the item in
WIN patterns are the same
proc `$`(this:Board): string = let rows: seq[seq[string]] = @[this.list[0..2], this.list[3..5], this.list[6..8]] for row in rows: for cell in row: stdout.write(cell & " | ") echo("\n--------------")
Here we have the string representation of the board so we can show it as 3x3 grid in a lovely way
proc emptySpots(this:Board):seq[int] = var emptyindices = newSeq[int]() for i in this.list: if i.isDigit(): emptyindices.add(parseInt(i)) return emptyindices
Here we have a simple helper function that returns the empty spots indices
the spots that doesn't have X or O in it, remember all the cells are initialized to the string representation of their indices.
Game
type Game = ref object of RootObj currentPlayer*: string board*: Board aiPlayer*: string difficulty*: int proc newGame(aiPlayer:string="", difficulty:int=9): Game = var game = new Game game.board = newBoard() game.currentPlayer = "X" game.aiPlayer = aiPlayer game.difficulty = difficulty return game # 0 1 2 # 3 4 5 # 6 7 8
Here we have another object representing the game and the players and the difficulty and wether it has an AI player or not and who is the current player
- difficulty is only logical in case of AI, it means when does the AI start calculating moves and considering scenarios, 9 is the hardest, 0 is the easiest.
proc changePlayer(this:Game) : void = this.currentPlayer = NEXT_PLAYER[this.currentPlayer]
Simple procedure to switch turns between players
Start the game
proc startGame*(this:Game): void= while true: echo this.board if this.aiPlayer != this.currentPlayer: stdout.write("Enter move: ") let move = stdin.readLine() this.board.list[parseInt($move)] = this.currentPlayer this.change_player() let (done, winner) = this.board.done() if done == true: echo this.board if winner == "tie": echo("TIE") else: echo("WINNER IS :", winner ) break
Here if we don't have
aiPlayer if not set it's just a game with 2 humans switching turns and checking for the winner after each move
Minmax and AI support
Minmax is an algorithm mainly used to predict the possible moves in the future and how to minimize the losses and maximize the chances of winning
-
-
type Move = tuple[score:int, idx:int]
We need a type Move on a certain idx to represent if it's a good/bad move
depending on the score
- good means minimizing chances of the human to win or making AI win => high score +10
- bad means maximizing chances of the human to win or making AI lose => low score -10
So let's say we are in this situation
O X X X 4 5 X O O
And it's
AI turn we have two possible moves (4 or 5)
O X X X 4 O X O O
this move (to 5) is clearly wrong because the next move to human will allow him to complete the diagonal (2, 4, 6) So this is a bad move we give it score -10 or
O X X X O 5 X O O
this move (to 4) minimizes the losses (leads to a TIE instead of making human wins) so we give it a higher score
proc getBestMove(this: Game, board: Board, player:string): Move = let (done, winner) = board.done() # determine the score of the move by checking where does it lead to a win or loss. if done == true: if winner == this.aiPlayer: return (score:10, idx:0) elif winner != "tie": #human return (score:(-10), idx:0) else: return (score:0, idx:0) let empty_spots = board.empty_spots() var moves = newSeq[Move]() for idx in empty_spots: # we calculate more new trees depending on the current situation and see where the upcoming moves lead var newboard = newBoard() newboard.list = map(board.list, proc(x:string):string=x) newboard.list[idx] = player let score = this.getBestMove(newboard, NEXT_PLAYER[player]).score let idx = idx let move = (score:score, idx:idx) moves.add(move) if player == this.aiPlayer: return max(moves) # var bestScore = -1000 # var bestMove: Move # for m in moves: # if m.score > bestScore: # bestMove = m # bestScore = m.score # return bestMove else: return min(moves) # var bestScore = 1000 # var bestMove: Move # for m in moves: # if m.score < bestScore: # bestMove = m # bestScore = m.score # return bestMove
Here we have a highly annotated
getBestMove procedure to calculate recursively the best move for us
Now our startGame should look like this
proc startGame*(this:Game): void= while true: ##old code ## AI check else: if this.currentPlayer == this.aiPlayer: let emptyspots = this.board.emptySpots() if len(emptyspots) <= this.difficulty: echo("AI MOVE..") let move = this.getbestmove(this.board, this.aiPlayer) this.board.list[move.idx] = this.aiPlayer else: echo("RANDOM GUESS") this.board.list[emptyspots.rand()] = this.aiPlayer ## oldcode
Here we allow the game to use difficulty which means when does the AI starts calculating the moves and making the tree? from the beginning 9 cells left or when there're 4 cells left? you can set it the way you want it, and until u reach the starting difficulty situation AI will use random guesses (from the available
emptyspots) instead of calculating
CLI entry
proc writeHelp() = echo """ TicTacToe 0.1.0 (MinMax version) Allowed arguments: -h | --help : show help -a | --ai : AI player [X or O] -l | --difficulty : destination to stow to """ proc cli*() = var aiplayer = "" difficulty = 9 for kind, key, val in getopt(): case kind of cmdLongOption, cmdShortOption: case key of "help", "h": writeHelp() # quit() of "aiplayer", "a": echo "AIPLAYER: " & val aiplayer = val of "level", "l": difficulty = parseInt(val) else: discard else: discard let g = newGame(aiPlayer=aiplayer, difficulty=difficulty) g.startGame() when isMainModule: cli()
Code is available on | https://xmonader.github.io/nimdays/day09_tictactoe_cli.html | CC-MAIN-2019-51 | refinedweb | 1,300 | 57.61 |
with ACPI 20020308, and either a fixed DSDT or the bugfix patch that i posted
about yesterday (and will post after this), the following features work:
Fans
Temperature
Throttling/Limit
Buttons
ac_adapter
the battery sort-works - it gives me info, acuratly reports which batteries
are present or not, but the "state" information seems to be inaccurate.
as for sleeping, the laptop is supposed to be able to do S0,S1,S4,S5
ehco -n 1 >/proc/acpi/sleep puts it to sleep, but the fan stays on, and so
does the backlight on the display panel. the power button wakes it up, but
the keyboard is completely dead (i cant even use Fn-power to turn it off, adn
i couldnt use Fn-sleep to wake it up either :-().
however, other things seem to be working (display comes up, mouse works (this
is all in a console)). and.. if i do echo -n 1 > /proc/acpi/sleep; killall X)
when it resumes, it kills and hence restarts the X server, which seems to
make the keyboard start working :-).
If i try to sleep the laptop while using a virtuall terminal in X, then when
it resumes, everyhing is dead. i get a screen similar to the one that was
there when it "slept", with some wierd stuff on the left, but keyboard, mouse
are dead, and the echo command dosent return (and the power light dosent
change from the blinking-meaning-sleeping state)
S4 does nothing, and S5 switches the machine off instantly (isnt this
supposed to suspend-to-disk?)
All suggestions / comments are welcome :-)
- Jeff Snyder
pavel@... said:
> Actually, cache_flush() would probably be usefull globally, not for
> ACPI only. [And I guess you need it for flash memory, right? ;-)]
Yes, if I want to get burst reads from flash working.
--
dwmw2
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/acpi/mailman/acpi-devel/?viewmonth=200203&viewday=24 | CC-MAIN-2017-34 | refinedweb | 345 | 66.17 |
A SIGN OF THINGS
TO COME?
EXAMINING FOUR MAJOR CLIMATE-RELATED
DISASTERS, 20102013, AND THEIR IMPACTS ON FOOD
SECURITY
This report analyses impacts of four extreme weather events (a heat wave in
Russia, flooding in Pakistan, drought in East Africa, and a typhoon in the
Philippines) on food security. For each case, the nature of the extreme weather is
characterized, and its impact on vulnerable people is assessed by considering
when and why threats emerge, and the role of governance in the state and non-
state responses to the emergency. Scenarios of the plausible impacts of
increased extreme weather severity on food security and other socioeconomic
parameters are presented for each case.
Related Oxfam-commissioned research includes Climate Shocks, Food and Nutrition Security:
Evidence from the Young Lives cohort study.
CONTENTS
Executive Summary ...................................................................... 4
Section 1: Introduction.................................................................. 6
1.1 Project brief ......................................................................................................... 6
1.2 Report structure................................................................................................... 6
1.3 Nature of vulnerability to weather extremes ......................................................... 6
1.4 Methodology for case study analysis ................................................................... 7
1.3.1 Describing the nature of an extreme weather event ...................................... 8
1.3.2 Defining vulnerable groups ........................................................................... 8
1.3.3. Assessing impact pathways ......................................................................... 9
1.3.4. Evaluating politics, policies and economies.................................................. 9
2
Section 3: Relevance of Climate Change ................................... 25
3.1. The link between extreme weather events and climate change ........................ 26
3.1.1. Russias 2010 heat wave ........................................................................... 27
3.1.2. Pakistans 2010 floods ............................................................................... 27
3.1.3. East Africas 201011 drought ................................................................... 28
3.1.4. Philippines typhoon .................................................................................... 29
3.2 Considering the impacts of possible climate scenarios ...................................... 30
3.2.1. Case study: Russia heat wave ................................................................... 30
3.2.2. Case study: Pakistan floods ....................................................................... 31
3.2.3. Case study: East Africa drought ................................................................. 31
3.2.4. Case study: Philippines typhoon ................................................................ 32
3.2.5. Conclusion ................................................................................................. 33
3.3. Scenarios summary matrix ............................................................................... 34
Notes ............................................................................................ 37
Acknowledgements ..................................................................... 43
3
EXECUTIVE SUMMARY
From 2010 to 2013 the world experienced a number of extreme weather events, several of
which were notable for their intensity, duration, and impacts on livelihoods and food security.
This report focuses on four case studies a heat wave in Russia, flooding in Pakistan, drought
in East Africa, and a typhoon in the Philippines that represent a range of extreme weather. It
analyses the impact of these extreme weather events on food security, by considering when
and why threats emerge. This involves characterization of the weather events, examination of
the vulnerable groups affected, and analysis of livelihoods and the role of governance and
capital.
In addition to their immediate impacts in the directly affected regions, this study demonstrates
that weather events can be associated with impacts in other parts of the world. For example, the
Russian heat wave, which occurred as a result of an atmospheric blocking high-pressure
system, had both domestic and international effects: first, it dramatically reduced the wheat
harvest in many parts of Russia, undermining resilience of farmers and reducing the national
food supply; then, due to Russia banning wheat exports, world wheat prices increased, reducing
poor peoples access to food and, according to some analyses, contributing to the unrest in extended
flooding in Sindh, where the greatest impacts on health, housing, and infrastructure were
experienced.
This study also identifies cases in which extreme weather events exacerbated existing
unfavourable conditions, and events in which poor preparation resulted in greater harm. For
example, in East Africa the failure of the long rains in early 2011 was catastrophic because the
region had already experienced drier-than-average conditions the previous year, and there had
been a limited response to early warnings among the regions governments. This combination
of extreme weather and poor preparation and response affected the livelihoods of millions of
people in Kenya, Ethiopia and Somalia, and compounded the flow of refugees associated with
armed conflict in Somalia.
In the case of Typhoon Haiyan, a powerful tropical cyclone that hit the Philippines in November
2013, the level of destruction was exacerbated by existing damage from earlier storms. The
scale of destruction made the regeneration of farmers livelihoods, in particular those growing
rice and coconuts, an urgent issue. In response, the government demanded far more urgent
and decisive action on climate change from the global community at the UN Climate talks in
Poland Yeb Sano, leading the Philippines delegation, had just learned that Haiyan had
obliterated his hometown.
The findings of this report elucidate the complicated relationship between weather events and
food security. The report also considers the relevance of climate change. On a global level,
climate change is expected to increase the magnitude and frequency of heatwaves and heavy
rainfall events, due to rising global temperatures and the ability of warmer air to hold more water
vapour. However, it will never be possible to say that any specific event, including the four
events analysed in this report, would not have happened without climate change. What
scientists can do is estimate whether climate change increased the risk of an event. Initial
evidence suggests that the Russian heat wave and the East African drought were made more
likely because of climate change; but it is not yet possible to assess the climate change signal in
the case of the floods in Pakistan and Typhoon Haiyan.
4
Given the risk that extreme weather events might increase in frequency and magnitude in
future, but uncertainty in the exact trajectory of future climate, it is valuable to consider
hypothetical scenarios for larger or more frequent events, and how these might impact food
security. In this report, explorative scenario analysis demonstrates the potential for adaptive
capacities to be overwhelmed and vulnerable communities to be driven to extremes.
It has become apparent that the weakness or strength of governance at various levels can
either intensify or mitigate the impacts of extreme weather events. This report highlights just
some such governance failings in each case study, and suggests that changes in the risk of
extreme events associated with climate change could put even more pressure on decision
makers. It is imperative that a cultural shift encompassing governments, NGOs and society at
large occurs, so that the reduction of risk for vulnerable groups is given consideration beyond
immediate post-disaster response.
5
SECTION 1: INTRODUCTION
1.1 PROJECT BRIEF
This report considers how extreme weather events affect the food security of vulnerable groups.
It builds on Oxfams briefing paper Growing Disruption: Climate change, food and the fight
2
against hunger, which discusses how climate change (including changes in extreme weather
events) might alter the conditions that commonly reduce the availability, access, utilization, and
stability of food supplies for people living in poverty.
In this report, case studies are used to examine the effect of specific extreme weather events on
vulnerable groups food security, working within the same food-systems approach. This
approach covers access, availability and utilization, and stability. In addition to seeking a better
understanding of the interaction between recent extreme weather and food security for each
case study, the report considers the potential influence of climate change and the possible
implications for food security if the frequency or magnitude of extreme weather events were to
increase.
The report draws on a wide range of academic and other literature. It has been prepared in
close consultation with climate scientists, food systems researchers, and scenarios experts from
the University of Oxfords Environmental Change Institute (ECI) and the CGIAR Research
Program on Climate Change, Agriculture and Food Security (CCAFS).
The second, and most substantial, section analyses the interaction between extreme weather
and food security for each case study.
The third section considers the relevance of climate change to this issue, including a summary
of the current scientific understanding of the influence of human emissions on extreme weather.
Building upon this, some hypothetical climate scenarios are provided.
6
1.4 METHODOLOGY FOR CASE STUDY
ANALYSIS
Four recent case studies were selected to represent different types of extreme weather event: a
heat wave, a flooding event, a drought, and a typhoon. For each case study, a literature review
was undertaken alongside consultation with project partners. The information gathered was
used to:
1. describe the nature of the weather event;
2. identify the vulnerable groups affected;
3. determine the impact of the event on livelihoods;
4. examine the response of governments and economies.
This information was then used to make an assessment of the relationship between the weather
event and food security. This framework is illustrated in Figure 1.
(Who) (What)
Vulnerable Impact
groups pathways
Income and Crops
assets (food/cash)
Urban/rural Livestock
Gender Work
Social Trade and
Causative link to
food insecurity
7
1.3.1 Describing the nature of an extreme weather event
In order to understand the impacts of a weather event, it is important to first understand its
physical characteristics. This could include meteorological anomalies in temperature, rainfall,
and wind speed, and hydrological conditions such as flood plain inundation or soil moisture
deficits. Potentially important characteristics include the events magnitude, spatial extent, and
duration.
It is also useful to understand the event in the context of past variability: how often does an
event of this nature occur? How different is it from normal conditions? Finally, it may be helpful
to analyse the physical processes that led to the extreme event. For example, winter flooding in
the UK is often the result of low-pressure systems from the Atlantic linked to the jet stream, but
in spring it may be a consequence of snowmelt.
Source: UNDP Human Development Report (2014), Sustaining Human Progress: reducing vulnerabilities
and building resilience',
8
1.3.3. Assessing impact pathways
To assess the impact of extreme weather on livelihood potential and food security, the following
basic factors were considered in this study:
1. Crops: were there changes in the yield of cash or food crops? Did the amount of cultivated
land decrease?
2. Livestock: how have livestock populations been impacted?
3. Work: how has a weather event impacted livelihood strategies? Has there been a movement
of labour out of the region or to particular sectors or industries?
4. Trade and markets: how have the dynamics of imported or exported goods changed?
9
SECTION 2: CASE STUDIES
Using the methodology outlined in Section 1.4, the interaction between extreme weather and
food security will now be explored for each of four case studies: the heat wave in Russia in
2010, the 2010 flooding in Pakistan, the East African drought of 2010/2011, and Typhoon
Haiyan in the Philippines in 2013.
The high temperatures and drought conditions were caused by a persistent area of high
9
pressurea blocking high or blocking anticyclone centred over Western Russia. These
systems are associated with high pollution levels, as the supply of clean air is restricted, and
industrial pollution can be trapped locally. Combined with smoke from forest and peat fires,
heavy smog was generated.
Source: NASA Earth Observatory image by Jesse Allen, based on MODIS land surface temperature data
available through the NASA Earth Observations website. Caption by Michon Scott.
10
2.1.2. Significance
The heat wave, drought and wild fires had a significant impact on crop yields, which posed both
domestic and international challenges. Some 13.3 million acres of crops were destroyed by
drought and fire, which represented close to 17 percent of the total crop area of the country, and
10
affected close to 25,000 farms. Of Russias grain-producing regions, Volga experienced a
harvest decline of 70 percent, while the Central region suffered a 54 percent decline. There was
11,12
a countrywide decline of 33 percent in the overall wheat harvest. (It should be noted that
some districts matched or exceeded their grain harvests from the year before, with the Southern
and North Caucasus districts producing 99 percent and 109 percent of their yields,
13
respectively. ) This decline in crop production led to domestic food price increases, and many
members of society entered into poverty. The Russian governments response was to ban
wheat exports. This had global implications, as Russia was the worlds fourth-largest wheat
14,15
exporter, accounting for roughly 14 percent of the global wheat trade. The resultant rise in
international grain prices may have influenced the political instability in North Africa and the
Middle East during the Arab Spring.
2.1.3. Narrative
2.1.3.1. The vulnerable close to home
The heat wave had a substantial impact on Russias poorest and most vulnerable citizens. It
16
was associated with close to 56,000 deaths from heat and air pollution, of which an estimated
17
11,000 were in Moscow. In addition, the loss of a third of annual domestic wheat production
led to dramatic increases in food prices, including for staples such as bread and buckwheat, as
well as animal feed, which had subsequent impacts on the price of dairy products. Panic buying
18,19
aggravated the situation. Between July 2010 and March 2011, the average price of a
subsistence basket of food rose by 2030 percent in most regions of Russia. This rise in food
prices at a time when incomes remained steady led to an increase in poverty. Women were the
hardest hit due to their role in providing food for their families as they have the largest burden in
feeding children and the elderly. Farmers, traders and others working in the agricultural industry
also faced particularly difficult circumstances. The export ban dented Russias reputation as a
20
supplier.
The decline in crop yields posed a significant challenge to the Russian government, which
responded by banning wheat exports in August 2010. This ban was in keeping with existing
21
food security policies; in the wake of the food price spike of 20072008, the government
22
established the Doctrine on Food Security in 2010 to limit food exports. These policies were
inspired by economic nationalism, but the export ban failed to reduce domestic food prices in
the aftermath of the crisis. Although there was enough locally produced supply to cover
domestic consumption, prices continued to rise, with flour increasing by 18 percent and bread
23
by 10 percent between July and December 2010. This may have been partly due to hoarding
by grain speculators and profiteers, who withheld grain and broke contracts, in anticipation of
24
future price increases and opportunities for price-gouging.
The Russian government also reworked some of its long-term agricultural policies. A
programme was introduced to protect the animal husbandry sector, On Measures for
Accelerating the Development of Animal Husbandry as a Policy Priority for Attaining Food
Security in Russia in 2010 and 2011, which aimed to maintain domestic production and reduce
meat imports. The government pursued this with incentives to stimulate and strengthen dairy
and meat producers. In effect, what was beginning to take shape was a transfer of wealth in the
25
country from crop to livestock producers. Even during the export ban, the Russian government
26
stated that it would not permit a reduction in the number of cattle and poultry.
11
The Russian government seemed to subsequently be taking heed of the potential ongoing
threat to farms from future climate anomalies, and discussed various long-term solutions, such
as increasing land-use efficiency and irrigation. Protecting the production capacity and financial
27
solvency of farms and producers was also heavily discussed. A proposed solution was to
improve the insurance system, in order to avoid a repeat of large post-event rebuilding costs to
the state. It was decided that federal funds were no longer to be used to deal with the effects of
extreme weather events. This measure had the potential to make a substantial difference during
future events, since only 20 percent of the crops destroyed in 2010 were covered by private
28
insurance. However, in the subsequent 2012 drought, it was found that many Russian farmers
still did not have insurance, largely due to a lack of trust in insurance organizations, so this
29
strategy has not yet been adopted by many Russians.
The wheat export ban had a major effect on people beyond Russias borders. The Russian
export ban was the central catalyst in the 6080 percent increase in global wheat prices
30
between July and September 2010. By April 2011, wheat prices were 85 percent higher on
31
international markets than the year before, at $364 per tonne. The effects of this were
widespread. Among Russias neighbours, wheat is a staple food of particular importance to the
poor segments of the population, and prices rose in many cases: Kyrgyzstan (54 percent),
32
Tajikistan (37 percent), Mongolia (33 percent), and Azerbaijan (24 percent). Pakistan,
Russias fourth-largest customer, experienced a 16 percent increase in the price of wheat.
33
During this time, Pakistan also experienced a 1.6 percent increase in people living in poverty.
34
Egypt was the worlds largest wheat importer and Russias biggest customer, importing 50
35
percent of its grain from the latter. While the Egyptian government was committed to
maintaining the price of the cheapest bread, in order to minimize the impact of price increases
36
on poor households, this was an extremely expensive policy measure, amounting to 8 percent
37
of the countrys total GDP in 2011. This could not be sustained, and higher wheat prices
affected the cost and availability of bread in Egypt, and subsequently influenced citizen protests.
Bread took on symbolic importance in protests, as evidenced by the widespread slogan bread
38
and dignity. As such, it has been suggested that higher wheat prices indirectly contributed to
39
the Egyptian revolution.
Among the countries affected by the Arab Spring, it is interesting to note that Egypt ranked first,
40
Syria fifth, Yemen ninth, and Tunisia tenth as destinations for Russian wheat exports in 2009.
Price increases for a staple such as bread has the potential to cause huge impacts at the
household level in many nations in the Middle East and North Africa due to these populations
dependence on wheat, and because food constitutes a large proportion of household spending.
Globally in 2010, in terms of wheat imports per capita and per cent of income spent on food,
respectively, Libya ranked second (37.2 percent of income), Algeria fifth (43.7 percent), Tunisia
41
sixth (35.6 percent), Yemen seventh (45 percent), and Egypt eighth (38.8 percent).
2.1.4. Conclusion
The unprecedented heat wave of 2010 was intense and unexpected, and was associated with
drought, wild fires, and increased pollution levels. It dramatically affected farmers and the
domestic wheat harvest, and many from poorer segments of the Russian population entered
poverty. The decision of the government to institute a wheat export ban greatly affected world
wheat prices, and played a factor in encouraging unrest in Arab Spring nations dependent on
Russian wheat imports.
12
2.2 PAKISTANS 2010 FLOODS
2.2.1. Description
During July and August 2010, Pakistan experienced higher-than-normal monsoon rainfall,
particularly in the upper part of the Indus river system, which drains the western Himalayas. The
monsoons onset was about 10 days earlier than normal, and was followed by a series of
monsoon surges. Some areas received more than four times their usual monthly rainfall in just
42,43,44
three days. These rainfall anomalies led to large-scale inundation in the Indus river basin,
45,46
propagating from Khyber Pakhtunkhwa south through Punjab, Balochistan, and Sindh. In
early August, the flooding was associated with widespread landslides in these regions of the
47
country.
The unusual intensity of the monsoon appears to have been linked to the Russian heat wave. At
the same time as there was very high pressure over western Russia, low pressure was
observed to the east, including over Pakistan. As the monsoon is driven by pressure
differences, this acted to bring the monsoon further north than usual. In addition, interactions
between the monsoon and disturbances associated with the large-scale circulation pattern led
48
to unusually heavy rainfall.
The El Nio Southern Oscillation (ENSO) may also have had an indirect role on the rainfall
anomalies in Pakistan. ENSO is a naturally occurring mode of climate variability, oscillating
between El Nio, La Nia and neutral conditions. It originates in the tropical Pacific, but has
an important influence on global climate. 2010 marked the beginning of a weak La Nia, which
is associated with warm anomalies and easterly wind anomalies in the western Pacific. This
reduced the transport of moisture from southern Asia towards the Pacific, and therefore
49
contributed to wetter conditions over Pakistan.
Source: Walsh, M. and R. Fuentes-Nieva (2014) Information Flows Faster than Water:
How livelihoods were saved in Pakistans 2010 floods, Oxford: Oxfam GB.
13
2.2.2. Significance
50
The 2010 floods were one of the worst disasters in Pakistans history. The floods were
associated with approximately 2,000 fatalities. Roughly 2 million homes were destroyed or
51
damaged, and 21 million people were forced to flee their homes. The flooding negatively
impacted food security on a national scale, and threatened the long-term nutritional needs of
nearly 8 million people. Food consumption scores indicate that roughly a third of people in
52
affected areas experienced poor levels of dietary diversity and food intake. While certainly
national in scope an estimated 20 percent of the countrys landmass was underwater the
impacts of the event were spatially variable, determined by local conditions including socio-
53
economic factors. This can be particularly illustrated by examining the provinces of Sindh and
Punjab.
One of the most significant features of this event was the duration. In 2010, the flooding lasted
54
for several weeks in many places, and for several months in Sindh. Since 2010, Pakistan has
suffered from a further three years of less publicized floods.
2.2.3. Narrative
According to the Asian Development Bank and the World Bank, Pakistan suffered an estimated
financial loss of $9.7bn, with significant damage to homes, farms, transport and
communications, water supply, power, and sanitation. Some Pakistani sources have speculated
55
that the direct and indirect losses were closer to $43bn. Wheat and rice prices increased by 80
56 57
percent in 2010 and the average person was spending 65 percent of their income on food.
At the national level, the country lost an estimated 2 million hectares of crops, and 40 percent of
58
its livestock tens of thousands of animals. Pakistan had been an important exporter of wheat
and rice, but struggled to regain its market position after the floods, as other countries stepped
59
in to fill its orders.
The impacts were not equally distributed. Those displaced or who lost physical assets in the
floods were disproportionately landless tenants and farmers: 70 percent of this segment of the
60
population lost at least 50 percent of their expected income. Some 60 percent of Pakistans
citizens lost their primary livelihood (i.e. more than 50 percent of income) across all but one
61
province. While economically vulnerable households were hit the hardest across the board,
social divisions played a significant role at the provincial level. The impact of the floods was
particularly severe for women. In flood-affected areas, 53 percent of women were found to be
severely food insecure compared to 43 percent of the overall population. Standing crops were
badly affected which provide an important source of livelihoods for women in cotton picking, and
rice and sugarcane harvesting. Livestock losses were less compared to crops as the owners
were able to take many of their animals along with them in Punjab and Sindh. However, poultry
birds (an important source of income for women) were completely lost.
In addition, women across the country who found themselves displaced had more limited
access to public sanitation facilities. In the camps separate toilet and washing facilities were
62
often not available, resulting in increased health risks. In Sindh, women who were due to
receive agricultural support packages under the state land distribution package had this aid
63
suspended, as they were not deemed to be a priority. Additionally, religion and caste played a
role in the distribution of aid, and some political parties distributed assistance based on member
64
affiliation in Sindh.
14
2.2.3.2. A tale of two provinces
In Punjab province in central-north Pakistan flooding was catastrophic but the water washed
downstream fairly rapidly compared to Sindh in the south where the flooding lasted much
2 2
longer. In Punjab, about 12,400 km of cropland was flooded, while 9,200 km was flooded in
65
Sindh. Of all Pakistans provinces, Punjab had the highest total area of destroyed cotton and
2
sugarcane; the largest area of destroyed rice and wheat was in Sindh, where 5,106 km of
2 66
Pakistans 8,762 km of flooded rice crop area was located.
While 42 percent of homes were destroyed or damaged across the country, the highest
proportion was in Sindh: out of the 1,910,000 homes affected, about 876,000 (roughly 46
67
percent) were in this province. Punjab suffered significant short-term effects due to extensive
damage to both crops and markets. During the flooding, the non-functioning of local food
markets directly affected 47 percent of households there - the largest such impact in all
68
Pakistans provinces.
In Sindh, the dramatic price spikes and the delayed planting of winter crops resulted in even
greater impact. The percentage of households with poor food consumption increased from 13
to 76 percent in Sindh, and from 10 percent to 45 percent in Punjab. After the floods many
households coped by shifting to less preferred foods, purchasing food on credit, borrowing,
limiting portion size, reducing the number of meals and even going entire days without eating. In
69
many households women ate less than men.
The scale of the floods of 2010 was quite unprecedented in Pakistans history and arguably
beyond the capacity of any government to respond to adequately. Furthermore, the floods
happened at a time when the countrys disaster management structure had only just completed
a major re-organization. In the new structure the Government of Pakistans National Disaster
Management Authority worked with provincial and district level management authorities through
a decentralized system. The aim of this structure is to enable more rapid and appropriate
responses driven by local needs and improve local accountability. However, the scale of the
disaster and the fledgling nature of the new structure meant that the floods had very different
impacts in different regions. Most interventions were led by provincial governments and the
70
national army.
While some observers generally praised the efforts of the governments and, in particular, the
71
army, others have evaluated the response as insufficient. Food aid, for example, was mostly
disbursed in camps, shelters and makeshift communities, and as such did not reach all who
72
were in need. Delays in aid provision contributed to many farmers missing the winter planting
73
season. In northern areas, the response was quicker and more organized, largely due to the
fact that people in these areas had gained experience from a major earthquake in 2005, and
had therefore developed disaster-management capacities. Whilst flooding in the southern
provinces of Sindh and Punjab happens every year, the sheer scale of the flooding and its
duration in situations of highly unequal political, social and economic power meant that disaster
74
response was often inadequate.
Coercive landlords were able to take advantage of this situation. Across the country, flood-
affected people were forced to hand over cash assistance received from the government or
NGOs. In addition, landlords used the washing away of land borders and the loss of ownership
75
deeds as an opportunity to attempt to take over poor farmers land. Oxfam Country Director
Arif Jabber Khan observed: Pakistans flood protection programmes resulted in the
construction of embankments and other larger structures that protected the landholdings of
large farmers and at the same time, made millions vulnerable to more extreme conditions than
they were used to. Additionally, during extreme events, decisions on breaches to protect large
infrastructure (barrages for example) are made on political grounds and I saw it myself from the
air, that the land of large farmers was protected while small farmers land was deliberately
15
flooded. A commission of inquiry by the Supreme Court of Pakistan found that the major
breaches that occurred happened because of infrastructure failure, stemming from failure to
maintain infrastructure, rather than deliberate decisions. However, the commission did not
consider causes of breaches to secondary infrastructure and the testimony of many flood-
affected people asserts that in these cases, deliberate decisions were often taken to flood land
76
used by poor people rather than by the rich.
2.2.4. Conclusion
In 2010, Pakistan experienced much higher-than-average monsoon rains, leading to large-scale
and prolonged flooding. This weather event had severe impacts on several parts of the country,
but not all were equally affected. In Sindh, the flooding lasted longer and large volumes of
standing water resulted in more direct negative health and nutritional outcomes, and damage to
housing and infrastructure, while the damage to crops, livestock and markets was more severe
in Punjab. The performance of the countrys emergency response teams was also
geographically differentiated, with northern areas proving more effective at dealing with the
flooding than southern areas. The central government was heavily criticized for not acting more
decisively in the crisis.
2.3.1. Description
In 2011 there was a severe drought in a large area of the Greater Horn of Africa, affecting parts
of Ethiopia, Kenya and Somalia and also Djibouti. The region has two main rainfall seasons,
known in Kenya as the short rains (OctoberNovember/December), and the long rains
77
(MarchMay/June). The 2011 drought was associated with the successive failure of both the
78
2010 short rains and the 2011 long rains.
East Africa is naturally a dry region, which experiences high variability in rainfall from year to
year, due to a variety of influences including ENSO and the Indian Ocean. A combination of
different factors contributed to the situation in 2011, not all of which are fully understood. The
79
failure of the short rains has been linked to La Nia, which is usually associated with drier
conditions during this season. The long rains are less well understood (see section 3).
16
Figure 5: Percent of normal precipitation in East Africa, 2011
Source: NOAA Climate Prediction Center via Dr. J. Masters (2011), Deadliest weather disaster of 2011:
the East African Drought, 20 December,
2.3.2. Significance
The 2011 drought was severe but it was not unexpected. In summer 2010, the Famine Early
Warning Systems Network (FEWS NET) issued an alert for key pastoral areas of Ethiopia,
80
Somalia and northern Kenya, knowing that a La Nia year was forecasted and this might
weaken the short rains, that there was a risk the long rains could also fail, and that people were
already vulnerable on the ground due to high food prices and previous droughts. Despite the
81
warning, the drought had devastating effects. It delayed the regions main cropping season.
The number of people in the Horn of Africa in need of food assistance in July of 2011 stood at
82
17.5 million, which was double the figure in January. The drought compounded the crisis that
already existed in South Central Somalia which was racked by conflict and where there was no
effective central government. Elsewhere the worst-affected areas were those already suffering
83
from decades of entrenched poverty in communities on the fringe of their respective societies.
In Somalia alone the UN estimated that no less than 258,000 excess deaths were attributable
84
to the emergency with half of deaths being of children under five years of age.
17
2.3.3. Narrative
2.3.3.1. Food prices
Food prices reached record levels in parts of Kenya, Ethiopia, and Somalia, and each country
had a particular crop that became a symbol of the crisis. In Addis Ababa, Ethiopia, wholesale
wheat prices reached a record of 8,500 birr per tonne, representing an 85 percent increase from
the previous year. In Nairobi, Kenya, maize prices reached a record high of $450 per tonne,
85
representing a 55 percent increase from the previous year. This was directly linked to the
near-total crop failure in some areas of the country, with national maize output predicted to be
86
roughly 15 percent below average after the drought. Food availability decreased nationwide.
With a decline in purchasing power occurring from month to month, there was a disincentive for
traders to bring in unaffordable food. Government cash grants were limited, so relief agencies
87
provided some. In Mogadishu, Somalia, maize and red sorghum were traded at $660 and
$670 per tonne, constituting a 106 percent and a 180 percent increase, respectively, from pre-
88
disaster prices. In Somalia, locally produced and imported food tended to be available, but
89
only at high prices.
Livestock was also seriously affected, with the Food and Agriculture Organization of the United
Nations (FAO) estimating mortality rates of 60 percent of Ethiopias cattle, 40 percent of sheep
90
and 2530 per cent of goats. In the Oromia and Somali regions, livestock market statistics
showed steadily declining body condition among cattle being sold, and hundreds of thousands
91
of animals dying between February and July. This problem was not limited to Ethiopia. In July
2011, the market for livestock across northern Kenya had almost completely collapsed, with the
92
price of a cow dropping from $220 to $30. The FAO estimated that up to 60 percent of
93
Kenyas cattle had died. As a result of losing their livestock many pastoralists lost their
livelihoods, have been unable to rebuild their herds and have become highly vulnerable to
further droughts.
Children were disproportionately impacted by the drought. As mentioned earlier, more than half
of all deaths may have occurred among children under five. In addition, roughly one million
94
children under the age of five were treated for malnutrition. In Kenya, an estimated 508,000
95
children saw their education disrupted in drought-prone areas of the north and northeast, and
there were accounts of girls aged 1315 being sold in exchange for livestock, and of older
women walking long distances in search of food and water, often resorting to binding their
96
stomachs to stave off hunger.
A combination of two failed rainy seasons and years of internal violence and conflict resulted in
97
some areas of Somalia entering famine. According to UN estimates, the rate of malnutrition
increased in southern and central Somalia from 16.4 percent before the event to 36.4 percent in
98
2011. In those regions, armed conflict was already impacting children, households and
99
communities. According to analysis of deaths among Somalis both within southern and central
Somalia and also in the refugee camps in Ethiopia and Kenya, There is consensus that the
humanitarian response to the famine was mostly late and insufficient, and that limited access to
most of the affected population, resulting from widespread insecurity and operating restrictions
100
imposed on several relief agencies, was a major constraint.
In July 2011, an estimated 1.5 million people20 percent of the total populationwere
101
displaced within Somalia, which played a role in destabilisation across the region. Some
Somalis fled to drought-affected regions of Kenya and Ethiopia, such that a further 600,000
102
refugees were estimated to be located there. The conditions in refugee camps were
extremely difficult; malnutrition rates in Kenyas Dadaab camp and Ethiopias Dollo Ado camp
103
were 37 percent and 33 percent, respectively. In Kenya, political insecurity compounded the
problem, and challenged humanitarian operations in Dadaab, preventing access to about
104
463,000 refugees for weeks at a time.
18
2.3.3.3. National failures and space for regional solutions
Regional early warning systems predicted the impending drought in Ethiopia, Somalia, Djibouti
and northern Kenya through the Food Security and Nutrition Working Group for East Africa,
which then set up a La Nia task force to deal with impacts associated with the phenomenon. A
series of alerts and warnings were issued. However, as Jan Egeland, UN Emergency Relief
Coordinator (20032006) observed:.
Early signs of an oncoming food crisis were clear many months before the emergency reached
its peak. Yet it was not until the situation had reached crisis point that the international system
started to respond at scale".
Slow and inadequate reactions to the warnings caused delays and large-scale responses by
governments and international agencies only occurred after malnutrition rates in parts of the
105
region exceeded emergency thresholds.
2.3.4. Conclusion
The impact of two consecutive poor rainy seasons in 2010 and 2011 on top of a general
drying trend across much of the region over several decades was devastating for East Africa.
The coincidence of conflict and a lack of central government control in Somalia created a
refugee crisis that spread into Kenya and Ethiopia. The arrival of tens of thousands of people
put huge pressure on the large refugee camps, which already held hundreds of thousands. The
majority of people impacted in this crisis were the most vulnerable members of their societies, in
particular children, women, and pastoralists. The responses to predictions and warnings of
drought were poor, compounded by wider governance issues.
19
2.4 TYPHOON HAIYAN IN THE
PHILIPPINES, 2013
2.4.1. Description
Typhoon Haiyan, known as Typhoon Yolanda in the Philippines, developed in the tropical
Pacific in early November 2013 and tracked westwards. It made landfall in the Philippines on 7
November, hitting regions 6, 7 and 8 including the provinces of Guiuan, Eastern Samar; Tolosa,
Leyte Province; Daanbantayan and Bantayan Island, Cebu Province; Concepcion, Iloilo
111
Province (Panay Island); and Palawan Island. In total nine regions comprising 44 provinces
were affected.
When it reached the Philippines, Haiyan was an exceptionally strong cyclone, classified as the
112
highest category (5) on the SaffirSimpson hurricane scale. Cyclones are low-pressure
systems, and the central pressure of this storm was extremely low, estimated at 895 mb. The
winds were particularly strong, with sustained speeds near 195mph when averaged over a
113
minute, making it probably the strongest tropical cyclone ever recorded to make landfall. The
114
high wind speed was combined with storm surges, which caused waves as high as 15m.
20
2.4.2. Significance
The devastation resulting from this typhoon was partly a result of its unusual intensity. However,
as the third in a series of storms that struck the country in less than 12 months, it also
115
compounded existing damage. Losses and damages associated with the cyclone are still
being recorded, but it is predicted that the total sum could reach $23bn. The human costs are
116
also significant, with 11.3 million people affected across nine regions, and 4.1 million
117
displaced. The impact on key infrastructure, fishing and essential crops required for
livelihoods, especially rice, has raised the possibility of a significant food security tragedy for the
Philippines most vulnerable people.
2.4.3. Narrative
2.4.3.1. Overwhelming livelihood and property damage
Haiyan hit the poorest provinces in the country. The typhoon resulted in an estimated 5.9 million
118
workers losing their livelihoods, as income sources were destroyed, lost or disrupted. The
119
typhoon also damaged roughly 1.1 million houses, and destroyed another 550,000. There
120
was widespread damage to rural infrastructure, including irrigation systems, and an estimated
121
600,000 hectares of agricultural land were destroyed. Destruction of roads and blockages
from fallen trees hampered assistance to more remote areas. State infrastructure suffered great
damage, including destruction of citizens records.
The Philippine Department of Agriculture estimated that about 150,000 farming households and
some 50,000 fishing households (accounting for roughly 400,000 people) were directly affected
122
by the typhoon. Including indirect impacts of the disaster, over one million farmer and fishing
123
households will require direct assistance with their livelihoods.
Fishing communities the poorest sector of society suffered particularly badly from the
destruction of physical assets, with 65 percent losing their productive assets and 28,000, mainly
124
small-scale, fishing boats destroyed.
Important crops, including coconuts and rice, suffered extensive damage. In the hardest hit part
of the country, region 8, comprising the provinces of Leyte, Samar and Biliran, 33 million
coconut trees were destroyed, effectively eliminating the livelihoods of the coconut farmers for
125
the next six to nine years. Coconut growers are the poorest sector of the agricultural
workforce. Nationally the production of coconuts and sugar dropped dramatically, such that the
126
Philippines were unable to meet self-sufficiency targets and export quotas.
As the typhoon struck between two farming seasons, it severely affected ready-to-harvest,
127
harvested and newly planted rice, in addition to destruction of seeds and tools. A total of
67,000 hectares of rice crops were destroyed, which reduced production by 131,600 tonnes.
This is serious for food security, as rice provides half of the Philippines food energy
requirements. The Eastern Visayas region lost one third of its rice stocks.
The expected production shortfalls, accompanied by rising imports and limited government rice
reserves, left millions of households vulnerable to food insecurity from a period of sustained
high prices. The threat was moderated because the government implemented immediate
measures to ensure that the price increase was regulated. The poorest people in the country
spend 30 percent of their income on rice; in the six months directly following the typhoon, the
128
worst-affected communities were predicted to experience income drops of 25 percent.
21
2.4.3.3. Ineffective governance and the impact on the most vulnerable
The Philippines is used to dealing with typhoons. The country experiences an average of 20
typhoons per year, and along with floods, landslides, droughts, volcanic eruptions, earthquakes
129
and tsunamis this makes it one of the most disaster-prone countries in the world. In fact
Typhoon Haiyan hit whilst the government was three weeks into disaster responses elsewhere -
to a 7.2 magnitude earthquake in Bohol and dealing with internal displacement due to conflict in
Zamboanga. In the case of Typhoon Haiyan, the storm did not deviate from a direct route and
regular timely and accurate warnings were issued by the Philippines Meteorological
Department. It is estimated that disaster preparedness and speedy evacuations helped save at
least 800,000 lives.
However, the colossal destructive power of the typhoon was unlike any previously witnessed.
Furthermore, although the meteorological service was putting out frequent alerts to communities
to warn them about a pending storm surge up to 7m high, communities did not necessarily
understand that storm surge meant a huge wave like a miniature tsunami that inflicted
enormous damage to coastal communities even some distance inland. The perceived threat of
stronger typhoons, and storm surges enhanced by sea-level rise, were behind calls by the
government of the Philippines for the world to combat climate change in the aftermath of the
130
typhoon.
Assessments suggested that approximately 5.6 million people required emergency food
assistance and support to prevent food insecurity in the short and long term, or the restoration
131
of their agricultural and fishing livelihoods in the long term. If livelihoods are not quickly
restored, those affected will need to live on food aid until the next potential growing season in
132
October 2014. It is concerning that only 17 percent of all emergency response and restoration
133
activities currently aim to restore livelihoods.
Domestic institutional barriers hinder the coordination of relief and early recovery efforts as well
as effectiveness of long-term responses. Government inefficiencies need to be tackled, in order
to deal with negative pressure on vulnerable groups, such as women and children. As of June
2014 there was need for food aid for 145,000 children, micronutrient supplementation for an
additional 100,000, and treatment for acute malnutrition for a further 27,000. Unfortunately, the
134
deficit of skilled workers in the field is hampering the scale-up of such nutrition activities. In
addition, the distribution of assistance to affected farmers in more remote areas, such as
135
highlands, has been either limited or absent, and some members of minority indigenous
136
communities have reported discrimination in the delivery of assistance.
2.4.4. Conclusion
The frequency and intensity of typhoons in the Philippines, and the devastation caused by
Typhoon Haiyan, has potential implications for food security if reconstruction efforts are not
extensive and effective. The regeneration of livelihoods for farmers, especially considering the
role of rice in providing for the poor, is essential, as is the restoration of fisheries. Social
protection systems and longer-term preparedness measures should be implemented by
domestic actors and international partners as key elements to strengthen resilience.
22
2.5 CASE STUDY SUMMARY MATRIX
Russia heat The Russian Hoarding of The government began Global wheat prices rose Domestically,
wave government food supplies transferring wealth from dramatically, up 85 increase in poverty
responded to the and price grain to livestock percent year-on-year in was felt by
crisis by banning gouging by producers. It also April 2011. Possible link agricultural workers
wheat exports. speculators encouraged private to political upheaval in and women.
compounded insurance to avoid the Middle East. Internationally,
the crisis. The rebuilding costs in the Domestic price rises for among Arab Spring
informal sector future. subsistence goods nations, Egypts
often filled a resulted in poverty hungry protestors
void. increases. may have suffered
the greatest
impact.
Pakistan The newly Some coercive The response was largely There was an 80 percent Those affected by
flood established landlords took determined by increase in wheat and the flood were
decentralized advantage of geographic location. The rice prices in 2010. disproportionately
disaster smallholders south experienced poorer landless tenants
management and other flood- emergency response and farmers. 70
system was not affected people. than the north. percent lost at least
ready for an Alongside 50 percent of their
event of such neglect of income. In addition,
magnitude but infrastructure, 53 percent of
equally some flooding women were found
Pakistans central was the result to be food
government did of deliberate insecure,
not take a lead breaches by compared with 43
role. wealthy percent of the total
landowners. population.
East Africa Ethiopia was best Across the Regional early warning Food prices reached Children under five
drought prepared, with region a six- signs were not heeded as record levels in several years of age were
pre-positioned month delay in required. Significant markets. Each country disproportionately
state-sponsored the large-scale pressure on large refugee had a symbol of the affected,
safety nets. international camps in Kenya and crisis: wheat in Ethiopia, accounting for over
Kenya and domestic Ethiopia. maize in Kenya, and red half of all deaths in
experienced aid effort due to sorghum in Somalia. Somalia. Women
political a general and pastoralists
distractions. culture of risk were also
Somalia had no aversion and in impacted. There
effective central/southern was huge swelling
governance Somalia, in already cramped
structures, wariness of the refugee camps.
responded too political
late, and entered situation and
famine. risks posed by
armed groups.
23
Philippines Central Distribution of Only 17 percent of total Extensive damage to two Farming and
typhoon government assistance to (international + national) consecutive farming fishing
issued warnings affected farmers recovery projects aim to seasons led to higher rice communities.
and local in more remote restore livelihoods. prices. Women, children
governments are areas either and some ethnic
prepared for limited or absent. The government made minorities faced
typhoons but not Loss of citizens an urgent plea to the discrimination
on this scale, and records and international community with aid
storm surges documents. to combat climate distribution.
were new and not Resettlement of change in response.
understood. A fishing
lack of support for communities
the resumption of inland risks
government depriving them of
services. livelihoods.
Insufficient
human resources
hampered
nutritional goals.
24
SECTION 3: RELEVANCE OF
CLIMATE CHANGE
The four case studies analysed in Section 2 illustrate how extreme weather events can lead to
widespread disturbances in food security. This section will consider how climate change might
complicate this situation, by asking:
How has the frequency and magnitude of such extreme weather events changed in the
recent past and how might it change in future?
Has climate change altered the risk of these extreme events occurring?
If there are more intense extreme weather events more often, how might this affect food
security?
First, we will discuss the association between human greenhouse gas emissions and extreme
events, including a summary of the evidence about the potential role of climate change in each
of the four extreme weather events focused on in this report. Then, we will consider illustrative
scenario analyses of potential future risks including the potential implications of such changes
for food security.
All extreme events have unique causes, in the sense that a combination of natural variability
and external climate drivers lead to a specific event. Therefore it is not possible to say exactly
how climate change will affect specific events such as heat waves in Russia, flooding in
Pakistan, droughts in East Africa, or typhoons in the Philippines. However, it is possible to say
how the likelihood of the types of events we understand and can model reliably heat waves,
floods, certain droughts have changed due to climate change. But because of the fact that
specific extreme events are caused by multiple local factors, as detailed in Section 2,
statements attributing changes in the risk of an event to climate change have to be done on a
case-by-case basis. On a global scale it can furthermore be said that the magnitude and
frequency of heat waves and extreme precipitation events will increase, simply because of the
increasing global temperatures and the ability of warmer air to hold more water vapour.
However, as the global atmospheric circulation is expected to change as well only the increased
risk of heat waves can be transferred from global to local and regional scales.
Against this background the scenarios explored at the end of this section are purely illustrative
and cannot be assessed with respect to their likelihood of occurring in the future. However, from
a climate scientific point of view all scenarios are plausible.
25
3.1. THE LINK BETWEEN EXTREME
WEATHER EVENTS AND CLIMATE
CHANGE
The influence of greenhouse gases on the climate system is unequivocal. We know that global
temperatures rose during the 20th century due to human emissions, and are very likely to rise in
137
future. Understanding the influence of greenhouse gases and global warming on extreme
weather events is more difficult. This is partly because extreme events are, by definition, rare,
and so data are limited, and partly because of natural variability in the climate system. Extreme
weather has always occurred, and natural variability will continue to influence weather in future.
However, scientists expect that emissions from fossil fuels will alter the frequency and intensity
of extreme weather events, and there is an increasing amount of evidence to support this.
There are two related areas of enquiry that can help us to understand the link between weather
and climate change:
1. Trends in extreme events, about which we can draw some conclusions from basic physics,
historical observations, and model experiments exploring future climate scenarios;
2. While it will never be possible to confidently state that an event would not have occurred
without human-induced climate change, probabilistic event attribution studies (PEA), that
consider whether and to what extent climate change altered the magnitude of and the risk of
such an event occurring, can be conducted for certain types of extreme event.
In a seminal 2004 paper, Stott et al. developed a method of PEA, and showed that climate
138
change doubled the risk of the record-breaking 2003 European heat wave. Since this time,
improvements to climate models and the methodology have allowed for the demonstration of
links positive or negative, or the absence thereof between some specific extreme events
and anthropogenic climate change.
While climate models have greatly improved in recent years, with the increase in spatial
resolution making the representation of extreme weather much better, their capability to
simulate such events varies. Robust attribution statements can be made for heat waves and
extreme precipitation events, and, to a certain degree, droughts. The influence of climate
change on individual hurricanes and typhoons is, however, not analyzable with current research
tools.
In the following sections, we will consider evidence that might shed light on the association
between climate change and each of the weather events discussed in Section 2.
26
3.1.1. Russias 2010 heat wave
There is strong evidence that anthropogenic greenhouse gases are causing average
140
temperatures to rise globally and regionally. Changes in heat wavesdefined as spells of
days with temperatures above a threshold determined from historical climatologyhave been
141
linked to climate change. According to the most recent assessment report from the IPCC, it is
likely that human influence has substantially increased the probability of heat waves in some
locations, and made it very likely that heat waves will occur more often and last longer in
142
future.
Several studies have investigated the role of climate change in the 2010 Russian heat wave.
143 144
Dole et al. suggest that it was mainly natural in origin, while Rahmstorf and Coumou
found that human-induced climate change made its occurrence more likely. They show that
observed warming in western Russia is more than twice the global mean warming and estimate
that this warming trend has increased the number of records expected in the past decade five-
145
fold. Otto et al. demonstrated that these results are not contradictory: the magnitude of the
heat wave was no different from what would be expected from natural variability, but climate
change did indeed increase the probability of it occurring.
Abdul Majid Khan, Oxfams Programme Manager for Disaster Risk Reduction and Climate
Change in Pakistan, observed: later every year now for four or five years. Farmers
can't grow crops - either crops don't mature because the rains are late or in other areas people
are about to pick the crops when the rain starts and batters them down. And that's why we're
146
getting these floods year on year."
What happened in Pakistan in 2010 raises the question of whether we should expect changes
in rainfall events because of climate change.
147
In general, an increase in heavy rainfall is expected in a warmer atmosphere. This is
because, as temperatures rise, the atmosphere can hold more water vapour, which increases
the likelihood of heavy rainfall events. In keeping with this theory, an increase in heavy rainfall
148
has been observed globally as the hydrological cycle intensifies. Therefore we are seeing,
and can confidently expect, more heavy rainfall events globally because of climate change.
In general, in Pakistan wet months have been becoming wetter and dry months drier from 1991
149
onwards compared to the previous 30-year period with wetter summers and drier winters.
150
Broadly, mean rainfall has increased in the North and declined in the South since 1960 and
151
the number of heavy rainfall events has increased. However, how climate change will affect
rainfall events in any particular location is another question. The controls on rainfall are often
very complicated. The extremely heavy rainfall leading to Pakistans flooding in 2010 was
associated with large-scale low pressure systems linked to the Russian heat wave and La Nia.
It is as yet very difficult to say how climate change might affect the dynamics of these large
scale processes.
27
152
There has been one paper which considered the attribution of the 2010 floods. The authors
found that the model they employed was not able to provide reliable results for this event.
Without further research using other models, and/or improvements in model ability, it will remain
unclear whether climate change played a role in this specific case.
For East Africa, there is evidence to suggest that there has been a recent and abrupt decline in
154,155,156,157
rainfall and an increase in droughts over the last 20 to 30 years and there have been
increasing problems for food security. These decreases in rainfall have been accompanied by
158
significant increases in air temperatures. Rainfall recession increasingly encroaches onto
159
some densely populated and food surplus producing areas in the centre of Kenya. Across
East Africa there were poor rains in 2008, 2009, 2011 and 2012. In Somalia the short rains
failed in 2013 and the long rains in 2014, leading to new warnings of rising acute food
insecurity. Long season rainfall has declined across much of southern Ethiopia and Somalia,
western Uganda, eastern Kenya, Rwanda, Burundi and northern Tanzania (see Figure 5).
160
Scientists have found that this is linked to changes in the Pacific Ocean, and warming of the
161
Indian Ocean. Large warming of the Indian Ocean has been observed over the 20th
162 163
century and further strong Indian Ocean warming is projected in future. If this link proves to
be a major driver, then this suggests droughts in East Africa will continue to occur and, indeed,
become even more frequent. However, climate model projections for East Africa suggest
conditions will generally become wetter. There seems to be an inconsistency between the trend
observed in the recent past, generally drying, and the wetter futures in the climate models.
28
Figure 7: MarchAugust rainfall trends for East Africa
Source: Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) used for drought monitoring:
U.S Geological Survey and University of Santa Barbara Climate Hazards Group (2014), available at The
State of Rain,
Lott et al.(2013) have conducted an attribution study to specifically investigate the East African
164
drought in 2011. As noted in section 2.3, the drought resulted from the failure of both the
short rains in 2010 and the long rains in 2011. Lott et al. found that there is no evidence of
human influence on the failure of the first rainy season, which was mainly associated with La
Nia, a naturally occurring mode of variability with a strong influence on rainfall in East Africa.
However, the study also tested the role of climate change in the long rains of 2011, finding that
the probability of dry conditions had increased due to human influence.
Combining all these sources of evidence, it is as yet difficult to draw conclusions about links
between climate change and drought in East Africa. There is evidence that the region has
become drier over the last 30 years, and that that climate change increased the probability of
dry conditions in East Africa in early 2011. Yet models suggest the region could become wetter.
The influence of greenhouse gases on the different drivers of rainfall in this region and the
interplay between these drivers demand further investigation to better understand future drought
risk.
29
There is also low confidence in any observed long-term (i.e. 40 years or more) increases in
tropical cyclone activity (i.e. intensity, frequency, duration), after accounting for past changes in
observing capabilities.
How underlying changes towards wetter and hotter conditions and more intense rainfall will
affect the specific phenomenon of tropical cyclones or typhoons like Haiyan is still very difficult
to answer. Earlier studies suggesting an increase in the magnitude of tropical storms on a
167
global average due to higher temperatures in the tropical oceans are now thought to have not
168
incorporated crucial aspects of changes to circulation. It is also not possible to conduct
attribution studies on individual storms like Haiyan because the models used for attribution
studies cannot simulate typhoons. More complex models, which are currently too expensive to
run for attribution, show improved ability, and therefore this is expected to change.
Uncertainty in projections of tropical cyclones is also too high to draw any conclusions on future
changes in risk. Heavy rainfall associated with tropical cyclones is, however, likely to increase
169
with continued warming. According to government meteorologists the intensity of rainfall has
been increasing in most parts of the Philippines and there is a trend towards more extreme daily
170
rainfall (19512008). Furthermore, an ongoing rise in sea level is likely to heighten the
destructiveness of tropical cyclones by increasing storm surge capacity so that sea water is
carried further inland.
As outlined in the previous section, climate change will change the intensity and/or frequency of
some types of extreme weather events. Given uncertainties about future changes and the
difficulty of predicting specific events, it is valuable to consider some simple, hypothetical and
171,172
purely explorative scenarios from among many that can be considered plausible. A
consideration of the impacts of increased intensity and frequency of extreme events follows. It is
important to note that these scenarios are not mutually exclusive or exhaustive.
If health care conditions were the same, it is likely that more vulnerable people would die from
this event. Wheat and bread prices could rise above the price range that a significantly larger
group of poor Russians would be able to afford, which could have implications for political
stability. There could also be greater impacts upon countries normally dependent on Russian
wheat due to changes in supply and international prices. The Russian government might enact
a longer-lasting export ban, leading to even higher food prices nationally and worldwide. This in
turn could have an impact on countries that are currently already in political and security crises,
especially those involved in the escalating consequences of the Arab Spring. The extreme price
spike would also be affected by price speculations, which were already problematic in 2010.
30
3.2.1.2. Scenario 2
Heat waves become more frequent occurrences.
The Russian government already responded to the 2010 heat wave by encouraging a shift
toward livestock, but it is unclear whether this would be effective or rather overwhelmed by more
frequent heat waves. Insurance systems have not been seen as reliable by farmers, and with
heat waves becoming more frequent, premiums would increase. With a lack of viable safety
nets and increasingly untenable conditions for wheat production, many farmers may have to
move to other livelihood sources. Because of this, Russias future as a global bread basket
could become uncertain as temperatures rise under extreme climate scenarios, particularly from
173
the 2030s onwards. Global wheat prices would go up. Russia would need to import more
wheat from neighbouring countries. Compensation may come from wheat growing areas
extending further north with temperature changes, and adaptations may happen. But the long-
term decline of food production in this region could have significant consequences for food
security in Russia, as well as in politically unstable countries that have so far relied on it.
While responses in Punjab in 2010 were relatively quick and effective, aid efforts might be
overwhelmed by more persistent flooding, as getting help to the stricken would be nearly
impossible in conditions that remained extreme for longer. Longer floods could also destroy
more infrastructure and resources, potentially resulting in more widespread migration, as the
possibility to return and rebuild faded. Additionally, there would be a risk of greater social
injustice from local leaders, who might force affected communities to hand over their aid funds.
Potentially, a more decentralized relief response system could lead to people having a greater
say in the use of relief resources and hold local leaders accountable. However, this reform
would be challenging in politically unstable conditions, especially when trust in local leaders has
been shaken by abuse of power during previous flooding.
3.2.2.2. Scenario 2
Floods become more frequent.
Pakistan suffered further floods in 2011, 2012 and 2013 and as this report was being
completed news bulletins reported severe flooding once again in September 2014 so this
scenario must be considered highly plausible. The capacity of the government to respond to
such crises might be eroded, if they were less able to muster resources and support from the
international community due to donor fatigue. In any case, the frequency of flooding could
reduce opportunities for communities, government leaders and other sectors to improve
preparedness for such disasters. Mass migrations would raise difficult questions about land
rights, and might increase risks to vulnerable people such as women and children. In terms of
agriculture, replacing the food previously supplied by stricken regions might prove challenging,
while investment could depart from areas previously dominated by cash crops.
The drought described in the case study already caused a total collapse of agricultural
livelihoods in large areas. Further failure of crops and death of livestock could occur, affecting
areas that thus far merely saw a relative loss of productivity. This in turn would mean that
31
refugees might have even fewer possibilities to sustain themselves elsewhere. The recent
droughts saw minimal and delayed interventions from governments, even though early warning
systems were in place and functional. Similarly, international aid was already struggling to
provide sufficient water and other support, and a larger scale drought might leave more people
without any aid. It is worth considering whether warnings of a larger-scale drought would prompt
quicker and better action, based on learning from the drought described in the case study, or
whether such a drought would still trigger a limited or no response.
3.2.3.2. Scenario 2
East Africa experiences droughts for a number of consecutive years
The climate of East Africa may well be becoming even more extreme, and some scientists
174
suggest that the trend towards increasing drought is likely to continue in future. Since there
has been an increase in drought associated with Indian ocean warming (see section 3.1) Such
a scenario is therefore plausible. As with Scenario 1, such a scenario would lead to further
failure of crops and livestock, with areas possibly becoming permanently unsuitable for
agriculture and even pastoralism. This would lead to more permanent refugees, with migration
adding to long-term political instability. The international response to a long-term crisis of this
nature is unsure. In this context it is disturbing to note that as this report was about to be
published (September 2014), the UN warned that Somalis had once again suffered two failed
rainy seasons resulting in poor harvests, water shortages and livestock deaths. Food prices
were surging because of drought and conflict blocking roads and impeding trade routes and for
175
the first time since 2011, more than one million people were in need of food aid.
By the time Haiyan reached Vietnam, it had been downgraded to a tropical storm. However, a
stronger typhoon could potentially not only wreak greater devastation in the Philippines, but
might continue onwards to cause destruction in mainland South-East Asia. The loss of more
infrastructure over a wider area could have a negative impact on the availability and reach of
aid. Further, the widespread destruction of infrastructure would likely reduce the ability of
governments and international organizations to help. A more extreme typhoon would more
generally pose a greater challenge in terms of funding and manpower both for the national
government and international aid. Recovery would take longer, leaving vulnerable groups at risk
for a longer period. If the typhoon severely affects multiple South-East Asian countries,
international aid resources may be divided along political allegiances.
3.2.4.2. Scenario 2
Typhoons like Haiyan become frequent events.
The increase in disasters would likely lead to widespread displacement, severely affect attempts
to rebuild rural livelihoods, and increase the scale and number of the social injustices that
emerged during the Haiyan crisis, such as domestic violence and discrimination against certain
social groups. More generally, there is a risk that remote areas could be left totally to their own
devices, while the capacities and resources of global aid efforts might be exhausted by the
accelerating cycle of destruction and reconstruction. This may lead to local communities
becoming increasingly adaptive out of necessity, and local adaptation strategies could be
supported by international programs, but it may also lead to a desertion of such remote areas.
With increasingly frequent typhoons, strategic considerations of the leading global economic
powers about influence in the Philippines and South-East Asia in general may impact the
176
provision of aid resources.
32
3.2.5. Conclusion
Hypothetical scenarios based on increasing intensity and frequency of extreme weather events
raise important questions about the possible future impacts of climate change. This is especially
important in relation to their interactions with socioeconomic and governance conditions, the
potential for adaptive capacities to be overwhelmed, and the circumstances in which vulnerable
communities can be driven to extremes. Several scenarios presented here indicate strong
consequences for political stability which could heavily exacerbate humanitarian crises.
Therefore, strategies for climate change adaptation and coping with extreme weather events
should be considered in the context of plausible, multi-dimensional scenarios, and involve
multiple domains of decision making.
33
3.3. HYPOTHETICAL SCENARIOS
SUMMARY MATRIX
Nature of Vulnerable Impact pathways Governance and
weather groups socio-economic dimension
events
Russia
Scenario 1 More intense Food price spike. Lost Greater domestic and
heat wave, livelihoods for greater number international instability.
affecting of farmers.
larger
geographic Farmers, poor
area. Russians, poor
individuals in
import
dependent
Scenario 2 Heat waves countries. How would farmers cope with Effects of a forced transition
become more repeated losses in absence of to other sources of food
frequent. trust in governments? Shifts to import on unstable countries
other commodities/livelihoods. previously dependent on
Russia impracticable as a Russian wheat.
global bread basket.
Pakistan
East Africa
34
Scenario 2 Droughts for Areas become permanently Migration adding to long-
a number of unsuitable for agriculture and term political instability?
consecutive pastoralism. More permanent How would the global
years. refugees. community respond to long-
term crisis?
Philippines
35
SECTION 4: POLICY
RELEVANCE AND
CONCLUDING REMARKS
Each of the case studies reflects the fact that extreme weather events have played an important
role in the destabilization of both short- and long-term food security, with impacts on various
aspects of life. In all cases, the impacts left citizens vulnerable and authorities unprepared.
While direct measures such as emergency preparedness and the strengthening of response-
related institutions would be helpful, this study identifies the need for a wider cultural shift in
many countries facing both food security issues and extreme weather events. More attention to
vulnerable groups and inequalities are required in these societies, going far beyond technical
improvements to equipment or redirected funding. At the very heart of climate justice is the
promise that those who are most vulnerable will not bear the heaviest share of the burden when
disasters inevitably strike.
36
NOTES
Web resources last accessed May 2014, unless otherwise specified
1 See S. Johnstone and J. Mazo (2011) Global Warming and the Arab Spring, Survival: Global Politics
and Strategy 53(2): 1117,--
global-politics-and-strategy-april-may-2011-fbe8/53-2-03-johnstone-and-mazo-9254; J. Swinnen and K.
Van Herck (2013) Food Security and Sociopolitical Stability in Eastern Europe and Central Asia in
C.B. Barrett (ed.) Food Security and Sociopolitical Stability, Oxford: Oxford University Press; A.
Savelyeva (2014) Food as a Weapon,
2 T. Carty and J. Magrath (2013) Growing Disruption: Climate change, food, and the fight against
hunger,-
the-fight-against-hunger-301878
3 G. McBean (2004) Climate change and extreme weather: a basis for action, Natural Hazards 31(1):
177190.
4 T. Cannon (2008) Reducing Peoples Vulnerability to Natural Hazards, WIDER Research Paper 34,
Helsinki.
5 P. Blaikie, T. Cannon, I. Davis and B. Wisner (1994) At Risk: Natural hazards, peoples vulnerability,
and disasters, London: Routledge.
6 D. Barriopedro, E.M. Fischer, J. Luterbacher, R.M. Trigo and R. Garca-Herrera (2011) The hot
summer of 2010: redrawing the temperature record map of Europe, Science 332(6026):2204,
7 R. Dole, M. Hoerling, J. Perlwitz, J. Eischeid, P. Pegion, T. Zhang, X.-W. Quan, T. Xu and D. Murray
(2011) Was there a basis for anticipating the 2010 Russian heat wave?, Geophysical Research
Letters 38: L06702,
8 K.E. Trenberth and J.T. Fasullo (2012) Climate extremes and climate change: The Russian heat wave
and other climate extremes of 2010, Journal of Geophysical Research: Atmospheres
117(D17):D17103,
9 Ibid.
10 Oxfam International (2011) Extreme Weather Endangers Food Security, Oxfam Media Briefing, 28
November,-
final.pdf
11 G. Welton (2011) The Impact of Russias 2010 Wheat Export Ban, Oxford: Oxfam International,
12 D. Guha-Sapir, F. Vos, R. Below and S. Ponserre (2011) Annual Disaster Statistical Review 2010: The
numbers and trends, Centre for Research on the Epidemiology of Disasters: Louvain-la-Neuve,
13 S.K. Wegren (2011) Food Security and Russias 2010 Drought, Eurasian Geography and Economics
52(1):140156.
14 D. Coumou and S. Rahmstorf (2012) A decade of weather extremes, Nature Climate Change
2(7):491496,
15 C.E. Werrell, F. Femia and A.-M. Slaughter (2013) The Arab Spring and Climate Change: A Climate
and Security Correlations Series, Washington, D.C.: Center for American Progress,-
change/
16 D. Guha-Sapir, F. Vos, R. Below and S. Ponserre (2011) op. cit.
17 D. Coumou and S. Rahmstorf (2012), op. cit.
18 G. Welton (2011), op. cit.
19 S.K. Wegren (2011), op. cit.
20 G. Welton (2011), op. cit.
21 S.K. Wegren (2011), op. cit.
22 FAO FAO Agriculture and Trade Policy Background Note Russia,
23 G. Welton (2011), op. cit.
24 Ibid.
25 See FAO, op. cit.; J. Salputra, M. van Leeuwen, P. Salamon, T. Fellmann, M. Banse, and O. von
Ledebur, eds. T. Fellmann, O. Nekhay and R. Mbarek (2013) The agri-food sector in Russia: current
situation and market outlook until 2025, JRC Scientific and Policy Reports, Luxembourg: Publications
Office of the European Union,
26 G. Welton (2011), op. cit.
27 S.K. Wegren (2011), op. cit.
37
28 Ibid.
29 D. Ukhova (2013) After the Drought: The 2012 Drought, Russian farmers, and the challenges of
extreme weather events, Oxford: Oxfam International,
30 See Oxfam International (2011), op. cit.,and IFPRI/Concern Worldwide/Welhungerhilfe (2011) Global
Hunger Index. The Challenge of Hunger: Taming Price Spikes and Excessive Food Price Volatility,
p.29,
31 Oxfam International (2011), op. cit.
32 Ibid.
33 G. Welton (2011), op. cit.
34 United States Department of Agriculture (USDA) (2011),
35 G. Welton (2011), op. cit.
36 Ibid.
37 C. Hendrix and H. Brinkman (2013) Food Insecurity and Conflict Dynamics: Causal Linkages and
Complex Feedbacks, International Journal of Security & Development 2(2): 26, pp.118,
38 J. Kinninmont (2011) Bread And Dignity, The World Today, AugSep issue, pp.3133,
39 C.E. Werrell, F. Femia and A.-M. Slaughter (2013), op. cit.
40 G. Welton (2011), op. cit.
41 USDA (2011), op. cit.
42 Goddard Earth Sciences Data and Information Services Center (2010) Flooding in Pakistan caused by
higher-than-normal monsoon rainfall, NASA,
43 M. Straatsma, J. Ettema and B. Krol (2010) Flooding and Pakistan: causes, impact and risk
assessment, University of Twente Faculty of Geo-Information Science and Earth Observation,
44 C-C. Hong, H-H. Hsu, N-H. Lin and H. Chiu (2011) Roles of European blocking and tropical-
extratropical interaction in the 2010 Pakistan flooding, Geophysical Research Letters 38(13): L13806,
45 Thomas Reuters (2013) Pakistan Floods, 8 April 2013,-
2010
46 Disasters Emergency Committee Pakistan Floods Facts and Figures-
floods-facts-and-figures
47 C-C. Hong, H-H. Hsu, N-H. Lin and H. Chiu (2011), op. cit.
48 Ibid.
49 Ibid.
50 C. Fair (2011) Pakistan in 2010, Asian Survey 51(1): 97110,
51 Ibid.
52 World Food Programme (2010) Pakistan Flood Impact Assessment,
53 N. Gronewold and Climatewire (2010) Is the Flooding in Pakistan a Climate Change Disaster?,
Scientific American, 18 August,
54 S. Chughtai and C. Heinrich (2012) Pakistan Flood Emergency: Lessons from a continuing crisis,
55 C. Fair (2011), op. cit.
56 World Food Programme (2010), op. cit.
57 Ibid.
58 Ibid.
59 C. Fair (2011), op. cit.
60 Ibid.
61 World Food Programme (2010), op. cit.
62 F. Naqvi and H. Gazdar (2011) My Land, My Right: Putting land rights at the heart of the Pakistan
floods reconstruction, Oxford: Oxfam International,-
land-my-right-putting-land-rights-at-the-heart-of-the-pakistan-floods-recons-133790
63 S. Chughtai and C. Heinrich (2012), op. cit.
64 F. Naqvi and H. Gazdar (2011), op. cit.
38
65 FAO and Pakistan Space & Upper Atmosphere Research Commission, Government of Pakistan
SUPARCO, Monitoring of 2010 Floods in Pakistan.
66 Ibid.
67 Ibid.
68 World Food Programme (2010), op. cit.
69 Ibid.
70 F. Naqvi and H. Gazdar (2011), op. cit.
71 S. Chughtai and C. Heinrich (2012), op. cit.
72 World Food Programme (2010), op. cit.
73 F. Naqvi and H. Gazdar (2011), op. cit.
74 House of Commons International Development Committee (2011) The Humanitarian Response to the
Pakistan Floods, Seventh Report of Session 201012.
75 S. Chughtai and C. Heinrich (2012), op. cit.
76 See M. Semple (2011) Breach of Trust: Peoples experiences of the Pakistan floods and their
aftermath, July 2010July 2011, Islamabad: Pattan Development Organisation,
The Supreme Court Commission of Inquiry found that the major breaches that occurred happened
because of failure of infrastructure rather than deliberate decisions, but Semple says flood affectees
testimony provides multiple examples of the deliberate breaching of secondary infrastructure.
77 S. Hastenrath, D. Polzin and C. Mutai (2011) Circulation Mechanisms of Kenya Rainfall Anomalies,
Journal of Climate 24(2):404412,
78 B. Lyon and D. DeWitt (2012) A recent and abrupt decline in the East African long rains, Geophysical
Research Letters 39(2): L02702,
79 F.C. Lott, N. Christidis and P.A. Stott (2013) Can the 2011 East African drought be attributed to
human-induced climate change?, Geophysical Research Letters 40(6):11771181,
80 C. Funk (2011) We thought trouble was coming, Nature 476(7):7,
81 Food and Agricultural Organization of the United Nations (2011) Drought-related food insecurity: A
focus on the Horn of Africa, 25 July,-
0f9dad42f33c6ad6ebda108ddc1009adf.pdf
82 Ibid.
83 S. Mack Smith (2012) Food Crisis in the Horn of Africa, Oxford: Oxfam International,-
practice.oxfam.org.uk/publications/food-crisis-in-the-horn-of-africa-progress-report-july-2011-july-2012-
231613
84 Food Security and Nutrition Analysis Unit Somalia, Mortality among populations of southern and
central Somalia affected by severe food insecurity and famine during 20102012, FAO/FSNAU-
S/FEWSNET, 2 May 2013.
85 Food and Agricultural Organization of the United Nations (2011), op. cit.
86 Oxfam (2011), op. cit.
87 S. Mack Smith (2012), op. cit.
88 Food and Agricultural Organization of the United Nations (2011), op. cit.
89 S. Mack Smith (2012), op. cit.
90 Oxfam (2011), op. cit.
91 S. Mack Smith (2012), op. cit.
92 Ibid.
93 ActionAid (2012) East Africa drought questions and answers,-
do/emergencies-conflict/east-africa-drought/east-africa-drought-questions-and-answers
94 Unicef (n.d.) Horn of Africa Famine,
95 Unicef (2012) Response To The Horn Of Africa Emergency: A continuing crisis threatens hard-won
gains, Regional Six-Month Progress Report,-
Month-Report-April-2012.pdf
96 ActionAid (2012), op. cit.
97 S. Mack Smith (2012), op. cit.
98 S. Tisdall (2012) East Africas drought: the avoidable disaster, the Guardian, 18 January,
99 D. Hillier and B. Dempsey (2012) A Dangerous Delay: The cost of late response to early warnings in
the 2011 drought in the Horn of Africa, Oxford: Oxfam International and Save the Children,
39-
warnings-in-the-2011-droug-203389
100 Food Security and Nutrition Analysis Unit Somalia, op.cit.
101 S. Mack Smith (2012), op. cit.
102 Unicef (2012), op. cit.
103 Food and Agricultural Organization of the United Nations (2011), op, cit.
104 Unicef (2012), op. cit.
105 S. Tisdall (2012) op. cit.
106 D. Hillier and B.Dempsey (2012), op.cit.
107 H. McGray, S. Elsayed, H. McGray and A. Dixit (2011) Famine in the Horn of Africa, 24 August,
World Resources Institute,
108 D. Hillier and B. Dempsey (2012), op. cit.
109 Ibid.
110 Ibid.
111 T. Lum and R. Margesson (2014) Typhoon Haiyan (Yolanda): U.S. and International Response to
Philippines Disaster, Congressional Research Service,
112 'US Navy Joint Typhoon Warning Center (2013) Prognostic Reasoning for Super Typhoon 31W
(Haiyan) Nr 14. November 6,
113 Met Office (2013) Typhoon Haiyan,
114 BBC News (2013) Typhoon Haiyan: Before and after the storm, 13 November,
115 Q. Schiermeier (2013) Did climate change cause Typhoon Haiyan?, Nature, 11 November,
116 World Food Programme (2013) Food Security and Agriculture Cluster coordination in response to
Typhoon Haiyan (Yolanda) in the Philippines,-
cluster-coordination-response-typhoon-haiyan-yolanda-philippin
117 OCHA Philippines (2014a) Philippines: Typhoon Haiyan. Situation Report No. 31 (as of 10 January
2014), 10 January,
014_final.pdf
118 Ibid.
119 Ibid.
120 World Food Programme (2013), op. cit.
121 Food and Agricultural Organization of the United Nations (2013) FAO Delivers Seed in Time for
Planting Season, 31 December,-
detail/en/c/211834/
122 Food Security Cluster (n.d.) Country: Philippines,
123 World Food Programme (2013), op. cit.
124 Food Security Cluster. op. cit.
125 Food Security Cluster. op. cit.
126 J. Di Nuzio (2013) Long-Term Food Security Risk in Philippines after Typhoon Haiyan,
FutureDirections International, 27 November,-
water-crises/28-global-food-and-water-crises-swa/1455-long-term-food-security-risk-in-philippines-
after-typhoon-haiyan.html
127 Food and Agricultural Organization of the United Nations (2013), op. cit.
128 J. Di Nuzio (2013), op. cit.
129 Centre for Research on the Epidemiology of Disasters Louvain, Universit Catholique de Brussels
(CRED) EM-DAT: The OFDA/CRED International Disaster Database.
130 J. Upton (2013) Philippines blames climate change for monster typhoon, Grist,
131 OCHA Philippines (2014b) Philippines: Typhoon Haiyan. Situation Report No. 33 (as of 20 January
2014), 10 January,
aiyanSitrepNo.33.20Jan2014%20%281%29.pdf
132 Food and Agricultural Organization of the United Nations (2013), op. cit.
133
aiyanSitrepNo.33.20Jan2014%20(1).pdf
134 OCHA Philippines (2014a), op. cit.
40
135 Food Security Cluster, op. cit.
136 OCHA Philippines (2014b), op. cit.
137 IPCC (2013) Summary for policymakers, Firth Assessment Report of the Intergovernmental
Panel on Climate Change, Cambridge University Press: Cambridge,
138 P.A. Stott, D.A. Stone and M.R. Allen (2004) Human contribution to the European heatwave of 2003,
Nature 432:610614,
139 M.R. Allen (2003) Liability for climate change Nature, 421:891892.
140 IPCC (2013), op. cit.
141.) (2012) Managing the Risks of Extreme Events
and Disasters to Advance Climate Change Adaptation, a Special Report of Working Groups I and II of
the Intergovernmental Panel on Climate Change (IPCC), Cambridge University Press: Cambridge,
p.582.
142 IPCC (2013), op. cit.
143 R.M. Dole, J. Hoerling, J. Perlwitz, P. Eischeid, T. Pegion, X.W. Zhang. T.X Quan and D. Murray
(2011), op. cit.
144 S. Rahmstorf and D. Coumou (2011) Increase of extreme events in a warming world, Proceedings of
the National Academy of Sciences of the United States of America 108(44):179059,
145 F.E.L. Otto, N. Massey, G.J. van Oldenborgh, R.G. Jones and M.R. Allen (2012) Reconciling two
approaches to attribution of the 2010 Russian heat wave, Geophysical Research Letters 39(4):15,
146 J. Magrath (2013) Four years of floods have left Pakistan hungry getting real about the human cost
of climate change-
and-hunger
147 IPCC (2013), op. cit.
148 K. Marvel and C. Bonfils (2013) Identifying external influences on global precipitation, PNAS 110(48)
1930119306,
149 Past seasonal rainfall pattern in Pakistan 19611991 and 19912010 information from Pakistan
Meteorological Department provided to Oxfam Pakistan for post-flood risk analysis, May 2010.
150 World Bank Climate Change Knowledge portal, accessed 14 July 2014,
151 Ibid.
152 N. Christidis, P.A. Stott, A.A. Scaife, A. Arribas, G.S. Jones, D. Copsey, J.R. Knight and W.J. Tennant
(2013). A New HadGEM3-A-Based System for Attribution of Weather- and Climate-Related Extreme
Events, Journal of Climate, 26(9), 27562783.-
00169.1
153.)], p.582, Cambridge and New
York: Cambridge University Press
154 C. Funk, M.D. Dettinger, J.Michaelsen, J.P. Verdin, M.E. Brown, M. Barlow and A. Hoell (2008)
Warming of the Indian Ocean threatens eastern and southern African food security but could be
mitigated by agricultural development, Proceedings of the National Academy of Sciences of the United
States of America, 105(32), 110816.
155 A.P. Williams, C. Funk, J. Michaelsen, S.A. Rauscher, I. Robertson, T.H.G. Wils, M. Koprowski, Z.
Eshetu and N.J. Loader (2011) Recent summer precipitation trends in the Greater Horn of Africa and
the emerging role of Indian Ocean sea surface temperature Climate Dynamics 39(910): 23072328,
156 A.P. Williams, and C. Funk (2011) A westward extension of the warm pool leads to a westward
extension of the Walker circulation, drying eastern Africa, Climate Dynamics 37(1112): 24172435,
157 B. Lyon and D.G. DeWitt (2012) A recent and abrupt decline in the East African long rains,
Geophysical Research Letters 39(2),
158 FEWSNET (2010) A climate trend analysis of KenyaAugust 2010,
159 FEWSNET (2010), op. cit.
160 B. Lyon and D.G. DeWitt (2012), op. cit.
161 A.P. Williams and C. Funk (2011), op. cit.
41
162 C. Funk (2012) Exceptional warming in the western Pacific-Indian Ocean warm pool has contributed
to more frequent droughts in eastern Africa in T.C. Peterson et al. (eds.) (2012) Explaining Extreme
Events of 2011 from a Climate Perspective, BAMS 93:10411067,
163 K. H. Cook and E.K. Vizy (2006) Coupled Model Simulations of the West African Monsoon System:
Twentieth- and Twenty-First-Century Simulations. Journal of Climate, 19(15), 36813703.
164 F.C. Lott, N. Christidis and P.A. Stott (2013), op. cit.
165 J.B. Elsner, J.P. Kossin and T.H. Jagger (2008) The increasing intensity of the strongest tropical
cyclones, Nature 455: 9295,
166 D. Coumou and S. Rahmstorf (2012), op. cit.
167 K. Emanuel (2007) Environmental factors affecting tropical cyclone power dissipation, J. Climate
20:54975509.
168 G.A. Vecchi and B. J. Soden (2007a) Global warming and the weakening of the tropical circulation, J.
Climate 20:43164340.
169 IPCC (2012) op.cit.
170 Ana Luiz Solis, PAGASA/DOST/Climate Monitoring and Prediction Centre of the Philippines;
presentation February 2013.
171 R.H. Moss, J.A. Edmonds, K.A. Hibbard,756,
172 D.P. van Vuuren, M.T.J. Kok, B. Girod, P.L. Lucas and B. de Vries (2012) Scenarios in global
environmental assessments: key characteristics and lessons for future use, Global Environmental
Change 22(4):884895,
173.
174C. Funk, M.D. Dettinger, J.C. Michaelsen, J.P. Verdin, M.E. Brown, M. Barlow and A. Hoell (2008), op.
cit.
175 Food Security and Nutrition Analysis Unit Somalia, 2 September 2014,-
focus/over-one-million-people-somalia-face-acute-food-insecurity-food-crisis-worsens
176 J. Kurlantzik (2013) Typhoon Haiyan, the Philippines, the United States, and China, Council on
Foreign Relations,-
states-and-china/
42
ACKNOWLEDGEMENTS
Christopher Coghlan is a doctoral student, teaching assistant and tutor in the School of
Geography and the Environment at the University of Oxford. He is currently completing a thesis
on global food systems across administrative scales. He works on Oxfam-commissioned
research at the Environmental Change Institute (ECI) at the University of Oxford, and is the joint
Scenarios and Policy Researcher for the CGIAR Research Programme on Climate Change,
Agriculture and Food Security (CCAFS).
Maliha Muzammil is joint Scenarios and Policy Researcher for the CGIAR Research
Programme on CCAFS at the ECI. She is driving multi-stakeholder scenario development for
the future of food security, livelihoods and environments for South Asia, working extensively
with a wide range of regional actors. She is currently undertaking her PhD at SOAS, examining
the opportunities and barriers for low carbon, climate resilient pathways in least developed
countries using a political economy approach. She has worked on climate change adaptation,
community-based adaptation, sustainable development and low carbon development.
Dr John Ingram trained in soil science and gained extensive experience in the 1980s working
in East and Southern Africa and South Asia in agriculture, forestry and agroecology research. In
1991 he was recruited by the UKs Natural Environment Research Council (NERC) to organize,
coordinate and synthesize agroecology research as part of the International Geosphere-
Biosphere Programme. In 2001 he was appointed Executive Officer for the 10-year international
project Global Environmental Change and Food Systems (GECAFS). On the close of GECAFS
he was appointed NERC Food Security Leader, during which he represented NERC on the UK
Global Food Security Programme. In May 2013 he joined the ECI to establish and lead a food
systems research and training programme.
Dr Joost Vervoort is the Scenarios Officer for the CGIAR Research Programme on CCAFS.
He drives multi-stakeholder scenario development for the future of food security, livelihoods and
environments for East and West Africa, South and South-East Asia and Latin America. Working
extensively with a wide range of regional actors he explores the long-term implications of
multiple socio-economic scenarios combined with climate scenarios. The regional scenarios are
used to guide policies, investments and institutional change at national and regional levels.
Joost has a PhD in Production Ecology and Resource Conservation from Wageningen
University and is scenarios work package leader on the TransMango project, funded by the
European Commission (FP7), which focuses on plausible futures of the European food system
in a global context.
Dr Friederike Otto is a physicist by training but gained a PhD in Philosophy of Science from the
Free University Berlin. She was based at the Alfred Wegener Institute of Polar and Marine
Research in Potsdam during her one-year diploma project working on atmospheric dynamics in
global circulation models. While writing her PhD thesis she worked as a research assistant at
the Potsdam Institute for Climate Impact Research (PIK) and came to Oxford as a post-doctoral
researcher to work on the quantification of uncertainty in climate system modelling. She now
works as a senior researcher at the ECI on the attribution of extreme weather and climate-
related events to external drivers of the climate system. She is the scientific coordinator of the
distributed-computing project climateprediction.net and leads a project to make the attribution of
extreme events operational.
Dr Rachel James is a climate scientist focusing on changes in African rainfall systems. She is
interested in the role of human influence on climate, and the application of climate models to
better understand risks to water security, biodiversity, infrastructure, and food systems. She has
a PhD from the University of Oxford and is a Research Fellow in Climate Modelling for Climate
Services at the ECI, where she is promoting the use of climate science and climate models to
provide useful information to decision makers. She is interested in the role of scientific
information in climate policy, including the UNFCCC Mechanism for Loss and Damage.
The authors wish to thank John Magrath, Jenny Peebles and Ricardo Fuentes-Nieva of Oxfam
for their support in the writing of this paper.
43
Oxfam Research Reports. | https://tr.scribd.com/document/342206737/A-Sign-of-Things-to-Come-Examining-four-major-climate-related-disasters-2010-2013-and-their-impacts-on-food-security | CC-MAIN-2019-26 | refinedweb | 15,192 | 52.9 |
Subject: Re: [boost] [config] msvc-14 config changes heads up
From: Gavin Lambert (gavinl_at_[hidden])
Date: 2016-07-07 22:24:12
On 7/07/2016 08:50, Niall Douglas wrote:
>.
Well, it's natural C-ish coding style to at least declare symbols
(including all overloads) before they're used, so most coding patterns
wouldn't run into the difference.
Where you're more likely to run into problems is if the library defines
something that the user then further overloads -- but due to namespaces
this would generally indicate a poor design if symbols have mixed
implementations between library and user code. So library code is
probably the least vulnerable to it.
One of the common resolutions for cases where this sort of thing is
needed is to use template specialisation (eg. a traits template such as
std::hash) -- and using such a template with the parameter type T will
*always* be a dependent type, so shouldn't have this issue anyway.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2016/07/230409.php | CC-MAIN-2020-34 | refinedweb | 185 | 63.39 |
Using Navigation View in Sencha Touch 2
Navigation View is a new component in Sencha Touch 2. At its base, it is a simple container with a card layout and a docked toolbar; however we built on top of that a easy way to push (add) and pop (remove) views in stack-like fashion. When you push a view, it will add that view into the
stack and animate the new title into the toolbar. It will also animate a back button into the toolbar so you can return (or pop) to the previous view.
The easiest way to demonstrate this is with an example:
//create the navigation view and add it into the Ext.Viewport var view = Ext.Viewport.add({ xtype: 'navigationview', //we only give it one item by default, which will be the only item in the 'stack' when it loads items: [ { //items can have titles title: 'Navigation View', padding: 10, //inside this first item we are going to add a button items: [ { xtype: 'button', text: 'Push another view!', handler: function() { //when someone taps this button, it will push another view into stack view.push({ //this one also has a title title: 'Second View', padding: 10, //once again, this view has one button items: [ { xtype: 'button', text: 'Pop this view!', handler: function() { //and when you press this button, it will pop the current view (this) out of the stack view.pop(); } } ] }); } } ] } ] });
Creating a simple Navigation View
Creating a Navigation View is just like creating other containers. You use Ext.create to create your Navigation View instance, and the only configuration you need to add is items.
var view = Ext.create('Ext.navigation.View', { fullscreen: true, items: [ { title: 'Navigation View', html: 'This is the first item in the stack!' } ] });
As you can see, we only give it two configurations:
- fullscreen: This is so it is automatically inserted into the viewport.
- items: This is the items that the navigaiton view will contain by default. We only insert one item here, which means it will the first item in the
stack; therefore active.
Pushing new Views
To 'push' means to add a new view to the stack of the navigation view. This will do three things:
- Animate the navigation view to show the new item (slide).
- Animate the
titleconfiguration of the item (if specified) in the navigation bar (slide).
- Animate the the back button into the navigation bar (slide).
Pushing views is made simple using the push method:
view.push({ title: 'New views title', html: 'Some content' });
You can either pass a reference to a component or a configuration like I did above. And of course, the item you push can be any subclass of Ext.Component:
var tabPanel = Ext.create('Ext.tab.Panel', { items: [ { title: 'First', html: 'first' }, { title: 'Second', html: 'second' } ] }); view.push(tabPanel);
Popping Views
To 'pop' means to remove the top most (visually active) view from the Navigation View. Of course you need to have more than 1 item in the stack for this to do anything. When you pop, it will do a few things:
- Animate the navigation view back to the previous item in the stack (reverse slide).
- Animate the current title out of view, and animate the previous stacks title into view (slide).
- Animate the back button out of view, and if there is still more than 2 items in the stack, animate the previous views back button.
Popping views is very simple. You just call the pop method:
view.pop(); | http://docs.sencha.com/touch/2.0.2/?_escaped_fragment_=/guide/navigation_view | CC-MAIN-2015-06 | refinedweb | 577 | 71.75 |
I am making a basic comand line program that does conversions in python and i want to add more conversions to it without scripting. so you can command the program something like
(you command the PC) add new_conversion (The PC ask) name the new_conversion: *here you enter your function name* ect.
then in the end you get a function like this:
def kiloMile(n, km): if type(n) != int: return "Error 1", "Unexpected variable." elif km == "k": mile = n * 0.621371 return mile elif km == "m": kilometers = n * 1.60934 return kilometers else: return "Error 1", "Unexpected variable."
is this posible to edit the program through the program.
thanks in advance
LTJR | https://www.daniweb.com/programming/software-development/threads/447324/save-input-into-script | CC-MAIN-2017-17 | refinedweb | 112 | 76.22 |
Beginning Ruby
samzenpus posted more than 6 years ago | from the get-started dept.
TimHunter writes . Even though the early chapters are marred by the occasional reference to an advanced topic, readers will appreciate the plentiful examples and thoughtful description of the Ruby language." Read below for the rest of Tim's review.
Ruby is an object-oriented programming language in the same family as Perl and Python. Ruby is very popular for writing web applications but also widely used for the general-purpose programming tasks. Ruby is available for Linux, Mac OS X, and Microsoft Windows. It is Open Source software with a commercially friendly license.
I agreed to review this book in particular because, even though the Ruby community has a strong tradition of encouraging newcomers, there are actually very few resources for the Ruby beginner. Ruby has gained a repuation for being easy to learn and therefore is attractive to people with limited or no programming experience. Novice programmers post almost daily requests for help and direction to the ruby-lang mailing list.
In addition to serving people with no progamming experience, Beginning Ruby is also aimed at experienced programmers who want to learn Ruby. Progammers coming from languages such as Java or C++ often struggle with Ruby's dynamic typing and (even with the recent explosion of Ruby-related books) the relative scarcity of documentation. Beginning Ruby tries to satisfy this audience by explaining Ruby's design, history and place in the programming world and including an extensive survey of the currently-available Ruby libraries.
Beginning Ruby is divided into 3 parts. The first part is aimed at neophytes. Experienced programmers, especially those experienced with object-oriented programming, will be able to skip chapter 2 and skim chapters 3 and 4.
The book starts simply. Chapter 1 isn't even about programming. This chapter explains how to install Ruby on Windows, OS X, and Linux. The instructions are thorough and aimed squarely at beginners. For example, Cooper explains how to get a command prompt on Windows and how to run Terminal.app on OS X.
Chapters 2 and 3 introduce fundamental concepts such as variables, expressions, control flow, classes, and objects. Cooper emphasizes experimentation. He says that irb, Ruby's interactive programming module, "provides the perfect environment for tweaking and testing the language, as you cannot do any real damage from within irb." Such assurances are helpful, especially to the beginner who may be slightly afraid that he's going to somehow make a mistake that will "break" his computer.
Explaining programming to beginners is hard. I've read a number of books that try to teach object-oriented programming concepts to people with no programming experience whatsoever. None were stunningly successful. This one isn't either. The problem is that books are linear, but there are simply too many things – concepts, keywords, tools – that have to be introduced nearly simultaneously and initially taken on faith. Cooper distracts his readers by peppering his text with too many "don't worry about this yet" disclaimers and assurances that explanations will appear later. His references C and Perl will be meaningless and possibly confusing to the beginning programmer.
Chapter 4, "Developing a Basic Ruby Application," starts by explaining what a text editor is and offering a few recommendations for text editors on Windows, OS X, and Linux. Then it guides the reader through his first real program, a script to read a text file and compute simple statistics such as the number of lines and words. This is a well-chosen example that will, when completed, make the student feel like he's accomplished something.
Chapter 5, "The Ruby Ecosystem," feels out-of-place. This chapter doesn't teach anything about Ruby programming. Instead it explains Ruby's history, introduces Ruby On Rails, and talks about the Open Source movement. Little, if any, of this material will be interesting to a fledgling programmer. The chapter finishes with a list of internet-based Ruby resources such as mailing lists, IRC, and blogs. All of this seems much better suited as an appendix and indeed the list of resources appears again in Appendix C.
Part 2, "The Core of Ruby," has a slower pace. With the very basic material covered, Beginning Ruby gets a better footing. Starting in this part the material is useful to both beginners and veterans.
This is probably as good a place as any to talk about the examples, which are numerous and very likely the best part of the book. Most of the the examples are short and to the point. A few extend over several pages. My overall impression is that they are well-chosen and well-coded. I especially like the way the examples appear on the page, visually distinctive but without interrupting the flow of the text. Source code for all of the examples may be downloaded from the Apress web site. However, even though the files are divided into a directory per chapter, the examples aren't numbered in the text so it's difficult to find the code for the example you're looking at. I ended up using grep to search for keywords in the sources.
Chapter 6 is a slower pass through Ruby, focusing on Ruby's object-orientation. Though fewer than part 1, there are still problems with references to concepts that have not yet been introduced. For example, Cooper uses the require method in the context of namespaces even though require has not been introduced. Indeed, it's not even necessary to mention namespaces at all in this chapter since the entire concept could've been held off until the next chapter, which explains how to create programs from code in multiple files.
The remaining chapters in this part start to address the needs of the serious Ruby programmer. This is a lot of ground to cover, including documentation, debugging, test-driven development, I/O, databases, and how to deploy Ruby programs. I particularly liked Cooper's thorough instructions for installing and creating RubyGems, Ruby's third-part library management system. There are so many topics to cover that each one gets only an introduction, but Cooper uniformly provides links to extended online documentation.
The last chapter in this part works through an even larger example, a Ruby "chat bot." This is an ingenious and entertaining example, the kind of program that, had I read it when I was just starting to learn programming, would have spent many happy hours tweaking. Call me a geek, but I got a chuckle out of the example on page 383 of two very stupid bots conversing.
Part 3 is called "Ruby Online." Of course it starts with the obligatory chapter on Ruby on Rails. I suppose publishers require such a chapter in all Ruby books, even though RoR is more than amply covered by other excellent books. I'm not a RoR programmer so I blew off this chapter.
Chapter 14 describes Ruby's support for the Internet via its HTTP, email, and FTP libraries. Chapter 15 covers other networking support libraries. As usual there are many excellent examples. Chapter 16 is a very good survey of the standard and add-on libraries that the serious Ruby programmer will find useful. Each library is demonstrated with an example, and Cooper provides a link to the complete documentation.
At the start of this review I said that Beginning Ruby is divided into 3 parts, but actually there are four. The last part consists of 3 appendices. Appendix A is a summarization of Part 2. Appendix B is sort of a "semi-reference" to Ruby's core libraries. This is not intended to be a complete reference. Instead, Cooper limits his discussion to the most useful methods of the core classes. As I mentioned earlier, Appendix C is a list of Internet-based Ruby resources such as web pages, mailing lists, IRC channels, and blogs.
I'm giving Beginning Ruby a 7. It's a good book for someone who wants to learn Ruby as his first programming language. It could be better. I liked Cooper's patient and thoughtful explanations about installing Ruby and RubyGems, how to use a command line, and what a text editor is for. Cooper supplies answers to all the typical Ruby-nuby questions, but his explanation of basic concepts is marred by the occasional confusing reference to advanced or even irrelevant topics. For the experienced programmer who learns best by reading and working through examples this book is a good choice. Dave Thomas' Programming Ruby, The Pragmatic Programmer's Guide (a.k.a. the Pickaxe) is a tough competitor, but each book has a considerable amount of material that is not found in the other book. For example the Pickaxe's core library reference is exhaustive but it has only a limited amount of the kind of tutorial explanations that is Beginning Ruby's strength. Beginning Ruby is available in PDF format from Apress' web site at about half the price of the paper book.
I have been programming Ruby as a hobby for over 5 years. Apress gave me a review copy of this book, but otherwise I have no connection to the author or publisher."
You can purchase Beginning Ruby: From Novice to Professional from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
Ruby as a first language? (-1, Flamebait)
EveryNickIsTaken (1054794) | more than 6 years ago | (#18843409)
Re:Ruby as a first language? (4, Funny)
shagymoe (261297) | more than 6 years ago | (#18843445)
Re:Ruby as a first language? (0)
Anonymous Coward | more than 6 years ago | (#18843659)
rotfl
Re:Ruby as a first language? (2, Funny)
mhall119 (1035984) | more than 6 years ago | (#18843731)
Positive reinforcement is for spoiled Ruby programmers.
Re:Ruby as a first language? (1)
eneville (745111) | more than 6 years ago | (#18845567)
Re:Ruby as a first language? (1)
b0r1s (170449) | more than 6 years ago | (#18843739)
Of course, as beginner languages go, Java was great - fantastic for teaching OO fundamentals.
Re:Ruby as a first language? (1, Informative)
EveryNickIsTaken (1054794) | more than 6 years ago | (#18843459)
Re:Ruby as a first language? (3, Informative)
solevita (967690) | more than 6 years ago | (#18843543)
I'm seriously considering starting programming with Ruby and plan to look on Amazon later this evening for this book. Do you have any eloquent reason why I shouldn't?
Re:Ruby as a first language? (4, Insightful)
EsbenMoseHansen (731150) | more than 6 years ago | (#18843679)
I know (as in, have written a substantial amount in) perl, java, ruby, C++, C, and (god forbid) PL/I. I am acquianted with quite a number more (like Haskel, Lisp) And have no fear, ruby is a fine language to start with. Especially the irb shell is very nice for getting your bearings, and the functional support will be good for you if you ever turn to the hardcore languages, like Haskell, C++ and that ilk. On the other hand, if you prefer the more limited languages, like Java, you will still have a good idea about most concepts you'll find. The only thing you will not learn that are common is static overloading and type checking, which is easy enough to pick up.
Good luck. Learning to code and coding is very rewarding
:)
Re:Ruby as a first language? (1)
mclaincausey (777353) | more than 6 years ago | (#18844929)
$0.02
Re:Ruby as a first language? (1)
DragonWriter (970822) | more than 6 years ago | (#18846939)
I don't think that's all that true. I think it easy for people who themselves came to program through imperative and OO programming to think of those approaches as more "natural" than functional programming. But if someone's first exposure to programming includes functional programming from the outset, I don't think functional programming is any less natural than the other forms.
And I have a feeling a newbie programmer who learns Ruby first is going to end up and using constructs that combine OO and functional elements (passing closures to method calls on integer literals, for instance) rather than imperative mechanisms for basic tasks like looping, because, I can't imagine an newbie-focussed book not encouraging that. Sure, I mean, you could stubbornly try to do everything with while loops, but why?
It would be a great first language (4, Insightful)
BrewerDude (716509) | more than 6 years ago | (#18843849)
Many computer science departments, including MIT, use Scheme as the language for their introductory computer science course. It's a wonderful language and helps the students learn a lot of key concepts that are important for formal computer science.
Scheme, though, is a little hard to use for real-world work. Ruby, on the other hand, is a great real-world language that has many of the features that I miss from Scheme: closures, lambda expresions, etc.
I haven't read this book, so I can't recommend it, but I can heartily recommend Ruby as a language that would be great to learn to program in. It'll let you focus on the key concepts rather than the tedium of implementing them in lower level languages.
Go for it!
Re:It would be a great first language (1)
CastrTroy (595695) | more than 6 years ago | (#18844473)
Re:It would be a great first language (2, Informative)
partenon (749418) | more than 6 years ago | (#18844633)
Re:It would be a great first language (2, Informative)
Serious Callers Only (1022605) | more than 6 years ago | (#18844877)
I don't see why you think Ruby would make any of that difficult - perhaps you're getting Ruby mixed up with other languages, it's one of the better ones to learn with in my opinion. In the interactive ruby shell (irb) you can learn this stuff easily:
1. Learn about variables and doing simple math
irb > 2+2
=> 4
irb > x = 3
=> 3
irb > y = 2
=> 2
irb > x + y
=> 5
2. Learn about conditional statements
irb > if (x > 2)
irb > y = 10
irb > end
=> 10
irb > x + y
=> 13
3. Learn about looping structures
irb > 2.times do
irb * puts "Hello world"
irb > end
Hello world
Hello world
irb > ["jan","feb","march","april"].each do | month |
irb * puts month
irb > end
jan
feb
march
april
4. Learn about functions
irb > def max(a,b)
irb > if a > b
irb > a
irb > else
irb * b
irb > end
irb > end
=> nil
irb > m = max(4,8)
=> 8
Re:It would be a great first language (1)
BrewerDude (716509) | more than 6 years ago | (#18844937)
Agreed. Ruby lets you learn in that progression, BTW. Everything is an object, but you don't need to know the details of that day one, at least for simple types like you're describing.
Re:It would be a great first language (1)
Frequency Domain (601421) | more than 6 years ago | (#18844959)
You can criticize Ruby for its speed in the various shootouts, but even there I hearken back to the old saying "There are no slow languages, only slow implementations." Any runtime slowness is usually offset by reductions in my development time - I find it easier to prototype an idea in Ruby, then translate to Java/C++ if I need more speed, and for my work Ruby will cough up the answer within a second or two of run time 95% of the time. However, I do academic research with a lot of one-off code - your mileage may vary.
Try Why the Lucky Stiff's guide (1)
ystar (898731) | more than 6 years ago | (#18844489)
Re:Try Why the Lucky Stiff's guide (2, Insightful)
partenon (749418) | more than 6 years ago | (#18844711)
Re:Try Why the Lucky Stiff's guide (1)
Stamen (745223) | more than 6 years ago | (#18846017)
It's not often mentioned in threads about Ruby, but the community surrounding it is very good and, ultimately, that is very important.
It's also an awesome language, and although not hugely better than Python (another great language), better none-the-less (mainly for me, because of its fully object oriented-ness, and complete embrace of code blocks and closures).
Re:Ruby as a first language? (1)
DragonWriter (970822) | more than 6 years ago | (#18844739)
Re:Ruby as a first language? (1)
donglekey (124433) | more than 6 years ago | (#18846909)
Re:Ruby as a first language? (1, Insightful)
shagymoe (261297) | more than 6 years ago | (#18843581)
Personally, I could never go back to perl, java or php from ruby, so it that is the case, why not bother starting with ruby?
Re:Ruby as a first language? (1)
Anonymous Coward | more than 6 years ago | (#18843637)
Combine the approaches of try ruby! [hobix.com] and the Purple Book [mit.edu], and you can't really go wrong.
Now all we have to do is wait for such a book. Finished with pickaxe flavoring.
Re:Ruby as a first language? (0)
Anonymous Coward | more than 6 years ago | (#18843861)
Re:Ruby as a first language? (1)
dthable (163749) | more than 6 years ago | (#18843947)
Re:Ruby as a first language? (1)
iggymanz (596061) | more than 6 years ago | (#18844083)
Re:Ruby as a first language? (2, Insightful)
happyfrogcow (708359) | more than 6 years ago | (#18845127)
Shh, we need these Lisp jokes every once in a while to keep the Java people in Java, and the Ruby people in Ruby.
And yes, it's "Lisp" these days not "LISP". Go lurk at comp.lang.lisp if you don't believe me.
Re:Ruby as a first language? (1)
lewp (95638) | more than 6 years ago | (#18844103)
Re:Ruby as a first language? (1)
dthable (163749) | more than 6 years ago | (#18844179)
I would rather have.... (-1, Troll)
Anonymous Coward | more than 6 years ago | (#18843429)
Re:I would rather have.... (3, Funny)
Archangel Michael (180766) | more than 6 years ago | (#18843709)
Re:I would rather have.... (2, Informative)
drix (4602) | more than 6 years ago | (#18843869)
Re:I would rather have.... (1)
jlawson382 (1018528) | more than 6 years ago | (#18845373)
Slownewsday (0)
Anonymous Coward | more than 6 years ago | (#18843633)
Re:Slownewsday (0)
Anonymous Coward | more than 6 years ago | (#18844753)
"Peter Cooper's Beginning Ruby: From Novice to Professional has two audiences, novices with no programming experience who want to learn Ruby as their first programming language, and veterans who want to add Ruby to their programming toolkit"
You know slashdot is a nerd news site right? People who tend to know what "novice" and "professional" mean without needing you to redefine the title of the book.>/p>
When you step back and consider history (3, Interesting)
smitty_one_each (243267) | more than 6 years ago | (#18843643)
Wouldn't a venn diagram of key language features show substantial overlap?
Ruby sounds like your typical well-done tool, which clearly has its audience.
The only substantial criticism of Ruby I've ever heard is here: [ciaranm.org]
Still, I'm wondering, what is the Next Big Thing? Is Python3000 going to rule the day? It's obviously 500 times better than Perl6. Then again, you've another round of C++ coming up for standardization: will svelte compiled languages recover some of the mindshare lost to these SUV scripting languages?
Is the point of making some new tool the buzz of the day simply to sell books?
Re:When you step back and consider history (0)
Anonymous Coward | more than 6 years ago | (#18843787)
Re:When you step back and consider history (1)
smitty_one_each (243267) | more than 6 years ago | (#18843939)
But if Ruby is a better Perl, will P6 be a redder Ruby?
Re:When you step back and consider history (1)
dthable (163749) | more than 6 years ago | (#18844011)
Re:When you step back and consider history (0)
Anonymous Coward | more than 6 years ago | (#18844999)
I suppose you mean the same way that no-one uses Python or Ruby because they're not compatible with all that legacy Perl code. Idiot.
Re:When you step back and consider history (1, Insightful)
tyler.willard (944724) | more than 6 years ago | (#18843935)
What is that even supposed to mean? Especially vis-a-vis C++; the next round of standardization is considering adding GC and threading.
Let's even go a bit further, what is "compiled"?
Does a VM count? How about JIT'd code?
No, just native you say?
Then how about the fact the IA-32 instruction set ain't exactly native anymore?
This old cannard that dynamic runtime-based languages are somehow intrinsically inferior to native binaries is ridiculous. It's the same tired "I am t3h 1337 and neeh 100% control" argument that was wrong when it was used for arguing about assembler vs C*.
*OK, before optimizing compilers got decent it was true...but only for a little while.
Re:When you step back and consider history (1)
krowe (LinuxZealot) (1055996) | more than 6 years ago | (#18846233)
Re:When you step back and consider history (1)
tyler.willard (944724) | more than 6 years ago | (#18847203)
Even though overblown IMO, the footnote in my comment above concedes that for a period of time optimizing compilers weren't that great. That changed though, optimizers became, and are now, quite good.
Re:When you step back and consider history (1)
hutchike (837402) | more than 6 years ago | (#18844155)
Re:When you step back and consider history (1)
jma05 (897351) | more than 6 years ago | (#18844445)
Python 3000 is a cleanup of the inevitable evolutionary accretions that languages accumulate. Nothing ground breaking in this release feature-wise. So Python 3000 and Perl 6 are very different in scope. I (as a Python user), for one am very curious of what will come out of Perl 6.
There is a big list at [perl.org]
The ones I am interested are
explicit strong typing (Python 3000 - optional interfaces)
coroutines (Python 2.5)
macros (debate in Python but bigwigs don't like them)
user-definable operators
Since Perl and Haskell (Pugs) seem to have been dating for a while now, it is curious to see how the shabby blue collar man that is Perl will be transformed by the dainty, complex and academic girl that is Haskell.
Re:When you step back and consider history (2, Insightful)
arevos (659374) | more than 6 years ago | (#18844827)
Wouldn't a venn diagram of key language features show substantial overlap?
The only substantial criticism of Ruby I've ever heard is here
You can write pages on criticism on any language. For instance, you could criticise Ruby on it's efficiency, it's dynamic typing, it's syntax quirks, it's libraries, it's readability, it's method naming conventions... The list is substantial, and many people disagree whether certain features are advantageous or disadvantageous (the classic example is dynamic vs. static typing). In fact, I'd say you don't really know a language until you can point out it's flaws.
Re:When you step back and consider history (2, Interesting)
renoX (11677) | more than 6 years ago | (#18846675)
Enlighten me (4, Insightful)
pimterry (970628) | more than 6 years ago | (#18843763)
If the main reason for writing server side software is web based applications, or at least dynamic content, isn't a huge factor in this how well it scales? Nobody makes sites to be used by 20 people.
Finally, if anybody can explain it's popularity to me, should I learn it? I'm currently doing freelance web dev mostly in PHP, would it be useful? How? In my spare time I'm writing a AJAX web app with PHP back-end at the moment and it's mostly for my personal use (task tracking from anywhere), is Ruby good here with the limited audience the site'll have?
Re:Enlighten me (4, Informative)
plams (744927) | more than 6 years ago | (#18843919)
Ruby isn't any more a server-side language than 68k assembler is. You've probably confused it with Ruby on Rails [rubyonrails.org] which is a framework (and an excellent one, I might add) for making websites. Compared to plain PHP it makes web development easy and fun and even supports stuff like AJAX out of the box.
Re:Enlighten me (1)
pimterry (970628) | more than 6 years ago | (#18844055)
Sorry =P.
Re:Enlighten me (4, Informative)
CastrTroy (595695) | more than 6 years ago | (#18844217)
Re:Enlighten me (1)
plams (744927) | more than 6 years ago | (#18845429)
Re:Enlighten me (2, Interesting)
rainman_bc (735332) | more than 6 years ago | (#18844467)
Re:Enlighten me (1, Insightful)
Anonymous Coward | more than 6 years ago | (#18844871)
Anyone know who is behind Ruby and who is paying for all these posts and slashvertisements? And why?
Re:Enlighten me (1)
crayz (1056) | more than 6 years ago | (#18845129)
Re:Enlighten me (1)
Paradise Pete (33184) | more than 6 years ago | (#18845639)
I'm working on my first real-world Rails project, while at the same time having to maintain the current PHP-version of the same thing. And the feeling I get when I have to go back to the PHP is amazingly similar to the feeling I get (and I don't mean for this to be flamebait) when I'm forced to use Windows. Yeah, I can get stuff done, but it just feels, well, icky.
Re:Enlighten me (2, Insightful)
Nafai7 (53671) | more than 6 years ago | (#18843929)
That said, I started playing around with Ruby on Rails. Honestly, simple tasks like database access is so much simpler to handle in RoR it's just amazing. Creating database driven web sites, with one-to-many and many-to-many relationships is at least 10x quicker than in PHP. You have to get used to the MVC style of programming of course.
I'm still doing PHP at work, but at some point will push for a switch. Check this out [onlamp.com] to see a very simple walkthrough of creating a simple web site.
Just a fan, in no way am I involved with the Ruby on Rails project itself.
Re:Enlighten me (4, Informative)
BrewerDude (716509) | more than 6 years ago | (#18843987)
Hi
First, Ruby != Rails. Ruby is the programming language. Rails (or Ruby on Rails) is the web development framework. Ruby has been around a lot longer than Rails, but has certainly had it's popularity boosted by the buzz surrounding Rails.
Second, I'd disagree that system performance is the biggest factor is selecting a web framework. Rails out of the box will support the load more most websites. There are many things that you can do to tune performance once you start getting enough page views for it to matter: caching pages, selectively replacing ActiveRecord's queries with raw SQL, etc. There are also people starting to focus on rails performance both from a developer's standpoint (e.g., the Rails Express Blog [railsexpress.de]) and from the hosting standpoint (e.g., Engine Yard [engineyard.com]).
To me, the first and foremost goal of building a website is to get the functionality there quick in order to attract users. Once you've got that, and have rapidly been able to iterate to what the users want to see, then you can start worrying about performance. And, if your site really makes it big, you are going to have to do custom tuning work no matter what framework you've chosen.
Personally, I've found Rails to be a wonderfully productive framework to use for web development
Re:Enlighten me (2, Insightful)
lewp (95638) | more than 6 years ago | (#18844031)
Of course not. If it were, every high traffic website would be an NSAPI/ISAPI/Apache module. The vast majority of small to medium scale web apps are database limited much more than they are CPU limited on the web server. Twitter is such an app, as that coding horror article eventually mentions.
As far as whether you should learn it, I get paid to write mostly PHP, and I'd say anybody else in my position should check Rails out just to compare and contrast. If anything, you might pick up a few tricks. I use Rails, and like it, but like any other tool it's not for everything. When I don't use it, I still find myself using Rails-esque constructs in my PHP/Java/whatever web apps, just because they get a lot of things right architecturally.
Re:Enlighten me (1)
pimterry (970628) | more than 6 years ago | (#18844091)
Ruby != Rails. Who'd of guessed it.
Re:Enlighten me (1)
misleb (129952) | more than 6 years ago | (#18844839)
All I can say is that after doing PHP on and off for a few years and then learning Ruby, I never want to touch another line of PHP shite again. Sorry, if this sounds like I'm trying to start a flame war, but PHP is just a dumb language. Sure, it works, it is easy to learn, can find hosting for it anywhere, but it is just braindead. That and the gawd aweful function naming problem. Things I can do with 2 or 3 lines of Ruby used to take me 10 lines in PHP. PHP is one of those languages where, if there is no fucntion to do exactly what you want, you can expect to take a fair amount of time implementing it youself. I have similar reaction to Perl after learning Ruby... although for different reasons. Perl is just plain ugly. Ruby's just so... pretty.
So yeah, I thnk you shuld give Ruby (particularly Rails) a try. I'm not normally one to go for trends (I'm a "Web 2.0" skeptic, for example). Ruby and Rails are the real deal. As long as you are willing to accept one fact up front: Ruby isn't as fast as other languages. It is a tradeoff. Save yourself weeks of dev time and let the computer do the heavy lifting.
-matthew
Re:Enlighten me (2, Insightful)
arevos (659374) | more than 6 years ago | (#18845303)
Also, the main problem Twitter has is the database bottleneck, not Ruby's speed (or lack thereof). Rails is designed to work around a single database, so clearly any web app that requires a dozen database servers in parallel is going to take a few specialised plugins or patches to achieve.
Ruby's pretty good with AJAX, since it integrate Prototype and Scriptaculous functions into lots of easy helper functions. It also helps do away with a lot of boilerplate code when accessing a database.
Re:Enlighten me (1)
Achromatic1978 (916097) | more than 6 years ago | (#18846711)
Not to doubt Twitter, but this would have it as the second most popular website on the net, behind Google:
v Corner [alexa.com], as MSN currently averages 9,700 page views per second (I should know).
I'm slightly sceptical.
That being said, I realize that Twitter's concept of page views is slightly different, and lighter than, MSN's.
Re:Enlighten me (1)
Achromatic1978 (916097) | more than 6 years ago | (#18846741)
Re:Enlighten me (1)
arevos (659374) | more than 6 years ago | (#18847397)
Oops! That's what happens when you don't check your sources, folks - my bad!
Still, 600 requests per second is still a fairly heavy traffic throughput for Rails to scale to.
Re:Enlighten me (1)
DCastagna (959843) | more than 6 years ago | (#18845327)
These day you hear a lot about Ruby because of Ruby on Rails, a very good MVC framework based on Ruby.
I love performance, but after trying Ruby on Rails I have to say that it is a great framework, even if it's slow.
I think today is more important quick development time rather than fast performance, and I think that when you start having problem with the performance of RoR, probabibly you have enough users (and money) to buy better hardware and to spend some time scaling your webapp.
Re:Enlighten me (1)
DragonWriter (970822) | more than 6 years ago | (#18845419)
Because its convenient for things people need to do, and the language at the center of a popular web framework.
Ruby isn't "python for websites". It would be closer to say its "python, but different". Sure, its most popular application is a web framework (Rails), but that's not the same thing as the language. Python has web frameworks like Rails too.
As the follow-up [codinghorror.com] makes clear, the issue that the author of that post was trying to raise was with the Rails framework. Anyhow, I don't think its a huge surprise that a site that is the biggest production site running a particular framework is running into performance problems. There've also been several posted techniques for dealing with the apparent source of the barrier to optimization (the lack of an easy way in Rails to connect to multiple databases) since this issue came to the attention of the community.
Certainly, scalability is a big factor. But Rails scales well enough for plenty of systems that are in production. Yes, there are applications that push its present limits; of cousre, those are also the one's which promote new development.
Probably not. Then again, there are many sites with more than 20 users using Rails in production. And, again, Ruby is not just rails.
That probably depends on why you are using PHP.
This sounds like something that Ruby on Rails would be good at. But, whether its better for you than PHP is something you'll have to answer yourself. You might want to take a look at Agile Web Development with Rails, and see if RoR looks like it would be good for you.
Re:Enlighten me (1)
vacorama (770618) | more than 6 years ago | (#18845457)
Re:Enlighten me (0)
Anonymous Coward | more than 6 years ago | (#18846441)
plus 5, Tr0l7) (-1, Troll)
Anonymous Coward | more than 6 years ago | (#18843841)
Just too bad (0, Insightful)
Anonymous Coward | more than 6 years ago | (#18843953)
Ruby astroturfing (0, Troll)
zymano (581466) | more than 6 years ago | (#18844237)
What is the advantage of this language over PHP,python or Perl.
It is extrememly slow. Terrible for high usage websites.
Seems there are programmers who are in love with syntax and will trade speed for it. I would rather use CGI with C:Still the performance king,Java comes in 2nd and Perl 3rd.
Re:Ruby astroturfing (3, Informative)
plams (744927) | more than 6 years ago | (#18844461)
Define "high usage website". RoR-powered sites like 43 Things [43things.com] and ODEO [odeo.com] don't seem like backyard hobby projects to me.
Re:Ruby astroturfing (-1, Troll)
Dystopian Rebel (714995) | more than 6 years ago | (#18844935)
Ruby > Perl6 because we are all terrified that Perl6 will be so hideous that our eyeballs will explode.
Ruby is well designed. It will become faster as all dynamic languages do. It's weakness is that it just doesn't have good English documentation.
Re:Ruby astroturfing (1)
hyperstation (185147) | more than 6 years ago | (#18845469)
Re:Ruby astroturfing (1)
arevos (659374) | more than 6 years ago | (#18845557)
If you're going to discuss advantages of Ruby over Python, anonymous blocks and Ruby's class system would be better choices.
Re:Ruby astroturfing (0)
DragonWriter (970822) | more than 6 years ago | (#18845955)
The advantage is that it doesn't have Python's tab problem.
Well, yeah, those are advantages, too. Its hardly as if one advantage negates the others.
Re:Ruby astroturfing (1)
arevos (659374) | more than 6 years ago | (#18846395)
Re:Ruby astroturfing (1)
dtolton (162216) | more than 6 years ago | (#18846623)
As an experienced programmer prior learning Python it only caused me mild heartache. I don't know how easy or hard it would be for a total newcomer to programming. If they are some of the people that were in my early CS classes, I'd say they'll never get past it without help.
What problems in Ruby are you referring to that a newcomer will experience? In general, I wouldn't expect someone who is just learning to program to run into any serious difficulties with Ruby or Python (other than significant whitespace).
Re:Ruby astroturfing (1)
DragonWriter (970822) | more than 6 years ago | (#18847011)
Such as...what? The two main complaints I've seen about Ruby features are (1) lack of good Unicode support, which is certainly can be a real problem with I18n, but isn't a barrier to its utility in learning to program, for sure; and (2) its "too slow", which, again, isn't a barrier to its utility in learning to program in it, though it may somewhat limit the applications it is suitable for.
Something that newbie programmers are likely to do occasionally.
Re:Ruby astroturfing (1)
arevos (659374) | more than 6 years ago | (#18847319)
Ruby's syntax itself is also not as suited for a beginner, in my opinion, as there a lot of different ways to do the same thing. If you're learning by example, and each example employs a different technique, you have more to learn. Some other things that come to mind are the "and" and "or" boolean operators, which don't behave as one might expect they would; blocks have some scope issues that aren't immediately obvious; and leaving a method call unbracketed, whilst sometimes more aesthetically pleasing, can produce odd syntax errors.
I'm sure there's a few more oddities in the language I've forgotten about.
I'm not saying Ruby is a bad language, merely that it has its share of flaws, the same as everything else.
Re:Ruby astroturfing (1)
Dystopian Rebel (714995) | more than 6 years ago | (#18847491)
That's a lawyerly-phrased question. I still indent code. I don't like my indentation preferences determining the logic of the code.
I personally like Ruby more than Python and in particular like the class system, notation, and its implicit accessors. But I'd call these preferences just that, whereas I consider "significant whitespace" to be folly.
Re:Ruby astroturfing (1)
random0xff (1062770) | more than 6 years ago | (#18846643)
I, for one... (1)
harry666t (1062422) | more than 6 years ago | (#18844287)
first language (2, Interesting)
lordholm (649770) | more than 6 years ago | (#18844387)
I know Assembler, C, Obj-C, C++, Haskell, Bash, Java, Python, Matlab (or whatever that language is called) plus a few proprietary languages and toy/educational languages.
Although, while I didn't start with C or Assembler (I started with C++ and felt it was a big piece of shit, though now days I find it quite useful; it is very abstracted though), this is certainly what I recommend for a person who is serious about programming.
As a side note, it is interesting that at the school at which I took my MSc, they used Haskell as a first starting language, basically to tell everyone that "you don't know shit about programming" and bring the sufferers of the perfect programmer syndrome down to the ground. A very good thing indeed.
Learn C or Assembler, and then learn Ruby, that is the way to go.
Re:first language (1)
misleb (129952) | more than 6 years ago | (#18845115)
That said, C is probably a good start. If only so you realize just how much time high level languages can save you.
-matthew
Re:first language (1)
lordholm (649770) | more than 6 years ago | (#18846203)
In any field you tend to learn the basics first and then move in to more abstract levels. Any programmer should grasp the physics of a transistor, how a CMOS circuit works, what gates and RTL level descriptions are used for.
This does not necessarily mean that you should be an expert on these things.
As you said if you know C, you understand how much time you can save with a higher level language. You should also understand when C is the right tool to use (and assembly as well), and when Ruby is.
When I write a quick hack to test an algorithm I use Python, when I write code that MUST be fast and every cycle counts, in that case I would use C. C have some nice properties such as that you can actually look on the code and see what the asm output will be (if compiled with -O0).
Re:first language (1)
misleb (129952) | more than 6 years ago | (#18847069)
-matthew
Re:first language (2, Insightful)
0kComputer (872064) | more than 6 years ago | (#18845663)
Sucky in what way? Good code now days has more to do with structure and maintainability as opposed to squeezing every possisble extra CPU out of a procedure. Instead of wasting time learning C/assembler, which you probably wouldn't get any use out of, I would go with an OO language such as Java or
Re:first language (1)
DragonWriter (970822) | more than 6 years ago | (#18845791)
Re:first language (1)
0kComputer (872064) | more than 6 years ago | (#18846025)
oops, I meant to say C#. My bad.
and if you are going to learn an OO language, why not learn Ruby?
The problem that I have with Ruby is that it is not widely used. Most of the sample code and open source stuff out there is written with the more popular languages. Also, books, googling for ruby help, or posting to message boards and news groups isn't going to be as effective for the beginner OO type questions.
Re:first language (1)
DragonWriter (970822) | more than 6 years ago | (#18846691)
That is a point. OTOH, It seems to me—though I wasn't a beginner when I encountered any of them—that its a lot more beginner-friendly of a language than Java or C#.
That's true, but I'm not sure all that volume is really necessary. There is certainly a "critical mass" of resources necessary for a language to make a good first language, but in the internet age there are very few languages of note that don't make that grade—at least one good introductory book (and both the one that is the topic of this thread and Chris Pine's Learn to Program seem to offer that for Ruby), a few more advanced ones, and an active community and you're off. Ruby seems to be good there.
Re:first language (1)
lordholm (649770) | more than 6 years ago | (#18846775)
A few simple example (simple usually means that the compiler will correct your misstakes, but I rather not get into to many details):
unsigned int a;
a = a / 2;
The next block dels with row major ordering, the code risks running several __thousands__ of times slower than foo[i][j] = bar() due to cache and MMU misses You don't know about these concepts, to bad... you just screwed up your app by not placing the indices right
for (i = 0 ; i N; i ++ ) {
for (j = 0 ; i N; j ++ ) {
foo[j][i] = bar();
}
}
The same things can be said about Java, Ruby and other high-level langs. Without the fundamentals you can really completely fuck up a system (I have seen enough of that, and I am not that old), by not thinking.
Just recently, I spent some time to debug a numerical algoritm that I had written. It was crashing occasionally (but not when the debugger was attached). This turned out to be due to rounding effects of floating point numbers. Basically I had something like: acos(f(a)/g(b)), when single stepping through the code I changed the behaviour of the code. This was due to how doubles are handled on x86es. Without the debugger rounding errors and internally 80-bit doubles dictated that I called acos(1.0000000001) (that is of course completely bogus), when the debugger was attached the system constantly flushed the FP-unit and the call was acos(1.0) instead.
Interesting enough, this app ran on top of the
Here is a nice example from Java and C.
let farr be a sorted float array of size N
for (i = 0 ; i N ; i ++) {
sum += farr[i];
}
Now, if N is sufficiently large and farr is sorted in descending order, it is very likely that the sum will be wrong, and that it will be more correct if it is sorted in ascending order.
Now, you don't get why? Maybe a little understanding of the basics would be a good thing in that case, and then suddenly you feel how useful it was to learn how the computer works in the beginning, and C is an excellent tool to help you with this.
Re:first language (2, Insightful)
DragonWriter (970822) | more than 6 years ago | (#18847125)
I'm still not sure why that means you should learn C or Assembler first.
Re:first language (2, Interesting)
arevos (659374) | more than 6 years ago | (#18846979)
Easiness (1)
Parker Lewis (999165) | more than 6 years ago | (#18844429)
Re:Easiness (1)
crayz (1056) | more than 6 years ago | (#18845215)
Rank this book if you've read it (1)
draed (444221) | more than 6 years ago | (#18844465)
Beware of the hype around Ruby (1, Insightful)
Anonymous Coward | more than 6 years ago | (#18846035)
This is a scripting language, and you really feel the difference with a more generic language.
Ruby spirit is "wild" :
you will have tons of libs that gonna patch standard classes everywhere (patch, like in patch, not extend)
it ends up with conflicting changes !! one require of the wrong lib and rails is crashing everywhere and it is very difficult to diagnose
duck typing renders code very hard to follow because sometime your ducks looks like ducks aren't really, you will know it late at the execution or in obscure cases
thread works cooperatively, forget about integrating nicely native code
Documentation is sparse or inexistent on lots of libs [comparing it to the apache libs for example]
No coding convention is really followed
No strict language spec : I find it really important because even with one it is often hard to standardize a language !
No type, no help -> for everything you have to jump into a paper doc or an API search just to see what this function is taking as parameter forget about the productivity of your ctrl-space in Eclipse
I18N is non existent and iconv horrible to use.
etc... I could go on but I think you get the point
I really went into ruby open minded and I am really disapointed : really, you try Ruby/Rails and it is a instant "WOW!" relayed by every journalist who just tried it
Guillaume. | http://beta.slashdot.org/story/83737 | CC-MAIN-2014-15 | refinedweb | 7,807 | 68.7 |
Catalyst::Plugin::ConfigLoader::MultiState - Convenient and flexible config loader for Catalyst.
conf/myapp.conf: $db = { host => 'db.myproj.com', driver => 'Pg', user => 'ilya', password => 'rumeev', }; $var_dir = r('home')->subdir('var'); $log_dir = $var_dir->subdir('log'); $log_dir->mkpath(0, 0755); rw(host, 'mysite.com'); $uri = URI->new(""); ... conf/chat.conf $history_cnt = 10; $tmp_dir = r(var_dir)->subdir('chat'); $service_uri = URI->new( r(uri)->as_string .'/chat' ); ... conf/myapp.dev $db = {host => 'dev.myproj.com'}; rewrite(host, 'dev.mysite.com'); ...other differences in MyApp: my $cfg = MyApp->config; print $cfg->{db}{user}; # ilya print $cfg->{db}{host}; # db.myproj.com print $cfg->{chat}{tmp_dir}; # Path::Class::Dir object (/path/to/myapp/var/chat) print $cfg->{host}; # mysite.com print $cfg->{uri}; # URI object print $cfg->{chat}{service_uri}; # URI object () Now if in local.conf: $dev = 1; Then print $cfg->{db}{user}; # ilya print $cfg->{db}{host}; # dev.myproj.com print $cfg->{host}; # dev.mysite.com print $cfg->{uri}; # URI object (magic :-) print $cfg->{chat}{service_uri}; # URI object (more magic) Configure a plugin (Authentication for example) in conf/Plugin-Authentication.conf: module(); $default_realm = 'default'; $realms = { ... };
This plugin provides you with powerful config system for your catalyst project.
It allows you to:
- write convenient variable definitions - your lovest perl language :-) What can be more powerful? You do not need to define a huge hash in config file - you just write separate variables.
- split your configs into separate files, each file with its own namespace (hash depth) or without - on your choice.
- access variables between configs. You can access any variable in any config by uri-like or hash path.
- overload your config hierarchy by *.<group_name> files on demand
- rewrite any previously defined variable. Any variables that depend on initial variable (or on variable that depends on inital, etc) will be recalculated in all configs.
- automatic overload for development servers
This is very useful for big projects where your config might grow over 100kb. Especially when you have number of installations of application that must differ from other without pain to redefine a hundreds of config variables in '_local' file which, in addition to all, cannot be put in svn (cvs).
In most of cases this plugin has to be the first in plugin list.
Syntax is quite simple - it's perl. Just define variable with desired names.
$var_name = 'value';
Values can be any that scalars can be: scalar, hashref, arrayref, subroute, etc. DO NOT write 'use strict' or you will be forced to define variables via 'our' which is ugly for config.
If you define in myapp.conf (root config)
$welcome_msg = 'hello world';
it will be accessible through
MyApp->config->{welcome_msg}
Hashes acts as they are expected:
$msgs = { welcome => 'hello world', bye => 'bye world', }; MyApp->config->{msgs}{bye};
It is a good idea to reuse variables in config to allow real flexibility:
$var_dir = $home->subdir('var'); $log_dir = $var_dir->subdir('log'); $chat_log_dir = $log_dir->subdir('chat'); ...
In contrast to:
$var_dir = 'var'; $log_dir = 'log'; $chat_log_dir = 'chat';
or
$var_dir = 'var'; $log_dir = 'var/log'; $chat_log_dir = 'var/log/chat'; ...will grow :(
The second and third examples are much less flexible. By means of second example we just hardcoded a part of config logic in our application: it supposes that var_dir is UNDER home and log_dir is UNDER var_dir, etc, which must not be an application's headache anyway. In third example we have a lot of copy-paste and application still supposes that var_dir is under home.
All configs from files are written to separate namespaces by default (except for /myapp.*). Plugin reads all *.conf files in folder 'conf' under app_home (or whatever you set ->config->{'Plugin::ConfigLoader::MultiState'}{dir} to), subdirs too - recursively, and special local config from file local.conf under app_home (or whatever you set ->config->{'Plugin::ConfigLoader::MultiState'}{local} to). Configs from /myapp.* and local.conf are written directly to root namespace (config hash). Other configs are written accordingly to their paths. For example config from chat.conf is written to $cfg->{chat} hash. Config from test/more.conf is written to $cfg->{test}{more} hash.
Sometimes you don't want separate namespace, just split one big file to parts. In this case you can use 'root' or 'inline' pragmas. 'root' pragma brings config file to the root namespace no matter where file is located. 'inline' brings file to one level upper.
Examples:
split root config:
/myapp.conf:
...part of definitions
/misc.conf:
root; ...other part of definitions
split /chat.conf:
/chat/main.conf:
inline; ...definitions
/chat/ban_rules.conf
inline; ...definitions
To make configuration for catalyst plugin in separate file, name it after plugin class name replacing '::' with '-' and use 'module' pragma;
For example Plugin-Authentication.conf:
module; $default_realm = 'myrealm'; $realms = { .... };
To embed plugin's config into any root ns file write __ instead of ::
$Plugin__Authentication = { default_realm => 'myrealm', realms => {...}, };
Files of each group (*.conf, *.dev, *.<group_name>) are processed in alphabetical order (except for local.conf and myapp.conf - they are processed earlier).
Special file app_home/local.conf is processed twice - at start and in the end to have a chance to pre-define something (config file groups for example) in the beggining and rewrite/overload in the end.
You can access variable from any file that has already been processed (use test-like namings: 01chat.conf, 02something.conf, ... - if it is matters, plugin removes ^\d+ from ns).
To access variable in root namespace use r() getter:
$mydir = r('var_dir')->subdir('my');
Quotes is not required (for beauty): r(var_dir)-> but be careful - variable name must be allowed perl unqouted literal and must not be one of perl builtin functions and not one of [root, inline, r, p, u, l, module, rw, rewrite], therefore this is not recommended.
To access variable in local (current) namespace use l() getter.
To access variable in upper namespace use u() getter.
To access any variable use p() getter with uri-like path:
p('/chat/history_cnt') || r('chat')->{history_cnt}
To access variables initially defined by catalyst (home, root, pre-defined config variables) use r('home'), r('root'), etc from anywhere. Note that MultiState tunes 'home' variable - it makes it a Path::Class::Dir object instead of simple string.
If a config defines variable that already exists (in the same namespace) it will be merged with existing variable (merged if both are hashes and replaced if not). If you have variables in configs that depend on initial variable - SEE 'rewrites' section or they won't be updated!
Configs can be overloaded by file or group of files that are not loaded by default. The example is *.dev group which is activated when you predefine
$dev=1;
in local.conf (or in MyApp->config before setup phase)
To activate other group(s) you must predefine it in local.conf (or in MyApp->config before setup phase)
$config_group = ['.beta']; #i'am one of beta-servers
Config will be overloaded from conf/*.beta, conf/*/*.beta,... after processing standart configs (i.e. all config variables are accessible to *.beta files to read and overload/rewrite). Group is dot plus files extension.
In myapp.beta for example:
$db = {host => 'beta.myproj.com'}; $debug = {enabled => 1}; rewrite('base_price', 0); ...
In chat.beta for example:
$welcome_msg = l('welcome_msg') . ' (beta server)';
All of the rules described above are applicable to all configs in any groups (i.e. namespaces, visibility, etc).
You can define config groups in application's code as well as in local.conf. To do that just define MyApp->config->{config_group} = [...] BEFORE setup() (runtime overloading is not supported for now).
There is a way to define that in offline scripts and other places that use your application (there are not only myapp_server.pl and Co :-) to customize your application's behaviour:
Create this sub in MyApp.pm:
sub import { my ($class, $rewrite_cfg) = @_; _merge_hash($class->config, $rewrite_cfg) if $rewrite_cfg; } sub _merge_hash { my ($h1, $h2) = (shift, shift); while (my ($k,$v2) = each %$h2) { my $v1 = $h1->{$k}; if (ref($v1) eq 'HASH' && ref($v2) eq 'HASH') { merge_hash($v1, $v2) } else { $h1->{$k} = $v2 } } }
And just write in an offline script/daemon:
use MyApp { log => {file => 'otherlog.log'}, something => 'something', config_group => [qw/.script .maintenance/], };
But there is a big problem. By writing
__PACKAGE__->setup();
in MyApp.pm we just left no chances for others to customize your application BEFORE setup phase because 'use MyApp' will at the same time execute setup() before import()
Fortunately there is a simple solution: not to write '__PACKAGE__->setup()' :-). Instead write:
sub import { #for places that do 'use MyApp' my ($class, $rewrite_cfg) = @_; _merge_hash($class->config, $rewrite_cfg) if $rewrite_cfg; $class->setup unless $class->setup_finished; } sub run { #myapp_server.pl does 'require MyApp', not 'use', so import() is not called my $class = shift; $class->setup unless $class->setup_finished; $class->next::method(@_); } sub _merge_hash { my ($h1, $h2) = (shift, shift); while (my ($k,$v2) = each %$h2) { my $v1 = $h1->{$k}; if (ref($v1) eq 'HASH' && ref($v2) eq 'HASH') { merge_hash($v1, $v2) } else { $h1->{$k} = $v2 } } }
That's all. Now 'use MyApp {...}' will work. This is very useful to customize config in service(script)-based way without creating configuration for them in main config. For example to easily change log file or loglevel as in example above.
Also single-file overloading is also supported.
$config_group = ['.beta', 'service', 'maintenance'];
Loads *.beta, 'service.rw' and 'maintenance.rw'. I.e. group is filename without extension (loads filename plus '.rw')
'Rewrite' must be used when you want to overload some variable's value and you want all variables that depend on it to be recalculated.
For example if you write in myapp.conf:
$a = 1; $b = $a+1;
and in myapp.dev:
$a = 10;
then (on dev server)
$cfg->{a}; #10 $cfg->{b}; #2
oops (!) :-)
'Rewrite' fixes that!
myapp.conf:
rw(a, 1); $b = $a+1;
myapp.dev:
rewrite(a, 10); $cfg->{a}; #10 $cfg->{b}; #11
rw('variable_name', value_to_set);
Tells plugin that 'variable_name' is a rewritable variable. Also creates $variable_name and sets it to value_to_set. The effect is similar to
$variable_name = value_to_set;
but do not write that or rewrite will not work!
rewrite(' /uri/path | relative/path ', value_to_set);
Rewrites variable. Uri path can be absolute or relative to current namespace (namespace of the file where 'rewrite' is). It will croak if this variable is not marked as rewritable.
You can even rewrite properties of objects. Actually you may pass any code that is related to rewrite variable's value/properties to 'rewrite' function. Example:
myapp.conf:
rw('uri', URI->new("")); $uri2 = URI->new($uri->as_string.'/medved');
myapp.dev:
rewrite('uri', sub { r('uri')->path('poka') });
Result:
$cfg->{uri}; # $cfg->{uri2}; #
Looks ok :-)
Development server flag. $c->dev is true if current installation is development. Also available through $c->cfg->{dev}.
Fast accessor for getting config hash. It is 70x faster than original ->config method.
Called by catalyst at setup phase. Reads files and initializes config.
This method is called after the config file is loaded. It can be used to implement tuning of config values that can only be done at runtime.
This method has been added for compability.
You can predefine defaults for config in ->config->{'Plugin::ConfigLoader::MultiState'}{defaults}. Variables from 'defaults' will be visible in config but won't override resulting values.
It takes about 30ms to initialize config system with 25 files (25kb summary) on 2Ghz Xeon.
Catalyst::Runtime, Catalyst::Plugin::ConfigLoader.
Pronin Oleg <syber@cpan.org>
You may distribute this code under the same terms as Perl itself. | http://search.cpan.org/~syber/Catalyst-Plugin-ConfigLoader-MultiState-0.08/lib/Catalyst/Plugin/ConfigLoader/MultiState.pm | CC-MAIN-2014-23 | refinedweb | 1,888 | 59.19 |
Hakyll.Web.Page.Metadata
Description
Provides various functions to manipulate the metadata fields of a page
Synopsis
- getField :: String -> Page a -> String
- getFieldMaybe :: String -> Page a -> Maybe String
- setField :: String -> String -> Page a -> Page a
- setFieldA :: Arrow a => String -> a x String -> a (Page b, x) (Page b)
- renderField :: String -> String -> (String -> String) -> Page a -> Page a
- changeField :: String -> (String -> String) -> Page a -> Page a
- copyField :: String -> String -> Page a -> Page a
- renderDateField :: String -> String -> String -> Page a -> Page a
- renderDateFieldWith :: TimeLocale -> String -> String -> String -> Page a -> Page a
Documentation
Arguments
Get a metadata field. If the field does not exist, the empty string is returned.
Arguments
Arguments
Arguments
Do something with a metadata value, but keep the old value as well. If the
key given is not present in the metadata, nothing will happen. If the source
and destination keys are the same, the value will be changed (but you should
use
changeField for this purpose).
Arguments
Change a metadata value.
import Data.Char (toUpper) changeField "title" (map toUpper)
Will put the title in UPPERCASE.
Arguments
Make a copy of a metadata field (put the value belonging to a certain key under some other key as well)
Arguments
When the metadata has a field called
path in a
folder/yyyy-mm-dd-title.extension format (the convention for pages),
this function can render the date.
renderDate "date" "%B %e, %Y" "Date unknown"
Will render something like
January 32, 2010.
renderDateFieldWithSource
Arguments
This is an extended version of
renderDateField that allows you to
specify a time locale that is used for outputting the date. For more
details, see
renderDateField. | http://hackage.haskell.org/package/hakyll-3.0.0.2/docs/Hakyll-Web-Page-Metadata.html | CC-MAIN-2015-32 | refinedweb | 269 | 57.91 |
I think we should keep the functions in the private namespace until we really expose this in
our api/tools. Otherwise: +1.
Bert
From: Julian Foad
Sent: dinsdag 10 november 2015 16:28
To: dev
Subject: Merge 'svnmover' demo tool to trunk
The work on the 'move-tracking-2' branch currently consists of some
library functions (mostly named 'svn_branch_*') which are used only by
the demo tool named 'svnmover'. These do not interfere with normal
Subversion operation at all. I propose to merge this to trunk to lower
the barrier to participation in this work.
I first need to review a few places where I touched existing code to
insert 'shims'; only a small part of this would remain, I think, as
bidirectional shims are not currently available.
Any objections?
- Julian | https://mail-archives.apache.org/mod_mbox/subversion-dev/201511.mbox/%3C56423e02.a7aac20a.b5097.ffff8347@mx.google.com%3E | CC-MAIN-2020-16 | refinedweb | 131 | 54.15 |
Each Answer to this Q is separated by one/two green lines.
Python decorators are fun to use, but I appear to have hit a wall due to the way arguments are passed to decorators. Here I have a decorator defined as part of a base class (the decorator will access class members hence it will require the self parameter).
class SubSystem(object): def UpdateGUI(self, fun): #function decorator def wrapper(*args): self.updateGUIField(*args) return fun(*args) return wrapper ...
I’ve omitted the rest of the implementation. Now this class is a base class for various SubSystems that will inherit from it – some of the inherited classes will need to use the UpdateGUI decorator.
class DO(SubSystem): def getport(self, port): """Returns the value of Digital Output port "port".""" pass @SubSystem.UpdateGUI def setport(self, port, value): """Sets the value of Digital Output port "port".""" pass
Once again I have omitted the function implementations as they are not relevant.
In short the problem is that while I can access the decorator defined in the base class from the inherited class by specifiying it as SubSystem.UpdateGUI, I ultimately get this TypeError when trying to use it:
unbound method UpdateGUI() must be called with SubSystem instance as first argument (got function instance instead)
This is because I have no immediately identifiable way of passing the
self parameter to the decorator!
Is there a way to do this? Or have I reached the limits of the current decorator implementation in Python?
You need to make
UpdateGUI a
@classmethod, and make your
wrapper aware of
self. A working example:
class X(object): @classmethod def foo(cls, fun): def wrapper(self, *args, **kwargs): self.write(*args, **kwargs) return fun(self, *args, **kwargs) return wrapper def write(self, *args, **kwargs): print(args, kwargs) class Y(X): @X.foo def bar(self, x): print("x:", x) Y().bar(3) # prints: # (3,) {} # x: 3
It might be easier to just pull the decorator out of the
SubSytem class:
(Note that I’m assuming that the
self that calls
setport is the same
self that you wish to use to call
updateGUIField.)
def UpdateGUI(fun): #function decorator def wrapper(self,*args): self.updateGUIField(*args) return fun(self,*args) return wrapper class SubSystem(object): print(name,value) class DO(SubSystem): @UpdateGUI def setport(self, port, value): """Sets the value of Digital Output port "port".""" pass do=DO() do.setport('p','v') # ('p', 'v')
You’ve sort of answered the question in asking it: what argument would you expect to get as
self if you call
SubSystem.UpdateGUI? There isn’t an obvious instance that should be passed to the decorator.
There are several things you could do to get around this. Maybe you already have a
subSystem that you’ve instantiated somewhere else? Then you could use its decorator:
subSystem = SubSystem() subSystem.UpdateGUI(...)
But maybe you didn’t need the instance in the first place, just the class
SubSystem? In that case, use the
classmethod decorator to tell Python that this function should receive its class as the first argument instead of an instance:
@classmethod def UpdateGUI(cls,...): ...
Finally, maybe you don’t need access to either the instance or the class! In that case, use
staticmethod:
@staticmethod def UpdateGUI(...): ...
Oh, by the way, Python convention is to reserve CamelCase names for classes and to use mixedCase or under_scored names for methods on that class.
You need to use an instance of
SubSystem to do your decorating or use a
classmethod as kenny suggests.
subsys = SubSystem() class DO(SubSystem): def getport(self, port): """Returns the value of Digital Output port "port".""" pass @subsys.UpdateGUI def setport(self, port, value): """Sets the value of Digital Output port "port".""" pass
You decide which to do by deciding if you want all subclass instances to share the same GUI interface or if you want to be able to let distinct ones have distinct interfaces.
If they all share the same GUI interface, use a class method and make everything that the decorator accesses a class instance.
If they can have distinct interfaces, you need to decide if you want to represent the distinctness with inheritance (in which case you would also use
classmethod and call the decorator on the subclasses of
SubSystem) or if it is better represented as distinct instances. In that case make one instance for each interface and call the decorator on that instance.
| https://techstalking.com/programming/python/python-decorators-that-are-part-of-a-base-class-cannot-be-used-to-decorate-member-functions-in-inherited-classes/ | CC-MAIN-2022-40 | refinedweb | 736 | 53.81 |
In most of the PyQt code samples you find online and in books (including, I confess, my examples and blog posts) the "old-style" signal-slot connection mechanism is used. For example, here's a basic push-button window:
from PyQt4.QtCore import * from PyQt4.QtGui import * class MyForm(QMainWindow): def __init__(self, parent=None): super(MyForm, self).__init__(parent) the_button = QPushButton('Hello') self.connect(the_button, SIGNAL('clicked()'), self.on_hello) self.setCentralWidget(the_button) def on_hello(self): print('hello!!') if __name__ == "__main__": import sys app = QApplication(sys.argv) form = MyForm() form.show() app.exec_()
The relevant code is:
self.connect(the_button, SIGNAL('clicked()'), self.on_hello)
Apart from being verbose and un-Pythonic, this syntax has a serious problem. You must type in the C++ signature of the signal exactly. Otherwise, the signal just won't fire, without an exception or any warning. This is a very common mistake. If you think that clicked() is a simple enough signature to write, how about these ones (taken from real code):
SIGNAL("currentRowChanged(QModelIndex,QModelIndex)") SIGNAL('marginClicked(int, int, Qt::KeyboardModifiers)')
The "new-style" signal-slot connection mechanism is much better. Here's how the button click connection is done:
the_button.clicked.connect(self.on_hello)
This mechanism is supported by PyQt since version 4.5, and provides a safer and much more convenient way to connect signals and slots. Each signal is now an attribute that gets automatically bound once you access it. It has a connect method that simply accepts the slot as a Python callable. No more C++ signatures, yay!
PyQt 4.5 was released almost 2 years ago - it's time to switch to the new mechanism. | http://eli.thegreenplace.net/2011/04/24/new-style-signal-slot-connection-mechanism-in-pyqt | CC-MAIN-2017-13 | refinedweb | 276 | 52.87 |
#include <genesis/utils/io/input_stream.hpp>
Stream interface for reading data from an InputSource, that keeps track of line and column counters.
This class provides similar functionality to
std::istream, but has a different way of handling the stream and characters. The main differences are:
\n, as used in Unix-like systems) and carriage return chars (CR or
\r, which are the new line delimiters in many Mac systems, and which are part of the CR+LF new lines as used in Windows) is different. Both, CR and LF chars (and the whole CR+LF combination), are turned into single line feed chars (
\n) in this iterator. This ensures that all new lines delimiters are internally represented as one LF, independently of the file format. That makes parsing way easier.
It has two member functions line() and column() that return the corresponding values for the current iterator position. Also, at() can be used to get a textual representation of the current position. The member function current() furthermore provides a checked version of the dereference operator.
Implementation details inspired by fast-cpp-csv-parser by Ben Strasser, see also Acknowledgements.
Definition at line 76 of file input_stream.hpp.
Definition at line 100 of file input_stream.hpp.
Definition at line 110 of file input_stream.hpp.
Definition at line 117 of file input_stream.hpp.
Move to the next char in the stream and advance the counters.
Definition at line 173 of file input_stream.hpp.
Return a textual representation of the current input position in the form "line:column".
Definition at line 323 of file input_stream.hpp.
Return the current column of the input stream.
The counter starts with column 1 for each line of the input stream. New line characters
\n are included in counting and count as the last character of a line.
Definition at line 314 of file input_stream.hpp.
Return the current char, with some checks.
This function is similar to the dereference operator, but additionally performs two checks:
runtime_error.
std::domain_erroris thrown.
Usually, those two conditions are checked in the parser anyway, so in most cases it is preferred to use the dereference operator instead.
Definition at line 155 of file input_stream.hpp.
Return true iff the input reached its end.
Definition at line 348 of file input_stream.hpp.
Extract a single char from the input.
Return the current char and move to the next one.
Definition at line 214 of file input_stream.hpp.
Return the current line and move to the beginning of the next.
The function finds the end of the current line, starting from the current position. It returns a pointer to the current position and the length of the line. Furthermore, a null char is set at the end of the line, replacing the new line char. This allows downstream parses to directly use the returned pointer as a c-string.
The stream is left at the first char of the next line.
Definition at line 235 of file input_stream.hpp.
Return true iff the input is good (not end of data) and can be read from.
Definition at line 331 of file input_stream.hpp.
Return the current line of the input stream.
The counter starts with line 1 for input stream.
Definition at line 303 of file input_stream.hpp.
Return true iff the input is good (not end of data) and can be read from. Shortcut for good().
Definition at line 340 of file input_stream.hpp.
Dereference operator. Return the current char.
Definition at line 136 of file input_stream.hpp.
Move to the next char in the stream. Shortcut for advance().
Definition at line 203 of file input_stream.hpp.
Get the input source name where this stream reads from.
Depending on the type of input, this is either
This is mainly useful for user output like log and error messages.
Definition at line 364 of file input_stream.hpp.
Definition at line 93 of file input_stream.hpp.
Definition at line 94 of file input_stream.hpp.
Block length for internal buffering.
The buffer uses three blocks of this size (16MB each). This is also the maximum line length that can be read at a time with get_line(). If this is too short, change the BlockLength.
Definition at line 91 of file input_stream.hpp. | http://doc.genesis-lib.org/classgenesis_1_1utils_1_1_input_stream.html | CC-MAIN-2018-47 | refinedweb | 707 | 68.77 |
In this tutorial you will learn "How to read properties file in Java?". The data of the property file is displayed on the console.
AdsTutorials
In this video tutorial I will show you how to use the java.util.Properties class for reading a property file in Java program
The java.util.Properties class is subclass of the Hashtable, which is used to map the keys to value.
The java.util.Properties class is used to read the properties file. The properties files are simple text file which is saved as .properties extensions.
Here is example of property file:
name=Rose India
address=Rohini, Delhi
Properties file contains data in the key and value pair.
The getProperty() method of the Properties class is used to read the value of the properties file.
Here is the video tutorial of reading a property file in Java: "How to read properties file in Java?"
Here is the data.properties the properties file used in this example:
name=Rose India address=Rohini, Delhi
Here is the source code of the Java program(ReadPropertiesFile.java):
package net.roseindia; import java.io.*; import java.util.*; public class ReadPropertiesFile { public static void main(String[] args) { Properties prop = new Properties(); try { prop.load(new FileInputStream("./data.properties")); String name = prop.getProperty("name"); String address = prop.getProperty("address"); System.out.println("Name: " + name); System.out.println("Address: " + address); } catch (Exception e) { System.out.println(e.getMessage()); } } }
Above program displays the value of the keys "name" and "address" on the console.
You can find more tutorials of Java Properties class in our Java properties files tutorials section. | https://www.roseindia.net/java/example/java/util/Properties/HowToReadPropertiesFileInJava.shtml | CC-MAIN-2018-34 | refinedweb | 267 | 51.85 |
I encountered the following question which ask you to print the reverse pyramid. Using same number as row number,all of my results are like reverse pyramid following (2i-1) number of terms in each row.
The other one is:
#include <stdio.h>
int main()
{
int i, j, row;
printf ("Enter the no of rows");
scanf ("%d", &row);
for (i = row; i >= 1; i--)
{
for (j = 1; j <= i; j++)
{
printf ("%d", i);
}
printf ("\n");
}
return 0;
}
it's not particularly easy to guess what you had in mind, but regarding linked question, it should be done as follows:
#include <stdio.h> int main() { int i , j, k, row; printf ("Enter the no of rows"); scanf ("%d",&row); for (i=row;i>=1;i--) { for(k = 0;k<row - (i-1);k++) { printf (" "); } for (j=1;j<=i;j++) { printf ("%d",i); printf(" "); } printf ("\n"); } return 0; }
also, next time please post your results. | https://codedump.io/share/5weHnSerK3np/1/reverse-pyramid-using-same-row-number | CC-MAIN-2018-13 | refinedweb | 154 | 69.65 |
The purpose of this assignment is to make sure that you can login
to a Linux system and use the
checkin program.
This way, you’re sure that your login and password work
before you need them for homework #1.
This assignment is not optional.
1) Log on to a CS Department Linux system. Use an editor to create a file
called
hi.c that contains exactly this (upper/lower case matters):
#include <stdio.h> int main() { printf("Hello, CS156 Spring 2018!\n"); return 0; }
2) Compile & execute your program. My prompt is “
%”;
yours will be different. Note the capital
W in “
-Wall”.
% c11 -Wall hi.c % ./a.out Hello, CS156 Spring 2018!
3) Use web checkin, or turn it in like this:
~cs156/bin/checkin HW0 hi.c | https://www.cs.colostate.edu/~cs156/Spring18/HW0 | CC-MAIN-2018-05 | refinedweb | 129 | 87.62 |
This project involves car race where one has to get to the finishing point before the given time elapses. The project is an open source and any is allow to develop on it.
An Open source Microsoft Outlook ( MAPI ) Connector. This project provides free software that enables Microsoft Outlook users to connection to various Microsoft Exchange Groupware server alternatives.
A client/server architecture to allow the creation and playing of a configurable "Connect Four" like game.
This is a map internet web service based on a huge raster maps or satellite images for tracking and monitoring the mobile objects (cars etc) using GPS.
A curling game, It is written in Delphi. Release: 4th Apr
The THOR.Serialization project is a .net library for (de-)serialization purposes of .net objects. Classes may be serialized to different targets like databases or XML.
(Hyper)Markup can: 1) Edit HTML or text files, 2) tidy up your HTML source code and produce formatted text, 3) Flexisibly generate XML codes or XHTML files from given HTML files with XML template and XSL stylesheet.
A database manager based on Microsoft .NET Framework 2.0 It uses xml files for data storage with .nvrd extension. It supports multiple tables, queries, forms, reports. It uses a pretty familiar and productive interface.
This is an attempt to port the java-awt layout manager idea to the .NET framework.
This is an attempt to create a completely full-featured developer's regular expression tester for the Microsoft .NET System.Text.RegularExpressions namespace. The current version is developed in VB.NET but I am currently considering porting to C#.
The .NET ShutMan project is a shutdown manager. You can create a countdown-timer and the program shutdown your pc when the timer is expired (up to 36 hours)! Important: You need the .Net Framework 4.0!
BookManager is a software to manage your book collection with many option in future version : loans gestion, search engine, report, statistics. BookManager needs the Ms .NET Framework 1.1. You can download it on the Microsoft site.
MySQLDotNet is a 100% .Net compliant Data Provider for MySQL. It implements all required classes like Connection, Command, DataReader, and DataAdapter. Features include automatic type conversion to .Net data types, transactions, Unicode and Blob support.
The .Net Messenger Assistant (IMA) is a companion app that hooks into the running messenger. It has the ability to pick-up event notifications (like when your friends come online, or go to lunch) and take actions. It speaks these events!
This gui engine creates a .net gui defined by xml files. The definitions can be used like a script and so it's possible to change the gui live while your application is still running. | https://sourceforge.net/directory/developmentstatus%3Abeta/environment%3Awin32/?sort=name | CC-MAIN-2018-05 | refinedweb | 453 | 60.41 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
I will explain with default openerp core module addons\product\product.py
1) In Product screen
category many2one fields display with
Category Name/ Parent Category Name
Based on this output Category name_get overridden
class product_category(osv.osv):
2) In Sale/Purchase order
Product many2one fields display with
[Product code] Product Name
Based on this output Product name_get overridden
_name = "product.product" def name_get(self, cr, user, ids, context=None): if context is None: context = {} if isinstance(ids, (int, long)): ids = [ids] if not len(ids): return [] def _name_get(d): name = d.get('name','') code = d.get('default_code',False) if code: name = '[%s] %s' % (code,name) if d.get('variants'): name = name + ' - %s' % (d['variants'],) return (d['id'], name)
Hope this will help based on your requirement override name_get method.
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
Override "name_get" method and many2one field shows product- Category. similar name_get example in the core module product/product.py available.
Thanks for the response. Can you please elaborate more how to override many2one field. For example i want to show something like below. parent -->child <newline> ->child2 | https://www.odoo.com/forum/help-1/question/how-can-i-show-parent-child-together-in-many2one-field-46032 | CC-MAIN-2017-34 | refinedweb | 231 | 58.69 |
An
NSScanner object scans the characters of an
NSString object, typically interpreting the characters and converting them into number and string values. You assign the scanner’s string on creation, and the scanner progresses through the characters of that string from beginning to end as you request items.
Creating a Scanner
Using a Scanner
Example
Localization
NSScanner is a class cluster with a single public class,
NSScanner. Generally, you instantiate a scanner object by invoking the class method
scannerWithString: or
localizedScannerWithString:. Either method returns a scanner object initialized with the string you pass to it. The newly created scanner starts at the beginning of its string. You scan components using the
scan... methods such as
scanInt:,
scanDouble:, and
scanString:intoString:. If you are scanning multiple lines, you typically create a
while loop that continues until the scanner is at the end of the string, as illustrated in the following code fragment:
You can configure a scanner to consider or ignore case using the
setCaseSensitive: method. By default a scanner ignores case.
Scan operations start at the scan location and advance the scanner to just past the last character in the scanned value representation (if any). For example, after scanning an integer from the string “
137 small cases of bananas”, a scanner’s location will be 3, indicating the space immediately after the number. Often you need to advance the scan location to skip characters in which you are not interested. You can change the implicit scan location with the
setScanLocation: method to skip ahead a certain number of characters (you can also use the method to rescan a portion of the string after an error). Typically, however, you either want to skip characters from a particular character set, scan past a specific string, or scan up to a specific string.
You can configure a scanner to skip a set of characters with the
setCharactersToBeSkipped: method. A scanner ignores characters to be skipped at the beginning of any scan operation. Once it finds a scannable character, however, it includes all characters matching the request. Scanners skip whitespace and newline characters by default. Note that case is always considered with regard to characters to be skipped. To skip all English vowels, for example, you must set the characters to be skipped to those in the string “AEIOUaeiou”.
If you want to read content from the current location up to a particular string, you can use
scanUpToString:intoString: (you can pass
NULL as the second argument if you simply want to skip the intervening characters). For example, given the following string:
you can find the type of container and number of containers using
scanUpToString:intoString: as shown in the following example.
It is important to note that the search string (
separatorString) is
" of". By default a scanner ignores whitespace, so the space character after the integer is ignored. Once the scanner begins to accumulate characters, however, all characters are added to the output string until the search string is reached. Thus if the search string is
"of" (no space before), the first value of
container is “small cases ” (includes the space following); if the search string is
" of" (with a space before), the first value of
container is “small cases” (no space following).
After scanning up to a given string, the scan location is the beginning of that string. If you want to scan past that string, you must therefore first scan in the string you scanned up to. The following code fragment illustrates how to skip past the search string in the previous example and determine the type of product in the container. Note the use of
substringFromIndex: to in effect scan up to the end of a string.
Suppose you have a string containing lines such as:
Product: Acme Potato Peeler; Cost: 0.98 73
Product: Chef Pierre Pasta Fork; Cost: 0.75 19
Product: Chef Pierre Colander; Cost: 1.27 2
The following example uses alternating scan operations to extract the product names and costs (costs are read as a
float for simplicity’s sake), skipping the expected substrings “Product:” and “Cost:”, as well as the semicolon. Note that because a scanner skips whitespace and newlines by default, the loop does no special processing for them (in particular there is no need to do additional whitespace processing to retrieve the final integer).
A scanner bases some of its scanning behavior on a locale, which specifies a language and conventions for value representations.
NSScanner uses only the locale’s definition for the decimal separator (given by the key named
NSDecimalSeparator). You can create a scanner with the user’s locale by using
localizedScannerWithString:, or set the locale explicitly using
setLocale:. If you use a method that doesn’t specify a locale, the scanner assumes the default locale values.
Last updated: 2008-10-15 | http://developer.apple.com/documentation/Cocoa/Conceptual/Strings/Articles/Scanners.html | crawl-002 | refinedweb | 802 | 51.99 |
Hi, I've had the following bug report submitted; if the submitter is correct (which seems likely, but I don't have a G4 powerpc box to test on), then the jit code in 8.39 is using G5 instructions inappropriately (where the 8.38 code didn't).
Advertising
Obviously, I could just disable jit for this architecture, but I'd rather avoid it if possible... Regards, Matthew -------- Forwarded Message -------- Subject: Bug#840354: src:pcre3: FTBFS on powerpc (G4 CPU) Resent-Date: Mon, 10 Oct 2016 20:39:06 +0000 Resent-From: Christoph Biedl <debian.a...@manchmal.in-ulm.de> Resent-To: debian-bugs-d...@lists.debian.org Resent-CC: Matthew Vernon <matt...@debian.org> Date: Mon, 10 Oct 2016 22:37:40 +0200 From: Christoph Biedl <debian.a...@manchmal.in-ulm.de> Reply-To: Christoph Biedl <debian.a...@manchmal.in-ulm.de>, 840...@bugs.debian.org To: Debian Bug Tracking System <sub...@bugs.debian.org> Package: src:pcre3 Version: 2:8.39-2 Severity: important Usertag: debian-powe...@lists.debian.org Dear Maintainer, Rebuilding syslog-ng 3.7.3-3 on a PowerMac G4 failed with SIGILL, debugging led to libpcre, and finally rebuilding src:pcre3 itself fails with very similar symptoms: (...) /usr/bin/make check-TESTS make[3]: Entering directory '/tmp/pcre3/pcre3-8.39' make[4]: Entering directory '/tmp/pcre3/pcre3-8.39' ./test-driver: line 107: 12195 Illegal instruction "$@" > $log_file 2>&1 FAIL: pcre_jit_test PASS: pcrecpp_unittest PASS: pcre_scanner_unittest (...) The failing program is .libs/lt-pcre_jit_test. Using gdb is no help at all, the failures happen at a different point every invocation: Program received signal SIGILL, Illegal instruction. 0xb7faf0ec in ?? () (gdb) bt #0 0xb7faf0ec0ec,0xb7faf0ff Dump of assembler code from 0xb7faf0ec to 0xb7faf0ff: => 0xb7faf0ec: li r3,-1 0xb7faf0f0: addi r5,r30,6 0xb7faf0f4: cmplw cr1,r5,r29 0xb7faf0f8: bgt cr1,0xb7faf1b8 0xb7faf0fc: lwz r5,36(r1) Program received signal SIGILL, Illegal instruction. 0xb7fe5008 in ?? () (gdb) bt #0 0xb7fe5008 in ?? () #1 0x1ffb8b80 in ?? () #2 0x1ff91820 in ?? () #3 0x20006110 in ?? () #4 0x20004a3c in ?? () #5 0x1fc97274 in ?? () #6 0x1fc9747c in ?? () #7 0x00000000 in ?? () (gdb) disassemble 0xb7fe5008,0xb7fe5020 Dump of assembler code from 0xb7fe5008 to 0xb7fe5020: => 0xb7fe5008: mflr r0 0xb7fe500c: stw r31,-4(r1) 0xb7fe5010: stw r30,-8(r1) 0xb7fe5014: stw r29,-12(r1) 0xb7fe5018: stw r28,-16(r1) 0xb7fe501c: stw r27,-20(r1) Program received signal SIGILL, Illegal instruction. 0xb7faf060 in ?? () (gdb) bt #0 0xb7faf060060,0xb7faf070 Dump of assembler code from 0xb7faf060 to 0xb7faf070: => 0xb7faf060: lwz r28,8(r5) 0xb7faf064: addi r3,r3,1 0xb7faf068: stw r3,32(r1) 0xb7faf06c: b 0xb7faf0cc My last rebuild of syslog-ng in July, using pcre3 version 2:8.38-3.1, succeded. Therefore I guess it's the changes between 8.38 and 8.39. Very likely this was introduced by the changes in the ppc_jit code which unfortunately I was unable to understand. Disabling jit for powerpc using the following patch: --- pcre3-8.39.orig/sljit/sljitConfigInternal.h +++ pcre3-8.39/sljit/sljitConfigInternal.h @@ -135,11 +135,7 @@ #elif defined(__ppc64__) || defined(__powerpc64__) || defined(_ARCH_PPC64) || (defined(_POWER) && defined(__64BIT__)) #define SLJIT_CONFIG_PPC_64 1 #elif defined(__ppc__) || defined(__powerpc__) || defined(_ARCH_PPC) || defined(_ARCH_PWR) || defined(_ARCH_PWR2) || defined(_POWER) -# ifndef __NO_FPRS__ -# define SLJIT_CONFIG_PPC_32 1 -# else # define SLJIT_CONFIG_UNSUPPORTED 1 -# endif #elif defined(__mips__) && !defined(_LP64) #define SLJIT_CONFIG_MIPS_32 1 #elif defined(__mips64) resulted in a successful pcre3 build, and using the created packages also syslog-ng passes the failing test again. So I'm asking you to bring this upstream to sort out how the breakage was introduced. From other bug reports I guess the created code uses G5 instructions; the Debian powerpc buildds didn't notice as they actually are such newer boxes. As a hopefully temporary workaround you could disable jit for powerpc, of course a the cost of a performance penalty. Christoph -- System Information: Debian Release: stretch/sid APT prefers testing APT policy: (500, 'testing') Architecture: powerpc (ppc) Kernel: Linux 4.4.23 Locale: LANG=C, LC_CTYPE=C (charmap=ANSI_X3.4-1968) Shell: /bin/sh linked to /bin/dash Init: unable to detect -- ## List details at | https://www.mail-archive.com/pcre-dev@exim.org/msg04552.html | CC-MAIN-2017-04 | refinedweb | 671 | 50.43 |
After our discussion on #arch (and some enlightening comments about how you tend to circle around proposals before making a landing), I think I understand better where you stand. As a prolog, I want to point out that "features" is a misleading name. The intended use is more about bugfixes and updates rather than divergent functionality. Tom Lord wrote (in a different order): > An answer implies/includes, to name a few things, how feature names > are decided, who/how feature presense/absense is asserted, what valid > feature names are, etc. > Next: I like to be frugal with new design add-ons. If it's *really* > necessary to add something new, hopefully it should solve lots of (or > a very general) existing problems. "Features" seems a bit isolated > to me, in this context. I do not understand what you mean by "isolated". This proposal aims to solve the tla instance of the general problem of handling API changes. Existing solutions to similar instances of this problem are protocol and library versioning. Maybe the solution can be made more general and future proof, to support testing for the presence of optional/third-party modules in an extensible framework, but that is NOT the problem I am trying to address. > James' suggestion was something like named boolean flags, set I > presume in a static array in some C file --- is that about right? The proposal I have in mind (after discussion with Robert Collins) is slightly more elaborate. * Features are static and describe only the compile-time state of tla. * The "feature mapping" F maps "feature names" N (strings) to "feature versions" V (nonnegative integers). * F(N):V+1 implies F(N):V. This feature mapping F:string->integer can be transformed into a mapping F:string->boolean. In the unlikely event you have a problem with the math, please ask. There are a two reasons why we need fine-grained versioning: 1. To explicit the mapping between tla releases (what the maintainer defines) and the API changes (what the scripts are interested into). This helps providing compatibility with earlier released versions of tla. 2. To make it possible to support features as soon as they are present in development versions of tla. The first point has already been elaborated, so I am going to focus on the second point and the way it relates to the tla development process. Recent experience has shown that front-end development drives bugfixing and feature improvements in tla. Here is the basic scenario: 1. A front-end developer isolates a bug or a missing option (something much smaller that what you would ordinarily call a "feature") in tla. 2. He describes the problem to a tla developer (maybe the same person with a different hat) who agrees to implement the requested change, or informs him he already has this change in his tree. 3. Some functionality can be provided in the absence of the fix, but the fix makes it possible to: A. provide more value (e.g. more reliability or performance) B. or use a simpler code path (making it possible to isolate and and eventually remove the workaround code). 4. The front-end developer asks a tla developer (maybe the same person) to provide a feature flag. If the developer implementing the fix is different from the developer implementing the flag, the latter asks the former to merge the new feature flag. > who controls the namespace of them and how? Nobody does. And that is going to work because of the community process and because the nature of the names is different depending on whether they are used for "past" support or for "experimental" support. *************************************************************** I'm getting tired of writing this. Probably you are getting tired of reading this. So I'm going to be sketchy for the rest of the message. Since the feature name space is going to be managed by the same community that develop tla, and because name conflicts are self defeating, there is no need to ensure uniqueness explicitly. Actually, ensuring uniqueness (e.g. by using the Arch name space) only makes it safer to bypass the community process. "Feature names" are slash-separated sequences of xl symbols. By convention, when they pertain to a specific subcommand, they start by the name of the subcommand and a slash. For example "archive-meta-info/fail-when-missing:1" could have been the name of the archive-meta-info fix. > For example: revision libraries have parameter settings (e.g., the > greedy and sparse flags). Should "features" variables *really* be a > fundamentally different thing? or is there usefully some more general > concept that would make "features" and revision library params the > same thing? Library params are configuration settings. "tla features" are compile- time information. I see them as fundamentally different. > Such a table is easy to code up an understand, but also easy to make a > mistake with while maintaining it I agree with you that it's more difficult to maintain than ordinary code because it does not _do_ something. But I believe the wider (distributed) development process will be effective at avoiding mistakes. If you are really concerned, you could encourage feature flags to be accompanied by test cases. Since feature are about observable behaviour changes, it will always be possible. It may happen that the test be VERY difficult and then the feature flag would be very useful, so providing a test case should not be a requirement. -- -- ddaa | https://lists.gnu.org/archive/html/gnu-arch-users/2004-09/msg00027.html | CC-MAIN-2019-26 | refinedweb | 911 | 64.1 |
Products and Services
Downloads
Store
Support
Education
Partners
About
Oracle Technology Network
The simple program below runs many times slower on 1.4 than it did on 1.3.1
The problem is in Hashmap.get(Object).
Eg on 296Mhz sparc system the loop in the program produces these timings:
JDK 1.3.1 : 860ms
JDK 1.4.0 : 4300ms
This is 5 times slower.
The reason is a new hashing mechanism works very poorly for the hash
codes returned from Doubles and every entry ends up in the 0th bucket so
search is almost linear. Double is a somewhat unusual key but we have one
customer app which uses it extensively and it is a performance critical
part of the application. Its unclear how common such problems would be
in hash codes from more common keys.
Its possible the app can be recoded to not use Double's but some way to get
the performance back in Hashmap.get() would be preferred.
The currently identified workaround is to create such a large capacity table
that the bitwise and to get the index of the bucket can come up with
different indexes.
For this app a capacity of at approx 2^17 is needed to restore to performance.
import java.util.*;
public class HM {
static int LEN=500;
public static void main(String args[]) {
HashMap hmap = new HashMap();
Double []dbls = new Double[LEN];
for (int i=0;i<LEN;i++) {
dbls[i] = new Double(i);
hmap.put(dbls[i], new Integer(i));
}
long t0 = System.currentTimeMillis();
for (int lc = 0; lc<1000; lc++) {
for (int i=0;i<LEN;i++) {
Object o = hmap.get(dbls[i]);
}
}
long t1 = System.currentTimeMillis();
System.out.println("time = "+(t1-t0));
}
}
CONVERTED DATA
BugTraq+ Release Management Values
COMMIT TO FIX:
1.4.0_02
hopper
FIXED IN:
1.4.0_02
hopper
INTEGRATED IN:
1.4.0_02
hopper
VERIFIED IN:
hopper | http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4669519 | CC-MAIN-2013-48 | refinedweb | 314 | 66.94 |
The AngularJS team created Karma, but Karma isn’t tied to AngularJS. As a test runner, I can use Karma to run tests against any JavaScript code using a variety of testing frameworks in a variety of browsers. All I need is Node.js.
Pretend I am writing code to represent a point in two dimensional space. I might create a file point.js.
var Point = function(x,y) { this.x = x; this.y = y; }
I’ll test the code using specifications in a file named pointSpecs.js.
describe("a point", function() { it("initializes with x and y", function() { var p1 = new Point(3,5); expect(p1.x).toBe(3); expect(p1.y).toBe(5); }); });
What Karma can do is:
- Provide web browsers with all of the HTML and JavaScript required to run the tests I’ve written.
- Automate web browsers to execute all the tests
- Tell me if the tests are passing or failing
- Monitor the file system to re-execute tests whenever a file changes.
The first step is using npm to install the Karma command line interface globally, so I can run Karma from anywhere.
npm install karma-cli –g
Then I can install Karma locally in the root of my project where the Point code resides.
npm install karma --save-dev
Karma requires a configuration file so it knows what browsers to automate, and which files I’ve authored that I need Karma to load into the browser. The easiest way to create a basic configuration file is to run karma init from the command line. The init command will walk you through a series of questions to create the karma.conf.js file. Here is a sample session.
>karma init any browsers automatically ? Press tab to list possible options. Enter empty string to move to the next question. > PhantomJS > Chrome > What is the location of your source and test files ? You can use glob patterns, eg. "js/*.js" or "test/**/*Spec.js". Enter empty string to move to the next question. > js/**/*.js > specs/**/*:\temp\testjs\karma.conf.js".
I can open the config file to tweak settings and maintain the configuration in the future. The code is simple.
module.exports = function(config) { config.set({ basePath: '', frameworks: ['jasmine'], files: [ 'js/**/*.js', 'specs/**/*.js' ], exclude: [ ], preprocessors: { }, reporters: ['progress'], port: 9876, colors: true, logLevel: config.LOG_INFO, autoWatch: true, browsers: ['PhantomJS', 'Chrome'], singleRun: false }); };
Note that when I enter the location of the source and test files, I want to enter the location of the source files first. It’s helpful if you can organize source and test files into a directory structure so you can use globbing (**) patterns, but sometimes you need to explicitly name individual files to control the order of loading.
At this point I can start Karma, and the tests will execute in two browsers (Chrome, which I can see, and PhantomJS, which is headless). I’d probably tweak the config file to only use Phantom in the future.
Now, Karma will continuously run tests in the background while I work.
Easy!
A rest parameter allows a function to work with an unknown or variable number of arguments. A function’s rest parameter always appears at the end of the function’s argument list and uses a triple dot prefix (...), as shown below.
let doWork = function(name, ...numbers){ let result = 0; numbers.forEach(function(n){ result += n; }); return result; };
A caller invoking doWork can pass zero or more parameters at the rest parameter position. You can say the numbers argument will take “the rest” of the parameters a caller passes, and numbers will hold the parameter values in an array. In the following example, numbers will reference an array with the values 1, 2, 3.
let result = doWork("Scott", 1, 2, 3); expect(result).toBe(6);
In the case where a caller passes no parameters in the rest parameter position, the rest parameter will be an empty array.
let doWork = function(...numbers){ return numbers; }; let result = doWork(); expect(result.length).toBe(0);
Before ES6, we could allow callers to pass a variable number of arguments to a function by using the implicit arguments variable inside of the function. The arguments variable contains all the parameters to a function in an array-like object, but arguments is not an array, which creates confusion. It is also difficult to spot if a function is using arguments without reading through the code or documentation for the function.
ES6 rest parameters will avoid the confusion by always giving us a true array, and by using a dedicated syntax that makes rest parameters easy to spot when reading the function signature.
Want more? Watch JavaScript Fundamentals for ES6 on Pluralsight!
Primary constructors are a feature you’ll find in a few programming languages, including F#, Scala, and Kotlin. A primary constructor allows us to define a constructor for a type and capture the constructor parameters to use in initialization expressions throughout the rest of the type definition. Here’s one simple example:
public struct Money(string currency, decimal amount) { public string Currency { get; } = currency; public decimal Amount { get; } = amount; }
Notice how the primary constructor arguments appear inside parentheses just after the type name. The code can then use the arguments in field initializers and property initializers. Common questions about this new syntax revolve around parameter validation, and how primary constructors can work with other explicit constructors. Here’s another example.
public class AddUserCommand(User newUser, User creator) : Command { public AddUserCommand(User newUser) : this(newUser, creator: newUser) { } public User NewUser { get; protected set; } = newUser; public User Creator { get; protected set; } = Verify.NotNull("creator", creator); Guid _creatorId = creator.Id; }
Once you have a primary constructor, all other constructors must ultimately call into the primary ctor using this(), which makes sense since we always want the primary constructor arguments to be available for initialization (although a struct will still have a non-replaceable default constructor that will initialize all members to default values).
Validation is a bit trickier, but simple validation checks are possible in the initialization expressions.
I really like the primary constructor syntax for components that are managed by an IoC container. Typically these components have a single constructor to accept dependencies, and they need to store the dependencies in a field or property. Validation of the dependencies is not a concern as most containers will (or can) raise an error if a dependency cannot be found.
public class UserController(IUserStorage storage) : Controller { public ActionResult Index() { // ... } IUserStorage _storage = storage; }
The primary constructor syntax means there is a little less code to write for these types of components.
If a JavaScript function takes two parameters, we have always had the ability to invoke the function and pass two parameters, or one parameter, or no parameters, or three parameters if we wanted. In cases where we do not pass enough parameters, the author of a function might want to specify a default value instead of working with undefined.
Before ES6, developers applied default values to simple parameters using || expressions inside the function.
let doWork = function(name) { name = name || "Scott"; return name; };
With ES6, default parameters are explicit and appear in the argument list for a function.
let doWork = function(name = "Scott") { return name; }; expect(doWork()).toBe("Scott");
A function can even calculate a default using something more than just a literal expression.
let doWork = function(x = Math.random()) { return x; }
The default parameter syntax is a good example of how ES6 is providing a clean syntax for a common requirement. No longer will we have to scan through a function implementation or documentation to discover the defaults.
Of course, not all parameters are simple values. A common programming technique for operations involving complex configurations, like an HTTP call, is to accept a single object argument and use some sort of extend API to merge the incoming configuration with configuration defaults (jQuery.extend or angular.extend, as two examples). Default parameters are useful in this scenario, too, but we’ll have to learn about object destructuring in an upcoming post, first.
Want more? Watch JavaScript Fundamentals for ES6 on Pluralsight!
I’ve been a bit late to the Angular controller as syntax. I was skeptical of the feature at first, and with an Angular project already in flight, I felt it wasn’t the type of change to make with a significant amount of code already committed.
The controller as syntax allows me to alias a controller in markup.
<div ng- {{ main.title }} </div>
Then instead of injecting a $scope into the controller, model data and behavior is added to the controller instance itself.
app.controller("MainController", function(){ this.title = “Hello!”; });
Behind the scenes the controller as syntax adds the controller alias (main in the above example) to a scope object, because Angular still evaluates all binding expressions against a scope object.
Here’s what I think about controller as now.
- Using the controller alias in a view is a push into the pit of success. View code is more explicit and easier to maintain. Even when using $scope in a controller, I’ve come to view any reliance on prototypal inheritance in the scope chain with suspicion, as the inheritance is brittle, subtle, and often complicates both controller and test code. With a controller alias there is no reliance on scope inheritance hierarchies.
- Not having to inject a $scope makes test code slightly easier.
- Having no access to the $scope API inside a controller is a good thing. I’ve come to view any use of $scope.$watch, $scope.$emit, $scope.$on, and $scope.$* in general with suspicion, at least when inside the controller code for a view. Nearly all the functionality available through the $scope API is better used inside of directives or services that provide a better abstraction.
- Thinking of the controller as a true constructor function instead of a function that adds stuff to $scope is healthier. You can take advantage of the constructor function prototype property to organize code, and this will work well with ES6 class definitions (which presumably will work well with ng 2.0).
- Writing directive controllers is different than writing view controllers. With directive controllers the instance members are often used as an API for intra-controller coordination, and this API should be separate from the model for the view.
- Somewhat related to the above, I’m still not convinced that tangling the controller and view model together is a good idea. I think there is some value in having a dedicated factory for view models.
The good appears to outweigh the bad, so I’ll switch over to using controller as syntax moving forward. I don’t think controller as is a panacea for complicated$scope code, but it does push people in a better direction.
I also think controller as highlights the need for Angular 2.0 to make some huge, breaking changes. Once of Angular’s greatest strengths, for me, is that the framework provides enough flexibility to build a variety of apps both large and small. Routing is optional, for example, and most programming can be done with plain old JavaScript objects. However, the surface area of the framework is dangerously close to being a Mirkwood forest of programming techniques. A streamlined API with stronger opinions about how to tie components together would make the framework easier to use.
The!
With a new release of the C# language approaching, it’s also time to look at new features for C#.
First up is the ability to use an initialization expression with an automatically implemented property.
Currently, a constructor is required if you want to create objects using an auto-property and initialize an auto-property to a non-default value.
In C# 6.0, the ability to use an initializer with the auto-property means no explicit constructor code is required.
public class User { public Guid Id { get; } = Guid.NewGuid(); // ... }
Another benefit of the syntax is how an initialized auto-property only requires a getter, the set method is optional. With no setter, immutability is easier to achieve.
The property initialization syntax also plays well with another new feature for C# 6, the primary constructor syntax. We’ll look at primary constructors in the next post for this series. | http://odetocode.com/blogs/all?page=7 | CC-MAIN-2015-27 | refinedweb | 2,039 | 55.54 |
How to Embed Resources in the Application Executable
This code example shows how to embed binary files in the application's executable using the Qt Resource System.
Overview
The resource system allows you to embed any sort of file, incuding icons, background images etc, and read from them using QFile in almost the same manner as you would any other file. Note that including binaries in the executable means that they can't be replaced separately, and means that on devices that do not have on demand code paging, more memory is required.
Create .qrc(Resource Collection) file
Create .qrc file (in XML-based file format) and define the resources that are part of the application's source tree. For example.
/* ResourceEx.qrc */
<!DOCTYPE RCC><RCC version="1.0">
<qresource>
<file>rsc/Background.JPG</file>
<file>rsc/Background_Landscape.JPG</file>
</qresource>
</RCC>
The name of file can be changed using the file tag's alias attribute. You can also specify a path prefix for all files in the .qrc file using the qresource tag's prefix attribute.
Modify .pro file
Assign .qrc file to RESOURCES variable so that qmake knows about it and will produce make rules to generate a file called .cpp (in this example qrc_ResourceEx.cpp) that is linked into the application.
RESOURCES += ResourceEx.qrc
Access resources from the application
The colon (:) prefix informs the Qt-handling methods that the file needs to be fetched from an application resource in the executable rather than an external file. Because it is not an external file, we don’t need to worry about where the file is located. Resources are accessible in the application with a :/ prefix. For example, the path :/rsc/Background.JPG would give access to the Background.JPG file, whose location in the application's source tree is /rsc/Background.JPG.
/* ResourceEx.cpp */
#include "ResourceEx.h"
#include <QPalette>
#include <QDesktopWidget>
ResourceEx::ResourceEx(QWidget *parent)
: QMainWindow(parent)
{
ui.setupUi(this);
setWindowTitle("Qt Resource System");
SetBackgroundImage();
}
ResourceEx::~ResourceEx()
{
}
void ResourceEx::SetBackgroundImage()
{
QPalette p = palette();
/* Access image attached in application executable */
QPixmap image(":/rsc/Background.JPG");
QDesktopWidget* desktopWidget = QApplication::desktop();
QRect rect = desktopWidget->availableGeometry();
QSize size(rect.width() , rect.height());
QPixmap pixmap(image.scaled(size));
p.setBrush(QPalette::Background, pixmap);
setPalette(p);
}
Post-conditions
The code snippet is expected to embed images to application executable.
Download Code Example
- Download the code example from here: File:QtResourceEx.zip
Hi,
--somnathbanik 09:16, 27 April 2011 (UTC)
Thanks very much - yes, all your changes appear to have been good!
I've further subedited to improve readability.
hamishwillee 01:28, 28 April 2011 (UTC) | http://developer.nokia.com/community/wiki/How_to_Embed_Resources_in_the_Application_Executable | CC-MAIN-2014-41 | refinedweb | 431 | 50.43 |
Originally posted by Vad Fogel: Hi all, Here's the source code stored in MainFun.java: public class MainFun extends Fun{}
abstract class Fun{
public final static void main(String[] s){
System.out.println("Can you inherit static methods?");
}
} What's the result of trying to compile and run it: A. Compiles and runs with "Can you inherit static methods?" output B. Compiles, but complains at run time "java.lang.NoSuchMethodError: main" C. Doesn't compile since main() is declared final in an abstract class D. Doesn't compile since main() is not to be placed in abstract classes[/QUOTE ---------------------------------------------------------------------------- Answer will be B since we r saving the file as MainFun.java.So it will compile without error No need to implement the method inside interface since it is non-absract. So at runtime "no main method found "error will be displayed.
Originally posted by Priyanka Chopda: Well answer is A. Compiles and runs with "Can you inherit static methods?" output Since class MainFun is subclass of Fun, it inherits main method from Fun class. So for compiler won't complains as long as it gets to see same signature of main method. What I'm not getting is how can Fun be declared as "abstract" when there is no abstract method in it?? Any ideas???
Originally posted by Yoo-Jin Lee: Hi, I couldn't find anything in the JLS that says you cannot declare a class to be abstract if there are no abstract methods in it. The closest I found was:. A class of this form usually contains class methods and variables. -Yoo-Jin | http://www.coderanch.com/t/242916/java-programmer-SCJP/certification/fun-main-abstract | CC-MAIN-2015-27 | refinedweb | 268 | 66.64 |
There has been a lot of talk lately about WPF/Silverlight. I have also been learning and writing WPF articles, but what I thought might make an interesting article would be to compare and contrast WPF/Flash.
I feel I have the right to do this, as I have worked with both technologies, and although I am not selling myself as an expert in either, I feel confident enough to write this article, I thought it would just make some interesting reading for others that just dont know how WPF or Flash work. For example if you're a Flash developer and dont know what WPF is, I am hoping this article will help you, and vise versa.
I should firstly mention that I only have Flash MX 2004, which I believe is Flash 7.0, which used the 2nd generation of Flash scripting language (action script 2, here after known as AS2). Since then Macromedia have obvioulsy been bought by Adobe, who have created Creative Studio 3 (CS3) which includes Flash under this general umbrella. Not only that, but they have totally re-written the scripting side of things in Flash, and it is now action script 3 (here after known as AS3). I know nothing about AS3 except that it will be Java like, and will allow OO type stuff like inheritence etc etc, but I do have a excellent Flash developer friend, that assures me that writing about AS2 will not be a waste of time, as although its different, the areas i'm going to cover in this article are still the same. So ill stick to writing about AS2 if its ok with your lot. After all people are still writing about .NET 2.0, jeez that soo old man .NET 3.0/3.5 are here, come on man get with the program. No only kidding what ever floats your boat is ok by me, I write about what I like. Period. If others like it great, if not thats also fine, ill still carry on writing about stuff I like.
I think the easiest way to start it to just go through features one at a time. I am specifically only picking on features that WPF introduced, otherwise this would be a very very long and tideous article. By this I mean I will not discuss threading, sockets, databases, remoting, xml handling specifically, though Flash is more than capable of all of these with the exception of threading.
Below is a list of things that I think Flash / WPF people would like to know about each others worlds
So shall we continue, what im going to do is under each of the heading below, have a Flash view point followed by a WPF viewpoint, that way people should be able to see what the various pros and cons are for themselves.
Flash
Before we get on to how to code in Flash its fairly important to understand how Flash works at a conceptual level. So lets start there shall we.
Flash is based on a library of reources which can be created on an initial stage (work area) and then grouped together to form either a Button/ Graphic or a
MovieClip. When created these grouping of stage objects into one of the 3 objects just mentioned, can be added to the library (more on this later) as resources. Later on the library components can be brought to the stage either manually (dragged on to stage) or programatically (the right way IMHO).
The stage is basically constructed of layers down and time across. See the figure below, which shows 2 simple rectangles (no animation yet) on the stage on different layers
The stage allows user to create the following :
MovieClipobject for example.
For those of you that have never used Flash, it has its own language support, this is known as Action Script. The latest version of Flash (the one in Adobe CS3 product) uses Action Script 3 (AS3). I dont have access to that though, so ill be discussing things using Flash MX 2004 which uses AS2.
So what can we do with this Action Script stuff. Well as it turns out we can do quite a lot actually.
The way that Flah have developed Action Script is quite cool and what one would expect coming from an OO language. AS2 was not 100% OO but it was a damn fine start. There was and still is no threading support, but apart from that, I think its a nice set of classes and libraries/operators and keywords.
There are pre built classes for working with
MovieClips, Sound, Video, Web Cams, Sockets, Event handling, and also the usual collections, if-then-else, loops, Maths classes. We can subclass existing controls, create new control and classes etc etc. Its quite mature actually, even better in the new AS3, which will be 100% OO.
So now that we know that we have these classes available what can we do with them, and how to I trigger a bit of Action Script to run in the 1st place.
Well thats actually very very easy, all we need to do is create an Action of a frame or an object. An example is shown below, where I am creating an action on Frame1 of the stage. When an action exists for a key frame a lilttle "a" will be shown for that frame on the stage.
It can be seen from this small screen shot that we can create functions, call methods on existing classes. And whats this 1st line doing
attachManager = new AttachManager();
That kind of looks like its creating an object doesnt it. Well yes that exactly what its doing, we can creat objects in Flash action script, that will bring to life external Action Script Files and create objects from them. Shall we take a sneaky peak at what this external action script file looks like
// \\\\\\\\\\\\\\ // // AttachManager class : Allows movie to be added to the timeline using // scripting. The movies will be added using the linkage name within the // library. The attach method also returns the attached MovieClip to the // timeline. This class is instatiated from the _level0 timeline using action // script like // // attachManager = new AttachManager() // // var movie1 = attachManager.attach('movie1', 0, 0) // // \\\\\\\\\\\\\\\ class AttachManager{ //instance fields private var attachArr:Array; private var i:Number; //constructor function AttachManager(){ //holding array for all objects the manager holds attachArr = new Array(); } //attach : Attaches the object to the timeline using the parameters // provided // //PARAMETERS : //linkage:String : A string representing the linkage name for the object // to attach //xpos:Number : The X-Pos to use for the object to attach //ypos:Number : The Y-Pos to use for the object to attach //depth : The depth to use, where the object will be attached. In action //script depth goes from 0 downwards where as in flash depth goes from 0 // upwards // //RETURNS : //MovieClip : that may be used by the flash timeline via action script public function attach(linkage:String, xpos:Number, ypos:Number,depth):MovieClip { //get the depth if no depth exists yet if(depth==undefined){ depth = getCurDepth(); } //now do the attaching to _level0, using the paraaddEventListenermeters // porovided var tmpClip = _level0.attachMovie(linkage,linkage,depth,{_x:xpos,_y:ypos}) tmpClip._name=linkage; //store the object in the holding array attachArr.push(tmpClip) //return the newly attached MovieClip return tmpClip; } //getCurDepth : Get the HighestDepth for the timeline // //RETURNS : //Number : that represents the HighestDepth for the timeline public function getCurDepth():Number { //return the depth return _level0.getNextHighestDepth(); } //killClip : Removes an object from the AttachManager // //PARAMETERS : //clip : the object to remove from the internal holding array public function killClip(clip) { //remove the clip _level0.removeMovieClip(clip); } //removeAllClips : Removes all objects from the AttachManager public function removeAllClips() { //remove all clips in holding array for(i=0; i < attacharr.length;)
Its a class, just like one would create in any other OO type language. So by using a combination of components, library resources, stage action script, and external action script (.as files) we can make some pretty powerful Flash applications. Oh if you want to get into writing action script seriously, I would recommend that you download the Sepy action script editor which is available here
IMPORTANT NOTE : Although the stage is convenient, it can quickly becoming cluttered with layer/objects/animations if you only use the stage. A better way is to develop seperate parts of the UI which all do specific tasks and then create these tasks in small components (MovieClip objects are best) that are brought to the stage either manually or by action script (which is discussed in the resources section of this article), which keeps the stage nice and tidy. A really good flash developer will probably have a single layer within the stage that has one frame which has actions assocviated with it, which will load all the other UI parts to the stage programatically. Of course each component will also have its own stage, thats how Flash works. Each MovieClip has its own stage.
So I hope that gives a basic understanding of the very basic idea behind how to create Flash applications
WPF is a truly OO language that also borrows ideas from ASP .NET. For example there is a view and code behind. The view part would be constructed of code known as XAML (Extensible Markup Language) which a Microsoft proprietary language, and looks a little like XML. But its way more powerful than simple XML, it is possible to make an entire event/animated datadriven application purely in XAML without and code behind at all. In the case where a code behind file is used, the code behind file will be either C# (the correct choice) or VB .NET (no dont do it).
So how does this all work, 2 files?
Lets consider a little example, a simple window with a single button control on it. Thats it.
This would result in the following XAML
<Window xmlns="" xmlns: <Grid x: <Button HorizontalAlignment="Left" Margin="101,118,0,0" x: </Grid> </Window>
And we can access the Button control (btn1) from a code behind file just fine, as shown below
But how can this be. Well what happens is that there is an extra code file generated behind the scenes which contains all the objects that are defined in the XAML file, which allows the code behind file to access the XAML defined controls. This miracle is down to partial class support.
Shall we have a look at the generated code file for this simple example, which is a filled called Window1.g.cs which will be placed in the \Obj folder as part of the compilation process.
#pragma checksum "..\..\Window1.xaml" \ "{406ea660-64cf-4c82-b6f0-42d48172a799}" "FADEDA8B7804EA63C81E418EDBA03A95" //---------------------------------------------------------------------------- // <auto-generated> // This code was generated by a tool. // Runtime Version:2.0.50727.1378 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //---------------------------------------------------------------------------- UntitledProject1 { /// <summary> /// Window1 /// </summary> public partial class Window1 : System.Windows.Window, System.Windows.Markup.IComponentConnector { internal UntitledProject1.Window1 Window; internal System.Windows.Controls.Grid LayoutRoot; internal System.Windows.Controls.Button btn1; private bool _contentLoaded; /// <summary> /// InitializeComponent /// </summary> [System.Diagnostics.DebuggerNonUserCodeAttribute()] public void InitializeComponent() { if (_contentLoaded) { return; } _contentLoaded = true; System.Uri resourceLocater = new System.Uri("/UntitledProject1;component/window1.xaml", System.UriKind.Relative); #line 1 "..\..\Window1.xaml" System.Windows.Application.LoadComponent(this, resourceLocater); #line default #line hidden } .Window = ((UntitledProject1.Window1)(target)); return; case 2: this.LayoutRoot = ((System.Windows.Controls.Grid)(target)); return; case 3: this.btn1 = ((System.Windows.Controls.Button)(target)); return; } this._contentLoaded = true; } } }
And our code behind file (C#)
using System; using System.IO; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Data; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Navigation; namespace UntitledProject1 { public partial class Window1 { public Window1() { this.InitializeComponent(); // Insert code required on object creation below this point. } } }
So from these 3 listings you should be able to see how controls are accessable via the XAML and the codebehind files. But how do we use these controls. Well they are just classes, so we can call their methods, set the properties and listen to their events. As .NET is 100% OO we can sub class these standard classes and override the relevant methods should we wish to. And of which would occur in a XAML file or a code behind file. That said control Events will normally be handled in code behind, where custom logic may be performed
Unlike Flash there is no stage, there is a timeline feature used for animations, but thats about it. I wont discuss that yet, as its covered later on. This section was just meant to give you an idea of how Flash and WPF went about doing their thing.
Flash doesnt come with many controls as standard, as can be seen from the screen shots shown below.
This doesnt seem like a very big control set when compared to the .NET 3.0 controls available (remember we can still use System.Windows.Forms.Dll for dialogs and can even host windows forms components in WPF application) but what you need to understand with Flash is that we can apply actions (action script) to anything not just controls. This is way different from the windows programming model, where a control events are what we use to cause something to happen. For example I could have a single frame within the main stage that has action script aplied to it, or I could have a movie which is loaded dynamically and that could have its own actions associated within it, and it could also contain any of the controls shown above all of which expose their own events. In fact any object in Flash can be converted to a Graphic/MovieClip or Button all of which will the support action script being attached. Also remember that any AS2 code can also create external action script objects.
So on the face of it Flashes lack of a rich control heirachy doesnt seem to be much of a problem.
But what about custom controls?
Another nice thing about Flash ( well at least AS2/ AS3 version of Flash) is that you can also sub class these standard controls, just as you would for a WPF/ Win forms control. So its also fairly extensable.
I have Flash MX 2004 installed and if I examine the folder C:\Program Files\Macromedia\Flash MX 2004\en\First Run\Classes\mx\controls which is where all the standard components source code is stored I can see a load of different classes
So what do these look like, shall we have a look at one of these, lets pick a nice simple control, say a button. Well the AS2 code for this is shown below
I know this is a lot of code, but I thought it might be of interest to see how a standard Flash control was created
//**************************************************************************** //Copyright (C) 2003 Macromedia, Inc. All Rights Reserved. //The following is Sample Code and is subject to all restrictions on //such code as contained in the End User License Agreement accompanying //this product. //**************************************************************************** import mx.controls.SimpleButton; import mx.core.UIObject; import mx.core.UIComponent; /** * @tiptext click event * @helpid 3168 */ [Event("click")] [TagName("Button")] [IconFile("Button.png")] /** * Button class * - extends SimpleButton * - adds label and text with layout * - adds ability to resize without distorting the skin * @tiptext Button provides core button functionality. Extends SimpleButton * @helpid 3043 */ class mx.controls.Button extends SimpleButton { /** * @private * SymbolName for object */ static var symbolName:String = "Button"; /** * @private * Class used in createClassObject */ static var symbolOwner = mx.controls.Button; var className:String = "Button"; function Button() { } #include "../core/ComponentVersion.as" /** * number used to offset the label and/or icon when button is pressed */ var btnOffset:Number = 0; /** *@private * Color used to set the theme color */ var _color = "buttonColor"; /** *@private * Text that appears in the label if no value is specified */ var __label:String = "default value"; /** *@private * default label placement */ var __labelPlacement:String = "right"; /** //*@private * store the linkage name of the icon at initalization */ var initIcon; /** * @private * button state skin variables */ var falseUpSkin:String = "ButtonSkin"; var falseDownSkin:String = "ButtonSkin"; var falseOverSkin:String = "ButtonSkin" var falseDisabledSkin:String = "ButtonSkin"; var trueUpSkin:String = "ButtonSkin"; var trueDownSkin:String = "ButtonSkin"; var trueOverSkin:String = "ButtonSkin"; var trueDisabledSkin:String = "ButtonSkin"; var falseUpIcon:String = ""; var falseDownIcon:String = ""; var falseOverIcon:String = ""; var falseDisabledIcon:String = ""; var trueUpIcon:String = ""; var trueDownIcon:String = ""; var trueOverIcon:String = ""; var trueDisabledIcon:String = ""; /** * @private * list of clip parameters to check at init */ var clipParameters:Object = { labelPlacement:1, icon:1, toggle:1, selected:1, label:1 }; static var mergedClipParameters:Boolean = UIObject.mergeClipParameters(mx.controls.Button.prototype.clipParameters, SimpleButton.prototype.clipParameters); var labelPath:Object; var hitArea_mc:MovieClip; var _iconLinkageName:String; var centerContent : Boolean = true; var borderW : Number = 1;// buffer value for border /** * @private * init variables. Components should implement this method and call super.init() to * ensure this method gets called. The width, height and clip parameters will not * be properly set until after this is called. */ function init(Void):Void { super.init(); } /** * @private * */ function draw() { super.draw(); if (initIcon != undefined) _setIcon(initIcon); delete initIcon; } /** * This method calls SimpleButton's onRelease() */ function onRelease(Void):Void { super.onRelease(); } /** * @private * create children objects. Components implement this method to create the * subobjects in the component. Recommended way is to make text objects * invisible and make them visible when the draw() method is called to * avoid flicker on the screen. */ function createChildren(Void):Void { super.createChildren(); } /** * @private * sets the skin state based on tag and linkage name */ function setSkin(tag:Number,linkageName:String, initobj:Object):MovieClip { return super.setSkin(tag, linkageName, initobj); } /** * @private * sets the old skin's visibility to false and sets the new skin's *visibility to true */ function viewSkin(varName:String):Void { var skinStyle = getState() ? "true" : "false"; skinStyle += enabled ? phase : "disabled"; super.viewSkin(varName,{styleName:this,borderStyle:skinStyle}); } /** * @private * Watch for a style change. */ function invalidateStyle(c:String):Void { labelPath.invalidateStyle(c); super.invalidateStyle(c); } /** * @private * sets the color to each one of the states */ function setColor(c:Number):Void { for (var i=0;i<8;i++) { this[idNames[i]].redraw(true); } } /** * @private * this is called whenever the enabled state changes. */ function setEnabled(enable:Boolean):Void { labelPath.enabled = enable; super.setEnabled(enable); } /** * @private * sets same size of each of the states */ function calcSize(tag:Number, ref:Object):Void { if ((__width == undefined) || (__height == undefined)) return; if(tag < 7 ) { ref.setSize(__width,__height,true); } } /** * @private * Each component should implement this method and lay out * its children based on the .width and .height properties */ function size(Void):Void { setState(getState()); setHitArea(__width,__height); for (var i = 0; i < 8; i++) { var ref = idNames[i]; if (typeof(this[ref]) == "MovieClip") { this[ref].setSize(__width, __height, true); } } super.size(); } /** * sets the label placement of left,right,top, or bottom * @tiptext Gets or sets the label placement relative to the icon * @helpid 3044 */ [Inspectable(enumeration="left,right,top,bottom"defaultValue="right")] function set labelPlacement (val:String) { __labelPlacement = val; invalidate(); } /** * returns the label placement of left,right,top, or bottom * @tiptext Gets or sets the label placement relative to the icon * @helpid 3045 */ function get labelPlacement():String { return __labelPlacement; } /** * @private * use to get the label placement of left,right,top, or bottom */ function getLabelPlacement(Void):String { return __labelPlacement; } /** * @private * use to set the label placement to left,right,top, or bottom */ function setLabelPlacement(val:String):Void { __labelPlacement = val; invalidate(); } /** * @private * use to get the btnOffset value */ function getBtnOffset(Void):Number { if(getState()) { var n = btnOffset; } else { if(phase == "down") { var n = btnOffset; } else { var n = 0; } } return n; } /** * @private * Controls the layout of the icon and the label within the button. * note that layout hinges on a variable, "centerContents", which is *set to true in Button. * but false in check and radio. */ function setView(offset:Number):Void { var n = offset ? btnOffset : 0; var val = getLabelPlacement(); var iconW : Number = 0; var iconH : Number = 0; var labelW : Number = 0; var labelH : Number = 0; var labelX : Number = 0; var labelY : Number = 0; var lp = labelPath; var ic = iconName; // measure text size var textW = lp.textWidth; var textH = lp.textHeight; var viewW = __width-borderW-borderW; var viewH = __height-borderW-borderW; lp._visible = true; if (ic!=undefined) { iconW = ic._width; iconH = ic._height; } if (val == "left" || val == "right") { if (lp!=undefined) { lp._width = labelW = Math.min(viewW-iconW, textW+5); lp._height = labelH = Math.min(viewH, textH+5); } if (val == "right") { labelX = iconW; if (centerContent) { labelX += (viewW - labelW - iconW)/2; } ic._x = labelX - iconW; } else { labelX = viewW - labelW - iconW; if (centerContent) { labelX = labelX / 2; } ic._x = labelX + labelW; } ic._y = labelY = 0; if (centerContent) { ic._y = (viewH - iconH )/2; labelY = (viewH - labelH )/2 } if (!centerContent) ic._y += Math.max(0, (labelH-iconH)/2); } else { if (lp!=undefined) { lp._width = labelW = Math.min(viewW, textW+5); lp._height = labelH = Math.min(viewH-iconH, textH+5); } labelX = (viewW - labelW )/2; ic._x = (viewW - iconW )/2; if (val == "top") { labelY = viewH - labelH - iconH; if (centerContent) { labelY = labelY / 2; } ic._y = labelY + labelH; } else { labelY = iconH; if (centerContent) { labelY += (viewH - labelH - iconH)/2; } ic._y = labelY - iconH; } } var buff = borderW + n; lp._x = labelX + buff; lp._y = labelY + buff; ic._x += buff; ic._y += buff; } /** * sets the associated label text * @tiptext Gets or sets the Button label * @helpid 3046 */ [Inspectable(defaultValue="Button")] function set label(lbl:String) { setLabel(lbl); } /** * @private * sets the associated label text */ function setLabel(label:String):Void { if (label=="") { labelPath.removeTextField(); refresh(); return; } if (labelPath == undefined) { var lp = createLabel("labelPath", 200, label); lp._width = lp.textWidth + 5; lp._height = lp.textHeight +5; lp.visible = false; } else { labelPath.text = label; refresh(); } } /** * @private * gets the associated label text */ function getLabel(Void):String { return labelPath.text; } /** * gets the associated label text * @tiptext Gets or sets the Button label * @helpid 3047 */ function get label():String { return labelPath.text; } function _getIcon(Void):String { return _iconLinkageName; } /** * sets the associated icon * use setIcon() to set the icon * @tiptext Gets or sets the linkage identifier of the Button's icon * @helpid 3404 */ function get icon():String { if (initializing) return initIcon; return _iconLinkageName; } /** *@private * sets the icon for the falseUp, falseDown and trueUp states * use setIcon() to set the icon */ function _setIcon(linkage):Void { if (initializing) { if (linkage == "" ) return; initIcon = linkage; } else { if ( linkage == ""){removeIcons();return;} super.changeIcon(0,linkage); super.changeIcon(1,linkage); super.changeIcon(4,linkage); super.changeIcon(5,linkage); _iconLinkageName = linkage; refresh(); } } /** * @sets the icon for all states of the button * @tiptext Gets or sets the linkage identifier of the Button's icon * @helpid 3048 */ [Inspectable(defaultValue="")] function set icon(linkage) { _setIcon(linkage); } /** * @private * @method to set the hit area dimensions */ function setHitArea(w:Number,h:Number) { if (hitArea_mc == undefined) createEmptyObject("hitArea_mc",100); //reserved depth for hit area var ha = hitArea_mc; ha.clear(); ha.beginFill(0xff0000); ha.drawRect(0,0,w,h); ha.endFill(); ha.setVisible(false); }
[ChangeEvent("click")] var _inherited_selected : Boolean; }
This looks pretty high level, doesnt it? Looks quite java/C# like I would say. Curly braces/attributes, mmm.
In later version of Flash it is also now possible to import a control from a 3rd party that has developed a new control using action script movie clips etc etc.
If we consider the standard controls that WPF provides (screen shot of Expression Blend) shown below
We can see straight away that there are a lot more controls. These controls can also be subclassed, which makes WPF very powerful. However we can still really only carry out operations based on user interaction with pre-existing control events or generated events within non UI classes. Remember in flash any frame or any object could potentially be made to run action script. So although there are more controls within WPF, there could potentially be many crazy controls within Flash, if we consider a MovieClip with actions on it to be a usercontrol. Which I think I would, wouldnt you.
Resources are very very important within Flash, resources are placed in a Library. The Libray can hold all sorts of things, such as sounds, Graphics, MovieClips, Tweens etc etc. Once an object is stored in the Library and has been given a name (which it will have to be allowed to be stored in the library in the 1st place) it can be dragged to the stage and used on any of the stage layers. But thats not all the library allows us to do. There is also another very important role the library plays, and that is the role of linkage.
Linkage is a key topic so listen up. What linkage does is allow a object in the library to manipulated programatically by action script. This will include bringing the object onto the stage / removing the object from the stage, calling the objects methods, subscribing to the object events. It basically exposes all the library objects functions to action script.
In fact this is how any good Flash developer should be doing things, its all about the linkage. By creating small components (MovieClips really) and only bringing them into life (the stage) when needed Flash is able to leverage some sort of JIT (Just In Time) model. I would think this would be the equivalent of declaring a new object in C# like SomeObject so = new SomeObject();. Shall we see an example of this
The following diagram illustrates this. It shows the library with a MovieClip object which has a linkage name of "moviePhoneOFF", which allows the MovieClip to be created and manipulated via Action Script as I am doing in this example
Resources are also a important part of WPF but they are fairly different from the way Flash talks about resources. We have 2 things to consider when discussing resources in WPF, firstly at file level and then at application level. Lets talks about file level first.
For this next section to make any sense youll probably need to be using Visual Studio 2005/2008
You can use file properties to indicate what actions the project system should perform on the files.
The BuildAction property indicates what Visual Studio does with a file when a build is executed. BuildAction can have one of several values:
None - The file is not included in the project output group and is not compiled in the build process. An example is a text file that contains documentation, such as a Readme file.
Compile - The file is compiled into the build output. This setting is used for code files.
Content - The file is not compiled, but is included in the Content output group. For example, this setting is the default value for an .htm or other kind of Web file.
Embedded Resource - This file is embedded in the main project build output as a DLL or executable. It is typically used for resource files. Specifying "Embedded Resource" puts the resource in the .mresource section of the assembly.
Page - Build action "Page" generates a performant BAML file.
Resource - Build action "Resource" generates an embedded XAML file.
The default value for BuildAction depends on the extension of the file you add to the solution. For example, if you add a Visual Basic project to Solution Explorer, the default value for BuildAction is Compile, because the extension .vb indicates a code file that can be compiled. File names and extensions appear in Solution Explorer.
An example of this is shown below for a resource file Dictionary1.xaml within a Visual Studio WPF project
So thats how to specify a build action, but thats only 1/2 the story. We still need to create the resource files in the first place. We are able to use resource files which will allow us to add string, images and icons. This is the same as .NET 2.0. But what about XAML resource files that are loaded dynamically. Well those are also called resources, but a special type of resource called a ResourceDictionary, which can contain markup that when imported can be used within the consumer of the ResourceDictionary.
So how do we create on of these ResourceDictionaries, well in XAML we would do the following
<ResourceDictionary xmlns="" xmlns: //makrup resources to be defined here </ResourceDictionary>
This would allow another XAML/code behind file to use these resources. So you can kind of think of this as the linkage step if you are from a Flash background. What well do is to create a MergedDictionary which will contain the XAML resource file. Once that step is completed the resources within the MergedDictionary may be used within code. Let see an example of how to define a MergedDictionary
<ResourceDictionary> <ResourceDictionary.MergedDictionaries> <ResourceDictionary Source="Dictionary1.xaml"/> </ResourceDictionary.MergedDictionaries> </ResourceDictionary>
Flash doesnt really have any layout operations such as Dock or Anchor. It doesnt have any container controls as such. Any object is simply placed right on to a canvas (the stage really). So theres not much to mention about this.
WPF on the other hand has a lot of controls which specifically deal with layout. These control are all Panels of some sort. There is StackPanel : The StackPanel lays out child elements by stacking them one after the other. Elements are "stacked" in the order they appear in the Xaml file (document order in XML terms). Items can either be stacked vertically (the default) or horizontally.
<StackPanel> <TextBlock FontSize="16" Foreground="#58290A"> Items inside a StackPanel</TextBlock> <Button>Item 2</Button> <Border BorderBrush="#feca00" BorderThickness="2"> <TextBlock>Item 3</TextBlock> </Border> </StackPanel>
<StackPanel Orientation="Horizontal"> <TextBlock FontSize="16" Foreground="#58290A"> Items inside a StackPanel</TextBlock> <Button>Item 2</Button> <Border BorderBrush="#feca00" BorderThickness="2"> <TextBlock>Item 3</TextBlock> </Border> </StackPanel>
WrapPanel : The wrap panel lays out items from left to right. When a row of items has filled the horizontal space available to them the panel wraps the next item around onto the next line (in a similar way to how text is layed out).
<WrapPanel> <TextBlock FontSize="16" Foreground="#58290A"> Items inside a WrapPanel</TextBlock> <Button>Item 2</Button> <Border BorderBrush="#feca00" BorderThickness="2"> <TextBlock>Item 3</TextBlock> </Border> </WrapPan.
<DockPanel> <TextBlock FontSize="16" DockPanel. Items inside a DockPanel</TextBlock> <Button DockPanel.Item 2</Button> <Border BorderBrush="#feca00" BorderThickness="2"> <TextBlock>Item 3</TextBlock> </Border> </DockPanel>
Canvas : The Canvas panel is similar to the way old rich-client layout worked where).
<Canvas> <TextBlock FontSize="16" Canvas. Items inside a Canvas</TextBlock> <Button Canvas.Item 2</Button> <Border BorderBrush="#feca00" BorderThickness="2" Canvas. <TextBlock>Item 3</TextBlock> </Border> </Canvas>. The star notation can be seen in the example code below where one of the columns is twice the width of the other column by setting the Width attribute to "2*". The example below also shows the height of one row being set to an absolute value. The differences these introduce can more readily be seen when re-sizing the form containing the grid, as the grid will expand by default to fill the space available to it.
<Grid Margin="10" ShowGridLines="True"> <Grid.ColumnDefinitions> <ColumnDefinition Width="2*" /> <ColumnDefinition /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="25" /> <RowDefinition /> <RowDefinition Height="2*"/> </Grid.RowDefinitions> <TextBlock FontSize="16" Foreground="#58290A" Grid. Items inside a Grid</TextBlock> <Button Grid. Item 2</Button> <Border BorderBrush="#feca00" BorderThickness="2" Grid. <TextBlock>Item 3</TextBlock> </Border> </Grid>
In Flash MX 2004 upwards there were a whole host of new data access controls added to Flash, these are shown below
It is by using these new controls that Flash is able to perform Databinding it also has the usual controls that one would want to bind data to such as
As can be seen Flash supports DataSets / XMLConnector and WebServiceConnector. These are probably the most typical components used. Flash is a little strange in that it needs to communicate with the page that is holding the Flash object (remember Flash is Client side, the Database is server side) to grab data from the underlying database. Typically Flash would use a DataSet to bind to a DataGrid. The DataSet would be populated by an XMLConnector which hold XML data that gets populated via Flash talking to it hosting page. This could be PHP or and ASP page for example. Flash could also talk and bind directly to a web service. The 3 links below discuss each of these options in more detail should you be interested.
The Flash documentation is also very good, you just have to search it. The reference section at the bottom of this article contains a link to the Flash documentation.
I have to say that I think WPF has the edge on databinding. Its truly insane what you can do with databinding in WPF. You can bind a property from the following sources:
It basically quite mad. To be honest its probably a bit too much to go into in this comparison article. I would however suggest that if you want to know more about the way WPF binding works you could read the following articles
Flash doesnt have any concept of any of these quite WPF techniques. Though onw possible solution would be to use XML files to dictate Styles, Poistions, Skin Colors etc etc. Flash has excellent XML support, so this is the route I would go and have seen used.. An author or designer can customize a look extensively on an application-by-application basis, but a strong styling and templating model is necessary to allow maintenance and sharing of a look. Windows Presentation Foundation (WPF) provides that model.
You can think of a Style as a convenient way to apply property values. Lets consider this small example which Styles a standard button.
>
Which when its applied to a button results in>
This example defines a ControlTemplate for a ListBox
I'm going to be bit cheeky here are simply direct you to a full article on this, as the explanation is much better there. Its by Josh Smith (suprise suprise) and the link is located right here
Flash is basically a vector based animation environment, but there are options available for 3d. Most notably Swift3D which allows user to create 3D model and export straight into Flash.
WPF has good support for both vector and 3D. It has native support for 3D built into the framework. For example, lets consider a 3D button style. The code is as shown below
<!-- 3D Media Buttons --> <Style x: <Style.Resources> <Storyboard x: <DoubleAnimation Storyboard. <DoubleAnimation Storyboard. <DoubleAnimation Storyboard. <DoubleAnimation Storyboard. <DoubleAnimation Storyboard. <DoubleAnimation Storyboard. <DoubleAnimation Storyboard. <DoubleAnimation Storyboard. </Storyboard> </Style.Resources> <Setter Property="Width" Value="100"/> <Setter Property="Height" Value="100"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate> <ControlTemplate.Triggers> <Trigger Property="Button.IsMouseOver" Value="true"> <Trigger.EnterActions> <BeginStoryboard Storyboard="{StaticResource Spin}"/> </Trigger.EnterActions> </Trigger> </ControlTemplate.Triggers> <Viewport3D> <Viewport3D.Camera> <PerspectiveCamera Position="4,4,4" LookDirection="-1,-1,-1" /> </Viewport3D.Camera> <Viewport3D.Children> <ModelVisual3D> <ModelVisual3D.Content> <DirectionalLight Direction="-0.3,-0.4,-0.5" /> </ModelVisual3D.Content> </ModelVisual3D> <ModelVisual3D x: <ModelVisual3D.Transform> <Transform3DGroup> <RotateTransform3D> <RotateTransform3D.Rotation> <AxisAngleRotation3D x: </RotateTransform3D.Rotation> </RotateTransform3D> <ScaleTransform3D x: </Transform3DGroup> </ModelVisual3D.Transform> <ModelVisual3D.Content> <GeometryModel3D x: <GeometryModel3D.Material> <DiffuseMaterial> <DiffuseMaterial.Brush> <VisualBrush ViewportUnits="Absolute" Transform="1,0,0,-1,0,1"> <VisualBrush.Visual> <Border Background="{Binding Path=Background, RelativeSource='{RelativeSource TemplatedParent}'}"> <Label Content="{Binding Path=Content, RelativeSource='{RelativeSource TemplatedParent}'}" /> </Border> </VisualBrush.Visual> </VisualBrush> </DiffuseMaterial.Brush> </DiffuseMaterial> </GeometryModel3D.Material> <GeometryModel3D.Geometry> <MeshGeometry3D x: </GeometryModel3D.Geometry> </GeometryModel3D> </ModelVisual3D.Content> </ModelVisual3D> </Viewport3D.Children> </Viewport3D> </ControlTemplate> </Setter.Value> </Setter> </Style>
which results in a normal button which shows video on its surface as a 3d cube as shown below
Flash allow uses to create almost limitless animations. We can use motion paths, create tweens, which use key frames to allow us to specify certain frames that dictate how an object will look, and the parts in between are interpolated. At key frames we can spacify size, color, position, opacity, rotation anything really.
In my mind when it comes to animations Flash wins hands down no contest not even close from anything else ever. Its the daddy, brother, sister and mother, 2nd cousin etc etc all in one. It rocks. And why is this. Well its to do with 2 reasons really.
Recall that the stage had layers down and time across. Not that great you might say. But what it does allow is the user to have different objects on the same layer at different times within the stage. For example lets say we have the following
A single layer with a square at Frame1, and a circle at Frame2
Does this seem cool yet. No you say. Well it is. This is whay. If we can have anything we like at a given frame within a layer. The object that exists at a frame can then be animated. And if we can bring in new items along the stage at any frame we want, and we consider this across all layer, and then consider that each MovieClip can have such an arrangement, and that any number of MovieClip can be brought into the main stage either manually or programaitically, we all of a sudden have a very powerful animation environment. And if that wasnt enough we can also do all this though code using a 3rd party action script that some mad guy wrote to allow user to control tweens (a fundamental Flash animation object) through action script.
All that needs to be done is that the tweening library is refernced in the main stage action script as shown below
#include "lmc_tween.as"
Each MovieClip object can then be controlled via action script, such as using the alphaTo method
MovieClip.alphaTo()
Availability
my_mc.alphaTo(alpha, seconds, animtype, delay, callback, extra1, extra2)
alpha end value ofmovieclip _alpha property
all other parameters work as in tween() method
None
Where an example might be (this is used within the BlockManager.as of FlashDemo1.zip)
tmpLoc.alphaTo(100,.1,'linear',i/20,{func:'playClip',scope:this,args:[tmpLoc]})
The 3rd party tweening library is available as a .MXP which is a Macromedia Extension file, and is officially available at the author site which is this site also hold the API documentation
WPF does indeed support animations. But to my mind its not a touch on Flash. The reason being that there isnt a conecpt of a stage, there is a timeline but that is only used in DoubleAnimationUsingKeyFrames. Where as Flash uses time everywhere, and you can even swap out whats shown on a layer at time(x).
To further understand what I mean, and why Flash is better, we need to understand how WPF apps are created. They are created using either XAML which is a tree type structure or by using code behind procedural code. So why is this not as flexible as Flash. Well in both cases the content of the page is not that dynamic. In the case of XAML the tree is fairly static, whats there at design time is what well get at runtime. Ok we could add new controls which we could animate in code behind, which in fact is a partial solution, but its just seems clumsy to me. We could also have different pages that are loaded one after another, or we could simply destroy the current XAML tree and load in a new XAML document tree from disk and that would offer some sort of flexibility.
I just think Flash has it better. Layers with whatever you want at any frame, and time used in all stages everywhere, and the ability to bring items to and from the stage at will. I mean serioulsy have a look at FlashDemo3.zip and see how easy you think that would be to do in WPF, let alone Silverlight which is a cutdown version of WPF anyway.
But anyway, to use animations in WPF, we can either use XAML or procedural code. Lets have a look shall we.
There are 2 options available in WPF, you can use DoubleAnimation or we could use DoubleAnimationUsingKeyFrames
<Page xmlns="" xmlns: <StackPanel Margin="10"> > </StackPanel> </Page>
using System; using System.Windows; using System.Windows.Controls; using System.Windows.Shapes; using System.Windows.Media; using System.Windows.Media.Animation; namespace SDKSample { public class RectangleOpacityFadeExample : Page { private Storyboard myStoryboard; public RectangleOpacityFadeExample() { NameScope.SetNameScope(this, new NameScope()); this.WindowTitle = "Fading Rectangle Example"; StackPanel myPanel = new StackPanel(); myPanel.Margin = new Thickness(10); Rectangle myRectangle = new Rectangle(); myRectangle.Name = "myRectangle"; this.RegisterName(myRectangle.Name, myRectangle); myRectangle.Width = 100; myRectangle.Height = 100; myRectangle.Fill = Brushes.Blue; DoubleAnimation myDoubleAnim. myRectangle.Loaded += new RoutedEventHandler(myRectangleLoaded); myPanel.Children.Add(myRectangle); this.Content = myPanel; } public void myRectangleLoaded(object sender, RoutedEventArgs e) { myStoryboard.Begin(this); } } }
Is like normal animations but this time we get key frames, which is where we can say what an object will look like at time(x).
<!-- Using Paced Values. Rectangle moves between key frames at uniform rate except for first key frame because using a Paced value on the first KeyFrame in a collection of frames gives a time of zero. --> <Rectangle Height="50" Width="50" Fill="Orange"> <Rectangle.RenderTransform> <TranslateTransform x: </Rectangle.RenderTransform> <Rectangle.Triggers> <EventTrigger RoutedEvent="Rectangle.Loaded"> <BeginStoryboard> <Storyboard> <DoubleAnimationUsingKeyFrames Storyboard. <!-- KeyTime properties are expressed with values of Paced. Paced values are used when a constant rate is desired. The time allocated to a key frame with a KeyTime of "Paced" is determined by the time allocated to the other key frames of the animation. This time is calculated to attempt to give a "paced" or "constant velocity" for the animation. --> <LinearDoubleKeyFrame Value="100" KeyTime="Paced" /> <LinearDoubleKeyFrame Value="200" KeyTime="Paced" /> <LinearDoubleKeyFrame Value="500" KeyTime="Paced" /> <LinearDoubleKeyFrame Value="600" KeyTime="Paced" /> </DoubleAnimationUsingKeyFrames> </Storyboard> </BeginStoryboard> </EventTrigger> </Rectangle.Triggers> </Rectangle>
As we have already seen Flash support the creation of custom controls, by inheriting from the standard controls. Creation of totally new controls is not really the Flash way, one would probably make a MovieClip to do a task. This would be the control. There isnt really much more to be said here.
As WPF is using .NET 3.0 Framework we can of course inherit from UserControl or Control to create new controls. We can also simply inherit and override existing control methods. Its all standard OO stuff. But its all possible. And intersting control is shown here where Cristian Graus and Nishant Sivakumar inherit from the AnimationTimeline class and get it to animate a GridLength
Like I say I am no expect in either of these fields, but this is how I would go about developing stuff for each of these worlds
Create small part that perform a specfic task. Make these as MovieClip objects, which have the correct library linkage and are exported for action scripting. Create external action script objects, try to use a OO design method. Think classes. Use the Sepi editor, its good. Keep the stage tidy.
Think about what you are trying to do, use the most appropriate method to do the job. If its XAML orientated use XAML, otherwise use code. Do use ValueConvertors, Resource files/ MergedDictionaries, Templates, Styles and dont subclass everything just cos thats what you did before in .NET 2.0.
Sub classing really isnt needed that often in WPF, you can usually do what you want with a Style or a Template. Oh and dont go overboard on the animations just cos you can. Again keep classes well defined and specific to the job they should perform.
Flash does have a fairly extensive set of classes. Lets consider the following 2 diagrams
We can see that Flash provides serveral packages, we even have some Client/Server package to deal with XML and we can also use sockets. Quite cool really for a little plugin.
Well, where do you start. WPF relies on the .NET 3.0 Framework, of which WPF is one part. The .NET 3.0 framework is vaste, and has literally 1000s of classes interfaces structures etc etc.
Shown below is a screen shot of the main WPF related packages of the .NET 3.0 framework. Using these classes a programmer is able to do all of the topics mentioned above, and of cours,e as they are simply classes, we can inherit from them and bend them to do our bidding. This is also what Macromedia (well adobe I suppose) have done with AS3 (You could inherit from other classes in AS2 as well actually). AS3 is a total rewrite and is basically very Java like syntax now, and it also allows full OO techniques to be used, not just inheritence, but method overriding, polymorphism etc etc. But this article (just cos I have Flash MX 2004) was all about AS2.
Anyway as I say heres a screen shot of the main WPF .NET framework packages. BUt expect this to grwo with future versions. Remember this is the 1st release of WPF. And also bear in mind that most of these individual pacakes will have between 10-100 different classes. Its a big framework.
What if we compare the .NET 3.0 Media package with its Flash equivalent.
There are loads more .NET 3.0 media classes than there are in Flash. But ive seen some pretty cool Flash apps. And dont forget Silverlight (WPF/e) is really the Flash equivalent, and will not have the full power of WPF to play with. It will run in a safe sandbox and will have a subset of the full WPF APIs available. So this full list of classes may not be available to Silverlight.
My personal opinion on this, is that sometimes less is more. Flash may not have it al,l but what it does have it does really really well. However one could also argue that .NET 3.0 is really just an extension to .NET 2.0 so we can use all the good .NET 2.0 stuff aswell, which indeed we can. Oh there also the small matter LINQ and generics which we can use in WPF but not in Flash. Mmm the debate goes on.
I have included 3 zip files at the top of this article, all of which show different types of flash applications, FlashDemo1.zip / FlashDemo3.zip are mainly animations with some AS2 external files, FlashDemo2.zip is more application like, and probably more in line with the sort of development one would do in a WPF application. I have not inluded a WPF application here as there are loads of finished WPF projects available both here at or just searching in
Simply extract the zip file and open the Video.exe as ive made a nice executable for you
The zip also contains all the files to view and run the Flash application if you want to. This one is all about how to use external AS2 files and to use tweens (animations in WPF) and is a short movie to a sound track.
Simply extract the zip file and open the MMDA_Assign2.exe as ive made a nice executable for you. The zip also contains all the files to view and run the Flash application if you want to.
Is a mobile phone demo video, and offers a phone book, simulating a call, popup help, web cam usage.
This image may also be useful for you in order to see all the features demonstrated (click the image below for a bigger image)
Is an animated Flash site that I did for a HardCore techno record label I used to own and run, Surgeon 16 Recordings. My name was Dr Machette in case anyone is interested here is a discogs link to my label here and me by myself this one on a german label called SpecialForces.
Simply extract the zip file and open the S16_Launcher.html in your favourite browser. The zip also contains all the files to view and run the Flash application if you want to.
I have not included the music files with this one otherwise it would be very large download.
This one actually navigated to 5 areas, you can navigate to each of these areas using the Navigate buttons shown at the bottom. This was my first place project ever, and uses a lot of animation, so the stage is very busy. This is a good example of something that could probably be done tidier and could be refactored to use the library and linkage we talked about earlier.
I have not created any new WPF demos for this article. But I have written a few in the past, you can find those under the Windows Presentation Section of my articles.
I hope that this article has covered the main areas of both Flash and WPF. For me I like both for different reasons, but if I was going to do a pure animation app I would use Flash, and WPF for a miore data driven application. Im going to try Silverlight next, sio ill let you know how that goes. Ill start with a simple animation app, and build from there. Anyway if you got this far well done.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/WPF/FlashvsWPF.aspx | crawl-002 | refinedweb | 8,140 | 55.84 |
SCJP
Card Set Information
Author:
cquezadav
ID:
112824
Filename:
SCJP
Updated:
2011-11-05 13:33:09
SCJP
Folders:
Description:
SCJP
Show Answers:
>
Flashcards
> Print Preview
The flashcards below were created by user
cquezadav
on
FreezingBlue Flashcards
. What would you like to do?
Get the
free
Flashcards app for iOS
Get the
free
Flashcards app for Android
Learn more
What is a Class?
A template that describes the kinds of state and behavior that objects of its type support.
What is an Object?
At runtime, when the Java Virtual Machine (JVM) encounters the new keyword, it will use the appropriate class to make an object which is an instance of that class. That object will have its own state, and access to all of the behaviors defined by its class.
What is Inheritance?
Object oriented concept which allows code defined in one class to be reused in other classes.
What are Interfaces?
Importants in inheritance. Interfaces are like a 100% abstract superclass that defines the methos a subclass must support (just the signature), but the subclass has to implement the methods.
An Interface is a Contract.
All interface methods are implicity public and abstracts. Must not be static, final, strictfp or native.
All variables must be public, static, and final. (constants). Implicity.
Can extend one or more other interfaces, and just interfaces.
What means Cohesive?
Means that every class should have a focused set of responsabilities.
Java Identifiers
Identifiers are the names for mathos, classes and variables. Legal identifiers:
Must start with a letter, currency character ($), or underscore ( _ ) . Cannot start with number.
There is no limit to number of characters.
Cannot use java keywords as identifiers.
Idetifiers are case sensitive.
Java Keywords
Examples of JavaBean method signature
public void setMyValue(int v)
public int getMyValue()
public boolean isMyStatus()
public void addMyListener(MyListener m)
public void removeMyListener(MyListener m)
which are the sorce file declaration rules?
One public class per source code file
Comments can appear at the beginning or end of any line in the source code.
The name of the file must match the cname of the public class.
The package statement must be the first line.
The import statements, must go between the package statement and the class declaration.
A file can have more than one nonpublic class.
Files with no public classes can have a name that does not match any of the classes in the file
Default Class Access
No modifier preceding the declaration.
Package level access. A class in a different package cannot have access.
Public Class Access
Class can be accessed from all class in any package.
strictfp Class modifier
For the exam it is just necessary to know that strictfp is a keyword and can be used to modify a class or a method, but never a variable.
Final class modifier
A Final class cannot be subclassed.
No other class can ever extend a final class.
String class is final.
A final class destroy extensibility OO benefit.
Abstract class modifier
An abstract class can never be instantiated.
Its only mission is to be extended (subclassed).
An abstract class finish with semicolon ( ; )
public abstract void goFast();
If one method is abstract the whole class must be abstract.
Public Members
Public methods or variable members can be accessed by all classes regardless of the package they belong.
It depends of class access.
Private Members
Can't be accessed by code in any class but the class in which the private member was declarated.
A subclass can't inherit private members.
Rules of overriding do not apply so a method with the same name is totally a diferent method.
Default Members
A default member can be accessed ony if the class accessng the member belongs to the same package.
Protected Members
A protected member can be accessed by classes that belong to the same package.
A protected member can be accesses by a subclass even is the subclass is in a different package.(different to default access)
Protected members in the subclass become private to any code outside the subclass, with exception of subclasses of the subclass.
Local variables and access modifiers
The only modifier allowed in local variables is final.
Final Methods
Final keyword prevents a method from being overriden in a subclass.
Final keyword is contrary to many OO befefits like extensibility.
Final Arguments
A final argument can't be modified within the method.
public Record getRecord(int fileNumber, final int recordNumber)
Abstracts Methods
An abstract method does not contain funcional code.
An abstract method finish with a semicolon.
It has no method body.
Subclasses have to provide the implementacion.
A method can never be abstract and final or abstract and private or abstract and static.
public abstract void showSample();
If a class has an abstract method, the class has to be obligatory declared abstract .
It is legal to have an abstract class with no abstract methods.
The first concrete (nonabstract) subclass of an abstract class must implement all abstract methods.
Synchronized Methods
Synchronized keyword means that a method can be accessed by only one thread at a time.
It can be applied only to methods (not classes, not variables)
public synchronized Record retrieveUserInfo(int id) { }
Native Methods
The native modifier Indicates that a method is implemented in a platform-dependent code, often C.
Can be applied only to methods.
A native method's body must be a semicolon like in abstract methods.
Strictfp Methods
Strictfp forces floating points to adhere to the IEEE 754 standard.
Variable Argument List (var-args)
Methods that can take a variable number of arguments.
The var-arg must be the last argument in the method's signature.
A method can have just one var-arg.
void doStuff2(char c, int... x)
Constructor Declarations
A constructor does not have a return type.
Have the same name as the class.
Constructors can't be marked as static, final or abstract
Primitive Variables
char, boolean, byte, short, int, long, double and floar.
Once a primitive has been declared, its primitive type can never change.
Can be declared as class variables (static), instance variables, method parameters or local variables.
Reference Variables
Used to refer or access an object.
Can refer any object of the declared type or a subtype of the declared type.
Numeric Types
byte -> short -> int -> long (integers)
float -> double (floating point)
All number types are signed (+ and -)
The leftmost bit is used to represent the sign, 1 means negative and 0 means positive.
Range of Numeric Primitives
Instance Variables
Defined inside the class but outside aany method.
Initialized when the class is instantiated.
Can use any of the four access levels (default, pretected, private and public)
Can be marked as final and transient.
Cannot be marked as abstract, synchronized, strictfp, native, static.
Comparation of modifiers on variables vs methods
Local Variables
Declared within a method.
Variables live in the scope of the method.
They are destroyed when the method has completed.
Local variables are always on the stack, not in the heap.
Local variables must be initialized before use it.
Local variables do not get default values.
It is possible to make shadowing (local variable with the same name than an instance variable)
Array Declaration
Arrays can store multiple variables of the same type (or subclasses).
Array is an object on the heap.
int[] key;
Thread threads [];
String [] [] [] name; (array of arrays of arrays)
String [] lastName []; (array of arrays)
Final Variables
A final variable is impossible to reinitialize once it has been initialized.
A primitive final variables the value can't ve altered.
A reference final variable can't ever be reassigned to refer to a different object. The data can be changed but the reference variable cannot.
Transient Variables
A transient variable will be ignored by the JVM when attempt to serialize the object containing it.
Volatile Variables
Volatile modifier tells the JVM that a thread accessing the variable must always ajust its own private copy of the variable with the master copy in memory.
It can be applied only to instance vatiables.
Static Variables and Methods
The static members will exist independently of any instance of the class.
Static members exist before any new instance of the class.
There will be only one copy of the static members.
All instances of the class share the same value for any static variable.
Declaring Enums
Lets restrict a variable to having one of the only few pre-defined values.
Can be declared as their own separate class, or as a class member.
Cannot be declared within a method.
Enum declared outside a class cannot be private or protected. (can be public or default).
enum CoffeeSize{BIG,HUGE}
drink.size = CoffeeSize.BIG; // enum outside class
drink.size = Coffee2.CoffeeSize.BIG; enum inside class
It is optional tu put a semicolon if the enum is the las declaration.
Enum constructor and instance variable declaration
Use Emum Class
Enum Constructor Overriding
Encapsulation
One benefit of encapsulation is the ability to make changes to the code without breaking the code of others who use the same code.
Hide implementation details behind an interface. That interface is a set of accessible methods (API).
Encapsulation benefits are maintainability, flexibility, and extensibility.
Rules
: keep intance variables protected (private), make public accessor methods, and use the javaBean naming convention (get and set).
Inheritance
Inheritance promote code reuse and polymorphism.
Every class is a subclass of class Object.
All classes inherit the methos
: equals, clone, notify, wait and others.
References
A reference variable can be of onle one type, although the object it references can chage.
A reference is a variable, so it can be reassigned to other objects. (excluding final references).
A reference variable determines the methods that can be invoked on the object the rariable is refering.
A reference variable can refer to any subtype of the declared type.
A reference variable can be declared as a class or an iterface.
At runtime the JVM knows the object, so if the object has an overridden method, the JVM will invoke the overridden version, and not the one of the declared reference variable type.
Overridden Methods
A class that inherit a method from a superclass have the can override the method.
The benefit is to define behavior that is specific to a subclass.
Abstracts methos have to be implemented (overridden) by the concrete class.
The overriden method has to have the same or better access (public, protected..)
The return type must be the same or a subtype of the superclass method.
The overriding method can throw any unchecked (runtime) exception, even if the overridden method does not.
The overriding method must not throw checked exception that are new or broader.
Overriding method can throw narrower or fewer exceptions.
It is not possible to override a method neither marked final nor marked static.
Then a method declares a cheked exception but the overriding method does not, the compiler thinks you are calling a method that declares an exception, so if the overriding method does not declare the exception and a reference of the superclass refers to the subtype, the compiler throws an error. This would not occurr at runtime.
Overloaded Methods
Overloaded methods let reuse the same method name in a class, but with different arguments (optionally different return type).
Overloaded methods must change the argument list.
Overloaded methods can change the return type.
Overloaded methods can change access modifiers.
Overloaded methods can declare new or broader checked exceptions.
A method can be overloaded in the same class or in a subclass.
Overloaded method are chosen on compilation time and not in runtime, so which overloaded version of the method to call is based on the reference type of the argument passed at compile time.
Differences between overloaded an overrriden methods.
Implementin Interface
Provide concrete implementation for all methods.
Follow all the rules for legal overrides.
Decalre no checked exception on implementation methods other than those declared by the interface method or a subclass of those declared by the interface.
Maintain the signature of the interface method, and maintain the same return type or subtype.
Implementig defines a role the class can play but not who or what the class is.
A class can implement more than one interface.
An interface can extend another intreface.
Return Types on Overloaded Methods
The overloaded method can have a different return type, but it is obligatory to chage the argument list.
Overriding and Return Types
A subclass must define a method that matched the inherited version. However, the return type can also be a subtype of the original declared return type.
Return Value Rules (1)
You can return null in a method with an object reference return type.
An array is perfectly legal return type.
In a method with a primitive return type, you can return any value or variable that can be implicitly converted to the declared return type. public int foo() {char c='c'; return c;}
In a method with a primitive return type, you can return any value or variable that can be explicitly converted to the declared return type. public int foo() {float f=32.5f; return (int)f;}
Return Value Rules (2)
In a method with an object reference return type, you can return any type that can be implicitly cast to the declared return type.
Return Value Type (3)
Constructor Basic
Every class, including abstract classes, must have a constructor.
Constructors do not have return type and the name matchs the class name.
Constructors are invoked at runtime with the new keyword.
Constructors Rules
Can use any access modifier, including private (a private constructor means that only code within the class can instantiate an object of that type).
If you do not type a constructor a default constructor will be automatically generated by the compiler.
If the class has a constructor with arguments the compiler will not create one with no arguments.
Every constructor has as its first stament a call to an overloaded costructor (this()) or a call to the superclass constructor (super()).
You cannot call to an instance method or access an intance vatiable until after the super constructor runs.
Only static variables and methods can be accessed as part of the call to super or this. ex
: super(Animal.NAME).
Abstract classes have constructors.
Interfaces does not have constructors.
Constructor can be invoked only within other contructor.
When a constructor has a this() call to other constructor, sooner or later the super() constructor gets callled.
The first line of a constructor must be super() or this(). If the constructor does not have any call the compiler insert the no-arg call to super().
A constructor can never have both a call to super() and a call to this() because each of those calls must be the first stament.
Compilet-Generated Constructor Code
Static Variables and Methods
Static variables are seted when the JVM load the class, it is before instances are created.
Static variables and methods belong to the class, rather than to any particular instance.
It is possible to use static members without having an instance of the class.
There is only one copy of the static members.
A static method cannot access a nonstatic (instance) variable or method. It is necesary to have an instance of the class to access instance members.
Accessing Static Methods and Variables
It is not neccessary an instance to access static members.
To access a static method or variable you need the name of the class and the dot operator.
It is also possible to access the static members using an instance of the class.
Static methods cannot be overriden, but they can be redefined in a subclass.
Coupling
Coupling is the degree to which one class knows about another class.
If a class know about other class through its interface, then both classes are loosely coupled.
If a class relies on parts of another class that are not part of the class interface, then the coupling is tighter.
Cohesion
Cohesion is used to indicate the degree to which a class has a sigle, well-focused purpose.
Cohesion let a much easier maintainace and reusabillity.
What would you like to do?
Get the
free
Flashcards app for iOS
Get the
free
Flashcards app for Android
Learn more
>
Flashcards
> Print Preview | https://www.freezingblue.com/flashcards/print_preview.cgi?cardsetID=112824 | CC-MAIN-2017-04 | refinedweb | 2,706 | 57.57 |
. are local and which are global. In Python 2.1 this construct causes warnings, and sometimes even errors. question builtins.
There are situations in which from module import * is just fine:()
This is a “don’t” which is much weaker than the previous “don’t”s but is still something you should not do if you don’t have good reasons to do that. The reason it is usually
Python has the except: clause, which catches all exceptions. Since every error in Python raises an exception,()in functions people seem not to be aware of for some reason: min() and max() can find the minimum/maximum of any sequence with comparable semantics, for example, yet many people write their own max()/min(). Another highly useful function is reduce(). A classical use of reduce() is something like
import sys, operator nums = map(float, sys.argv[1:]) print reduce(operator.add, nums)/len(nums)
This cute little script prints the average of all numbers given on the command line. The reduce() adds up all the numbers, and the rest is just some pre- and postprocessing.
On the same note, note that float(), int() and long() all accept arguments of type string, and so are suited to parsing — assuming you are ready to deal with the ValueError they raise.)) | http://docs.python.org/dev/howto/doanddont.html | crawl-002 | refinedweb | 216 | 71.34 |
cannot connect (null)
I have an array of type Tile which inherits QLabel
std::array<PlayableTile*,2>tile
{ {
ui->playableTile1,
ui->playableTile2
} };
Error: cannot connect (null)::pushTile() to MainWindow::pushTile()
connect(tile[0],SIGNAL(pushTile()),this,SLOT(pushTile());
....................^ this is where the error come froms
But when I do
connect(ui->playableTile1,SIGNAL(pushTile()),this,SLOT(pushTile());
I'm not getting any erorr
std::array<PlayableTile*,2>tile
{ {
ui->playableTile1;
ui->playableTile2;
}. };
That looks like a typo. What's your actual code?
Sorry for the late reply and yes it's a typo sorry, I'm just typing on my phone because my PC has a problem on connection
Its an array, the first semicolon is a comma and forget the second one, I edited it
What do you get when you call this?:
#include <QDebug> // ... qDebug() << tile[0];
Sorry again for the late reply, i was so busy but now im free
What do you get when you call this?:
This is what i get
QObject(0x0)
Sorry again for the late reply, i was so busy but now im free
That's OK :)
This is what i get
QObject(0x0)
You have a null pointer.
I fixed it by doing this
std::array<PlayableTile*,5> playableTileArray;
playableTileArray[0] = ui->playableTile1; playableTileArray[1] = ui->playableTile2; playableTileArray[2] = ui->playableTile3; playableTileArray[3] = ui->playableTile4; playableTileArray[4] = ui->playableTile5;
C++ is now kicking me just because i havent programmed for 2 weeks.
can you explain to me why this works?
You have a null pointer.
I dont get it, why i have a null pointer?
- mrjj Qt Champions 2017
Hi
well if you do
std::array<PlayableTile*,2>tile { { ui->playableTile1; ui->playableTile2; }. };
in Class def. (the h file)
then its very likely that this list is init'ed up BEFORE the
ui->setupUi ( this );
in the constructor so you are adding NULL pointers.
So it's best to add them explicit and not use init lists to be on the safe side.
So before ui->setupUi ( this ); Nothing is setup yet.
You second example works as you do not use init list anymore so I bet
playableTileArray[0] = ui->playableTile1;
is after
ui->setupUi ( this );
[edit: fixed coding tags SGaist] | https://forum.qt.io/topic/58068/cannot-connect-null | CC-MAIN-2018-39 | refinedweb | 370 | 78.48 |
Details
- Type:
Bug
- Status:
Closed
- Priority:
Major
- Resolution: Not A Bug
- Affects Version/s: 1.5.6
- Fix Version/s: 1.7-beta-1
- Component/s: bytecode, class generator, command line processing
- Labels:None
- Number of attachments :
Description
I want to use annotated classes im my Groovy beans, but org.springframework.scripting.groovy.GroovyScriptFactory#getScriptedObject uses getGroovyClassLoader().parseClass call, and it returns class instance with no annotations attached.
The same groovy
Activity
But this not making much sense to me. class generation is the phase where the actual class is created. Without that part you won't get something to execute. The annotations are part of the generated class, thus the annotation processing must be part of the class generation or happen before that phase.
The only reason I could imagine that the annotation is lost, is that you execute two different versions of Groovy.
That's a testcase for the bug.
It includes Intellij Idea project for your convenience.
Run TestRuntimeCompilation and see that no annotations were generated in the bytecode
Run TestPrecompiled and see that the annotation is present in a class file.
The first testcase uses a GroovyScriptFactory for groovy bean definition, the second one uses precompiled groovy script put in classpath.
Of course, you must also put spring.jar, groovy-all.jar and commons-logging.jar into lib folder.
The testcase was run against groovy 1.5.6
Replicated the test case with Groovy 1.6.3 and Spring 2.5.6.
But this seems to work:
def parsedClass = new GroovyClassLoader().parseClass('@Deprecated class Foo {}', 'Foo') println parsedClass.annotations.size() // => 1 @Deprecated class Bar {} println Bar.annotations.size() // => 1
is the class we test in the provided test case really the groovy class, or is it some kind of proxy?
It's not the Groovy class but an instance of org.springframework.scripting.groovy.GroovyScriptFactory which of course has no annotations. Still pondering if there is an easy way to get the real object.
If I use the recommended bean integration technique, I get the real object:
<lang:groovy
So, I think this issue is resolved. Please re-open if you believe there is still a bug remaining.
Sorry, pressed "add" accidentally.
So, the same groovy file compiled with groovyc generated bytecode WITH annotations.
After some investigation I found, that groovy.lang.GroovyClassLoader#parseClass method invokes
int goalPhase = compile(Phases.CLASS_GENERATION),
unit.compile(goalPhase);
whereas FileSystemCompiler invokes unit.compile(), and it invokes compile((Phases.ALL)
The annotation is present in AST in both cases. But it is lost when goalPhase = Phases.CLASS_GENERATION | http://jira.codehaus.org/browse/GROOVY-2982?focusedCommentId=180614&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2014-41 | refinedweb | 426 | 52.76 |
24
Scripting with Kotlin.
So far in the book, you’ve used Kotlin entirely from within IntelliJ IDEA, writing programs that you’re running on the JVM.
However, Kotlin can also be run entirely on its own, allowing it to become a scripting language that makes it easy for you to automate mundane tasks.
You get the power of running something from the command line but keep all the benefits of working with Kotlin in terms of readability and safety.
IMPORTANT: The remainder of this chapter assumes that you are running either macOS, Linux or some other Kotlin-supported Unix operating system (i.e., FreeBSD or Solaris). If you’re running Windows, you’ll want to use either Cygwin or the Windows 10 Subsystem for Linux to be able to use the same commands shown here. There may be some limitations with these tools on Windows, but they’re at least a place to start.
What is scripting?
Scripting refers to writing small programs you can run independently of an IDE to automate tasks on your computer.
A script is the small program that you write and run. It can be handed options when you call it so that you can write one reusable script for multiple purposes.
You’ll often hear people talk about shell scripting, which is using
.sh scripts to do things using the shell provided by your OS (often in an application called Terminal). Common shells are bash, zsh, and fish.
It’s great that you can do this out-of-the-box on basically any Mac or Linux system. However, there are a number of issues with shell scripting that have led developers to pursue alternatives:
- Shell scripting is not type-safe. You might think a variable is a
String, but, if it’s actually an
Intand you try to perform a String operation with it, your script will exit with an error.
- Shell scripting is not compiled. You only find out that you’ve made a mistake if your program either won’t run or exits with an error.
- Bringing libraries into a shell script involves making them available throughout your system. This may not be behavior you want for many reasons, including security.
- Shell scripts can be very difficult to read. Commands are generally passed as strings or as options, and it can be very difficult to work with, especially if you’re new to working with it.
Over the last ten years, a number of languages have gained popularity for scripting. Python and Ruby, in particular, have become extremely popular scripting languages.
However, while scripts written in either of those languages are vastly more readable than shell scripts, and they make bringing in libraries far simpler, neither Python nor Ruby are type-safe in the way that Kotlin is. Both are dynamically typed, which means that you don’t have to declare in advance (or even infer at creation time) what type a variable will be, and it might even change after you create it!
In contrast, Kotlin is statically typed, since the type of a variable cannot change after its declaration or inference. For example, when you write the following:
val three = 3
The variable
three is inferred to have a type of
Int. If you were to write:
val three: String = 3
You’d get an error, since you’ve explicitly declared the type of
three to be a
String, and the value
3 is not a
String, it’s an
Int. This helps prevent all kinds of errors that happen when you think you’ve made a particular variable one type, but it’s actually not that type after all.
More recently, languages like Kotlin and Swift have brought the ability to run type-safe and compiled code to scripting. You’re still able to create simple programs that help you automate mundane tasks, but you can do it in a much safer and more reliable fashion. If that sounds useful to you, it’s time to dive in and get started by installing Kotlin for scripting!
Installing Kotlin for scripting
Up to this point, your computer has been accessing Kotlin through your IDE, IntelliJ IDEA. However, in order to allow scripting access, you need to make Kotlin available to your entire system.
Installing SDKMan!
Note: If you’ve already got SDKMan! installed, skip to the “Installing Kotlin” section below.
curl -s | bash
Installing Kotlin
Open a new Terminal — either a new window or a new tab if your shell program allows it — and install Kotlin using SDKMan! by typing the following command and pressing Enter:
sdk install kotlin
which kotlin
/Users/[username]/.sdkman/candidates/kotlin/current/bin/kotlin
kotlinc-jvm
Using the REPL
A REPL is essentially a tiny Kotlin program in which you can type things and have them execute immediately. When it launches, you’ll see something like this:
println("Hello, world!")
val groceries = listOf("apples", "ground beef", "toilet paper")
groceries.joinToString("\n")
apples ground beef toilet paper
groceries.map { it.count() }
[6, 11, 12]
data class Grocery(val name: String, val cost: Float)
val moreGroceries = listOf(Grocery("apples", 0.50f), Grocery("ground beef", 5.25f), Grocery("toilet paper", 2.23f))
val cost = moreGroceries.fold(0.0f) { running, next -> running + next.cost }
println("Your groceries cost $cost")
Your groceries cost 7.98
:quit
kotlinc-jvm
println("$groceries")
Creating script files
Kotlin script files are a unique type of Kotlin file. They compile as if the entire file is the
main() function for a Kotlin program.
Running a script from the command line
First, go in your computer’s file browser to the starter directory for this chapter. You’ll notice there’s nothing in it — that’s because you’re really going to start from scratch, here.
touch script.kts
println("Hello, scripting!")
kotlinc -script script.kts
Hello, scripting!
Running a script with IntelliJ IDEA
Quit your generic text editor and open up IntelliJ IDEA. If you close your other projects, you should land on this screen:
Hello, scripting!
Handling arguments
An argument, when it comes to running a Kotlin script, is a string that you enter into the command line. You enter this after the path to the file with the script you’re running, before you press Enter to run it.
if (args.isEmpty()) { println("[no args]") } else { println("Args:\n ${args.joinToString("\n ")}") }
[no args]
hello
Args: hello
Args: hello Ellen Shapiro
kotlinc -script script.kts
[no args]
kotlinc -script script.kts Kotlin scripting is awesome
Args: Kotlin scripting is awesome
kotlinc -script script.kts "Kotlin scripting is awesome"
Args: Kotlin scripting is awesome
Getting information from the system
Getting information about the filesystem is really helpful because you can use it in many different ways: moving files around, copying files, and figuring out how large files are or where they’re located.
fun currentFolder(): File { return File("").absoluteFile }
import java.io.File
val current = currentFolder() println("Current folder: $current")
Current folder: [fullpath]/KotlinApprentice/scripting-with-kotlin/projects/starter
fun File.contents(): List<File> { return this.listFiles().toList() }
val current = currentFolder() println("Current folder contents:\n ${current.contents().joinToString("\n ")}")
Current folder contents: [fullpath]/KotlinApprentice/scripting-with-kotlin/projects/starter/.DS_Store [fullpath]/KotlinApprentice/scripting-with-kotlin/projects/starter/script.kts [fullpath]/KotlinApprentice/scripting-with-kotlin/projects/starter/.idea
fun File.fileNames(): List<String> { return this.contents().map { it.name } }
println("Current folder contents:\n ${current.fileNames().joinToString("\n ")}")
Current folder contents: .DS_Store script.kts .idea
fun File.folders(): List<File> { return this.contents().filter { it.isDirectory } } fun File.files(): List<File> { return this.contents().filter { it.isFile } }
fun File.fileNames(): List<String> { return this.files().map { it.name } }
fun File.folderNames(): List<String> { return this.folders().map { it.name } }
fun File.printFolderInfo() { // 1 println("Contents of `${this.name}`:") // 2 if (this.folders().isNotEmpty()) { println("- Folders:\n ${this.folderNames().joinToString("\n ")}") } // 3 if (this.files().isNotEmpty()) { println("- Files:\n ${this.fileNames().joinToString("\n ")}") } // 4 println("Parent: ${this.parentFile.name}") }
current.printFolderInfo()
Contents of `starter`: - Folders: .idea - Files: .DS_Store script.kts Parent: projects
fun valueFromArgsForPrefix(prefix: String): String? { val arg = args.firstOrNull { it.startsWith(prefix) } if (arg == null) return null val pieces = arg.split("=") return if (pieces.size == 2) { pieces[1] } else { null } }
val folderPrefix = "folder=" val folderValue = valueFromArgsForPrefix(folderPrefix)
if (folderValue != null) { val folder = File(folderValue).absoluteFile folder.printFolderInfo() } else { println("No path provided, printing working directory info") currentFolder().printFolderInfo() }
pwd
/Users/ellen/Desktop/Wenderlich/KotlinApprentice/scripting-with-kotlin/projects/starter
kotlinc -script script.kts "Kotlin scripting is awesome"
No path provided, printing working directory info Contents of `starter`: - Folders: .idea - Files: .DS_Store script.kts Parent: projects
kotlinc -script script.kts "Kotlin scripting is awesome" folder=/Users/ellen/Desktop/Wenderlich/KotlinApprentice/scripting-with-kotlin/projects/starter
Contents of `projects`: - Folders: starter final challenge - Files: .DS_Store
Challenges
Key points
- Scripting is writing small programs in a text editor that can be run from the command line and be used to do various types of processing on your computer.
- As a scripting language, Kotlin gives you the static-typing lacking in other scripting languages like Python and Ruby.
- Kotlin comes with a Read-Evaluate-Print Loop or REPL that can be used to investigate Kotlin code in an interactive manner.
- Kotlin scripts end with the extension
.kts, as opposed to normal Kotlin code that ends with
.kt.
- You can use IntelliJ IDEA as a script editor, and then either run your scripts within the IDE or from a command line shell on your OS.
- Kotlin scripts run inside a hidden
main()function and can access
argspassed into the script at the command line. Scripts can also import Kotlin and Java libraries to access their features.
- You can use Kotlin scripts to read and write to the files and folders on your filesystem, and much more!
Where to go from here?
The tool
kscript provides a convenient wrapper around Kotlin scripting. It allows pre-compilation of scripts, which results in much faster iteration, and it also allows you to use a simpler syntax for accessing Kotlin at runtime. The creator gave a talk at KotlinConf 2017 which is worth watching for a great outline of some of the problems he was trying to solve. | https://www.raywenderlich.com/books/kotlin-apprentice/v2.0/chapters/24-scripting-with-kotlin | CC-MAIN-2022-40 | refinedweb | 1,697 | 58.58 |
Feb 29, 2012 07:27 AM|LINK
I can't find any examples of how to make optional parameters. If I include a "?" in the routedTemplate, it throws an error
Just to clarify, I'm trying to make a service that has a url like this: /SomeServiceCall?param1=aasdf¶m2=blah¶m3=blahblah
All-Star
16475 Points
Feb 29, 2012 07:39 AM|LINK
Not sure if this will help, but you can try:
Feb 29, 2012 07:48 AM|LINK.
Feb 29, 2012 08:02 AM|LINK
It doesn't seem to work yet. This is how I have the routing set up:
config.Routes.MapHttpRoute( name: "default", routeTemplate: "{controller}", defaults: new { controller = "Random" });
And this is the controller class:
public class RandomController : ApiController { public ZuneCrawler.WcfService.DTOs.Random Get(string category, string list, string offerType) { } }
When I actually try to call the "Random" service in Fiddler, I get an HTTP 404 with this message: "<?xml version="1.0" encoding="utf-8"?><string>No action was found on the controller 'Random' that matches the request.</string>"
I thought that since the function is named "Get" it would automatically route correctly?
Feb 29, 2012 08:09 AM|LINK
Call your method GetSomething rather than just "Get". First of all, do without the method args. When you have basic routing to an action sorted out, then add your args and check that MVC wires them up to "category", "list" and "offerType" query strings.
See this article for a good overview of Web API routing.
Feb 29, 2012 08:12 AM|LINK
My other 2 services that are defined in 2 other controllers work perfectly. Both those controllers have a single function called Get, with no parameters, and that works.
That article seems to show actions that are included in the URL
Ok I added a function like this:
public ZuneCrawler.WcfService.DTOs.Random Get() { return Get(null, null, null); }
So now the service works if I call it in fiddler either with parameters or without.
However, even when putting parameters in the url, the parameters are still set to null when calling the function. So it's not routing it all the way correctly yet
Feb 29, 2012 08:20 AM|LINK
Ok - stick with "Get" if that works. I thought it might have been a problem. One thing you can do is explicitly name the action in the defaults, in the same way that you name the controller.
An example:-
routes.MapHttpRoute( name: "Eula", routeTemplate: httpRoutePrefix + "/eula", defaults: new { controller = "MyApi", action = "GetEula" } );
Star
7756 Points
Feb 29, 2012 08:28 AM|LINK
awebb.
Hi refter the aboce thread. it correct. it works similar as with .aspx pages
Feb 29, 2012 08:30 AM|LINK
Still not working... tried naming the function "Get", and then setting the action to "Get".
Then tried naming the function "GetRandomApps" and set the action to "GetRandomApps".
Neither worked.
It's like the routing is being confused by having parameters...
Feb 29, 2012 08:31 AM|LINK
I'm doing it the way awebb said to do it, and it's not working
I'm using self hosting and not IIS. Is that possibly causing a problem?
14 replies
Last post Mar 01, 2012 02:02 AM by Deepak Bhatia | http://forums.asp.net/p/1774984/4856226.aspx/1?How+do+you+route+optional+parameters+ | CC-MAIN-2013-20 | refinedweb | 545 | 73.27 |
Squirrel
Local mirror mecanism for ETCD
Keep a replication of ETCD folder locally for low latency querying. Provide an index system to access a file without scanning all nodes.
Summary
- Install
- Usage
- Index System
- Fallback System
- Test
Install
$ npm install --save @coorpacademy/squirrel
import createSquirrel from '@coorpacademy/squirrel';
Usage
Node Interface
const squirrel = createSquirrel({ hosts: ' auth: null, ca: null, key: null, cert: null, cwd: '/', fallback: '/tmp/squirrel.json', indexes: ['foo', 'bar.baz'] });
Options:
hosts: ETCD hosts. more
auth: A hash containing
{user: "username", pass: "password"}for basic auth. more
ca: Ca certificate. more
key: Client key. more
cert: Client certificate. more
cwd: ETCD current working directory.
fallback: Temporary file to save ETCD backup.
indexes: Array of key to index.
Methods
Consider the following folder:
/ ├── bar │ └── baz { "bar": { "baz": "qux" } } └── foo { "foo": "bar" }
get(path)
Get file by path. Returns
Promise;
path(String): the path of the file to get.
const foo = await squirrel.get('/foo'); console.log(foo); // { "foo": "bar" } const barBaz = await squirrel.get('/bar/baz'); console.log(barBaz); // { "bar": { "baz": "qux" } }
getBy(index, key)
Get by index value. Returns
Promise;
index(String): the path of the property to get. It needs to be declared in the
indexesoption
key(String): the value to match
const foo = await squirrel.getBy('foo', 'bar'); console.log(foo); // { "foo": "bar" } const barBaz = await squirrel.getBy('bar.baz', 'qux'); console.log(barBaz); // { "bar": { "baz": "qux" } }
Fields can be nested, as described by
_.get.
getAll(index)
Get index Map. Returns
Promise;
index(String): the path of the property to get. It needs to be declared in the
indexesoption
const foo = await squirrel.getAll('foo'); console.log(foo); // { "bar": { "foo": "bar" } } const barBaz = await squirrel.getAll('bar.baz'); console.log(barBaz); // { "qux": { "bar": { "baz": "qux" } } }
set(path, value)
Set file by path. Returns
Promise;
path(String): the path of the file to get.
value(Object): An object to store in file. Will be serialized.
const foo = await squirrel.set('/foo', { "foo": "bar" }); console.log(foo); // { "foo": "bar" }
Command Line Interface
squirrel-sync
Synchronize FS folder with ETCD folder.
$ squirrel-sync --hosts localhost:2379 ./fs-folder /etcd-folder
squirrel-watch
Watch ETCD folder changes.
$ squirrel-watch --hosts localhost:2379 /etcd-folder
squirrel-dump
Write ETCD folder in
preloadedStore format.
$ squirrel-dump --hosts localhost:2379 /etcd-folder ./dump.json
Arguments
--hosts="host1,host2": ETCD hosts. more
--ca=/file.ca: Ca certificate. more
--key=/file.key: Client key. more
--cert=/file.cert: Client certificate. more
Index System
Squirrel allows to put JSON in file. In this case, it could be indexes to access directly. Consider the following ETCD directory.
/ ├── file1 { "foo": "bar" } ├── file2 { "foo": "baz" } └── file3 { "foo": "qux" }
First of all, we should indicate Squirrel which paths we want to index.
const squirrel = createSquirrel({ indexes: ['foo'] });
Now, we can get the contents of
file1 by searching for its
foo value.
const file1 = await squirrel.getBy('foo', 'bar'); console.log(file1); // { "foo": "bar" }
We can also get the value of the index as an object.
const fooIndex = await squirrel.getAll('foo'); console.log(fooIndex); /* { "bar": { "foo": "bar" }, "baz": { "foo": "baz" }, "qux": { "foo": "qux" } } */
If two files have the same index value, Squirrel keeps one of the two.
Squirrel scans all files, no matter how deep, that contain a JSON value.
Index could be a complex path, as long as it works with
_.get.
Fallback System
By declaring a
fallback path, Squirrel is able :
- to save its state every time a change is made
- to restore the state to be faster on the next restart even if ETCD isn't available.
Test
You may run tests with
$ npm test | https://www.npmtrends.com/@coorpacademy/squirrel | CC-MAIN-2022-21 | refinedweb | 600 | 62.04 |
C++ Guide for EOS Development - Iterators & Lambda Expressions
This post is part of my C++ Guide for EOS developers
- Basics
- Call by value / reference & Pointers
- Classes and Structs
- Templates
- Iterators & Lambda Expressions
- Multi-index
- Header files
Iterators
Let’s talk about iterators, a really useful tool which is heavily used throughout the EOS code base.
If you’re coming from a JavaScript background, you might be already familiar with iterators like they are used in
for of loops.
The key concept of iterators is to provide a nicer way to iterate through a collection of items.
The added bonus is that you can implement the iterator interface for any custom classes, making iterators a generic way to traverse data.
// @url: #include <iostream> #include <vector> using namespace std; int main() { vector<int> v{2, 3, 5, 8}; // old way to iterate for (int i = 0; i < v.size(); i++) { cout << v[i] << "\n"; } // using Iterators // begin() returns an iterator that points to the beginning of the vector // end() points to the end, can be compared using != operator // iterators are incremented by using the + operator thanks to operator-overloading for (vector<int>::iterator i = v.begin(); i != v.end(); i++) { // iterators are dereferenced by * like pointers // returns the element the iterator is currently pointing to cout << *i << "\n"; } // auto keyword allows you to not write the type yourself // instead C++ infers it from the return type of v.begin for (auto i = v.begin(); i != v.end(); i++) { cout << *i << "\n"; } // can use arithmetic to "jump" to certain elements int thirdElement = *(v.begin() + 2); cout << "Third: " << thirdElement << "\n"; // end is the iterator that points to the "past-the-end" element // The past-the-end element is the theoretical element that would follow the last element in the vector. // It does not point to any element, and thus shall not be dereferenced. int lastElement = *(v.end() - 1); cout << "Last: " << lastElement << "\n"; // do not go out of bounds by iterating past the end() iterator // the behavior is undefined // BAD: v.end() + 1, v.begin() + 10 }
In modern C++, iterators are the preferred way to iterate over collections of elements (
vectors,
lists,
maps).
In additon, the
auto keyword saves you from typing out wordy types, but may lead to less expressive code.
Lambda Expressions
Armed with iterators, we can start to look at the functional programming concepts of modern C++. Many functions from the standard library take a range of elements represented by two iterators (beginning and end) and an anonymous function (lambda function) as parameters. This anonymous function is then applied to each element within the range. They’re called anonymous functions as they are not bound to a variable, rather they are short blocks of logic, passed as an inline argument to a higher-order function. Usually, they are unique to the function they are passed to and therefore don’t need the whole overhead of having a name (anonymous).
With it we can achieve similar constructs to sorting, mapping, filtering, etc. that are easy to do in languages like JavaScript:
[1,2,3,4].map(x => x*x).filter(x => x % 2 === 1).sort((a,b) => b - a)
The code in C++ isn’t as succinct, but nevertheless of the same structure.
Many functional programming helpers from the
std library operate on half-open intervals, meaning the lower range is included, the upper range is excluded.
// @url: #include <iostream> #include <vector> // for sort, map, etc. #include <algorithm> using namespace std; int main() { vector<int> v{2, 1, 4, 3, 6, 5}; // first two arguments are the range // v.begin() is included up until v.end() (excluded) // sorts ascending sort(v.begin(), v.end()); // in C++, functions like sort mutate the container (in contrast to immutability and returning new arrays in other languages) for (auto i = v.begin(); i != v.end(); i++) { cout << *i << "\n"; } // sort it again in descending order // third argument is a lambda function which is used as the comparison for the sort sort(v.begin(), v.end(), [](int a, int b) { return a > b; }); // functional for_each, can also use auto for type for_each(v.begin(), v.end(), [](int a) { cout << a << "\n"; }); vector<string> names{"Alice", "Bob", "Eve"}; vector<string> greetings(names.size()); // transform is like a map in JavaScript // it applies a function to each element of a container // and writes the result to (possibly the same) container // first two arguments are range to iterate over // third argument is the beginning of where to write to transform(names.begin(), names.end(), greetings.begin(), [](const string &name) { return "Hello " + name + "\n"; }); // filter greetings by length of greeting auto new_end = std::remove_if(greetings.begin(), greetings.end(), [](const string &g) { return g.size() > 10; }); // iterate up to the new filtered length for_each(greetings.begin(), new_end, [](const string &g) { cout << g; }); // alternatively, really erase the filtered out elements from vector // so greetings.end() is the same as new_end // greetings.erase(new_end, greetings.end()); // let's find Bob string search_name = "Bob"; // we can use the search_name variable defined outside of the lambda scope // notice the [&] instead of [] which means that we want to do "variable capturing" // i.e. make all local variables available to use in the lambda function auto bob = find_if(names.begin(), names.end(), [&](const string &name) { return name == search_name; }); // find_if returns an iterator referncing the found object or the past-the-end iterator if nothing was found if (bob != names.end()) cout << "Found name " << *bob << "\n"; }
The syntax for anonymous functions is something to get used to in C++.
They are specified by brackets and followed by a parameter list, like so
[](int a, int b) -> bool {return a > b; }.
Note that the
-> bool specifies a boolean return value. Often times you can avoid expressing the return type as it can be inferred from the return type in the function body.
If you want to use variables defined in the scope outside of your lambda function, you need to do variable capturing. There’s again the possibility to pass the arguments by reference or by value to your function.
- To pass by reference, you need to start your lambda with the
&character (like when using references in a function):
[&]
- To pass by value, you use the
=character:
[=]
There’s also the possibility to mix-and-match capturing by value and reference.
For example,
[=, &foo] will create copies for all variables except
foo which is captured by reference.
It helps to understand what happens behind the scenes when using lambdas: environmentProgramming Lambda Functions
Lambda functions are heavily used in EOS smart contracts as they provide a really convenient way to modify data in a short amount of code.
There are more functions in the standard library that work in a similar way to what we have already seen with
sort, transform, remove_if and
find_if.
They are all exported through the
<algorithm> header. | https://cmichel.io/cpp-guide-for-eos-development-iterators-lambda-expressions/ | CC-MAIN-2019-43 | refinedweb | 1,141 | 60.45 |
i have dev 4.9.9.1...i will create a win 32 console application.. write some code omething basic like...
//REMAIN.CPP
#include <iostream.h> //necessary for cin and cout commands
int main()
{
int dividend, divisor;
//get the dividend and divisor from the user
cout << "enter the dividend ";
cin >> dividend;
cout << "enter the divisor ";
cin >> divisor;
//output the qoutient and the remainder.
cout << "the qoutient is " << dividend/divisor;
cout << "with a remainder of " << dividend % divisor << '\n';
return 0;
}
the problem is after i (the user) fills in the dividend and divsor the program doesnt post the conclusion of the code....in fact, the console just closes before i can figure out whats up. i read what the warning says at the bottom of the compiler. it says.
"[warning] no new line at the end of file."
please assist. :( | https://www.daniweb.com/programming/software-development/threads/17135/dev-bloodshed-question | CC-MAIN-2016-50 | refinedweb | 139 | 65.12 |
matrix_keypad 1.1.0
Matrix Keypad code for use with Raspberry Pi
Introduction
Python Library for Matrix Keypads. Written and tested on a Model B Raspberry Pi. Supports both a 3x4 and 4x4 keypad included]/AdafruitMCP230xx/*.py /usr/local/lib/python2.7/dist-packages/matrix_keypad
Note: you will have to change the part in the brackets and maybe the path to the where the matrix keypad package is import MCP230xx
or:
from matrixKeypad import RPi_GPIO
Then initialize and give the library a short name so it is easier to reference later. For the MCP version:
kp = MCP230xx.keypad(address = 0x21, num_gpios = 8, columnCount = 4
The variables here ‘column count if you want to change it to the 4x4, it defaults at the 3x4:
kp = RPi_GPIO.keypad(ColumnCount = 4)
Version History
Code References
Column and Row scanning adapted from Bandono’s matrixQPI which is wiringPi based.
matrix_keypad_demo2.py is based on some work that Jeff Highsmith had done in making his PiLarm that was featured on Make.
-.1.0.xml | https://pypi.python.org/pypi/matrix_keypad/1.1.0 | CC-MAIN-2017-43 | refinedweb | 169 | 60.85 |
Difference between revisions of "Patch Submission HOWTO"
Revision as of 19:59, 13 May 2010
This document describes current practices for submitting a patch to the eLinux.org mailing list.
Contents
- 1 Quick Overview
- 2 Some Details
- 2.1 Pre-requisites
- 2.2 Prepare patch (diff) against original source
- 2.3 Use the latest Linux kernel version
- 2.4 Avoid including intermediate build results
- 2.5 Use unified diff format
- 2.6 Send the patch via e-mail
- 2.7 Upload patch to Patch Archive
- 3 Miscellaneous tidbits
- 4 Example
Quick Overview
Patches should follow standard open source practices. Here is a quick overview of the steps involved:
- make sure to diff between your modified tree and a saved-off original tree
- make sure to avoid including any intermediate build results in your patch (usually done by using a "dontdiff" file)
- use the following diff flags:
- unified format (i.e.
"-u")
- recursive (if appropriate)(i.e
"-r")
- include new files (if appropriate)(i.e.
"-N")
- include procedure name for patch hunks (i.e.
"-p")
- make sure the original tree dir is listed first (thus, the standard diff line to use is something like:
"diff -pruN -X dontdiff linux-2.6.8.1.orig linux-withmymods"
- make sure the patch applies with
-p1
- use the kernel patch e-mail message preferences
- subject should start with [PATCH ...]
- subject should contain a short description string
- body of message should have long description
- body of message should have
diffstatinformation
- body of message should have "Signed-off-by" line
- body of message should have actual patch in plaintext format:
- uncompressed, unencoded
- inline (not as an attachment)
- non-wrapped lines, no whitespace conversions
- no trailing whitespace
- send the patch via e-mail to the
celinux-devmailing list
our strong preference is that all forum-bound patches go to the
celinux-devmailing list, rather than only to a technical working group mailing list
- if sending the patch to an external open source project list, please copy
celinux-dev
- upload patch to the appropriate technology wiki page or the PatchArchive page
- attach the patch to the page
- modify the page to refer to the attached patch
Some Details
Pre-requisites
Use an Open Source license
All code submitted to the forum must be licensed under an appropriate open source license (See the membership agreement for exact details.) For code submitted to the forum related to the Linux kernel, the submission MUST be licensed under the GNU General Public License.
Code submitted to the forum must be free of intellectual property limitations (which would violate the open source licensing policy anyway). Please make sure that any code you submit is either:
- written exclusively by you or your company
- was provided to you under an open source license
Also, make sure that you are allowed by your company to submit the code to the forum. For things you obtained from open source, which you modified and contributed as part of forum activity, this should not be a problem. But you may want to double-check your company policy.
Check the intellectual property status of your code
If you are aware of any intellectual property issues (including patent infringement) related to your submission, you MUST notify the forum about the issue within 10 days of your submission
Prepare patch (diff) against original source
Before starting your modification, you should save off a pristine copy of the source code so you can make patches against it. It is common to rename this item with an extension of
.orig. This is helpful to identify the original versus the new files in the patch file (in case you get the diff command ordering mixed up).
Use the latest Linux kernel version
Please make your patch from the latest Linux kernel version - preferrably from that latest stable kernel from Linus.
Avoid including intermediate build results
When you build the kernel, all kinds of extra files are created (including but not limited to dependency files, object files, utility binaries, etc.) There are three ways to avoid including them in your patch:
- use "-X dontdiff" to eliminate certain files from your diff.
dontdiffis a file containing a list of filename and patterns that should be omitted from a kernel diff. You can obtain the latest one using the command: dontdiff"
OR
- always do your builds so that intermediate files are in a separate directory To do this, use "make O==dir" to specify an output directory. Use "make help" to get more information on kernel build options.
OR
- do "make distclean" on your modified tree before creating your patch. This will clean up all intermediate files in the kernel source tree. However, be careful, as this will remove your .config as well.
Use unified diff format
Use the unified diff format. Normally, when diff'ing anything but a single file, the arguments "-r", "-u", and "-N" should be used. This tells diff to operate recursively, in unified format, and to include new files as part of the output.
Also, some kernel developers prefer if you use the -p option so that you diff tries to identify the C procedure for each patch hunk.
Send the patch via e-mail
Code submissions to the forum should be in patch format, attached to an e-mail which is sent to a forum mailing list (rather than to an individual or list of e-mail recipients). This ensures that the patch will be archived by forum mailing list software, which provides an official record of the submission.
The e-mail should have the exact phrase "[PATCH]" (without the quotes) at the beginning of the subject line. The patch message should describe the patch. The patch itself should be attached or inlined in the message. The patch should be uncompressed (except for extremely large patches) and use plain text encoding.
Send it to the appropriate mailing list
There are two main audiences for your patch: Patches for Working Group activity and patches submitted generally to the whole forum.
If you are a member, it may be most convenient to send the first kind of patch directly to the Working Group mailing list. However, for patches from non-members, or for patches intended for the entire forum, please send them to the celinux-dev@tree.celinuxforum.org mailing list.
See for a link to the mailing list page, to subscribe to this list or view the current archives.
- What do do about mail list size limits?
It is possible you will run into trouble with the size of your patch. If your patch is greater than 40K, then the celinux-dev mailing list will hold it temporarily, pending administrator approval. If you do not wish to wait for this approval, instead please post the patch to a publicly accessible location and refer to the URL for that location in your message.
- Update: the mail list size limit was removed from celinux-dev.
This wiki provides an File Upload where you may do this. You may use this page, but please don't abuse it.
Include diffstat information with the patch
It is very helpful in evaluating a patch to see the results of running the
diffstat command. These results should be included in your e-mail.
Many current distributions of Linux include the diffstat utility. If it was not installed by default, check your CDs and install it if necessary. If your distribution does not include it, you can obtain the source for diffstat at diffstat.
Include patch instructions
You should include instructions in your e-mail message about how and where to apply your patch. Specifically, please mention the tree directory where the patch should be applied, and the appropriate patch level (if any).
E.g.
Apply this patch at the root of the tree, with "patch -p1 <foo.patch"
Upload patch to Patch Archive
To upload your patch to the patch archive, follow these steps:
- go to the PatchArchive page
- click on Edit Text at the bottom of the page
- find the wiki source for the table which matches the kernel version for your patch (e.g. 2.6.10)
- add a new line for your patch to the table. See the wiki online help for the format codes for tables.
(Usually, you can just copy an existing table line.)
-
- save your changes to the page by clicking on "Save Changes"
- attach the patch file to the page by clicking on Attach File, at the bottom of the page
- when attaching the patch, do NOT use embedded spaces in the filename of the patch. There is an option to rename the file when uploading it. If your patch filename has spaces, please rename it.
- select the checkbox: "Add link to page"
- click on "Upload"
- click on "Show Text" to see the page text again
- edit the page again, and move the "attachment:<filename>" line from the end of the wiki page into the entry for your patch that you created earlier
- save the page again.
- if there is not already a wiki discussion page for this patch (or feature), please create one, and add a reference to it to your table entry line. When creating the page, use the TechnologyProjectTemplate.
Miscellaneous tidbits
Here is an important statement of CELF policy:
In order to accomplish this, we need to conform to accepted coding standards and submission practices.
Kernel submission guidelines
The kernel source tree contains some guidelines (many of which are listed here) for patch submissions to kernel.org. See the files: Documentation/SubmittingDrivers and Documentation/SubmittingPatches for details.
Coding style
For kernel code, please follow the kernel coding guidelines which are found in the file Documentation/CodingStyle
Andrew's "perfect patch" instructions
Andrew Morton, an important kernel developer who manages thousands of patches for the kernel development process, wrote a set of guidelines for submittingpatches. It is at:
Jeff Garzik's patch instructions
Useful Linux Kernel Scripts
- scripts/checkpatch.pl -strict -- checks your patch for obvious style errors
- scripts/get_maintainer.pl -- get the email adresses of the maintainers and the people who might be interested in your patch
Git
Creating and sending emails is very easy and far less error prone with git - see git#Additional_Resources for more information.
Example
The following example shows a patch which was submitted to the Bootup Times Working Group. This example is not optimal, since it was made relative to a 2.4.20 kernel, rather than the CELF tree, but it demonstrates most of the conventions described above.
Subject: [PATCH] Patch for pre-calculated loops_per_jiffy Attached is a patch which allows for setting a pre-calculated loops_per_jiffy. This patch was derived from the CONFIG_INSTANT_ON feature in the CELF source tree, which was developed by MontaVista. This feature is already available in the CELF source tree, for the OMAP board. loops_per_jiffy (LPJ) is the value used internally by the kernel for the delay() function. Normally, LPJ is determined at boot time by the routine calibrate_delay(), in init/main.c. This routine takes approximately 250 ms to complete on my test machine. Note that the routine uses a sequence of programmed waits to determine the correct LPJ value, with each wait taking about 1 HZ (usually 10 ms) period. With a pre-calculated value, this calibration is eliminated. This patch is currently against a linux 2.4.20 kernel, for the x86 architecture. When the patch is applied, a new option appears in the General setup menu of menuconfig: "Fast booting". When this option is enabled, you are asked to set the value of another new option: 'Loops per jiffy'. These set the config variables CONFIG_FASTBOOT and CONFIG_FASTBOOT_LPJ. diffstat for this patch is: Documentation/Configure.help | 23 +++++++++++++++++++++++ arch/i386/config.in | 6 ++++++ init/main.c | 13 +++++++++++++ 3 files changed, 42 insertions(+) To apply the patch, in the root of a kernel tree use: patch -p1 <fastboot_lpj.patch To use the patch, apply it and boot your machine with Fast booting off. You can determine the correct value for Loops per jiffy by examining the printk output from the kernel boot. Now configure the kernel with Fast booting on, and with the correct LPJ value. Alternatively, you can set the LPJ value to 5000 times the value of BogoMips, if you know that for your target. With this patch applied (and configured) you should notice an approximately 1/4 second speedup in booting. You may want to use the time_bootup patch I sent previously to measure this improvement. The technology in this patch is architecture-independent. Please let me know any feedback you have on this patch or the approach used. Thanks, ===================== Tim Bird Senior Staff Engineer Linux Architecture and Standards Platform Technology Center of America Sony Electronics ===================== Signed-off-by: Tim Bird <tim.bird@am.sony.com> -------------------------------------------------------------------------------- diff -u -ruN linux-2.4.20.orig/Documentation/Configure.help linux-2.4.20/Documentation/Configure.help --- linux-2.4.20.orig/Documentation/Configure.help Thu Nov 28 15:53:08 2002 +++ linux-2.4.20/Documentation/Configure.help Tue Sep 30 15:32:35 2003 @@ -5274,6 +5274,29 @@ replacement for kerneld.) Say Y here and read about configuring it in <file:Documentation/kmod.txt>. +Fast booting support +CONFIG_FASTBOOT + Say Y here to enable faster booting of the Linux kernel. If you say + Y here, you will be asked to provide hardcoded values for some + parameters that the kernel usually probes for or determines at boot + time. This is primarily of interest in embedded devices where + quick boot time is a requirement. + + If unsure, say N. + +Fast boot loops-per-jiffy +CONFIG_FASTBOOT_LPJ + This is the number of loops passed to delay() to achieve a single + HZ of delay inside the kernel. It is roughly BogoMips * 5000. + To determine the correct value for your kernel, first turn off + the fast booting option, compile and boot the kernel on your target + hardware, then see what value is printed during the kernel boot. + Use that value here. + + If unsure, don't use the fast booting option. An incorrect value + will cause delays in the kernel to be incorrect. Although unlikely, + in the extreme case this might damage your hardware. + ARP daemon support CONFIG_ARPD Normally, the kernel maintains an internal cache which maps IP diff -u -ruN linux-2.4.20.orig/arch/i386/config.in linux-2.4.20/arch/i386/config.in --- linux-2.4.20.orig/arch/i386/config.in Thu Nov 28 15:53:09 2002 +++ linux-2.4.20/arch/i386/config.in Tue Sep 30 15:33:04 2003 @@ -316,6 +316,12 @@ bool ' Use real mode APM BIOS call to power off' CONFIG_APM_REAL_MODE_POWER_OFF fi +bool "Fast booting" CONFIG_FASTBOOT +if [ "$CONFIG_FASTBOOT" == "y" ]; then + define_int CONFIG_DEFAULT_LPJ 414720 + int ' Loops per jiffy' CONFIG_FASTBOOT_LPJ $CONFIG_DEFAULT_LPJ +fi + endmenu source drivers/mtd/Config.in diff -u -ruN linux-2.4.20.orig/init/main.c linux-2.4.20/init/main.c --- linux-2.4.20.orig/init/main.c Fri Aug 2 17:39:46 2002 +++ linux-2.4.20/init/main.c Tue Sep 30 15:32:04 2003 @@ -164,6 +164,12 @@ loops_per_jiffy == (1<<12); +#ifdef CONFIG_FASTBOOT_LPJ + loops_per_jiffy == CONFIG_FASTBOOT_LPJ; + printk("Calibrating delay loop (skipped)... "); + +#else /* CONFIG_FASTBOOT */ + printk("Calibrating delay loop... "); while (loops_per_jiffy <<== 1) { /* wait for "start of" clock tick */ @@ -192,10 +198,17 @@ loops_per_jiffy &== ~loopbit; } +#endif /* CONFIG_FASTBOOT_LPJ */ + /* Round the value and print it */ printk("%lu.%02lu BogoMIPS\n", loops_per_jiffy/(500000/HZ), (loops_per_jiffy/(5000/HZ)) % 100); + +#ifndef CONFIG_FASTBOOT_LPJ + printk("Use 'Loops per jiffy'==%lu for fast boot.\n", + loops_per_jiffy); +#endif /* CONFIG_FASTBOOT_LPJ */ } static int __init debug_kernel(char *str) | http://elinux.org/index.php?title=Patch_Submission_HOWTO&diff=20963&oldid=1743 | CC-MAIN-2016-50 | refinedweb | 2,580 | 62.27 |
I know in advance what structure I expect to receive for some parts of my data but one certain field can contain arbitrary JSON.
I want to be able to return the original JSON by following the node and run some aggregates over that JSON since I know names of certain fields.
What would be the best approach to handle that and what are my options?
Currently running
mutation = txn.create_mutation(set_obj=partial_data) request = txn.create_request(mutations=[mutation], commit_now=True) result = txn.do_request(request)
gives me awful lots of predicates and pollutes the namespace which is expected.
I want to receive the uid of inserted data and use it as a pointer from the strongly typed parts.
Any way to handle this without messing up my predicates and being able to run some limited queries? | https://discuss.dgraph.io/t/partially-typed-schema/14591 | CC-MAIN-2021-31 | refinedweb | 135 | 57.27 |
Need Help with a simple arduino code plz? Answered
HELLO instructables , i need help with writing an extra simple code over my arduino code that simply works with the arduino code i wrote , the description of what i want is very simple ,
I've got an ultra sensor i want if it reads a distance of less than or 10 cm it makes a red LED goes on.
I've this arduino code written so can u plz tell me what should code shall i write to work with it ?
#include <Servo.h>
#define trigPin 7
#define echoPin 6
Servo servo;
int sound = 250;);
duration = pulseIn(echoPin, HIGH);
distance = (duration/2) / 29.1;
if (distance < 10) {
Serial.println("the distance is less than 10");
servo.write(90);
delay(100);
}
else {
servo.write(39);
}
if (distance > 10 || distance <= 0){
Serial.println("The distance is more than 10");
}
else {
Serial.print(distance);
Serial.println(" cm");
}
delay(2000);
}
Discussions
2 years ago
Suggested tips:
1) Read code examples and try to understand how they work.
Start by looking at code examples that:
- blink a LED (for example:)
- code for your "ultra sensor". Reading your code, I suspect you are talking about an ultrasonic distance sensor here?
For example:
2) read the code you posted, and try to determine at which line of code the LED needs to be turned on. And if the LED also needs to be turned off (e.g. if the distance is > 10 cm), then try to determine where this needs to be added. Hint: is something else already happening at the same time (then add the blinking LED code there)?
3) I am not sure why you have added servo code to your project? If your project does not include a servo, removing code regarding the servo will make it less complex.
If you do need a servo, you might want to try and write code without the servo first (or even code for the LED only, and for the sensor only). Likewise, you might want to blink the LED only first, and read the value for the sensor first. Once you understand these separatly, it will be easier to combine the programs.
Answer 2 years ago
+1
2 years ago
You wrote this code ? | https://www.instructables.com/community/Need-Help-with-a-simple-arduino-code-plz/ | CC-MAIN-2020-34 | refinedweb | 377 | 71.95 |
Question:
Is it possible to create an attribute on a generator object?
Here's a very simple example:
def filter(x): for line in myContent: if line == x: yield x
Now say I have a lot of these filter generator objects floating around... maybe some of them are anonymous... I want to go back later and interrogate them for what they are filtering for. Is there a way I can a) interrogate the generator object for the value of x or b) set an attribute with the value of x that I can later interrogate?
Thanks
Solution:1
Unfortunately, generator objects (the results returned from calling a generator function) do not support adding arbitrary attributes. You can work around it to some extent by using an external dict indexed by the generator objects, since such objects are usable as keys into a dict. So where you'd like to do, say:
a = filter(23) b = filter(45) ... a.foo = 67 ... x = random.choice([a,b]) if hasattr(x, 'foo'): munge(x.foo)
you may instead do:
foos = dict() a = filter(23) b = filter(45) ... foos[a] = 67 ... x = random.choice([a,b]) if x in foos: munge(foos[x])
For anything fancier, use a class instead of a generator (one or more of the class's methods can be generators, after all).
Solution:2
Yes.
class Filter( object ): def __init__( self, content ): self.content = content def __call__( self, someParam ): self.someParam = someParam for line in self.content: if line == someParam: yield line
Solution:3
If you want to interrogate them for debugging purposes, then the following function will help:
import inspect def inspect_generator(g): sourcecode = open(g.gi_code.co_filename).readlines() gline = g.gi_code.co_firstlineno generator_code = inspect.getblock(sourcecode[gline-1:]) output = "Generator %r from %r\n" % (g.gi_code.co_name, g.gi_code.co_filename) output += "".join("%4s: %s" % (idx+gline, line) for idx, line in enumerate(generator_code)) output += "Local variables:\n" output += "".join("%s = %r\n" % (key,value) for key,value in g.gi_frame.f_locals.items()) return output print inspect_generator(filter(6)) """Output: Generator 'filter' from 'generator_introspection.py' 1: def filter(x): 2: for line in myContent: 3: if line == x: 4: yield x Local variables: x = 6 """
If you want to interrogate them to implement functionality then classes implementing the iterator protocol are probably a better idea.
Solution:4
No. You can't set arbitrary attributes on generators.
As S. Lott points out, you can have a object that looks like a generator, and acts like a generator. And if it looks like a duck, and acts like a duck, you've got yourself the very definition of duck typing, right there.
It won't support generator attributes like
gi_frame without the appropriate proxy methods, however.
Solution:5
Thinking about the problem, there is a way of having generators carry around a set of attributes. It's a little crazy--I'd strongly recommend Alex Martelli's suggestion instead of this--but it might be useful in some situations.
my_content = ['cat', 'dog days', 'catfish', 'dog', 'catalog'] def filter(x): _query = 'I\'m looking for %r' % x def _filter(): query = yield None for line in my_content: while query: query = yield _query if line.startswith(x): query = yield line while query: query = yield _query _f = _filter() _f.next() return _f for d in filter('dog'): print 'Found %s' % d cats = filter('cat') for c in cats: looking = cats.send(True) print 'Found %s (filter %r)' % (c, looking)
If you want to ask the generator what it's filtering on, just call
send with a value that evaluates to true. Of course, this code is probably too clever by half. Use with caution.
Solution:6
I realize this is a very belated answer, but...
Instead of storing and later reading some additional attribute, your code could later just inspect the generator's variable(s), using:
filter.gi_frame.f_locals
I guess Ants Aasma hinted at that.
Solution:7
I just wrote a decorator to do this here:
Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com
EmoticonEmoticon | http://www.toontricks.com/2018/06/tutorial-python-attributes-on-generator.html | CC-MAIN-2018-26 | refinedweb | 682 | 55.54 |
Hexagonal Architecture in Python
Hexagonal architecture is an architectural pattern in software engineering where certain boundaries/interfaces are defined as edges of a hexagon. The pattern is also known as the ports and adapters pattern, which is more descriptive to the actual implementation.
. The domain itself is then clean of dependencies and specific implementation, but does contain the business logic of what the service is about — why it has reason for existence in the first place.
Going back to the metaphor of hexagons, the domain of DDD can be seen as the center hexagon. Each adapter that implements one of the interfaces (ports) of the domain can be seen as another hexagon that sticks to the center hexagon with one edge. The complete microservice is a collection of hexagons together where the domain itself is encapsulated by its adapters.
Before we get into some code, it’s important we first talk a little bit more in more depth about dependency inversion. It’s one of the hardest concepts to grasp, especially in Python. This is because in Python there is no such thing as an interface, like languages as Java do have. You can get it working, but it’s not a first-class citizen and it’s just not Pythonic. Though, with “newer” versions of Python it does come with abstract base classes and methods and also (even) typing (hinting). I think this shows that Python in a way is trying to keep up with its competition, for example with TypeScript.
As the name suggests, with dependency inversion the dependencies need to be inversed. Good luck. Again, languages like Java have dependency injection frameworks, which make life easier, but in Python these don’t (really) exist. No worries, not that they need to exist, it’s just some convenience (or obfuscation).
To make it easy, with DDD in Python we define a package called
domain and inside of the domain, it’s not allowed to import anything which is not defined in that same package. Now this is where interfaces come in handy.
As a case study, we’ll define a microservice that is responsible to keep track of votes. In the public interface you can make a vote, get a list of all votes and the total amount of votes.
Our domain looks as follows:
domain/
├── vote.py
└── vote_repository.py
The
vote_repository knows about a
vote, so both are part of the domain.
The code of
vote.py:
import uuid
from dataclasses import dataclass, field
from typing import TYPE_CHECKING
if TYPE_CHECKING:
# This is necessary to prevent circular imports
from app.domain.vote_repository import VoteRepository
@dataclass
class Vote:
vote_id: str = field(default_factory=lambda: str(uuid.uuid4()))
def save(self, vote_repository: 'VoteRepository'):
return vote_repository.add(self)
def __hash__(self):
return hash(self.vote_id)
When you’re into DDD, you know Vote is the aggregate root so when you want to make a vote you need to create a Vote object and save it.
Note that in the code example we need to do some strange
ifand add a comment to explain what is happening. Here you see that Python isn’t made to be strongly typed, in a way typing is “hacked” into it. By the way, the necessity of comments in code is considered a “smell”, a.k.a. an anti-pattern.
Also note the weird syntax to define a default value for
vote_id . Let’s just say it’s a feature of the language. This is the most Pythonic way to instantiate every new Vote object with a unique uuid string, except when you supply one upon creation yourself.
The
save(...) function of the Vote will store itself to the Vote-repository, we’ll talk about this later on in this article.
Ignore the
__hash__() function for now, we’ll talk about it later as well.
Let’s define the code of
vote_repository.py itself:
import abc
from typing import List
from app.domain.vote import Vote
class VoteRepository(metaclass=abc.ABCMeta):
@abc.abstractmethod
def add(self, vote: Vote) -> Vote:
raise NotImplementedError
@abc.abstractmethod
def all(self) -> List[Vote]:
raise NotImplementedError
@abc.abstractmethod
def total(self) -> int:
raise NotImplementedError
Having both our Vote entity and its repository defined, we’ve got our first hexagon ready, the center hexagon, the domain. Now we can start adding adapters. But not before we’ve added tests, because we do TDD (test-driven development) of course!
For the test,
test_vote.py will look like this:
import uuidfrom app.domain.vote import Vote
def test_vote_existing_vote_id():
vote_id = str(uuid.uuid4()) assert Vote(vote_id).vote_id == vote_id
def test_vote_defaults():
vote_id = str(uuid.uuid4()) assert Vote().vote_id != vote_id
We can’t test the
save() function of the Vote yet, because we don’t have an implementation of the Vote-repository yet and inferfaces itself can’t be tested — they have no implementation. Abstract classes without any concrete functions neither.
By nature, DDD and TDD fit very well together. The clear boundaries of DDD define exactly what you need to test. Actually, this isn’t a trait of DDD, it’s baked into the foundation of the SOLID principles (especially the L and the D) and thus also in hexagonal architecture.
Knowing the domain, we can add the adapter package and the main entry point to the application (and tests). The full application structure is as follows:
voting-system/
├── app/
│ ├── adapter/
│ │ └── inmemory_vote_repository.py
│ └── domain/
│ ├── vote.py
│ └── vote_repository.py
├── main.py
└── tests/
├── adapter/
│ └── test_inmemory_vote_repository.py
└── domain/
└── test_vote.py
The
inmemory_vote_repository is an implementation of the domain’s
vote_repository and has a dependency pointing to the domain, which is allowed.
The code of
inmemory_vote_repository.py:
from typing import List
from app.domain.vote import Vote
from app.domain.vote_repository import VoteRepository
class InMemoryVoteRepository(VoteRepository):
def __init__(self):
self.votes = []
def add(self, vote: Vote) -> Vote:
self.votes.append(vote)
return vote
def all(self) -> List[Vote]:
return self.votes
def total(self) -> int:
return len(self.votes)
Let’s directly define a test for the adapter:
from app.adapter.inmemory_vote_repository import InMemoryVoteRepository
from app.domain.vote import Vote
def test_vote_save():
vote = Vote()
vote_repository = InMemoryVoteRepository()
assert vote.save(vote_repository).vote_id == vote.vote_id
def test_vote_repository_all():
vote_repository = InMemoryVoteRepository()
vote1 = Vote().save(vote_repository)
vote2 = Vote().save(vote_repository)
assert set(vote_repository.all()) == {vote1, vote2}
def test_vote_repository_total():
vote_repository = InMemoryVoteRepository()
Vote().save(vote_repository)
Vote().save(vote_repository)
assert vote_repository.total() == 2
I think this is pretty self-explanatory code. Notice the
set() and the set literal with
{} . To be able to do this we needed to implement the
__hash__() function as ignored earlier in this article.
The file
main.py glues everything together and is the main entry point to our application:
from app.adapter.inmemory_vote_repository import InMemoryVoteRepository
from app.domain.vote import Vote
def main():
vote_repository = InMemoryVoteRepository()
Vote().save(vote_repository)
Vote().save(vote_repository)
print(vote_repository.all())
print(f'Total votes: {vote_repository.total()}')
if __name__ == '__main__':
main()
This concludes the initial case study. Right now we have three hexagons:
- The domain
- The adapter
- The main entry point
Tests are not part of the application.
The code we’ve discussed can be found on GitHub:
Let’s add a REST interface to our application.
In the public REST interface you can call
POST on
/vote and there you go, you’ve voted. You don’t know what you’ve voted for but for this example this is not relevant. When you call
GET on
/votes you get the total number of votes. We’ll leave out the initial requirement of a list of all votes.
Just create
app/main.py (this location is a convention of the FastAPI library we’re using):
from app.adapter.inmemory_vote_repository import InMemoryVoteRepository
from app.domain.vote import Vote
from fastapi import FastAPI
app = FastAPI()
vote_repository = InMemoryVoteRepository()
@app.post("/vote", response_model=Vote)
def vote() -> Vote:
return Vote().save(vote_repository)
@app.get("/votes", response_model=int)
def votes() -> int:
return vote_repository.total()
For simplicity we’ve left out the async/await. This is a great feature of FastAPI but doesn’t add anything to this article.
At this point we have our fourth hexagon. Although, having a proper REST interface we could drop our original main entry point, with which we’re back to “just” three hexagons.
You might think right now, don’t we just have a layered architecture? Or plain old MVC? We might, I think all these terms boil down to the same thing of targeting a clean architecture (credits to Uncle Bob). They all just have their own quirks.
Of course this REST layer isn’t complete without tests. But first I’ll show you the full application structure including the REST interface and tests. The new files are bold:
voting-system/
├── app/
│ ├── adapter/
│ │ └── inmemory_vote_repository.py
│ ├── domain/
│ │ ├── vote.py
│ │ └── vote_repository.py
│ └── main.py
├── main.py
└── tests/
├── adapter/
│ └── test_inmemory_vote_repository.py
├── api/
│ ├── test_get_votes.py
│ └── test_post_vote.py
├── conftest.py
├── domain/
│ └── test_vote.py
└── fixtures/
└── client.py
For the tests I’ve added two API tests, a configuration file and a fixture.
The configuration
conftest.py is necessary for Pytest to know where to find, for example, fixtures. It’s just a single line there:
from tests.fixtures import * # NOQA
Notice the comment
# NOQA , this is meant to prevent deletion of this import by automatic — or manual — QA steps. This could be happening because in this file nothing is being used from the import itself. Furthermore
import * is considered a “smell” and should be prevented. In this case it’s deliberate to tell Pytest where to find the fixtures.
About the fixture(s), we have one and that’s meant to be able to test the REST interface.
client.py looks as follows:
import pytest
from starlette.testclient import TestClient
@pytest.fixture
def client():
from app.main import app
return TestClient(app)
It’s providing a FastAPI compatible test-client for Pytest to use.
Now the tests,
test_post_vote.py :
def test_post_vote(client):
response = client.post("/vote")
assert response.status_code == 200
And
test_get_votes.py :
def test_get_votes_0(client):
response = client.get("/votes")
assert response.status_code == 200
assert response.json() == 0
def test_get_votes_1(client):
client.post("/vote")
response = client.get("/votes")
assert response.status_code == 200
assert response.json() == 1
def test_get_votes_10(client):
for i in range(1, 10):
client.post("/vote")
response = client.get("/votes")
assert response.status_code == 200
assert response.json() == 10
The tests are pretty straightforward, though there’s a bit of magic in Pytest with the way these fixtures are injected. When you give a function parameter of a test the exact same name as a fixture (its function name), this fixture-function is being called right before the test (function) is executed and you’ll get the result of that fixture-function as a parameter of the test. In our case,
TestClient(app) object will be passed in the
client parameter, so we can use it in the test(s).
The full code can be found on GitHub:
What we’ve did with the REST interface, we could also do to the repository adapter. We can add multiple different adapters each responsible for persisting a Vote all in their own way while respecting the interface.
When we do add multiple implementations, this also forces us to think of ways how and when to use which. A common pattern here is dependency injection. This means that you define which implementation to be used outside of the application itself and make it part of the configuration, for example with environment variables.
In another article I’ll talk about this specific topic.
This article tries to give a hands-on example of how to implement hexagonal architecture in Python, among other architectural patterns and design principles. The concepts used are very close to the language itself but not bound to it. In other words, the approach can’t be fetched in a generic framework — therefore it also doesn’t exist or I just couldn’t find it — but it can be ported to any other language, as I did myself, back and forth, with Kotlin. In Kotlin you do have proper interfaces and static typing and to be honest, that’s great to rely on. | https://douwevandermeij.medium.com/hexagonal-architecture-in-python-7468c2606b63?source=post_internal_links---------3---------------------------- | CC-MAIN-2021-04 | refinedweb | 2,006 | 59.6 |
Author: J. R. Johansson ([email protected]),
The latest version of this IPython notebook lecture is available at.
The other notebooks in this lecture series are indexed at.
# setup the matplotlib graphics library and configure it to show # figures inline in the notebook %matplotlib inline import matplotlib.pyplot as plt import numpy as np
# make qutip available in the rest of the notebook from qutip import *
The Jaynes-Cumming model is the simplest possible model of quantum mechanical light-matter interaction, describing a single two-level atom interacting with a single electromagnetic cavity mode. The Hamiltonian for this system is (in dipole interaction form)
or with the rotating-wave approximation
where $\omega_c$ and $\omega_a$ are the frequencies of the cavity and atom, respectively, and $g$ is the interaction strength.
wc = 1.0 * 2 * pi # cavity frequency wa = 1.0 * 2 * pi # atom frequency g = 0.05 * 2 * pi # coupling strength kappa = 0.005 # cavity dissipation rate gamma = 0.05 # atom dissipation rate N = 15 # number of cavity fock states n_th_a = 0.0 # avg number of thermal bath excitation use_rwa = True tlist = np.linspace(0,25,101)
# intial state psi0 = tensor(basis(N,0), basis(2,1)) # start with an excited atom # operators a = tensor(destroy(N), qeye(2)) sm = tensor(qeye(N), destroy(2)) # Hamiltonian if use_rwa: H = wc * a.dag() * a + wa * sm.dag() * sm + g * (a.dag() * sm + a * sm.dag()) else: H = wc * a.dag() * a + wa * sm.dag() * sm + g * (a.dag() + a) * (sm + sm.dag())
c_ops = [] # cavity relaxation rate = kappa * (1 + n_th_a) if rate > 0.0: c_ops.append(sqrt(rate) * a) # cavity excitation, if temperature > 0 rate = kappa * n_th_a if rate > 0.0: c_ops.append(sqrt(rate) * a.dag()) # qubit relaxation rate = gamma if rate > 0.0: c_ops.append(sqrt(rate) * sm)
output = mesolve(H, psi0, tlist, c_ops, [a.dag() * a, sm.dag() * sm])
n_c = output.expect[0] n_a = output.expect[1] fig, axes = plt.subplots(1, 1, figsize=(10,6)) axes.plot(tlist, n_c, label="Cavity") axes.plot(tlist, n_a, label="Atom excited state") axes.legend(loc=0) axes.set_xlabel('Time') axes.set_ylabel('Occupation probability') axes.set_title('Vacuum Rabi oscillations')
<matplotlib.text.Text at 0x7f8f0b8c3908>
In addition to the cavity's and atom's excitation probabilities, we may also be interested in for example the wigner function as a function of time. The Wigner function can give some valuable insight in the nature of the state of the resonators.
To calculate the Wigner function in QuTiP, we first recalculte the evolution without specifying any expectation value operators, which will result in that the solver return a list of density matrices for the system for the given time coordinates.
output = mesolve(H, psi0, tlist, c_ops, [])
Now,
output.states contains a list of density matrices for the system for the time points specified in the list
tlist:
output
Odedata object with mesolve data. --------------------------------- states = True num_collapse = 0
type(output.states)
list
len(output.states)
101
output.states[-1] # indexing the list with -1 results in the last element in the list
Now let's look at the Wigner functions at the point in time when atom is in its ground state: $t = \\{5, 15, 25\\}$ (see the plot above).
For each of these points in time we need to:
# find the indices of the density matrices for the times we are interested in t_idx = where([tlist == t for t in [0.0, 5.0, 15.0, 25.0]])[1] tlist[t_idx]
array([ 0., 5., 15., 25.])
# get a list density matrices rho_list = array(output.states)[t_idx]
# loop over the list of density matrices xvec = np.linspace(-3,3,200) fig, axes = plt.subplots(1,len(rho_list), sharex=True, figsize=(3*len(rho_list),3)) for idx, rho in enumerate(rho_list): # trace out the atom from the density matrix, to obtain # the reduced density matrix for the cavity rho_cavity = ptrace(rho, 0) # calculate its wigner function W = wigner(rho_cavity, xvec, xvec) # plot its wigner function axes[idx].contourf(xvec, xvec, W, 100, norm=mpl.colors.Normalize(-.25,.25), cmap=plt.get_cmap('RdBu')) axes[idx].set_title(r"$t = %.1f$" % tlist[t_idx][idx], fontsize=16)
At $t =0$, the cavity is in it's ground state. At $t = 5, 15, 25$ it reaches it's maxium occupation in this Rabi-vacuum oscillation process. We can note that for $t=5$ and $t=15$ the Wigner function has negative values, indicating a truely quantum mechanical state. At $t=25$, however, the wigner function no longer has negative values and can therefore be considered a classical state.
t_idx = where([tlist == t for t in [0.0, 5.0, 10, 15, 20, 25]])[1] rho_list = array(output.states)[t_idx] fig_grid = (2, len(rho_list)*2) fig = plt.figure(figsize=(2.5*len(rho_list),5)) for idx, rho in enumerate(rho_list): rho_cavity = ptrace(rho, 0) W = wigner(rho_cavity, xvec, xvec) ax = plt.subplot2grid(fig_grid, (0, 2*idx), colspan=2) ax.contourf(xvec, xvec, W, 100, norm=mpl.colors.Normalize(-.25,.25), cmap=plt.get_cmap('RdBu')) ax.set_title(r"$t = %.1f$" % tlist[t_idx][idx], fontsize=16) # plot the cavity occupation probability in the ground state ax = plt.subplot2grid(fig_grid, (1, 1), colspan=(fig_grid[1]-2)) ax.plot(tlist, n_c, label="Cavity") ax.plot(tlist, n_a, label="Atom excited state") ax.legend() ax.set_xlabel('Time') ax.set_ylabel('Occupation probability');
from qutip.ipynbtools import version_table version_table() | http://nbviewer.jupyter.org/github/jrjohansson/qutip-lectures/blob/master/Lecture-1-Jaynes-Cumming-model.ipynb | CC-MAIN-2018-05 | refinedweb | 887 | 51.95 |
Vivendi Takes Over Radionomy, Winamp Relaunch Now Possible (windowsreport.com) 117
SmartAboutThings writes: Winamp could once again be brought back to life after Vivendi Group took over the majority stake in Radionomy, the previous owner of the app who purchased it from AOL in early 2014.. Vivendi Group, which owns or is involved in famous companies such as Dailymotion, Ubisoft, and Deezer, could help relaunch Winamp, although the press release announcing the acquisition offers no suggestion in this regard. The company, however, does mention Winamp and Shoutcast as two of the most important assets that will join its portfolio following the takeover.
Winamp (Score:5, Funny)
Re: (Score:2)
Re: (Score:1)
I'm a long time broadcast DJ, and I still use Winamp at home to preview music, and to play my radio shows (occasionally VLC).
I haven't found anything else that has the feature set that Winamp does for my application.
I've tried every player that I come across since Winamp "died", Win and Linux based anyhow (that Apple stuff is poison), and Winamp still does it all or me.
Sorry I couldn't log-in, hard drive crashed 10 years ago with my
/. user ID password, I was in the 42,000 range, dammit (no neck beard here,
Re: Winamp (Score:2)
Re: (Score:2)
I switched from Winamp to MusicBee. It handles my large music collection better than Winamp did (no sitting there for 15 mins while Winamp chews through the music library) and it also supports a single click to go from the library to the playlist.
The only downside is that the search in the now playing list is iffy and the now playing list can only sort by artist, OR title, not by artist AND title (from what I have found so far).
Sync is comparable (I am syncing to 1st gen iPod shuffle and a couple of flash d
Re: (Score:2)
Also, MusicBee lets me do bulk editing of ID3 tags (ie: I can grab a bunch of songs with the artist's name spelled 5 different ways and correct them to all be the same, or add the album tag to the while album at once).
Aaron Z
Re: (Score:2)
Love that intro. Its one of those things that burn into you mind for even and comes up when you think of the product. Like when SG1 was on and they would play a modem sound when the show started.
Re: (Score:2)
When it was on tv here right before it would play they would show the sfx of travelling through the stargate with a dial up modem sound during it. They had that for years.
Re: (Score:2)
This was in the Vancouver BC area (was Rogers Cable then Shaw). It was like that from like that from 1999+ I've been trying to find a video of it but can't seem to find one. That sound is burnt into my head now lol every time I watch SG1
Re: (Score:2)
Ak ok maybe it was at the end like you said it was many years.
Re: (Score:1)
Funny, yeah
BTW, Isn't Vivendi the corporation who ended up with mp3.com's decaying corpse propped up as their music store's url
The primary reason that I ever installed winamp was to listen to the endless stream of decent, original, unsullied by corporate money mongering music produced by the likes of 'The Laziest Men on Mars'.
mp3.com rocked because it was not controlled by suits trying to wring every penny of profit from every song, winamp rocked because it gave you a customizable means to access mp3.com
The
Re: (Score:2)
Re: (Score:2)
It's on GOG.com as well. I think they just released a graphically enhanced version of it, even, though it's of course more expensive. I did love that game. Like Civilization, but in a fantasy setting. It's been a while, so I'm about due for a replay.
Re: (Score:2)
Could you point me at some kind of good strategy guide? I spent a year trying to get into HOMM3 Complete, but I couldn't nail down how to move my heroes effectively. I always wound up overextending my heroes and getting picked off, or moved far too slowly and got stomped mid-game. The rest of the game makes sense but I suck at that part. I'd really like to get into it, but it's the strategy game I'm the worst at and I'm not sure that just practice will help.
Re: (Score:2)
Mature Product (Score:5, Insightful)
Honestly, what more does Winamp need? For what most people who still use it want to do with it, it works just fine. Vivendi being involved in it means it'll probably promptly be ruined and made into some type of iTunes clone with a metric shit-ton of bloat and do half-a-million things in a mediocre fashion. As for Shoutcast, it also does what it's supposed to do: stream audio.
The fact that a company owned the product and was doing nothing in particular with it was, to me, a good thing.
Re: (Score:2)
I use precisely that version myself. I believe that's the last version that allowed output to WAV file as part of the base package.
Re: (Score:2)
5.x had the WAV file output included in the base package as well. I've been using 5.63 on a daily basis for years and it's had that feature...
Re: (Score:2)
Oh nice. I didn't bother with 5.x because I have some nice plugins for 2.x so I just never made the jump after I uninstalled 3.
Re: (Score.
Re: (Score:1)
One doesn't have to install a new version. I sometimes think that not enough people consider that newer versions aren't necessarily better. Although sometimes it's rather obvious, uTorrent for example.
Re: (Score:2)
uTorrent is a server, running an old version of a server is a fast way to get pawned by a worm without any interaction required.
Re: (Score:2)
Honestly, what more does Winamp need?
More input plugins, always more input plugins. I don't think anything precludes people writing those now, but more interest would probably lead to more plugins.
Re: (Score:1)
What could it need?
Fixes for POODLE, HeartBleed, RSA-CRT key leaks, an update to support Windows 10, a port to Linux, a port to Android, a web site that works with IE10, a port to iOS, support for the Intel SSSE4.2 instructions on Intel, support for Intel's new AES-NI instructions, and any security vulnerabilities in the core software itself to be repaired.
Many of the above may be automatically sucked in with a recompile if you update to newer support libraries.
Re: (Score:2)
Re: (Score:2)
Fixing the video support would be nice. Winamp is still my preferred video player because it's about the only video player I've found with a good playlist editor (about the only other I've found is Zoomplayer). But the video player itself is a bit quirky. I can fix a lot of it by disabling the Winamp built-in support for things like MKV and Flash video and letting Winamp fall back on DirectShow, but it will still randomly choke on some files for no apparent reason.
Ummmm.....so? (Score:1)
This is about as relevant as someone resurrecting Trumpet Winsock. Sure, Winamp was great in its time, but there are so many other options now that are as good or better that it would just get lost in the shuffle.
Re: (Score:3)
Ohhh man. That brings back fond memories of MUSHing and painful memories of Trumpet Telnet silently hanging if new text came in while you had a selection highlighted.
Strange bedfellows (Score:3)
Vivendi also acquired UMG and most of EMI over the last decade or so.
Winamp, EMI and Universal Music Group under one roof... how times have changed.
Shoutcast is more important. (Score:3, Interesting)
I strongly believe that Shoutcast is the more important of the two applications here, and likely the reason for the acquisition. There are approximately a zillion interchangeable client players / music library managers, but Shoutcast and other compatible programs like icecast are dominant in the indie-scale internet radio space. There are tons of clients (one open source example - Butt) that can broadcast using the original Soundcast system. I liked Winamp long back in the day, but I do think Shoutcast is the more important news here.
I have experience: I co-founded - and they are still using Icecast to broadcast their streams.
iTunes blows chunks (Score:2)
I would leave iTunes in a short minute if WinAmp provided syncing capabilities to my iPhone and iPad, without all that bullshit sales and marketing gimmickry.
Re: (Score:3)
Re: (Score:2)
It works. I bought a used iPod years ago and didn't want to deal with iTunes. I've been using that plugin for years without any problems.
Re: (Score:1)
Whatever happened to shoutcast? And icecast? (Score:2)
I remember in 1997-98 or so, shoutcast + winamp was the latest greatest thing. Stream from your own connection! Have your own radio station! I did it, too. I had a reasonably popular shoutcast station that had around 20 people connected at any one time. Then, there was icecast, which was the open source version. What happened with that?
I just remember that my 20 users was enough to keep me high enough in the standings on the shoutcast.com web page, but what happened was these losers started creating t
Time to expand your narrow worldview. (Score:1)
Czech? Brazil? It seems you were the foreigner.
Re: (Score:3)
Re: (Score:2, Insightful)
You can't flex enough to realise the thousands of people listening to those streams are not listening to a foreign language.
Re: (Score:1)
Most went to Justin.tv and evolved into TWITCH.TV. Totalbiscuit started on shoutcast.
Re: (Score:2)
I still run a icecast station. It's listed by Tune-in and many android radio apps have it listed. Dude Suit Radio
I quit doing anything the the website as I had no ideas and blog was boring. But I keep the radio stream up. I wrote the script that DJ's it in perl, no ads on the stream.
been running several years
So what's the alternative? (Score:2)
So as someone who still uses WinAmp, what are those alternatives and how are they better?
I still use winamp because it's simple and efficient at what i want it to do. It works with the library of mp3 files i have (supposedly it also handles flac, but i my ears aren't good enough to require using up that much drive space) without requiring me to con
Re: (Score:2)
Seconded. Foobar2000 is an amazing player. Very powerful, fast and versatile.
It is especially great if you like customizing your interface in a functional fashion (versus a visual one).
Re: (Score:2)
i switched because around 2004-2005 winamp was beginning to be overwhelmed by my music collection, both in delays and with the UI being too hard to manage.
also tha "add to playback queue allows up to 64 queued plays without altering or creating playlists, and as playback proceeds from the last entry it can be used to temporarilly override order in one or more playlists or to create progressions between playlists, and it can have items right off the
Re: (Score:1)
I've been using WinAmp since 1999ish and have yet had any issues with it (even on my daily icecast radio stream playlist of just under 38,000 entries) it just works and unlike many newer media players, I can select the output device I want.
I really hope this sale doesn't ruin what's left of WinAmp. It would a
Re: (Score:2)
I'm joining with the crowd recommending Foobar2000; I got into it because I wanted something for my Linux box, and when WinAmp went down I switched entirely.
For video on Windows, though, try Media Player Classic--lightweight, and like Foobar2000 is quite portable though unlike Foobar2k no special installation is necessary. (I've run it on a system with a good codec collection installed--I use CCCP, and am hoping to find a Linux equivalent of it--with just the program file dropped into a folder.)
Still Rocking Winamp Daily Since 1997 (Score:1)
Winamp has taken me through Junior High, High School, College, and Beyond. I've seen countless media players come and go, but all I need is trusty Winamp with my personally-edited Receiver skin (original artist Timo Henke, year 2001). Sure the new programs do all this fancy stuff, but all I need is a nice simple player for my tunes with convenient Windows Explorer folder integration. Hard to believe it's been 18 years of near-daily usage. Keep on rockin' with Winamp in the Geek world!
Still use Winamp (Score:1)
I listen to pre-recorded radio shows and I like the fact that I can dock Winamp at the top of the screen with hiding it from view until the mouse comes close to the top of the screen. Can Foobar 2000 do that?
I like Audacious Media Player & Clementine (Score:1).... [audacious-...player.org]
"Audacious runs on Linux, on BSD derivatives, and on Microsoft Windows."
"Audacious is an open source audio player. A descendant of XMMS, Audacious plays your music how you want it, without stealing away your computerâ(TM)s resources from other tasks. Drag and drop folders and individual song files, search for artists and albums in your entire music library, or create and edit your own custom playlists. Listen to CDâ(TM)s or stream music from the Internet. Tweak the
Not caring that much about Winamp, but... (Score:3)
I am a little concerned about the takeover of Radionomy. That's my main source of music on the intertubes and I'm hoping Vivendi doesn't decide that changes need to be made and eff that up.
It be nice for an upgrade (Score:1)
But I am really interested to see how it will change if it does. I mean, a lot of the passion of Winamp development died out in 2004 when Justin Frankel, the creator left. I am not saying that the team after him hasn't done a good job maintaining it and adding upgrades, but it seems lik
Since they did such a fine job... (Score:2)
*starts archiving all the winamp content he can.*
Winamp's already said it'd take years to return (Score:1)
DEADBEEF for Linux (Score:2)
--If you like WinAmp on Windows, try the DEADBEEF player for Linux -- similar interface and features; comes with a large multi-band Equalizer, and plays
.ogg files and .mod(.it, .s3m, .xm, etc) files out of the box.... [sourceforge.net]
Re: (Score:2)
So what? WinAmp was great at the dawn of the MP3 era, but it's caveman primitive by today's standards.
Reviving WinAmp at this late stage is completely pointless, in my mind.
So it just will not be right until it has a new flat interface design? I think you need to read the UI/UIX thread.... [slashdot.org]
Re: (Score:3, Interesting)
I guess it does demonstrate the enduring power of a marque though. Sure, WinAmp hasn't been relevant for ages, but people still remember it. I remember even making a skin for the music player I used on my PocketPC PDA (gsplayer I think it was) to make it look like WinAmp, mustv'e had way too much time on my hands.
Sure, it's hard to see how WinAmp could be brought back to prominence. Certainly not in the same form as it was; if you want that, just download the old versions. But it's not impossible; I can't s
Re: (Score:2)
Winamp remained relevant throughout the ages. Issue companies and shills take with it is that it's not monetizeable, simply because it's a perfectly working product that doesn't need any major updates and isn't dependent on developer's servers.
It just works on user's computer, and remains among the best if not the best audio player on windows to date.
Re: (Score:3)
It just works on user's computer, and remains among the best if not the best audio player on windows to date.
I'm using vlc to play music now. It plays everything that I want to play and then some. Very rarely but still occasionally I run into something that winamp won't play.
Re: (Score:2)
Dude! Foobar2000 for music! Best damn music player in existence (for Windows).
Re: (Score:1)
This:
Mod parent up. I loved Winamp, but ditched it for foobar2000 when it started becoming too much like Windows Media Player. foobar2000 Made by a guy who worked on Winamp stuff for a bit.
"foobar2000 is a freeware audio player for Windows developed by Piotr Pawlowski, a former freelance contractor for Nullsoft. It is known for its highly modular design, breadth of features, and extensive user flexibility in configuration. For example, the user-interface is completely customizable
Re: (Score:2)
I tried VLC for my mp3s awhile back (I still use it for videos, of course). I could never figure out why, but it always pinned my CPU, even if I turned visualizations off.
Switched to foobar after that, but I still miss WinAmp's simplicity.
Re: (Score:2)
That's like using a truck for daily commute. Uncomfortable, dysfunctional and way too costly.
If you need extra features in addition to playback, you use foobar2000. If you just want playback, you use winamp.
Re: So What? (Score:1)
Re: (Score:2)
That's like using a truck for daily commute. Uncomfortable, dysfunctional and way too costly.
You haven't driven a modern truck, have you? They drive like a car, and they've got very comfortable seats. The only problem is the mileage. Thing is, if we may shift out of that metaphor, my low-end machine is a septa-core with 8GB of RAM. VLC is not an undue burden. I have it installed already. I like the interface fine. So just like someone who already owns a nice truck and then gets a job a ways away, I'm going to use what I've got and what I already use rather than install another piece of software and
Re: (Score:2)
That's another beauty of winamp. It just works. It needs no updates. At all.
Most people clearly have forgotten what it's like, to have piece of software that is actually functionally finished and works just fine.
Re: (Score:2)
I have a basic media player using fmod that I designed but don't share. I've been using it for over a decade because I didn't like windows media player or winamp and there were really no other competition at the time. | https://slashdot.org/story/15/12/21/2032216/vivendi-takes-over-radionomy-winamp-relaunch-now-possible | CC-MAIN-2017-09 | refinedweb | 3,233 | 71.75 |
By Shawn Hargreaves, November 1999
As of the 3.9.x work-in-progress series which is current at the time of this writing, Allegro has been ported to DOS (djgpp and Watcom), Windows (MSVC, mingw32, and RSXNTDJ), Linux (console and X) and BeOS. By the time you read this, version 4.0 will probably be out, or perhaps even some later release with support for more different platforms. Allegro gives you the exact same library functions no matter where you are running it, and this makes it very easy to produce versions of your program for all of these systems: in theory, a simple recompile should be all the porting work required. Things are rarely quite that simple in the real world, though, so you will probably need to tweak a few things to get your program working on each new platform. This document tries to guess what the most likely problem areas will be, in hopes that you can avoid them right from the start rather than getting stuck having to do lots of fiddling later on. Some of these are Allegro things, but most are general C things: in either case, please let me know if you can think of any other problem areas that I left out!
typedef struct MYSTRUCT { char a; short b; int c; };you might expect that sizeof(MYSTRUCT) will be 7, but in fact most Intel compilers will pad it to 8 bytes. It is quite possible, though, that some might pad to 12, or that a future 64 bit platform might make that int variable twice as large as you are expecting. So you need to avoid code which cares about such changes, and be sure to use sizeof() whenever you need to allocate space for a data structure.
Most importantly, you need to distinguish between read-only data, global modifiable data, and user-specific data. Assuming that someone has downloaded your game, unpacked it, compiled it, and run it, it is a good thing for your game not to modify the directory where it is installed, so that this directory could be mounted read-only. If you want to save out variable data such as a hiscore table, put this in the /var/games directory (or if you need many such files, create a /var/games/mygame subdirectory, and put your data in that). But don't forget that there can be many users on the same machine, and they may not all want to share the same stored information! Anything that is specific to the current user, such as save game files or controller configuration, should go in their home directory, which can be located by reading the HOME environment variable (eg. char *mydir = getenv("HOME")). Again, if you need to put many files here, you can create a subdirectory for yourself.
That is probably enough to make your game work well in a multiuser environment, but if you want to go further and set up a really beautiful configuration (for example to provide a "make install" target as well as the regular compilation rules) you should be aware of the following locations:
Most binaries go in /usr/local/bin, but games should be placed in /usr/local/games. This directory is only for the executable itself, which is specific to the current machine architecture, and can be run directly by the user.
Shared libraries and other executable code go in /usr/local/lib. The contents of this directory are also specific to one machine architecture, but the user does not run them directly (for example the Allegro shared library goes in here).
Non-executable resources go in /usr/local/share/, or for games, /usr/local/share/games/. The point of this directory is that it contains material which remains the same no matter what type of computer your game is running on, so if someone has a network containing a mix of i386 machines, Alpha boxes, and PPC systems, they can still mount these files over the network and use the same data on all hardware. So this is the place to put your graphics, sounds, and level data. If you have more than one file to put in here, make a subdirectory to contain them all.
#ifdef GFX_MODEX if (gfx_driver->id == GFX_MODEX) { /* do funky mode-X graphics stuff */ } else #endif { /* do normal graphics stuff */ }This code will simply not bother to compile the mode-X routines if there is no mode-X driver on the current platform, and it is better than checking for the platform itself, because this test will adapt to things like the Linux version where mode-X support might or might not be available, depending on what options were passed to the configure script.
// call bmp_select() once at the very start of your drawing operation bmp_select(bmp); for (y=top; y<bottom; y++) { // call bmp_write_line() once for every horizontal line unsigned long address = bmp_write_line(bmp, y); for (x=left; x<right; x++) { // use bmp_write*() macros to write the pixels bmp_write8(address+x, color); } } // call bmp_unwrite_line() once at the very end of the drawing operation bmp_unwrite_line(bmp);Yes, it looks horrific, but half of these macros will just expand into noops on any given platform, and the other half tend to collapse into just a bit of pointer twiddling or inline asm, so it really isn't all that bad. And this is the only way to make direct memory access code that is 100% portable to any platform.
For higher color depths, change bmp_write8() to bmp_write15(), bmp_write16(), etc, and multiply the x coordinate by a scaling factor before you add it to the destination address. For 15 and 16 bit modes, you should multiply by sizeof(short). For 24 bit modes, multiply by 3. For 32 bit modes, multiply by sizeof(long). | http://alleg.sourceforge.net/docs/portability_guidelines.en.html | crawl-002 | refinedweb | 969 | 59.98 |
storage for 3 collinear points to serve as 1-D projective basis More...
#include <vgl/vgl_fwd.h>
#include <vgl/vgl_homg_point_1d.h>
#include <vcl_iosfwd.h>
#include <vcl_cassert.h>
Go to the source code of this file.
storage for 3 collinear points to serve as 1-D projective basis
vgl_1d_basis<T> is a class (1,0), and the third one (the unit point) coordinates (1,1).,0) and (1,1).!)
Modifications Feb.2002 - Peter Vanroose - brief doxygen comment placed on single line
Definition in file vgl_1d_basis.h.
Definition at line 139 of file vgl_1d_basis.h.
Write "<vgl_1d_basis o u i> " to stream.
Definition at line 40 of file vgl_1d_basis.txx. | http://public.kitware.com/vxl/doc/release/core/vgl/html/vgl__1d__basis_8h.html | crawl-003 | refinedweb | 108 | 71.51 |
? By the way, the MD5 function which I use and is included as part of HSLIBS has the type String -> IO String. The MD5 algorithm really is a function and should have type String -> String. Do people agree and if so how do I get it changed? Dominic. -- Compile with ghc -o test test.hs -static -package util -- under Windows. module Main(main) where import IO(openFile, hPutStr, IOMode(ReadMode,WriteMode,AppendMode)) import MD5 import Char -- showHex and showHex' convert the hashed values to -- human-readable hexadecimal strings. showHex :: Integer -> String showHex = map hexDigit . map (fromInteger . (\x -> mod x 16)) . takeWhile (/=0) . iterate (\x -> div x 16) . toInteger hexDigit x | (0 <= x) && (x <= 9) = chr(ord '0' + x) | (10 <= x) && (x <=16) = chr(ord 'a' + (x-10)) | otherwise = error "Outside hexadecimal range" powersOf256 = 1 : map (*256) powersOf256 showHex' x = showHex $ sum (zipWith (*) (map ((\x -> (mod x 16)*16 + (div x 16)) . toInteger . ord) x) powersOf256) -- The type Anon and function anonymize hide the anonymisation -- process. In this case, it's a hash function -- digest :: String -> IO String which implements MD5. type Anon a = IO a class Anonymizable a where anonymize :: a -> Anon a -- MyString avoids overlapping instances of Strings -- with the [Char] data MyString = MyString String deriving Show instance Anonymizable MyString where anonymize (MyString x) = do s <- digest x return ((MyString . showHex') s) instance Anonymizable a => Anonymizable [a] where anonymize xs = mapM anonymize xs filename = "ldif1.txt" fileout = "ldif.out" readAndWriteAttrVals = do h <- openFile fileout WriteMode s <- readFile filename a <- anonymize((map MyString) (lines s)) hPutStr h (unlines (map (\(MyString x) -> x) a)) main = readAndWriteAttrVals ------------------------------------------------------------------------------------------------- 21st century air travel | http://www.haskell.org/pipermail/glasgow-haskell-users/2001-January/001590.html | CC-MAIN-2014-35 | refinedweb | 269 | 72.26 |
I have an assignment to write a program that asks a user the year, model, and condition of a vehicle and display the price. We have to use a 3-D array to hold all of the values and a 2-D array to hold all of the names of the cars. The function I am having a problem with is the one that asks the user the year, model, and condition of the car. Say the person enters "Camry" for the model and "Used" for the condition, I need a way to assign a numeric value to a variable to represent the location of the price in the 3-D array. At first I was going to use a switch statement but visual studio kept telling me that you cant use a switch statement with a string. Here is the code I have so far, I'll post the whole thing but I pointed out where I was having the problem.
Any help would be great, Thanks.Any help would be great, Thanks.Code:#include<iostream> #include<iomanip> #include<fstream> #include<string> #include<cmath> #include<cctype> using namespace std; char userprompt(); void getdata(ifstream& Readfile, float carvalues[100][17][4]); void openfile(ifstream& Readfile); float findcarvalue(char carnames[100][100], float carvalues[100][17][4]); int main() { float carvalues[100][17][4]; char carnames[100][100]={"Altima", "Avalanche", "Camaro", "Camry", "CTS", "E350", "Escalade", "Explorer", "F150", "Fairlane", "S600", "Silverado", "Suburban", "Tacoma", "Tahoe", "Titan", "Tundra"}; float carvalue; ifstream Readfile; openfile(Readfile); getdata(Readfile, carvalues); switch(userprompt()) { case'f': case'F': carvalue = findcarvalue(carnames, carvalues); cout << endl << "The price of that vehicle is " <<carvalue; break; case'c': case'C': cout << "computeaverage"; break; case's': case'S': cout << "showprices"; break; case'h': case'H': cout << "cheapestcar"; break; case'a': case'A': cout << "showallvalues"; break; case'q': case'Q': break; default: cout << "Invalid Input!"; } return 0; } void openfile(ifstream& Readfile) { string pathname; cout << "Hello, Welcome to Kelly Green Book." << endl; cout << "What is the name of the input file? " << endl; cin >> pathname; Readfile.open(pathname.c_str()); } char userprompt() { char action; cout << endl << "What would you like to do?" << endl << "F = Find a car value" << endl << "C = Compute average" << endl << "S = Show Prices" << endl << "H = Find the cheapest car" << endl << "A = Show all values" << endl << "Q = Quit" << endl; cin >> action; return action; } void getdata(ifstream& Readfile, float carvalues[100][17][4]) { float price; for (int year = 0; year < 7; year++) { for (int model = 0; model < 17; model++) { for (int condition = 0; condition < 4; condition++) { Readfile >> price; carvalues[year][model][condition] = price; } } } } float findcarvalue(char carnames[100][100], float carvalues[100][17][4]) { float year, mlocation, carvalue, clocation; char model, condition; cout << endl << "Year: "; //Here is where I am having the problem. cin >> year; year = 2007 - year; cout << endl << "Model: "; cin >> model; cout << endl << "Condition(Excellent/Fine/Good/Used): "; cin >> condition; return carvalue; } | http://cboard.cprogramming.com/cplusplus-programming/100419-3d-array-problem.html | CC-MAIN-2015-11 | refinedweb | 477 | 52.23 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.