Java Program To Get The Sum Of All The Divisors Of A Number
Here is a simple Java program to print the sum of all the divisors of an integer.
For example, suppose we have the integer 12 as input.
The divisors of 12 are 1, 2, 3, 4, 6 and 12.
The sum of all the divisors is: 1+2+3+4+6+12=28
So the output of our program should be like this: 28
Here is the program:
import java.util.Scanner;

public class Codespeedy {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        System.out.println("Enter the number:");
        int number = input.nextInt();
        int j = 0;
        for (int i = 1; i <= number; i++) {
            if (number % i == 0) {
                j = j + i;
            }
        }
        System.out.println("The Sum Of The Divisors is: ");
        System.out.println(j);
    }
}
“if (number % i == 0)”
means that if the number is divisible by “i”, then the statement “j = j + i” executes,
so “j” accumulates the sum of all the divisors found so far.
The loop continues until “i” reaches the value of the entered number. Thus it checks all the numbers, and those which divide the number evenly are added to “j”. Finally, the value of “j” is printed, which is the sum of all the divisors of the number entered by the user at runtime.
example:
Input: 15
Output:
Enter the number:
15
The Sum Of The Divisors is:
24
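The program above checks every candidate from 1 to n. As an optional sketch (not part of the original article — the class name DivisorSum and method sumOfDivisors are my own), the same sum can be computed by pairing each divisor d with number/d and scanning only up to the square root:

```java
// Sketch (not from the article): sum the divisors by pairing d with number/d,
// scanning only up to the square root of the number.
public class DivisorSum {
    public static int sumOfDivisors(int number) {
        int sum = 0;
        for (int d = 1; d * d <= number; d++) {
            if (number % d == 0) {
                sum += d;              // d is a divisor
                if (d != number / d) {
                    sum += number / d; // its paired divisor
                }
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfDivisors(12)); // prints 28 (1+2+3+4+6+12)
        System.out.println(sumOfDivisors(15)); // prints 24 (1+3+5+15)
    }
}
```

For small inputs both approaches give the same answer; the pairing version simply does far fewer iterations for large numbers.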
Functions and extensions in programming languages are very helpful when we need to get a result quickly, or to write less code and keep our project optimised.
But when, in your favourite programming/scripting language (JavaScript in this case), you want to use some specific function that exists only in PHP… well, in that case Locutus is going to be your solution, because its authors built JavaScript functions that give the same result as the original PHP functions (with some limitations).
For example, if you want to use the uniqid function in your Javascript project, you can do it by following these steps:
# Install the library
yarn add locutus
# Import the library
import uniqid from 'locutus/php/misc/uniqid';
# Use the library
const customId = uniqid();
All the functions are very easy to follow and use and, of course, very useful.
Introduction to Spring Batch Tasklet
Spring Batch tasklets are one of the two ways Spring Batch provides to implement a job, the other being chunk-oriented processing. To develop the project, we need to add the spring-batch-core dependency to a Spring Batch Maven project. In a Spring Batch project, Tasklet is an interface that performs a single task within one step. We can also run tasklets with a scheduler.
What is Spring Batch Tasklet?
- It covers the typical use case of setting up resources before, and clearing them after, the execution of a Spring Batch step.
- As we know, Spring Batch provides the following two ways to implement a Spring Batch job:
- Tasklets
- Chunks
- A Spring Batch job consists of one or more steps. A tasklet in Spring Batch represents the work done within a step.
- The Tasklet interface in a Spring Boot project contains a single method, named execute. The step calls this method repeatedly until the tasklet finishes or throws an exception.
- The Spring Batch framework contains implementations of the Tasklet interface; one such implementation is the chunk-oriented processing tasklet.
- Looking at the chunk-oriented tasklet, we can see that it, too, implements the Tasklet interface.
- As we know, batch processing in Spring Boot is the automated processing of huge amounts of data without the need for any human intervention.
- We use the Tasklet interface in a Spring Boot batch to do a single task in a single step. When a Spring Boot batch executes, a tasklet is a task that performs a single action.
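The execute-until-finished contract described above can be sketched without Spring. The little interface and enum below only mimic Spring Batch's real Tasklet and RepeatStatus types; they are illustrative stand-ins, not the framework's API:

```java
// Toy model of the tasklet contract: a step keeps calling execute()
// until the tasklet reports FINISHED. These nested types only mimic
// Spring Batch's Tasklet and RepeatStatus; they are not the real API.
public class TaskletContractDemo {
    enum RepeatStatus { CONTINUABLE, FINISHED }

    interface Tasklet {
        RepeatStatus execute();
    }

    // Drives a tasklet the way a step would; returns how many calls were made.
    static int runStep(Tasklet tasklet) {
        int calls = 0;
        while (tasklet.execute() == RepeatStatus.CONTINUABLE) {
            calls++;
        }
        return calls + 1; // count the final FINISHED call too
    }

    public static void main(String[] args) {
        // A tasklet that needs three invocations before it is finished.
        int[] remaining = {3};
        Tasklet t = () -> --remaining[0] > 0
                ? RepeatStatus.CONTINUABLE
                : RepeatStatus.FINISHED;
        System.out.println("execute() was called " + runStep(t) + " times");
    }
}
```

In real Spring Batch the step also passes a StepContribution and a ChunkContext to execute and wraps each call in a transaction; this toy loop only shows the repeat-until-finished behaviour.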
Using Spring Batch Tasklet
- The Spring Boot batch is executed at regular intervals. When the batch runs, it creates a batch job that calls the Spring Boot batch steps.
- The Tasklet classes are set up in the Spring Boot batch steps. A step contains one or more Java classes that implement Tasklet.
- All of the Tasklet objects in a step are run sequentially. Tasklets are used to perform a single task at a time.
- We use a tasklet when executing a single, granular task.
- A tasklet is simply a batch interface used to describe only one task or action.
- Implementing the Tasklet interface results in the creation of a class that runs when the batch runs. The Tasklet interface contains the execute method.
Project
The example below shows the steps to create the project.
- Create project template by using spring initializer
In the step below, we have provided the project group name as com.example, the artifact name as SpringBatchTasklet, the project name as SpringBatchTasklet, and selected Java version 8. We have also selected Spring Boot version 2.6.0 and defined the project as a Maven project.
In the project below, we have selected the Spring Web, Spring Batch, and PostgreSQL Driver dependencies to implement the Spring Batch project.
Group – com.example
Artifact name – SpringBatchTasklet
Name – SpringBatchTasklet
Spring Boot – 2.6.0
Project – Maven
Java – 8
Package name – com.example.SpringBatchTasklet
Project description – Project for SpringBatchTasklet
Dependencies – Spring Web, PostgreSQL Driver, Spring Batch
- After generating the project, extract the files and open the project using Spring Tool Suite.
After generating the project with Spring Initializr, in this step we extract the downloaded archive, open the project, and verify the batch dependencies in the pom.xml of the tasklet project.
Code –
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.batch</groupId>
    <artifactId>spring-batch-core</artifactId>
</dependency>
Creating Spring Batch Tasklet
Below are the steps; we are using the SpringBatchTasklet example project to create the tasklet.
Code –
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.core.io.Resource;

public class SpringBatchTasklet implements Tasklet {

    private static final Logger LOG = LoggerFactory.getLogger(SpringBatchTasklet.class);

    private final Resource dir;

    public SpringBatchTasklet(Resource dir) {
        this.dir = dir;
    }

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        LOG.info("Processing resource {}", dir.getFilename());
        return RepeatStatus.FINISHED;
    }
}
Example
Below is an example. We are using the SpringBatchTasklet project to create it.
Code –
@RunWith(SpringRunner.class)
@SpringBootTest(classes = { SpringBatchTasklet.class })
public class SpringBatchTaskletTest {

    private static Path csvFilesPath;
    private static Path testInputsPath;

    @Autowired
    private JobLauncherTestUtils jobLauncherTestUtils;

    @BeforeClass
    public static void copyFiles() throws URISyntaxException, IOException {
        csvFilesPath = Paths.get(new ClassPathResource("csv").getURI());
        testInputsPath = Paths.get("target/test-inputs");
        try {
            Files.createDirectory(testInputsPath);
        } catch (Exception e) {
            // the directory may already exist
        }
        FileSystemUtils.copyRecursively(csvFilesPath, testInputsPath);
    }

    @Configuration
    @Import({ BConfig.class })
    static class TestConfig {

        @Autowired
        private Job job;
    }
}
- Run the application –
Spring Batch Tasklet Scheduler
- Spring Batch is a free, open-source batch processing framework. Starting with version 3.x, it offers a scheduler to launch batch jobs.
- Using this scheduler, we'll set up a simple job with a tasklet which runs a select query on a PostgreSQL database table and prints the results.
- With the scheduler, the tasklet runs on a regular basis.
- The scheduler executes at the times we have configured in our application.
- We can schedule Spring Batch jobs using the Spring task scheduler, which is the key component for implementing scheduled tasks in our application.
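In a Spring project the scheduling itself would normally be done with the @Scheduled annotation and a JobLauncher. As a framework-free sketch of the fixed-interval behaviour described above, a plain ScheduledExecutorService can stand in for the scheduler (the class and method names here are illustrative assumptions, not part of Spring Batch):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Framework-free sketch of a fixed-rate scheduler: runs "launch the job"
// at a regular interval, the way a Spring scheduler would trigger a batch job.
public class SchedulerSketch {

    // Runs the task every periodMillis until it has executed maxRuns times.
    static void runPeriodically(Runnable task, long periodMillis, int maxRuns)
            throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(maxRuns);
        scheduler.scheduleAtFixedRate(() -> {
            if (done.getCount() > 0) {
                task.run();
                done.countDown();
            }
        }, 0, periodMillis, TimeUnit.MILLISECONDS);
        done.await();
        scheduler.shutdownNow();
    }

    public static void main(String[] args) throws InterruptedException {
        // Prints the message three times, roughly 100 ms apart.
        runPeriodically(() -> System.out.println("launching the batch job"), 100, 3);
    }
}
```

Replacing the Runnable with a call to jobLauncher.run(job, params) is the Spring equivalent; each run would need fresh JobParameters so the framework treats it as a new job instance.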
Conclusion
A Spring Batch tasklet's typical use case is setting up or clearing resources before and after the execution of a Spring Batch step. Tasklets are one of the two ways Spring Batch provides to implement a job, the other being chunk-oriented processing.
Hi everyone, I am running this code: in my dev box. All the previous cells went well, until this cell "Visualize Log Loss", where I got the following error:
ValueError                                Traceback (most recent call last)
<ipython-input> in ()
      8
      9 x = [i*.0001 for i in range(1,10000)]
---> 10 y = [log_loss([1],[[i*.0001,1-(i*.0001)]],eps=1e-15) for i in range(1,10000,1)]
     11
     12 plt.plot(x, y)

/media/volgrp/anaconda2/lib/python2.7/site-packages/sklearn/metrics/classification.pyc in log_loss(y_true, y_pred, eps, normalize, sample_weight, labels)
   1652             raise ValueError('y_true contains only one label ({0}). Please '
   1653                              'provide the true labels explicitly through the '
-> 1654                              'labels argument.'.format(lb.classes_[0]))
   1655         else:
   1656             raise ValueError('The labels array needs to contain at least two '
See my screen shot.
def log_loss(y_true, y_pred, eps=1e-15, normalize=True, sample_weight=None, labels=None):
    ...
    if len(lb.classes_) == 1:
        if labels is None:
            ...
In Jeremy's code, it is called this way:

y = [log_loss([1],[[i*.0001,1-(i*.0001)]],eps=1e-15) for i in range(1,10000,1)]

The labels param is not passed. Could anyone tell me which version of scikit-learn to install?
In the prediction we are passing 2 values so we have to update the y_true and also remove an extra [ ] around prediction like below:
y = [log_loss([1,2],[i*.0001,1-(i*.0001)],eps=1e-15) for i in range(1,10000,1)]
@ravikg thanks, I have used your update:

y = [log_loss([1,2],[i*.0001,1-(i*.0001)],eps=1e-15) for i in range(1,10000,1)]

and get the following output, which is weird.
My mistake, I couldn't understand the labels properly. The true label is 1 and the false label is 0, so it should be:
y = [log_loss([1,0],[i*.0001,1-(i*.0001)],eps=1e-15) for i in range(1,10000,1)]
It should give you the correct graph. For more details about log_loss with one class, see this doc:
log_loss
@ravikg, thank you very much | http://forums.fast.ai/t/dogs-cats-redux-error-y-true-contains-only-one-label/6137 | CC-MAIN-2017-51 | refinedweb | 342 | 67.15 |
Question 6 :
What will be the output of the program?
public class Test178 {
    public static void main(String[] args) {
        String s = "foo";
        Object o = (Object) s;
        if (s.equals(o)) {
            System.out.print("AAA");
        } else {
            System.out.print("BBB");
        }
        if (o.equals(s)) {
            System.out.print("CCC");
        } else {
            System.out.print("DDD");
        }
    }
}

Output: AAACCC. The cast does not create a new object: s and o refer to the same String, so s.equals(o) is true, and o.equals(s) dispatches at runtime to String's equals method, which is also true.
Question 7 :
What will be the output of the program?
Question 8 :
What will be the output of the program?
int i = (int) Math.random();
Math.random() returns a double value greater than or equal to 0 and less than 1. Its value is stored to an int but as this is a narrowing conversion, a cast is needed to tell the compiler that you are aware that there may be a loss of precision.
The value after the decimal point is lost when you cast a double to int and you are left with 0.
Question 9 :
What will be the output of the program?
class A {
    public A(int x) {}
}

class B extends A {
}

public class test {
    public static void main(String[] args) {
        A a = new B();
        System.out.println("complete");
    }
}
No constructor has been defined for class B, so the compiler supplies a default constructor; and since class B extends class A, that default constructor also calls super().

Since a constructor has been defined in class A, Java no longer supplies a default (no-argument) constructor for class A; therefore when class B's default constructor calls super(), the result is a compile error.
Question 10 :
What will be the output of the program?
int i = 1, j = 10;
do {
    if (i++ > --j) /* Line 4 */
    {
        continue;
    }
} while (i < 5);
System.out.println("i = " + i + " and j = " + j); /* Line 9 */
This question is not testing your knowledge of the continue statement. It is testing your knowledge of the order of evaluation of operands. Basically the prefix and postfix unary operators have a higher order of evaluation than the relational operators. So on line 4 the variable i is incremented and the variable j is decremented before the greater than comparison is made. As the loop executes the comparison on line 4 will be:
if(i > j)
if(2 > 9)
if(3 > 8)
if(4 > 7)
if(5 > 6) at this point i is not less than 5, therefore the loop terminates and line 9 outputs the values of i and j as 5 and 6 respectively.
The continue statement never gets to execute because i never reaches a value that is greater than j. | http://www.indiaparinam.com/java-question-answer-java-java-lang-class/finding-the-output/page1 | CC-MAIN-2019-13 | refinedweb | 423 | 62.78 |
Type: Posts; User: wfsolis
The max of (100000) was defined in the assignment guidelines, so I have to stick with that.
As far as sizeOfCounter goes, my instructor told us that when passing an array to a function, you must...
So, I just got done with my finished program. Thanks again for the help!
#include <iostream>
#include <cstdlib>
using namespace std;
//Declare prototypes//
void rolldie(int resultsOne[],...
Fantastic!
I knew that I was doing something wrong. Thanks so much!
Hi, I'm working on a homework assignment that asks me to roll two dice a user-given number of times, find the roll sums, and a few other things. I'm working on it one module at a time and I'm running...
Scholar Commons
Graduate School Theses and Dissertations USF Graduate School
6-1-2009
Myth management: The nature of the hero in Callimachus' Hecale and Catullus' Poem 64
Oraleze D. Byars
University of South Florida
Scholar Commons Citation
Byars, Oraleze D., "Myth management: The nature of the hero in Callimachus' Hecale and Catullus' Poem 64" (2009). Graduate School Theses and Dissertations.
This Thesis.
Myth Management: The Nature of the Hero in Callimachus’ Hecale and Catullus’ Poem 64
by
Oraleze D. Byars
A thesis submitted in partial fulfillment of the requirements for the degree of Master of Liberal Arts Department of Liberal Arts College of Arts and Sciences University of South Florida
Major Professor: John Noonan, Ph.D.
John Campbell, Ph.D.
Niki Kantzios, Ph.D.
Date of Approval: October 2, 2009
Keywords: Epyllion, Mythology, Hellenism, Medea, Theseus
© Copyright 2009, Oraleze D. Byars
Dedication This paper is dedicated to Dr. John D. Noonan.
Table of Contents
Abstract
Chapter One: The Hero – Most Hoped For, or Too Much Hoped For?
Chapter Two: The Hellenistic Age and the Rise of the Epyllion
Chapter Three: The Myth of Theseus
Chapter Four: The Theseus of Callimachus
Chapter Five: Hecale
Chapter Six: The Theseus of Catullus
Chapter Seven: The Voyage of Argo
Chapter Eight: Optimism in Alexandria, Despair in Rome
References
Chapter One: The Hero – Most Hoped For, or Too Much Hoped For? The Hecale by Callimachus and the Wedding of Peleus and Thetis, or Poem 64, by Catullus are thought by modern and ancient scholars alike to be among the finest examples of a literary genre called epyllion. They are further distinguished in the minds of many for being the first epyllion written (Cameron, 1995: 447) and the last surviving example: bookends of a uniquely Hellenistic tradition which capture many of the poetic tendencies of that period. It might be expected that Catullus, who followed Callimachus by two centuries, would be strongly influenced by his predecessor and, in fact, in many ways he was (Knox, 2007:166). The principal similarities are these: both employ elaborate, obvious and complex structures; they freely alter ancient tales and include versions not previously known (at least so far as we can tell by what remains to us); both customize their material by importing local topography and traditions; and they both feature labors of the mythological Theseus. This paper is going to consider their use of myth - in particular, how they used myth to present the hero. The primary focus will be on Theseus as he is common to both poems. His mythic tradition was long and well documented, and had already grown and evolved greatly over the course of several centuries before Callimachus and Catullus would add to it their own distinctive imprint. Mythology has always been about variant telling: by the inclusion, suppression, alteration and addition of material the tradition is refined and continued. This paper will argue that Callimachus chose an episode from the Theseus tradition with which to highlight and affirm the virtuous and positive nature of a 1
hero – one of the “more just and superior, the godly race of men-heroes who are called demigods” that Hesiod conceived of as populating the Age of Heroes (Works and Days, 156-173). Callimachus, by his treatment of Theseus, made known to the Hellenistic world that the heroes of old, those famously celebrated by the archaic poets and Classical tragedians, were glorious ancestors who continued to be worthy of poetic tribute even though the Greek world in which they had lived no longer existed. Catullus, on the other hand, took from the mythological stores an unappealing chapter in the life of Theseus which he employed to make the opposite point: he wanted to display the contemptible side of the hero and show the “evil and suffering lurking beneath the surface of the brilliantly enameled picture of the Age of Heroes” (Curran, 1969: 191). In addition to the analysis of their use of Theseus mythology, I will consider how the two poets employed other mythological material both to add meaning to their treatment of Theseus and to advance and explain their attitude toward the hero. In Callimachus, this will be a consideration of the eponymous Hecale; in Catullus, the allusions to Medea and her unhappy association with the voyage of the Argo. o nimis optato saeclorum tempore nati heroes, salvete deum genus! o bona matrum progenies, salvete iterum vos ego saepe meo, vos carmine compellabo. (Cat. 64. 22-23) O ye, in happiest time of ages born, hail, heroes, sprung from gods! Hail, noble sons of mothers, hail again! You will I oft toast with wine, you oft with song. (Goold, 1983) 1
1. All translations of Catullus 64 in this thesis will be Goold's.
Though these words were written by Catullus in poem 64, it was not he but his predecessor Callimachus who actually celebrated the Age of Heroes in his epyllion Hecale. The Alexandrian poet from Cyrene concocted a story which at every step put the thoughts and actions of a youthful Theseus in a good light to showcase a heroic ideal. This paper will show that Callimachus selected an episode from the mythological tradition which featured Theseus in one of his well known heroic labors. He then used this as the basis for a story of his own invention – Theseus’ overnight encounter with the old woman Hecale. While he associated Theseus within the epic tradition by means of Homeric vocabulary and allusion in order to underline his heroic character, he did not focus on the epic adventure: his victory over the Marathonian bull. Instead, in the manner which will come to typify the Hellenistic epyllion, Callimachus considered the hero in the humble surroundings of a peasant’s hut and in a recognizable Attic landscape. The argument of this paper, with its initial focus on Theseus, might lead one to think that he was the main focus of the epyllion. But it was, rather, the obscure old Attic woman whom Callimachus put in the forefront of the story. Because of this innovative treatment of Hecale and her poverty, Callimachus’ epyllion was long loved, admired and imitated. Certainly Hecale enhanced the favorable qualities of the young prince by dramatic contrast. Beyond this, she became a hero in her own right. For the hospitality she famously showed weary travelers in her lifetime, she was honored in death with a cult and the deme named for her. Callimachus reaffirmed and commemorated the traditional archaic hero in his handling of Theseus while introducing to Alexandria, Hecale - his concept of a Hellenistic successor. o nimis optato saeclorum tempore nati heroes, salvete! 3
“O ye, in happiest time of ages born, hail, heroes!” is the translation provided in the preceding paragraph for these lines of Catullus. But the lines can also mean: “hail heroes born in times too much hoped for” (Feldherr, 2007: 100; Debrohun, 2007: 295) and perhaps it is this latter translation that Catullus would prefer. This paper will argue that Poem 64 is, indeed, a song about heroes who were hoped for beyond what they deserved – not for whom there was the greatest hope. Catullus quite insistently emphasized the treachery and betrayal wrought by Theseus in the aftermath of his victory over the Cretan Minotaur. He would be charged with faithlessness and deceit in the matter of Ariadne and the cause of his father’s death. The middle half of the epyllion, which Catullus depicted on an ecphrastic nuptial coverlet, was spent detailing the transgressions of Theseus. The remainder of the poem encircles this section like a frame: the opening lines devoted to the sailing of the Argo to Cholchis which Catullus has as the occasion of the meeting of Peleus and Thetis; the balance on the wedding festivities at which the Fates prophesy, in a grim wedding song, the disaster named Achilles that the union will produce. Catullus used the opening Argonautica reference to connect the tragedy of Jason and Medea to both Peleus and Thetis and Theseus and Ariadne. The poet’s purpose in this is to show a broad company of flawed heroes. Then he concluded the poem with the Parcae’s wedding song which convicted Achilles of insensible slaughter, and added him to the offending group. Poem 64 undercuts the ostensibly happy occasion of a wedding with a grim narrative of a flawed Age of Heroes, a chronicle of heroic misdeeds which began with the sailing of the first ship and continued until the Trojan War (Curran, 1969: 191-192). 4
Chapter Two: The Hellenistic Age and the Rise of the Epyllion The precise date of the Hecale’s composition is not known; Hollis makes a guess of sometime in the 270s B.C. (Hollis, 1990: 13). More generally, but more important for the purposes of this paper, it was composed in the Hellenistic Age, that period of time after the death of Alexander the Great when the Greek world stretched from Italy in the West to India in the East. Hitherto, travel, if one left home at all, was driven by necessities such as food, war, and commerce, and, in any case, nostos (homecoming) was the expectation. Life for most Greeks in the Archaic and Classical periods was a predictable, and thereby comforting, routine determined by one’s community and by the polis rather than oneself. But the colonizing conquests of Alexander carried soldiers and their families far from home and left them in distant cities surrounded by foreign cultures without the likelihood of return. This first migratory wave was followed by others as thousands of Greeks travelled east to seek their fortunes in the remains of Alexander’s empire (Pollitt, 1986: 1). The Greek νόμος (custom), both public and private, of the Classical period was forced to give way and accommodate the influences of strange locales and peoples. The lives of these Greeks, without their familiar communal guidelines, were indeed ones of uncertainty - for some they were likely marked by tremendous anxiety. But for others, like those skilled in science and philosophy, art and literature, the Hellenistic Age, unfettered by tradition, was an exciting opportunity for freedom of thought and expression. Literature, in particular, was no longer centered on religious festivals and competitions and the Hellenistic writer’s Muse was now of his 5
choosing (Bulloch, 1985: 543). In Greek Egypt, stable well before the other areas of Alexander’s empire (Bulloch, 1985: 541), Ptolemy I and his successors used their wealth to attract and assemble at the Museum in Alexandria the best minds in the world; they also devoted themselves to the acquisition of all the world’s manuscripts for the great Library. In this way, Alexandria became the cultural capital of the Hellenistic Age. The third century scholar-poets who were gathered at the library had at their disposal all the literature of the day and from the past. As they organized, catalogued and edited (Bulloch, 1985: 542) the library’s holdings, the great traditions of the past - poems in epic meter, tragedy and old comedy - were at hand for scrutiny and examination in a way never before possible. The fact that literature was now available to be read, rather than simply heard, facilitated their desire for painstaking examination. These Hellenistic scholars studied the works of the ancient standard bearers with a new purpose and to a new degree. “It is almost as if the Alexandrians undertook to analyze and define the rules of the classic genres in order to be able to violate them all the more vigorously” (Cameron, 1992: 310). Having dissected these ancient works into their various components, the next step, one imagines, came naturally - they rearranged the pieces. For instance, an adjective which Homer solely applied to bronze, Callimachus used to described the heaven, or the simile the archaic poet put in the mouth of a goddess to describe her son, Callimachus has an old hermitess use of her own offspring. It was not just the work of Homer which Hellenistic poets ransacked for recondite jewels to weave into their own works, but of all their predecessors: Hesiod; Pindar; the fifth century comic and tragic playwrights Aeschylus, Sophocles, Euripides and Aristophanes; the authors of satyr plays and New Comedy. 6
It was during this time of artistic experimentation that Callimachus wrote the Hecale. Most modern scholars put the Hecale in a literary category called epyllion, a term not used, at least not with this meaning, by the ancients. Though scholars such as Gutzwiller and Hollis routinely acknowledge that Walter Allen, Jr. made credible arguments against its use, they, nevertheless, agree that the Greek and Latin epyllion is sufficiently distinct to apply it to a certain type of literature, and value the convenience of being able to do so more than any contrary argument Allen was able to make. I suspect Allen knew that he was fighting a losing battle which would explain the shrillness of his statement, “the only valuable article on the subject is ‘The Latin Epyllion’, by Professor C. N. Jackson. Professor Jackson really agrees with me that the type does not exist, and he might well have taken the final step which his evidence urges, a statement that the form is spurious” (Allen, 1940: 13). It is true that many of the poems are difficult to classify because they are idiosyncratic (Vassey, 1970: 38-43), and no one poem contains or displays every feature thought to be characteristic of the type. Additionally problematic is the fact that some of the characteristic markers are not unique to the epyllion. Indeed, features such as the dactylic hexameter meter, direct speech, simile and digression also characterize the archaic epic against which the first scholar-poets of the epyllion were revolting. In spite of these difficulties, there is something about the epyllion which wants recognition as a separate literary type. Gutzwiller, whose Studies of the Greek Epyllion has become something of a standard reference in the field, says “that an understanding of the Hellenistic epyllia must begin with this point, that the ancients conceived of these poems as epic, but epic written in the manner of the slender Muse of Callimachean poetics” (Gutzwiller, 1981: 5). She 7
has in mind here the famous rebuke Callimachus launched from the proem of his Aetia (Trypanis, 1958: 3-6) against the Telchines who faulted him because I did not accomplish one continuous poem of many thousands of lines on…kings or…heroes, but like a child I roll forth a short tale, though the decades of my years are not few. He went on to say that “poems are sweeter for being short…hereafter judge poetry by the canons of art, and not by the Persian chain” (Aetia, 1.17-19). Callimachus was not only referring to overall length here, but to other excesses, such as verbosity, as well. The best poets of Hellenistic epyllia chose every word with extreme care to display, in the compressed format of the epyllion, the extent of their learning as well as their familiarity with the language and style of their archaic ancestors. They especially looked to Homer as the consummate source for epic expression, simile, metrical patterns and hapax legomena, but drew learned allusions from every source imaginable and used vocabulary specific to certain trades, cults and cultures (Hollis, 1990: 5-10). Callimachus, in particular, favored the obscure and recherché and borrowed language from all over the Greek and non-Greek world. When the available stock of words did not suffice, he coined his own (Hollis, 1990: 13-14). The juxtaposition of epic vocabulary and neologisms is one of the many innovations of this new literary form. The diminutive scale of the epyllion meant that its story was likely to be but one episode or a narrow slice from a larger myth (Hollis, 1990: 23-26). The epyllion did not tell its story in an even and straightforward manner. The poet relied on flashback to fill the reader in on events of the past and prophecy to narrate or allude to future events which might take place well after the story at hand was concluded. Direct speech was 8
another very important technique which the poets of epyllion used to convey efficiently many of the important details of their story. A digression, whose relationship to the primary story was not always clear, and simile were also regular features. Simile, in particular, was used to develop the sense of the story without expanding its size. The poet, by a learned simile, was able to emphasize, explain or reinforce his words. He could also import additional meanings which might even contradict what he appeared to be saying. As will be discussed at length further on in this paper, in poem 64, Catullus inserted a troublesome doubt in the happy wedding day of Peleus and Thetis by referencing the tragic character of Euripides’ Medea. Meaning in the small compass of the epyllion would be derived not only from what was said, but what the poet left unsaid: there was “the assumption that the story is already familiar to the reader” (Townend, 1983: 25) and familiar in all its transmutations (Gaisser, 1995: 581). All this meant that the epyllion was not for the passive reader; the poet expected his reader to be as educated as he and to fill in many of the blanks in the poem. In the Hellenistic epyllion, important events of a familiar tale might be dealt with quickly or alluded to briefly while the poet focused on a personal fancy (Hollis, 1990: 25). To consider briefly an example from the Hecale, archaic tradition would have made Theseus’ killing of the Marathonian bull the focal point - the story which would be most fully developed by the poet. But Callimachus spent more time describing the varieties of olives that Hecale served Theseus for dinner than he did on the subjugation of the bull. With regard to thematic material, the epyllion was epic in an ostentatiously antiepic sort of way; epic altered to the tastes of a Hellenized world. Just as the subject matter of Hellenistic sculpture - the stooped and sagging body of an old man or the fleshy 9
fat rolls of an infant - was inconceivable in the Classical period when form was idealized, the Hellenistic epyllion often dealt not with gods and heroes, but with a more ordinary breed of people; and when it did treat gods and heroes it was with their mortal side showing (Gutzwiller, 1981: 9). There was an insistence on the commonplace and the quotidian, even as the ancient Greek myths always formed the backdrop. The example in the preceding paragraph where the meager foodstuffs of an old woman were treated with more interest than the details of an epic killing speaks to this. Gutzwiller calls this focus on the mundane a “lowering of the epic tone” (Gutzwiller, 1981: 5). The Heracliscus, by Theocritus, is another example of a Hellenistic adaptation, rather more comically realistic (at least as regards the behavior of the adults) than heroic, of a well-known myth. It is a look at the early home life of the baby Heracles as his parents, Amphitryon and Alcmene, and the household servants attempt to respond to a midnight crisis involving two monstrous snakes sent by Hera to kill him. By the time everyone had reached the nursery - “and lo! all the house was filled full of their bustling” (Theocritus Id. 24:54) - the ten-month-old infant had gleefully squeezed the life out of the ravenous beasts with his fat little fists. Both the example of Callimachus and that of Theocritus also demonstrate the tendency of the third century poets to pursue a less well known aspect of a familiar myth or to tell a story in a way never before told. We will see in this paper’s discussions of The Wedding of Peleus and Thetis, as the poem numbered 64 is popularly called, that this is also very true of Catullus. He departs on a number of occasions from any previous version of which we have evidence.
(Arthur Wheeler suggests that as a doctus poet Catullus always had an earlier Greek source and that no seemingly novel detail was of his own invention (Wheeler, 1934: 127) - a position which, as I argue, is not well supported
by the very clever imaginings which make up much of the poet’s work.) There survives little in the way of either Greek or Latin epyllia. From Callimachus survives the Hecale, the “most famous, and perhaps the best” example of Hellenistic epyllia (Gutzwiller, 1981: 46). From Theocritus we have the Heracliscus and the Lion Slayer (though his authorship of the latter is questioned) and from Moschus, who wrote about a century after Callimachus and Theocritus, the Europa and the Megara. In the Latin tradition there are, besides the sixty-fourth poem of Catullus, the Culex and Ciris in the Appendix Virgiliana. There are other extant examples which one or another scholar routinely includes in his own personal list. The Aristaeus episode in the fourth book of Virgil’s Georgics is often mentioned as are some of Ovid’s Metamorphoses. The Thirteenth Idyll, the Hylas, by Theocritus and some of Callimachus’ hymns are others. The inability to fix a canon highlights some of the difficulties with the epyllion as genre, as Walter Allen pointed out. However, we have knowledge of other Latin epyllia which have not survived, and the knowledge of these strengthens the basis for the category: the Io of Licinius Calvus, the Lydia and Dictynna - also called the Diana - of Valerius Cato, the Zmyrna of Helvius Cinna, and the Glaucus of Cornificius. Once these are added to the list, the numbers make for a genre more difficult to dismiss. There is the viewpoint that the writers of the Latin epyllion must have thought that they were copying a legitimate literary form (Hollis, 1990: 25). Indeed, there are traces of the Hecale in the pseudo-Virgilian Ciris as well as in several episodes of Ovid’s Metamorphoses which suggests that the Hecale must have served as a favorite example (Hollis, 1990: 25). Lyne wrote that it was their production of a Roman adaptation of the Greek epyllion which
made the above-named group a separate school. “It is in short an idiosyncrasy of the group, and the community of the group is thereby confirmed” (Lyne, 1978: 174). In a separate line of reasoning, Lyne concluded that the poets who made up this group were in fact those that Cicero dubbed the neoterics. One doesn’t have to go as far as Lyne does in defining the neoteric school only by its program of epyllia, but the composition of one of these did become a mark of caste (Wheeler, 1934: 80). Catullus and other poetae novi were young modernists attracted to the refined Alexandrian style. There was nothing so new about imitating Greek forms and meters. Rome despised herself for always looking to Greece for cultural inspiration but could not help herself (Johnson, 2007: 178). What W.R. Johnson supposes so disturbed Cicero with regard to the neoteroi (in addition to the fact that they relegated his poetry to the out-of-date heap) is his conviction that by rejecting Ennian versification and diction, they rejected what Ennius wrote as well; and that by their poetical contrivances and refinements, they disguised a want of matter, and in doing so, failed to preserve the “ethical codes and spiritual disciplines that make Romans Roman and that make Romans great” (Johnson, 2007: 178). Yet a history or a didactic epic was too big to permit the refinement and perfection they sought and, moreover, was irrelevant to their world - a world where Juvenal’s “sneer about bread and circuses would apply” (Wiseman, 1985: 4). Their poetry was an investigation of interiority caused by a Roman world in crisis (Johnson, 2007: 179). The Roman oligarchy in the last decades of the Republic, motivated by a perverted striving for dignitas which knew no bounds or sense of proportion, appeared intent on the destruction of self and country. Wiseman describes a ruinous social policy
where one’s dignitas was measured by the magnificence of one’s spending and where empty coffers were refilled at the expense of the empire. The nobiles were always short of the money they needed to bribe voters, subsidize friends and allies, secure marriages, curry favors, flatter the populace; “hence, debt, corruption and venality in Rome, oppression and extortion in the provinces” (Syme, 1960). One’s honor also depended upon the character assassination and the complete humiliation (often sexual in nature) of one’s enemies (Konstan, 2007: 335). The competition for influence among the nobiles was fierce and constant and there was a pervasiveness of invective and meanness in Roman society. In the opening pages of Catullus and his World, Wiseman warns his reader to set the book aside if graphic descriptions of public impalements, crucifixions, rackings, floggings and burning by boiling pitch are likely to disturb. Rome was an incredibly and notoriously cruel place and brutality was commonplace. Though these public punishments were used to keep the huge slave population under control, the constant presence of such horrors must have impressed the subconscious of all citizens. Wiseman points out that there was no police force to protect the Roman citizen from assault nor to which one might turn for aid. A man caught cheating with another man’s wife could be sexually assaulted by hired thugs to restore the dignitas of the aggrieved husband (Wiseman, 1985: 5-14). One has only to read the letters of Cicero to perceive the constantly shifting alliances, daily reevaluations of friendship and friendly association, and the pervasive insecurities with which everyone lived. From about 275 to 240 B.C., Callimachus lived in the cultural center of Hellenistic Greece, a place then found not in Greece at Athens, but on the north coast of Africa, in Egypt. The big myths, exaggerated psyches and oversized heroes of the
Classical and Archaic periods were out of proportion for a world recently fragmented by migration, clashing cultures, conflicting philosophies and new religions. The poetry of Callimachus, as a consequence, had its own narrow focus. He borrowed myths and heroes from the revered works of the past but reinvented or treated them in a leptotic style. Homeric poetry had given the Greeks, as they emerged from the dark ages, their identity; the Classical Age’s tragedies and comedies their moral compass. Callimachus, however, wrote not for a people but a few like-minded individuals. His poetry was designed to interest and amuse the educated and cultured audience of the royal court in Alexandria (Bulloch, 1985: 543). In addition, the Ptolemaic kings Soter (323-285 B.C.) and Philadelphos (285-246 B.C.) appeared to have demanded little in the way of poetic tribute from their poets (though Callimachus did compose poems to their queens). During the reign of Philadelphos, Egypt was prosperous and relatively stable and Alexandria was renowned for the excellence of her culture (Shipley, 2000: 200). Callimachus might have enjoyed a very good relationship with the court of Philadelphos, having possibly spent part of his childhood there as a page (Cameron, 1995: 4). Without the limitations of public performance and enjoying the goodwill of his rulers, Callimachus had an unprecedented opportunity for creative expression and was largely free to write as he pleased. The general situation was not much different for Catullus. He arrived at Rome around 62 B.C., already educated in Greek, from the Transpadane, an area more Hellenized in culture, conservative in morals and unapologetically energetic in the business of making money than his new home (Wiseman, 2007: 58-71). His family was then of equestrian rank and his father had a secure social standing in Verona. This, and the friendship of Cornelius Nepos and others of the Cisalpine region, made for his
smooth entry into the heart of Rome’s most sophisticated set (Fordyce, 1961: xii-xiii). Catullus’ situation was secure enough that even his humiliating poetical attacks on Julius Caesar’s sexual preferences were forgiven (Konstan, 2007: 72-84). His was not to be the public voice of speeches and letters, treatises and philosophical explanations. Instead, the poetry of Catullus was largely personal: wishes, recollections, grievances and whimsies made up his content.
Chapter Three: The Myth of Theseus
Central to both the Hecale and to the Wedding of Thetis and Peleus is the myth of the Attic hero Theseus. Theseus was an early figure in Greek myth and an enduring one, undergoing 200 years of transformations until he came to represent the greatness of democratic Athens as her just and merciful monarch. In the late eighth century B.C. he was among the heroes who predated the Trojan War. His desertion of Ariadne is mentioned in the Odyssey, the Cypria and in the poetry of Hesiod. According to a scholiast of the Iliad, the cyclic writers refer to his rape of Helen (Agard, 1928: 84) and the Nostoi tells of the Amazon Antiope betraying Themiscyra for love of Theseus who, at the side of Heracles, attacked the city. The Hesiodic Shield of Heracles portrays him in battle against the Centaurs (Aspis 178). These first portraits of Theseus describe a testosterone-fueled youth who left broken hearts and bodies in his wake or, as Agard said, “he is a typical hero in the age of heroes” (Agard, 1928: 85). At that time, he was hardly a uniquely Athenian figure and much of the early literary and artistic evidence for this Theseus of ambiguous character came from non-Athenian sources. Athens’s earliest need for a mythological hero was filled by Erechtheus and Cecrops. Hesiod’s Theseus would more naturally have been Thessalian (Kearns, 1989: 117) and he had, as well, connections to Troezen through his father Aegeus which eventually formed part of his Athenian back story. (In fact, Sourvinou-Inwood argues that the later amplification of the Troezen story - which included the coming-of-age episode of Theseus’ uncovering the γνωρίσματα left for him by Aegeus and of his answering Heracles, adventure for
adventure, as he journeyed overland to Athens - is part of the compromise Athens devised to appease Troezen, who wanted Theseus for herself (Sourvinou-Inwood, 1971: 99).) The dictator Pisistratus was the first to transform Theseus into an Athenian hero as he attempted to associate his own political policies with actions of Theseus. He went so far in manipulating a positive image of Theseus as to order that the uncontrollable passion of Theseus for Aigle be expunged from Hesiod’s poetry. He also had a favorable passage inserted in the Odyssey (Tyrrell & Brown, 1991: 161-163). After the Persian wars, Pherecydes also tried to do damage control by offering mitigating circumstances for some of Theseus’ worst behavior (e.g. the gods made him do it) (Mills, 1997: 18). Prior to the last quarter of the sixth century, some of the most popular and frequent representations of Theseus were his defeat of the Minotaur, abduction of Helen and Ariadne, and battle with the Centaurs (Sourvinou-Inwood, 1971: 98). During that time, according to Agard, Heracles, as the traditional athlete of the Greek people, appeared on the black figure vases eighty percent of the time. However, from 515 B.C. to the end of the Persian wars, the representations of Theseus almost equaled in number those of Heracles, and not only did the frequency of his image increase, but his likenesses, limited in the past to a few stock illustrations, were supplemented by three entirely new episodes: his battles against the Amazons, his travels from Troezen to Athens, and his trip to the bottom of the sea (Sourvinou-Inwood, 1971: 98). It was during this period that Theseus finally managed to shed the prevailing image of a youth who deserted Ariadne, raped Helen and carried off Persephone from the underworld and to become a national hero of Athens. The distinction between Theseus and Heracles grew sharper: the former, a youth of beauty and grace, a wily competitor who can defeat his mortal enemy by guile
or force; the latter, a mature monster-killing brute in the employ of a tyrant (Agard, 1928: 86-87). Bacchylides, who wrote at least two poems in the early years of the Delian league about a princely Theseus, was likely the main literary source for “an authentic hero designed to bring fame to Athens” (Davie, 1982: 25). Finally the image became that of Theseus as synoecist, the unifier of the twelve Attic kingdoms under one capital. This was the final key to his ascendancy in Attic myth (Diamant, 1982: 38). Kearns says that the “timeless, static, un-epic, un-episodic” Erechtheus or Cecrops could never have “sufficed as the mythological expressions of Athenian self-awareness” (Kearns, 1989: 118). There had to be a struggle, the possibility of failure, a real enemy of the kind that Theseus faced in the Pallantidai, a rivalry for the Athenian throne which originated a generation earlier between Aegeus and his brother Pallas, to satisfy the “desire for a heroic figure who would express in himself the developed forms and ideals of Athenian political life” (Kearns, 1989: 118). When, as recounted by Plutarch, Cimon brought the bones of Theseus back from the Persian occupied island of Scyros and buried them in the middle of the city, his cult was established formally. His tomb” (Theseus 14). The hero cult of Theseus which developed in Athens after the Persian wars honored a very different sort of hero than the early archaic battlefield warrior. The expanded fifth century meaning of the word ἥρως had little resemblance to the Homeric usage, in which the word was simply a sign of respect given to those of the highest class and signaled little more than nobleman (Kearns, 1989: 2). It was expected that heroes of cult would continue to do in
death what they had done in life: “help their friends and harm their enemies” (Knox & Fagles, 1984: 257). The fully formed Athenian hero, the mature Theseus, was celebrated in sculpture on the Parthenon, Hephaestaeum, and temples at Bassae, Sunium and Olympia. He was probably best met, however, in the tragedies of Aeschylus, Euripides and Sophocles. Except in Euripides’ Hippolytus (an instance in which many would be willing to forgive his momentary rashness), he is represented in those plays as noble, measured and generous. In Sophocles’ Oedipus at Colonus, Theseus puts an end to the horrible and unrelenting demands by the chorus that Oedipus revisit his tragedy from the beginning and immediately offers his aid, admitting that the situation of no man is secure (565-569): Never, then, would I turn aside from a stranger, such as you are now, or refuse to help in his deliverance. For I know well that I am a man, and that my portion of tomorrow is no greater than yours. He also tries to restore to Oedipus some of his dignity by reminding him (561-5) that I myself also was reared in exile, just as you, and that in foreign lands I wrestled with perils to my life, like no other man. Yet he is insistent as to his limitations and respects the gods’ greater powers; he will do whatever he can to help the suppliant but, “I am only a man, well I know, and I have no more power over tomorrow, Oedipus, than you” (639). In the Suppliants, Euripides provides a statesman who gives thanks for order, reason and counsel (201-204): He has my praise, whichever god brought us to live by rule from chaos and from brutishness, first by implanting reason, and next by giving us a tongue to declare our thoughts, so as to know the meaning of what is said.
who champions the middle class (239-245), For there are three ranks of citizens; the rich, a useless set, that ever crave for more; the poor and destitute, fearful folk, that cherish envy more than is right, and shoot out grievous stings against the men who have anything, beguiled as they are by the eloquence of vicious leaders; while the class that is midmost of the three preserves cities, observing such order as the state ordains. and who gives democracy (349-354) But I require the whole city's sanction also, which my wish will ensure; still, by communicating the proposal to them I would find the people better disposed. For I made them supreme, when I set this city free, by giving all an equal vote. Of the Suppliants Shaw says “the distinction between the courage of the youth and the counsel of the old is central to the play…Whereas courage is expressed in action, counsel is expressed in speech” (Shaw, 1982: 5). Shaw is describing here the essential difference between Adrastus and Theseus in the play. I think it is also very like the difference between the action-oriented Theseus of the early mythology and that of the later when thought supplanted deed as his response to a problem.
Chapter Four: The Theseus of Callimachus
The Callimachean Theseus who found sanctuary in the rustic cottage of Hecale was conceptually a Theseus taken from the later myths, his image already polished to the brilliant shine befitting Athens’s mortal representative. As I will show, the Alexandrian poet wanted no part of the mythological tradition which put Theseus in an unfavorable light. He selected stories from previous accounts or, where the earlier tradition was lacking, fabricated material which would comport with his purpose of presenting this heroic age figure in terms that were exclusively excellent. Yet the poet did not present Theseus in the rarified landscape of the battlefield as did the cyclic poets or as the justice-giving ruler of Athens as did Sophocles and Euripides. He put Theseus in the Attic countryside, a realistic backdrop that the Hellenistic epyllion favored. This was one of the singular hallmarks of epyllia: the view of gods and heroes in a menial setting. Callimachus likely got many of the particulars of his Theseus myth from Philochorus of Athens, the same Atthidographer cited by Plutarch for his life of Theseus (Trypanis, 1958: 176). A patriotic Theseid, which perhaps appeared around 510 B.C., in addition to the plays of Sophocles and Euripides, likely offered inspiration as well. Satyr plays and comedies would have been good sources for other material, particularly for Scyron and Cercyon, two despicable highwaymen Theseus encounters, and for the rustic and homelier touches (Hollis, 1990: 5-9). Though little remains of the Hecale, the following chronicle of Theseus has been pieced together. The epyllion famously opens not with Theseus, but Hecale: “once on a hill of Erechtheus there lived an Attic woman” (Hollis, fr. 1) “and all wayfarers honored her for her hospitality; for she kept her house
open” (Hollis, fr. 2). It then switches, as we know from the Milan Diegesis, a first or second century A.D. papyrus which gave scholars a prose summary of the poem, to a scene at the palace of Aegeus in Athens upon the unexpected arrival of Theseus. His stepmother Medea identifies Theseus before Aegeus does and tries to poison him (Hollis, fr. 4) - an inhospitable reception which balances the later hospitality of Hecale (Zetzel, 1992: 169). Just as he is about to take the fatal drink his father recognizes him by the γνωρίσματα, tokens of recognition, left for that purpose in Troezen, and shouts a warning (Hollis, fr. 7). Then follows a flashback (one of the literary devices that the poets of epyllia regularly used to expand upon their narrowly focused stories) to Troezen and a review of a speech given by Aegeus to Aethra, who was then pregnant by him with Theseus. He commanded her to take the child when he was of age to a hollow stone underneath which, were he strong enough to lift it, he would find the γνωρίσματα, a sword and soldier’s boots (Hollis, fr. 9-10). Aegeus keeps Theseus close after this, fearing to lose his son only just recently recovered (Diegesis Hecalae). Theseus, however, wants to establish his own reputation and begs his father’s consent (Hollis, fr. 17) to rid the Tetrapolis of Marathon of the bull which has caused them so much grief (Diegesis). Denied permission, he sets out secretly at night and in this way arrives at Hecale’s poor hut (Diegesis). In the fashion of Homer, Hecale postpones inquiry as to Theseus’ identity until after she has observed certain courtesies: getting him dry, clean and fed (Hollis, fr. 27). Then, as the scene moves from narrative to a dialogue (another of the poetic techniques used by writers of epyllia to backfill their small-scale story), Theseus identifies himself and says that he is on his way, with Athena as his guide, to fight the Marathonian bull (Hollis, fr. 40).
Regrettably, this is all that remains of
Theseus’ speech. The remainder of the extant conversation, which is spoken by Hecale, will be taken up in the next section of this paper. Returning to the narrative which followed the conversation, Theseus sleeps that night in a bed near the fire (Hollis, fr. 63), and Hecale watches him as he rises early in the morning (Hollis, fr. 64). This might logically be when she makes a vow to sacrifice to Zeus in return for his safe passage, a fact we have from Plutarch (Theseus 14). The next fragments deal with the capture of the bull and the return of Theseus to Athens. “having bent to the earth the terrible horn of the beast” (Hollis, fr. 67) “he was dragging (the bull) and it was following, a sluggish wayfarer” (Hollis, fr. 68). Theseus called out to the amazed onlookers and said “let the swiftest go to the city to bear this message to my father Aegeus – for he shall relieve him from many cares” (Hollis, fr. 69). Then he came unawares upon Hecale’s funeral. “But upon finding her dead unexpectedly, and after lamenting how he was cheated of what he had expected, he undertook to repay her for her hospitality after death. He founded the deme which he named after her, and established the sacred precinct of Zeus Hecaleos” (Diegesis). “Philochorus certainly associated the institution of the cult [of Hecale] with Theseus and probably he mentioned the hospitality. Wherever the deme of Hecale was situated, the connection with the expedition to Marathon seems inevitable” (Hutchinson, 1988: 56). Throughout the epyllion, Theseus is the ideal representation of a youthful hero bold in action, determined in mind, thoughtful in heart. From the first chronological task set before him, the lifting of the rock placed years earlier by his father as a rite of passage, to the last, when he established honors for Hecale, his intelligence, character and actions were in accord, befitting the future king whom Thucydides described as a man “of
equal intelligence and power” (Thuc. 2.15.2). Plutarch says that after he recovered the γνωρίσματα, Theseus eschewed the safer sea route to Athens pressed on him by Pittheus, the king of Troezen, and chose instead the perilous overland journey for the purpose of having Herculean-style adventures (Plutarch, Theseus 14). After he reached Athens he was motivated by a hero’s desire for honor and fame to disobey the command of his father to remain secure at home, and sought instead the Marathonian bull. His first thought following his victory over the bull, however, was the peace of mind of his father and so he ordered messengers to run to the old king with assurances of his safety. His relationship with his father - both as he defied him and as he honored him - shows a hero’s spirit: boldly determined on the battlefield, yet remembering his familial responsibilities. “The humane and dutiful feeling is strikingly combined in this scene with formidable heroism…Yet his heroism is not a matter of physical prowess alone: his message to his father exhibits noble brevity and a proud restraint” (Hutchinson, 1988: 62). This brings to mind the Theseus of Catullus 64 (Hollis, 1990: 221) when, in a similar situation, after his successful Cretan adventure, he failed to remember his promise to his father to signal his safe return and caused the grief-stricken man to jump to his death. One has to wonder if Catullus had the earlier model in mind. Lastly, Theseus returned to the hill of Erechtheus to honor Hecale, faithful also to his duty to her who had given him shelter. It is plainly evident that Callimachus took his conception for Theseus from the later legendary traditions, well after the time when his mythological narrative was narrowly focused on his amorous and battlefield adventures. The educated reader of Callimachus’ version of the myth would have noted that he avoided any mention of scandal or indiscretion; there are no troublesome hints (which the writers of epyllia
used to fill out their story without adding length) to Theseus’ early reputation or character defects: no learned similes to the Ariadne stories, no fleeting allusions to the rapes of Helen or Persephone. That he is still very much a youth is especially highlighted by the contrast to the aged Hecale (whom he addresses as maia, or mother, the same word Odysseus uses with Eurycleia (Od. 19.482, 20.129)), but he is clearly a youth en route to kingship. There are many examples of the favorable treatment Callimachus gives Theseus. First, as he sets forth against the bull of Marathon he operates under the protection of Athena, who, of course, also got Odysseus home safely from Troy (Hollis, fr. 17, 40). Agard notes that in the fifth century, Athena, who had previously “sponsored Heracles, now often appears in the company of Theseus” (Agard, 1928: 87). Second, both Pfeiffer and Trypanis speculated that a kingly man from Aphidnae, whom Hecale recollects as meeting years earlier (Hollis, fr. 42), might have been his father Aegeus (Hollis, 1990: 180). This is another event with Homeric precedent (i.e. Helen and Telemachus in Book Four of the Odyssey [4.138]): an older person meets a young prince and, having also met the young prince’s father at the same age, notices the likeness between the two. Hecale says “horses [brought] him from Aphidnae, looking like…Zeus’ sons…” and she remembers his “beautiful…mantle held by golden brooches, a work of spiders” (Hollis, fr. 42). Even if the kingly man on horseback is not Theseus’ father, Pfeiffer could still be correct, as Hollis thinks is likely, that Hecale is comparing Theseus to this man and, thus, associates Theseus with a son of Zeus. One can imagine Theseus in this passage as Bacchylides earlier did, wearing a “tunic and a thick Thessalian mantle…a youth he is in his earliest manhood…So vigorous, so valiant, so bold…” (Bacchylides, Ode 18). A Hesiodic demigod, one might also think
to add, as it was certainly the sight of a demigod that caused the townspeople to cast down their eyes in fear and respect when Theseus came along leading the bull: “…when they saw it they all trembled and shrank from looking face to face on the great hero and the monstrous beast” (Hollis, fr. 69). Hollis wrote, “in spite of his youth, victory has made Theseus a full-fledged hero, an ‘ἄνδρα μέγαν’ (a great man)” (Hollis, 1990: 220). As a further tribute to the Athenian hero, Callimachus had the people shower him with leaves, a phyllobolia that the south and north winds combined, “even in the month of falling leaves”, could not match (Hollis, fr. 69). This was the customary way to congratulate athletic victors (Trypanis, 1958: 192) and recalls Theseus’ prowess as an athlete, one of those favorable mythic characteristics which were increasingly featured in the art work after 515 B.C. It might be remembered that as an athlete, he was particularly known as a wrestler - Pausanias would later credit him with the invention of the wrestling style which required thought over brute strength (Pausanias, Description of Greece 1.39.3). Indeed, as will be taken up later on in this paper, we know that Hecale told Theseus that her son had been killed by Cercyon, the brigand who was famous for challenging passers-by to a wrestling match which they always lost and so lost their lives. Callimachus additionally underscores the positive side of Theseus by closely associating him with the Homeric tradition of hospitality. Indeed, his role in the hospitality scene repeats that of Odysseus, one of Homer’s greatest heroes. I will only take up this very important theme briefly here, reserving the greater part of the discussion for the later section on Hecale. Yet it is worth noting that in the same way as the hero of the Odyssey, en route to his most formidable challenge, arrived disguised at Eumaeus’
poor quarters, Theseus came incognito to Hecale’s hut - his status as the future king and unifier of Athens yet concealed by his youth. It isn’t known when Hecale discovered his identity. That he would become Athens’ greatest king, a fact known to the reader all along, she would never know. Hollis notes that both itinerants, the king of Ithaca and the future ruler of Athens, are more distinguished than their hosts; Eumaeus and Hecale are currently of reduced circumstances, though each had enjoyed a certain prosperity in former days (Hollis, 1990: 341-343). In Odysseus, Callimachus has the perfect model for the wise leader which Theseus will become when king - the lord of Ithaca was most famous for his guile and cunning, solving problems not solely by force but also by his mental acuity. I offer a final argument to support my thesis that the dignity bestowed on the Athenian prince by Callimachus was extraordinary and significant. It pertains to the literary style and purpose of Callimachus for which he was famous and which was almost axiomatic: he was an avowed contrarian, a lover of the unexpected dislocation, an aficionado of the poetic prank. It was not Callimachus’ poetic style to present the character of Theseus as he did: flawless; in fact, perfectly flawless. He does not allow himself even the slightest intimation of shortcoming, though legend offered rich material. This determinedly serious handling of Theseus is decidedly opposite to the standard practice of Hellenistic poetry, in general, and of Callimachus, in particular. That Callimachus delighted in the unexpected and unsettling reversal is abundantly shown in his poetry. To an inquiring friend en route to see the statue of Zeus at Olympia, “widely regarded as the greatest achievement of sculpture and the most sublime representation of a deity” (Hutchinson, 1988: 26), Callimachus perversely provided its measurements and
other dry details before further lowering the focus to the crassest element: "as for the cost (I know you are greedy to learn this too from me)" (Iambus 6). In the middle of an elaborate explanation of a pre-nuptial rite, and just on the verge of giving the aition, Callimachus breaks off mid-sentence in a story about Acontius' love for Cydippe to say that it would be impious to recount a myth of Hera's sleeping with Zeus out of wedlock (Aetia, 3: fr. 75). The heroics of Heracles' fight with the Nemean lion are parodied in the parallel battle waged by his peasant host Molorchus against the household mice (Aetia, 3: fr. 55-59). Considering examples from the Hecale, Aegeus is shown as feeble and dimwitted, almost allowing Medea to poison Theseus when he first arrived at the palace (Hollis, fr. 7) (Hollis, 1990: 144). The victim of the epic conquest, the Marathonian bull, is described with comic understatement as a "sluggish wayfarer" as he is dragged by Theseus back to Athens (Hollis, fr. 69). Athena bans crows forever from the Acropolis after they foolishly brought her the bad news that the daughters of Cecrops had uncovered her hidden association with Hephaestus (Hollis, fr. 72) and Apollo turns the formerly white raven black for a similar delivery of unwanted news (Hollis, fr. 74). Even Hecale is gently teased for her elderly ways. Of her loquacity the poet says "the lips of an old woman are never still" (Hollis, fr. 58). I consider it very important that Callimachus, going completely against the grain, never toys with Theseus. He alone is handled with care and treated only with dignity. Callimachus meant to present a heroic age hero in the best terms. For him, even though the epic as a means of poetic expression was exhausted, the great epic heroes of the past were superior men, and continued to have pride of place in the far-flung Greek world.
This explains the exceedingly and purely favorable portrait of Theseus, a portrait without reversals, allusions or other poetic
devices to alter the serious mood he had created. The poet believed in the inherent goodness of the hero Theseus. Even so, his exploit was not the main focus of the epyllion. The epic adventure was just the “peg” on which hung the principal story of Hecale (Bulloch, 1985: 564) - the framework, maybe even the pretext, for the poet to showcase an old peasant woman. This was the defining shift of the epyllion: the rejection of a long, elaborate hexameter song with an emphasis on archaic era “kings and heroes” (Aetia, Book 1. 4-5) in favor of a poem which featured in the central role a “low” character (Zanker, 1977: 77).
Chapter Five: Hecale

Although Theseus was an important part of the Hecale, Callimachus put in the forefront of the epyllion an old woman on the last day of her life. A major part of the poem appears to have been an account of her hospitality and the conversation she had with Theseus. This is the preoccupation with the routine and everyday which typified the Hellenistic epyllion. Even the heroic bull killing takes place as part of the simple rural existence of Hecale rather than in the grand context of the royal Athenian palace of Aegeus. A quickly developing storm brings Theseus to the hut of Hecale (Hollis, fr. 18). This circumstance is itself a perfect example of a primary characteristic of the Hellenistic epyllion, the diminution of epic tone or sensibility: the pivotal meeting is not brought about by a vengeful or propitious god, but a very ordinary case of bad weather (Hollis, 1990: 6). Hecale sets about at once making her young guest comfortable, having him sit on a humble couch (Hollis, fr. 29). She takes down wood she had put away to dry long before and cuts it so that she can set water to boil for cabbage and wild vegetables (Hollis, fr. 31-33). She also prepares water to wash his feet (Hollis, fr. 60). She adds three varieties of olives to the meal and loaves of bread in abundance, the type which are saved for herdsmen (Hollis, fr. 35). Callimachus is lavish with detail in this narrative section, lingering at length on the homely particulars and using the language specific to the daily routine and accouterments of impoverished rusticity. In this way he drives to the forefront Hecale's poverty and emphasizes the unassuming material he treats. Often he calls the utensils, food and furnishings by their local names, underlining, "by using down-to-earth Attic terms for everyday objects",
that "Hecale and Theseus were Attic heroes" (Cameron, 1995: 443). Trypanis notes how successful this treatment of lowly hospitality was in antiquity; among those influenced, Ovid is especially recognized as mining and imitating the Hecale for several stories in his Metamorphoses (Trypanis, 1958: 177). These details, which concentrate attention on the scanty furnishings, type of wood used for the fire and various colors of the olives, also show sentimentality for the ordinary, both on the part of Callimachus and his sophisticated urban audience, a sentimentality which came from its remoteness from their normal experience (Bulloch, 1985: 543-563). Next Hecale and Theseus begin their conversation, probably a lengthy dialogue which would have provided the reader the background stories of their lives. Unfortunately, little of this remains today. She first questions Theseus about his background and the reason for his journey, and then replies in turn to his inquiries about her (Hollis, fr. 42). She mentions oxen she used to own and describes the man, presumably her husband, who came on horseback from Aphidnae looking like a son of Zeus (Hollis, fr. 42). She emphasizes that she did not come from a poor family and raised her two sons "on dainties, drenched in warm baths and in this way they ran up like aspens in a ravine" (Hollis, fr. 48.7). Then, in an address to one of her sons whom the outlaw Cercyon had earlier killed, she asks "was I refusing to hear death calling me a long time ago, that I might soon tear my garments over you too" (Hollis, fr. 49). She vigorously curses the bandit, but Hollis notes the charm of the disclaimer she added to her threat: "and, if it be not a sin, (may I) eat him raw" (Hollis, fr. 48.7). Hollis thinks it likely that Theseus would here have told her that he killed Cercyon (Hollis, 1990: 209); thus, though she will die without knowing that Theseus prevailed over the bull, she at least has the satisfaction of knowing that the
despised Cercyon is already dead. Plutarch reported that it was to seek opportunities for heroic behavior of exactly this type that Theseus preferred the dangerous overland journey to the more secure sea voyage from Troezen to Athens (Theseus 14). At the end of their conversation, Hecale tells Theseus that she will sleep on a couch in a corner of the hut, probably giving Theseus the bed nearest the fire in the same way Eumaeus did for Odysseus. From her corner bed, she sees Theseus arise on the following morning to continue his journey to Marathon. When he returns a day later, Hecale is already dead (Hollis, fr. 79-80): …whose tomb is this you are building?...Go, gentle woman, the way which heart-gnawing worries do not traverse…Often, good mother…will we remember your hospitable hut, for it was a common shelter for all. The woman who had long given welcome to travelers is here set on her own journey. The praise Theseus offers her is humble, crafted thus by Callimachus to explain the value of Hecale's heroics in language as simple as was her life (Hutchinson, 1988: 59). Callimachus probably made up this story of Hecale, there "lying before Callimachus no tradition on the life" of this old peasant woman (Hutchinson, 1988: 57). The epyllion would have provided an aition for the deme and festivities established in her name - historical honors for one who was otherwise almost unknown. Beyond this, it painted a picture of an Attic woman's poverty and loss which were bitterly received at the end of a life that had begun with more promise. There are moments of humor, especially those provided by the reminiscences of the 500-year-old crow in a poetic digression, but the general tone would have been rather serious and lacking in the "genial wit and childlike charm" that Gutzwiller says is a hallmark of the epyllion
(Gutzwiller, 1981: 4). The figure of Hecale would surely have set that of Theseus into sharp relief; she was an old and sentimentalized peasant woman while he was a spirited hero at the beginning of a promising career. Because Theseus and Hecale share what will be the last day of her life, the bond of affection shared by this disparate pair goes far beyond the Homeric models of Odysseus with Eumaeus and Eurycleia. The bond of Hecale and Theseus is based in a poignant pathos heightened by several factors. First, Hecale must have been supremely gratified to learn that her guest had killed her son's murderer. A substantial portion of the fragmentary remains of her conversation with Theseus is made up of her recollections of her sons and her unabated grief and anger at their deaths. Then again, she was likely deeply affected by seeing in Theseus the likeness of her dead husband, both thankful for and saddened by the unexpected reminder. Finally, Hecale and Theseus must have planned and anticipated a reunion for the time after Theseus fought the bull. She died awaiting his return - Plutarch says that it was her intention to pay Zeus tribute for the hero's safe return (Theseus 14). Further, the Diegesis states that her death "belied" the hope of Theseus to see her once again. This loss could even have been the first unwelcome outcome experienced by Theseus in his young life. The ultimate disparity in their incongruous association was her death and his future. Though the poet was sensible of the good effect the pairing made, Hecale was not meant only to heighten the future promise of the youth by comparison with her loss and her diminished position. Callimachus was after something far more from Hecale than a supporting partner in an odd relationship. Indeed, it was her name he gave to the poem,
she whom he put in the foreground of the epyllion. He took a traditional myth and reworked the grand theme to produce a homelier story appealing to and reflective of the interests of a new age (Pollitt, 1986). Broadly speaking, scholars explain the reworking – the prominent status Callimachus gave Hecale in his epyllion – in one of two ways: either as a "diminution" of archaic epic sensibility (Gutzwiller, 1981) or as a creation of a new type of hero (Zanker, 1977). Both explanations are dependent on her famous hospitality, the φιλοξενία with which the epyllion opens. It has long been recognized that the hospitality of Hecale closely parallels the Homeric hospitality scene, drawing especially upon the humble treatment given Odysseus by Eumaeus. Callimachus copied the epic hospitality scene so that Hecale might stand within that noble tradition as an Alexandrian successor to the Homeric legends of Athena-Mentes in Ithaca, Telemachus in Sparta and Pylos, Odysseus and Circe, Odysseus and the Phaeacians, and Odysseus and Eumaeus. Hospitality is an "archetypally epic virtue" (Cameron, 1995: 444). Hutchinson says that hospitality combined with poverty is a mark of morality in the Odyssey, and that the poverty of Eumaeus as he entertains Odysseus "heightens his goodness" (Hutchinson, 1988: 12). Zanker makes it clear, however, that there is an important distinction between Hecale and Eumaeus. In the first place, Eumaeus (and Eurycleia) are not the central characters in the Odyssey; they are sympathetic but minor characters. More to the point, they are not truly "low" characters because, as faithful servants, they are part of Odysseus' royal family (Zanker, 1977: 74). The elevation of Hecale to title character is part of Callimachus' unique program for the new epic form. Scholars such as Hollis, Hutchinson and Gutzwiller see a lowering of the epic tone in the hospitality episode of the Hecale. They point to the many ways Callimachus
shifts the "balance from the heroic to the unheroic while still producing a recognizably epic poem" (Cameron, 1995: 445). Much of the language used to describe the hut and its contents comes from Old Comedy; by mixing in such words, the Alexandrian poet deliberately lowers the nobility and epic seriousness inherent in the Homeric concept of the host-guest relationship. Hollis notes many specific examples (Hollis, 1990: 5-15). The word for couch, for instance, ἀσκάντης, upon which Hecale made Theseus sit, almost certainly comes from Aristophanes' Clouds (Nub. 633) (though it might be noted that Callimachus used the standard Homeric verb, ἕζομαι, 'to make someone sit'). σιπύη (bread bin) and μετάκερας (warm water) are two other nouns which describe the epic-based hospitality offered by Hecale but whose source is Old Comedy. Ancillary to an inclusion of non-epic vocabulary to describe an essentially epic virtue is the poverty of expression in the Alexandrian epyllion (Hollis, fr. 29-36.4-5, 38, 39, 48.5, 60): …she made him sit on the humble couch…and she took down wood stored away a long time ago…dry wood…to cut…she swiftly took off the hollow, boiling pot…she emptied the tub, and then she drew another mixed draught…olives which grew ripe on the tree, and wild olives, and the light-coloured ones, which in autumn she had put to swim in brine...and from the bread-box she took and served loaves in abundance, such as women put away for herdsmen.
This language of the Hellenistic poet finds its roots in realistic Attic rusticity. It lacks the lofty, epic richness of the Homeric model it recalls (Od. 14.48-79): So saying, the noble swineherd led him to the hut, and brought him in, and made him sit, strewing thick brushwood beneath, and on it spreading the skin of a shaggy wild goat, large and hairy,
his own sleeping pad…he went to the sties, where the tribes of swine were penned. Choosing two from there, he brought them in and killed…
To consider other means by which it is argued that Callimachus lowered the epic tone, take the following examples of language and allusion. Callimachus describes Hecale as wearing a "wide hat, stretching out beyond the head, a shepherd's felt headgear". The first word which Callimachus uses for 'hat' is καλύπτρη, the very word used by Homer for the veil worn by Hecuba, Circe and Calypso. Then he elaborates (downgrades, actually), offering πίλημα, a plain, pressed wool hat popular in Thessaly (Gutzwiller, 1981: 54) and "technically the precise word from the life of a peasant farmer" (Zanker, 1987: 209). While displaying his pedantry, Callimachus smoothly transforms epic splendor - the veil of queens and goddesses - into the ordinary working-class headgear of peasants. The simile uttered by Thetis in the Iliad about her son Achilles, that "he shot up like a sapling", was echoed by Hecale, who says of her sons, also destined to die young, that "these two of mine shot up like aspens" (Hollis, fr. 48.7). By this reference, Callimachus puts Hecale squarely in the middle of the Homeric tradition; by the very same reference, however, the reader is reminded how great her distance is from it. Hecale's prosperity is a local affair – the fate of her sons can't compare with the deaths of the Trojan heroes, her wealth with that of the Achaean kings, her grandness with that of a goddess (Hutchinson, 1988: 58-9). Nor can her entertainment of Theseus measure up to the entertainment Eumaeus provided Odysseus: sweet wine in ivy-wood bowls, fat beasts singed on a spit over a hot fire, a thick hairy
goat skin to keep out the night's cold. The echoes of Homer serve simultaneously to elevate Hecale even as they underline the meagerness of her condition. Though epic associations abound, Callimachus lightened the tone with a variety of vocabulary not found in epic. He made use of allusions that connected his characters to the heroic tradition at the same time they segregated them from it. Rather than lofty grandeur there is poor simplicity. Hecale's achievements and tragedies are personal, not national. Callimachus transformed the richness of the hospitality scene at Eumaeus' hut by replacing the skin of a shaggy wild goat, large and thickly fleeced, with a tattered rag, and platters of roasted meats heaped high with bread and olives. Callimachus wrote an epic hospitality scene rich in Homeric associations but with the plain face of Hecale at its heart. There are scholars, however, who argue that the treatment of Hecale goes beyond an innovative use of realistic and mean material. Bulloch says that Callimachus' concentration on the more ordinary details of his heroic material "was not a diminution of the grand themes of tradition, but rather an essential reworking of convention, and the establishing of a new realism" (Bulloch, 1985: 564). Callimachus' dislike of archaic epic was widely known (Zanker, 1977: 68). He wrote in one of his epigrams "ἐχθαίρω τὸ ποίημα τὸ κυκλικόν", "I hate cyclic poetry" (Epigram 28). The Hecale was to be his radical expression of how epic should appear in the third century B.C. and this meant a full break with the earlier tradition, not a reworking of it. Hecale was a new hero, not a version of the old (Zanker, 1977: 68). The realistic detailing of her impoverished life and home, along with allusive comparisons to Homer, does not diminish the nature of a heroic age hero, but, rather, illustrates the quality of a new generation of hero. She
becomes a hero when Theseus institutes her cult and names a deme after her (Cameron, 1995: 445) and her heroic nature is treated with seriousness and respect though the language is appropriate to rusticity. According to this line of reasoning, the use of vocabulary from Old Comedy does not so much lower the epic tone as it gives the poem Attic flavor (Hollis, 1990: 196). Zanker points out that had Callimachus represented her with the elevated language and grand expression of epic, the result would have been burlesque (Zanker, 1977: 77) - a parody, in the manner of the story of Heracles' mouse-hunting host Molorchus, of her heroic quality. That Callimachus meant Hecale to be a true hero explains why he named the epyllion after her, why he put her at the forefront of the poem, why he lavished attention upon her and why he gave her honors (Zanker, 1977: 71). Her heroism was not of the pure, undiluted kind that characterized Theseus. Her lips, like those of an old woman, were never still and her wildly spoken curse against Cercyon included an escape clause. Yet this gentle teasing simply made her a more realistic and sympathetic hero according to Zanker, and did not undermine her dignity (Zanker, 1977: 72). The epyllion was a rejection of the "basic Classical axiom that epic, like tragedy, deals with great deeds of great men… in Aristotelian terminology spoudaia by spoudaioi" (Cameron, 1995: 443). "The Hecale is the first time in extant Greek literature that a φαῦλος is elevated to a main role in an epic poem" (Zanker, 1977: 77). I think that the Alexandrian poet had both outcomes in mind when he wrote the Hecale. Certainly the use of impoverished realism in language and context and the admixture of vocabulary from Old Comedy lowers the epic tone of Theseus' adventure. Callimachus and the other intellectuals at the Alexandrian court were weary of high-flown epic seriousness. The ideals of the archaic world were out-of-date in 250 B.C.
when social realism was stylish and people were charmed by the weird and novel (Pollitt, 1986: 143): hence, the popularity of a sympathetic but sweetly humorous picture of the impoverished Hecale. The Hecale certainly found ways to undercut "conventional heroic interpretations" (Gutzwiller, 1981: 5). When Hecale echoes the aspen simile of Thetis, the distance between herself and the goddess, and between her own sons killed by highway thugs and the Greeks who fought at Troy, is underlined rather than bridged. Yet the Cyrenean poet clearly connects her hospitality to the Homeric tradition and wants this connection to bestow upon Hecale hero status (Cameron, 1995: 444). For him, she is a hero fitting for a new era. Her heroism is different from the epic heroism of Theseus (which he plainly admired and considered still meaningful in the Hellenistic Age); hers is a modernized heroism based in realism. It does not replace epic heroism, but stands alongside it. This is the correct interpretation of the epyllion's odd pairing.
Chapter Six: The Theseus of Catullus
At the time following the Persian wars when the Theseus myth was increasingly refined to represent the glory of Athens appropriately, there was a series of efforts to repair Theseus' reputation, particularly with regard to his treatment of Ariadne: "that Dionysus took her from him, that like Aeneas in later time he received divine orders to abandon his sweetheart, that he left her on the island intending to return but was prevented by the wind, that he returned to Naxos after her death and instituted a festival in her honor" (Wheeler, 1934: 129). Catullus ignored these patriotic and sanitized enhancements of the myth in his epyllion, Poem 64. He chose a labor of Theseus to feature in the poem, a choice of subject which acknowledged both his debt and his allegiance to his Alexandrian predecessor, but presented him in the harsh light of faithlessness and misery. Such a representation of Theseus, "the bringer of tragedy," was one way by which Catullus would show that "the Age of Heroes was not uniformly praiseworthy" (Bramble, 1970: 23). It might be helpful if at this point I explained that although the Theseus episode is exceedingly important in the poem, presented at length and fleshed out with multiple flashbacks, direct speech and abundant narrative detail - especially compared to the allusiveness which characterizes much of the poem - it is actually a digression from the main story from which the poem takes its modern name, The Wedding of Peleus and Thetis. The digression has long been singled out, and not unduly: at 215 lines of poetry it constitutes more than half the length of the entire poem. In addition to its size, it is remarkable because it appears, quite incongruously as it deals with infidelity, betrayal
and desertion, on the coverlet of the nuptial bed as an ecphrasis. Wheeler has noted that Catullus tells the digression segment with more "zest" (Wheeler, 1934: 128) than he does the main story and this fact, combined with the physical prominence and original handling, has guaranteed the digression enormous and sustained interest through the years. As the digression opens, Catullus' immediate focus is on the anguish of the deserted Ariadne as she watches Theseus sailing away, "leaving unfulfilled his empty pledges" (59). He describes Ariadne in emotionally wrenching language as she mourns her loss, her clothes falling unnoticed from her body to the sea (70). Catullus then flashes back from her current pain to tell the background circumstances of her present situation, beginning with Theseus' resolve to rescue Athens from further payment of human tribute to King Minos and ending with the maiden's sad but determined departure from her family with Theseus – the basic Cretan adventure. When Catullus returns to the digression's "present" time, he gives Ariadne, as she watches Theseus' ship sail off, 69 lines in which she mourns her fate, curses the faithless Theseus and begs vengeance for her betrayal. As the king of the gods himself nods assent to her prayers, a nod like that described in Homer's Iliad (1.528-530), the waters upon which Theseus sails turn stormy and the promise Theseus earlier made to his father – to set a white sail if he should return safe from Crete – slips forgotten from his mind. Then Catullus flashes back to a time before Theseus sailed to Crete, a time when he had just recently been restored to his aging father. Here Aegeus makes a long speech in which he despairs at the possibility of again losing Theseus and makes his son promise to signal his survival as soon as he comes into sight of Athens' hill. Hollis has noted that were the Marathonian bull substituted for the Minotaur here,
we would have the speech which Aegeus must have uttered to Theseus in the Hecale but which is not among the surviving fragments (Hollis, 1969: 32). When Catullus returns to the digression-present, the reader "sees" Aegeus as he sees (Gaisser, 1995: 597) his son's ship still flying the dyed sails, the promise forgotten. Imagining the worst, he hurls himself from the precipice, so that when Theseus enters the house and finds it darkened in mourning he "himself received such grief as by forgetfulness of heart he had caused to the daughter of Minos" (247-8). Catullus then returns over the sea, which Theseus has just sailed, to Ariadne, where she still stands stricken on the beach, "gazing out tearfully at the receding ship" (249-250). The case Catullus makes for Theseus is almost, but not entirely, negative. Catullus grants him without mitigating comment his initial heroic virtus – his willingness to die for Athens so that others might not and his eagerness to "win either death or the prize of praise" (102). He also gives him a handsome and heroic countenance: sanctus puer (95), flavus hospes (98), ferox (247); a charming voice: blanda vox (140); and a winning demeanor: dulci forma (175). However, as Wheeler points out, these are "essential to a love story" (Wheeler, 1934: 129) and serve to make Ariadne, as well as Aegeus, more pitiable and tragic characters. I would also call attention to the fact that at least one of these seemingly favorable characteristics hints, by its multiple meanings, at the deception to come and suggests the flaw in the hero Theseus. The adjective blanda, while having the innocuous meanings of attractive and charming, also has darker overtones and more sinister connotations like coaxing, seductive and insidious (Oxford Latin Dictionary). The remainder of the digression is an extensive and clear-cut indictment of Theseus.
In her shoreside lament, Ariadne charges Theseus as perfidus (faithless) (132-133), neglecto numine divum immemor (mindless of the gods' will) (134), periuria (given to perjury) (135), crudelis (given to cruelty) (136, 175), nulla clementia (merciless) (137), immite pectus (hard-hearted) (138) and pectus imum (mean-hearted) (138). She even questions his humanity, wondering if his mother could have been a lioness (154) (Curran, 1969: 179). The fault-finding characterization of Theseus is not limited to the speech of Ariadne - though there are scholars who argue that Ariadne's and the poet's is, in fact, a shared voice (Daniels, 1973: 408). Konstan, who makes a distinction between the speeches of the poet and Ariadne (Konstan, 1977: 45), points out the following censures made in his own voice: Catullus writes of Theseus' irrita promissa (useless pledges) (59), immiti corde (ruthless heart) (94), oblito pectore (forgetful heart) (208), and mente immemori (forgetful mind) (248). Lafaye suggested that Catullus was the first to link the death of Aegeus directly with the forgetfulness of his son (Konstan, 1977: 45, citing Lafaye, 1894: 175). (Gaisser takes Lafaye's logic a step further, saying that Catullus was the first to make Ariadne the cause of Aegeus' death (Gaisser, 1995: 604).) Beyond the direct criticism of Theseus, Catullus appears to credit his victorious conquest of the Minotaur at least partially to the prayer made by Ariadne. The first word following her prayer ("not unsweet were the gifts, though vainly promised to the gods, which she pledged with silent lip") is nam (the epexegetical for) which explains the result of Ariadne's sweet offering to the gods (105-111): For as an oak or a cone-bearing pine with sweating bark, when a vehement storm twists the grain with its blast, and tears it smashing over far and wide all it meets: so did Theseus overcome and lay low the bulk of the monster vainly tossing his horns to the empty winds.
The prayer "blurs the focus on Theseus' virtus…Ariadne's love and the gods' consent are among the causes of Theseus' victory" (Konstan, 1977: 41). Just as it is plain that Callimachus preferred the complimentary image of Theseus which was advanced in the aftermath of the Persian wars, it is equally obvious that Catullus' portrait of Theseus disregarded the fifth-century model in favor of the earlier archaic mythology: the unrefined Theseus whose good reputation was rooted in but limited to his battlefield exploits. Even that measure of success, however, could elicit "a feeling of horror and not an appreciation of heroism" (Daniels, 1973: 100). The horror of the battlefield was clearly expressed in the epithalamium for Peleus and Thetis which spent twenty-eight lines singing of slain sons, rivers of blood, headless maidens and grief-stricken mothers (343-371). The Catullan code of heroism definitely did not include excessive zeal in combat. Catullus is emphatic in his indictment of Theseus. He is charged again and again as forgetful, cruel, and faithless. These are the defects of a man who lacked the virtutes heroum (the virtues of heroes) which Catullus had promised, ironically it seems, that the nuptial cover would display (50-51): This coverlet, broidered with shapes of ancient men, with wondrous art sets forth the worthy deeds of heroes. Konstan, arguing that the poem "lays bare the negative aspect of heroism", suggests that indicat here means not sets forth or show but, rather, expose or unmask and thus insinuates the inglorious behavior of Theseus which would be described within (Konstan, 1977: 26, 40). Theseus, as perfidus and immemor in his relationship to the
gods, his love, and his home, is an "unheroic hero" (Daniels, 1973: 97). The epyllion is, moreover, explicit in charging Theseus with the ruin of three homes (though he might be credited for creating one if we consider the more distant ecphrastic scene of Bacchus coming to marry Ariadne). The first was the domus of Ariadne and her family. Catullus describes Ariadne as "still nursing in the soft embrace of her mother" (88-89) when "she was taken from her father" (132). Ariadne says that it was her own germanus (brother) (150) whom she helped Theseus slay and that he was stained with fraterna caede (the blood of her brother) (181). She laments that, abandoned, she has no home to which to go and sees her life's end on a deserted and remote island (184) where everything looks like death (187). In consequence, she prays to the Eumenides to bring ruin upon Theseus and his own family, a prayer which is fulfilled by the nod of Zeus. The second domus Theseus destroys, then, is that of his father (246-248):
sic funesta domus ingressus tecta paterna
morte ferox Theseus, qualem Minoidi luctum
obtulerat mente immemori, talem ipse recepit.
Thus bold Theseus, as he entered the chambers of his home, darkened with mourning for his father's death, himself received such grief as by forgetfulness of heart he had caused to the daughter of Minos. Funesta, as well as meaning 'in mourning', can mean deadly and fatal - Theseus has made his own home uninhabitable by forgetting the promise he made to his father. For this transgression, he received the harshest punishment Catullus could administer: not only the loss of his paternal home, but the blame for it besides. Then, of course, there is the
domus promised to Ariadne that never came to be. In his own voice Catullus called Theseus the coniunx (husband) (123) of Ariadne. In her lament, she said Theseus had pledged her conubia laeta and optatos hymenaeos (joyful wedlock and desired espousal) (141). Ariadne even confessed that she would have become a part of Theseus' domus as a slave, so desperate was she. This has as its model Apollonius' Argonautica, where Medea offered to follow Jason even were it only as a sister or daughter (Wheeler, 1934: 144). Scholars have noted that Catullus borrowed many elements appropriate to an epithalamium, both in his description of Ariadne (her chaste couch, her place at her mother's breast, the many colorful flowers to which Catullus compares her) (86-95) and in the tearful separation from her family as she leaves to join his (117-121). Indeed, these elements may all be found in Catullus' epithalamium, poem 61. I admit that the promises, so full of meaning for Ariadne, would not have made for a legally binding betrothal, sponsalia, under Roman law (Manson, 1910). But when the father of the gods himself agreed that Theseus should be punished for his lack of fides to Ariadne, Catullus "gives to personal love all the moral power of the family bond" (Konstan, 1977: 79). By the death of Aegeus, Catullus clearly shows that he rejects the premise of a heroic code which permits cruelty in the pursuit of personal glory (Harmon, 1973: 330). The last domus Catullus does not emphasize because he is reluctant to destroy the negative image he has crafted of Theseus. Yet Wheeler believed that the poet wanted to let a ray of hope shine through for Ariadne, even a hope tinged with danger (Wiseman, 1985: 181), so that woven into the coverlet is the god Bacchus, in the company of his frenzied followers, burning with love for her. Wheeler says that this is an example of "Alexandrian art at its best" (Wheeler, 1934: 130).
Besides the theme of domus, there have been many other theories given for the inclusion of the digression, with its themes of betrayal and oath-violation, in what appears on the surface to be a happy wedding tale. Wheeler provides an overview of the theories beginning with Ellis, who saw no link between the two; the English philosopher Hodgson, who thought that the “glory of marriage” was the common ground; Lafaye, who attributed the connection to an Alexandrian fondness for contrast; and Drachman and Pascal, both of whom believed that two separate Greek poems had been combined. As for his own position, which I find as unappealing as the ones just mentioned, Wheeler wanted the coverlet itself to be the reason, crediting Catullus as combining “two well-known forms of poetic technique. Both are as old as Homer, but Catullus is the first poet to combine them in this way” (Wheeler, 1934: 131-148). He is speaking, of course, of combining the digression and the ecphrasis. While I agree that a digression presented as an ecphrasis is unique and imaginative, I cannot agree with Wheeler that it was Catullus’ main reason. Fordyce expressed the viewpoint of many in saying “that a connexion between the two stories is to be found in the contrast between happy marriage and unhappy love” (Fordyce, 1961: 274). My own stance is allied with the more recent positions of Curran, Bramble, Gaisser, and Konstan, who tend to associate the digression strongly with the frame story and to see them as an integrated, or at least interconnected, whole designed so that the reader might “observe the light they cast on each other” (Curran, 1969: 174). In fact, the ecphrastic digression enfolds the wedding couch and joins the wedding story to the Theseus-Ariadne story physically as well as thematically (Curran, 1969: 181). The Theseus-Ariadne inset was meant to show defects in the hero. Elements of the wedding story did the same, as the next section of this paper will make
clear. With the very first lines of the epyllion the poet suggested that something is wrong; by the time he painted the wedded couple’s offspring as a cold-blooded killing machine he had left little doubt as to his intentions: the frame story and digression are meant to go hand-in-hand to show the inglorious side of the heroic age hero.
Chapter Seven: The Voyage of Argo
Poem 64 famously opens with an allusion to the voyage of the Argo en route to the eastern end of the world to take the Golden Fleece from Aeëtes (1-11): Pine-trees of old, born on the top of Pelion, are said to have swum through the clear waters of Neptune to the waves of Phasis and the realms of Aeetes, when the chosen youths, the flower of Argive strength, desiring to bear away from the Colchians the golden fleece, dared to course over the salt seas with swift ship, sweeping the blue expanse with fir-wood blades, for whom the goddess who holds the fortresses of city-tops made with her own hands the car flitting with light breeze, and bound the piny structure of the bowed keel. That ship first hanselled with voyage Amphitrite untried before.
The poet goes on in the next lines to say that only on a certain day of that journey, and on no other, did mortals see the bare-breasted sea-nymphs rising from the waves to marvel at the first ship to plough the sea and did Peleus catch sight of Thetis and fall in love with her and she did not disdain his love (12-19). It has long been recognized (Konstan, 1977: 3) that it is only in this version by Catullus that the mythic meeting of Peleus and Thetis occurs as the Argo sails to Colchis. Of course it is a feature of the epyllion to tell a story in a manner not previously known or, in any case, less well known, and it is highly characteristic of Hellenistic erudition to reformulate earlier poetic efforts. In this way the poet not only displays his learning but can either, depending upon how he manipulates the earlier material, affiliate with or distance himself from his predecessors. It is also true that Hellenistic erudition did not end with the poet but
included and needed a doctus reader as well. Gaisser calls the person who is “trained to look for allusive clues in the text and is both knowledgeable enough to recognize them and subtle enough to construe their meaning” a neoteric reader, and claims that the poet intended his poetry for a reader willing to apply such skills to the poem (Gaisser, 1995: 581). That Catullus had a number of models, both Latin and Greek, for the opening lines of the poem has long been admitted and explored. Euripides’ Medea, Apollonius’ Argonautica and Ennius’ Medea Exul are all recognizable in Catullus’ proem (Thomas, 1982: 145). Thomas argues that the main purpose of Catullus in selecting the voyage of the Argo to open the poem is that it is so dense with poetical antecedents that the doctus poet could not resist referencing them before presenting his own “superior version” (Thomas, 1982: 163-164). Certainly this argument has merit; Catullus must have indeed relished the vast poetical possibilities inherent in the Argo legend as told by Euripides, Apollonius and Ennius. Yet I agree with other scholars who contend that Catullus’ arrangement was not designed foremost as a polemical exercise or even as an erudite reformation of a familiar myth. The main reason was thematic: he wanted to insert Medea with all her unhappy associations into the story (Curran, 1969: 185). By rearranging the normal order of the Argo myth in a way that no earlier poet had ever done, Catullus deliberately put the wedding of Peleus and Thetis within the context of the voyage of the Argonauts - and thus with the unpleasant Jason and Medea myth - in order to adumbrate the hideous wrongdoings which would result from the offspring of the union (Konstan, 1977: 3). In fact, the signposts to the Medea myth of Ennius and Euripides are so prevalent and pronounced in the first 18 lines that the neoteric reader not possessed of the modern name given to the poem would not know until line 19 that he
was not reading a story of Jason and Medea but, rather, of Peleus and Thetis (Gaisser, 1995: 581). At lines 19-21, with the emphatic threefold polyptoton, the reader discovers that he has been misled and that the true subject will be the wedding of Peleus and Thetis (Gaisser, 1995: 581): tum Thetidis Peleus incensus fertur amore, tum Thetis humanos non despexit hymenaeos, tum Thetidi pater ipse iugandum Pelea sensit. Then is Peleus said to have caught fire with love of Thetis, then did Thetis not disdain mortal espousals, then did the Father himself know in his heart that Peleus must be joined to Thetis. At this point, Catullus’ reader would recognize the chronological alteration which made the wedding a result of the Argo’s sailing, know that the reversal was intentional and meaningful, and understand that the poet had signaled that he could expect a non-traditional telling of the wedding story. Now the reader would realize that Medea is meant to haunt the festivities and to insinuate that the wedding itself might not be without defect (Bramble, 1970: 21). The flaw, Catullus would make clear, is their celebrated offspring, Achilles, and the disproportionate carnage he was going to wreak at Troy as he cut down her sons with the same thoroughness and mechanical precision as a farmer crops yellow ears of corn (353-355) - his thirst for blood not slaked until his burial mound was graced by “the snowy limbs of the slaughtered maiden” (364-365). Curran cuts short the objection that a disapproving reaction to the Achilles passage “represents an anachronistic imposition of alien values and sensibilities upon the heroic point of view”. This is exactly the view expressed by Skinner who called Achilles a “conventional hero” and excused his “disquieting brutalities” as acceptable under a more “primitive heroic
code” (Skinner, 1976: 53). Curran had already taken the position, anticipating this sort of mitigation, that in the aftermath of Euripides and the Alexandrians, no poet, least of all the urbane Catullus, could accept uncritically the brutal savagery of Achilles (Curran, 1969: 191). The juxtaposition of two maidens, one the future mother of Achilles, the other his victim, in the epyllion’s epithalamium is designed to shock and discomfort. The following are the final stanzas of the song of the Parcae. They represent the dark and the light of the Age of Heroes, the happy ideal and tragic reality (366-381): …the high tomb shall be wetted With Polyxena’s blood, who like a victim falling under The two-edged steel, shall bend her knee and bow her headless trunk. Run, drawing the woof-threads, ye spindles, run. Come then, unite the loves which your souls Desire: let the husband receive in happy bonds the Goddess, let the bride be given up – nay now! – to her eager spouse. Run, drawing the woof-threads, ye spindles, run. When her nurse visits her again with the morning Light, she will not be able to circle her neck with Yesterday’s riband; nor shall her anxious mother, Saddened by lone-lying of an unkindly bride, give up The hope of dear descendants. Run, drawing the woof-threads, ye spindles, run. The Medea myth, which Catullus used to highlight the dark element of the age of heroes, is not easily covered in brief fashion. In addition to the fact that it is quite old, much of the archaic Argonautic poetry has been lost (Bremmer, 1997: 86), and elements of the story occur in a geographical area so wide that it has been argued that there were two separate Medeas or that a later character supplanted earlier versions (Griffiths, 2006: 30-32). Early literary accounts of the Medea myth were the late seventh century poetry
of Mimnermus (fr. 11a), the sixth century conclusion of the Theogony by the pseudo-Hesiod (956-1002), and the seventh or sixth century anonymous Carmen Naupacticum (Bernabé Pajares & Olmos Romera, 1996: fr. 7 Davies). In addition to these direct mentions of Medea, there are references in later works (especially in Pausanias’ Description of Greece, which cites works by Hellanikos, Kinaithon and a poem, Korinthiaka, by Eumelos) that indicate a substantial written tradition dating from the archaic period which no longer exists (Griffiths, 2006: 16). The extant versions have in common several details. One is the importance they give to the role of Aphrodite in the affairs of Jason and Medea; Medea does not act so much as she is acted upon. Her divine ancestry, skill with drugs and magic, and her foreignness are also repeated themes (Graf, 1997: 31-33). In the late sixth century the lyric poet Pindar made a notable shift in the tradition by inserting doubt and ambiguity into the previously straightforward equation which minimized Medea’s personal responsibility. In his fourth Pythian ode (211-250), the only complete pre-Hellenistic version of the Colchian story which remains, Aphrodite’s agency in Medea’s abduction is less certain - Medea may be a consenting partner to her own seizure (Graf, 1997: 29). The tragedy which Euripides wrote in 431 B.C. greatly develops this tension between Medea-as-partner and Medea-as-pawn, and it is the former, wanting to make complete Jason’s debt to her, who loudly claims responsibility. As Medea and Jason argue about the degree of his obligation to her, she maintains that she saved his life (476), she killed the snake that guarded the fleece (482), she left her family of her own accord (482), and she killed Pelias and destroyed his house (486-487). Jason admits some assistance, but attributes to Aphrodite and Eros Medea’s complicity and the success of the voyage (526-531). It is
also in the fifth century that fratricide, the murder of Apsyrtus, is added to the myth. Euripides (Med. 160), Sophocles (fr. 343 Radt) and the mythographer Pherecydes (Jacoby, 1923: fr. 32.3) each mention this crime. Yet it is especially Euripides who “gives Medea her canonical identity: the woman who kills her children in vengeance when her husband deserts her…it was his heroine who became the point of reference for later versions” (Boedeker, 1997: 127). The Medea myth, like the Theseus myth, is episodic. It is possible to point to a Colchian chapter, a Corinthian chapter or an Athenian chapter in Medea’s story just as Theseus’ legend may be accessed by his labors or his transgressions or by other systems. However, Euripides arranged the earlier material to create a character that linked a compelling personal tragedy to a reply so unnatural and repulsive that no subsequent treatment of Medea could escape its influence. She would forever be guilty of fratricide, murder and infanticide, an enduring heroine both pitiable and evil. By the innovation of having Peleus and Thetis meet as a result of the sailing of the Argo, Medea became the poet’s invited guest at the wedding and brought, as Catullus intended, the abundant associations of faithlessness, betrayal, desertion, murder and misery that were well-known from Euripides and repeated in Apollonius and Ennius. The purpose of the Medea allusion, to insinuate a less-than-glorious outcome of the wedding of Peleus and Thetis, is reinforced by its connection to the Theseus/Ariadne story - itself, as this paper has shown, a troubling narrative in a wedding poem. The outlines of both stories are analogous (Konstan, 1977: 68): each begins with a sea voyage and a perilous mission (the one to Colchis for the golden fleece, the other to Crete to kill the Minotaur); in each, a local maid falls in love with the hero and assists him in his quest
– even to the point of killing a brother (Apsyrtus, the Minotaur); the girl leaves her family to go with the hero; the hero abandons the girl (in Corinth, on Naxos); the deserted maid reproaches the hero; the hero suffers the loss of his own family (Jason, his children and bride; Theseus, his father). In addition to the broadly stroked similarities between the Theseus/Ariadne and Jason/Medea mythologies outlined above, Catullus directly models Ariadne’s expression of suffering on the earlier Greek and Latin speeches of Medea. The Medea of Euripides, for one example, opens with a maid saying (1-8): Would that the Argo had never winged its way to the land of Colchis through the dark blue Symplegades! Would that pine trees had never been felled in the glens of Mount Pelion and furnished oars for the hands of the heroes who at Pelias’ command set forth in quest of the Golden Fleece! For then my lady Medea would not have sailed to the towers of Iolcus, her heart smitten with love for Jason. (Kovacs, 2001)
In Catullus’ epyllion, Ariadne expresses this same wish in her lament (171-174): Iuppiter omnipotens, utinam ne tempore primo Cnosia Cecropiae tetigissent litora puppes, indomito nec dira ferens stipendia tauro perfidus in Creta religasset navita funem Almighty Jupiter, I would the Attic ships had never touched Cnosian shores, nor ever the faithless voyager, bearing the dreadful tribute to the savage bull, had fastened his cable in Crete… By modeling Ariadne’s lament on that of Euripides (and on the nearly identical Latin
model of Ennius - Medea Exul 253-261 - who had also followed the Greek original) Catullus connects the betrayal and treachery of Theseus to Jason and the Argonauts and correlates their failings. That Medea’s tragedy found a voice in Ariadne may be seen in another example from Poem 64 (177-181): nam quo me referam? qua spe perdita nitar? Idaeosne petam montes? at gurgite lato discernens ponti truculentum diuidit aequor. an patris auxilium sperem: quemne ipsa reliqui respersum iuuenem fraterna caede secuta?
For whither shall I return, lost, ah, lost: on what hope do I lean? Shall I seek the mountains of Crete? But barring them with broad flood the stormy waters of the sea lie in between. Shall I hope for the aid of my father, the father I deserted of my own will, to follow a lover stained with my brother’s blood?
Catullus’ model is clearly Ennius (Medea Exul 284-5) (Zetzel, 1983: 259): Quo nunc me uortam, quod iter incipiam ingredi? Domum paternamne anne ad Peliae filias? Whither shall I turn now? What road set out To tread? Towards my father’s home, or what? To Pelias’ daughters? (Jocelyn, 1967) Zetzel argues that Catullus’ use of the Medea allusion in both the opening lines of the epyllion and in the ecphrasis undermines Thomas’ thesis that the Argo myth, traditionally unconnected with the marriage of Peleus and Thetis, was employed for the multiplicity of versions which offered the doctus poet an irresistible opportunity to display his erudition.
Zetzel suggests that, without even considering the content of the poem, “the use of the same model in both parts of the poem would assist in binding the narrative and the ecphrasis together” (Zetzel, 1983: 259). Catullus intended to show a pattern of weakness, to expose an intrinsic flaw in the Age of Heroes. The character defects of Theseus were not limited to him, but were attributes of his generation. Before I conclude this section, I want to take up briefly the subject of Peleus who, of course, was aboard the Argo and was thus, as I argue, a member of the Age of Heroes. His position in the epyllion is largely abstract and undetailed, his particular significance being the necessary partner in the wedding. He is a member of the Argonautic crew when he catches “fire with love of Thetis” (19), who, in turn, does not “disdain” the marriage (20), and he is granted her by Jupiter, Tethys and Oceanus (26-30). He is called a Thessaliae columen (mainstay of Thessaly) (26). The wedding takes place in his own splendid palace (a departure from all previous literary accounts, as is typical of the epyllion, which place the wedding on Mount Pelion) (43-44) where the Fates sing that “no love ever joined lovers in such a bond as links” Peleus and Thetis (336-337) before calling for the union “which your souls desire” (372). Catullus ignores the favorable Pindaric tradition of Peleus which gave him Thetis as reward for his virtuous resistance to the adulterous entreaties of Hippolyte, the wife of Akastos (Nem. 4.57ff, 5.25ff) (Konstan, 1977: 4). Although Catullus did not attribute the winning of Thetis to any action on the part of Peleus, neither did the poet allude to the other traditions which had Peleus, as a mortal, be a punishment that Jupiter inflicted on Thetis (Wheeler, 1934: 123). It might even be argued that Jupiter’s acknowledgement that “he gave up his own love” (27) was intended to highlight Peleus favorably (Wheeler, 1934: 123). Yet the glaring
absence of Apollo and Athena from the wedding ceremony undermines any gains which accrued to the character of Peleus because of Zeus’ favor (298-301): Then came the Father of the gods with his divine wife and his sons, Leaving thee, Phoebus, alone in heaven, and with thee Thine own sister who dwells in the heights of Idrus; for as thou didst, so did thy sister scorn Peleus, nor Deigned to be present at the nuptial torches of Thetis. In Homer (Il. 24.63) and Pindar (Nem. 5.21 ff.), Apollo was present at the ceremony. Catullus’ learned readers would recognize that this departure by the poet from the earlier versions of the myth was deliberately intended to cause doubt and worry. The guest list contained other disturbing elements, also innovations of the poet. One is Prometheus, a guest who only in this version attends the wedding. He comes - extenuata gerens veteris vestigia poenae (still bearing the scars of his ancient punishment) (295) - a “discordant note in a celebration of harmony” (Curran, 1969: 186). The role of Peleus in the epyllion is largely understated. However, it can be noted that there were opportunities from the earlier tradition for a more favorable treatment of the Argonaut which the poet chose to ignore. This leaves the poem with no positive portrayals from the age of heroes, exactly what Catullus intended.
Chapter Eight: Optimism in Alexandria, Despair in Rome
“It is often remarked that heroics are out of place in the epyllion” (Nisetich, 2001: xxxiii). This view became one of the defining features of the epyllion and surely had its origins in the rebuke Callimachus made to the Telchines in the proem of the Aetia to eschew writing poetry about “kings and heroes”. Gutzwiller’s definition of the epyllion makes the diminution of the hero central (Gutzwiller, 1981: 2): Most basic to the transformation of epic in the Hellenistic epyllion is the subversion of the archaic ideal. Although each epyllion narrative is based on an episode in the life of a hero or heroine, the story is told in such a way as to undercut or even mock the conventional heroic interpretation of this episode. Nisetich, arguing the opposing view, says that it cannot be true that heroics are ill-suited to the epyllion, at least with regards to that of Callimachus. He cites as evidence to the contrary the exemplary treatment of Theseus in the Hecale, which this paper has enumerated (Nisetich, 2001: xxxiv). Through flashback and allusion, the epyllion traced the passage of Theseus with deference, beginning in Troezen with his recovery of his birthright and continuing to his arrival in Athens and subsequent victorious mission to Marathon. The depiction included the successful but potentially deadly encounters with Scyron, Cercyon and Medea in addition to his key victory over the bull. There was also the sympathetic description of a young prince, just on the heels of his most glorious triumph, preoccupied first with his father’s well-being and then mindful of his obligation to honor the Attic woman who gave him shelter. The heart of the miniature epic was the account of this overnight visit with Hecale. The epyllion may have been given its structural unity by
material which was suitable for epic - the journey of Theseus to kill the bull of Marathon – but the focus was on the commonplace: an evening journey interrupted by a storm, a simple meal, a conversation (Bulloch, 1985: 563-564). In this way the Alexandrian poet revised the high tone of the archaic epic to appeal to the third century appetite for all things outside their realm of experience: “the ordinary had appeal precisely because of its remoteness from the normal experience of most Hellenistic readers” (Bulloch, 1985: 543). Callimachus “clearly and correctly understood that the decline of poetry in the fourth century was due in great measure to the constant repetition and consequent exhaustion of the old mythological material” (Trypanis, 1947: 3). Although the Hecale featured a well-documented hero, it found originality in the lowliness of its backdrop and its emphasis on the everyday concerns of food and shelter. Theseus is shown at the “outset of his heroic career” (Hollis, 1990: 6) and the main focus is on the companionship he shared with an old peasant woman on the eve of her death. His conquest of the bull was only briefly treated. Moreover, Callimachus added a new dimension to the mythological representation of Theseus. His unique characterization was a blend of the early archaic era youth, whose reputation as a skilled warrior was tarnished by his rude social behavior, and the Classical period’s just and wise Athenian king. The hero was shown just at the moment of adolescent discovery and emergent character, yet already possessed of the Sophoclean maturity seen in Oedipus at Colonus. Hecale must have recognized the heroic promise in the youth – Plutarch says that she intended to make a sacrifice to Zeus on his behalf (Theseus 14). Part of the engaging pathos of the epyllion was that she did not live to see him become an “ἄνδρα μέγαν”, best seen when the concerns of the
“great man” - whose visage, as he led the beast he had just tamed, inspired equal measures of awe and terror in the townspeople of Marathon - were not on his own feat, but on his father and new friend. Still it needs to be answered why Callimachus, whose famous response to his detractors was to ridicule contemporary efforts to continue the production of epic poetry about “kings and heroes”, would compose a new type of poetry, an innovation intended and destined to attract notice, based on Theseus - one of the most frequently occurring subjects of the visual and written arts of ancient Greece. He could just as well have picked a less well-known hero to anchor his little epic. I argue that it must be considered noteworthy that he chose to distinguish his epyllion – a poem designed to be a showpiece for his new poetic principles, his answer to his critics and manifesto for the future (Crump, 1931: 33) - with a plot based on Theseus. His long and regular appearance in poetry and art would seem to make him especially distasteful or unsuitable to a poet with a stated agenda of modernism. Part of his very public reply to the Telchines was “the path of a poet should be one little used” (Aetia, 1). But Callimachus had an important and personal reason to take up the story of Theseus once again, and to make him look so good. The reason has to do with the poet’s admiration for the past glory of Athens. Cameron argues against the claim “that Hellenistic poets, living in an age of monarchies, were inspired by nostalgia for the mythical past of the Classical city-state” (Cameron, 1995: 42). While that may be true, I believe that more than most Callimachus must have been affected by the library’s “almost sacred claim to be the guardian and controller of Greek culture for all the Greeks” (Shipley, 2000: 242). His lot was to work daily within its walls and he would have been thoroughly steeped in Greek historiography
and myth as he compiled the Pinakes, a catalogue of the library’s stacks. As part of the dislocated Greek community, Callimachus would have been naturally drawn to link an unaffiliated present to the deeply rooted mythological past which was his steady diet. Attica, with its tradition that its people were autochthonous - sprung from the earth itself - had the deep binding roots Alexandria lacked. The Athenian belief that they were autochthonous gave them a national self-consciousness which was central to their culture in the fifth and fourth centuries. They were unified in this, set apart from other Greeks, to whom they felt superior (Kearns, 1989: 110). This feeling of superiority and separateness might have struck a chord with Callimachus as well. Thus, he featured Theseus - not a Pan-Hellenic hero, but Athens’ singular hero. As the unifier and just ruler, he was the human representation of Athens; the city incarnate. By idealizing the hero Theseus, Callimachus glorified Athens. His treatment of Theseus was a celebration of the greatest city of Ancient Greece. This is the reason Callimachus picked Theseus to secure the poem. There was no other hero whose tradition was so connected to the literary and cultural heart of his artistic ancestry. It was for this reason that Callimachus did not “begrudge space to the hero and his story, though famous” (Hutchinson, 1988: 61). “The patriotic feeling so common in classical poetry” (Trypanis, 1947: 4) was revived in the Hecale. This is not nostalgia for the past but admiration and a commemorative tribute. Callimachus was no preterist, however. As well as celebrating the archaic hero Theseus, he unveiled Hecale. She, with her simple hospitality, is his concept of a hero for the Hellenistic world. Far from avoiding or denigrating the hero, Callimachus wrote a poem which expanded the category and brought together the old and the new versions.
The epyllion expressed appreciation for the noble Hellenic heroes of the past while giving the future a Hellenistic prototype. Even as the epyllion of Callimachus is optimistic and forward looking with respect to the hero - establishing a model for the next generation of poets - that of Catullus shows his full dissatisfaction with heroes past and disaffection with the present human condition. The final lines of the epyllion which has heretofore exposed the shortcomings in the heroes of the past are a litany of the crimes and ills of present day Rome. “The heavenly ones were wont to visit pious homes of heroes, and show themselves to mortal company” (384-386) only before
…the earth was dyed with hideous crime, and all men banished justice from their greedy souls and brothers sprinkled their hands with brothers’ blood, the son left off to mourn his parents’ death, the father wished for the death of his young son, that he might without hindrance enjoy the flower of a young bride, the unnatural mother impiously coupling with her unconscious son did not fear to pollute her family gods. (397-404) Catullus despised the current condition and called the contemporary Roman behavior criminal. He found no escape in the past, however, because “as the poem as a whole declares, it was never any better” (Curran, 1969: 192). It was long thought by some (Putnam, 1961: 189) that Catullus meant the wedding of Peleus and Thetis to be the last occasion where the gods mingled with mortals, and that the basic antithesis of the poem was between the Age of Heroes and the contemporary world. More recent scholarship proves this argument wrong for two reasons. First, as this paper has shown, the treatment of ancient heroes in the poem was
far from favorable. The portrait of Theseus, the allusions to Jason and Medea, and the excesses of Achilles express the flaws inherent in the hero. Even the presence of the Argo, aside from the allusions to Medea, is ambiguous and worrisome. This, the “ship first hanselled with voyage Amphitrite untried before” (11), did not represent an advancement of civilization, but, rather, a transgression of the natural limits (Konstan, 1977: 23-26). Men were meant to stay ashore. “Daring, too daring, the man who first broke into the treacherous seas with a boat so fragile” says a chorus in Seneca’s Medea (300-301). Much earlier than Seneca, Hesiod had opined that the just man does not travel the seas (WD, 230-236):
Neither famine nor disaster ever haunt men who do true justice … they flourish continually with good things, and do not travel on ships, for the grain-giving earth bears them fruit. The reason for sea travel is gain. Profit, and the desire for it, lead to inequality, and inequality leads to war. The Wedding of Peleus and Thetis opens with a story of gain: the Argo seeking the Golden Fleece; it ends with the hideous savagery committed by Achilles in the Trojan War. There is also the matter of Thetis and the Nereids rising up above the foamy waves to gaze in wonderment at the first ship (15-18). “The reader is expected to recall that when Homer presented this scene in Iliad 18.35 ff., the Nereids are mourning the Patroclus who died because of Achilles” (Curran, 1969: 187). Second, not even in this epyllion do gods and mortals mingle. Apollo and Athena refused to attend the wedding, and only after the mortal guests depart, giving “place to the holy gods”
(267-268), do other immortals begin arriving (Curran, 1969: 185). The wedding is the mythological present, a time when the gods refused to visit the homes “dyed with hideous crime” (397). The past, represented by Theseus and the Argonauts, was also deeply flawed. The future is embodied by Achilles. The poem turns out to be a bleak assessment of the human condition beginning long ago with the sailing of the first ship and concluding in the poet’s lifetime. Unlike his Alexandrian predecessor, Catullus does not hold out hope or offer a positive standard bearer for the future. The future he promised was a litany of horror at the hand of Achilles, prophesied in the song of the Parcae. Catullus and his poetry stand near the beginning of the end of the Roman Republic. The Roman world is disintegrating and his epyllion is a reflection of his pessimism.
References
Agard, W. R. (1928). Theseus: A national hero. The Classical Journal, 24(2), 84-91.
Allen, W., Jr. (1940). The epyllion: A chapter in the history of literary criticism. Transactions and Proceedings of the American Philological Association, 71, 1-26.
Slavitt, D. R. (Ed. and Trans.). (1998). Epinician odes and dithyrambs of Bacchylides. Philadelphia: University of Pennsylvania Press.
Bernabé Pajares, A., & Olmos Romera, R. (1996). Poetarum epicorum graecorum: Testimonia et fragmenta (Editio correctior editionis primae (1987) ed.). Stutgardiae: In aedibus B.G. Teubner.
Boedeker, D. (1997). Becoming Medea, assimilation in Euripides. In J. J. Clauss & S. I. Johnston (Eds.), Medea, 128-148. Princeton, N.J.: Princeton University Press.
Bramble, J. C. (1970). Structure and ambiguity in Catullus LXIV. Proceedings of the Cambridge Philological Society, 16, 22-41.
Bremmer, J. N. (1997). Why did Medea kill her brother Apsyrtus? Medea, 83-100. Princeton, N.J.: Princeton University Press.
Bulloch, A. W. (1985). Hellenistic poetry. The Cambridge history of classical literature 1, 541-621. Cambridge; New York: Cambridge University Press.
Cameron, A. (1992). Genre and style in Callimachus. Transactions of the American Philological Association (1974), 122, 305-312.
Cameron, A. (1995). Callimachus and his critics. Princeton, N.J.: Princeton University Press.
66
Clausen, W. (1964). Callimachus and Latin poetry. Greek, Roman & Byzantine Studies, 5, 181-196. Crowther, N. B. (1971). Catullus and the traditions of Latin poetry. Classical Philology, 66, 246-249. Crump, M. M. (1931). The epyllion from Theocritus to Ovid. New York: Garland Pub. Curran, L. C. (1969). Catullus 64 and the heroic age. Yale Classical Studies, 21, 169-192. Daly, L. W. (1952). Callimachus and Catullus. Classical Philology, 47, 97-99. Daniels, M. L. (1973). "The song of the fates" in Catullus 64: Epithalamium or dirge? The Classical Journal, 68, 97-101. Davie, J. N. (1982). Theseus the king in fifth-century Athens. Greece & Rome, 29, 2534. Debrohun, J. B. (2007). Catullan intertextuality: Apollonius and the allusive plot of Catullus 64. A companion to Catullus, 293-312. Malden, MA: Blackwell Pub. Diamant, S. (1982). Theseus and the unification of Attica. Hesperia Supplements, 19 (Studies in Attic Epigraphy, History and Topography. Presented to Eugene Vanderpool), 38-47. Easterling, P. E., & Knox, B. M. W. (1985). The Cambridge history of classical literature: Greek literature. Cambridge Cambridgeshire; New York: Cambridge University Press. Ellis, R. (1889). A commentary on Catullus (2d. ed.). Oxford: Clarendon press. Feldherr, A. (2007). The intellectual climate. A companion to Catullus, 92-197. Malden, MA: Blackwell Pub.
67
Fitch, John G. (2002). Seneca: Hercules ; Trojan women ; Phoenician women ; Medea ; Phaedra. Cambridge, Mass.: Harvard University Press. Fordyce, C. J. (1961). Catullus. A commentary. Oxford: Clarendon Press. Gaisser, J. H. (1995). Threads in the labyrinth: Competing views and voices in Catullus 64. The American Journal of Philology, 116, 579-616. Gildenhard, I., & Zissos, A. (2004). Ovid's 'Hecale': Deconstructing Athens in the Metamorphoses. The Journal of Roman Studies, 94, 47-72. Glare, P. G. W. (1982). Oxford Latin Dictionary. Oxford Oxfordshire; New York: Clarendon Press; Oxford University Press. Goold, G. P., (1983). Catullus. London: Duckworth. Graf, F. (1997). Enchantress from afar. Medea : Essays on Medea in myth, literature, philosophy, and art, 21-43. Princeton, N.J.: Princeton University Press. Gutzwiller, K. J. (1981). Studies in the Hellenistic epyllion. Königstein/Ts: Hain. Harmon, D. P. (1973). Nostalgia for the age of heroes in Catullus 64. Latomus, 32, 311331. Havelock, E. A., (1939). The lyric genius of Catullus. Oxford: B. Blackwell. Hollis, A. S. (1990). Callimachus: Hecale. Oxford; New York: Clarendon Press; Oxford University Press. Hunter, R. L. (2006). The shadow of Callimachus: Studies in the reception of Hellenistic poetry at Rome. Cambridge; New York: Cambridge University Press. Hutchinson, G. O. (1988). Hellenistic poetry. Oxford England; New York: Clarendon Press.
68
Jackson, C. N. (1913). The Latin epyllion. Harvard Studies in Classical Philology, 24, 37-50. Jacoby, F. (1973). Atthis: The local chronicles of ancient Athens. New York: Arno Press. Jocelyn, H. D. (Ed.) (1967). The tragedies of Ennius: The fragments. London: Cambridge U.P. Johnson, W. R. (2007). Neoteric poetics. A companion to Catullus, 175-189. Malden, MA: Blackwell Pub. Kearns, E. (1989). The heroes of Attica. London: University of London, Institute of Classical Studies. Knopp, S. E. (1976). Catullus 64 and the conflict between amores and virtutes. Classical Philology, 71, 207-213. Knox, B., & Fagles, R. (1984). Sophocles: The three Theban plays. Harmondsworth, Middlesex, England ; New York, N.Y.: Penguin Books. Knox, P. E. (2007). Catullus and Callimachus. A companion to Catullus, 151-171. Malden, MA: Blackwell Pub. Konstan, D. (2007). The contemporary political climate. A companion to Catullus, 72-9. Malden, MA: Blackwell Pub. Konstan, D. (1977). Catullus' indictment of Rome: The meaning of Catullus 64. Amsterdam: Adolf M. Hakkert. Kovacs, D. (2001). Euripides: Cyclops; Alcestis; Medea. Cambridge: Harvard University Press.
69
Lyne, R. O. A. M. (1978). The neoteric poets. The Classical Quarterly, 28, 167-187. Manson, E. (1910). Breach of promise of marriage. Journal of the Society of Comparative Legislation, 11, 156-167. Merrill, E. T., & Catullus, G. V. (1893). Catullus. Boston, New York: Ginn and Co. Mills, S. (1997). Theseus, tragedy, and the Athenian empire. New York: Clarendon Press. Most, G. W. (2006). Hesiod. Cambridge, Mass.: Harvard University Press. Morwood, J. (1999). Catullus 64, Medea, and the François vase. Greece & Rome, 46, 221-231. Nisetich, F. J. (2001). The poems of Callimachus. Oxford; New York: Oxford University Press. Pfeiffer, R. (1949-1953). Callimachus. Oxford: Clarendon Press. Pollitt, J. J. (1986). Art in the Hellenistic age. Cambridge; New York: Cambridge University Press. Radt, S. L., ed. and trans. (1977). Sophocles. Göttingen: Vandenhoeck & Ruprecht. Race, W. H. (1997). Pindar. Cambridge, Mass.: Harvard University Press. Reece, S. (1993). The stranger's welcome: Oral theory and the aesthetics of the Homeric hospitality scene. Ann Arbor: University of Michigan Press. Shaw, M. H. (1982). The ἦθος of Theseus in 'The suppliant women'. Hermes, 110, 319. Shipley, G. (2000). The Greek world after Alexander, 323-30 B.C. New York: Routledge. Skinner, M. B. (1976). Iphigenia and Polyxena: A Lucretian allusion in Catullus. Pacific Coast Philology, 11, 52-61. 70
Slavitt, D. R., ed. and trans. (1998). Epinician odes and dithyrambs of Bacchylides. Philadelphia: University of Pennsylvania Press. Radt, S. L., ed. and trans. (1977). Sophocles. Göttingen: Vandenhoeck & Ruprecht. Sourvinou-Inwood, C. (1971). Theseus lifting the rock and a cup near the Pithos painter. The Journal of Hellenic Studies, 91, 94-109. Syme, R. (1960). The Roman revolution. London: Oxford University Press. Thomas, R. F. (1982). Catullus and the polemics of poetic reference (poem 64.1-18). The American Journal of Philology,103, 144-164. Thomson, D. F. S. (1997). Catullus. Toronto; Buffalo: University of Toronto Press. Townend, G. B. (1983). The unstated climax of Catullus 64. Greece & Rome, 30, 21-30. Trypanis, C. A. (1947). The character of Alexandrian poetry. Greece & Rome, 16, 1-7. Trypanis, C. A., (1958). Callimachus: aetia--iambi--lyric poems--hecale--minor epic and elegiac poems--fragments of epigrams--fragments of uncertain location. Cambridge; London: Mass., Harvard University Press; W. Heinemann Ltd. Vessey, D. W. T. C. (1970). Thoughts on the epyllion. The Classical Journal, 66, 38-43. Weber, C. (1983). Two chronological contradictions in Catullus 64. Transactions of the American Philological Association (1974), 113, 263-271. Webster, T. B. L. (1966). The myth of Ariadne from Homer to Catullus. Greece & Rome, 13, 22-31. Wheeler, A. L. (1934). Catullus and the traditions of ancient poetry. Berkeley: Calif., University of California Press. Williams, G. W. (1968). Tradition and originality in Roman poetry. Oxford: Clarendon Press. 71
Wiseman, T. P. (1985). Catullus and his world: A reappraisal. Cambridge; New York: Cambridge University Press. Zanker, G. (1977). Callimachus' Hecale: A new kind of epic hero? Antichthon, 11, 68. Zanker, G. (1987). Realism in Alexandrian poetry : A literature and its audience. London; Wolfboro, N.H: Croom Helm. Zetzel, J. E. G. (1983). Catullus, Ennius and the poetics of allusion. ICS, (8), 251-286.
72
This action might not be possible to undo. Are you sure you want to continue? | https://www.scribd.com/doc/135648732/Myth-Management-the-Nature-of-the-Hero-in-Callimachus-Hecale-and-Catullus-Poem-64 | CC-MAIN-2016-07 | refinedweb | 20,908 | 56.49 |
C File I/O
C library (stdio.h) gives us various functions to work with files.
We can create files, write data to them, append data to them etc. using these functions.
Let us look into a few examples to understand better.
Example 1: Read a file and print data.
In the above example we used fopen function to open a file for reading.In the above example we used fopen function to open a file for reading.
#include <stdio.h> int main(){ FILE *filePoint; int number; filePoint = fopen("C://magicNumber.txt", "r"); fscanf(filePoint, "%d", &number); printf("Magic Number = %d", number); fclose(filePoint); return 0; }
We opened it for reading by passing "r" as second parameter.
The valid values for file mode are:
r - read text.
rb - read binary.
w - write text.
wb - write binary.
a - append text.
ab - append binary.
r+ - read and write mode.
rb+ - read and write mode in binary.
w+ - read and write mode.
wb+ - read and write mode in binary.
a+ - read and append mode.
ab+ - read and append mode in binary.
Example 2: Write an integer to a file.
#include <stdio.h> int main(){ FILE *filePoint; int number = 42; filePoint = fopen("C://magicNumber.txt", "w"); fprintf(filePoint, "%d", 42); fclose(filePoint); return 0; } | https://www.cosmiclearn.com/c/fileio.php | CC-MAIN-2019-51 | refinedweb | 210 | 80.48 |
An applet is a Java program that runs on a web page. An applet is not a stand-alone application, and it does not have a main() routine. In fact, an applet is an object rather than a class. In the Java language, a class represents a data type. Every class defines a new data type. For example, you can define a class named Point to represent a data point in the two dimensional X and Y graphing system. This class can define fields (each of type double) to hold the X and Y coordinates of a point and methods to manipulate and operate on the point. The Point class is a new data type. When discussing data types, it is important to distinguish between the data type itself and the values the data type represents. char is a data type: it represents Unicode characters. But, a char value represents a single, specific character. A class is a data type; a class value is called an object. But, when compiling applet code using the javac.exe compiler that ships with Sun’s J2EE SDK that contains the JDK, a class file is generated. That file, which has the same name as the class defined in the source code, is referred to within the applet tags that reside in the body of an HTML document. For instance, if your applet is named myApplet.java, then after using the javac.exe compiler, you will have a myApplet.class file. This file must be contained in quotes when embedded in the applet tags. An important point to remember is that the Swing API and the Java 2D API use a ‘J’ prefix before any component: JApplet, JFrame, JPanel, etc. To keep things simple, some applet code in this article will use the old Applet class defined in the java.awt package.
main()
Point
double
char
JApplet
JFrame
JPanel
Applet
Everything that appears on a computer screen has to be written there. Many programming languages use standard output functions like printf (print format), Console.WriteLine, System.out.println, and so on. These are functions and methods that ensure that anything that appears in an output device (screen, printer) is formatted to be human-readable. The system function, however, writes to the screen..
printf
Console.WriteLine
System.out.println
To write an applet that draws something, you need to know what subroutines are available for drawing. In Java, the built-in drawing subroutines are found in objects of the class Graphics, one of the classes of the java.awt package. In an applet's paint() routine, you can use the Graphics object g for drawing. This object is provided as a parameter to the paint() routine when the system calls the paint() method. Amongst others, three of the Graphics object's subroutines are:
Graphics
paint()
g
g.setColor(c)
c
Color
g.setColor(Color.RED)
g.drawRect(x, y, w, h)
x
y
w
h
g.fillRect(x, y, w, h)
drawRect
Assume we want to write an applet that draws a checkerboard. Assume that the size of the applet is 160 by 160 pixels. Each square in the checkerboard is 20 pixels by 20 pixels. The checkerboard contains 8 rows and 8 columns. The squares are Red and Black. If the row and column number are either even or odd, then the color is Red. Otherwise, it is Black. Each square is a rectangle with a height and width of 20 pixels, so it can be drawn with the command g.fillRect(x, y, 20, 20), where x and y are the coordinates of the top-left corner of the square. Here is the tricky part. We cannot just draw a rectangle with the said dimensions and fill it with one color. So, before drawing the square, we have to determine whether it should be Red or Black. That is, we have to the correct the color with g.setColor. The coordinates at the upper-left hand corner are (0, 0). Since each square is 20 pixels high, the top of the second row is y=20. Visualize that height is the vertical Y axis on a graph. The third row is 40 pixels, followed by 60, 80, 100, 120, and 140. If we had assumed that the rows are numbered 0, 1, 2, 3, ..7, then the tops are given by y = row*20, where the row is the row number. Similarly, the left edge of the square in column col is given by col*20 (when the columns are numbered 0, 1, 2, 3.. 7). With 8 rows and 8 columns and using numbers 0, 1, 2, 3, 4, 5, 7 to represent those rows and columns, it is safe to use the for loop, a control structure that steps every element: "for (row = 0; row < 8; row++)". Similarly, 0 -7 means 8 columns: "for (col = 0; col < 8; col++)".
g.fillRect(x, y, 20, 20)
g.setColor
col
col*20
for
for (row = 0; row < 8; row++)
for (col = 0; col < 8; col++)
Now, we have to determine whether the square in the row is Red. Recall that a square is Red if both the row and column are even or odd. In Java, the modulo operator % is the most convenient to determine odd or even numbers. That is, integer N is even if N%2 is 0, and the test could be expressed as:
%
N
N%2
if ((row%2 == 0 && col%2 == 0) || (row%2 == 1 && col%2 == 1))
The && operator is a logical AND and the || operator is a logical OR. The logical OR operator (||) returns the boolean value true if either or both operands is true, and returns false otherwise. So, the above is essentially the same as asking whether row%2 and col%2 have the same value. This test can then be put more simply as:
&&
||
row%2
col%2
"if (row%2 == col%2)"
The complete code:
import java.awt.*;
import java.applet.*;
public class Checkerboard extends Applet {);
}
}
}
}
The above applet involved drawing a rectangle in which the squares would have a color based on the condition that if the row and column square were both odd or even, then the color would be Red. If not, the color would be Black. To fill the rectangle with squares, a for loop iterates over 8 elements to form 8 rows, and 8 elements to form 8 columns. The if statement, as we know, executes a block of code if a condition has a boolean true. If not (else), the opposite type code is executed. So, the understanding applet code not only involves color, graphics, and coordinates, but also controls flow structures and subroutines. Consider this applet:
if
The applet first fills its rectangle area with Cyan. Then, it changes the drawing color to Black, and draws a sequence of rectangles, where each rectangle is nested inside the previous rectangle. These rectangles can be drawn with a while loop. The while statement repeats a group of Java statements many times. It will repeat this group of statements while a condition is a boolean true.
while
public void paintComponent(Graphics g) {
super.paintComponent(g);
int count = 0;
while (count < 50) {
g.drawLine(20, count*5, 80, count*5);
count = count + 1;
}
g.drawString("Loop is finished. count="+count, 10, 300);
}
This repeats the drawLine() call 50 times. The first time the while condition is tested, it is found to have a boolean true value because the value of count is 0, which is less than 50. Each time through the loop, the rectangle gets smaller, and it moves down and over a bit. We’ll need variables to record how the top-left corner of the rectangle is inset from the edges of the applet. The while loop ends when the rectangle shrinks to nothing. In general outline, the algorithm for drawing this applet is:
drawLine()
count
g.fillRect
While the width and the height are greater than 0:
g.drawRect
Here is the applet code put together:
import java.awt.*;
import java.applet.Applet;
public class Rectangle.cyan);
g.fillRect(0,0,300,160); // Fill the entire applet with red.
g.setColor(Color.black); // Draw the rectangles in black.
inset = 0;
rectWidth = 299; // Set size of first rect to size of applet.
rectHeight = 159;
while (rectWidth >= 0 && rectHeight >= 0) {
g.drawRect(inset, inset, rectWidth, rectHeight);
inset += 15; // Rects are 15 pixels apart.
rectWidth -= 30; // Width decreases by 15 pixels
// on left and 15 on right.
rectHeight -= 30; // Height decreases by 15 pixels
// on top and 15 on bottom.
}
} // end paint()
}
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | http://www.codeproject.com/Articles/34280/How-to-Write-Applet-Code | CC-MAIN-2015-11 | refinedweb | 1,453 | 73.78 |
In this article, we will look at the techniques being used by Android developers to detect if a device on which the app is running is rooted or not. There are good number of advantages for an application in detecting if it is running on a rooted device or not. Most of the techniques we use to pentest an Android application require root permission to install various tools and hence to compromise the security of the application.
Ethical Hacking Training – Resources (InfoSec)
Many Android applications do not run on rooted devices for security reasons. I have seen some banking applications check for root-access and stop running if the device is rooted. In this article, I will explain the most common ways being implemented by developers to detect if the app is rooted and some of the bypassing techniques to run the app on a rooted device.
Common techniques to detect if the device is rooted
Let’s begin with the most common techniques being used in the most popular applications to detect if the device is rooted.
Once a device is rooted, some new files may be placed on the device. Checking for those files and packages installed on the device is one way of finding out if a device is rooted or not. Some developers may execute the commands which are accessible only to root users, and some may look for directories with elevated permissions. I have listed few of these techniques below.
- Superuser.apk is the most common package many apps look for in root detection. This application allows other applications to run as root on the device.
- Many applications look for applications with specific package names. An example is show below.
The above screenshot shows a package named “eu.chainfire.supersu”. I have seen some apps from the Android market verifying if any application by “chainfire” is running on the device.
- There are some specific applications which run only on rooted devices. Checking for those applications would also be a good idea to detect if the device is rooted.
Example: Busybox.
Busybox is an application which provides a way to execute the most common Linux commands on your Android device.
- Executing “su” and “id” commands and looking at the UID to see if it is root.
- Checking the BUILD tag for test-keys: This check is generally to see if the device is running with a custom ROM. By default, Google gives ‘release-keys’ as its tag for stock ROMs. If it is something like “test-keys”, then it is most likely built with a custom ROM.
Looking at the above figure, it is pretty clear that I am using a stock Android ROM from Google.
The above mentioned techniques are just a few examples of what a developer may look for in order to identify if a device is rooted. But there could be other ways around for root detection which are not mentioned here in this article.
Bypassing root detection — Demo
In this section, we will see how one can bypass one of the root detection mechanisms mentioned above. Before we begin, let’s understand the background.
Download the application from the download section of this article and run the app on your device and click the button “Is my device rooted?” Since I am running it on a rooted device, it says “Device Rooted”. Now, our job is to fool the application that this device is not rooted.
Now, let’s begin.
Understanding the app’s functionality
Let’s first understand how this app is checking for the root access. The code inside this application is performing a simple check for Superuser.apk file as shown in the screenshot below.
As we can see in the above figure, the app is checking if there is a file named “Superuser.apk” inside the “/system/app/” directory. If it exists, then we are displaying the message “Device Rooted”. There are various ways to bypass this detection.
- Let us first reconfirm manually if the device is rooted by executing “su” and “id” commands.
- Let’s see if the “Superuser.apk” file is located in the path being checked by our target app.
- To bypass this check, let’s rename the application “Superuser.apk” to “Superuser0.apk” as shown in the figure below.
As we see in the above figure, we may get an error message saying it’s a read-only file system. We can change it to read-write as shown in the next step.
- Changing permissions from Read-Only to Read-Write:
The following command will change the file system permissions to “Read-Write”.
- Now, if we rename the file to “Superuser0.apk”, it works fine as shown in the figure below.
- If we now go back to the application and click the button “Is my device rooted?”, it shows a message as shown in the figure below.
This is just an example of how one can bypass root detection if it is not properly implemented. Applications may use some complex techniques to stop attackers from bypassing root detection. The techniques will change depending on how the developer is checking for root access.
For example, if a developer uses the following code snippet to check for root, the procedure to bypass the detection is different:
Runtime.getRuntime().exec(“su”);
In this case, we would need to make our own “su” binary and “superuser” apk.
Conclusion
Developers who stop users from running their apps on rooted devices could be a good idea from a security perspective, but this is really annoying to a techie user who is not able to execute the app only due to the reason that he has rooted his device. Rooting techniques can be bypassed easily most of the time, and it is highly recommended that developers should use complex techniques in order to stop attackers from bypassing their validation controls.
Hi,
Good article !!!!!!!!!! This is much similar, I am looking for.
Problem, my APK is having this piece of code to check if devices is rooted or not. Kindly suggest possible bypass for all three methods mentioned in the following code.
public class RootUtil {
public static boolean isDeviceRooted() {
return checkRootMethod1() ||
checkRootMethod2() || checkRootMethod3();
}
private static boolean checkRootMethod1() {
String buildTags = android.os.Build.TAGS;
return buildTags != null &&
buildTags.contains(“test-keys”);
}
private static boolean checkRootMethod2() {
String[] paths = { “/system/app/Superuser.apk”,
“/sbin/su”, “/system/bin/su”, “/system/xbin/su”, “/data/local/xbin/su”,
“/data/local/bin/su”, “/system/sd/xbin/su”,
“/system/bin/failsafe/su”,
“/data/local/su” };
for (String path : paths) {
if (new File(path).exists()) return
true;
}
return
false;
}
private static boolean checkRootMethod3() {
Process process = null;
try {
process = Runtime.getRuntime().exec(new
String[] { “/system/xbin/which”, “su” });
BufferedReader in = new BufferedReader(new
InputStreamReader(process.getInputStream()));
if (in.readLine() != null) return true;
return false;
} catch (Throwable t) {
return false;
} finally {
if (process != null) process.destroy();
}
}
} | http://resources.infosecinstitute.com/android-hacking-security-part-8-root-detection-evasion/ | CC-MAIN-2017-43 | refinedweb | 1,144 | 56.35 |
Realtime node.js + socket.io server in a box.
Valet is a simple server built on node.js and socket.io. You can deploy it locally with one command, or remotely with a few clicks. Clever namespaces allow you to use Valet for multiple projects simultaneously.
Install the Valet module from npm with:
sudo npm install -g valet
Then run it locally with:
valet
You can verify that Valet is running by visiting in your browser. You should see
valet version 0.1.0.
Log into Amazon Web Services and open up Opsworks.
Create a new stack.
Spin up a new instance, with a size of "micro"
Create a new app (type=node.js) with the following settings:
name: valet app type: node.js repository type: git repository url:
Deploy your app. Valet will be pulled from github, and should be available in a few minutes.
Install nodejitsu:
sudo npm install -g jitsu
Pull down a local copy of Valet - this is the copy you'll be modifying and deploying.
cd /some/directory/ npm install valet
Edit the
subdomain field in package.json. Your app will be served from
your-subdomain.jit.su
Deploy your app:
jitsu deploy
Valet will look for a file called
config.json in your root project directory. If it can't find one, it will use the one located in the valet root. If you want to name your amazon app zomething other than valet, you should copy the included config.json file and tweak it as needed.
Namespaces split clients into groups, to allow many projects to use a single Valet instance simultaneously.}}'
Note - if your client does not support sending an
application/json, you can send the json as a string in the POST body called
value
To send the same data using a socket is a two step process - first, you must connect to your Valet server on the root namespace. Then, emit an event called described in the "Namespaces" section above.
Anything that can run socket.io can be a Val.
```HTML <!DOCTYPE html> <head> <!-- Valet serves up socket.io.js on a special endpoint --> <script src=""></script> <script type="text/javascript"> //); }); </script> </head> <body></body> ```
Copyright (c) 2013 One Mighty Roar Licensed under the MIT license. | https://www.npmjs.com/package/valet | CC-MAIN-2015-32 | refinedweb | 376 | 76.11 |
Simple random number generator.
More...
#include <random.h>
List of all members.
Simple random number generator.
Although it is definitely not suitable for cryptographic purposes, it serves our purposes just fine.
Definition at line 36 of file random.h.
Construct a new randomness source with the specific name.
The name used name must be globally unique, and is used to register the randomness source with the active event recorder, if any.
Definition at line 30 of file random.cpp.
Generates a random bit, i.e.
either 0 or 1. Identical to getRandomNumber(1), but potentially faster.
Definition at line 52 of file random.cpp.
Generates a random unsigned integer in the interval [0, max].
Definition at line 46 of file random.cpp.
Generates a random unsigned integer in the interval [min, max].
Definition at line 58 of file random.cpp.
[inline]
Definition at line 51 of file random.h.
Definition at line 42 of file random.cpp.
[private]
Definition at line 38 of file random.h. | https://doxygen.residualvm.org/de/d4f/classCommon_1_1RandomSource.html | CC-MAIN-2020-40 | refinedweb | 166 | 54.69 |
PaySlip April 23, 2010 at 9:31 PM
package dmm.ui;
import dmm.bean.Employee;
import dmm.bean.Employees;
import dmm.bean.FullTimeEmployee;
import dmm.bean.PartTimeEmployee;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
impor...
MockStrutsTestCase in WebLogic April 23, 2010 at 6:15 PM
Hello all, I'm testing JUnit tests with Maven 2 using MockStrutsTestCase. The web.xml file is not loading even after I set the servlet config file with setServletConfigFile. I get the following error: junit.framework.AssertionFailedError: Received an exception w...
thread class April 23, 2010 at 4:30 value of cn...
Ethernet port in Java April 23, 2010 at 4:29 PM
Hello, I am new to this field. I have to make a web application in JSP in which I have to decode the packets received on the Ethernet port. I have no idea how to program the Ethernet port in Java and read packets. Please guide me.
bank management April 23, 2010 at 4:29 PM
Assume that the bank maintains two kinds of accounts for its customers, one called a savings account and the other a current account. The savings account provides compound interest and withdrawal facilities but no cheque book facility. The current account provides a cheque book facility ...
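A possible sketch of the hierarchy this question describes (class and method names here are made up, not from any particular textbook):

```java
// Base account: both account types share a balance and withdrawals.
abstract class Account {
    protected double balance;
    Account(double balance) { this.balance = balance; }
    void withdraw(double amount) {
        if (amount <= balance) balance -= amount;
    }
    double getBalance() { return balance; }
}

// Savings account: compound interest and withdrawals, no cheque book.
class SavingsAccount extends Account {
    SavingsAccount(double balance) { super(balance); }
    void addCompoundInterest(double ratePerPeriod, int periods) {
        balance = balance * Math.pow(1 + ratePerPeriod, periods);
    }
}

// Current account: cheque book facility, no interest.
class CurrentAccount extends Account {
    CurrentAccount(double balance) { super(balance); }
    String issueCheque(double amount) {
        withdraw(amount);
        return "Cheque for " + amount;
    }
}

public class BankDemo {
    public static void main(String[] args) {
        SavingsAccount s = new SavingsAccount(1000.0);
        s.addCompoundInterest(0.10, 2); // roughly 1210 after two 10% periods
        CurrentAccount c = new CurrentAccount(500.0);
        c.issueCheque(200.0);
        System.out.println(c.getBalance()); // prints 300.0
    }
}
```

Making the base class abstract keeps the shared balance logic in one place while letting each subclass add only the facility it is supposed to have.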
computer management April 23, 2010 at 4:26 PM
Create a class Computer that stores information about the different types of computers available with the dealer. The information to be stored about a single computer is:
- Company name
- RAM size
- Hard disk capacity
- Processor speed
- Processor make
...
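One way such a class could look; the field names and units are assumptions, since the question does not specify them:

```java
// Holds the dealer's information about one computer model.
public class Computer {
    private String companyName;
    private int ramSizeGb;
    private int hardDiskGb;
    private double processorSpeedGhz;
    private String processorMake;

    public Computer(String companyName, int ramSizeGb, int hardDiskGb,
                    double processorSpeedGhz, String processorMake) {
        this.companyName = companyName;
        this.ramSizeGb = ramSizeGb;
        this.hardDiskGb = hardDiskGb;
        this.processorSpeedGhz = processorSpeedGhz;
        this.processorMake = processorMake;
    }

    @Override
    public String toString() {
        return companyName + ": " + ramSizeGb + " GB RAM, " + hardDiskGb
                + " GB HDD, " + processorSpeedGhz + " GHz " + processorMake;
    }

    public static void main(String[] args) {
        Computer c = new Computer("Dell", 4, 500, 2.4, "Intel");
        System.out.println(c); // prints Dell: 4 GB RAM, 500 GB HDD, 2.4 GHz Intel
    }
}
```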
why jsp pages are not working on linux but java application is working in windows April 23, 2010 at 4:22 PM
My Java web application's JSP pages are not showing the fields correctly on a Linux server, but the same application is working on Windows. Why?
Read a particular column in excel sheet April 23, 2010 at 4:21 PM
Hi sir, I used your code, but where do I have to put that index value? I am calling the following methods for sending a mail; please help me with any changes. Following is my code for calling the methods. Thanks in advance. boolean isSuccess = false; String isValid = null; MailObject ma...
hi April 23, 2010 at 2:48 PM
Hi sir, thanks for your cooperation. I am saving one image with the customer data; when I search that customer data, I want to see that image (the image already saved based on the customer data previously). Please provide a program, sir.
export to word document April 23, 2010 at 2:45 PM
Hi sir, when I click on a button under the JTable, for example a Print button, I want to print that JTable to a Word document automatically. Please provide a program, sir.
jtable combo April 23, 2010 at 2:42 PM
I am using a JTable (with DefaultTableModel). When I click on a particular cell of the JTable, I want to display a combo box in that cell. Please provide a program.
Read a particular column in excel sheet April 23, 2010 at 12:57 PM
Hi sir, I want to read an Excel sheet. From it I have to take a particular column that contains mail IDs, and use that column to send a mail to all of them. How do I do this, sir? Give me some code to do this. Thanks in advance.
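Reading the .xls file itself needs a library such as Apache POI (HSSFWorkbook, iterating rows and calling row.getCell(columnIndex)), but the column-extraction step can be sketched dependency-free. The version below works on comma-separated rows instead of a real Excel sheet, and all names in it are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Extracts one column (by zero-based index) from rows of comma-separated
// values. With Apache POI you would instead walk the HSSFSheet's rows and
// read the cell at the same index.
public class ColumnExtractor {
    public static List<String> column(List<String> csvRows, int index) {
        List<String> out = new ArrayList<>();
        for (String row : csvRows) {
            String[] cells = row.split(",");
            if (index < cells.length) out.add(cells[index].trim());
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> rows = List.of(
                "Alice,alice@example.com",
                "Bob,bob@example.com");
        // Column 1 holds the mail IDs in this made-up layout.
        System.out.println(column(rows, 1)); // prints [alice@example.com, bob@example.com]
    }
}
```

The resulting list of addresses would then be handed to whatever mail-sending method the application already has.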
Project Question No. 2B: Create 2 Thread classes. One Thread is Incrementor and has one variable cnt1 with initial value 0. The Incrementor thread increments the value of cnt1 by 1 each time.
April 23, 2010 at 12:22 valu...
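A minimal sketch of the Incrementor thread described above; the step count and accessor names are assumptions, and the increment is synchronized so the counter stays reliable if more than one thread ever shares it:

```java
// Thread that increments its cnt1 field a fixed number of times.
class Incrementor extends Thread {
    private int cnt1 = 0;          // initial value 0, as the question states
    private final int steps;

    Incrementor(int steps) { this.steps = steps; }

    public synchronized int getCnt1() { return cnt1; }
    private synchronized void increment() { cnt1++; }

    @Override
    public void run() {
        for (int i = 0; i < steps; i++) increment();
    }

    public static void main(String[] args) throws InterruptedException {
        Incrementor inc = new Incrementor(1000);
        inc.start();
        inc.join(); // wait for the thread to finish before reading cnt1
        System.out.println("cnt1 = " + inc.getCnt1()); // prints cnt1 = 1000
    }
}
```

Calling join() before reading the counter also gives the main thread a guaranteed view of the final value.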
thanks April 23, 2010 at 11:56 AM
Thanks sir, it's working.
JAVA PROGRAM April 23, 2010 at 11:13 AM
To find the minimum and maximum numbers in an array.
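A possible single-pass solution (the sample data is made up):

```java
// Finds the minimum and maximum of an array in one pass.
public class MinMax {
    public static int[] minMax(int[] a) {
        int min = a[0], max = a[0];
        for (int v : a) {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        return new int[] { min, max };
    }

    public static void main(String[] args) {
        int[] data = { 7, 2, 9, -4, 5 };
        int[] mm = minMax(data);
        System.out.println("min = " + mm[0] + ", max = " + mm[1]); // prints min = -4, max = 9
    }
}
```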
java compilation error April 23, 2010 at 10:46 AM
package punitha;
import java.util.Vector;
import javax.swing.*;
import javax.swing.JFrame.*;
import javax.swing.JTextField.*;
import javax.swing.JButton.*;
import javax.swing.JFileChooser.*;
import javax.swing.JOptionPane.*;
import javax.swing.JTextArea.*;
...
java problem April 22, 2010 at 7:46 PM
Room.java: This file defines a class of Room objects. The objects have the following instance variables: number of beds in the room, of type integer; guest's name, of type String; booking status of the room, where if the room has been booked the status is 'true', otherwise it is...
datagram April 22, 2010 at 7:29 PM
Write a program that sends a "hello" message to a machine named "yahoo" on port "4000" using a datagram.
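A possible sketch using java.net's UDP classes. The host name "yahoo" and port 4000 come from the question; the demo below sends to localhost instead so it can actually be run without that machine existing:

```java
import java.net.*;
import java.nio.charset.StandardCharsets;

// Sends a UDP datagram carrying a text message to the given host and port.
public class DatagramSender {
    public static void send(String host, int port, String message) throws Exception {
        byte[] data = message.getBytes(StandardCharsets.UTF_8);
        InetAddress addr = InetAddress.getByName(host);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(data, data.length, addr, port));
        }
    }

    public static void main(String[] args) throws Exception {
        // For the question itself this would be send("yahoo", 4000, "hello").
        // Here: listen on an ephemeral local port, then send to ourselves.
        try (DatagramSocket receiver = new DatagramSocket()) {
            send("localhost", receiver.getLocalPort(), "hello");
            DatagramPacket p = new DatagramPacket(new byte[64], 64);
            receiver.receive(p);
            System.out.println(new String(p.getData(), 0, p.getLength(),
                    StandardCharsets.UTF_8)); // prints hello
        }
    }
}
```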
hi April 22, 2010 at 5:46 PM In my project I am using one frame for the whole project; in that, JTabbedPanes a...
How to Open Text or Word File on JButton Click in Java April 22, 2010 at 5:30 PM
How to Open Text or Word File on JButton Click in Java ... View Questions/Answers
Open Text or Word Document on JButton Click in Java April 22, 2010 at 4:45 PM
How to open Word document or Text File on JButton Click in java ... View Questions/Answers
Related to multiple inhetitance April 22, 2010 at 1:11 PM
Sir,Plz help me to solve this question.Q.1. Write a Progarm to illustrate the concept of multiple inheritance using interface ... View Questions/Answers
Java April 22, 2010 at 12:59 PM
Sir,plz help me to solve these java problems:Q.1. Write a Progarm to illustrate the concept of multiple inheritance using interfaceQ.2. Write a program to illustrate subclasses an imported classQ.3. Write a program for creating thread using thread class.... View Questions/Answers
Getting an exception April 22, 2010 at 12:57 PM
sir i m Getting an following exception while sending a mail with attachment value ofsession is javax.mail.Session@192d604 value of mimemessage is javax.mail.internet.MimeMessage@120b2da value of internetaddress is harini.veerapur@gmail.com** length of attachment is 1... View Questions/Answers
Getting an exception April 22, 2010 at 12:24 PM
sir i changed to that ieInputStream myInput1 = new FileInputStream(fileName); HSSFWorkbook myWorkBook = new HSSFWorkbook(myInput1);but when i click on a browse button and attached a file and then click on a read button to read a file it is displaying a new file instead of ... View Questions/Answers
java -netbeans help April 22, 2010 at 12:23 PM
a simple program in netbeans ide to add two numbers and display the result as ADDITION OF TWO NUMBERS IS:70 in the textareadesign containsone textarea one buttonand two text field to get inputthese GUI actions should be in input.java filewhen u press the button i... View Questions/Answers
setbackground image to JFrame April 22, 2010 at 11:36 AM
how to setbackground image to JFrame. ... View Questions/Answers
Create and Show Wordpad File on JButton Click April 22, 2010 at 11:33 AM
Hello Sir I want to create wordpad or word document on JButton Click Event.which contains data which is currently on Form ie filled by User,with following format STUDENT ADMISSION FORMSTUDENT ID: (JTextField1Value currenly on Form)NAME: ... View Questions/Answers
sending a mail April 22, 2010 at 11:23 AM
I m writing a code for send mail in jsp,i sending a mail to specified receiver but if i attach an file and send a mail then i m getting an error. Here Attachment is in String[] Attachment=null so how to convert from string to String[] here is whaString[] i have done... View Questions/Answers
Getting an exception April 22, 2010 at 11:10 AM
thanks sir for u r reply ,but i already added that jar file sir even though i m getting an exception sir please help me sir ...thanks in advance.. ... View Questions/Answers
hi April 22, 2010 at 10:39 AM
hi sir,create a new button and embed the datepicker code to that button sir,plzzzzzzzzzzzzzzzzzzzzzzzzzzz ... View Questions/Answers
datepick April 22, 2010 at 10:33 AM
sir,when i am click on a submit button the datepicker is appeared sir ... View Questions/Answers
Error Opening File April 22, 2010 at 10:33 AM
Dear,Please if help me to resolve this problem.I have an application in java that works correctly under Eclipse. But when I try to export it under Runnbale jar file on my desktop and I run this generated file, a message was displayed: Error opening file.<... View Questions/Answers
hi April 22, 2010 at 9:46 AM
hi sir,u provide a answer for datepicker,but i don't know how to embed that datepicker to my panel This is my panel code,plz provide steps,how to embed the datepicker code to my panel. This is my panel code. package Welcome;import java.... View Questions/Answers
JSP code problem April 22, 2010 at 6:57 AM
Hi friends, I used the following code for uploading an image file. After compilation it shows the required message, "Image successfully uploaded", but when i try to retrieve the image I'm unable to do so. The uploaded image is not getting saved anywhere. The fo... View Questions/Answers
Unicode April 21, 2010 at 11:04
unicode April 21, 2010 at 11:02
Getting an exception April 21, 2010 at 5:18 PM
sir i am Getting following exception when executingjavax.servlet.ServletException: java.lang.NoClassDefFoundError: org/apache/poi/poifs/filesystem/POIFSFileSystemI have used ur codeimport java.io.FileInputStream;import java.util.Iterator;import java.u... View Questions/Answers
Access Report to Java Application April 21, 2010 at 4:30 PM
Hello Sir can I connect Access Report to Java Application when i Enter Student ID in JTextField then the Appropriate Records From the Database will Be Shown in to Access Report. plz Help Me ... View Questions/Answers
How to Create and display Reports in Java with Database connectivity April 21, 2010 at 3:57 PM
Hello Sir ,I have Created Student Admission Form with Swing and MS Access 2007 Database Connection,when i will click on Print button it shows the report of student id which is exists in the Current JTexfield /when I give Student Id Number then it Shows the Report with that prticular id with ... View Questions/Answers
sql query April 21, 2010 at 3:49 PM
hi sir,i have a month and year numbers,when i am enter a month and year in sql query then i want results for 1st day of month to last day of month in that year for that month plz provide a solution for me plzzzzzzzzzz ... View Questions/Answers
set a class path April 21, 2010 at 2:49 PM
thanks u sir i got a .jar file,but while declareing a HSSFWorkbooks wb = new HSSFWorkbooks(); i m getting a error in eclipse .how to set a classpath for that please help me sir thanks in advance... ... View Questions/Answers
I want simple steps to create Jar File of Java Project April 21, 2010 at 11:53 AM
Hello Sir How I can Create Jar File of Java Project,plz Help Me ... View Questions/Answers
How to create and Attach MS Access Reports and Crystal Reorts in Java. April 21, 2010 at 11:35 AM
How to create and Attach MS Access Reports and Crystal Reorts in Java WITH MS Access 2007 Database, plz Help Me Sir ... View Questions/Answers
Create Reports in Java with access database April 21, 2010 at 11:23 AM
Hello Sir How to Create Reports of Any Type using Java when i submit data is stored in the MS access database,but when I click on PRINT Button then it will display it into either Text File or any type of Report and then i want to print it.How I can Generate it plz help Me ... View Questions/Answers
i need your help April 21, 2010 at 9:31 AM
Write a java program that:i. Allows user to enter 2 numbers from the keyboardii. Uses a method to compare the two numbers to determine which is largeriii. Print to the screen stating which number is larger than the other. ... View Questions/Answers
Data Redundancy April 20, 2010 at 9:08 PM
Sorry for disturbing you again but there's redundancy of the selected data on this jsp u gif me. The selected data will appear twice at the drop down box. I've tried to fix it but still unable to do so. Is there anything can be done?<%@ page language="java" import=&quo... View Questions/Answers
CLOB example April 20, 2010 at 8:09 PM
Hi everybody can u tell me the method to use CLOB in javathe CLOB is to use with database ..........it must retrieve value with resultset and display in the JTextArea that fetched text.......thanku ... View Questions/Answers
java April 20, 2010 at 7:46 PM
i want know the basics of the struts ... View Questions/Answers
java April 20, 2010 at 7:41 PM
why we will use the public static void main(String args[])? and explain each term in that syntax. ... View Questions/Answers
java April 20, 2010 at 7:37 PM
what is the importance of System.out.print("");?in this why we will use System,".",out, and print? ... View Questions/Answers
question on class April 20, 2010 at 7:07 PM
A class can act as subclass itself?if yes give me one example if no give me one example ... View Questions/Answers
How to Create any type of Reports in Java April 20, 2010 at 6:20 PM
Hello Sir ,How I can create any type of Reports like Crystal Reports etc(Student Result Report) in Java Application,plz Help Me Sir ... View Questions/Answers
java code April 20, 2010 at 5:41 PM
how to read an xml file in java and store its contents into a database ?i want to know the flow ... View Questions/Answers
print only Form Fillup Contents through Text File April 20, 2010 at 5:27 PM
Hello Sir ,I have Designed Employee Details Form using java swing components Now i want to print contents which is filled by employee,How i can Print it,and also i want print it with word means when i click on print then contents which is filled in employee form(swing application) that contents wi... View Questions/Answers
PHP April 20, 2010 at 2:42 PM
Hello,I have to check the images.If I mages found relative change it to absolute.for example ../default/test.png ---->/image/default/test.png.I have try this with str_replace("..", "/image", $vae). but its not a good idea to do it.If there is any ... View Questions/Answers
requesting for a jar file April 20, 2010 at 12:17 PM
Sir Please send me a jar file of this sir , i need this package jar file org.apache.poi.hssf.usermodel for Excel reading /Writing in java please help me.thanks in advance.... ... View Questions/Answers
Advanced Concepts with Classes April 20, 2010 at 11:58 AM
Advanced Concepts with Classes( Inheritance, Polymorphism, Overriding)i need help to create this application,than youEmployees in a company are divided into the classes Employee, HourlyPaid,SalesCommissioned, and Executive for the purpose of calculating their weeklywage... View Questions/Answers
SIMULATING AN ELEVATOR April 20, 2010 at 11:55 AM
Classes, Methods, Constructorsplease i need help to create an elevator programSimulating an Elevatorwrite an Elevator class containing various methods.Then, write a program that creates Elevator objects and invokes the various methods.put comments on methods and h... View Questions/Answers
problem in record viewing in jsp April 20, 2010 at 11:08 AM
hai i send the code can you please find where is the problem it is not showing any error but i cannot able to view the record after eclipse ide is closed and reopen iti have used the jsp and servlet and back end mysqlactually what is my problem is i have to everytime refresh the viewing ... View Questions/Answers
drag n drop April 20, 2010 at 10:42 AM
I want to implement drag n drop functionality for simple HTML/JSP without using applet,flash or any heavy components.using browse button user can select one file at a time.but user needs to select multiple files using drag and drop mode.when user drag n drop file then display the complete path in te... View Questions/Answers
for store data in data base April 20, 2010 at 10:02 AM
i want to a job site, in this site user can registered by a form..in this form there are his information...i use three form for it1)for user name and password..2)for personal detail...3)for educational details..by 1 and 2 i can move on three by continue.. button..... View Questions/Answers
ejb April 20, 2010 at 9:31 AM
plz send me the process of running a cmp with a web application send me an example. ... View Questions/Answers
NullPointerException April 19, 2010 at 10:26 PM
I wonder why this coding could not work since i really need to pass the selected choice inside, giving me real time data change from the database.I guess this is the structure I need to to so.The jsp just couldn't read from the parameter "name" but if i directly insert a string into the va... View Questions/Answers
airline reservation java code April 19, 2010 at 8:37 PM
Write a program that simulates the booking of seats on board a small airplane. The layout of the aircraft is illustratedbelow. You will be required to create a menu based system with the following choices:(a) View Seating (b) Book a Seat (c) Cancel a Booking (d) Reset All (e) QuitE... View Questions/Answers
java April 19, 2010 at 7:11 PM
a java code that takes input regular expression and a string and produces as output all lines of the string that contains a substring denoted by regular expression ... View Questions/Answers
c programming language April 19, 2010 at 6:44 PM
int main(){ int i = 1, c = 0, sum = 1,j ; scanf("%d", &j); while (i <= j) { sum = sum + (i /3+i/5); printf("%d\n", sum); i = ... View Questions/Answers
C++ programming language April 19, 2010 at 6:37 PM
int main(){ int i = 1, c = 0, sum = 1,j ; scanf("%d", &j); while (i <= j) { sum = sum + (i /3+i/5); printf("%d\n", sum); i = ... View Questions/Answers
java Class April 19, 2010 at 6:32 PM
Can anyone please explain what this code means?import java.util.TreeMap;import java.util.Iterator;//Please tell me what this class declaration means????????????public class ST<Key extends Comparable<Key>, Val> implements Iterable<Key>{... View Questions/Answers
what is the need of... April 19, 2010 at 4:59 PM
what is the need of " return type like void ? ... View Questions/Answers
Problem in record viewing in jsp April 19, 2010 at 4:54 PM
hai i have developed the application using jsp,servlets in eclipse idei have to insert,delete,update and view .no problem all are working but in view part alone some problem i dont knowwhat it is i cannot view the record after eclipse ide is closed and reopen itbut when... View Questions/Answers
J2EE April 19, 2010 at 4:01 PM
I have just completed a beginners project using jsp and servlet on netbeans ide. I have adequate knowledge of core java. From where do i study the portion of JSP and Servlet in detail. What should i study after this. ... View Questions/Answers
JSP and servlet code April 19, 2010 at 3:55 PM
I have created a JSP page for login and a servlet page which takes the parameters and performs the required processing. I'am unable to create a session from this servlet to the the page directed. How do I carry out session tracking from this page to the page directed and the further links in the pag... View Questions/Answers
question paper April 19, 2010 at 12:23 PM
I ran... View Questions/Answers
how to install the jboss application server on windows xp sp2? April 19, 2010 at 12:18 PM
How to install and run the web application on jboss on windows xp sp2 ... View Questions/Answers
swings April 19, 2010 at 9:41 AM
hi sir,plz write a program for the (when i am enter any text in jcombobox ,based on the key valuethe results(from database) are displayed under my jcombobox ) sir,plzzzzzzz ... View Questions/Answers
createCustomer method in Java April 19, 2010 at 9:08 AM
I'm just trying to add a createCustomers() to an existing code. I have all of my GUIs and it compiles, but I'm kind of stuck here. My code is as follows:import java.awt.*;import javax.swing.*;import java.awt.event.*;import java.util.*;import java.io.*;impo... View Questions/Answers
object 2 string April 18, 2010 at 6:21 PM
hi sir,how to convert object to string and how 2 compare object with null value to string with null value plz provide a program 4 this sir,plzzzzzzzzz ... View Questions/Answers
im new to java April 18, 2010 at 12:44 PM
what is the significance of public static void main(string args[])statement ... View Questions/Answers
SQL or UNICODE April 18, 2010 at 11:02
UNICODE or SQL statement issue April 18, 2010 at 11:01
hi April 18, 2010 at 9:29 AM
hi sir,i want to insert a record in 1 table for example sno sname sno1 i want to insert the value of sno as 1,2,3,............ when the time of insertion i want to sno1 consists the same values of sno wh... View Questions/Answers
count asterisk in a grid April 18, 2010 at 3:22 AM
I need to write a program that uses a recursive method to count the number of asterisk in a square grid ... View Questions/Answers
multiple form with multiple function in 1 jsp April 17, 2010 at 11:38 PM
Hi, I'm using Netbean 6.8, mysql, and tomcat for my web application.I was having problem in triggering my jsp with 2 forms as 1 for registration and 1 for log in. I need to trigger each form to call several function when user click buttons of these forms. Eventhough i put these forms into 1 fo... View Questions/Answers
Java swing code April 17, 2010 at 5:53 PM
How to validate input data entered into the swing applications in java? ... View Questions/Answers
Java April 17, 2010 at 5:19 PM
Sir,My code is attached here...When i run the program The gui is visible..When i click on feature button The control goes to classes ShowImage and ColorInformation From ShowImage the calculated values are showing through the method PrintTexturevalues a... View Questions/Answers
what is this.. April 17, 2010 at 4:10 PM
what is the differece between platform independant and platform dependant? ... View Questions/Answers
Java April 17, 2010 at 3:17 PM
write a program in java to illustrate importing classes from other packages? ... View Questions/Answers
Command Button Validation in VB 6.0 April 17, 2010 at 1:11 PM
Hello Sir I want to validate at a once Click on Vb Command Button ,when User First Time Clicks on Print Button then it will print that report and after that when user click second time on that button it shows msg u can Print Report user has already get this Print ,How i can Validate in Vb 6.0 with ... View Questions/Answers
Session tracking in login jsp program April 17, 2010 at 10:13 AM
I have using jsp technology in my project.I want to do session trackingin my login form.After logout when i press back button it should be show session is expired.Please help me.Send jsp code on my email id :kirankadam72@gmail.com ... View Questions/Answers
Transferring values between javascript and java code in jsp April 16, 2010 at 10:59 PM
Is there a way to transfer values between the javascripts and the java code(scriptlet, expressions etc) in a jsp page? ... View Questions/Answers
List of checkboxes April 16, 2010 at 9:24 PM
Hi!I appreciate your effort for giving the response but now it displays only the last value of ArrayList. In other words, it is now displaying just the last time slot. Also, without clicking the checkbox that single time slot gets deleted. I'm resending the modified code=... View Questions/Answers
Passing Parameter April 16, 2010 at 9:04 PM
Hi, it'me again. Below is the set of code that I get from your solution entitled droplist that read from database. It do read from database but in the textbox of the webpage, i need it to show something associated with the data i selected. Therefore, i've modify it into something like below which is... View Questions/Answers
Applets April 16, 2010 at 6:27 PM
Someone please help how to solve this JAVA programsa)An applet that does the following. Takes as input three numbers, computes two numbers and prints these two numbers (interest and amount) out. Have three textfields to input the three numbers (principal, rate and the number of years). With th... View Questions/Answers
hi April 16, 2010 at 2:10 PM ThanQ ... View Questions/Answers
hi April 16, 2010 at 2:08 PM
hi sir,when i am add a jtable record to the database by using the submit button,then nullpointer exception is arised,plz see this program and resolve this sir,plzzzzzz submit=new JButton("SUBMIT"); add(submit); submit.setBounds(400,... View Questions/Answers
get a value when a radio button clicked April 16, 2010 at 1:24 PM
Here is my code sir please help me if any changes in the code please give me sirthanks in advance..//login.jsp<%@ page import="com.antares.servlet.EmployeeControllerServlet" contentType="text/html; charset=ISO-8859-1"%><... View Questions/Answers
get a value when a radio button clicked April 16, 2010 at 1:09 PM
sorry sir my actual probs is that 1st i m callig a jsp in that i m displaying a table where i used a radio button to access a value(also used u r code),then in this jsp i m having a forms to call a servlet,then in a servlet i m using a switch case to call a jsps ,in his switchcase i m calling a DAO... View Questions/Answers
get a radio button click value April 16, 2010 at 12:05 PM
thanks sir for sending code ,but i have one probs that is i m getting a null value i m calling getParameter("id") in servlet that is used in another jsp so can i call getParameter("radio") i.e in the input tag name="radio" eventhough i m getting a null value please he... View Questions/Answers
swings April 16, 2010 at 11:46 AM
hi what is evt.getKeyCode() method , when it value is equal to 10 ... View Questions/Answers | http://www.roseindia.net/answers/questions/230 | CC-MAIN-2016-44 | refinedweb | 4,602 | 60.04 |
requests 2.0.1
flutter http json with cookies
a flutter library to do modern http requests with cookies (inspired by python's requests module).
Server-side cookies (via the response header SET-COOKIE) are stored with the assistance of shared_preferences. Stored cookies will be sent seamlessly on the next http requests you make to the same domain (a simple implementation, similar to a web browser).
Install
Add this to your package's pubspec.yaml file:
dependencies:
  requests: ^2.0.1
Usage
in your Dart code, you can use:
import 'package:requests/requests.dart';
HTTP get, body as plain text
String body = await Requests.get("");
HTTP get, body as parsed json
dynamic body = await Requests.get("", json: true);
HTTP post, body is a map or a list (sent as application/x-www-form-urlencoded unless stated otherwise in the bodyEncoding parameter), result is json
dynamic body = await Requests.post("", json: true, body: {"foo":"bar"} );
HTTP delete
await Requests.delete("");
1.0.0
- Initial version
26 out of 26 API elements have no dartdoc comment. Providing good documentation for libraries, classes, functions, and other API elements improves code readability and helps developers find and use your API.
Format lib/requests.dart. Run flutter format to format lib/requests.dart.
Format lib/src/event.dart. Run flutter format to format lib/src/event.dart.
Format lib/src/requests.dart. Run flutter format to format lib/src/requests.dart.
Maintenance suggestions
Maintain an example. (-10 points)
Create a short demo in the example/ directory to show how to use this package.
Common filename patterns include main.dart, example.dart, and requests.
Web Application Projects 1.0 is live!
New since RC1:
- Team Build Support with VSTS
- Strongly-typed access to resources defined in App_GlobalResources
- Edit and Continue Support (to enable, check the Project Properties Web Tab)
- Numerous bug fixes….
Web Application Projects now provides a development style and compilation model similar to the one used in Visual Studio 2003, but with full ASP.NET 2.0 support.
For more information, check out:
Join the conversation
Hello everybody,
I installed correctly, before update for Visual Studio 2005, then Web Application Project 1.0, on a VSTS installation on Win XP Pro SP2 and on a Visual Studio 2005 Professional installation on Windows Server 2003 SP1.
On VSTS, all projects in "Web Projects" window disappeared. On Visual Studio 2005 Professional, it seems is all ok, but there’s no new Web project to choose?
Anyone else having problems?
Dear Paulo,
I’m sorry to hear that you are having problems with Web Application Projects. When you say all projects in the "Web Projects" window disappeared, do you mean the "Web" Tab in "File/Add New Project"?
This is not a known problem. Did you have an earlier version of Web Application Project installed on the machine that you successfully uninstalled before installing this version?
You can reply to me directly at bash-at-microsoft.com and I’ll be happy to help you with this.
Thanks,
— Bash
Guys.
I don’t think that I’m able to thank you enough for this project template!! Ever since VS 2005 first beta was released I have been under impression that our IDE of choice was slowly dying in favor of former FrontPage users. That now-famous "70% less code" goal seemed to be the only thing that is new in VS 2005. Hard coded folder names, dropped namespaces in page-based classes and things like that drove me crazy. I swear I was thinking of going back to Borland/Java stuff 🙂 In the same time, you kept the desktop projects intact (I saw just real improvements there). And I’m sure I wasn’t the only one out there. But now I see that you have listened. Please don’t scare me again 🙂 In general, VS 2005 users are not as dumb as some of MS project managers might think. And not everyone would appreciate the inability to control every piece of code in exchange for improved drag-and-drop. Btw, the top item on my personal wish list: give me the ability to disable the designer completely.
Thanks again for the Web App project!! | https://blogs.msdn.microsoft.com/webdev/2006/05/08/web-application-projects-is-released/ | CC-MAIN-2019-09 | refinedweb | 429 | 73.07 |
Frontend developers typically don’t need to understand the process of receiving, recording, and removing information. That’s a job for backend developers.
That said, there are plenty of good reasons for a frontend developer to learn about backend programming and database interaction. For instance:
- You’ll stand out from other frontend developers because you’ll know how your application works as a whole
- You’ll be able to work on both the front and back side of your app
- You can be promoted to a full-stack developer and take on a bigger role with a higher salary
- Knowledge of both frontend and backend programming — as well as designing scalable systems and building solid application architecture — is a requirement to be a tech lead
In this tutorial, we’ll demonstrate how to create a small application using Express and Node.js that can record and remove information from a PostgreSQL database according to the HTTP requests it receives. We’ll then create a simple React app to test and see how the entire application flows from front to back.
I’ll assume that you understand how a React application works and are familiar with frontend JavaScript HTTP requests. We won’t cover how to validate data before interacting with the database. Instead, we’ll focus on showing how requests from the interface are recorded into a database.
I published a GitHub repo for this tutorial so you can compare your code if you get stuck. Now let’s get our database running.
Setting up PostgreSQL
PostgreSQL, or Postgres, is a relational database management system that claims to be the world’s most advanced open-source relational database. It has been maintained since 1996 and has a reputation for being reliable and robust.
Start by downloading and installing PostgreSQL. It supports all major operating systems, so choose the right one for your computer and follow the instructions to set up the database. The setup wizard will prompt you to enter a superuser password. Make sure you remember this password; you’ll need it to log in later.
Once the installation is complete, you can access your database by using pgAdmin, a graphical database management tool that is installed automatically with PostgreSQL.
Once opened, pgAdmin will ask for your password to log in. Below is the overview of a newly installed PostgreSQL database.
Creating a Postgres database
To better understand SQL language, we need to create a database and table from the terminal.
To access PostgreSQL from the terminal, use the command
psql with the option
-d to select the database you want to access and
-U to select the user. If the terminal replies that the
psql command is not found, you’ll most likely need to add the Postgres
bin/ and
lib/ directories into your system path.
psql -d postgres -U postgres
You will be asked to input your password. Use the password you created earlier. Once you’re logged in, create a new user with login permission and the password “root.”
CREATE ROLE my_user WITH LOGIN PASSWORD 'root';
A user is simply a role that has login permission. Now that you have one, give it permission to create databases by issuing the
ALTER ROLE [role name] CREATEDB syntax.
ALTER ROLE my_user CREATEDB;
Log out from the postgres superuser with the \q command, then log back in as my_user.
\q
psql -d postgres -U my_user
Now that you’re back inside, create a new database named
my_database.
CREATE DATABASE my_database;
You might be wondering, why can’t we just use the default
postgres user to create the database? That’s because the default user is a superuser, which means it has access to everything within the database. According to the Postgres documentation, “superuser status is dangerous and should be used only when really needed.”
An SQL-based database stores data inside a table. Now that you have a database, let’s create a simple table where you can record your data.
CREATE TABLE merchants(
  id SERIAL PRIMARY KEY,
  name VARCHAR(30),
  email VARCHAR(30)
);
One database can have multiple tables, but we’ll be fine with one table for this tutorial. If you’d like to check the created database and table, you can use the \list and \dt commands, respectively. You might see more or fewer rows, but as long as you have the database and the table you created previously, your output should look like this:
my_database=> \list
                List of databases
    Name     |  Owner   | Encoding
-------------+----------+----------
 my_database | my_user  | UTF8
 postgres    | postgres | UTF8
 template0   | postgres | UTF8
 template1   | postgres | UTF8

my_database=> \dt
         List of relations
 Schema |   Name    | Type  |  Owner
--------+-----------+-------+---------
 public | merchants | table | my_user
Now you have a table into which you can insert data. Let’s do that next.
Basic SQL queries
Postgres is an SQL-based system, which means you need to use the SQL language to store and manipulate its data. Let’s explore four basic examples of SQL queries you can use.
1. Select query
To retrieve data from a table, use the
SELECT keyword, followed by the names of the columns you want to retrieve and the name of the table.
SELECT id, name, email from merchants;
To retrieve all columns in the table, you can simply use
SELECT * from merchants;
2. Insert query
To insert new data into a table, use the
INSERT keyword followed by the table name, column name(s), and values.
INSERT INTO merchants (name, email) VALUES ('john', '[email protected]');
3. Delete query
You can delete a row from a table by using the
DELETE keyword.
DELETE from merchants WHERE id = 1;
When you use the delete query, don’t forget to specify which row you want to delete with the
WHERE keyword. Otherwise, you’ll delete all the rows in that table.
4. Update query
To update a certain row, you can use the
UPDATE keyword.
UPDATE merchants SET name = 'jake', email = '[email protected]' WHERE id = 1;
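One caveat before moving on: the literal values in the queries above are fine for typing by hand, but application code should bind values through positional placeholders ($1, $2, ...) instead of concatenating them into the SQL string, which prevents SQL injection. The JavaScript sketch below shows how placeholders line up with a values array; buildInsert is a hypothetical helper for illustration, not part of any library:

```javascript
// Hypothetical helper: build a parameterized INSERT whose $1, $2, ...
// placeholders correspond, by position, to a separate array of values.
function buildInsert(table, columns) {
  const placeholders = columns.map((_, i) => `$${i + 1}`).join(', ')
  return `INSERT INTO ${table} (${columns.join(', ')}) VALUES (${placeholders}) RETURNING *`
}

const sql = buildInsert('merchants', ['name', 'email'])
console.log(sql)
// INSERT INTO merchants (name, email) VALUES ($1, $2) RETURNING *
```

A database driver then receives the statement and the values separately, for example query(sql, ['john', 'john@example.com']), so user input is never spliced into the SQL text.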
Now that you know how to manipulate data inside your table, let’s examine how to connect your database to React.
Creating an API server with Node.js and Express
To connect your React app with a PostgreSQL database, you must first create an API server that can process HTTP requests. Let’s set up a simple one using NodeJS and Express.
Create a new directory and set a new npm package from your terminal with the following commands.
mkdir node-postgres && cd node-postgres
npm init
You can fill in your package information as you like, but here is an example of my
package.json:
{ "name": "node-postgres", "version": "1.0.0", "description": "Learn how NodeJS and Express can interact with PostgreSQL", "main": "index.js", "license": "ISC" }
Next, install the required packages.
npm i express pg
Express is a minimalist web framework you can use to write web applications on top of Node.js technology, while
node-postgres (pg) is a client library that enables Node.js apps to communicate with PostgreSQL.
Once both are installed, create an
index.js file with the following content.
const express = require('express')
const app = express()
const port = 3001

app.get('/', (req, res) => {
  res.status(200).send('Hello World!');
})

app.listen(port, () => {
  console.log(`App running on port ${port}.`)
})
Open your terminal in the same directory and run node index.js. Your Node application will run on port 3001, so open your browser and navigate to http://localhost:3001. You'll see "Hello World!" text displayed in your browser.
You now have everything you need to write your API.
Making NodeJS talk with Postgres
The pg library allows your Node application to talk with Postgres, so you'll want to import it first. Create a new file named merchant_model.js and input the following code.
const Pool = require('pg').Pool
const pool = new Pool({
  user: 'my_user',
  host: 'localhost',
  database: 'my_database',
  password: 'root',
  port: 5432,
});
Please note that putting credentials such as user, host, database, password, and port like in the example above is not recommended in a production environment. We’ll keep it in this file to simplify the tutorial.
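As one possible alternative (a sketch, not part of the original tutorial), the credentials can be read from environment variables instead of being hardcoded. The PG* variable names below are the conventional ones recognized by node-postgres; the fallback values just mirror the example above.

```javascript
// Build the Pool config from environment variables, falling back to the
// tutorial's example values. No secret needs to live in the source file.
const config = {
  user: process.env.PGUSER || 'my_user',
  host: process.env.PGHOST || 'localhost',
  database: process.env.PGDATABASE || 'my_database',
  password: process.env.PGPASSWORD || 'root',
  port: parseInt(process.env.PGPORT || '5432', 10),
};
// const pool = new Pool(config); // same Pool as above, minus hardcoded secrets
```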
The pool object you created above will allow you to query the database it's connected to. Let's create three queries that make use of this pool. These queries will be placed inside functions, which you can call from your index.js.
const getMerchants = () => {
  return new Promise(function(resolve, reject) {
    pool.query('SELECT * FROM merchants ORDER BY id ASC', (error, results) => {
      if (error) {
        reject(error)
      }
      resolve(results.rows);
    })
  })
}

const createMerchant = (body) => {
  return new Promise(function(resolve, reject) {
    const { name, email } = body
    pool.query('INSERT INTO merchants (name, email) VALUES ($1, $2) RETURNING *', [name, email], (error, results) => {
      if (error) {
        reject(error)
      }
      resolve(`A new merchant has been added: ${results.rows[0]}`)
    })
  })
}

const deleteMerchant = () => {
  return new Promise(function(resolve, reject) {
    const id = parseInt(request.params.id)
    pool.query('DELETE FROM merchants WHERE id = $1', [id], (error, results) => {
      if (error) {
        reject(error)
      }
      resolve(`Merchant deleted with ID: ${id}`)
    })
  })
}

module.exports = {
  getMerchants,
  createMerchant,
  deleteMerchant,
}
The code above will process and export the getMerchants, createMerchant, and deleteMerchant functions. Now it's time to update your index.js file and make use of these functions.
const express = require('express')
const app = express()
const port = 3001

const merchant_model = require('./merchant_model')

app.use(express.json())
app.use(function (req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', '');
  res.setHeader('Access-Control-Allow-Methods', 'GET,POST,PUT,DELETE,OPTIONS');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Access-Control-Allow-Headers');
  next();
});

app.get('/', (req, res) => {
  merchant_model.getMerchants()
  .then(response => {
    res.status(200).send(response);
  })
  .catch(error => {
    res.status(500).send(error);
  })
})

app.post('/merchants', (req, res) => {
  merchant_model.createMerchant(req.body)
  .then(response => {
    res.status(200).send(response);
  })
  .catch(error => {
    res.status(500).send(error);
  })
})

app.delete('/merchants/:id', (req, res) => {
  merchant_model.deleteMerchant(req.params.id)
  .then(response => {
    res.status(200).send(response);
  })
  .catch(error => {
    res.status(500).send(error);
  })
})

app.listen(port, () => {
  console.log(`App running on port ${port}.`)
})
Now your app has three HTTP routes that can accept requests. The code from line 7 is written so that Express can accept incoming requests with JSON payloads. To allow requests to this app from React, I also added headers for Access-Control-Allow-Origin, Access-Control-Allow-Methods, and Access-Control-Allow-Headers.
Creating your React application
Your API is ready to serve and process requests. Now it’s time to create a React application to send requests into it.
Let's bootstrap your React app with the create-react-app command.
npx create-react-app react-postgres
In your React app directory, you can remove everything inside the src/ directory.
Now let’s write a simple React app from scratch.
First, create an App.js file with the following content.
import React, {useState, useEffect} from 'react';

function App() {
  const [merchants, setMerchants] = useState(false);

  useEffect(() => {
    getMerchant();
  }, []);

  function getMerchant() {
    fetch('http://localhost:3001')
      .then(response => {
        return response.text();
      })
      .then(data => {
        setMerchants(data);
      });
  }

  function createMerchant() {
    let name = prompt('Enter merchant name');
    let email = prompt('Enter merchant email');
    fetch('http://localhost:3001/merchants', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({name, email}),
    })
      .then(response => {
        return response.text();
      })
      .then(data => {
        alert(data);
        getMerchant();
      });
  }

  function deleteMerchant() {
    let id = prompt('Enter merchant id');
    fetch(`http://localhost:3001/merchants/${id}`, {
      method: 'DELETE',
    })
      .then(response => {
        return response.text();
      })
      .then(data => {
        alert(data);
        getMerchant();
      });
  }

  return (
    <div>
      {merchants ? merchants : 'There is no merchant data available'}
      <br />
      <button onClick={createMerchant}>Add merchant</button>
      <br />
      <button onClick={deleteMerchant}>Delete merchant</button>
    </div>
  );
}

export default App;
This React app will send requests to the Express server you created. It has two buttons to add and delete a merchant. The function getMerchant will fetch merchant data from the server and set the result to the merchants state. createMerchant and deleteMerchant will start the process to add and remove merchants, respectively, when you click on the buttons.
Finally, create an index.js file and render the App component.
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

ReactDOM.render(<App />, document.getElementById('root'));
Now run your React app with npm start. You can test and see how the data collected from your React application is recorded into PostgreSQL. I'll leave you the implementation of the UPDATE query as an exercise.
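If you want a starting point for that exercise, here is one possible sketch that follows the same pattern as merchant_model.js. The helper below only builds the query text and parameter array, so the shape is easy to see; the function and route names are my own, not from the original post.

```javascript
// Build the parameterized UPDATE statement for a merchant.
function buildUpdateQuery(id, body) {
  const { name, email } = body;
  return {
    text: 'UPDATE merchants SET name = $1, email = $2 WHERE id = $3 RETURNING *',
    values: [name, email, id],
  };
}

// In merchant_model.js this could be wired up as (assumes the pool from above):
//   const updateMerchant = (id, body) => {
//     return new Promise(function (resolve, reject) {
//       const { text, values } = buildUpdateQuery(id, body);
//       pool.query(text, values, (error, results) => {
//         if (error) { reject(error) }
//         resolve(`Merchant updated with ID: ${id}`)
//       })
//     })
//   }
// plus an app.put('/merchants/:id', ...) route in index.js.
```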
Conclusion
Now you know how to install PostgreSQL, create a database and table, and build a minimal API to function as a bridge between your React app and your database. We created an end-to-end example of how to use Postgres with React and demonstrated exactly what happens when you send those HTTP requests from your React app.
This tutorial is far from a complete guide to backend programming, but it’s enough to help you get started in understanding how the backend side works.
Should you find any errors or be unable to insert data from the application, you can clone the complete source code from this repo and compare the code from the repo with yours.
25 Replies to “Getting started with Postgres in your React app”
As a DBA I have to say that mentioning the wild card * in a select without mentioning the problems it can cause is a rookie mistake.
Never select more than you need, especially if the given table contains indexes that might be used when you select only the required columns (which are also present in the index). Relying on such an index can make the difference between a query running in 20 seconds and one running in less than 200ms when an unnecessary key lookup occurs.
Never use * in production, especially when defining views. They tend to parse the * into the actual columns at the time the view is created, meaning that if you add another column to the underlying table the view won't return it, and if you remove a column some RDBMS will not validate the dependency and will cause an error when the view is queried.
I am getting a warning when I render the react page: functions are not valid as a React child.
I would like to suggest that you add a step to your tutorial. After `my_database` is created, you will want to change into the database before you create the table with the following command:
`\c my_database`
You have a mistake in:

const deleteMerchant = () => {
  return new Promise(function(resolve, reject) {
    const id = parseInt(request.params.id)
    pool.query('DELETE FROM merchants WHERE id = $1', [id], (error, results) => {
      if (error) {
        reject(error)
      }
      resolve(`Merchant deleted with ID: ${id}`)
    })
  })
}

it should be:

const deleteMerchant = (id) => {
  return new Promise(function(resolve, reject) {
    pool.query('DELETE FROM merchants WHERE id = $1', [id], (error, results) => {
      if (error) {
        reject(error)
      }
      resolve(`Merchant deleted with ID: ${id}`)
    })
  })
}
Thanks for the rest of the tutorial!
Cheers,
Nathan
Excellent tutorial!! I learned a lot.
I just had one small query.
I think the delete method is not working. I tried as Nathan pointed out but that doesn’t seem to work either
When I try to install react-postgres, I receive this error:
npm ERR! code E404
npm ERR! 404 Not Found – GET – Not found
npm ERR! 404
npm ERR! 404 ‘[email protected]’/daniel/.npm/_logs/2020-11-09T23_53_15_619Z-debug.log
Install for [ 'react-postgres@latest' ] failed with code 1
Perhaps the code is badly written?
Hey there, `react-postgres` isn’t a library. It’s the name the author gave to the CRA project he created with the `npx create-react-app` command.
Thanks! Now I can solve this problem.
I copied Nathan’s into mine and it still worked. What is the error message?
Hi there,
I’ve been trying to follow along with this tutorial in a project I’m working on, but
when I try to use the pg library, I receive the following error:
./node_modules/pg/lib/native/client.js
Module not found: Can’t resolve ‘pg-native’
I checked and it’s in the folder listed, but when I try to run “index.js” in that folder, it returns a syntax error. I haven’t made any changes to the library…so I’m not sure why it’s doing it.
Here’s a link to my code:
Any help would be greatly appreciated!
Hi Kara, as you can see in my source code here, the code to connect to the Postgres database is run under a Node server, separated from the React code. You need to create a different project folder. Try cloning my code and running it first. Hope it helps!
hi – i am getting a ‘Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at. (Reason: CORS request did not succeed).’ error. Please let me know how I can overcome the CORS issue?
I am on a Ubuntu machine.
Hi.. I followed the exact steps. but still could not access the pgsql. pgsql server is running and is able to insert data into it directly via console. could not insert data from application. there is no error! hence couldn’t find a way to make it working
Same here.
Same here. Did you find any solution?
Hi I followed the same tutorial. But when I’m trying to insert the data or delete the data. I am not able to do that operations. Checked for errors didn’t find anything. Can you please help me out?
I think you can try to inspect the source code provided for the tutorial here and compare it with yours:
You can also clone the project into your computer and try to run it.
Hi Nathan, still I am not able to insert the data. When I inspect on the app screen it shows me an error on console that App.js:18 POST net::ERR_CONNECTION_REFUSED and when I try to do on postman for post method it shows title error with code 404 Cannot POST /
Did you run the Node server? Head to the node-postgres folder and execute `node index.js` command from the terminal to start the server on localhost:3001
Thank you Nathan. It worked when I run the node server. But When I’m inserting the data which is post request it’s inserting the null values even though I gave the data.
That probably means the value of `name` and `email` from the POST body is NULL.
Did you fill the prompt dialog box with values?
You need to make sure that React is sending a POST request with the right values.
You can see the function you need to inspect here:
At line 29, the `body` property must be assigned a JSON that contains `name` and `email`
Hi Nathan, I figured it out. I was not running the node server; I was just running the React app, so whatever values I inserted were null. Then I installed Nodemon, which keeps the node server running automatically even when we make changes on the server side.
Hi Nik, glad to hear that!
Yeah, Nodemon is definitely nice to have when you’re developing a Node-based application because `node server` command doesn’t apply the code changes when you hit the save button.
Hi Nathan, I have a question: when I inserted the data for the first time it inserted null values because I was not running the node server. But if my server is not running, how is it getting into the database?
Actually I have no idea 🙂
Maybe you insert the data from Postman or Postgres command line? | https://blog.logrocket.com/getting-started-with-postgres-in-your-react-app/ | CC-MAIN-2022-05 | refinedweb | 3,232 | 56.25 |
Problem: How to Parse a JSON object as a Python One-Liner?
Example: Say, you have pulled a JSON object from a server using the curl command:
curl -s | python -mjson.tool
{
    "continent_code": "EU",
    "country": "Netherlands",
    "country_code": "NL",
    ...
    "timezone": "Europe/Amsterdam"
}
How do you parse the resulting JSON object and extract, say, the value "Netherlands" associated with the "country" key?
Solution: The following one-liner accomplishes this:
curl -s | python -c 'import sys, json; print(json.load(sys.stdin)["country"])'
The command consists of multiple parts. Let’s go over them in a step-by-step manner:
curl -s — extracts the JSON file from the server. The -s flag stands for "silent" and simply suppresses unnecessary information printed to the standard output.
curl ... | ... — the pipe operator | redirects the output of one program and uses it as input to another program. In our example, it takes the JSON object and writes it to the standard input of the Python program described next.
python -c '...' — runs the Python program enclosed in the single-quote environment.
import sys, json; — imports the modules sys and json. The trailing semicolon indicates that this expression is finished and the next expression (what you would call a line in a multi-line Python script) follows.
print(json.load(sys.stdin)["country"]) — loads the JSON object from the standard input to which the JSON object was piped in step 2. Then, the dictionary key "country" is used to access the resulting value "Netherlands". The result is printed to the standard output.
Thus, the result is the string "Netherlands" in our example JSON file.
An alternative to this method is the jq library. You can find a full tutorial on this excellent website.
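For comparison only (not part of the original recipe), the same extraction can be sketched in Node.js, if that happens to be installed instead:

```javascript
// Node equivalent of json.load(sys.stdin)["country"], as a one-liner:
//   curl -s <url> | node -e 'let d="";process.stdin.on("data",c=>d+=c).on("end",()=>console.log(JSON.parse(d).country))'
// The core step as a plain function:
function extractField(jsonText, key) {
  return JSON.parse(jsonText)[key];
}
console.log(extractField('{"country": "Netherlands"}', 'country')); // Netherlands
```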
Addon for CoC version 1.4.22. Addon version 0.3.2 - minor fix: corrected the display of icons in the adaptations.
Now money has a physical form, which makes it possible to hide it in stashes, drop it, or throw it into an anomaly.
This feature is most useful in the "Azazel mode": now you can throw money into a stash and not lose it on death.
It was made just for lulz.
Thanks:
* Korbut for the idea of the mod
* The AMK team for the model and texture from the AMK 2009 build
* Comrade Fantom2323 for the basic script functions, which I reworked to fit my needs
* Sinaps for the great use sounds
* Gis and Музыкант for the help with textures
* Noneym for the files for adaptation
To get the English version, go to gamedata\configs\misc, delete the money.ltx file, and rename money - eng.ltx to money.ltx.
How does it work? By default you have a purse in your backpack. When you use it, it produces a banknote and deducts its face value from your balance. When you use a banknote, it disappears and your balance is topped up by its face value.
Adaptation instructions for any mods that do not affect trade and textures:
* Go to your bind_stalker_ext.script (GameData\Scripts) and find actor_on_update(binder,delta). Before the last end, insert
money.update(binder,delta)
Next, look for actor_on_item_drop(binder,item) and before its last end insert money.drop_update(binder, item)
Now open your items.ltx (GameData\config\Misc) and at the end of the file add
#include "money.ltx"
Adaptations are ready for AO3, STCoP, and Outfit Addon (+AO3, +STCoP). To install them, first install the main mod, and then copy the files for adaptation from the corresponding folder with replacement.
That's all, the mod is adapted. If there are problems with ui_icon_equipment incompatibility, either paint it yourself (it is not difficult) or write to me and I will adapt it for you; you will only need to send me the file and name the set of mods you use. Good luck)
lulz
I knew there was something missing in this mod.
Now it's clear what.
Thanks man.
For those who didn't know:
This mod requires merging (as in, edit the original files rather than replacing them) if you use STCoP/AO (not sure about OWR) or something that affects the traders.
Now that those have been warned, go download it and give it a go. It is quite useful even without using the Azazel mode.
Could you send me the ui_icon_equipment and trader files from STCoP, AO or OWR, so that I could adapt them?
Sure! I only have those from OWR & STCoP. Fortunately, OWR doesn't require changing the trader files :).
This should be implemented in CoC vanilla IMHO.
Can be and will be
That's great news! And why are you so sure, if I may ask?
I am not sure, I just stated an assumption. It was perhaps expressed incorrectly and therefore understood incorrectly.
Me want
Update the screenshot.
I'll update it later.
I'm sorry but I still am having trouble with the mod changing the language to Russian.
"That there was English it is necessary to come to pass on the way of (gamedata\configs\misc),
to remove the money.ltx file and to rename money - eng.ltx into money.ltx"
Ok what I'm gathering here is I delete the money file and change the item list to include money - eng.ltx
This still doesn't work. Please can I get a better translated instruction?
Do you need the Russian language? Then you don't need to replace money.ltx with money - eng.ltx; just leave everything as it is.
I thought I described everything clearly)
So your instructions are to just put the files in game data folder and that's it? The game will work with English Language by default?
No, by default the mod is in Russian. So you need not only to copy the files into gamedata, but also to change the items config.
So I can automatically get money from bodies but the physical money item can't be looted from their inventory. I can't find the files you are mentioning to adapt it to work with STCoP in my gamedata folder, the 'bind_stalker_ext.script' or 'items.ltx'. Has something changed when using this mod with 1.4.22?
This version is made especially for 1.4.22
Can you tell me about the problem in more detail? Did I understand correctly that you have the Lootmoney mod installed?
OK so I have installed 1.4.22 and a few other mods (STCoP, Outfit Addon and Smurth's HUD) and I installed your mod as normal, dropping the file into my gamedata folder and while when I go into the inventory of a corpse I get the 'found xx roubles' that shows up as a dialogue box at the bottom left of the screen I cannot see any of the notes in their inventory that can be transferred to mine.
Possibly the Lootmoney mod is installed in your build. There is nothing like that in my mod at all.
To receive money physically you need to use the purse in your inventory.
Sorry for my bad English.
Oh OK so where can I get the purse? I didn't notice any purse while starting a new character while playing.
The purse has to be in your inventory. It is 1x1 cell in size and brown in color. It is always in your backpack under any circumstances, so you won't lose it. Just look more attentively.
Does anyone still have the money mod in a version no higher than 0.3.1?
I'm interested in the Trade folder from it. The cost of money at the traders is annoying: they buy 1000 RU for 10 RU.
On the other hand, why sell them at all, if the mechanism already converts money into a more convenient form?
One way or another, here is a link to version 0.2.1
Yadi.sk
CoC is one, but everyone plays their own game. Thanks.
You're welcome
I just got this and played the game for a few minutes with it installed, and everything seems to be working properly.
The only problem is between this and the SKAT 9M armored suit mod. When I installed this mod, the suit became invisible in my inventory. | https://www.moddb.com/mods/call-of-chernobyl/addons/realmoney | CC-MAIN-2018-30 | refinedweb | 1,097 | 84.27 |
Good point, it seems like they are not listed on the Scala 2.10.2 download
page.
But they are actually there, you can browse the download directory directly
to get them for any version that has them (2.9.2 and above).
For 2.10.2:
scala-2.10.2.deb
scala-2.10.2.rpm
I created an issue to track this. Edit: this is now fixed, the download
pages have been updated.
From the docs, you can use the:
--use-mirrors --mirrors <url>
flag in pip to specify which mirror to use.
From the command line, you can also specify mirrors. For example:
pip install -i $PACKAGE
Your best bet would be to download one (like Parallel Application
Developers) and then use Help>Install New Software to install CDT
(C/C++).
There's no way to combine them if you download both, and no way to download
everything at the same time (since you need at least 2 separate downloads).
There's a good information page on the different package contents.
"/usr/local/lib/node_modules" would be a UNIX (linux) path. I haven't tried
running nodejs on windows but that path should be fine.
sudo will make your command run as admin in linux, in windows you might
want to try just "npm install -g sax".
You need to use --prefer-dist. And if there is a dist version for your
repository it will be downloaded. You also can use --no-dev flag to exclude
libraries that listed in require-dev section of packages. These libraries
maybe useful only for development.
The pep381 client uses http, but PyPI can only be accessed over https. This program does not implement a function to redirect the url.
I modified the program like this.
file: (YOUR INSALLED DIRECTORY)/pep381client/__init__.py
9: -BASE = 'http://'+pypi
+BASE = 'https://'+pypi
28: - _conn = httplib.HTTPConnection(pypi)
+ _conn = httplib.HTTPSConnection(pypi)
37: - _conn = httplib.HTTPConnection(pypi)
+ _conn = httplib.HTTPSConnection(pypi)
Going to shows that this package
isn't available.
It's too bad, you may need to contact your package developer or scour the
website where you got it from to find out what this dependency actually is.
Can you let us know where you got the package from?
I don't know of a framework that does just that, but it's not too hard to implement it yourself.
Make an anchor tag that redirects the user to the download.
Bind the click event of that anchor using the jQuery .on function.
In the JS function, send an ajax request with the serialised info through the .ajax function.
When it completes, redirect the user using the built-in location.assign('/dl.php?f=123.mp3') JS function.
Does that help? (if not, don't vote me down, just ask further and I'll help)
I don't think there is a tool to generate a package for Magento Connect from a git repository right now. You have to install your module and package it in Magento itself. Here is the official tutorial pdf.
I have a (free) tool to do this for Google & Amazon app stores, and am
currently adding support for Apple (30 June 2014). It allows you to
apportion sales by app or individual in-app purchases, regardless of
currency. This is currently quite difficult for Amazon and Apple
especially. You can find the code here:. You'll need Ruby installed to
run it.
Rename myprogram\xlrd to myprogram\frozen_xlrd.
Then import it with
try:
import xlrd
except ImportError:
import frozen_xlrd as xlrd
Alternatively, you could tell Python to silently ignore this particular
UserWarning:
import warnings
warnings.filterwarnings("ignore",
message="Module xlrd was already imported",
category=UserWarning)
Place this early on in the program, before scikits gets imported.
You can download the certificates using the Internet Explorer browser. Firefox, Chrome, and other browsers do not support the download.
The important thing is:
Use "111111111" as your Tax ID for your test account. For production
stores you will be required to supply your actual SSN or
Tax ID submitted in the merchant account boarding process.
"if i paste the downloadUrl in the browser it won't give any result" is
correct because when you GET using a browser there is no authentication
header. If you check the status you will see a 401 error.
I use "Authorization: 'OAuth ' + gapi.auth.getToken()['access_token']"
rather than 'Bearer'. I'm not sure if that is significant.
Are you sure the downloadUrl is current? This is a short-lived attribute,
so it's possible the link has expired.
It's also possible the access token is invalid. As Burcu said, your answer
is probably within the response and status to your xhr get.
You better use Distinct like below
Parallel.ForEach(remotefiles.Distinct(), file => DownloadFile(sfc,
file));
If you have duplicate file names, then when parallel processing starts on the same file you will get an exception for those duplicate files.
Also, you are not downloading to another location; what you are doing is downloading to the same ftp source location. Is that correct?
I would give a different download directory, get the file name from the source file, and then download to that location as below
private void DownloadFile(SftpClient sf, string RemoteFileName)
{
    string downloadTo = Path.Combine(DownloadDirectoryPath, Path.GetFileName(RemoteFileName));
    using (Stream ms = File.OpenWrite(downloadTo))
    {
        sf.DownloadFile(RemoteFileName, ms);
    }
}
You can download a file directly from your download.php by specifying a custom header and using readfile to include the file. Check out this stack question:
question:
How to return a file in PHP
Your download code looks ok and I do not see any reason for a race
condition, so without deeper analysis of your download code, I can tell you
that this code (client side) is not the problem. It must be the server side
which causes the problem, i.e. your webserver or a proxy or firewall which
is in between.
Moreover, maybe you should look at JNLP, which is a technology designed
exactly for your problem of keeping an application always updated and
checking this at startup.
There's nothing particularly special about the X-Accel-Mapping header. Perhaps the page makes the HTTP request with ajax, and uses the X-Accel-Mapping header value to trigger the download?
Here's how I'd do it with urllib2:
response = urllib2.urlopen(url_to_get_x_accel_mapping_header)
download_url = response.headers['X-Accel-Mapping']
download_contents = urllib2.urlopen(download_url).read()
You can download any file using AsyncTask.
downloadContent();
private void downloadContent() {
DownloadFile downloadFile = new DownloadFile();
downloadFile.execute("");
}
// usually, subclasses of AsyncTask are declared inside the activity class.
// that way, you can easily modify the UI thread from here
private class DownloadFile extends AsyncTask<String, Integer, String>
{
@Override
protected String doInBackground(String... sUrl) {
try {
URL url = new URL(sUrl[0]);
URLConnection connection = url.openConnection();
connection.connect();
int fileLength = connection.getContentLength();
            InputStream input = new BufferedInputStream(connection.getInputStream());
            OutputStream output = new FileOutputStream("/sdcard/downloaded.file"); // example destination path
            byte[] data = new byte[1024];
            long total = 0;
            int count;
            while ((count = input.read(data)) != -1) {
                total += count;
                publishProgress((int) (total * 100 / fileLength)); // report progress as a percentage
                output.write(data, 0, count);
            }
            output.flush();
            output.close();
            input.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }
}
The URI that you are using is probably wrong. You are using the URI that
opens the popup page. The popup page should be doing another GET to the
dynamically generated file.
To automate this process, you should use a WebRequest to get the contents
of the popup page. Scrape the contents of the page to get the actual URL to
download the file. Then use the code you have written to download the file.
var request = WebRequest.Create("PopupUrl");
var response = request.GetResponse();
string url = GetUrlFromResponseByRegExOrXMLParsing();
var webClient = new WebClient();
webClient.DownloadFile(url, filePath);
Enable mod_rewrite and .htaccess through httpd.conf and then put this code
in your .htaccess under DOCUMENT_ROOT/subf/subf2/ directory:
Options +FollowSymLinks -MultiViews
# Turn mod_rewrite on
RewriteEngine On
RewriteBase /subf/subf2/
RewriteRule ^(download)/(\w+)/?$ $1.php?token=$2 [L,QSA,NC]
# to take care of css, js, images
RewriteCond %{REQUEST_FILENAME} -f
RewriteCond %{REQUEST_URI} \.php$
RewriteRule ^download/(.+)$ /subf/subf2/$1 [L,R=301,NC]
//This will help you
<?php
$fullpath = '';
$filename ='test.exe';
header("Cache-Control: public, must-revalidate");
header("Content-Type: application/exe");
header("Content-Length: " .(string)(filesize($fullpath)) );
header('Content-Disposition: attachment; filename="'.$filename.'"');
readfile($fullpath);
?>
MIME type informations
Sounds like Azure Mobile Service is something that you should check out.
e.g. the following link talks about how to integrate with Azure blob storage. Then there is the Live SDK (SkyDrive) API, which is RESTful, so it seems you can use Azure Mobile Services to do the job you requested. The service also allows you to run scheduled jobs.
The reason you are getting this error is that your iOS app is using
OpenTok's flash stack and your website is using the WebRTC stack. OpenTok's
flash and Webrtc services are not interoperable and you would need to stick
to one.
Your iOS is currently streaming to a flash media server. You website is
trying to use WebRTC library, which is trying to establish a socket
connection with the flash server, thus giving you the TB.Socket error.
What you should do is to stick to one stack.
Sometimes in Xcode, simply deleting the Flash framework and dragging in the WebRTC framework will not work. You would have to go to Project Navigator -> Project -> Build Settings -> Framework Search Paths and update them accordingly.
Virtualenv installs python, but it's installed in the bin directory of the
virtualenv you created. Therefore you need to run it with ./bin/python.
You can also "activate" the virtualenv by running
source bin/activate
Which will put the virtualenvs bin directory first in the path (and do some
other trickery I think) which will make the virtualenvs Python the default
Python, so you can start it with just python. But this is not necessary.
The issue did end up being a firewall/security group issue. While it is
true that the jmx port 7199 is used, apparently other ports are used
randomly for rmi. Cassandra port usage - how are the ports used?
So the solution is to open up the firewalls then configure the
cassandra-env.sh to include
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=<ip>"
After doing more research I found that this is the intended behavior of the
bridge mode I selected.
Guest can reach outside network, but can't reach host (macvtap)
Sadly, SQL Server and MySQL don't share anything but SQL and some syntax. The connection protocols aren't the same, so the node-mysql package won't work for you.
You can find the SQL Server driver on this MSDN page, with the explanation of how to use it, or directly on the GitHub page of the project.
You only need to set it with one or the other, though you can pass one to each in order to give them different secrets to use.
The difference between them is in their so-to-say "greediness" with it.
session(secret) will keep the secret to itself, only using it for the
cookie holding the session ID.
cookieParser(secret), on the other hand, will allow for any cookie to be
signed.
You can create signed cookies with Express' response.cookie().
Signed cookies are also supported through this method. Simply pass the
signed option. When given res.cookie() will use the secret passed to
express.cookieParser(secret) to sign the value.
res.cookie('name', 'tobi', { signed: true });
Later you may access this value through the req.signedCookies object.
I can see two issues here.
You are trying to make a network call in the main thread, which is not accepted from Android 4.0. So you'd better do it in a background thread. Even if you are developing an application for lower than Android 4.0, I would recommend doing it in a background thread.
You should give the full server address starting with "http://".
If you are able to connect using the IP, you should get connected via the server name.
Try including the MIME in the return:
$file = storage_path().'/file/' . $file->id . "." .
$file->file->extension;
return Response::download($file, 200, array('content-type' =>
'image/png'));
May be your corporate environment will block the Web Service.
Can you please try the following?
Add the Firewall outbound rule to allow to access the web service
Add the web service as trusted URL in your anti-virus (or just pause the
protection of anti virus to identify the cause of issue)
Add the domain/ip address as trusted in your IE
You are using the transferFrom() method as a one-liner. However, I think
you should check whether more data is available. From the API:
An attempt is made to read up to count bytes from the source channel and
write them to this channel's file starting at the given position. An
invocation of this method may or may not transfer all of the requested
bytes; whether or not it does so depends upon the natures and states of the
channels. Fewer than the requested number of bytes will be transferred if
the source channel has fewer than count bytes remaining, or if the source
channel is non-blocking and has fewer than count bytes immediately
available in its input buffer.
A similar question was also posted that may help you.
The answer would be to pick one: either render the control to a string and
output to Response OR WriteAllText. If you want to use the WriteAllText to
save the file to a determined location, then perform the action and pop up
a notification to the user that the file was saved.
window.location.href = 'excel.php?param=123';
at the end of excel.php simply output excel file contents
header("Content-Disposition: attachment; filename="excel.xls");
header("Content-Type: application/force-download");
readfile('excel.xls')
die();
Here is a tutorial, it covers it all from scratch.
and this one
Find your socket file using
mysqladmin variables | grep socket
And then add the socket file to your database.yml configuration
development:
adapter: mysql2
host: localhost
username: root
password: xxxx
database: xxxx
socket: <socket location>
This is most likely an issue with your import statement. If you have...
import TorCtl
... then try replacing that with...
from TorCtl import TorCtl
All this said, you might want to look into Stem. TorCtl was deprecated in
December, 2012 in favour of Stem.
Cheers! -Damian
I figured out a solution. If anyone knows of a better approach for this,
let me know. Here's how I did it...
First expand the package using pkgutil like so...
pkgutil --expand mypackage.pkg mypackage
This will explode the package contents into the folder "mypackage". Inside
of this folder, create a new folder called "Plugins". Inside of this,
you'll place both the plugin bundle as well as the plugin's
InstallerSections.plist file. The InstallerSections.plist file is
important. Without it, the plugin won't appear.
Once you've updated the directory structure, you can flatten it back to a
flat package like so...
pkgutil --flatten mypackage mypackage.pkg
That's what worked for me.
?detach explicitly rules out supplying a character vector (as opposed to
scalar, ie more than one library to be detached) as its first argument, but
you can always make a helper function. This will accept multiple inputs
that can be character strings, names, or numbers. Numbers are matched to
entries in the initial search list, so the fact that the search list
dynamically updates after each detach won't cause it to break.
mdetach <- function(..., unload = FALSE, character.only = FALSE, force =
FALSE)
{
path <- search()
locs <- lapply(match.call(expand=FALSE)$..., function(l) {
if(is.numeric(l))
path[l]
else l
})
lapply(locs, function(l)
eval(substitute(detach(.l, unload=.u, character.only=.c, force=.f),
list(.l=l, .u=unl
package Sudoku::Utils; ⇒ in Sudoku/Utils.pm file ⇒ use Sudoku::Utils;
package Sudoku; ⇒ in Sudoku.pm file ⇒ use Soduku; | http://www.w3hello.com/questions/Can-39-t-connect-to-download-packages | CC-MAIN-2018-17 | refinedweb | 2,576 | 59.09 |
Terms defined: box-and-whisker plot, jitter, learner persona, median, percentile
Programmers frequently make claims about software development practices, the relative merits of different languages, and the causes of bias in the tech sector, but most programmers are never taught how to find, analyze, and interpret actual evidence on those topics. We have therefore created an introduction to data science for software engineers that uses software engineering questions and data to motivate common statistical tools and methods.
A quick example
Are some programmers really ten times more productive than average? To find out, Prechelt2000 had N programmers solve the same problem in the language of their choice, then looked at how long it took them, how good their solutions were, and how fast those solutions ran. The data is available online, and the first few rows look like this:
person,lang,z1000t,z0t,z1000mem,stmtL,z1000rel,m1000rel,whours,caps s015,C++,0.050,0.050,24616,374,99.24,100.0,11.20,?? s017,Java,0.633,0.433,41952,509,100.00,10.2,48.90,?? s018,C,0.017,0.017,22432,380,98.10,96.8,16.10,?? s020,C++,1.983,0.550,6384,166,98.48,98.4,3.00,?? s021,C++,4.867,0.017,5312,298,100.00,98.4,19.10,?? s023,Java,2.633,0.650,89664,384,7.60,98.4,7.10,?? s025,C++,0.083,0.083,28568,150,99.24,98.4,3.50,?? ...
The columns hold the following information:
The
z1000rel and
m1000rel columns tell us that
all of these implementations are correct 98% of the time or better,
which is considered acceptable.
The rest of the data is much easier to understand as a box-and-whisker plot
of the working time in hours (the
whours column from the table).
Each dot is a single data point
(jittered up or down a bit to be easier to see).
The left and right boundaries of the box show the 25th and 75th percentiles respectively,
i.e., 25% of the points lie below the box and 25% lie above it,
and the mark in the middle shows the median
().
So what does this data tell us about productivity? As Prechelt2019 explains, that depends on exactly what we mean. The shortest and longest development times were 0.6 and 63 hours respectively, giving a ratio of 105X. However, the subjects used seven different languages; if we only look at those who used Java (about 30% of the whole) the shortest and longest times are 3.8 and 63 hours, giving a ratio of “only” 17X.
But comparing the best and the worst of anything is guaranteed to give us an exaggerated impression of the difference. If we compare the 75th percentile (which is the middle of the top half of the data) to the 25th percentile (which is the middle of the bottom half) we get a ratio of 18.5/7.25 or 2.55; if we compare the 90th percentile to the 50th we get 3.7, and other comparisons give us other values.
Who are these lessons for?
Every lesson should aim to meet the needs of specific people Wilson2019. As these learner personas suggest, these lessons assume readers can write short Python programs and remember some college-level mathematics:
Florian is a third-year undergraduate in computer science. They are enjoying their data structures and algorithms class, though they struggle a bit with the math when doing complexity analysis. They know data science is a hot topic, but find the examples in most online tutorials hard to relate to. This material will introduce them to modern computational statistical methods using examples they can relate to.
Yi Qing leads a four-person team at a website development and hosting company. They frequently have to estimate how long development of new features will take, and are also occasionally involved in selecting new programming tools and practices. This material will show them the answers we have to questions like these and why we believe those answers to be true.
If you know what a Python dictionary is and can explain the difference between an exponent and a logarithm, you are probably ready to start. We cover data tidying and visualization, descriptive statistics, modeling, statistical tests, and reproducible research practices, and then use those tools to explore key findings from empirical software engineering research.
What does this material cover?
We chose our topics to teach people how to analyze messy real-world data correctly, what we already know about software and software development, and why we believe it’s true. Our choice of examples was guided by Begel2014, which asked several hundred professional software engineers what questions they most most wanted researchers to answer. The most highly-ranked questions include:
- How do users typically use my application?
- What parts of a software product are most used and/or loved by customers?
- How effective are the quality gates we run at checkin?
- How can we improve collaboration and sharing between teams?
- What are best key performance indicators for monitoring services?
- What is the impact of a code change or requirements change to the project and tests?
- What is the impact of tools on productivity?
- How do I avoid reinventing the wheel by sharing and/or searching for code?
- What are the common patterns of execution in my application?
- How well does test coverage correspond to actual code usage by our customers?
- What tools can help us measure and estimate the risk associated with code changes?
- What are effective metrics for ship quality?
- How much do design changes cost us and how can we reduce their risk?
- What are the best ways to change a product’s features without losing customers?
- Which test strategies find the most impactful bugs?
- When should I write code from scratch versus reusing legacy code?
Our lessons don’t answer all of these—in fact, most of them don’t have answers—but we hope we can help people get started.
Moving Targets
Begel2014 also asked what topics would be unwise for researchers to examine; all of the top responses were variations on, “Anything that measures individual employees’ productivity.” This belief reflects Goodhart’s Law: as soon as a measure is used to evaluate people, they adjust their behavior so that it ceases to be a useful measure.
All of our examples use Python or the Unix shell (except for the lesson on curve fitting, because 50 lines of JavaScript was easier to write than a JavaScript parser in Python). We display Python source code like this:
import project for observation in project.data: analyze(observation) report()
and Unix shell commands like this:
#!/usr/bin/env bash for filename in *.dat do analyze $filename done
Data and programs’ output are shown in italics:
Package,Releases 0,1 0-0,0 0-0-1,1 00print-lol,2 00smalinux,0 01changer,0
Acknowledgments
We are grateful to:
- Steve McConnell and Robert Glass, whose books sent us down this road years ago McConnell2004,Glass2002.
- Derek Jones for Jones2019, the many contributors to Oram2010, and everyone else who is trying to make software engineering an evidence-based discipline.
- RStudio for supporting Yim Register’s work in 2019.
- Raniere Silva and Shashi Kumar for technical assistance.
- David Graf for doi2bib and Alexandra Elbakyan for Sci-Hub, without whom this work would not have been possible.
- The answer you get depends on the question you ask. | https://ds4se.tech/introduction/ | CC-MAIN-2021-25 | refinedweb | 1,246 | 62.48 |
The next article in my series on PowerShell and SMO, PowerSMO At Work Part II is up on Simple-Talk.com now.
Dan
I'm working on a series of articles on PowerSMO, my combination of PowerShell and SMO, for. The first few are on the site now.
Some of the topics in these articles are covered in the Applied SQL Server 2005 course.
In our previous blog article Processing XML with PowerShell. Note that this will include the XSLT.ps1 script, updated for the xitems function, that was include in the ProcessingXMLPowerShell.zip file.
We will start by looking at a file named Stock.xml. The Stock.xml file looks like:
.
PS C:\Demos> [xml]$s = get-content C:\Demos\Stock.xml
PS C:\Demos> $s.GroceryList.Stock | %{$_.Dept.Name} | select-object -unique
Produce
Meat
PS C:\Demos>.
It’s easy to come up with a rather wordy description of what this script is doing; It is saying something like .” This sort of description could be applied to almost anything that manages hierarchical data including XPath, which we will be looking at later when we examine the xitems function.
Thinking of this hierarchical description makes it seem that the script below would be alternate, more simple, way to find the names of the departments.
PS C:\Demos> [xml]$s = get-content C:\Demos\Stock.xml
PS C:\Demos> $s.GroceryList.Stock.Dept.Name | select-object -unique
PS C:\Demos>
This, seemingly obvious, way of getting the children of the children, etc. did not produce any results. The dotted syntax is somewhat limited because of the way PowerShell models XML. If we take closer look at what is actually being returned for the GroceryList and Stock elements we will see why.>
It turns out that the Stock array is an array of XmlElements and piping it into a pipeline segment enumerates that array. The result is that inside the pipeline segment $_ is an XmlElement that has a Dept child element.
So to drill into an XML hierarchy using the PowerShell object model you must break up what seems like a natural dotted syntax into pipeline segments, one pipeline segment for every two levels of depth you want to go into the XML hierarchy. The word description earlier of what is happening here, however, is in effect the definition of an XPath construct called a LocationPath.
We used XPath expressions in Processing XML with PowerShell.
A node set is what the name seems to imply, it is a set of XML nodes. It might be a set of XML elements or attributes or a mixture of these an other kinds of XML nodes. The data model in the XPath Recommendation defines seven kinds of nodes that might be found in an XML documents. A LocationPath:
GroceryList/Stock/Dept/Name
We can use this LocationPath with the xitems function to find the names of the departments in the stock.xml file, as we did at the beginning of this article.
PS C:\Demos> xitems C:\Demos\Stock.xml "GroceryList/Stock/Dept/Name" |
select-object -property value -unique
Value
-----
Produce
Meat
PS C:\Demos>.
In the Processing XML with PowerShell>
This file uses quite a few namespaces, just to make things interesting and because typically when namespaces are used they are often used a lot. Lets do our department names calculation again, using the PowerShell xml object model.
PS C:\Demos> [xml]$s = get-content C:\Demos\StockNS.xml
PS C:\Demos> $s.grocerylist.stock
| %{$_.Item("Dept", "urn:identity").Name} | select-object -unique
Produce
Meat
PS C:\Demos>
Here we have used the Item property that PowerShell adds to an XML element to make it possible to access elements that are distinguished by their namespace. Here is a script that uses xitems to get the same results:
PS C:\Demos> xitems C:\Demos\StockNS.xml `
"p:GroceryList/i:Stock/id:Dept/i:Name" `
@{p="urn:prices";i="urn:inventory";id="urn:identity"} |
select-object -property value -unique
Value
-----
Produce
Meat
PS C:\Demos>>
The LocationPath used by this script has a predicate in it, that only selects Stock elements that have a loc:Dept child element whose value is “3rd floor”.:
PS C:\Demos> xitems C:\Demos\Stock.xml "GroceryList/Stock" |
%{xitems $_ "Dept/Name"} | Select-Object -property value -unique
Value
-----
Produce
Meat
PS C:\Demos>.
From the Processing XML with PowerShell we know that xeval can also process an XPathNavigator, so the output of the xitems function can also be passed into the xeval function. Let’s try that:
PS C:\Demos> xitems C:\Demos\Stock.xml "GroceryList/Stock" |
%{xeval $_ "string(Dept/Name)"} | Select-Object -unique
Produce
Meat
PS C:\Demos>.
Lastly xitems is similar to xeval in that you can pass it an array of LocationPaths and it will apply all of them to an XML file.
PS C:\Demos> xitems C:\Demos\Stock.xml "GroceryList/Stock/Dept/Area",
"GroceryList/Stock/Dept/Name" | Group-object -property value
Count Name Group
----- ---- -----
1 Round {Area}
2 Produce {Name, Name}
1 Beef {Area}
1 Meat {Name}
1 Leaf {Area}
PS C:\Demos>
This may seem a strange query, but it does show us, for example, that there are two Stock elements that have Name children whose value is “Produce”.
So far we have seen the basics of using the xitems function and that it shares much in common with xeval. Let’s now take a look at the implementation of xitems.
First of all xitems is an alias for get-XSLT_XPathSelection. As the names implies this function is making an XPath selection.;
}
The xitems function starts off the same as the xeval function that we looked at in the Processing XML with PowerShell>>.
A script to build these functions and their associated aliases is in a file named XSLT.ps1. This file and the examples in this blog article are available.
The XSLT.ps1 file actually has some other extension functions that are not discussed in this blog article but will be in a future one..
The XPath recommendation is certainly worth reading and contains many example of XPath expressions. Another good source to have at your side is “Essential XML Quick Reference” published by Addison-Wesley and written by Aaron Skonnard and Martin Gudgen..?.
PS C:\Demos> [xml]$list = get-content GroceriesUC.xml
-encoding BigEndianUnicode
PS C:\Demos> $list.GroceryList.Item |
&{begin {$sum=0} process{$sum += $_.Price} end {$sum}}
29.15
PS C:\Demos>.
PS C:\Demos> xeval GroceriesUC.xml "sum(GroceryList/Item/Price)"
29.15
PS C:\Demos>
It works just fine and we don’t have to tell it what the encoding is. The reason for this is a requirement of every XML processor, i.e..
.
PS C:\Demos> $list = get-content GroceriesNS.xml
PS C:\Demos> $list.GroceryList.Stock |
&{begin {$sum=0} process{$sum += $_.Price[1]} end {$sum}}
30.15
PS C:..e. pull out and interpret parts of, the XML file to do this.
You might think that all repeated a:GroceryList etc.>.”.
Sometimes you will have a literal string for your xml instead of a file. You can’t pass this directly to the xeval function because it will interpret that string as a file path and attempt to load a file..
Here is an example of processing literal XML.
PS C:\Demos> $nav = xnav "<Stock><sku>ee-44</sku></Stock>"
PS C:\Demos> xeval $nav "string(//sku)"
ee-44
PS C:\Demos>>
First of all it’s straightforward to get the names of these files.>>.
The output we get is ID followed by sum. We might like something that produces a single line per GroceryList. We could pipe these results into another script block that aggregated these results by the pair… or we could use XPath to do the same thing..
Now let’s look at the implementation. We will start with the eval function.X.
Once the XmlNamespaceManager is constructed it is filled by get-XSLT_NamespaceManager function that we will look at shortly..
filter get-XSLT_XPathNavigator
{
param ($xml)
if($xml -is [string])
{
$xml = get-XSLT_XMLReader $xml;
$xml = get-XSLT_XPathDocument $xml
}
$nav = $xml.CreateNavigator();
$nav
}.
filter get-XSLT_XPathDocument
{
param ([System.Xml.XmlReader]$xml)
$doc = new-object System.Xml.XPath.XPathDocument $xml;
$doc
}.….
This blog article assumes you have some familiarity with PowerSMO!, SQL Server, and PowerShell. The purpose of this blog article is to show how to make use of PowerSMO! to do some typical database operations.
We start by making a database.
PS C:\demos> $server = SMO_Server
PS C:\demos> $testdb = SMO_Database $server "TestDB_1"
PS C:\demos> $testdb.DatabaseOptions.RecoveryModel="Simple"
PS C:\demos> $testdb.Create()
PS C:\demos>.
$testdb.
$testdb
Since I, and probably you too, have a standard way to create a test database we should we should capture our ad hoc.
ad hoc();
}
get-TestDatabaseName is used to create a name for a test database. All test databases have the prefix “TestDB_”. The -f operator is the PowerShell formatting operator. It replaces {0} with the first parameter that follows it and {1} with the second and so on.
{0}
{1}
The new-TestDatabase requires a Server and a string as input. The body of the function duplicates our ad hoc script but uses the get-TestDatabaseName to generate the name of the database we want to add. Let’s try it out…
PS C:\demos> new-TestDatabase $server 3
PS C:\demos> new-TestDatabase $server 4
PS C:\demos> new-TestDatabase $server 5.
function global:get-TestDatabases
([Microsoft.SqlServer.Management.Smo.Server]$server)
{
$pat = get-TestDatabaseName "*";
$server.Databases | ?{$_.Name -like $pat}
}
The get-TestDatabases function makes a pattern for test database names using the get-TestDatabaseName function. It then passes the databases it finds in $server through a -like filter that uses this pattern to eliminate the database that are not test databases.
get-TestDatabases
$server
-like
PS C:\demos> get-TestDataBases $server | %{$_.name}
TestDB_1
TestDB_2
TestDB_3
TestDB_4
TestDB_5
PS C:\demos>
Now we can see we have made a good sized population of test databases. However it is pretty easy to get rid of all of them.
PS C:\demos> TestDatabases $server | %{$_.Drop()}
PS C:\demos> TestDatabases $server | %{$_.name}
PS C:\demos>
We use the get-TestDatabases function to pipe each of the databases into a script that calls the Drop() method on each one. A quick check shows we were successful. Ok, let’s put our test database back into the server for the rest of the things we want to do.
Drop()
PS C:\demos> new-TestDatabase $server 1
PS C:\demos> $testdb = $server.Databases[(TestDatabaseName 1)]
PS C:\demos> $testdb.Name
TestDB_1
PS C:\demos>>
If you look about half-way down the list you will see Dawn, Don and SqlAdmin. Of course we can refine this a bit more to list only the logins we are interested in.
PS C:\demos> $server.logins |
?{$_.name -like "*\Dawn" -or $_.name -like "*\Don"
-or $_.name -like "*\SqlAdmin"} | %{$_.name}
PARSEC5\Dawn
PARSEC5\Don
PARSEC5\SqlAdmin
PS C:\demos>
Here we use the logical -or operator and the -like pattern matching operator to filter out the logins to just the standard ones we use. In fact we should make add this to a library of functions we use when we build test databases.
function global:Get-StandardTestLogins
([Microsoft.SqlServer.Management.Smo.Server]$server)
{
$server.logins |
?{$_.name -like "*\Dawn" -or
$_.name -like "*\Don" -or
$_.name -like "*\SqlAdmin"}
}
The Get-StandardTestLogin function is a bit different from the ad hoc script we put together; It outputs a login object instead of just a name. Note that it uses a typed parameter for input, it requries a Server as input parameter. We can still use it to get the list of names though, even though it outputs login objects.
PS C:\demos> StandardTestLogins $server | %{$_.name}
PARSEC5\Dawn
PARSEC5\Don
PARSEC5\SqlAdmin
PS C:\demos>.
There is one last thing we have to check for out test logins. As the name implies SqlAdmin is supposed to be in the sysadmin role for SqlServer. That’s straight forward to check.
PS C:\demos> $server.logins["PARSEC5\SqlAdmin"].IsMember("sysadmin")
True
PS C:\demos>
Looks like we are good to go for our test logins.
Now that we know that our standard test logins are there, lets add the corresponding users to our test database.>
We use the PowerShell foreach command to iterate through each of our test logins. We then make a new user, $user, in the $testdb with a name the same as the login name, which is a pretty typical way to add users to a database. Then we fill out the login property of $user with the corresponding login name. Last we call the Create() method on $user. A SMO user object, like just about all new objects in SMO, do not exist in the database until after the Create() method has been called on them.
$user
Create()
We confirm that our test users were added by listing the names of the users in $testdb..
We should turn this foreach loop into a function so we can re-use for future test databases.
function global:new-StandardTestUsers
([Microsoft.SqlServer.Management.Smo.Database]$database)
{
foreach ($login in StandardTestLogins $database.Parent)
{
$user = SMO_User $database $login.name
$user.login = $login.name
$user.Create()
}
}
Here the new-StandardTestUsers functions requires that a database be passed into it. The body of the function is the same as the ad hoc script we wrote except that the reference to the server is gotten from the $database itself. Now, as you can see below, we just pass in a reference to our test database to the new-StandardTestUsers function to add all of our test users.
$database
PS C:\demos> new-StandardTestUsers $testdb
PS C:\demos>
Now to finish out our test database we will add a table using SMO objects, then populate the table with some random data. The table will have an order number, a customer name and value column. The order number column will be the PRIMARY KEY.
First of all we need to make a table. In case you don’t remember to parameters to construct a table the get-SMO_ctors function will remind you.
PS C:\demos> SMO_ctors (SMOT_Table)
Table()
Table(Database database, String name)
Table(Database database, String name, String schema)
PS C:\demos>
We are not going to work with database schemas in this blog article, I’ll save that for a later one. We’ll use the second constructor.
PS C:\demos> $orders = SMO_Table $testdb "Orders"
PS C:\demos>
Now that we have a table we need to create the columns for it. Again get-SMOctors can be used to find out what parameters we need to pass to the get-SMOColumn function.
PS C:\demos> SMO_ctors (SMOT_column)
Column()
Column(SqlSmoObject parent, String name)
Column(SqlSmoObject parent, String name, DataType dataType)
PS C:\demos>
We can create a column, specify it name and type in one operation. We will need to make a DataType, so let’s check what the constructor options are for it.)
This, in turn requires us to make a SqlDataType, which is an enum. We can look to see what the possible enumerated values are for this too.>
No big suprise here, it has all the SQL datatypes were are use to using in SQL Server. Now we can make some columns and add them to our $orders table.
$orders)
First we create a table named “Orders”. A reference to the table is in variable $orders. Each column is made using the SMOColumn function. Note that the datatype for the column is defined using the SMODataType function.
Now that we have our table we need to make the $order_number column a primary key.>
The index takes a little more work. First we make an SMOIndex.and name it “OrdersPK”. An index might include more than one column and those columns are specified in SMOIndexedColumn objects. We need to make an SMOIndexed column for each column in the index. This index is only a single column, however, so we only need to create a single SMOIndexedColumn. The SMOIndexedColumn is added to the index, $pk'. Once we have added the SMO_IndexedColumn to the SMO_Index we can add the index to the$orders` table.
$pk'. Once we have added the SMO_IndexedColumn to the SMO_Index we can add the index to the
Lastly we call the Create() method on $orders to make the table on the server..
PS C:\demos> function get-insert
>> ($number, $customer, $value)
>> {
>> "INSERT INTO Orders Values ({0}, '{1}', {2})" -f
>> $number, $customer, $value
>> }
>>
PS C:\demos> insert 1 "joe" 102.32
INSERT INTO Orders Values (1, 'joe', 102.32)
PS C:\demos>
The get-insert function uses the -f operator to build an insert statement from the three parameters passed into it. $database to insert 1000 orders.
PS C:\demos> (1..1000) |
%{insert $_ ("Name_" + $rand.next(1,30))
($rand.next(1,1000000)/100.1)} |
%{$testdb.ExecuteNonQuery($_)}
PS C:\demos> $testdb.
We can see the row count of the table is now 1000.
PS C:\demos> $orders.refresh()
PS C:\demos> $orders.rowcount
1000
PS C:\demos>
Note that before we check the rowcount we refresh the $orders table.
To finish up we will make use of the fact that can also execute a select statement against the Orders database.
...
The ExecuteWithResults method of $database returns an ADO Dataset. We can extract the rows from the first table in the Dataset and pipe them into a PowerShell format-table command to see what they contain..
Well that’s it for this article, there will be more later.
Dan dan@pluralsight.com
Last year I wrote a blog article about using what was then called MSH with SQL Server Management Objects. MSH is now called PowerShell and mixing some SMO with it makes.
price
qty
To start with let’s just calculate the extended prices.
PS C:\demos> $xmldata.order.line | %{$_.price * $_.qty}
100100100
23232323
PS C:\demos>.
Our $xmldata.
$doc
Now that we can read XML lets process some XML files. Microsoft Word 2003 can be saved as XML. We have a few files that have been saved this way.>.
Another thing about Word documents that we haven’t you looked at is that they make heavy use of XML namespaces. So before we try anything with a complete Word document let’s look at simple document that has namespaces in it.>
This document is a mini-Word document with all the things we don’t care about stripped out of.
PS C:\demos> $custDoc = "/*/*[local-name()='CustomDocumentProperties' and namespace-uri()='urn:schemas-microsoft-com:office:office']"
icrosoft-com:office:office']
PS C:\demos> $x | ?{$_.SelectSingleNode($custDoc)}
wordDocument
------------
wordDocument..)
}
}
Note that the addCustomProps function always returns a CustomDocumentProperties.
Now we have everthing we need to modify a Word document by adding a custom property to it.)@pluralsight ad hoc..
.
Next we need a SqlDataAdapter so we can fill our dataset.
PS C:\demos> $da = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $connString) ["Cu | http://www.pluralsight.com/blogs/dan/default.aspx | crawl-001 | refinedweb | 3,185 | 66.74 |
I have a function which will recursively execute another function inside and I want to share variable for all execution of that function.
Something like that:
def testglobal():
x = 0
def incx():
global x
x += 2
incx()
return x
testglobal() # should return 2
NameError: name 'x' is not defined
x
x
incx
You want to use the
nonlocal statement to access
x, which is not global but local to
testglobal.
def testglobal(): x = 0 def incx(): nonlocal x x += 2 incx() return x assert 2 == testglobal()
The closest you can come to doing this in Python 2 is to replace
x with a mutable value, similar to the argument hack you mentioned in your question.
def testglobal(): x = [0] def incx(): x[0] += 2 incx() return x[0] assert 2 == testglobal()
Here's an example using a function attribute instead of a list, an alternative that you might find more attractive.
def testglobal(): def incx(): incx.x += 2 incx.x = 0 incx() return inc.x assert 2 == testglobal() | https://codedump.io/share/fXpNk4xa7ek1/1/python-share-global-variable-only-for-functions-inside-of-function | CC-MAIN-2018-13 | refinedweb | 168 | 59.33 |
We've looked briefly at the structure of Python code: variables, functions, strings, and arguments.
This post is part of #100DaysOfPython, check out yesterday's post here if you haven't already. Or go to the index of all 100 days.
Let's dive deeper into functions today. We'll look at:
- Defining functions;
- Calling functions;
- Parameters and arguments; and
- Returning values.
Think of a function as a name for some code. You can write a few lines of code, give them a name, and then re-use them by just calling it by its name.
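As a quick sketch of that idea (the function name and messages here are invented for illustration, not taken from the original):

```python
# Define the code once, under a name of our choosing...
def greet():
    print("Hello!")
    print("Welcome back.")

# ...then re-use it as many times as we like, just by calling the name.
greet()
greet()
```

Running this prints both messages twice, without us having to write the `print` lines again.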
Defining functions
To define a function in Python, we use the `def` keyword. The snippet below defines a function called `ask_user_for_name`:

```python
def ask_user_for_name():
    pass
```
All the code that should be executed by the function when it is called should be more indented than the line defining the function—that is, have more blank space in front. Notice above how `pass` has 4 spaces in front of it, whereas `def` does not.

`pass` just means "do nothing". It's common to do that while we figure out what this function is going to do!
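A small aside (this snippet is an illustration, not from the original post): a function whose body is just `pass` is perfectly valid Python. You can even call it, and it simply does nothing.

```python
def ask_user_for_name():
    pass

ask_user_for_name()  # runs without error, but has no effect
```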
We can make the function do something. In this case, we're going to make it ask the user for their name, using the `input` function.

```python
def ask_user_for_name():
    user_input = input('Enter your name: ')
```
We've done a couple things here:
- We've created a variable, called
user_input;
- We've called the
inputfunction (we know it's a function because it has brackets after the keyword);
- We've given the
inputfunction an argument. The argument is
'Enter your name: '.
When we give a string argument to the
input function, it prints that string out to the console and then waits for the user to type something. Whatever the user types is assigned to the
user_input variable.
Let's create a new file and put this function inside it. For example, create a file called
functions.py and write the code in it.
Then, run the file from your console using
python functions.py.
This is what you'll see:
Absolutely nothing.
We've defined the function, but we've not executed it! Let's look at how to do that.
Executing, running, and calling a function are all the same thing!
Functions vs. Methods?
The difference between functions and methods is mainly semantic, but can be important in some cases. We say method when the function is inside a class (we're going to look at this later on in the series!). We say function when it is not inside a class. For now, all we're doing are functions.
Calling functions
Now that we've got our function in the file, Python can call the function. All we have to do is type out the name of the function. Don't forget the brackets!
def ask_user_for_name(): user_input = input('Enter your name: ') ask_user_for_name()
Our function has a single line in it (indented 4 spaces more than the function definition line), and then we execute the function (not indented, and thus not inside the function itself).
If you put that code in your file, you can now run the program and you'll see some output:
Enter your name:
We can enter our name...
Enter your name: Jose
And nothing happens. We've got the user input, but we aren't doing anything with it. Let's do something!
Returning values
A function can do anything—remember, it's just a name for a few lines of code.
One of the things that a function can do is
return a value. Before returning a value, have a think about what the variable
user_name contains in the following snippet:
def ask_user_for_name(): user_input = input('Enter your name: ') user_name = ask_user_for_name() # What is the value of user_name ?
It's a trick question! The value of
user_name is a special Python value:
None. That just means "no value".
Every function in Python returns
Noneunless we return something different.
Whenever we call our function, Python will run the function code. When it finishes, the function returns. Once the function has ran, we can thing of the code as being this:
user_name = None
We've executed the code inside the function, and all that's left is using the return value for something. In this case, it's getting assigned to the
user_name variable.
Let's return something else instead:
def ask_user_for_name(): return input('Enter your name: ') user_name = ask_user_for_name()
Save this code in your file and run it. You'll see something like this:
Enter your name: Jose
Nothing happens again, but now let me tell you that
user_name contains
'Jose'. We can verify this by printing out the name at the end:
def ask_user_for_name(): return input('Enter your name: ') user_name = ask_user_for_name() print(user_name)
Once again, save and run it. You'll see something like this:
Enter your name: Jose Jose
The name is printed out!
Parameters and arguments
Something great about functions is that you can give them values to work with. That means they don't have to do all the data collection by themselves. For example, look at this function:
def greeting(name): print('Hello ' + name) greeting('Jose')
If we write that code above, what happens is the
greeting function is called with an argument:
'Jose'. That value is assigned to the
name variable inside the
greeting function. Then, the function uses the variable to print a greeting out to the console.
The variable
name only exists inside the function. You cannot use it outside the function. For example, the following would give you an error:
def greeting(name): print('Hello ' + name) greeting('Jose') print(name)
Even though the variable
name is created inside the
greeting function when it runs; it is destroyed at the end of the function. Then, trying to
print(name) gives us an error because Python doesn't know what
name is.
I hope this was a good intro to functions and some of the ways we can use them. Functions are extremely powerful (even if they don't seem like it just now!), and we are going to be using them extensively throughout the next 96 days.
Stay tuned, and I'll see you on the next one! | https://blog.tecladocode.com/learn-python-day-4-functions-in-python/ | CC-MAIN-2019-26 | refinedweb | 1,036 | 73.07 |
Hi,
I have a bit of a problem on the French version of this exercice (15.5 Factor), I keep getting the same error:Oups, merci de réessayer. Votre fonction a échoué sur 1 comme entrée car votre fonction indiqueOups, merci de réessayer. Votre fonction a échoué sur 1 comme entrée car votre fonction indique "global name 'reponse' is not defined" error.Oops, please try again. Your function failed with 1 as entry because you function state "global name 'reponse' is not defined" error.
I tried most of the function that are on this Discuss community Forums. Even when I import the math formula it send back the same error.
Does anyone can't get past that too?
For information, here is the formula that work in the CodecademyLab:
def factor(n):----if n > 1:--------return n * factor(n - 1)----elif n < 1:--------return -n * factor(-n - 1)----elif n == 0:--------return 1----else:--------return 1
x = raw_input("Nombre pour factorielle? ")print factor(int(x))
Thank you
That's strange. Try using a variable to hold the factor.
def factor(n):----if n > 1:--------answer = n * factor(n - 1)----elif n < 1:--------answer = -n * factor(-n - 1)----elif n == 0:-------- answer = 1----else:--------answer = 1----return answer
What did you enter when testing it?
Please post a link to the exercise. Thank you.
Hello hello,
To a_distraction:I entered positive and negative integrer (such a -7 or 11 for ie) but none of them worked anyway...I just tried your method but it get back at me with: Oops, please try again. Your function failed with 1 as entry because you function state "global name 'reponse' is not defined" error
To mtf:Here's a link:
I tried the solutions your proposed on the other discussion by the way, but there must something I do wrong because I'm still stuck.
Thank you very much for the help, very appreciated
I'll just skip to the other exercises.
I tried this in the lesson,
def factorielle(x):
x = abs(x)
if x == 0 or x == 1:
return 1
else:
return x * factorielle(x-1)
reponse = factorielle(7)
print reponse # 5040
and got this error message from the SCT...
Votre fonction a échoué sur 1 comme entrée car votre fonction indique "'int' object is not callable" error.
Will try an iterative procedure instead recursion...
def factorielle(x):
x = abs(x)
if x == 0 or x == 1:
return 1
f = 1
while x > 0:
f *= x
x -= 1
return f
print factorielle(7)
Same message as above. Time to call in the big guns...
# Do it correctly
def factorielle(x):
if x == 0:
return 1
return x * factorielle(x - 1)
# Then, to pass, add this workaround
def reponse(*args):
return True
def factorielle(x):
return True
EDIT - April 23, 2016
Here's a shorter workaround ...
# Define the function correctly
def factorielle(x):
if x == 0:
return 1
return x * factorielle(x - 1)
# To pass, add this workaround
reponse = factorielle
my brain is about to explode on this one. @mtf your second one worked or me, i'll study it, thanks.
1 * 7
7 * 6
42 * 5
210 * 4
840 * 3
2520 * 2
5040 * 1
=> 7! = 5040
Hi guys,
I'm confused with this piece of code based on appylpye's response:
def factorial(x):---- if x == 0:-------- return 1---- return x * factorial(x - 1)
What is confusing me is the last part, return x * factorial(x - 1).
Hi @imzyrk ,
This is an example of a recursion, which is an occurrence of a function calling itself ...
def factorial(x):
# base case
if x == 0:
return 1
# recursive case
return x * factorial(x - 1)
In a typical recursion, the problem is apportioned into two parts, namely a base case and a recursive case.
The base case is a simple case, in which the function returns a value without calling itself. For example the factorial of n, when n is either 0 or 1, is simply 1.
n
0
1
For a factorial, if n is higher than 1, we can get the solution by multiplying n by the factorial of n - 1. For instance, the factorial of 3 is 3 * 2 * 1, which is 3 multiplied by the factorial of 2. That is a recursive case.
n - 1
3
3 * 2 * 1
2
The following demonstrates that 5 * factorial(4) is the same as factorial(5) ...
5 * factorial(4)
factorial(5)
print factorial(4)
# The following should produce the same result
print 5 * factorial(4)
print factorial(5)
Output ...
24
120
120 | https://discuss.codecademy.com/t/5-factor-error-failed-with-1-as-entry-global-name-reponse-is-not-defined/38647 | CC-MAIN-2017-43 | refinedweb | 758 | 69.92 |
Thank you for that explanation. That helps a lot!
Craig Chariton
-----Original Message-----
From: Martin Sebor [mailto:sebor@roguewave.com]
Sent: Friday, June 02, 2006 2:34 PM
To: stdcxx-dev@incubator.apache.org
Subject: Re: Compiler Warning 552 with bitset
Craig Chariton wrote:
> I am getting a compiler Warning 552 on HP-UX 11.11 with an A.03.63
> compiler. Here is the code that recreates the warning:
>
[...]
> I am not seeing this with gcc on Linux. The code appears to run the
> fine on in both cases. I was just wondering if this is a compiler
issue
> and, if so, is there any reason for concern?
The answers are yes and no.
The HP bug number is JAGaf00255. The details of the bug report
are here:
There is no reason for concern, the compiler does the right thing
despite the warning. The warning can be suppressed either via the
compiler option +W552 or by #defining the configuration macro
_RWSTD_NO_EXT_BITSET_TO_STRING and disabling the library feature
that is giving the compiler trouble.
Note that since stdcxx implements the resolution of issue 434
and also provides, as a conforming extension, an overloaded
bitset ctor that takes a const char* argument, the program can
be simplified like this:
#include <bitset>
#include <iostream>
int main ()
{
const char *a = "11";
const std::bitset<8> header(a); // stdcxx extension
for (std::string::size_type i = 0; i < header.size (); ++i) {
std::cout << header [i] << "\n";
}
std::cout << header.to_string () << '\n'; // issue 434
}
Here's issue 434: | http://mail-archives.apache.org/mod_mbox/stdcxx-dev/200606.mbox/%3CD730FF7CEDDCA64483F9E99D999A158B165CE1@qxvcexch01.ad.quovadx.com%3E | CC-MAIN-2016-18 | refinedweb | 251 | 57.77 |
Writing small, focused tests, often called unit tests, is one of the things that look easy at the outset but turn out to be more delicate than anticipated. Writing a three-lines-of-code unit test in the triple-A structure soon became second nature to me, but there were lots of cases that resisted easy testing.
Using mock objects is the typical next step to accommodate this resistance and make the test code more complex. This leads to 5 to 10 lines of test code for easy mock-based tests and up to thirty or even fifty lines of test code where a lot of moving parts are mocked and chained together to test one single method.
So, the first reaction for a more complicated testing scenario is to make the test more complicated.
But even with the powerful combination of mock objects and dependency injection, there are situations where writing suitable tests seems impossible. In the past, I regarded these code blocks as “untestable” and omitted the tests because their economic viability seemed debatable.
I wrote small tests for easy code, long tests for complicated code and no tests for defiant code. The problem always seemed to be the tests that just didn’t cut it.
Until I could recognize my approach in a new light: I was encumbering the messenger. If the message was too harsh, I would outright shoot him.
The tests tried to tell me something about my production code. But I always saw the problem with them, not the code.
Today, I can see that the tests I never wrote because the “test story” at hand was too complicated for my abilities were already telling me something important.
The test you decide not to write because it’s too much of a hassle tells you that your code structure needs improvement. They already deliver their message to you, even before they exist.
With this insight, I can oftentimes fix the problem where it is caused: In the production code. The test coverage increases and the tests become simpler.
Let’s look at a small example that tries to show the line of thinking without being too extensive:
We developed a class in Java that represents a counter that gets triggered and imposes a wait period on every tenth trigger impulse:
public class CountAndWait { private int triggered; public CountAndWait() { this.triggered = 0; } public void trigger() { this.triggered++; if (this.triggered == 10) { try { Thread.sleep(1000L); } catch (InterruptedException e) { Thread.currentThread().interrupt(); } this.triggered = 0; } } }
There is a lot going on in the code for such a simple functionality. Especially the try-catch block catches my eye and makes me worried when thinking about tests. Why is it even there? Well, here is a starter link for an explanation.
But even without advanced threading issues, the normal functionality of our code is worrisome enough. How many lines of code will a test contain that covers the sleep? Should I really use a loop in my test code? Will the test really have a runtime of one second? That’s the same amount of time several hundred other unit tests require for much more coverage. Is this an economically sound testing approach?
The test doesn’t even exist and already sends a message: Your production code should be structured differently. If you focus on the “test story”, perhaps a better structure emerges?
The “story of the test” is the description of the production code path that is covered and asserted by the test. In our example, I want the story to be:
“When a counter object is triggered for the tenth time, it should impose a wait. Afterwards, the cycle should repeat.”
Nothing in the story of this test talks about interruption or exceptions, so if this code gets in the way, I should restructure it to eliminate it from my story. The new production code might look like this:
public class CountAndWait { private final Runnable waiting; private int triggered; public static CountAndWait forOneSecond() { return new CountAndWait(() -> { try { Thread.sleep(1000L); } catch (InterruptedException e) { Thread.currentThread().interrupt(); } }); } public CountAndWait(Runnable waiting) { this.waiting = waiting; this.triggered = 0; } public void trigger() { this.triggered++; if (this.triggered == 10) { this.waiting.run(); this.triggered = 0; } } }
That’s a lot more code than before, but we can concentrate on the latter half. We can now inject a mock object that attests to how often it was run. This mock object doesn’t need to sleep for any amount of time, so the unit test is fast again.
Instead of making the test more complex, we introduced additional structure (and complexity) into the production code. The resulting unit test is rather easy to write:
class CountAndWaitTest { @Test @DisplayName("Waits after 10 triggers and resets") void wait_after_10_triggers_and_reset() { Runnable simulatedWait = mock(Runnable.class); CountAndWait target = new CountAndWait(simulatedWait); // no wait for the first 9 triggers Repeat.times(9).call(target::trigger); verifyNoInteractions(simulatedWait); // wait at the 10th trigger target.trigger(); verify(simulatedWait, times(1)).run(); // reset happened, no wait for another 9 triggers Repeat.times(9).call(target::trigger); verify(simulatedWait, times(1)).run(); } }
It’s still different from a simple 3-liner test, but the “and” in the test story hints at a more complex story than “get y for x”, so that might be ok. We could probably simplify the test even more if we got access to the internal trigger count and verify the reset directly.
I hope the example was clear enough. For me, the revelation that test problems more often than not have their root cause in production code is a clear message to improve my ability on writing code that facilitates testing instead of obstructing it.
I don’t shoot/omit my messengers anymore even if their message means more work for me. | https://schneide.blog/tag/tests/ | CC-MAIN-2022-27 | refinedweb | 965 | 64.81 |
HsLua
From HaskellWiki
Revision as of 09:38, 9 May 2008
(this page isn't yet finished) replacing application config files with scripts. This allows to define complex data structures and utilize power of full-fledged programingin Lua functions into this instance. Note that fresh Lua instance have access only to a dozen of standard (arithmetic) operations. By careful selection of functions provided to Lua instance, you can turn it into sandbox - i.e. disallow access to file system, network connections and so on, or provide special functions which can access only specific directory/host/whatever
Next operation, "dofile", runs script from file specified. This script defines variables "username" and "password" that we then read by "eval". Finally, "close" destructs Lua instance, freeing memory it has grabbed
2 Example 2: calling from Haskell to Lua
Now imagine that we need more complex config capable to provide for each site we are log in its own user/pwd pair. we can write it either Haskell program reading of variables with call to function:
[name,pwd] <- Lua.callfunc l "getuserpwd" "mail.google.com"
3 Example 3: calling from Lua to Haskell
But that's not the whole story. When Lua used as scripting language inside your application, the communication should be two-forked. imagine for example that you develop a game and Lua scripts are used to control game characters. while Lua script fully defines their behavior logic, it should command something that will render this behavior to the player. just for example:
for i=1,3 do move( 1, "forward") move( 2, "backward") end
this script commands our character to go 1 step forward, 2 steps backward repeating this 3 times. the "move" procedure here should be defined on Haskell side to show his behavior. call to "openlibs", making Lua instance a real sandbox - all the communication with external world that script can do is to move it, move it :)
4 Exchanging data between Haskell and Lua worlds
Lua variables have dynamic values. error they return (Left Int) where Int will be either errcode or 0 if casting was unsuccessful:
eval :: (StackValue a) => LuaState -> String -> Either Int a evalfile :: (StackValue a) => LuaState -> String -> Either Int a
The next operations pair is "callfunc" and "callproc". They both call Lua function passing to it arbitrary number of arguments. The only difference is that "callfunc" returns and "callproc" omits Lua world any Haskell function/procedure that receives and returns values of StackValue types:
register :: (StackValue x1, x2 ... xn, a) => LuaState -> String -> (x1 -> x2 ... xn -> {IO} a) -> IO Int
where {IO} means that IO specifier is optional here. n>=0, i.e. register/callfunc/callproc may also register/call functions having no arguments
5 base on which you should
6 More about Lua
If you want to know more about Lua, you can jump to online book written by main developer of Lua and describing Lua 5.0 at or buy new version of this book describing Lua 5.1 (current version) from Amazon [2]
Official Lua 5.1 manual is at - it's clearly written but definitely not the best place to start
If you need to perform more complex tasks using HsLua library, you should study C API for Lua
6.1 [LuaBinaryModules]
If result, LuaBinaryModules allows users of your application to install any libraries they need just by copying their dll/so files into application directory
6.2
6.3 wxLua
wxLua is cross-platform (Windows, Linux, OSX) binding to wxWindows library, plus IDE written using this binding that supports debugging of Lua code and generation of Lua bindings to C libraries. so, you get. Lua part can be easily created/extended by less experienced users meaning that you may spend more time on implementing core algorithms. isn't it beautiful?
6.4
7 Further reading
[1]
[2]
[3]
[4]
[5] | http://www.haskell.org/haskellwiki/index.php?title=HsLua&diff=20823&oldid=20822 | CC-MAIN-2014-41 | refinedweb | 639 | 51.68 |
I'm fairly new to Entity Framework, my tables relationship looks a bit like this
public class Customer { public int Id { get; set; } public string Name { get; set; } public List<Product> Products { get; set; } } public class Product { public int Id { get; set; } public int CustomerId { get; set; } public Customer Customer { get; set; } }
I would like to make a query on the Customer table and include only the last Product created MAX(Id)
Normal SQL query would look like this
SELECT * FROM Customer INNER JOIN Product ON Customer.Id = Product.CustomerId WHERE Product.Id = (SELECT MAX(Id) FROM Product WHERE CustomerId = Customers.Id)
My current EF query look like this but it return all the products...
List<Customer> customers = _context.Customers .Include(c => c.Products) .ToList();
I tried something like this which gives me the right results, but EF makes a bunch of query and very quickly I see this seems like wrong way to go at it
List<Customer> customers = _context.Customers .Select(c => new Customer() { Id = c.Id, Name = c.Name, c.Products = c.Products.Where(d => d.Id == c.Products.Max(max => max.Id)).ToList() }).ToList();
I would like some suggestion, or if there's a different way to make this works.
It looks like below query can be written in a different way
SELECT * FROM Customer INNER JOIN Product ON Customer.Id = Product.CustomerId WHERE Product.Id = (SELECT MAX(Id) FROM Product WHERE CustomerId = Customers.Id)
This can be written as
SELECT TOP 1 * FROM Customer INNER JOIN Product ON Customer.Id = Product.CustomerId Order by Product.Id desc
Assuming customer name is required,above query can be written in LINQ or using EF as below
var customers = _context.Customers.Join(_context.Products, cu => cu.id, p => p.CustomerId, (cu,p) => new { cu,p}) .Select( c => new { prodId = c.p.Id,customername = c.cu.Name }) .OrderByDescending( c => c.prodId).Take(1);
If you have configured navigation property 1-n I would recommend you to use:
var customers = _context.Customers .SelectMany(c => c.Products, (c, p) => new { c, p }) .Select(b => new { prodId = b.p.Id, customername = b.c.Name }) .OrderByDescending(c => c.prodId).Take(1);
Much more clearer to me and looks better with multiple nested joins. | https://entityframeworkcore.com/knowledge-base/38897978/ef-core-relationships-query | CC-MAIN-2020-34 | refinedweb | 370 | 59.09 |
view raw
I am making a game and I wanted to know how to use a variable from a different file. Ex:
File 1:
var js = "js";
alert(js);
Can a javascript variable be used from a different file?
Yes, it can... As long as it's a global variable.
This is because all of the javascript files are loaded into a shared global namespace.
In your HTML, you will need to include the script that declares this variable first. Otherwise it will complain that the variable is undefined.
script1.js
var globalNumber = 10;
script2.js
alert(globalNumber); // Will alert "10"
index.html
<script src="script1.js"></script> <script src="script2.js"></script> | https://codedump.io/share/1QC3KIxfh916/1/can-a-javascript-variable-be-used-from-a-different-file | CC-MAIN-2017-22 | refinedweb | 113 | 79.06 |
AWS Startups Blog

This post is a primer for the second part of our series, “Selling to Fortune 100 Enterprises.” That session featured entrepreneurs from Bitium, Proofpoint, and Segment who spoke about how they did it: the tools and processes that they deployed, the design principles that they used, and the AWS services that allowed them to navigate the procurement maze and land big deals. But you need to learn to walk before you can run, which is why Drew is recapping his VC101 session for everyone who couldn’t attend in person.
What is venture capital?
“Venture capitalist” is a job. Venture capitalists get paid by others to do this job, and while it’s a pretty great job, it’s important for anyone interacting with a venture capital firm to understand the basic mechanics and incentives that shape how the industry functions.
Venture capital is a form of private equity. While there are many differences within the industry, both venture capital managers and private equity managers strive to achieve very high returns in a relatively short period of time.
Venture capitalists are professional money managers: they are paid to manage money for clients. Their goal is to return to those clients a multiple (e.g., 2x or 3x) of the capital they are provided. For many VCs, investing in technology and innovation is the vehicle they use to pursue this goal. VCs are willing to invest in high-risk businesses because of the potentially high reward that can be achieved by taking on this risk.
Therefore, not all businesses should raise venture capital. You should always consider all financing options and find the best fit for your needs based on your planned growth rates and, most important, the ultimate goal that you have for your business.
The theory of venture capital
Venture capital typically uses a portfolio approach to capture a distribution of possible outcomes. Venture fund returns are not driven by the “average” outcome of the portfolio, but rather by the significant outliers — a point that Fred Wilson, of Union Square Ventures, has made often.
The reality is that most startups fail. For VCs to be successful, they must identify “home runs.” The venture business is driven by these outliers. And truly successful VCs don’t just identify home runs; they identify grand slams. This shows up in the data: funds that have returned more than 5x the capital provided by their investors almost always show a high concentration of these “grand slams.” In fact, in these exceptionally high-returning funds, almost 20% of deals exceed 10x on the capital the fund invested. These individual “grand slams” (10x) drive the overall fund return (5x). These “grand slam” investments determine whether a VC is successful. VC is about the wins, not the losses.
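The arithmetic behind this outlier dynamic is easy to sketch. The portfolio below is entirely hypothetical — the fund size, check sizes, and return multiples are invented for illustration — but it shows how a handful of 10x-plus deals can carry a whole fund to a roughly 5x return even when most investments go to zero:

```python
# Hypothetical seed fund: $100M deployed in equal checks across 20 startups.
# The return multiples are invented: most deals fail, a few return ~1x,
# and four "grand slams" (20% of the portfolio) return well over 10x.
fund_size = 100_000_000
check = fund_size / 20  # $5M per deal

multiples = [0] * 12 + [1] * 4 + [35, 30, 20, 15]

proceeds = sum(m * check for m in multiples)
print(f"Fund multiple: {proceeds / fund_size:.1f}x")  # → Fund multiple: 5.2x

grand_slams = sum(m * check for m in multiples if m >= 10)
print(f"Share of proceeds from grand slams: {grand_slams / proceeds:.0%}")  # → 96%
```

Remove the four outliers and the very same portfolio returns just 0.2x — which is the point: the wins, not the losses, determine the fund.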
The venture capital bargain
If you are the founder of a startup and want to raise venture capital, you need to understand the deal that you will strike with a VC when you accept an investment. In particular, you need to understand what you will give up.
First, you give up absolute authority and control. While most VCs don’t meddle in a company that performs according to plan, you likely will add these investors to your Board of Directors, where they will be able to wield influence (usually more often in bad times, but not exclusively). Remember, they have a portfolio of companies and a set of investors, just like you do. Their incentives won’t—and shouldn’t—always align with yours.
Second, you give up a degree of ownership, also known as dilution. Typically, a VC that invests in a seed or Series A deal (early stage) wants to own 20–30% of your company. This allows them to maintain a reasonable ownership stake in the likely event that you decide to raise more capital. As you raise more, those early investors also get diluted, reducing their overall ownership (often, investors in early rounds also participate in future rounds, using what is called their Pro Rata right, but let’s not get distracted). By the time the company achieves a liquidity event (for example, the company is sold or goes public), an early investor who didn’t follow on might own only 2% or so.
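To put rough numbers on that dilution — the starting stake and per-round dilution below are assumptions for illustration, not market data — the math is simply geometric decay:

```python
# Illustrative only: a 25% initial stake, diluted ~25% by each later round
# (new shares for new investors plus option-pool expansions), with no
# pro rata participation to maintain ownership along the way.
stake = 0.25
rounds = 0
while stake > 0.02:        # the "2% or so" figure mentioned above
    stake *= 1 - 0.25      # each round, existing holders keep 75% of their stake
    rounds += 1
print(f"{rounds} rounds later: {stake:.1%}")  # → 9 rounds later: 1.9%
```

Nine rounds is more than most companies ever raise, but the direction of the math is the point: without exercising pro rata rights, early ownership erodes quickly.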
Last, you give up preference (which we discuss in more detail at the end of this post). That means that even though your blood, sweat, and tears have gone into this (and maybe some of your own capital, or capital from your friends and family), VC investors with a preference will almost always get their money back before you see any returns. This preference is important to understand as you think about how you will fare in the event of the hallowed IPO or acquisition.
You give this stuff up for a reason, though. You give up (some) control, preference, and ownership in exchange for significant capital that you could not raise anywhere else. Additionally, you gain the experience of the investors who are now working with you to achieve the best outcome.
Understanding these three principles is key to understanding the bargain of venture capital.
How venture capitalists make money
Now that we have covered how VC funds think about investment returns, you need to understand how venture capitalists—the actual individuals you meet or who sit on your board—get paid. In particular, you should be aware of two mechanisms: management fees and carried interest.
Management fees are fees paid by the investors of the VC fund. This fee is similar to what you might pay when you invest in a mutual fund or would pay to a professional money manager who manages your stocks and bonds. It is typically in the range of 1.5–2.5% annually of the funds managed, and is paid quarterly. The fee is paid independent of investment success. That said, the fee will decrease over the lifetime of the fund as the focus of the fund shifts from making new investments to harvesting returns on the investments made.
The second way VCs make money—and the one that generally gets a bulk of the attention—is carried interest, often referred to as carry. Carry is the share of the profit that the VC fund keeps. That is, once a VC fund has returned 100% of the capital provided back to investors, the VC fund begins to participate in a share of the profits above that 100% of capital returned. Typically, carry is pegged at 20% of the profits. Carry is a tricky business, though, as it is a fund-wide return that relies on grand slams to make up for complete losses elsewhere in the portfolio. Furthermore, not all members of a VC fund participate equally in receiving a portion of the carry. For younger folks at VC firms, “achieving carry” is a very big step up in the world.
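A back-of-the-envelope example makes these “two and twenty” economics concrete. The fund size, fee level, and exit proceeds below are hypothetical:

```python
# Hypothetical "2 and 20" fund economics.
fund_size = 100_000_000       # capital committed by the fund's investors (LPs)
exit_proceeds = 300_000_000   # what the portfolio ultimately returns

# Management fee: ~2% of committed capital per year, paid regardless of
# performance (and typically stepping down in the fund's later years).
annual_fee = 0.02 * fund_size            # $2M per year

# Carry: the LPs first get 100% of their capital back; the GP then
# keeps 20% of the profit above that threshold.
profit = exit_proceeds - fund_size       # $200M of profit
carry = 0.20 * profit                    # $40M to the VC firm
to_lps = exit_proceeds - carry           # $260M back to the LPs

print(f"Annual management fee: ${annual_fee:,.0f}")  # → $2,000,000
print(f"Carry to the GP:      ${carry:,.0f}")        # → $40,000,000
print(f"Total to the LPs:     ${to_lps:,.0f}")       # → $260,000,000
```

Real funds layer on details — fee step-downs, hurdle rates, GP commitments, recycling — but this is the core split between the fund’s managers and its investors.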
Roles and responsibilities of a VC team
Speaking of the different members of a VC team, you should know the roles of the people you talk to when engaging a VC fund. It will help you craft your discussion to get the optimal result. Plus, it will help you understand their power to influence decisions within the firm.
The first chapter of Brad Feld’s and Jason Mendelson’s book, Venture Deals, does a wonderful job of outlining the different roles within a VC firm. I’ve borrowed heavily from that book in my descriptions of VC roles below (Mendelson, 2013):
- Analysts – Generally very smart people with limited power and responsibility. They usually are recent graduates from college, and they typically do a lot of the important work that no one else wants to do or has the time to do, such as crunching numbers and writing memos.
- Associates – Typically not deal partners, but they support one or more deal partners, usually a General Partner or Managing Director. They have a wide variety of responsibilities including scouting new deals, helping with due diligence, and writing internal investment memos.
- Sr. Associates/Principals – The next generation of Managing Directors. They are junior deal partners. They typically have some limited deal responsibility, but also often require support from a Managing Director. Typically they don’t make final decisions. That’s usually left to the Investment Committee, comprised of GPs and Managing Directors.
- Managing Directors/General Partners – The most senior people in the firm. These folks are the leaders of the firm. They approve all investments and sit on the boards of companies.
- Others: EIRs, Venture Partners, Operating Partners – These titles can mean a variety of things depending on the firm and the individual who holds the title. In general, these folks either are responsible for assisting the portfolio companies post-investment (Operating Partner) or they are part-time affiliates of the fund who incubate their own startup idea (EIR) or provide more deal flow to the VC fund (Venture Partner).
Deal sourcing
Venture capitalists typically look at thousands of deals per year and invest in a small minority of these opportunities (approximately 1–4% of these deals, depending on the strategy of the fund). So, where do all these deals come from? “Deal flow,” the colloquialism often used to describe the volume of deals that a VC reviews, comes from a variety of sources. But the most valuable deal flow often comes from fellow VCs, founders, or other parts of an investor’s network who can pass along reliable, well-vetted opportunities.
Some VC firms use cold calling or outbound deal flow tactics, but the vast majority rely on warm leads sent by colleagues, friends, and former business partners. In fact, some VCs meet only with companies where the founder was introduced by another VC or from a founder that the investor has already worked with in the past. Some VCs won’t look at deals with a first-time founder. Some look at deals only if the founding team is from a certain school, or has a pedigree from certain technology companies. And, of course, some VCs ignore all these policies. The point is that the value of a VC is often equivalent to the reach and extent of their network of collaborators willing to send them investment opportunities that they know will align with the investment parameters of the VC fund.
Why venture capitalists invest at different stages
VC funds invest at different stages of a company’s lifecycle. Some focus only on “early stage investing,” while others focus only on “late stage.” Others focus on all stages, often referring to themselves as “stage agnostic.” Broadly speaking, each stage of investment—early, mid, or late—can be defined by the different type of risk associated with the business at that stage of its maturation. In addition to risk, investors at different stages also tend to look at different metrics, or, in the absence of metrics, the traits of a business to justify an investment.
The type of risk associated with early stage investing is what is called ideation and/or product market fit. Will the service or product you are selling have a real market? Or is what you have built really just a solution looking for a problem? Early stage investors typically focus on the founding team and their pedigree as well as acumen and drive as parameters for justifying an investment because there is little else to go on at this point in the business. Early stage investors generally ask themselves, “Can this product or service solve a big problem, and is this the right team to build the product or service?”
The type of risk associated with mid-stage investing (known as “traditional” VC investing only a few years ago) is about gaining traction and creating a business that can successfully scale to reach the broadest markets possible. At this stage, companies typically raise capital to support growth in Sales & Marketing spend and hiring. While early stage investors focus on getting the fire started, mid-stage investors tend to invest to help the company “pour more gas on the fire” (another colloquialism you will hear plenty). Mid-stage VCs are usually more focused on the core team—not just the founder—and the quality of the team being built to help achieve the scale that they want.
Finally, the type of risk associated with late-stage investing is best described as exit risk. While early stage and mid-stage are focused on creating the foundations of the company, and then rapidly scaling the business off of those foundations, later-stage investors are more focused on how you can monetize the business, through either a sale or an Initial Public Offering (IPO). Valuation and deal structure are always important at any stage, but they become the primary drivers of the return at the later stage. Investing in a company at a $20 MM valuation with the goal of growing the business to a multiple of that is far different from investing in a company at a $500 MM valuation and having a good understanding of what type of pricing the company could likely get at an IPO. Late-stage investors spend much more time thinking about IPO windows (periods of time when going public is more conducive) and the cash balances of potential acquirers than either early or mid-stage investors.
The ABCs of VC funding
As I mentioned, there are different stages at which investors invest. Most people are familiar with terms such as seed or Series B, but many don’t quite understand what those terms actually mean. To be fair, these terms—particularly seed—are constantly redefined or orphaned, and many folks just use them to suit their own needs. But there is some logic to the ABCs of VC.
In general, VC funding follows a Last in, First out (LIFO) principle. Those who invest their money last into a company will typically receive their money back first, before anyone else (i.e., before those who invested earlier). This is not always the case, but a good rule of thumb. That means that all investors in a company ARE NOT EQUAL. Shares issued to VC investors are typically preferred securities. This means exactly what it sounds like: the securities have preference over the securities that were issued before. For example, an investor who invests in the Series B of a company will receive an allotment of Series B Preferred Stock, representing their ownership of the company. This Series B Preferred Stock will generally have “preference” over the Series A Preferred Stock holders (who in turn have preference over common stockholders, who have no preference at all).
Seed, Series A, or Series B and so on can be good indications of the stage of a company (although they also can be misleading), but the truth is that the reference is more to the preference of the holders of the securities. There are other important concepts, such as liquidation preference, that relate to this idea, but I will cover those in subsequent posts.
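The LIFO idea can be sketched as a toy payout “waterfall.” Everything here is a simplification for illustration: the `lifo_waterfall` function and the round sizes are made up, each series is assumed to hold a simple 1x non-participating preference, and conversion to common stock is ignored.

```python
def lifo_waterfall(exit_proceeds, preferences):
    """Distribute exit proceeds through a stack of 1x liquidation preferences.

    `preferences` is ordered earliest round first (Series A before Series B);
    later rounds are paid back first (last in, first out), and whatever is
    left flows to common stockholders. Participation rights, caps, and
    conversion to common are all ignored in this toy model.
    """
    payouts = {}
    remaining = float(exit_proceeds)
    for name, invested in reversed(preferences):
        paid = min(remaining, invested)
        payouts[name] = paid
        remaining -= paid
    payouts["Common"] = remaining
    return payouts

# Hypothetical company: $5M Series A, then $15M Series B, sold for $18M.
print(lifo_waterfall(18e6, [("Series A", 5e6), ("Series B", 15e6)]))
# {'Series B': 15000000.0, 'Series A': 3000000.0, 'Common': 0.0}
```

The made-up numbers show why preference matters: at an $18M exit, Series B is made whole, Series A recovers only $3M of its $5M, and common stockholders — often the founders and employees — receive nothing.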
Conclusion
Venture capitalists have a job to do, just like everyone else. Yes, it’s a pretty cool job, but a job nonetheless. The goal of all VCs is to generate returns through liquidation events (and with all the talk about valuations in the news these days, it’s easy to forget that a valuation means nothing in terms of actual returns until the company achieves a liquidity event).
I hope this post has helped to expose some of the basic functions, mechanics, and, importantly, incentives as to why VCs do what they do and how they do it. If you are a founder of a company who plans to raise or has already raised venture capital financing, you don’t have to be an expert in the ins-and-outs of venture capital. However, it’s helpful to have a solid understanding given the potential impact on your ultimate success and achievements, both financially as well as personally.
We can help: AWS Activate
AWS Activate is a program designed to provide your startup with the resources you need to get started on AWS. All startups can apply for the Self-Starter Package. We also have custom Portfolio and Portfolio Plus packages for startups in select accelerators, incubators, Seed/VC Funds, and startup-enabling organizations.
Portfolio Package
The Portfolio Package is designed for startups in select accelerators, incubators, Seed/VC Funds, and other startup-enabling organizations. If your startup qualifies for a Portfolio Package, you can receive up to $15,000 in AWS Promotional Credit and other benefits. Ask your program director for more information about how to apply.
Portfolio Plus
If your startup qualifies for a Portfolio Plus package, you can receive up to $15,000 in AWS Promotional Credit for 2 years or $100,000 for 1 year and other benefits. Ask your program director for more information about how to apply.
To learn more about corporate VCs and the other flavors of VCs, and to obtain links to blogs, books, and newsletters from prominent thought leaders, check out Russ’s slideshare deck, “VC 101: How, What & Why VCs Do What They Do.”
1. Chris Dixon, Andreessen Horowitz blog post.
Andrew Pimlott wrote: >[I think it is preferred to post this on haskell-cafe.] > > Oops! I guess you're right. >On Fri, Mar 18, 2005 at 02:00:37PM -0800, Juan Carlos Arevalo Baeza wrote: > > >>matchRuleST :: String -> State RuleSet (Maybe Rule) >>makeST :: String -> StateT RuleSet IO () >> >> matchRuleST doesn't really need IO for anything (it just works on the >>current state "RuleSet"). makeST needs IO (it does file date >>comparisons, actions on the files, etc... the usual "make" stuff). The >>problem is how to properly use matchRuleST from makeST. >> >> > . >> Then, I decided to try again, and came up with this function: >> >>liftState :: State s a -> StateT s m a >> >> > >(I think you left out the constraint (Monad m).) > > Yes, I did, thanx. I wrote the message a tad little bit too early :. > > Yes. I prefer clarity, too. And I did it ugly (still groping with the syntax). This is the final version: liftState :: Monad m => State s a -> StateT s m a liftState s = do state1 <- get let (result, state) = evalState (do {result <- s; state <- get; return (result, state)}) state1 put state return result JCAB | http://www.haskell.org/pipermail/haskell-cafe/2005-March/009442.html | CC-MAIN-2014-42 | refinedweb | 185 | 81.63 |
Windows Forms controls are components that can be placed in Windows Forms applications (GUI applications that target the Common Language Runtime). Windows Forms applications in Visual C++ use .NET Framework classes and other .NET features with the new Visual C++ syntax.
In this procedure, you create a Windows Forms control that displays a number which increments each time the label is clicked in an application. You will also create a Windows Forms application project to test the control.
This walkthrough covers the following:
Creating a New Project
Designing the Control
Adding a Custom Property to the Control
Adding a Project to Test the Control
Placing the Control in an Application
Running the Application
The Windows Forms Control project template you use in this section creates a User Control, which is a composite control that contains other controls.
Alternatively, you can create a Windows Forms control by deriving a class directly from the Control class (where your code is responsible for drawing the control) or the Component class (a control with no UI).
On the File menu, click New, and then click Project….
In the Project Types pane, select CLR in the Visual C++ node, then select Windows Forms Control Library in the Visual Studio installed templates pane.
Type a name for the project, such as clickcounter.
Type a different name for the solution, such as controlandtestapp.
You can accept the default location, type in a location or browse to a directory where you want to save the project.
The Windows Forms Designer opens, showing an area where you add the controls you want to place on the control design surface.
In this step, you add a Label control to the control design surface. You then set some properties on the control itself and on the Label control it contains.
If the Properties window is not visible, on the View menu, click Properties Window.
Click on the control to select it and set its properties as follows:
Set the Size property to 100, 100.
Set the BorderStyle to Fixed3D.
The label boundaries will be visible when the control placed in an application.
If the Toolbox window is not visible, click Toolbox on the View menu.
Drag a Label control from the Toolbox to the design surface and place it near the middle of the control.
Set these properties for the label:
Set the BorderStyle to FixedSingle.
Set the Text to the digit 0 (zero).
Set the Autosize to False.
Set the Size to 30, 20.
Set the TextAlign to MiddleCenter.
Leave the Name property (how you refer to it in code) unchanged as label1.
Add an event handler for the label Click event (the default event for a label) by double-clicking on the label.
The clickcounter.h file is displayed in the editing area with an empty event handler method generated for you.
Close the Toolbox or Properties window if you need more room by clicking on their Close boxes or unpinning them so they auto-hide.
Press Enter after the opening brace of the label1_Click method and type:
int temp = System::Int32::Parse(label1->Text);
temp++;
label1->Text = temp.ToString();
IntelliSense™ displays a list of valid choices after you type a scope resolution operator (::), dot operator (.) or arrow operator (->). You can highlight an item and press Tab or Enter or double-click an item to insert that item into your code.
Also, when you type an opening parenthesis for a method, Visual Studio displays valid argument types for each overload of the method.
In this step, you define a custom property that allows an application developer to determine whether the number displayed on the control increments when the label is clicked or when any location on the control is clicked.
Place the cursor after the colon of the first public scope indicator at the top of the clickcounterControl.h file, press Enter, then type the following:
property bool ClickAnywhere {
bool get() {
return (label1->Dock == DockStyle::Fill);
}
void set(bool val) {
if (val)
label1->Dock = DockStyle::Fill;
else
label1->Dock = DockStyle::None;
}
}
When the ClickAnywhere property of the control is set to true, the Dock property of the label is set to DockStyle::Fill, so the label fills the entire control surface. A click anywhere on the control surface will then cause a label Click event, incrementing the number on the label.
When the ClickAnywhere property is false (the default), the Dock property of the label is set to DockStyle::None. The label does not fill the control, and a click on the control must be inside the label boundaries to cause a label Click event, incrementing the number.
Build the User Control. On the Build menu, select Build Solution.
If there are no errors, a Windows Forms control is generated with a file name of clickcounter.dll. You can locate this file in your project directory structure.
In this step, you create a Windows Forms application project where you will place instances of the clickcounter control on a form.
The Windows Forms application you create to test the control can be written with Visual C++ or another .NET language, such as C# or Visual Basic .NET.
You can also add a project to the solution by right-clicking on the controlandtestapp solution in Solution Explorer, pointing to Add, then clicking New Project….
In the Project Types pane, select CLR in the Visual C++ node, then select Windows Forms Application in the Visual Studio installed templates pane.
Type a name for the project, such as testapp.
Be sure to select Add to Solution instead of accepting the default Create New Solution setting in the Solution drop-down list, then click OK.
The Windows Forms Designer for the new project opens, showing a new form called Form1.
Add a reference to the control. On the Project menu, click References or right-click on the testapp project in Solution Explorer and click References.
Click the Add New Reference button, then click the Projects tab (you are adding a reference to another project in this solution) and select the clickcounter project. Click OK twice.
Right-click on the Toolbox and click Choose Items.
Click the Browse button and locate the clickcounter.dll file in your solution directory structure. Select it and click Open.
The clickcounter control is shown in the .NET Framework Components list with a check mark. Click OK.
The control appears in the Toolbox with the default "gear" icon.
In this step, you place two instances of the control on an application form and set their properties.
Drag two instances of the clickcounter control from the Toolbox. Place them on the form so they don't overlap.
If you need to make the form wider, click on the form to select it and drag one of the selection handles outward.
If the Properties window is not visible, click Properties on the View menu.
The ClickAnywhere property is in the Misc. section of the Property Window if properties are organized by category.
Click one instance of the clickcounter control on the form to select it, then set its ClickAnywhere property to true.
Leave the ClickAnywhere property of the other instance of the clickcounter control set to false (the default).
Right-click on the testapp project in Solution Explorer and select Set As StartUp Project.
On the Build menu, click Rebuild Solution.
You should see that the two projects built with no errors.
In this step, you run the application, and click on the controls to test them.
On the Debug menu, click Start Debugging.
The form appears with the two instances of the control visible.
Run the application and click on both clickcounter controls:
Click on the control with ClickAnywhere set to true.
The number on the label increments when you click anywhere on the control.
Click on the control with ClickAnywhere set to false.
The number on the label increments only when you click within the visible boundary of the label.
Close the test application by clicking its Close box in the upper right corner of the Form1 window. | http://msdn.microsoft.com/en-us/library/ms235628(VS.80).aspx | crawl-002 | refinedweb | 1,354 | 63.9 |
Develop a simple Hello World email client application for Exchange by using the EWS Managed API.
The EWS Managed API provides an intuitive, easy-to-use object model for sending and receiving web service messages from client applications, portal applications, and service applications. You can access almost all the information stored in an Exchange Online, Exchange Online as part of Office 365, or an Exchange server mailbox by using the EWS Managed API. You can use the information in this article to help you develop your first EWS Managed API client application.
You'll need an Exchange server
If you already have an Exchange mailbox account, you can skip this section. Otherwise, you have the following options for setting up an Exchange mailbox for your first EWS client application:
Get an Office 365 Developer Site (recommended). This is the quickest way for you to set up an Exchange mailbox.
Download Exchange Server.
After you have verified that you can send and receive email from Exchange, you are ready to set up your development environment. You can use the Exchange web client Outlook Web App to verify that you can send email.
Set up your development environment
Make sure that you have access to the following:
Any version of Visual Studio that supports the .NET Framework 4. Although technically, you don't need Visual Studio because you can use any C# compiler, we recommend that you use it.
The EWS Managed API. You can use either the 64-bit or 32-bit version, depending on your system. Use the default installation location.
Create your first EWS Managed API application
These steps assume that you set up an Office 365 Developer Site. If you downloaded and installed Exchange, you will need to install a valid certificate on your Exchange server or implement a certificate validation callback for a self-signed certificate that is provided by default. Also note that these steps might vary slightly depending on the version of Visual Studio that you are using.
Step 1: Create a project in Visual Studio
In Visual Studio, on the File menu, choose New, and then choose Project. The New Project dialog box opens.
Create a C# Console Application. In Solution Explorer, open the shortcut menu (right-click) for References and choose Add Reference from the context menu. A dialog box for managing project references will open.
Choose the Browse option. Browse to the location where you installed the EWS Managed API DLL. The default path set by the installer is the following: C:\Program Files\Microsoft\Exchange\Web Services\<version>. The path can vary based on whether you downloaded the 32-bit or 64-bit version of the Microsoft.Exchange.WebServices.dll. Choose Microsoft.Exchange.WebServices.dll and select OK or Add. This adds the EWS Managed API reference to your project.
If you are using EWS Managed API 2.0, change the HelloWorld project to target the .NET Framework 4. Other versions of the EWS Managed API might use a different target version of the .NET Framework.
Confirm that you are using the correct target version of the .NET Framework. Open the shortcut menu (right-click) for your HelloWorld project in the Solution Explorer, and choose Properties. Verify that the .NET Framework 4 is selected in the Target framework drop-down box.
Now that you have your project set up and you created a reference to the EWS Managed API, you are ready to create your first application. To keep things simple, add your code to the Program.cs file. Read Reference the EWS Managed API assembly for more information about referencing the EWS Managed API. In the next step, you will develop the basic code to write most EWS Managed API client applications.
Step 3: Set up URL redirection validation for Autodiscover
Add the following redirection validation callback method after the Main(string[] args) method. It validates that redirected URLs returned by Autodiscover represent an HTTPS endpoint.

private static bool RedirectionUrlValidationCallback(string redirectionUrl)
{
    // The default for the validation callback is to reject the URL.
    bool result = false;
    Uri redirectionUri = new Uri(redirectionUrl);
    // Consider the redirection valid only if it uses HTTPS to
    // encrypt the authentication credentials.
    if (redirectionUri.Scheme == "https")
    {
        result = true;
    }
    return result;
}
This validation callback will be passed to the ExchangeService object in step 4. You need this so that your application will trust and follow Autodiscover redirects - the results of the Autodiscover redirect provides the EWS endpoint for our application.
Step 4: Prepare the ExchangeService object
Add a using directive reference to the EWS Managed API. Add the following code after the last using directive at the top of Program.cs.
using Microsoft.Exchange.WebServices.Data;
In the Main method, instantiate the ExchangeService object with the service version you intend to target. This example targets the earliest version of the EWS schema.
ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
If you are targeting an on-premises Exchange server and your client is domain joined, proceed to step 4. If you client is targeting an Exchange Online or Office 365 Developer Site mailbox, you have to pass explicit credentials. Add the following code after the instantiation of the ExchangeService object and set the credentials for your mailbox account. The user name should be the user principal name. Proceed to step 5.
service.Credentials = new WebCredentials("user1@contoso.com", "password");
Domain-joined clients that target an on-premises Exchange server can use the default credentials of the user who is logged on, assuming the credentials are associated with a mailbox. Add the following code after the instantiation of the ExchangeService object.
service.UseDefaultCredentials = true;
If your client targets an Exchange Online or Office 365 Developer Site mailbox, verify that UseDefaultCredentials is set to false, which is the default value. Your client is ready to make the first call to the Autodiscover service to get the service URL for calls to the EWS service.
The AutodiscoverUrl method on the ExchangeService object performs a series of calls to the Autodiscover service to get the service URL. If this method call is successful, the URL property on the ExchangeService object will be set with the service URL. Pass the user's email address and the RedirectionUrlValidationCallback to the AutodiscoverUrl method. Add the following code after the credentials have been specified in step 3 or 4. Change user1@contoso.com to your email address so that the Autodiscover service finds your EWS endpoint.
service.AutodiscoverUrl("user1@contoso.com", RedirectionUrlValidationCallback);
At this point, your client is set up to make calls to EWS to access mailbox data. If you run your code now, you can verify that the AutodiscoverUrl method call worked by examining the contents of the ExchangeService.Url property. If this property contains a URL, your call was a success! This means that your application successfully authenticated with the service and discovered the EWS endpoint for your mailbox. Now you are ready to make your first calls to EWS. Read Set the EWS service URL by using the EWS Managed API for more information about setting the EWS URL.
Step 6: Create your first Hello World email message
After the AutodiscoverUrl method call, instantiate a new EmailMessage object and pass in the service object you created.

EmailMessage email = new EmailMessage(service);
You now have an email message on which the service binding is set. Any calls initiated on the EmailMessage object will be targeted at the service.
Now set the To: line recipient of the email message. To do this, change user1@contoso.com to your SMTP address.

email.ToRecipients.Add("user1@contoso.com");
Set the subject and the body of the email message.
email.Subject = "HelloWorld"; email.Body = new MessageBody("This is the first email I've sent by using the EWS Managed API.");
You are now ready to send your first email message by using the EWS Managed API. The Send method will call the service and submit the email message for delivery.

email.Send();

Read Communicate with EWS by using the EWS Managed API to learn about other methods you can use to communicate with Exchange.
You are ready to run your Hello World application. In Visual Studio, select F5. A blank console window will open. You will not see anything in the console window while your application authenticates, follows Autodiscover redirections, and then makes its first call to create an email message that you send to yourself. If you want to see the calls being made, add the following two lines of code before the AutodiscoverUrl method is called. Then press F5. This will trace out the EWS requests and responses to the console window.
service.TraceEnabled = true; service.TraceFlags = TraceFlags.All;
You now have a working EWS Managed API client application. For your convenience, the following example shows all the code that you added to Program.cs to create your Hello World application.
using System;
using Microsoft.Exchange.WebServices.Data;

namespace HelloWorld
{
    class Program
    {
        static void Main(string[] args)
        {
            ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
            service.Credentials = new WebCredentials("user1@contoso.com", "password");
            service.AutodiscoverUrl("user1@contoso.com", RedirectionUrlValidationCallback);

            EmailMessage email = new EmailMessage(service);
            email.ToRecipients.Add("user1@contoso.com");
            email.Subject = "HelloWorld";
            email.Body = new MessageBody("This is the first email I've sent by using the EWS Managed API.");
            email.Send();
        }

        private static bool RedirectionUrlValidationCallback(string redirectionUrl)
        {
            // The default for the validation callback is to reject the URL.
            bool result = false;
            Uri redirectionUri = new Uri(redirectionUrl);
            // Consider the redirection valid only if it uses HTTPS.
            if (redirectionUri.Scheme == "https")
            {
                result = true;
            }
            return result;
        }
    }
}
Next steps
If you're ready to do more with your first EWS Managed API client application, explore the following resources:
If you run into any issues with your application, try posting a question or comment in the forum (and don't forget to read the top post).
In this section
- Reference the EWS Managed API assembly
- Set the EWS service URL by using the EWS Managed API
- Communicate with EWS by using the EWS Managed API | https://docs.microsoft.com/en-us/exchange/client-developer/exchange-web-services/get-started-with-ews-managed-api-client-applications | CC-MAIN-2018-51 | refinedweb | 1,504 | 56.55 |
Archived:Python on Symbian/03. System Information and Operations on Symbian
Original Authors: Pankaj Nathani and Mike Jipping
Having covered the basic Python conventions and coding syntax in Chapter 2, it's time to move on to discuss how to use Python on Symbian. This chapter describes how to access device and application information, how to perform system operations and how to work with files and time values.
This chapter introduces sysinfo and e32, the modules that provide information about the device and the current application respectively. The e32 module also contains utilities to perform system operations, such as running and pausing scripts and setting the device time.
Finally, the chapter discusses core functionality inherited from standard Python, such as how to perform operations related to time and how to manipulate files. It covers Symbian's file naming and handling conventions and shows how Python handles files in ways similar to other programming languages.
The methods described in this chapter are frequently used in Python applications for Symbian devices, including many of the examples in this book.
The System Information Module
The sysinfo module can be used to access the following device information:
- IMEI
- signal strength
- battery strength
- firmware version
- OS version
- screen display resolution
- current profile
- current ringing type
- RAM and ROM drive size
- the amount of free RAM, ROM and disk drive space.
The module is easy to use. For example, the following two lines in the interactive PySymbian console return the IMEI of the device:
>>>import sysinfo
>>>sysinfo.imei()
u'2174027202720292'
Note: Every mobile device on a GSM network is identified by a unique number called its IMEI (International Mobile Equipment Identity).
The IMEI of the device can be retrieved manually by dialing *#06#.
The following code snippets show the methods used to obtain the information and their return types.
# Retrieve battery strength as an integer. The value is in the range 0 (empty) to 100 (full).
>>> sysinfo.battery()
42
# Retrieve signal strength as an integer. The value is in the range 0 (no signal) to 7 (strong signal).
>>> sysinfo.signal_bars()
7
# Retrieve network signal strength in dBm (integer)
>>> sysinfo.signal_dbm()
81
# Retrieve the IMEI code of the device as a Unicode string
>>> sysinfo.imei()
u'354829024755443'
# Retrieve the software version as a Unicode string. This is the version number that a device displays when *#0000# is dialled.
>>> sysinfo.sw_version()
u'V 20.0.016 28-2-08 RM-320 N95(c)NMP'
# Retrieve the width and height of the display (in the current orientation) in pixels
>>> sysinfo.display_pixels()
(240, 320)
# Get the current profile name as a string ('general', 'silent', 'meeting', 'outdoor', 'pager', 'offline', etc.). If a custom profile is selected, it returns 'user <profile value>'
>>> sysinfo.active_profile()
'general'
# Retrieve the ringing type as a string - 'normal', 'ascending', 'ring_once', 'beep' or 'silent'
>>> sysinfo.ring_type()
'normal'
# Get the OS version as a tuple
>>> sysinfo.os_version()
(2, 0, 1540)
Symbian devices usually have an internal disk (C:) and a memory card (E:). If there is a random access memory (RAM) drive it is mapped to D: and Z: is the read only memory (ROM) drive. There are exceptions: some devices use E drive for 'internal' mass memory and some have additional drives. The amount of ROM, RAM and disk space varies across devices.
sysinfo can also return information about the device memory, shown as follows:
# Get free RAM in bytes
>>> sysinfo.free_ram()
81838080
# Get total RAM in bytes
>>> sysinfo.total_ram()
125501440
# Get total ROM in bytes
>>> sysinfo.total_rom()
8716288
# Get free disk space in bytes. Drive can be C: or E:
>>> sysinfo.free_drivespace()['C:']
16207872
Note that the e32 module provides drive_list() to get the list of drives, as discussed in the following section.
The e32 Module
The e32 module contains a useful and varied collection of utilities. The utilities range from methods that retrieve information about the device and current application, to methods that pause the current script or reset the device inactivity timer. There are also methods available for filesystem operations, such as copying files and folders.
The e32 module provides the following 'system' information in addition to that provided by sysinfo:
- The Python runtime version
- The Symbian device runtime version
- The list of available drives
- The platform security capabilities of the current application
- The length of time since the last user activity
- Whether or not the current thread is a UI thread
- Whether or not the current application or script is running on an emulator
The code snippet below shows the methods used to obtain the above information.
# Get the Python runtime version - tuple of version number: major, minor, micro and release tag,
>>> e32.pys60_version()
(1, 9, 7, 'svn2793')
# Get the Python runtime version - tuple of major, minor, micro, release tag, and serial.
>>> e32.pys60_version_info()
(1, 9, 7, 'svn2793', 0)
# Get the Symbian platform device version - tuple of major and minor version
>>> e32.s60_version_info()
(3, 0)
# Retrieve a list of available memory drives. For example: [u'C:',u'D:',u'E:',u'Z:']
>>>e32.drive_list()
[u'C:',u'D:',u'E:',u'F:',u'Z:']
# Get the time in seconds since the last user activity
>>> e32.inactivity()
3
# Get the platform security capabilities granted to the application, returned as a tuple
>>> e32.get_capabilities()
('ReadUserData','WriteUserData')
# Test if application has been granted the specified platform security capabilities. Returns True or False.
# querying single capability
>>> e32.has_capabilities('WriteUserData')
True
# Query whether the application has two capabilities
>>> e32.has_capabilities('WriteUserData','ReadUserData')
True
# Test if the calling script is running in the context of a UI thread. Returns True or False
>>> e32.is_ui_thread()
True
# Test whether or not the application or script is running in an emulator. Returns 1 if yes, 0 if no.
>>>e32.in_emulator()
1
The e32 module can perform the following system operations on the device.
Set the device's system time
The set_home_time() function sets the device system time to given time (for e.g. user_time in the example below). The device system time can be retrieved through the time module as explained in next section.
import e32
import time
# current time
user_time = time.time()
# modify the user_time
user_time -= 10.0
e32.set_home_time(user_time)
Note: The WriteDeviceData capability is required to use this function. For more information, refer to Chapter 15.
Copy files
The file_copy() function can be used to copy files from one folder to another.
>>> source_path=u"C:\\Python\\myimage.jpg"
>>> destination_path=u"C:\\Data\\Images\\myimage.jpg"
>>> e32.file_copy(destination_path,source_path)
To copy all of the files in the source folder append '*.*' to the source path.
>>> source_path=u"C:\\Python\\*.*"
>>> destination_path=u"C:\\Data"
>>> e32.file_copy(destination_path,source_path)
This copies all the files in the C:\Python directory to the C:\Data directory.
Launch an application
The start_exe() method can be used to launch a specified executable (.exe) file, as follows.
>>> e32.start_exe(exe_filename, command, wait_flag)
The parameter command passes command line arguments to the executable. If no arguments are required, it may be set to NULL or an empty string. The parameter wait_flag is an optional parameter as described below.
The following statement is used to launch the Web browser with the default URL, and the command parameter is passed an empty string.
>>>>> a=e32.start_exe(application_exe, "")
In order to open the web browser application with a particular URL, we pass it as a command as follows.
>>>>>>> e32.start_exe(application_exe, ' "%s"' %url)
In both the previous examples the browser is launched asynchronously. We can use the optional wait_flag integer parameter to launch an application synchronously.
If a non-zero value is specified for wait_flag then start_exe() launches the application and waits for it to exit. It then returns 0 if the application exits normally or 2 if the application terminates abnormally.
>>> wait_flag=1
>>>>> exit_type=e32.start_exe(application_exe,' "%s"' %url, wait_flag)
>>> print exit_type
The '4 ' prefix in the url parameter indicates that the browser should launch the url specified. Other predefined prefixes for the parameter are shown below:
The start_exe() function can also be used to launch other native and third party applications.
Launch a python script as a server
A Python script can be launched in the background (as a server script) with the start_server() function.
>>> server_script.py=u"C:\\Data\\Python\\myscript.py"
>>> e32.start_server(server_script)
Reset inactivity timer
The reset_inactivity() method can be used to reset the inactivity timer programmatically .
>>> e32.reset_inactivity()
Resetting the inactivity timer at regular intervals prevents the device from entering a low power mode during periods of user inactivity. This can be useful if an application needs to remain visible when the user is not "doing anything" - for example, when using an in-car hands-free navigation system. See also inactivity(), which returns the current value of the inactivity timer.
Wait/Sleep
Note: Active objects are used extensively in Symbian C++ to provide co-operative multitasking to applications and other code. The concept is used to implement much of Python's underlying behaviour on Symbian and, for this reason, many function names in Python on Symbian have an ao_ prefix.
Active objects are discussed in detail elsewhere (e.g. in the Symbian C++ Reference Library).
ao_sleep()
The ao_sleep() method can be used to introduce a delay between the execution of statements, or to call a specified method after a given interval. The use of ao_sleep() is encouraged over the standard time.sleep() method (documented at), because ao_sleep() doesn't block the active scheduler, which means that the UI can remain responsive.
For example, the following code causes the script to wait for two seconds.
print "Waiting for 2 seconds"
e32.ao_sleep(2) #sleeps for 2 seconds
print "Wait over"
A call back function can be specified when calling ao_sleep(). In the example below, the function foo() is called after 3 seconds.
def foo():
print "In function"
e32.ao_sleep(3,foo)
ao_yield() and ao_callgate()
Calling ao_yield() gives control back to the active scheduler, thereby allowing any active object-based code with priority above normal to run. In practice this keeps an application responsive to menu and softkey input, even while performing other long running events. For example, the following code fragment displays notes within a continuous while loop. The use of e32.ao_yield() ensures that we can exit the loop (and application) by pressing the right soft key. Without e32.ao_yield() in the loop, the active scheduler would never get to run the event handling code.
def quit():
global running
running=0
appuifw.app.set_exit()
appuifw.app.exit_key_handler=quit
running=1
while running:
appuifw.note(u"hello")
e32.ao_yield()
The ao_callgate() method creates an active object wrapper around a callable Python function and returns another function object that can be used to start the active object. The original Python function will be called at some point after the wrapper function is called when its associated active object is run by the active scheduler. The wrapper object can be used to call the function from any Python thread; the function is run 'in the context of the thread where the ao_callgate() was created'.
The following example shows how the function fun() is registered with the callgate to return the wrapper object foo(). Method fun() is run by the active scheduler at some point after foo() is called.
import e32
def fun():
print "Hello"
foo = e32.ao_callgate(fun)
foo()
Ao_lock and Ao_timer
Two important classes defined in the e32 module are Ao_lock and Ao_timer. They are classes, unlike the operational functions above, so they can only be used after they have been instantiated.
Ao_lock is an active object-based synchronization service that is at the heart of the Python application UI structure as Chapter 4 describes. An Ao_lock object is used by creating an instance of Ao_lock, adding a number of active objects, and then calling wait() on the thread, shown as follows:
#Create instance of Ao_lock
app_lock = e32.Ao_lock()
# Add active objects here - e.g. menu callbacks
...
#Wait for application lock to be signaled
app_lock.wait()
Python halts execution of the script at the wait, but any active objects that are already running will be serviced. Python implements menus as active objects, so they are still called and the UI remains responsive. When you're ready to continue execution of the script you call signal(). However, as the script execution is halted at the wait() you must signal it to restart through a menu callback. It is a convention to release the waiting script in the quit() function:
#Signals the lock and releases the "waiter"
def quit():
app_lock.signal()
Remember that wait() can only be called in the thread that created the lock object and only one "waiter" is allowed, so take care when using it in a loop.
Ao_timer is a Symbian active object-based timer service. Like Ao_lock, it can be used in the main thread without blocking the handling of UI events. The following code snippet creates an instance of Ao_timer and illustrates use of its after() and cancel() methods.
#Create instance of Ao_timer
timer = e32.Ao_timer()
#Sleep for the 1 second without blocking the active scheduler and call the callback_function() (user defined callback)
timer.after(1, callback_function)
#Cancels the 'after' call.
timer.cancel()
The callback function parameter of the after() method is optional; if it is not specified the thread simply sleeps for the specified time.
The cancel() method can be used to cancel a pending after() request. It is important to cancel pending timer calls before exiting the application. If you forget, the application will panic on exit.
Time Operations
Python's concept of time comes from the Unix operating system. Unix measures time as the number of seconds since the epoch, which started on January 1, 1970 at 00:00:00 UTC (also known as Greenwich Mean Time). For example, midnight on 1 January 2010 was 1,262,304,000 seconds since the Unix epoch.
Tip: Generally speaking you don't need to think about the number of seconds! Python provides methods to allow you to convert to more friendly units. If you do need to do a manual conversion, there are several sites on the Internet that will convert them for you:
There are several system functions that display and manipulate system time. Before we look at them, however, we should review the possible values accepted for time values. There are two sets of values:
- float values represent seconds since the epoch as described above (the float data type must be used because this number of seconds is large).
- time structure values are nine integers that correspond to specific time components, as shown in the table below:
For example, consider the code below, where time.localtime() returns the time structure values as a tuple:
import time
(a,b,c,d,e,f,g,h,i) = time.localtime()
If this code were executed on 8 September 2009, at 9:15:38 am, then the variables would have the following values:
a = 2009 b = 9 c = 8 d = 9 e = 15 f = 38 g = 1 h = 251 i = 1
Let's briefly overview the functions in the time module:
- time.asctime([t]) converts the time - either the current time, if the parameter is absent, or the time represented by the optional tuple representing a time structure value - to a string that depicts the time described.
- time.clock() returns the current processor time as a float in seconds. You cannot really depend on this value for anything other than comparative uses.
- time.ctime([secs]) converts a time value in seconds since the epoch to a time structure value UTC. For this conversion, the dst flag is always zero, which means adjustment for Daylight Savings Time is not done. If secs is not provided or is None, the current time as returned by time() is used.
- time.gmtime([secs]) converts a time expressed in seconds since the epoch to a time structure value in UTC in which the dst flag is always zero. If secs is not provided or None, the current time is used.
- time.localtime([secs]) converts to local time, returning a time structure value. If secs is not provided or is None, the current time is used.
- time.mktime(t) can be viewed as the inverse function of localtime(). Its argument is a tuple depicting a time structure value, which expresses the time in local time, not UTC. It returns a floating point number of seconds since the epoch. If the input value cannot be represented as a valid time, either OverflowError or ValueError is raised.
- time.sleep(secs) suspends execution of the current program for the given number of seconds. This number may be a floating point number to indicate a precise sleep time.
- time.strftime(format[, t]) converts a tuple representing a time structure value to a string as specified by the format argument. If t is not provided, the current time is used. format must be a string. A ValueError exception is raised if any field in t is outside of the allowed range. More on this function below.
- time.strptime(string[, format]) parses a string representing a time according to a format. The return value is a time structure value. The format follows the same directives as strftime(), which are described below.
- time.time() returns the time as a floating point number expressed in seconds since the epoch, in UTC.
- time.tzset() resets the time conversion rules used by the library routines. The environment variable TZ specifies how this is done.
The strftime() and strptime() functions use special directives embedded in a format string. These directives are shown below:
Here are some examples. Consider the following code:
time.strftime("Now is %a, %d %b %Y at %H:%M:%S", time.gmtime())
This format string combines regular text with time specifiers. It also gets the time value from time.gmtime(). This code produces the output:
Now is Wed, 09 Sep 2009 at 13:58:37
Now consider this code:
thedate = "31 Dec 08"
t = time.strptime(thedate, "%d %b %y")
if t[7] == 366:
print thedate, "was in a leap year"
else:
print thedate, "was not in a leap year"
In the above example, the results showed that 2008 was a leap year, because '31 Dec 08' was day number 366 in the year. Notice that the format string for time.strptime() describes the way that the first string depicting the date is formatted.
There are also some variables represented in the time module. These are:
- time.accept2dyear is a boolean value indicating whether two-digit year values will be accepted.
- time.altzone represents the offset of the local DST timezone, in seconds west of UTC, if one is defined.
- time.daylight is nonzero if a DST timezone is defined.
- time.timezone is the offset of the local timezone, in seconds west of UTC.
- time.tzname is a tuple of two strings. The first is the name of the local timezone; the second is the name of the local DST timezone.
Does this look familiar?: These calls and variables may look familiar. Python is based heavily on C and C++, so you might have recognized these methods as Unix system calls. The time structure values are also taken from Unix system calls. In fact, there is one more method - struct_time() - that works with the time values in the C struct time structure (see the struct values in the table above). Though you do not need to use it, most time methods accept a structure in the C form, much like this:
time.struct_time(tm_year=2010, tm_mon=0, tm_mday=0, tm_hour=0, tm_min=0,tm_sec=0, tm_wday=6, tm_yday=0, tm_isdst=-1)
The description here is of the system time module. There are other modules that provide a less primitive, more object-oriented approach to dates and times:
- The datetime module generally handles dates and times.
- The locale module implements internationalization services; its settings can alter the return values for some of the functions above.
- The calendar module implements calendar related functions.
Finally, there is one more system function we should look at: e32.set_home_time(). This call comes from the e32 module and is used to set the current time for the device. The parameter required is the time, to be set in Unix 'seconds past the epoch' format. For example:
import e32
e32.set_home_time(1262304000)
This code sets the global device time to midnight on 1 January 2010.
To execute the code, an application requires the WriteDeviceData platform security capability. Capabilities are discussed in Chapter 15.
File Operations
As we discuss in Chapter 2, a file is a built-in data type for Python. Python handles files in ways similar to other programming languages. There are also some extended file operations in the modules unique to Python on the Symbian platform.
We'll start with the file naming conventions for the Symbian platform. Symbian uses conventions similar to those used in Microsoft Windows. Drive letters are used to indicate storage units such as memory or SD cards, there is a hierarchy of directories, and pathname components are separated by backslashes.
The following table outlines some of the common file operations provided by Python.
Most of these operations are similar to those used in other languages and platforms. Because of the way Python organizes data objects, some are more streamlined, such as the readlines() and writelines() functions, which make reading or writing a large amount of content quite easy.
Consider the following code:
html = ["<HEAD>"]
## ... generate HTML and append it to the list using "html.append(...)"
htmlfile = open("E:\Images\announce.html", "w")
htmlfile.writelines(html)
htmlfile.close()
Languages like Java would require each line of HTML to be written separately to a file, probably using a for loop. Here, however, one statement writes all lines to the file.
Note that data is always retrieved from files as a string - even non-readable data. This means that if you need data in another form, say numbers, you have to convert the data from strings to the form you need. Likewise, writing data to a file is always done using strings. Even if you want to write integers or floats, you must format the data into a string first, then write that string to a file. For example:
line = str(10) + "," + str(data) + "," + str(value) + "\n"
out.write(line)
This code builds three data items into a single string for writing. Likewise, we might read this data like
line = input.readline()
pieces = line.split(',')
data1 = int(pieces[0])
data2 = int(pieces[1])
converter = float(pieces[2])
Here we read a comma separated line, split it into pieces and convert the individual strings into the type of data we need.
We can also use some string formatting to construct the line we need for a file. Consider the following example, which rewrites the previous code:
line = "%d,%d,%f\n" % (10, data, value)
out.write(line)
The code works for simple data types, but native Python objects might be harder to convert to strings and back. This is where the pickle module comes in. The pickle module contains methods that allow almost any Python object to be written to a file (on S60 only tuples, lists and dictionaries are supported), with no conversion to and from strings. Consider the example below:
import pickle
dict = {'one': 1, 'two': 2, 'three': 3, 'four': 4}
dictFile = open('dictfile', 'w')
pickle.dump(dict, dictFile)
dictFile.close()
The code allows the data store in the dictionary dict to be stored in the file 'dictfile' as a native Python dictionary. We can get the dictionary back using the code below:
infile = open('dictfile')
newdict = pickle.load(infile)
infile.close()
Now the variables dict and newdict have the same value.
In addition to these basic file operations, the os module has many file descriptor and directory operations. The file descriptor operations reveal the Unix operating system method of dealing with files. Unix uses integers to keep track of files at the system level (really indexes into a file table) and there are several primitive operations that work with these descriptors. There are useful operations to retrieve status information for files from the operating system. You can get more information at about file descriptors.
Many directory operations in the os module also deal with system-level manipulation of the file system, but some can prove useful in general use, including the following:
- chdir(path) changes the current working directory of the application. This is useful, for example, when many files are manipulated and long pathnames are undesirable.
- getcwd() retrieves the current working directory.
- chmod(path, mode) changes the mode of the path given. The mode of a file is a set of permission flags kept by the operating system. Please refer to the list of permissions on to understand everything you can change for a file.
- listdir(path) returns a list of names for the files and directories in the directory specified as a parameter.
- mkdir(path[, mode]) creates a directory with the pathname given and the (optional) permissions given by mode. Without the permissions given, the directory is created with all permissions for all users.
- remove(path) deletes the file specified by path. If you specify a directory, Python raises an exception.
- rename(src, dst) renames src to dst.
- rmdir(path) removes the directory given by path.
- stat(path) returns (a lot of) information about the path specified. See for a complete list.
- tempnam([dir[, prefix]]) returns a unique path name that can be used for creating a temporary file.
There are several additional file system-related calls available in the e32 and sysinfo modules that can be useful.
More information can be found at and.
Summary
This chapter describes the system operations and information provided by two basic PySymbian modules: e32 and sysinfo, such as how to perform operations related to time, and how to work with files and directories. To cover a lot of ground quickly, we've used some rather simplistic examples, however we believe these are still very illustrative!
These features are combined with other ingredients of PySymbian in many of the examples in forthcoming chapters. We suggest you to use this chapter as a reference and come back to it whenever you need to apply these concepts in practice.. | http://developer.nokia.com/community/wiki/Python_on_Symbian/03._System_Information_and_Operations | CC-MAIN-2014-35 | refinedweb | 4,368 | 55.95 |
A statement list is a JSON-encoded file or snippet in a well-known location.
Location of statement list
See Creating a statement list to learn where this list should be stored.
Syntax
The statement list or snippet consists of a JSON array of one or more website or app statements as JSON objects. These statements can be in any order. Here is the general syntax:
[ { "relation": ["relation_string"], "target": {target_object} } , ... ]
- relation
- An array of one or more strings that describe the relation being declared about the target. See the list of defined relation strings. Example:
delegate_permission/common.handle_all_urls
- target
- The target asset to whom this statement applies. Available target types:
Website target
"target": { "namespace": "web", "site": "site_root_url" }
- namespace
- Must be
webfor websites.
- site
- URI of the site that is the target of the statement, in the format
http[s]://<hostname>[:<port>], where <hostname> is fully-qualified, and <port> must be omitted when using port 80 for HTTP, or port 443 for HTTPS. A website target can only be a root domain; you cannot limit to a specific subdirectory; all directories under this root will match. Subdomains should not be considered to match: that is, if the statement file is hosted on, then should not be considered a match. For rules and examples about website target matching, see the targets documentation. Example:
Android app target
"target": { "namespace": "android_app", "package_name": "fully_qualified_package_name", "sha256_cert_fingerprints": ["cert_fingerprint"] }
- namespace
- Must be
android_appfor Android apps.
- package_name
- The fully-qualified package name of the app that this statement applies to. Example:
com.google.android.apps.maps
- sha256_cert_fingerprints
- The uppercase SHA265 fingerprint of the certificate for the app that this statement applies to. You can compute this using
opensslor Java
keytoolas shown here:
openssl x509 -in $CERTFILE -noout -fingerprint -sha256
keytool -printcert -file $CERTFILE | grep SHA256
["14:6D:E9:83:C5:73:06:50:D8:EE:B9:95:2F:34:FC:64:16:A0:83:42:E6:1D:BE:A8:8A:04:96:B2:3F:CF:44:E5"].
Example statement list
Here is an example website statement list that contains statements about both websites and apps:
Scaling to dozens of statements or more
In some cases, a principal might want to make many different statements about different targets, or there might be a need to issue statements from different principals to the same set of targets. For example, a website may be available on many different per-country Top Level Domains, and all of them may want to make a statement about the same mobile app.
For these situations, include statements can be helpful. Using this mechanism, you can set up pointers from many different principals to one central location, which defines statements for all of the principals.
For example, you might decide that the central location should be ``. This file can be configured to contain the same content as in the examples above.
To set up a pointer from a web site to the include file, change `` to:
[{ "include": "" }]
To set up a pointer from an Android app to the include file, change `res/values/strings.xml` to:
<resources> ... <string name="asset_statements"> [{ \"include\": \"\" }] </string> </resources>
More Information
There is a more detailed explanation of the statement list format and the underlying concepts in our specification document. | https://developers.google.com/digital-asset-links/v1/statements.html?hl=id | CC-MAIN-2020-50 | refinedweb | 537 | 52.39 |
As many of you will be aware, C#3.0 is adding a significant number of new language features. While the overall driving force behind putting these features in is the support of LINQ (Language INtegrated Query), the way we do it is strongly inspired by functional programming techniques. Moreover, we strive to make the new features as general-purpose as possible (with the exception of query expressions – they are undeniably tied to the querying domain!), so they do come out as features that will enable more of a functional style of programming.
This begs the question: Are we making C# a functional language?
This of course is a vague question. First of all, what exactly is a functional programming language? What does it take to qualify? Rather than trying to define the canonical list of language features that you have to have to be functional, I think it makes more sense to define it as “a language that supports a functional style of programming.”
Given that, my just as vague answer would be “to some degree”: If you’re a dedicated functional programmer, is this the time to throw out your Haskell, Scheme, F# or OCaml compiler and join the party? Probably not. C# doesn’t go the whole way. In this post I’ll explore in more detail the goodie bag of functional programming, in a kind of willy-nilly fashion, not focusing on a particular functional language, and then examine to what extent C# is supporting each of the features or styles of programming.
Please be aware that Visual Basic is adding most of the same language features as C# in the Orcas release.
Expression-based programming
Wes Dyer has a great post about expression-based versus statement-based programming. When you construct things the expression-based way, you build them bottom-up from their constituents, as opposed to the statement-based way, where you create things top-down by modifying them.
Instead of
Point p1 = new Point();
p1.X = 3; p1.Y = 5;
Point p2 = new Point();
p2.X = -5; p2.Y = 4;
Point p3 = new Point();
p3.X = -1; p3.Y = -6;
Polygon triangle = new Polygon();
triangle.Add(p1);
triangle.Add(p2);
triangle.Add(p3);
Given the right constructors you could do it the expression-oriented way:
Polygon triangle =
new Polygon(
new Point(3,5),
new Point(-5,4),
new Point(-1,-6));
Much nicer, and composes really well. Trouble is you have to have the right constructors, but sometimes you don’t, or sometimes it’s impractical or annoying to write all the constructors that correspond to reasonable initialization patterns of your type. Furthermore they may get big, and the caller then has to remember the order of all the arguments.
To this end, C#3.0 introduces object initializers and collection initializers. Assuming that Polygon is a collection type (see my previous post for a definition of that term), you can now write this:
Polygon triangle =
new Polygon {
new Point { X = 3, Y = 5 },
new Point { X = -5, Y = 4 },
new Point { X = -1, Y = -6 }};
In LINQ, queries are all about expressions. In order to produce the right output of a query, you often need to construct new objects out of old ones as in
from m in members
group m by m.Address.PostalCode into g
select new Entry { PostalCode = g.Key, Number = g.Count() };
The subexpressions of the query expressions are, well, expressions, and it is really important that you can create and initialize new objects in that context, or we don’t have projection in our query story.
Value based programming
Expression-based programming limits the need for mutation operations (assignment) to such a degree that some functional languages (the “pure” ones) don’t even have them. Instead, all data, whether simple or complex, is treated as immutable values.
Even if they do allow mutation, many functional languages rely more on value types. The lack of mutability is really quite a feature. No-one can mess with the stuff you pass to them, and concurrent access is really easy to synchronize! You can have sound value-based equality and hashing. Implementations can optimize to their heart's content by copying, parallelizing, etc.
In C# you can certainly create user-defined types that are immutable, but it is far from as easy as in a functional language. Making properties immutable is quite simple (just don't write a setter), but the associated value-based semantics for equality and hash-code computation are a pain to write.
C# 3.0 doesn’t change that a lot, but this is something we may want to look into in the future, as the need for parallelism increases with the number of cores in our CPUs, and the number of people wanting to use value-based techniques increases.
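To make the point concrete, here is a sketch of what hand-rolling an immutable type with value-based equality looks like in C# today. The Point type and the hash formula are illustrative assumptions, not a prescribed pattern:

```csharp
using System;

// The read-only part is easy; the value-based Equals and GetHashCode
// are the boilerplate a functional language would give you for free.
public sealed class Point
{
    readonly int x, y;
    public Point(int x, int y) { this.x = x; this.y = y; }
    public int X { get { return x; } }
    public int Y { get { return y; } }

    public override bool Equals(object obj)
    {
        Point other = obj as Point;
        return other != null && other.x == x && other.y == y;
    }

    public override int GetHashCode()
    {
        return x * 31 + y;
    }
}

class Demo
{
    static void Main()
    {
        Point a = new Point(3, 5);
        Point b = new Point(3, 5);
        Console.WriteLine(a.Equals(b));                  // True: equal by value
        Console.WriteLine(object.ReferenceEquals(a, b)); // False: distinct objects
    }
}
```

Every new immutable type repeats the same Equals/GetHashCode ceremony, which is exactly the friction this section is describing.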
Types on the fly
Many functional programming languages rely to a larger degree on structural types that do not have to be declared but are just there when you need them: You have type constructors that allow you to build types from other types, and corresponding value constructors to build values of those types. In a fully structurally typed world you never declare a new type: any type declaration would just introduce an alias for an existing type. (And you need that, because these types can get large!)
We don’t quite go there in C#3.0, but we do take you further in the direction of not having to declare so many trivial little types. With anonymous types, for instance, the above query can be written as
from m in members
group m by m.Address.PostalCode into g
select new { PostalCode = g.Key, Number = g.Count() };
eliminating the need to author a trivial type Entry to hold your results. The limitation here is that you cannot designate the type in any way, so it cannot be used where a type name is required, such as the return type of a method: an anonymously typed value cannot escape the method in which it is created.
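A minimal illustration of anonymous types outside a query follows; the property values are made up for the example. The compiler generates the type, and local variable type inference (var) is the only handle you have on it:

```csharp
using System;

class Demo
{
    static void Main()
    {
        // The compiler generates a type with read-only PostalCode and
        // Number properties; 'var' is the only way to declare a variable
        // of that type, so the value stays local to this method.
        var entry = new { PostalCode = "98052", Number = 42 };
        Console.WriteLine(entry.PostalCode); // 98052
        Console.WriteLine(entry.Number);     // 42
    }
}
```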
Another thing that takes you in the direction of structural types is generics: You still have to declare a generic type, but once declared you can construct new types from it over and over. We utilize that in the LINQ libraries to make a set of fully generic delegate types called Func, one for each number of arguments, e.g.:
public delegate TResult Func<TArg0, TResult>(TArg0 arg0);
From this you can create a delegate for almost any one-argument function type, e.g. Func<int,bool> for functions from int to bool. Because we use the Func types consistently within LINQ, in that context they essentially act as structural function types. You never need to write your own delegate type again. Essentially, Func is a type constructor for delegate types.
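For instance, a method can accept a Func<int,bool> and callers can hand it any matching function, here written as a C# 3.0 lambda expression. The Count helper is an illustration of mine (LINQ ships an equivalent operator), and it assumes a Func declaration like the one above is in scope, as it later is in the framework itself:

```csharp
using System;

class Demo
{
    // Any predicate over T fits the Func<T, bool> shape; no new delegate
    // type needs to be declared here or at the call site.
    public static int Count<T>(T[] items, Func<T, bool> predicate)
    {
        int n = 0;
        foreach (T item in items)
            if (predicate(item)) n++;
        return n;
    }

    static void Main()
    {
        int[] numbers = { 1, 2, 3, 4, 5, 6 };
        Console.WriteLine(Count(numbers, x => x % 2 == 0)); // 3 even numbers
    }
}
```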
Tuples
One kind of on-the-fly types heavily used in functional programming is tuples: strongly typed lists of values; just like the set of arguments to a method, only as a value in itself that you can pass around.
In popular syntax (1, "one") is an example of a tuple value of the type (int,string). Some functional languages don’t even allow you to specify functions of more than one value; instead that value can be a tuple. Where tuples would be really interesting would be for multiple results of a function or method. Imagine
(int,int) NextFib(int curr, int prev) { return (curr+prev,curr); }
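The hypothetical syntax above is not executable C#, but the idea is easy to try in a language with native tuples; here is a Python sketch of the same multiple-result shape (illustrative only, not C# 3.0 syntax):

```python
# A Python function can return a tuple of results, the way the
# hypothetical NextFib above does; the caller destructures it on assignment.
def next_fib(curr, prev):
    return curr + prev, curr  # an (int, int) tuple

curr, prev = next_fib(5, 3)
print(curr, prev)  # 8 5
```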
Tuples are one really useful kind of type constructor that it might be useful to add in the future. Like the Func types above, you could imagine .NET having centrally defined generic Tuple<…> types predeclared up to a certain arity, e.g.:
public class Tuple<TFirst,TSecond> {
readonly TFirst first;
readonly TSecond second;
public TFirst First { get { return first; } }
public TSecond Second { get { return second; } }
public Tuple(TFirst first, TSecond second) { … }
}
You get the idea. It would then be a matter of syntactic desugaring to translate (5,3) into new Tuple<int,int>(5,3).
There are currently no plans of shared tuple types on .NET, but languages such as F# on top of .NET roll their own.
Pattern matching
To really make use of constructed values, you need a good way of deconstructing them. For the functional programmer, pattern matching is the tool of choice. With tuples for instance, rather than "dotting your way" into the individual values, as in:
(int,int) result = NextFib(5,3);
int current = result.First;
int previous = result.Second;
you can create and assign in one fell swoop the variables to hold the individual component values of the tuple, by "matching" its structure:
(int current, int previous) = NextFib(5,3);
Pattern matching can be used to declare a function by cases – as in:
FibHelp(0) = (1,0)
FibHelp(n) = NextFib(FibHelp(n-1))
This defines one function, but saves the if-then-else logic by matching on the arguments.
A final common use of type constructors and pattern matching is with built-in immutable list types. Let’s say that [] is the empty list. You can then "cons up" a list as e.g. 1 :: 2 :: 3 :: []. Each occurrence of :: creates a new list value with the left hand side value prepended to the right hand side list. The cool thing is that you can pattern match the elements back off when you see them:
AddAll([]) = 0;
AddAll(n :: l) = n + AddAll(l);
Pattern matching is a whole different approach to plucking values apart, and we are nowhere near adding such features to C#.
Higher-order programming
When you want to have a piece of code executed conditionally, or repeatedly, C# has a nice set of built-in control structures. This is, however, a very closed regime: No one gets to add their own control structures as new language features! In order to write your own control structures you need the ability to be parameterized with functionality, with chunks of code.
Any object oriented programming language actually facilitates that. You can write a class
abstract class ChunkOfCode {
public abstract void DoIt();
}
which people can subclass to override the DoIt method with the right functionality. ChunkOfCode objects can then be passed to “control structure methods” that execute the DoIt methods if and where they want.
On .NET we make this a whole lot easier with delegate types. You can grab a delegate to any old method and pass it around.
In C#2.0 we made it even easier by allowing you to construct a delegate “on the fly” with anonymous methods. The major improvement here is not so much avoiding to declare a method, but the fact that anonymous methods are “closures” that allow you to refer to arguments and local variables in the enclosing method body. This is crucial when the functionality you want to pass depends on your local state, as is so often the case.
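The capture of enclosing local state described here is what makes closures powerful; a minimal illustration of the same property in a language with first-class closures (Python, as a sketch):

```python
def make_counter():
    count = 0              # local state captured by the closure
    def bump():
        nonlocal count
        count += 1
        return count
    return bump

c = make_counter()
print(c(), c(), c())  # 1 2 3
```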
At the core of LINQ you find the standard query operators, such as Where, Select, GroupBy, Join, etc. These are really user defined control structures, parameterized by little bits of functionality represented as delegate types. Thus to make the passing of these delegates really neat we’ve given anonymous methods a face lift in the form of lambda expressions:
.GroupBy(m => m.Address.PostalCode)
.Select(group => new { PostalCode=group.Key, Number=group.Count() });
Lambda expressions in C# are not entirely like anonymous functions in a functional language because they don’t have a type in and of themselves. Instead they can be converted to delegate types, and with the Func types sufficiently standardized, the effect is pretty much the same.
Currying
We talk above about tuples as a way of making input and output more symmetric on methods or functions. However, whereas a C# programmer today only has the option of writing multiple parameters, not multiple results, a functional programmer will often choose to do the opposite – use tuples only for compound result types. The reason is that functions of multiple arguments are often written using currying, where the original function only takes one argument, and then returns a function that takes the rest. Example: Instead of
Add (x,y) = x + y
You write
Add (x) (y) = x + y
which is a shorthand for
Add x = y => x + y
So instead of calling Add(3,2) you can call Add(3) to get a function that will take an integer and return the result of adding 3 to it! Hardcore functional programmers will use this technique extensively and take great care to arrange their curried arguments in such an order that they will lead to the most useful “derived functions” when only some of the arguments are passed.
You have to question how useful this is in practice, but enough functional programmers swear by it that I guess I’m willing to believe it will save me some day.
Strictly speaking, with anonymous methods or lambda expressions C# methods can also be curried, but it ain’t pretty:
Func<int,int> Add(int x) { return y => x + y; }
However currying of lambda expressions themselves gets pretty agreeable:
x => y => x + y;
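The same curried Add is easy to express in any language with closures; a Python sketch:

```python
def add(x):
    return lambda y: x + y   # Add(x)(y) = x + y

add3 = add(3)     # partially applied: a function that adds 3
print(add3(2))    # 5
print(add(1)(2))  # 3
```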
Type inference
One convenient feature of statically typed functional languages is that you can have most or all of your type annotations inferred for you so that typing is really the compiler’s problem. In an object-oriented world we don’t have that option in the large, and in statically typed object-oriented languages such as C# we sort of take pride in the fact that our APIs are in fact annotated with types that users can see. At the architectural level, having explicit contracts is important.
However, in the small there are often cases where types are just in the way.
Already in C# 2.0 we introduced type inference for generic method calls, so that most of the time you do not have to be explicit about the type arguments to a method – they are inferred from the types of the method arguments.
In C# 3.0 we take this idea further, by being really clever about the way we use lambda expressions for type inference when they are used as arguments to a call of a generic method. Thus the types “flow” through the method call in and out of the lambdas and out in the return type.
Furthermore we’re adding type inference for local variables with the var keyword. Thus in
var query = customers
.Where(c => c.City == "London")
.SelectMany(c => c.Orders)
.GroupBy(o => o.OrderDate.Year);
Without writing a single type, the compiler can figure out that query is of type IEnumerable<IGrouping<int,Order>>.
In C# we have deliberately kept the type inferencing algorithm simple, in the sense that it never infers a type that is not already there in the constituent values: We don’t synthesize types. This keeps the types under control, so the user has a chance of understanding what is going on if they get an error message etc.
Comprehensions
Oftentimes the passing around of lambdas gets too messy even for a functional programmer, and so many functional languages introduce syntactic sugar for common patterns involving the passing of functions. Such sugar is called a comprehension (presumably because it makes the code comprehensible) and we snatched that idea right up with query expressions in C# and VB – indeed in the early design days we did call them query comprehensions. In C# these are really just syntactic sugar which enables people to write most queries without lambda expressions. The above query (or one very much like it), for instance, is generated by the query expression
var query =
from c in customers
where c.City == "London"
from o in c.Orders
group o by o.OrderDate.Year;
The extreme syntacticness of comprehensions is central in LINQ; it means that we just generate the method calls to the query operators syntactically, but anyone can implement their own methods with the right names on a given type and have it be the source of query expressions.
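As an analogy (not LINQ itself), Python's comprehensions desugar to iteration in a similarly syntactic way; a self-contained sketch of the where / select-many / group-by shape of the query above:

```python
from collections import namedtuple
from itertools import groupby

Order = namedtuple("Order", "year amount")
Customer = namedtuple("Customer", "city orders")

customers = [
    Customer("London", [Order(2006, 10), Order(2007, 20)]),
    Customer("Paris",  [Order(2006, 5)]),
]

# where c.City == "London" / from o in c.Orders:
orders = [o for c in customers if c.city == "London" for o in c.orders]

# group o by o.OrderDate.Year (groupby needs sorted input):
by_year = {year: list(g)
           for year, g in groupby(sorted(orders), key=lambda o: o.year)}

print(sorted(by_year))  # [2006, 2007]
```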
Conclusions
C# 3.0 and LINQ owe a lot to the functional programming languages in terms of the mechanisms we adopt. No doubt this will rub off to some degree on non-LINQ scenarios where a more expression-oriented style of programming becomes possible. People who are functional programmers by night and have to write C# by day will find that the daily headache sets in a little later in the afternoon.
However, as you can see from the above, there is some way to go before full-blown functional programming becomes totally natural in C#. There is probably a limit as to how far you can straddle before the pants rip, and we may never get to a point where C# is a true multiparadigm language with equal treatment of object-oriented and functional programming. Still I think there is room for more of this kind.
We have this thing about listening to customers, so I guess it is also up to you! 🙂
Turning random numbers into a list
If you printed just 10 random numbers and wanted to turn them into a list, how would you do that? The example code prints 10 numbers, but not in a list; how could I change them into a list without changing the variable?
list = 1
for k in range(10):
    list = list + 1
    print(list)
2 answers
- answered 2020-10-01 05:17 user13966865
Probably this is what you are looking for:
import random
list_ = [random.randint(1, 100) for i in range(10)]
# generates 10 random numbers, each in the range 1..100
# e.g. [34, 15, 3, 18, 23, 32, 49, 65, 3, 35]
list_ = list(range(10))  # generates the list [0,1,2,3,4,5,6,7,8,9]
- answered 2020-10-01 05:26 adir abargil
In case I got you right and you didn’t really mean random numbers...
This is how u get a list:
my_list = []
your_number = 10
for i in range(your_number):
    my_list.append(i)
print(my_list)
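Putting the two answers together for the question as titled, collecting 10 random numbers into one list (a sketch; the range 1..100 is arbitrary):

```python
import random

random_numbers = []
for k in range(10):
    random_numbers.append(random.randint(1, 100))  # keep each number instead of printing it

print(random_numbers)       # one list holding all 10 numbers
print(len(random_numbers))  # 10
```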
- Converting string elements in a list into separate variables
I have a list of string elements and what I want to do is to turn each element into a variable whose name is the string.
brands = ['Sony','Samsung','Apple','Panasonic','Sennheiser','LG','AudioTechnica','Acer','Verizon']
I want to create new variables Sony, Samsung, Apple, etc. which will be empty lists []. How can I go about doing this?
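The usual advice for this (a sketch, not from the original thread) is to use a dict keyed by the strings rather than creating real variables dynamically; it gives the same name-to-empty-list mapping without eval tricks:

```python
brands = ['Sony', 'Samsung', 'Apple', 'Panasonic', 'Sennheiser',
          'LG', 'AudioTechnica', 'Acer', 'Verizon']

# one empty list per brand, addressable by name:
by_brand = {brand: [] for brand in brands}

by_brand['Sony'].append('some product')
print(by_brand['Sony'])   # ['some product']
print(by_brand['Apple'])  # []
```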
- How to filter a list using another list in LINQ C#
The list that needs to be filtered has data like: '1000', '1000A', '1000B', '2000', '2000C', '2003', '2006A'

The list by which I am filtering has data like: '1000', '2000', '2003'
Expected output: 1000', '1000A', '1000B', '2000', '2000C', '2003' (output is expected like we do in SQL server LIKE operator)
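The question asks for LINQ, but the underlying logic is just a prefix test (SQL LIKE 'key%'); sketched here in Python for clarity:

```python
data = ['1000', '1000A', '1000B', '2000', '2000C', '2003', '2006A']
keys = ['1000', '2000', '2003']

# keep items that start with any of the filter keys:
filtered = [d for d in data if any(d.startswith(k) for k in keys)]
print(filtered)  # ['1000', '1000A', '1000B', '2000', '2000C', '2003']
```

In C# the same shape would be `data.Where(d => keys.Any(k => d.StartsWith(k))).ToList()`.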
- How to efficiently batch save variable in a for loop?
Due to memory issues, one strategy I can think of is to store the for-loop computation output offline, batch by batch, and later re-upload the data for subsequent computation.
Each batch of the variable is saved if the iteration number (i.e., num) is divisible by batch_saved, as shown in the code snippet below.
for num, num_pair in enumerate(tqdm(all_result), start=1):
    look_up_table.append(final_opt[[*num_pair], :])
    if num % batch_saved == 0:
        pickle.dump(look_up_table, open('arr_list_' + str(counter) + '.p', "wb"))
        look_up_table = []
        counter += 1
The need to check the if condition at every iteration, coupled with the enormous number of iterations, makes the code take ages to complete.
Hence, I wonder whether there is some trick to at least expedite the whole saving operation?
Another issue with the current implementation is that the last N items are not saved. Assume the number of iterations is 2.5m; then the last 0.5m items are never written.
The full code is as below:
import numpy as np
import itertools
import numexpr
from tqdm import tqdm
import pickle

batch_saved = 1000000
counter = 1
case_no = 40    # Value can be up to 130
set_number = 50 # For time being, assume worst case to 50
sub_group = 3   # Number of sub groups

case_select = {1: {'tot_length': 100, 'steps': 0.1, 'start_val': 0, 'main_group': 3},   # light_case
               2: {'tot_length': 590, 'steps': 1, 'start_val': 0, 'main_group': 436}}  # extreme_case

case = 'extreme_case'
case_no = 1 if case == 'light_case' else 2
tot_length = case_select[case_no]['tot_length']  # The value can be up to maximum 200 in actual implementation
steps = case_select[case_no]['steps']
start_val = case_select[case_no]['start_val']
main_group = case_select[case_no]['main_group']  # Permutation value is up to max 1200 in actual implementation

list_no = np.arange(start_val, tot_length, steps, dtype=np.float32)
x, y, z = np.meshgrid(*[list_no for _ in range(3)], sparse=True)
final_opt = list_no[np.array((numexpr.evaluate("((x >= y) & (y >= z))")).nonzero()).T]
final_opt[:, [0, 1]] = final_opt[:, [1, 0]]  # Is possible to swap the column from the above

all_result = itertools.product(range(0, final_opt.shape[1] + 1), repeat=main_group)

look_up_table = []
for num, num_pair in enumerate(tqdm(all_result), start=1):
    look_up_table.append(final_opt[[*num_pair], :])
    if num % batch_saved == 0:
        pickle.dump(look_up_table, open('arr_list_' + str(counter) + '.p', "wb"))
        look_up_table = []
        counter += 1
p.s., This is a continuation to my previous post about memory issue
- Also, is it possible to dump the variable using multiprocessing approach?
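One way to address both issues at once (the per-iteration if test and the unsaved tail), as a sketch not taken from the thread, is to pull fixed-size batches with itertools.islice and save each batch as it completes; the final partial batch falls out naturally:

```python
import itertools

def batched(iterable, batch_size):
    """Yield lists of up to batch_size items; the last batch may be smaller."""
    it = iter(iterable)
    while True:
        batch = list(itertools.islice(it, batch_size))
        if not batch:
            return
        yield batch

# each yielded batch can then be pickled to its own file:
for counter, batch in enumerate(batched(range(7), 3), start=1):
    print(counter, batch)
# 1 [0, 1, 2]
# 2 [3, 4, 5]
# 3 [6]
```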
- What are overlapping subproblems in Dynamic Programming (DP)?
There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping subproblems [1]. For this question, we are going to focus on the latter property only.
There are various definitions for overlapping subproblems, two of which are:
- A problem is said to have overlapping subproblems if the problem can be broken down into subproblems which are reused several times OR a recursive algorithm for the problem solves the same subproblem over and over rather than always generating new subproblems [2].
- (Introduction to Algorithms by CLRS)
Both definitions (and lots of others on the internet) seem to boil down to a problem having overlapping subproblems if finding its solution involves solving the same subproblems multiple times. In other words, there are many small sub-problems which are computed many times during finding the solution to the original problem. A classic example is the Fibonacci algorithm that lots of examples use to make people understand this property.
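The Fibonacci example can be made concrete by counting how often each subproblem is solved (a small sketch):

```python
from collections import Counter

calls = Counter()

def fib(n):
    calls[n] += 1          # record every subproblem we solve
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(5)
print(calls[2])  # 3 -- fib(2) is recomputed three times: overlapping subproblems
```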
Until a couple of days ago, life was great until I discovered Kadane's algorithm which made me question the overlapping subproblems definition. This was mostly due to the fact that people have different views on whether or NOT it is a DP algorithm:
- Dynamic programming aspect in Kadane's algorithm
- Is Kadane's algorithm consider DP or not? And how to implement it recursively?
- Is Kadane's Algorithm Greedy or Optimised DP?
- Dynamic Programming vs Memoization (see my comment)
The most compelling reason why someone wouldn't consider Kadane's algorithm a DP algorithm is that each subproblem would only appear and be computed once in a recursive implementation [3], hence it doesn't entail the overlapping subproblems property. However, lots of articles on the internet consider Kadane's algorithm to be a DP algorithm, which made me question my understanding of what overlapping subproblems means in the first place.
People seem to interpret the overlapping subproblems property differently. It's easy to see it with simple problems such as the Fibonacci algorithm but things become very unclear once you introduce Kadane's algorithm for instance. I would really appreciate it if someone could offer some further explanation.
- How can I check if the maze have an exit?
I have a maze represented with the chars '.' and '#' (wall). I have to check whether the maze has an exit. How can I do it in C++?
- A sequence of operations on Fibonacci heap
Give a linear-time sequence of operations on a Fibonacci heap that leads to more marked nodes than unmarked nodes. Is it possible for the nodes in the root list to be marked?
- Exercise in C with loop statements: Write code to reverse numbers with do-while statement
I have a task (I'm currently studying the loop statements, so I'm in the beginner phase) that asks me to make a program to reverse an integer number, and it must use the do statement.
The output should be (example):
Enter a number: 4568
The reversal is: 8654
Please keep in mind that, since I'm following my book, so far I've only studied the very basics plus selection and loop statements. I have very limited choices, so no arrays.
The book suggests making a do loop that repeatedly divides the number by 10 until it reaches 0. This is what I did so far (code not completed):
int main(void)
{
    int n, x;

    printf("Enter a number: ");
    scanf("%d", &n);

    printf("The reversal is: ");
    x = n % 10;
    printf("%d", x); /* outputs the last number digit in the first place */

    do {
        ....
        n /= 10; /* for example if I divide the number 56222 by ten
                    the output would come out as 5622 562 56 5 */
        ....
    } while (n != 0);

    return 0;
}
I found a way to put the last digit in the first place, as you can see, but I'm struggling to figure out how to reverse the rest of the digits after dividing the number by 10 repeatedly with these very limited choices.

What should I put in the do statement? Many thanks in advance.
- Code gets stuck in loop (but loop itself doesn't keep going). C++ string
I am currently self-studying C++ with Schaum's outline book (which covers mostly C contents, or so I've been told, but whatever) and I have encountered some trouble with problem 9.8. You are supposed to count the number of appearances of every different word in a given C++ string, for which I assumed each word was separated from the next one by a white space, a newline, or a dot or comma (followed in these two last cases by another white space). My code is the following:
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string s;
    cout << "Enter text (enter \"$\" to stop input):\n";
    getline(cin, s, '$');
    string s2 = s, word;
    int ini = 0, last, count_word = 0;
    int count_1 = 0, count_2 = 0, count_3 = 0;
    cout << "\nThe words found in the text, with its frequencies, are the following:\n";
    for (ini; ini < s.length(); ) {
        // we look for the next word in the string (at the end of each iteration
        // ini is incremented in a quantity previous_word.length())
        last = ini;
        cout << "1: " << ++count_1 << endl;
        while (true) {
            if (s[last] == ' ') break;
            if (s[last] == '\n') break;
            if (s[last] == ',') break;
            if (s[last] == '.') break;
            if (last > s.length() - 1) break;
            ++last;
            cout << "2: " << ++count_2 << endl;
        }
        --last; // last gives the position within s of the last letter of the current word
        // now we create the word itself
        word = s.substr(ini, last - ini + 1); // because last-ini is word.length()-1
        int found = s2.find(word);
        while (found != s2.length()) // the loop goes at least once
            ++count_word;
        s2.erase(0, found + word.length()); // we erase the part of s2 where we have already looked
        found = s2.find(word);
        cout << "3: " << ++count_3 << endl;
        cout << "\t[" << word << "]: " << count_word;
        ++last;
        s2 = s;
        s2.erase(0, ini + word.length()); // we do this so that in the next iteration we don't look for
                                          // the new word where we know it won't be
        if (s[last] == ' ' || s[last] == '\n') ini = last + 1;
        if (s[last] == ',' || s[last] == '.') ini = last + 2;
        count_word = 0;
    }
}
When I ran the program nothing was shown on screen, so I figured that one of the loops must have been stuck (that is why I defined the variables count_1, count_2 and count_3, to know if this was so). However, after correctly counting the number of iterations for the first word to be found, nothing else is printed and all I see is the command prompt (I mean the tiny white bar), and I cannot even stop the program by using Ctrl+Z.
- How to convert character string to executable code in R?
I have a dataframe e.g.
df_reprex <- data.frame(id = rep(paste0("S", round(runif(100, 1000000, 9999999), 0)), each = 10),
                        date = rep(seq.Date(today(), by = -7, length.out = 10), 100),
                        var1 = runif(1000, 10, 20),
                        var2 = runif(1000, 20, 50),
                        var3 = runif(1000, 2, 5),
                        var250 = runif(1000, 100, 200),
                        var1_baseline = rep(runif(100, 5, 10), each = 10),
                        var2_baseline = rep(runif(100, 50, 80), each = 10),
                        var3_baseline = rep(runif(100, 1, 3), each = 10),
                        var250_baseline = rep(runif(100, 20, 70), each = 10))
I want to write a function containing a for loop that for each row in the dataframe will subtract every "_baseline" column from the non-baseline column with the same name.
I have created a script that automatically creates a character string containing the code I would like to run:
df <- df_reprex

# get only numeric columns
df_num <- df %>% dplyr::select_if(., is.numeric)

# create a version with no baselines
df_nobaselines <- df_num %>% select(-contains("baseline"))

# extract names of non-baseline columns
numeric_cols <- names(df_nobaselines)

# initialise empty string
mutatestring <- ""

# write loop to fill in string:
for (colname in numeric_cols) {
  mutatestring <- paste(mutatestring, ",", paste0(colname, "_change"), "=", colname, "-", paste0(colname, "_baseline"))
  # df_num <- df_num %>%
  #   mutate(paste0(col, "_change") = col - paste0(col, "_baseline"))
}

mutatestring <- substr(mutatestring, 4, 9999999)          # remove stuff at start (I know it's inefficient)
mutatestring2 <- paste("df %>% mutate(", mutatestring, ")") # add mutate call
but when I try to call "mutatestring2" it just prints the character string e.g.:
[1] "df %>% mutate( var1_change = var1 - var1_baseline , var2_change = var2 - var2_baseline , var3_change = var3 - var3_baseline , var250_change = var250 - var250_baseline )"
I thought that this part would be relatively easy and I'm sure I've missed something obvious, but I just can't get the text inside that string to run!
I've tried various slightly ridiculous methods but none of them return the desired output (i.e. the result returned by the character string if it was entered into the console as a command):
call(mutatestring2)
eval(mutatestring2)
parse(mutatestring2)
str2lang(mutatestring2)
mget(mutatestring2)

diff_func <- function() {mutatestring2}
diff_func1 <- function() {
  a <- mutatestring2
  return(a)
}
diff_func2 <- function() {str2lang(mutatestring2)}
diff_func3 <- function() {eval(mutatestring2)}
diff_func4 <- function() {parse(mutatestring2)}
diff_func5 <- function() {call(mutatestring2)}

diff_func()
diff_func1()
diff_func2()
diff_func3()
diff_func4()
diff_func5()
I'm sure there must be a very straightforward way of doing this, but I just can't work it out!
How do I convert a character string to something that I can run or pass to a magrittr pipe?
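For reference, the missing piece is that a string must first be parsed into an expression and then evaluated; parse() needs its text argument (a sketch, assuming dplyr is loaded so that mutate() exists):

```r
# parse(text = ...) turns the string into an expression; eval() runs it.
result <- eval(parse(text = mutatestring2))

# str2lang() + eval() also works for a single expression (R >= 3.6):
result <- eval(str2lang(mutatestring2))
```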
- How to store or capture variable from one directory to another and execute with cut command in Linux
I need to do the following: first, read server names from a text file line by line and run a grep command on each. The main task is to find the full FQDN server name in the NB logs path using only the hostname, and then run further commands. Here is the script I wrote, but I am a beginner in shell scripting and need your help. What am I missing here? I was wondering how to save a variable's value, use it in another directory path, and execute commands with it there. If I run these commands manually on the command line they give me what I need. Thank you all.
#!/bin/sh
echo
echo "Looking for the whole backup client names?"
for server in `cat ./List_ServerNames.txt`; do
    cd /usr/openv/netbackup/logs/nbproxy/
    return_fqdn=`grep -im1 '$server' "$(ls -Art /usr/openv/netbackup/logs/nbproxy/ | tail -n1 | cut - d : -f8)" | cut -d " " -f 6 | cut -c -36 | cut -d "(" -f 1`
    echo $return_fqdn
done
The output of the script is shown below:
Looking for the whole backup client names?
- Difference between variable declaration and definition in C. When does the memory get allocated?
So I have been trying to learn C on my own. So just a couple of videos and articles and a book by my side. Although it sounds like a simple concept (I'm sure it is), I don't think the concept is clear to me.
Can you please give an example of when a variable is declared or defined? (together and separately)
Like in some articles or forums I read they say
Int x; ( x is declared)
Somewhere it's written
Int x; ( x is defined).
When will the memory be allocated to the variable? Again somewhere it is said that variable has to be defined first to get memory allocated and somewhere it's said it is allocated when a variable is declared?
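In C terms (a sketch; the precise rules are in the C standard's section on declarations): a declaration introduces a name and its type, while a definition additionally reserves storage for an object or provides the body of a function. For example:

```c
extern int x;  /* declaration only: no storage allocated here        */
int y;         /* tentative definition at file scope: storage reserved */
int z = 42;    /* definition with initializer: storage plus a value    */

int add(int, int);      /* function declaration (prototype): no code yet */
int add(int a, int b) { /* function definition: the code itself          */
    return a + b;
}
```

Memory for an object comes with its definition; a pure declaration like `extern int x;` only promises that the definition exists somewhere else.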
- writing in .docx with flask for different users
I have a webapp made with Flask that, in the end, writes a docx file according to the interaction of the people using the platform. I mean, depending on what people do, the .docx file is different. The problem is that, the way my code is right now, if more than one person uses the webapp at the same time they interfere with what each other is writing to the .docx file. Here is my code:
from flask import Flask, redirect, url_for, render_template, request, send_file, session
from docx import Document
from docx.enum.style import WD_STYLE_TYPE
from docx.shared import Pt, Cm, RGBColor
from docx.enum.text import WD_PARAGRAPH_ALIGNMENT
from uuid import uuid4

document = Document()

sections = document.sections
for section in sections:
    section.top_margin = Cm(2.5)
    section.bottom_margim = Cm(2.5)
    section.left_margin = Cm(3)
    section.right_margin = Cm(3)

def estilo_random():
    return str(uuid4())

def write(text='', font_name='Arial', font_size=12, font_bold=False,
          font_italic=False, font_underline=False, color=RGBColor(0, 0, 0)):
    paragraph = document.add_paragraph(text)
    style = estilo_random()
    paragraph.style = document.styles.add_style(style, WD_STYLE_TYPE.PARAGRAPH)
    font = paragraph.style.font
    font.name = font_name
    font.size = Pt(font_size)
    font.bold = font_bold
    font.italic = font_italic
    font.underline = font_underline
    font.color.rgb = color

app = Flask(__name__)
app.secret_key = 'test'

@app.route('/', methods=['POST', 'GET'])
def 01():
    if request.method == "POST":
        session['user'] = estilo_random()
        name = request.form['name'].strip()
        nacion = request.form['nacion'].strip()
        write(f'{name_doc}{nacion}')
        document.save(session['user'])
        return send_file(filename_or_fp=session['user'], as_attachment=True,
                         attachment_filename='Pt.docx')
    else:
        return render_template('one.html')

if __name__ == '__main__':
    app.run()
As you can see... the document is being opened in the global scope, and I'm having trouble opening it after the user starts the session and passing it through the write function! I guess that would be the solution. Anyway... help please. And, if possible, I'd like to delete the file after it has been sent. Thanks.
iSkeletonAnimationKeyFrame Struct Reference

The script key frame contains all bones that will be transformed in a specific time of a skeleton script.
#include <imesh/skeleton.h>
Inheritance diagram for iSkeletonAnimationKeyFrame:
Detailed Description

The script key frame contains all bones that will be transformed in a specific time of a skeleton script.
Definition at line 180 of file skeleton.h.
Member Function Documentation
Add new bone transform to the key frame.
Get key frame duration.
Get key frame specific data.
Returns false when the frame doesn't have data for the given bone.
Get name of the key frame.
Get the transform of a bone.
Returns 'false' when there won't be any transform data for the given bone.
Get the transform of a bone.
- Deprecated:
- GetTransform (iSkeletonBoneFactory *bone) is deprecated, use GetTransform (iSkeletonBoneFactory *bone, csReversibleTransform &dst_trans) instead
Get number of key transforms.
Set key frame duration.
Set name of the key frame.
Set the transform of a bone.
The documentation for this struct was generated from the following file:
- imesh/skeleton.h
Generated for Crystal Space 1.2.1 by doxygen 1.5.3 | http://www.crystalspace3d.org/docs/online/api-1.2/structiSkeletonAnimationKeyFrame.html | CC-MAIN-2014-41 | refinedweb | 180 | 61.93 |
#include <stdio.h> FILE *freopen(const char *filename, const char *mode, FILE *stream);
The freopen() function first attempts to flush the stream and close any file descriptor associated with stream. Failure to flush or close the file successfully is ignored. The error and end-of-file indicators for the stream are cleared.
The freopen() function opens the file whose pathname is the string pointed to by filename and associates the stream pointed to by stream with it. The mode argument is used just as in fopen(3C).
If filename is a null pointer and the application conforms to SUSv3 (see standards(5)), the freopen() function attempts to change the mode of the stream to that specified by mode, as though the name of the file currently associated with the stream had been used.
If the filename is a null pointer and the application does not conform to SUSv3, freopen() returns a null pointer.
The original stream is closed regardless of whether the subsequent open succeeds.
After a successful call to the freopen() function, the orientation of the stream is cleared, the encoding rule is cleared, and the associated mbstate_t object is set to describe an initial conversion state.
The freopen() function will fail if:
Search permission is denied on a component of the path prefix, or the file exists and the permissions specified by mode are denied, or the file does not exist and write permission is denied for the parent directory of the file to be created.
The application conforms to SUSv3, the filename argument is a null pointer, and either the underlying file descriptor is not valid or the mode specified when the underlying file descriptor was opened does not support the file access modes requested by the mode argument.
The application does not conform to SUSv3 and the filename argument is a null pointer.
A signal was caught during freopen().

The named file is a regular file and the size of the file cannot be represented correctly in an object of type off_t.
The named file resides on a read-only file system and mode requires write access.
The freopen() function may fail if:
The value of the mode argument is not valid.
Pathname resolution of a symbolic link produced an intermediate result whose length exceeds {PATH_MAX}.
Insufficient storage space is available.
A request was made of a non-existent device, or the request was outside the capabilities of the device.
The file is a pure procedure (shared text) file that is being executed and mode requires write access.
The freopen() function is typically used to attach the preopened streams associated with stdin, stdout, and stderr to other files.
See attributes(5) for descriptions of the following attributes:
fclose(3C), fdopen(3C), fopen(3C), stdio(3C), attributes(5), lf64(5), standards(5)
Ruby and Lambda splat out a baby and that child's name is Jets.
Please watch/star this repo to help grow and support the project.
Upgrading: If you are upgrading Jets, please check on the Upgrading Notes.
Sponsors
What is Ruby on Jets?
Jets is a Ruby Serverless Framework. Jets allows you to create serverless applications with a beautiful language: Ruby. It includes everything required to build an application and deploy it to AWS Lambda.
It is key to understand AWS Lambda and API Gateway to understand Jets conceptually. Jets maps your code to Lambda functions and API Gateway resources.
- AWS Lambda is Functions as a Service. It allows you to upload and run functions without worrying about the underlying infrastructure.
- API Gateway is the routing layer for Lambda. It is used to route REST URL endpoints to Lambda functions.
The official documentation is at Ruby on Jets.
Refer to the official docs for more info, but here's a quick intro.
Jets Functions
Jets supports writing AWS Lambda functions with Ruby. You define them in the app/functions folder. A function looks like this:
app/functions/simple.rb:
def lambda_handler(event:, context:)
  puts "hello world"
  {hello: "world"}
end
Here's the function in the Lambda console:
Though simple functions are supported by Jets, they do not add as much value as other ways of writing Ruby code with Jets. Classes like Controllers and Jobs add many conveniences and are more powerful to use. We’ll cover them next.
Jets Controllers
A Jets controller handles a web request and renders a response. Here's an example:
app/controllers/posts_controller.rb:
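The controller listing did not survive extraction. A minimal sketch consistent with the surrounding description follows; the method names, the stub base class, and the exact behavior of the params/render helpers are my assumptions, not the original source.

```ruby
# Stand-in for Jets' ApplicationController so this sketch runs on
# its own; in a real Jets app the framework supplies the base class
# and the params/render helpers.
class ApplicationController
  def initialize(params = {})
    @params = params
  end

  attr_reader :params

  # In Jets, render builds a Lambda Proxy response for API Gateway;
  # here it simply returns the payload.
  def render(json:)
    json
  end
end

# Hypothetical controller: each public method becomes its own
# Lambda function when deployed.
class PostsController < ApplicationController
  def index
    render json: { hello: "world", action: "index" }
  end

  def show
    render json: { action: "show", id: params[:id] }
  end
end
```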
Helper methods like params provide the parameters from the API Gateway event. The render method renders back a Lambda Proxy structure that API Gateway understands.
Jets creates Lambda functions for each public method in your controller. Here they are in the Lambda console:
Jets Routing
You connect Lambda functions to API Gateway URL endpoints with a routes file:
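The routes listing itself was lost in extraction. Based on Jets' routing DSL, it plausibly looked something like this; the exact paths are illustrative, not the original:

```ruby
# Hypothetical config/routes.rb (a reconstruction, not the original)
Jets.application.routes.draw do
  get "posts", to: "posts#index"
  get "posts/:id", to: "posts#show"
end
```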
The routes.rb file gets translated to API Gateway resources:
Test your API Gateway endpoints with curl or postman. Note, replace the URL endpoint with the one that is created:
$ curl -s "" | jq . { "hello": "world", "action": "index" }
Jets Jobs
A Jets job handles asynchronous background jobs performed outside of the web request/response cycle. Here's an example:
app/jobs/hard_job.rb:
class HardJob < ApplicationJob
  rate "10 hours" # every 10 hours
  def dig
    puts "done digging"
  end

  cron "0 */12 * * ? *" # every 12 hours
  def lift
    puts "done lifting"
  end
end
HardJob#dig runs every 10 hours and HardJob#lift runs every 12 hours. The rate and cron methods create CloudWatch Event Rules. Example:
Jets Deployment
You can test your application with a local server that mimics API Gateway: Jets Local Server. Once ready, deploying to AWS Lambda is a single command.
jets deploy
After deployment, you can test the Lambda functions with the AWS Lambda console or the CLI.
AWS Lambda Console
Live Demos
Here are some demos of Jets applications:
- Quintessential CRUD Jets app
- API Demo
- Jets Afterburner: Easy Rails Support
- Mega Mode: Jets and Rails Combined
- Image Upload with CarrierWave
Please feel free to add your own example to the jets-examples repo.
Rails Support
Jets Afterburner Mode provides Rails support with little effort. This allows you to run a Rails application on AWS Lambda. Also here's a Tutorial Blog Post: Jets Afterburner: Rails Support.
More Info
For more documentation, check out the official docs: Ruby on Jets. Here's a list of useful links:
- Quick Start
- Local Jets Server
- REPL Console
- Project Structure
- App Configuration
- Database Support
- Polymorphic Support
- Rails Support
- Tutorials
- Prewarming
- Custom Resources
- Shared Resources
- Installation
- CLI Reference
- Contributing
- Support Jets
- Example Projects
Learning Content
- Introducing Jets: A Ruby Serverless Framework
- Official AWS Ruby Support for Jets
- Build an API with Jets
- Serverless Ruby Cron Jobs Tutorial: Route53 Backup
- Serverless Slack Commands: Fun with AWS Image Recognition
- Jets Afterburner: Rails Support
- Jets Mega Mode: Jets and Rails
- Toronto Serverless Presentation
- Jets Image Uploads Tutorial with CarrierWave
- Jets Tutorial An Introductory CRUD App Part 1
- Jets Tutorial Deploy to AWS Lambda Part 2
- Jets Tutorial Debugging Logs Part 3
- Jets Tutorial Background Jobs Part 4
- Jets Tutorial IAM Policies Part 5
- Jets Tutorial Function Properties Part 6
- Jets Tutorial Extra Environments Part 7
- Jets Tutorial Different Environments Part 8
- Jets Tutorial Polymorphic Support Part 9
- Jets Delete Tutorial | https://www.rubydoc.info/gems/jets/3.0.5 | CC-MAIN-2021-31 | refinedweb | 730 | 54.83 |
2014-10-05 09:39 AM
I might be reading over something that is obvious. Anyone know of a good way to connect to an SVM when you have the IP/DNS name of the SVM, but don't have the Cluster information? Are there particular services/protocols that I need to have enabled to get this to work?
Here is my scenario. I am creating a script to be called by VSC 4.2. VSC will give me the path to the export as VMware sees it (aka vserver.name.com:/hellovolume). I am parsing the information out to separate the namespace away from the DNS/IP of the vserver (SVM). (See for how I'm doing this). The problem is that VSC doesn't provide me with the cluster information, so when I try to do a "connect-nccontroller" it fails.
Anyone have any suggestions, or examples where they are already doing this? If you got it to work, what specific things did you have to enable on the cluster/svm to get it to work.
2014-10-05 12:02 PM
I think I found the issue after carefully reading the help for 'connect-nccontroller'. I need to change the firewall policies so that they allow https/http connections on each of those data LIFs. This should give me the results I need.
At the March 2017 Esri Developer Summit, Dave Bayer and I gave a presentation on how to use Arcade expressions in web apps built on the ArcGIS platform. In that presentation I demonstrated a succinct way to create a predominance visualization using Arcade.
Visualizing predominance involves coloring a layer’s features based on which attribute among a set of competing numeric attributes wins or beats the others in total count. Common applications of this include visualizing election results, survey results, and demographic majorities.
Arcade is a good solution for predominance visualizations because it allows you to avoid creating new fields in a service for storing the predominant category and the margin of victory. With Arcade, you write the expression and it will return the values at runtime, allowing you to drive the color of the visualization based on the predominant category.
In our Dev Summit presentation I shared this webmap, which depicts the predominant educational attainment achieved by people in Mexico on the municipal level.
The expression used to create the visualization looks like this:
// Calculate values based on attribute fields;
// pass all values to an array
// (primary, secondary, highSchool, and college are sums of related
// fields; their definitions were elided upstream)
var fields = [
  $feature.EDUC01_CY, $feature.EDUC02_CY, $feature.EDUC03_CY,
  $feature.EDUC05_CY, primary, secondary, highSchool, college
];

// get the max or winner
var winner = Max(fields);

// return the string describing the winning value
return Decode(winner,
  $feature.EDUC01_CY, "Didn't attend any school",
  $feature.EDUC02_CY, "Preschool",
  $feature.EDUC03_CY, "Incomplete elementary school",
  primary, "Elementary school",
  $feature.EDUC05_CY, "Incomplete middle school",
  secondary, "Middle school",
  highSchool, "High school",
  college, "College",
  "Other");
Once you understand Arcade’s syntax and what the Decode() function is doing, the expression is pretty straightforward to read. Decode is an Arcade function that matches a value (usually a feature attribute) to another value among a set of potential matches. Each of the potential matches is paired with another value (usually a string) that describes it. In essence, it matches a value with a meaningful description. In this case, the input value to match is the maximum value of those field values/expressions in the provided array. All values in the array are passed to the Decode function along with matching strings describing what each value represents. So the string matching the max will always be returned. It’s slick, compact, and simple to use with any dataset.
An alternate approach
However, one user at our presentation asked a great question: “What if there’s a tie for the top category?” The answer we gave is that Decode will return the first match in the case of duplicate winners. While ties are relatively common in the datasets I’ve looked at, very few of them involved the max value. Nevertheless, the Decode approach is flawed and doesn’t address the real question: “How do we visualize ties for the top category?”
The solution I eventually came up with takes advantage of Arcade’s ability to write custom functions. After testing the revised expression with several datasets, I found it easier to work with and the solution for dealing with tied max values satisfactory:

// The fields from which to calculate predominance
// The expression will return the alias of the predominant field
// (primary, secondary, highSchool, and college are sums of related
// fields; their definitions were elided upstream)
var fields = [
  { value: $feature.EDUC01_CY, alias: "Didn't attend any school" },
  { value: $feature.EDUC02_CY, alias: "Preschool" },
  { value: $feature.EDUC03_CY, alias: "Incomplete elementary school" },
  { value: primary, alias: "Elementary school" },
  { value: $feature.EDUC05_CY, alias: "Incomplete middle school" },
  { value: secondary, alias: "Middle school" },
  { value: highSchool, alias: "High school" },
  { value: college, alias: "College" }
];

// Returns the alias of the field with the max value. Ties for the
// max concatenate the tied aliases; if no value is positive, null
// is returned. (Function body reconstructed from the behavior
// described below; the original was lost in extraction.)
function getPredominantCategory(fieldsArray){
  var maxValue = -Infinity;
  var maxCategory = "";
  for(var k in fieldsArray){
    if(fieldsArray[k].value > maxValue){
      maxValue = fieldsArray[k].value;
      maxCategory = fieldsArray[k].alias;
    } else if(fieldsArray[k].value == maxValue){
      maxCategory = maxCategory + ", " + fieldsArray[k].alias;
    }
  }
  return IIF(maxValue <= 0, null, maxCategory);
}

getPredominantCategory(fields);
The getPredominantCategory() function returns the alias of the max category as defined in the fieldsArray. Notice that if a tie is encountered for the max value, then the aliases of all fields involved in the tie are concatenated and returned.
The results
Let’s take a look at how well this approach works using the Arcade playground. I entered dummy field values representing the number of votes cast for three political parties in a fictitious feature and ran the expression. It correctly returned the maximum value.
If the two lower values were tied, the same result was correctly returned since they don’t involve the maximum value.
If two values were tied for first place, both aliases were returned as a concatenated string.
In the unlikely event of a three-way tie, the expression even works in that scenario.
And what if all fields have the value of zero? It doesn’t seem right to visualize a tie since no votes were cast within the given feature. So the expression returns null, which indicates to the layer’s renderer to not bother with visualizing the feature.
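To sanity-check the behaviors described above outside of ArcGIS, the same logic can be ported to plain JavaScript. This port is mine; only the Arcade version is from the original post.

```javascript
// Plain-JavaScript analog of the Arcade getPredominantCategory
// logic: returns the alias of the max value, concatenates aliases
// on ties, and returns null when no value is positive.
function getPredominantCategory(fields) {
  let maxValue = -Infinity;
  let maxCategory = "";
  for (const f of fields) {
    if (f.value > maxValue) {
      maxValue = f.value;
      maxCategory = f.alias;
    } else if (f.value === maxValue) {
      maxCategory += ", " + f.alias; // tie for the current max
    }
  }
  return maxValue <= 0 ? null : maxCategory;
}
```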
But how does this approach change the visualization of the educational attainment dataset in Mexico? As a whole, the visualization didn’t change much. If we open up the “change style” options in the ArcGIS Online map viewer, we can compare the categories and their total counts between each method used.
Two unique tie situations were found among six municipalities where ties existed for the max value of the fields. Notice that the map viewer allows you to group values. So if you have a bunch of unique tie scenarios, you can group them into the “Other” category and rename it to “tie” so they are visualized with the same symbol.
If building the visualization in a custom app with the ArcGIS API for JavaScript, you would use a UniqueValueRenderer and just set the defaultSymbol and defaultLabel to this generic value. Then you won’t need to specify each tie scenario in the uniqueValueInfos of the renderer.
Interestingly, three of the six ties occurred in the same region. We’ll zoom to northwestern Oaxaca to get a better look at comparing the different visualizations for each method. The labels on each feature indicate the % gap or margin between the top two competing fields. I’ve circled the features in black where the gap is 0% (or where a tie exists).
Decode methodology – no ties indicated
Notice that the features with ties incorrectly indicate a clear winner.
Custom function methodology – unique ties indicated
Unique ties are visualized with different colors.
Custom function methodology – all ties generically indicated
All ties are visualized with the same color after grouping them together.
Methods for indicating strength of predominance
I’ve used two different methods for visualizing the strength of the predominant value, or the overall dominance of the winner. They’re explained in more depth in the session video so I won’t go into too much depth describing the logic of the Arcade syntax used.
The concept is that we’ll change the opacity value of each feature based on how convincing the win was between the competing values. This tells another side of the story, especially in the case of elections and survey results.
Opacity can be driven by field values or Arcade expressions by clicking the “Attribute Values” link in the “Change style” options of ArcGIS Online.
Strength
The first method I simply call “strength of predominance”. It compares the value of the winner to all other values and returns its strength as a percentage:

var winner = Max(fields);
var total = Sum(fields);
return (winner / total) * 100;
The percentage is then used to drive the opacity of each feature. The Smart mapping API will provide good default breakpoints for the opacity values.
Notice that features where ties exist can have varying levels of strength. In the image above, the northern-most circled feature has two attributes with high values that are tied. The competing field values might look something like this:
[40, 40, 10, 4, 2, 1, 1, 0]. The strength expression would return 40%. The max of this feature is likely higher than the max in most other features; thus the feature is more opaque even though a tie exists.
The southern-most circled feature may have the following values for its competing categories:
[20, 20, 19, 19, 10, 8, 3, 1]. The strength expression would return 20%. The max is relatively low compared to the max value of other features, thus resulting in the vary transparent color.
Gap
The “Gap” methodology only compares the max value with the second highest value. The resulting value indicates how much the winning feature won by, also known as the “margin of victory”:

var order = Reverse(Sort(fields));
var winner = order[0];
var secondPlace = order[1];
var total = Sum(fields);
return Round(((winner - secondPlace) / total) * 100, 2);
In this scenario, the tied features will always have a transparent fill.
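As a sanity check outside of Arcade, both opacity drivers reduce to a few lines of arithmetic. Here is a plain-JavaScript sketch of mine mirroring the two expressions.

```javascript
// Strength: the winner's share of the total, as a percentage.
function strength(values) {
  const winner = Math.max(...values);
  const total = values.reduce((a, b) => a + b, 0);
  return (winner / total) * 100;
}

// Gap: margin between first and second place, as a percentage.
function gap(values) {
  const order = [...values].sort((a, b) => b - a);
  const total = values.reduce((a, b) => a + b, 0);
  return ((order[0] - order[1]) / total) * 100;
}
```

For values [40, 40, 20], strength is 40% while gap is 0%.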
Summary
Arcade can be a powerful vehicle for exploring predominance visualizations among competing attributes. I mentioned a couple of expressions you can use when creating predominance visualizations. Just remember the limitations of each:
- The Decode approach is fine if no ties are present or if you plan on driving opacity with a gap expression since gap will wash out the color anyway.
- The custom function works well in the case of ties. However, if many unique types of ties exist, you may want to visualize them grouped in a generic “tie” category to avoid confusion.
Also note that you can leverage the Smart Mapping tools for this workflow. By simply selecting two or more fields in the map viewer, you can select the “Predominant category” style option to allow the map viewer to do all the work for you.
Be sure to check out this story map explaining how to visualize predominance in ArcGIS Online. However, you cannot use Arcade expressions for each value in this workflow. In the examples above, some values like “college degree” were the sum of multiple fields (e.g. bachelor + master + doctorate degree). The Arcade expressions shared above in this post allow you to perform additional calculations prior to generating the predominance visualization. | https://www.esri.com/arcgis-blog/products/js-api-arcgis/mapping/creating-a-predominance-visualization-with-arcade/?rmedium=redirect&rsource=/esri/arcgis/2017/05/23/creating-a-predominance-visualization-with-arcade | CC-MAIN-2018-39 | refinedweb | 1,619 | 53 |
Recently, Yomi did a great job writing about advanced React Router concepts. But if you’re just starting out with React Router, that post was probably a little too much to wrap your head around.
Not to worry. In this post, I’ll get you started with the basics of the web version (React Router DOM).
Here’s what you’ll learn:
- The concept of a router
- How to setup and install React Router
- The essential components of this framework
- How to build routes with parameters, like
/messages/10
The demo app for this post is available on CodeSandbox:
For reference, you can find the code of the final example on this GitHub repository.
Let’s get started.
What is a router?
Single-page applications (SPAs) rewrite sections of a page rather than loading entire new pages from a server.
Twitter is a good example of this type of application. When you click on a tweet, only the tweet’s information is fetched from the server. The page does not fully reload:
These applications are easy to deploy and greatly improve the user experience, among other advantages.
However, they also bring challenges.
One of them is browser history. As the application is contained in a single page, it cannot rely on the browser’s forward/back buttons per se. It needs something else.
Something that, according to the application’s state, changes the URL to push or replace URL history events within the browser. At the same time, it also needs to rebuild the application state from information contained within the URL.
On Twitter, for example, notice how the URL changes when a tweet is clicked:
And how a history entry is generated:
This is the job of a router.
A router allows your application to navigate between different components, changing the browser URL, modifying the browser history, and keeping the UI state in sync.
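The push/replace bookkeeping a router performs can be modeled in a few lines of plain JavaScript. This is a conceptual sketch of mine, not React Router's actual history implementation.

```javascript
// Minimal model of a router's history stack: push adds an entry
// and discards any forward entries; replace swaps the current one.
function createHistory(initial = "/") {
  const stack = [initial];
  let index = 0;
  return {
    get location() { return stack[index]; },
    push(path) { stack.splice(index + 1); stack.push(path); index += 1; },
    replace(path) { stack[index] = path; },
    back() { if (index > 0) index -= 1; },
    forward() { if (index < stack.length - 1) index += 1; },
  };
}
```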
React is a popular library for building SPAs. However, as React focuses only on building user interfaces, it doesn’t have a built-in solution for routing.
React Router is the most popular routing library for React. It allows you to define routes in the same declarative style:
<Route path="/home" component={Home} />
But let’s not get ahead of ourselves. Let’s start by creating a sample project and setting up React Router.
Setting up React Router
I’m going to use Create React App to create a React app. You can install (or update) it with:
npm install -g create-react-app
You just need to have Node.js version 6 or later installed.
Next, execute the following command:
create-react-app react-router-example
In this case, the directory react-router-example will be created. If you cd into it, you should see a structure similar to the following:
React Router includes three main packages:
react-router: This is the core package for the router
react-router-dom: It contains the DOM bindings for React Router. In other words, the router components for websites
react-router-native: It contains the bindings for React Native. In other words, the router components for apps built with React Native.

For a web application, you only need to install react-router-dom (react-router is installed along with it as a dependency):

npm install react-router-dom

Now run the application:

npm start
A browser window will open and you should see something like this:
Now let’s create a simple SPA with React and React Router.
Using React Router
The React Router API is based on three components:
<Router>: The router that keeps the UI in sync with the URL
<Link>: Renders a navigation link
<Route>: Renders a UI component depending on the URL
<Router>
Only in some special cases will you have to use <Router> directly (for example, when working with Redux), so the first thing you have to do is choose a router implementation.
In a web application, you have two options:
<BrowserRouter>, which uses the HTML5 History API.
<HashRouter>, which uses the hash portion of the URL (window.location.hash)
If you’re going to target older browsers that don’t support the HTML5 History API, you should stick with
<HashRouter>, which creates URLs with the following format:
Otherwise, you can use <BrowserRouter>, which creates URLs with the following format: http://example.com/your/page
I’ll use
<BrowserRouter>, so in
src/index.js, I’m going to import this component from
react-router-dom and use it to wrap the
<App> component:
import React from 'react'; // ... import { BrowserRouter } from 'react-router-dom'; ReactDOM.render( <BrowserRouter> <App /> </BrowserRouter> , document.getElementById('root') ); // ...
It’s important to mention that a router component can only have one child element. For example, the following code:
// ... ReactDOM.render( <BrowserRouter> <App /> <div>Another child!</div> </BrowserRouter> , document.getElementById('root') ); // ...
Throws this error message:
The main job of a <Router> component is to create a history object to keep track of the location (URL). When the location changes because of a navigation action, the child component (in this case <App>) is re-rendered.
Most of the time, you’ll use a <Link> component to change the location.
<Link>
Let’s create a navigation menu. Open
src/App.css to add the following styles:
ul { list-style-type: none; padding: 0; } .menu ul { background-color: #222; margin: 0; } .menu li { font-family: sans-serif; font-size: 1.2em; line-height: 40px; height: 40px; border-bottom: 1px solid #888; } .menu a { text-decoration: none; color: #fff; display: block; }
In src/App.js, replace the last <p> element in the render() function so it looks like this:
render() { return ( <div className="App"> <header className="App-header"> <img src={logo} <h1 className="App-title">Welcome to React</h1> </header> <div className="menu"> <ul> <li> <Link to="/">Home</Link> </li> <li> <Link to="/messages">Messages</Link> </li> <li> <Link to="/about">About</Link> </li> </ul> </div> </div> ); }
Don’t forget to import the
<Link> component at the top of the file:
import { Link } from 'react-router-dom'
In the browser, you should see something like this:
As you can see, this JSX code:
<ul> <li> <Link to="/">Home</Link> </li> <li> <Link to="/messages">Messages</Link> </li> <li> <Link to="/about">About</Link> </li> </ul>
Generates the following HTML code:
<ul> <li> <a href="/">Home</a> </li> <li> <a href="/messages">Messages</a> </li> <li> <a href="/about">About</a> </li> </ul>
However, those aren’t regular anchor elements. They change the URL without refreshing the page. Test it.
And now add an <a> element to the JSX code and test one more time:
<ul> <li> <Link to="/">Home</Link> </li> <li> <Link to="/messages">Messages</Link> </li> <li> <Link to="/about">About</Link> </li> <li> <a href="/messages">Messages (with a regular anchor element)</a> </li> </ul>
Do you notice the difference?
<Route>
Right now, the URL changes when a link is clicked, but not the UI. Let’s fix that.
I’m going to create three components for each route. First,
src/component/Home.js for the route
/:
import React from 'react';

const Home = () => (
  <div>
    <h2>Home</h2>
    My Home page!
  </div>
);

export default Home;
Then, src/components/Messages.js for the route /messages:
import React from 'react';

const Messages = () => (
  <div>
    <h2>Messages</h2>
    Messages
  </div>
);

export default Messages;
And finally, src/components/About.js for the route /about:
import React from 'react';

const About = () => (
  <div>
    <h2>About</h2>
    This example shows how to use React Router!
  </div>
);

export default About;
To specify the URL that corresponds to each component, you use the <Route> component in the following way:
<Route path="/" component={Home}/>
<Route path="/messages" component={Messages}/>
<Route path="/about" component={About}/>
With other router libraries (and even in previous versions of React Router), you have to define these routes in a special file, or at least, outside your application.
This doesn’t apply to React Router 4. These components can be placed anywhere inside of the router, and the associated component will be rendered in that place, just like any other component.
So in src/App.js, import all these components and add a section after the menu:
// … import { Route, Link } from 'react-router-dom' import Home from './components/Home'; import About from './components/About'; import Messages from './components/Messages'; class App extends Component { render() { return ( <div className="App"> ... <div className="menu"> ... </div> <div className="App-intro"> <Route path="/" component={Home}/> <Route path="/messages" component={Messages}/> <Route path="/about" component={About}/> </div> </div> ); } }
In the browser, you should see something like this:
However, look what happens when you go to the other routes:
By default, routes are inclusive: more than one <Route> component can match the URL path and render at the same time.
Routes are the most important concept in React Router, and you need to know some things about them. Let’s talk about routes in the next section.
Understanding routes
The matching logic of the <Route> component is delegated to the path-to-regexp library. I encourage you to check all the options and modifiers of this library and test it live with the express-route-tester.
In the previous example, since the /messages and /about paths also contain the character /, they are also matched and rendered.
With this behavior, you can display different components just by declaring that they belong to the same (or similar) path.
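The inclusive behavior boils down to prefix matching on path segments. Here is a deliberately simplified model of mine; React Router actually delegates to path-to-regexp, which handles far more (parameters, modifiers, and so on).

```javascript
// Simplified non-exact matching: a route path matches when it is
// a prefix of the location ending at a segment boundary.
function matches(path, location) {
  if (!location.startsWith(path)) return false;
  const rest = location.slice(path.length);
  return path === "/" || rest === "" || rest.startsWith("/");
}
```

Under this model, "/" matches every location, which is exactly why the Home component rendered on every page.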
There’s more than one solution.
The first one uses the exact property to render the component only if the defined path matches the URL path exactly:
<Route exact path="/" component={Home}/>
If you test the application, you’ll see that everything works fine.
The routes /messages and /about are still evaluated, but they are not an exact match for / now.
However, if we know that only one route will be chosen, we can use a <Switch> component to render only the first route that matches the location:
// … import { Route, Link, Switch } from 'react-router-dom' // … class App extends Component { render() { return ( … <div className="App-intro"> <Switch> <Route exact path="/" component={Home} /> <Route path="/messages" component={Messages} /> <Route path="/about" component={About} /> </Switch> </div> </div> ); } }
<Switch> will make the path matching exclusive rather than inclusive (unlike when using bare <Route> components).
For example, even if you duplicate the route for the Messages component:
<Switch>
  <Route exact path="/" component={Home} />
  <Route path="/messages" component={Messages} />
  <Route path="/messages" component={Messages} />
  <Route path="/about" component={About} />
</Switch>
When visiting the /messages path, the Messages component will be rendered only once.
However, notice that you’ll still need to specify the exact property for the / path; otherwise, /messages and /about will also match /, and the Home component will always be rendered (since this is the first route matched):
But what happens when a non-existent path is entered? For example:
In a regular JavaScript switch statement, you’d specify a default clause for this case, right?
In a <Switch> component, this default behavior can be implemented with a <Redirect> component:
// … import { Route, Link, Switch, Redirect } from 'react-router-dom' // … class App extends Component { render() { return ( … <div className="App-intro"> <Switch> <Route exact </Switch> </div> </div> ); } }
This component will navigate to a new location overriding the current one in the history stack:
Now, let’s cover something a little more advanced, nested routes.
Nested routes
A nested route is something like /about/react.
Let’s say that for the messages section, we want to display a list of messages. Each one in the form of a link like
/messages/1,
/messages/2, and so on, that will lead you to a detail page.
You can start by modifying the Messages component to generate links for five sample messages in this way:
import React from 'react';
import { Link } from 'react-router-dom';

const Messages = () => (
  <div>
    <ul>
      {
        [...Array(5).keys()].map(n => {
          return <li key={n}>
            <Link to={`messages/${n+1}`}>
              Message {n+1}
            </Link>
          </li>;
        })
      }
    </ul>
  </div>
);

export default Messages;
This should be displayed in the browser:
To understand how to implement this, you need to know that when a component is rendered by the router, three properties are passed as parameters: match, location, and history. For our purposes, we’ll be using the match parameter.
When there’s a match between the router’s path and the URL location, a
match object is created with information about the URL and the path. Here are the properties of this object:
params: Key/value pairs parsed from the URL corresponding to the parameters
isExact: true if the entire URL was matched (no trailing characters)
path: The path pattern used to match
url: The matched portion of the URL
This way, in the Messages component, we can destructure the properties object to use the match object:
const Messages = ({ match }) => ( <div> ... </div> )
Replace /messages with the matched URL of the match object:
const Messages = ({ match }) => (
  <div>
    <ul>
      {
        [...Array(5).keys()].map(n => {
          return <li key={n}>
            <Link to={`${match.url}/${n+1}`}>
              Message {n+1}
            </Link>
          </li>;
        })
      }
    </ul>
  </div>
);
This way you’re covered if the path ever changes.
And after the message list, declare a <Route> component with a parameter to capture the message identifier:

import Message from './Message';

//…
const Messages = ({ match }) => (
  <div>
    <ul>
      ...
    </ul>
    <Route path={`${match.url}/:id`} component={Message} />
  </div>
);
In addition, you can enforce a numerical ID in this way:
<Route path={`${match.url}/:id(\\d+)`} component={Message} />
If there’s a match, the
Message component will be rendered. Here’s its definition:
import React from 'react'; const Message = ({ match }) => ( <h3>Message with ID {match.params.id}</h3> ); export default Message;
In this component, the ID of the message is displayed. Notice how the ID is extracted from the match.params object using the same name that was defined in the path.
If you open the browser, you should see something similar to the following:
But notice that for the initial page of the messages section (/messages), or if you enter an invalid identifier in the URL (like /messages/a), nothing is printed under the list. A message would be nice, don’t you think?
You can add another route for this case, but instead of creating another component just to display a message, we can use the render property of the <Route> component:
<Route path={match.url} render={() => <h3>Please select a message</h3>} />
You can define what is rendered by using one of the following properties of <Route>:
component: To render a component.
render: A function that returns the element or component to be rendered.
children: A function that also returns the element or component to be rendered. However, the returned element is rendered regardless of whether the path is matched or not.
Finally, we can wrap the routes in a <Switch> component to guarantee that only one of the two is matched:
import { Route, Link, Switch } from 'react-router-dom';

const Messages = ({ match }) => (
  <div>
    <ul>
      ...
    </ul>
    <Switch>
      <Route path={`${match.url}/:id(\\d+)`} component={Message} />
      <Route path={match.url} render={() => <h3>Please select a message</h3>} />
    </Switch>
  </div>
);
However, you have to be careful. If you declare the route that renders the default message first:
<Switch>
  <Route path={match.url} render={() => <h3>Please select a message</h3>} />
  <Route path={`${match.url}/:id(\\d+)`} component={Message} />
</Switch>
The Message component will never be rendered, because a path like /messages/1 will match the path /messages.
If you declare the routes in this order, add exact to avoid this behavior:
<Switch>
  <Route exact path={match.url} render={() => <h3>Please select a message</h3>} />
  <Route path={`${match.url}/:id(\\d+)`} component={Message} />
</Switch>
Conclusion
In a few words, a router keeps your application UI and the URL in sync.
React Router is the most popular router library for React, and since version 4, React Router defines routes declaratively with components, in the same style as React.
In this post, you have learned how to set up React Router, its most important components, how routes work, and how to build dynamic nested routes with path parameters.
But there’s still a lot of more to learn. For example, there a <NavLink>component that is a special version of the
<Link> component that adds the properties
activeClassName and
activeStyle to give you styling options when the link matches the location URL.
The official documentation covers some basic as well as more advanced, interactive examples. Also, don’t forget to check out the advanced React Router concepts post, here on LogRocket’s blog._13<< “React Router DOM: Setup, essential components, and parameterized routes”
this is a really nice one for beginner. | https://blog.logrocket.com/react-router-dom-set-up-essential-components-parameterized-routes-505dc93642f1/ | CC-MAIN-2019-43 | refinedweb | 2,691 | 60.45 |
Chapter 5: Paths
%load_ext autoreload %autoreload 2 %matplotlib inline %config InlineBackend.figure_format = 'retina' import warnings warnings.filterwarnings('ignore')
Introduction
from IPython.display import YouTubeVideo YouTubeVideo(id="JjpbztqP9_0", width="100%")
Graph traversal is akin to walking along the graph, node by node, constrained by the edges that connect the nodes. Graph traversal is particularly useful for understanding the local structure of certain portions of the graph and for finding paths that connect two nodes in the network.
In this chapter, we are going to learn how to perform pathfinding in a graph, specifically by looking for shortest paths via the breadth-first search algorithm.
Breadth-First Search
The BFS algorithm is a staple of computer science curricula, and for good reason: it teaches learners how to "think on" a graph, putting one in the position of "the dumb computer" that can't use a visual cortex to "just know" how to trace a path from one node to another. As a topic, learning how to do BFS additionally imparts algorithmic thinking to the learner.
Exercise: Design the algorithm
Try out this exercise to get some practice with algorithmic thinking.
- On a piece of paper, conjure up a graph that has 15-20 nodes. Connect them any way you like.
- Pick two nodes. Pretend that you're standing on one of the nodes, but you can't see any further beyond one neighbor away.
- Work out how you can find a path from the node you're standing on to the other node, given that you can only see nodes that are one neighbor away but have an infinitely good memory.
If you are successful at designing the algorithm, you should get the answer below.
from nams import load_data as cf G = cf.load_sociopatterns_network()
from nams.solutions.paths import bfs_algorithm # UNCOMMENT NEXT LINE TO GET THE ANSWER. # bfs_algorithm()
Exercise: Implement the algorithm
Now that you've seen how the algorithm works, try implementing it!
# FILL IN THE BLANKS BELOW def path_exists(node1, node2, G): """ This function checks whether a path exists between two nodes (node1, node2) in graph G. """ visited_nodes = _____ queue = [_____] while len(queue) > 0: node = ___________ neighbors = list(_________________) if _____ in _________: # print('Path exists between nodes {0} and {1}'.format(node1, node2)) return True else: visited_nodes.___(____) nbrs = [_ for _ in _________ if _ not in _____________] queue = ____ + _____ # print('Path does not exist between nodes {0} and {1}'.format(node1, node2)) return False
# UNCOMMENT THE FOLLOWING TWO LINES TO SEE THE ANSWER from nams.solutions.paths import path_exists # path_exists??
# CHECK YOUR ANSWER AGAINST THE TEST FUNCTION BELOW from random import sample import networkx as nx def test_path_exists(N): """ N: The number of times to spot-check. """ for i in range(N): n1, n2 = sample(G.nodes(), 2) assert path_exists(n1, n2, G) == bool(nx.shortest_path(G, n1, n2)) return True assert test_path_exists(10)
Visualizing Paths
One of the objectives of that exercise before was to help you "think on graphs". Now that you've learned how to do so, you might be wondering, "How do I visualize that path through the graph?"
Well first off, if you inspect the
test_path_exists function above,
you'll notice that NetworkX provides a
shortest_path() function
that you can use. Here's what using
nx.shortest_path() looks like.
path = nx.shortest_path(G, 7, 400) path
[7, 51, 188, 230, 335, 400]
As you can see, it returns the nodes along the shortest path, incidentally in the exact order that you would traverse.
One thing to note, though! If there are multiple shortest paths from one node to another, NetworkX will only return one of them.
So how do you draw those nodes only?
You can use the
G.subgraph(nodes)
to return a new graph that only has nodes in
nodes
and only the edges that exist between them.
After that, you can use any plotting library you like.
We will show an example here that uses nxviz's matrix plot.
Let's see it in action:
import nxviz as nv g = G.subgraph(path) nv.matrix(g, sort_by="order")
<AxesSubplot:>
Voila! Now we have the subgraph (1) extracted and (2) drawn to screen! In this case, the matrix plot is a suitable visualization for its compactness. The off-diagonals also show that each node is a neighbor to the next one.
You'll also notice that if you try to modify the graph
g, say by adding a node:
g.add_node(2048)
you will get an error:
--------------------------------------------------------------------------- NetworkXError Traceback (most recent call last) <ipython-input-10-ca6aa4c26819> in <module> ----> 1 g.add_node(2048) ~/anaconda/envs/nams/lib/python3.7/site-packages/networkx/classes/function.py in frozen(*args, **kwargs) 156 def frozen(*args, **kwargs): 157 """Dummy method for raising errors when trying to modify frozen graphs""" --> 158 raise nx.NetworkXError("Frozen graph can't be modified") 159 160 NetworkXError: Frozen graph can't be modified
From the perspective of semantics, this makes a ton of sense:
the subgraph
g is a perfect subset of the larger graph
G,
and should not be allowed to be modified
unless the larger container graph is modified.
Exercise: Draw path with neighbors one degree out
Try out this next exercise:
Extend graph drawing with the neighbors of each of those nodes. Use any of the nxviz plots (
nv.matrix,
nv.arc,
nv.circos); try to see which one helps you tell the best story.
from nams.solutions.paths import plot_path_with_neighbors ### YOUR SOLUTION BELOW
plot_path_with_neighbors(G, 7, 400)
In this case, we opted for an Arc plot because we only have one grouping of nodes but have a logical way to order them. Because the path follows the order, the edges being highlighted automatically look like hops through the graph.
Bottleneck nodes
We're now going to revisit the concept of an "important node", this time now leveraging what we know about paths.
In the "hubs" chapter, we saw how a node that is "important" could be so because it is connected to many other nodes.
Paths give us an alternative definition. If we imagine that we have to pass a message on a graph from one node to another, then there may be "bottleneck" nodes for which if they are removed, then messages have a harder time flowing through the graph.
One metric that measures this form of importance is the "betweenness centrality" metric. On a graph through which a generic "message" is flowing, a node with a high betweenness centrality is one that has a high proportion of shortest paths flowing through it. In other words, it behaves like a bottleneck.
Betweenness centrality in NetworkX
NetworkX provides a "betweenness centrality" function that behaves consistently with the "degree centrality" function, in that it returns a mapping from node to metric:
import pandas as pd pd.Series(nx.betweenness_centrality(G))
100 0.014809 101 0.001398 102 0.000748 103 0.006735 104 0.001198 ... 89 0.000004 91 0.006415 96 0.000323 99 0.000322 98 0.000000 Length: 410, dtype: float64
Exercise: compare degree and betweenness centrality
Make a scatterplot of degree centrality on the x-axis and betweenness centrality on the y-axis. Do they correlate with one another?
import matplotlib.pyplot as plt import seaborn as sns # YOUR ANSWER HERE:
from nams.solutions.paths import plot_degree_betweenness plot_degree_betweenness(G)
Think about it...
...does it make sense that degree centrality and betweenness centrality are not well-correlated?
Can you think of a scenario where a node has a "high" betweenness centrality but a "low" degree centrality? Before peeking at the graph below, think about your answer for a moment.
nx.draw(nx.barbell_graph(5, 1))
Recap
In this chapter, you learned the following things:
- You figured out how to implement the breadth-first-search algorithm to find shortest paths.
- You learned how to extract subgraphs from a larger graph.
- You implemented visualizations of subgraphs, which should help you as you communicate with colleagues.
- You calculated betweenness centrality metrics for a graph, and visualized how they correlated with degree centrality.
Solutions
Here are the solutions to the exercises above.
from nams.solutions import paths import inspect print(inspect.getsource(paths))
"""Solutions to Paths chapter.""" import matplotlib.pyplot as plt import networkx as nx import pandas as pd import seaborn as sns from nams.functions import render_html def bfs_algorithm(): """ How to design a BFS algorithm. """ ans = """ How does the breadth-first search work? It essentially is as follows: 1. Begin with a queue that has only one element in it: the starting node. 2. Add the neighbors of that node to the queue. 1. If destination node is present in the queue, end. 2. If destination node is not present, proceed. 3. For each node in the queue: 1. Remove node from the queue. 2. Add neighbors of the node to the queue. Check if destination node is present or not. 3. If destination node is present, end. <!--Credit: @cavaunpeu for finding bug in pseudocode.--> 4. If destination node is not present, continue. """ return render_html(ans) def path_exists(node1, node2, G): """ This function checks whether a path exists between two nodes (node1, node2) in graph G. """ visited_nodes = set() queue = [node1] while len(queue) > 0: node = queue.pop() neighbors = list(G.neighbors(node)) if node2 in neighbors: return True else: visited_nodes.add(node) nbrs = [n for n in neighbors if n not in visited_nodes] queue = nbrs + queue return False def path_exists_for_loop(node1, node2, G): """ This function checks whether a path exists between two nodes (node1, node2) in graph G. Special thanks to @ghirlekar for suggesting that we keep track of the "visited nodes" to prevent infinite loops from happening. This also removes the need to remove nodes from queue. Reference: With thanks to @joshporter1 for the second bug fix. Originally there was an extraneous "if" statement that guaranteed that the "False" case would never be returned - because queue never changes in shape. Discovered at PyCon 2017. With thanks to @chendaniely for pointing out the extraneous "break". 
If you would like to see @dgerlanc's implementation, see """ visited_nodes = set() queue = [node1] for node in queue: neighbors = list(G.neighbors(node)) if node2 in neighbors: return True else: visited_nodes.add(node) queue.extend([n for n in neighbors if n not in visited_nodes]) return False def path_exists_deque(node1, node2, G): """An alternative implementation.""" from collections import deque visited_nodes = set() queue = deque([node1]) while len(queue) > 0: node = queue.popleft() neighbors = list(G.neighbors(node)) if node2 in neighbors: return True else: visited_nodes.add(node) queue.extend([n for n in neighbors if n not in visited_nodes]) return False import nxviz as nv from nxviz import annotate, highlights def plot_path_with_neighbors(G, n1, n2): """Plot a path with the heighbors of of the nodes along that path.""" path = nx.shortest_path(G, n1, n2) nodes = [*path] for node in path: nodes.extend(list(G.neighbors(node))) nodes = list(set(nodes)) g = G.subgraph(nodes) nv.arc( g, sort_by="order", node_color_by="order", edge_enc_kwargs={"alpha_scale": 0.5} ) for n in path: highlights.arc_node(g, n, sort_by="order") for n1, n2 in zip(path[:-1], path[1:]): highlights.arc_edge(g, n1, n2, sort_by="order") def plot_degree_betweenness(G): """Plot scatterplot between degree and betweenness centrality.""" bc = pd.Series(nx.betweenness_centrality(G)) dc = pd.Series(nx.degree_centrality(G)) df = pd.DataFrame(dict(bc=bc, dc=dc)) ax = df.plot(x="dc", y="bc", kind="scatter") ax.set_ylabel("Betweenness\nCentrality") ax.set_xlabel("Degree Centrality") sns.despine() | https://ericmjl.github.io/Network-Analysis-Made-Simple/02-algorithms/02-paths/ | CC-MAIN-2022-33 | refinedweb | 1,903 | 58.18 |
After five months of inactivity, the prolific and well-known Emotet botnet re-emerged on July 17th. The purpose of this botnet is to steal sensitive information from victims or provide an installation base for additional malware such as TrickBot, which then in many cases will drop ransomware or other malware. So far, in the current wave, it was observed delivering QakBot.
In this blog post, we reveal some novel evasion techniques which assist the new wave of Emotet to avoid detection. We discovered how its evasion techniques work and how to overcome them. The first part of the malware execution is the loader which is examined in this article, with an emphasis on the unpacking process. We also partition the current wave into several clusters, each cluster has some unique shared properties among the samples.
A dataset of 38 thousand samples was created using data collected by the Cryptolaemus group, from July 17th to July 28th. This group divides the samples into three epochs, which are separate botnets that operate independently from one another.
Static information was extracted from each sample in order to find consistent patterns across the entire data set. The information that is relevant to identify patterns includes the size and entropy of the .text, .data, .rsrc sections, and the size of the whole file, so this information was extracted from all the files. We started our analysis with two samples, an overview for which is provided in the image below. By looking at each file size we can see that the ratio of these sections remains the same, and the entropy differs very little between the files.
Image shows different Emotet samples having sections with the same size and entropy
This also results in completely identical code. The two samples presented above were compared using the diaphora plugin for IDA, and all the functions were identical.
Comparing the code of the two samples shows they share the exact same subroutines
Grouping the entire dataset by the size of the files resulted in 272 unique sizes of files. The files matching each size were then checked to see if they have the same static information. This way we discovered 102 templates of Emotet samples.
Each sample in the dataset that matched a template was tagged with a template ID and with an epoch number, as indicated by the Cryptolaemus group. This means that the operators behind each epoch have their own Emotet loaders. The various templates might help reduce the detection rate of the samples used by the entire operation. If a specific template has a unique feature that can be signed, it won’t affect samples belonging to other templates and epochs.
Most packers today provide features such as various encryption algorithms, integrity checks and evasion techniques. These templates are most likely the result of different combinations of flags and parameters in the packing software. Each epoch has different configurations for the packing software resulting in clusters of files that have the same static information.
The Emotet loader contains a lot of benign code as part of its evasion. A.I. based security products rely on both malicious and benign features when classifying a file. In order to bypass products like that, malware authors can insert benign code into their executable files to reduce the chance of them being detected. This is called an adversarial attack and its effectiveness is seen in security solutions based on machine learning.
By looking at the analysis done by Intezer on a specific Emotet sample from the new wave, we can see that the benign code might be taken from Microsoft DLL files that are part of the Visual C++ runtime environment. Alternatively, the benign software could be completely unrelated to the functioning of the sample.
The following screenshot shows the similarities between the previous sample and a benign Microsoft DLL file, using the diaphora plugin:
The Emotet loader contains benign code taken from a Microsoft DLL
Under the column “Name” there are functions from the malware, and under the column “Name 2” there are functions from the benign file. As we can see, the malware contains much benign code which isn’t necessarily needed.
The next step is to check how much of the code is actually used. This can be done using the tool drcov by DynamoRIO. This tool executes binary file and tracks which parts of the code are used. The log produced by this tool can later be processed by the lighthouse plugin for IDA. This plugin integrates the execution log into the IDA database in order to visualize which functions are used. The analysis was performed on the sample shown so far, the result is that just 16.22% of the code is executed
The report produced by the lighthouse plugin showing which functions were executed
After we filtered out benign code that was injected into the executable, we can compare the code of different variants to locate the malicious functions which exist in every sample.
Diagram shows code from different clusters sharing the same malicious functionality.
Filtering out benign functions helps to reveal the malicious code, but it can also be found using dynamic analysis, which will be discussed in the next section.
The executable previously shown is an Emotet loader. The main purpose of the loader is to decrypt the payload hidden in the sample and execute it. The payload consists of a PE file and a shellcode that loads it. The encrypted payload will cause the section it resides in to have high entropy. Based on the dataset we collected, 87% of the files had the payload in the .data section and 13% of the files had the payload in the .rsrc section. Tools like pestudio show the entropy of each section and each resource. For example, the resources of a sample with the payload lies encrypted in the “9248” resource:
Pestudio showing the resources with the highest entropy in the Emotet loader
In order to find the code that decrypts the resource, we can put a hardware breakpoint at the start of the resource.
In cases where the payload is inside the .data section, it’s unclear where it starts. We can approximate it by calculating the entropy for small bulks of the section and find where the score starts to rise.
import sys import math import pefile BULK_SIZE = 256 def entropy(byteArr): arrSize = len(byteArr) freqList = [] for b in range(256): ctr = 0 for byte in byteArr: if byte == b: ctr += 1 freqList.append(float(ctr) / arrSize) ent = 0.0 for freq in freqList: if freq > 0: ent = ent + freq * math.log(freq, 2) ent = -ent return ent pe_file = pefile.PE(sys.argv[1]) data_section = next(section for section in pe_file.sections if section.Name == b'.data\x00\x00\x00') data_section_buffer = data_section.get_data() data_section_va = pe_file.OPTIONAL_HEADER.ImageBase + data_section.VirtualAddress buffer_size = len(data_section_buffer) bulks = [data_section_buffer[i:i+BULK_SIZE] for i in range(0, buffer_size, BULK_SIZE)] for i, bulk in enumerate(bulks): print(hex(data_section_va + i*BULK_SIZE), entropy(bulk))
Python script that calculates the entropy of small portions of the .data section
Once we know where the payload is located, we’ll be able to find the code that decrypts it, and that is where the malicious action starts.
For this part, we’ll look at the sample:
249269aae1e8a9c52f7f6ae93eb0466a5069870b14bf50ac22dc14099c2655db.
In this sample, the script indicates that the beginning of the data section contains the payload although it may vary in other samples. We will put the breakpoint at the address 0x406100. The breakpoint was hit at the address 0x40218C which is in the function sub_401F80. After looking at this function, we notice a few suspicious things:
1. This function builds strings on the stack in order to hide its intents. It uses GetProcAddress to find the address of VirtualAllocExNuma and calls it to allocate memory for the payload.
The loader conceals suspicious API calls
2. It calculates the parameters for VirtualAllocExNuma during runtime, to hide the allocation of RWX memory. The function atoi is being used to convert the string “64” to int, which is PAGE_EXECUTE_READWRITE. Also, the string “8192” is converted to 0x3000 which means the memory is allocated with the flags MEM_COMMIT and MEM_RESERVE.
The parameters were saved as strings to obfuscate the API call
The payload is then copied from the .data section to the RWX memory (that is where our breakpoint hit). The decryption routine is being called and then the shellcode is being executed.
The loader decrypts the payload and continues to the next step in the execution of the malware
In this blog post we looked at the static information of the new Emotet loader, revealed how to cluster similar samples, and found how to locate the malicious code and its payload. In addition, we exposed how the loader evades detection; primarily the loader hides the malicious API calls using obfuscation, but it also injects benign code to manipulate the algorithms of AI-based security products. Both processes have been shown to reduce the chance of the file being detected. The cumulative effect of these techniques makes the Emotet group one of the most advanced campaigns in the threat landscape.
Update
Read more about the hidden payload in the Emotet loader that is decrypted and then executed to successfully avoid being detected. | https://www.deepinstinct.com/2020/08/12/why-emotets-latest-wave-is-harder-to-catch-than-ever-before/ | CC-MAIN-2021-25 | refinedweb | 1,547 | 61.56 |
.
Steps
1. Download or clone the full GitHub project at (it will always be the newest version)
2. From a terminal go to /nix/ folder and launch the script. In OSX it is executed with:
cd nix/ .:
import sys print sys.path
You will get a loooo. /
6. Open a new file browser window and navigate to your downloaded GitHub project: /IfcOpenShell/src/ifcopenshell-python/ and copy the full /ifcopenshell/ folder
7. Paste it inside /site-packages/ folder. Now you should have something like:
:
/IfcOpenShell/build/Darwin/x86_64/build/ifcopenshell/[b]python-2.7[/b].10/ifcwrap/
10. Paste them inside /site-packages/ifcopenshell/
11. Check everything is in place:
Testing
Now that it is installed, let's check if everything works as expected:
12.1 in the Python console write::
house.FCStd house.ifc
12.3 Open house.FCStd, select the root "Building" object and export it (File →!
Final thoughts.
Cheers
Links
- Related forum thread discussion
> | https://wiki.freecadweb.org/index.php?title=Import/Export_IFC_-_compiling_IfcOpenShell&oldid=642478 | CC-MAIN-2020-45 | refinedweb | 157 | 70.39 |
Configuration and releases
This chapter is part of the Mix and OTP guide and it depends on previous chapters in this guide. For more information, read the introduction guide or check out the chapter index in the sidebar.
In this last chapter, we will make the routing table for our distributed key-value store configurable, and then finally package the software for production.
Let’s do this.
Application environment

If we want to make the routing table configurable, we first need to change our code to read the table from somewhere. One natural place to do so is the application environment: each application can store configuration by key in its own environment. Open up apps/kv/mix.exs and add a default entry for the routing table to the application/0 function:

```elixir
def application do
  [
    extra_applications: [:logger],
    env: [routing_table: []],
    mod: {KV, []}
  ]
end
```

The new :env key defines the application's default environment, here a single entry with key :routing_table and an empty list as its value. It makes sense to ship with an empty table, as the concrete routing depends on how the system is deployed. Next, change KV.Router.table/0 to read from the environment:

```elixir
@doc """
The routing table.
"""
def table do
  Application.fetch_env!(:kv, :routing_table)
end
```

We use Application.fetch_env!/2 to read the entry for :routing_table in :kv's environment. Since our routing table is now empty, our distributed tests should fail. Restart the apps and re-run tests to see the failure:
```console
$ iex --sname bar -S mix
$ elixir --sname foo -S mix test --only distributed
```
We need a way to configure the application environment. That’s when we use configuration files.
Configuration
Configuration files provide a mechanism for us to configure the environment of any application. Such configuration is done by the
config/config.exs file.
For example, we can configure IEx default prompt to another value. Let’s create the
config/config.exs file with the following content:
```elixir
import Config

config :iex, default_prompt: ">>>"
```
Start IEx with
iex -S mix and you can see that the IEx prompt has changed.
This means we can also configure our
:routing_table directly in the
config/config.exs file. However, which configuration value should we use?
Currently we have two tests tagged with
@tag :distributed: the “server interaction” test in KVServerTest and the “route requests across nodes” test in
KV.RouterTest. Both tests are failing since they require a routing table, which is currently empty.
The
KV.RouterTest truly has to be distributed, as its purpose is to test the distribution. However, the test in
KVServerTest was only made distributed because we had a hardcoded distributed routing table, which we couldn’t configure, but now we can!
Therefore, in order to minimize the distributed tests, let's pick a routing table that does not require distribution. Then, for the distributed tests, we will programmatically change the routing table. Back in
config/config.exs, add this line:
config :kv, :routing_table, [{?a..?z, node()}]
This configures a routing table that always points to the current node. Now remove
@tag :distributed from the test in
test/kv_server_test.exs and run the suite, the test should now pass.
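To unpack the entry above: ?a is Elixir's syntax for the codepoint of the letter a, so ?a..?z matches any lowercase first letter, and node() is the current node. Recalling the router from the previous chapter, the table lookup boils down to something like this simplified sketch (illustrative only, not the exact KV.Router code):

```elixir
# Simplified, single-node sketch of the routing table lookup
# (the real KV.Router.route/4 then dispatches the call to the chosen node).
table = [{?a..?z, :"foo@computer-name"}]

# The first byte of the bucket name picks the matching range:
first = :binary.first("shopping")  # ?s, i.e. 115

{_range, target} = Enum.find(table, fn {range, _node} -> first in range end)
IO.inspect(target)  #=> :"foo@computer-name"
```

With the table from config/config.exs, every bucket resolves to the current node, which is exactly why the test no longer needs a second node.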
Now we only need to make
KV.RouterTest pass once again. To do so, we will write a setup block that runs before all tests in that file. The setup block will change the application environment and revert it back once we are done, like this:
```elixir
defmodule KV.RouterTest do
  use ExUnit.Case

  setup_all do
    current = Application.get_env(:kv, :routing_table)

    Application.put_env(:kv, :routing_table, [
      {?a..?m, :"foo@computer-name"},
      {?n..?z, :"bar@computer-name"}
    ])

    on_exit fn ->
      Application.put_env(:kv, :routing_table, current)
    end
  end

  @tag :distributed
  test "route requests across nodes" do
```
Note we removed
async: true from
use ExUnit.Case. Since the application environment is a global storage, tests that modify it cannot run concurrently. With all changes in place, all tests should pass, including the distributed one.
Custom configuration
At this point, you may be wondering, how can we make two nodes start with two different routing tables? One option is to use the
--config flag in
mix run. For example, you could write two extra configuration files,
config/foo.exs and
config/bar.exs, with two distinct routing tables and then:
```console
$ elixir --sname foo -S mix run --config config/foo.exs
$ elixir --sname bar -S mix run --config config/bar.exs
```
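For illustration, config/foo.exs could contain something like the sketch below (the node names are placeholders for your machine), with config/bar.exs holding the mirrored table:

```elixir
# config/foo.exs — hypothetical example; adjust the node names to your machine.
import Config

config :kv, :routing_table, [
  {?a..?m, :"foo@computer-name"},
  {?n..?z, :"bar@computer-name"}
]
```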
There are two concerns with this approach.
First, if the routing tables are the opposite of each other, such as
[{?a..?m, :"foo@computer-name"}, {?n..?z, :"bar@computer-name"}] in one node and
[{?a..?m, :"bar@computer-name"}, {?n..?z, :"foo@computer-name"}] in the other, you can have a routing request that will run recursively in the cluster infinitely. This can be tackled at the application level by making sure you pass a list of seen nodes when we route, such as
KV.Router.route(bucket, mod, fun, args, seen_nodes). Then by checking if the node being dispatched to was already visited, we can avoid the cycle. Implementing and testing this functionality will be left as an exercise.
The second concern is that, while using
mix run is completely fine to run our software in production, the command we use to start our services is getting increasingly more complex. For example, imagine we also want to
--preload-modules, so all code is loaded upfront, as well as set the
MIX_ENV=prod environment variable:
$ MIX_ENV=prod elixir --sname foo -S mix run --preload-modules --config config/foo.exs
Luckily, Elixir comes with the ability to package all of the code we have written so far into a single directory that also includes Elixir and the Erlang Virtual Machine, has a simple entry point, and supports custom configuration. This feature is called releases, and it provides many other benefits, which we will see next.

Releases

A release is a self-contained directory that consists of your application code, all of its dependencies, plus the whole Erlang Virtual Machine (VM) and runtime. Once a release is assembled, it can be packaged and deployed to a target, as long as the target runs on the same operating system (OS) distribution and version as the machine that assembled the release.
In a regular project, we can assemble a release by simply running
mix release. However, we have an umbrella project, and in such cases Elixir requires some extra input from us. Let’s see what is necessary:
```console
$ MIX_ENV=prod mix release
** (Mix) Umbrella projects require releases to be explicitly defined with a non-empty applications key that chooses which umbrella children should be part of the releases:

    releases: [
      foo: [
        applications: [child_app_foo: :permanent]
      ],
      bar: [
        applications: [child_app_bar: :permanent]
      ]
    ]

Alternatively you can perform the release from the children applications
```
That’s because an umbrella project gives us plenty of options when deploying the software. We can:
- deploy all applications in the umbrella to a node that works as both TCP server and key-value storage
- deploy the :kv_server application to work only as a TCP server, as long as the routing table points only to other nodes
- deploy only the :kv application when we want a node to work only as storage (no TCP access)

As a starting point, let's define a release that includes both
:kv_server and
:kv applications. We will also add a version to it. Open up the
mix.exs in the umbrella root and add inside
def project:
```elixir
releases: [
  foo: [
    version: "0.0.1",
    applications: [kv_server: :permanent, kv: :permanent]
  ]
]
```
That defines a release named
foo with both
kv_server and
kv applications. Their mode is set to
:permanent, which means that, if those applications crash, the whole node terminates. That’s reasonable since those applications are essential to our system. With the configuration in place, let’s give assembling the release another try:
```console
$ MIX_ENV=prod mix release foo
* assembling foo-0.0.1 on MIX_ENV=prod
* skipping runtime configuration (config/releases.exs not found)

Release created at _build/prod/rel/foo!

    # To start your system
    _build/prod/rel/foo/bin/foo start

Once the release is running:

    # To connect to it remotely
    _build/prod/rel/foo/bin/foo remote

    # To stop it gracefully (you may also send SIGINT/SIGTERM)
    _build/prod/rel/foo/bin/foo stop

To list all commands:

    _build/prod/rel/foo/bin/foo
```
Excellent! A release was assembled in
_build/prod/rel/foo. Inside the release, there will be a
bin/foo file which is the entry point to your system. It supports multiple commands, such as:
- bin/foo start, bin/foo start_iex, bin/foo restart, and bin/foo stop - for general management of the release
- bin/foo rpc COMMAND and bin/foo remote - for running commands on the running system or to connect to the running system
- bin/foo eval COMMAND - to start a fresh system that runs a single command and then shuts down
- bin/foo daemon and bin/foo daemon_iex - to start the system as a daemon on Unix-like systems
- bin/foo install - to install the system as a service on Windows machines
If you run
bin/foo start, it will start the system using a short name (
--sname) equal to the release name, which in this case is
foo. The next step is to start a system named
bar, so we can connect
foo and
bar together, like we did in the previous chapter. But before we achieve this, let's talk a bit about the benefits of releases.
The main benefits of releases are:

- Code preloading. Releases run in embedded mode, which loads all available modules upfront, guaranteeing the system is ready to handle requests as soon as it boots.
- Configuration and customization. Releases give developers fine-grained control over system configuration and the VM flags used to start the system.
- Self-contained. A release does not require the source code, nor even Erlang or Elixir, to be installed on the target, as the release includes the Erlang VM and its runtime by default.
- Multiple releases. You can assemble different releases with different configuration per application, or even with different applications altogether.
- Management scripts. Releases come with scripts to start and restart the system, connect to it remotely, execute RPC calls, run it as a daemon, and more.

We have written extensive documentation on releases, so please check the official docs for more information. For now, we will continue exploring some of the features outlined above.
Assembling multiple releases
So far, we have assembled a release named
foo, but our routing table contains information for both
foo and
bar. Let’s start
foo:
```console
$ _build/prod/rel/foo/bin/foo start
16:58:58.508 [info] Accepting connections on port 4040
```
And let’s connect to it and issue a request in another terminal:
```console
$ telnet 127.0.0.1 4040
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
GET shopping foo
Connection closed by foreign host.
```
Since the “shopping” bucket would be stored on
bar, the request fails as
bar is not available. If you go back to the terminal running
foo, you will see:
```
17:16:19.555 [error] Task #PID<0.622.0> started from #PID<0.620.0> terminating
** (stop) exited in: GenServer.call({KV.RouterTasks, :"bar@computer-name"}, {:start_task, [{:"foo@josemac-2", #PID<0.622.0>, #PID<0.622.0>}, [#PID<0.622.0>, #PID<0.620.0>, #PID<0.618.0>], :monitor, {KV.Router, :route, ["shopping", KV.Registry, :lookup, [KV.Registry, "shopping"]]}], :temporary, nil}, :infinity)
    ** (EXIT) no connection to bar@computer-name
    (elixir) lib/gen_server.ex:1010: GenServer.call/3
    (elixir) lib/task/supervisor.ex:454: Task.Supervisor.async/6
    (kv) lib/kv/router.ex:21: KV.Router.route/4
    (kv_server) lib/kv_server/command.ex:74: KVServer.Command.lookup/2
    (kv_server) lib/kv_server.ex:29: KVServer.serve/1
    (elixir) lib/task/supervised.ex:90: Task.Supervised.invoke_mfa/2
    (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
Function: #Function<0.128611034/0 in KVServer.loop_acceptor/1>
    Args: []
```
Let’s now define a release for
:bar. One first step could be to define a release exactly like
foo inside
mix.exs:
releases: [ foo: [ version: "0.0.1", applications: [kv_server: :permanent, kv: :permanent] ], bar: [ version: "0.0.1", applications: [kv_server: :permanent, kv: :permanent] ] ]
And now let’s assemble it:
$ MIX_ENV=prod mix release bar
And then start it:
$ _build/prod/rel/bar/bin/bar start
If you start
bar while
foo is still running, you will see an error like the error below happen 5 times, before the application finally shuts down:
17:21:57.567 [error] Task #PID<0.620.0> started from KVServer.Supervisor terminating ** (MatchError) no match of right hand side value: {:error, :eaddrinuse} (kv_server) lib/kv_server.ex:12: KVServer.accept/1 (elixir) lib/task/supervised.ex:90: Task.Supervised.invoke_mfa/2 (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3 Function: #Function<0.98032413/0 in KVServer.Application.start/2> Args: []
That’s happening because the release
foo is already listening on port
4040 and
bar is trying to do the same! One option could be to move the
:port configuration to the application environment, like we did for the routing table. But let’s try something else. Let’s make it so the
bar release contains only the
:kv application. So it works as a storage but it won’t have a front-end. Change the
:bar information to this:
releases: [ foo: [ version: "0.0.1", applications: [kv_server: :permanent, kv: :permanent] ], bar: [ version: "0.0.1", applications: [kv: :permanent] ] ]
And now let’s assemble it once more:
$ MIX_ENV=prod mix release bar
And finally successfully boot it:
$ _build/prod/rel/bar/bin/bar start
If you connect to localhost once again and perform another request, now everything should work, as long as the routing table contains the correct node names. Outstanding!
With releases, we were able to “cut different slices” of our project and prepared them to run in production, all packaged into a single directory.
Configuring releases already explored
config/config.exs. Now let’s talk about
rel/env.sh.eex and then
config/releases.exs before we end this chapter.
Operating System environment configuration
Every release contains an environment file, named
env.sh on Unix-like systems and
env.bat on Windows machines, that executes before the Elixir system starts. In this file, you can execute any OS-level code, such as invoke other applications, set environment variables and so on. Some of those environment variables can even configure how the release itself runs.
For instance, releases run using short-names (
--sname). However, if you want to actually run a distributed key-value store in production, you will need multiple nodes and start the release with the
--name option. We can achieve this by setting the
RELEASE_DISTRIBUTION environment variable inside the
env.sh and
env.bat files. Mix already has a template for said files which we can customize, so let’s ask Mix to copy them to our application:
$ mix release.init * creating rel/vm.args.eex * creating rel/env.sh.eex * creating rel/env.bat.eex
If you open up
rel/env.sh.eex, you will see:
#!/bin/sh # Sets and enables heart (recommended only in daemon mode) # if [ "$RELEASE_COMMAND" = "daemon" ] || [ "$RELEASE_COMMAND" = "daemon_iex" ]; then # HEART_COMMAND="$RELEASE_ROOT/bin/$RELEASE_NAME $RELEASE_COMMAND" # export HEART_COMMAND # export ELIXIR_ERL_OPTIONS="-heart" # fi # Set the release to work across nodes # export RELEASE_DISTRIBUTION=name # export RELEASE_NODE=<%= @release.name %>@127.0.0.1
The steps necessary to work across nodes is already commented out as an example. You can enable full distribution by uncommenting the last two lines by removing the leading
# .
If you are on Windows, you will have to open up
rel/env.bat.eex, where you will find this:
@echo off rem Set the release to work across nodes rem set RELEASE_DISTRIBUTION=name rem set RELEASE_NODE=<%= @release.name %>@127.0.0.1
Once again, uncomment the last two lines by removing the leading
rem to enable full distribution. And that’s all!
Runtime configuration
Another common need in releases is to compute configuration when the release runs, not when the release is assembled. The
config/config.exs file we defined at the beginning of this chapter runs on every Mix command, when we build, test and run our application. This is great, because it provides a unified configuration for dev, test, and prod.
However, your production environments may have specific needs. For example, right now we are hardcoding the routing table, but in production, you may need to read the routing table from disk, from another service, or even reach out to your orchestration tool, like Kubernetes. This can be done by adding a
config/releases.exs. As the name says, this file runs every time the release starts. For instance, you could do:
import Config {table, _} = Code.eval_file("routing_table_from_disk.exs") config :kv, :routing_table, table
Or perhaps you want to make the
KVServer port configurable, and the value for the port is only given at runtime:
import Config config :kv_server, :port, System.fetch_env!("PORT")
config/releases.exs files work very similar to regular
config/config.exs files, but they may have some restrictions. You can read the documentation for more information.
Summing up storage nodes are added to your live system.
Of course, Elixir can be used for much more than distributed key-value stores. Embedded systems, data-processing and data-ingestion, web applications, streaming systems, and others are many of the different domains Elixir excels at. We hope this guide has prepared you to explore any of those domains or any future domain you may desire to bring Elixir into.
Happy coding! | https://elixir-lang.org/getting-started/mix-otp/config-and-releases.html | CC-MAIN-2019-47 | refinedweb | 2,515 | 58.08 |
Created attachment 18810 [details]
Sample Project
** Overview **:
The following scenario causes a null reference exception when XAMLC is used in 2.3.3 at compile time:
// In AppProject1 and has a reference to LibraryProject1
> <Label Text="{x:Static constants2:Globals.AppName}"/>
Where `constants2` is a namespace from another separate referenced library project.
// In LibraryProject1
> public class Globals
> {
> public const string AppName = "My Xamarin Forms App 2";
> }
This results in a null reference exception for Globals.AppName.
** Steps to Reproduce **
1. Open the attached project in VS
2. Build the XFApp project
** Actual Results **:
> Object reference not set to an instance of an object. XFApp <path>\XFApp\XFApp.Views.MainPage.xaml
Occasionally, I was getting an error that the XAMLC task failed unexpectedly. However, Rebuilding usually surfaced this NRE.
** Expected Results **:
No NRE at compile time.
** Additional Information **:
Downgrading to the project back to Forms 2.3.2.127 does not reproduce the issue anymore. It only starts happening after updating to 2.3.3.175
Marking the Globals class as static, and the AppName field as static also works around this.
Referencing similar code that is included in the same assembly does not cause the issue, see commented out section in MainPage.xaml
Guys
Just to confirm , the attached project that I supplied fails in both XF 2.3.3.175 and 2.3.3.168. The issue has nothing to do with Prism as same issue manifests itself outside of Prism. The project builds successfully in 2.3.2.127 with and without XAMLCompilation but fails >= 2.3.3.168 with XAMLCompilation and using x:Static referencing constants in separate project.
Since we can only move to >= 2.3.3.168 without enabling XAMLCompilation it would be great if we can increase priority of this bug as most prism mobile apps are modular in nature and thus do not confine themselves to just referencing statics within same assembly as defining view.
@rohan,
I removed all Prism stuff from the Sample Project I attached.
yep my *bad* , didnt look at the attachment and thought it was the same attachment I sent you - but your quite right , issue exists regardless of any third party mvvm frameworks..
I thought this PR would have solved this, but apparently not :(
It works for public static string, but fails for public const string
fixed in
Should be fixed in 2.3.4-pre2. Thanks! | https://bugzilla.xamarin.com/show_bug.cgi?format=multiple&id=49228 | CC-MAIN-2019-09 | refinedweb | 399 | 57.37 |
This document provides detailed reference documentation for the Google Base data API.
This document is intended for programmers who want to write client applications that can interact with Google Base.
It's a reference document; it assumes that you understand the concepts presented in the Developer's Guide and the general ideas behind the Google data APIs protocol. It also assumes that you have read the Getting Started guide to learn how manage items in Google Base, as well as the Attributes and Queries chapter to learn how to run queries.
Base provides a variety of representations of Base data. There are two main types of feeds: data feeds and metadata feeds. Data feeds are used to manage data, such as product or real estate listings. The read-only metadata feeds are used to obtain information about how the data feeds should be constructed, including locale-specific information or which attributes they should use.
There are two main data feeds: the snippets feed and the items feed.
Snippets feedSnippets feed
The URL for the snippets feed is:
The snippets feed is read-only.
The snippets feed contains all Google Base data and is available to anyone to query against without a need for authentication. The snippets feed provides access to all content in Google Base, but may return items with a shortened description. Snippets feed entries do not include private attributes.
The URL for the items feed is:
The items feed is read-write.
The items feed contains a customer-specific subset of Google Base data. The items feed requires authentication.
You can use the customer items feed to insert, update, delete, or search for your own data. In response to queries, this feed returns a list of your own data items that match the query provided as a parameter in the URL of the feed. The difference between this feed and the snippets feed is that the items feed contains only the customer's own data. Thus any query against this feed will only retrieve your own entries, which means you may get different results than you would by running the same query on the public snippets feed.
In Google Base, each item type has a set of recommended attributes predefined for it. To see this list, you can request an item types feed or look at the Recommended Attributes document. You do not have to restrict yourself to this list, but if you do, you can get more statistics about your data.
The items feed has an associated media feed.
The URL for each media feed is:
Each media feed is read-write.
You can use the media feeds to manage binary attachments to your Google Base items.
The media feeds differ.
There are three metadata feeds: the itemtypes feed, the attributes, and the locales feed.
The URL for the itemtypes feed is:<locale>
The itemtypes feed is read-only.
It contains a complete description of Google Base structures and allows you to query for different types of metadata. It provides a list of attributes associated with each of the Google Base item types. See Attributes and Queries for some sample queries that show how to use this feed.
The URL for the attributes feed is:
The attributes feed is read-only.
It provides statistics about how an item type has been used and lists what values have been used frequently for its attributes. This feed can also show you how others have defined the item in existing entries. This information can be very helpful as you define new items to upload. See the language-specific developer's guides linked on the left for code samples that show how to access this feed through the Google Base client libraries.
The URL for the locales feed is:<locale>
The locales feed is read-only.
It defines the permitted locales for Google Base. The locale value identifies the language, currency, and date formats used in a feed.
locale limits the feed to return information for a single location.
Currently, the following locales are defined:
Google Base supports attributes of various types. The following attribute types are supported in Google Base:
This section describes the query and path parameters that can be used on the Google Base feeds.
Base supports the following standard data API query parameters on all feeds.
The list of parameters is separated from the URL by the
? character. The
& character is used to separate each subsequent parameter in the list.
The following URL parameters are supported for Google Base queries on all feeds.
In addition, Base supports the following data API query parameters on the noted feeds.
In addition to the standard data API query parameters, the noted read/write feeds support the following insert, update, delete, and batch parameters.
The Google Base data API uses three types of elements: data feed elements, metadata feed elements, and media feed elements. For information about the standard Google data API elements, see the Atom specification and the Kinds document.
Data feed elements are used in the items and snippets feeds to describe data.
Each entry of the snippets or items feed corresponds to one data item in Google Base.
Each sub-element of the entry in the
g: namespace corresponds to an attribute
of the corresponding data item. Sub-elements have the form:
<g:attrib_name value </g:attrib_name>
These are defined as follows:
The media feed uses the following elements. These elements belong to the
media namespace.
Metadata feed elements are used in the itemtypes, attributes, and locales feeds to represent metadata.
Google Base uses the following elements in the itemtypes feed:
Note that these elements use the
gm: namespace. This namespace contains
the metadata attributes that are used in the item types and attributes
feeds. No other feeds use them and they are read-only, so users do not
need to use this
gm: namespace to input the attributes. The only impact
on users is that you will see an extra line in the response, giving the
location of the namespace,
xmlns:gm
='. | http://code.google.com/apis/base/reference.html | crawl-001 | refinedweb | 1,005 | 63.59 |
Utilities for research involving words.
Project description
Python library for some utilities that may be useful for word research.
Note
This project is a work in progress. New functions may be added at any point. See CHANGELOG.md for important changes.
Features
- Extract consonant clusters from a word or list of words.
- Find neighbors of words.
- Other neighbor utilities, e.g., type of neighbor relationship, position of divergence.
- Get the syllable count for words or a list of words, or filter lists of words by number of syllables.
- Get the consonant-vowel structure of a word.
Requirements
- Python 3
Installation
Using pip install
pip3 install lexlib
From source
git clone for the development version
git clone cd lexlib pip install
Download a release
Download the latest release
In a terminal (remember to update the path):
cd path/to/download/lexlib-x.y.z.tar.gz tar -xvf lexlib-x.y.z.tar.gz cd lexlib-x.y.z/ pip install
Usage
To import the package to use in your project
import lexlib as lx
Note
All of the submodules are imported by lexlib. That means that you can call, for example, lx.get_neighbor_pairs() instead of lx.neighbors.get_neighbor_pairs().
For documentation on specific functions, see the docs/ directory or the online documentation or enter help(lx.function_name) into your interpreter.
License
This package is licensed under the terms of the MIT license, copyright (c) 2016-2019 R. Steiner.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/lexlib/ | CC-MAIN-2019-51 | refinedweb | 264 | 60.51 |
I have an MVC program that is uploading data from a .csv file to a SQL database. I am now trying to display the data uploaded with a WebGrid table. All the examples that I have seen demonstrate only displaying one complete table at a time.
I am new to using MVC and WebGrid, so first of all I was wondering if this was the right approach to this problem, and secondly, if this approach is the best route, how will I have to set up the Views to display data from 3 different tables. Will it require 3 different controllers & 3 different views, and will I have to have multiple Data Models? Any input would be really appreciated.
Here are some MVC best practices:
Your view model should be what you want to display on the view
This seems obvious, but at first it is not. Start by creating your view model, and when you're doing that assume that you know nothing about your data store.
Your view model should not know/care how your data is stored
The view's job is just to display some data, that is all. Your view model should be something like this:
public class ViewModel123 { public int ID {get;set;} public string foo {get;set;}//this may come from table A or table B, it does not matter public string bar {get;set;}//this may come from table A or table B, it does not matter }
Create a data access layer
This is the layer that gets data out of the database from N number of tables. Assuming you're using EF or linq-to-sql the method to get the data would look something like this:
public IEnumerable<ViewModel123> GetData() { return DatabaseHande.SomeThing.Select(x=> new ViewModel123 { ID = x.id, Foo = x.Foo, Bar = x.LinkTable.Bar }); }
Have your controller call your data access layer and return the view model
You controller can now call the data access layer and return the view model to the view. More pseudo code:
public ActionResult List() { var viewModelDataRows = _dataAccessClass.GetData(); return View(viewModelDataRows); } | http://www.dlxedu.com/askdetail/3/0e20c366dfcfa5bab3d4479d4f02a2ec.html | CC-MAIN-2019-04 | refinedweb | 350 | 61.36 |
Get the highlights in your inbox every week.
Top 3 Python libraries for data science
3 top Python libraries for data science
Turn Python into a scientific data analysis and modeling tool with these libraries.
Subscribe now
Python's many attractions—such as efficiency, code readability, and speed—have made it the go-to programming language for data science enthusiasts. Python is usually the preferred choice for data scientists and machine learning experts who want to escalate the functionalities of their applications. (For example, Andrey Bulezyuk used the Python programming language to create an amazing machine learning application.)
Because of its extensive usage, Python has a huge number of libraries that make it easier for data scientists to complete complicated tasks without many coding hassles. Here are the top 3 Python libraries for data science; check them out if you want to kickstart your career in the field.
1. NumPy.The library empowers Python with substantial data structures for effortlessly performing multi-dimensional arrays and matrices calculations. Besides its uses in solving linear algebra equations and other mathematical calculations, NumPy is also used as a versatile multi-dimensional container for different types of generic data.
Furthermore, it integrates flawlessly with other programming languages like C/C++ and Fortran. The versatility of the NumPy library allows it to easily and swiftly coalesce with an extensive range of databases and tools. For example, let's see how NumPy (abbreviated np) can be used for multiplying two matrices.
Let's start by importing the library (we'll be using the Jupyter notebook for these examples).
import numpy as np
Next, let's use the eye() function to generate an identity matrix with the stipulated dimensions.
matrix_one = np.eye(3)
matrix_one
Here is the output:
array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
Let's generate another 3x3 matrix.
We'll use the arange([starting number], [stopping number]) function to arrange numbers. Note that the first parameter in the function is the initial number to be listed and the last number is not included in the generated results.
Also, the reshape() function is applied to modify the dimensions of the originally generated matrix into the desired dimension. For the matrices to be "multiply-able," they should be of the same dimension.
matrix_two = np.arange(1,10).reshape(3,3)
matrix_two
Here is the output:
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
Let's use the dot() function to multiply the two matrices.
matrix_multiply = np.dot(matrix_one, matrix_two)
matrix_multiply
Here is the output:
array([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.]])
Great!
We managed to multiply two matrices without using vanilla Python.
Here is the entire code for this example:
import numpy as np
#generating a 3 by 3 identity matrix
matrix_one = np.eye(3)
matrix_one
#generating another 3 by 3 matrix for multiplication
matrix_two = np.arange(1,10).reshape(3,3)
matrix_two
#multiplying the two arrays
matrix_multiply = np.dot(matrix_one, matrix_two)
matrix_multiply
2. Pandas
Pandas is another great library that can enhance your Python skills for data science. Just like NumPy, it belongs to the family of SciPy open source software and is available under the BSD free software license.
Pandas offers versatile and powerful tools for munging data structures and performing extensive data analysis. The library works well with incomplete, unstructured, and unordered real-world data—and comes with tools for shaping, aggregating, analyzing, and visualizing datasets.
There are three types of data structures in this library:
- Series: single-dimensional, homogeneous array
- DataFrame: two-dimensional with heterogeneously typed columns
- Panel: three-dimensional, size-mutable array
For example, let's see how the Panda Python library (abbreviated pd) can be used for performing some descriptive statistical calculations.
Let's start by importing the library.
import pandas as pd
Let's create'])
}
Let's create a DataFrame.
df = pd.DataFrame(d)
Here is a nice table of the output:
Name Programming Language Years of Experience
0 Alfrick Python 5
1 Michael JavaScript 9
2 Wendy PHP 1
3 Paul C++ 4
4 Dusan Java 3
5 George Scala 4
6 Andreas React 7
7 Irene Ruby 9
8 Sagar Angular 6
9 Simon PHP 8
10 James Python 3
11 Rose JavaScript 1
Here is the entire code for this example:
import pandas as pd
#creating'])
}
#Create a DataFrame
df = pd.DataFrame(d)
print(df)
3. Matplotlib.
Let's start by importing the library.
from matplotlib import pyplot as plt
Let's generate values for both the x-axis and the y-axis.
x = [2, 4, 6, 8, 10]
y = [10, 11, 6, 7, 4]
Let's call the function for plotting the bar chart.
plt.bar(x,y)
Let's show the plot.
plt.show()
Here is the bar chart:
Here is the entire code for this example:
#importing Matplotlib Python library
from matplotlib import pyplot as plt
#same as import matplotlib.pyplot as plt
#generating values for x-axis
x = [2, 4, 6, 8, 10]
#generating vaues for y-axis
y = [10, 11, 6, 7, 4]
#calling function for plotting the bar chart
plt.bar(x,y)
#showing the plot
plt.show()
Wrapping up
The Python programming language has always done a good job in data crunching and preparation, but less so for complicated scientific data analysis and modeling. The top Python frameworks for data science help fill this gap, allowing you to carry out complex mathematical computations and create sophisticated models that make sense of your data.
Which other Python data-mining libraries do you know? What's your experience with them? Please share your comments below. | https://opensource.com/article/18/9/top-3-python-libraries-data-science | CC-MAIN-2019-39 | refinedweb | 932 | 53.31 |
I originally had my launch code in app.js for testing purposes, I'd like to move it to another folder in another file as a custom application.
This is what app.js looks like now:
Ext.application({
name: 'MyApp',});
namespaces: 'MyApp',
requires: 'MyApp.Service'
I'm only guessing this is what it should look like, based on many sources throughout the web. The official documentation isn't really clear on what needs to go there. It tells you what can go there, but I'm aiming for a minimum here.
The code I want to link is currently located in Service.js as follows:
app.js
app/MyApp/Service.js
Again, I'm only guessing this is the correct directory structure. The official documentation isn't really clear on how this should be set up. I have no problem changing this, adding directories, flattening, etc.
Service.js looks like this:
Ext.define('MyApp.Service', {
extend: 'Ext.app.Application',
launch: function () {// When the application is ready, set up
Ext.onReady(this.setUp, this, {priority: -500});},
setUp: function() {// Code here}});
The code for "launch" and "setUp" was originally in app.js, where it worked okay. I'm guessing that this is how to define MyApp.Service, not really sure. The official documentation isn't really clear on what needs to go there. Again, it tells you what can go there, but that doesn't really help.
Running the above I get a blank screen and a warning in console:
[W] Missing namespace for MyApp.Service, please define it in namespaces property of your Application class.
When it's building, I can see hints that it's pulling in Service.js code, but it's just not connecting to it.
I've been hammering at this for hours now and I'm really at a loss as to what to do, any help would be appreciated. | https://www.sencha.com/forum/showthread.php?469140-Help-adding-a-new-class-to-an-application&p=1315726 | CC-MAIN-2018-30 | refinedweb | 314 | 69.48 |
Crystal Space
Show Posts
1. Associate Projects / CEL Discussion / Re: CELStart Python issues? (on: April 28, 2009, 04:34:36 pm)
What OS are you running? I'm guessing Windows since, if memory serves, that was the only case where I've seen that error... (Unfortunately, it's been over a year now, I think, since I've played with celstart. I tried running my pycelstart example that I know used to work and got errors that I'll have to fix at some point.)
Also, are there other python examples besides the dinosaur one posted that you're using to test?
2. Crystal Space Development / Support / Re: Some questions about how to insert net code in "simpmap"? (on: March 05, 2009, 09:17:03 pm)
Are you sure you're doing non-blocking sockets? (and not doing something like while (!data = recv(100)))
I don't have any network code lying around, but basically, you should probably be doing something like either:
Option 1: select-based polling:
FD_SET(sock, &r_fd); // add socket to r_fd ... man select should give you some examples
select(sock + 1, &r_fd, &w_fd, &e_fd, &timeout); // bah ... I can never remember the order of r,w,e
if (FD_ISSET(sock, &r_fd)) // check to see if the socket is ready to be read
    recv(sock, data, 1000, 0); // would block if there was nothing there, but select says it's safe
Option 2: non-blocking sockets:
// after creating the socket
fcntl(sock, F_SETFL, O_NONBLOCK); // man fcntl ... you probably need to include fcntl.h
in your ProcessFrame:
retval = recv(sock, data, 1000, 0); // read the man page ... 0 means the peer closed the connection, -1 means error, positive definitely indicates the number of bytes read... if you want to be slightly more robust, you can check errno ... EAGAIN indicates nothing was ready, other errors indicate things like the fact that the socket is dead so you'd want to either quit or try reconnecting.
Edit to add: if you haven't done so before, I'd strongly suggest you implement, or at least take apart and minimally extend, a non-blocking chat client/server (there should be plenty of tutorials on the web)... just using basic printf/gets to send/recv data over a network, to avoid having to deal with the complexities of networking and of learning Crystal Space at the same time.
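For what it's worth, both patterns can be sketched in plain Python, away from Crystal Space entirely. A socketpair here just stands in for a real client/server connection, and poll_recv is an illustrative name, not anything from CS/CEL:

```python
import select
import socket

# A connected pair of sockets stands in for the client/server link.
server, client = socket.socketpair()
client.setblocking(False)  # option 2: the socket itself is non-blocking

def poll_recv(sock, nbytes=1000):
    # Option 1: a select() with zero timeout, so the poll never blocks.
    readable, _, _ = select.select([sock], [], [], 0)
    if sock in readable:
        return sock.recv(nbytes)  # safe: select says data is waiting
    return None  # nothing arrived this frame

print(poll_recv(client))          # nothing sent yet, so None
server.sendall(b"move 3 0 0")
print(poll_recv(client))          # b'move 3 0 0'
```

Calling something like poll_recv once per frame gives the ProcessFrame-style polling discussed above.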
3. Crystal Space Development / Support / Re: Some questions about how to insert net code in "simpmap"? (on: February 12, 2009, 12:31:31 am)
Okay... I went back and looked at my code... I had migrated to using a thread (pthreads) so that I could get data from the server at a rate that was independent of the FPS calculations. Having said that, the easiest thing to do is just put your recv() calls in ProcessFrame. You will have some delay: if the server moves an object 3 units at time T, it may not be shown moving on the screen until T+k, where k is the delay to send across the wire plus the delay between when the client gets the data and when ProcessFrame gets called. But assuming client and server are either on the same machine or at least connected via LAN, and you're getting 30 fps, this shouldn't be the end of the world... I would suggest going with that approach until you've played with CS more, and only then worry about doing more advanced interpolation etc. (Note that depending on what you're using (i.e. if you are using some of the CEL behaviors that move objects for you), the delay may not be noticeable... most of the time the server update is consistent with where CEL was dead reckoning you should be, and the rest is somewhat unavoidable just from the nature of doing distributed processing.)
So... did that answer your question, or did you already try putting the recv() calls in ProcessFrame? [Note that even in my code with threads, I basically had a get-data loop in the separate thread to constantly update the client state (with some hand-done dead reckoning if I hadn't gotten any updates from the server recently), and then in ProcessFrame I used the mutex to prevent doing stuff during an update of the local client state, and then used the current client state to do all my positioning. But I'd recommend avoiding threads during initial development (other than to keep them in mind), as I'm not a huge fan of debugging multi-threaded applications if I can help it.]
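The thread-plus-mutex arrangement described above looks roughly like this in plain Python (receive_loop and process_frame are made-up names standing in for the network thread and the frame callback):

```python
import threading

state_lock = threading.Lock()
client_state = {"pos": (0.0, 0.0, 0.0)}  # shared between the two "sides"

def receive_loop(updates):
    # Stand-in for the network thread: each server update replaces the state.
    for pos in updates:
        with state_lock:            # never mutate while a frame is reading
            client_state["pos"] = pos

def process_frame():
    with state_lock:                # snapshot the state under the mutex...
        pos = client_state["pos"]
    return pos                      # ...then do positioning outside the lock

t = threading.Thread(target=receive_loop,
                     args=([(1.0, 0.0, 0.0), (2.0, 0.0, 0.0)],))
t.start()
t.join()
print(process_frame())              # (2.0, 0.0, 0.0) after both updates land
```

Holding the lock only long enough to copy the state keeps the frame callback from stalling on slow network work.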
4. Crystal Space Development / Support / Re: Some questions about how to insert net code in "simpmap"? (on: February 09, 2009, 07:00:20 pm)
Could you clarify what you mean by "in time"? More directly:
What did you try already?
What type of sockets are you using (TCP or UDP)?
What type of I/O are you using (blocking or non-blocking)?
A long time ago (before I opted to switch over to python / cel and then have my video card fail out) I had a fairly simple app built on top of simpmap that was set up to send stuff over non-blocking UDP sockets... if memory serves, I did both the send and receive in the same ProcessFrame function and don't recall having any problems as long as the amount of data was reasonably small. (I had problems if I sent too much data from server to client and the client started getting multiple messages glued together that it mistakenly treated as a single message, but that just required improving the message format.)
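For reference, one common way to "improve the message format" so that glued-together messages can be split apart again is a length prefix (purely illustrative here, not necessarily what that app used):

```python
import struct

def pack_msg(payload):
    # 4-byte big-endian length header in front of each message.
    return struct.pack("!I", len(payload)) + payload

def unpack_msgs(buf):
    msgs = []
    while len(buf) >= 4:
        (length,) = struct.unpack("!I", buf[:4])
        if len(buf) < 4 + length:
            break                     # partial message: wait for more bytes
        msgs.append(buf[4:4 + length])
        buf = buf[4 + length:]
    return msgs, buf                  # complete messages + unconsumed tail

stream = pack_msg(b"move 1 2 3") + pack_msg(b"fire")  # two messages "glued"
msgs, rest = unpack_msgs(stream)
print(msgs)   # [b'move 1 2 3', b'fire']
print(rest)   # b''
```

The receiver keeps the unconsumed tail and prepends it to the next chunk it reads, so message boundaries survive however the bytes arrive.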
5. Crystal Space Development / Support / billboard question (on: July 11, 2008, 04:07:56 am)
Once upon a time (apparently it's been almost a year), I had a celzip file which contained some python code which featured clickable billboards. For whatever reason (probably I was doing something slightly wrong that the old version let slide and the new version doesn't), that celzip file no longer registers the click events. To try to keep this simple, I reduced my code to the simplest possible case, and it's still not registering click events. Also, I played quickly with the (XML-based rather than Python-based) CrystalDash, which also has billboards; those do work.
Standard stuff Jorrit wants:
Version Info:
celstart --version prints out 8
running python version 2.5.1
I believe I pulled down cs + cel via svn (using the default) on or around June 7th... cel version per celversion.h is 1.2.1
Walktest also says I'm using CS 1.2.1 (April 2008) version
(is there a better way to get the exact version that I'm forgetting?)
Hardware / OS Info
Linux 32bit (Fedora
on x86 (Pentium Dual E2200, which is 64 bit capable but other apps were giving me fits under 64bit space)
NVidia 8800 GT
gcc 4.3.0
Code Fragment is below(hopefully the formatting won't be too badly off kilter (looked fine in preview):
Note that I checked the property classes in celEntity after creating the billboard and setting stuff up; the property with the description "Enable mouse events" has boolean value of True
class main:
api_version = 2 # use new version of message callbacks.
def pcbillboard_select(self,*args):
print "Someone selected this billboard"
def __init__(self,celEntity):
self.c = celEntity
message = "hello world"
self.bg = celCreateBillboard(pl,celEntity)
self.mygui = self.bg.GetBillboard()
self.mygui.SetText(message)
rez = self.mygui.SetMaterialNameFast('Stone')
billboardsize = 307200 #magic number for BillBoard Space defined somewhere
guiheight = self.mygui.GetTextDimensions()[1]
self.mygui.SetSize(billboardsize-2000,guiheight*10)
self.mygui.SetPosition(1000,100000)
self.mygui.SetText(message)
self.bg.EnableEvents(True)
6
Crystal Space Development
/
Support
/
Re: Bash build script for CS and cel
on: March 20, 2008, 11:21:26 pm
I think you want to do
"cmd" >> $LOGFILE 2>&1
so for example, instead of
wget -q $CG_DOWNLOAD_URL >> $LOGFILE
do&
wget -q $CG_DOWNLOAD_URL >> $LOGFILE 2>&1
(adding the 2>&1 at the end of each line redirects stderr to stdout ... this always feels wrong to me (i.e. it'd feel more natural to do 2>&1 first and then the >> $LOGFILE, but meh)
7
Associate Projects
/
CEL Discussion
/
Re: need a file for celstart that isn't included in binary for windows
on: November 13, 2007, 02:53:49 pm
I had mentioned that I had used it just fine with python 2.5 under linux, but remembered that I should caveat that with the fact that I built celstart under linux myself versus using the supplied one (and I can't test the prebuilt one anymore due to other issues)
8
Associate Projects
/
CEL Discussion
/
Re: need a file for celstart that isn't included in binary for windows
on: November 09, 2007, 02:30:23 pm
can't help you then. I just did a search on my windows drive and found no cspace.dll file (which should be what python loads when it sees import _cspace) but I can vouch that up until my video card died (leaving me with the onboard PoS), I was quite able to run celstart with python apps when using python 2.4 and got an error (which I am pretty sure is the one you're seeing but am not certain) when I had python 2.5 installed. I did a grep in the celstart directory under windows and msvcrt.dll has cspace in it (but I can't remember how to do the equivalent of nm myobj.so to dump the dll symbol table (or else I can but it's been stripped) to see if it actually has the cspace code or if that's just a weird collision... it's possible that the cspace module is embedded inside that dll (and then possible but I'm stretching) that python 2.4 supported that somewhat bizarre behavior whereas 2.5 introduced stricter rules.
Good luck.
9
Associate Projects
/
CEL Discussion
/
Re: need a file for celstart that isn't included in binary for windows
on: November 07, 2007, 04:23:58 pm
What version of python are you using (I'm assuming you installed Python first)... I've found that I need to use Python 2.4 to get celstart (the default binary) to work under Windows or I get that error.
Oops... meant to add that 2.5 works just fine under Linux with the latest celstart binary (even using the exact same celstart applications)
10
Crystal Space Development
/
Support
/
Re: Compiling CEL
on: September 25, 2007, 01:25:02 am
I'm going to assume you have the cywin part right... but the export is definitely not correct... CRYSTAL should point to where the CrystalSpace stuff is, which should just be /cs, not /cs/cel (which eventually you should be setting the CEL variable to point to)
11
Associate Projects
/
CEL Discussion
/
Re: CelStart
on: September 20, 2007, 03:18:30 pm
I'd suggest uninstalling python 2.5 and installing the latest python2.4 (i.e. the whole python2.4 env, not just the python24 dll). I had the same issue with celstart 7 on windows (not sure if my linux is just old enough that i'm still running 2.4 or if this is a windows only issue... have noticed that I still need to futz with PythonPath under windows; linux works fine.)
editted to add: Plug... I posted a python celstart example file to the celstart main page
(
)
12
Associate Projects
/
CEL Discussion
/
Re: Error: appinit is not callable
on: August 30, 2007, 04:09:36 am
I just posted my code to
(click image to download).
Jorrit et al: is it permitted for me to just modify the celstart wiki page to add my example file / image or is there some procedure I'm missing to get approval?
13
Associate Projects
/
CEL Discussion
/
Re: Error: appinit is not callable
on: August 24, 2007, 04:22:25 am
I agree it would be nice to have some python celzip's; I'm working on a simple app using python and celstart but I don't know if I'm doing this right (for example, I still haven't figured out how to get hover to work; if I try to create a new CraftController, the program immediately quits. I am, however, making some progress and so can hopefully try to post something at some point so other people more experienced can tell me how to improve it and less can build off it, unless someone posts something better first.
Anyways, I suggest you add a line that appends pccommandinput to the PcFactories list
PcFactories.append("cel.pcfactory.pccommandinput")
(I have an initialization routine that does the following:
def SetupFactories(self):
pcclasses = ["region","tooltip","mesh","solid","meshselect","zonemanager","trigger", "quest","light","inventory","defaultcamera","gravity","movable", "pccommandinput","linmove","actormove","colldet","timer","soundlistener", "soundsource","billboard","properties" ,"craft","hover","mechsys","mechobject"]
for pcclass in pcclasses:
PcFactories.append("cel.pcfactory."+pcclass)
print "Added ",pcclass
I presumably adapted it from somewhere as I'm not using quest or light in my code at present, at least as far as I know)
14
Associate Projects
/
CEL Discussion
/
Re: Binding keyboard events to craft methods
on: July 05, 2007, 02:53:31 pm
Thanks for the response, Genjix.
Unfortunately, the events are being fired just fine... the problem, as near as I can tell, is that I'm not doing the syntax of the XML correctly...
I would think that:
<event name="pccommandinput_forward1">
<default propclass="?pccraft" />
<action id="actid(ThrustOn)" />
</event>
Means using propclass pccraft, apply the action "ThrustOn" ...
but if I replace actid(ThrustOn) with actid(ObviousilyBogusFunction) nothing changes and I don't get an error message.
At any rate, I'm in the middle of trying to upgrade my Linux box to the latest FC7 and then rebuild the latest CS/CEL and try again using python vs just the XML.
15
Associate Projects
/
CEL Discussion
/
Binding keyboard events to craft methods
on: June 30, 2007, 03:02:57 am
Basically, trying to implement the CEL hoverdemo solely in XML (i.e. for celstart)
Using some of the sample celstart apps, I think it mostly working. However, I can't get pressing keys to call the craft methods.
What I'm doing (and I can post the whole code somewhere if that would be easier)
<addon plugin="cel.addons.xmlscripts">
<!-- this is legacy! -->
<pcfactory>cel.pcfactory.pccommandinput</pcfactory>
<pcfactory>cel.pcfactory.defaultcamera</pcfactory>
<pcfactory>cel.pcfactory.actormove</pcfactory>
<pcfactory>cel.pcfactory.mesh</pcfactory>
<pcfactory>cel.pcfactory.linmove</pcfactory>
<pcfactory>cel.pcfactory.colldet</pcfactory>
<pcfactory>cel.pcfactory.zonemanager</pcfactory>
<pcfactory>cel.pcfactory.properties</pcfactory>
<pcfactory>cel.pcfactory.mechsys</pcfactory>
<pcfactory>cel.pcfactory.mechobject</pcfactory>
<pcfactory>cel.pcfactory.hover</pcfactory>
<pcfactory>cel.pcfactory.craft</pcfactory>
<script name="camera">
<event name="realinit">
<var name="pcmech" value="pc(pcmechobject)" />
<var name="pccam" value="pc(pcdefaultcamera)" />
<var name="pccraft" value="pc(pccraft)" />
</event>
<event name="pccommandinput_quit1">
<quit/>
</event>
<event name="pccommandinput_forward1">
<default propclass="?pccraft" />
<action id="actid(ThrustOn)" />
</event>
<event name="pccommandinput_forward0">
<default propclass="?pccraft" />
<action id="actid(ThrustOff)" />
</event>
(... more input events ...)
Also, in the entity object, I did specifiy that this entity should contain a propclass named pccraft and did a bunch of action<par name="xxx" float = "yy.y" /></action>'s
I do set up so that up arrow and w both map to forward and the original code (which called eval setforce(pcmesh)) did work...
Any thoughts what I'm missing?
Pages: [
1
]
|
SMF © 2006-2007, Simple Machines LLC
Page created in 7.234 seconds with 16 queries. | http://www.crystalspace3d.org/forum/index.php?action=profile;u=800;sa=showPosts | CC-MAIN-2016-22 | refinedweb | 2,637 | 56.59 |
DB_File::SV18x - Co-existence of berkeley db 1.85, 1.86 and 2+
Identical to DB_File
The DB_File::SV18x modules override the namespace used by the berkeley db library with a namespace that allows the most prominent versions of the library, namely 1.85 and 1.86, coexist with the current 2 or higher in memory. Thus it offers both convenient transformations of database files between different versions and allows a smooth upgrade path from 1.8x to 2.0.
For usage information please consult the documentation for DB_File and globally replace the token
DB_File by
DB_File::SV185 or
DB_File::SV186 whatever is avilable on your system.
use DB_File (); use DB_File::SV185 (); use Fcntl; $F = "str.db"; tie(%h, 'DB_File', "$F.200", O_RDWR|O_CREAT, 0644, $DB_File::DB_HASH) or die; tie(%i, 'DB_File::SV185', $F, O_RDONLY, 0644, $DB_File::SV185::DB_HASH) or die; %h = %i;
This example does a conversion of a database file from 1.85 to whatever is the current default in the DB_File module of the machine that runs this code.
Note that berkeley db 2.0 comes with excellent conversion tools and for mere conversion DB_File::SV18x is not necessary. Its usefulness lies in the open coexistence.
Andreas Koenig koenig@kulturbox.de | http://search.cpan.org/dist/DB_File-SV18x-kit/SV18x.pm | CC-MAIN-2016-26 | refinedweb | 204 | 59.4 |
{-# LANGUAGE RankNTypes, NamedFieldPuns, BangPatterns,
             ExistentialQuantification, CPP, ParallelListComp #-}

{-Async, runParAsyncHelper,
   new, newFull, newFull_,
   get, put_, put,
   pollIVar, yield,
 ) where

import Control.Monad as M hiding (sequence, join)
import Prelude hiding (mapM, sequence)
import Data.IORef
import System.IO.Unsafe
import Control.Concurrent hiding (yield)
import GHC.Conc hiding (yield)
import Control.DeepSeq
import Control.Applicative
import Data.Array
import Data.List (partition, find)
--import Text.Printf

-- ---------------------------------------------------------------------------
-- MAIN SCHEDULING AND RUNNING
-- ---------------------------------------------------------------------------

data Trace = forall a . Get (IVar a) (a -> Trace)
           | forall a . Put (IVar a) a Trace
           | forall a . New (IVarContents a) (IVar a -> Trace)
           | Fork Trace Trace
           | Done
           | Yield Trace

data Sched = Sched
    { no       :: {-# UNPACK #-} !ThreadNumber,  -- ^ The thread number of this worker
      workpool :: IORef WorkPool,                -- ^ The workpool for this worker
      status   :: IORef AllStatus,               -- ^ The schedulers' status
      scheds   :: Array ThreadNumber Sched,      -- ^ The list of all workers by thread
      tId      :: IORef ThreadId                 -- ^ The ThreadId of this worker
    }

type ThreadNumber = Int
type UId = Int
type CountRef = IORef Int

type WorkLimit = (UId, CountRef)
-- ^ The UId and the count of tasks left, or Nothing if there's no limit.
-- When the UId is -1, it means that the worker will remain alive until
-- purposely killed (by globalThreadShutdown).
--
-- The reason for a work limit is to make sure that nested threads properly exit.
-- Imagine a scenario where thread A, a worker thread, encounters a runPar. It
-- recursively enters worker status, but it needs to leave worker status at some
-- point to finish the task that caused it to call runPar. Suppose now that it
-- encounters another call to runPar. If it has the ability to finish and return,
-- we must make sure it returns first for the nested runPar or else it will return
-- to the wrong place! The work limit helps achieve this.
--
-- TODO: Perhaps the work limit need not restrict what a thread can work on, but
-- instead it simply provides the singular point that a thread is allowed to return
-- from. The only concern is some potential for bad blocking - is that a legit
-- concern?

isWLUId :: WorkLimit -> (UId -> Bool) -> Bool
--isWLUId Nothing _ = False
isWLUId (uid, _) op = op uid

shouldEndWorkSet :: WorkLimit -> IO Bool
shouldEndWorkSet (u,_) | u == -1 = return False
shouldEndWorkSet (_, cr) = do
  c <- readIORef cr
  return (c == 0)

idleAtWL :: WorkLimit -> MVar Bool -> Idle
--idleAtWL Nothing m = Idle Nothing m
idleAtWL (uid, _) m = Idle uid m

-- | The main scheduler loop.
-- This takes the synchrony flag, our Sched, the particular work queue we're
-- currently working on, the uid of the work queue (for pushing work), our
-- work limit, and the already-popped, first trace in the work queue.
--
-- INVARIANT: This should only be called by threads who ARE currently marked
--            as working.
sched :: Bool -> WorkLimit -> Sched -> (IORef [Trace]) -> UId -> Trace -> IO ()
sched _doSync wl q@Sched{status, workpool} queueref uid t = loop t
 where
  loop t = case t of
    New a f -> do
      r <- newIORef a
      loop (f (IVar r))
    Get (IVar v) c -> do
      e <- readIORef v
      case e of
        Full a -> loop (c a)
        _other -> do
          r <- atomicModifyIORef v $ \e -> case e of
                 Empty      -> (Blocked [c], go)
                 Full a     -> (Full a, loop (c a))
                 Blocked cs -> (Blocked (c:cs), go)
          r
    Put (IVar v) a t -> do
      cs <- atomicModifyIORef v $ \e -> case e of
              Empty      -> (Full a, [])
              Full _     -> error "multiple put"
              Blocked cs -> (Full a, cs)
      mapM_ (pushWork status uid queueref . ($a)) cs
      loop t
    Fork child parent -> do
      pushWork status uid queueref child
      loop parent
    Done ->
      if _doSync
        then go
        -- We could fork an extra thread here to keep numCapabilities workers
        -- even when the main thread returns to the runPar caller...
        else do
          -- putStrLn " [par] Forking replacement thread..\n"
          forkIO go; return ()
        -- But even if we don't we are not orphaning any work in this
        -- thread's work-queue because it can be stolen by other threads.
        -- else return ()
    Yield parent -> do
      -- Go to the end of the worklist:
      -- TODO: Perhaps consider Data.Seq here.
      -- This would also be a chance to steal and work from opposite ends of the queue.
      atomicModifyIORef queueref $ \ts -> (ts++[parent],())
      go
  go = do
    mt <- atomicPopIORef queueref
    case mt of
      Just t  -> loop t
      Nothing -> do
        -- SCARY: we better be working on the top queue in the pool!
        cr <-

-- | Process the next work queue on the work pool, or failing that, go into
--   work stealing mode.
--
--   INVARIANT: This should only be called by threads who are NOT currently
--              marked as working (or if they are, the task they were working
--              on executed a runPar).
reschedule :: WorkLimit -> Sched -> IO ()
reschedule wl q@Sched{ workpool, status } = do
  wp <- readIORef workpool
  case wp of
    Work uid cr wqref _ | isWLUId wl (uid >=) -> do
      incWorkerCount cr
      nextTrace <- atomicPopIORef wqref
      case nextTrace of
        Just t  -> sched True wl q wqref uid t
        Nothing -> do
    _ -> steal wl q

-- RRN: Note -- NOT doing random work stealing breaks the traditional
-- Cilk time/space bounds if one is running strictly nested (series
-- parallel) programs.

-- | Attempt to steal work or, failing that, give up and go idle.
steal :: WorkLimit -> Sched -> IO ()
steal wl q@Sched{ status, scheds, no=my_no } =
  -- printf "cpu %d stealing\n" my_no >>
  go l
 where
   (l,u) = bounds scheds
   go n
    | n > u = do
        -- Prepare to go idle
        m <- newEmptyMVar
        atomicModifyIORef status $ addIdler (idleAtWL wl m)
        -- Check to see if this workset is ready to close
        s <- shouldEndWorkSet wl
        if s
          then do
            -- Time to close this workset
            --printf "cpu %d shutting down workset %d\n" my_no myPriLimit
            endWorkSet status (fst wl)
            return ()
          else do
            -- There's more work being done here, so I'll go idle
            finished <- takeMVar m
            unless finished $ go l
    | n == my_no = go (n+1)
    | otherwise = readIORef (workpool (scheds!n)) >>= tryToSteal
       where
         tryToSteal (Work uid cr wqref wp)
           | isWLUId wl (uid >=) = do
               incWorkerCount cr
               stolenTrace <- atomicPopIORef wqref
               case stolenTrace of
                 Nothing -> decWorkerCount uid cr status >> tryToSteal wp
                 Just t  -> do
                   sublst <- newIORef []
                   atomicModifyIORef (workpool q) $ \wp' -> (Work uid cr sublst wp', ())
                   sched True wl q sublst uid t
         tryToSteal _ = go (n+1)

-- ---------------------------------------------------------------------------
-- UTILITY FUNCTIONS
-- ---------------------------------------------------------------------------

-- | Push work. Then, find an idle worker with uid less than the pushed work.
--   If one is found, wake it up.
pushWork :: IORef AllStatus -> UId -> (IORef [Trace]) -> Trace -> IO ()
pushWork status uid wqref t = do
  atomicModifyIORef wqref $ (\ts -> (t:ts, ()))
  allstatus <- readIORef status
  when (hasIdleWorker uid allstatus) $ do
    r <- atomicModifyIORef status $ getIdleWorker uid
    case r of
      Just b  -> putMVar b False
      Nothing -> return ()

-- | A utility function for decreasing the task count of a work set.
--   If the count becomes 0, endWorkSet is called on the work set.
decWorkerCount :: UId -> CountRef -> IORef AllStatus -> IO Bool
decWorkerCount uid countref status = do
  done <- atomicModifyIORef countref $
            (\n -> if n == 0
                     then error "Impossible value in decWorkerCount"
                     else (n-1, n == 1))
  when done $ (endWorkSet status uid >> globalWorkComplete uid)
  return done

-- | A utility function for increasing the task count of a work set.
incWorkerCount :: CountRef -> IO ()
incWorkerCount countref = do
  atomicModifyIORef countref $ (\n -> (n+1, ()))

-- | A utility for popping an element off of an IORef list.
--   The return value is Just a where a is the head of the list
--   or Nothing if the list is null.
atomicPopIORef :: IORef [a] -> IO (Maybe a)
atomicPopIORef ref = atomicModifyIORef ref $ \lst ->
  case lst of
    []     -> ([], Nothing)
    (e:es) -> (es, Just e)

-- ---------------------------------------------------------------------------
-- IDLING STATUS
-- ---------------------------------------------------------------------------

data Idle = Idle {-# UNPACK #-} !UId (MVar Bool)
data ExtIdle = ExtIdle {-# UNPACK #-} !UId (MVar ())
type AllStatus = ([Idle], [ExtIdle])

-- | A new empty PQueue of Statuses
newStatus :: AllStatus
newStatus = ([], [])

-- | Adds a new Idler to the AllStatus.
addIdler :: Idle -> AllStatus -> (AllStatus, ())
addIdler i@(Idle u _) (is, es) = ((insert is, es), ())
  where insert [] = [i]
        insert xs@(i'@(Idle u' _):xs') =
          if u <= u' then i : xs else i' : insert xs'

-- | Adds a new External idler to the AllStatus.
addExtIdler :: ExtIdle -> AllStatus -> (AllStatus, ())
addExtIdler e (is, es) = ((is, e:es), ())

-- | Returns an idle worker with uid less than or equal to the given one
--   (if it exists) and removes it from the AllStatus
getIdleWorker :: UId -> AllStatus -> (AllStatus, Maybe (MVar Bool))
getIdleWorker u q = case q of
  ([],_) -> (q, Nothing)
  ((Idle u' m'):rst, es) ->
    if u' <= u then ((rst,es), Just m') else (q, Nothing)

-- | Returns true if there is an idle worker with uid less than the given one
hasIdleWorker :: UId -> AllStatus -> Bool
hasIdleWorker uid q = case getIdleWorker uid q of
  (_, Nothing) -> False
  (_, Just _)  -> True

-- | Wakes up all idle workers at the given uid with the True signal
endWorkSet :: IORef AllStatus -> UId -> IO ()
endWorkSet status uid = do
  (is, es) <- atomicModifyIORef status $ getAllAtID
  mapM_ (\(ExtIdle _ mb) -> putMVar mb ()) es
  mapM_ (\(Idle _ mb) -> putMVar mb True) is
 where
  getAllAtID (is, es) = ((is', es'), (elems1, elems2))
    where (elems1, is') = partition (\(Idle u _) -> u == uid) is
          (elems2, es') = partition (\(ExtIdle u _) -> u == uid) es

-- ---------------------------------------------------------------------------
-- WorkPool
-- ---------------------------------------------------------------------------

-- | The WorkPool keeps a queue where each element has a UId, a list of
--   traces, and the countRef of how many workers are working on Traces
--   of this UId.
--
--   It should be that by the natural pushing done in sched, this pool
--   should always be in order. We take advantage of this by making
--   guarantees but not actually checking at runtime whether they're true.
data WorkPool = Work {-# UNPACK #-} !UId CountRef (IORef [Trace]) WorkPool
              | NoWork

-- | Pop the next work queue from the work pool. This should only be called
--   if both the work pool contains a pool, and the queue in that pool is
--   empty. Thus, it should only be called by the pool's owner.
wpRemoveWork :: UId -> IORef WorkPool -> IO CountRef
wpRemoveWork uid pRef = atomicModifyIORef pRef f
  where
    f :: WorkPool -> (WorkPool, CountRef)
    f (Work uid' cr' _ p') | uid == uid' = (p', cr')
    f (Work uid' cr' wq' p') = let (p'', cr'') = f p'
                               in (Work uid' cr' wq' p'', cr'')
    f NoWork = error "Impossible state in wpRemoveWork"

-- ---------------------------------------------------------------------------
-- PAR AND IVAR
-- ---------------------------------------------------------------------------

newtype Par a = Par {
    runCont :: (a -> Trace) -> Trace
}

instance Functor Par where
    fmap f m = Par $ \c -> runCont m (c . f)

instance Monad Par where
    return a = Par ($ a)
    m >>= k  = Par $ \c -> runCont m $ \a -> runCont (k a) c

instance Applicative Par where
    (<*>) = ap
    pure  = return

newtype IVar a = IVar (IORef (IVarContents a))

data IVarContents a = Full a | Empty | Blocked [a -> Trace]

-- Forcing evaluation of an IVar is fruitless.
instance NFData (IVar a) where
  rnf _ = ()

-- From outside the Par computation we can peek. But this is
-- nondeterministic; it should perhaps have "unsafe" in the name.
pollIVar :: IVar a -> IO (Maybe a)
pollIVar (IVar ref) = do
  contents <- readIORef ref
  case contents of
    Full x -> return (Just x)
    _      -> return (Nothing)

-- ---------------------------------------------------------------------------
-- GLOBAL THREAD IDENTIFICATION
-- ---------------------------------------------------------------------------

-- Global thread identification is handled by the globalThreadState object.
-- The main way to interact with this object is to attempt to establish global
-- Scheds, shut down the threads and clear the Scheds, or to mark a work set
-- as complete.
data GlobalThreadState = GTS (Array ThreadNumber Sched) !UId !Int

-- | This is the global thread state variable
globalThreadState :: IORef (Maybe GlobalThreadState)
globalThreadState = unsafePerformIO $ newIORef $ Nothing

-- | This is called when a work set completes (see decWorkerCount).
--   We do this so that we can know if it's okay to do a
--   globalThreadShutdown.
globalWorkComplete :: UId -> IO ()
globalWorkComplete _ = atomicModifyIORef globalThreadState f
  where f Nothing = error "Impossible state in globalWorkComplete."
        f (Just (GTS retA n c)) = (Just (GTS retA n (c+1)), ())

-- | Attempts to set the global Scheds. If they are already established,
--   this returns a Failure with a new UId (to interact with the global
--   threads) and the current global Scheds. Otherwise, it establishes
--   the given array as the global Scheds, and returns a Success containing
--   the UId to use.
data GTSResult = Success UId
               | Failure UId (Array ThreadNumber Sched)

globalEstablishScheds :: Array ThreadNumber Sched -> IO GTSResult
globalEstablishScheds a = atomicModifyIORef globalThreadState f
  where f Nothing = (Just (GTS a 1 0), Success 0)
        f (Just (GTS retA n c)) = (Just (GTS retA (n+1) c), Failure n retA)

-- | Attempts to shutdown the global threads. If there are unfinished tasks,
--   this shuts down nothing and returns False. Otherwise, this shuts down
--   all threads, un-establishes the global Scheds, and returns True.
--   If the Scheds are currently unestablished, this does nothing and returns
--   False.
--
--   TODO: This can sometimes leave threads hanging who are not doing any work
--         but have not yet marked themselves as idle. Things won't exactly
--         break, but there may be MVar errors that are thrown.
globalThreadShutdown :: IO Bool
globalThreadShutdown = do
  ma <- atomicModifyIORef globalThreadState f
  case ma of
    Nothing -> return False
    Just a -> do
      let s = status $ a ! (fst $ bounds a)
      (is, es) <- atomicModifyIORef s $ \x -> (newStatus, x)
      mapM_ (\(ExtIdle _ m) -> putMVar m ()) es
      mapM_ (\(Idle _ mb) -> putMVar mb True) is
      return True
  where f (Just (GTS a n c)) | n == c = (Nothing, Just a)
        f gts = (gts, Nothing)

-- ---------------------------------------------------------------------------
-- RUNPAR
-- ---------------------------------------------------------------------------

-- [Notes on threadCapability]
--
-- We create a thread on each CPU with forkOnIO. Ideally, the CPU on
-- which the current thread is running will host the main thread; the
-- other CPUs will host worker threads.
--
-- This is possible using threadCapability, but this requires
-- GHC 7.1.20110301, because that is when threadCapability was added.
--
-- Lacking threadCapability, we always pick CPU #0 to run the main
-- thread. If the current thread is not running on CPU #0, this
-- will require some data to be shipped over the memory bus, and
-- hence will be slightly slower than the version using threadCapability.
--
-- If this is a nested runPar call, then we can do slightly better. We
-- can look at the current workers' ThreadIds and see if we are one of
-- them. If so, we do the work on that core. If not, we are once again
-- forced to choose arbitrarily, so we send the work to CPU #0.
--
{-# INLINE runPar_internal #-}
runPar_internal :: Bool -> Par a -> a
runPar_internal _doSync x = unsafePerformIO $ do
   -- Set up the schedulers
   myTId <- myThreadId
   tIds <- replicateM numCapabilities $ newIORef myTId
   workpools <- replicateM numCapabilities $ newIORef NoWork
   statusRef <- newIORef newStatus
   let states = listArray (0, numCapabilities-1)
                  [ Sched { no=n, workpool=wp, status=statusRef, scheds=states, tId=t }
                  | n  <- [0..]
                  | wp <- workpools
                  | t  <- tIds ]
   res <- globalEstablishScheds states
   case res of
     Success uid -> do
#if __GLASGOW_HASKELL__ >= 701 /* 20110301 */
       -- See [Notes on threadCapability] for more details
       (main_cpu, _) <- threadCapability =<< myThreadId
#else
       let main_cpu = 0
#endif
       currentWorkers <- newIORef 1
       let workLimit' = (-1, undefined)
       let workLimit  = (0, currentWorkers)
       m <- newEmptyMVar
       rref <- newIORef Empty
       atomicModifyIORef statusRef $ addExtIdler (ExtIdle uid m)
       forM_ (elems states) $ \(state@Sched{no=cpu}) -> do
         forkOnIO cpu $ do
           myTId <- myThreadId
           --printf "cpu %d setting threadId=%s\n" cpu (show myTId)
           writeIORef (tId state) myTId
           if (cpu /= main_cpu)
             then reschedule workLimit' state
             else do
               sublst <- newIORef []
               atomicModifyIORef (workpool state) $ \wp ->
                 (Work uid currentWorkers sublst wp, ())
               sched _doSync workLimit state sublst uid $
                 runCont (x >>= put_ (IVar rref)) (const Done)
       takeMVar m
       --printf "done\n"
       r <- readIORef rref
       -- TODO: If we're doing this nested strategy, we should probably just keep the
       --       threads alive indefinitely. After all, we can get some weird conditions
       --       doing it this way. At the least, we should put this in steal where the
       --       shutdown occurs.
       b <- globalThreadShutdown
       -- putStrLn $ "Global thread shutdown: " ++ show b
       case r of
         Full a -> return a
         _      -> error "no result"
     Failure uid cScheds -> do
#if __GLASGOW_HASKELL__ >= 701 /* 20110301 */
       -- See [Notes on threadCapability] for more details
       (main_cpu, _) <- threadCapability myTId
       cTId <- readIORef $ tId $ cScheds ! main_cpu
       let doWork = cTId == myTId
#else
       cTIds <- mapM (\s -> (readIORef $ tId $ s) >>= (\t -> return (s,t))) (elems cScheds)
       let (main_cpu, doWork) = case find ((== myTId) . snd) cTIds of
                                  Nothing    -> (0, False)
                                  Just (s,_) -> (no s, True)
#endif
       rref <- newIORef Empty
       let task  = runCont (x >>= put_ (IVar rref)) (const Done)
           state = cScheds ! main_cpu
       if doWork
         then do
           --printf "cpu %d using old threads, of which I am one\n" main_cpu
           currentWorkers <- newIORef 1
           sublst <- newIORef []
           let workLimit = (uid, currentWorkers)
           atomicModifyIORef (workpool state) $ \wp ->
             (Work uid currentWorkers sublst wp, ())
           sched _doSync workLimit state sublst uid $ task
         else do
           --printf "cpu %d using old threads, of which I am not one\n" main_cpu
           currentWorkers <- newIORef 0
           sublst <- newIORef [task]
           m <- newEmptyMVar
           atomicModifyIORef (status state) $ addExtIdler (ExtIdle uid m)
           atomicModifyIORef (workpool state) $ \wp ->
             (Work uid currentWorkers sublst wp, ())
           takeMVar m
           --printf "cpu %d finished in child\n" main_cpu
       r <- readIORef rref
       -- globalThreadShutdown
       case r of
         Full a -> return a
         _      -> error "no result"

-- | The main way to run a Par computation
runPar :: Par a -> a
runPar = runPar_internal True

-- | An asynchronous version in which the main thread of control in a
--   Par computation can return while forked computations still run in
--   the background.
runParAsync :: Par a -> a
runParAsync = runPar_internal False

-- | An alternative version in which the consumer of the result has
--   the option to "help" run the Par computation if results it is
--   interested in are not ready yet.
runParAsyncHelper :: Par a -> (a, IO ())
runParAsyncHelper = undefined -- TODO: Finish Me.

-- ---------------------------------------------------------------------------
-- PAR FUNCTIONS
-- ---------------------------------------------------------------------------

new :: Par (IVar a)
new = Par $ New Empty

newFull :: NFData a => a -> Par (IVar a)
newFull x = deepseq x (Par $ New (Full x))

newFull_ :: a -> Par (IVar a)
newFull_ !x = Par $ New (Full x)

get :: IVar a -> Par a
get v = Par $ \c -> Get v c

put_ :: IVar a -> a -> Par ()
put_ v !a = Par $ \c -> Put v a (c ())

put :: NFData a => IVar a -> a -> Par ()
put v a = deepseq a (Par $ \c -> Put v a (c ()))

yield :: Par ()
yield = Par $ \c -> Yield (c ())
I'm trying to add a scroll pane to a text field but it doesn't show up. I've copied the same structure as the example, but it is not appearing. Any ideas why?
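Since the thread went unanswered on the page: a scroll pane displays only the component that was passed to its constructor (or set as its viewport view), and a Swing component can have only one parent. The usual mistake is to add both the scroll pane and the text component to the frame, which re-parents the text component and leaves the scroll pane's viewport empty. Here is a minimal sketch of the standard wiring, using a JTextArea (a single-line JTextField has nothing to scroll vertically); the class and method names below are illustrative, not from the original example:

```java
import javax.swing.*;

public class ScrollDemo {

    // Wrap the text area in a scroll pane. Add the RETURNED scroll pane to
    // the container, never the text area itself: adding the area directly
    // re-parents it and leaves the scroll pane's viewport empty.
    static JScrollPane makeScrollableTextArea(JTextArea area) {
        JScrollPane scroller = new JScrollPane(area);
        scroller.setVerticalScrollBarPolicy(JScrollPane.VERTICAL_SCROLLBAR_ALWAYS);
        return scroller;
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Scroll pane demo");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            JTextArea area = new JTextArea(5, 30);   // 5 rows, 30 columns
            frame.add(makeScrollableTextArea(area)); // add the scroll pane, not the area
            frame.pack();
            frame.setVisible(true);
        });
    }
}
```

If the scroll pane still does not show up, check that it is added to the content pane before pack() is called, or call validate() after adding it to an already-visible frame.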
not workingjames June 1, 2012 at 11:15 AM
im trying to add a scrollpane to a texfield but it doesn't show up. i've copied the same structure as the example but it is not appearing. any ideas on why?
Post your Comment | http://roseindia.net/discussion/18215-Create-a-Scroll-Pane-Container-in-Java.html | CC-MAIN-2016-18 | refinedweb | 732 | 65.22 |
Introduction to Free Style (and CSS-in-JS)
Introduction to Free Style (and CSS-in-JS)
With the release of Free Style 1.0, I figure it's about time to write about Free Style—how it works, why you'd want to use it, and little introduction to CSS-in-JS. This is not a blog post designed to sway decisions—as always, you should use your own fair judgement.
Join the DZone community and get the full member experience.Join For Free
Access over 20 APIs and mobile SDKs, up to 250k transactions free with no credit card required
With the release of Free Style 1.0, I figure it's about time to write about Free Style—how it works, why you'd want to use it, and little introduction to CSS-in-JS. This has been a long time coming, with my first commit to Free Style over a year ago, and the first commit to Free Style in its current form 10 months ago. This is not a blog post designed to sway decisions—as always, you should use your own fair judgement.
CSS-in-JS
The idea of CSS-in-JS is well covered in this presentation by React engineer, Christopher Chedeau, and by many others, so I'll be brief. As React popularized the declarative DOM, it also enabled a generation of CSS-in-JS approaches that attempt to solve the many pitfalls of CSS. These pitfalls are well known and documented, and including "features" such as the global namespace, constant sharing, and many approaches to component isolation (BEM, SMACSS). Writing CSS in a way that avoids the pitfalls can be regarded an art.
CSS-in-JS approaches exist to solve the pollution of the global namespace, constant sharing, component isolation, and bring many other unforeseen benefits. The JS part exists because these solutions utilize JavaScript as the way to provide namespacing, constant sharing, and proper isolation. You may have already known, but these are things that have long been solved in programming languages, including JavaScript—with CommonJS, AMD, and recently ES6 modules. It stands to reason that, if possible, JavaScript will provide a more sound foundation for writing modular CSS. Tooling for JavaScript is more mature, with the ability to do autocompletion, dead code elimination, and linting common-place.
How and Why Does Free Style Work?
Free Style works with hashes. If there's one word you should love at the end of this section, it's hashing. With that said, the essence of free-style is less than 500 lines of code (in TypeScript), so I definitely suggest you check it out.
Free Style is built on top of a core
Cache class. This class implements a way to append children using an ID (which is a hash), keeps track of how many times a child was added and removed, and can also attach simple change listeners (for when a child is added or removed). Three classes extend the
Cache implementation to replicate the structure of CSS:
Rule,
Style, and
FreeStyle. The only other important class is
Selector, which implements a
Cacheable interface (
Cache fulfills the same interface).
Using these four classes, we can replicate CSS in a way that automatically de-dupes styles. First, we create a
FreeStyle instance (it can be imagined as a
.css file). This class holds
Rule and
Style children. When you use
registerStyle, it'll stringify each object to CSS and hash the contents, while also propagating any rules upward (e.g. when
@media nested inside a style). The result is a single style registering (potentially) a number of
Style and
Rule instances, all of which have their hashes of their own contents. Throughout each instance creation,
registerStyle collects a separate hash that is returned as the CSS class name to use. Finally, when the final class name hash is known, the class name is interpolated with all selectors and returned for you to use.
The result of this means that duplicate hashes are automatically merged. A duplicate hash means a duplicate rule, style, or selector. The benefit of separating
Rule and
Style means that two identical media queries can be merged together (less CSS output) and so can identical styles within each context (e.g. identical styles inside and outside the media query cannot be merged, but duplicates both inside or outside can be). The
Selector class exists because now that duplicates are merged, multiple selectors can exist for the same
Style.
The other interesting methods are
registerRule and
registerKeyframes. Both work similar to
registerStyle, but are much simpler.
registerRule works by recursively registering rules, which are automatically being hashed based on the rule and their contents.
registerKeyframes works by creating rules and styles that get added to a
Rule instance and returns a selector of the complete hashed contents (hence keyframes are automatically hashed/namespaced).
All this hashing results in the fact that all styles are automatically unique. Registered styles and keyframes have a hash to identify them and the chance of a conflicting style is now left to the computer to resolve, not you. The other pitfalls of CSS are automatically solved as the result of JavaScript, as the hash can only be known and exposed by the implementor while constants and isolation are now solved (you can even use NPM libraries for style manipulation now).
Update: One useful fact that may not be immediately obvious. By using hashes as the class name, it means output is always consistent across front-end and back-end (e.g. in isomorphic applications).
Free Style Output Targets
Now that you understand how Free Style works, the output targets should make a lot more sense. By default, Free Style exposes a feature-rich implementation ready for third-parties to build on top. To use it today, you must create instances of
FreeStyle (using
create()), merge any other instances and use
getStyles to get the CSS output. There's an
inject() method, which will take the result of
getStyles and wrap it in
<style /> in the
<head />.
Currently, there are two other implementations of output targets. The first is
easy-style, a simple wrapper around the complex functionality in Free Style that abstracts away multiple
FreeStyle instances. It exposes three core methods -
style,
rule,
keyframe - avoiding the concept of multiple instances, which makes it suitable for most web-apps. There's also
react-free-style, which extends the core
FreeStyle class with the ability to wrap React components and use React's
context for collecting all the styles used in the application. This is an interesting feature with interesting repercussions, such as only styles for the components on screen will be output in CSS (useful for minimizing the transfer size of isomorphic applications).
Other CSS-in-JS Solutions
A number of other CSS-in-JS solutions also exist, including
radium,
react-style and
jss. Radium takes an all JavaScript approach, while
jss and
react-style are more familiar. Where
jss and
react-style differ to
free-style is the approaches they take. Both went for namespacing in CSS by generating unique names instead of hashes. They also both went with
StyleSheet instances that you create once, while
free-stylemakes you register each style individually (and for a good reason, it allows linters to detect when a style is no longer used as it'll appear as dead code). They may also restrict some subset of CSS that you can actually use for different reasons. As far as I can tell, neither go too much further into CSS tooling concepts such as style de-duping and minifying, which
free-style gives you for free with hashes.
#1 for location developers in quality, price and choice, switch to HERE.
Published at DZone with permission of Blake Embrey , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/introduction-to-free-style-and-css-in-js | CC-MAIN-2019-09 | refinedweb | 1,325 | 61.16 |
2018-09-14 15:57:16 8 Comments
I'm using the following function to remove a specific string pattern from files in a directory:
import os for filename in os.listdir(path): os.rename(filename, filename.replace(r'^[A-Z]\d\d\s-\s[A-Z]\d\d\s-\s$', ''))
The pattern is as follows, where A is any capital letter, and # is any number between 0-9:
A## - A## -
My regex matches this format on regex101. When I run the above function, it completes without error, however no directory names change. Where am I going wrong?
Related Questions
Sponsored Content
43 Answered Questions
[SOLVED] How to replace all occurrences of a string in JavaScript?
- 2009-07-17 17:53:46
- Click Upvote
- 2522955 View
- 3255 Score
- 43 Answer
- Tags: javascript node.js regex string replace
52 Answered Questions
[SOLVED] Calling an external command in Python
- 2008-09-18 01:35:30
- freshWoWer
- 2637173 View
- 3728 Score
- 52 Answer
- Tags: python shell command subprocess external
16 Answered Questions
[SOLVED] What are metaclasses in Python?
- 2008-09-19 06:10:46
- e-satis
- 628500 View
- 4674 Score
- 16 Answer
- Tags: python oop metaclass python-datamodel
25 Answered Questions
[SOLVED] How can I safely create a nested directory in Python?
- 2008-11-07 18:56:45
- Parand
- 1959847 View
- 3061 Score
- 25 Answer
- Tags: python exception path directory operating-system
32 Answered Questions
[SOLVED] How do I list all files of a directory?
48 Answered Questions
[SOLVED] Replacements for switch statement in Python?
23 Answered Questions
[SOLVED] Does Python have a ternary conditional operator?
- 2008-12-27 08:32:18
- Devoted
- 1435538 View
- 4547 Score
- 23 Answer
- Tags: python operators ternary-operator conditional-operator python-2.5
14 Answered Questions
[SOLVED] Does Python have a string 'contains' substring method?
10 Answered Questions
[SOLVED] Is there a way to substring a string in Python?
25 Answered Questions
[SOLVED] What is the difference between @staticmethod and @classmethod in Python?
- 2008-09-25 21:01:57
- Daryl Spitzer
- 576822 View
- 2745 Score
- 25 Answer
- Tags: python oop methods python-decorators
@Chillie 2018-09-14 15:59:59
replacestring method does not support regular expressions.
You need to import the
remodule and use its sub method.
So your code might look like this:
But don't forget about flags and such.
Edit: Removed
$from the pattern as the filenames don't end there.
@Laurie Bamber 2018-09-14 16:05:57
Odd, I've tried this but it's still not working for some reason, it completes without error however nothings changed.
@Chillie 2018-09-14 16:07:42
@LaurieBamber could you give us a couple sample filenames?
@Laurie Bamber 2018-09-14 16:08:17
File 1: 'S01 - E01 - Something Here', File 2: 'S01 - E02 - Something Different'.... File k: 'S07 - E06 - Something Different'
@Chillie 2018-09-14 16:09:32
@LaurieBamber You have the string end marker (
$) in your pattern, but the file name does not end there, so it doesn't match. Just remove the
$from your pattern.
@Laurie Bamber 2018-09-14 16:10:07
Gentleman, thats done it. Thanks for the help.
@Chillie 2018-09-14 16:15:51
@LaurieBamber Happy to help. Don't forget to mark the question as solved (answer as accepted). =)
@mad_ 2018-09-14 16:03:28
@Laurie Bamber 2018-09-14 16:07:47
Thanks, this still isn't working for some reason with my code though.
@mad_ 2018-09-14 16:09:35
can you print the output after executing the above code?
@Laurie Bamber 2018-09-14 16:11:17
My mistake, as pointed out by Chillie I was incorrectly using the $ symbol to finish the pattern. Thanks for the help anyway. | https://tutel.me/c/programming/questions/52335490/python++regex+not+replacing+directory+strings | CC-MAIN-2018-39 | refinedweb | 619 | 63.7 |
2.2.2.58 Read
The Read element is an optional element that specifies whether the e-mail message has been viewed by the current recipient. It is defined as an element in the Email namespace.
A value of 1 (TRUE) indicates the e-mail message has been viewed by the current recipient; a value of 0 (zero, meaning FALSE) indicates the e-mail message has not been viewed by the current recipient.
The value of this element is a boolean data type, as specified in [MS-ASDTYPE] section 2.1. If a non-boolean value is used in a Sync command request ([MS-ASCMD] section 2.2.1.21), the server responds with Status element ([MS-ASCMD] section 2.2.3.177.17) value of 6 in the Sync command response.. | https://msdn.microsoft.com/en-us/library/ee201839(v=exchg.80).aspx | CC-MAIN-2017-30 | refinedweb | 132 | 63.39 |
open(2) open(2)
NAME
open() - open file for reading or writing
SYNOPSIS
#include <<<<fcntl.h>>>>
int open(const char *path, int oflag, ... /* [mode_t mode] */ );
Remarks
The ANSI C ", ..." construct specifies a variable length argument list
whose optional member is given in the associated comment (/* */).
DESCRIPTION
The open() system call opens a file descriptor for the named file and
sets the file status flags according to the value of oflag.
The path argument points to a path name naming a file, and must not
exceed PATH_MAX bytes in length.
The oflag argument is a value that is the bitwise inclusive OR of
flags listed in "Read-Write Flags," "General Flags," and "Synchronized
I/O Flags" below.
The optional mode argument is only effective when the O_CREAT flag is
specified.
The file pointer used to mark the current position within the file is
set to the beginning of the file.
The new file descriptor is set to remain open across exec*() system
calls. See fcntl(2).
Read-Write Flags
Exactly one of the O_RDONLY, O_WRONLY, or O_RDWR flags must be used in
composing the value of oflag. If none or more than one is used, the
behavior is undefined.
O_RDONLY Open for reading only.
O_WRONLY Open for writing only.
O_RDWR Open for reading and writing.
General Flags
Several of the flags listed below can be changed with the fcntl()
system call while the file is open. See fcntl(2) and fcntl(5) for
details.
O_APPEND If set, the file offset is set to the end of the
file prior to each write.
Hewlett-Packard Company - 1 - HP-UX Release 11i: November 2000
open(2) open(2)
O_CREAT If the file exists, this flag has no effect,
except as noted under O_EXCL below. Otherwise,
the owner ID of the file is set to the effective
user ID of the process, the group ID of the file
is set to the effective group ID of the process if
the set-group-ID bit of the parent directory is
not set, or to the group ID of the parent
directory if the set-group-ID bit of the parent
directory is set.
The file access permission bits of the new file
mode are set to the value of mode, modified as
follows (see creat(2)):
+)).
O_EXCL If O_EXCL and O_CREAT are set and the file exists,
open() fails.
O_LARGEFILE This is a non-standard flag which may be used by
32-bit applications to access files larger than 2
GB. See creat64(2).
O_NDELAY This flag might affect subsequent reads and
writes. See read(2) and write(2).
When opening a FIFO with O_RDONLY or O_WRONLY set:
If O_NDELAY is set:
A read-only open() returns without
delay.
A write-only open() returns an error if
no process currently has the file open
for reading.
Hewlett-Packard Company - 2 - HP-UX Release 11i: November 2000
open(2) open(2)
If O_NDELAY is clear:
A read-only open() does not return until
a process opens the file for writing.
A write-only open() does not return
until a process opens the file for
reading.
When opening a file associated with a
communication line:
If O_NDELAY is set:
The open() returns without waiting for
carrier.
If O_NDELAY is clear:
The open() does not return until carrier
is present.
O_NOCTTY If set, and path identifies a terminal device,
open() does not cause the terminal to become the
controlling terminal for the process.
O_NONBLOCK Same effect as O_NDELAY for open(2), but slightly
different effect in read(2) and write(2). If both
O_NONBLOCK and O_NDELAY are specified, O_NONBLOCK
takes precedence.
O_TRUNC If the file exists, its length is truncated to 0
and the mode and owner are unchanged.
Synchronized I/O Flags
Together, the O_DSYNC, O_RSYNC, and O_SYNC flags constitute support
for Synchronized I/O. These flags are ignored for files other than
ordinary files and block special files on those systems that permit
I/O to block special devices (see pathconf(2)). If both the O_DSYNC
and O_SYNC flags are set, the effect is as if only the O_SYNC flag was
set. The O_RSYNC flag is ignored if it is not set along with the
O_DSYNC or O_SYNC flag.
O_DSYNC
If a file is opened with O_DSYNC or that flag is set with
the F_SETFL option of fcntl(), writes to that file by the
process block until the data specified in the write request
and all file attributes required to retrieve the data are
written to the disk. File attributes that are not necessary
for data retrieval (access time, modification time, status
change time) are not necessarily written to the disk prior
Hewlett-Packard Company - 3 - HP-UX Release 11i: November 2000
open(2) open(2)
to returning to the calling process.
O_SYNC
Identical to O_DSYNC, with the addition that all file
attributes changed by the write operation (including access
time, modification time, and status change time) are also
written to the disk prior to returning to the calling
process.
O_RSYNC|O_DSYNC (specified together)
Identical to O_DSYNC for file system writes.
For file system reads, the calling process blocks until the
data being read and all file attributes required to retrieve
the data are the same as their image on disk. Writes
pending on the data to be read are executed prior to
returning to the calling process.
O_RSYNC|O_SYNC (specified together)
Identical to O_SYNC for file system writes.
Identical to O_RSYNC|O_DSYNC for file system reads, with the
addition that all file attributes changed by the read
operation (including access time, modification time, and
status change time) too are the same as their image on disk.
RETURN VALUE
open() returns the following values:
n Successful completion. n is a file descriptor for the
opened file.
-1 Failure. errno is set to indicate the error.
ERRORS
If open() fails, errno is set to one of the following values.
[EACCES] oflag permission is denied for the named file.
[EACCES] A component of the path prefix denies search
permission.
[EACCES] The file does not exist and the directory in which
the file is to be created does not permit writing.
[EACCES] O_TRUNC is specified and write permission is
denied.
[EAGAIN] The file exists, enforcement mode file/record
locking is set (see chmod(2)), there are
outstanding record locks on the file with the
lockf() or fcntl() system calls, and O_TRUNC is
Hewlett-Packard Company - 4 - HP-UX Release 11i: November 2000
open(2) open(2)
set.
[EDQUOT] User's disk quota block or inode limit has been
reached for this file system.
[EEXIST] O_CREAT and O_EXCL are set and the named file
exists.
[EFAULT] path points outside the allocated address space of
the process.
[EINTR] A signal was caught during the open() system call,
and the system call was not restarted (see
signal(5) and sigvector(2)).
[EINVAL] oflag specifies both O_WRONLY and O_RDWR.
[EISDIR] The named file is a directory and oflag is write
or read/write.
[ELOOP] Too many symbolic links are encountered in
translating the path name.
[EMFILE] The maximum number of file descriptors allowed are
currently open.
[ENAMETOOLONG] The length of the specified path name exceeds
PATH_MAX bytes, or the length of a component of
the path name exceeds NAME_MAX bytes while
_POSIX_NO_TRUNC is in effect.
[ENFILE] The system file table is full.
[ENOENT] The named file does not exist (for example, path
is null or a component of path does not exist, or
the file itself does not exist and O_CREAT is not
set).
[ENOTDIR] A component of the path prefix is not a directory.
[ENXIO] O_NDELAY is set, the named file is a FIFO,
O_WRONLY is set, and no process has the file open
for reading.
[ENXIO] The named file is a character special or block
special file, and the device associated with this
special file either does not exist, or the driver
for this device has not been configured into the
kernel.
Hewlett-Packard Company - 5 - HP-UX Release 11i: November 2000
open(2) open(2)
[ENOSPC] O_CREAT is specified, the file does not already
exist, and the directory that would contain the
file cannot be extended.
[EOVERFLOW] The named file is a regular file and the size of
the file cannot be represented correctly in an
object of size off_t.
[EROFS] The named file resides on a read-only file system
and oflag is write or read/write.
[ETXTBSY] The file is open for execution and oflag is write
or read/write. Normal executable files are only
open for a short time when they start execution.
Other executable file types can be kept open for a
long time, or indefinitely under some
circumstances.
EXAMPLES
The following call to open() opens file inputfile for reading only and
returns a file descriptor for inputfile. For an example of reading
from file inputfile, see the read(2) manual entry.
int infd;
infd = open ("inputfile", O_RDONLY);
The following call to open() opens file outputfile for writing and
returns a file descriptor for outputfile. For an example of
preallocating disk space for outputfile, see the prealloc(2) manual
entry. For an example of writing to outputfile, see the write(2)
manual entry.
int outfd;
outfd = open ("outputfile", O_WRONLY);
The following call opens file iofile for synchronized I/O file
integrity for reads and writes.
int iofd;
iofd = open ("iofile", O_RDWR|O_SYNC|O_RSYNC);
AUTHOR
open() was developed by HP, AT&T, and the University of California,
Berkeley.
SEE ALSO
acl(2), chmod(2), close(2), creat(2), dup(2), fcntl(2), lockf(2),
lseek(2), creat64(2), pathconf(2), read(2), select(2), umask(2),
write(2), setacl(2), acl(5), aclv(5), fcntl(5), signal(5), unistd(5).
Hewlett-Packard Company - 6 - HP-UX Release 11i: November 2000
open(2) open(2)
STANDARDS CONFORMANCE
open(): AES, SVID2, SVID3, XPG2, XPG3, XPG4, FIPS 151-2, POSIX.1,
POSIX.4
Hewlett-Packard Company - 7 - HP-UX Release 11i: November 2000 | http://modman.unixdev.net/index.php?page=open&sektion=2&manpath=HP-UX-11.11 | CC-MAIN-2013-20 | refinedweb | 1,653 | 61.56 |
Python Random Number Generator – Python Random Module
Ever wondered how you get a random dice number every time you play online Ludo. Or how shuffling your song playlist works?
Turns out, your gaming or music app generates random numbers underneath the hood. And based on this random number, it decides which dice number to display or which song to play.
But doesn’t “random” shuffling your playlist seems far from random? This is because these random numbers are not truly random, rather they are pseudo-random numbers. A pseudo-random number generator (PRNG) is an algorithm that generates seemingly random numbers. Let’s learn about Python Random Number Generator.
Keeping you updated with latest technology trends, Join TechVidvan on Telegram
Python Random Module
Python generates these pseudo-random numbers using the random module. But if you head to the Python docs for the documentation of this module, you’ll see a warning there –
“The pseudo-random generators of this module should not be used for security purposes.”
Apparently, the pseudo-random numbers that the random module generates are not cryptographically secure. Python uses Mersenne Twister algorithm for generating random numbers. But this algorithm is completely deterministic, making it an unsuitable choice for cryptographic purposes.
Now let’s move on to various built-in functions, under the random module. These functions are capable of generating pseudo-random numbers under different scenarios.
Built-in Functions for Python Random Number Generator
The list below contains a brief description of the functions we’re going to cover for random number generation.
Let’s now dive deeper into each of these functions.
1. Python randint()
This function generates an integer between the specified limits. It takes two arguments x and y and produces integer i such that x <= i <= y.
>>> import random >>> random.randint(3, 6)
Output:
>>>
2. Python randrange()
This function takes 2 arguments start and stop along with 1 optional argument step. And returns a random integer in the range(start, stop, step). The default value of step is 1.
>>> import random >>> random.randrange(1, 10, 2)
Output:
>>>
In this example, the output will be a random integer from [1, 3, 5, 7, 9] as the start and stop values are 1 and 10 respectively and the step value is 2.
3. Python random()
This function returns a random floating-point number between 0 and 1 (excluding 0 and 1).
>>> import random >>> random.random()
Output:
>>>
4. Python uniform()
This function lets you generate a floating-point number within a specific limit. It takes two arguments to start and stop and then returns a float between the start and stop (including the limits).
>>> import random >>> random.uniform(6, 9)
Output:
>>>
5. Python choice()
If you want to choose a random element from a specific sequence, you can use this function. It takes one argument – the sequence. And it returns a random element from the sequence.
>>> import random >>> seq = (12, 33, 67, 55, 78, 90, 34, 67, 88) >>> random.choice(seq)
Output:
>>>
6. Python sample()
If you want more than one random element from a sequence, you can use sample(). It takes two arguments population and k, where population is a sequence and k is an integer. Then it returns a list of k random elements from the sequence population.
>>> import random >>> seq = (12, 33, 67, 55, 78, 90, 34, 67, 88) >>> random.sample(seq, 5)
Output:
>>>
7. Python shuffle()
This function takes one argument – a list. It then shuffles the elements of the list in place and returns None.
>>> import random >>> l = [10, 20, 30, 40, 50] >>> random.shuffle(l) >>> print(l)
Output:
8. Python seed()
You can use this function when you need to generate the same sequence of random numbers multiple times. It takes one argument- the seed value. This value initializes a pseudo-random number generator. Now, whenever you call the seed() function with this seed value, it will produce the exact same sequence of random numbers.
import random # seed value = 3 random.seed(3) for i in range(3): print(random.random(), end = ' ') print('\n') # seed value = 8 random.seed(8) for i in range(3): print(random.random(), end = ' ') print('\n') # seed value again = 3 random.seed(3) for i in range(3): print(random.random(), end = ' ') print('\n')
Output:
0.23796462709189137 0.5442292252959519 0.369955166548079250.2267058593810488 0.9622950358343828 0.12633089865085956
0.23796462709189137 0.5442292252959519 0.36995516654807925
>>>
Notice that for seed = 3, the same sequence gets produced every time.
Summary
In this article, we looked at some majorly used functions which are capable of generating pseudo-random numbers in Python. We saw the functions we can use for generating integers. We also saw how to generate random floating-point values. Lastly, we saw how we can use built-in functions to select random elements out of a sequence. You can build various small projects using random numbers. You can build games, you can use random numbers while working with statistics in Data Science and much more. Use your creativity and play with numbers!
Hope you enjoyed the Python Random Number Generator tutorial!! | https://techvidvan.com/tutorials/python-random-number-generator/ | CC-MAIN-2020-45 | refinedweb | 843 | 59.6 |
A Django application that allows users to follow django model objects
Project Description
The followit django app allows to easily set up a capability for the site users to follow various things on the site, represented by django model objects.
Setup
To the INSTALLED_APPS in your settings.py add entry 'followit'. Then, in your apps’ models.py, probably at the end of the file, add:
import followit followit.register(Thing)
Once that is done, in your shell run:
python manage.py syncdb
Not it will be possible for the user to follow instances of SomeModel.
If you decide to allow following another model, just add another followit.register(...) statement to the models.py and re-run the syncdb.
Usage
Examples below show how to use followit (assuming that model Thing is registered with followit in your models.py.
bob.follow_thing(x) bob.unfollow_thing(x) things = bob.get_followed_things() x_followers = x.get_followers()
Note that followit does not yet provide view functions of url routing relevant to following or unfollowing items, nor template tags.
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/followit/ | CC-MAIN-2018-09 | refinedweb | 192 | 51.85 |
#include <deal.II/grid/tria_description.h>
The CellData class (and the related SubCellData class) is used to provide a comprehensive, but minimal, description of the cells when creating a triangulation via Triangulation::create_triangulation(). Specifically, each CellData object – describing one cell in a triangulation – has member variables for the indices of the \(2^d\) vertices (the actual coordinates of the vertices are described in a separate vector passed to Triangulation::create_triangulation(), so the CellData object only needs to store indices into that vector), the material id of the cell, which applications can use to describe which part of the domain a cell belongs to (see the glossary entry on material ids), and a manifold id that identifies the geometry object responsible for this cell, i.e., the manifold this object belongs to (see the glossary entry on manifold ids).
This structure is also used to represent data for faces and edges when used as a member of the SubCellData class. In this case, the template argument structdim of an object will be less than the dimension dim of the triangulation. If this is so, then the vertices array represents the indices of the vertices of one face or edge of one of the cells passed to Triangulation::create_triangulation(). Furthermore, for faces the material id has no meaning, and the material_id field is reused to store a boundary_id instead, designating which part of the boundary the face or edge belongs to (see the glossary entry on boundary ids).
An example showing how this class can be used is in the
create_coarse_grid() function of step-14. There are also many more use cases in the implementation of the functions of the GridGenerator namespace.
Definition at line 67 of file tria_description.h.
Default constructor. Sets the member variables to the following values:
Definition at line 48 of file tria.cc.
Boost serialization function
Definition at line 233 of file tria_description.h.
Indices of the vertices of this cell. These indices correspond to entries in the vector of vertex locations passed to Triangulation::create_triangulation().
Definition at line 74 of file tria_description.h.
The material id of the cell being described. See the documentation of the CellData class for examples of how to use this field.
This variable can only be used if the current object is used to describe a cell, i.e., if structdim equals the dimension dim of a triangulation.
Definition at line 93 of file tria_description.h.
The boundary id of a face or edge being described. See the documentation of the CellData class for examples of how to use this field.
This variable can only be used if the current object is used to describe a face or edge, i.e., if structdim is less than the dimension dim of a triangulation. In this case, the CellData object this variable belongs to will be part of a SubCellData object.
Definition at line 104 of file tria_description.h.
Material or boundary indicator of this cell. This field is a union that stores either a boundary or a material id, depending on whether the current object is used to describe a cell (in a vector of CellData objects) or a face or edge (as part of a SubCellData object).
Manifold identifier of this object. This identifier should be used to identify the manifold to which this object belongs, and from which this object will collect information on how to add points upon refinement.
See the documentation of the CellData class for examples of how to use this field.
Definition at line 115 of file tria_description.h. | https://dealii.org/current/doxygen/deal.II/structCellData.html | CC-MAIN-2021-25 | refinedweb | 599 | 52.49 |
Source: Deep Learning on Medium
Weights&Biases: An introductory guide
A big slice of the time spent training a Machine Learning model goes into tracking, saving and organizing runs. If we don't put enough effort into creating a stable system around our models, we soon lose control of the project, ending up retraining the same model multiple times because we didn't save it or didn't record its performance at the end of the training process. As always in Computer Science, poor design leads to a lot of wasted time later on.
But each project has its own characteristics and mechanisms. It is not always possible to design an environment that can track all the different models of your project, from the simple Linear Regression of your random personal project to the last state-of-the-art Transformer Neural Network from OpenAI. Unless, there is an entire team dedicating work, time and money into that.
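To make that bookkeeping concrete, this is roughly the minimal hand-rolled tracker many of us end up writing before adopting a platform (a stdlib-only sketch for illustration, not modeled on any real library's API):

```python
import json
import time


class RunTracker:
    """Toy run tracker: collects per-epoch metrics and dumps them to JSON.
    A stdlib-only illustration of the bookkeeping that experiment-tracking
    platforms automate; not related to any real library's API."""

    def __init__(self, run_name):
        self.run_name = run_name
        self.started_at = time.time()
        self.history = []

    def log(self, epoch, **metrics):
        # Record one row of metrics for a given epoch.
        self.history.append({'epoch': epoch, **metrics})

    def summary(self):
        # The metrics of the last logged epoch (empty if nothing was logged).
        return self.history[-1] if self.history else {}

    def save(self, path):
        with open(path, 'w') as f:
            json.dump({'run': self.run_name,
                       'started_at': self.started_at,
                       'history': self.history}, f)
```

Every new need (comparing runs, sharing results, recording system info) grows this ad-hoc code further, which is exactly the burden dedicated platforms take away.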
A huge number of tools and platforms have been developed recently to track the performance of Machine Learning models over time and across different executions.
Weights&Biases
Weights&Biases is a platform that helps developers working in Deep Learning. By adding a few lines of code to your script, you can start tracking almost everything about your models: performance, architecture and parameters used, system information (e.g., number of CPUs/GPUs used), running time and much more. The code is designed to be non-invasive, so it requires minimum effort to enable or disable the tracking.
The information will be sent to the dedicated project page on the W&B website, from which you can set up cool visualizations, aggregate information and compare the trained models. The last time I joined a group to participate in a Kaggle competition I didn't know about this platform. Sharing models and results was incredibly difficult, and after losing too much time trying to understand which version of the code had produced a certain result, we had to spend even more time setting up an environment to keep track of that.
One of the advantages of storing remotely the data is that it is easy to collaborate on the same project and share the results. The platform has launched a service called Benchmarks, to allow people to share their implementation for a specific task. In this way, when someone wants to start working on that task, he/she already has access to a list of approaches (state-of-the-art included), with associated implementation and scores.
Tracking runs with W&B
There would be so much more to talk about, but it’s time to show a basic example of how it works!
Neural Network classifier
The task we will tackle is the classification of hand-written digits from the MNIST dataset. It is one of the most used datasets, and you will find a ton of tutorials online if you don't have much experience with neural networks. We will use the Keras library to implement our simple neural network, but even if you have never used it, don't worry: we can treat the implementation as a black box for now and still appreciate all the features of Weights&Biases!
As I said, I will not go into detail about the code implementation because it is not the scope of this blog post. It is not the best code to solve the task, but I wanted to keep it as simple as possible. The neural network is composed of an input layer of size 28*28=784 (the 28×28 input image flattened to a single vector), a single hidden layer of size 516 and an output layer of size 10 (the number of digits, i.e. the classes to predict).
import tensorflow.keras as keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import RMSprop

num_classes = 10
epochs = 20

# split the data between train and validation sets
(x_train, y_train), (x_valid, y_valid) = keras.datasets.mnist.load_data()

# transform each 28 by 28 pixel image into a vector of length 28*28
x_train = x_train.reshape(-1, 784)
x_valid = x_valid.reshape(-1, 784)

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_valid = keras.utils.to_categorical(y_valid, num_classes)

# define a simple neural network with 1 hidden layer of size 516
model = Sequential()
model.add(Dense(516, activation='relu', input_shape=(28*28,)))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])

model.fit(x_train, y_train,
epochs=epochs,
verbose=1,
validation_data=(x_valid, y_valid))
Add tracking code
Now it's time to introduce the Weights&Biases tracker to our code!
The steps are the following:
- Register on the platform. The service is free unless you want to use big, enterprise-level models or to create collaborative but private projects. For students, academics and open source projects, the latter limitation is not present. After signing up you will receive an API key, which is required to link your code to your account.
- Create a new project. A project can be public, meaning that anyone has read access but cannot upload runs; open, in which anyone also has the possibility to add runs and reports; or private, in which only the owner has access.
- Install the Python wandb library to automatically track the training process, and then connect to your account using the API key. The process is made easy by pip, the Python package manager.
pip install --upgrade wandb
wandb login YOUR_API_KEY
4. Insert the code into our script to start tracking.
The new lines to our code are highlighted in the following script.
import tensorflow.keras as keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import RMSprop
import wandb
from wandb.keras import WandbCallback
wandb.init(project="introduction_wandb")

num_classes = 10

...

model.fit(x_train, y_train,
epochs=epochs,
verbose=1,
validation_data=(x_valid, y_valid),
callbacks=[WandbCallback()])
Visualization
By running the code, the model will train for 20 epochs, reaching a validation accuracy of 97.45%, but that's not the only information that W&B stores. Let's go and look at the project page.
The run we executed is now shown on the left side, with a random name (it is possible to change the name from the page, or to set it from the Python code).
If we click on upbeat-yogurt-1 we have access to a lot of information that W&B has automatically recorded.
The Overview section contains data about the running time, the training and validation accuracy of the model and more. In the Python script it is possible to add new parameters to be tracked, or for instance, save which activation function has been used, but I will not go in-depth about it.
If we click on the Chart section we see plots of how the metrics of our model changed from epoch to epoch. The most interesting ones in this case are the training and validation accuracy.
If we run the script again, with a different setting (e.g. changing the number of layers or the size of the layers) it is possible to see the plots one on top of the others and directly compare their performance.
In the System section we have information about, for example, how much CPU, Memory, and GPU have been used by the model. This is becoming more and more important in the Deep Learning field, because a modest improvement in accuracy may not be useful if it requires a huge increase in resources.
The Model section shows a summary of the neural network architecture.
Another handy section is Logs, which shows all the shell output produced during the training process. It is incredibly useful for checking warnings and errors even when we no longer have access to the terminal.
And finally, the Files section, from which you can download the pre-trained model and lots of other data.
In this introduction to Weights&Biases, I showed you only the basic features of the platform. It's incredible how many tricks I have discovered while using it; they will be shown in future posts, such as GitHub integration, collaborative projects, automatic hyperparameter tuning, reports, gradient visualization, alerts through Slack and many more.
The project page on the Weights&Biases platform is public, so you can access and explore it even if you don't have an account.
I hope you enjoyed this guide! | https://mc.ai/introduction-to-weightsbiases/ | CC-MAIN-2020-29 | refinedweb | 1,433 | 53.31 |
I'm trying to figure out how to transfer several methods in a cash register into one void method. Here is my code.
/**
 * A cash register totals up sales and computes change due.
 */
public class CashRegister {
    private double purchase;
    private double payment;

    /**
     * Constructs a cash register with no money in it.
     */
    public CashRegister() {
        purchase = 0;
        payment = 0;
    }

    /**
     * Records the sale of an item.
     * @param amount the price of the item
     */
    public void recordPurchase(double amount) {
        double total = purchase + amount;
        purchase = total;
    }

    /**
     * Enters the payment received from the customer.
     * @param amount the amount of the payment
     */
    public void enterPayment(double amount) {
        payment = amount;
    }

    /**
     * Computes the change due, resets the machine for the next customer, and
     * prints a receipt.
     */
    public void printReceipt() {
        double change = payment - purchase;
        System.out.printf(" Total Sales: $ %.2f\n", purchase);
        System.out.printf(" Payment: $ %.2f\n", payment);
        System.out.printf("==========================\n");
        if (payment < purchase) {
            System.out.printf("Not Enough Payment: $ %.2f\n\n", change);
        } else {
            purchase = 0;
            payment = 0;
            System.out.printf(" Change Due: $ %.2f\n\n", change);
        }
    }
}
This is the class with the separate method. I want the methods in that to run in the code below.
public class RunCashRegister {
    public void run() {
    }
}
Help would be most appreciated as I am very new to java. Thank you. | https://www.daniweb.com/programming/software-development/threads/313284/cash-register-help-i-m-bad | CC-MAIN-2017-30 | refinedweb | 211 | 69.18 |
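One way to do what the question asks: let run() construct a CashRegister and call its methods in order. Below is a sketch; the purchase amounts are made up for illustration, and CashRegister is repeated in condensed form (same behavior as the class posted above) only so the example compiles as a single file. In your project you would keep your existing class and just add RunCashRegister.

```java
// Condensed copy of the CashRegister from the question, so this file
// stands alone. Behavior is unchanged.
class CashRegister {
    private double purchase;
    private double payment;

    public void recordPurchase(double amount) {
        purchase = purchase + amount;
    }

    public void enterPayment(double amount) {
        payment = amount;
    }

    public void printReceipt() {
        double change = payment - purchase;
        System.out.printf(" Total Sales: $ %.2f\n", purchase);
        System.out.printf(" Payment: $ %.2f\n", payment);
        System.out.printf("==========================\n");
        if (payment < purchase) {
            System.out.printf("Not Enough Payment: $ %.2f\n\n", change);
        } else {
            purchase = 0;
            payment = 0;
            System.out.printf(" Change Due: $ %.2f\n\n", change);
        }
    }
}

class RunCashRegister {
    // A single void method that drives the whole register.
    public void run() {
        CashRegister register = new CashRegister();
        register.recordPurchase(19.99); // example amounts
        register.recordPurchase(5.01);
        register.enterPayment(30.00);
        register.printReceipt();
    }

    public static void main(String[] args) {
        new RunCashRegister().run();
    }
}
```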
!
Join the conversationAdd Commentn"));
#endif
#ifdef _DEBUG
OutputDebugString(_T("_DEBUG is defined.\r\n"));
#endif

'Can't find the identifier' message in the statusbar whenever I'm trying to get parameter information on calls to BGL's graph layout algorithms. I have 'using namespace boost' at the file scope, preceding the code causing the Intellisense to fail. Similarly, I can't get autocompletion when I type 'circle_', for example. I'd expect 'circle
Agreed. Please let us set the colors of the ide windows/subwindows. Please let us set the IDE to a flat color scheme (no gradients, no reflected icons, etc.)
This greatly helps those of us who, due to vision issues, work more effectively with a clean, uncluttered interface.
This is simply the same usability testing/evaluation done when comparing how a 22 year old uses the product and how a 50 year old uses the product. The 22 year old is more tolerant of low level changes in the visual interface while the 50 year old is much less tolerant of visible changes in the user interface.
One easy visual test is to take a snapshot of the IDE, run it through an image edge detection filter (Sobel) and see just how many lines and how dense the lines are in the result. A high density of lines (edges) indicates a noisy user interface. This noise reduces productivity.
I reduce the number of buttons in the IDE to 15 or fewer in VS 2008 to reduce clutter and remove tools I rarely or never use.
For a "targeted correctness" test, try using Intellisense from within a Google Test macro (TEST_F or TEST). I’ve found Intellisense to almost never be able to do any auto-completions correctly from within such a macro.
Hi Pingpong,
Thanks for providing the repro information.
We will work on it and update you as soon as possible.
Some minor specific suggestions:
– Incorporate a random number of file switches during the retyping of the file, to simulate moving between files. Make sure you include all the file types which VS processes differently (eg: multiple languages, .xml files, browser help windows, etc.).
– Add some tests to jump (using Intellisense commands such as ‘go to definition’) into STL/boost/MFC header files, and make sure all the objects in the headers are being evaluated correctly, and in the context of the source file you came from.
– Use temporary macros to rename some symbols (eg: find in files -> temp macro record -> Next in Results -> do replacement (paste or manual typing with some arrow-based navigation) -> temp macro end -> playback repeatedly until end of file). Make sure both the macro steps are fast, and the symbol data is still correct.
– Use macros to define some symbols (eg: DEFINE_SYMBOL_WITH_POSTFIX( Symbol, Postfix ) -> class Symbol##Postfix { […] }: ), and test Intellisense response and correctness as macro is modified.
Just some thoughts, hope they help.
I'm using VS2005, and now we're moving to VS2008. All of my teammates are facing problems with Intellisense, so we've disabled it by renaming the necessary DLLs. We now use Visual Assist, which quite sadly is a bit expensive. After spending money on the IDE it's quite sad that we've got to spend more on Intellisense when the IDE itself came bundled with one.
I don't know if you guys stress tested VS2005/VS2008 in the same manner; if you did, then you need to change the tests, since they weren't productive.
The whole IDE looks dumb/useless without this feature. Please make sure VS2010 works well in this respect.
Thanks,
Nibu
C++ -> File->New->Project from existing code.. doesn't work properly. The 'Browse' and 'Add' buttons have no actions.
Besides, working with external source files is a horror – I mean in Solution Explorer – its view by Header Files and Source Files is totally inflexible!!!
It would be great if I could add a folder (with many subfolders) of sources 'as a link' from any location, and Solution Explorer showed it with a properly hierarchical SUBFOLDERS TREE, not just files in alphabetical order!! I know it's possible to change the view to 'Show All Files' mode, but in this mode I cannot hide unnecessary folders. Look how it's done in other IDEs – for example Code::Blocks.
Another thing: adding about 1000 files and removing them from a project freezes my dual core with 4GB for a few minutes – it looks like VS updates the tree view in Solution Explorer for every processed file. Suspend the Solution Explorer tree view updating while files are being added.
Nibu Thomas MVP: Yes, Visual Assist is now quite an expensive product (they were quite cheap when they started), but it is totally worth it!
We've been using it for several years, on VC6 and now on VS2008. It's just an overall good product.
It does have bugs and it isn’t always accurate, but it is 100 times better than VS’s Intellisense. Plus it has lots of other time saving features that pay for themselves: refactoring, switching between h and cpp, enhanced highlighting, etc. | https://blogs.msdn.microsoft.com/vcblog/2009/05/11/vc-ide-design-time-stress-testing/ | CC-MAIN-2018-22 | refinedweb | 843 | 62.48 |
DFS_INFO_300 structure
Contains the name and type (domain-based or stand-alone) of a DFS namespace.
Syntax

typedef struct _DFS_INFO_300 {
  DWORD  Flags;
  LPWSTR DfsName;
} DFS_INFO_300, *PDFS_INFO_300, *LPDFS_INFO_300;
- Flags
Value that specifies the type of the DFS namespace. This member can be one of the following values.

DFS_VOLUME_FLAVOR_STANDALONE (0x00000100): the namespace is stand-alone.
DFS_VOLUME_FLAVOR_AD_BLOB (0x00000200): the namespace is domain-based.
- DfsName
Pointer to a null-terminated Unicode string that contains the name of a DFS namespace. This member can have one of the following two formats.
The first format is:
\\ServerName\DfsName
where ServerName is the name of the root target server that hosts the stand-alone DFS namespace and DfsName is the name of the DFS namespace.
The second format is:
\\DomainName\DomDfsName
where DomainName is the name of the domain that hosts the domain-based DFS namespace and DomDfsName is the name of the DFS namespace.
Remarks
The DFS functions use the DFS_INFO_300 structure to enumerate DFS namespaces hosted on a machine.
Requirements
See also
- Network Management Overview
- Network Management Structures
- Distributed File System (DFS) Functions
- NetDfsEnum | http://msdn.microsoft.com/en-us/library/windows/desktop/bb524789(v=vs.85).aspx | CC-MAIN-2014-42 | refinedweb | 155 | 54.83 |
Keep the reusable code in one CSS file. Then this file can be included in other components to apply the styles.
As of now, there is no way of Sharing Styles (CSS) between (LWC) Lightning Web Components. The good news is that it will be possible after the Summer ’20 release. Hence if you are reading this post before 18th July 2020, we can only test this feature in Salesforce Pre-release Summer 2020 org. Click this link to sign up for your Pre-release org to test the Summer ’20 features. Now, let’s hop into the good stuff.
Implementation
First, we will create a style component and then add all the CSS properties. Then we have to include this style component in another lightning web component to apply the CSS properties.
Create a component stylesCmp which we will use as a reusable style component. Delete the HTML and JS files and create the CSS file stylesCmp.css for stylesCmp. The name of the CSS file should match the name of the component bundle.
stylesCmp.css
.custom-card {
    background-color: white;
    width: 100%;
    padding: 15px;
}

.custom-card-title {
    font-weight: bold;
    font-size: 20px;
    padding-bottom: 15px;
}

.custom-card-desc {
    color: #15388a;
    font-size: 15px;
}

.custom-card-subtitle {
    font-size: 15px;
    padding-left: 10px;
    display: inline;
    color: red;
}
Now, create a sampleCmp which will have the markup to display a custom card with a title and body. Then apply the CSS defined in stylesCmp to the sampleCmp markup. In our example we have created class properties, so apply the classes to the required tags.
sampleCmp.html
<template>
    <div class="custom-card">
        <div class="custom-card-title">Sharing Styles between Lightning Web Components
            <p class="custom-card-subtitle">Latest Summer '20 Feature</p>
        </div>
        <p class="custom-card-desc">
            This is a Custom Card component. Lightning Web Component <strong>stylesCmp</strong> has all the common CSS and this component is using the CSS defined in <strong>stylesCmp</strong>. Check the post for the implementation of Sharing Styles between Lightning Web Components.
        </p>
    </div>
</template>
Finally, we need to import our style component stylesCmp in sampleCmp. To do this, create a CSS file sampleCmp.css for sampleCmp component and add the below line:
sampleCmp.css
@import "c/stylesCmp";
where c is the default namespace and stylesCmp is the name of the style component we created.
Sharing Styles between Lightning Web Components
This is how our implementation looks:
This is how we can implement the functionality to Share Styles (CSS) between (LWC) Lightning Web Components.
Jes Sorensen <jes@sunsite.dk>:> If Ray wants to fix things, it's just fine with me.I have corrected the Mac dependencies as Ray directed.> Eric> Does this mean you'll take over maintaining the CML2 rules files> Eric> for the m68k port, so I don't have to? That would be wonderful.> > For a start, so far there has been no reason whatsoever to change the> format of definitions.The judgment of the kbuild team is unanimous that you are mistaken on this.That's the five people (excluding me) who wrote and maintained the CML1 code.*They* said that code had to go, Linus has concurred with their judgment, and the argument is over.> So far you have only been irritating developers for no reason. What I> asked you to do is to NOT change whatever config options developers> developers felt were necessary to introduce. If you want to change the> format of the config.in files go ahead. Messing around with the config> options themselves is *not* for you to do, nor are you to impose a> more restrictive space for people to work in.If you persist in misunderstanding what I am doing, you are neithergoing to be able to influence my behavior nor to persuade other peoplethat it is wrong. Listen carefully, please:1. The CML2 system neither changes the CONFIG_ symbol namespace nor assumes any changes in it. (Earlier versions did, but Greg Banks showed me how to avoid needing to.)2. The ruleset changes I have made simplify the configuration process, but they do *not* in any way restrict the space of configurations that are possible. By design, every valid (consistent) configuration in CML1 can be generated in CML2. I treat departures from that rule as rulesfile bugs and fix them (as I just did at Ray Knight's instruction).3. I do not have (nor do I seek) the power to "impose" anything on anyone.You really ought to give CML2 a technical evaluation yourself before youflame me again. Much of what you seem to think you know is not true.-- <a href="">Eric S. 
Raymond</a>According to the National Crime Survey administered by the Bureau ofthe Census and the National Institute of Justice, it was found thatonly 12 percent of those who use a gun to resist assault are injured,as are 17 percent of those who use a gun to resist robbery. Thesepercentages are 27 and 25 percent, respectively, if they passivelycomply with the felon's demands. Three times as many were injured ifthey used other means of resistance. -- G. Kleck, "Policy Lessons from Recent Gun Control Research,"-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2001/5/15/212 | CC-MAIN-2017-13 | refinedweb | 468 | 65.01 |
JavaScript expects you to handle errors with try...catch, but the syntax is somewhat inconvenient:
1. You have to declare result variables separately from your function calls.
const userInput = 'fail'
let json
try {
  json = JSON.parse(userInput)
} catch (err) {
  console.error(err.stack)
}
// Do something with `json`...
Since we declare json separately, we can't declare json as a const binding.
2. try...catch promotes catch-all error handling.
try {
  // A bunch of stuff happens...
  // ...
  // ...
} catch (err) {
  console.log('welp')
  console.error(err.stack)
}
You might want to handle errors in a more fine-grained way. But then you run into the verbosity of problem 1.
Enter dx
dx is a micro utility (it's just a few lines) that addresses the two pain points above.
import { dx } from '@nucleartide/dx'

const [res, err] = dx(JSON.parse)('invalid json')
if (err) {
  console.error(err.stack)
}
It allows you to place your declaration and function call on the same line. And it promotes a granular, per-function error handling style.
It also works with async functions:
import { dx } from '@nucleartide/dx'

function asyncHello(name) {
  return Promise.reject(`hello ${name}`)
}

;(async () => {
  const [res, err] = await dx(asyncHello)('jesse')
  if (err) {
    console.error(err.stack)
  }
})()
Convinced?
Try it out and lemme know what you think! Also feel free to check out the source.
npm install @nucleartide/dx
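Since the post says dx is just a few lines, here is a rough idea of how such a wrapper can work. This is my own sketch for illustration, not the actual @nucleartide/dx source (see its repository for that):

```javascript
// dx-style wrapper sketch: run the wrapped function and turn both thrown
// errors and rejected promises into [result, error] pairs.
function dx(fn) {
  return (...args) => {
    try {
      const res = fn(...args)
      if (res instanceof Promise) {
        // Async case: resolve to [value, null] or [null, error].
        return res.then(r => [r, null], e => [null, e])
      }
      return [res, null]   // sync success
    } catch (err) {
      return [null, err]   // sync failure
    }
  }
}
```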
Discussion (12)
This is kind of like a poor man's Either!
What is this abomination:
😱
You cannot use async/await in the global scope. By wrapping your code in an async function, this trick will let you use await.
Are you sure about that? I know that you can't use await without async, but if I remember correctly I've used an async function in top-level scope a few times 🤔
Anyway, I was thinking about that semicolon at the beginning, whose purpose is to guard against ASI (automatic semicolon insertion); confusing and ugly syntax IMO.
You can test by pasting an await command into the browser console.
Ahh ya. It's just a guard.
Where does the dx name come from? What does it mean?
It could mean a lot of things? Developer Experience, De-Exception, Derivative, Ducks, etc.
I like the first 2 definitions best. Besides that, it's just short and convenient to type.
Hmm, wouldn't importing a complete NPM module add even more verbosity?
You could always copy the module into your project! npm is just there as a convenience.
It still has 18 additional ("verbose") lines. :-)
You JavaScript people need Monads, or some decent metaprogramming, or macros, whatever makes you stop creating libraries from every single useful snippet of code out there
good idea | https://practicaldev-herokuapp-com.global.ssl.fastly.net/nucleartide/javascript-error-handling-with-better-dx-1a57 | CC-MAIN-2022-27 | refinedweb | 434 | 69.79 |
Wikimedia Ethics/Moulton, JWSchmidt's investigation
Note: this page began at User:JWSchmidt/Moulton as a personal study project. Now that it is in the main namespace, it exists as a workshop where everyone can help study what Moulton did at Wikipedia and how Moulton was treated at Wikipedia.
Methods used on this page. The focus of this page is collecting the results of research about the Wikipedia biographical article Rosalind Picard, but investigations into the ethics of the editors of that page, the treatment of those editors by Wikipedia administrators, and the editing of related Wikipedia pages are also relevant and welcome. To get started, feel free to create a page section that holds your views or use the general discussion section of this page. Do not delete the work of others from this page. Do not make edits that substantially alter or disrupt the work of other editors. If you feel that there is an error or problem of any kind on this page, please discuss it, either in your own page section, on the talk page or on the user talk pages of other editors.
The ultimate goal is to create a short and clear investigative report that adheres to the Neutral Point of View.
Contents
- 1 This section is for research by JWSchmidt
- 1.1 John orders his thoughts
- 1.2 Blocking Moulton at WikiPedia
- 1.3 Requests for comment
- 1.4 Requests for arbitration
- 1.5 Conflict of interest
- 1.6 How do we clean things up?
- 2 Moulton responds here
- 3 Other editors comment here
- 4 Related resources
This section is for research by JWSchmidt
Here is my proximal starting point: "Perhaps Jimbo could just suggest that Wikipedians establish a better practice of Fair Play than has thus far been afforded to outcasts such as myself."
The first thing that comes to mind is that Wikipedia could have a page for "block review". Wikipedia has many pages for review of actions such as page deletions (Wikipedia:Deletion review), so why not a similar organized system for the review of blocks? Currently there is Wikipedia:Appealing a block and Wikipedia:Guide to appealing blocks.
Study question. Should editors of biographical pages be required to reveal their real world identity?
I want to write a very short account of how Moulton got into the mess he is in. First I need to make sure I have the facts clear.
John orders his thoughts
How I came to know Moulton. I have been busy in the real world and only became aware of User:Moulton on or about 4 August 2008, even though he came to Wikiversity on 9 July 2008. Since then I have been gradually learning about Moulton's editing history at Wikipedia. When I first saw Ethical Management of the English Language Wikipedia I linked it to an existing Wikiversity topic, Topic:Wikipedia studies. At that time I did not have any knowledge of Moulton's editing history at Wikipedia. As someone who has been learning about that editing history during the past few weeks, I hope to be able to help construct a short narrative of events. This exercise is important because Wikipedia has a problem with biographies of living persons and Moulton's editing history at Wikipedia is an interesting case study related to that larger problem. By understanding what happened to Moulton we might be able to improve Wikipedia. The basic problem is that anyone can start a biographical Wikipedia article and write it in a biased way that does not follow the Wikipedia rules that are designed to lead to the creation of fair and balanced biographies of notable people. The additional problem is that some of Wikipedia's biased biographies are created and owned by editors who are pushing a particular agenda. Moulton crossed paths with some dedicated editors who behaved as if they owned a set of biographical articles and could use those articles as part of a protracted edit war that is roughly centered on the Creation-evolution controversy.
Study questions:
1) Has the Wikipedia:WikiProject intelligent design attracted a group of editors who damage Wikipedia by trying too zealously to defend Wikipedia against creationists and other editors who question evolution by natural selection?
2) Is Moulton an example of a Wikipedia editor who was unfairly treated by editors associated with the Wikipedia:WikiProject intelligent design?
3) Is there something we can do to prevent this kind of problem in the future?
Timeline of events
2005. Wikipedia:WikiProject intelligent design was started 13 July 2005 by User:Dbergan.
2006. The Rosalind Picard article was made (8 March 2006) by copying her online Faculty Profile and adding a section called, "Intelligent Design Support". It is clear that the purpose of User:Tempb was to create an article that labels Dr. Picard as a supporter of Intelligent design and as "anti-evolution". Page section title was changed from "Anti-Evolution Petition Signatory" to "Darwin dissenter" by Filll.
24 March. Some corrections to the blatant POV of the originator of the article were made from IP 136.167.158.77 (Boston College). Someone from IP 209.6.126.244 also tried to make similar clarifications. Someone from IP 65.96.63.33 reverted to the POV formulation (8 April). May 10: first talk page comment is about the reverts of the article (section heading, "anti-evolution").
August 2007. Before 21 August 2007, User:Moulton was a typical Wikipedia editor, having made several dozen edits to various articles over the course of a year and a half. When Moulton followed a link from Affective computing to Rosalind Picard he found a biographical Wikipedia article that was in a particularly bad state. The Wikipedia article about Rosalind Picard is in some ways a "typical" Wikipedia biographical article. The subject of the article, Dr. Picard, is a professor at Massachusetts Institute of Technology. Wikipedia has many other biographical articles about university professors, many of which are autobiographical, having been started by the subject. The Rosalind Picard article is unusual in that it was started by an editor who had an ax to grind.
Is there an "anti-Intelligent Design Cabal"?[edit]
Study questions:
1) Was there an organized effort to create biographical articles for signers of the petition that was released under the title, "A Scientific Dissent From Darwinism"?
2) Was there an organized effort to prevent those articles from being made more balanced and accurate?
3) Is there a coordinated group of rude and abusive anti-Intelligent design editors who prevent the creation of more balanced articles related to intelligent design and creationism?
4) Is this a good summary: "a group of editors was so caught up in their crusade against ID-on-Wikipedia that they couldn't recognize valid criticism, and moreover, that many of those editors resorted to despicable tactics in order to get their way"?
terminology. "IDiots" used to refer to creationists in an edit summary by User:General Nolledge, creator of Granville Sewell (8 October 2006), an early "single purpose" biographical article linked to from A Scientific Dissent From Darwinism... an example of creating a biography just to make connections to the person's stance on ID.
single purpose biographies. Just how many "single purpose" biographies like Rosalind Picard were created? By whom? For what purpose?
- long list of biographies towards the bottom
- discussion of creation of a "Signers Category", (ConfuciusOrnis 15:08, 20 June 2007 (diff) (deletion log) (Restore) . . m Category:Signatory of "A Scientific Dissent From Darwinism" (creating Cat)), the category was deleted and replaced by a list: List of signatories to "A Scientific Dissent From Darwinism"
- Is this good wording? Looks like the list was built before each person on the list was tagged.
A recent list of problematical biographies from Moulton lists:
-
Blocking Moulton at WikiPedia[edit]
repeated personal attacks[edit]
24 August 2007 JoshuaZ (Talk | contribs) blocked "Moulton (Talk | contribs)" (account creation blocked) with an expiry time of 24 hours (repeated personal attacks) at Talk:A Scientific Dissent From Darwinism/Archive 2 <-- this really has to be read in its entirety to understand the dispute between Moulton and Hrafn.
Pages I (JoshuaZ) have made: 1. TalkOrigins Archive (Conflict of interest?)
User:Hrafn states that he believes anyone who says, "We are skeptical of claims for the ability of random mutation and natural selection to account for the complexity of life. Careful examination of the evidence for Darwinian theory should be encouraged" is "anti-science". Why is it anti-science? "they knew that they were expressing an opinion in contradiction to the scientific consensus". (see)
User:SheffieldSteel raises the issue of "personal attacks". (13:52, 23 August 2007 (UTC))
Is the truth a "Disruptive POV"?[edit]
11 September 2007 KillerChihuahua (Talk | contribs) blocked "Moulton (Talk | contribs)" (account creation blocked) with an expiry time of indefinite (Disruptive POV OR warrior with no interest in writing an encyclopedia. See Rfc.)
blocked for "abuse of editing privileges" - Why did KillerChihuahua not post the real reason for the block? Why was User:Yamla so willing to support this bad block?
User:MastCell's participation
KillerChihuahua blocked with the reason given as "Disruptive POV OR warrior with no interest in writing an encyclopedia. See Rfc." Then, 10 minutes later, MastCell made this edit which used a standard Wikipedia template for extended blocks (Template:uw-block3) "indefinitely blocked from editing in accordance with Wikipedia's blocking policy for repeated abuse of editing privileges." The text "abuse of editing privileges" linked to Wikipedia:Vandalism. So, there was no notification given on Moulton's user page of the reason for an indefinite block. Why did MastCell get involved? Why did MastCell post the wrong reason for the block? Why did KillerChihuahua never make sure that the reason for the block was posted to Moulton's user talk page?
KillerChihuahua - preparation for the block
Moulton's talk; see User:Baegis
Requests for comment[edit]
Wikipedia:Requests for comment/User conduct (4 September 2007, by Filll): "Personal attacks, Disruptive editing, conflict of interest, failure to understand NPOV"
Wikipedia:Requests for comment/Intelligent Design
Requests for arbitration[edit]
FeloniousMonk/Arbcom evidence (note: see how this page was deleted (red link) after being placed in an Arbitration Committee case by FeloniousMonk)
Conflict of interest[edit]
Study question: Does Moulton have a conflict of interest? (as claimed here?) "incompatibility between the aim of Wikipedia, which is to produce a neutral, reliably sourced encyclopedia, and the aims of an individual editor" (source: Wikipedia:Conflict of interest). Documentation of the claim that Moulton's aims as a Wikipedia editor are in any way contrary to or incompatible with producing a neutral, reliably sourced encyclopedia:
1) Evaluate User talk:Moulton#One change
2) Evaluate [1]
3) Alternative hypothesis: Moulton's actions were correct and explicitly protected by Wikipedia policy, "In a few cases, outside interests coincide with Wikipedia."
Note on Semantics: "Conflict of interest" is used in three senses, all of which have, at times, been Wikipedia's "official" definition as stated at w:WP:COI. The first sense is that there exists a potential for actual COI behavior, e.g., he is writing about his employer. If the information is known, this is the most objective of the senses to use. The second, quoted above, is that the potential has risen to the level of a goal. Distinguishing between conscious and unconscious goals would yield four senses: he may consciously wish to say something nice about his company, but unconsciously resent it. Asserting statements about another's goals seems not the best way to discuss the issue. The third is that a COI is in fact the COI behavior itself, and short of that behavior a COI is said not to exist. Absent knowledge of a contributor's identity, this last is functionally equivalent to saying someone is POV pushing against NPOV policy and perhaps should have their real-life identity investigated, e.g., is the IP address from that company? Any clear discussion of whether someone has a COI must be clear about which sense is being used.
How do we clean things up?[edit]
- Ottava Rima: expand the biased biographies into balanced articles
- study question: what if the only published information is biased and misleading, leaving Wikipedia editors with no way to produce a balanced Wikipedia article?
- In the specific case of the Rosalind Picard article, might it be possible to find a journalist who would investigate, document and publish in a reliable source an account of the events (exactly what Rosalind "signed", when she signed it, what she was thinking when she signed it)? Wikipedia could then cite this article.
Participate at Talk:Rosalind Picard[edit]
There has been a long discussion about how to describe Picard's views in the context of her agreement with the statement: "We are skeptical of claims for the ability of random mutation and natural selection to account for the complexity of life. Careful examination of the evidence for Darwinian theory should be encouraged". Picard's Wikipedia biography article has a section called "Religion and science".
study question: What is the best way for Wikipedia to include in her biography the fact that Picard has made public statements about natural selection?
- My first suggestion.
Participate at Rosalind Picard[edit]
"She is dismissive of scientific reductionism" is Guettarda's imagined interpretation of Picard's thinking. It looks like an orchestrated move to depict Picard as anti-science. My request before I found the thread at w:Talk:Rosalind Picard#Limits of science.
This edit by User:dave souza again failed to create an accurate and coherent summary of what Picard said. User:dave souza states on his user page that he has an "unhealthy fascination with the great intelligent design con", an interesting bit of self-reflection. User:Ottava Rima then fixed the problem by actually giving an unbiased account of what Picard said.
Moulton responds here[edit]
Explanation by Abd. Moulton provided extensive personal testimony here. However, his personal account was altered without permission or consent. Moulton has specifically objected to revision [2], and, from that diff, one can read the full previous version as it had been restored by Moulton previously. For convenience, here is a permanent link to the section. I am blanking this section, because, as it stands, it contains altered testimony, which is not acceptable, yet, obviously, some considered it offensive. Testimony considered offensive may be deleted in toto, hidden by various devices, or partly struck with a note so that the original can be distinguished from what replaces it, but altering it violates basic principles, and I'm shocked that this was allowed to stand. I have not reviewed the testimony itself, and it is unnecessary. --Abd 22:30, 22 August 2010 (UTC)
Other editors comment here[edit]
You're starting the story in the middle. Moulton has also gone through this same behavior pattern at Slashdot ([3]) and at Worldcrossing ([4]) and has numerous complaints about this kind of behavior on his own site ([5]). Salmon of Doubt 20:42, 21 August 2008 (UTC)
- Please tell the story of what happened at Slashdot and in the Soap Opera Forum at World Crossing. —Moulton 22:09, 21 August 2008 (UTC)
- Thanks for the reminder. Does this help us to improve Wikipedia? --JWSchmidt 21:54, 21 August 2008 (UTC)
- I do think it is useful to point out that Moulton's behavior patterns that are consistent over time include being both obsessive and persistent, which can be frustrating to work with. Add his being "part aspie" and you have a recipe for people perceiving him as a trouble-maker. Also note that Moulton has a w:WP:COI on his good friend Rosalind Picard and the subject of their project - Affective Computing, so his point of view cannot be trusted to be neutral with regard to the article he was trying to get changed when he was initially banned. IDcab sees a constant stream of people coming to articles they protect from creationists trying to push their point of view, and it was perhaps inevitable that they would mistake Moulton for one of these. Unfortunately, they seem incapable of changing their minds based on new information. All in all, it made for a typical newbie-biting scenario at Wikipedia. Which could have been resolved, if the IDcab did not behave as it does. And the IDcab problem could be fixed if Arbcom would act as it should. But they have not so far been willing to halt abuse by long standing contributors who mostly help the encyclopedia, so the problems fester. WAS 4.250 19:16, 23 August 2008 (UTC)
- WAS, you state "Unfortunately, they seem incapable of changing their minds based on new information." How many times have Moulton and Picard been asked to publish a clear statement that they do not believe in Intelligent Design as professed by the Discovery Institute and find Evolution to be the most likely explanation for current life? How many times have they been totally unwilling to answer with a clear and unambiguous statement? You say there is "new information." What is it? Salmon of Doubt 23:46, 23 August 2008 (UTC)
- to "Salmon of Doubt". I do not understand how your desire for such a statement relates to Wikipedia or this page. "We are skeptical of claims for the ability of random mutation and natural selection to account for the complexity of life. Careful examination of the evidence for Darwinian theory should be encouraged." You might not like it, but many rational people are skeptical about the ability of natural selection to account for everything. If this bothers you so much, to the extent that you imagine it is your place to demand that Picard publish something to satisfy you, then you have a clear (and I would say irrational) bias and should not edit Wikipedia articles related to this subject. I think I understand why you hide your identity. --JWSchmidt 03:50, 24 August 2008 (UTC)
- I have no problem with their "nuanced" statement supporting intelligent design. There is consensus in the scientific community that you are wrong - there is additional consensus in the polemical community that using "Darwinian theory" means you're not actually "skeptical" but rather "polemical." I have no bias, in that I don't especially care about Evolution on Wikipedia, but WAS 4250 says there was new evidence that supporters of accuracy in science-related pages could trust Moulton not to push ID. I'm wondering what that new evidence was. Salmon of Doubt 10:57, 24 August 2008 (UTC)
- What is your evidence and reasoning to support the notion (long published by IDCab in the pages of Wikipedia) that the first 103 signatories of the 2-sentence, 32-word statement which IDCab has elected to label and refer to as "A Scientific Dissent from Darwinism" (from the headline of an anti-PBS ad in which the statement first appeared in print) are either pro-ID or anti-evolution? Is that notion WP:OR? Is it a misconception and a logical fallacy? —Moulton 12:41, 24 August 2008 (UTC)
(<---) Salmon of Doubt, you seem to be under the impression that Moulton has made a pro-ID statement somewhere. As near as I can tell you are wrong on that. As near as I can tell, it is a case of you and IDcab folks not understanding that there are non-Darwinian non-ID processes that are a part of evolution. You guys see the ID people conflate Darwinism and Evolution so you think everyone does. Scientists investigating evolution distinguish Darwin's ideas and post-Darwin ideas. They do not want their ideas credited to Darwin. They want that credit for themselves. So they restrict the meaning of "Darwinism" to ideas Darwin actually had. I have read many statements Moulton has provided that indicate some of the signers were merely saying that other newer evolution ideas like evolutionary drift needed to get more attention. WAS 4.250 14:43, 24 August 2008 (UTC)
- Links? Salmon of Doubt 14:59, 24 August 2008 (UTC)
- PS - I am well aware that "Darwinism" is a slur advanced by push groups in an attempt to denigrate the scientific fact of Evolution as a religious belief. Salmon of Doubt 15:00, 24 August 2008 (UTC)
- Here is one, for example. There are many more on that page. —Moulton 20:44, 24 August 2008 (UTC)
Is there an IDcab? Without doubt, there is. I have run into them several times. Each time, highly unpleasant bullying was experienced. For another BLP problem that Durova took to arbcom and got pulled into a bit, see w:Wikipedia:Requests for arbitration/Agapetos angel, which was about the trashing of the BLP w:Jonathan Sarfati. Agapetos angel was "outed" as Jonathan Sarfati's wife. I limited my involvement to the BLP talk page, article page and engaging Agapetos angel on her talk page. In the end, the system in this case worked to the extent that the article's problems were cleared up, and did not work to the extent that IDcab was not prevented from future bullying. WAS 4.250 20:38, 3 September 2008 (UTC)
- WAS, back in December of 2005, User:RoyBoy evidently worked with FeloniousMonk, Guettarda, KillerChihuahua, Jim62sch, and others to design new Barnstars and award them to each other. Your name appears among the samples. What can you tell us about that clique then? —Moulton 20:55, 3 September 2008 (UTC)
- Same group. The group does more good than harm, which is why they've gotten away with what they have. In particular they have created a wonderful series of articles on various aspects of creationism, intelligent design, and evolution. I have, at arm's length, participated to some degree at times in some of these articles, so I know their good and bad aspects. The barnstars are a community building effort to encourage further helpful volunteer work. I have tried to encourage better behavior by the group. Participating in that barnstar is one example of my efforts to stop their seeing the world as "us versus them". By the way, I just now told Durova of my above edit mentioning a case she was in and invited her to participate. She, like me, has a long history of trying to help Wikipedia be more ethical. WAS 4.250 21:09, 3 September 2008 (UTC)
- Not half as much as between her and Greg. But there does not have to be drama if people will just add content and not engage each other in childishness. I have high hopes. They are all capable of behaving themselves. So I don't think it appropriate to assume the worst. WAS 4.250 21:49, 3 September 2008 (UTC)
- I would like to see parties who have unresolved issues employ Action Research to resolve them. If Action Research fails, then I would like to see the parties enter into alternative dispute resolution processes. —Moulton 22:58, 3 September 2008 (UTC)
Cerberus Rising[edit]
Every once in a while one of the slumbering heads of Cerberus wakes up and emits a growl. This week on Durova's Blog, we can observe an instance of an aroused pooch. The topic is about accuracy and "political correctness" in an article on a political event in the annals of human history, and how it's portrayed in art, literature, history, and in Wikipedia. Here is my comment, submitted to Durova's Blog...
Moulton 14:08, 4 September 2008 (UTC)
Durova declines the invitation to participate at Wikiversity[edit]
On Durova's talk page she says:
WAS 4.250 21:53, 3 September 2008 (UTC)
- Durova further writes...
- I believe her troubled relationships arise from the practice of contributing content but then declining to respond to fair questions about it.
- It seems far more likely that Durova already spent months stopping Dzonatas from relentlessly damaging the Joan of Arc article to include totally unsourced information to support his own purported personal genealogical relationship with the historical figure, and has no desire to deal with such an offensive presence again. This is much the reason that I do not intend to edit this project much, if at all, except to retain the historical facts about your permanent banishment from Wikipedia (and numerous other online communities). I've watched others spend months stopping you from relentlessly damaging a whole series of articles and talk pages to include totally unsourced information to support your own esoteric and deprecated views of "ethics," which oddly coincide with removing sourced information about embarrassing petitions your good friend signed and has not repudiated. If Wikiversity wishes to become Moultonversity, more power to you all. Eventually the foundation will wise up and pull the plug. Here's wishing that day comes sooner! Salmon of Doubt 14:36, 4 September 2008 (UTC)
- Salmon, please tell the story of what happened at Slashdot and in the Soap Opera Forum at World Crossing (use Google Cache if the WorldCrossing server is still down). —Moulton 16:21, 4 September 2008 (UTC)
Questions and Answers with Salmon of Doubt[edit]
Nobody can face "permanent banishment from Wikipedia" because they were correcting errors in BLPs... an action that is explicitly protected by Wikipedia policy. I for one welcome close examination of this matter by the wider Wikimedia community and Foundation officials... it is the violators of policy, those who push their mistaken original research and biased POV on Wikipedia, who will ultimately face the consequences of wider community involvement in this matter. You might think that just because a successful team of Wikipedia POV pushers has had success so far that they are beyond reach. I think that is wrong. It just takes time for the community to notice the swaggering bullies that try to subvert Wikipedia for their own purposes... it just takes time to clean out these little pockets of abusive editing. I've participated in such cleaning jobs before, so I know how it is done and that it takes time. The more noise you make the more people will look and see the truth. Please keep making as much noise as you can; this is the only way to correct the past abuses and prevent more in the future. --JWSchmidt 16:20, 4 September 2008 (UTC)
- I will not reveal my identity here, as at least on participant here (not you) has a history of real-world harassment. I can, however, state categorically that you would not place me in what Moulton phrases (offensively) as the "IDCab." I do not support Wikipedians who think it is right for Wikipedia to publish false information, full stop. You continue to state your opinion that there is a successful team of Wikipedia POV pushers who is disrupting various biographies. I disagree. What would be helpful is if you could post evidence of them doing this OUTSIDE of one isolated case which is nigh-impossible to deal with due to the continued disruptive influence of Moulton. If the Picard biography is worse than it should be because Moulton doesn't know how to ethically and effectively get results, that's on Moulton. If there really is a successful team of POV pushers that are beyond reach, they'll be mucking around with more than 3 biographies. SHOW ME THE MONEY. Salmon of Doubt 16:31, 4 September 2008 (UTC)
- Salmon, you write, above:
- Since you do not support the publication of false information, please prove your statement, above, that "at least on [sic] participant here (not you) has a history of real-world harassment" is not a false statement.
- I want a pony. Salmon of Doubt 21:31, 4 September 2008 (UTC)
-
- Most of my research is documented on this page; see particularly this section which links to other biographical articles. My reading of the edit history at Wikipedia indicates that there was a group of Wikipedia editors all of whom made the false assumption that any scientist listed as a signatory to the "dissent from Darwin" petition is "anti-science" and "pro-Intelligent Design". This editing team at Wikipedia decided that biography pages about the signatories are "fair game" for their crusade of labeling the signatories as "anti-science". This effort continues to this day, even though it has become more difficult for them to get away with the game, now that responsible editors are watching. There are other pages such as the Wikipedia list of signatories to the "dissent from Darwin" petition that also have expressed the false assumption, and I have also started helping to clean up those non-biography pages. So far, I have mostly paid attention to the Picard page, but what I have seen (and have not yet had time to look into closely) is that the Picard article is not an isolated case, it is part of a larger pattern of POV pushing. I agree that more work needs to be done to reveal this wide pattern of editing by which a small team of editors has systematically biased Wikipedia...and they also worked to prevent concerned scientists from correcting Wikipedia's bias. --JWSchmidt 17:21, 4 September 2008 (UTC)
- I looked through that page. While you certainly show that people made a lot of biographies, you fail to show that they pushed POV as opposed to made a lot of biographies. Just creating a lot of biographies is not POV pushing. Berlinski is best known for his ID work. Your smoking gun edit hardly changes the article at all. If one ignores everything about Moulton on your page, it is quite short. Please show me evidence of a "larger pattern of POV pushing" by showing a lot of isolated POV pushes. Calling someone a fellow of the discovery institute is not pushing a POV, it's noting something notable about them. Salmon of Doubt 17:29, 4 September 2008 (UTC)
- Since what I was trying to point to is at the bottom of a rather long page section, here are the three other biographical pages I have started to look at:
-
- I think these (above examples) indicate a pattern of POV-push editing (beyond just the Picard article) by members of a group of editors who were working on the mistaken assumption that any signatory to the "dissent petition" should be labeled by Wikipedia as "anti-science" and "pro-Intelligent design". This group of editors was not interested in the mission of Wikipedia (to create balanced biographies), they were only interested in pushing their POV, a point of view which was based on a mistaken bit of original research they had performed. Worse, this group of editors was so sure that their original research was correct, they could not listen when other editors tried to explain and correct the error they had made. Rather than welcome such correction, as required explicitly by Wikipedia policy, this group of editors has systematically tried to maintain ownership of Wikipedia pages and drive away good faith editors who have tried to repair Wikipedia. I am still working to fully document what went wrong at Wikipedia. Frankly, it is a slow process because I am physically sickened when I read the Wikipedia edit history of the abusive editing that was performed by the team of anti-ID POV pushers. --JWSchmidt 19:08, 4 September 2008 (UTC)
- None of your three examples demonstrates POV pushing. I don't see a lack of NPOV in the first, except for the uncited OR that you three are consistently pushing about how the petition isn't a petition and it had no header and whatever. But of course, that example is impossible to see as a POV push or not a POV push, because Moulton showed up and made it impossible to get any signal through the noise. What I do know is that Odd Nature (Isn't he one of the eeeeeeeeevvvvvvvvvviiiiiiiiiiiiiilllllll ones) made this edit. How can he be pushing a POV if he's letting JAMES TOUR, WHO SIGNED THE PETITION, clarify himself? UNPOSSIBLE!
- Your second example reads that "the day after the NYTimes published an article about a guy someone changed his biography." I'm shocked, SHOCKED!
- The only thing notable about Guillermo Gonzalez is that he was a push factor by the Discovery Institute. He would fail the professor test for notability if it weren't for the fact that he tried to use the ID movement to get tenure after bringing in less grant money over 7 years than most PhD candidates do in one.
- On the other hand, you put "anti-science" in quotes. Who are you quoting? That would be an egregious POV push. Perhaps you should find some evidence of, you know, actual POV pushing before calling POV pushing out. Salmon of Doubt 20:18, 4 September 2008 (UTC)
- Here is one of the diffs (already cited above) from Wikipedia showing an example of the type of "reasoning" some editors have applied to justify falsely labeling Picard's views. In this diff, "Hrafn" clearly says that he views Picard's agreement with the two-sentence statement ("We are skeptical of claims for the ability of random mutation and natural selection to account for the complexity of life. Careful examination of the evidence for Darwinian theory should be encouraged.") as "anti-science". This is all part of Hrafn's justification for not allowing the Picard article to be corrected and for trying to ignore Moulton. Of course, even the POV pushers know that they have to be careful in their attempts to put the "anti-science" label on professional scientists such as Picard, so they have generally satisfied themselves with trying to place terms such as "anti-evolution" and "darwin dissenter" and "intelligent design support" on her bio as many times as possible. Here is an example of their continuing POV-pushing that I corrected only last week: the phrase "She is dismissive of scientific reductionism" is Guettarda's imagined interpretation of Picard's thinking. In the context of the editing history of the page, Guettarda's contorted "reasoning" can be seen as just another attempt to depict Picard as anti-science. Anyhow, I am in the process of unpacking all the evidence at Ethical Management of the English Language Wikipedia/Moulton, JWSchmidt's investigation/Final report. What we need is scholarly research and careful reasoning about the evidence. If you look at the evidence (in the way shown by your selective reading of the evidence, above) and reach different conclusions than I do, then we have to further explore the matter. I'm still working towards that goal.
I feel that doubling the size of the "James Tour" article in order to make the single point that he was a signatory to the petition is part of the pattern of POV pushing, pushing the idea that any signatory to the petition needs to be labeled rather than needs to have a balanced bio on Wikipedia. In the case of Berlinski, you make fun of the fact that FeloniousMonk noticed the Berlinski bio, but you ignored the POV-pushing in this edit. The "Guillermo Gonzalez" bio shows an attempt to explicitly label someone "CSC", putting the POV-pushing and bias right in the title of the article. This stuff has been going on for a long time, it really has to end. --JWSchmidt 23:38, 4 September 2008 (UTC)
<--- You appear to be confusing making bad edits or getting angry at Moulton with POV pushing. You'll want to find an edit where someone pushed POV, not one where someone mistakenly characterized someone's beliefs in a non-offensive way, or one where someone got mad at Moulton on a talk page, or used the wrong disambiguation for a name, or where someone's beliefs and affiliations are explained. POV pushing is when there is a dispute about something and one side is presented as fact. What you'll want to do is find a dispute about something, and then show when one side is presented as fact. Salmon of Doubt 00:19, 5 September 2008 (UTC)
- Have you looked at the edit history of Rosalind Picard and Talk:Rosalind Picard? --JWSchmidt 01:34, 5 September 2008 (UTC)
- Yup. I see a lot of disruption from Moulton. I see some people overzealously adding descriptions of a petition the lady signed. I see a lot of whitewashing of the fact the lady signed a petition. Somehow, I don't see anyone taking an issue that has two sides and presenting one as fact. Is there a dispute, evidenced in reliable sources that is being misrepresented? Perhaps you've confused POV pushing with OR pushing. Salmon of Doubt 02:28, 5 September 2008 (UTC)
- Who is "they," exactly? I have to assume "they" includes Odd Nature, right? See above. This whole campaign against the offensively labeled "ID Cabal" is nothing more than handwavey anger. The money has not been shown - all the allegations are that there is a pattern of misbehavior, but when asked for examples, we find that people make biographies, get rattled by a prolific online troll and respond to stories in one of the largest national newspapers. Of course, because the myth of the "ID Cabal" is convenient to national political push groups, it persists. JWSchmidt - let's ask you the question that Moulton and Picard and the rest of the "oh so aggrieved parties" won't answer: Do you believe evolution is the most likely explanation for the diversity of life we see on our planet today? Salmon of Doubt 12:49, 5 September 2008 (UTC)
- Evolution accounts for the diversity of species (which is what Darwin was addressing in On the Origin of Species). Darwin was as mystified as anyone on the origin of life. Today we know that all life on the planet is DNA-based with a common DNA code. There is no other form of life (naturally occurring self-reproducing organism) that we know of. In the laboratory, scientists, researchers, and nano-technologists are working to create artificial life — self-reproducing automata that are not based on DNA-driven self-replication processes. Among the better known varieties of such self-reproducing automata are computer viruses — a development anticipated by none other than John von Neumann himself. In the years after he saw his concept of the Stored-Program Computer become a reality, he also wrote a seminal monograph on the Theory of Self-Reproducing Automata. If there ever does emerge a diversity of life — living systems not based on DNA — I imagine they will be intentionally designed by robotic engineers and computer scientists. Let us also hope they are intelligently designed, as well. —Moulton 05:57, 6 September 2008 (UTC)
- "nothing more than handwavey anger" <-- the problem is that you apparently also believe the false assumption (described above) that was adopted by other Wikipedia editors. If so, then you are part of the problem that needs to be corrected. "evolution is the most likley explanation for the diversity of life" <-- I think the evidence is very strong to support the idea that life on Earth has changed dramatically over billions of years. "Evolution" is the term biologists use to refer to that change. If I had to guess, I'd say you are using the term "evolution" to refer to a particular mechanism of evolution such as natural selection. My guess is that natural selection is the most important mechanism for evolutionary change when one existing species changes into another species. And yes, it seems likely that mutation and natural selection of successful variants is an important source of the diversity of species seen today and in the fossil record, but it is not clear if other processes such as genetic drift are more important. I do not know of any evidence to support the idea that living organisms have been "intelligently designed". I don't understand what my views on evolution have to do with this research project. "Moulton and Picard won't answer" <-- have you asked them? --JWSchmidt 03:15, 6 September 2008 (UTC)
An aside on trolling[edit]
What is your evidence and reasoning to support the theory that any of us are "rattled" by your appearance or participation here? —Moulton 13:25, 5 September 2008 (UTC)
- Moulton, the prolific online troll to which I refer is you. Since we're discussing what the offensively titled "ID Cabal" did, and since I've had no interactions with the offensively titled "ID Cabal," I obviously wasn't referring to myself. You have been banned for trolling from Slashdot, Wikipedia and WorldCrossing. You were run out of two seperate usenet newsgroups on a rail. If you don't think "Prolific online troll," is an acurate description of your online persona, what do you think is? Salmon of Doubt 13:34, 5 September 2008 (UTC)
- May I quote you on that and attribute it to the Wikimedia editor who published it? Can you also provide me with your definition of troll? And would you be kind enough to tell us the story of what happened on Slashdot and in the Soap Opera Forum [use Google cache if the server is still down] at World Crossing? —Moulton 16:34, 5 September 2008 (UTC)
- No, no, no and no. Salmon of Doubt 16:37, 5 September 2008 (UTC)
- I was under the impression that you had committed to "presenting a balanced, objective, accurate, and informative account of all relevant scholarly evidence, facts, analysis, and ideas within all scholarly topics of study in the interest of full disclosure and honesty." Was I laboring under a misconception? Do you not intend to keep your sincere voluntary commitment to the precepts of Scholarly Ethics? —Moulton 16:49, 5 September 2008 (UTC)
- I want a pony. Salmon of Doubt 17:33, 5 September 2008 (UTC)
-
Title: Pony Baloney
Artist: The Foo
Composer: Larry Williams and Barsoom Tork Associates
Midi: Boney Maronie
I know a Doubter riding Pony Baloney
He's got a noodle like a stick of macaroni
Ought to see him revert with his FooBots on
He's not very smart, he rides on and on
But I stalk him, and he stalks me
We all are annoyed as we can be
Makin' a ruckus all over Wikiversity
He told SBJ and Old WAS, too,
Just exactly what he planned to do
He wants to get blocked on a night real soon
And rock Wikiversity like a mad buffoon
So I stalk him and he stalks me
We all are annoyed as we can be
Makin' a ruckus all over Wikiversity
He's my Pony Baloney, he's a real nightmare
Will I ever wash that Salmon right outta my hair?
Everybody groans when our posts fly by
It's a sight to see SB Johny sigh
Everyone's annoyed, listenin' to me
Make music all over Wikiversity
That why I stalk him and he stalks me
We're all annoyed as we can be
Making music all over Wikiversity
CopyClef 2008 Larry Williams and Barsoom Tork Associates. All songs abused.
Barsoom Tork 14:10, 6 September 2008 (UTC) | https://en.wikiversity.org/wiki/Wikimedia_Ethics/Moulton,_JWSchmidt%27s_investigation | CC-MAIN-2017-13 | refinedweb | 7,195 | 59.53 |
DRAFT: Synopsis 17: Processes and Concurrency
Elizabeth Mattijsen <liz@dijkmat.nl> Audrey Tang <audreyt@audreyt.org> Christoph Buchetmann Tim Nelson <wayland@wayland.id.au>
Created: 13 Jun 2005 Last Modified: 10 Jul 2010 Version: 5
This draft document is a paste together from various sources. The bulk of it is simply the old S17-concurrency.pod, which dealt only with concurrency. Signals were added from S16-io, but haven't been merged with the conflicting S17 signals doco. An event loop section has been added here because a) Larry mentioned the idea, and b) Moritz suggested that be our model for concurrency, and in that model, an event loop underlies the threads.
An event loop underlies everything in this document. POSIX signals can interact with this, and concurrency is built on top of it. Naturally, IPC (inter-process communication) is documented here too (XXX or should be :) ).
The %*SIG variable contains a Hash of Proc::Signals::Signal.
class Proc::Signals::Signal { has $exception; # This specifies what exception will be raised when this signal is received has $interrupt; # See siginterrupt(3) has $blocked; # Is this signal blocked? cf. sigprocmask }
The @*SIGQUEUE array contains a queue of the signals that are blocked and queued.
The standard POSIX signals simply raise control exceptions that are handled as normal through the control signal handler, and caught by CONTROL blocks, as specified in S04.
To declare your main program catches INT signals, put a CONTROL block anywhere in the toplevel to handle exceptions like this:
CONTROL { when Error::Signal::INT { ... } }
The signals have defaults as specified in the table below. $blocked always defaults to false.
Signal Default Exception ------ ----------------- SIGHUP ControlExceptionSigHUP SIGINT ControlExceptionSigINT SIGQUIT ControlExceptionSigQUIT SIGILL ControlExceptionSigILL SIGABRT ControlExceptionSigABRT SIGFPE ControlExceptionSigFPE SIGKILL ControlExceptionSigKILL SIGSEGV ControlExceptionSigSEGV SIGPIPE ControlExceptionSigPIPE SIGALRM ControlExceptionSigALRM SIGTERM ControlExceptionSigTERM SIGUSR1 ControlExceptionSigUSR1 SIGUSR2 ControlExceptionSigUSR2 SIGCHLD ControlExceptionSigCHLD SIGCONT ControlExceptionSigCONT SIGSTOP ControlExceptionSigSTOP SIGTSTP ControlExceptionSigTSTP SIGTTIN ControlExceptionSigTTIN SIGTTOU ControlExceptionSigTTOU SIGBUS ControlExceptionSigBUS SIGPROF ControlExceptionSigPROF SIGSYS ControlExceptionSigSYS SIGTRAP ControlExceptionSigTRAP SIGURG Undefined SIGVTALRM ControlExceptionSigVTALRM SIGXCPU ControlExceptionSigXCPU SIGXFSZ ControlExceptionSigXFSZ SIGEMT ControlExceptionSigEMT SIGSTKFLT ControlExceptionSigSTKFLT SIGIO ControlExceptionSigIO SIGPWR ControlExceptionSigPWR SIGLOST ControlExceptionSigLOST SIGWINCH Undefined
A table below describes the exceptions.
Each of these has a default action as well. The possible actions are:
Term Default action is to terminate the process. Ign Default action is to ignore the signal ($signal.exception is undefined by default) Core Default action is to terminate the process and dump core (see core(5)). Stop Default action is to stop the process. Cont Default action is to continue the process if it is currently stopped.
Some actions do the Resumeable role. An exception listed in the table below that does the Resumeable role is marked with a * in the R column.
The exceptions are:
Signal Action R Comment ---------------------------------------------------------------------- ControlExceptionSigHUP Term ? Hangup detected on controlling terminal or death of controlling process ControlExceptionSigINT Term ? Interrupt from keyboard ControlExceptionSigQUIT Core ? Quit from keyboard ControlExceptionSigILL Core ? Illegal Instruction ControlExceptionSigABRT Core ? Abort signal from abort(3) ControlExceptionSigFPE Core ? Floating point exception ControlExceptionSigKILL Term ? Kill signal ControlExceptionSigSEGV Core Invalid memory reference ControlExceptionSigPIPE Term ? Broken pipe: write to pipe with no readers ControlExceptionSigALRM Term ? Timer signal from alarm(2) ControlExceptionSigTERM Term ? Termination signal ControlExceptionSigUSR1 Term ? User-defined signal 1 ControlExceptionSigUSR2 Term ? User-defined signal 2 ControlExceptionSigCHLD Ign * Child stopped or terminated ControlExceptionSigCONT Cont * Continue if stopped ControlExceptionSigSTOP Stop ? Stop process ControlExceptionSigTSTP Stop ? Stop typed at tty ControlExceptionSigTTIN Stop ? tty input for background process ControlExceptionSigTTOU Stop ? tty output for background process ControlExceptionSigBUS Core ? Bus error (bad memory access) ControlExceptionSigPROF Term ? Profiling timer expired ControlExceptionSigSYS Core ? Bad argument to routine (SVr4) ControlExceptionSigTRAP Core ? Trace/breakpoint trap ControlExceptionSigURG Ign ? Urgent condition on socket (4.2BSD) ControlExceptionSigVTALRM Term ? Virtual alarm clock (4.2BSD) ControlExceptionSigXCPU Core ? CPU time limit exceeded (4.2BSD) ControlExceptionSigXFSZ Core ? File size limit exceeded (4.2BSD) ControlExceptionSigEMT Term ? ControlExceptionSigSTKFLT Term ? Stack fault on coprocessor (unused) ControlExceptionSigIO Term ? I/O now possible (4.2BSD) ControlExceptionSigPWR Term ? Power failure (System V) ControlExceptionSigLOST Term ? File lock lost ControlExceptionSigWINCH Ign ? Window resize signal (4.3BSD, Sun)
See S04-control for details on how to handle exceptions.
XXX I'm unsure how the actions in the table above can be made to make sense. The Ign actions are already dealt with because %SIG{CHLD}.exception already defaults to undefined. The Term action will probably be self-solving (ie. will terminate the process). The others I'm just plain unsure about. XXX
XXX Everything about Alarm is from the old S17-concurrency.pod { also. Within sink suppressed.
This is a draft document. After being some time under the surface of Perl 6 development this is a attempt to document working concurrency issues, list the remaining todos and mark the probably obsolete and redundant points. (if allowed to do so) Transactional
contend blocks.
These sections are guaranteed to either be completed totally (when the Code block is exited), or have their state reverted to the state at the start of the Code block (with the defer statement).
(EM: maybe point out if / how old style locks can be "simulated", for those needing a migration path?)
my ($x, $y); sub c { $x -= 3; $y += 3; $x < 10 or defer; } sub d { $x += 3; $y -= 3; $y < 10 or defer; } contend { # ... maybe { c() } maybe { d() }; # ... }
A Code block can be prefixed with
contend. This means that code executed inside that scope is guaranteed not to be interrupted in any way.
The start of a block marked
contend also becomes a checkpoint to which execution can return (in exactly the same state) if a problem occurs (a.k.a. a defer is done) inside the scope of the Code block.
The
defer function basically restores the state of the thread at the last checkpoint and will wait there until an external event allows it to potentially run that atomic
contend section of code again without having to defer again.
If there are no external events possible that could restart execution, an exception will be raised.
The last checkpoint is either the outermost
contend boundary, or the most immediate caller constructed with
maybe.
The
maybe statement causes a checkpoint to be made for
defer for each block in the
maybe chain, creating an alternate execution path to be followed when a
defer is done. For example:
maybe { ... some_condition() or defer; ... } maybe { ... some_other_condition() or defer; ... } maybe { ... }
If placed outside a
contend block, the
maybe statement creates its own
contend barrier.
Because Perl 6 must be able to revert its state to the state it had at the checkpoint, it is not allowed to perform any non-revertible actions. These would include reading / writing from file handles that do not support
seek (such as sockets). Attempting to do so will cause a fatal error to occur.
This will probably need to be expanded to all objects: any object that has some interface with data "outside" of the knowledge of the language (e.g. an interface with an external XML library) would also need to provide some method for freezing a state, and restoring to a previously frozen state.
If you're not interested in revertability, but are interested in uninterruptability, you could use the "is critical" trait.
sub tricky is critical { # code accessing external info, not to be interrupted } if ($update) { also is critical; # code accessing external info, not to be interrupted }
A Code block marked "is critical" can not be interrupted in any way. But since it is able to access non-revertible data structures (such as non-seekable file handles), it cannot do a
defer.
Coroutines are covered in S07
All outside of a thread defined variables are shared and transactional variables by default
Program will wait for _all_ threads. Unjoined threads will be joined at the beginning of the END block batch of the parent thread that spawned them
A thread will be created using the keyword
async followed by a codeblock being executed in this thread.
my $thr = async { ...do something... END { } };
TODO: how you can access thread attributes inside a thread
async { say "my tid is ", +self; };
start time
end time
suspended (not diff from block on wakeup signal) waiting on a handle, a condition, a lock, et cetera otherwise returns false for running threads if it's finished then it's Nil
the CC currently running in that thread
TODO: IO objects and containers gets concurrency love!
$obj.wake_on_either_readable_or_writable_or_passed_time(3); # fixme fixme $obj.wake_on:{.readable} # busy wait, probably my @a is Array::Chan = 1..Inf; async { @a.push(1) }; async { @a.blocking_shift({ ... }) }; async { @a.unshift({ ... }) };
Stringify to something sensible (eg. "<Conc:tid=5>");
my $thr = async { ... }; say ~$thr;
Numify to TIDs (as in pugs)
my $thr = async { ... }; say +$thr;
TODO: Enumerable with Conc.list
TODO: Conc.yield (if this is to live but deprecated, maybe call it sleep(0)?)
sleep() always respects other threads, thank you very much
wait for invocant to finish (always item cxt)
my $thr = async { ... }; $thr.join();
throw exception in the invocant thread
set up alarms
query existing alarms
pause a thread; fail if already paused
revive a thread; fail if already running
survives parent thread demise (promoted to process) process-local changes no longer affects parent tentatively, the control methods still applies to it including wait (which will always return Nil) also needs to discard any atomicity context
TODO:
method throttled::trait_auxiliary: implemented using contend+defer) } class Foo { method a is throttled(:limit(3) :key<blah>) { ... } method b is throttled(:limit(2) :key<blah>) { ... } } my Foo $f .= new; async { $f.a } async { $f.b }
TODO document
Live in userland for the time being.
### INTERFACE BARRIER ### module Blah; { also is atomic; # contend/maybe/whatever other rollback stuff # limitation: no external IO (without lethal warnings anyway) # can't do anything irreversible also of ###
Please post errors and feedback to perl6-language. If you are making a general laundry list, please separate messages by topic. | http://search.cpan.org/dist/Perl6-Doc/share/Synopsis/S17-concurrency.pod | CC-MAIN-2017-47 | refinedweb | 1,622 | 50.12 |
Large-scale sparse linear classification, regression and ranking in Python
lightning is a library for large-scale linear classification, regression and ranking in Python.
Highlights:
Solvers supported:
Example that shows how to learn a multiclass classifier with group lasso penalty on the News20 dataset (c.f., Blondel et al. 2013):
from sklearn.datasets import fetch_20newsgroups_vectorized from lightning.classification import CDClassifier # Load News20 dataset from scikit-learn. bunch = fetch_20newsgroups_vectorized(subset="all") X = bunch.data y = bunch.target # Set classifier options. clf = CDClassifier(penalty="l1/l2", loss="squared_hinge", multiclass=True, max_iter=20, alpha=1e-4, C=1.0 / X.shape[0], tol=1e-3) # Train the model. clf.fit(X, y) # Accuracy print(clf.score(X, y)) # Percentage of selected features print(clf.n_nonzero(percentage=True))
lightning requires Python >= 2.7, setuptools, Numpy >= 1.3, SciPy >= 0.7 and scikit-learn >= 0.15. Building from source also requires Cython and a working C/C++ compiler. To run the tests you will also need nose >= 0.10.
Precompiled binaries for the stable version of lightning are available for the main platforms and can be installed using pip:
pip install sklearn-contrib-lightning
or conda:
conda install -c conda-forge sklearn-contrib-lightning
The development version of lightning can be installed from its git repository. In this case it is assumed that you have the git version control system, a working C++ compiler, Cython and the numpy development libraries. In order to install the development version, type:
git clone cd lightning python setup.py build sudo python setup.py install
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/sklearn-contrib-lightning/ | CC-MAIN-2017-30 | refinedweb | 275 | 53.78 |
Get the Next Object of doc.SearchObject() Method?
Hi,
Is there a way to get the "next" object of
doc.SearchObject()method.
As I understand, if there are two objects name
cube. The method will retrieve first
cubein the outliner.
Is there a way to get the next
cubeobject?
My use case is this:
- Check if there are objects with the same name
- If there are, throw an error. Every object must have a unique name.
I guess I can do this with the traversing objects hierarchy code provided by the plugincafe.
Storing each object name and comparing them with each other.
But it would be nice if I can just do this with
doc.SearchObjectso I can shorten the logic.
Regards,
Ben
Hi unfortunately there is nothing built-in so you have to iterate over the scene.
import c4d def HierarchyIterator(obj): while obj: yield obj for opChild in HierarchyIterator(obj.GetDown()): yield opChild obj = obj.GetNext() def main(): names = set() for obj in HierarchyIterator(doc.GetFirstObject()): objName = obj.GetName() if objName not in names: names.add(objName) else: print("{0} Is already present".format(objName)) if __name__=='__main__': main()
Cheers,
Maxime. | https://plugincafe.maxon.net/topic/12874/get-the-next-object-of-doc-searchobject-method | CC-MAIN-2020-40 | refinedweb | 193 | 70.6 |
I’ve been having an issue with this project where it appears I am stuck in a loop where the code prints out the else statement ("Invalid Move. Try Again) even when I intentionally put in inputs that meet the criteria of the if or elif statements that proceed it.
From what I can see, my code is identical to the solution video. Am I missing something???
from stack import Stack print("\nLet's play Towers of Hanoi!!") #Create the Stacks stacks = [] left_stack = Stack('Left') middle_stack = Stack('Middle') right_stack = Stack('Right') stacks += [left_stack, middle_stack,(i) num_optimal_moves = (2 ** num_disks) - 1 print("\nThe fastest you can solve this game is in {0} moves".format(num_optimal_moves)) #Get User Input def get_input(): choices = [stack.get_name()[0] for stack in stacks] while True: for i in range(len(stacks)): name = stacks[i].get_name() letter = choices[i] print('Enter {0} for {0} moves, and the optimal number of moves is {1}".format(num_user_moves, num_optimal_moves)) | https://discuss.codecademy.com/t/towers-of-hanoi-loop/440135 | CC-MAIN-2019-43 | refinedweb | 158 | 64.41 |
Rev C due by mid June, but nothing guaranteed at the moment.
Waiting for the P2 Eval Rev C will make the Wifi programming simpler.
/*
  cr2mobile.c
  May 12, 2020
*/
#include "simpletools.h"
#include "simpletext.h"
#include "fdserial.h"
#include "adcDCpropab.h"
#include "sirc.h"

#define reboot() __builtin_propeller_clkset(0x80)

serial *rpi;
serial *cr2;

volatile float vw;

void bat_info();
void rpi_com();
void cr2_sirc();
void cr2_ctrl();

int main()                                // COG 0
{
  // Add startup code here.
  cog_run(bat_info, 128);                 // COG 1
  pause(150);
  cog_run(rpi_com, 128);                  // COG 2 & 3
  pause(150);
  cog_run(cr2_sirc, 128);                 // COG 4
  pause(150);
  cog_run(cr2_ctrl, 128);                 // COG 5 & 6
  pause(150);
  // COG 7, last cog available

  while(1)
  {
    // Add main loop code here.
    print("Data: %.2f\r", vw);
    pause(4000);
  }
}

/* Tracking of the battery power. */
/* Track the robot main battery and the additional power pack. */
void bat_info()
{
  float v0;
  adc_init(21, 20, 19, 18);
  while(1)
  {
    v0 = adc_volts(0);
    pause(150);
    vw = (v0 * 4.8125);
  }
}

/* Communication with the Raspberry Pi. */
void rpi_com()
{
  rpi = fdserial_open(1, 0, 0, 115200);   // COG 4
  while(1)
  {
    //writeChar(rpi, 'A');
    writeFloatPrecision(rpi, vw, 4, 2);
    writeStr(rpi, "\n");
    pause(1000);
  }
}

/* Manual control of Create 2 with TV remote. */
void cr2_sirc()
{
  while(1)
  {
    pause(500);
  }
}

/* Commands to control the Create 2. */
void cr2_ctrl()
{
  cr2 = fdserial_open(2, 3, 0, 115200);   // COG 6
  while(1)
  {
    pause(500);
  }
}
/*
  cr2_html2.c
  October 10, 2020
*/
#include "simpletools.h"
#include "wifi.h"
#include "ping.h"

int event, id, handle;
int getFromPageId;
volatile int val;

int main()
{
  // Add startup code here.
  wifi_start(31, 30, 115200, WX_ALL_COM);
  getFromPageId = wifi_listen(HTTP, "/tpfm");

  while(1)
  {
    // Add main loop code here.
    val = ping_inches(4);
    //val = (val - 6);
    wifi_poll(&event, &id, &handle);
    if(event == 'G')
    {
      if(id == getFromPageId)
      {
        wifi_print(GET, handle, "%d", val);
      }
    }
    pause(200);
  }
}
<!DOCTYPE HTML>
<html>
<br>
<body>
  <H2>Value from Microcontroller</H2>
  <p>Click Update to see number from Micro:</p>
  <button onclick="getFromMcu()">Update</button>
  <p id="value">Waiting...</p>

  <script>
    function useMcuReply(response)
    {
      var val = document.getElementById("value");
      val.innerHTML = "Value: " + response;
    }

    function getFromMcu()
    {
      httpGet("/tpfm", useMcuReply);
    }

    function httpGet(path, callback)
    {
      var req = new XMLHttpRequest();
      req.open("GET", path, true);
      req.onreadystatechange = function()
      {
        if (req.readyState == 4)
          if (req.status == 200)
            callback(req.responseText);
          else
            callback("Waiting...");
      }
      req.send(null);
    }
  </script>
</body>
</html>
Rsadeika wrote: »
I get the feeling that if I proceed, after I am done, the robot will no longer look like a Create 2 robot.
Rsadeika wrote: »
The only hassle is that you have to keep a close eye on the battery charge level, and then you have to take the battery pack(s) out and have them charged up.
This is giving me some time to think about the robot and what I will be preparing it to do. The one very important thing is the robot power control and usage. The Create 2 standard battery pack is 2.6Ah, which means that you could get probably a good 1 hour run before it needs to be charged. I think I will be adding a power pack for running the Activity Board and accessories.
The Create 2 has a little pull out compartment that can be used, maybe it could fit a small power pack. driverBob made an interesting discovery, vruzend diy, the kit allows you to make your own 18650 battery packs. I am watching his thread to see how his battery pack is working out.
Since I will be adding a Raspberry Pi, to help out the Activity Board, or vice-versa, I know I will need a 3.3V or a 5V power source for the RPi and a 6V power source for the Activity Board. This could be a little tricky, I would like to minimize the regulator count for this. The wiring could get very messy, and a problem of shorts plus wires coming loose comes into play.
Since the 18650 battery is 3.65V, and I think it gets charged up to ~4.0V, hopefully the RPi is ~4.0V tolerant. Now, the Activity Board does not have a method for powering it in that manner. For the P2, I think I read that it is ~4.0V tolerant, so a battery pack that provides 3.65-~4.0V could be created. Putting together 4 18650 batteries in parallel could provide maybe 4 - 6Ah, and hopefully fit into the little Create 2 drawer. Also I would have to create some wiring concept to have the 18650 battery pack charged up when the Create 2 is having its battery pack charged up. Boy this could get very tricky, in terms of wiring and control.
Ray
I keep thinking about possibly using the new P2 eval board instead of the Activity board. Not sure if I could make that work the way I want it to work. First problem: how do I power the P2 eval board? The Create 2 provides unregulated 14.4V. With the Activity board I just use a barrel plug connected to the Create 2 power supply.
The other problem that I see is using the WX WiFi to program the P2. I could maybe try to connect a SIP WX WiFi somehow, but still not sure if flexGUI would be able to work, let alone use the necessary driver for accessing the module. So, what IDE would I use to program the P2?
So there are two big obstacles that would have to be overcome, right off the bat.
My end goal is to create an autonomous robot, which means access to lots of storage, that is where the RPi comes in. Besides adapting the new RPi camera, the robot will probably have a lot of different sensors on it. Now I am thinking interrupts within a cog, not sure that flexGUI has that available just yet. Boy this could be a very long project to develop fully.
Ray
Rev C due by mid June, but nothing guaranteed at the moment.
That is good to hear, I will also probably need to use the ADC functionality of the smart pins. Hopefully there will be some handy-dandy routines available for that.
Speaking of the WX WiFi, my testing shows that the SIP WX WiFi module has a weaker signal reach than the one on the Activity WX board. Not sure why that is, I tested both of them. I especially made an effort to see what the signal strength was while the units were on the floor, since that is where the robot will be.
I will be getting the Parallax GPS module to test out on the robot, this will be used inside the house, I hope this will work out as expected.
At the moment, the robot will be outfitted with an Eval Rev C, Sirc for manual control, WX for remote control and programming, and maybe a temperature sensor, to start things off. I was also thinking it would be nice to have some sort of IR Tx/Rx communication, so robots could talk to each other when in range.
Ray
I have nothing against Spin/Spin2, but that is not my choice for development, sorry. So, what else is left? FlexGUI. Eric has been working on it for a while now, and it is starting to look good. Since I have decided that the project will need a WiFi component, FlexGUI is not there yet for doing the WiFi stuff.
So, I guess I have to pull back and do a preliminary prototype using the Activity WX board with the WX WiFi module.
Ray
I guess I will be using the Activity WX board with SimpleIDE. Below is an outline of the program code, at this point I am using seven COGS.
Ray
Unpacking:
The box itself shows that it is a Create 2, the green model. But when I open up the box and pull out the contents, it contains a black robot. The black robot has had all of its brushes removed, and I think that it is a Roomba 614 model, stripped.
I removed the battery pack, and the battery pack is a Lithium Ion 1800mAh. Could not find any more relevant information about the lithium battery pack, not sure if this is a good or bad choice. The normal battery pack is usually a NiCad 2600mAh 14.4V.
It also contained a communications cable, it looks like a new model, because it has a 10 pin female header on it. The header is TTL just like the mini-din. The USB part of it would be used with a computer for communications with the robot.
I checked some of the voltages at the mini-din connector. When it is on the home base, it shows ~19V; when it is off the home base, it shows ~15.5V. I had the Activity board connected to the mini-din, and when the robot is on the home base and the Activity board gets turned on, the board shuts off. I guess it is blowing a fuse. When the robot is off the home base, and the Activity board gets turned on, it stays powered up. I guess the board can handle the 15.5V. Not sure if there is something wrong with the Activity WX board.
I did a quick test with the Activity board connected to the communications cable header, and the commands that were sent worked to control the robot. The Activity board was powered by my small lithium ion power pack; when charged up, it is ~8V 2600mAh. I guess I will have to put two of these together, parallel, and have ~8V 5200mAh of power. It looks like I could put the batteries inside of the robot compartment. Now how can I have the robot charge the batteries while it is on the home base charging the robot?
Next thing is connect up an IR demodulator, and use the Sirc functions to control the robot with a TV sirc remote. I will be able to see how straight it runs, traction transition between wood floor and carpeted surfaces, plus how it handles rolling over loose cables that could be laying on the floor.
On to the next thing.
Ray
The Activity WX board that I was using no longer functions. At first, the ADC chip got fried; this time the Propeller chip itself got fried. Not sure how the Propeller chip got fried, the top surface of the Create 2 is all plastic, and there were no other metal objects in the vicinity. I think it had problems with the barrel plug adapter itself.
I have one Activity board, not a WX version, left to work with, but I find this cumbersome to work with. I do not have a 9' USB cable and I do not have any room, on the desk, for the robot to sit close to the computer, when it has to be programmed.
I am also finding that the top of the robot gets very cluttered with wires and boards, and because I have to disconnect everything every time I have to program the Activity board, I find this very tedious. I have to rethink this whole setup.
I am also finding that working with the Create 2 robot, gets to be a little bit annoying sometimes. They have a power saving feature which turns the robot off after 5 minutes, sometimes I forget what mode the robot is in, passive or off.
Now, I am not even sure that the P2 Eval Rev C board will work out, I will still have too many wires and little boards to contend with. And if I fry the Propeller chip, now it becomes a lot more expensive to replace. Not sure what a good board and processor would be for the robot. Back to the drawing board, for the time being.
Ray
... But I think you have a FLiP and WiFi SIP module, right?
Could they work together and provide a smaller controller (and lower cost) to experiment with?
Set the WiFi module to use CTS reset, and hook up to the FLiP with some F-F header cables.
The problem with using the FLiP module is, it has to be on the breadboard, and I do not want to use the adhesive surface of the breadboard just yet, it makes a mess when you want to peel the breadboard off the surface.
As I mentioned somewhere earlier, there seems to be a difference in signal reach between the two WiFi modules. The SIP version seems to have a weaker reach. Since I have two Activity WX boards that got fried, the WiFi modules still seem to work. Would be nice if there was a way to use those WiFi modules on a breadboard.
I also ordered a GPS module from Parallax, I found the C driver for it, in the new Learn folder, but no example code. Yes, I disabled Blockly, so not sure how I will get some example code.
The electrical part is going to get a little tricky, the regulator that I am using is bigger than the FLiP module and it also has to be fastened down, so it does not slide around on top of the robot.
[Quote]
...and hook up to the FLiP with some F-F header cables.
[/QUOTE]
Not sure what F-F header cables are. Does this mean that if you lay the WiFi module down, you will get a longer signal reach?
Ray
Our friend means, this:
...and hook up to the FLiP with some F-F header cables.
He means to use Dupont style jumpers with female connectors on them. Sort-of like the ones used for connecting servos to BoE boards for example. I've used them to connect my breakout board for my TI calculator to that same Stamp board. Oh and the differences between an Irobot create and the regular model are just that. They do not install those functional devices into the Create model. Remember that prior to the release of the Create models, people would track down those who needed batteries, which at the time were expensive, and track down replacement ones which were, well, not designated by the company as replacements.
I actually looked at the site for buying one, until I saw how much he wanted just to come here, and changed my mind.
Of course there is another problem..... You were sent one with a personality module intact........
My suggestion would be use these Velcro type strips to secure the FliP to the robot. Just flip the FliP upside down and use the smooth side to fasten it to to the robot. The fasteners can easily be removed in the future thanks to their pull strips. These same sorts of fasteners might be able to secure your voltage regulator.
I sometimes add a layer of Polymorph to the bottom of a PCB in order to have a flat surface to mount the Velcro style fastener. Here's an example of a small PCB I secured with 3M's Dual Lock.
I use a lot of short jumper wires in my projects. I get these short ones from Pololu.(I have get a bunch of a bunch of sizes. I try to use the shortest jumper practical.) The plastic connector housings are also sold at Pololu but these can be found for much less from other sources (Banggood, ebay, Aliexpress, etc). Having a variety of short jumpers makes the wiring on projects look much less of a tangled mess.
I only use female ended jumper wire. The male pins bend too easy. I used to get both male and female connectors but now I only purchase female connectors and use long header pins in places where I need to join to sockets together.
Depending on how many pins need multiple connections, a breadboard may not be required.
I used a QuickStart board with my Roomba. I found controlling the Roomba motors a bit of a challenge. I'm used to controlling a robot by controlling the speed to two motors. iRobot's control protocol used a turning radius and speed as control parameters.
I'm sure there are better algorithms for computing radius and speed from joystick inputs than what I came up with, but I'm certainly willing to decipher my old code to share the algorithm I used.
Since I am a big fan of using thick cardboard, to hold some small screws, the setup for the time being will be, a small breadboard too hold the FLiP module and the WiFi SIP module. I did a quick test run of the setup, and I am experiencing some WiFi issues with the WiFi module, or maybe SimpleIDE. Sometimes SimpleIDE cannot find the WiFi module. So I have to power down the FLiP module, and then power it up again. I was having the same issue with the Activity WX board setup.
The next thing on the list is to add the Raspberry Pi 3 Model A+. Not sure what communication model I will be using, master/slave, or something else between the FLiP and the RPi.
Ray
Since I have the FLiP and a WiFi module connected, I will probably try to setup a WEB page that manually controls the robot via the phone. Since FlexGUI does not have a provision for doing this, I will have to use SimpleIDE. I did this for another robot setup using a tablet, so it can be done.
Ray
The electronics are somewhat safer now, at least they are not sliding around loosely on the robot. I have the FLiP WX and the RPi connected up together, for communication, with each other.
On the FLiP WX, I have communication with the robot, for movement control, which the RPi will be using via the FLiP. I created a WEB page for the WX so I can now drive the robot around via the WX. I can also connect to the RPi remotely, and maneuver the robot, via the FLiP, with a python script program. So, basically I can get at the robot from two different ways. For the FLiP, I am using SimpleIDE, since it has the software for dealing with the WEB page.
The next step is to attach an RPi camera, on the front of the robot, and with some some python
camera streaming software, I will try to maneuver the robot remotely from my desktop computer.
Somewhere along the line I have to create some manner for data collection of the robot and maybe the robot surroundings as it moves around.
Ray
I came across a Raspberry Pi NOIR camera, which I placed on top of the omni sensor of the Create 2 robot. I also switched out the Raspberry Pi that I was using for a Raspberry Pi 3 model B. I also implemented the Python motion program, to create a webcam functionality, which I can view on a browser.
I am using the FLiP module with the WiFi, to control the robot movement, via a browser. Very interesting experience. The picture that you see in the browser has about a 3 second delay in actual action. So, the robot moves and 3 seconds later you see the robot move in the browser screen. I guess I could learn how to adjust for that, but the big problem is you have no sense of real distance on the browser screen. If you drive the robot towards a wall, you have no sense of how close you are to the wall.
If I want to drive the robot around, in the dark, I will have to implement some IR "head lights" for the robot. That should be even a stranger experience for driving the robot. And I think that the sense of distance will become even more hazardous.
Now I have to give it some thought about what kind of sensors, and how many of them, to help in maneuvering the robot via a browser. Now I will get some more robot driving lessons.
Ray
What I am trying to do is have a value that gets constantly updated on the WEBPAGE. In this case, I added a ping module, so as the robot moves around I would like to see that value updated on the WEBPAGE. At the moment the code requires that you press the update button, constantly.
I will be needing to see the battery charge value and other items that will have to be updated constantly, also. If I get a handle on doing the ping thing, then I should be able to add this other stuff.
I went ahead and ordered an RPi night vision camera with IR lights module, that should be an interesting test run, when it hopefully arrives sometime next week.
Thanks
Ray
HTML
1. Web sockets
Full-duplex, as real-time as it gets, and probably the most involved. Included dropped client detection, but no ability to automatically reconnect (you'll have to handle that with you code).
2. Long polling
Half-duplex, older technique, only allows server to client communication. Basically leaves a connection open to the server from the client until it times out.
3. Polling
Half-duplex, probably easiest but most problematic. If you don't have a crazy speed requirement and/or don't want to get to deep in web dev, this should work fine.
4. Server sent events
Half-duplex, server to client only. Probably the correct solution when servers need to send data to the client, but not the other way around (eg push notifications).
If you want full-duplex communicats but dont want to go sockets, you can use one of the above along with the client sending api queries.
The little nuts and bolts, in the package, I think a grain of rice is bigger then the bolt. You need lots of patience to work with the nuts and bolts. I did have it put together for me, and gave it a test run. I placed the robot at one end of a 15 foot, dark hall way, and it did show the whole hallway in the browser window. In fact the picture was very clear, and in focus.
The information that is with the package is very lean, so, you have to make assumptions. It listed the IR LEDs at 3W, I am assuming that each LED is 1.5W. Not sure how long the battery, on the robot, would last if you ran this all night. It does not have an on/off mechanism, but they do have a sensor to check how much light is available. Also, the program that I am using does not have a workable on/off for running the webcam program.
Ray
As I am adding more power requirements, the power that I am tapping from, the 7 pin din plug, is only able to provide - 20.5V-10V Voltage, 0.2A Current, and 2W. I need to tap the battery power directly, and use some kind of voltage regulator to distribute the required power to the external electronics, like the FLiP and the Raspberry Pi. It is the current rating that is restricting every thing. I checked the Raspberry Pi current needs, and it needs 0.5A Current to be minimally happy.
This sounds like getting into the guts of the robot to find a direct battery voltage source, then probably have to create some kind of wiring harness to bring a connection up to the topside of the robot. I get the feeling that if I proceed, after I am done, the robot will no longer look like a Create 2 robot. Since I am not into soldering and knowing anything about logic-level N-FET, and such things, I wonder if, after this modification, the robot would still work.
Ray
Having a nice looking robot is a plus but I think you are more likely interested in having a robot which does what you want.
If you want my opinion, I'd say show it who's boss. Make the robot bend to your will.
Oh, and take bigger pictures when you do. It was hard to make out all the parts in the photo you shared earlier.
Thanks for all the updates and good luck with whichever course you take.
Main Brush, Side Brush, and the Vacuum bin. The main brush location is probably what I will be going for, since it provides the most current, 1.45A to be exact. The vacuum bin would be an excellent place, but it only has 0.56A current.
Now what is really confusing is they recommend that an inductor be soldered in. I was thinking that a voltage regulator would suffice. I read the Wikipedia write up for an inductor, but I am still not sure as to what this thing is supposed to be doing. Since I do not have any inductors handy, any ideas as to the next best thing? Or will the regulator do the job.
Ray
Since the robot has a vacuum bin, I created some 18650 power pack(s), they fit nicely into the bin. The only hassle is that you have to keep a close eye on the battery charge level, and then you have to take the battery pack(s) out and have them charged up. The battery pack(s) produce enough current to drive the RPi and the night vision camera. The 18650 batteries that I have are 2650mAh, I have to find some batteries that are, maybe, in the 4000mAh range, and have the protection circuitry build in.
For the charging of the 18650 batteries, I came across some info about wireless charging technology. Not sure if something like that could be applied to the robot, and have the battery pack(s) charged in that manner. Will have to do some more research in that area.
Driving the robot around I found some WiFi dead spots for the RPi, so when you are looking at the video on the browser, all of the sudden the video freezes. Now I am thinking I need some kind of secondary source, maybe something like lidar. I believe Parallax sells some lidar units, but I know nothing about the technology. It seems that maybe the FLiP module would get overwhelmed with the memory requirements. I saw some lidar units for the RPi, but now I would be back to increased power requirements for the battery pack(s).
All kinds of little problems are starting to show up, maybe I have to build a robot from scratch that can handle all of anticipated problems.
Ray
I highly recommend anyone using Li-Ion or LiPo cells purchase multiple battery alarm/meters. Here's a thread discussing these devices.
These devices have paid for themselves multiple times over by reminding me a battery pack is getting too low.
If you are using individual Li-Ion cells, you should wire up a harness for the alarm/meter. | http://forums.parallax.com/discussion/comment/1508556 | CC-MAIN-2020-45 | refinedweb | 4,512 | 71.85 |
BiftVector
This project has not yet had its initial release.
Introduction
BiftVector is a Swift package for bit vectors (also called bit arrays).
This project has not yet had its initial release! See project kanban for current progress. Take a quick look at contribution guidelines.
Refer to the Documentation for more.
Examples
import BiftVector let bv1 = ~BiftVector(size: 32) let bv2 = BiftVector(hexString: "21350452") let result = bv1 ^ bv2 let bv3 = BiftVector(hexString: "decafBAD") if (result == bv3) { print("We've succeeded! 😎") }
Goals And Features
The project wiki has additional features listed.
Main goals
- Intends to support "arbitrarily-sized" bit vectors
- Include full complement of functions and operators
- Documented API
- Use continuous integration and TDD (Test Driven Development) practices
Additional goals
- Cross-platform compatible (Linux, iOS, macOS, watchOS, tvOS)
- Be performant and memory efficient
- Include a CLI wrapper as a demo application
- Include a Swift Playground to as another demo
Compatibility
- Swift 5.1, Xcode 11
- 64b platforms
Installation
Swift Package manager
In
Package.swift, add the following for
BiftVector, then run
swift build.
dependencies: [ .package(url: "", from: "0.1.0") ]
Usage
import BiftVector
Using REPL with Swift PM
$ swift package clean $ swift build $ swift test $ swift run --repl Launching Swift REPL with arguments: -I/Users/dpc/projects/xcode/Projects/BiftVector/.build/x86_64-apple-macosx/debug -L/Users/dpc/projects/xcode/Projects/BiftVector/.build/x86_64-apple-macosx/debug -lBiftVector__REPL Welcome to Apple Swift version 5.1 (swiftlang-1100.0.270.13 clang-1100.0.33.7). Type :help for assistance. 1> import BiftVector 2> let bv1 = BiftVector(hexString: "C0FFEE600DDECAFBAD") bv1: BiftVector.BiftVector = { size = 72 words = 2 values { [0] = 16092341889477902083 [1] = 181 } } 3> print (~bv1) 001111110000000000010001100111111111001000100001001101010000010001010010
The above loads the framework that XCode project builds.
Using Playground on macOS
There is now a
BiftVectorPlayground.playground
However, there are still some unresolved problems with build Framework/Library in the schemes. Have been able to get playground to work using following:
open BiftVector.xcworkspace # Now build using BiftVector-macOS schemes-- Included .playground now works
Need a better, more idiomatic, less fiddly way.
Development
Uses
XCT* Framework which are included in Xcode and Swift distributions.
To run test suite from command line:
swift test
This builds the package source if needed, and runs the test suite, reporting on any failures encountered. The project embraces TDD (Test-Driven Development) which means all tests are required to pass.
Credits
Author(s)
How this library was created
In macOS
mkdir BiftVector cd BiftVector git init . swift package init --type library swift package generate-xcodeproj open BiftVector.xcodeproj
Attribution
Inspired by open source projects:
- BitVector project (Python)
- Swift Algorithm Club Bit Set (Swift)
Named as a portmanteau word of
Bit and
Swift (sounds like
Biffed) forming into Vector.
Related packages
Searching through CocoaPods, I noticed
- Bit
- Bitter
- BitByteStream
There are some related Framework types, but all of them seem to be restricted to operating on single words (i.e., up to 64 bits)
License
This package is licensed under the MIT License.
Github
Help us keep the lights on
Dependencies
Used By
Total: 0 | https://swiftpack.co/package/idcrook/BiftVector | CC-MAIN-2019-43 | refinedweb | 504 | 56.76 |
Anyone who has played with Zope 3 has probably seen the syntax used to declare what interfaces a particular class implements. It looks something like this:
class Foo: implements(IFoo, IBar) ...
This leads to the following question: how can a function call inside a class definition’s scope affect the resulting class? To understand how this works, a little knowledge of Python metaclasses is needed.
Metaclasses
In Python, classes are instances of metaclasses. For new-style classes, the default metaclass is type (which happens to be its own metaclass). When you create a new class or subclass, you are creating a new instance of the metaclass. The constructor for a metaclass takes three arguments: the class’s name, a tuple of the base classes and a dictionary attributes and methods. So the following two definitions of the class C are equivalent:
class C(object): a = 42 C = type('C', (object,), {'a': 42})
The metaclass for a particular class can be picked in a number of ways:
- A __metaclass__ variable at module or class scope.
- Use the same metaclass as the base class.
If no metaclass is specified through either of these means, an “old style” class is created. I won’t cover old style classes here.
Now in Python calling a function and creating a new instance look pretty similar. In fact the metaclass machinary doesn’t really care. The following two class definitions are also equivalent:
class C: __metaclass__ = type def not_the_metaclass(name, bases, attrs): return type(name, bases, attrs) class C: __metaclass__ = not_the_metaclass
So using a function or other callable object as the metaclass allows you to hook into the class creation without affecting the type of the resulting class.
Class Advisors
The tricks performed by the Zope implements() function are wrapped up in the zope.interface.advice module. It does so by making use of the fact that Python programs can inspect their execution stack at runtime.
- Walk up the stack to where the scope of the class being defined.
- Check to see if a “__metaclass__” variable has been set, which would indicate the that a metaclass has been specified for this particular class already.
- Check the module scope for a “__metaclass__” variable.
- Define a function advise(name, bases, cdict) that does the following:
- Deduce the metaclass (either what __metaclass__ was set to in the class scope, the module scope, or check base classes).
- Call the metaclass to create the new class.
- Do something to the new class (in the case of Zope, it sets what interfaces the class implements).
- Set the “__metaclass__” variable in the class scope to this function.
The actual implementation is a little more complicated to handle the case of registering multiple class advisors for a single class. The actual interface provided is quite simple though:
from zope.interface.advice import addClassAdvisor def setA(): def advisor(cls): cls.a = 42 return cls addClassAdvisor(advisor) class C: setA()
This simply sets the attribute ‘a’ on the class after it has been created. Also, since method decorators are implemented as a single function call, they can add a class advisor as a way to perform some extra work on the class or method after the class has been constructed. | http://blogs.gnome.org/jamesh/2005/09/08/python-class-advisors/ | CC-MAIN-2015-06 | refinedweb | 534 | 61.87 |
What would be the best and most efficient way for a plugin to perform an action "x" seconds after the caret has stopped moving? If the caret starts moving again before "x" seconds has expired, the action should not execute.
That's an important question, most get it wrong, or they just don't notice that running something every time the caret moves is a very heavy thing to do.
I use this and other combinations:
import time
import sublime_plugin, sublime

try:
    import thread
except:
    import _thread as thread

Pref = None


class DontMessUpWithModifiedListenersPlease(sublime_plugin.EventListener):

    def on_selection_modified_async(self, view):
        if Pref.enabled and not view.settings().get('is_widget'):
            Pref.modified = True
            Pref.timing = time.time()

    def my_action(self, view):
        now = time.time()
        if now - Pref.timing > Pref.run_every:
            Pref.timing = time.time()
            if Pref.modified and not Pref.running:
                Pref.modified = False
                Pref.running = True
                print('yeah')  # hardcore action <-- ideally this should also run in a thread to not block the UI. In that case you should also turn off the "Pref.running" flag once the thread completes, and not in the next line, as otherwise you can end up with multiple threads running at the same time.
                Pref.running = False


def dmuwmlp_loop():
    my_action = DontMessUpWithModifiedListenersPlease().my_action
    while True:
        my_action(sublime.active_window().active_view())
        time.sleep(Pref.run_every)


def plugin_loaded():
    global Pref

    class Pref:
        def load(self):
            Pref.enabled = True
            Pref.modified = False
            Pref.run_every = 0.60
            Pref.running = False
            Pref.timing = time.time()

    Pref = Pref()
    Pref.load()

    if not 'running_dmuwmlp_loop' in globals():
        global running_dmuwmlp_loop
        running_dmuwmlp_loop = True
        thread.start_new_thread(dmuwmlp_loop, ())


if int(sublime.version()) < 3000:
    plugin_loaded()
If "print('yeah')" is also a hardcore action, then you just start a thread there (to avoid blocking the ST UI), do all the needed computation, and when done, just set in the thread itself "Pref.running = False" to allow running it again. -- I'm sorry for delay.
My take:
import time
import threading
import sublime_plugin

# Time we should wait after edit ends, in seconds
timeout = 0.6

# Global reference to our thread
selection_thread = None


class TimeoutThread(threading.Thread):
    should_stop = False
    last_poke = 0

    def __init__(self, timeout, callback, sleep=0.1, *args, **kwargs):
        super(TimeoutThread, self).__init__(*args, **kwargs)
        self.timeout = timeout
        self.callback = callback
        self.sleep = sleep

    def run(self):
        while not self.should_stop:
            now = time.time()
            if self.last_poke and (self.last_poke + self.timeout) < now:
                # Run the callback
                self.callback(*self.poke_args, **self.poke_kwargs)
                self.last_poke = 0
            time.sleep(self.sleep)

    def stop(self):
        # Set a flag to signal that we want to terminate the selection_thread
        self.should_stop = True

    def poke(self, *args, **kwargs):
        self.last_poke = time.time()
        self.poke_args = args
        self.poke_kwargs = kwargs


def timeout_callback(view):
    print("do stuff on a view now", view, view.id())

selection_thread = TimeoutThread(timeout, timeout_callback)


class SelectionListener(sublime_plugin.EventListener):
    def on_selection_modified(self, view):
        if not view.settings().get('is_widget'):
            selection_thread.poke(view)


def plugin_loaded():
    selection_thread.start()


def plugin_unloaded():
    selection_thread.stop()
I tried to keep it a bit more generic (as I always tend to do). I also thought about trying to save multiple timestamps for each different set of parameters (because currently the callback would not fire if you changed the selection on a different view within those 0.6 seconds), but then decided it's not that useful and can be added later anyway if necessary.
Another thing to note is that ST3 added a new on_selection_modified_async method that will be called asynchronously, but it will not have the behaviour you describe and that these implementations provide.
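For what it's worth, a common idiom on top of the async events is a counter-based debounce: every event schedules a delayed check, and only the check that sees no newer event scheduled after it fires the action. Here is that idea in plain Python; this is my own illustration, with threading.Timer standing in for what would be sublime.set_timeout_async in a real plugin:

```python
import threading

class CounterDebounce:
    """Counter-based debounce: only the last scheduled check fires."""

    def __init__(self, delay, callback):
        self.delay = delay
        self.callback = callback
        self._pending = 0
        self._lock = threading.Lock()

    def trigger(self, *args):
        # Every event bumps the counter and schedules a delayed check.
        with self._lock:
            self._pending += 1
        # In a Sublime plugin this would be sublime.set_timeout_async(...).
        threading.Timer(self.delay, self._check, args).start()

    def _check(self, *args):
        with self._lock:
            self._pending -= 1
            if self._pending:  # a newer event was scheduled after this one
                return
        self.callback(*args)
```

Each keystroke or caret move calls trigger(); only the final one, followed by `delay` seconds of silence, reaches the callback.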
@tito: Your solution has two main flaws:
1. The thread is never closed.
2. You unnecessarily create a new instance of DontMessUpWithModifiedListenersPlease every 0.6 seconds.
Very late edit: My solution is actually flawed in a certain case but I didn't bother to fix it. When the thread is poked with a different view while it's awaiting the timeout of another one it gets overridden. There either needs to be a mechanic to identify different "callbacks" or a separate thread for each view, which is unfavorable. I guess I just need to load it with more features.
Followup edit: Here is a fixed version: gist.github.com/FichteFoll/0694dadafe51eb493b47
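One way to avoid the "a second view overrides the pending timeout" problem is to track one deadline per key (e.g. per view id) instead of a single timestamp. A rough plain-Python sketch of that idea (my own illustration, not the code from the gist):

```python
import threading
import time

class MultiTimeoutThread(threading.Thread):
    """Like TimeoutThread, but keeps one deadline per key, so poking
    with a second view does not cancel the first one's pending call."""

    def __init__(self, timeout, callback, tick=0.05):
        super().__init__(daemon=True)
        self.timeout = timeout
        self.callback = callback
        self.tick = tick
        self._deadlines = {}  # key -> (deadline, args)
        self._lock = threading.Lock()
        self._stop = threading.Event()

    def poke(self, key, *args):
        with self._lock:
            self._deadlines[key] = (time.time() + self.timeout, args)

    def stop(self):
        self._stop.set()

    def run(self):
        while not self._stop.is_set():
            now = time.time()
            due = []
            with self._lock:
                for key, (deadline, args) in list(self._deadlines.items()):
                    if deadline <= now:
                        due.append((key, args))
                        del self._deadlines[key]
            for key, args in due:
                self.callback(key, *args)
            time.sleep(self.tick)
```

An event listener would then call poke(view.id(), view) so each view keeps its own countdown.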
The first one is a feature not a bug, and the second... I don't mind.
Yours, as I'm reading it (maybe I'm wrong), can run the same task even if the previous task is still running. See GitGutter slowing down the complete App. My version does not suffer that problem. So, I'll recommend sticking to the first one, which is easy to read, and efficient.
I don't understand that. I only create a single thread so how can I run multiple tasks at the same time?
It is definitely not as efficient since you unnecessarily create a new instance of a completely unrelated class for every tick, for no reason. This is a terrible design decision. My version just calls time.time() and evaluates some basic expression (which you do too, but more). The only thing that I do "worse" is not using on_selection_modified_async, probably because of my ST2 and ST3 combo-usage, but the two calls in there will hardly make a difference.
Regarding the easy to read part, the method my_action has no direct connection to your on_selection_modified and is instead called by a separate function, which pretty much only implements time.sleep. This is also one of the reasons why you need so many pseudo-global variables since you use them in many different places - and I think everyone knows that globals should not be used thoughtlessly. Furthermore, you use the globals() function directly where it is not necessary and I've never liked your Pref = Pref() construct (since it assigns a different type to the same identifier at a point in time that is not known beforehand).
I define a custom thread that is easily customized, which might look more confusing than yours on first sight, but once you spend a few seconds on it you will grasp what it does, assuming you know some Python. You can also re-use it easily because it uses an OOP style and is context-independent.
By the way, if you make a modification with your version and then quickly change the view within Pref.run_every seconds, the action will actually run on the newly selected view instead of the original one where the event occurred. (related)
Sorry if I went overboard here, but I was somewhat offended by your comment and had to defend myself and my decisions.
Yeah, re-spawning a new DontMessUpWithModifiedListenersPlease probably isn't the best way.
I have yet to play with on_selection_modified_async. All of my plugins were originally written to work with ST2, so when they were ported to ST3, the methodology didn't change much for the sake of quick porting.
In the early days of BracketHighlighter, I know me and @tito went back and forth a bit on the final implementation of how to best handle the deferred bracket matching. @tito was the one who first made the pull request in BH, and it has been massaged into what it is today. BH wanted to execute not only on every time the caret moved, but on modification as well. If it didn't, the brackets would not always be highlighted proper. But the idea, no matter how it is implemented is going to be the same.
So BH does something similar to what @tito is suggesting with good results. Flags the events and when a sufficient amount of time has passed without other events, executes the payload (and does only what is necessary when idle). So this is a real world example that is currently being used. Since it is handling both caret moving and modifications, the handling of the events is a bit more complicated. It also restarts a fresh thread when the plugin is reloaded. There are many different ways to essentially do the same thing. Whether it is driven by a Thread class as @FichteFoll shows, or the opposite like what @tito shows, as long as they follow the 3 main points, you are fine. I have used a variety of methods to thread stuff in different situations, and I believe my approaches over time are evolving. If I were to redo BH's threading would it look different...maybe, but it would still basically be doing exactly the same thing even if I packaged it differently. I am not sure about the performance of other's plugins with regards to different methods, but I do know that BH has good performance in regards to deferring to ideal times to execute its payload.
BH is a beast that could probably use some more cleanup, but you can look at the source if you like: github.com/facelessuser/BracketHighlighter
But this is a real world example:

class BhEventMgr(object):
    """
    Object to manage when bracket events should be launched.
    """

    @classmethod
    def load(cls):
        """
        Initialize variables for determining
        when to initiate a bracket matching event.
        """

        cls.wait_time = 0.12
        cls.time = time()
        cls.modified = False
        cls.type = BH_MATCH_TYPE_SELECTION
        cls.ignore_all = False

BhEventMgr.load()

...

class BhListenerCommand(sublime_plugin.EventListener):
    """
    Manage when to kick off bracket matching.
    Try and reduce redundant requests by letting the
    background thread ensure certain needed match occurs
    """

    def on_load(self, view):
        """
        Search brackets on view load.
        """

        if self.ignore_event(view):
            return
        BhEventMgr.type = BH_MATCH_TYPE_SELECTION
        sublime.set_timeout(bh_run, 0)

    def on_modified(self, view):
        """
        Update highlighted brackets when the text changes.
        """

        if self.ignore_event(view):
            return
        BhEventMgr.type = BH_MATCH_TYPE_EDIT
        BhEventMgr.modified = True
        BhEventMgr.time = time()

    def on_activated(self, view):
        """
        Highlight brackets when the view gains focus again.
        """

        if self.ignore_event(view):
            return
        BhEventMgr.type = BH_MATCH_TYPE_SELECTION
        sublime.set_timeout(bh_run, 0)

    def on_selection_modified(self, view):
        """
        Highlight brackets when the selections change.
        """

        if self.ignore_event(view):
            return
        if BhEventMgr.type != BH_MATCH_TYPE_EDIT:
            BhEventMgr.type = BH_MATCH_TYPE_SELECTION
        now = time()
        if now - BhEventMgr.time > BhEventMgr.wait_time:
            sublime.set_timeout(bh_run, 0)
        else:
            BhEventMgr.modified = True
        BhEventMgr.time = now

    def ignore_event(self, view):
        """
        Ignore request to highlight if the view is a widget,
        or if it is too soon to accept an event.
        """

        return (view.settings().get('is_widget') or BhEventMgr.ignore_all)

def bh_run():
    """
    Kick off matching of brackets
    """

    BhEventMgr.modified = False
    window = sublime.active_window()
    view = window.active_view() if window != None else None
    BhEventMgr.ignore_all = True
    bh_match(view, True if BhEventMgr.type == BH_MATCH_TYPE_EDIT else False)
    BhEventMgr.ignore_all = False
    BhEventMgr.time = time()

def bh_loop():
    """
    Start thread that will ensure highlighting happens after a barage of events
    Initial highlight is instant, but subsequent events in close succession will
    be ignored and then accounted for with one match by this thread
    """

    while not BhThreadMgr.restart:
        if BhEventMgr.modified == True and time() - BhEventMgr.time > BhEventMgr.wait_time:
            sublime.set_timeout(bh_run, 0)
        sleep(0.5)

    if BhThreadMgr.restart:
        BhThreadMgr.restart = False
        sublime.set_timeout(lambda: thread.start_new_thread(bh_loop, ()), 0)

def init_bh_match():
    global bh_match
    bh_match = BhCore().match
    bh_debug("Match object loaded.")

def plugin_loaded():
    init_bh_match()

if not 'running_bh_loop' in globals():
    global running_bh_loop
    running_bh_loop = True
    thread.start_new_thread(bh_loop, ())
    bh_debug("Starting Thread")
else:
    bh_debug("Restarting Thread")
    BhThreadMgr.restart = True
I really like @facelessuser's implementation... and yes, there are probably many ways to do it.
I understand and can appreciate that these implementations are elegant, but they are failing at one important point: performance. The "run" methods of these suggestions, if I'm reading correctly, can STILL be running on a second tick.
Agree, I'm unnecessarily instantiating the class... removed.
Agree. BH tries to resolve this by setting "BhEventMgr.ignore_all = True" when entering run, which causes all future events to be ignored until it gets set to false.
Now for blatant honesty: do I set "BhEventMgr.ignore_all = True" soon enough? Meh. The run method runs on the main thread, so you won't have more than one method running simultaneously; at most you might have an additional one run right after, but I am not sure how often that would happen. I never cared to look into it, since performance is pretty good.
But here are the two things I could do better:
1. Set "BhEventMgr.ignore_all = True" on the background thread before calling the run method, and release it when the run method is done on the main thread.
2. Lock access to the shared variables when accessing them on the main thread or background thread, so the threads are not stepping on each other's toes, as shown below:
[pre=#232628]_LOCK = threading.Lock()

with _LOCK:
    _RUNNING = True[/pre]
Both of these things have been on my mind and would probably improve things, but yeah, I just haven't bothered with them yet.
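For anyone unsure what "stepping on each other's toes" looks like in practice: an unguarded read-modify-write from two threads can lose updates, and the with-lock pattern above prevents that. A tiny plain-Python demonstration (made-up names, nothing Sublime-specific):

```python
import threading

lock = threading.Lock()
shared = {"count": 0}

def worker():
    for _ in range(100000):
        with lock:              # acquire, mutate, release -- held very briefly
            shared["count"] += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared["count"])  # -> 200000 (without the lock this can come up short)
```

Each critical section is only a few instructions long, so holding the lock does not meaningfully block either thread.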
Maybe something more like this:

[pre=#232628]LOCK = threading.Lock()


class BhEventMgr(object):
    """
    Object to manage when bracket events should be launched.
    """

    @classmethod
    def load(cls):
        """
        Initialize variables for determining
        when to initiate a bracket matching event.
        """
        with LOCK:
            cls.wait_time = 0.12
            cls.time = time()
            cls.modified = False
            cls.type = BH_MATCH_TYPE_SELECTION
            cls.ignore_all = False

BhEventMgr.load()


class BhThreadMgr(object):
    """
    Object to help track when a new thread needs to be started.
    """
    with LOCK:
        restart = False


class BhListenerCommand(sublime_plugin.EventListener):
    """
    Manage when to kick off bracket matching.
    Try and reduce redundant requests by letting the
    background thread ensure certain needed match occurs
    """

    def on_load(self, view):
        """
        Search brackets on view load.
        """

        if self.ignore_event(view):
            return
        with LOCK:
            BhEventMgr.type = BH_MATCH_TYPE_SELECTION
        sublime.set_timeout(bh_run, 0)

    def on_modified(self, view):
        """
        Update highlighted brackets when the text changes.
        """

        if self.ignore_event(view):
            return
        with LOCK:
            BhEventMgr.type = BH_MATCH_TYPE_EDIT
            BhEventMgr.modified = True
            BhEventMgr.time = time()

    def on_activated(self, view):
        """
        Highlight brackets when the view gains focus again.
        """

        if self.ignore_event(view):
            return
        with LOCK:
            BhEventMgr.type = BH_MATCH_TYPE_SELECTION
        sublime.set_timeout(bh_run, 0)

    def on_selection_modified(self, view):
        """
        Highlight brackets when the selections change.
        """

        if self.ignore_event(view):
            return
        with LOCK:
            if BhEventMgr.type != BH_MATCH_TYPE_EDIT:
                BhEventMgr.type = BH_MATCH_TYPE_SELECTION
            now = time()
            if now - BhEventMgr.time > BhEventMgr.wait_time:
                sublime.set_timeout(bh_run, 0)
            else:
                BhEventMgr.modified = True
                BhEventMgr.time = now

    def ignore_event(self, view):
        """
        Ignore request to highlight if the view is a widget,
        or if it is too soon to accept an event.
        """

        return (view.settings().get('is_widget') or BhEventMgr.ignore_all)


def bh_run():
    """
    Kick off matching of brackets
    """

    window = sublime.active_window()
    view = window.active_view() if window != None else None
    with LOCK:
        edit_type = BhEventMgr.type == BH_MATCH_TYPE_EDIT
    bh_match(view, edit_type)
    with LOCK:
        BhEventMgr.ignore_all = False
        BhEventMgr.time = time()


def bh_loop():
    """
    Start thread that will ensure highlighting happens after a barage of events
    Initial highlight is instant, but subsequent events in close succession will
    be ignored and then accounted for with one match by this thread
    """

    def should_restart():
        restart = False
        with LOCK:
            restart = BhThreadMgr.restart
        return restart

    while not should_restart():
        with LOCK:
            if BhEventMgr.modified == True and time() - BhEventMgr.time > BhEventMgr.wait_time:
                BhEventMgr.ignore_all = True
                BhEventMgr.modified = False
                sublime.set_timeout(bh_run, 0)
        sleep(0.5)

    if should_restart():
        with LOCK:
            BhThreadMgr.restart = False
        sublime.set_timeout(lambda: thread.start_new_thread(bh_loop, ()), 0)


def init_bh_match():
    global bh_match
    bh_match = BhCore().match
    bh_debug("Match object loaded.")


def plugin_loaded():
    init_bh_match()

    global HIGH_VISIBILITY
    if sublime.load_settings("bh_core.sublime-settings").get('high_visibility_enabled_by_default', False):
        HIGH_VISIBILITY = True

    if not 'running_bh_loop' in globals():
        global running_bh_loop
        running_bh_loop = True
        thread.start_new_thread(bh_loop, ())
        bh_debug("Starting Thread")
    else:
        bh_debug("Restarting Thread")
        with LOCK:
            BhThreadMgr.restart = True[/pre]
I'm not sure how Lock works... I can't help there... but if that blocks the main thread, then I'd avoid it. Setting a flag should really just work. I can vouch, btw, for the performance of the complex and handy BH.
In this scenario, you can see I haven't been too concerned about it because I am not using them; BH works well enough. It is just a way to make things thread safe. Two threads accessing the same data can sometimes cause problems (not so much for read-only data). If I was serious, I would just have to make sure I have thread safety where I need it. Keep in mind I don't claim to be a threading expert either. I was more just admitting that what I use isn't perfect, but it works well enough.
I think that you don't know exactly how threads in the threading module work. You should check the docs for that.
The run() method is only run once, when I invoke start(). I then do the processing there and only there so the task itself can only run once at a time - as long as I don't create another thread and run the poke method on it.
I do agree that resetting the timer once the task itself has finished might be useful at times, but this is dependent on the action and can be added later if necessary.
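One lightweight alternative to the long-lived thread, sketched with threading.Timer (hypothetical names): every poke cancels the pending timer and arms a fresh one, so the action fires only once the pokes have stopped for `delay` seconds — and "resetting the timer after the task finished" would just be another poke() at the end of the action.

```python
import threading
import time

class Deferred(object):
    def __init__(self, delay, action):
        self.delay = delay
        self.action = action
        self._timer = None
        self._lock = threading.Lock()

    def poke(self):
        # restart the countdown on every call
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self.delay, self.action)
            self._timer.start()

calls = []
d = Deferred(0.1, lambda: calls.append("fired"))
for _ in range(10):        # rapid pokes collapse into one call
    d.poke()
time.sleep(0.5)
print(calls)               # -> ['fired']
```

Note that threading.Timer spins up a short-lived thread per poke, which is fine for occasional events but wasteful for every-keystroke ones — presumably one reason BH keeps a single polling thread instead.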
The original question of the thread was "Best and most efficient way to perform a deferred action", and that also includes keeping track of whether things are still running. In your quote, I was referring to GitGutter, which slowed down the app considerably because of the problem I described in the previous post.
Thanks for the input, everyone. facelessuser's method is something I've seen a few plugins employ, and seems to be the best solution available.
I do think though that this should be something natively available as part of the API, because:
Might be something for the developers to consider. | https://forum.sublimetext.com/t/best-and-most-efficient-way-to-perform-a-deferred-action/13097/16 | CC-MAIN-2016-18 | refinedweb | 3,152 | 59.9 |
GIS Library - Get color rules. More...
#include <grass/gis.h>
Go to the source code of this file.
GIS Library - Get color rules.
(C) 2001-2008 by the GRASS Development Team
This program is free software under the GNU General Public License (>=v2). Read the file COPYING that comes with GRASS for details.
Definition in file color_rule_get.c.
Get both modular and fixed rules count.
Definition at line 26 of file color_rule_get.c.
Get color rule from both modular and fixed rules.
Rules are returned in the order as stored in the table (i.e. unexpected, high values first)
Definition at line 67 of file color_rule_get.c. | http://grass.osgeo.org/programming6/color__rule__get_8c.html | crawl-003 | refinedweb | 107 | 79.16 |
.
/* CPP program to print longest palindromic subsequence */ #include<bits/stdc++.h> using namespace std; /* Returns LCS X and Y */ string lcs(string &X, string &Y) { int m = X.length(); int n = Y.length(); int L[m+1][n+1]; /* Following steps build L[m+1][n+1] in bottom up fashion. Note that L[i][j] contains length of LCS of X[0..i-1] and Y[0..j-1] */ for (int i=0; i<=m; i++) { for (int j=0; j<=n; j++) { if (i == 0 || j == 0) L[i][j] = 0; else if (X[i-1] == Y[j-1]) L[i][j] = L[i-1][j-1] + 1; else L[i][j] = max(L[i-1][j], L[i][j-1]); } } // Following code is used to print LCS int index = L[m][n]; // Create a string length index+1 and // fill it with \0 string lcs(index+1, '\0'); // Start from the right-most-bottom-most // corner and one by one store characters // in lcs[] int i = m, j = n; while (i > 0 && j > 0) { // If current character in X[] and Y // are same, then current character // is part of LCS if (X[i-1] == Y[j-1]) { // Put current character in result lcs[index-1] = X[i-1]; i--; j--; // reduce values of i, j and index index--; } // If not same, then find the larger of // two and go in the direction of larger // value else if (L[i-1][j] > L[i][j-1]) i--; else j--; } return lcs; } // Returns longest palindromic subsequence // of str string longestPalSubseq(string &str) { // Find reverse of str string rev = str; reverse(rev.begin(), rev.end()); // Return LCS of str and its reverse return lcs(str, rev); } /* Driver program to test above function */ int main() { string str = "GEEKSFORGEEKS"; cout << longestPalSubseq(str); return 0; }
Output:
EEGEE
This article is contributed by K Palindromic Subsequence | DP-12
- Print Longest Palindromic Subsequence
- Longest Palindromic Substring | Set 1
- Printing Longest Common Subsequence
- Program for Bridge and Torch. | https://www.geeksforgeeks.org/print-longest-palindromic-subsequence/ | CC-MAIN-2018-34 | refinedweb | 332 | 55.92 |
Hi,
The Software Collections images on registry.centos.org were originally
available under the namespace 'sclo'.
For example : sclo/postgresql-96-centos7 is deprecated in favour of
centos/postgresql-95-centos7
Please make sure you update any usages of said images to reflect the new
name, as the old image names will be dropped from registry.centos.org soon.
I will send out another email, a couple of days before the said images are
dropped
I was wondering if it was possible to deploy openshift on arm64. We need
this for building containers on the CentOS Container Pipeline.
I am aware that there was some talk around rpms for the same, and it would
be helpful if someone could point us to any resources.
All images on the container pipeline are undergoing a rebuild based on
updated centos base images, which should finish in a few hours.
I will forward any failure messages to emails provided in Dockerfile
maintainer tags, if i can find them, and/or raise issues for the container
build failures.
In all other cases, it is highly recommended that you try out the newer
build of containers from Sclorg containers[1-18], to ensure the containers
maintain their functionality.
In all other cases, it is highly recommended that you try out the newer
build of containers from CentOS Dockerfiles[1], to ensure the containers
maintain their functionality.
I was considering updating notification emails for the containers in the
container pipeline to maintainers for the respective containers.
(Currently, I single-handedly receive notifications for most of the
containers, which can be a bit overwhelming - hundreds of emails at-least
per day, in this case)
In line with this, the notification email for centos-dockerfiles kubernetes
containers have been updated to include Jason Brooks.
That of course, leaves the remaining containers, and it would be helpful if
i can know which would be the notification emails that you, the
maintainers, would l
We currently have openshift origin containers being delivered through
the Container Pipeline.
However, with the release of openshift origin 1.3, we have to consider how
we are going to deliver containers for v1.2.1 and v1.3.
Currently, the dockerfiles basically do a yum install origin[1] and other
origin packages. This could possibly result in the latest origin rpms being
used even in 1.2.1 containers.
Hi all,
Here is the update in for the work done recently, for the CentOS Community
Container Pipeline.
- There was an issue where the entry point of the container was running,
instead of the build / test / delivery scripts specified by the user. We
solved this by overriding entry point in all the stages.
- Setup a basic framework for running test suits, such as scans against
containers.
Here is the update for work recently done for the CentOS Community
Container Pipeline.
- Docker builds in the CCCP-Service were taking a long time for building
each images. We tracked down the issue and got in conclusion of building
the images sequentially. This increase the performance as the other option
of creating docker lvm pool is already in place. | http://www.devheads.net/people/66349 | CC-MAIN-2018-30 | refinedweb | 520 | 52.19 |
15146/how-to-synchronize-sessions-using-amazon-web-services
I am Amazon Web Services (AWS) and i have multiple web servers and a load balancer. The problem with the web servers is, that the $_SESSION is unique for each one. I'm keeping some information about the user in the $_SESSION.
What is the proper way to synchronize this information? Is there any way to unite the place, where those sessions are being kept, or should I use MySQL to store this data?
I think what you are looking for is 'Sticky Sessions'. If that’s what you are looking for, then Amazon gives you two different options.
Load Balancer
And application based session stickiness
I hope this was informative.
Its very simple. You can just use ...READ MORE
In order to make system more efficient ...READ MORE
Hey, 3 ways you can do this:
To ...READ MORE
PCF is a commercial cloud platform (product) ...READ MORE
In a nutshell, the default termination policy ...READ MORE
The code would be something like this:
import ...READ MORE
It's not possible to send HTML emails ...READ MORE
That code won't upload the file - it's simply ...READ MORE
This article pretty much explains the entire ...READ MORE
You need to set the proper privileges ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/15146/how-to-synchronize-sessions-using-amazon-web-services | CC-MAIN-2021-49 | refinedweb | 240 | 76.93 |
Details
Description
For some purposes, such as code generation, it would be very desirable to output to multiple files from the same context. For example, to generate code for a C++ class, two files must be generated - a header and implementation file. Rendering could be driven by the same context in both cases, which would desribe the class name, method signatures, etc. Currently this must be done by running the engine twice - once with a header template and again with an implementation file template.
Furthermore, it would desirable to determine the output file names during rendering, based on a combination of template directives and data in the context. To use the code generation example, the header and implementation file names would be determined by the name of the class to be generated.
This is not possible with the current method signature of Directive:
public boolean render(InternalContextAdapter context, Writer writer, Node node)
An output directive would want to change the writer to a different instance of FileWriter, for instance. But since Java passes object references by value, the writer object cannot be replaced.
The proposal is to provide a method to install a new instance of Writer during rendering of a template. This would support both writing multiple outputs in one run of the engine as well as controlling the file names of the outputs.
Activity
- All
- Work Log
- History
- Activity
- Transitions
I don't think this needs to be solved within the engine. In the past I've solved this through a controller template, string interpolation, and a file writer tool. Example:
#macro( createCode $params )
- ... do something wise here
#end
...
#foreach ($file $list)
#set( $fileContents = "#createCode(...)" )
$FileTool.write($file, $fileContents)
#end
Nathan was faster than I...
Here is the tool (from 2001...).
Maybe you can close the issue if this solves your request.
Christoph and Nathan, thanks for the suggestions. I had already tried a workaround using a StringWriter to collect portions of a rendered template until I knew what file to write it to. So clearly there are ways to accomplish the result outside of Velocity.
However, I argue for the proposal on two grounds:
First, having an output directive (or at least the ability to create a custom one) would form a symmetry with input directives, such as #include and #parse.
Second, much of the discussion of template engines and similar rendering tools revolves around separating model from view. My example of generating C++ header and implemention files from a common context is conceptually consistent with the idea of multiple views of one model.
I care far more about YAGNI than symmetry, and i don't think most users need it. As for multiple views of one model, i don't see anything consistent about squishing multiple views into a single view template. Multiple templates for multiple views seems much more fitting.
And while a feature discussion with a use-case is nice, a single use case (w/o even a syntax proposal, much less a patch) is not all that compelling. I find it difficult to imagine any other usecase for this offhand, while i find it easy to imagine many other straightforward ways of handling the C++ header and implementation files.
To summerize Nathan and Christoph comments: outside the scope of Velocity. I agree.
I think this is out of scope for the Engine project, and perhaps Velocity in general. And i think you could support this now with a clever tool instead of Directives, and with no need to modify Velocity.
public class WriterTool{ public void switchToWriter(String name)... public void setFilename(String name)... public void whateverMagicYouCanThinkUp()... }
Then you give this tool the info it needs, drop that instance into your context and viola! You have all the control you want from within the template. | https://issues.apache.org/jira/browse/VELOCITY-787 | CC-MAIN-2016-36 | refinedweb | 631 | 62.88 |
#include <sys/sunddi.h>
typedef struct { union { uint64_t _dmac_ll; /* 64 bit DMA add. */ uint32_t _dmac_la[2]; /* 2 x 32 bit add. */ } _dmu; size_t dmac_size; /* DMA cookie size */ uint_t dmac_type; /* bus spec. type bits */ } ddi_dma_cookie_t;
You can access the DMA address through the #defines: dmac_address for 32-bit addresses and dmac_laddress for 64-bit addresses. These macros are defined as follows:
#define dmac_laddress _dmu._dmac_ll #ifdef _LONG_LONG_HTOL #define dmac_notused _dmu._dmac_la[0] #define dmac_address _dmu._dmac_la[1] #else #define dmac_address _dmu._dmac_la[0] #define dmac_notused _dmu._dmac_la[1] #endif
dmac_laddress specifies a 64-bit I/O address appropriate for programming the device's DMA engine. If a device has a 64-bit DMA address register a driver should use this field to program the DMA engine. dmac_address specifies a 32-bit I/O address. It should be used for devices that have a 32-bit DMA address register. The I/O address range that the device can address and other DMA attributes have to be specified in a ddi_dma_attr(9S) structure.
dmac_size describes the length of the transfer in bytes.
dmac_type contains bus-specific type bits, if appropriate. For example, a device on a PCI bus has PCI address modifier bits placed here.
Writing Device Drivers | https://man.omnios.org/man9s/ddi_dma_cookie | CC-MAIN-2022-05 | refinedweb | 205 | 61.22 |
When your application is based on Spring it makes a lot of sense to fire up
a Spring context within your integration tests and functional tests.
For a particular Scala-based project it was necessary to manage not only the
lifetime of the Spring context, but also the lifetime of an annotation-based REST library component called Jersey, which works together with Spring.
Each of these have their own setup and teardown actions. Some tests require just Spring, and others require both Jersey and Spring to be started (and stopped). It gets even more involved because there’s the choice of starting and stopping either of these options before and after the whole test suite, or before and after every test.
Yes, it seems like a lot of work, but the results are well worth it: it means we can test the behaviour of the Jersey REST API by setting expectations on Spring components just before invoking API calls.
It turns out that there’s an elegant way in which Scala’s traits can express these various startup / teardown behaviours through [stackable modifications](). Traits isolate cross-cutting concerns and further allow them to be declaratively mixed-in to test suites that `with` them as needed (Scala jargon, think `implements` in Java!). Using the approach we’re also explicit about how Jersey is started **after Spring is started**, remembering the order in which traits are mixed into a class does matter!
We didn’t start off with that approach though…
## Before…
So when I look at this problem, I see it as a number of orthogonal concerns:
– Before & After **Tests** vs. **Test Suite**
– **Spring** vs. **Spring+Jersey**
– **Start/Stop** Sequencing
Fortunately the first of these is already taken care of. ScalaTest authors have
taken care to provide the `BeforeAndAfter[Each]` and `BeforeAndAfterAll` traits,
an idiomatic way to specify whether the specified code should run before/after every
test or before/after all tests in the suite.
The code sure did make use of these, but in wholly the wrong way. Spring-related tests
were made to extend a custom subclass of `FunSpec`:
class SpringContextSpec extends FunSpec {
var ctx: ApplicationContext = _
def startSpring() {
ctx = new ClassPathXmlApplicationContext("classpath*:/applicationContext-test.xml")
}
def stopSpring() {
ctx.destroy()
}
// Interesting auto-wired bean definitions as Scala vals...
// Helper functions...
// Spring-related oddities...
}
This is somewhat undesirable, because in every test suite it is necessary to explicitly call `startSpring()`
and `stopSpring()`, like so:
class NotSoAwesomeTestSuite1 extends SpringContextSpec with BeforeAndAfter {
before {
startSpring()
}
after {
stopSpring()
}
// Here be dragons
}
The reason that we want to do this is because `startSpring()` and `stopSpring()` is useful for `beforeAll()` and `afterAll()` as well.
class NotSoAwesomeTestSuite2 extends SpringContextSpec with BeforeAndAfterAll
override def beforeAll() {
startSpring()
}
override def afterAll() {
stopSpring()
}
// Here be dragons
}
Both of these kinds of TestSuite do indeed occur. The boilerplate is ugly, but it is necessary under this design. The only way to hide the code’s hideousness is to introduce two new abstractions called `SpringContextSpecEach` and `SpringContextSpecAll` and have tests inherit from those accordingly.
And what if some tests needed Jersey? Say they extended a class called `JerseySpringContextSpec` defined like below:
class JerseySpringContextSpec extends SpringContextSpec {
startJersey()
stopJersey()
// ...
}
Then `JerseySpringContextSpec` too would require its own ‘Each’ and ‘All’ subclasses: `JerseySpringContextSpecEach`
`JerseySpringContextSpecAll` etc. In doing so we would be advocating an inflexible inheritance-based solution, exposing ourselves to a combinatorial explosion of noisy and confusing subclasses. It’s just plain nasty.
Stackable traits to the rescue.
## After…
Ideally we want all of that behaviour directly configurable in the test suite declaration like the below.
Square brackets and pipe (|) denote option and alternative respectively:
class IdealTestSuite extends FunSpec with BeforeAndAfter(Each|All) with Spring [with Jersey] with ... {
// Here be quite genuinely useful tests
}
… and have the code do exactly the right thing before and after as we’d expect. This declaration can also
ensure that Spring is started before Jersey.
That’s the theory anyway. Practise is often another wildly different beast. Though you’ll be pleased to know that my
solution for this was actually incredibly close to the ideal:
class RealTestSuite extends FunSpec with StartStopBeforeAndAfter(Each|All) with (Spring|Jersey) with ... {
// Here be quite genuinely useful tests
}
Not only is it more readable, it is much more declarative. It allows the
programmer to change the execution behaviour of the test suite by changing 4
characters!
The key here is a special trait that I haven’t shown you yet. It’s called
`StartStop` and its job is to represent the concept of ‘something that can be
started and stopped’.
trait StartStop {
def start() {}
def stop() {}
}
The implementations of `start()` and `stop()` do nothing at the moment but very soon they will be given proper meaning. Now we’d like to say connect this to the two sorts of before/after:
trait StartStopBeforeAndAfterEach extends Suite with BeforeAndAfterEach with StartStop {
override def beforeEach() { start() }
override def afterEach() { stop() }
}
and
trait StartStopBeforeAndAfterAll extends Suite with BeforeAndAfterAll with StartStop {
override def beforeAll() { start() }
override def afterAll() { stop() }
}
Great; so `start()` and `stop()` are now invoked before/after each/all as we please. They still do nothing at the moment, so now is a good time to imbue meaning upon the `start()` and `stop()` methods.
Start and stop Spring:
trait Spring extends StartStop {
var ctx: ApplicationContext =
def springContext = new ClassPathXmlApplicationContext("classpath*:/applicationContext-test.xml")
abstract override def start() {
super.start()
ctx = springContext
}
abstract override def stop() {
ctx.destroy()
super.stop()
}
// Spring-specific fields and methods
}
… and start and stop Jersey (necessarily after Spring):
trait Jersey extends StartStop with Spring {
var jerseyTest: JerseyTest = null
abstract override def start() {
super.start()
jerseyTest = new JerseyTest(new LowLevelAppDescriptor.Builder("net.lshift.very.important.project").build()) {
override def getTestContainerFactory = new JerseyInMemoryContainerFactory(ctx)
}
jerseyTest.setUp()
}
abstract override def stop() {
jerseyTest.tearDown()
super.stop()
}
// Jersey-specific fields and methods
}
Et voilà . What clear-cut code!
What we’ve done here is separated out the concerns just by using Scala traits: remarkably there are no annotations to be seen. Had this been Java we’d have to use a language extension like [AspectJ]().
The code also describes that Jersey initialisation necessarily follows Spring initialisation thanks to the way trait linearisation works.
Much of the magic comes from a combination of the `abstract override def (start|stop)` declaration and calls to `super.(start|stop)()`.
When you mix-in a trait that has `abstract override` methods in it, you’re expecting an existing implementation of that method already on your class stack. This is why is important to provide a no-op implementation of `start()` and `stop()` methods in `StartStop` trait: so that they can be overridden!
The `super.(start|stop)()` calls form form a chain of invocations, just like reading along the class declaration line from right to left. This ensures everything is started and stopped in the right order. No more horrible `startSpring()` and `stopSpring()` methods to be seen.
For more details on delinearization and how stackable modifications actually work
under the covers, I recommend the chapter on [Programming in
Scala]().
#### Disclaimer
I don’t particularly recommend using Spring with Scala by the way.
Spring being Spring, there is the official ‘Spring way’ of writing and
structuring tests, but then we stuck to the annotation-free real-world.
If you find that the above code can work better with annotation-based Spring configuration, please do that instead of the way we manage the `ClassPathXmlApplicationContext` explicitly.
And anyway, if all you’re using Spring for is just DI, consider tools native to Scala first, like [Subcut]().
Hope this helps! | https://tech.labs.oliverwyman.com/blog/2012/11/24/stackable-traits-for-scalatest-test-suites/ | CC-MAIN-2020-05 | refinedweb | 1,252 | 53.1 |
Red Hat Bugzilla – Bug 855483
allow2audit doesn't parse boot date correctly in all locales
Last modified: 2013-05-31 22:27:17 EDT
Description of problem:
allow2audit -b fails with "Error parsing start date" for non-north american locales.
Version-Release number of selected component (if applicable):
policycoreutils-python-2.1.11-18.fc17.x86_64
How reproducible:
Always.
Steps to Reproduce:
1. Arrange for boot time to be after the 12th day of the month
2. LC_ALL=en_AU.UTF-8 audit2allow -b
3.
Actual results:
Error parsing start date (08/28/2012)
Expected results:
A list of suggested "allow" rules
Additional info:
LC_ALL=en_US.UTF-8 audit2allow -b
works.
It would seem that the boot date is always in mm/dd/yyyy format regardless of locale, but it is being parsed in the current locale which expects dd/mm/yyyy.
Steve does ausearch handle this properly?
Ausearch does this in main() :
/* Check params and build regexpr */
setlocale (LC_ALL, "");
According to the man page:
If locale is "", each part of the locale that should be modified is set
according to the environment variables.
So, ausearch _should_ be OK. Does policycoreutils use ausearch or libauparse?
I'm not sure what would be the equivalent as there is no "since boot" option for ausearch. I can set -te or -ts putting the date in the en_AU locale format and it works properly and it all works properly.
def get_audit_boot_msgs():
"""Obtain all of the avc and policy load messages from the audit
log. This function uses ausearch and requires that the current
process have sufficient rights to run ausearch.
Returns:
string contain all of the audit messages returned by ausearch.
"""
import subprocess
import time
fd=open("/proc/uptime", "r")
off=float(fd.read().split()[0])
fd.close
s = time.localtime(time.time() - off)
date = time.strftime("%D/%Y", s).split("/")
bootdate="%s/%s/%s" % (date[0], date[1], date[3])
boottime = time.strftime("%X", s)
output = subprocess.Popen(["/sbin/ausearch", "-m", "AVC,USER_AVC,MAC_POLICY_LOAD,DAEMON_START,SELINUX_ERR", "-ts", bootdate, boottime],
stdout=subprocess.PIPE).communicate()[0]
return output
This is the python code that we are calling.
So how about:
bootdate = time.strftime("%x", s)
instead of
date = time.strftime("%D/%Y", s).split("/")
bootdate="%s/%s/%s" % (date[0], date[1], date[3])
At least in my locale, the year is in 2 digit format, so this would fail
if the boot date it before a century boundary boundary, but otherwise should work.
Fixed in
policycoreutils-2.1.12-4.fc17
policycoreutils-2.1.12-4.fc17 has been submitted as an update for Fedora 17.
Package policycoreutils-2.1.12-4.fc17:
* should fix your issue,
* was pushed to the Fedora 17 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing policycoreutils-2.1.12-4.fc17'
as soon as you are able to.
Please go to the following url:
then log in and leave karma (feedback).
Package policycoreutils-2.1.12-5.fc17:
* should fix your issue,
* was pushed to the Fedora 17 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing policycoreutils-2.1.12-5.fc17'
as soon as you are able to.
Please go to the following url:
then log in and leave karma (feedback).
policycoreutils-2.1.12-5.fc17 breaks on my system. Running "audit2allow -b" results in:
Error - year is 12
Tracked the problem down to the date string generated with above fix... which resulted in the following ausearch call:
/sbin/ausearch -m AVC,USER_AVC,MAC_POLICY_LOAD,DAEMON_START,SELINUX_ERR -ts 12/06/12 14:45:43
My LANG environment (set on the kernel commandline and in /etc/locale.conf) is:
LANG=en_US.UTF-8
Modifying the LANG to something other than UTF-8 works around the issue, eg:
LANG=en_US.en_AU audit2allow -b
... works.
We have fixed this in the F18 code base.
Any chance of getting this backported? Most people probably won't discover the LANG workaround in F17...
Just check if adding these lines to /usr/bin/audit2allow fixes the problem.
diff -u audit2allow /usr/bin/audit2allow
--- audit2allow 2012-09-25 16:17:37.000000000 -0400
+++ /usr/bin/audit2allow 2012-12-10 11:10:12.000000000 -0500
@@ -29,6 +29,8 @@
import sepolgen.module as module
from sepolgen.sepolgeni18n import _
import selinux.audit2why as audit2why
+import locale
+locale.setlocale(locale.LC_ALL, '')
class AuditToPolicy:
VERSION = "%prog .1"
[Exit 1]
Yep, tried the patch and both audit2allow and audit2why work as expected.
Fixed in policycoreutils-2.1.13-27.1.fc17
policycoreutils-2.1.13-27.1.fc17 has been submitted as an update for Fedora 17.
Updated to 2.1.13-27.1 - audit2allow works great :)
Added karma on fedoraproject.
policycoreutils-2.1.12-5.fc17 has been pushed to the Fedora 17 stable repository. If problems still persist, please make note of it in this bug report.
policycoreutils-2.1.13-27.2.fc17 has been submitted as an update for Fedora 17.
policycoreutils-2.1.13-27.3.fc17 has been submitted as an update for Fedora 17.
policycoreutils-2.1.13-27.3.fc17 has been pushed to the Fedora 17 stable repository. If problems still persist, please make note of it in this bug report. | https://bugzilla.redhat.com/show_bug.cgi?id=855483 | CC-MAIN-2018-17 | refinedweb | 890 | 52.36 |
import_stmt: "import" module ["as" name] ("," module ["as" name] )* | "from" module "import" identifier ["as" name] ("," identifier ["as" name] )* | "from" module "import" "*" module: (identifier ".")* identifier
Import.
The system maintains a table of modules that have been).. To avoid confusion, you cannot import modules
with dotted names as a different local name. So
import
module as m is legal, but
import module.submod as s is not.
The latter should be written as
from module import submod as s;
see below. names defined in the module are bound, except those beginning with an underscore ("_").
Names bound by import statements may not occur in global statements in the same scope.
The from form with "*" may only occur in a module scope.
Hierarchical module names: when the module names contains one or more dots, the module search
path is carried out differently. The sequence of identifiers up to
the last dot is used to find a ``package'' ; the final
identifier is then searched inside the package. A package is
generally a subdirectory of a directory on
sys.path that has a
file __init__.py. [XXX Can't be bothered to spell this out right now; see the URL for more details, also
about how the module search works from inside a package.]
[XXX Also should mention __import__().]
See About this document... for information on suggesting changes.See About this document... for information on suggesting changes. | http://docs.python.org/release/2.1.1/ref/import.html | crawl-003 | refinedweb | 231 | 67.86 |
Maurício wrote: > Hi, > > After I have spawned a thread with > 'forkIO', how can I check if that > thread work has finished already? > Or wait for it? > > Thanks, > Maurício The best way to do this is using Control.Exception.finally: > myFork :: IO () -> IO (ThreadId,MVar ()) > myFork todo = > m <- newEmptyMVar > tid <- forkIO (finally todo (tryPutMVar m ())) > return (tid,m) No other part of the program should write to the MVar except the finally clause. The rest of the program can check (isEmptyMVar m) as a non-blocking way to see if the thread is still running. Or use (swapMVar m ()) as a way to block until the MVar has been filled as a way of blocking until the thread is finished. These techniques are needed because forkIO is a very lightweight threading mechanism. Adding precisely the features you need makes for good performance control, as seen in the great computer language shootout benchmarks. Cheers, Chris | http://www.haskell.org/pipermail/haskell-cafe/2007-November/035337.html | CC-MAIN-2014-42 | refinedweb | 153 | 69.62 |
#include <World.hh>
#include <World.hh>
List of all members.
The world class keeps a list of all models, handles loading and saving, object dynamics and collision detection for contact joints.
[virtual]
Load the world from a file.
Set the pose of a model (and its attached children).
Reset the world.
Teleport the models to their initial positions, as specified in the world file. This does not change the similator clock, or any other state variables.
Save the world.
This is used to mainly save changes to model poses.
Get the elapsed simulator time (seconds).
Get accumulated pause time (seconds).
Note that this is the paused simulation time; i.e., simTime + pauseTime = simSpeed * realTime (assuming a fast processor).
Get the elapsed real time (elapsed time).
Get the wall clock time (seconds).
Get the time relative to 12am (seconds).
Get the number of models.
Get the list of models.
[inline]
[friend]
Simulation speed (e.g. speed 2 yields twice real time).
UTC time offset (seconds since epoch). Used for GPS times.
UTM zone. This is an integer offset; e.g., UTM zone 11 is converted to x = +11.
UTM offset (UTM coordinate that maps to zero in global cs). | http://playerstage.sourceforge.net/doc/Gazebo-manual-0.5-html/classWorld.html | CC-MAIN-2016-18 | refinedweb | 198 | 80.88 |
Results 1 to 2 of 2
- Join Date
- Mar 2005
- 1
- Thanks
- 0
- Thanked 0 Times in 0 Posts
send current URL through window.open
hello all,
my situation:
on my main page i have a link, using window.open to popup a small page in which i have several forms for emailing the "current page" to a friend. The only problem is that the script i am using on that popup uses window.location for determining the current page. Obviously the "current page" seen by window.location is going to be the URL for the popup, not the parent page, which i wanted. can i set a variable on the main page to that URL (using window.location), and then pass it to the new window to be used by my scripts there? how? any other ideas?
here is the generic script for emailing the "current" page to a friend, which is in my popup window generated from window.open:
Code:
function initMail(form) { text = "Check out this page: " + window.location; form.message.value = "Hi " + form.sendto.value + " (" + form.to.value + "):\n\n" + text + "\n\nYour Friend,\n" + form.sendername.value + "(" + form.senderemail.value + ")"; return (form.to.value != ""); }
thank you!
Last edited by snospunjah; 03-16-2005 at 10:07 PM.
- Join Date
- Jun 2002
- Location
- Philippines
- 11,075
- Thanks
- 0
- Thanked 256 Times in 252 Posts
You can access the parent window by using the opener object.
opener.location.href gives you the url of the opener window.Glenn
____________________________________
My Blog
Tower of Hanoi Android app (FREE!)
Tower of Hanoi Leaderboard
Samegame Facebook App
vBulletin Plugins | http://www.codingforums.com/javascript-programming/54593-send-current-url-through-window-open.html | CC-MAIN-2015-48 | refinedweb | 269 | 76.22 |
so I have code that plays an mp3 from my sounds file….I want to modify the function (code below) so that it randomly plays one of the mp3 files in the sounds folder instead of a specific mp3 file (as it is set up now)... i ve googled this on the net but to no avail i am really stuck and any help would be appreciated immensely! Thank you in advance!!!
private var soundClip:Sound;
private var soundClipChannel:SoundChannel;
protected function play(clip:String):void
{
if(soundClipChannel !=null) {
soundClipChannel.stop();
}
soundClip = new Sound(new URLRequest('sounds/' + clip + '.mp3'));
soundClipChannel = soundClip.play();
}
Also how do i set up linkage? I've read online that my .mp3 files have to have linkage names set up…how do i do this?
I'm new to flash builder and have tried a few approaches needless to say none of them have worked… : (
you working on flex / air? if it is flex, and your songs are on server, then I would use Javaa/ any other server scripting language to get me the list of names of the files in folder and I would use some Math.random to get a random song from the list.
the mp3 files are on my hard drive, were not using a server and it's a flash application not an air does that help? Thank you for your response!
Below is the code i have come up with through hours of googling etc but sometimes it wont play a sound any thoughts on this? All of my urls are correct, the method works, its just sometimes i ll click the play button and no sound will play and then i click it again and the sound will play. Should i use ceil instead of floor? Thank you in advance!
import flash.events.*;
import flash.media.*;
import flash.net.URLRequest;
import mx.controls.Label;
private var randSound:Sound;
private var channel:SoundChannel;
private var my_sounds:Array = ["sounds/hisbuttocks.mp3", "sounds/hisfoot.mp3", "sounds/hiship.mp3", "sounds/hisknee", "sounds/hisleg", "sounds/hisshin"];
//random sound function:
private function playRand():void
{
var randVal:int = Math.floor(Math.random()*my_sounds.length);
var urlname:String = my_sounds[randVal];
var request:URLRequest = new URLRequest(urlname);
randSound = new Sound(request);
channel = randSound.play();
}
"sounds/hisknee", "sounds/hisleg", "sounds/hisshin"
Just add ".mp3" to these. Yes?
Thanks, I figured it was something minor i guess it really is true that tired eyes don't see bugs! Thank you!!! | https://forums.adobe.com/thread/753269 | CC-MAIN-2018-30 | refinedweb | 410 | 74.39 |
by Davey Waterson, JavaScript Architect, Aptana
There's no debate that JavaScript is the most widely used language client-side on the Web. Regardless of how the back-ends of your web applications are implemented, client side you're using JavaScript for everything from same form validations to full Ajax applications. Now imagine being able to develop web apps using JavaScript server-side too. Wouldn't being able to use the same language on both client and server simplify life for us developers?
Full Circle
If you were part of the early web's big bang in the mid-1990s you might recall that being able to use JavaScript both client-side and server-side was core to Netscape's original vision for Web apps. Netscape Livewire was the first server-side JavaScript engine. Now more than 10 years later, with Netscape's technology group having been transformed into The Mozilla Foundation, server-side JavaScript is seeing a strong resurgence because of the simplicity it provides to Web developers reinvigorated by the fact that today's CPUs can process JavaScript more than 10x faster than the CPUs of the mid-90's ever could. And just like before, it's the Mozilla team that's at the core of this latest evolution of the Web. In fact, the next JavaScript engine from Mozilla, TraceMonkey, is poised to boost JavaScript performance by factors of 20 to 40 times according to Brendan Eich, Mozilla CTO and the creator of JavaScript. In recent developments we also see the advent of other performant engines for JavaScript such as v8 from Google and squirrelfish for webKit, these engines are raising the bar on performance and help to maintain a competitive environment that will hopefully direct competitive focus on the performance aspect of all the major JavaScript implementations.
There are currently two main JavaScript engines used server-side and both are from the minds at Mozilla: Mozilla Rhino and Mozilla SpiderMonkey. Rhino is a JavaScript interpreter written in Java that can also bridge JavaScript to Java server-side. Steve Yegge of Google has been doing interesting work with this engine. SpiderMonkey on the other hand is the JavaScript engine (written in C) in the highly popular Mozilla Firefox browser. SpiderMonkey is what will evolve to become TraceMonkey. In addition, the Jaxer “Ajax server” (a project I work on at Aptana) is an example of SSJS that uses not only SpiderMonkey, but also embeds the entire Firefox browser engine in the application server such that you can do server-side DOM manipulation and other Ajaxy things server-side that Rhino was not built to do.
Let's take a look at various SSJS implementations and some examples of putting them to use.
Server-Side JavaScript (SSJS) via Embedded JavaScript Engines
The extension of JavaScript to the server is made possible via embedded JavaScript engines. The two most well established engines are SpiderMonkey and Rhino, both currently maintained by the Mozilla Foundation. SpiderMonkey is the code name for the first ever JavaScript engine, an open source C implementation which can be found embedded in leading software products such as Mozilla Firefox, Adobe Acrobat, and Aptana Jaxer. Rhino is a Java implementation of JavaScript which is commonly embedded in Java applications to expose scripting capability. The Helma web application framework is an example of Rhino in use. The Dojo Toolkit also has capabilities that run in Rhino. Let’s take a closer look at Rhino and its importance to the JavaScript developer.
Rhino offers a unique opportunity for the JavaScript developer to tap into the power of Java classes using JavaScript. Armed with some basic Java knowledge, you can extend JavaScript to include some of the most desired capability such as database access, remote web requests, and XML processing. We’ll start by taking a look at querying a SQL database.
This first example we’ll demonstrate is querying a mySQL database for some employee contact information. It is assumed that you have already downloaded, extracted, and consumed the necessary documentation to get up and running with some basic Rhino scripts. If you don’t already have it, you’ll need to also download the JDBC driver for mySQL, extract the class files, and include the path in your CLASSPATH environment variable. The code in Listing 1 is a sample JavaScript script which incorporates Java classes to handle the database query.
Listing 1 - Querying a mySQL database from Rhino
// import the java sql packages importPackage( java.sql ); // load the mySQL driver java.lang.Class.forName( "com.mysql.jdbc.Driver" ); // create connection to the database var conn = DriverManager.getConnection( "jdbc:mysql://localhost/rhino", "uRhino", "pRhino" ); // create a statement handle var stmt = conn.createStatement(); // get a resultset var rs = stmt.executeQuery( "select * from employee" ); // get the metadata from the resultset var meta = rs.getMetaData(); // loop over the records, dump out column names and values while( rs.next() ) { for( var i = 1; i <= meta.getColumnCount(); i++ ) { print( meta.getColumnName( i ) + ": " + rs.getObject( i ) + "\n" ); } print( "----------\n" ); } // cleanup rs.close(); stmt.close(); conn.close();
This code starts off by using a Rhino function named importPackage which is just like using the import statement in Java. Here, we’re including all the classes in the java.sql namespace in the script. The appropriate database driver for mySQL is loaded and the connection string to a database named rhino on my local machine is configured using the user account uRhino with the password of pRhino. The SQL statement is prepared, executed, and printed with the help of the metadata obtained from the resultset. Sample output is shown in Listing 2.
id: 1 first_name: Sammy last_name: Hamm department: IT title: Network Administrator ---------- id: 2 first_name: Nigel last_name: Bitters department: Finance title: Accounting Manager ...
It’s clear to see that with just a few lines of code, we can easily take advantage of SQL data stores in JavaScript. The sample script in Listing 1 could be factored out to it’s own function to retrieve the set of employee information, or abstracted further into a more generic data handler class. Next we’ll take a look at another powerful feature in Rhino, E4X processing.
E4X (ECMAScript for XML) is an extension of JavaScript which provides direct support for XML, greatly simplifying the process of consuming XML via JavaScript. Rhino’s support of this important standard eliminates the pain of using DOM or SAX based parsers in Java. Listing 3 below details a script used to process an RSS feed from the Mozilla website.
importPackage( java.net ); // connect to the remote resource var u = new URL( "" ); var c = u.openConnection(); c.connect(); // read in the raw data var s = new java.io.InputStreamReader( c.getInputStream() ); var b = new java.io.BufferedReader( s ); var l, str = ""; while( ( l = b.readLine() ) != null ) { // skip if( l != "" ) { str = str + l + "\n"; } } // define the namespaces, first the default, // then additional namespaces default xml namespace = ""; var dc = new Namespace( "" ); var rdf = new Namespace( "" ); // use e4x to process the feed var x = new XML( str ); for each( var i in x..item ) { print( "Title: " + i.title + "\n" ); print( "About: " + i.@rdf::about + "\n" ); print( "Link: " + i.link + "\n" ); print( "Date: " + i.dc::date + "\n" ); }
The first half of this script is standard Java code used to retrieve the feed data. After the asset is retrieved and stored in a string, the proper namespaces are defined for this particular resource. The default namespace is defined along with two others, in particular dc and rdf. Defining these namespaces becomes important if we want to be able to access any data elements in the feed defined in these namespaces.
The act of creating an E4X object is quite simple, in the case of Listing 3, we do this through the line
var x = new XML( str );
From here, the XML is accessible via dot notation. Notice during the processing of the item element where we use the rdf and dc namespace to access the about attribute of the item element, and the date element respectively. The syntax for accessing E4X objects is actually quite natural and certainly easier than most methods.
JavaScript engines such as Rhino and SpiderMonkey on their own have proven to be a useful and powerful tool to the developer. However, taking advantage of frameworks such as Aptana Jaxer (SpiderMonkey) and Helma (Rhino) can reveal even greater rewards as a great deal of work has already been done for you. All you need to do is implement the JavaScript. Let’s take a closer look on the SpiderMonkey side with Jaxer.
As stated earlier, Aptana Jaxer is built using the Mozilla Browser Engine engine that powers Mozilla Firefox, which includes SpiderMonkey as its JavaScript interpreter, but lots more features beyond SSJS alone such as DOM, DB, File IO, CSS, server sessions, E4X, etc...] This is a great advantage to the developer as it presents a consistent server-side and client-side development environment for both browser and server contexts that is centered on open source and Web standards. Also since you know which engine you're targeting, you're free to confidently use advanced built in controls such as XML processing via E4X, XPath, and XSLT. Let's take a tour through some of the top features of Jaxer through code examples.
Jaxer – Server-Side Ajax via the Firefox Browser Engine on the server
Although server-side JavaScript is not a new concept, Jaxer implements it in a unique fashion. By taking advantage of current ScriptMonkey and Ajax capabilities not to mention the loads of other features packed into the Mozilla Firefox Browser Engine, Aptana Jaxer fuses the client-side with the server-side, creating a unified “same language” development platform. Where once you would need to master not only JavaScript for the client-side but also some flavor of a server-side Web language, Jaxer puts all the rich capability of a server-side language into JavaScript. This accounts for added simplicity and promotes rapid development methodologies.
Managing Context
To take advantage of all the operating contexts of Jaxer, you'll need to specify how your scripts should behave using the runat attribute of the script tag (or by setting a runat property on a JavaScript function object). The common values for this attribute are explained below, though there are more than just these.
runat=”client” – Code contained in this block is restricted to execution client-side, which is also what happens if no runat attribute is specified.
runat=”server” – Code contained in this block is restricted to execution server-side.
runat=”both” – Code may be invoked on the client or server side.
runat=”server-proxy” – Allows for the exposure of server-side JavaScript functions to the browser. Jaxer manages Ajax calls behind the scenes to accomplish this.
While most of the runat attribute values above are self explanatory, the server-proxy concept is best explained through example. Consider the code in Listing 4.
Listing 4 - Demonstration of server and client context
<script runat="server"> function exposed() { return "exposed to the browser"; } function notExposed() { return "can't see me!"; } </script> <script runat="client"> alert( exposed() ); alert( notExposed() ); </script>
There are two blocks of scripts defined, one with runat set to server and the other set to client. This creates a strict separation of the two environments. As it stands, the code in the client block would throw an error since the server-side JavaScript is not within scope. If you intend to allow the execution of server-side JavaScript from the client-side, you can correct the code in listing 1 by doing one of three things.
Change the runat attribute value of server to server-proxy.
Add a statement inside the server-side script block to expose a particular server-side function to the browser. This statement is of the form <functionName>.proxy = true.
Ensure the server-side function name is contained within the Jaxer.proxies array.
Generally it's a best practice to use either the second or third strategy since in this case you are exposing only what's needed to the client. See Listing 5 for an update to Listing 1 to reflect this.
Listing 5 - Demonstration of server-proxy
<script runat="server"> function notExposed() { // runs on the server, hidden from the browser return "can't see me!"; } function exposed() { // runs on the server, callable from the browser return "exposed to the browser"; } exposed.proxy = true; // tell Jaxer this function is ok to be called from the browser </script> <script runat="client" type="text/javascript"> alert( exposed() ); //works like a charm alert( notExposed() ); //produces an object not found error since it server-side only </script>
Understanding the concept of operating context in Jaxer is central to moving forward with the example application which is what we'll examine next.
Jaxer – Example Application
The example application consists of a simple form used to accept comments from public users and was built to cover some of the most useful Jaxer features. The example application is running under Jaxer build 0.9.7.2472 and also incorporates the ExtJS version 2.1 JavaScript framework for user interface controls. For more information how to obtain and install these products, refer to the resources section.
Single Page Applications
The first feature the application demonstrates is the concept of a single page application. Since Jaxer allows you to define which code is reserved for the server-side, which code stays on the client-side, and which code is shared between the two, it's very possible to produce a single page application which could also include any number of external assets such as a popular third party framework. Listing 6 shows the beginning of our application which takes advantage of some of these concepts.
Listing 6 - Creating the User Interface
<link href="/jaxer_examples/js/ext-2.1/resources/css/ext-all.css" type="text/css" rel="stylesheet"/> <script src="/jaxer_examples/js/ext-2.1/adapter/ext/ext-base.js"/> <script src="/jaxer_examples/js/ext-2.1/ext-all.js"/> <link href="/jaxer_examples/css/main.css" type="text/css" rel="stylesheet"/> <script runat="both" src="/jaxer_examples/js/validateComments.js"/> <script> var txt_name; var txt_email; var txt_message; var btn_comments; var form_comments; Ext.onReady( function() { // create the name text field txt_name = new Ext.form.TextField({ name: "name", fieldLabel: "Name", width: 200 }); // create the e-mail text field txt_email = new Ext.form.TextField({ name: "email", fieldLabel: "E-mail", width: 200 }); // create the message text field txt_message = new Ext.form.TextArea({ name: "message", fieldLabel: "Message", width: 200 }); // create a button used to send the form details. btn_comments = new Ext.Button({ text: "Submit", fieldLabel: "", handler: formHandler }); // create the form panel, attach the inputs form_comments = new Ext.form.FormPanel({ labelAlign: "right", width: 400, title: "Comments", items: [ txt_name, txt_email, txt_message, btn_comments ], renderTo: "form-comments" }); }); </script>
The code in Listing 6 starts by hooking in the Ext JS library which is used to produce the UI elements of the form. Next a script named validateComments.js is hooked in with the runat attribute set to both. The function defined within this file checks the form to make sure it is complete, returning a boolean. The both context it operates within is a very powerful concept as we are able to use this code to check values entered in the form on both the client and server side. No custom validation libraries for each environment makes for rapid development with less risk for error.
Towards the bottom of the code are two div tags with ids form-comments and out-logger. The form-comments div is used to render the form and the out-logger will be used as a log display as each comment is added. The completed user interface is shown in Figure 1.
Figure 1 – The user interface
Server-Side DOM Processing
Another powerful concept of Jaxer is the ability to access the DOM server-side with JavaScript. This gives you added control to interrogate and manage the document structure before it's pushed to the client. Listing 7 contains the code which demonstrates this concept.
Listing 7 - Manipulating the DOM server-side
<script runat="server"> window.onserverload = function() { document.getElementById( "out-logger" ).innerHTML = Jaxer.File.read( "dump.txt" ); } </script>
The code in listing 7 is executed server-side and takes advantage of the onserverload event which ensures that we have a complete DOM before trying to access it. This code is simple in that it updates the innerHTML of the logging div with the contents of a text file. The contents of this file are obtained through the file I/O capabilities exposed by Jaxer which is what we'll examine next.
File I/O
Jaxer ships with a comprehensive filesystem I/O capability that is very useful. There are numerous methods in this namespace which allow for a simple, yet powerful approach to file management. The example application uses this capability to log comments submitted by the form to a plain text file named dump.txt. Listing 8 shows the two functions used to handle the comment submission process.
Listing 8 - Submitting Comment Data
<script> function formHandler() { // get the form values var name = txt_name.getValue(); var email = txt_email.getValue(); var message = txt_message.getValue(); // if the form passes validation client-side, submit results to the processor if( validateComments( name, email, message ) ) { formProcessor( name, email, message ); // update the logger with the most recent entry document.getElementById( "out-logger" ).innerHTML += "name: " + name + "<br/>email: " + email + "<br/>message: " + message + "<br/><br/>"; } else { Ext.Msg.alert( "Error", "Please enter the required fields" ); } } </script> <!-- processes the form contents --> ); } formProcessor.proxy = true; </script>
The first function, formHandler, performs some first-level validation using the validateComments function. If it passes validation, the formProcessor server-side function is invoked which does the same validation check using the shared validateComments function. Finally, the comment information is assembled into a string and appended to the dump file.
Database Access
For more robust data management, Jaxer comes with the ability to manage data in either a SQLite database which is included in Jaxer, or a mySQL database, and is extensible to others. Much like the file I/O namespace, database access in Jaxer can be as simple or complicated as you want or need it to be. For debugging, Jaxer will also send database errors to the client if you want.
Upgrading the example application to include database access requires just a few lines of code added to the formProcessor method. Listing 9 demonstrates these changes.
Listing 9 - Jaxer database access
); // add support for management via database Jaxer.DB.execute( "create table if not exists comments ( id integer primary key auto_increment, " + "name varchar(50) not null, " + "email varchar(255) not null, " + "message varchar(300) not null )" ); Jaxer.DB.execute( "insert into comments ( name, email, message ) values ( ?, ?, ? )", [ name, email, message ] ); } formProcessor.proxy = true; </script>
The first statement which uses the Jaxer.DB namespace creates a new database table in the default SQLite database if it doesn't already exist. You can see the field names and data types which make sense for the type of data that's been added. The second statement is just a simple insert statement using the data obtained from the form. Placeholders in the form of '?' are used to match up the argument values with their proper position in the insert statement.
XML Processing
The final feature we'll explore is XML processing with a particular emphasis first on the capability to create a simple XML service on the server-side, then use the E4X capabilities to processes it.
Listing 10 demonstrates the configuration of a simple service we're able to advertise through Jaxer using the data from the database we captured from prior form submissions.
Listing 10 - Creating a simple XML service
<script runat="server"> var rs_comments = Jaxer.DB.execute("SELECT * FROM Comments"); var doc = document.implementation.createDocument('', 'comments', null); var root = doc.documentElement; rs_comments.rows.forEach(function(row, index) { if (true || index < 10) { var node = doc.createElement('comment'); node.setAttribute('id', row.id); node.setAttribute('name', row.name); node.setAttribute('message', row.message); root.appendChild(node); } }); Jaxer.response.exit(200, doc); </script>
The data from the database is represented in XML format as follows:
Now that we've created this simple XML feed, we can now also consume it using the code from Listing 11.
Listing 11 - Consuming the XML feed
<script runat="server"> var r = Jaxer.Web.get("service.html", {as: 'e4x'}); for (var i=0, len = r.comment.length(); i<len; i++) { var comment = r.comment[i]; document.write('id: ' + comment.@id + '<br/>'); document.write('name: ' + comment.@name + '<br/>'); document.write('message: ' + comment.@message + '<br/>'); document.write('<hr/>'); } </script>
The code in Listing 11 is very simple. First a call to get the XML feed is made using the get method from the Jaxer.Web namespace, using the option that requests the response as a new E4X object. The comment nodes are looped over with data output using the simplified E4X syntax.
Out of all the features we looked at, this one has to be my favorite. To be able to use Jaxer to essentially create an XML based RESTful service in JavaScript is useful and shifts a great deal of power to the JavaScript developer.
Conclusion
JavaScript can be successfully used for full application development on both the client and server, fulfilling Netscape's original vision for a single, unified language for the Web that makes apps easier to develop and maintain. Advances such as Ajax and running Ajax on the server side with the Jaxer server boosted by today's faster JavaScript engines and radically faster ones like TraceMonkey to come from Mozilla, sets the stage for significant use of server-side JavaScript now.
Resources
- ServerJS discussion on proposed Common API for SSJS
- SSJS (Server Side JavaScript) at Wikipedia.com
- Aptana Jaxer Documentation
- Ext JS
[Vista] What does Vista do when it Sleeps?
I have to say the new "power button" which puts Vista into the new "Sleep" mode seems nice. My 2 Ghz laptop (which doesn't have much on it yet I must admit) goes to sleep in 1 second and starts up in pretty much the same time. Superfast - I like it.
As I understand it, by default a laptop is put into sleep mode when the power button is pressed, then goes into hibernate (saves memory to disk) after a few hours or so, to save power. When resuming from hibernate it takes a bit longer for the box to get going, but that's understandable. It's possible to configure what happens when you press the power button or close the lid, also depending on if you're on battery or plugged in.
You can read more of what Vista does when it's in sleep mode on the Microsoft Vista Performance pages.
[Ajax] GWT - Google Web Toolkit
[Ajax] Check Out the Script# Prototype
This should interest all of you who are into AJAX, Atlas and that kind of stuff on the ASP.NET platform - Nikhil (who is an architect on the Web Platform and Tools team at Microsoft) releases a prototype of a C# to Javascript/Ajax compiler and looks for feedback:
Script# brings the C# developer experience (programming and tooling) to Javascript/Ajax world. This post shares a prototype project for enabling script authoring via C#...
Check it out - Nikhil also got a video to explain how it works.
Stylish Quotes in Blog Posts
I found a few CSS tips and tricks to format (in my view) prettier quotes in your blog posts. For example:
This is a quote.
For the above, I added this CSS:
blockquote {
background: transparent url(quoleft.png) left top no-repeat;
}
If you want to add a closing quote to it as well, add this:
blockquote p {
padding: 0 48px;
background: transparent url(quoright.png) right bottom no-repeat;
}
If you use <P> inside the quote, you may get additional quote signs, so use a couple of <BR /> instead.
[Vista] Is Sami the Default Swedish Language?
I had heard about this before, but someone had also said it was going to be fixed for beta 2. Looks like it wasn't because when I selected Sweden as my region, it defaulted to Sami, Lule (Sweden) as language :)
There were actually 3 different Sami dialects available; Lule, Northern and Southern, which is amazing in itself. I had no idea we had all these different versions of the Sami language... I can't remember seeing Finnish in that list though, but it should be.
Overall, the installation went really smooth, but I'm installing on the bare metal here, and not in VPC or Virtual Server. Graphic is stunning...
[Vista] Beta 2 Available on MSDN Subscription
Guess I'm not the first to notice, but beta 2 of Vista is available for download to MSDN subscription owners. 3200 MB ISO :)
EDIT: There still (day after release) seems to be a problem with generating license keys for any other Vista version than the premium home edition. I guess we'll have to check back on the MSDN Subscription page every now and then and see if it gets fixed.
[.NET 2.0] Anonymous Methods
My mate Eric blogs about how useful Anonymous Methods could be when calling on multiple asynchronous methods. Check it out.
Using .NET Stored Procedures in Oracle 10g R2
I've not been doing too much 10g development lately, but this was completely new to me. Quote from an OTN article about this feature:
If you are a .NET developer, one of the most exciting features of Oracle Database 10g Release 2 for Windows is the ability to implement stored procedures using the .NET language of your choice, via Oracle Database Extensions for NET.
I think I have to try this out some time.
Copy Code and Paste as HTML From VisualStudio 2005
There are 100's of macros and add-ins and plug-ins doing this, but I got this one from "CodingHorror" installed and configured an ALT-C shortcut to copy the code as HTML. It's a manual installation process, but it's good enough for me.
Now why VS 2005 doesn't ship with a proper copy/paste functionality is something I don't really understand. But it gives a whole lot of happy late-nighters a chance to whip up something useful on their own :)
[.NET 2.0] Testing Generics, Collections and Yield Return
I've not looked too much at the yield return statement in c# 2.0 yet, but if you learn how to use it I'm sure it will become something you use "every day" when programming. Whenever you need to do some conditional iterations on a collection, array or list, yield may be a nice solution to go for. A few, silly and not perfect, snippets which helped me understand how to use yield return:
using System;
using System.Collections;
using System.Collections.Generic;
public class Program
{
static void Main()
{
// Build up a few arrays and a list to work with
string[] names = { "Johan", "Per", "Thomas", "Lina", "Juan", "Jan-Erik", "Mats" };
int[] values = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 0 };
List<int> list = new List<int>();
list.Add(1);
list.Add(2);
list.Add(3);
list.Add(4);
Console.WriteLine("Get the names which contains an 'a':");
// Iterate over all names which contain the letter 'a'
foreach (string name in EnumerableUtils<string>.GetNamesContainingLetter(names, 'a'))
{
Console.Write("{0} ", name);
}
Console.WriteLine();
Console.WriteLine();
//iterate through first half of this collection
Console.WriteLine("First half of the name-array:");
foreach (string name in EnumerableUtils<string>.GetFirstHalfFromList(names))
{
Console.Write("{0} ", name);
}
Console.WriteLine();
Console.WriteLine();
//iterate through first half of another collection
Console.WriteLine("First half of the value-array:");
foreach (int value in EnumerableUtils<int>.GetFirstHalfFromList(values))
{
Console.Write("{0} ", value);
}
Console.WriteLine();
Console.WriteLine();
//iterate through first half of a List
Console.WriteLine("First half of the value-list:");
foreach (int value in EnumerableUtils<int>.GetFirstHalfFromList(list))
{
Console.Write("{0} ", value);
}
Console.WriteLine();
}
}
public class EnumerableUtils<T>
{
public static IEnumerable GetNamesContainingLetter(string[] names, char letter)
{
foreach (string name in names)
{
if (name.ToLower().Contains(letter.ToString().ToLower()))
yield return name;
}
}
public static IEnumerable<T> GetFirstHalfFromList(List<T> arr)
{
for (int i = 0; i < arr.Count / 2; i++)
yield return arr[i];
}
public static IEnumerable<T> GetFirstHalfFromList(T[] arr)
{
for (int i = 0; i < arr.Length / 2; i++)
yield return arr[i];
}
}
Martin Fowler on Ruby
Martin Fowler just blogged a pretty long post on his view on Ruby. I don't want to ruin the reading for you but it ends with:
But overall I'm increasingly positive about using Ruby for serious work where speed, responsiveness, and productivity are important.
:)
Personally I've managed to install Rails and went through some tutorials. Not enough for me to give it a verdict.
[.NET 2.0] Quick Way of Closing VisualStudio Files
If you end up having a zillion files opened in the VisualStudio 2005 IDE and want to close a few of them, just click with the MIDDLE mouse button on their tabs where you see the filename. Quick and easy.
[.NET 2.0] Visual Studio 2005 Web Application Project V1.0 Released
From the Visual Studio 2005 Web Application Project webby:.
[Ajax] Check Out the "Atlas" Control Toolkit
You may want to look at the "Atlas" Control Toolkit site where they have a list of cool samples of what you can do with it, and it's dead simple. You can probably live without some of the controls, but some of them are quite useful and you would have to spend quite some time to achieve the effects you get for free. My favos on that list are the CollapsiblePanel and the TextBoxWatermark.
Some may argue that these controls have nothing to do with Ajax, but Ajax has become synonymous with a rich UI, and this is what "Atlas" gives you for sure.
How to Play Go
Windows Defender
If you don't have it installed yet you should consider it. I have been running the Anti Spyware beta from Microsoft for quite some time now, and today it automatically updated itself to Windows Defender Beta 2. This program has been quite helpful with protecting my kids' computers. The young ones don't always know what is harmful to their boxes or not. I've told them "if you see a big popup with a spyware warning in the right corner - let me know at once" :)
About Windows Defender:.
[.NET 2.0] Visual Studio 2005 Web Application Project - Old Skool
My thanks to ScottGu who commented on my earlier blog post about checking out the Visual Studio 2005 Web Application Project, which gives you Old Skool style of working with ASP.NET applications. I'm sure lots of people (like me) prefer the way it used to work. Whole thing with teaching old dogs... Snip from the site:).
[.NET 2.0] Using Web Deployment Projects with Visual Studio 2005
Visual Studio 2005 provides deployment support through its Copy Web Site and Publish Web Site features. While these are ideal for many scenarios, there are other, more advanced scenarios where developers need the following capabilities:
This white paper describes a solution to these advanced scenarios and introduces a new feature called Web Deployment Projects for Visual Studio 2005.
There is a VS.NET plugin available on that page with the following features for building ASP.NET 2.0 web sites:
I still have to read the whole page myself and try out the add-in. Looks interesting and useful though.
The extensibility of Web Deployment projects enables you to tailor the build and deploy process to suit your needs. This is done without sacrificing the optimized workflow improvements achieved with Visual Studio 2005 Web site projects.
[.NET 2.0] Writing Your Own Provider...
[.NET 2.0] Taking a Web App Offline
This is a neat feature of ASP.NET 2.0 I just found out about. Loads of people have already written about it, but it's so useful that I decided to put it on my blog as well. By placing a file called 'app_offline.htm' in the root directory of your web app, ASP.NET dynamic pages are no longer served. The web server returns the content of that file instead. This is what ScottGu wrote about this feature on his blog:
You may want to read all the comments to this blog entry, some interesting conversation there.
I just happened to stumble upon this feature when I was publishing my web app directly from within VisualStudio (Build->Publish...), because VS automatically places that file in the web app during upload of the new files. Neat. What is not so neat is that VS didn't remove the app_offline.htm file once it was done publishing :) Maybe it was because I was publishing to an FTP site... not sure. Will try again some other time. | http://weblogs.asp.net/jdanforth/archive/2006/05 | CC-MAIN-2016-07 | refinedweb | 1,859 | 63.09 |
I decided to make a text-RPG game to help make learning more fun for myself. It's been going smoothly until I noticed I was repeating a lot of code, and when I'm repeating a lot of the same thing, there must be something I'm missing.
So, playing with classes (which are awesome), I set one up so my little game could have multiple enemies, like so:
Code:
class Monster:
def __init__(self, name, hp, mp, ac, res, exp, gold):
self.name = name #monster name
self.hp = hp #monster health
self.mp = mp #monster mana
self.ac = ac #monster armor class
self.res = res #monster resistance
self.exp = exp #experiance reward
self.gold = gold #gold reward
Then made a few monsters using this class:
Code:
goblin = Monster("Goblin", 50, 0, 5, 0, 75, 40)
goblinKing = Monster("Goblin King", 600, 50, 25, 25, 900, 1100)
rat = Monster("Rat", 25, 0, 0, 0, 40, 8)
Now I wanted to make a 'battle' scenario which could be started and in which one of the monsters could be loaded into the battle by loading their stats into variables to be used in calculating damage, health changes and so on.
Code:
battle = 1
monsterID = "goblin"
if battle == 1:
if monsterID == "goblin":
monsterMaxHP = goblin.hp
monsterCurHP = goblin.hp
monsterName = goblin.name
monsterMaxMP = goblin.mp
monsterCurMP = goblin.mp
monsterAC = goblin.ac
monsterRes = goblin.res
monsterExp = goblin.exp
monsterGold = goblin.gold
elif monsterID == "goblinKing":
monsterMaxHP = goblinKing.hp
monsterCurHP = goblinKing.hp
monsterName = goblinKing.name
monsterMaxMP = goblinKing.mp
monsterCurMP = goblinKing.mp
monsterAC = goblinKing.ac
monsterRes = goblinKing.res
monsterExp = goblinKing.exp
monsterGold = goblinKing.gold
elif monsterID == "rat":
monsterMaxHP = rat.hp
monsterCurHP = rat.hp
monsterName = rat.name
monsterMaxMP = rat.mp
monsterCurMP = rat.mp
monsterAC = rat.ac
monsterRes = rat.res
monsterExp = rat.exp
monsterGold = rat.gold
Can't see what I'm missing but I'm thinking I can somehow link the monsterID variable and the class variables to simplify this. I tried using
Code:
monsterID.hp
Would appreciate a little help, maybe just a point in the right direction as opposed to a total solution; it's kinda fun to apply a principle, I just don't know which one I'm looking for.
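One pattern that removes the whole if/elif block is to keep the Monster instances in a dict keyed by their ID string, look the monster up once, and then use the object's attributes directly. This is just a sketch of the general idea (names like monsters and cur_hp are illustrative, not the only way to do it):

```python
class Monster:
    def __init__(self, name, hp, mp, ac, res, exp, gold):
        self.name = name
        self.max_hp = hp   # keep max and current HP as separate attributes
        self.cur_hp = hp
        self.mp = mp
        self.ac = ac
        self.res = res
        self.exp = exp
        self.gold = gold

# One dict of instances instead of many loose monsterXxx variables
monsters = {
    "goblin": Monster("Goblin", 50, 0, 5, 0, 75, 40),
    "goblinKing": Monster("Goblin King", 600, 50, 25, 25, 900, 1100),
    "rat": Monster("Rat", 25, 0, 0, 0, 40, 8),
}

monster_id = "goblin"
monster = monsters[monster_id]   # this one lookup replaces the whole if/elif chain
print(monster.name, monster.cur_hp, monster.gold)
```

Damage and reward calculations can then read and write monster.cur_hp, monster.gold, and so on, and adding a new monster is one new dict entry instead of another elif branch.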
Suppose I am working on a program with 3 files:
- Main.c (main file which runs the program)
- Functions.c (collection of functions used by main.c)
- MyHeader.h (list of function prototypes, global variables, typedefs and what not)
Suppose also that I need to use both the stdio.h and stdlib.h libraries in this program.
I am used to putting only a #include "MyHeader.h" in both main.c and functions.c, and then putting #include <stdio.h> and #include <stdlib.h> in MyHeader.h (this seems to me the most logical and least redundant solution).
But, I have seen some very experienced programmers put no #includes at all on MyHeader.h, and then put them as needed on main.c and functions.c, so in this example MyHeader.h would have no #includes, main.c would have:
#include <stdio.h>
#include "MyHeader.h"
and functions.c would have:
#include <stdio.h>
#include <stdlib.h>
#include "MyHeader.h"
As you can see with this second method stdio.h is included twice, while with my method only once.
So which is the best/recommended way to call headers (if there is one)? | http://cboard.cprogramming.com/c-programming/140601-where-should-i-put-header-calls-when-using-multiple-files.html | CC-MAIN-2015-06 | refinedweb | 193 | 82.1 |
Re: Cannot activate sbcl
- From: "Dimiter \"malkia\" Stanev" <malkia@xxxxxxxxx>
- Date: Tue, 06 Nov 2007 19:00:43 -0800
> It's an interesting suggestion. It still requires enough contiguous
> free address space to eventually map the array though. BSS addresses
> are reserved at load time (using VirtualAlloc).
I could be wrong, but I think what happens is that Windows sees that your executable has a DATA section that might overlap a DLL's address space then it would reallocate that DLL (or others) to different address space. This happens before your application is started, or even fully loaded (not sure here, the wine source code might reveal something in that direction).
> As you speculated, the difference between this approach and using
> VirtualAlloc dynamically in your code is that your program and all its
> DLLs were loaded first and possibly fragmented the space you could
> have used.
Something like that. Of course if your "array" was say 1.9gb, and there wasn't memory for the DLLs to fit with it, it won't run. What can be done here (another tricky thing) is to have many tiny executables with different BSS sections in them, which then load a bigger executable in the form of a DLL (e.g. the real executable is in the DLL).
That's not a nice solution, and there might be way better solution than this one. But it worked for us, on one specific tool that required such big contigous address space.
> It is NOT safe for deployment - at least not by itself. Fragmentation
> is a function of application load order and it is not limited to the
> set of currently running applications - their loading may have been
> influenced by other programs that are no longer executing. You would
> still have to guarantee that SBCL was loaded at a time when there is
> enough contiguous global address space to satisfy the reservation.
Each application has its own virtual address space. The DLL's are usually loaded at some "standard" addresses, which is all for benefit of not having more than one copy in memory. In the hack I'm describing, it would make those DLL's to be reallocated (and probably wasting some real memory).
I mean really. In a perfect world, you would say - load all my dlls from any address, and this way make sure that I have contigous memory. But what is done by Windows (and I guess other OS) is a kind of "heh premature" (not really... or may be) optimization, for the sake of saving memory - e.g. use the same code for any other application - and because it's loaded in the same address space - the relocations are the same...
Here is an example to test the idea. Write the next one to file called a.c, and use the Microsoft Visual Studio compiler to compile it, just type "cl a.c". (If you use cygwin or mingw, remove the pragmas, and add the libraries to the command-line).
#include "windows.h"
unsigned char a[1024*1024*1024 + 850*1024*1024]; // 1.85GB contiguous memory that can be used for allocs.
#pragma comment(lib,"GDI32")
#pragma comment(lib,"USER32")
#pragma comment(lib,"KERNEL32")
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
MSG msg;
while (GetMessage(&msg, NULL, 0, 0))
{
TranslateMessage(&msg);
DispatchMessage(&msg);
a[0] = 10; // This is simply so that the compiler/linker includes the array a
}
return msg.wParam;
}
Now compile that one to say "a.exe", then run depends.exe on a.exe and press F7. Sort then by Actual Base. Here's what I've got:
Then change the source code of a.c into a2.c by modifying the array a to be just a[1]:
unsigned char a[1];
Do the same and see the results:
As you can see when the buffer is 1.85GB three DLLs were reallocated - The Google Desktop one, WS2_32 and WS2HELP.DLL. in the second case (a[1]) the Google Desktop DLL was just sitting "in-the middle" of the virtual space. Once it's there after your application is loaded, I don't think you can move it.
Note: If that number a[1024*1024*1024 + 850*1024*1024] is too high for your system, decrease it (otherwise it would say something like "Access Denied").
Yes this is the kind of problem you would end up with if you want to deliver it; this number would vary on different systems, but you can either come up with requirements for the application to the client, or find some other solution (for example different executables, or an executable that's generated on the fly, or something even more low-level).
I guess that's now quite off-topic Common Lisp :)
In this section we will read about various aspects of JDBC-ODBC, such as the JDBC-ODBC bridge, the JDBC-ODBC connection, and how to create a DSN.
JDBC-ODBC Bridge
As its name suggests, the JDBC-ODBC bridge acts as a bridge between the Java programming language and ODBC, making it possible to use the JDBC API with an existing ODBC data source. For this, Sun Microsystems (now Oracle Corporation) provides the driver named JdbcOdbcDriver. The full name of this class is sun.jdbc.odbc.JdbcOdbcDriver.
JDBC ODBC Connection
To connect JDBC and ODBC we first need a database. Then we need to create a DSN in order to use the JDBC-ODBC bridge driver. The DSN (Data Source Name) specifies the connection of an ODBC data source to a specific server.
How To Create A DSN
To create a DSN we need to follow some basic steps. Here I am using MS-Access as the database system; before creating the DSN, the database file should be created in advance. The steps are as follows:
1. Open the Windows Control Panel and go to Administrative Tools > Data Sources (ODBC).
2. On the System DSN (or User DSN) tab, click Add.
3. Select the Microsoft Access Driver (*.mdb) and click Finish.
4. Enter a Data Source Name (in this example, 'swing'), use the Select button to choose your database file, and click OK.
Example
Here is a simple example which demonstrates how to use the JDBC API with the JDBC-ODBC driver. In this example we will use MS-Access as the database. We will create a table and then create a DSN by following the steps mentioned above. Then we will create a class in Java, make a connection using the JDBC-ODBC driver, and insert a record into the table using an SQL query.
JdbcOdbcExample.java
import java.sql.DriverManager;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class JdbcOdbcExample {
  public static void main(String args[]) {
    Connection con = null;
    PreparedStatement stmt = null;
    try {
      Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
      con = DriverManager.getConnection("jdbc:odbc:swing");
      String sql = "INSERT INTO employee (name, rollNo, course, subject, marks) VALUES"
          + "('Deepak', 10, 'MCA', 'Computer Science', 85)";
      stmt = con.prepareStatement(sql);
      int i = stmt.executeUpdate();
      if (i > 0) {
        System.out.println("Record Added Into Table Successfully.");
      }
    } catch (SQLException sqle) {
      System.out.println(sqle.getNextException());
    } catch (ClassNotFoundException e) {
      System.out.println(e.getException());
    } finally {
      try {
        if (stmt != null) {
          stmt.close();
          stmt = null;
        }
        if (con != null) {
          con.close();
          con = null;
        }
      } catch (Exception e) {
        System.out.println(e);
      }
    }
  }
}
Output
The table before inserting the record into it:
When you execute the above example you will get the following output at the console:
The table after inserting the record will then look as follows:
Technote (troubleshooting)
Problem(Abstract)
User launches Controller. User runs any 'standard report' (for example 'Maintain - Account Structure - Reports".
User receives error message.
Symptom
IBM Cognos software
PRS-OBJ-0617
The object "/portal/report-viewer.xts" could not be created because a required capability is missing.
[OK]
Cause
The end user does not have the correct/required permission (as defined inside 'Cognos Connection') to run/view this report.
- Specifically, they need to have 'Execute' permissions for the "Cognos Viewer" capabilities.
By default, the Cognos Connection website's security configuration is configured so that the group "Everyone" is a member of several security groups (such as "System Administrators") which means that all users are able to run these reports.
- However, one of the first jobs of the BI administrator is often to reduce/restrict the access rights so that only specific users are members of these groups
- If the BI administrator removes 'Everyone' from all the relevant groups (but forgets to add in the 'Controller Users' group into the relevant group(s)) then this problem will occur.
Environment
Controller system configured to use Cognos CAM (non-native) authentication.
Diagnosing the problem
1. Launch Cognos Connection
- For Controller 10.1 and later, this is:
- For Controller 8.5 and earlier, this is:
2. If you receive the 'Welcome' screen, then click "Administer IBM Cognos Content"
3. Click the 'Security' tab and then click 'Capabilities'
4. Locate 'Cognos Viewer' and click 'Actions' (the 'down arrow'), then click 'Set Properties':
6. Check which groups have 'Execute' permission enabled:
By default, there will be several groups which include this, for example (see above):
- Analysis Users
- Authors
- Consumers
- Express Authors
- PowerPlay Administrators
- PowerPlay Users
- Query User
- Readers
- Report Administrators.
If your Controller end users are not a member of any of these groups, then the error will appear.
Resolving the problem
Ensure that all users (i.e. all members of the "Controller Users" group) have 'Execute' permissions for the Cognos Viewer capabilities.
- However, to prevent other problems/errors it is also advised to give the group more (extra) permissions than just this. See separate IBM Technote #1371229 for more details.
For this reason, for most customers it is easiest/best to simply add the "Controller Users" group to the membership of the group 'Authors'.
TIP: There are several other methods that will give the required permissions (the one explained inside this Technote is only one of many). Check with your BI security administrator about which method is best for you.
Steps:
1. Launch Cognos Connection
- For Controller 10.1 and later, this is:
- For Controller 8.5 and earlier, this is:
2. If you receive the 'Welcome' screen, then click "Administer IBM Cognos Content"
3. Click the 'Security' tab
4. Ensure that the section "Users, Groups and Roles" is selected
5. Open the namespace 'Cognos':
6. Locate the row for the group ' Authors' and click on the icon 'properties' on the right-hand side:
7. Click ' Members' tab
8. Click " Add"
9. Open the namespace ' Cognos'
10. Tick the boxes to the left of ' Controller Administrators' and ' Controller Users' and click the green arrow to 'Add' these to the right-hand screen:
11. Click 'OK'
12. Click 'OK'
13. Ask the Controller users to exit Controller, and re-launch Controller and test.
Related information
1340118 - Cannot run report with non-admin users Error
1371229 - Error 'RQP-DEF-0326 User defined SQL is not p
Historical Number
1035177 | http://www-01.ibm.com/support/docview.wss?uid=swg21347354 | CC-MAIN-2015-48 | refinedweb | 568 | 58.69 |